Re: [openstack-dev] [TripleO] Introspection rules aka advances profiles replacement: next steps

2015-10-14 Thread Ben Nemec
On 10/14/2015 06:38 AM, Dmitry Tantsur wrote:
> Hi OoO'ers :)
> 
> It's going to be a long letter, fasten your seat-belts (and excuse my 
> bad, as usual, English)!
> 
> In RDO Manager we used to have a feature called advanced profiles 
> matching. It's still there in the documentation at 
> http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/profile_matching.html
>  
> but the related code needed reworking and didn't quite make it upstream 
> yet. This mail is an attempt to restart the discussion on this topic.
> 
> Short explanation for those unaware of this feature: we used detailed 
> data from introspection (acquired using the hardware-detect utility [1]) to 
> provide scheduling hints, which we called profiles. A profile is 
> essentially a flavor, but calculated using much more data. E.g. you 
> could say that a profile "foo" will be assigned to nodes with 1024 <= 
> RAM <= 4096 and with GPU devices present (an artificial example). The 
> profile was then set on the Ironic node as a capability as a result of 
> introspection. Please read the documentation linked above for more details.
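To make the idea concrete, a profile can be pictured as a named predicate over the introspection data. This is a rough sketch of the concept, not the actual RDO Manager code:

```python
# Sketch only: a profile is a named predicate over introspection facts,
# and the first matching profile is recorded as an Ironic capability.

def match_profile(facts, profiles):
    """Return the name of the first profile whose predicate accepts the facts."""
    for name, predicate in profiles:
        if predicate(facts):
            return name
    return None

profiles = [
    # the artificial example from the text: 1024 <= RAM <= 4096 plus a GPU
    ("foo", lambda f: 1024 <= f["memory_mb"] <= 4096 and f.get("gpus")),
    # a catch-all fallback
    ("generic", lambda f: True),
]

facts = {"memory_mb": 2048, "gpus": ["some-gpu"]}
profile = match_profile(facts, profiles)
# the result would be stored on the node, e.g. as the capability "profile:foo"
capability = "profile:%s" % profile
```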
> 
> This feature had a bunch of problems with it, to name a few:
> 1. It didn't have an API
> 2. It required a user to modify files by hand to use it
> 3. It was tied to a pretty specific syntax of the hardware [1] library
> 
> So we decided to split this thing into 3 parts, which are of value on 
> their own:
> 
> 1. Pluggable introspection ramdisk - so that we don't force dependency 
> on hardware-detect on everyone.
> 2. User-defined introspection rules - some DSL that will allow a user to 
> define something like a specs file (see link above) via an API. The 
> outcome would probably be capabilit(y|ies) set on the node.
> 3. Scheduler helper - a utility that will take the capabilities set by the 
> previous step, and turn them into exactly one profile to use.
> 
> Long story short, we got 1 and 2 implemented in appropriate projects 
> (ironic-python-agent and ironic-inspector) during the Liberty time 
> frame. Now it's time to figure out what we do in TripleO about this, namely:
> 
> 1. Do we need some standard way to define introspection rules for 
> TripleO? E.g. a JSON file like we have for ironic nodes?

Yes, please.
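For the record, such a file could look something like the sketch below. The condition/action schema here is loosely modeled on the ironic-inspector rules API, but is illustrative rather than authoritative:

```python
# Illustrative sketch of a standard TripleO introspection-rules file,
# mirroring how instackenv.json is used for nodes. The exact field names
# of conditions/actions are assumptions modeled on ironic-inspector.

import json

rules = [
    {
        "description": "Assign the 'foo' profile to mid-memory GPU nodes",
        "conditions": [
            {"field": "memory_mb", "op": "ge", "value": 1024},
            {"field": "memory_mb", "op": "le", "value": 4096},
        ],
        "actions": [
            {"action": "set-capability", "name": "profile", "value": "foo"},
        ],
    },
]

# A deployment tool could load this file and POST each rule to the
# inspector's rules API.
serialized = json.dumps(rules, indent=2)
```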

> 
> 2. Do we need a scheduler helper at all? We could use only capabilities 
> for scheduling, but then we can end up with the following situation: 
> node1 has capabilities C1 and C2, node2 has capability C1. First we 
> deploy a flavor with capability C1, it goes to node1. Then we deploy a 
> flavor with capability C2 and it fails, despite us having 2 correct 
> nodes initially. This is what state files were solving in [1] (again, 
> please refer to the documentation).

It sounds like the answer is yes.  If the existing scheduler can't
handle a valid use case then we need some sort of solution.
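To illustrate the failure mode, and one thing a helper could do about it (toy code, not TripleO's): greedy per-flavor matching strands node1, while ordering flavors by how rare their capability is avoids it:

```python
# Toy illustration of the scheduling problem described above. Nodes and
# capabilities match the example in the mail: node1 has C1 and C2,
# node2 has only C1.

def greedy_assign(flavors, nodes):
    """Assign each flavor to the first free node advertising its capability."""
    assignment, used = {}, set()
    for flavor, cap in flavors:
        for node, caps in nodes.items():
            if node not in used and cap in caps:
                assignment[flavor] = node
                used.add(node)
                break
    return assignment

nodes = {"node1": {"C1", "C2"}, "node2": {"C1"}}

# Greedy order: the C1 flavor grabs node1 first, leaving nothing for C2.
bad = greedy_assign([("f1", "C1"), ("f2", "C2")], nodes)

# Helper strategy: place the flavor with the rarest capability first.
ordered = sorted([("f1", "C1"), ("f2", "C2")],
                 key=lambda fc: sum(fc[1] in caps for caps in nodes.values()))
good = greedy_assign(ordered, nodes)
```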

> 
> 3. If we need it, where does it go? tripleo-common? Do we need an HTTP API 
> for it, or do we just do it in the place where we need it? After all, it's a 
> pretty trivial manipulation of ironic nodes.

I think that would depend on what the helper ends up being.  I can't see
it needing a REST API, but presumably it will have to plug into Nova
somehow.  If it's something that would be generally useful (which it
sounds like it might be - Ironic capabilities aren't a TripleO-specific
thing), then it belongs in Nova itself IMHO.

> 
> 4. Finally, we need an option to tell introspection to use 
> python-hardware. I don't think it should be on by default, but it will 
> require rebuilding of IPA (due to a new dependency).

Can we not just build it in always, but only use it when desired?  Is
the one extra dependency that much of a burden?

> 
> Looking forward to your opinions.
> Dmitry.
> 
> [1] https://github.com/redhat-cip/hardware
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




Re: [openstack-dev] [keystone federation] some questions about keystone IDP with SAML supported

2015-10-14 Thread Marek Denis

Hello,

On 14.10.2015 13:10, wyw wrote:

hello, keystoners.  please help me

Here is my use case:
1. use Keystone as an IdP, with SAML support


Remember that Keystone is not a fully fledged Identity Provider. For 
instance, it cannot handle WebSSO. To be even more specific, it will only 
handle the "IdP-initiated authentication workflow", which is just one 
variant of SAML2 authentication.



2. Keystone integrates with LDAP
3. we use a Java application as Service Provider, and integrate it 
with the Keystone IdP.
4. we use a Keystone as Service Provider, and integrate it with the 
Keystone IdP.


Did you try that already? Did it work?



The problems:
in the k2k federation case, keystone service provider requests 
authentication info with IDP via Shibboleth ECP.


Yes. Why is that a problem? The K2K architecture assumes two Keystones - 
a Keystone-IdP and a Keystone-SP. Communication between them leverages 
SAML2 and ECP.



in the java application, we use websso to request IDP, for example:


as mentioned earlier - there is no WebSSO in keystone-idp.


idp_sso_endpoint = http://10.111.131.83:5000/v3/OS-FEDERATION/saml2/sso
but when the Java application redirects to the SSO URL, it returns a 404 error.
so, if we want to integrate a Java application with the Keystone IdP,
do we need to support ECP in the Java application?


pretty much - yes! Luckily for you, the reference libraries (Shibboleth) 
are written in Java, so it should be easier to integrate with your 
application.




here are some of my references:
1. http://docs.openstack.org/developer/keystone/configure_federation.html
2. 
http://blog.rodrigods.com/it-is-time-to-play-with-keystone-to-keystone-federation-in-kilo

3. http://docs.openstack.org/developer/keystone/extensions/federation.html
https://gist.githubusercontent.com/zaccone/3c3d4c8f39a19709bcd7/raw/d938f2f9d1cf06d29a81d57c8069c291fed66cab/k2k-env.sh
https://gist.githubusercontent.com/zaccone/4bbc07d215c0047738b4/raw/75295fe32df88b24576ece69994270dc4eb19a6e/k2k-ecp-client.py
my Keystone version is Kilo

help me, thanks


I hope I did! :-)

--
Marek Denis



Re: [openstack-dev] [puppet] Proposing Denis Egorenko core

2015-10-14 Thread Sebastien Badia
On Wed, Oct 14, 2015 at 09:07:04AM (+0200), Yanis Guenane wrote:
> On 10/13/2015 11:02 PM, Matt Fischer wrote:
> > On Tue, Oct 13, 2015 at 2:29 PM, Emilien Macchi  wrote:
> >
> >> Denis Egorenko (degorenko) has been working on Puppet OpenStack modules for
> >> quite some time now.
> >>
> >> Some statistics [1] about his contributions (last 6 months):
> >> * 270 reviews
> >> * 49 negative reviews
> >> * 216 positive reviews
> >> * 36 disagreements
> >> * 30 commits
> >>
> >> Beside the stats, Denis is always here on IRC, participating in meetings,
> >> helping in our group discussions, and being helpful to our community.
> >>
> >> I honestly think Denis is on the right path to becoming a good member of
> >> the core team: he has strong knowledge of OpenStack deployments, knows our
> >> coding style well, and his involvement in the project is really
> >> great. He's also a huge consumer of our modules, since he works on Fuel.
> >>
> >> I would like to open the vote to promote Denis to the Puppet OpenStack
> >> core reviewer team.
> >>
> >> [1] http://stackalytics.com/report/contribution/puppetopenstack-group/180
> >> --
> >> Emilien Macchi
> >>
> >>
> >>
> > Denis has given me some great feedback on reviews and has shown a good
> > understanding of puppet-openstack.
> >
> > +1
> 
> +1
> 
> He has been really involved and proactive (reviews + commits) in the
> community during the past months.

A big +1 also!

Denis is very involved in our community, and he provides very valuable feedback!

Thanks!

Seb

-- 
Sebastien Badia




[openstack-dev] [cinder] Cinder Design Summit Schedule

2015-10-14 Thread Sean McGinnis
Hello all,

I have the schedule updated for the Design Summit Cinder sessions.
Please take a look and let me know if you see any major issues:

https://mitakadesignsummit.sched.org/overview/type/cinder

Some sessions had to be moved around to accommodate presentation
schedules. If we have other conflicts, please let me know and I will do
my best to minimize them.

Thanks,
Sean



Re: [openstack-dev] [devstack] A few questions on configuring DevStack for Neutron

2015-10-14 Thread Sean M. Collins
I believe that pulling a localrc off a CI result is tribal knowledge. It can be 
done, but it's most likely a trick known only to someone who has been involved 
in OpenStack for a long time.
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


[openstack-dev] [Fuel] Modularization activity POC

2015-10-14 Thread Evgeniy L
Hi,

We are starting the Fuel modularization POC activity, which in one
sentence can be described as "use Fuel without Fuel": we want to give
users a way to use the good parts of Fuel without the entire master node
and the other parts they don't need.

Currently we have a monolithic system which includes every aspect of the
deployment logic; its components are tightly coupled, and only a few
people understand all the interactions between them.

As a result of modularization we will get the following benefits:
1. reusability of components
1.1. it will lead to more consumers (users) of the components
1.2. better integration with the community (some community projects might
   be interested in using some parts of Fuel)
2. decoupling the components will lead to clear interfaces between them
2.1. so it will be easier to replace a component
2.2. it will be easier to split the work between teams, which will help
   scale the teams much more efficiently

Here are some things which can naturally be used separately:
* network configuration (is a part of POC)
* advanced partitioning/provisioning (is a part of POC)
* discovery mechanism (ironic-inspector?)
* power management (ironic?)
* network verification
* diagnostic snapshot
* etc

The main purpose of the POC is to build a partly working system in order
to figure out the scope of the work we will have to do upstream to make
it production-ready.

Here are a few basic component-specific ideas:

Advanced partitioning/provisioning
* split provisioning into two independent actions, partitioning and
  provisioning (currently we have a single call which does partitioning,
  provisioning and post-provisioning configuration)
* figure out the data format (currently it's too Fuel- and Cobbler-specific)

Network configuration
* CRUD API for any entity in the system (in Fuel not all of the data is
  exposed via the API, so a user has to go and edit entries in the DB
  manually)
* no constraints on the network topology (in Fuel there are a lot of
  hardcoded assumptions)

Node hardware discovery
* should be able to support different source drivers at the same time
  (CSV file, fuel-nailgun-agent, CMDB, DHCP, Cobbler, etc.)
* versioning of HW information (required for lifecycle management)
* notifications about changes in hardware or in node status
  (required for lifecycle management)
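One hedged sketch of the "different source drivers" idea: a minimal driver interface that a CSV file, an agent, or a CMDB could all implement, feeding the same inventory. The names here are hypothetical, not Fuel APIs:

```python
# Hypothetical sketch: every hardware-discovery source implements one
# small interface, so the inventory code doesn't care where facts come from.

import abc
import csv
import io

class DiscoverySource(abc.ABC):
    @abc.abstractmethod
    def discover(self):
        """Yield dicts describing discovered nodes."""

class CsvSource(DiscoverySource):
    """Driver reading node facts from a CSV file (illustrative columns)."""

    def __init__(self, fileobj):
        self.fileobj = fileobj

    def discover(self):
        for row in csv.DictReader(self.fileobj):
            yield {"mac": row["mac"], "memory_mb": int(row["memory_mb"])}

# An agent- or CMDB-backed driver would implement the same interface.
data = io.StringIO("mac,memory_mb\naa:bb:cc:dd:ee:ff,2048\n")
nodes = list(CsvSource(data).discover())
```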

Common requirements for components:
* every component must follow OpenStack coding standards once we start
  working upstream after the POC (so there shouldn't be a question of
  whether to use Pecan or Flask)
* Fuel-specific interfaces should be implemented as drivers
* this one is obvious, but it is currently one of the most problematic
  parts of Fuel: hardware changes - an interface can be replaced, a disk
  can be removed - and a component should have a way to handle that

Technically speaking, we want to have a separate service for every
component; it is one technical way to force components to have clear
interfaces.

The other architectural problem we want to try to solve is
extensibility. Currently all of the logic related to deployment data is
hardcoded in Nailgun. A plugin should be able to retrieve any data it
needs from the system in order to perform its deployment; that data
should be retrieved through some interface, and we already have an
interface where we can provide stability and versioning: the REST API.
So basically a deployment should consist of two steps: first retrieve
all the required data from whatever source is needed, then send the data
to the nodes and start the deployment.

If you have any questions/suggestions, don't hesitate to ask.

Thanks,


[openstack-dev] [neutron] one SDN controller and many OpenStack environments

2015-10-14 Thread Piotr Misiak
Hi,

Do you know if it is possible to manage tenant networks in many
OpenStack environments with one SDN controller (via a Neutron plugin)?

Suppose I have two or three OpenStack environments in one DC (deployed,
for example, by Fuel) and an SDN controller which manages all the
switches in the DC. Can I connect all of those OpenStack environments to
the SDN controller using a Neutron plugin? I'm curious what would happen
if, for example, the same MAC address appeared in two of the OpenStack
environments. Or maybe I should not connect those environments to the
SDN controller and should use a standard OVS configuration instead?

Which SDN controller would you recommend? I'm researching these currently:
- OpenDaylight
- Floodlight
- Ryu

Thanks,
Piotr Misiak



Re: [openstack-dev] [keystone federation] some questions about keystone IDP with SAML supported

2015-10-14 Thread John Dennis

On 10/14/2015 07:10 AM, wyw wrote:

hello, keystoners.  please help me

Here is my use case:
1. use keystone as IDP , supported with SAML
2. keystone integrates with LDAP
3. we use a Java application as Service Provider, and integrate it
with the Keystone IdP.
4. we use a Keystone as Service Provider, and integrate it with the
Keystone IdP.


Keystone is not an identity provider - or at least it's trying to get 
out of that business; the goal is to have Keystone utilize actual IdPs 
for authentication instead.


K2K utilizes a limited subset of the SAML profiles and workflows. 
Keystone is not a general-purpose SAML IdP supporting Web SSO.


Keystone implements those portions of the various SAMLv2 profiles 
necessary to support federated Keystone and to derive tokens from 
federated IdPs. Note this is distinctly different from Keystone being a 
federated IdP.



The problems:
in the k2k federation case, keystone service provider requests
authentication info with IDP via Shibboleth ECP.


A nit: "Shibboleth ECP" is a misnomer. ECP (Enhanced Client or Proxy) is 
a SAMLv2 profile which Shibboleth happens to implement; there are other 
SPs and IdPs that also support ECP (e.g. mellon, Ipsilon).



in the java application, we use websso to request IDP, for example:
idp_sso_endpoint = http://10.111.131.83:5000/v3/OS-FEDERATION/saml2/sso
but when the Java application redirects to the SSO URL, it returns a 404 error.
so, if we want to integrate a Java application with the Keystone IdP,
do we need to support ECP in the Java application?


You're misapplying SAML. Keystone is not a traditional IdP; if it were, 
your web application could use SAML HTTP-Redirect, or it could function 
as an ECP client - but not against Keystone. Why? Because Keystone is 
not a general-purpose federated IdP.


--
John



[openstack-dev] [neutron] Neutron rolling upgrade - are we there yet?

2015-10-14 Thread Korzeniewski, Artur
Hi all,
I would like to gather all the upgrade activities in Neutron in one place, in 
order to summarize the current status and the future work on rolling upgrades 
in Mitaka.


1.  RPC versioning

a.  It is already implemented in Neutron.

b.  TODO: to support rolling upgrades we have to implement RPC version 
pinning in the configuration.

i. I'm not a big fan of this solution, but we can work out a better idea 
if needed.

c.  Possible unit/functional tests to catch RPC incompatibilities 
between RPC revisions.

d.  TODO: a multi-node Grenade job to cover rolling upgrades in CI.

2.  Message content versioning - versioned objects

a.  TODO: implement oslo versioned objects in the Mitaka cycle. The 
interesting entities to cover: network, subnet, port, security groups...

b.  Will OVO have an impact on vendor plugins?

c.  Be strict about changes to versioned objects in code review: any 
change in object structure should increment the minor 
(backward-compatible) or major (breaking change) version.

d.  Indirection API - a message in a newer format should be translated 
to the older version by the neutron server.

3.  Database migration

a.  Online schema migration was done in the Liberty release; is any work 
left to do?

b.  TODO: online data migration to be introduced in the Mitaka cycle.

i. Online data migration can be done during normal operation on the data.

ii. There should also be a script to invoke the data migration in the 
background.

c.  Currently the contract phase does the data migration. But since the 
contract phase should be run offline, we should move the data migration 
to a preceding step. The contract phase should also be blocked if there 
is still relevant data in entities to be removed.

i. The contract phase can be executed online if only new code is running 
in the setup.

d.  The other strategy is to never drop tables, alter names, or remove 
columns from the DB - what's in, it's in. We should pay more attention 
in code reviews, merge only additive changes, and avoid questionable DB 
modifications.

e.  The Neutron server should be updated first, so that it can translate 
data from the old format into the new schema. That way we can be sure 
that old data will not be inserted into the old DB structures.
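As a toy illustration of the versioned-objects idea in point 2 (not actual oslo.versionedobjects code): when the peer is pinned to an older object version, the sender downgrades the payload by dropping the fields the old schema lacks. The version table here is invented for illustration:

```python
# Invented example: field sets per object version. 1.1 added a
# backward-compatible field (mtu), so the minor version was bumped.
FIELDS_BY_VERSION = {
    "1.0": {"id", "name"},
    "1.1": {"id", "name", "mtu"},
}

def make_compatible(payload, target_version):
    """Downgrade a payload to what a peer pinned at target_version understands."""
    allowed = FIELDS_BY_VERSION[target_version]
    return {k: v for k, v in payload.items() if k in allowed}

port = {"id": "p1", "name": "port1", "mtu": 1500}
wire = make_compatible(port, "1.0")   # what a pinned 1.0 agent receives
```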

I have performed a manual Kilo-to-Liberty upgrade, checking both actual 
operation and the code of the RPC APIs. All is working fine.
We can discuss this in the cross-project session [7], and we can also review 
any issues with Neutron upgrades in Friday's unplugged session [8].

Sources:
[1] http://www.danplanet.com/blog/2015/10/05/upgrades-in-nova-rpc-apis/
[2] http://www.danplanet.com/blog/2015/10/06/upgrades-in-nova-objects/
[3] 
http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/
[4] 
https://github.com/openstack/neutron/blob/master/doc/source/devref/rpc_callbacks.rst
[5] 
http://www.danplanet.com/blog/2015/06/26/upgrading-nova-to-kilo-with-minimal-downtime/
[6] 
https://github.com/openstack/neutron-specs/blob/master/specs/liberty/online-schema-migrations.rst
[7] https://etherpad.openstack.org/p/mitaka-cross-project-session-planning
[8] https://etherpad.openstack.org/p/mitaka-neutron-unplugged-track

Regards,
Artur Korzeniewski
IRC: korzen

Intel Technology Poland sp. z o.o.
KRS 101882
ul. Slowackiego 173, 80-298 Gdansk




Re: [openstack-dev] [nova] Nova Design Summit Schedule

2015-10-14 Thread John Garbutt
On 14 October 2015 at 15:59, John Garbutt  wrote:
> Hi,
>
> We have a draft schedule for the summit uploaded:
> http://mitakadesignsummit.sched.org/overview/type/nova
>
> We generally picked the sessions where in-person debate is needed. We
> rejected sessions where we felt a nova-spec review, a discussion in
> the Friday meetup, or one of the Nova unconference slots would be a
> better fit.
>
> Please do let me know about any big clashes or omissions ASAP, so we
> can work through those.

As a quick addition, I have created etherpads for all the sessions:
https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Nova

Please do help to fill out the content in those, particularly the
recommended pre-reading for each session.

Thanks,
johnthetubaguy



Re: [openstack-dev] [TripleO] Introspection rules aka advances profiles replacement: next steps

2015-10-14 Thread John Trowbridge


On 10/14/2015 10:57 AM, Ben Nemec wrote:
> On 10/14/2015 06:38 AM, Dmitry Tantsur wrote:
>> Hi OoO'ers :)
>>
>> It's going to be a long letter, fasten your seat-belts (and excuse my 
>> bad, as usual, English)!
>>
>> In RDO Manager we used to have a feature called advanced profiles 
>> matching. It's still there in the documentation at 
>> http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/profile_matching.html
>>  
>> but the related code needed reworking and didn't quite make it upstream 
>> yet. This mail is an attempt to restart the discussion on this topic.
>>
>> Short explanation for those unaware of this feature: we used detailed 
>> data from introspection (acquired using hardware-detect utility [1]) to 
>> provide scheduling hints, which we called profiles. A profile is 
>> essentially a flavor, but calculated using much more data. E.g. you 
>> could say that a profile "foo" will be assigned to nodes with 1024 <= 
>> RAM <= 4096 and with GPU devices present (an artificial example). The 
>> profile was then set on the Ironic node as a capability as a result of 
>> introspection. Please read the documentation linked above for more details.
>>
>> This feature had a bunch of problems with it, to name a few:
>> 1. It didn't have an API
>> 2. It required a user to modify files by hand to use it
>> 3. It was tied to a pretty specific syntax of the hardware [1] library
>>
>> So we decided to split this thing into 3 parts, which are of value on 
>> their own:
>>
>> 1. Pluggable introspection ramdisk - so that we don't force dependency 
>> on hardware-detect on everyone.
>> 2. User-defined introspection rules - some DSL that will allow a user to 
>> define something like a specs file (see link above) via an API. The 
>> outcome would be something, probably capabilit(y|ies) set on a node.
>> 3. Scheduler helper - a utility that will take the capabilities set by the 
>> previous step, and turn them into exactly one profile to use.
>>
>> Long story short, we got 1 and 2 implemented in appropriate projects 
>> (ironic-python-agent and ironic-inspector) during the Liberty time 
>> frame. Now it's time to figure out what we do in TripleO about this, namely:
>>
>> 1. Do we need some standard way to define introspection rules for 
>> TripleO? E.g. a JSON file like we have for ironic nodes?
> 
> Yes, please.
> 
>>
>> 2. Do we need a scheduler helper at all? We could use only capabilities 
>> for scheduling, but then we can end up with the following situation: 
>> node1 has capabilities C1 and C2, node2 has capability C1. First we 
>> deploy a flavor with capability C1, it goes to node1. Then we deploy a 
>> flavor with capability C2 and it fails, despite us having 2 correct 
>> nodes initially. This is what state files were solving in [1] (again, 
>> please refer to the documentation).
> 
> It sounds like the answer is yes.  If the existing scheduler can't
> handle a valid use case then we need some sort of solution.
> 
>>
>> 3. If we need, where does it go? tripleo-common? Do we need an HTTP API 
>> for it, or do we just do it in place where we need it? After all, it's a 
>> pretty trivial manipulation with ironic nodes..
> 
> I think that would depend on what the helper ends up being.  I can't see
> it needing a REST API, but presumably it will have to plug into Nova
> somehow.  If it's something that would be generally useful (which it
> sounds like it might be - Ironic capabilities aren't a TripleO-specific
> thing), then it belongs in Nova itself IMHO.
> 
>>
>> 4. Finally, we need an option to tell introspection to use 
>> python-hardware. I don't think it should be on by default, but it will 
>> require rebuilding of IPA (due to a new dependency).
> 
> Can we not just build it in always, but only use it when desired?  Is
> the one extra dependency that much of a burden?

It pulls in python-numpy and python-pandas, which are pretty large.

> 
>>
>> Looking forward to your opinions.
>> Dmitry.
>>
>> [1] https://github.com/redhat-cip/hardware
>>
>>
> 
> 
> 



Re: [openstack-dev] [neutron] one SDN controller and many OpenStack environments

2015-10-14 Thread Sean M. Collins
Please ask this question on openstack-ops, where a lot of operators
convene and can probably share their experiences with the SDN
controllers you listed.

My gut says that an SDN controller is expected to handle multiple
regions and large topologies, but I can't comment with any authority on
whether a particular SDN solution handles this case. I would imagine so -
but you'd have better luck on the ops list for specific controllers.

-- 
Sean M. Collins



[openstack-dev] [Congress] Tokyo summit schedule

2015-10-14 Thread Tim Hinrichs
Hi all,

I put a tentative schedule online for the Tokyo summit.  If there are
things you'd like to discuss that aren't on there, let me know.

https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Congress

Masahito: could you reach out to the OPNFV people that wanted to talk about
their use cases and see if the schedule works for them?

I'll reach out to the Monasca team and check with them as well.

Tim


Re: [openstack-dev] [keystone federation] some questions about keystone IDP with SAML supported

2015-10-14 Thread John Dennis

On 10/14/2015 11:58 AM, Marek Denis wrote:

pretty much - yes! Luckily for you the reference libraries (shibboleth)
are written in Java so it should be easier to integrate with your
application.


Only the Shibboleth IdP is written in Java; the Shibboleth SP is written 
in C++. If you're trying to implement an ECP client, you'll probably find 
more support for what you need in the C++ SP implementation libraries.


Actually, writing an ECP client is not difficult; you could probably 
cobble one together pretty easily from the standard Java libraries. An 
ECP client only needs to be able to parse and generate XML and 
communicate via HTTP. It does not need to be able to read or generate 
any SAML-specific XML, because an ECP client encapsulates the SAML in 
other XML (e.g. SOAP).
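To illustrate that point, a client can treat the SAML payload as an opaque blob and only wrap/unwrap it in a SOAP envelope. A minimal sketch using the Python standard library (Python rather than Java, just to keep it short):

```python
# Sketch of SOAP encapsulation: the "assertion" is an opaque blob the
# client never interprets; it only wraps it for transport and unwraps
# the reply.

import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def wrap_in_soap(opaque_saml_xml):
    """Put an opaque SAML document inside a SOAP Envelope/Body."""
    env = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(env, "{%s}Body" % SOAP_NS)
    body.append(ET.fromstring(opaque_saml_xml))
    return ET.tostring(env, encoding="unicode")

def unwrap_soap(envelope_xml):
    """Extract the first child of the SOAP Body, untouched."""
    env = ET.fromstring(envelope_xml)
    body = env.find("{%s}Body" % SOAP_NS)
    return list(body)[0]

saml = "<Assertion xmlns='urn:example'>opaque</Assertion>"
envelope = wrap_in_soap(saml)
inner = unwrap_soap(envelope)
```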


--
John



Re: [openstack-dev] [neutron][fwaas] fwaas driver development steps

2015-10-14 Thread Eichberger, German
Hi Oğuz,

Thank you for starting to contribute to Neutron FWaaS. We just got together as 
a new core team and we are still learning the ropes. Please also be advised 
that we are planning some changes in future versions (especially now that the 
API has been deprecated).

In any case, a good place to start is looking at the other drivers, as well as 
attending our weekly IRC meetings to ask questions. We also have an IRC 
channel where you can ask questions.

Hope that points you in the right direction,
German
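A rough skeleton of what such a driver could look like: the method names mirror the abstract functions mentioned in the question (create_firewall/update_firewall/delete_firewall), while the client class and the dict keys are hypothetical stand-ins for the vendor's REST API, not a real Neutron interface:

```python
# Hedged sketch of a vendor FWaaS driver. FirewallClient stands in for
# the hardware's REST API; in a real driver it would make HTTP calls.

class FirewallClient:
    """Hypothetical client for the hardware firewall's REST API."""

    def __init__(self):
        self.rules = {}

    def push(self, fw_id, rules):
        self.rules[fw_id] = list(rules)

    def remove(self, fw_id):
        self.rules.pop(fw_id, None)

class VendorFwaasDriver:
    def __init__(self, client):
        self.client = client

    def create_firewall(self, agent_mode, apply_list, firewall):
        # `firewall` is assumed dict-like, carrying the policy's rule list
        self.client.push(firewall["id"], firewall["firewall_rule_list"])

    def update_firewall(self, agent_mode, apply_list, firewall):
        # a simple driver can re-push the whole rule set on update
        self.create_firewall(agent_mode, apply_list, firewall)

    def delete_firewall(self, agent_mode, apply_list, firewall):
        self.client.remove(firewall["id"])

client = FirewallClient()
driver = VendorFwaasDriver(client)
driver.create_firewall("l3", [], {"id": "fw1",
                                  "firewall_rule_list": [{"action": "deny"}]})
```

Debugging one of the existing drivers (e.g. the iptables one) with breakpoints, as suggested in the question, remains the best way to see the real content of those parameters.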

From: Oğuz Yarımtepe
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, October 13, 2015 at 7:58 AM
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [neutron][fwaas] fwaas driver development steps

Hi,

I need to write a driver for our firewall hardware to integrate it into our 
OpenStack environment. I checked the Neutron development wiki page, the FWaaS 
wiki page, and the FWaaS driver code on GitHub. Since there is no clear 
documentation on how to write a firewall driver for Neutron, I need some 
guidance. The firewall driver will have a REST API that can be used to 
configure the hardware, so what I need now is to understand how to debug and 
develop Neutron while writing the driver. What is the suggested way? Which 
functions should be implemented? I have seen the abstract functions like 
create_firewall and update_firewall; the question is, what is the content of 
the parameters passed to them? So I should either debug one of the existing 
drivers step by step, like the iptables driver, or find a clear definition.

What is the right way to do it?



--
Oğuz Yarımtepe
http://about.me/oguzy


Re: [openstack-dev] [neutron] Neutron rolling upgrade - are we there yet?

2015-10-14 Thread Dan Smith
> I would like to gather all upgrade activities in Neutron in one place,
> in order to summarizes the current status and future activities on
> rolling upgrades in Mitaka.

Glad to see this work really picking up steam in other projects!

> b.  TODO: To have the rolling upgrade we have to implement the RPC
> version pinning in conf.
> 
> i. I’m not a big
> fan of this solution, but we can work out better idea if needed.

I'll just point to this:

  https://review.openstack.org/#/c/233289/

and if you go check the logs for the partial-ncpu job, you'll see
something like this:

  nova.compute.rpcapi  Automatically selected compute RPC version 4.5
from minimum service version 2

I think that some amount of RPC pinning is probably going to be required
for most people in most places, given our current model. But I assume
the concern is around requiring this to be a manual task the operators
have to manage. The above patch is the first step towards nova removing
this as something the operators have to know anything about.
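To make the mechanism concrete, here is a minimal sketch of the idea: every running service reports an integer "service version", and the RPC API pins itself to the newest RPC interface that the oldest service in the deployment still understands. The table below is invented for illustration; nova keeps the real mapping and its history in `nova/compute/rpcapi.py`.

```python
# Hypothetical mapping from reported service version to the newest compute
# RPC version that such a service understands (numbers are made up).
SERVICE_TO_RPC = {
    1: "4.0",
    2: "4.5",
    3: "4.8",
}


def select_rpc_version(reported_service_versions):
    """Pin the RPC API to what the oldest running service can handle."""
    oldest = min(reported_service_versions)
    usable = [v for v in SERVICE_TO_RPC if v <= oldest]
    return SERVICE_TO_RPC[max(usable)]
```

With a mixed deployment reporting versions [2, 3, 3], the pin lands on "4.5", which is the shape of the log line quoted above: the newest code automatically degrades to the minimum service version present.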

--Dan



Re: [openstack-dev] [release] establishing release liaisons for mitaka

2015-10-14 Thread Armando M.
On 14 October 2015 at 08:25, Doug Hellmann  wrote:

> As with the other cross-project teams, the release management team
> relies on liaisons from each project to be available for coordination of
> work across all teams. It's the start of a new cycle, so it's time to
> find those liaison volunteers.
>
> We are working on updating the release documentation as part of the
> Project Team Guide. Release liaison responsibilities are documented in
> [0], and we will update that page with more details over time.
>
> In the past we have defaulted to having the PTL act as liaison if no
> alternate is specified, and we will continue to do that during Mitaka.
> If you plan to delegate release work to a liaison, especially for
> submitting release requests, please update the wiki [1] to provide their
> contact information. If you plan to serve as your team's liaison, please
> add your contact details to the page.
>

Just to be clear: for Neutron, Kyle Mestery has kindly volunteered to
continue to act as release liaison, so the wiki is up to date.

I am eternally grateful to Kyle for taking on this critical task, as well
as to the wider release team.


>
> Thanks,
> Doug
>
> [0]
> http://docs.openstack.org/project-team-guide/release-management.html#release-liaisons
> [1]
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release_management
>


Re: [openstack-dev] [Congress] Congress and Monasca Joint Session at Tokyo Design Summit

2015-10-14 Thread Tim Hinrichs
Hi Fabio,

We now have a schedule.  I've tentatively booked you for half of our slot
Wed 3:40-4:20.  Does that work for your team?  You can find the other
options at...

https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Congress

Tim


On Thu, Oct 1, 2015 at 2:06 PM Fabio Giannetti (fgiannet) <
fgian...@cisco.com> wrote:

> Thanks a lot Tim.
> I really appreciate.
> Fabio
>
> From: Tim Hinrichs 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, October 1, 2015 at 7:40 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Congress] Congress and Monasca Joint
> Session at Tokyo Design Summit
>
> Hi Fabio,
>
> The Congress team talked this over during our IRC yesterday.  It looks
> like we can meet with your team during one of our meeting slots.  As far as I
> know the schedule for those meetings hasn't been set.  But once it is I'll
> reach out (or you can) to discuss the day/time.
>
> Tim
>
> On Mon, Sep 28, 2015 at 2:51 PM Tim Hinrichs  wrote:
>
>>
>> Hi Fabio: Thanks for reaching out.  We should definitely talk at the
>> summit.  I don't know if we can devote 1 of the 3 allocated Congress
>> sessions to Monasca, but we'll talk it over during IRC on Wed and let you
>> know.  Or do you have a session we could use for the discussion?  In any
>> case, I'm confident we can make good progress toward integrating Congress
>> and Monasca in Tokyo.  Monasca sounds interesting--I'm looking forward to
>> learning more!
>>
>> Congress team: if we could all quickly browse the Monasca wiki before
>> Wed's IRC, that would be great:
>> https://wiki.openstack.org/wiki/Monasca
>>
>> Tim
>>
>>
>>
>> On Mon, Sep 28, 2015 at 1:50 PM Fabio Giannetti (fgiannet) <
>> fgian...@cisco.com> wrote:
>>
>>> Tim and Congress folks,
>>>   I am writing on behalf of the Monasca community and I would like to
>>> explore the possibility of holding a joint session during the Tokyo Design
>>> Summit.
>>> We would like to explore:
>>>
>>>1. how to integrate Monasca with Congress so then Monasca can
>>>provide metrics, logs and event data for policy evaluation/enforcement
>>>2. How to leverage Monasca alarming to automatically notify about
>>>statuses that may imply policy breach
>>>3. How to automatically (if possible) convert policies (or subparts)
>>>into Monasca alarms.
>>>
>>> Please point me to a submission page if I have to create a formal
>>> proposal for the topic and/or let me know other forms we can interact at
>>> the Summit.
>>> Thanks in advance,
>>> Fabio
>>>
>>> *Fabio Giannetti*
>>> Cloud Innovation Architect
>>> Cisco Services
>>> fgian...@cisco.com
>>> Phone: *+1 408 527 1134*
>>> Mobile: *+1 408 854 0020*
>>>
>>> *Cisco Systems, Inc.*
>>> 285 W. Tasman Drive
>>> San Jose
>>> California
>>> 95134
>>> United States
>>> Cisco.com 
>>>
>>>
>>>


Re: [openstack-dev] [neutron] one SDN controller and many OpenStack environments

2015-10-14 Thread Kris Sterckx
Hi Piotr,

Nuage Networks supports the use case you describe, and this is actually
deployed at operators.

Thanks,

Kris
On Oct 14, 2015 5:34 PM, "Piotr Misiak"  wrote:

> Hi,
>
> Do you know if there is a possibility to manage tenants networks in many
> OpenStack environments by one SDN controller (via Neutron plugin)?
>
> Suppose I have in one DC two or three OpenStack env's (deployed for
> example by Fuel) and I have a SDN controller which manages all switches
> in DC. Can I connect all those OpenStack env's to the SDN controller
> using plugin for Neutron? I'm curious what will happen if there will be
> for example the same MAC address in two OpenStack env's?
> Maybe I should not connect those OpenStack env's to the SDN controller
> and use a standard OVS configuration?
>
> Which SDN controller would you recommend? I'm researching these currently:
> - OpenDaylight
> - Floodlight
> - Ryu
>
> Thanks,
> Piotr Misiak
>


Re: [openstack-dev] [Neutron][dns]What the meaning of"dns_assignment" and "dns_name"?

2015-10-14 Thread Zhi Chang
Hi, Miguel
Thank you so much for your reply. You are so patient!
After reading your reply, I still have some questions to ask you. :-)
Below is my understanding of http://paste.openstack.org/show/476210/; 
please read it and tell me whether I have it right.
(1). Define a DNS domain.
(2). Update a network's "dns_domain" attribute to the DNS domain defined 
in step 1.
(3). Create a VM in this network. The instance's port will get the 
instance's hostname assigned to its dns_name attribute.
(4). Create a floating IP for this VM.
(5). In Designate, a new A record will be generated. This record links the 
floating IP to dns_name+domain_name, just like your record: 
deec921d-b630-4479-8932-c5ec7c530820 | A | my-instance.my-example.org. | 
172.24.4.3
(6). I don't understand where the IP address "104.130.78.191" comes from. 
I think this address is a public DNS server, just like 8.8.8.8. Is that right?
(7). I can dig "my-instance.my-example.org." via a public DNS server, and 
the result is the floating IP.


Is my understanding right?


Hope for your reply.
Thanks 
Zhi Chang


-- Original --
From:  "Miguel Lavalle";
Date:  Wed, Oct 14, 2015 11:22 AM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] [Neutron][dns]What the meaning of"dns_assignment" 
and "dns_name"?

 
Zhi Chang,


Thank you for your questions. We are in the process of integrating Neutron and 
Nova with an external DNS service, using Designate as the reference 
implementation. This integration is being achieved in 3 steps. What you are 
seeing is the result of only the first one. These steps are:


1) Internal DNS integration in Neutron, which merged recently: 
https://review.openstack.org/#/c/200952/. As you may know, Neutron has an 
internal DHCP / DNS service based on dnsmasq for each virtual network that you 
create. Previously, whenever you created a port on a given network, your port 
would get a default host name in dnsmasq of the form 
'host-xx-xx-xx-xx.openstacklocal.", where xx-xx-xx-xx came from the port's 
fixed ip address "xx.xx.xx.xx" and "openstacklocal" is the default domain used 
by Neutron. This name was generated by the dhcp agent. In the above mentioned 
patchset, we are moving the generation of these dns names to the Neutron 
server, with the intent of allowing the user to specify it. In order to do 
that, you need to enable it by defining in neutron.conf the 'dns_domain' 
parameter with a value different to the default 'openstacklocal'. Once you do 
that, you can create or update a port and assign a value to its 'dns_name' 
attribute. Why is this useful? Please read on.
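The default naming scheme just described can be sketched as follows (an illustrative reimplementation of the behaviour, not Neutron's actual code):

```python
# Mimics the default dnsmasq hostname Neutron derives from a port's fixed IP:
# "xx.xx.xx.xx" becomes "host-xx-xx-xx-xx.<dns_domain>".
def default_dns_name(fixed_ip, dns_domain="openstacklocal"):
    return "host-%s.%s" % (fixed_ip.replace(".", "-"), dns_domain)
```

So a port at 10.0.0.5 gets "host-10-0-0-5.openstacklocal" unless you override `dns_domain` in neutron.conf and set the port's `dns_name` yourself.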

2) External DNS integration in Neutron. The patchset is being worked now: 
https://review.openstack.org/#/c/212213/. The functionality implemented here 
allows Neutron to publish the dns_name associated with a floating ip under a 
domain in an external dns service. We are using Designate as the reference 
implementation, but the idea is that in the future other DNS services can be 
integrated. Where does the dns name and domain of the floating ip come from? 
It can come from 2 sources. Source number 1 is the floating ip itself, because 
in this patchset we are adding a dns_name and a dns_domain attributes to it. If 
the floating ip doesn't have a dns name and domain associated with it, then 
they can come from source number 2: the port that the floating ip is associated 
with (as explained in point 1, ports now can have a dns_name attribute) and the 
port's network, since this patchset adds dns_domain to networks.
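The two-source lookup described here can be sketched as follows (plain dicts stand in for the Neutron resources; this mirrors the described behaviour, not the patch itself):

```python
# Source 1: the floating IP's own dns_name/dns_domain attributes win.
# Source 2: otherwise, fall back to the associated port's dns_name combined
# with the port's network's dns_domain.  Returns None when neither applies.
def published_record(floating_ip, port, network):
    if floating_ip.get("dns_name") and floating_ip.get("dns_domain"):
        return "%s.%s" % (floating_ip["dns_name"], floating_ip["dns_domain"])
    if port.get("dns_name") and network.get("dns_domain"):
        return "%s.%s" % (port["dns_name"], network["dns_domain"])
    return None
```

This matches the A record in the walkthrough: a port named "my-instance" on a network with dns_domain "my-example.org." yields "my-instance.my-example.org.".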


3) Integration of Nova with Neutron's DNS. I have started the implementation of 
this and over the next few days will push the code to Gerrit for first review. 
When an instance is created, nova will request to Neutron the creation of the 
corresponding port specifying the instance's hostname in the port's 'dns_name' 
attribute (as explained in point 1). If the network where that port lives has a 
dns_domain associated with it (as explained in point 2) and you assign a 
floating ip to the port, your instance's hostname will be published in the 
external dns service.


To make it clearer, here I walk you through an example that I executed in my 
devstack: http://paste.openstack.org/show/476210/


As mentioned above, we also allow the dns_name and dns_domain to be published 
in the external dns to be defined at the floating ip level. The reason for this 
is that we envision a use case where the name and ip address made public in the 
dns service are stable, regardless of the nova instance associated with the 
floating ip.


If you are attending the upcoming Tokyo summit, you could attend the following 
talk for further information:  
http://openstacksummitoctober2015tokyo.sched.org/event/5cbdd5fb4a6d080f93a5f321ff59009c#.Vh3KMZflRz2
 Looking forward to seeing you there!




Re: [openstack-dev] [puppet] Proposing Denis Egorenko core

2015-10-14 Thread Yanis Guenane


On 10/13/2015 11:02 PM, Matt Fischer wrote:
> On Tue, Oct 13, 2015 at 2:29 PM, Emilien Macchi  wrote:
>
>> Denis Egorenko (degorenko) is working on Puppet OpenStack modules for
>> quite some time now.
>>
>> Some statistics [1] about his contributions (last 6 months):
>> * 270 reviews
>> * 49 negative reviews
>> * 216 positive reviews
>> * 36 disagreements
>> * 30 commits
>>
>> Beside stats, Denis is always here on IRC participating to meetings,
>> helping our group discussions, and is always helpful with our community.
>>
>> I honestly think Denis is on the right path to become a good core member
>> team, he has strong knowledge in OpenStack deployments, knows enough
>> about our coding style and his involvement in the project is really
>> great. He's also a huge consumer of our modules since he's working on Fuel.
>>
>> I would like to open the vote to promote Denis part of Puppet OpenStack
>> core reviewers.
>>
>> [1] http://stackalytics.com/report/contribution/puppetopenstack-group/180
>> --
>> Emilien Macchi
>>
>>
>>
> Denis has given me some great feedback on reviews and has shown a good
> understanding of puppet-openstack.
>
> +1

+1

He has been really involved and proactive (reviews + commits) in the
community during the past months.

--
Yanis Guenane



Re: [openstack-dev] [Monasca] Monasca Meeting @ Tokyo Summit

2015-10-14 Thread Oğuz Yarımtepe
Hi,

On Wed, Oct 14, 2015 at 7:36 AM, Fabio Giannetti (fgiannet) <
fgian...@cisco.com> wrote:

> Guys,
>I have a Cisco room S3 to hold a Monasca meeting during the Tokyo Summit.
> The time slot is Thursday 4:30pm to 6pm.
> Please mark your calendar and see you there.
> Fabio
>
>
Will this meeting be open to everyone? We are using Monasca in our test
environment and planning to use it in our production environment as well; we
would like to hear about its future plans and development process.


Re: [openstack-dev] [neutron]Anyone tried to mix-use openstack components or projects?

2015-10-14 Thread Germy Lure
Hi Salvatore,
Thank you so much.
I think I see your points now. As a next step, I will give it a try and check.

Many thanks.
Germy
.


On Mon, Oct 12, 2015 at 11:11 PM, Salvatore Orlando 
wrote:

> Inline,
> Salvatore
>
> On 12 October 2015 at 10:23, Germy Lure  wrote:
>
>> Thank you, Kevin.
>> So the community just divided the whole OpenStack into separate
>> sub-projects (Nova, Neutron, etc.), but didn't take into account whether
>> those modules can work together across different versions. Yes?
>>
>
> The developer community has been addressing this by ensuring, to some
> extent, backward compatibility between the APIs used for communicating
> across services. This is what allows a component at version X to operate
> with another component at version Y.
>
> In the case of Neutron and Nova, this is only done with REST over HTTP.
> Other projects also use RPC over AMQP.
> Neutron strived to be backward compatible since the v2 API was introduced
> in Folsom. Therefore you should be able to run Neutron Kilo with Nova
> Havana; as Kevin noted, you might want to disable notifications on the
> Neutron side as the nova extension that processes them does not exist in
> Havana.
>
>
>
>>
>> If so, is it possible to keep being compatible with each other in
>> technology? How about just N+1? And how about just in Neutron?
>>
>
> While it is surely possible, enforcing this, as far as I can tell, is not
> a requirement for Openstack projects. Indeed, it is not something which is
> tested in the gate. It would be interesting to have it as a part of a
> rolling upgrade test for an OpenStack cloud, where, for instance, you first
> upgrade the networking service and then the compute service. But beyond
> that I do not think the upstream developer community should provide any
> additional guarantee, notwithstanding guarantees on API backward
> compatibility.
>
>
>> Germy
>> .
>>
>> On Sun, Oct 11, 2015 at 4:33 PM, Kevin Benton  wrote:
>>
>>> For the particular Nova Neutron example, the Neutron Kilo API should
>>> still be compatible with the calls Havana Nova makes. I think you will need
>>> to disable the Nova callbacks on the Neutron side because the Havana
>>> version wasn't expecting them.
>>>
>>> I've tried out many N+1 combinations (e.g. Icehouse + Juno, Juno + Kilo)
>>> but I haven't tried a gap that big.
>>>
>>> Cheers,
>>> Kevin Benton
>>>
>>> On Sat, Oct 10, 2015 at 1:50 AM, Germy Lure 
>>> wrote:
>>>
 Hi all,

 As you know, openstack projects are developed separately. And
 theoretically, people can create networks with Neutron in Kilo version for
 Nova in Havana version.

 Did Anyone tried it?
 Do we have some pages to show what combination can work together?

 Thanks.
 Germy
 .




>>>
>>>
>>> --
>>> Kevin Benton
>>>
>>>
>>>
>>
>>
>
>


Re: [openstack-dev] [Zaqar][cli][openstackclient] conflict in nova flavor and zaqar flavor

2015-10-14 Thread Flavio Percoco

On 13/10/15 18:27 -0400, Monty Taylor wrote:

On 10/13/2015 05:15 PM, Dean Troyer wrote:

On Tue, Oct 13, 2015 at 3:58 PM, Shifali Agrawal
> wrote:

   All of the above makes sense; just one thing: how about using the word
   "zaqar" instead of "messaging"? That is what all the other projects are
   doing, for example:


These are the old project-specific CLIs, note that the 'keystone'
command only supports v2 auth today and will be removed entirely in the
keystoneclient 2.0 release.

   $ keystone user-create
   $ heat event-list

   This will create a separate namespace for the project and also will
   solve the issue of `openstack messaging message post`.


One of the things I have tried very hard to do is make it so users do
NOT need to know which API handles a given command.  For higher-layer
projects that is less of a concern I suppose, and that was done long
before anyone thought that 20+ APIs would be handled in a single command
set.


I agree very strongly with this goal. We've done the same thing with 
the new ansible modules. (os_server vs. nova_compute) It becomes 
especially important when there are things that are the same but 
handled by different services. Should the user know/care that in cloud 
A they get a floating IP from nova but in cloud B they get it from 
neutron? Nope. That's a mess in our yard - the user shouldn't need to 
know.



Namespacing has come up and is something we need to discuss further,
either within the 'openstack' command itself or by using additional
top-level command names.  This is one of the topics for discussion in
Tokyo, but has already started on the ML for those that will not be present.

No matter how we end up handling the namespacing issue, I will still
strongly insist that project code names not be used.  I know some
plugins already do this today and we can't stop anyone else from doing
it, but it leads to the same sort of inconsistency for users that the
original project CLIs had. It reduces the value of a single (or small
set of) CLI for the user to deal with.


FWIW - in the ansible modules we adopted a general naming policy of 
non-service names for things that end-users want to interact with 
(server, floating-ip) because end-users are not deployers and shouldn't 
have to care.


For admin things "create keystone domain" "create nova flavor" we have 
the service name in - partially because of the namespacing problem, 
but also because an _admin_ is administering a service called "nova" - 
they are not consuming a service that might be provided by nova ... 
they can be expected to know.


So we have: os_nova_flavor and will soon have os_keystone_domain but 
os_image and os_subnet.


This is great feedback, and I think it actually pinpoints the problem we're
having. The problem is not really related to the user-facing API but to the
admin one.

I fully agree with the suggestion of not using project names, which is
why I originally recommended using the catalog service type. That
said, I think this can be worked out better, and we could follow
something similar to what Monty mentioned.

I guess my remaining concern here is that this standard assumes that
there's just one messaging service that could be used, and that whoever
calls `openstack message post ...` is talking to Zaqar.

Cheers,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [kuryr] Proposing Taku Fukushima as Kuryr core

2015-10-14 Thread Irena Berezovsky
+1


On Tue, Oct 13, 2015 at 5:07 PM, Gal Sagie  wrote:

> +1
>
> Taku is a great addition to the team and hoping to see him continue
> deliver high quality
> contribution in all aspects of the project.
>
> On Tue, Oct 13, 2015 at 4:52 PM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
>
>> Hi fellow Kurýrs,
>>
>> I would like to propose Taku Fukushima for the core Kuryr team due to his
>> unparalleled dedication to the project. He has written most of the code
>> and
>> battled through the continuous libnetwork API changes. He will be a great
>> addition to the reviewing tasks.
>>
>> Current core members, please, cast your vote by tomorrow night.
>>
>> Regards,
>>
>> Toni
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>


Re: [openstack-dev] [all][stable][release] 2015.1.2

2015-10-14 Thread Ihar Hrachyshka
> On 13 Oct 2015, at 21:56, Matt Riedemann  wrote:
> 
> 
> 
> On 10/13/2015 1:57 PM, Chuck Short wrote:
>> Hi
>> 
>> I'm just in the last stages of releasing 2015.1.2. I don't think anything is
>> stopping us from opening it up again. The tarballs have been created. So
>> go for it.
>> 
>> Chuck
>> 
>> On Tue, Oct 13, 2015 at 2:46 PM, Matt Riedemann
>> > wrote:
>> 
>> 
>> 
>>On 10/7/2015 7:42 PM, Chuck Short wrote:
>> 
>>Hi,
>>stable/kilo is now frozen. I expect to do a release on Tuesday.
>>If you
>>need to include something please let me know.
>> 
>>Thanks
>>chuck
>> 
>> 
>>
>> 
>> 
>>Looks like the 2015.1.2 tag is up and now we need to bump the
>>version to 2015.1.3 in setup.cfg for the projects, I don't see a
>>series up for that yet but it's blocking anything from passing tests
>>on stable/kilo now.
>> 
>>--
>> 
>>Thanks,
>> 
>>Matt Riedemann
>> 
>> 
>> 
>> 
>> 
>> 
>> 
> 
> I've started these:
> 
> https://review.openstack.org/#/q/Ic97696684c8545068597b4f1efd9d3eb19294d93,n,z
> 
> But I'm wondering what to do with trove and the neutron-*aas projects, since 
> they don't have a 2015.1.2 tag. Do we still set their pre-version in 
> setup.cfg to 2015.1.3? That seems odd to me, but it's what we did for 
> 2015.1.2.

Chuck, have you forgotten about three more repositories for neutron-*aas?

Ihar



signature.asc
Description: Message signed with OpenPGP using GPGMail


Re: [openstack-dev] [puppet][Fuel] Using Native Ruby Client for Openstack Providers

2015-10-14 Thread Gilles Dubreuil


On 14/10/15 10:36, Colleen Murphy wrote:
> 
> 
> On Tue, Oct 13, 2015 at 6:13 AM, Vladimir Kuklin  > wrote:
> 
> Puppetmaster and Fuelers,
> 
> Last week I mentioned that I would like to bring the theme of using
> native ruby OpenStack client and use it within the providers.
> 
> Emilien told me that I had already been late and the decision was
> made that puppet-openstack decided to not work with Aviator based on
> [0]. I went through this thread and did not find any unresolvable
> issues with using Aviator in comparison with potential benefits it
> could have brought up.
> 
> What I actually saw was this:
> 
> * Pros
> 
> 1) It is a native ruby client
> 2) We can import it in puppet and use all the power of Ruby
> 3) We will not need to have a lot of forks/execs for puppet 
> 4) You are relying on negotiated and structured output provided by
> API (JSON) instead of introducing workarounds for client output like [1]
> 
> * Cons
> 
> 1) Aviator is not actively supported 
> 2) Aviator does not track all the upstream OpenStack features while
> native OpenStack client does support them
> 3) Ruby community is not really interested in OpenStack (this one is
> arguable, I think)
> 
> * Proposed solution
> 
> While I completely understand all the cons against using Aviator
> right now, I see that the pros above are compelling enough to change our
> minds and invest our own resources into creating a really good
> OpenStack binding in Ruby.
> Some are saying that there is not much involvement of Ruby in
> OpenStack. But we are actually working with Puppet/Ruby and are
> involved in the community. So why shouldn't we own this ourselves
> and lead by example here?
> 
> I understand that many of you do already have a lot of things on
> their plate and cannot or would not want to support things like
> additional library when native OpenStack client is working
> reasonably well for you. But if I propose the following scheme to
> get support of native Ruby client for OpenStack:
> 
> 1) we (community) have these resources (speaking of the company I am
> working for, we at Mirantis have a set of guys who could be very
> interested in working on Ruby client for OpenStack)
> 2) we gradually improve Aviator code base up to the level that it
> eliminates issues that are mentioned in  'Cons' section
> 3) we introduce additional set of providers and allow users and
> operators to pick whichever they want
> 4) we leave OpenStackClient default one
> 
> Would you support it and allow such code to be merged into upstream
> puppet-openstack modules?
> 
> 
> [0] 
> https://groups.google.com/a/puppetlabs.com/forum/#!searchin/puppet-openstack/aviator$20openstackclient/puppet-openstack/GJwDHNAFVYw/ayN4cdg3EW0J
> [1] 
> https://github.com/openstack/puppet-swift/blob/master/lib/puppet/provider/swift_ring_builder.rb#L21-L86
> -- 
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04 
> +7 (926) 702-39-68 
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru 
> vkuk...@mirantis.com 
> 
> 
> The scale-tipping reason we went with python-openstackclient over the
> Aviator library was that at the same time we were trying to switch, we
> were also developing keystone v3 features and we could only get those
> features from python-openstackclient.
> 
> For the first two pros you listed, I'm not convinced they're really
> pros. Puppet types and providers are actually extremely well-suited to
> shelling out to command-line clients. There are existing, documented
> puppet APIs to do it and we get automatic debug output with it. Nearly
> every existing type and provider does this. It is not well-suited to
> call out to other non-standard ruby libraries because they need to be
> added as a dependency somehow, and doing this is not well-established in
> puppet. There are basically two options to do this:
> 
>  1) Add a gem as a package resource and make sure the package resource
> is called before any of the openstack resources. I could see this
> working as an opt-in thing, but not as a default, for the same reason we
> don't require our users to install pip libraries - there is less
> security guarantees from pypi and rubygems than from distro packages,
> plus corporate infrastructure may not allow pulling packages from these
> types of sources. (I don't see this policy documented anywhere, this was
> just something that's been instilled in me since I started working on
> this team. If we want to revisit it, formalize 

Re: [openstack-dev] [Fuel] Proposal to freeze old Fuel CLI

2015-10-14 Thread Sebastian Kalinowski
Roman, this was already discussed in [1].
The conclusion was that we will implement new features in both places, so
users will not have to use the "old" fuelclient for some things and the
"new" one for others.
There has been no progress with moving old commands to the new CLI, and I
haven't seen plans to do so.
IMHO, without a detailed plan for migrating old commands to the new client,
and without a person (or people) to drive this task, we *cannot* freeze the
old fuelclient, as the new one is still not fully usable.

Best,
Sebastian

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070279.html


2015-10-14 10:56 GMT+02:00 Roman Prykhodchenko :

> Fuelers,
>
> as you know, a big effort has been put into making Fuel Client’s CLI
> better, and as a result we got a new fuel2 utility with a new set of
> commands and options. Some folks are still putting great effort into
> moving everything that’s left in the old CLI to the new CLI.
>
> Every new thing added to the old CLI pushes back the point at which we can
> finally remove it. My proposal is to do a hard code freeze for the old CLI
> and only add new stuff to the new one.
>
>
> - romcheg
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [Zaqar][cli][openstackclient] conflict in nova flavor and zaqar flavor

2015-10-14 Thread Sean Dague
On 10/12/2015 07:55 PM, Dean Troyer wrote:
> On Mon, Oct 12, 2015 at 5:25 PM, Victoria Martínez de la Cruz wrote:
> 
> So, these commands would look like
> 
> openstack pool-flavor create
> openstack pool-flavor get
> openstack pool-flavor delete
> openstack pool-flavor update
> openstack pool-flavor list
> 
> 
> I would strongly suggest leaving the dash out of the resource name:
> 
> openstack pool flavor create
> etc
> 
> Multiple word names have been supported for a long time and the only
> other plugin I know that has them has a bug against it to remove the dash.

So, this might just be me, but I find all the multi-word resources really
confusing to use, for multiple reasons.

1) my brain is thinking "command noun verb [options]". When presented
with "command A B C" it does a lot of thinking: B is a verb, oh wait
it's not, oh wait in this case it is. Is C more verby or B more verby.
etc etc.

Basically we've removed a very simple rule for how commands look, which
means more brain power figuring out how each command independently works.

2) "openstack pool -h" is an error. That's massively surprising. So is
"openstack help pool" (no such command!).
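The noun/verb ambiguity above can be sketched with a toy resolver. The command table and the greedy longest-match rule here are hypothetical illustrations, not actual openstackclient internals:

```python
# Toy command resolver showing why multi-word resource names are
# ambiguous: the parser cannot tell where the noun ends and the verb
# begins without trying every split. Command names are made up.

COMMANDS = {
    ("pool", "create"),            # single-word resource
    ("pool", "flavor", "create"),  # multi-word resource: "pool flavor"
}

def resolve(argv):
    """Return (matched command words, remaining args) via longest match."""
    for length in range(len(argv), 0, -1):
        if tuple(argv[:length]) in COMMANDS:
            return tuple(argv[:length]), argv[length:]
    raise ValueError("no such command: %s" % " ".join(argv))

cmd, rest = resolve(["pool", "flavor", "create", "myflavor"])
assert cmd == ("pool", "flavor", "create") and rest == ["myflavor"]
```

"pool flavor create" only resolves because the parser greedily tries the three-word form first; "pool" alone matches nothing, which is why a partial command such as `openstack pool -h` errors rather than printing help.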

So while I realize this has been the pattern in the past, it's never
really worked well for me at least.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [mistral] Team meeting minutes - 10/12/2015

2015-10-14 Thread Lingxian Kong
Hi, Mistral guys,

In the last meeting we discussed Tempest usage in the Mistral project and
the functional testing mechanism in depth. My understanding of the
functional testing is as follows:

* run_functional_tests.sh is just used locally and will not run tests that
depend on OpenStack.
* in our gate, all functional tests will run, since OpenStack is deployed
before Mistral is installed.

Am I right?

What's more, maybe I'm totally wrong about the Tempest usage in Mistral
functional testing and about using it for DefCore purposes. If Nikolay is
right, we can get rid of it entirely, so we don't rely on it for our
testing. Or we could use the test plugin mechanism Tempest already provides
(see http://docs.openstack.org/developer/tempest/plugin.html), but I think
we are not interested in that in the short term.

On Tue, Oct 13, 2015 at 1:06 AM, Renat Akhmerov 
wrote:

> Hi,
>
> Thanks for joining team meeting today.
>
> Meeting minutes:
> http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-10-12-16.00.html
> Meeting log:
> http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-10-12-16.00.log.html
>
> See you next Monday at the same time.
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
>
>
>


-- 
*Regards!*
*---*
*Lingxian Kong*


Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-14 Thread Thierry Carrez
Zaro wrote:
> We want to let everyone know that there is a big UI change in Gerrit
> 2.11.  The change screen (CS), which is the main view for a patchset,
> has been completely replaced with a new change screen (CS2).  While
> Gerrit 2.8 contains both old CS and CS2, I believe everyone in
> Openstack land is really just using the old CS.  CS2 really wasn't
> ready in 2.8 and really should never be used in that version.  The CS2
> has come a long way since then and many other big projects have moved
> to using Gerrit 2.11 so it's not a concern any longer.  If you would
> like a preview of Gerrit 2.11 and maybe help us test it, head over to
> http://review-dev.openstack.org.  If you are very opposed to CS2 then
> you may like Gertty (https://pypi.python.org/pypi/gertty) instead.  If
> neither option works for you then maybe you can help us create a new
> alternative :)
> 
> We are soliciting feedback so please let us know what you think.

It looks quite good to me.

My main issue with CS2 is how greedy it is in horizontal space, mostly
due to the waste of space in the "Related changes" panel. If there are
related changes, the owner/reviewer/voting panel is cramped in the
middle, while "related changes" has a lot of empty space on the right.
Is there a way to turn that panel off (move it to a dropdown like the
patchsets view), or make it use half the width, or push it under
ChangeID, or...

I also find the CS2 vote results area a lot less clear than the table we
had in CS, but that may be because I'm used to it now.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all][stable][release] 2015.1.2

2015-10-14 Thread Thierry Carrez
Ihar Hrachyshka wrote:
> Chuck, have you forgotten about three more repositories for neutron-*aas?

Yeah, if those have fixes they should have been tagged as well.

Also we'll want a change to openstack/releases to include 2015.1.2 in
release history.

-- 
Thierry Carrez (ttx)





[openstack-dev] [lbaas] [octavia][heat]

2015-10-14 Thread Banashankar KV
Please review the following heat changes for the LBaaS v2
https://review.openstack.org/#/c/228598/


Thanks
Banashankar


Re: [openstack-dev] [Neutron] [xen] [ovs]: how to handle the ports when there are multiple ports existing for a single VM vif

2015-10-14 Thread Jianghua Wang
Hi,
  The problem with configuring both tapx.0 and vifx.0 is that the iface-id 
was thought to be unique for each port, and all the ports are indexed by 
iface-id. But in our case, both tapx.0 and vifx.0 share the same iface-id. 
I'm thinking of using the port's name as the identifier instead, as the name 
seems unique on the ovs bridge. Could any experts help to confirm whether 
there is any potential issue with that?
  Another idea I'm considering: the iface-id is unique for each "active" 
port, so one potential resolution is to continue using the iface-id for 
active ports, but treat the inactive ports as subsidiary to the active port, 
and add a function to sync the configuration to the inactive ports on any 
update to the active port.
  Any comments are welcome and appreciated.

Thanks,
Jianghua

-Original Message-
Date: Mon, 12 Oct 2015 16:12:23 +
From: Jianghua Wang 
To: "openstack-dev@lists.openstack.org"

Subject: [openstack-dev] [Neutron] [xen] [ovs]: how to handle the
ports when there are multiple ports existing for a single VM vif
Message-ID:
<382648c81696da498287d6ce037fbf6a0c3...@sinpex01cl02.citrite.net>
Content-Type: text/plain; charset="us-ascii"

Hi guys,
   I'm working on bug #1268955, which is due to the neutron ovs agent/plugin 
not processing ports correctly when multiple ports exist for a single VM vif. 
I originally identified two potential solutions, but one of them requires a 
non-trivial change, and the other may result in a race condition. So I'm 
posting it here to seek help. Please let me know if you have any comments or 
advice. Thanks in advance.

Bug description:

When the guest VM is running in HVM mode, neutron doesn't set the vlan tag 
on the proper port, so the guest VM loses network connectivity.

Problem analysis:
When a VM is in HVM mode, ovs creates two ports and two interfaces for a 
single vif inside the VM. If the domID is x, one port/interface is named 
tapx.0, the qemu-emulated NIC, used when no PV drivers are installed; the 
other is named vifx.0, the xen network frontend NIC, used when the VM has PV 
drivers installed. Depending on whether PV drivers are present, either 
port/interface may be used. But the current ovs agent/plugin uses the VM's 
vif id (iface-id) to identify the port, so depending on the order of ports 
retrieved from ovs, only one port will be processed by neutron. The network 
problem occurs when the port ultimately used is not the one processed by 
neutron (e.g. given the vlan tag).



Two of my potential solutions:

1.  Configure both ports, regardless of which port is ultimately used, so 
that both have the same configuration. This should resolve the problem, but 
the existing code uses the iface-id as the key for each port, and both 
tapx.0 and vifx.0 have the same iface-id. With this solution, I have to 
change the data structure to hold both ports and change the related 
functions; such changes spread across many places, so it would take much 
more effort than the second choice. I also have a concern that there may be 
potential issues with configuring the inactive port, although I can't point 
one out currently.



2.  When there are multiple choices, ovs sets the "iface-status" field to 
"active" for the port taking effect, and the others are marked inactive. So 
the other solution is to return only the active port. If a switchover 
happens later, treat the port as updated, and the newly chosen port will be 
configured accordingly. This ensures the active port is configured properly, 
and the needed change is very limited. Please see the draft patch set for 
this solution: https://review.openstack.org/#/c/233498/
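A minimal sketch of how an agent might keep only the active port per iface-id, assuming simplified port records rather than the real neutron agent data structures:

```python
# Sketch of solution 2: when several OVS ports share one iface-id
# (tapx.0 and vifx.0), keep only the port whose iface-status is
# "active".  The port-record shape is a simplification of what the
# agent reads from ovsdb, not the actual neutron code.

def active_ports(ports):
    """Map each iface-id to its single currently-active port."""
    result = {}
    for port in ports:
        if port.get("iface-status") == "active":
            result[port["iface-id"]] = port
    return result

ports = [
    {"name": "tap1.0", "iface-id": "abc", "iface-status": "inactive"},
    {"name": "vif1.0", "iface-id": "abc", "iface-status": "active"},
]
# Only vif1.0 survives the filter, so neutron tags the port actually
# in use; a later switchover shows up as a change in this mapping.
assert active_ports(ports)["abc"]["name"] == "vif1.0"
```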



But the problem is that it introduces a race condition. E.g. if the tag is 
set on tapx.0 first, the guest VM gets connectivity via tapx.0; then the PV 
driver loads, so the active port switches to vifx.0; but depending on the 
neutron agent polling interval, vifx.0 may not be tagged for a while, and 
during that period the connection is lost.


Could you share your insights? Thanks a lot.

B.R.
Jianghua Wang



Re: [openstack-dev] [neutron]What happened when the 3-rd controller restarted?

2015-10-14 Thread Germy Lure
Hi Salvatore and Kevin,

I'm sorry for replying so late.
I wanted to see whether the community had considered data sync for these
two styles (agent and controller) of integration. To integrate multiple
vendors' controllers, I need some help from the community. That was the
original purpose of this thread. In other words, I had no concrete idea
when I sent this message; I was just asking for help.

Anyway, the issues I mentioned in my last mail still exist, and we still
need to face them. I have some rough ideas for your reference.

1. Try our best to keep the source data correct.
Think about a CREATE operation: if the backend hits an exception and
Neutron times out, the record should be destroyed or marked ERROR to warn
the operator. If Neutron hits an exception, the backend will have an extra
record. To avoid this, Neutron could store and mark a record CREATE_PENDING
before pushing it to the backend, then scan the data and check it against
the backend after restarting when an exception occurs. If a record in
Neutron is extra, destroy it or mark it ERROR to warn the operator. UPDATE
and DELETE need similar logic.
Currently in Neutron, some objects have defined XX_PENDING states and some
have not.
2. Check each other when they restart.
After restarting, the backend should report the states of all objects and
may re-load data from Neutron to rebuild or check its local data. When
Neutron restarts, it should get data from the backend and check it. Maybe
it can notify the backend, and the backend acts as if it had just
restarted.
All in all, I think it's enough to keep the data correct when you write
(CUD) it and to check it when restarting.
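Idea 1 can be sketched roughly as follows; the Store/FakeBackend classes and the state names are illustrative only, not actual Neutron code:

```python
# Rough sketch of the CREATE_PENDING idea: persist the intent before
# pushing to the backend, then reconcile stuck records after a restart.
# Everything here is a hypothetical stand-in, not Neutron internals.

CREATE_PENDING, ACTIVE, ERROR = "CREATE_PENDING", "ACTIVE", "ERROR"

class FakeBackend:
    """Stand-in for a vendor controller."""
    def __init__(self):
        self.ids = set()
    def create(self, rid):
        self.ids.add(rid)
    def exists(self, rid):
        return rid in self.ids

class Store:
    """Server-side records, written in a pending state before the push."""
    def __init__(self):
        self.records = {}  # id -> state

    def create(self, rid, backend):
        self.records[rid] = CREATE_PENDING  # persist the intent first
        try:
            backend.create(rid)
            self.records[rid] = ACTIVE
        except Exception:
            self.records[rid] = ERROR       # warn the operator

    def reconcile(self, backend):
        # After a restart, resolve anything stuck in CREATE_PENDING by
        # asking the backend whether the object actually exists.
        for rid, state in list(self.records.items()):
            if state == CREATE_PENDING:
                self.records[rid] = ACTIVE if backend.exists(rid) else ERROR
```

UPDATE and DELETE would follow the same pattern with their own pending states; the reconcile step is what makes a crash between the local write and the backend push recoverable.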

Regarding implementation, I think a common framework is best. Plugins or
even drivers would just provide methods for the backend to load data,
update state, etc.

As I mentioned earlier, this is just a rough and superficial idea. Any
comment please.

Thanks,
Germy



On Tue, Oct 13, 2015 at 3:28 AM, Kevin Benton  wrote:

> >*But there is no such a feature in Neutron. Right? Will the community
> merge it soon? And can we consider it with agent-style mechanism together?*
>
> The agents have their own mechanisms for getting information from the
> server. The community has no plans to merge a feature that is going to be
> different for almost every vendor.
>
> We tried to come up with some common syncing stuff in the recent ML2
> meeting, the various backends had different methods of detecting when they
> were out of sync with Neutron (e.g. headers in hashes, recording errors,
> etc), all of which depended on the capabilities of the backend. Then the
> sync method itself was different between backends (sending deltas, sending
> entire state, sending a replay log, etc).
>
> About the only thing they have in common is that they need a way detect if
> they are out of sync and they need a method to sync. So that's two abstract
> methods, and we likely can't even agree on when they should be called.
>
> Echoing Salvatore's comments, what is it that you want to see?
>
> On Mon, Oct 12, 2015 at 12:29 AM, Germy Lure  wrote:
>
>> Hi Kevin,
>>
>> *Thank you for your response. Periodic data checking is a popular and
>> effective method to sync info. But there is no such feature in Neutron,
>> right? Will the community merge it soon? And can we consider it together
>> with the agent-style mechanism?*
>>
>> A vendor-specific extension, or each vendor coding a private periodic
>> task, is not a good solution, I think, because it means that
>> Neutron-Server could not integrate with multiple vendors' controllers,
>> and even the controllers of those vendors that introduced this extension
>> or task could not integrate with a standard community Neutron-Server.
>> That is just the tip of the iceberg. Many other problems result, such as
>> fixing bugs, upgrades, patches, etc.
>> But wait, is it a vendor-specific feature? Of course not. All software
>> systems need data checking.
>>
>> Many thanks.
>> Germy
>>
>>
>> On Sun, Oct 11, 2015 at 4:28 PM, Kevin Benton  wrote:
>>
>>> You can have a periodic task that asks your backend if it needs sync
>>> info.
>>> Another option is to define a vendor-specific extension that makes it
>>> easy to retrieve all info in one call via the HTTP API.
>>>
>>> On Sat, Oct 10, 2015 at 2:24 AM, Germy Lure 
>>> wrote:
>>>
 Hi all,

 After restarting, Agents load data from Neutron via RPC. What about
 3-rd controller? They only can re-gather data via NBI. Right?

 Is it possible to provide some mechanism for those controllers and
 agents to sync data? or something else I missed?

 Thanks
 Germy





>>>
>>>
>>> --
>>> Kevin 

Re: [openstack-dev] [neutron-LBaaS] Can anyone help to share a procedure to install lbaas on Kilo?

2015-10-14 Thread Banashankar KV
If that doesn't work, try cloning the kilo lbaasv2 from github and doing
python setup.py install, then make all the config changes in neutron.conf,
neutron_lbaas.conf, and lbaas_agent.ini, and start the lbaas agent.

Don't forget to install haproxy.

To know what config changes need to be made, I suggest installing devstack
on a VM with lbaasv2 enabled and replicating the config from the devstack
setup to your setup.

This link might be of a little help:
https://github.com/openstack-packages/neutron-lbaas/tree/rpm-kilo

If you find a better way to do this, please do let me know :) .


Thanks
Banashankar


On Tue, Oct 13, 2015 at 9:39 AM, Kobi Samoray  wrote:

> Ah, in that case try this:
> https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun
>
> On Oct 13, 2015, at 15:05, WANG, Ming Hao (Tony T) <
> tony.a.w...@alcatel-lucent.com> wrote:
>
> Kobi,
>
> Thanks for your info very much!
> Is there any document for installing lbaas manually?
> My environment was installed manually rather than by devstack or packstack.
>
> Thanks,
> Tony
>
> *From:* Kobi Samoray [mailto:ksamo...@vmware.com ]
> *Sent:* Tuesday, October 13, 2015 7:55 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron-LBaaS] Can anyone help to share a
> procedure to install lbaas on Kilo?
>
> Hi Tony,
> Try the following:
>
> https://chapter60.wordpress.com/2015/02/20/installing-openstack-lbaas-version-2-on-kilo-using-devstack/
> 
>
>
>
> On Oct 13, 2015, at 14:11, WANG, Ming Hao (Tony T) <
> tony.a.w...@alcatel-lucent.com> wrote:
>
> I installed an OpenStack environment manually, and I can’t use devstack or
> packstack to install neutron lbaas.
> Can anyone help to share a procedure to install lbaas on Kilo? I can’t
> find one via Google. :(
>
> Thanks in advance,
> Tony
>
>
>
>
>
>
>
>


[openstack-dev] [ironic] The scenario for rolling upgrades in Ironic

2015-10-14 Thread Tan, Lin
Hi guys,

I am looking at https://bugs.launchpad.net/ironic/+bug/1502903, which is 
related to rolling upgrades, and here is Jim's patch: 
https://review.openstack.org/#/c/234450
I have a concern, or question, about how Ironic should do rolling upgrades. 
I might be mistaken, but I would like to discuss it here and get some 
feedback.

I manually did a rolling upgrade for a private OpenStack cloud before. There 
are three main tasks in an upgrade:
1. upgrading the service code.
2. changing configuration.
3. upgrading the DB schema, which is the most difficult and time-consuming 
part.

Current rolling (live) upgrade solutions depend heavily on upgrading the 
different services in place, one by one, while keeping new service A able to 
communicate with old service B.
The ideal case is that after we upgrade one of the services, the others 
still work without breaking.
This can be done using versioned objects and RPC versions. For example, a 
new nova-api and a new nova-conductor can talk to an old nova-compute.
For the Nova services, it was suggested to follow the steps below:
1. expand DB schema
2. pin RPC versions and object version at current
3. upgrade all nova-conductor servers because it will talk with DB
4. upgrade all nova services on controller nodes like nova-api
5. upgrade all nova-compute nodes
6. unpin RPC versions
7. shrink DB schema.
This is perfect for Nova, because it has many nova-compute nodes and few 
nova-conductor and nova-api nodes. It's not necessary to upgrade all 
nova-compute services at once, which would be time-consuming.
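The pin/unpin steps above can be sketched as follows; the class, method names, and version numbers are illustrative, not actual Nova or Ironic RPC code (the real mechanism lives in oslo.messaging's version_cap and versioned objects):

```python
# Minimal sketch of RPC version pinning during a rolling upgrade: the
# caller caps outgoing messages at the oldest version still deployed,
# so new services can keep talking to old ones.  All names and version
# numbers here are hypothetical.

class RPCClient:
    SUPPORTED = "1.5"   # highest version this (new) code can emit

    def __init__(self, pin=None):
        # pin="1.3" while old services still run (step 2); unpin later.
        self.version_cap = pin or self.SUPPORTED

    def prepare_call(self, method, **kwargs):
        # Emit at min(supported, cap) so old peers can decode the message.
        cap = tuple(map(int, self.version_cap.split(".")))
        supported = tuple(map(int, self.SUPPORTED.split(".")))
        version = min(supported, cap)
        return {"method": method,
                "version": "%d.%d" % version,
                "args": kwargs}

# Step 2 of the upgrade: pin at the old version.
assert RPCClient(pin="1.3").prepare_call("do_sync")["version"] == "1.3"
# Step 6: unpin once everything runs the new code.
assert RPCClient().prepare_call("do_sync")["version"] == "1.5"
```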

For Ironic, we only have ir-conductor and ir-api. So the question is: should 
we upgrade ir-conductor first, or ir-api?
In my opinion, the ideal case is that old and new ir-conductors can coexist, 
which means we should upgrade ir-api to the latest first. But that's 
impossible at the moment, because ir-conductor talks to the DB directly and 
we only have one DB schema. That's a large difference between Ironic and 
Nova: we are missing a layer like nova-conductor.
The second case is to upgrade the ir-conductors first. That means that if we 
upgrade the DB schema, we have to upgrade all ir-conductors at once, and 
during the upgrade we cannot provide the Ironic service at all.

So I would suggest stopping all Ironic services, upgrading ir-api first, and 
then upgrading the ir-conductors one by one, only enabling each ir-conductor 
once it has been upgraded. Or upgrade ir-api and all ir-conductors at once, 
although that sounds a little stupid.

What do you guys think?


Best Regards,

Tan




Re: [openstack-dev] Scheduler proposal

2015-10-14 Thread Dulko, Michal
On Tue, 2015-10-13 at 08:47 -0700, Joshua Harlow wrote:
> Well great!
> 
> When is that going to be accessible :-P
> 
> Dulko, Michal wrote:
> > On Mon, 2015-10-12 at 10:58 -0700, Joshua Harlow wrote:
> >> Just a related thought/question. It really seems we (as a community)
> >> need some kind of scale testing ground. Internally at yahoo we were/are
> >> going to use a 200 hypervisor cluster for some of this and then expand
> >> that into 200 * X by using nested virtualization and/or fake drivers and
> >> such. But this is a 'lab' that not everyone can have, and therefore
> >> isn't suited toward community work IMHO. Has there been any thought on
> >> such a 'lab' that is directly in the community, perhaps trystack.org can
> >> be this? (users get free VMs, but then we can tell them this area is a
> >> lab, so don't expect things to always work, free isn't free after all...)
> >>
> >> With such a lab, there could be these kinds of experiments, graphs,
> >> tweaks and such...
> >
> > https://www.mirantis.com/blog/intel-rackspace-want-cloud/
> >
> > "The plan is to build out an OpenStack developer cloud that consists of
> > two 1,000 node clusters available for use by anyone in the OpenStack
> > community for scaling, performance, and code testing. Rackspace plans to
> > have the cloud available within the next six months."
> >
> > Stuff you've described is actually being worked on for a few months. :)

Judging from 6-month ETA and the fact that the work started in August it
seems that the answer is - beginning of 2016.


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-10-14 Thread Nikola Đipanov
On 10/14/2015 04:29 AM, Tang Chen wrote:
>>
>> On Wed, Oct 14, 2015 at 10:05 AM, Tang Chen wrote:
>>
>> Hi, all,
>>
>> Please help to review this BP.
>>
>> https://blueprints.launchpad.net/nova/+spec/live-migration-state-machine
>>
>>
>> Currently, the migration_status field in the Migration object indicates
>> the status of the migration process. But in the current code, it is
>> represented as a plain string, like 'migrating', 'finished', and so on.
>>
>> The strings can be confusing to different developers; e.g. there are 3
>> statuses representing a migration that finished successfully
>> ('finished', 'completed' and 'done')
>> and 2 for a migration in progress ('running' and 'migrating').
>>
>> So I think we should use constants or an enum for these statuses.
>>
>>
>> Furthermore, Nikola has proposed to create a state machine for the
>> statuses,
>> which is part of another abandoned BP. And this is also the work
>> I'd like to go
>> on with. Please refer to:
>> https://review.openstack.org/#/c/197668/
>> https://review.openstack.org/#/c/197669/
>>

This is IMHO a worthwhile effort on its own. I'd like to see it use a
defined state machine in addition to a simple enum, so that transitions
are clearly defined as well.
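A minimal sketch of what the enum-plus-state-machine could look like; the state names and the allowed transitions here are illustrative, distilled from the strings mentioned in the thread, not the actual Nova design:

```python
# Sketch of the proposal: replace free-form status strings with a
# fixed set of constants plus an explicit table of legal transitions.
# States and edges are hypothetical examples, not Nova code.

QUEUED, MIGRATING, COMPLETED, ERROR = "queued", "migrating", "completed", "error"

ALLOWED = {
    QUEUED:    {MIGRATING, ERROR},
    MIGRATING: {COMPLETED, ERROR},
    COMPLETED: set(),   # terminal
    ERROR:     set(),   # terminal
}

def transition(current, new):
    """Return the new state, or raise if the edge is not defined."""
    if new not in ALLOWED[current]:
        raise ValueError("invalid transition %s -> %s" % (current, new))
    return new

state = transition(QUEUED, MIGRATING)
state = transition(state, COMPLETED)   # fine
# transition(COMPLETED, MIGRATING) would raise ValueError
```

With a table like this, the three synonyms for "finished" collapse into one state, and any code path that tries an undefined edge fails loudly instead of silently writing an unexpected string.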

>>
>> Another proposal is to introduce a new member named "state" into
>> Migration: use a state machine to handle Migration.state, and leave the
>> migration_status field as a descriptive, human-readable free-form string.
>>

This is a separate effort IMHO - we should do both if possible.

>
> On 10/14/2015 11:14 AM, Zhenyu Zheng wrote:
>> I think it will be better if you can submit a spec for your proposal,
>> it will be easier for people to give comment.
>
> OK, will submit one soon.

If you plan to just enumerate the possible states, that should not require
a spec. Adding an automaton into the mix, and especially adding a new
'state' field, probably does deserve some discussion, so in that case feel
free to write up a spec.

N.




Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-14 Thread Adrien Cunin
Le 09/10/2015 11:42, Thierry Carrez a écrit :
> Hello everyone,
> 
> OpenStack has become quite big, and it's easier than ever to feel lost,
> to feel like nothing is really happening. It's more difficult than ever
> to feel part of a single community, and to celebrate little successes
> and progress.
> 
> In a (small) effort to help with that, I suggested making it easier to
> record little moments of joy and small success bits. Those are usually
> not worth the effort of a blog post or a new mailing-list thread, but
> they show that our community makes progress *every day*.
> 
> So whenever you feel like you made progress, or had a little success in
> your OpenStack adventures, or have some joyful moment to share, just
> throw the following message on your local IRC channel:
> 
> #success [Your message here]
> 
> The openstackstatus bot will take that and record it on this wiki page:
> 
> https://wiki.openstack.org/wiki/Successes
> 
> We'll add a few of those every week to the weekly newsletter (as part of
> the developer digest that we recently added there).
> 
> Caveats: Obviously that only works on channels where openstackstatus is
> present (the official OpenStack IRC channels), and we may remove entries
> that are off-topic or spam.
> 
> So... please use #success liberally and record little everyday OpenStack
> successes. Share the joy and make the OpenStack community a happy place.
> 

Nice! I've got a question though: is there another way than the wiki
page to consume those successes?

I was thinking of an RSS/Atom feed, which would make it possible to
easily publish them elsewhere (website, social media, etc.).

Adrien





Re: [openstack-dev] [Neutron] Revision of the RFE/blueprint submission process

2015-10-14 Thread Flavio Percoco

On 13/10/15 18:38 -0700, Armando M. wrote:

Hi neutrinos,

Since last cycle, the team has introduced the concept of RFE bugs [0]. I
have suggested a number of refinements over the past few days [1,2,3] to
streamline/clarify the process a bit further, also in an attempt to deal
with the focus and breadth of the project [4,5].

Having said that, I would invite you not to take anything for granted; use
the ML and/or join us on the #openstack-neutron-release irc channel to tell
us what works and what doesn't about the processes we have in place.

There's no improvement without feedback!


Thanks for sharing.

In Glance, we're about to jump into a very similar workflow and I'm
glad to know it's worked for the neutron team so far.

I'll be posting this on the mailing list soon and we can share
feedback and experiences as they happen.

Cheers,
Flavio



Cheers,
Armando

[0] http://docs.openstack.org/developer/neutron/policies/blueprints.html#
neutron-request-for-feature-enhancements
[1] https://review.openstack.org/#/c/231819/
[2] https://review.openstack.org/#/c/234491/
[3] https://review.openstack.org/#/c/234502/
[4] http://git.openstack.org/cgit/openstack/election/tree/candidates/mitaka/
Neutron/Armando_Migliaccio.txt#n29
[5] http://git.openstack.org/cgit/openstack/election/tree/candidates/mitaka/
Neutron/Armando_Migliaccio.txt#n38
[6] http://docs.openstack.org/developer/neutron/policies/office-hours.html#
neutron-ptl-office-hours






--
@flaper87
Flavio Percoco




Re: [openstack-dev] [all][stable][release] 2015.1.2

2015-10-14 Thread Thierry Carrez
Sean Dague wrote:
> I think that whoever sets the tag should also push those fixes. We had
> some kilo content bogging down the gate today because of this kind of
> failure. Better to time this as close as possible with the tag setting.

Right. The ideal process is to push the version bump and cut the tag
from the commit just before that. That is how it's done on the main
release when we start cutting the release branch at RC1.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-10-14 Thread Tang Chen


On 10/14/2015 04:53 PM, Nikola Đipanov wrote:

On 10/14/2015 04:29 AM, Tang Chen wrote:

On Wed, Oct 14, 2015 at 10:05 AM, Tang Chen wrote:

 Hi, all,

 Please help to review this BP.

 https://blueprints.launchpad.net/nova/+spec/live-migration-state-machine


 Currently, the migration_status field in the Migration object indicates
 the status of the migration process. But in the current code, it is
 represented as a plain string, like 'migrating', 'finished', and so on.

 The strings can be confusing to different developers; e.g. there are 3
 statuses representing a migration that finished successfully
 ('finished', 'completed' and 'done')
 and 2 for a migration in progress ('running' and 'migrating').

 So I think we should use constants or an enum for these statuses.


 Furthermore, Nikola has proposed to create a state machine for the
 statuses,
 which is part of another abandoned BP. And this is also the work
 I'd like to go
 on with. Please refer to:
 https://review.openstack.org/#/c/197668/
 https://review.openstack.org/#/c/197669/


This is IMHO a worthwhile effort on its own. I'd like to see it use a
defined state machine in addition to being a simple enum so that
transitions are clearly defined as well.
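
As a rough illustration of this idea (hypothetical code, not actual Nova code: the status names come from the examples in this thread, but the transitions are invented for the sketch), an enum plus a table of allowed transitions could look like:

```python
# Sketch of replacing free-form migration status strings with an enum
# plus a defined state machine. Illustrative only; the real status set
# and transitions would come from the Nova spec under discussion.
import enum


class MigrationStatus(enum.Enum):
    QUEUED = 'queued'        # assumed initial state, for illustration
    MIGRATING = 'migrating'  # replaces both 'running' and 'migrating'
    COMPLETED = 'completed'  # replaces 'finished'/'completed'/'done'
    ERROR = 'error'


# Each state maps to the set of states it is allowed to move to.
_TRANSITIONS = {
    MigrationStatus.QUEUED: {MigrationStatus.MIGRATING, MigrationStatus.ERROR},
    MigrationStatus.MIGRATING: {MigrationStatus.COMPLETED,
                                MigrationStatus.ERROR},
    MigrationStatus.COMPLETED: set(),
    MigrationStatus.ERROR: set(),
}


def advance(current, new):
    """Return the new status, or raise if the transition is undefined."""
    if new not in _TRANSITIONS[current]:
        raise ValueError('invalid transition: %s -> %s'
                         % (current.value, new.value))
    return new
```

With this in place, a typo'd or undefined status string becomes an immediate error instead of a silently divergent state.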


Agreed.


 Another proposal is: introduce a new member named "state" into
 Migration.
 Use a state machine to handle this Migration.state, and leave the
 migration_status
 field as a descriptive, human-readable free-form string.


This is a separate effort IMHO - we should do both if possible.


Yes, I do agree.

And I think migrate_status and migrate_state fields could share the
same state machine if we do both.




On 10/14/2015 11:14 AM, Zhenyu Zheng wrote:

I think it will be better if you can submit a spec for your proposal;
it will be easier for people to comment.

OK, will submit one soon.

If you plan to just enumerate the possible states - that should not
require a spec. Adding automaton in the mix, and especially adding a new
'state' field probably does deserve some discussion so in that case feel
free to write up a spec.


No, I planned to introduce a state machine for the migration_status field first.
And this needs a spec, I think.

And then we will go on discussing the new state field things.

Thanks.



N.








Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-14 Thread Sean Dague
On 10/13/2015 08:08 PM, Zaro wrote:
> Hello All,
> 
> The openstack-infra team would like to upgrade from our Gerrit 2.8 to
> Gerrit 2.11.  We are proposing to do the upgrade shortly after the
> Mitaka summit.  The main motivation behind the upgrade is to allow us
> to take advantage of some of the new REST api, ssh commands, and
> stream events features.  Also we wanted to stay closer to upstream so
> it will be easier to pick up more recent features and fixes.
> 
> We want to let everyone know that there is a big UI change in Gerrit
> 2.11.  The change screen (CS), which is the main view for a patchset,
> has been completely replaced with a new change screen (CS2).  While
> Gerrit 2.8 contains both old CS and CS2, I believe everyone in
> Openstack land is really just using the old CS.  CS2 really wasn't
> ready in 2.8 and really should never be used in that version.  The CS2
> has come a long way since then and many other big projects have moved
> to using Gerrit 2.11 so it's not a concern any longer.  If you would
> like a preview of Gerrit 2.11 and maybe help us test it, head over to
> http://review-dev.openstack.org.  If you are very opposed to CS2 then
> you may like Gertty (https://pypi.python.org/pypi/gertty) instead.  If
> neither option works for you then maybe you can help us create a new
> alternative :)
> 
> We are soliciting feedback so please let us know what you think.
> 
> Thank you.

+1 very excited to get new gerrit out there.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-14 Thread Flavio Percoco

On 13/10/15 17:08 -0700, Zaro wrote:

Hello All,

The openstack-infra team would like to upgrade from our Gerrit 2.8 to
Gerrit 2.11.  We are proposing to do the upgrade shortly after the
Mitaka summit.  The main motivation behind the upgrade is to allow us
to take advantage of some of the new REST api, ssh commands, and
stream events features.  Also we wanted to stay closer to upstream so
it will be easier to pick up more recent features and fixes.

We want to let everyone know that there is a big UI change in Gerrit
2.11.  The change screen (CS), which is the main view for a patchset,
has been completely replaced with a new change screen (CS2).  While
Gerrit 2.8 contains both old CS and CS2, I believe everyone in
Openstack land is really just using the old CS.  CS2 really wasn't
ready in 2.8 and really should never be used in that version.  The CS2
has come a long way since then and many other big projects have moved
to using Gerrit 2.11 so it's not a concern any longer.  If you would
like a preview of Gerrit 2.11 and maybe help us test it, head over to
http://review-dev.openstack.org.  If you are very opposed to CS2 then
you may like Gertty (https://pypi.python.org/pypi/gertty) instead.  If
neither option works for you then maybe you can help us create a new
alternative :)

We are soliciting feedback so please let us know what you think.


+1, excited to see our gerrit moving forward!

Flavio

P.S: /me has been using CS2 for a while now.

--
@flaper87
Flavio Percoco




[openstack-dev] [fuel][library] component lead candidacy

2015-10-14 Thread Sergii Golovatiuk
Hey crew,

I'd like to announce my candidacy in the Fuel component lead election for the
next cycle.

I would like to focus on the following priorities:

- Synchronisation with upstream modules. As a result, fuel-library will
contain only a few manifests of its own; all others, including the openstack
and puppet-community manifests, will be consumed from upstream, allowing
quick contribution upstream rather than maintaining our own forks.

- Implement integration testing. fuel-library suffers from a lack of
integration testing, which means the state of processes is not verified
after deployment. I am also going to spend time introducing code coverage
metrics to analyse whether coverage is improving.

- HA upstreaming: fuel-library has done a great deal of work here, such as
the OCF scripts for RabbitMQ/MySQL. It's time to contribute them upstream,
gaining more developers and feedback.

- Granular deployment improvements. Fuel still has some monolithic classes
such as openstack::*. Under my leadership we'll continue breaking them into
simple tasks that may be skipped or enhanced by plugins.

Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser


Re: [openstack-dev] [puppet] Tokyo Summit - dev + ops

2015-10-14 Thread Emilien Macchi
So we gathered all proposals and I worked on the agenda.

If you plan to attend Puppet OpenStack tracks, here is what you need to
know:

* general etherpad: https://etherpad.openstack.org/p/HND-puppet
* schedule
http://mitakadesignsummit.sched.org/type/Puppet%20OpenStack#.Vh6myZdm-Zc
* etherpads:
https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Puppet_OpenStack

If you brought a topic, please put your name in "leader". Also please
come up with some PoC or some thoughts that we can discuss.

If there are any schedule issues, please let me know.
If you proposed a track but can't attend the Summit, also please let me
know; we will make sure the topic gets discussed.

Thank you for your time,

On 09/21/2015 03:59 PM, Emilien Macchi wrote:
> Hello,
> 
> The summit is in a few weeks, and we are still defining our agenda [1].
> Some topics have already been written, but it would be good to get more
> topic, we have some room resources allocated for that purpose.
> Both devs & ops, feel free to create your topic by defining a
> description, owner (you if you can), and approximate needed time.
> 
> Thanks for your help,
> 
> [1] https://etherpad.openstack.org/p/HND-puppet
> 
> 
> 
> 

-- 
Emilien Macchi





Re: [openstack-dev] [nova] novaclient functional test failures

2015-10-14 Thread Matt Riedemann



On 10/12/2015 5:12 PM, melanie witt wrote:

On Oct 12, 2015, at 12:14, Kevin L. Mitchell  
wrote:


http://logs.openstack.org/77/232677/1/check/gate-novaclient-dsvm-functional/2de31bc/
(For review https://review.openstack.org/232677)

http://logs.openstack.org/99/232899/1/check/gate-novaclient-dsvm-functional/6d6dd1d/
(For review https://review.openstack.org/232899)

The first review does some stuff with functional tests, but the second
is a simple global-requirements update, and both have the same failure
signature.


I noticed in the trace, novaclient is calling a function for keystone v1 auth 
[1][2]. It had been calling v2 auth in the past and I think this commit [3] in 
devstack that writes the clouds.yaml specifying v3 as the identity API version 
is probably responsible for the change in behavior. It used to use the 
$IDENTITY_API_VERSION variable. The patch merged on Oct 7 and in logstash I 
find the failures start on Oct 8 [4]

Novaclient looks for "v2.0" in the auth url and creates a request based on that. If it 
doesn't find "v2.0", it falls back to generating a v1 request. And it doesn't yet have a 
function for generating a v3 request.
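
A minimal sketch of the fallback behaviour described above (illustrative only — the real logic lives in novaclient/client.py, linked in [2]; the function name here is hypothetical):

```python
# Hedged illustration of novaclient's auth-url version detection: the
# client only recognizes the 'v2.0' marker in the URL and otherwise
# falls back to its legacy v1 request path, since no v3 request builder
# exists yet. Not the actual novaclient implementation.
def pick_auth_version(auth_url):
    """Return the keystone auth flavour the client would use."""
    if 'v2.0' in auth_url:
        return 'v2.0'
    # A v3-style URL (e.g. from the new devstack clouds.yaml) has no
    # 'v2.0' marker, so the client silently falls back to v1 auth,
    # which then fails against a v3-only endpoint.
    return 'v1'
```

This shows why switching devstack's identity API version to v3 flipped the behaviour: the URL substring check no longer matches.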

-melanie (irc: melwitt)

[1] 
http://logs.openstack.org/99/232899/1/check/gate-novaclient-dsvm-functional/6d6dd1d/console.html.gz#_2015-10-09_16_01_24_088
[2] 
https://github.com/openstack/python-novaclient/blob/147a1a6ee421f9a45a562f013e233d29d43258e4/novaclient/client.py#L601-L622
[3] https://review.openstack.org/#/c/220846/
[4] http://goo.gl/5GxiiF






Well we at least have a bug to track against now:

https://bugs.launchpad.net/python-novaclient/+bug/1506103

--

Thanks,

Matt Riedemann




[openstack-dev] [Fuel] Fuel Library Component Lead Nomination

2015-10-14 Thread Vladimir Kuklin
Fuel Librarians

I would like to nominate myself as a Candidate for Fuel Library Component
Lead position

Let me mention the list of main points I would like to work on. This list
is going to essentially resemble my PTL candidacy item list but it will
also differ from it significantly.

* General Standards of Decision Making

This means that each architectural design decision must be based on
sufficient data and on analysis performed by subject-matter experts and by
cold, heartless machines that run tests and collect metrics on existing and
proposed solutions to find the best one. Each decision on a new library or
tool must be accompanied by a clear report showing that the change actually
makes a difference. I will start working immediately on a unified toolchain
and methodology for making such decisions.

* HA Polishing

This one has always been one of the strongest parts of Fuel and we are
using our own unique and innovative ways of deploying HA architecture.
Nevertheless, we still have many things to work on to make it perfect:

1) Fencing and power management facilities
2) Node health monitoring
3) RabbitMQ clusterer plugin

* Reference Architecture Changes

It seems pretty obvious that our reference architecture requires some
simplification and polishing. We have several key-value storages, we are
using several HTTP servers/proxies and so on.
This makes us support lots of stuff instead of concentrating on a 'golden'
set of tools and utilities and making them work perfectly.
I want to spend significant time on figuring out possible solutions for
redefining of our architecture based on aforementioned principles.

* Quality and Deployment Velocity

Although we are among the few projects that run a very extensive set of
tests, including for each commit into Fuel Library, we can still improve
many things in QA and CI.
These are:

1) deployment integration tests for all the components, e.g. run deployment
for each deployment-related component like nailgun, nailgun-agent,
fuel-agent, fuel-astute and others
2) significantly increase test coverage: e.g. cover each deployment task
with noop test, increase unit test coverage for all the Fuel-only modules
3) work on the ways of automated generation of test plans for each
particular feature and bugfix including regression, functional, scale,
stress and other types of tests
4) work on the way of introducing more granular tests for fuel deployment
components - we have an awesome framework to get faster feedback, written
by fuel QA team, but we have not yet integrated it into our CI

* Flexibility and Scalability

We have a lot of work to do on orchestration and capabilities of our
deployment engine.

We need to introduce a hierarchy for deployment attributes: for the whole
cluster, for racks, for nodes, and for the tasks themselves.

We also need to work on provisioning scalability, such as moving towards
iPXE and peer-to-peer base image distribution.

* Documentation

It seems we have lots of good tools, like our devops toolkit, noop tests and
other things, but this information is not available to external contributors
and plugin developers. Even Fuel engineers do not know everything about how
these tools work. I am going to set up several webinars and write a dozen
articles on how to develop and test Fuel Library.

* Community

And last but not least: community collaboration. We are doing a great job
gluing together existing deployment libraries and testing their
integration. Community projects like puppet-openstack are also doing great
work. Instead of duplicating it, I would like to merge with upstream code
as much as possible, freeing our resources for the things we do best, while
sharing those things with the community by testing the results of their
work against our reference architecture and installations.

Additionally, it is worth mentioning that I see significant value in making
Fuel able to install current OpenStack trunk code, allowing us to set up
Fuel CI for each piece of OpenStack and letting OpenStack developers test
their code against a multi-host, HA-enabled reference architecture in an
easy and automated way.

Thank you all for your time and consideration!

-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-14 Thread Thomas Goirand
On 10/13/2015 09:41 AM, Cory Benfield wrote:
> 
>> On 13 Oct 2015, at 07:42, Thomas Goirand  wrote:
>> In this particular case (ie: a difficult upstream which makes it
>> impossible to have the same result with pip and system packages)
> 
> I don’t know how carefully you’ve followed this email trail

I did read carefully.

> but the “difficult upstream”

I do understand that you don't like being called that, though it is still
the reality. Vendorizing is still inflicting major pain on a lot of your
users:
- This thread is one demonstration of it.
- Your having to contact downstream distros is another.
- The unbundling work inflicted on downstream package maintainers is a
third.

So like it or not, it is a fact that it is difficult to work with
requests because of the way it is released upstream.

> has had a policy in place for six months
> that ensures that you can have the same result with pip and
> system packages. For six months we have only updated to versions
> of urllib3 that are actually released, and therefore that are
> definitely available from pip (and potentially available from
> the distribution).
> 
> The reason this has not been working is because the distributions,
> when they unbundle us, have not been populating their setup.py to
> reflect the dependency: only their own metadata. We’ve been in
> contact with them, and this change is being made in the
> distributions we have relationships with.

Though you could have avoided all of this pain if you were not bundling.
Doesn't all of this make you rethink your vendorizing policy? Or still
not? I'm asking because I still haven't read your answer to the important
question: since you aren't using specially crafted versions of urllib3
anymore, and are now only using official releases, what is the reason that
keeps you vendorizing? Not trying to convince you here, just trying to
understand.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [Neutron][dns]What the meaning of"dns_assignment" and "dns_name"?

2015-10-14 Thread Miguel Lavalle
Zhi Chang,

You got all the steps correct. A few clarifications:


   1. Address 104.130.78.191 is the ip address of my devstack VM. When you
   deploy Designate in devstack, it starts an instance of PowerDNS for you.
   Designate then pushes all its zones and records to that PowerDNS instance.
   When I say "dig my-instance.my-example.org @104.130.78.191" I am
   instructing dig to direct the lookup to the DNS server @ 104.130.78.191:
   in other words, my PowerDNS instance
   2. For you to be able to execute the same steps in your devstack, you
   need:
  - The code in patchset https://review.openstack.org/#/c/212213/
  - The modified nova code in nova/network/neutronv2/api.py that I
  haven't pushed to Gerrit yet
  - Configure a few parameters in /etc/neutron/neutron.conf
  - Migrate the Neutron database, because I added columns to a couple
  of tables

Let me know if you want to try this in your devstack. If the answer is yes,
I will let you know when I push the nova change to gerrit. At that point, I
will provide detailed steps to accomplish point 2 above

Best regards


Miguel


On Wed, Oct 14, 2015 at 12:53 AM, Zhi Chang 
wrote:

> Hi, Miguel
> Thank you so much for your reply. You are so patient!
> After reading your reply, I still have some questions to ask you. :-)
> Below, is my opinion about the http://paste.openstack.org/show/476210/,
> please read it and tell me whether I was right.
> (1). Define a DNS domain
> (2). Update a network's "dns_domain" attribute to the DNS domain which
> defined in the step1
> (3). Create a VM in this network. The instance's port will assign
> instance's hostname to it's dns_name attribute
> (4). Create a floating IP for this VM
> (5). In Designate, a new A record will be generated. This record
> links the floating IP to dns_name + domain_name. Just like your
> record: deec921d-b630-4479-8932-c5ec7c530820 | A |
> my-instance.my-example.org. | 172.24.4.3
>   (6). I don't understand where the IP address "104.130.78.191" comes
> from. I think this address is a public DNS server, just like 8.8.8.8. Is
> that right?
>    (7). I can dig "my-instance.my-example.org." via a public DNS server,
> and the result is the floating IP.
>
> Is my understanding right?
>
> Hope For Your Reply.
> Thanks
> Zhi Chang
>
> -- Original --
> *From: * "Miguel Lavalle";
> *Date: * Wed, Oct 14, 2015 11:22 AM
> *To: * "OpenStack Development Mailing List (not for usage questions)"<
> openstack-dev@lists.openstack.org>;
> *Subject: * Re: [openstack-dev] [Neutron][dns]What the meaning
> of"dns_assignment" and "dns_name"?
>
> Zhi Chang,
>
> Thank you for your questions. We are in the process of integrating Neutron
> and Nova with an external DNS service, using Designate as the reference
> implementation. This integration is being achieved in 3 steps. What you are
> seeing is the result of only the first one. These steps are:
>
> 1) Internal DNS integration in Neutron, which merged recently:
> https://review.openstack.org/#/c/200952/. As you may know, Neutron has an
> internal DHCP / DNS service based on dnsmasq for each virtual network that
> you create. Previously, whenever you created a port on a given network,
> your port would get a default host name in dnsmasq of the form
> 'host-xx-xx-xx-xx.openstacklocal.", where xx-xx-xx-xx came from the port's
> fixed ip address "xx.xx.xx.xx" and "openstacklocal" is the default domain
> used by Neutron. This name was generated by the dhcp agent. In the above
> mentioned patchset, we are moving the generation of these dns names to the
> Neutron server, with the intent of allowing the user to specify it. In
> order to do that, you need to enable it by defining in neutron.conf the
> 'dns_domain' parameter with a value different to the default
> 'openstacklocal'. Once you do that, you can create or update a port and
> assign a value to its 'dns_name' attribute. Why is this useful? Please read
> on.
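
The default naming scheme described above amounts to a simple string transformation, which could be sketched as (an illustration based on this description, not the actual Neutron code):

```python
# Sketch of the default dnsmasq hostname generation Miguel describes:
# a port with fixed IP xx.xx.xx.xx gets 'host-xx-xx-xx-xx.<dns_domain>',
# where the domain defaults to 'openstacklocal'. Illustrative only.
def default_dns_name(fixed_ip, dns_domain='openstacklocal'):
    """Build the default per-port DNS name from its fixed IP address."""
    return 'host-%s.%s' % (fixed_ip.replace('.', '-'), dns_domain)
```

With the patchset described here, a user-supplied dns_name on the port would replace the generated 'host-...' portion.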
>
> 2) External DNS integration in Neutron. The patchset is being worked now:
> https://review.openstack.org/#/c/212213/. The functionality implemented
> here allows Neutron to publish the dns_name associated with a floating ip
> under a domain in an external dns service. We are using Designate as the
> reference implementation, but the idea is that in the future other DNS
> services can be integrated. Where does the dns name and domain of the
> floating ip come from? It can come from 2 sources. Source number 1 is the
> floating ip itself, because in this patchset we are adding a dns_name and a
> dns_domain attributes to it. If the floating ip doesn't have a dns name and
> domain associated with it, then they can come from source number 2: the
> port that the floating ip is associated with (as explained in point 1,
> ports now can have a dns_name attribute) and the port's network, since this
> patchset adds dns_domain to 

Re: [openstack-dev] [magnum] Creating pods results in "EOF occurred in violation of protocol" exception

2015-10-14 Thread Hongbin Lu
Hi Bertrand,

Thanks for reporting the error. I confirmed that this error was consistently 
reproducible. A bug ticket was created for that.

https://bugs.launchpad.net/magnum/+bug/1506226

Best regards,
Hongbin

-Original Message-
From: Bertrand NOEL [mailto:bertrand.n...@cern.ch] 
Sent: October-14-15 8:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Creating pods results in "EOF occurred in 
violation of protocol" exception

Hi,
I tried Magnum, following the instructions on the quickstart page [1]. I
successfully created the baymodel and the bay. When I run the command to create
the redis pods (_magnum pod-create --manifest ./redis-master.yaml --bay
k8sbay_), it times out on the client side, and on the server side (m-cond.log)
I get the stack trace below. It also happens with other Kubernetes examples.
I am trying this on Ubuntu 14.04, with Magnum at commit
fc8f412c87ea0f9dc0fc1c24963013e6d6209f27.


2015-10-14 12:16:40.877 ERROR oslo_messaging.rpc.dispatcher
[req-960570cf-17b2-489f-9376-81890e2bf2d8 admin admin] Exception during message 
handling: [Errno 8] _ssl.c:510: EOF occurred in violation of protocol
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
line 142, in _dispatch_and_reply
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
executor_callback))
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
line 186, in _dispatch
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
executor_callback)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
line 129, in _do_dispatch
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher result = func(ctxt, 
**new_args)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/conductor/handlers/k8s_conductor.py", line 89, in 
pod_create
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
namespace='default')
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/apis/apiv_api.py",
line 3596, in create_namespaced_pod
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
callback=params.get('callback'))
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py",
line 320, in call_api
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher response_type, 
auth_settings, callback)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py",
line 148, in __call_api
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
post_params=post_params, body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py",
line 350, in request
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
265, in POST
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher return 
self.IMPL.POST(*n, **kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
187, in POST
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
133, in request
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher headers=headers)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 72, in request
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher **urlopen_kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 149, in 
request_encode_body
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher return 
self.urlopen(method, url, **extra_kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 161, in 
urlopen
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher response = 
conn.urlopen(method, u.request_uri, **kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 588, 
in urlopen
2015-10-14 12:16:40.877 

[openstack-dev] [fuel] Life cycle management use cases

2015-10-14 Thread Mike Scherbakov
Hi fuelers,
as we all know, Fuel lacks support for many life cycle management (LCM) use
cases. This has become a very hot issue for many of our users, as current LCM
capabilities are not very rich.

In order to think about how to fix this, we need to collect use cases first
and prioritize them if needed, so that for whatever change in architecture we
are about to make, we can ensure that we meet the LCM use cases or have a
proposal for closing the gaps in the foreseeable future.

I started to collect use cases in the etherpad:
https://etherpad.openstack.org/p/lcm-use-cases.

Please contribute in there.

Thank you,
-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [ironic] The scenary to rolling upgrade Ironic

2015-10-14 Thread Jim Rollenhagen
On Wed, Oct 14, 2015 at 08:44:08AM +, Tan, Lin wrote:
> Hi guys,
> 
> I am looking at https://bugs.launchpad.net/ironic/+bug/1502903 which
> is related to rolling upgrade and here is Jim's patch
> https://review.openstack.org/#/c/234450 I really have a concern or
> question about how to do Ironic doing rolling upgrades. It might be my
> mistake, but I would like to discuss here and get some feedback.
> 
> I manually did a rolling upgrade for a private OpenStack Cloud before.
> There are three main tasks in an upgrade: 1. upgrading the service
> code.  2. changing configuration.  3. upgrading the DB schema, which is
> the most difficult and time-consuming part.
> 
> The current rolling upgrade / live upgrade solutions depend highly
> on upgrading the different services in place one by one while making
> sure a new service A can still communicate with an old service B.  The
> ideal case is that after we upgrade one of the services, the others
> keep working without breaking.  This can be done using versioned
> objects and RPC versioning. For example, a new Nova-API and a new
> Nova-conductor can talk to an old Nova-compute.  In the case of the
> Nova services, it was suggested to follow the steps below:

> 1. expand DB schema
> 2. pin RPC versions and object version at current
> 3. upgrade all nova-conductor servers because it will talk with DB
> 4. upgrade all nova services on controller nodes like nova-api
> 5. upgrade all nova-compute nodes
> 6. unpin RPC versions
> 7. shrink DB schema.
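
The version pinning in steps 2 and 6 could look roughly like this (a hypothetical sketch with invented names, not Nova's or Ironic's actual classes):

```python
# Illustrative sketch of RPC version pinning during a rolling upgrade:
# while old and new services coexist, the client caps outgoing messages
# at the version the old side understands; unpinning removes the cap.
class PinnedRpcClient(object):
    """Caps outgoing RPC message versions during an upgrade window."""

    CURRENT_VERSION = '1.5'  # hypothetical latest RPC API version

    def __init__(self, version_cap=None):
        # During the upgrade the operator pins this via config (step 2);
        # unpinning (step 6) simply removes the cap.
        self.version_cap = version_cap or self.CURRENT_VERSION

    def effective_version(self):
        # Send at the lower of (what we support, what old peers accept),
        # comparing versions numerically rather than lexically.
        return min(self.CURRENT_VERSION, self.version_cap,
                   key=lambda v: tuple(int(p) for p in v.split('.')))
```

The same idea underlies the Ironic discussion below: as long as message versions are capped and the schema only expands, old and new conductors can coexist.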

> This is perfect for Nova. Because it has many
> nova-compute nodes, and few nova-conductor nodes and nova-api nodes.
> It's not necessary to upgrade nova-compute services at one time, which
> is time consuming.
> 
> For Ironic, we only have ir-conductor and ir-api. So the question is
> should we upgrade ir-conductor first or ir-api?  In my opinion, the
> ideal case is that we can have old ir-conductor and new ir-conductors
> coexist, which means we should upgrade ir-api to latest at first. But
> it's impossible at the moment, because ir-conductor will talk to DB
> directly and we only have one DB schema. That's a large difference
> between Ironic and Nova. We are missing a layer like nova-conductor.
> The second case is upgrade ir-conductors first. That means if we
> upgrade the DB Schema, we have to upgrade all ir-conductors at once.
> During the upgrade, we could not provide Ironic service at all.
> 
> So I would suggest to stop all Ironic service, and upgrade ir-api
> first, and then upgrade ir-conductor one by one. Only enable the
> ir-conductor which has done the upgrade. Or upgrade ir-api and
> ir-conductors at once, although it sounds stupid a little bit.

Hey Tan, thanks for bringing this up.

I've been thinking about this stuff a lot lately, and I'd like us to get
it working during the Mitaka cycle, so deployers can do a rolling
upgrade from Liberty to Mitaka.

Conductors will always need to talk to the database. APIs may not need
to talk to the database. I think we can just roll conductor
upgrades through, and then update ironic-api after that. This should
just work, as long as we're very careful about schema changes (this is
where the expand/contract thing comes into play). Different versions of
conductors are only a problem if the database schema is not compatible
with one of the versions.

We also need to remote the objects layer from the API service to the
conductor, so that the API service is no longer talking to the DB, and we
need to allow RPC version pinning.
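
To make the "remoting" idea concrete, here is a toy Python sketch (not the
real oslo.versionedobjects API; all class and method names here are invented
for illustration) of how an object can route its DB access through a
conductor whenever an indirection API is configured:

```python
class FakeConductorAPI:
    """Stands in for an RPC client pointing at ironic-conductor."""

    def __init__(self, db):
        self.db = db

    def object_action(self, action, *args):
        # In reality this would be an RPC call; the conductor is the
        # only process that actually touches the database.
        return getattr(self.db, action)(*args)


class Node:
    # Set on API-service processes so objects never open a DB connection.
    indirection_api = None

    def __init__(self, db=None):
        self.db = db

    def get_by_uuid(self, uuid):
        if self.indirection_api is not None:
            # API service: forward the call to a conductor over "RPC"
            return self.indirection_api.object_action("node_get", uuid)
        return self.db.node_get(uuid)  # conductor: direct DB access


class FakeDB:
    def node_get(self, uuid):
        return {"uuid": uuid, "driver": "agent_ipmitool"}


db = FakeDB()
conductor_side = Node(db)
print(conductor_side.get_by_uuid("abc")["uuid"])  # direct DB path

Node.indirection_api = FakeConductorAPI(db)  # done once at API startup
api_side = Node()                            # note: no DB handle at all
print(api_side.get_by_uuid("abc")["uuid"])   # remoted path
```

The point of the pattern is that the API process never needs DB credentials
or a compatible schema, only a compatible RPC interface.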

Beyond that, I think the Nova model should work fine for us. There's
some work to do in our objects layer, and then lots of documentation for
developers, reviewers, and deployers. I think it's totally reasonable to
complete this during Mitaka, though.

I opened this blueprint[0] yesterday to track this work. I'd like to get
the developer/reviewer docs done first, so we don't accidentally land
any changes that break assumptions here (for example, the bug you linked
before). Is this something you're willing to take the lead on?

// jim

[0] https://blueprints.launchpad.net/ironic/+spec/online-upgrade-support

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-14 Thread Thomas Goirand
On 10/13/2015 06:04 PM, Joshua Harlow wrote:
> Thomas Goirand wrote:
>> On 10/13/2015 12:44 AM, Joshua Harlow wrote:
>>> Anvil gets somewhat far on this, although its not supporting DEBs it
>>> does build its best attempt at RPMs building them automatically and
>>> turning git repos of projects into RPMs.
>>>
>>> http://anvil.readthedocs.org/en/latest/topics/summary.html (hopefully
>>> the existence of this is not news to folks).
>>>
>>> A log of this in action (very verbose) is at:
>>>
>>> http://logs.openstack.org/40/225240/4/check/gate-anvil-rpms-dsvm-devstack-centos7/0eea2a9/console.html
>>>
>>
>> Automation can only bring you so far. I also have automation which we
>> could use for debs (see the pkgos-debpypi script from the
>> openstack-pkg-tools package), however, there's always the need for
>> manual reviews. I don't believe it ever will be possible to do full
>> automation, as each Python package has specificities. Note that this is
>> mainly an issue with Python modules, if it was PHP pear packages, it
>> could be fully automated. So probably there's some PEP work that we could
>> start to ease this. If only everyone were using testr, pbr, defining
>> copyright correctly and providing a parseable long and short
>> description, it wouldn't be such an issue.
> 
> Agreed, there will always be that damn 1% (ok it's probably around 10%)
> of weird pypi packages that will require hand-tuning, the hope (and I
> think the reality) is that most actually don't require hand-tuning.

One major pain point is unfortunately something ridiculously easy to
fix, but which nobody seems to care about: the long & short descriptions
format. These are usually buried into the setup.py black magic, which by
the way I feel is very unsafe (does PyPi actually execute "python
setup.py" to find out about description texts? I hope they are running
this in a sandbox...).

Since everyone uses the fact that PyPi accepts RST format for the long
description, there's nothing that can really easily fit the
debian/control. Probably a rst2txt tool would help, but still, the long
description would still be polluted with things like changelog, examples
and such (damned, why people think it's the correct place to put that...).
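
The rst2txt shim Thomas mentions would not need to be fancy. A naive,
regex-based sketch (illustrative only — a real tool would use docutils, and
this handles only a few common constructs) of stripping RST markup down to
plain text suitable for a debian/control description field:

```python
import re


def rst_to_plain(text):
    """Strip common RST markup so a PyPI long_description could be
    dropped into a plain-text debian/control field (naive sketch)."""
    text = re.sub(r"`([^`<]+?)\s*<[^>]*>`_+", r"\1", text)  # hyperlinks
    text = re.sub(r"``([^`]+)``", r"\1", text)              # inline literals
    text = re.sub(r"(\*\*|\*)([^*]+)\1", r"\2", text)       # bold/emphasis
    text = re.sub(r"^\.\. .*$", "", text, flags=re.M)       # directives
    return text.strip()


print(rst_to_plain("A ``tool`` with *style*, see `docs <http://x>`_."))
# -> A tool with style, see docs.
```

Even with a converter like this, the "changelog and examples in the long
description" pollution Thomas describes would still need manual trimming.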

The only way I'd see to fix this situation, would be a PEP. This will
probably take a decade to have everyone switching to a new correct way
to write a long & short description...

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-14 Thread Thomas Goirand
On 10/12/2015 07:10 PM, Monty Taylor wrote:
> On 10/12/2015 12:43 PM, Clint Byrum wrote:
>> Excerpts from Thomas Goirand's message of 2015-10-12 05:57:26 -0700:
>>> On 10/11/2015 02:53 AM, Davanum Srinivas wrote:
>>>> Thomas,
>>>>
>>>> i am curious as well. AFAIK, cassandra works well with OpenJDK. Can you
>>>> please elaborate what your concerns are for #1?
>>>>
>>>> Thanks,
>>>> Dims
>>>
>>> s/works well/works/
>>>
>>> Upstream doesn't test against OpenJDK, and they close bugs without
>>> fixing them when it only affects OpenJDK and it isn't grave. I know this
>>> from one of the upstream from Cassandra, who is also a Debian developer.
>>> Because of this state of things, he gave up on packaging Cassandra in
>>> Debian (and for other reasons too, like not having enough time to work
>>> on the packaging).
>>>
>>> I trust what this Debian developer told me. If I remember correctly,
>>> it's Eric Evans  (ie, the author of the ITP at
>>> https://bugs.debian.org/585905) that I'm talking about.
>>>
>>
>> Indeed, I once took a crack at packaging it for Debian/Ubuntu too.
>> There's a reason 'apt-cache search cassandra' returns 0 results on Debian
>> and Ubuntu.
> 
> There is a different reason too - which is that (at least at one point
> in the past) upstream expressed frustration with the idea of distro
> packages of Cassandra because it led to people coming to them with
> complaints about the software which had been fixed in newer versions but
> which, because of distro support policies, were not present in the
> user's software version. (I can sympathize)

This is free software. We don't need to ask for permission from upstream
first.

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-14 Thread Robert Collins
On 15 October 2015 at 11:11, Thomas Goirand  wrote:
>
> One major pain point is unfortunately something ridiculously easy to
> fix, but which nobody seems to care about: the long & short descriptions
> format. These are usually buried into the setup.py black magic, which by
> the way I feel is very unsafe (does PyPi actually execute "python
> setup.py" to find out about description texts? I hope they are running
> this in a sandbox...).
>
> Since everyone uses the fact that PyPi accepts RST format for the long
> description, there's nothing that can really easily fit the
> debian/control. Probably a rst2txt tool would help, but still, the long
> description would still be polluted with things like changelog, examples
> and such (damned, why people think it's the correct place to put that...).
>
> The only way I'd see to fix this situation, would be a PEP. This will
> probably take a decade to have everyone switching to a new correct way
> to write a long & short description...

Perhaps Debian (1 thing) should change, rather than trying to change
all the upstreams packaged in it (>20K) :)

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Life cycle management use cases

2015-10-14 Thread Shamail
Great topic...

Please note that the Product WG[1] also has a user story focused on Lifecycle 
Management.  While FUEL is one aspect of the overall workflow, we would also 
like the team to consider project level enhancements (e.g. garbage collection 
inside the DB).

The Product WG would welcome your insights on lifecycle management 
tremendously.  Please help by posting comments to our existing user story[2].


[1] https://wiki.openstack.org/wiki/ProductTeam
[2] 
http://specs.openstack.org/openstack/openstack-user-stories/user-stories/draft/lifecycle_management.html

Thanks,
Shamail 

> On Oct 14, 2015, at 5:04 PM, Mike Scherbakov  wrote:
> 
> Hi fuelers,
> as we all know, Fuel lacks many of life cycle management (LCM) use cases. It 
> becomes a very hot issue for many of our users, as current LCM capabilities 
> are not very rich.
> 
> In order to think how we can fix it, we need to collect use cases first, and 
> prioritize them if needed. So that whatever a change in architecture we are 
> about to make, we would need to ensure that we meet LCM use cases or have a 
> proposal how to close it in a foreseeable future.
> 
> I started to collect use cases in the etherpad: 
> https://etherpad.openstack.org/p/lcm-use-cases.
> 
> Please contribute in there.
> 
> Thank you,
> -- 
> Mike Scherbakov
> #mihgen
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] getting rid of tablib completely (Requests + urllib3 + distro packages)

2015-10-14 Thread Thomas Goirand
On 10/14/2015 07:18 AM, Akihiro Motoki wrote:
> 2015-10-14 0:14 GMT+09:00 Doug Hellmann :
>> Excerpts from Thomas Goirand's message of 2015-10-13 12:38:00 +0200:
>>> On 10/12/2015 11:09 PM, Steve Baker wrote:
>>>> On 13/10/15 02:05, Thomas Goirand wrote:
>>>>>
>>>>> BTW, the same applies for tablib, which is in an even more horrible state
>>>>> that makes it impossible to package with Py3 support. But tablib could
>>>>> be removed from our (build-)dependency list, if someone cares about
>>>>> re-writing cliff-tablib, which IMO wouldn't be that much work. Doug, how
>>>>> many beers shall I offer you for that work? :)
>>>>>
>>>> Regarding tablib, cliff has had its own table formatter for some time,
>>>> and now has its own json and yaml formatters. I believe the only tablib
>>>> formatter left is the HTML one, which likely wouldn't be missed if it
>>>> was just dropped (or it could be simply reimplemented inside cliff).
>>>>
>>>> If the cliff deb depends on cliff-tablib
>>>
>>> It does.
>>
>> That dependency is backwards. cliff-tablib should depend on cliff. Cliff
>> does not need cliff-tablib, but cliff-tablib is only useful if cliff is
>> installed.
>>
>>> And also the below packages have a build-dependency on
>>> cliff-tablib:
>>>
>>> - python-neutronclient
>>> - python-openstackclient
>>>
>>> python-openstackclient also has a runtime depends on cliff-tablib.
>>
>> Now that we have a cliff with the formatters provided by tablib, we can
>> update those dependencies to remove cliff-tablib. Someone just needs to
>> follow through on that with patches to the requirements files for the
>> clients.
> 
> In neutronclient, we have cliff-tablib is test-requirements.txt,
> but it is actually unnecessary now.
> https://review.openstack.org/#/c/234334/
> 
> Akihiro

Ah, super nice! Is it also not necessary for the Liberty release of
neutronclient? Or just master?

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] getting rid of tablib completely (Requests + urllib3 + distro packages)

2015-10-14 Thread Thomas Goirand
On 10/13/2015 09:41 PM, Dean Troyer wrote:
> On Tue, Oct 13, 2015 at 10:14 AM, Doug Hellmann wrote:
> 
> Now that we have a cliff with the formatters provided by tablib, we can
> update those dependencies to remove cliff-tablib. Someone just needs to
> follow through on that with patches to the requirements files for the
> clients.
> 
> 
> For OpenStackClient: https://review.openstack.org/234406
> 
> dt

Cool!

Could I also remove it from the Liberty version of openstackclient?

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] getting rid of tablib completely (Requests + urllib3 + distro packages)

2015-10-14 Thread Thomas Goirand
On 10/13/2015 05:14 PM, Doug Hellmann wrote:
> Excerpts from Thomas Goirand's message of 2015-10-13 12:38:00 +0200:
>> On 10/12/2015 11:09 PM, Steve Baker wrote:
>>> On 13/10/15 02:05, Thomas Goirand wrote:
>>>>
>>>> BTW, the same applies for tablib, which is in an even more horrible state
>>>> that makes it impossible to package with Py3 support. But tablib could
>>>> be removed from our (build-)dependency list, if someone cares about
>>>> re-writing cliff-tablib, which IMO wouldn't be that much work. Doug, how
>>>> many beers shall I offer you for that work? :)
>>>>
>>> Regarding tablib, cliff has had its own table formatter for some time,
>>> and now has its own json and yaml formatters. I believe the only tablib
>>> formatter left is the HTML one, which likely wouldn't be missed if it
>>> was just dropped (or it could be simply reimplemented inside cliff).
>>>
>>> If the cliff deb depends on cliff-tablib
>>
>> It does.
> 
> That dependency is backwards. cliff-tablib should depend on cliff. Cliff
> does not need cliff-tablib, but cliff-tablib is only useful if cliff is
> installed.

My bad, sorry. python-cliff doesn't depend on cliff-tablib. Why did I
say yes?

>> And also the below packages have a build-dependency on
>> cliff-tablib:
>>
>> - python-neutronclient
>> - python-openstackclient
>>
>> python-openstackclient also has a runtime depends on cliff-tablib.
> 
> Now that we have a cliff with the formatters provided by tablib, we can
> update those dependencies to remove cliff-tablib. Someone just needs to
> follow through on that with patches to the requirements files for the
> clients.

Doug, the problem isn't cliff-tablib, the problem is tablib.

I don't really know how to describe the mess that this package is. It
bundles so many outdated Python modules, with hacks to force Py3 support
into them, that it is impossible to package properly. Nearly all of the
embedded Python modules in tablib have had newer upstream releases with
real support for Py3 (instead of the hacks in the bundled versions), but
upgrading to them breaks tablib. Just doing "python3 setup.py install"
fails for me because it's trying to install the Py2 version. It's just
horrible... :(

So please don't just remove cliff-tablib, which itself is fine, but
really get rid of tablib as per the subject...

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone federation] some questions about keystone IDP with SAML supported

2015-10-14 Thread wyw
hello, keystoners.  please help me


Here is my use case:
1. use keystone as IDP , supported with SAML
2. keystone integrates with LDAP
3. we use a java application as Service Provider, and want to integrate it
with keystone IDP.
4. we use keystone as Service Provider, and want to integrate it with
keystone IDP.


The problems:
in the k2k federation case, the keystone service provider requests
authentication info from the IDP via Shibboleth ECP.
in the java application, we use websso to request the IDP, for example:
idp_sso_endpoint = http://10.111.131.83:5000/v3/OS-FEDERATION/saml2/sso
but when the java application redirects to the sso url, it returns a 404 error.
so, if we want to integrate a java application with keystone IDP, do we
need to support ECP in the java application?


here are some of my references:
1. http://docs.openstack.org/developer/keystone/configure_federation.html
2. 
http://blog.rodrigods.com/it-is-time-to-play-with-keystone-to-keystone-federation-in-kilo
 3. http://docs.openstack.org/developer/keystone/extensions/federation.html
https://gist.githubusercontent.com/zaccone/3c3d4c8f39a19709bcd7/raw/d938f2f9d1cf06d29a81d57c8069c291fed66cab/k2k-env.sh
https://gist.githubusercontent.com/zaccone/4bbc07d215c0047738b4/raw/75295fe32df88b24576ece69994270dc4eb19a6e/k2k-ecp-client.py
 
my keystone version is kilo


help me, thanks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [puppet] [neutron] Deprecating nova_admin_ options in puppet-neutron

2015-10-14 Thread Sergey Kolekonov
Hi folks,

Currently puppet-neutron module sets nova_admin_* options in neutron.conf
which are deprecated since Kilo release. I propose to replace them, but we
need to discuss how to do it better. I raised this question at
puppet-openstack weekly meeting yesterday [0]. So the main concern here is
that we need to switch to Keystone auth plugins to get rid of these options
[1] [2], but there's a possibility to create a custom plugin, so all
required parameters are unknown in general case.

It seems reasonable to support only the basic plugin (password), or perhaps
also token, as the most common cases; otherwise, an ability to pass all
required parameters as a hash would have to be added, which looks like a bit
of overkill.
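
To illustrate the hash idea in a few lines (illustrative Python, not actual
Puppet code; the function name and keys are invented): the module would dump
whatever keys the deployer supplies straight into the config section, so it
never needs to know every option each auth plugin accepts:

```python
def render_auth_section(section, options):
    """Render an arbitrary hash of auth options as INI-style lines.
    No key validation, so custom auth plugins keep working."""
    lines = ["[%s]" % section]
    lines += ["%s = %s" % (key, value) for key, value in sorted(options.items())]
    return "\n".join(lines)


creds = {"auth_plugin": "password",
         "auth_url": "http://keystone:5000/",
         "username": "neutron",
         "password": "secret"}
print(render_auth_section("nova_auth", creds))
```

The trade-off is exactly the one raised above: the hash is flexible but
opaque, so typos in option names surface only at service runtime.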

What do you think?

Thanks.

[0]
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-10-13-15.00.log.html
[1] https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L783
[2]
http://docs.openstack.org/developer/python-keystoneclient/authentication-plugins.html
-- 
Regards,
Sergey Kolekonov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] NFS mount as cinder user instead of root

2015-10-14 Thread Francesc Pinyol Margalef
Hi,
Yes, that worked! Thanks! :)

But the process is very slow (about half an hour to create a volume).
I think the problem is the execution of "du -sb --apparent-size --exclude
*snapshot* /var/lib/cinder/mnt/9ae799cf301b19940950ae49dd800c51", as shown
in the logs:

2015-10-13 19:33:14.127 1311 INFO cinder.volume.flows.manager.create_volume
[req-f52e5048-3155-4d49-92c0-4152b8243fd6 26e01a732d9e44d4a98305c6aa11860f
36593fc96ab64bc7959eb9e0ff2f2247 - - -] Volume
5230104d-68a3-4dc0-95ec-43f5d8fbc5d3: b
eing created as raw with specification: {'status': u'creating',
'volume_size': 1, 'volume_name':
u'volume-5230104d-68a3-4dc0-95ec-43f5d8fbc5d3'}
2015-10-13 19:33:14.140 1311 INFO cinder.brick.remotefs.remotefs
[req-f52e5048-3155-4d49-92c0-4152b8243fd6 26e01a732d9e44d4a98305c6aa11860f
36593fc96ab64bc7959eb9e0ff2f2247 - - -] Already mounted:
/var/lib/cinder/mnt/9ae799cf301b19940950ae49dd800c51
2015-10-13 19:40:27.556 1311 WARNING cinder.openstack.common.loopingcall
[req-0a4a8e09-f10b-4dc6-96bf-f7e333635f99 - - - - -] task u'>'
run outlasted interval by 499.80 sec
2015-10-13 19:40:27.564 1311 INFO cinder.volume.manager
[req-5b14e3f3-76d9-484e-819b-46da8f0e29a6 - - - - -] Updating volume status
2015-10-13 19:40:27.577 1311 INFO cinder.brick.remotefs.remotefs
[req-5b14e3f3-76d9-484e-819b-46da8f0e29a6 - - - - -] Already mounted:
/var/lib/cinder/mnt/9ae799cf301b19940950ae49dd800c51
2015-10-13 19:51:37.371 1311 WARNING cinder.openstack.common.loopingcall
[req-5b14e3f3-76d9-484e-819b-46da8f0e29a6 - - - - -] task u'>'
run outlasted interval by 609.81 sec
2015-10-13 19:51:37.378 1311 INFO cinder.volume.manager
[req-941c5a78-a85d-4fa8-9df9-033b4cc6e6f5 - - - - -] Updating volume status
2015-10-13 19:51:37.391 1311 INFO cinder.brick.remotefs.remotefs
[req-941c5a78-a85d-4fa8-9df9-033b4cc6e6f5 - - - - -] Already mounted:
/var/lib/cinder/mnt/9ae799cf301b19940950ae49dd800c51
2015-10-13 19:58:18.585 1311 ERROR cinder.openstack.common.periodic_task
[req-941c5a78-a85d-4fa8-9df9-033b4cc6e6f5 - - - - -] Error during
VolumeManager._report_driver_status: Unexpected error while running command.
Command: None
Exit code: -
Stdout: u"Unexpected error while running command.\nCommand: du -sb
--apparent-size --exclude *snapshot*
/var/lib/cinder/mnt/9ae799cf301b19940950ae49dd800c51\nExit code:
-15\nStdout: u''\nStderr: u''"
Stderr: None
2015-10-13 19:58:18.585 1311 TRACE cinder.openstack.common.periodic_task
Traceback (most recent call last):
2015-10-13 19:58:18.585 1311 TRACE cinder.openstack.common.periodic_task
File
"/usr/lib/python2.7/site-packages/cinder/openstack/common/periodic_task.py",
line 224, in run_periodic_tasks
2015-10-13 19:58:18.585 1311 TRACE
cinder.openstack.common.periodic_task task(self, context)
2015-10-13 19:58:18.585 1311 TRACE cinder.openstack.common.periodic_task
File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line
1499, in _report_driver_status
2015-10-13 19:58:18.585 1311 TRACE
cinder.openstack.common.periodic_task volume_stats =
self.driver.get_volume_stats(refresh=True)
2015-10-13 19:58:18.585 1311 TRACE cinder.openstack.common.periodic_task
File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105,
in wrapper
2015-10-13 19:58:18.585 1311 TRACE
cinder.openstack.common.periodic_task return f(*args, **kwargs)
2015-10-13 19:58:18.585 1311 TRACE cinder.openstack.common.periodic_task
File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py",
line 439, in get_volume_stats
2015-10-13 19:58:18.585 1311 TRACE
cinder.openstack.common.periodic_task self._update_volume_stats()
2015-10-13 19:58:18.585 1311 TRACE cinder.openstack.common.periodic_task
File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py",
line 458, in _update_volume_stats
2015-10-13 19:58:18.585 1311 TRACE
cinder.openstack.common.periodic_task capacity, free, used =
self._get_capacity_info(share)
2015-10-13 19:58:18.585 1311 TRACE cinder.openstack.common.periodic_task
File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py", line
281, in _get_capacity_info
2015-10-13 19:58:18.585 1311 TRACE
cinder.openstack.common.periodic_task run_as_root=run_as_root)
2015-10-13 19:58:18.585 1311 TRACE cinder.openstack.common.periodic_task
File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 143, in
execute
2015-10-13 19:58:18.585 1311 TRACE
cinder.openstack.common.periodic_task return processutils.execute(*cmd,
**kwargs)
2015-10-13 19:58:18.585 1311 TRACE cinder.openstack.common.periodic_task
File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py",
line 233, in execute
2015-10-13 19:58:18.585 1311 TRACE
cinder.openstack.common.periodic_task cmd=sanitized_cmd)
2015-10-13 19:58:18.585 1311 TRACE cinder.openstack.common.periodic_task
ProcessExecutionError: Unexpected error while running command.
2015-10-13 19:58:18.585 1311 TRACE cinder.openstack.common.periodic_task
Command: du -sb --apparent-size --exclude *snapshot*

[openstack-dev] [TripleO] Introspection rules aka advances profiles replacement: next steps

2015-10-14 Thread Dmitry Tantsur

Hi OoO'ers :)

It's going to be a long letter, fasten your seat-belts (and excuse my 
bad, as usual, English)!


In RDO Manager we used to have a feature called advanced profiles 
matching. It's still there in the documentation at 
http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/profile_matching.html 
but the related code needed reworking and didn't quite make it upstream 
yet. This mail is an attempt to restart the discussion on this topic.


Short explanation for those unaware of this feature: we used detailed 
data from introspection (acquired using hardware-detect utility [1]) to 
provide scheduling hints, which we called profiles. A profile is 
essentially a flavor, but calculated using much more data. E.g. you 
could say that a profile "foo" will be assigned to nodes with 1024 <= 
RAM <= 4096 and with GPU devices present (an artificial example). 
The profile was then set on the Ironic node as a capability as a result of 
introspection. Please read the documentation linked above for more details.


This feature had a bunch of problems with it, to name a few:
1. It didn't have an API
2. It required a user to modify files by hand to use it
3. It was tied to a pretty specific syntax of the hardware [1] library

So we decided to split this thing into 3 parts, which are of value on 
their own:


1. Pluggable introspection ramdisk - so that we don't force dependency 
on hardware-detect on everyone.
2. User-defined introspection rules - some DSL that will allow a user to 
define something like a specs file (see link above) via an API. The 
outcome would probably be capabilit(y|ies) set on the node.
3. Scheduler helper - a utility that will take the capabilities set by the 
previous step and turn them into exactly one profile to use.


Long story short, we got 1 and 2 implemented in appropriate projects 
(ironic-python-agent and ironic-inspector) during the Liberty time 
frame. Now it's time to figure out what we do in TripleO about this, namely:


1. Do we need some standard way to define introspection rules for 
TripleO? E.g. a JSON file like we have for ironic nodes?


2. Do we need a scheduler helper at all? We could use only capabilities 
for scheduling, but then we can end up with the following situation: 
node1 has capabilities C1 and C2, node2 has capability C1. First we 
deploy a flavor with capability C1, it goes to node1. Then we deploy a 
flavor with capability C2 and it fails, despite us having 2 correct 
nodes initially. This is what state files were solving in [1] (again, 
please refer to the documentation).
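
The ordering trap above can be sketched in a few lines of Python (a toy
model, not TripleO code): a greedy capability scheduler fails on a request
order that a one-profile-per-node assignment would satisfy.

```python
nodes = {"node1": {"C1", "C2"}, "node2": {"C1"}}


def greedy_assign(requests, available):
    """Assign each (flavor, capability) request to the first free node
    advertising that capability; return None if any request fails."""
    assignment, free = {}, dict(available)
    for flavor, cap in requests:
        match = next((n for n, caps in free.items() if cap in caps), None)
        if match is None:
            return None  # stranded, even though a valid mapping exists
        assignment[flavor] = match
        del free[match]
    return assignment


# Deploying the C1 flavor first grabs node1, so the later C2 request fails:
print(greedy_assign([("f1", "C1"), ("f2", "C2")], nodes))  # -> None
# The reverse order succeeds — exactly the fragility that a scheduler
# helper assigning one profile per node up front would remove:
print(greedy_assign([("f2", "C2"), ("f1", "C1")], nodes))
```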


3. If we need it, where does it go? tripleo-common? Do we need an HTTP API 
for it, or do we just do it in the place where we need it? After all, it's a 
pretty trivial manipulation of ironic nodes...


4. Finally, we need an option to tell introspection to use 
python-hardware. I don't think it should be on by default, but it will 
require rebuilding of IPA (due to a new dependency).


Looking forward to your opinions.
Dmitry.

[1] https://github.com/redhat-cip/hardware

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] The scenary to rolling upgrade Ironic

2015-10-14 Thread Dan Smith
> Conductors will always need to talk to the database. APIs may not need
> to talk to the database. I think we can just roll conductor
> upgrades through, and then update ironic-api after that. This should
> just work, as long as we're very careful about schema changes (this is
> where the expand/contract thing comes into play). Different versions of
> conductors are only a problem if the database schema is not compatible
> with one of the versions.

Yep, this seems like it's probably the right approach, assuming that
your API depending on RPC is still reasonable, performance-wise. It
might be confusing for people to hear that ironic's conductor goes last,
but nova's goes first. In reality, it was probably not a great idea to
have these named the same thing as they're not very similar, but oh well.

If you expand your DB schema separate from contract, it's not hard to
just have a flag that controls the online migration activity in your
objects so that it doesn't start happening until everything is upgraded
that needs to care. With that, you could upgrade all the conductors one
by one, and when the last one is upgraded, there is some indication that
it's okay to start actually writing things into the new column(s), or
whatever else. This is something I want nova's conductor to do so that
we don't have to upgrade all the conductors at once, which is currently
a requirement. We now have a global service_version counter in the
database that will provide us the info necessary to control this change.
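
A minimal sketch of that gating idea (toy Python with invented names; nova's
real mechanism lives in its service and objects code): new-format writes only
start once the minimum version across all registered conductors is new enough.

```python
NEW_FORMAT_VERSION = 2  # version at which every conductor understands new data


class ServiceRegistry:
    """Tracks the reported version of every running conductor."""

    def __init__(self):
        self.versions = {}

    def register(self, host, version):
        self.versions[host] = version

    def min_version(self):
        return min(self.versions.values())


def write_node(registry, node):
    """Write the new format only once the oldest registered conductor
    is new enough to read it back."""
    if registry.min_version() >= NEW_FORMAT_VERSION:
        node["format"] = "new"
    else:
        node["format"] = "old"  # a pre-upgrade conductor is still running
    return node


reg = ServiceRegistry()
reg.register("cond1", 2)
reg.register("cond2", 1)               # not yet upgraded
print(write_node(reg, {})["format"])   # -> old
reg.register("cond2", 2)               # last conductor upgraded
print(write_node(reg, {})["format"])   # -> new
```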

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [puppet] [neutron] Deprecating nova_admin_ options in puppet-neutron

2015-10-14 Thread Jamie Lennox
On re-reading the original email i've been thinking of auth_token
middleware rather than nova_admin_* options (i haven't done much
neutron config). I think everything still applies though and hopefully
this can be a mechanism that's reused across modules.

On 15 October 2015 at 10:37, Jamie Lennox  wrote:
> TL;DR: you can fairly easily convert existing puppet params into auth
> plugin format for now, but we're going to need the hash based config
> soon.
>
> I think as a first step it is a good idea to replace the deprecated
> options and use the password, v2password or v3password plugins*
> because you will need to maintain compatibility with the existing
> auth_user, auth_tenant etc options.
>
> However I would like to suggest all the puppet projects start to look
> at some way of passing around a hash of this information. We are
> currently at the stage where we have both kerberos and x509 auth_token
> middleware authentication working and IMO this will become the
> preferred deployment mechanism for service authentication in
> environments that have this infrastructure. (SAML and other auth
> mechanisms have also been proven to work here but are less likely to
> be used for service users). Note that this will not only apply to
> auth_token service users, but any service who has admin credentials
> configured in their conf file.
>
> I don't think it's necessary for puppet to validate the contents of
> these hashes, but I think it will be a losing battle to add all the
> options required for all upcoming authentication types to each
> service.
>
> I'm not sure if this makes it easier for you or not, but for
> situations exactly like this loading auth plugins from a config file
> take an auth_section option so you can do:
>
> [keystone_authtoken]
> auth_section = my_authentication_conf
>
> [my_authentication_conf]
> auth_plugin = password
> ...
>
> and essentially dump that hash straight into config without fear of
> having them conflict with existing options. It would also let you
> share credentials if you configure for example the nova service user
> in multiple places in the same config file, you can point multiple
> locations to the same auth_section.
>
>
>
> * The difference is that password queries keystone for available
> versions and uses the discovered urls for the correct endpoint and so
> expects the auth_url to be 'http://keystone.url:5000/' v2 and v3 are
> v2 and v3 specific and so expect the URL to be of the /v2.0 or /v3
> form. Password will work with /v2.0 or /v3 urls because those
> endpoints return only the current url as part of discovery and so it
> is preferred. For the smallest possible change v2password is closer to
> what the old options provide but then you'll have a bigger step to get
> to v3 auth - which we want fast.
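
For reference, a minimal password-plugin section from that era looked roughly
like this (a sketch only; exact option names depend on the keystonemiddleware
and keystoneclient versions, and the endpoint URL is a placeholder):

```ini
[keystone_authtoken]
auth_plugin = password
auth_url = http://keystone.example.com:5000/
username = neutron
password = secret
project_name = services
user_domain_id = default
project_domain_id = default
```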
>
> On 14 October 2015 at 22:20, Sergey Kolekonov  wrote:
>> Hi folks,
>>
>> Currently puppet-neutron module sets nova_admin_* options in neutron.conf
>> which are deprecated since Kilo release. I propose to replace them, but we
>> need to discuss how to do it better. I raised this question at
>> puppet-openstack weekly meeting yesterday [0]. So the main concern here is
>> that we need to switch to Keystone auth plugins to get rid of these options
>> [1] [2], but there's a possibility to create a custom plugin, so all
>> required parameters are unknown in general case.
>>
>> It seems reasonable to support only the basic plugin (password), or perhaps
>> also token, as the most common cases; otherwise, an ability to pass all
>> required parameters as a hash would have to be added, which looks like a bit
>> of overkill.
>>
>> What do you think?
>>
>> Thanks.
>>
>> [0]
>> http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-10-13-15.00.log.html
>> [1] https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L783
>> [2]
>> http://docs.openstack.org/developer/python-keystoneclient/authentication-plugins.html
>> --
>> Regards,
>> Sergey Kolekonov
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-14 Thread Ian Wienand

On 10/14/2015 11:08 AM, Zaro wrote:

We are soliciting feedback so please let us know what you think.


Since you asked :)

Mostly it's just different, which is fine.  Two things I noticed when
playing around are shown in [1]:

When reviewing, the order "-1 0 +1" is kind of counter-intuitive to
the usual dialog layout of the "most positive" thing on the left;
e.g. [OK] [Cancel] dialogs.  I just found it odd to interact with.

Maybe people default themselves to -1 though :)

The colours for +1/-1 seem to be missing.  You've got to think a lot
more to parse the +1/-1 rather than just glance at the colours.

-i

[1] http://imgur.com/QWXOMen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] backwards compat issue with PXEDeply and AgentDeploy drivers

2015-10-14 Thread Jim Rollenhagen
Cross posting to the ops list for visibility.

To follow up on this, we decided not to fix this after all. Besides
Ramesh's (very valid) points below, we realized that fixing this for out
of tree drivers that depended on Kilo behavior would break out of tree
drivers that implemented their own boot mechanism (e.g. virtual media),
or drivers that just generally didn't expect Ironic to handle booting
for them.

Copying my patch to our release notes here, to explain the breakage and
how to fix:

The AgentDeploy and ISCSIDeploy (formerly known as PXEDeploy) now depend
on drivers to mix in an instance of a BootInterface. For drivers that
exist out of tree, that use these deploy drivers directly, an error will
be thrown during deployment. For drivers that expect these deploy classes
to handle PXE booting, please mix in `ironic.drivers.modules.pxe.PXEBoot`
as self.boot. Drivers that handle booting themselves (for example, a driver
that implements booting from virtual media) should mix in
`ironic.drivers.modules.fake.FakeBoot` as self.boot, to make calls to the
boot interface a no-op. Additionally, as mentioned before,
`ironic.drivers.modules.pxe.PXEDeploy` has moved to
`ironic.drivers.modules.iscsi_deploy.ISCSIDeploy`, which will break drivers
that mix this class in.
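
The split described in this release note can be illustrated with a small, self-contained sketch. The class names below mirror Ironic's real ones (PXEBoot, FakeBoot, ISCSIDeploy), but the bodies are purely illustrative stubs, not the actual ironic.drivers code:

```python
# Schematic sketch of the Liberty boot/deploy split; illustrative stubs only.

class PXEBoot:
    """Stands in for ironic.drivers.modules.pxe.PXEBoot."""
    def prepare_ramdisk(self, task):
        return "pxe ramdisk prepared"

class FakeBoot:
    """Stands in for ironic.drivers.modules.fake.FakeBoot: boot is a no-op."""
    def prepare_ramdisk(self, task):
        return None

class ISCSIDeploy:
    """Liberty deploy interfaces call task.driver.boot directly, which is
    exactly what breaks Kilo-era drivers that never defined self.boot."""
    def deploy(self, task):
        return task.driver.boot.prepare_ramdisk(task)

class Task:
    def __init__(self, driver):
        self.driver = driver

class KiloEraDriver:
    """Pre-Liberty out-of-tree driver: no boot interface defined."""
    def __init__(self):
        self.deploy = ISCSIDeploy()

class UpdatedDriver:
    """Driver fixed for Liberty by mixing in a boot interface."""
    def __init__(self, boot=None):
        self.deploy = ISCSIDeploy()
        self.boot = boot or PXEBoot()
```

A KiloEraDriver now hits an AttributeError on task.driver.boot during deploy; mixing in PXEBoot (or FakeBoot, for drivers that handle booting themselves) restores deployment.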

To out of tree driver authors: the Ironic team apologizes profusely for
this inconvenience. We're meeting up in Tokyo to discuss our driver API
and the boundaries there; please join us!

// jim

On Tue, Oct 06, 2015 at 12:05:37PM +0530, Ramakrishnan G wrote:
> Well it's nice to fix, but I really don't know if we should be fixing it.
> As discussed in one of the Ironic meetings before, we might need to define
> what is our driver API or SDK or DDK or whatever we choose to call it.
> Please see inline for my thoughts.
> 
> On Tue, Oct 6, 2015 at 5:54 AM, Devananda van der Veen <
> devananda@gmail.com> wrote:
> 
> > tldr; the boot / deploy interface split we did broke an out of tree
> > driver. I've proposed a patch. We should get a fix into stable/liberty too.
> >
> > Longer version...
> >
> > I was rebasing my AMTTool driver [0] on top of master because the in-tree
> > one still does not work for me, only to discover that my driver suddenly
> > failed to deploy. I have filed this bug
> >   https://bugs.launchpad.net/ironic/+bug/1502980
> > because we broke at least one out of tree driver (mine). I highly suspect
> > we've broken many other out of tree drivers that relied on either the
> > PXEDeploy or AgentDeploy interfaces that were present in Kilo release. Both
> > classes in Liberty are making explicit calls to "task.driver.boot" -- and
> > kilo-era driver classes did not define this interface.
> >
> 
> 
> I would like to think more about what our driver API really is. We have a
> couple of well defined interfaces in ironic/drivers/base.py which people
> may follow, implement an out-of-tree driver, make it a stevedore entrypoint
> and get it working with Ironic.
> 
> But
> 
> 1) Do we promise them that in-tree implementations of these interfaces will
> always exist?  For example, in the boot/deploy work done in Liberty, we removed
> the class PxeDeploy [1].  It actually got broken down into PXEBoot and
> ISCSIDeploy.  In the first place, do we guarantee that they will exist
> forever in the same place with the same name? :)
> 
> 2) Do we really promise that the in-tree implementations of these interfaces
> will behave the same way? For example, the broken AgentDeploy, which
> is an implementation of our DeployInterface: do we guarantee that this
> implementation will always keep doing whatever it was doing every time the
> code is rebased?
> 
> [1] https://review.openstack.org/#/c/166513/19/ironic/drivers/modules/pxe.py
> 
> 
> 
> >
> > I worked out a patch for the AgentDeploy driver and have proposed it here:
> >   https://review.openstack.org/#/c/231215/1
> >
> > I'd like to ask folks to review it quickly -- we should fix this ASAP and
> > backport it to stable/liberty before the next release, if possible. We
> > should also make a similar fix for the PXEDeploy class. If anyone gets to
> > this before I do, please reply here and let me know so we don't duplicate
> > effort.
> >
> 
> 
> This isn't going to be the same as above, as there is no longer a PXEDeploy
> class any more.  We might need to create a new class PXEDeploy which
> probably inherits from ISCSIDeploy and has task.driver.boot worked around
> in the same way as the above patch.
> 
> 
> 
> >
> > Also, Jim already spotted something in the review that is a bit
> > concerning. It seems like the IloVirtualMediaAgentVendorInterface class
> > expects the driver it is attached to *not* to have a boot interface and
> > *not* to call boot.clean_up_ramdisk. Conversely, other drivers may be
> > expecting AgentVendorInterface to call boot.clean_up_ramdisk -- since that
> > was its default behavior in Kilo. I'm not sure what the right way to fix
> > this is, but I lean towards updating the in-tree driver 

Re: [openstack-dev] [OpenStack-Infra] [infra] Infra Design Summit Schedule

2015-10-14 Thread Anita Kuno
On 10/14/2015 07:13 PM, Jeremy Stanley wrote:
> Based on feedback from our last meeting and the discussion and
> subsequent voting in our session ideas pad, I've taken a first pass
> at scheduling:
> 
> http://mitakadesignsummit.sched.org/type/Infrastructure
> 
> I did the best I could to avoid obvious conflicts where necessary
> participants were likely to be involved in other summit tracks or
> conference talks, but due to the compressed nature of this round
> there's some unavoidable overlap (notably with QA, Docs, Ansible,
> Oslo, TripleO, and Ironic).
> 
> Also, since it ended up making more sense to collapse the Storyboard
> and Maniphest workroom sessions into the Task Tracking fishbowl
> session (individually they had fewer votes, and otherwise we either
> had to conflict with QA or do the corresponding workrooms before the
> fishbowl), we really only had three workroom topics so I've proposed
> spending our back-to-back Wednesday sessions on the most popular of
> the three: Masterless Puppet. I have confidence you'll let me know
> if this is insane, and in that case provide alternative suggestions!
> 
> 
> 
> ___
> OpenStack-Infra mailing list
> openstack-in...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
> 

Thank you, this looks good to me.

Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [puppet] [neutron] Deprecating nova_admin_ options in puppet-neutron

2015-10-14 Thread Jamie Lennox
TL;DR: you can fairly easily convert existing puppet params into auth
plugin format for now, but we're going to need the hash based config
soon.

I think as a first step it is a good idea to replace the deprecated
options and use the password, v2password or v3password plugins*
because you will need to maintain compatibility with the existing
auth_user, auth_tenant etc options.

However I would like to suggest all the puppet projects start to look
at some way of passing around a hash of this information. We are
currently at the stage where we have both kerberos and x509 auth_token
middleware authentication working and IMO this will become the
preferred deployment mechanism for service authentication in
environments that have this infrastructure. (SAML and other auth
mechanisms have also been proven to work here but are less likely to
be used for service users). Note that this will not only apply to
auth_token service users, but any service who has admin credentials
configured in their conf file.

I don't think it's necessary for puppet to validate the contents of
these hashes, but I think it will be a losing battle to add all the
options required for all upcoming authentication types to each
service.

I'm not sure if this makes it easier for you or not, but for
situations exactly like this, loading auth plugins from a config file
takes an auth_section option, so you can do:

[keystone_authtoken]
auth_section = my_authentication_conf

[my_authentication_conf]
auth_plugin = password
...

and essentially dump that hash straight into the config without fear of
having it conflict with existing options. It would also let you share
credentials: if you configure, for example, the nova service user in
multiple places in the same config file, you can point multiple
locations to the same auth_section.
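
As a rough illustration of how that indirection resolves, here is a stdlib-only sketch using Python's configparser (the real middleware does this through oslo.config; the section names and option values are taken from the example above, and the resolution logic is an approximation, not the actual keystonemiddleware code):

```python
# Sketch: resolving the auth_section indirection from an ini-style config.
import configparser

CONF = """\
[keystone_authtoken]
auth_section = my_authentication_conf

[my_authentication_conf]
auth_plugin = password
auth_url = http://keystone.example.com:5000/
username = neutron
"""

cp = configparser.ConfigParser()
cp.read_string(CONF)

# Fall back to [keystone_authtoken] itself when no auth_section is set.
section = cp["keystone_authtoken"].get("auth_section", "keystone_authtoken")
options = dict(cp[section])
print(options["auth_plugin"])  # password
```

Multiple services' sections can point at the same auth_section, which is how the credential sharing described above works.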



* The difference is that password queries keystone for available
versions and uses the discovered URLs for the correct endpoint, and so
expects the auth_url to be 'http://keystone.url:5000/'. The v2password and
v3password plugins are v2- and v3-specific, and so expect the URL to be of
the /v2.0 or /v3 form. Password will also work with /v2.0 or /v3 URLs,
because those endpoints return only the current URL as part of discovery,
and so it is preferred. For the smallest possible change v2password is
closer to what the old options provide, but then you'll have a bigger step
to get to v3 auth - which we want fast.

On 14 October 2015 at 22:20, Sergey Kolekonov  wrote:
> Hi folks,
>
> Currently puppet-neutron module sets nova_admin_* options in neutron.conf
> which are deprecated since Kilo release. I propose to replace them, but we
> need to discuss how to do it better. I raised this question at
> puppet-openstack weekly meeting yesterday [0]. So the main concern here is
> that we need to switch to Keystone auth plugins to get rid of these options
> [1] [2], but there's a possibility to create a custom plugin, so all
> required parameters are unknown in general case.
>
> It seems reasonable to support only the basic plugin (password), or also token,
> as these are the most common cases; otherwise an ability to pass all required
> parameters as a hash should be added, which looks like a bit of an overkill.
>
> What do you think?
>
> Thanks.
>
> [0]
> http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-10-13-15.00.log.html
> [1] https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L783
> [2]
> http://docs.openstack.org/developer/python-keystoneclient/authentication-plugins.html
> --
> Regards,
> Sergey Kolekonov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Searchlight] Mitaka summit schedule

2015-10-14 Thread Tripp, Travis S
Hello Search Enthusiasts,

Our conference and summit schedule events are now loaded into the system.

Main conference presentation and demo:


 [1] Introducing OpenStack Searchlight - Search your cloud at the speed of 
light!

Design Summit:

 [2] Prioritizing Search Integrations and Capabilities (Nova, Neutron, Cinder, 
Swift, etc) 
[3] Cross Region Searching

Etherpads are created and linked from the design summit sessions for your 
viewing and editing pleasure.

 
 [1] http://sched.co/49ub
 [2] 
http://mitakadesignsummit.sched.org/event/1d1ef3b50d8d8607834697f0ef6d70d9#.Vh7mEhNVhBc
 [3] 
http://mitakadesignsummit.sched.org/event/b30bb4203a490efe1370ecfda5e4aaee#.Vh7mFhNVhBc


See you there!

Travis


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] backwards compat issue with PXEDeply and AgentDeploy drivers

2015-10-14 Thread Jim Rollenhagen
One more try on the ops list as it turns out I wasn't subscribed... :(

// jim 

> On Oct 14, 2015, at 17:39, Jim Rollenhagen  wrote:
> 
> Cross posting to the ops list for visibility.
> 
> To follow up on this, we decided not to fix this after all. Besides
> Ramesh's (very valid) points below, we realized that fixing this for out
> of tree drivers that depended on Kilo behavior would break out of tree
> drivers that implemented their own boot mechanism (e.g. virtual media),
> or drivers that just generally didn't expect Ironic to handle booting
> for them.
> 
> Copying my patch to our release notes here, to explain the breakage and
> how to fix:
> 
> The AgentDeploy and ISCSIDeploy (formerly known as PXEDeploy) now depend
> on drivers to mix in an instance of a BootInterface. For drivers that
> exist out of tree, that use these deploy drivers directly, an error will
> be thrown during deployment. For drivers that expect these deploy classes
> to handle PXE booting, please mix in `ironic.drivers.modules.pxe.PXEBoot`
> as self.boot. Drivers that handle booting themselves (for example, a driver
> that implements booting from virtual media) should mix in
> `ironic.drivers.modules.fake.FakeBoot` as self.boot, to make calls to the
> boot interface a no-op. Additionally, as mentioned before,
> `ironic.drivers.modules.pxe.PXEDeploy` has moved to
> `ironic.drivers.modules.iscsi_deploy.ISCSIDeploy`, which will break drivers
> that mix this class in.
> 
> To out of tree driver authors: the Ironic team apologizes profusely for
> this inconvenience. We're meeting up in Tokyo to discuss our driver API
> and the boundaries there; please join us!
> 
> // jim
> 
>> On Tue, Oct 06, 2015 at 12:05:37PM +0530, Ramakrishnan G wrote:
>> Well it's nice to fix, but I really don't know if we should be fixing it.
>> As discussed in one of the Ironic meetings before, we might need to define
>> what is our driver API or SDK or DDK or whatever we choose to call it.
>> Please see inline for my thoughts.
>> 
>> On Tue, Oct 6, 2015 at 5:54 AM, Devananda van der Veen <
>> devananda@gmail.com> wrote:
>> 
>>> tldr; the boot / deploy interface split we did broke an out of tree
>>> driver. I've proposed a patch. We should get a fix into stable/liberty too.
>>> 
>>> Longer version...
>>> 
>>> I was rebasing my AMTTool driver [0] on top of master because the in-tree
>>> one still does not work for me, only to discover that my driver suddenly
>>> failed to deploy. I have filed this bug
>>>  https://bugs.launchpad.net/ironic/+bug/1502980
>>> because we broke at least one out of tree driver (mine). I highly suspect
>>> we've broken many other out of tree drivers that relied on either the
>>> PXEDeploy or AgentDeploy interfaces that were present in Kilo release. Both
>>> classes in Liberty are making explicit calls to "task.driver.boot" -- and
>>> kilo-era driver classes did not define this interface.
>> 
>> 
>> I would like to think more about what our driver API really is. We have a
>> couple of well defined interfaces in ironic/drivers/base.py which people
>> may follow, implement an out-of-tree driver, make it a stevedore entrypoint
>> and get it working with Ironic.
>> 
>> But
>> 
>> 1) Do we promise them that in-tree implementations of these interfaces will
>> always exist?  For example, in the boot/deploy work done in Liberty, we removed
>> the class PxeDeploy [1].  It actually got broken down into PXEBoot and
>> ISCSIDeploy.  In the first place, do we guarantee that they will exist
>> forever in the same place with the same name? :)
>> 
>> 2) Do we really promise that the in-tree implementations of these interfaces
>> will behave the same way? For example, the broken AgentDeploy, which
>> is an implementation of our DeployInterface: do we guarantee that this
>> implementation will always keep doing whatever it was doing every time the
>> code is rebased?
>> 
>> [1] https://review.openstack.org/#/c/166513/19/ironic/drivers/modules/pxe.py
>> 
>> 
>> 
>>> 
>>> I worked out a patch for the AgentDeploy driver and have proposed it here:
>>>  https://review.openstack.org/#/c/231215/1
>>> 
>>> I'd like to ask folks to review it quickly -- we should fix this ASAP and
>>> backport it to stable/liberty before the next release, if possible. We
>>> should also make a similar fix for the PXEDeploy class. If anyone gets to
>>> this before I do, please reply here and let me know so we don't duplicate
>>> effort.
>> 
>> 
>> This isn't going to be the same as above, as there is no longer a PXEDeploy
>> class any more.  We might need to create a new class PXEDeploy which
>> probably inherits from ISCSIDeploy and has task.driver.boot worked around
>> in the same way as the above patch.
>> 
>> 
>> 
>>> 
>>> Also, Jim already spotted something in the review that is a bit
>>> concerning. It seems like the IloVirtualMediaAgentVendorInterface class
>>> expects the driver it is attached to *not* to have a boot interface and
>>> *not* to call 

Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-14 Thread Tony Breeds
On Tue, Oct 13, 2015 at 05:08:29PM -0700, Zaro wrote:
> Hello All,
> 
> The openstack-infra team would like to upgrade from our Gerrit 2.8 to
> Gerrit 2.11.  We are proposing to do the upgrade shortly after the
> Mitaka summit.  The main motivation behind the upgrade is to allow us
> to take advantage of some of the new REST api, ssh commands, and
> stream events features.  Also we wanted to stay closer to upstream so
> it will be easier to pick up more recent features and fixes.

Yeah really looking forward to 2.11!
 
> We want to let everyone know that there is a big UI change in Gerrit
> 2.11.  The change screen (CS), which is the main view for a patchset,
> has been completely replaced with a new change screen (CS2).  While
> Gerrit 2.8 contains both old CS and CS2, I believe everyone in
> Openstack land is really just using the old CS.  CS2 really wasn't

Just clarifying because -ENOTENOUGHCOFFEE

So in the Gerrit UI under (https://review.openstack.org/)
Settings -> Preferences -> Change view,  I have 3 options
 * Server Default (Old Screen)
 * Old Screen
 * New screen

I assume that 'Old Screen' == CS and 'New Screen' == CS2.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] Monasca Meeting @ Tokyo Summit

2015-10-14 Thread Fabio Giannetti (fgiannet)
Hi Oğuz,
   YES this is open to everybody.
Thanks,
Fabio

Fabio Giannetti
Cloud Innovation Architect
Cisco Services
fgian...@cisco.com
Phone: +1 408 527 1134
Mobile: +1 408 854 0020


Cisco Systems, Inc.
285 W. Tasman Drive
San Jose
California
95134
United States
Cisco.com








From: Oğuz Yarımtepe
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, October 13, 2015 at 11:22 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Monasca] Monasca Meeting @ Tokyo Summit

Hi,

On Wed, Oct 14, 2015 at 7:36 AM, Fabio Giannetti (fgiannet) wrote:
Guys,
   I have a Cisco room S3 to held a Monasca meeting over the Tokyo Summit.
The time slot is Thursday 4:30pm to 6pm.
Please mark your calendar and see you there.
Fabio


Will this meeting be open to everyone? We are using Monasca in our test 
environment and planning to use it in our production as well; we would like 
to hear about the future plans and the development process.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-14 Thread Anita Kuno

On 10/14/2015 07:07 PM, Tony Breeds wrote:
> On Tue, Oct 13, 2015 at 05:08:29PM -0700, Zaro wrote:
>> Hello All,
>> 
>> The openstack-infra team would like to upgrade from our Gerrit
>> 2.8 to Gerrit 2.11.  We are proposing to do the upgrade shortly
>> after the Mitaka summit.  The main motivation behind the upgrade
>> is to allow us to take advantage of some of the new REST api, ssh
>> commands, and stream events features.  Also we wanted to stay
>> closer to upstream so it will be easier to pick up more recent
>> features and fixes.
> 
> Yeah really looking forward to 2.11!
> 
>> We want to let everyone know that there is a big UI change in
>> Gerrit 2.11.  The change screen (CS), which is the main view for
>> a patchset, has been completely replaced with a new change screen
>> (CS2).  While Gerrit 2.8 contains both old CS and CS2, I believe
>> everyone in Openstack land is really just using the old CS.  CS2
>> really wasn't
> 
> Just clarifying because -ENOTENOUGHCOFFEE
> 
> So in the Gerrit UI under (https://review.openstack.org/)
> Settings -> Preferences -> Change view, I have 3 options
>  * Server Default (Old Screen)
>  * Old Screen
>  * New screen
> 
> I assume that 'Old Screen' == CS and 'New Screen' == CS2.

Correct, though Khai made the point that the 2.8 new screen isn't the same
as the 2.11 new screen. The 2.11 new screen can be found at
review-dev.openstack.org (sign in like you would sign into review.o.o).

Thanks,
Anita.

> 
> Yours Tony.
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] establishing release liaisons for mitaka

2015-10-14 Thread Lana Brindley

On 15/10/15 01:25, Doug Hellmann wrote:
> As with the other cross-project teams, the release management team
> relies on liaisons from each project to be available for coordination of
> work across all teams. It's the start of a new cycle, so it's time to
> find those liaison volunteers.
> 
> We are working on updating the release documentation as part of the
> Project Team Guide. Release liaison responsibilities are documented in
> [0], and we will update that page with more details over time.
> 
> In the past we have defaulted to having the PTL act as liaison if no
> alternate is specified, and we will continue to do that during Mitaka.
> If you plan to delegate release work to a liaison, especially for
> submitting release requests, please update the wiki [1] to provide their
> contact information. If you plan to serve as your team's liaison, please
> add your contact details to the page.

While PTLs are considering this important role (and editing the cross-project 
liaison wiki page!), please consider who you would like to be your 
documentation liaison as well [1]. The docs team relies on the docs CPLs to 
provide technical depth on the things we write, so having a subject matter 
expert means we are going to be providing better documentation for your project.

Thanks,
Lana

1: https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation


-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Ironic design summit schedule

2015-10-14 Thread Jim Rollenhagen
Hey all,

I sorted out the design summit schedule today, attempting to reduce
conflicts and such as much as possible.

The schedule is here: http://mitakadesignsummit.sched.org/type/Ironic

I created a general summit etherpad for us, that is here:
https://etherpad.openstack.org/p/summit-mitaka-ironic

Etherpads for the individual sessions are on the general etherpad, the
sessions themselves at sched.org, and also in this wiki (triple HA
etherpad list, yay):
https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Ironic

Please do let me know ASAP if there are any conflicts/questions/concerns.

Thanks!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][Fuel] OpenstackLib Client Provider Better Exception Handling

2015-10-14 Thread Matt Fischer
On Thu, Oct 8, 2015 at 5:38 AM, Vladimir Kuklin wrote:

> Hi, folks
>
> * Intro
>
> Per our discussion at Meeting #54 [0] I would like to propose the uniform
> approach of exception handling for all puppet-openstack providers accessing
> any types of OpenStack APIs.
>
> * Problem Description
>
> While working on Fuel during deployment of multi-node HA-aware
> environments we faced many intermittent operational issues, e.g.:
>
> - 401/403 authentication failures when we were doing scaling of OpenStack
>   controllers due to a difference in hashing view between keystone instances
> - 503/502/504 errors due to temporary connectivity issues
> - non-idempotent operations like deletion or creation - e.g. if you are
>   deleting an endpoint and someone is deleting it on the other node and you
>   get 404, you should continue with success instead of failing. A 409 Conflict
>   error should also signal us to re-fetch resource parameters and then decide
>   what to do with them.
>
> Obviously, it is not optimal to rerun puppet to correct such errors when
> we can just handle an exception properly.
>
> * Current State of Art
>
> There is some exception handling, but it does not cover all the
> aforementioned use cases.
>
> * Proposed solution
>
> Introduce a library of exception handling methods which should be the same
> for all puppet openstack providers as these exceptions seem to be generic.
> Then, for each of the providers we can introduce provider-specific
> libraries that will inherit from this one.
>
> Our mos-puppet team could add this into their backlog and could work on
> that in upstream or downstream and propose it upstream.
>
> What do you think on that, puppet folks?
>
> [0]
> http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-10-06-15.00.html
>

I think that we should look into some solutions here as I'm generally for
something we can solve once and re-use. Currently we solve some of this at
TWC by serializing our deploys and disabling puppet site wide while we do
so. This avoids the issue of Keystone on one node removing an endpoint
while the other nodes (who still have old code) keep trying to add it back.

For connectivity issues especially after service restarts, we're using
puppet-healthcheck [0] and I'd like to discuss that more in Tokyo as an
alternative to explicit retries and delays. It's in the etherpad so
hopefully you can attend.

[0] - https://github.com/puppet-community/puppet-healthcheck
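
The retry/ignore semantics Vladimir proposes can be sketched as a small status-code classifier. Python is used here for brevity (the real puppet-openstack providers are Ruby), and the mapping below is an assumption drawn from the examples in the thread, not an existing puppet-openstack library:

```python
# Sketch of a uniform exception-handling policy for API-backed providers.

TRANSIENT = {401, 403, 502, 503, 504}  # auth hash skew or temporary connectivity

def classify(operation, status):
    """Decide what a provider should do with an API response status."""
    if status in TRANSIENT:
        return "retry"    # retry the request, rather than rerunning puppet
    if operation == "delete" and status == 404:
        return "success"  # someone else already deleted the resource
    if status == 409:
        return "refetch"  # conflict: re-read resource parameters, then decide
    if 200 <= status < 300:
        return "success"
    return "fail"
```

A shared helper like this, inherited by provider-specific libraries, is essentially what the proposal describes.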
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-14 Thread Thierry Carrez
Adrien Cunin wrote:
> On 09/10/2015 11:42, Thierry Carrez wrote:
> Nice! I've got a question though: is there another way than the wiki
> page to consume those successes?
> 
> I was thinking of an RSS/Atom feed, which would make it possible to
> easily publish them elsewhere (website, social media, etc.).

A wiki page was the simplest, since the statusbot already had the
mechanics to post status updates for infrastructure to a wiki page, and
it makes reverting spam edits quite simple. My fear with an RSS/Atom/Twitter
feed is that it would create a larger target for spam (harder to fix
after the fact once everyone has received it). And nobody wants to spend
time moderating success snippets.

With that caveat, see
http://git.openstack.org/cgit/openstack-infra/statusbot/tree/statusbot/bot.py
if you want to propose extra features :)

Note that some success snippets should appear in future editions of the
weekly newsletter too.

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-14 Thread Ihar Hrachyshka
> On 14 Oct 2015, at 02:08, Zaro  wrote:
> 
> Hello All,
> 
> The openstack-infra team would like to upgrade from our Gerrit 2.8 to
> Gerrit 2.11.  We are proposing to do the upgrade shortly after the
> Mitaka summit.  The main motivation behind the upgrade is to allow us
> to take advantage of some of the new REST api, ssh commands, and
> stream events features.  Also we wanted to stay closer to upstream so
> it will be easier to pick up more recent features and fixes.
> 
> We want to let everyone know that there is a big UI change in Gerrit
> 2.11.  The change screen (CS), which is the main view for a patchset,
> has been completely replaced with a new change screen (CS2).  While
> Gerrit 2.8 contains both old CS and CS2, I believe everyone in
> Openstack land is really just using the old CS.  CS2 really wasn't
> ready in 2.8 and really should never be used in that version.  The CS2
> has come a long way since then and many other big projects have moved
> to using Gerrit 2.11 so it's not a concern any longer.  If you would
> like a preview of Gerrit 2.11 and maybe help us test it, head over to
> http://review-dev.openstack.org.  If you are very opposed to CS2 then
> you may like Gertty (https://pypi.python.org/pypi/gertty) instead.  If
> neither option works for you then maybe you can help us create a new
> alternative :)
> 
> We are soliciting feedback so please let us know what you think.

I had a chance to use the new Gerrit UI on gerrithub and internally in my
company, and I have to admit that it breaks my keyboard-based flow; I
haven't found a way to make it behave the old way.

Specifically, I use the [ and ] keys to switch between files in the same
patchset. It's especially useful when comparing mostly identical patchsets,
or reviewing backports. With the new UI this does not work for me, so I need
to reach for my mouse to switch between files. The 'r' shortcut did not work
for me either, and I expect other shortcuts are affected too. It could be a
local configuration issue, but the fact that I see it on two independent
setups suggests it's an upstream issue.

Another problem I have with the new UI is that sometimes it does not show the
full page header (with the links to the file list, diff modes, etc.) when I'm
in patch review mode, and I need to reload the page to see it. That could be a
browser issue, but I checked both Safari and Chrome and it fails the same way.

Ihar




[openstack-dev] [all] Etherpads list for Mitaka Design Summit

2015-10-14 Thread Thierry Carrez
Hi everyone,

I just set up the wiki page to collect all the etherpads for the Mitaka
Design Summit (yes, only 12 days away):

https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads

Please add your etherpads there when you create them, for easy reference
and facilitating session preparation ahead of the event.

Thanks!

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all][stable][release] 2015.1.2

2015-10-14 Thread Chuck Short
On Wed, Oct 14, 2015 at 5:01 AM, Thierry Carrez 
wrote:

> Ihar Hrachyshka wrote:
> > Chuck, have you forgotten about three more repositories for neutron-*aas?
>

My apologies; they are available now.

>
> Yeah, if those have fixes they should have been tagged as well.
>
> Also we'll want a change to openstack/releases to include 2015.1.2 in
> release history.
>
> --
> Thierry Carrez (ttx)
>
>
>
>


Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-14 Thread Dean Troyer
On Wed, Oct 14, 2015 at 3:51 AM, Thierry Carrez 
wrote:

> My main issue with CS2 is how greedy it is in horizontal space, mostly
> due to the waste of space in the "Related changes" panel. If there are
> related changes, the owner/reviewer/voting panel is cramped in the
> middle, while "related changes" has a lot of empty space on the right.
> Is there a way to turn that panel off (move it to a dropdown like the
> patchsets view), or make it use half the width, or push it under
> ChangeID, or...
>

I'm going to concur with ttx here.  That 'Related changes' section can have
at least 4 horizontal sub-tabs (see
https://review-dev.openstack.org/#/c/5294/) and just really seems out of
place where it is. If it is possible to make it another component in the
vertical stack of sections I think it would look a lot better in
less-than-full-screen-width browsers.  In reviews where it is not displayed
the (formerly center) column actually looks a lot better.

Also, did I miss where it indicates order in a series of stacked reviews?
I didn't see one in the test set yet.

The rest of the changes fall into the 'change is hard' category for me and
I'll forget the current screen in a month and life will go on.

Thanks for keeping us up to date guys!

dt

-- 

Dean Troyer
dtro...@gmail.com


[openstack-dev] [Cinder][Horizon] Cinder v2 endpoint name: volumev2 is required? volume is okay?

2015-10-14 Thread Akihiro Motoki
Hi Cinder team,

What is the expected service name for the Cinder v2 API in the Keystone
catalog? DevStack configures "volumev2" for the Cinder v2 API by default.

Can we use "volume" for the Cinder v2 API, or do we always need to use
"volumev2" in deployments with only the Cinder v2 API (i.e. no v1 API)?

The question was raised during the Horizon review [1].
The current Horizon code assumes "volumev2" is configured for the Cinder v2
API. Note that review [1] itself makes Horizon work only with the Cinder v2
API. I would like to know whether the Horizon team needs to consider a
"volume" endpoint for the Cinder v2 API or not.

[1] https://review.openstack.org/#/c/151081/
[2] https://bugs.launchpad.net/horizon/+bug/1415712

Thanks,
Akihiro
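One way clients can sidestep the ambiguity is to try both service types when resolving the endpoint from the token's service catalog. A hedged sketch; the catalog literal below is a trimmed, made-up example of what a token response contains, not real deployment data:

```python
def find_volume_endpoint(catalog, interface="public"):
    """Prefer a 'volumev2' service entry, fall back to 'volume'."""
    for service_type in ("volumev2", "volume"):
        for service in catalog:
            if service.get("type") != service_type:
                continue
            for ep in service.get("endpoints", []):
                if ep.get("interface") == interface:
                    return ep["url"]
    return None

# Illustrative, trimmed catalog (Keystone v3 token format)
catalog = [
    {"type": "volumev2",
     "endpoints": [{"interface": "public",
                    "url": "http://cloud.example.com:8776/v2/TENANT"}]},
]
print(find_volume_endpoint(catalog))
```

Whether Horizon should do this, or insist on "volumev2", is exactly the question above.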



[openstack-dev] [Kosmos] Skipping IRC Meeting for the next 2 weeks

2015-10-14 Thread Hayes, Graham

In light of all the travelling that will be happening,
we have decided to skip the weekly IRC meeting for the next 2 weeks.

Thanks,

Graham

-- 
Graham Hayes





Re: [openstack-dev] [all] Cross-project track schedule draft: Feedback needed

2015-10-14 Thread Flavio Percoco

On 13/10/15 07:05 +0900, Flavio Percoco wrote:

Greetings,

We have a draft schedule for the cross-project track for the Mitaka
summit[0] (find it at the bottom of the etherpad). Yay!

We would like to get feedback from folks that have proposed these
sessions - hopefully you're all cc'd - or other folks that may be
co-moderating them. If there are any conflicts for you in the current
schedule, please do let us know and we'll re-arrange where possible.

Feedback from anyone is obviously welcome, but keep in mind that this
is a process that will hardly make everyone happy.

Thanks,
Flavio

[0] https://etherpad.openstack.org/p/mitaka-cross-project-session-planning


The schedule was finalized and it can be found here:

http://mitakadesignsummit.sched.org/type/cross+project+workshops

I'd appreciate it if folks moderating sessions would check their sessions
and verify the abstracts, names, and etherpads. Don't hesitate to ping me
if something doesn't look correct.

Thanks to all who participated in the scheduling session and, of
course, to all the proposers and moderators.

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [devstack] A few questions on configuring DevStack for Neutron

2015-10-14 Thread Mike Spreitzer
Matt Riedemann  wrote on 10/08/2015 09:48:33 
PM:

> On 10/8/2015 11:38 AM, Sean M. Collins wrote:
> > Please see my response here:
> >
> > 
http://lists.openstack.org/pipermail/openstack-dev/2015-October/076251.html

> >
> > In the future, do not create multiple threads since responses will get
> > lost
> >
> 
> Maybe a dumb question, but couldn't people copy the localrc from the 
> gate-tempest-dsvm-neutron-full job, i.e.:
> 
> http://logs.openstack.org/20/231720/1/check/gate-tempest-dsvm-
> neutron-full/f418dc8/logs/localrc.txt.gz

Matt, thanks for the reminder.  I was pointed at one of those once before, 
but do not remember how to find them in general.  To be useful in 
http://docs.openstack.org/developer/devstack/guides/neutron.html we need 
to identify which section they illuminate, or add another section with 
appropriate explanation.  Which would it be?

Sean, you said that those URLs are tribal knowledge.  Would you recommend 
documenting them and, if so, where?  I see that the one Matt cited is part 
of a comprehensive archive from a tempest run, and there is even some 
explanatory material included within that run's archive.  Would it be 
possible and appropriate for the DevStack Neutron guide to point to some 
documentation that describes these archives?

Thanks,
Mike





[openstack-dev] [nova] expose quiesce unquiesce api

2015-10-14 Thread joehuang
Hi,

Currently Nova provides a VM snapshot API, which takes a consistent snapshot
of a VM and its volumes, and quiesces/unquiesces the VM automatically with
guest agent support.

We learned that exposing a quiesce/unquiesce API in Nova was "abandoned" in
the past. Now we have this requirement from the OPNFV
(https://wiki.opnfv.org/) multisite project, for a different use case from
the NFV area:

In an NFV scenario, a VNF (telecom application) often consists of a group of
VMs. To make it possible to restore the application in another site after a
catastrophic failure, the snapshot/backup/restore of this group of VMs should
be done transactionally, to guarantee consistency at the application level
and not only at the single-VM level: for example, quiesce VM1, quiesce VM2,
quiesce VM3, snapshot VM1's volumes, snapshot VM2's volumes, snapshot VM3's
volumes, unquiesce VM3, unquiesce VM2, unquiesce VM1. For some telecom
applications, this order is very important for a group of VMs with strong
relationships.

Therefore the OPNFV multisite project expects Nova to provide atomic
quiesce/unquiesce APIs, so that taking a consistent snapshot of a group of
VMs (not just a single VM) in a transactional way becomes possible.
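The ordering described above is straightforward to express once such APIs exist. A sketch of the proposed transaction, where `quiesce`, `snapshot_volumes`, and `unquiesce` are hypothetical callables standing in for APIs that Nova does not expose today:

```python
def consistent_group_snapshot(vms, quiesce, snapshot_volumes, unquiesce):
    """Snapshot a group of VMs with application-level consistency.

    quiesce/snapshot_volumes/unquiesce are hypothetical stand-ins for the
    APIs this thread proposes; Nova does not expose them today.
    """
    quiesced = []
    try:
        for vm in vms:                 # quiesce VM1, VM2, VM3, ...
            quiesce(vm)
            quiesced.append(vm)
        for vm in vms:                 # snapshot each VM's volumes
            snapshot_volumes(vm)
    finally:
        # unquiesce in reverse order, even if a step above failed
        for vm in reversed(quiesced):
            unquiesce(vm)

# Demonstrate the ordering with recording stubs
log = []
consistent_group_snapshot(
    ["VM1", "VM2", "VM3"],
    quiesce=lambda v: log.append(("quiesce", v)),
    snapshot_volumes=lambda v: log.append(("snapshot", v)),
    unquiesce=lambda v: log.append(("unquiesce", v)),
)
print(log)
```

The try/finally keeps VMs from being left quiesced if a snapshot fails partway through, which is part of what "atomic" would have to mean here.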

If there are no strong objections, we would like to submit a spec for this
BP, and hope it can be implemented in the M release.

Refer to
https://gerrit.opnfv.org/gerrit/#/c/1438/3/multisite-vnf-gr-requirement.rst
for a more detailed description.

Consensus on the use case was reached in a meeting of the OPNFV multisite project: 
http://ircbot.wl.linuxfoundation.org/meetings/opnfv-meeting/2015/opnfv-meeting.2015-09-03-08.02.html

OPNFV multisite project: https://wiki.opnfv.org/multisite

Best Regards
Chaoyi Huang ( Joe Huang )



Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-14 Thread Sean Dague
On 10/14/2015 07:57 AM, Ihar Hrachyshka wrote:
>> On 14 Oct 2015, at 02:08, Zaro  wrote:
>>
>> Hello All,
>>
>> The openstack-infra team would like to upgrade from our Gerrit 2.8 to
>> Gerrit 2.11.  We are proposing to do the upgrade shortly after the
>> Mitaka summit.  The main motivation behind the upgrade is to allow us
>> to take advantage of some of the new REST api, ssh commands, and
>> stream events features.  Also we wanted to stay closer to upstream so
>> it will be easier to pick up more recent features and fixes.
>>
>> We want to let everyone know that there is a big UI change in Gerrit
>> 2.11.  The change screen (CS), which is the main view for a patchset,
>> has been completely replaced with a new change screen (CS2).  While
>> Gerrit 2.8 contains both old CS and CS2, I believe everyone in
>> Openstack land is really just using the old CS.  CS2 really wasn't
>> ready in 2.8 and really should never be used in that version.  The CS2
>> has come a long way since then and many other big projects have moved
>> to using Gerrit 2.11 so it's not a concern any longer.  If you would
>> like a preview of Gerrit 2.11 and maybe help us test it, head over to
>> http://review-dev.openstack.org.  If you are very opposed to CS2 then
>> you may like Gertty (https://pypi.python.org/pypi/gertty) instead.  If
>> neither option works for you then maybe you can help us create a new
>> alternative :)
>>
>> We are soliciting feedback so please let us know what you think.
> 
> I had a chance to use the new Gerrit UI on gerrithub and internally in my
> company, and I have to admit that it breaks my keyboard-based flow; I
> haven't found a way to make it behave the old way.
> 
> Specifically, I use the [ and ] keys to switch between files in the same
> patchset. It's especially useful when comparing mostly identical patchsets,
> or reviewing backports. With the new UI this does not work for me, so I need
> to reach for my mouse to switch between files. The 'r' shortcut did not work
> for me either, and I expect other shortcuts are affected too. It could be a
> local configuration issue, but the fact that I see it on two independent
> setups suggests it's an upstream issue.

On review-dev [ and ] still work for me (as well as the other keyboard
short cuts).

The one shortcut change between CS1 and CS2 that trips me up when flipping
back and forth is that 'r' used to be review; now it toggles the reviewed
flag on a file. It's now 'a' to review and leave comments.

> Another problem I have with the new UI is that sometimes it does not show
> the full page header (with the links to the file list, diff modes, etc.)
> when I'm in patch review mode, and I need to reload the page to see it.
> That could be a browser issue, but I checked both Safari and Chrome and it
> fails the same way.
> 
> Ihar
> 
> 
> 
> 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [mistral] Team meeting minutes - 10/12/2015

2015-10-14 Thread Anastasia Kuznetsova
Hi Lingxian,

Yes, your understanding is correct.

> * run_functional_tests.sh is just used locally, and will not run tests
that depend on OpenStack.
This script was written just to make it possible to run the suite of API
tempest tests locally (with auth turned off in Mistral and 'hacked' auth in
tempest in our tests), to check your changes and find defects or regressions
before making a commit. We DO NOT use this script in our gates.

> * in our gate tests, all functional tests will run, since OpenStack will
be deployed before Mistral is installed.
Yes, all our functional tests run in gate-mistral-devstack-dsvm.
First comes installation of the OpenStack services (specified in the gate's
configuration) using devstack scripts, then the tests are run.


Regarding the usage of Tempest in our tests, we need to think about it
separately and investigate whether or not we can get rid of it, according to
DefCore requirements.
Maybe we can have a separate suite of API tempest tests and try to move them
into the tempest repository (or store them in a separate folder in our repo),
and have tempest-independent scenario tests in our repo. Need to think.


On Wed, Oct 14, 2015 at 11:08 AM, Lingxian Kong 
wrote:

> Hi, Mistral guys,
>
> In last meeting, we have discussed deeply about Tempest usage in Mistral
> project and the functional testing mechanism, I have the understanding in
> terms of functional testing as below,
>
> * run_functional_tests.sh is just used locally, and will not run tests
> that depend on OpenStack.
> * in our gate tests, all functional tests will run, since OpenStack will
> be deployed before Mistral is installed.
>
> Am I right?
>
> What's more, maybe I'm totally wrong about the Tempest usage in Mistral
> functional testing and using it for DefCore purposes. I'm afraid Nikolay is
> right: we can get rid of it totally, so we don't rely on it for our
> testing. Or we can use the test plugin mechanism Tempest already provides
> (see http://docs.openstack.org/developer/tempest/plugin.html), but I think
> we are not interested in it in the short term.
>
> On Tue, Oct 13, 2015 at 1:06 AM, Renat Akhmerov 
> wrote:
>
>> Hi,
>>
>> Thanks for joining team meeting today.
>>
>> Meeting minutes:
>> http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-10-12-16.00.html
>> Meeting log:
>> http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-10-12-16.00.log.html
>>
>> See you next Monday at the same time.
>>
>> Renat Akhmerov
>> @ Mirantis Inc.
>>
>>
>>
>>
>>
>>
>
>
> --
> *Regards!*
> *---*
> *Lingxian Kong*
>
>
>


-- 
Best regards,
Anastasia Kuznetsova


[openstack-dev] [magnum] Creating pods results in "EOF occurred in violation of protocol" exception

2015-10-14 Thread Bertrand NOEL

Hi,
I am trying Magnum, following the instructions on the quickstart page [1]. I
successfully create the baymodel and the bay. When I run the command to
create the redis pods (_magnum pod-create --manifest ./redis-master.yaml
--bay k8sbay_), it times out on the client side, and on the server side
(m-cond.log) I get the following stack trace. It also happens with the other
Kubernetes examples.
I am trying this on Ubuntu 14.04, with Magnum at commit
fc8f412c87ea0f9dc0fc1c24963013e6d6209f27.



2015-10-14 12:16:40.877 ERROR oslo_messaging.rpc.dispatcher 
[req-960570cf-17b2-489f-9376-81890e2bf2d8 admin admin] Exception during 
message handling: [Errno 8] _ssl.c:510: EOF occurred in violation of 
protocol
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 142, in _dispatch_and_reply
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 186, in _dispatch
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 129, in _do_dispatch
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/conductor/handlers/k8s_conductor.py", line 89, 
in pod_create
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
namespace='default')
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/apis/apiv_api.py", 
line 3596, in create_namespaced_pod
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
callback=params.get('callback'))
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py", 
line 320, in call_api
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
response_type, auth_settings, callback)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py", 
line 148, in __call_api
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
post_params=post_params, body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py", 
line 350, in request

2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
265, in POST
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher return 
self.IMPL.POST(*n, **kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
187, in POST

2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
133, in request

2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher headers=headers)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 72, in 
request

2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher **urlopen_kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 149, 
in request_encode_body
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher return 
self.urlopen(method, url, **extra_kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 
161, in urlopen
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher response = 
conn.urlopen(method, u.request_uri, **kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 
588, in urlopen
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher raise 
SSLError(e)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher SSLError: 
[Errno 8] _ssl.c:510: EOF occurred in violation of protocol

2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
2015-10-14 12:16:40.879 ERROR oslo_messaging._drivers.common 
[req-960570cf-17b2-489f-9376-81890e2bf2d8 admin admin] Returning 
exception [Errno 8] _ssl.c:510: EOF occurred in violation of protocol to 
caller
2015-10-14 12:16:40.879 
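In case it helps with debugging: "[Errno 8] EOF occurred in violation of protocol" generally means the remote side dropped the connection during the TLS handshake, for example HTTPS being spoken to a plaintext port, or a protocol/cipher mismatch between the conductor and the Kubernetes API server. A self-contained sketch that reproduces this failure class locally; the `probe_tls` helper is purely illustrative, not Magnum code:

```python
import socket
import ssl
import threading

def probe_tls(host, port):
    """Try a TLS handshake; return None on success, else the error class name."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # diagnostic probe only
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=5) as raw:
            with ctx.wrap_socket(raw):
                return None
    except (ssl.SSLError, OSError) as exc:
        return type(exc).__name__

# Reproduce the failure mode: a plain TCP listener that hangs up as soon
# as the TLS ClientHello arrives, giving the client an EOF mid-handshake.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

def hang_up():
    conn, _ = listener.accept()
    conn.recv(4096)
    conn.close()

threading.Thread(target=hang_up, daemon=True).start()
result = probe_tls("127.0.0.1", port)
print("handshake result:", result)
```

Running the same probe against the bay's Kubernetes API host/port would show whether the server is actually completing TLS handshakes.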

[openstack-dev] [Openstack] Tokyo Summit Survey: Experience Deploying CloudFoundry in OpenStack

2015-10-14 Thread Hua ZZ Zhang
Hi, OpenStack users and developers,
 
In our continued commitment to make CF the ONE true cross-cloud PaaS, we have been wondering what your experience was when deploying CF onto OpenStack using BOSH.
 
We created this short survey: https://www.surveymonkey.com/r/VQQZ5ZP to aggregate your experience.
 
This is a short survey that will take 5 minutes (or less) of your time to fill out.
 
After a week, we will share the results with the community. Our goal is to learn the most common pain points and hopefully help fix them in both the CF and OpenStack communities. Our session is scheduled on Wednesday, October 28, 4:40pm - 5:20pm: http://openstacksummitoctober2015tokyo.sched.org/event/31161b1a3c03e3ac8a968d5623ef9dfc#.Vh2sJxOqqko. Please feel free to join this discussion with us. :-)
 
Best regards,
 
CF BOSH team and contributors
 
-Zhang Hua (Edward), Xing Zhou (Tom), Dr. Max




[openstack-dev] [infra] Infra Design Summit Schedule

2015-10-14 Thread Jeremy Stanley
Based on feedback from our last meeting and the discussion and
subsequent voting in our session ideas pad, I've taken a first pass
at scheduling:

http://mitakadesignsummit.sched.org/type/Infrastructure

I did the best I could to avoid obvious conflicts where necessary
participants were likely to be involved in other summit tracks or
conference talks, but due to the compressed nature of this round
there's some unavoidable overlap (notably with QA, Docs, Ansible,
Oslo, TripleO, and Ironic).

Also, since it ended up making more sense to collapse the Storyboard
and Maniphest workroom sessions into the Task Tracking fishbowl
session (individually they had fewer votes, and otherwise we either
had to conflict with QA or do the corresponding workrooms before the
fishbowl), we really only had three workroom topics so I've proposed
spending our back-to-back Wednesday sessions on the most popular of
the three: Masterless Puppet. I have confidence you'll let me know
if this is insane, and in that case provide alternative suggestions!
-- 
Jeremy Stanley




Re: [openstack-dev] [ironic] The scenario to rolling upgrade Ironic

2015-10-14 Thread Jim Rollenhagen
On Wed, Oct 14, 2015 at 04:09:07PM -0700, Dan Smith wrote:
> > Conductors will always need to talk to the database. APIs may not need
> > to talk to the database. I think we can just roll conductor
> > upgrades through, and then update ironic-api after that. This should
> > just work, as long as we're very careful about schema changes (this is
> > where the expand/contract thing comes into play). Different versions of
> > conductors are only a problem if the database schema is not compatible
> > with one of the versions.
> 
> Yep, this seems like it's probably the right approach, assuming that
> your API depending on RPC is still reasonable, performance-wise. It
> might be confusing for people to hear that ironic's conductor goes last,
> but nova's goes first. In reality, it was probably not a great idea to
> have these named the same thing as they're not very similar, but oh well.

Sorry, if it wasn't clear, I said conductor should go first here, and
then API...

// jim
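For readers unfamiliar with the expand/contract pattern referenced above, a toy illustration using sqlite3; Ironic's real migrations use alembic, and the table and column names here are made up. The key property is that after the "expand" step, both old and new service versions can run against the same schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, driver TEXT)")
db.execute("INSERT INTO nodes (driver) VALUES ('pxe_ipmitool')")

# Expand: add the new column as nullable, so old code keeps working
# (it simply never reads or writes the new column).
db.execute("ALTER TABLE nodes ADD COLUMN driver_info TEXT")

# Online data migration: backfill while old and new versions coexist.
db.execute("UPDATE nodes SET driver_info = '{}' WHERE driver_info IS NULL")

# Contract (only after every service is upgraded): drop or rename the old
# columns. Omitted here, since sqlite's ALTER TABLE support is limited.

row = db.execute("SELECT driver, driver_info FROM nodes").fetchone()
print(row)
```

Rolling conductors and then APIs through, as proposed above, works precisely because nothing ever hits a schema it cannot read.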



[openstack-dev] Re: [keystone federation] some questions about keystone IDP with SAML supported

2015-10-14 Thread wyw
Many Thanks!

John, I agree with you. Keystone is not a general purpose federated IdP. 
 "Web application could use SAML HTTP-Redirect or it could also function as an 
ECP client."


Now Keystone supports token, saml2, and oauth1. There is also a keystone
plugin project trying to support oauth2. But Keystone's goal is not to
support Web SSO.


BTW, if I still want to utilize Keystone features such as token
authentication, SCIM, and integration with LDAP, could I use some SAMLv2 SSO
server, such as UAA or WSO2 Identity Server, to integrate with Keystone?


The case may look like this:
A Java Service Provider ==SAMLv2 SSO==>UAA/WSO2 Identity Server 
UAA/WSO2 Identity Server ==IDP integrate with==> Keystone ==datastore==>LDAP


Certainly, A Java Service Provider ==> UAA/WSO2 Identity Server ==> LDAP
may also make sense.


I mean: could we integrate any SSO server into a Keystone solution?
I think it can be done by implementing a Java websso service integrated with
Keystone's token auth, although it is not a standard SAMLv2 IDP solution.


Java SP ==sso==> Java WEBSSO Service(RestAPI) ==token==> Keystone(token 
auth/SCIM API)
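For the token leg of that flow, the websso shim would POST a standard v3 password-auth body to Keystone's `POST /v3/auth/tokens`. A sketch using only the standard library; the endpoint URL and credentials are placeholders, and on success Keystone returns the token in the X-Subject-Token response header:

```python
import json

def password_auth_body(user, password, project, domain="Default"):
    """Build the JSON body for Keystone v3 POST /v3/auth/tokens."""
    return {"auth": {
        "identity": {
            "methods": ["password"],
            "password": {"user": {
                "name": user,
                "domain": {"name": domain},
                "password": password}}},
        "scope": {"project": {
            "name": project,
            "domain": {"name": domain}}}}}

# Placeholder credentials; POST this body to
# http://keystone.example.com:5000/v3/auth/tokens
body = json.dumps(password_auth_body("demo", "secret", "demo"))
print(body[:60])
```

The same body shape works regardless of whether the caller is the Java shim or anything else speaking HTTP.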


Thanks for any further help.






------ Original Message ------
From: "John Dennis" ;
Date: 2015-10-15 1:05
To: "OpenStack Development Mailing List (not for usage
questions)"; "wyw"<93425...@qq.com>;
Subject: Re: [openstack-dev] [keystone federation] some questions about
keystone IDP with SAML supported



On 10/14/2015 07:10 AM, wyw wrote:
> hello, keystoners.  please help me
>
> Here is my use case:
> 1. use keystone as IDP , supported with SAML
> 2. keystone integrates with LDAP
> 3. we use a java application as Service Provider, and to integrate it
> with keystone IDP.
> 4. we use a keystone as Service Provider, and to integrate it withe
> keystone IDP.

Keystone is not an identity provider, or at least it's trying to get out 
of that business; the goal is to have keystone utilize actual IdPs 
instead for authentication.

K2K utilizes a limited subset of the SAML profiles and workflow. 
Keystone is not a general purpose SAML IdP supporting Web SSO.

Keystone implements those portions of various SAMLv2 profiles necessary 
to support federated Keystone and derive tokens from federated IdPs. 
Note this is distinctly different from Keystone being a federated IdP.

> The problems:
> in the k2k federation case, keystone service provider requests
> authentication info with IDP via Shibboleth ECP.

Nit: "Shibboleth ECP" is a misnomer. ECP (Enhanced Client & Proxy) is a 
SAMLv2 profile that Shibboleth happens to implement; however, there are 
other SPs and IdPs that also support ECP (e.g. mellon, Ipsilon).

> in the java application, we use websso to request the IDP, for example:
> idp_sso_endpoint = http://10.111.131.83:5000/v3/OS-FEDERATION/saml2/sso
> but when the java application redirects to the sso url, it returns a 404
> error. so, if we want to integrate a java application with the keystone
> IDP, do we need to support ECP in the java application?

You're misapplying SAML. Keystone is not a traditional IdP; if it were, 
your web application could use SAML HTTP-Redirect or it could also 
function as an ECP client, but not against Keystone. Why? Keystone is 
not a general purpose federated IdP.

-- 
John