[openstack-dev] [puppet] weekly meeting #39

2015-06-23 Thread Emilien Macchi
Hi everyone,

Here's an initial agenda for our weekly meeting today at 1500 UTC in
#openstack-meeting-4:

https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150623

Please add additional items you'd like to discuss.
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Interconnecting projects

2015-06-23 Thread Anik
Hi Assaf,
Now that I have read the RBAC network spec carefully, I believe it does allow private
networks to be shared with other tenants by non-admin users.
So the command neutron rbac create net-uuid|net-name --type network
--tenant-id tenant-uuid --action access_as_shared - can this only be used by
an admin? From the spec, it did not seem so.
Also, is the action access_as_external available now?
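
For concreteness, the spec models this as a POST to a new rbac-policies resource.
A rough sketch of that call from a non-admin tenant might look like the following
(the endpoint, token and UUIDs are placeholders, not values from this thread):

import json
import requests

NEUTRON_URL = "http://controller:9696/v2.0"      # placeholder endpoint
TOKEN = "user-scoped-keystone-token"             # regular (non-admin) token
NET_ID = "11111111-2222-3333-4444-555555555555"  # network owned by this tenant
TARGET = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # tenant to share with

body = {
    "rbac_policy": {
        "object_type": "network",
        "object_id": NET_ID,
        "action": "access_as_shared",
        "target_tenant": TARGET,
    }
}

resp = requests.post(
    NEUTRON_URL + "/rbac-policies",
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    data=json.dumps(body),
)
print(resp.status_code, resp.json())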

On Tue, Jun 2, 2015 at 9:14 PM, Assaf Muller amul...@redhat.com wrote:

Check out:
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html
If I understand correctly, what Anik is probably asking for is a way to connect
two OpenStack projects together from a network point of view, where a private
network in Project1 can be connected to a router in Project2. AFAIK, I don't
think we are planning to expose such a model in RBAC, where a tenant (non-admin)
has a way to control who can see/connect to his/her resources.
@Anik, please correct me if I am wrong. 


Kevin is trying to solve exactly this problem. We're really hoping to land it in
time for Liberty.

- Original Message -
 Hi,

 Trying to understand if somebody has come across the following scenario:

 I have two projects: Project 1 and Project 2.

 I have a neutron private network in Project 1, and I want to connect that
 private network to a neutron port in Project 2.

 This does not seem to be possible without using admin credentials. I am not
 talking about a shared provider network here.

 It seems that the problem lies in the fact that there is no data model today
 that lets one Project have knowledge about any other Project inside the same
 OpenStack region.

 Any pointers there will be helpful.
 Regards,
 Anik


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] weekly subteam status report

2015-06-23 Thread Ruby Loo
Hi,

Happy June Solstice! Things are warming up in Ironic-land. Following is the
subteam report for Ironic. As usual, this is pulled directly from the
Ironic whiteboard[0] and formatted.

Bugs (dtantsur)

As of Mon, 22 Jun 15:30 UTC (diff since 15 Jun, no diff for Nova bugs for
now):
- Open: 151 (-8). 5 new (0), 48 in progress (-3), 0 critical (0), 10 high
(-2) and 10 incomplete (-2)
- Nova bugs with Ironic tag: 23. 0 new, 0 critical, 0 high

Bug dashboard (http://ironic-bugs.divius.net/) seriously revamped:
- shows stats for nova bugs with ironic tag
- shows confirmed bugs, as they ideally also require triaging
- new glamour design

Neutron/Ironic work (jroll)
===
this review has rough consensus from the subteam, needs ironic-specs core
review https://review.openstack.org/#/c/187829/

this has an update coming, will need ironic-specs core review
https://review.openstack.org/#/c/188528/

Oslo (lintan)
==
The Oslo team sent a request to all OpenStack projects to finish the following things:
- sync oslo-incubator -- Ironic has done it recently
- extend own RequestContext from oslo.context -- implemented
- Use oslo-config-generator -- implemented
- adopt new libraries if necessary: futurist, automaton and
oslo.versionedobjects -- in progress

Doc (pshige)
==
start trying to improve Ironic docs with docs team
- http://lists.openstack.org/pipermail/openstack-docs/2015-June/006990.html

Inspector (dtantsur)
===
python-ironic-inspector-client released for the first time:
http://lists.openstack.org/pipermail/openstack-announce/2015-June/000370.html
- switching our driver to using it exclusively:
https://review.openstack.org/#/c/193176/

ironic-discoverd 1.0 branch to be discontinued, ironic-inspector 2.0.0 to
be released as soon as we stop moving things around
- https://bugs.launchpad.net/ironic-inspector/+milestone/2.0.0 has a rough
TODO list

Bifrost (TheJulia)
=
Presently implementing support for operating systems other than Ubuntu,
mainly CentOS/RHEL.

A number of reviews have been posted to switch Bifrost's usage path over to a
flexible dynamic inventory, which will provide greater flexibility in use.

Drivers
==

iRMC (naohirot)
-
https://review.openstack.org//#/q/owner:+naohirot%2540jp.fujitsu.com+status:+open,n,z

Status: Reactive (soliciting the core team's approval; I believe the review
process has been iterated enough to reach an acceptable level of quality.)
- iRMC Virtual Media Deploy Driver
bp/irmc-virtualmedia-deploy-driver
bp/non-glance-image-refs
bp/automate-uefi-bios-iso-creation
bp/local-boot-support-with-partition-images
bp/whole-disk-image-support
bp/ipa-as-default-ramdisk

Status: Active (review is on going)
- Enhance Power Interface for Soft Reboot and NMI
bp/enhance-power-interface-for-soft-reboot-and-nmi

Status: Active (started coding)
- iRMC out of band inspection
bp/ironic-node-properties-discovery



Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Mistral] Help with a patch

2015-06-23 Thread Ed Cranford
The problem was caused by references to stackforge/python-mistralclient.git in 
contrib/devstack/lib/mistral; our devstack job in Jenkins was failing because 
it could not clone the client [1].


Merging [2] updated those references to their new openstack-owned locations.


Our issues are resolved.

Thank you for your attention.


[1] 
http://logs.openstack.org/52/191952/4/check/gate-solum-devstack-dsvm/409f9ab/logs/devstacklog.txt.gz#_2015-06-16_21_30_35_227

[2] https://review.openstack.org/#/c/192754/


From: Renat Akhmerov rakhme...@mirantis.com
Sent: Tuesday, June 23, 2015 3:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum][Mistral] Help with a patch

Can you please confirm that the issue has been fixed?

The thing is, AFAIK Solum was using the old version of the Mistral API, which is no
longer supported (this was announced a couple of months ago), so I just want to make
sure you're using the new API.

Renat Akhmerov
@ Mirantis Inc.



On 18 Jun 2015, at 20:11, Nikolay Makhotkin 
nmakhot...@mirantis.com wrote:

Hi, Devdatta!

Thank you for catching this and for the patch. I already reviewed it and it has 
been merged.

--
Best Regards,
Nikolay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Kevin Benton added to neutron-stable-maint

2015-06-23 Thread Edgar Magana
Nice! +2




On 6/23/15, 3:37 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi all,

Just a heads-up that Kevin Benton has been added to the neutron-stable-maint
team, so now he has all the powers to +2/+A (and -2) backports.

Kevin is very active at voting for neutron backports (well, where is
he NOT active?), so here you go.

Thanks
Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJViTblAAoJEC5aWaUY1u57lAIH/2lBqAQv5sL0avDmWYHljUXO
zolTmsaK8+qs9FXUlr+Ca3TU1KqOH5p27m49pkJS2n3Sy1ojL0TkzmQxA5sB0/Bg
ufVq2aMGzC1L0k9c8VbMiHpX6/CHOEnL/bpp4Gh6LRpovVOCGRXnlPabd+h0PPJm
krDhG428ZB6wMnd5S+ZuV77Mlr2Lrrv8o0mzd0joO1munJFepk7ar7BLwYV+QeZq
kpi8dInh7gODI3ciQ3OWuWZWk4Dsc0Dup2ARsdUlhDN0/Sfc/ElXKWmYIam+flCR
ToxtzQBrw2LT/mT/mOpT8bJRBbP8KGtcunXvDEeoGOxOTF1+2dpfMbHYlohlCX8=
=q6il
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [qpid] [zmq] [RabbitMQ] [oslo] Pending deprecation of driver(s).

2015-06-23 Thread Davanum Srinivas
Josh,

We can generate options, if folks who need/want it are not here to do
the necessary work, not much we can do :(

-- dims

On Tue, Jun 23, 2015 at 11:30 AM, Joshua Harlow harlo...@outlook.com wrote:
 Flavio Percoco wrote:

 On 22/06/15 12:43 -0700, Clint Byrum wrote:

 Excerpts from Adam Young's message of 2015-06-22 11:26:54 -0700:

 On 06/20/2015 10:28 AM, Flavio Percoco wrote:
 
 
   As promised: https://review.openstack.org/#/c/193804/
 
  Cheers,
 You can't deprecate a driver without providing a viable alternative.

 Right now, QPID is the only driver that supports Kerberos.

 To support Kerberos, you need support for the GSSAPI library, which is
 usually done via support for SASL. Why is it so
 convoluted...historical...

 We've talked with both teams (I work with Ken) and I think Proton is
 likely going to be the first to have support. The folks working on
 Rabbit have the double hurdle of getting SASL support into Erlang first,
 and then support for SASL into Rabbit. They've indicated a preference
 for getting it into the AMQP 1.0 driver, and not bothering with the
 existing one, but, check me on this, the Oslo.Messaging code only supports
 the pre-1.0 Rabbit.


 So..until we have a viable alternative, please leave QPID alone. I've
 not been bothering people about it, as there seems to be work to get
 ahead, but until either Rabbit or Proton support Kerberos, I need QPID
 as is.


 Adam that is all great information, thank you. However, the policy is
 clear: commit resources for integration testing, or it needs to move
 out of tree.

 It's not a mountain of resources. Just an integration test that passes
 reliably, and a couple of QPID+OpenStack experts who we can contact when
 it breaks. If nobody is willing to put that much effort in, then it is
 not really something we want in our official messaging library tree.

 So please, if you can carry that message up to those who want it to
 stay in
 tree, that would be helpful and would put a stop to this deprecation.


 Agreed with the above.

 I'd also like to add that it was also discussed with folks previously
 maintaining the qpid driver what their plans with that work were and
 the agreement of deprecating it was reached with them.


 Just to note something that may be acceptable for people who need this
 and don't mind doing a little bit of work to maintain it out of tree: it
 appears the kombu qpid driver does have SASL support (from a quick glance at
 the code):

 - https://github.com/celery/kombu/blob/master/kombu/transport/qpid.py#L1250
 - https://github.com/celery/kombu/blob/master/kombu/transport/qpid.py#L1210

 So until this gets resolved and/or maintained, it appears folks could just
 use the one built into kombu (assuming it works)? If the oslo.messaging
 'impl_rabbit.py' one were more of a kombu 'wrapper' (and renamed
 'impl_kombu.py'?) then this might have been even easier to support/make
 possible.

 Food for thought :)



 I know this doesn't solve the current problem of not having kerberos
 support but it clears that this discussion has been had already.

 That said, the point being raised is very good and unfortunate.

 Cheers,
 Flavio



 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][stable] Cinder client broken in Juno

2015-06-23 Thread Mike Perez
There was a bug raised [1] by some large deployments that the Cinder
client 1.2.0 and beyond is not working because of version discovery.
Unfortunately, it does not take into account deployments that have a
proxy.

The Cinder client asks Keystone to find a publicURL based on a version.
Keystone will gather data from the service catalog, ask Cinder for
a list of the public endpoints, and compare. For the proxy cases,
Cinder gives internal URLs back to the proxy, and Keystone ends up
using those instead of the publicURL in the service catalog. As a
result, clients usually won't be able to use the internal URL, and
rightfully so.
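
To illustrate the mismatch, here is a simplified sketch; this is not the actual
client code, and all URLs below are invented for the example:

# publicURL from the Keystone service catalog (reachable by end users):
catalog_public_url = "https://cloud.example.com:8776/v2/TENANT_ID"

# Version links returned by the Cinder API itself when it sits behind a
# proxy and builds them from its own bind address:
version_links = ["http://10.0.0.5:8776/v1/", "http://10.0.0.5:8776/v2/"]

# Version discovery picks an endpoint from the links Cinder reported,
# so the internal URL replaces the public one from the catalog...
chosen = next(link for link in version_links if "/v2/" in link)

# ...and a client outside the proxy cannot reach it.
assert not chosen.startswith("https://cloud.example.com")
print("client will try:", chosen)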

This is all correctly set up on the deployer's side; this is an issue with
the server-side code of Cinder.

There is a patch that allows the deployer to specify a configuration
option public_endpoint [2] which was introduced in a patch in Kilo
[3]. The problem though is we can't expect people to already be
running Kilo to take advantage of this, and it leaves deployers
running stable releases of Juno in the dark with clients upgrading and
using the latest.

Two options:

1) Revert version discovery which was introduced in Kilo for Cinder client.

2) Grant exception on backporting [4] a patch that helps with this
problem, and introduces a config option that does not change default
behavior. I'm also not sure if this should be considered for Icehouse.


[1] - https://launchpad.net/bugs/1464160
[2] - 
http://docs.openstack.org/kilo/config-reference/content/cinder-conf-changes-kilo.html
[3] - https://review.openstack.org/#/c/159374/
[4] - https://review.openstack.org/#/c/194719/

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #39

2015-06-23 Thread Emilien Macchi


On 06/23/2015 10:41 AM, Emilien Macchi wrote:
 Hi everyone,
 
 Here's an initial agenda for our weekly meeting today at 1500 UTC in
 #openstack-meeting-4:
 
 https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150623
 
 Please add additional items you'd like to discuss.
 

Our meeting is done, you can read the notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-06-23-15.00.html

Have a great day,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][qa] Do we turn on voting for the tempest-dsvm-cells job?

2015-06-23 Thread Sean Dague
On 06/23/2015 08:13 AM, John Garbutt wrote:
 The question for the nova team is, shall we make the tempest-dsvm-cells
 job voting on nova changes knowing that the gate can be broken with a
 change to tempest that isn't caught in the regex?  In my opinion I think
 we should make it voting so we don't regress cells with changes to nova
 that go unnoticed with the non-voting job today.  Cells v2 is a nova
 priority for Liberty so we don't want setbacks now that it's stable.
 
 I would be tempted to risk it, given the large gain, but I am a little
 biased too.
 
 But with us controlling the regex, that seems a much easier call to say yes.

Right, with the REGEX back in control of the Nova team, we can just
temporarily drop any failing tests from Tempest changes until fixes or
reverts happen.

I honestly think this kind of asymmetric decoupling is a thing we're
going to need to get good at, because everything can't cross gate with
everything else. Otherwise we'll go back to 100 hr gate queues and bugs
that take a week to resolve in the intertwining.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [qpid] [zmq] [RabbitMQ] [oslo] Pending deprecation of driver(s).

2015-06-23 Thread Flavio Percoco

On 22/06/15 12:43 -0700, Clint Byrum wrote:

Excerpts from Adam Young's message of 2015-06-22 11:26:54 -0700:

On 06/20/2015 10:28 AM, Flavio Percoco wrote:


 As promised: https://review.openstack.org/#/c/193804/

 Cheers,
You can't deprecate a driver without providing a viable alternative.

Right now, QPID is the only driver that supports  Kerberos.

To support Kerberos, you need support for the GSSAPI library, which is
usually done via support for SASL.  Why is it so convoluted...historical...

We've talked with both teams (I work with Ken) and I think Proton is
likely going to be the first to have support.  The folks working on
Rabbit have the double hurdle of getting SASL support into Erlang first,
and then support for SASL into Rabbit. They've indicated a preference
for getting it into the AMQP 1.0 driver, and not bothering with the
existing one, but, check me on this, the Oslo.Messaging code only supports
the pre-1.0 Rabbit.


So..until we have a viable alternative, please leave QPID alone. I've
not been bothering people about it, as there seems to be work to get
ahead, but until either Rabbit or  Proton support Kerberos, I need QPID
as is.



Adam that is all great information, thank you. However, the policy is
clear: commit resources for integration testing, or it needs to move
out of tree.

It's not a mountain of resources. Just an integration test that passes
reliably, and a couple of QPID+OpenStack experts who we can contact when
it breaks. If nobody is willing to put that much effort in, then it is
not really something we want in our official messaging library tree.

So please, if you can carry that message up to those who want it to stay in
tree, that would be helpful and would put a stop to this deprecation.


Agreed with the above.

I'd also like to add that it was also discussed with folks previously
maintaining the qpid driver what their plans with that work were and
the agreement of deprecating it was reached with them.

I know this doesn't solve the current problem of not having kerberos
support but it clears that this discussion has been had already.

That said, the point being raised is very good and unfortunate.

Cheers,
Flavio



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


pgpq_zrNRSEVp.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][Solum] Request for feedback on new API resource

2015-06-23 Thread Fox, Kevin M
Another thing to consider is how you would write a Heat resource to consume the
new API. You can usually find API problems while considering this case. Heat
resources are very lightweight to implement if the API is good, and quite
difficult when it is not.

Thanks,
Kevin


From: Everett Toews
Sent: Tuesday, June 23, 2015 6:58:52 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [api][Solum] Request for feedback on new API 
resource

On Jun 18, 2015, at 3:07 PM, Devdatta Kulkarni 
devdatta.kulka...@rackspace.com wrote:

Hi, API WG team,

In Solum, recently we have been working on some changes to our REST API.

Basically, we have introduced a new resource ('app'). The spec for this has 
been accepted by Solum cores.
https://github.com/stackforge/solum-specs/blob/master/specs/liberty/app-resource.rst

Right now we have a patch for review implementing this spec:
https://review.openstack.org/#/c/185147/

If it is not too much to request, I was wondering if someone from your team can 
take a look
at the spec and the review, to see if we are not violating any of your 
guidelines.

Thank you for your help.

- Devdatta

Do you have this API documented anywhere?

Is there a spec or similar for this API change?

In our experience, it’s best to consider the API design apart from the 
implementation. The separation of concerns makes for a cleaner review and a 
better design. The Glance team did a good job of this in their Artifact 
Repository API specification [1].

Regards,
Everett

[1] https://review.openstack.org/#/c/177397/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [qpid] [zmq] [RabbitMQ] [oslo] Pending deprecation of driver(s).

2015-06-23 Thread Joshua Harlow

Flavio Percoco wrote:

On 22/06/15 12:43 -0700, Clint Byrum wrote:

Excerpts from Adam Young's message of 2015-06-22 11:26:54 -0700:

On 06/20/2015 10:28 AM, Flavio Percoco wrote:


 As promised: https://review.openstack.org/#/c/193804/

 Cheers,
You can't deprecate a driver without providing a viable alternative.

Right now, QPID is the only driver that supports Kerberos.

To support Kerberos, you need support for the GSSAPI library, which is
usually done via support for SASL. Why is it so
convoluted...historical...

We've talked with both teams (I work with Ken) and I think Proton is
likely going to be the first to have support. The folks working on
Rabbit have the double hurdle of getting SASL support into Erlang first,
and then support for SASL into Rabbit. They've indicated a preference
for getting it into the AMQP 1.0 driver, and not bothering with the
existing one, but, check me on this, the Oslo.Messaging code only supports
the pre-1.0 Rabbit.


So..until we have a viable alternative, please leave QPID alone. I've
not been bothering people about it, as there seems to be work to get
ahead, but until either Rabbit or Proton support Kerberos, I need QPID
as is.



Adam that is all great information, thank you. However, the policy is
clear: commit resources for integration testing, or it needs to move
out of tree.

It's not a mountain of resources. Just an integration test that passes
reliably, and a couple of QPID+OpenStack experts who we can contact when
it breaks. If nobody is willing to put that much effort in, then it is
not really something we want in our official messaging library tree.

So please, if you can carry that message up to those who want it to
stay in
tree, that would be helpful and would put a stop to this deprecation.


Agreed with the above.

I'd also like to add that it was also discussed with folks previously
maintaining the qpid driver what their plans with that work were and
the agreement of deprecating it was reached with them.


Just to note something that may be acceptable for people who need 
this and don't mind doing a little bit of work to maintain it out of 
tree: it appears the kombu qpid driver does have SASL support (from a 
quick glance at the code):


- https://github.com/celery/kombu/blob/master/kombu/transport/qpid.py#L1250
- https://github.com/celery/kombu/blob/master/kombu/transport/qpid.py#L1210

So until this gets resolved and/or maintained, it appears folks could 
just use the one built into kombu (assuming it works)? If the 
oslo.messaging 'impl_rabbit.py' one were more of a kombu 'wrapper' (and 
renamed 'impl_kombu.py'?) then this might have been even easier to 
support/make possible.
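
For anyone who wants to experiment with that out-of-tree route, a rough sketch
of driving the kombu qpid transport directly might look like this; the broker
URL is a placeholder, and the transport option used to select the SASL
mechanism is an assumption to verify against the kombu source linked above:

from kombu import Connection

with Connection(
    "qpid://broker.example.com:5672//",          # placeholder broker URL
    # Assumed knob for picking the SASL mechanism; check the kombu qpid
    # transport (links above) for the real option name.
    transport_options={"sasl_mechanisms": "GSSAPI"},
) as conn:
    queue = conn.SimpleQueue("demo")
    queue.put({"hello": "world"})                # publish a test message
    message = queue.get(block=True, timeout=5)   # and read it back
    print(message.payload)
    message.ack()
    queue.close()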


Food for thought :)



I know this doesn't solve the current problem of not having kerberos
support but it clears that this discussion has been had already.

That said, the point being raised is very good and unfortunate.

Cheers,
Flavio



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] why is evacuate marked as missing for libvirt?

2015-06-23 Thread Markus Zoeller
 Daniel P. Berrange berrange at redhat.com wrote on 04/15/2015 
11:35:39 
 AM:

  From: Daniel P. Berrange berrange at redhat.com
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev at lists.openstack.org
  Date: 04/15/2015 11:42 AM
  Subject: Re: [openstack-dev] [nova] why is evacuate marked as missing 
  for libvirt?
  
  On Tue, Apr 14, 2015 at 01:44:45PM -0400, Russell Bryant wrote:
   On 04/14/2015 12:22 PM, Matt Riedemann wrote:
This came up in IRC this morning, but the hypervisor support 
matrix 
 is
listing evacuate as 'missing' for the libvirt driver:

http://docs.openstack.org/developer/nova/support-
  matrix.html#operation_evacuate


Does anyone know why that is?  The rebuild method in the compute 
 manager
just re-uses other virt driver operations so by default it's 
 implemented
by all drivers.  The only one that overrides rebuild for evacuate 
is 
 the
ironic driver.
   
   I think it's a case where there are a couple of different things
   referred to with 'evacuate'.  I believe this was originally added to
   track something that was effectively XenServer specific and the
   description of the feature seems to reflect that.  We've since added 

 the
   more generic 'evacuate' API, so it's pretty confusing.  It should
   probably be reworked to track which drivers work with the 'evacuate' 

 API
   call, and perhaps have a different entry for whatever this different
   XenServer thing is (or was).
  
  Yep, if there's any mistakes or bizarre things in the support matrix
  just remember that the original wiki page had essentially zero 
 information
  about what each feature item was referring to - just the two/three 
word
  feature name. When I turned it into formal docs I tried to add better
  explanations, but it is entirely possible my interpretations were 
wrong
  in places. So if in doubt assume the support matrix is wrong, and just
  send a review to update it to some saner state with better description
  of the feature. Pretty much all the features in the matrix could do
  with better explanations and/or being broken up into finer grained
  features - there's plenty of scope for people to submit patches to
  improve the granularity of items.

 I think that the confusion is caused by something called the host
 maintenance mode [1]. When this is enabled, an evacuate is triggered
 by the underlying hypervisor. This mode can be set via the CLI [2]
 and is not implemented by the libvirt driver.
 The probably intended API for the evacuate feature is [3], which can
 be triggered via the CLI with:
 * nova evacuate server
 * nova host-evacuate host
 * nova host-evacuate-live host

 The evacuate feature thereby has a dependency on live migration. As
 the System z platform doesn't yet have [4] merged, evacuate is
 partial there [5] (TODO for me), whereas for x86 it should be complete.
 Please correct me if I'm wrong here.

 Unfortunately I couldn't find any tempest tests for the evacuate
 feature, so I tested it manually.

 [1] virt.driver.ComputeDriver.host_maintenance_mode(self, host, mode)
  
 
https://github.com/openstack/nova/blob/2015.1.0rc1/nova/virt/driver.py#L1016
 [2] Nova CLI; command nova host-update
  
 http://docs.openstack.org/cli-reference/content/novaclient_commands.html
 [3] nova.api.openstack.compute.contrib.evacuate
  
 
https://github.com/openstack/nova/blob/2015.1.0rc1/nova/api/openstack/compute/contrib/evacuate.py
 [4] libvirt: handle NotSupportedError in compareCPU
  https://review.openstack.org/#/c/166130/
 [5] Update hypervisor support matrix with column for kvm on system z
  https://review.openstack.org/#/c/172391/
 Regards,
 Markus Zoeller (markus_z)


Today the topic of evacuate and host maintenance mode came up
again in the nova IRC channel. I'd like to try again to state how I
understand it. At the end there will be suggestions for how this could be solved.


CLI - API - Driver

The mapping between the python-novaclient, the nova REST API and
the nova.virt.driver looks like this:

CLI                        API                   Driver
---------------------------------------------------------------------------
nova host-update           hosts.py              host_maintenance_mode()
                           update()              set_host_enabled()
---------------------------------------------------------------------------
nova evacuate              evacuate.py           rebuild()
                           _evacuate()
---------------------------------------------------------------------------
nova host-evacuate         evacuate.py           rebuild()
[calls API in a loop]      _evacuate()
---------------------------------------------------------------------------
nova host-evacuate-live    admin_actions.py      live_migration()
[calls API in a loop]      _migrate_live()
---------------------------------------------------------------------------

The CLI command nova host-update [1] calls the update 

[openstack-dev] [Neutron] net-create does not return on master branch

2015-06-23 Thread Liran Schour
Hi,

I am working on the master branch through devstack.
./stack.sh hangs on net-create:
+ echo_summary 'Creating initial neutron network elements'
+ [[ -t 3 ]]
+ [[ True != \T\r\u\e ]]
+ echo -e Creating initial neutron network elements
+ create_neutron_initial_network
2015-06-23 09:25:38.928 | Creating initial neutron network elements
++ get_field 1
++ grep ' demo '
++ local data field
++ read data
++ openstack project list
++ '[' 1 -lt 0 ']'
++ field='$2'
++ awk '-F[ \t]*\\|[ \t]*' '{print $2}'
++ echo '| bce07dcf1396435d91e13512e7bd80af | demo   |'
++ read data
+ TENANT_ID=bce07dcf1396435d91e13512e7bd80af
+ die_if_not_set 533 TENANT_ID 'Failure retrieving TENANT_ID for demo'
+ local exitcode=0
++ grep xtrace
++ set +o
+ local 'xtrace=set -o xtrace'
+ set +o xtrace
+ type -p neutron_plugin_create_initial_network_profile
+ is_provider_network
+ '[' '' == True ']'
+ return 1
++ get_field 2
++ local data field
++ read data
++ grep ' id '
++ neutron net-create --tenant-id bce07dcf1396435d91e13512e7bd80af private

I see CPU utilization of ~95% (looks like an endless loop).

In the background I can see the new network:
$ neutron net-list
+--+-+-+
| id   | name| subnets |
+--+-+-+
| b31f86a7-c9f8-40d6-a9cc-c61abc0aad10 | private | |
+--+-+-+


Does this look familiar to anyone?

Thanks,
- Liran
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal of nova-hyper driver

2015-06-23 Thread John Garbutt
On 22 June 2015 at 16:52, Peng Zhao p...@hyper.sh wrote:

Thanks John.

 I’m also not sure what the future would be, but I’d say that it would be
 nice to have a hybrid OpenStack cluster of both VM/App-Container flavor.
 And yes, it is more about a unified model between Nova and Magnum.


In my head, I always considered heat as a good place to model using both
kinds of resources.

But I can't say I have thought through all the details as yet.

Thanks,
John



 Best,
 Peng

 -
 Hyper - Make VM run like Container



 On Mon, Jun 22, 2015 at 5:10 PM, John Garbutt j...@johngarbutt.com
 wrote:

 On 22 June 2015 at 09:18, Sahid Orentino Ferdjaoui
 sahid.ferdja...@redhat.com wrote:
  On Sun, Jun 21, 2015 at 07:18:10PM +0300, Joe Gordon wrote:
  On Fri, Jun 19, 2015 at 12:55 PM, Peng Zhao p...@hyper.sh wrote:
 
  Hi, all,
  
   I would like to propose nova-hyper driver:
   https://blueprints.launchpad.net/nova/+spec/nova-hyper.
  
  - What is Hyper?
  Put simply, Hyper is a hypervisor-agnostic Docker runtime. It is
  similar to Intel’s ClearContainer, allowing one to run a Docker image
 with any
  hypervisor.
  
  - Why Hyper driver?
  Given its hypervisor nature, Hyper makes it easy to integrate with
  OpenStack ecosystem, e.g. Nova, Cinder, Neutron
  
  - How to implement?
  Similar to the nova-docker driver. Hyper has a daemon “hyperd”
 running on
  each physical box. hyperd exposes a set of REST APIs. Integrating
 Nova with
  the APIs would do the job.

 For clarity, we are yet to accept the nova-docker driver into the Nova
 project, due to various concerns about its potential future direction.
 Hopefully we should get a more final answer on that soon.

  - Roadmap
  Integrate with Magnum  Ironic.
  
  
  This sounds like a better fit for something on top of Nova such as
 Magnum
  then as a  Nova driver.

 +1

 On the surface, it feels like a possible Magnum driver.
 Although I am far from certain that its an exact match.
 But I think that would be a better starting point than Nova.

  Nova only supports things that look like 'VMs'. That includes bare
 metal,
  and containers, but it only includes a subset of container features.

 +1

 In your blueprint you mention:
 The difference between LXC and VM makes the driver hard to maintain a
 unified model in Nova.

 To be clear Nova has no intention of providing a unified model, in
 part due to the truth behind your statement above. We provide things
 that look like servers. Please see:
 http://docs.openstack.org/developer/nova/project_scope.html#containers

 I would recommending talking the container subgroup, in one of their
 meetings, about how best to integrate with OpenStack:
 https://wiki.openstack.org/wiki/Meetings/Containers

  Looking at the hyper CLI [0], there are many commands that nova would
 not
  support, such as:
 
  * The pod notion
  * exec
  * pull
 
  Then I guess you need to see if Hyper can implement mandatory features
  for Nova [1], [2].
 
  [1] http://docs.openstack.org/developer/nova/support-matrix.html
  [2] https://wiki.openstack.org/wiki/HypervisorSupportMatrix

 We have no intention of expanding the scope of the Nova API to include
 container operations. And the reverse is also true: we would want to
 see an intention to support all the important existing APIs before
 inclusion, and to prove that by having tempest tests reliably passing.

 Many thanks,
 John

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] the need about implementing a MAC security hook framework for OpenStack

2015-06-23 Thread Yang Luo
Hi Rob,

I have several thoughts about the idea.

1) The first is the message queue, as all components talk to each other via
it. If we follow the official installation guide, we only have one account
for all the components to use the message queues, and there are no access
control rules, although the cloud user can create their own users and rules
in RabbitMQ [1] (I don't know if there is such a security mechanism in Qpid).
I think there should be a universal message queue policy for OpenStack.
This policy could then be translated into the low-level rules in RabbitMQ
or Qpid. This feature is not security-hook related, but it seems to be
useful? And besides the message queue, are there any other communication
mechanisms for OpenStack components?

2) The VMs' access to resources needs to be restricted. The resources
include the VMs themselves, networks, disks and so on. E.g., when a disk is
provided to a VM, we just mount the disk to the VM, but there is no policy to
prevent the disk from being mounted to other VMs. So I wonder if a MAC policy
is needed. The MAC policy would then say that only this VM can access the disk.
The drawback is that the MAC policy would seem to change very frequently
based on the cloud user's choices, which doesn't look the same as the SELinux
policy.

3) For a security module, the first step is to determine the subjects and
objects. All access from subjects to objects will be mediated based on
policy. Subjects can be OpenStack components, the VMM or the cloud user. Objects
can be OpenStack components, the VMM, VMs and other resources (such as disks). I
don't know if my definitions of subjects and objects are suitable.

4) As for the hook implementation, the most common way is to add check code
in the source. However, I found this hook mechanism [2], which seems more
graceful than adding check code, but it is only for Nova; is there some way
that works in all components?
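
For reference, here is a rough sketch of how a provider plugs into the Nova hook
mechanism in [2], as I understand it; the hook name is one the devref mentions,
while the package and class names are hypothetical:

class MacCheckHook(object):
    """Hypothetical hook provider that could enforce a MAC decision."""

    def pre(self, *args, **kwargs):
        # Runs before the decorated Nova function; this is where a
        # cluster-wide policy service could be consulted to deny the call.
        pass

    def post(self, rv, *args, **kwargs):
        # Runs after the decorated function, with its return value.
        pass

# Registration happens through a setuptools entry point in the provider's
# setup.cfg (the package name is made up):
#
#   [entry_points]
#   nova.hooks =
#       create_instance = mac_hooks.nova:MacCheckHook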

Any response would be appreciated.

-Yang

[1] https://www.rabbitmq.com/access-control.html
[2] http://docs.openstack.org/developer/nova/devref/hooks.html


On Wed, Jun 17, 2015 at 4:43 PM, Clark, Robert Graham robert.cl...@hp.com
wrote:

  Hi Yang,



 This is an interesting idea. Most operators running production OpenStack
 deployments will be using OS-level Mandatory Access Controls already
 (likely AppArmor or SELinux).



 I can see where there might be some application on a per-service basis,
 introducing more security for Swift, Nova etc, I’m not sure what you could
 do that would be OpenStack-wide.



 Interested to hear where you think work on this might go.



 -Rob





 *From:* Yang Luo [mailto:hslu...@gmail.com]
 *Sent:* 17 June 2015 07:47
 *To:* openstack-dev@lists.openstack.org
 *Subject:* [openstack-dev] [Security] the need about implementing a MAC
 security hook framework for OpenStack



 Hi list,



   I'd like to know the need about implementing a MAC (Mandatory Access
 Control) security hook framework for OpenStack, just like the Linux
 Security Module to Linux. It can be used to help construct a security
 module that mediates the communications between OpenStack nodes and
 controls distribution of resources (i.e., images, network, shared disks).
  This security hook framework should be cluster-wide, support dynamic policy
 updates, be non-intrusively implemented, and have low performance
 overhead. The famous LSM module SELinux could also be imported into this
 security hook framework. In my view, as OpenStack has become a leading
 cloud operating system, it needs some kind of security architecture just like a
 standard OS.



 I am a Ph.D student who has been following OpenStack security closely for
 nearly 1 year. This is just my initial idea and I know this project won't
 be small, so before I actually work on it, I'd like to hear your
 suggestions or objections about it. Thanks!



 Best,

 Yang

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] RBAC Policy Basics

2015-06-23 Thread Adam Young

On 06/23/2015 06:14 AM, Osanai, Hisashi wrote:

On Tuesday, June 23, 2015 12:14 AM, Adam Young wrote:


It is not an issue if you keep each of the policy files completely
separate, but it means that each service has its own meaning for the
same name, and that confuses operators; owner in Nova means a user
that has a role on this project, whereas owner in Keystone means
objects associated with a specific user.

I understand your thought came from usability.

But it might increase development complexity; I think each component
doesn't want to define its own component name in policy.json because
it's well known there.
Hmm... please forget it (it might be too development-centric a thought) :-)

I want to focus on the following topic:


My concern now is:
* Service tokens were implemented in Juno [1], but so far we are not able
   to support them with oslo.policy without extensions.
* I think implementing spec [2] needs more time.

[1] 
https://github.com/openstack/keystone-specs/blob/master/specs/keystonemiddleware/implemented/service-tokens.rst
[2] https://review.openstack.org/#/c/133855/

Is there any way to support spec[1] in Oslo policy? Or
Should I wait for spec[2]?

I'm sorry, I am not sure what you are asking.

I'm sorry let me explain this again.

(1) Keystone supports service tokens [1] from Juno release.
(2) Oslo policy graduated from Kilo release.
(3) oslo.policy doesn't have the ability to deal with service tokens.
 I'm not 100% sure, but in order to support service tokens oslo.policy
 needs to handle service_roles in addition to the roles stored in a credential.
 Current logic:
 If a rule starts with 'role:', RoleCheck uses 'roles' in the credential.
 code:
https://github.com/openstack/oslo.policy/blob/master/oslo_policy/_checks.py#L249

 My solution for now is to create a ServiceRoleCheck class to handle 'service_roles' in
 the credential. This check will be used when a rule starts with 'srole:'.
https://review.openstack.org/#/c/149930/15/swift/common/middleware/keystoneauth.py
 L759-L767
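
A minimal sketch of the check described above could look like this, assuming
oslo.policy still exposes the register()/Check interfaces that the incubated
policy module had (worth verifying against the oslo.policy source):

from oslo_policy import policy


class ServiceRoleCheck(policy.Check):
    """Match a role carried by the service token (rule form 'srole:<role>')."""

    def __call__(self, target, creds, enforcer):
        roles = creds.get('service_roles', [])
        return self.match.lower() in [r.lower() for r in roles]


policy.register('srole', ServiceRoleCheck)

# A policy.json rule could then look like:
#   "my_service_rule": "srole:service"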
OK, I think I get it;  you want to make a check specific to the roles on 
the service token.  The term Service roles  confused me.


You can do this check with oslo.policy today. Don't use the role
check, just a generic check.
It looks for an element in a collection, and returns true if it is in
there; see



http://git.openstack.org/cgit/openstack/oslo.policy/commit/?id=a08bc79f5c117696c43feb2e44c4dc5fd0013deb



I think it's better to handle this in oslo.policy because it is a common issue, so I
would like
to know the plan for handling it.

Thanks in advance,
Hisashi Osanai


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo]Recursive validation for easier composability

2015-06-23 Thread Steven Hardy
On Tue, Jun 23, 2015 at 12:06:27PM +0100, Steven Hardy wrote:
 tripleo-heat-templates/controller/cinder_backends/enable_netapp.yaml
 
...
 
 OS::TripleO::EnableCinderBackends: [enable_netapp.yaml, enable_foo.yaml, ...]

Correction: I realised this should probably be more declarative, e.g. not
the imperative enable, so more like:

OS::TripleO::CinderBackends: [netapp.yaml, foo.yaml, ...]

But hopefully the overall idea in my original message was still clear, the
logic is independent of any such naming decisions.

Cheers.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] [EDP] about get_job_status in oozie engine

2015-06-23 Thread Trevor McKay
Hi Lu,

  yes, you're right.  The return value is a dictionary, and for the other EDP
engines only the status is returned (and we primarily care about the
status).

  I'm fine with changing the name to get_job_info() throughout the
job_manager and EDP.

  It actually raises the question for me about whether or not, in the
Oozie case, we really even need the extra Oozie information in the Sahara
database.  I don't think we use it anywhere; I'm not even sure the UI
displays it (but it might) or how much comes through in the REST responses.

  Maybe we should have get_job_status() which returns only status, and
an optional get_job_info() that returns more? But that may be a bigger
discussion.
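
A rough sketch of that split could look like the following; the Oozie client
call below is a stand-in for whatever the engine actually invokes:

class OozieJobEngine(object):
    """Sketch only: split full job info from the status-only view."""

    def __init__(self, oozie_client):
        self.client = oozie_client

    def get_job_info(self, job_execution):
        # Full Oozie job description (status plus everything else).
        return self.client.get_job_info(job_execution)

    def get_job_status(self, job_execution):
        # Only the piece the rest of EDP actually cares about.
        info = self.get_job_info(job_execution)
        return {'status': info.get('status')}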

Best,

Trevor

On Tue, 2015-06-23 at 15:18 +0800, lu jander wrote:
 Hi Trevor
 
 
 in sahara oozie engine (sahara/service/edp/oozie/engine.py
 sahara/service/edp/oozie/oozie.py)
 
 
  the function get_job_status actually returns not only the status of the
  job, but all the info about the job, so I think that we
  should rename this function to get_job_info; maybe that is more convenient for
  us? I want to add a function named get_job_info, but I find that it
  already exists here with a confusing name.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][qa] Do we turn on voting for the tempest-dsvm-cells job?

2015-06-23 Thread John Garbutt
On 22 June 2015 at 23:03, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:


 On 6/22/2015 4:38 PM, Matt Riedemann wrote:



 On 6/22/2015 4:32 PM, Russell Bryant wrote:

 On 06/22/2015 05:23 PM, Matt Riedemann wrote:

 The check-tempest-dsvm-cells job has been in nova's check queue since
 January as non-voting and has been stable for a couple of weeks now, so
 before it's regressed melwitt proposed a change to making it voting and
 gating on nova changes [1].

 I raised a concern in that change that the tempest-dsvm-cells job is not
 in the check queue for tempest or devstack changes, so if a change is
 merged in tempest/devstack which breaks the cells job, it will block
 nova changes from merging.

 mtreinish noted that tempest already has around 30 jobs running against
 it right now in the check queue, so he'd prefer that another one isn't
 added since the nova API shouldn't be different in the case of cells,
 but we know there are quirks.  That can be seen from the massive regex
 of excluded tests for the tempest-dvsm-cells job [2].

That sounds like a good call.

For the record, cells v2 should fix this major issue, which is awesome.

 If we could turn that regex list into tempest configurations, I think
 that would make it possible to not have to run tempest changes through
 the cells job in the check queue but also feel reasonably confident that
 changes to tempest that use the config options properly won't break the
 cells job (and block nova in the gate).

 This is actually something we should do regardless of voting or not on
 nova since new tests get added which might not fall in the regex and
 break the cells job.  So I'm going to propose some changes so that the
 regex will be moved to devstack-gate (regex exodus (tm)) and we'll work
 on whittling down the regex there (and run those d-g changes against the
 tempest-dsvm-cells job in the experimental queue).

 The question for the nova team is, shall we make the tempest-dsvm-cells
 job voting on nova changes knowing that the gate can be broken with a
 change to tempest that isn't caught in the regex?  In my opinion I think
 we should make it voting so we don't regress cells with changes to nova
 that go unnoticed with the non-voting job today.  Cells v2 is a nova
 priority for Liberty so we don't want setbacks now that it's stable.

I would be tempted to risk it, given the large gain, but I am a little
biased too.

But with us controlling the regex, that seems a much easier call to say yes.

 If a change does land in tempest which breaks the cells job and blocks
 nova, we (1) fix it or (2) modify the regex so it's excluded until fixed
 as has been done up to this point.

 I think we should probably mull this over in the ML and then vote on it
 in this week's nova meeting.

 [1] https://review.openstack.org/#/c/190894/
 [2]

 http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n1004




 Regarding your regex exodus, I recently added something for this.  In
 another project, I'm setting the regex in a file I keep in the code repo
 instead of project-config.

 support for DEVSTACK_GATE_SETTINGS in devstack-gate:
 https://review.openstack.org/190321

 usage in a job definition: https://review.openstack.org/190325

 a DEVSTACK_GATE_SETTINGS file that sets DEVSTACK_GATE_TEMPEST_REGEX:
 https://review.openstack.org/186894

 It all seems to be working for me, except I still need to tweak my regex
 to get the job passing, but at least I can do that without updating
 project-config now.


 Awesome, that is way cleaner.  I'll go that route instead, thanks!


 Here is the change that moves the regex into nova:

 https://review.openstack.org/#/c/194411/

This seems like the best of both worlds. Awesome stuff.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] We're enabling trusts by default in murano liberty

2015-06-23 Thread Kirill Zaitsev
Hello all.
I’m writing this letter to notify you of a small but important change that is
about to land:
https://review.openstack.org/194615

We’re enabling trusts by default in murano and asking everyone who is using
murano for development to upgrade and/or start using them and help us properly
test them. Use of trusts was implemented in Kilo. The main problem this
decision might pose is that our trusts code has not yet been
battle-tested, so there might be some errors associated with it. But
ideally you shouldn’t even notice that you enabled them! =)


The reason behind this decision is simple: this is the intended behaviour of
murano-engine. With the current behaviour, as soon as the user logs out of horizon,
his or her token becomes invalid and any deployment started by the user
can fail. Trusts solve this problem naturally.


-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest] Proposing Jordan Pittier for Tempest Core

2015-06-23 Thread Marc Koderer
+1

Am 23.06.2015 um 02:06 schrieb GHANSHYAM MANN ghanshyamm...@gmail.com:

 +1 :)
 
 On Tue, Jun 23, 2015 at 5:23 AM, Matthew Treinish mtrein...@kortar.org 
 wrote:
 
 
 Hi Everyone,
 
 I'd like to propose we add Jordan Pittier (jordanP) to the tempest core team.
 Jordan has been a steady contributor and reviewer on tempest over the past few
 cycles and he's been actively engaged in the Tempest community. Jordan has had
 one of the higher review counts on Tempest for the past cycle, and he has
 consistently been providing reviews that show insight into both the project
 internals and its future direction. I feel that Jordan will make an excellent
 addition to the core team.
 
 As per the usual, if the current Tempest core team members would please vote 
 +1
 or -1(veto) to the nomination when you get a chance. We'll keep the polls open
 for 5 days or until everyone has voted.
 
 Thanks,
 
 Matt Treinish
 
 References:
 
 https://review.openstack.org/#/q/reviewer:jordan.pittier%2540scality.com+project:openstack/tempest+OR+project:openstack/tempest-lib,n,z
 
 http://stackalytics.com/?metric=marksuser_id=jordan-pittier
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Thanks  Regards
 Ghanshyam Mann
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V Meeting

2015-06-23 Thread Peter Pouliot
Hi All,

Cancelling the meeting today while we deal with some CI related issues.

p

Peter J. Pouliot CISSP
Microsoft Enterprise Cloud Solutions
C:\OpenStack
New England Research  Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] Formal shared VIF type library w/ objects plugins

2015-06-23 Thread Daniel P. Berrange
In response to the proposal to add a VIF plugin script to the VIF port
binding data, I have put up a counter proposal which takes things a
bit further and in slightly different direction, with a strong focus
on object modelling.

Superficially inspired by os-brick, I'm suggesting we create an
os-vif library that uses oslo.versionedobjects to define a clear set
of VIF types and associated metadata fields that can be shared by
both Neutron and Nova. This would also define a plugin class contract
that Neutron mechanism vendors would implement to provide custom plug/unplug
actions to run on the compute nodes:

   https://review.openstack.org/#/c/193668/

This proposal describes an architecture with the following high-level
characteristics and split of responsibilities:

 - Definition of VIF types and associated config metadata.

 * Owned jointly by Nova and Neutron core teams
 * Code shared in os-vif library
 * Ensures core teams have 100% control over data on
   the REST API

 - Setup of compute host OS networking stack

 * Owned by Neutron mechanism vendor team
 * Code distributed by mechanism vendor
 * Allows vendors to innovate without bottleneck on Nova
   developers in common case.
 * In the uncommon event a new VIF type is required,
   this would still require os-vif modification with
   Nova and Neutron core team signoff.

 - Configuration of guest virtual machine VIFs ie libvirt XML

 * Owned by Nova virt driver team
 * Code distributed as part of Nova virt / VIF driver
 * Ensures hypervisor driver retains full control over
   how the guest instances are configured
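
To make the shape of this concrete, here is a purely illustrative sketch of a
shared VIF object and the plugin contract; the class and field names are
invented rather than taken from the spec:

import abc

from oslo_versionedobjects import base as ovo_base
from oslo_versionedobjects import fields


@ovo_base.VersionedObjectRegistry.register
class VIFOpenVSwitch(ovo_base.VersionedObject):
    """One concrete VIF type, owned jointly by Nova and Neutron."""
    VERSION = '1.0'
    fields = {
        'id': fields.UUIDField(),
        'address': fields.StringField(),      # MAC address
        'bridge_name': fields.StringField(),
        'port_profile': fields.DictOfStringsField(nullable=True),
    }


class PluginBase(metaclass=abc.ABCMeta):
    """Contract a Neutron mechanism vendor implements for the compute host."""

    @abc.abstractmethod
    def plug(self, vif, instance_info):
        """Wire the VIF into the host networking stack."""

    @abc.abstractmethod
    def unplug(self, vif, instance_info):
        """Tear the VIF back out of the host networking stack."""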

I've filed the spec against nova, but obviously we need review and buy in
from both Nova and Neutron core teams that this is workable, as it impacts
both projects. I didn't want to file a separate Neutron spec, as that
would split the discussion across two places.

Indeed, rather than replying to this mail, it'd be preferable if people
commented on the spec

   https://review.openstack.org/#/c/193668/

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Clarification of 'Puppet Modules' Project Scope

2015-06-23 Thread Richard Raseley

Emilien Macchi wrote:

I have a preference for #1 since IMHO it makes more sense for Midokura
to have their Puppet module close to their code but I would not be
against having it on Stackforge.

[...]

If you look at contributors [1], the history shows that this module has
been written by people working on Puppet OpenStack modules and it made
sense to have this repository on Stackforge to benefit from OpenStack
integration.
Until recently, puppet-vswitch was a dependency to run puppet-neutron.
See [2].

[1]https://github.com/openstack/puppet-vswitch/graphs/contributors
[2]
https://github.com/openstack/puppet-neutron/tree/stable/juno/manifests/plugins/ovs


To be less specific, Puppet modules that reside in the OpenStack namespace
are today:
* deploying an OpenStack project (neutron, horizon, etc)
* a dependency to deploy modules (openstacklib, vswitch)
* contain some code used by our community to help with CI,
documentation, consistency, etc (modulesync, cookiebutter, integration,
blueprints).


Emilien,

Thank you for the input on this. The criteria you listed above seem 
totally reasonable, and based upon them, I can understand the reason for 
not bringing this module into the OpenStack namespace. Just to re-state 
the criteria to ensure my own understanding:


---

OpenStack Puppet Modules:

For a module to become part of the OpenStack Puppet Modules project it 
should meet one of the following requirements:


1) Provides configuration management capability for an OpenStack project.
2) Satisfies a dependency for deploying module(s) which conform to #1 above.
3) Assists in the creation, documentation, lifecycle-management, and 
testing for modules which conform to #1 above.


StackForge Modules:

For a module to become part of the StackForge project it should meet one 
of the following requirements:


1) Provides configuration management capability for one or more
OpenStack-related projects.
2) Provides configuration management capability for a project which is 
intending to become part of OpenStack.


Proprietary Modules:

For modules not meeting any of the above-outlined requirements, we 
suggest that it live in its own vendor-provided project or repository, 
and not utilize the OpenStack-infra provided CI and tooling.


---

Does this seem to capture all the relevant pieces to you?

Regards,

Richard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][Solum] Request for feedback on new API resource

2015-06-23 Thread Everett Toews
On Jun 18, 2015, at 3:07 PM, Devdatta Kulkarni 
devdatta.kulka...@rackspace.commailto:devdatta.kulka...@rackspace.com wrote:

Hi, API WG team,

In Solum, recently we have been working on some changes to our REST API.

Basically, we have introduced a new resource ('app'). The spec for this has 
been accepted by Solum cores.
https://github.com/stackforge/solum-specs/blob/master/specs/liberty/app-resource.rst

Right now we have a patch for review implementing this spec:
https://review.openstack.org/#/c/185147/

If it is not too much to request, I was wondering if someone from your team can 
take a look
at the spec and the review, to see if we are not violating any of your 
guidelines.

Thank you for your help.

- Devdatta

Do you have this API documented anywhere?

Is there a spec or similar for this API change?

In our experience, it’s best to consider the API design apart from the 
implementation. The separation of concerns makes for a cleaner review and a 
better design. The Glance team did a good job of this in their Artifact 
Repository API specification [1].

Regards,
Everett

[1] https://review.openstack.org/#/c/177397/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [qpid] [zmq] [RabbitMQ] [oslo] Pending deprecation of driver(s).

2015-06-23 Thread Flavio Percoco

On 23/06/15 08:30 -0700, Joshua Harlow wrote:

Flavio Percoco wrote:

On 22/06/15 12:43 -0700, Clint Byrum wrote:

Excerpts from Adam Young's message of 2015-06-22 11:26:54 -0700:

On 06/20/2015 10:28 AM, Flavio Percoco wrote:




As promised: https://review.openstack.org/#/c/193804/

Cheers,

You can't deprecate a driver without providing a viable alternative.

Right now, QPID is the only driver that supports Kerberos.

To support Kerberos, you need support for the GSSAPI library, which is
usually done via support for SASL. Why is it so
convoluted...historical...

We've talked with both teams (I work with Ken) and I think Proton is
likely going to be the first to have support. The folks working on
Rabbit have the double hurdle of getting SASL support into Erlang first,
and then support for SASL into Rabbit. They've indicated a preference
for getting it into the AMQP 1.0 driver, and not bothering with the
existing, but, check me on this, the oslo.messaging code only supports
the pre-1.0 Rabbit.


So..until we have a viable alternative, please leave QPID alone. I've
not been bothering people about it, as there seems to be work to get
ahead, but until either Rabbit or Proton support Kerberos, I need QPID
as is.



Adam that is all great information, thank you. However, the policy is
clear: commit resources for integration testing, or it needs to move
out of tree.

It's not a mountain of resources. Just an integration test that passes
reliably, and a couple of QPID+OpenStack experts who we can contact when
it breaks. If nobody is willing to put that much effort in, then it is
not really something we want in our official messaging library tree.

So please if you can carry that message up to those who want it to
stay in
tree, that would be helpful and would put the stops on this deprecation.


Agreed with the above.

I'd also like to add that it was also discussed with folks previously
maintaining the qpid driver what their plans with that work were and
the agreement of deprecating it was reached with them.


Just to note, something that may be acceptable for people that need 
this, and don't mind doing a little bit of work to maintain it out of 
tree. It appears the kombu qpid driver does have SASL support (from a 
quick glance at the code):


- https://github.com/celery/kombu/blob/master/kombu/transport/qpid.py#L1250
- https://github.com/celery/kombu/blob/master/kombu/transport/qpid.py#L1210

So until this gets resolved and/or maintained it appears folks could 
just use the one built-in to kombu (assuming it works)? If the 
oslo.messaging 'impl_rabbit.py' one was more of a kombu 'wrapper' (and 
renamed 'impl_kombu.py'?) then this might have been even easier to 
support/make possible.


TBH, I'm more for making the impl_rabbit driver more rabbit-focused
rather than more kombu-focused. Using kombu for it is great and I
wouldn't advise moving away from it (not in the short run at least).
But if there are changes that we can do to make it more rabbit
specific, I'd be all for that.

Cheers,
Flavio




Food for thought :)



I know this doesn't solve the current problem of not having kerberos
support but it clears that this discussion has been had already.

That said, the point being raised is very good and unfortunate.

Cheers,
Flavio



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


pgpU4Zqgf6Qpy.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Fuel-library] Using librarian-puppet to manage upstream fuel-library modules

2015-06-23 Thread Alex Schultz
Hello everyone,

I took some time this morning to write out a document[0] that outlines
one possible way for us to manage our upstream modules in a more
consistent fashion. I know we've had a few emails bouncing around
lately around this topic of our use of upstream modules and how we can
improve this. I thought I would throw out my idea of leveraging
librarian-puppet to manage the upstream modules within our
fuel-library repository. Ideally, all upstream modules should come
from upstream sources and be removed from the fuel-library itself.
Unfortunately because of the way our repository sits today, this is a
very large undertaking and we do not currently have a way to manage
the inclusion of the modules in an automated way. I believe this is
where librarian-puppet can come in handy and provide a way to manage
the modules. Please take a look at my document[0] and let me know if
there are any questions.

Thanks,
-Alex

[0] 
https://docs.google.com/document/d/13aK1QOujp2leuHmbGMwNeZIRDr1bFgJi88nxE642xLA/edit?usp=sharing

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] We're enabling trusts by default in murano liberty

2015-06-23 Thread Nikolay Starodubtsev
Nice work! Thx, Kirill.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-06-23 17:20 GMT+03:00 Kirill Zaitsev kzait...@mirantis.com:

 Hello all.
 I’m writing this letter to notify you of a small, but important change,
 that is about to come.
 https://review.openstack.org/194615

 We’re enabling trusts by default in murano and asking everyone who is
 using murano for development to upgrade and/or start using them and help us
 properly test them. Use of trusts was implemented in Kilo. The main
 problem this decision might pose is the fact that our trusts code has not
 yet been battle-tested, therefore there might be some errors associated
 with it. But ideally you shouldn’t even notice that you enabled them! =)


 The reason behind this decision is simple — this is the intended behaviour
 of murano-engine. With current behaviour as soon as the user logs out of
 horizon — his or her token would become invalid and any deployment started
 by the user can fail. Trusts solve this problem naturally.


 --
 Kirill Zaitsev
 Murano team
 Software Engineer
 Mirantis, Inc

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Modular L2 Agent

2015-06-23 Thread Mathieu Rohon
Hi,

there are still some differences in terms of features supported by the two
implementations. Those I am aware of are:
-LB can support VLAN transparent networks as mentioned in [2];
-OVS supports MPLS tagging, needed by the bagpipe driver of the bgpvpn
project;
-when the ARP responder is activated (with l2pop), OVS supports the fallback to
learning mode if the ARP responder is not populated. The VXLAN module used
with LB does not support it, which leads to bugs like [3].

The framework mentioned by Irena in [1] is a good approach to report back
to the user what features are supported by the cloud and the underlying
technology in use.

[2]https://review.openstack.org/#/c/136554/3/specs/kilo/nfv-vlan-trunks.rst
[3]https://bugs.launchpad.net/neutron/+bug/1445089
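
As a side note, here is a very small sketch of the kind of common-agent /
backend-driver split being discussed further down this thread. The class
and method names are only illustrative and are not an existing Neutron
interface:

    import abc

    import six


    @six.add_metaclass(abc.ABCMeta)
    class L2DataPlaneDriver(object):
        """Hypothetical low-level interface a common L2 agent delegates to."""

        @abc.abstractmethod
        def ensure_bridge(self, network_id, segmentation_id):
            """Create the bridge/switch for a network if it does not exist."""

        @abc.abstractmethod
        def plug_port(self, bridge, device):
            """Attach a VM interface to the network's bridge."""


    class OVSDriver(L2DataPlaneDriver):
        """Would wrap the ovs-vsctl / ovs-ofctl calls."""
        def ensure_bridge(self, network_id, segmentation_id):
            pass  # e.g. shell out to ovs-vsctl --may-exist add-br

        def plug_port(self, bridge, device):
            pass  # e.g. ovs-vsctl add-port plus flow setup


    class LinuxBridgeDriver(L2DataPlaneDriver):
        """Would wrap the brctl / ip calls."""
        def ensure_bridge(self, network_id, segmentation_id):
            pass  # e.g. brctl addbr

        def plug_port(self, bridge, device):
            pass  # e.g. brctl addif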

On Tue, Jun 23, 2015 at 9:03 AM, Irena Berezovsky irenab@gmail.com
wrote:



 On Mon, Jun 22, 2015 at 7:48 PM, Sean M. Collins s...@coreitpro.com
 wrote:

 On Mon, Jun 22, 2015 at 10:47:39AM EDT, Salvatore Orlando wrote:
  I would probably start with something for enabling the L2 agent to
 process
  features such as QoS and security groups, working on the OVS agent,
 and
  then in a second step abstract a driver interface for communicating with
  the data plane. But I honestly do not know if this will keep the work
 too
  OVS-centric and therefore won't play well with the current efforts to
 put
  linux bridge on par with OVS in Neutron. For those question we should
 seek
  an answer from our glorious reference control plane lieutenant, and
 perhaps
  also from Sean Collins, who's coordinating efforts around linux bridge
  parity.

 I think that what Salvatore has suggested is good. If we start creating
 good API contracts, and well defined interfaces in the reference control
 plane agents - this is a good first step. Even if we start off by doing
 this just for the OVS agent, that'll be a good template for what we
 would need to do for any agent-driven L2 implementation - and it could
 easily be re-used by others.

 To be honest, if you squint hard enough there really are very few
 differences between what the OVS agent and the Linux Bridge agent do -
 the parts that handle control plane communication, processing
 data updates, and so forth should all be very similar.

 They only become different at the lower
 levels where it's brctl/ip2 vs. ovs-vsctl/ovs-ofctl CLI calls - so why
 maintain two separate agent implementations when quite a bit of what
 they do is functionally identical?


 As Miguel mentioned, the patch [1] adds support for a QoS driver in L2
 Agents. Since QoS support is planned to be leveraged by OVS and SR-IOV and
 maybe later by Linux Bridge, the idea is to make a common L2 Agent layer to
 enable generic support for features (extensions) and QoS as the first
 feature to support. This is not the Modular L2 Agent, but definitely the
 step into the right direction.
 This work should have minimal impact on Server side, and mostly about code
 reuse by L2 Agents.

 [1] https://review.openstack.org/#/c/189723/

 BR,
 Irena

 --
 Sean M. Collins

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Ceph Public Network Setting

2015-06-23 Thread Sergey Vasilenko


 I notice that in OpenStack deployed by Fuel, Ceph public network is on
 management network.


As far as I know, separating the Ceph public and management networks is in
scope for the 7.0 release.


/sv
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [qpid] [zmq] [RabbitMQ] [oslo] Pending deprecation of driver(s).

2015-06-23 Thread Joshua Harlow

Flavio Percoco wrote:

On 23/06/15 08:30 -0700, Joshua Harlow wrote:

Flavio Percoco wrote:

On 22/06/15 12:43 -0700, Clint Byrum wrote:

Excerpts from Adam Young's message of 2015-06-22 11:26:54 -0700:

On 06/20/2015 10:28 AM, Flavio Percoco wrote:




As promised: https://review.openstack.org/#/c/193804/

Cheers,

You can't deprecate a driver without providing a viable alternative.

Right now, QPID is the only driver that supports Kerberos.

To support Kerberos, you need support for the GSSAPI library,
which is
usually done via support for SASL. Why is it so
convoluted...historical...

We've talked with both teams (I work with Ken) and I think Proton is
likely going to be the first to have support. The folks working on
Rabbit have the double hurdle of getting SASL support into Erlang
first,
and then support for SASL into Rabbit. They've indicated a preference
for getting it into the AMQP 1.0 driver, and not bothering with the
existing, but, check me on this, the oslo.messaging code only supports
the pre-1.0 Rabbit.


So..until we have a viable alternative, please leave QPID alone. I've
not been bothering people about it, as there seems to be work to get
ahead, but until either Rabbit or Proton support Kerberos, I need QPID
as is.



Adam that is all great information, thank you. However, the policy is
clear: commit resources for integration testing, or it needs to move
out of tree.

It's not a mountain of resources. Just an integration test that passes
reliably, and a couple of QPID+OpenStack experts who we can contact
when
it breaks. If nobody is willing to put that much effort in, then it is
not really something we want in our official messaging library tree.

So please if you can carry that message up to those who want it to
stay in
tree, that would be helpful and would put the stops on this
deprecation.


Agreed with the above.

I'd also like to add that it was also discussed with folks previously
maintaining the qpid driver what their plans with that work were and
the agreement of deprecating it was reached with them.


Just to note, something that may be acceptable for people that need
this, and don't mind doing a little bit of work to maintain it out of
tree. It appears the kombu qpid driver does have SASL support (from a
quick glance at the code):

-
https://github.com/celery/kombu/blob/master/kombu/transport/qpid.py#L1250
-
https://github.com/celery/kombu/blob/master/kombu/transport/qpid.py#L1210

So until this gets resolved and/or maintained it appears folks could
just use the one built-in to kombu (assuming it works)? If the
oslo.messaging 'impl_rabbit.py' one was more of a kombu 'wrapper' (and
renamed 'impl_kombu.py'?) then this might have been even easier to
support/make possible.


TBH, I'm more for making the impl_rabbit driver more rabbit-focused
rather than more kombu-focused. Using kombu for it is great and I
wouldn't advise moving away from it (not in the short run at least).
But if there are changes that we can do to make it more rabbit
specific, I'd be all for that.

Cheers,
Flavio


Understood, I see the benefit of going both ways and I am fine with 
however it turns out...







Food for thought :)



I know this doesn't solve the current problem of not having kerberos
support but it clears that this discussion has been had already.

That said, the point being raised is very good and unfortunate.

Cheers,
Flavio



__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][stable] Cinder client broken in Juno

2015-06-23 Thread Kuvaja, Erno
Hi Mike,

We have similar functionality in Glance and I think this is a critical enough fix
to backport to Juno.

Considered for Icehouse, definitely not: 
http://lists.openstack.org/pipermail/openstack-announce/2015-June/000372.html

- Erno (jokke_)

 -Original Message-
 From: Mike Perez [mailto:thin...@gmail.com]
 Sent: 23 June 2015 16:50
 To: OpenStack Development Mailing List
 Cc: Jesse Keating
 Subject: [openstack-dev] [cinder][stable] Cinder client broken in Juno
 
 There was a bug raised [1] from some large deployments that the Cinder
 client 1.2.0 and beyond is not working because of version discovery.
 Unfortunately it's not taking into account of deployments that have a proxy.
 
 Cinder client asks Keystone to find a publicURL based on a version.
 Keystone will gather data from the service catalog and ask Cinder for a list 
 of
 the public endpoints and compare. For the proxy cases, Cinder is giving
 internal URLs back to the proxy and Keystone ends up using that instead of
 the publicURL in the service catalog. As a result, clients usually won't be 
 able
 to use the internal URL and rightfully so.
 
 This is all correctly set up on the deployer's side; this is an issue with the
 server-side code of Cinder.
 
 There is a patch that allows the deployer to specify a configuration option
 public_endpoint [2] which was introduced in a patch in Kilo [3]. The problem
 though is we can't expect people to already be running Kilo to take
 advantage of this, and it leaves deployers running stable releases of Juno in
 the dark with clients upgrading and using the latest.
 
 Two options:
 
 1) Revert version discovery which was introduced in Kilo for Cinder client.
 
 2) Grant exception on backporting [4] a patch that helps with this problem,
 and introduces a config option that does not change default behavior. I'm
 also not sure if this should be considered for Icehouse.
 
 
 [1] - https://launchpad.net/bugs/1464160
 [2] - http://docs.openstack.org/kilo/config-reference/content/cinder-conf-
 changes-kilo.html
 [3] - https://review.openstack.org/#/c/159374/
 [4] - https://review.openstack.org/#/c/194719/
 
 --
 Mike Perez
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [qpid] [zmq] [RabbitMQ] [oslo] Pending deprecation of driver(s).

2015-06-23 Thread Joshua Harlow

Davanum Srinivas wrote:

Josh,

We can generate options, if folks who need/want it are not here to do
the necessary work, not much we can do :(



True dat, u are very wise :-)


-- dims

On Tue, Jun 23, 2015 at 11:30 AM, Joshua Harlowharlo...@outlook.com  wrote:

Flavio Percoco wrote:

On 22/06/15 12:43 -0700, Clint Byrum wrote:

Excerpts from Adam Young's message of 2015-06-22 11:26:54 -0700:

On 06/20/2015 10:28 AM, Flavio Percoco wrote:

As promised: https://review.openstack.org/#/c/193804/

Cheers,

You can't deprecate a driver without providing a viable alternative.

Right now, QPID is the only driver that supports Kerberos.

To support Kerberos, you need support for the GSSAPI library, which is
usually done via support for SASL. Why is it so
convoluted...historical...

We've talked with both teams (I work with Ken) and I think Proton is
likely going to be the first to have support. The folks working on
Rabbit have the double hurdle of getting SASL support into Erlang first,
and then support for SASL into Rabbit. They've indicated a preference
for getting it into the AMQP 1.0 driver, and not bothering with the
existing, but, check me on this, the oslo.messaging code only supports
the pre-1.0 Rabbit.


So..until we have a viable alternative, please leave QPID alone. I've
not been bothering people about it, as there seems to be work to get
ahead, but until either Rabbit or Proton support Kerberos, I need QPID
as is.


Adam that is all great information, thank you. However, the policy is
clear: commit resources for integration testing, or it needs to move
out of tree.

It's not a mountain of resources. Just an integration test that passes
reliably, and a couple of QPID+OpenStack experts who we can contact when
it breaks. If nobody is willing to put that much effort in, then it is
not really something we want in our official messaging library tree.

So please if you can carry that message up to those who want it to
stay in
tree, that would be helpful and would put the stops on this deprecation.


Agreed with the above.

I'd also like to add that it was also discussed with folks previously
maintaining the qpid driver what their plans with that work were and
the agreement of deprecating it was reached with them.


Just to note, something that may be acceptable for people that need this,
and don't mind doing a little bit of work to maintain it out of tree. It
appears the kombu qpid driver does have SASL support (from a quick glance at
the code):

- https://github.com/celery/kombu/blob/master/kombu/transport/qpid.py#L1250
- https://github.com/celery/kombu/blob/master/kombu/transport/qpid.py#L1210

So until this gets resolved and/or maintained it appears folks could just
use the one built-in to kombu (assuming it works)? If the oslo.messaging
'impl_rabbit.py' one was more of a kombu 'wrapper' (and renamed
'impl_kombu.py'?) then this might have been even easier to support/make
possible.

Food for thought :)



I know this doesn't solve the current problem of not having kerberos
support but it clears that this discussion has been had already.

That said, the point being raised is very good and unfortunate.

Cheers,
Flavio



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HA] RFC: moving Pacemaker openstack-resource-agents to stackforge

2015-06-23 Thread Richard Raseley

Adam Spiers wrote:

Martin Loschwitz, who owns this repository, has since moved away from
OpenStack, and no longer maintains it. I recently proposed moving the
repository to StackForge, and he gave his consent and in fact said
that he had the same intention but hadn't got round to it


I think this is an excellent idea. +1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas] [octavia] No meeting this week, 6/24

2015-06-23 Thread Eichberger, German
All,

With the Neutron mid-cycle happening this week, we will skip the meeting
tomorrow. We will be back next week, 7/1/15.

Thanks,
German

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [QOS] Request for Additional QoS capabilities

2015-06-23 Thread Gal Sagie
Hi John,

Sorry for the delayed response, as I was on vacation with no internet
connection (you don't know how much
you miss it until you don't have it).

The work in terms of coding is pretty much done for the reference
implementation.
We initially tried to push it as a security group extension, but there is a
strong objection
to changing the security group API, so FWaaS may be the next best candidate if we
can find support
or other uses of this (like your use case).
(Of course, work will need to be added to support the connection
limit; we tried
to tackle brute-force prevention, which I personally see as a more
concerning attack vector internally.)

Out of curiosity, can you describe scenarios of a DDoS attack launched from an
internal VM?
I would assume most DDoS will happen from external traffic or a combined
attack from various internal
VMs (and then this might no longer fit as QoS).

But if you feel this belongs in QoS this can certainly be added on top of
the framework as Miguel suggested.

Thanks
Gal.

On Fri, Jun 19, 2015 at 12:39 AM, John Joyce (joycej) joy...@cisco.com
wrote:

  Gal:

   I had seen the brute force blueprint and noticed how close the use
 case was.  Can you tell me the current status of the work?  Do you feel
 confident it can get into Liberty?  Ideally, we think this fits better with
 QoS.  Also I don’t think of it as providing FWaaS as we see that all VMs
 would be protected by this when enabled, but maybe that is just
 terminology.   We think these protections are critical so we are more
 concerned with having the ability to protect against these cases than the
 specific API or service it falls under.  Yes we would be interested in
 working together to get this pushed through.


 John



 *From:* Gal Sagie [mailto:gal.sa...@gmail.com]
 *Sent:* Wednesday, June 17, 2015 12:45 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* lionel.zer...@huawei.com; Derek Chamorro (dechamor); Eran Gampel
 *Subject:* Re: [openstack-dev] [Neutron] [QOS] Request for Additional QoS
 capabilities



 Hi John,



 We were trying to push a very similar spec to enhance the security group
 API, we covered both DDoS case

 and another use case for brute force prevention (We did some research to
 identify common protocols login behaviour

 in order to identify brute force attacks using iptables) and even some UI
 work



 You can view the specs and implementations here:

 1) https://review.openstack.org/#/c/184243/

 2) https://review.openstack.org/#/c/154535/

 3) https://review.openstack.org/#/c/151247/

 4) https://review.openstack.org/#/c/161207/



 The spec didn't get approved as there is a strong opinion to keep the
 security group API compatible with Amazon.

 I think this and your request fit much more into FWaaS, and if this is
 something you would like to work together on and push

 I can help and align the above code to use FWaaS.



 Feel free to contact me if you have any question



 Thanks

 Gal.







 On Wed, Jun 17, 2015 at 6:58 PM, John Joyce (joycej) joy...@cisco.com
 wrote:

 Hello everyone:

   I would like to test the waters on some new functionality we think
 is needed to protect OpenStack deployments from some overload situations
 due to an excessive user or DDOS scenario.   We wrote this up in the style
 of an RFE.   Please let us know your thoughts and we can proceed with a
 formal RFE with more detail if there are no concerns raised.





 *What is being requested*

 This request is to extend the QOS APIs to include the ability to provide
 connection rate limiting

 *Why is it being requested*

 There are many scenarios where a VM may be intentionally malicious or
 become harmful to the network due to its rate of initializing TCP
 connections.   The reverse direction of a VM being attacked with an
 excessive amount of TCP connection requests either intentionally or due to
 overload is also problematic.

 *Implementation Choices*

There might be a number of ways to implement this,  but one of the
 easiest would appear to be to extend the APIs being developed under:
 https://review.openstack.org/#/c/187513/. An additional rule type
 “connections per-second” could be added.

 The dataplane implementation itself may be realized with netfilter and
 conntrack.

 *Alternatives*

 It would be possible to extend the security groups in a similar fashion,
 but due to the addition of rate limiting, QoS seems a more natural fit.

 *Who needs it*

 Cloud operators have experienced this issue in real deployments in a
 number of cases.




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 --

 Best Regards ,

 The G.

 __
 OpenStack Development Mailing List 

[openstack-dev] [neutron] dns-nameservers order not honored

2015-06-23 Thread Paul Ward
I haven't dug into the code yet, but from testing via CLI and REST API, 
it appears neutron does not honor the order in which users specify their 
dns-nameservers.  For example, no matter what order I specify 10.0.0.1 
and 10.0.0.2 for dns-nameservers, they are always ordered with the 
numerically lowest IP first when doing a subnet-show (ie, 10.0.0.1 will 
be first, even if I specified 10.0.0.2 first).  As stated above, CLI and 
REST API behave the same.


I believe this is a problem because these are passed to activation on a 
deployed VM in the order neutron lists them in the subnet.  A user may 
have a reason they want the numerically higher DNS IP listed first, say 
if they are trying to load balance their DNS servers.  By always 
ordering them numerically, we give them no way to do this.


So my question is... is this by design or an oversight?  If it's an 
oversight, I'll dig into the code and propose a patch.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][stable] Cinder client broken in Juno

2015-06-23 Thread Monty Taylor
On 06/23/2015 11:49 AM, Mike Perez wrote:
 There was a bug raised [1] from some large deployments that the Cinder
 client 1.2.0 and beyond is not working because of version discovery.
 Unfortunately it's not taking into account of deployments that have a
 proxy.
 
 Cinder client asks Keystone to find a publicURL based on a version.
 Keystone will gather data from the service catalog and ask Cinder for
 a list of the public endpoints and compare. For the proxy cases,
 Cinder is giving internal URLs back to the proxy and Keystone ends up
 using that instead of the publicURL in the service catalog. As a
 result, clients usually won't be able to use the internal URL and
 rightfully so.
 
 This is all correctly set up on the deployer's side; this is an issue with
 the server-side code of Cinder.
 
 There is a patch that allows the deployer to specify a configuration
 option public_endpoint [2] which was introduced in a patch in Kilo
 [3]. The problem though is we can't expect people to already be
 running Kilo to take advantage of this, and it leaves deployers
 running stable releases of Juno in the dark with clients upgrading and
 using the latest.
 
 Two options:
 
 1) Revert version discovery which was introduced in Kilo for Cinder client.
 
 2) Grant exception on backporting [4] a patch that helps with this
 problem, and introduces a config option that does not change default
 behavior. I'm also not sure if this should be considered for Icehouse.

I'm, sadly, going to vote for (1)

I LOVE the version discovery, but it needs to be able to work with
clouds that are out there. Some of them aren't running juno yet. Some of
them might not be in a position to deploy the backport patch.

OTOH, if you go with (2) - can we add support to cinderclient for
skipping version discovery if an API_VERSION is passed in?

 
 [1] - https://launchpad.net/bugs/1464160
 [2] - 
 http://docs.openstack.org/kilo/config-reference/content/cinder-conf-changes-kilo.html
 [3] - https://review.openstack.org/#/c/159374/
 [4] - https://review.openstack.org/#/c/194719/
 
 --
 Mike Perez
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-library] Using librarian-puppet to manage upstream fuel-library modules

2015-06-23 Thread Monty Taylor
On 06/23/2015 01:51 PM, Alex Schultz wrote:
 Hello everyone,
 
 I took some time this morning to write out a document[0] that outlines
 one possible way for us to manage our upstream modules in a more
 consistent fashion. I know we've had a few emails bouncing around
 lately around this topic of our use of upstream modules and how we can
 improve this. I thought I would throw out my idea of leveraging
 librarian-puppet to manage the upstream modules within our
 fuel-library repository. Ideally, all upstream modules should come
 from upstream sources and be removed from the fuel-library itself.
 Unfortunately because of the way our repository sits today, this is a
 very large undertaking and we do not currently have a way to manage
 the inclusion of the modules in an automated way. I believe this is
 where librarian-puppet can come in handy and provide a way to manage
 the modules. Please take a look at my document[0] and let me know if
 there are any questions.

FWIW - Over in Infra we also have a giant pile of external modules we
use. We looked at and chose not to use librarian because of complexity
and also fragility.

Instead, we wrote a simple script and a simple manifest file:

http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules.env

http://git.openstack.org/cgit/openstack-infra/system-config/tree/install_modules.sh

Feel free to make use of that if it's helpful.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][Solum] Request for feedback on new API resource

2015-06-23 Thread Ed Cranford
The app-resource spec [1] is as much documentation as we have on the new 
resources at present. It does illustrate some imagined healthy interactions 
with the proposed API, though looking at the mentioned Glance example I can see 
several ways we can improve our specs, for example by explaining more verbosely 
not just what each response might look like, but conceptually what has been 
asked for and what is being returned. There is clear precedent for modifying 
specs after ratification, so there should be no problem modifying even the 
app-resource spec to make our goals clearer.


The linked review [2] is the first of a planned series of such with the goal of 
implementing that spec. It creates a new data model for one of the three 
proposed resources and then exposes CRUD actions on that resource. Future 
reviews will incrementally add the other resources, add stronger data 
validation, integrate the new resources into the engine, and finally deprecate 
the obsolete resources and interactions.


We appreciate your advice on cleaner reviews and better design, especially 
since we're asking that you take the time to look over them, but we are 
primarily seeking your advice on our adherence to the API WG's guidelines, and 
if amending the spec to add detail and clarity is necessary we won't hesitate.


Thank you very much for your help.


[1] 
https://github.com/stackforge/solum-specs/blob/master/specs/liberty/app-resource.rst

[2] https://review.openstack.org/#/c/185147/



From: Everett Toews everett.to...@rackspace.com
Sent: Tuesday, June 23, 2015 8:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [api][Solum] Request for feedback on new API 
resource

On Jun 18, 2015, at 3:07 PM, Devdatta Kulkarni 
devdatta.kulka...@rackspace.commailto:devdatta.kulka...@rackspace.com wrote:

Hi, API WG team,

In Solum, recently we have been working on some changes to our REST API.

Basically, we have introduced a new resource ('app'). The spec for this has 
been accepted by Solum cores.
https://github.com/stackforge/solum-specs/blob/master/specs/liberty/app-resource.rst

Right now we have a patch for review implementing this spec:
https://review.openstack.org/#/c/185147/

If it is not too much to request, I was wondering if someone from your team can 
take a look
at the spec and the review, to see if we are not violating any of your 
guidelines.

Thank you for your help.

- Devdatta

Do you have this API documented anywhere?

Is there a spec or similar for this API change?

In our experience, it's best to consider the API design apart from the 
implementation. The separation of concerns makes for a cleaner review and a 
better design. The Glance team did a good job of this in their Artifact 
Repository API specification [1].

Regards,
Everett

[1] https://review.openstack.org/#/c/177397/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Heat] Tuskar v. Heat responsibilities

2015-06-23 Thread Jay Dobies
I didn't want to hijack Steve Hardy's thread about the recursive 
validation, but I wanted to summarize the needs that Tuskar and the UI 
have been trying to answer and some of the problems we ran into.


I think it's fairly common knowledge now that Tuskar and the THT 
templates diverged over the past few months, so I won't rehash it. If 
you need a summary of what happened, look here: 
https://jdob.fedorapeople.org/tuskar-heat.jpg


Below are some of the needs that the Tuskar UI in general has when 
working with the TripleO Heat Templates. I'm hoping we can come up with 
a decent list and use that to help drive what belongs in Heat v. what 
belongs elsewhere, and ultimately what that elsewhere actually is.



= Choosing Component Implementations =

== Background ==

I'm already off to a bad start, since the word component isn't 
actually a term in this context. What I'm referring to is the fact that 
we are starting to see what is almost a plugin model in the THT templates.


Previously, we had assumed that all of the overcloud configuration would 
be done through parameters. This is no longer the case as the 
resource_registry is used to add certain functionality.


For example, in overcloud-resource-registry-puppet.yaml, we see:

 # set to controller-config-pacemaker.yaml to enable pacemaker
 OS::TripleO::ControllerConfig: puppet/controller-config.yaml

That's a major overcloud configuration setting, but that choice isn't 
made through a parameter. It's in a different location and a different 
mechanism entirely.


Similarly, enabling a Netapp backend for Cinder is done by setting a 
resource_registry entry to change the CinderBackend template [1]. This 
is a slightly different case conceptually than HA since the original 
template being overridden is a noop [2], but the mechanics of how to set 
it are the same.
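
To make the mechanics concrete, the override is normally expressed as a
small Heat environment file layered over the registry and passed with -e
at stack-create time, along these lines (the exact resource names and
paths depend on the templates in use):

    resource_registry:
      # swap in the pacemaker variant referred to in the comment above
      OS::TripleO::ControllerConfig: puppet/controller-config-pacemaker.yaml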


There are also a number of pre and post hooks that exist in the 
overcloud template that we are seeing more and more implementations of. 
RHEL registration is implemented as such a hook [3].


I'm drawing a difference here between fundamental configuration changes 
(HA v. non-HA) and optional additions (RHEL registration). Again, 
mechanically they are implemented as resource_registry substitutions, 
though from a UI standpoint we'd likely want to treat them differently. 
Whether or not that difference is actually captured by the templates 
themselves or is purely in the UI is open to debate.


== Usage in TripleO ==

All of the examples I mentioned above have landed upstream and the Heat 
features necessary to facilitate them all exist.


What doesn't exist is a way to manipulate the resource_registry. Tuskar 
doesn't have APIs for that level of changes; it assumed all 
configuration changes would be through parameters and hasn't yet had 
time to add in support for dorking with the registry in this fashion.


While, technically, all of the resource_registry entries can be 
overridden, there are only a few that would make sense for a user to 
want to configure (I'm not talking about advanced users writing their 
own templates).


On top of that, only certain templates can be used to fulfill certain 
resource types. For instance, you can't point CinderBackend to 
rhel-registration.yaml. That information isn't explicitly captured by 
Heat templates. I suppose you could inspect usages of a resource type in 
overcloud to determine the api of that type and then compare that to 
possible implementation templates' parameter lists to figure out what is 
compatible, but that seems like a heavy-weight approach.


I mention that because part of the user experience would be knowing 
which resource types can have a template substitution made and what 
possible templates can fulfill it.


== Responsibility ==

Where should that be implemented? That's a good question.

The idea of resolving resource type uses against candidate template 
parameter lists could fall under the model Steve Hardy is proposing of 
having Heat do it (he suggested the validate call, but this may be 
leading us more towards template inspection sorts of APIs supported by Heat).


It is also possibly an addition to HOT, to somehow convey an interface 
so that we can more easily programatically look at a series of templates 
and understand how they play together. We used to be able to use the 
resource_registry to understand those relationships, but that's not 
going to work if we're trying to find substitutions into the registry.


Alternatively, if Heat/HOT has no interest in any of this, this is 
something that Tuskar (or a Tuskar-like substitute) will need to solve 
going forward.



= Consolidated Parameter List =

== Background ==

This is what Steve was getting at in his e-mail. I'll rehash the issue 
briefly.


We used to be able to look at the parameters list in the overcloud 
template and know all of the parameters that need to be specified to 
configure the overcloud.


The parameter passing is pretty strict, so if overcloud 

Re: [openstack-dev] [neutron] dns-nameservers order not honored

2015-06-23 Thread John Kasperski
Hi Paul,

There is an old bug on this issue:  
https://bugs.launchpad.net/neutron/+bug/1218629

If I remember correctly, the root of the problem was the database
definition for the DNS values.
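
If it is still the ordering that gets lost at the DB layer, the usual fix
is to carry an explicit order column in the model and sort on it when the
subnet is rendered. Purely as an illustration (this is not the actual
Neutron schema):

    import sqlalchemy as sa
    from sqlalchemy import orm
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class DNSNameServer(Base):
        __tablename__ = 'dnsnameservers'
        address = sa.Column(sa.String(128), primary_key=True)
        subnet_id = sa.Column(sa.String(36), sa.ForeignKey('subnets.id'),
                              primary_key=True)
        # Without a column like this, the rows come back in whatever order
        # the backend happens to return (often primary-key order, which
        # would explain the numerically lowest address showing up first).
        order = sa.Column(sa.Integer, nullable=False)


    class Subnet(Base):
        __tablename__ = 'subnets'
        id = sa.Column(sa.String(36), primary_key=True)
        dns_nameservers = orm.relationship(
            DNSNameServer,
            order_by=DNSNameServer.order,  # preserve the user-specified order
            lazy='joined')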

On 06/23/2015 01:48 PM, Paul Ward wrote:
 I haven't dug into the code yet, but from testing via CLI and REST
 API, it appears neutron does not honor the order in which users
 specify their dns-nameservers.  For example, no matter what order I
 specify 10.0.0.1 and 10.0.0.2 for dns-nameservers, they are always
 ordered with the numerically lowest IP first when doing a subnet-show
 (ie, 10.0.0.1 will be first, even if I specified 10.0.0.2 first).  As
 stated above, CLI and REST API behave the same.

 I believe this is a problem because these are passed to activation on
 a deployed VM in the order neutron lists them in the subnet.  A user
 may have a reason they want the numerically higher DNS IP listed
 first, say if they are trying to load balance their DNS servers.  By
 always ordering them numerically, we give them no way to do this.

 So my question is... is this by design or an oversight?  If it's an
 oversight, I'll dig into the code and propose a patch.


 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
John Kasperski


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-23 Thread Morgan Fainberg
Brant,

We likely need to backport a simplified version of the wsgi files and/or make
the Juno (and Kilo) versions of devstack use the same simplified / split
files. Grenade doesn't re-run stack - so new files that are outside pip's 
purview won't be used afaik.

--Morgan

Sent via mobile

 On Jun 23, 2015, at 13:07, Brant Knudson b...@acm.org wrote:
 
 
 
 On Wed, Jun 17, 2015 at 1:21 PM, Sean Dague s...@dague.net wrote:
 On 06/16/2015 05:25 PM, Chris Dent wrote:
  On Tue, 16 Jun 2015, Sean Dague wrote:
 
  I was just looking at the patches that put Nova under apache wsgi for
  the API, and there are a few things that I think are going in the wrong
  direction. Largely I think because they were copied from the
  lib/keystone code, which we've learned is kind of the wrong direction.
 
  Yes, that's certainly what I've done the few times I've done it.
  devstack is deeply encouraging of cargo culting for reasons that are
  not entirely clear.
 
 Yeh, hence why I decided to put the brakes on a little here and get this
 on the list.
 
  The first is the fact that a big reason for putting {SERVICES} under
  apache wsgi is we aren't running on a ton of weird unregistered ports.
  We're running on 80 and 443 (when appropriate). In order to do this we
  really need to namespace the API urls. Which means that service catalog
  needs to be updated appropriately.
 
  So:
 
  a) I'm very glad to hear of this. I've been bristling about the weird
 ports thing for the last year.
 
  b) You make it sound like there's been a plan in place to not use
 those ports for quite some time and we'd get to that when we all
 had some spare time. Where do I go to keep abreast of such plans?
 
 Unfortunately, this is one of those in the ether kinds of plans. It's
 been talked about for so long, but it never really got written down.
 Hopefully this can be driven into the service catalog standardization
 spec (or tag along somewhere close).
 
 Or if nothing else, we're documenting it now on the mailing list as
 permanent storage.
 
  I also think this -
  https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
  is completely wrong.
 
  The Apache configs should instead specify access rules such that the
  installed console entry point of nova-api can be used in place as the
  WSGIScript.
 
  I'm not able to parse this paragraph in any actionable way. The lines
  you reference are one of several ways of telling mod wsgi where the
  virtualenv is, which has to happen in some fashion if you are using
  a virtualenv.
 
  This doesn't appear to have anything to do with locating the module
  that contains the WSGI app, so I'm missing the connection. Can you
  explain please?
 
  (Basically I'm keen on getting gnocchi and ceilometer wsgi servers
  in devstack aligned with whatever the end game is, so knowing the plan
  makes it a bit easier.)
 
 Gah, the problem of linking to 'master' with line numbers. The three
 lines I cared about were:
 
 # copy proxy vhost and wsgi helper files
 sudo cp $NOVA_DIR/nova/wsgi/nova-api.py $NOVA_WSGI_DIR/nova-api
 sudo cp $NOVA_DIR/nova/wsgi/nova-ec2-api.py $NOVA_WSGI_DIR/nova-ec2-api
 
 I don't think that we should be copying py files around to other
 directories outside of normal pip install process. We should just have
 mod_wsgi reference a thing that is installed in /usr/{local}/bin or
 /usr/share via the python install process.
 
  This should also make lines like -
  https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
  L274 uneeded. (The WSGI Script will be in a known place). It will also
  make upgrades much more friendly.
 
  It sounds like maybe you are saying that the api console script and
  the module containing the wsgi 'application' variable ought to be the
  same thing. I don't reckon that's a great idea as the api console
  scripts will want to import a bunch of stuff that the wsgi application
  will not.
 
  Or I may be completely misreading you. It's been a long day, etc.
 
 They don't need to be actually the same thing. They could be different
 scripts, but they should be scripts that install via the normal pip
 install process to a place, and we reference them by known name.
 
  I think that we need to get these things sorted before any further
  progression here. Volunteers welcomed to help get us there.
 
  Find me, happy to help. The sooner we can kill wacky port weirdness
  the better.
 
 Agreed.
 
 -Sean
 
 --
 Sean Dague
 http://dague.net
 
 I've got a few related changes proposed to keystone and devstack:
 
 https://review.openstack.org/#/c/193891/ - Changes Apache Httpd config so 
 that /identity is the keystone public (aka main) handler and /identity_admin 
 is the keystone admin handler. Httpd can have multiple aliases for the same 
 wsgi handler so :5000 and :35357 still work. The follow-on patch at 
 https://review.openstack.org/193894 shows some further work to change config 
 so that the new endpoints 

Re: [openstack-dev] [cinder][stable] Cinder client broken in Juno

2015-06-23 Thread Jeremy Stanley
On 2015-06-23 08:49:55 -0700 (-0700), Mike Perez wrote:
[...]
 Cinder client asks Keystone to find a publicURL based on a version.
 Keystone will gather data from the service catalog and ask Cinder for
 a list of the public endpoints and compare. For the proxy cases,
 Cinder is giving internal URLs back to the proxy and Keystone ends up
 using that instead of the publicURL in the service catalog. As a
 result, clients usually won't be able to use the internal URL and
 rightfully so.
[...]

It seems like there would be an option #3: add a fallback behavior
to cinderclient to try its old connection method if it fails to
reach the discovered URL. I'm guessing there's some specific
reason that's impossible, or it would have already been in the
works?
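
For what it's worth, the client-side shape of that fallback could be as
simple as the following; the names here are made up for illustration and
are not actual cinderclient code:

    class EndpointNotReachable(Exception):
        """Raised when the discovered endpoint cannot actually be used."""


    def pick_volume_endpoint(discover, catalog_url, requested_version):
        # `discover` stands in for whatever callable does today's version
        # discovery; assume it raises EndpointNotReachable when the URL it
        # found cannot be reached (e.g. an internal URL leaked via a proxy).
        try:
            return discover(catalog_url, requested_version)
        except EndpointNotReachable:
            # option #3: behave like the pre-1.2.0 client and fall back to
            # the publicURL taken straight from the service catalog
            return catalog_url
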
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Fuel meeting participation.

2015-06-23 Thread Andrew Woodward
I'm not sure about 6/11 as I was traveling during the scheduled time and
there are no minutes annotated in the agenda, but I chaired 6/18 and 6/4,
and will continue to do so going forward whenever possible.

I will start to send more regular updates / reminders on the ML about the
schedule, agenda reminders and follow up after the meetings. I hope to see
improved participation as a result.

As a reminder the agenda is at:
https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda


On Tue, Jun 23, 2015 at 2:26 PM Sean M. Collins s...@coreitpro.com wrote:

 Are we having these meetings every week, and if not, are we announcing
 on the mailing list that they are cancelled?

 If not, why not? That'll put a dent in attendance if there is doubt
 around if a meeting will happen or not. I see there has been a couple
 weeks where there was no meeting run.

 --
 Sean M. Collins

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-23 Thread Brant Knudson
On Wed, Jun 17, 2015 at 1:21 PM, Sean Dague s...@dague.net wrote:

 On 06/16/2015 05:25 PM, Chris Dent wrote:
  On Tue, 16 Jun 2015, Sean Dague wrote:
 
  I was just looking at the patches that put Nova under apache wsgi for
  the API, and there are a few things that I think are going in the wrong
  direction. Largely I think because they were copied from the
  lib/keystone code, which we've learned is kind of the wrong direction.
 
  Yes, that's certainly what I've done the few times I've done it.
  devstack is deeply encouraging of cargo culting for reasons that are
  not entirely clear.

 Yeh, hence why I decided to put the brakes on a little here and get this
 on the list.

  The first is the fact that a big reason for putting {SERVICES} under
  apache wsgi is we aren't running on a ton of weird unregistered ports.
  We're running on 80 and 443 (when appropriate). In order to do this we
  really need to namespace the API urls. Which means that service catalog
  needs to be updated appropriately.
 
  So:
 
  a) I'm very glad to hear of this. I've been bristling about the weird
 ports thing for the last year.
 
  b) You make it sound like there's been a plan in place to not use
 those ports for quite some time and we'd get to that when we all
 had some spare time. Where do I go to keep abreast of such plans?

 Unfortunately, this is one of those in the ether kinds of plans. It's
 been talked about for so long, but it never really got written down.
 Hopefully this can be driven into the service catalog standardization
 spec (or tag along somewhere close).

 Or if nothing else, we're documenting it now on the mailing list as
 permanent storage.

  I also think this -
 
 https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
  is completely wrong.
 
  The Apache configs should instead specify access rules such that the
  installed console entry point of nova-api can be used in place as the
  WSGIScript.
 
  I'm not able to parse this paragraph in any actionable way. The lines
  you reference are one of several ways of telling mod_wsgi where the
  virtualenv is, which has to happen in some fashion if you are using
  a virtualenv.
 
  This doesn't appear to have anything to do with locating the module
  that contains the WSGI app, so I'm missing the connection. Can you
  explain please?
 
  (Basically I'm keen on getting gnocchi and ceilometer wsgi servers
  in devstack aligned with whatever the end game is, so knowing the plan
  makes it a bit easier.)

 Gah, the problem of linking to 'master' with line numbers. The three
 lines I cared about were:

 # copy proxy vhost and wsgi helper files
 sudo cp $NOVA_DIR/nova/wsgi/nova-api.py $NOVA_WSGI_DIR/nova-api
 sudo cp $NOVA_DIR/nova/wsgi/nova-ec2-api.py $NOVA_WSGI_DIR/nova-ec2-api

 I don't think that we should be copying py files around to other
 directories outside of normal pip install process. We should just have
 mod_wsgi reference a thing that is installed in /usr/{local}/bin or
 /usr/share via the python install process.

  This should also make lines like -
  https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
  L274 unneeded. (The WSGI Script will be in a known place). It will also
  make upgrades much more friendly.
 
  It sounds like maybe you are saying that the api console script and
  the module containing the wsgi 'application' variable ought to be the
  same thing. I don't reckon that's a great idea as the api console
  scripts will want to import a bunch of stuff that the wsgi application
  will not.
 
  Or I may be completely misreading you. It's been a long day, etc.

 They don't need to be actually the same thing. They could be different
 scripts, but they should be scripts that install via the normal pip
 install process to a place, and we reference them by known name.

  I think that we need to get these things sorted before any further
  progression here. Volunteers welcomed to help get us there.
 
  Find me, happy to help. The sooner we can kill wacky port weirdness
  the better.

 Agreed.

 -Sean

 --
 Sean Dague
 http://dague.net


I've got a few related changes proposed to keystone and devstack:

https://review.openstack.org/#/c/193891/ - Changes Apache Httpd config so
that /identity is the keystone public (aka main) handler and
/identity_admin is the keystone admin handler. Httpd can have multiple
aliases for the same wsgi handler so :5000 and :35357 still work. The
follow-on patch at https://review.openstack.org/193894 shows some further
work to change config so that the new endpoints are used by the tests.
There are a lot of devstack variables that aren't going to apply or are
going to be reinterpreted if we switch to this so I'll have to think about
how that's going to work.

https://review.openstack.org/#/c/194442/ - Creates files
keystone/httpd/admin.py and public.py, in addition to the old
httpd/keystone.py that you had to copy and rename or symlink.
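
For anyone following along, such a file is essentially a tiny WSGI entry-point
module; a minimal sketch of the idea (the paste config path and pipeline name
here are illustrative, not necessarily what the patch uses):

    # Rough sketch of a mod_wsgi entry point like keystone/httpd/public.py.
    from paste.deploy import loadapp

    # mod_wsgi looks for a module-level callable named "application"
    application = loadapp('config:/etc/keystone/keystone-paste.ini',
                          name='main')

The point made earlier in the thread is that modules like this should be
installed by the normal pip install process rather than copied around by
devstack.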


Re: [openstack-dev] [cinder][stable] Cinder client broken in Juno

2015-06-23 Thread Morgan Fainberg
My first choice here is to revert the version discovery. However, that may be
too disruptive. If it is too disruptive, then the backport patch is the right
approach.

In either case this is unfortunate. 

Sent via mobile

 On Jun 23, 2015, at 12:30, Monty Taylor mord...@inaugust.com wrote:
 
 On 06/23/2015 11:49 AM, Mike Perez wrote:
 There was a bug raised [1] from some large deployments that the Cinder
 client 1.2.0 and beyond is not working because of version discovery.
 Unfortunately it's not taking into account of deployments that have a
 proxy.
 
 Cinder client asks Keystone to find a publicURL based on a version.
 Keystone will gather data from the service catalog and ask Cinder for
 a list of the public endpoints and compare. For the proxy cases,
 Cinder is giving internal URLs back to the proxy and Keystone ends up
 using that instead of the publicURL in the service catalog. As a
 result, clients usually won't be able to use the internal URL and
 rightfully so.
 
 This is all correctly set up on the deployer's side; this is an issue with
 the server-side code of Cinder.
 
 There is a patch that allows the deployer to specify a configuration
 option public_endpoint [2] which was introduced in a patch in Kilo
 [3]. The problem though is we can't expect people to already be
 running Kilo to take advantage of this, and it leaves deployers
 running stable releases of Juno in the dark with clients upgrading and
 using the latest.
 
 Two options:
 
 1) Revert version discovery which was introduced in Kilo for Cinder client.
 
  2) Grant an exception for backporting [4] a patch that helps with this
 problem, and introduces a config option that does not change default
 behavior. I'm also not sure if this should be considered for Icehouse.
 
 I'm, sadly, going to vote for (1)
 
 I LOVE the version discovery, but it needs to be able to work with
 clouds that are out there. Some of them aren't running juno yet. Some of
 them might not be in a position to deploy the backport patch.
 
 OTOH, if you go with (2) - can we add support to cinderclient for
  skipping version discovery if an API_VERSION is passed in?
 
 
 [1] - https://launchpad.net/bugs/1464160
 [2] - 
 http://docs.openstack.org/kilo/config-reference/content/cinder-conf-changes-kilo.html
 [3] - https://review.openstack.org/#/c/159374/
 [4] - https://review.openstack.org/#/c/194719/
 
 --
 Mike Perez
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-23 Thread Dean Troyer
On Tue, Jun 23, 2015 at 4:08 PM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 We likely need to backport a simplified version of the wsgi files and/or
 make the Juno (and Kilo) versions of devstack use the same simplified /
 split files. Grenade doesn't re-run stack - so new files that are outside
 pip's purview won't be used, afaik.


You should only need to go back to kilo; the juno-kilo run should continue to
work the old way, and the kilo-master run should start the new way.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Fuel meeting participation.

2015-06-23 Thread Sean M. Collins
Are we having these meetings every week, and if not, are we announcing
on the mailing list that they are cancelled?

If not, why not? That'll put a dent in attendance if there is doubt
about whether a meeting will happen or not. I see there have been a couple
of weeks where no meeting was run.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Kevin Benton added to neutron-stable-maint

2015-06-23 Thread Somanchi Trinath
Congrats Kevin! :) 

-Original Message-
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com] 
Sent: Tuesday, June 23, 2015 4:07 PM
To: openstack-dev
Subject: [openstack-dev] [neutron][stable] Kevin Benton added to 
neutron-stable-maint

Hi all,

Just a heads-up that Kevin Benton has been added to the neutron-stable-maint team,
so now he has all the powers to +2/+A (and -2) backports.

Kevin is very active at voting for neutron backports (well, where is he NOT 
active?), so here you go.

Thanks
Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] and [lbaas] - GSLB API and backend support

2015-06-23 Thread Gandhi, Kunal
Hi All,

I have a very rough draft of the GSLB API and would like to upload it somewhere
for discussion. What is the best place to upload and collaborate on the draft?
Since the API docs have a lot of JSON payloads in them, I am not sure whether
Google Docs will be appropriate for that.

Regards
Kunal

 On Jun 15, 2015, at 10:03 PM, Doug Wiegley doug...@parksidesoftware.com 
 wrote:
 
 Hi all,
 
 We don’t have a rough draft API doc yet, so I’m suggesting that we postpone 
 tomorrow morning’s meeting until next week. Does anyone have any other agenda 
 items, or want the meeting tomorrow?
 
 Thanks,
 doug
 
 
 On Jun 2, 2015, at 10:52 AM, Doug Wiegley doug...@parksidesoftware.com 
 wrote:
 
 Hi all,
 
 The initial meeting logs can be found at 
 http://eavesdrop.openstack.org/meetings/gslb/2015/ , and we will be having 
 another meeting next week, same time, same channel.
 
 Thanks,
 doug
 
 
 On May 31, 2015, at 1:27 AM, Samuel Bercovici samu...@radware.com wrote:
 
 Good for me - Tuesday at 1600UTC
 
 
 -Original Message-
 From: Doug Wiegley [mailto:doug...@parksidesoftware.com] 
 Sent: Thursday, May 28, 2015 10:37 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [designate] and [lbaas] - GSLB API and backend 
 support
 
 
 On May 28, 2015, at 12:47 PM, Hayes, Graham graham.ha...@hp.com wrote:
 
 On 28/05/15 19:38, Adam Harwell wrote:
 I haven't seen any responses from my team yet, but I know we'd be 
 interested as well - we have done quite a bit of work on this in the 
 past, including dealing with the Designate team on this very subject. 
 We can be available most hours between 9am-6pm Monday-Friday CST.
 
 --Adam
 
 https://keybase.io/rm_you
 
 
 From: Rakesh Saha rsahaos...@gmail.com 
 mailto:rsahaos...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Date: Thursday, May 28, 2015 at 12:22 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [designate] and [lbaas] - GSLB API and 
 backend support
 
 Hi Kunal,
 I would like to participate as well.
 Mon-Fri morning US Pacific time works for me.
 
 Thanks,
 Rakesh Saha
 
 On Tue, May 26, 2015 at 8:45 PM, Vijay Venkatachalam
 vijay.venkatacha...@citrix.com
 mailto:vijay.venkatacha...@citrix.com wrote:
 
 We would like to participate as well.
 
 
 Monday-Friday Morning US time works for me..
 
 
 Thanks,
 
 Vijay V.
 
 
 *From:*Samuel Bercovici [mailto:samu...@radware.com
 mailto:samu...@radware.com]
 *Sent:* 26 May 2015 21:39
 
 
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* kunalhgan...@gmail.com mailto:kunalhgan...@gmail.com;
 v.jain...@gmail.com mailto:v.jain...@gmail.com;
 do...@a10networks.com mailto:do...@a10networks.com
 *Subject:* Re: [openstack-dev] [designate] and [lbaas] - GSLB
 API and backend support
 
 
 Hi,
 
 
 I would also like to participate.
 
 Friday is a non-working day in Israel (same as Saturday for most
 of you).
 
 So Monday- Thursday works best for me.
 
 
 -Sam.
 
 
 
 *From:*Doug Wiegley [mailto:doug...@parksidesoftware.com]
 *Sent:* Saturday, May 23, 2015 8:45 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* kunalhgan...@gmail.com mailto:kunalhgan...@gmail.com;
 v.jain...@gmail.com mailto:v.jain...@gmail.com;
 do...@a10networks.com mailto:do...@a10networks.com
 *Subject:* Re: [openstack-dev] [designate] and [lbaas] - GSLB
 API and backend support
 
 
 Of those two options, Friday would work better for me.
 
 
 Thanks,
 
 doug
 
 
 On May 22, 2015, at 9:33 PM, ki...@macinnes.ie
 mailto:ki...@macinnes.ie wrote:
 
 
 Hi Kunal,
 
 Thursday/Friday works for me - early morning PT works best,
 as I'm based in Ireland.
 
 I'll find some specific times the Designate folks are
 available over the next day or two and provide some
 options.. 
 
 Thanks,
 Kiall
 
 On 22 May 2015 7:24 pm, Gandhi, Kunal
 kunalhgan...@gmail.com mailto:kunalhgan...@gmail.com
 wrote:
 
 Hi All
 
 
 I wanted to start a discussion about adding support for GSLB
 to neutron-lbaas and designate. To be brief for folks who
 are new to GLB, GLB stands for Global Load Balancing and we
 use it for load balancing traffic across various
  geographical regions. A more detailed description of GLB can
 be found at my talk at the summit 

Re: [openstack-dev] [cross-project] RBAC Policy Basics

2015-06-23 Thread Osanai, Hisashi

On Tuesday, June 23, 2015 10:30 PM, Adam Young wrote:

 OK, I think I get it; you want to make a check specific to the roles
 on the service token.  The term "service roles" confused me.
 
 You can do this check with oslo.policy today.  Don't use the role
 check, just a generic check.
 It looks for an element in a collection, and returns true if it is
 in there; see
 
 http://git.openstack.org/cgit/openstack/oslo.policy/commit/?id=a08bc79f5c117696c43feb2e44c4dc5fd0013deb

Cool! This is what I wanted to have. :-)
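
For anyone else reading along, a minimal sketch of using that generic check
(the 'service_roles' key in the credentials and the rule name are assumptions
for illustration, not actual Nova policy):

    # Sketch: assumes the creds dict passed to enforce() carries a
    # 'service_roles' list populated from the service token.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF, use_conf=False)  # skip policy.json here
    enforcer.set_rules(policy.Rules.from_dict(
        {'sample:service_only_action': 'service_roles:service'}))

    creds = {'roles': ['_member_'], 'service_roles': ['service']}
    # True, because 'service' is an element of the creds['service_roles'] list
    print(enforcer.enforce('sample:service_only_action', {}, creds))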

Thanks!
Hisashi Osanai

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to properly detect and fence a compromised host (and why I dislike TrustedFilter)

2015-06-23 Thread Bhandaru, Malini K
I would like to add to Shane's points below.

1) The Trust filter can be treated as an API, with different underlying
implementations. Its default could even be "Not Implemented" and always return
false, and nova.conf could specify using the OAT trust implementation. This
would not break present-day users of the functionality.

2) The issue in the original bug is a VM waking up after a reboot on a host
that has not pre-determined whether the host is still trustable.
This is essentially asking for a feature that checks that all constraints
requested by a VM during launch still hold when it re-awakens, even if it is
not going through the Nova scheduler at this point.

This holds even for aggregates that might be specified by geography, or by a
reservation such as "Coke" or "Pepsi".
What if a host, even without a reboot, is reassigned from Coke to Pepsi?
There is cross-contamination.
Perhaps we need Nova hooks that can be registered with functions that check
expected aggregate values.

Better still, have libvirt functionality that makes a callback for each VM on
a host to ensure its constraints are satisfied on start-up/boot, and on
restart when it comes out of pause.

Using an aggregate for trust with a cron job to check for trust is inefficient
in this case; trust status gets updated only on a host reboot. Intel TXT is a
boot-time authentication.

Regards
Malini


-Original Message-
From: Wang, Shane [mailto:shane.w...@intel.com] 
Sent: Tuesday, June 23, 2015 9:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] How to properly detect and fence a 
compromised host (and why I dislike TrustedFilter)

AFAIK, TrustedFilter uses a cache for the trusted state, which is designed to
solve the performance issue mentioned here.

My thoughts on deprecating it are:
#1. We already have customers here in China who are using that filter. How are
they going to upgrade in the future?
#2. A dependency should not be a reason to deprecate a module in OpenStack; Nova
is not a stand-alone module, and it depends on various technologies and
libraries.

Intel is setting up the third-party CI for TCP/OAT in Liberty, which is meant to
address the concerns mentioned in this thread. Also, OAT is an open source
project which is being maintained as the long-term strategy.

For the situation where a host gets compromised: OAT checks trusted or untrusted
from the point of boot/reboot, and it is hard for OAT to detect whether a host
gets compromised while it is running. I don't know how to detect that without
the filter.
Back to Michael's question: the verification is done automatically by software
when a host boots or reboots, so will it be an overhead for the admin to have a
separate job?

Thanks.
--
Shane

-Original Message-
From: Michael Still [mailto:mi...@stillhq.com]
Sent: Wednesday, June 24, 2015 7:49 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] How to properly detect and fence a 
compromised host (and why I dislike TrustedFilter)

I agree. I feel like this is another example of functionality which is 
trivially implemented outside nova, and where it works much better if we don't 
do it. Couldn't an admin just have a cron job which verifies hosts, and then 
adds them to a compromised-hosts host aggregate if they're owned? I assume 
without testing it that you can migrate instances _out_ of a host aggregate you 
can't boot in?

Michael

On Tue, Jun 23, 2015 at 8:41 PM, Sylvain Bauza sba...@redhat.com wrote:
 Hi team,

 Some discussion occurred over IRC about a bug which was publicly open 
 related to TrustedFilter [1] I want to take the opportunity for 
 raising my concerns about that specific filter, why I dislike it and 
 how I think we could improve the situation - and clarify everyone's
 thoughts)

 The current situation is that way : Nova only checks if one host is 
 compromised only when the scheduler is called, ie. only when 
 booting/migrating/evacuating/unshelving an instance (well, not exactly 
 all the evacuate/live-migrate cases, but let's not discuss about that 
 now). When the request goes in the scheduler, all the hosts are 
 checked against all the enabled filters and the TrustedFilter is 
 making an external HTTP(S) call to the Attestation API service (not 
 handled by Nova) for *each host* to see if the host is valid (not 
 compromised) or not.

 To be clear, that's the only in-tree scheduler filter which explicitly 
 does an external call to a separate service that Nova is not managing.
 I can see at least 3 reasons for thinking about why it's bad :

 #1 : that's a terrible bottleneck for performance, because we're 
 IO-blocking N times given N hosts (we're even not multiplexing the 
 HTTP requests)
 #2 : all the filters are checking an internal Nova state for the host 
 

[openstack-dev] [infra][neutron] Requirements validations

2015-06-23 Thread Gary Kotton
Hi,
In the vmware_nsx project we have done the following:

  1.  In the test-requirements file we have a link to neutron master [1]. The
purpose of this is that the master branch needs to stay in sync with neutron
and all unit tests have to pass, so each time there is a change in neutron or
vmware_nsx we validate that nothing is broken.
  2.  On the infra side we have:
      *  The bot that posts updates for requirements. This keeps the
requirements file in sync.
      *  The requirements validation script running.

We have now hit an issue where the requirements validation is failing. For 
example [2]. The problem is that the requirements validation does not like:
pkg_resources.RequirementParseError: Expected version spec in -e 
git://git.openstack.org/openstack/neutron.git at 
git://git.openstack.org/openstack/neutron.git

Any suggestions for addressing this issue? Has anyone else hit this?

Thanks
Gary

[1]. 
https://github.com/openstack/vmware-nsx/blob/master/test-requirements.txt#L5
[2] 
http://logs.openstack.org/60/194360/1/check/gate-vmware-nsx-requirements/b294b53/console.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][keystone] Nova calls to Keystone

2015-06-23 Thread Rodrigo Duarte
On Tue, Jun 23, 2015 at 5:48 AM, John Garbutt j...@johngarbutt.com wrote:

 On 23 June 2015 at 03:30, Adam Young ayo...@redhat.com wrote:
  On 06/22/2015 10:13 PM, Sajeesh Cimson Sasi wrote:
  
  From: Adam Young [ayo...@redhat.com]
  Sent: 23 June 2015 00:01:48
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [nova][keystone] Nova calls to Keystone
 
  On 06/20/2015 02:46 PM, Sajeesh Cimson Sasi wrote:
 
  Hi All,
 I need your advice for the implementation of the following blueprint.
  https://review.openstack.org/#/c/160605 .
 All the use cases mentioned in the blueprint have  been implemented
 and
  the complete code is up for review.
https://review.openstack.org/#/c/149828/
However, we have an issue on which we need your input. In the nova
 quota
  api call, keystone calls are made to
get the parent_id and the child project or sub project list. This is
  required because nova doesn't store any information
regarding the hierarchy.

 This is maybe a dumb question, but...

 Could this information not come from the keystone middleware at the
 same time we get all the other identity information, and just live in
 the context?


Unfortunately no, the project hierarchy information is only available in
the GET /projects API - having this in the token so it could live in the
context could be a nice improvement (although this would need to be feasible
for all types of tokens).


 Hierarchy Information is  taken during run time,
  from keystone. Since the keystone calls are
made inside the api call, it is not possible to give any dummy or  fake
  values while writing the unit tests. If the keystone
call was made outside the api call, we could have given fake values in
 the
  test cases. However,  the keystone calls for
 parent_id and child projects are made inside the api call.
Can anyone suggest an elegant solution to this problem? What is the
 proper
  way to implement this ?
  Did anybody encounter and solve a  similar  problem ? Many thanks for
  any suggestions!
   best regards
 sajeesh
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  If you are talking to a live Keystone server, make sure you are using
 valid
  data.
 
  If you are not talking to a live keystone server in a unit test, use
  RequestsMock or equivalent (varied by project)  to handle the HTTP
 request
  and response.
 
  A worst case approach is to monkey patch the Keystoneclient.  Please
 don't
  do that if you can avoid it;  better to provide a mock alternative.
 
 
  Hi Adam,
 Thanks a lot. I am not planning to talk to the live
 keystone
  server in the unit test.
 I don't think that I need to monkey patch the
 KeystoneClient.
  In the nova api code, there are two methods (get_parent_project and
  get_immediate_child_list), which use keystoneclient. I can monkey patch
 those
  two methods to return fixed data according to a fake hierarchy. Am I
 right ?
 
 
  It's not great, but not horrible.  It seems to match the scope of what you
  are testing.  However, you might want to consider doing a mock for the whole
  Keystoneclient call, as that really should be outside of the unit test for
  the Nova code.
 

 Please use mock to do that for you, following the pattern of the
 existing Nova unit tests. I think you will find that easier.


Maybe point out where he can find similar tests?
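
For instance, a hedged sketch of that pattern (the patch target below is
illustrative; point it at wherever the two keystone-calling helpers actually
live):

    import unittest

    import mock

    CONTROLLER = 'nova.api.openstack.compute.quota_sets.QuotaSetsController'

    class QuotaHierarchyTest(unittest.TestCase):
        @mock.patch(CONTROLLER + '.get_immediate_child_list',
                    return_value=['child-a-uuid', 'child-b-uuid'])
        @mock.patch(CONTROLLER + '.get_parent_project',
                    return_value='parent-project-uuid')
        def test_show_with_fake_hierarchy(self, mock_parent, mock_children):
            # drive the controller's show()/update() as the existing quota
            # tests do; the keystone-backed helpers never leave the process
            self.assertEqual('parent-project-uuid', mock_parent.return_value)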

Thanks!



 Thanks,
 John

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rodrigo Duarte Sousa
Senior Software Engineer at Advanced OpenStack Brazil
Distributed Systems Laboratory
MSc in Computer Science
Federal University of Campina Grande
Campina Grande, PB - Brazil
http://rodrigods.com http://lsd.ufcg.edu.br/%7Erodrigods
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][keystone] Nova calls to Keystone

2015-06-23 Thread Sajeesh Cimson Sasi


From: John Garbutt [j...@johngarbutt.com]
Sent: 23 June 2015 14:18:44
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][keystone] Nova calls to Keystone

On 23 June 2015 at 03:30, Adam Young ayo...@redhat.com wrote:
 On 06/22/2015 10:13 PM, Sajeesh Cimson Sasi wrote:
 
 From: Adam Young [ayo...@redhat.com]
 Sent: 23 June 2015 00:01:48
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova][keystone] Nova calls to Keystone

 On 06/20/2015 02:46 PM, Sajeesh Cimson Sasi wrote:

 Hi All,
I need your advice for the implementation of the following blueprint.
 https://review.openstack.org/#/c/160605 .
All the use cases mentioned in the blueprint have  been implemented and
 the complete code is up for review.
   https://review.openstack.org/#/c/149828/
   However, we have an issue on which we need your input. In the nova quota
 api call, keystone calls are made to
   get the parent_id and the child project or sub project list. This is
 required because nova doesn't store any information
   regarding the hierarchy.

This is maybe a dumb question, but...

Could this information not come from the keystone middleware at the
same time we get all the other identity information, and just live in
the context?

[[**

 Initially there was a plan to keep the hierarchy information in the keystone
token itself, but that plan was dropped mainly because of concerns regarding
the size of the token. There may have been other reasons as well. Currently,
the hierarchy info needs to be retrieved separately.
*
**]]


Hierarchy Information is  taken during run time,
 from keystone. Since the keystone calls are
   made inside the api call, it is not possible to give any dummy or  fake
 values while writing the unit tests. If the keystone
   call was made outside the api call, we could have given fake values in the
 test cases. However,  the keystone calls for
parent_id and child projects are made inside the api call.
   Can anyone suggest an elegant solution to this problem? What is the proper
 way to implement this ?
 Did anybody encounter and solve a  similar  problem ? Many thanks for
 any suggestions!
  best regards
sajeesh


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 If you are talking to a live Keystone server, make sure you are using valid
 data.

 If you are not talking to a live keystone server in a unit test, use
 RequestsMock or equivalent (varied by project)  to handle the HTTP request
 and response.

 A worst case approach is to monkey patch the Keystoneclient.  Please don't
 do that if you can avoid it;  better to provide a mock alternative.


 Hi Adam,
Thanks a lot. I am not planning to talk to the live keystone
 server in the unit test.
I don't think that I need to monkey patch the KeystoneClient.
 In the nova api code, there are two methods (get_parent_project and
 get_immediate_child_list), which use keystoneclient. I can monkey patch those
 two methods to return fixed data according to a fake hierarchy. Am I right ?


 It's not great, but not horrible.  It seems to match the scope of what you
 are testing.  However, you might want to consider doing a mock for the whole
 Keystoneclient call, as that really should be outside of the unit test for
 the Nova code.


Please use mock to do that for you, following the pattern of the
existing Nova unit tests. I think you will find that easier.
[[**
*
 If the hierarchy info were in the token itself, I could have easily followed
the existing Nova unit tests. Now I am afraid that I have to do something
different.

*
**]]

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron] Requirements validations

2015-06-23 Thread Ihar Hrachyshka

On 06/23/2015 11:16 AM, Gary Kotton wrote:
 Hi, In the vmware_nsx project we have done the following:
 
 1. In the test_requirements file we have a link to the neutron
 master [1]. The purpose for this is that the master branch needs to
 be in sync with the neutron branch and all unit tests have to pass.
 So each time there is a change in neutron or vmware_nsx we
 validate that nothing is broken. 2. On the infra side we have: 1.
 the bot that posts updates for requirements. This keeps the 
 requirements file in sync. 2. The requirements validation script
 running
 
 We have now hit an issue where the requirements validation is
 failing. For example [2]. The problem is that the requirements
 validation does not like: pkg_resources.RequirementParseError:
 Expected version spec in -e 
 git://git.openstack.org/openstack/neutron.git at 
 git://git.openstack.org/openstack/neutron.git”
 
 Any suggestions for addressing this issue? Has anyone else hit
 this?
 

http://lists.openstack.org/pipermail/openstack-dev/2015-June/065747.html

Move the dep to tox.ini, as e.g. *aas repos do.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] [EDP] about get_job_status in oozie engine

2015-06-23 Thread lu jander
Hi Trevor

in sahara oozie engine (sahara/service/edp/oozie/engine.py
sahara/service/edp/oozie/oozie.py)

The function get_job_status actually returns not only the status of the job
but all the info about the job, so I think we should rename this function to
get_job_info; wouldn't that be more convenient for us? The reason is that I
want to add a function named get_job_info, but I find that it already exists
here under a confusing name.
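
A tiny sketch of what I mean (hypothetical, not the real engine code): keep
get_job_status as a thin wrapper so existing callers don't break:

    class OozieJobEngine(object):
        def __init__(self, oozie_client):
            self._client = oozie_client   # assumed to expose get_job_info()

        def get_job_info(self, job_execution):
            # the renamed call: returns the whole info dict (status, times, ...)
            return self._client.get_job_info(job_execution)

        def get_job_status(self, job_execution):
            # backwards-compatible wrapper around the renamed call
            return {'status': self.get_job_info(job_execution).get('status')}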
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Modular L2 Agent

2015-06-23 Thread Irena Berezovsky
On Mon, Jun 22, 2015 at 7:48 PM, Sean M. Collins s...@coreitpro.com wrote:

 On Mon, Jun 22, 2015 at 10:47:39AM EDT, Salvatore Orlando wrote:
  I would probably start with something for enabling the L2 agent to
 process
  features such as QoS and security groups, working on the OVS agent, and
  then in a second step abstract a driver interface for communicating with
  the data plane. But I honestly do not know if this will keep the work too
  OVS-centric and therefore won't play well with the current efforts to
 put
  linux bridge on par with OVS in Neutron. For those question we should
 seek
  an answer from our glorious reference control plane lieutenant, and
 perhaps
  also from Sean Collins, who's coordinating efforts around linux bridge
  parity.

 I think that what Salvatore has suggested is good. If we start creating
 good API contracts, and well defined interfaces in the reference control
 plane agents - this is a good first step. Even if we start off by doing
 this just for the OVS agent, that'll be a good template for what we
 would need to do for any agent-driven L2 implementation - and it could
 easily be re-used by others.

 To be honest, if you squint hard enough there really are very few
 differences between what the OVS agent and the Linux Bridge agent do -
 the parts that handle control plane communication, processing
 data updates, and so forth should all be very similar.

 They only become different at the lower
 levels where it's brctl/ip2 vs. ovs-vsctl/ovs-ofctl CLI calls - so why
 maintain two separate agent implementations when quite a bit of what
 they do is functionally identical?


As Miguel mentioned, the patch [1] adds support for a QoS driver in L2
agents. Since QoS support is planned to be leveraged by OVS and SR-IOV, and
maybe later by Linux Bridge, the idea is to make a common L2 agent layer to
enable generic support for features (extensions), with QoS as the first
feature to support. This is not the Modular L2 Agent, but it is definitely a
step in the right direction.
This work should have minimal impact on the server side, and is mostly about
code reuse by L2 agents.

[1] https://review.openstack.org/#/c/189723/
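
To make the shape of that concrete, here is a hypothetical sketch of such a
common extension hook (names are illustrative only, not the interface proposed
in [1]):

    import abc

    import six

    @six.add_metaclass(abc.ABCMeta)
    class AgentExtension(object):
        """What a feature (e.g. QoS) implements to plug into any L2 agent."""

        @abc.abstractmethod
        def initialize(self):
            """Set up RPC callbacks, backend drivers, etc."""

        @abc.abstractmethod
        def handle_port(self, context, port_details):
            """Apply the feature to a port the agent has just wired up."""

    class QosAgentExtension(AgentExtension):
        def __init__(self, qos_driver):
            # the driver is the only backend-specific piece (OVS, SR-IOV, ...)
            self.qos_driver = qos_driver

        def initialize(self):
            self.qos_driver.initialize()

        def handle_port(self, context, port_details):
            self.qos_driver.create(port_details, port_details.get('qos_policy'))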

BR,
Irena

 --
 Sean M. Collins

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Mistral] Help with a patch

2015-06-23 Thread Renat Akhmerov
Can you please confirm that the issue has been fixed?

The thing is, AFAIK Solum was using the old version of the Mistral API that is
no longer supported (this was announced a couple of months ago), so I just want
to make sure you're using the new API.

Renat Akhmerov
@ Mirantis Inc.



 On 18 Jun 2015, at 20:11, Nikolay Makhotkin nmakhot...@mirantis.com wrote:
 
 Hi, Devdatta! 
 
 Thank you for catching this and for the patch. I already reviewed it and it 
 has been merged.
  
 -- 
 Best Regards,
 Nikolay
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] V3 Authentication for swift store

2015-06-23 Thread Coles, Alistair
Jamie - thanks for the link to your blog.

I remember the Paris discussion :) And I also noted the Vancouver discussion re.
the SDK not necessarily being targeted at service-to-service interactions. I
sense there is renewed desire to maintain and improve swiftclient, and a few of
us are interested in looking into the session support (but lacking free cycles
right now :/). We need to figure out a nice way to maintain the v1 auth mode as
and when sessions come into the Connection - there was a very brief conversation
in Vancouver around maybe encapsulating the v1 auth in a keystone-session-like
object.
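
To sketch what that could look like from a caller's point of view (the
keystoneclient session and auth plugin below are real; the session= argument
to Connection is the hypothetical part):

    from keystoneclient.auth.identity import v3
    from keystoneclient import session
    import swiftclient

    auth = v3.Password(auth_url='https://keystone.example.com:5000/v3',
                       username='glance', password='secret',
                       project_name='service',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # hypothetical: Connection would call sess.get_endpoint()/get_token()
    # instead of its own get_auth() when a session is supplied
    conn = swiftclient.Connection(session=sess)
    conn.put_container('images')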

Alistair

 -Original Message-
 From: Jamie Lennox [mailto:jamielen...@redhat.com]
 Sent: 19 June 2015 02:27
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] V3 Authentication for swift store
 
 
 
 - Original Message -
  From: Alistair Coles alistair.co...@hp.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Sent: Thursday, 18 June, 2015 4:39:52 PM
  Subject: Re: [openstack-dev] V3 Authentication for swift store
 
 
 
   -Original Message-
   From: Jamie Lennox [mailto:jamielen...@redhat.com]
   Sent: 18 June 2015 07:02
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: [openstack-dev] [glance] V3 Authentication for swift store
  
   Hey everyone,
  
   TL;DR: glance_store requires a way to do v3 authentication to the
   swift backend.
  
   snip
  
   However in future we are trying to open up authentication so it's
   not limited to only user/password authentication. Immediate goals
   for service to service communications are to enable SSL client
   certificates and kerberos authentication. This would be handled by
   keystoneclient sessions but they are not supported by swift and it
   would require a significant rewrite of swiftclient to do, and the
   swift team has indicated they do not which to invest more time into
   their client.
 
  If we consider specifically the swiftclient Connection class, I wonder
  how significant a rewrite would be to support session objects? I'm not
  too familiar with sessions - is a session an object with an interface
  to fetch a token and service endpoint url? If so maybe Connection
  could accept a session in lieu of auth options and call the session
  rather than its get_auth methods.
 
  If we can move towards sessions in swiftclient then that would be good
  IMHO, since we have also have requirement to support fetching a
  service token [1], which I guess would (now or in future) also be handled by
 the session?
 
  [1] https://review.openstack.org/182640
 
  Alistair
 
 
 So the sessions work is built on, and is modelled after requests.Session. It
 consists of two parts, the session which is your transport object involving 
 things
 like CA certs, verify flags etc and an auth plugin which is how we can handle 
 new
 auth mechanisms. Once coupled the interface looks very similar to a
 requests.Session with get(), post(), request() etc methods, with the addition 
 that
 requests are automatically authenticated and things like the service catalog 
 are
 handled for you. I wrote a blog post a while back which explains many of the
 concepts[2].
 
 The fastest way I could see including Sessions into swiftclient would be to 
 create
 new Connection and HttpConnection objects. Would this be something swift is
 interested in? I didn't mean to offend when saying that you didn't want to put
 any more time into the client, there was a whole session in Paris about how 
 the
 client had problems but it was just going to limp along until SDK was ready. 
 Side
 note, i don't know how this decision will be affected based on Vancouver
 conversations about how SDK may not be suitable for service-service
 communications.
 
 Regarding service tokens, we have an auth plugin that is passed down from
 auth_token middleware that will include X-Service-Token in requests which I
 think swiftclient would benefit from.
 
 
 Jamie
 
 [2] http://www.jamielennox.net/blog/2014/09/15/how-to-use-keystoneclient-
 sessions/
 
 _
 _
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Proposal to add a new repository

2015-06-23 Thread Dmitry Tantsur

On 06/23/2015 02:11 AM, Devananda van der Veen wrote:

Oh - one more thing. If ahc-tools depends on data gathered by
enovance/hardware, then I'm not sure it makes sense to import one to
openstack/ without the other.


Maybe. We'll chat with our folks about it.



-Deva

On Mon, Jun 22, 2015 at 5:08 PM Devananda van der Veen
devananda@gmail.com mailto:devananda@gmail.com wrote:

I'm
On Mon, Jun 22, 2015 at 8:19 AM John Trowbridge tr...@redhat.com
mailto:tr...@redhat.com wrote:



On 06/22/2015 10:40 AM, Dmitry Tantsur wrote:
  On 06/22/2015 04:19 PM, Devananda van der Veen wrote:
  Hi John,
 
  Thanks for the excellent summary! I found it very helpful to
get caught
  up. I'd like to make sure I understand the direction ahc is
going. A
  couple questions...

Thanks for your interest.

 
  Let me add my $0.5 :)
 
 
  I see that ahc is storing its information in swift. That's
clever, but
  if Ironic provided a blob store for each node, would that be
better?

If the blob is large enough, this would be better. Originally we
stored
the data in the extra column of the Ironic db, but that proved
disastrous:

https://bugs.launchpad.net/ironic-inspector/+bug/1461252

 
  We discussed adding a search API to Ironic at the Vancouver
summit,
  though no work has been done on that yet, afaik. If ahc is
going to grow
  a REST API for searching for nodes based on specific
criteria that it
  discovered, could/should we combine these within Ironic's API?
 
  I think John meant having API to replace scripts, so I guess
search
  won't help. When we're talking about advanced matching, we're
talking
  about the following:
  1. We have a ramdisk tool (based on [8]) to get an insane
  amount of facts from within the ramdisk (say, 1000 of them)
  2. We have an Inspector plugin to put them all in Swift (or
Ironic blob
  storage as above)
  3. We have config files (aka rules) written in special
JSON-alike DSL to
  do matching (one of the weak points is that these are files -
I'd like
  an API endpoint to accept these rules instead).
  4. We have a script to run this DSL and get some output
(match/not match
  + some matched variables - similar to what regexps do).
  As I understood it John want the latter to become an API
endpoint,
  accepting rules (and maybe node UUIDs) and outputting some
result.
 
  Not sure about benchmarking here, but again, it's probably an API
  endpoint that accepts some minimal expectations, and puts
failed nodes
  to maintenance mode, if they fail to comply (again, that's how I
  understood it).
 
  It's not hard to make these API endpoints part of Inspector,
but it's
  somewhat undesirable to have them optional...
 
 
   From a service coupling perspective, I like the approach
that ahc is
  optional, and also that Ironic-inspector is optional,
because this keeps
  the simple use-case for Ironic, well, simple! That said,
this seems more
  like a configuration setting (should inspector do extra
things?) than an
  entirely separate service, and separating them might be
unnecessarily
  complicated.
 
  We keep thinking about it as well. After all, right now it's
just a
  couple of utilities. There are 2 more concerns that initially
made me
  pull out this code:
  1. ahc-tools currently depends on the library [8], which I
wish would be
  developed much more openly


  2. it's cool that inspector is pluggable, but it has its
cost: there's a
  poor feedback loop from inspector processing plugins to a
user - like
  with all highly asynchronous code
  3. it's also not possible (at least for now) to request a set of
  processing plugins when starting introspection via inspector.
 
  We solved the latter 2 problems by moving code to scripts. So now
  Inspector only puts some data to Swift, and scripts can do
everything else.
 
  So now we've left with
  1. dependency on hardware library
  2. not very stable interface, much less stable than one of
Inspector
 
  We still wonder how to solve these 2 without creating one more
  repository. Any ideas are welcome :)


Oh - good point. There's some neat looking functionality in
enovance/hardware repository, but yea, 

Re: [openstack-dev] [Ironic][Horizon][Tuskar-ui] Making a dashboard for Ironic

2015-06-23 Thread NiuZhenguo
Hi folks,
I must admit that I'll drop my efforts on making the Ironic dashboard, as
ironic-webclient has been developed with the support of the Ironic community
and is now being added to the openstack namespace at the request of the Ironic
PTL, so there's no need to duplicate efforts.
-zhenguo

From: niuzhenguo...@hotmail.com
To: openstack-dev@lists.openstack.org
Date: Mon, 22 Jun 2015 23:01:40 +0800
Subject: Re: [openstack-dev] [Ironic][Horizon][Tuskar-ui] Making a dashboard 
for Ironic

Hi Devananda,
 The git history appears to be squashed [1], and most files don't have an 
 attribution header [2], and  none of the headers refer to the company who 
 appears to be behind this (Huawei). What's the  rationale for these 
 inconsistencies, and who is actually behind the code?
No headers refer to Huawei because the files are based on the Angular dashboard
code from Horizon, and right now it is only an empty baremetal dashboard which
can be combined with the ironic-webclient created by Krotscheck. I think pushing the 
initial horizon compatible dashboard to stackforge or openstack is the first 
step for collaboration, then we can build a team for that.
 Are you going to maintain this project personally, or is there a team at 
 Huawei (or somewhere else)  that is going to do that? Or are you expecting 
 Ironic's current developer teams to take ownership of  this code and 
 maintain it?
There is not a team working on the project. The reason I want to create
ironic-dashboard is that I contribute to both Horizon and Ironic, and I can
help to fill the gap that there is no Horizon support for Ironic. As for who
should maintain the project, I'm not sure whether it should be Horizon, Ironic,
or even a separate team.

 Are you/they going to become part of Ironic's community, attend our weekly 
 meetings, and follow  our design process?

Sure, the dashboard should be geared towards Ironic.
 What is the vision / goal that is being working towards? What is the scope of 
 this dashboard? How  does it fit in with our existing plans?
The end goal for ironic-dashboard is that it can be configured as a standalone 
UI or a dashboard within Horizon. Currently the code on GitHub is an empty
dashboard; the panels and features should be developed with the Ironic
community.

-zhenguo

From: devananda@gmail.com
Date: Fri, 19 Jun 2015 22:40:53 +
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic][Horizon][Tuskar-ui] Making a dashboard 
for Ironic

Hi Jim,

Your characterization of this is incomplete. These are not two equal projects 
proposing the same thing in different ways, and while I very much want to 
encourage collaboration, I value our community and feel that this was not done 
in the spirit of that community.
To be clear: ironic-webclient has been developed with the knowledge and support 
of the Ironic developer community, and was not moved into the openstack/ 
namespace at my request, because I have been holding projects to a certain 
level of maturity before including them in Ironic, roughly equivalent to the 
TC's bar for big tent inclusion.
On the other hand, ironic-dashboard was done without the knowledge of any 
Ironic cores, nor with even a heads up to the Ironic or Horizon PTLs. Rather 
than an open design process, this code was just dropped on github and Infra was 
asked to approve the project creation. I have not had the opportunity to talk 
with its author *at all* yet.
I'm glad that ya'll didn't just approve the project creation request without 
checking with me, and I'm glad we are now having this discussion.
Now that that is cleared up, let's move on.

Hi Zhenguo,
I have some questions about ironic-dashboard that I need answered before the 
Ironic Project Team accepts responsibility for it.
The git history appears to be squashed [1], and most files don't have an 
attribution header [2], and none of the headers refer to the company who 
appears to be behind this (Huawei). What's the rationale for these 
inconsistencies, and who is actually behind the code?
Are you going to maintain this project personally, or is there a team at Huawei 
(or somewhere else) that is going to do that? Or are you expecting Ironic's 
current developer teams to take ownership of this code and maintain it?
Are you/they going to become part of Ironic's community, attend our weekly 
meetings, and follow our design process?

What is the vision / goal that is being working towards? What is the scope of 
this dashboard? How does it fit in with our existing plans?

I'm not entirely opposed to having two separate UI projects for Ironic at the 
moment, but we should be very clear about the rationale if we go that route.
-Devananda


[1] https://github.com/niuzhenguo/ironic-dashboard/commit/4be73d19e54eb75aa31da3d1a38fa65c1287bc7b
[2] https://github.com/niuzhenguo/ironic-dashboard/search?q=copyright

On Fri, Jun 19, 2015 at 12:00 PM James E. Blair cor...@inaugust.com wrote:
Hi all,



I'm glad that 

Re: [openstack-dev] [ceilometer] can we get ceilometermiddleware to use a config file instead of transport_url?

2015-06-23 Thread Chris Dent

On Mon, 22 Jun 2015, gord chung wrote:


what's 'long form transport'?

it's not actually using cfg.CONF to figure out the transport url if not present.
cfg.CONF passed in has nothing set and is basically just a bunch of 
defaults... the url obviously doesn't have a default so ceilometermiddleware 
will fail if you don't pass in a url.


cfg.CONF contains the config instructions from the filter factory; it
is being passed to messaging.get_transport, along with url=

The originating issue is that get_request_url in
devstack:lib/rpc_backend is insufficient because it returns a url.
Since we are already in charge of writing the lines in the filter
factory config, can't we write whatever is necessary there?
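
To illustrate the kind of thing I mean, a hedged sketch of a filter factory
that reads an oslo.config file instead of requiring an inline url (the
middleware class here is a stand-in, not ceilometermiddleware itself):

    from oslo_config import cfg
    import oslo_messaging

    class _AuditMiddleware(object):
        def __init__(self, app, transport):
            self.app = app
            self.notifier = oslo_messaging.Notifier(transport,
                                                    publisher_id='swift')

        def __call__(self, environ, start_response):
            # emit a notification per request, then pass through
            self.notifier.info({}, 'objectstore.http.request',
                               {'path': environ.get('PATH_INFO')})
            return self.app(environ, start_response)

    def filter_factory(global_conf, **local_conf):
        conf = cfg.ConfigOpts()
        conf(args=[], default_config_files=[
            local_conf.get('config_file', '/etc/ceilometer/ceilometer.conf')])
        # with url=None, oslo.messaging falls back to the transport settings
        # found in that config file
        transport = oslo_messaging.get_transport(conf,
                                                 url=local_conf.get('url'))
        return lambda app: _AuditMiddleware(app, transport)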

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][keystone] Nova calls to Keystone

2015-06-23 Thread John Garbutt
On 23 June 2015 at 03:30, Adam Young ayo...@redhat.com wrote:
 On 06/22/2015 10:13 PM, Sajeesh Cimson Sasi wrote:
 
 From: Adam Young [ayo...@redhat.com]
 Sent: 23 June 2015 00:01:48
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova][keystone] Nova calls to Keystone

 On 06/20/2015 02:46 PM, Sajeesh Cimson Sasi wrote:

 Hi All,
I need your advice for the implementation of the following blueprint.
 https://review.openstack.org/#/c/160605 .
All the use cases mentioned in the blueprint have  been implemented and
 the complete code is up for review.
   https://review.openstack.org/#/c/149828/
   However, we have an issue on which we need your input. In the nova quota
 api call, keystone calls are made to
   get the parent_id and the child project or sub project list. This is
 required because nova doesn't store any information
   regarding the hierarchy.

This is maybe a dumb question, but...

Could this information not come from the keystone middleware at the
same time we get all the other identity information, and just live in
the context?

Hierarchy Information is  taken during run time,
 from keystone. Since the keystone calls are
   made inside the api call, it is not possible to give any dummy or  fake
 values while writing the unit tests. If the keystone
   call was made outside the api call, we could have given fake values in the
 test cases. However,  the keystone calls for
parent_id and child projects are made inside the api call.
   Can anyone suggest an elegant solution to this problem? What is the proper
 way to implement this ?
 Did anybody encounter and solve a  similar  problem ? Many thanks for
 any suggestions!
  best regards
sajeesh


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 If you are talking to a live Keystone server, make sure you are using valid
 data.

 If you are not talking to a live keystone server in a unit test, use
 RequestsMock or equivalent (varied by project)  to handle the HTTP request
 and response.

 A worst case approach is to monkey patch the Keystoneclient.  Please don't
 do that if you can avoid it;  better to provide a mock alternative.


 Hi Adam,
Thanks a lot. I am not planning to talk to the live keystone
 server in the unit test.
I don't think that I need to monkey patch the KeystoneClient.
 In the nova api code, there are two methods (get_parent_project and
 get_immediate_child_list), which use keystoneclient. I can monkey patch those
 two methods to return fixed data according to a fake hierarchy. Am I right ?


 It's not great, but not horrible.  It seems to match the scope of what you
 are testing.  However, you might want to consider doing a mock for the whole
 Keystoneclient call, as that really should be outside of the unit test for
 the Nova code.


Please use mock to do that for you, following the pattern of the
existing Nova unit tests. I think you will find that easier.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo]Recursive validation for easier composability

2015-06-23 Thread Steven Hardy
On Mon, Jun 22, 2015 at 01:21:29PM -0400, Jay Dobies wrote:
 
 
 On 06/22/2015 12:19 PM, Steven Hardy wrote:
 Hi all,
 
 Lately I've been giving some thought to how we might enable easier
 composability, and in particular how we can make it easier for folks to
 plug in deeply nested optional extra logic, then pass data in via
 parameter_defaults to that nested template.
 
 Here's an example of the use-case I'm describing:
 
 https://review.openstack.org/#/c/193143/5/environments/cinder-netapp-config.yaml
 
 Here, we want to allow someone to easily turn on an optional configuration
 or feature, in this case a netapp backend for cinder.
 
 I think the actual desired goal is bigger than just optional configuration.
 I think it revolves more around choosing a nested stack implementation for a
 resource type and how to manage custom parameters for that implementation.
 We're getting into the territory here of having a parent stack defining an
 API that nested stacks can plug into. I'd like to have some sort of way of
 deriving that information instead of having it be completely relegated to
 outside documentation (but I'm getting off topic; at the end I mention how I
 want to do a better write up of the issues Tuskar has faced and I'll
 elaborate more there).

I guess I've been thinking of it in terms of two sides of the same problem,
but I see from the responses that Thomas and Steve made that we could
consider this "realization of an interface" part separately.

I was thinking that if you have sufficient parameter introspection
capabilities then you know implicitly which templates satisfy a particular
interface, based on the parameters schema exposed - essentially relying on
duck-typing such that if a subset of a templates parameters (and parameter
types) match what the parent stack provides, then it's a valid
implementation.  Clearly this is a considerably less strict approach than
that described in the tosca docs Thomas referenced, but it's potentially
simpler and more flexible too?
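
As a rough illustration of that duck-typing idea, a hypothetical standalone
sketch (not Heat code) comparing the parameter schemas that template-validate
returns:

    # Hypothetical sketch: schemas are plain dicts of name -> schema, e.g.
    # {"ChildConfig": {"Type": "Json", "Default": []}}.

    def satisfies_interface(parent_provides, candidate_schema):
        """A candidate template 'implements' the interface if every parameter
        it requires (no default) is provided by the parent with a matching type."""
        for name, schema in candidate_schema.items():
            if 'Default' in schema:
                continue  # optional parameter, the parent need not supply it
            provided = parent_provides.get(name)
            if provided is None or provided.get('Type') != schema.get('Type'):
                return False
        return True

    parent_provides = {'ChildConfig': {'Type': 'Json'}}
    candidate = {'ChildConfig': {'Type': 'Json', 'Label': 'Child ExtraConfig'}}
    print(satisfies_interface(parent_provides, candidate))  # True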

 The parameters specific to this feature/configuration only exist in the
 nested cinder-netapp-config.yaml template, then parameter_defaults are used
 to wire in the implementation specific data without having to pass the
 values through every parent template (potentially multiple layers of
 nesting).
 
 This approach is working out OK, but we're missing an interface which makes
 the schema for parameters over the whole tree available.
 
 This is obviously
 a problem, particularly for UI's, where you really need a clearly defined
 interface for what data is required, what type it is, and what valid values
 may be chosen.
 
 I think this is going to be an awesome addition to Heat. As you alluded to,
 we've struggled with this in TripleO. The parameter_defaults works to
 circumvent the parameter passing, but it's rough from a user experience
 point of view since getting the unified list of what's configurable is
 difficult.

Yeah, I guess enhancing our pre-create template introspection/validation is
what I'm trying to focus on as a first step in improving that rough user
experience.

 I'm considering an optional additional flag to our template-validate API
 which allows recursive validation of a tree of templates, with the data
 returned on success to include a tree of parameters, e.g:
 
 heat template-validate -f parent.yaml -e env.yaml --show-nested
 {
   "Description": "The Parent",
   "Parameters": {
     "ParentConfig": {
       "Default": [],
       "Type": "Json",
       "NoEcho": false,
       "Description": "",
       "Label": "ExtraConfig"
     },
     "ControllerFlavor": {
       "Type": "String",
       "NoEcho": false,
       "Description": "",
       "Label": "ControllerFlavor"
     }
   },
   "NestedParameters": {
     "child.yaml": {
       "Parameters": {
         "ChildConfig": {
           "Default": [],
           "Type": "Json",
           "NoEcho": false,
           "Description": "",
           "Label": "Child ExtraConfig"
         }
       }
     }
   }
 }
 
 Are you intending on resolving parameters passed into a nested stack from
 the parent against what's defined in the nested stack's parameter list? I'd
 want NestedParameters to only list things that aren't already being
 specified to the parent.

This is a good point - I was assuming we'd expose all parameters, not only
those which aren't specified by the parent.

 Specifically with regard to the TripleO Heat templates, there is still a lot
 of logic that needs to be applied to properly divide out parameters. For
 example, there are some things passed in from the parents to the nested
 stacks that are kinda namespaced by convention, but its not a hard
 convention. So to try to group the parameters by service, we'd have to look
 at a particular NestedParameters section and then also add in anything from
 the parent that applies to that service. I don't believe we can use
 parameter groups to correlate them (we might be able to, or that might be
 its own improvement).

I think we 

Re: [openstack-dev] [HA] RFC: moving Pacemaker openstack-resource-agents to stackforge

2015-06-23 Thread Jan Klare
+1, sounds great, thanks for the effort

cheers,
Jan

Adam Spiers aspi...@suse.com schrieb am Di., 23. Juni 2015 um 12:28 Uhr:

 [cross-posting to openstack-dev and pacemaker lists; please consider
 trimming the recipients list if your reply is not relevant to both
 communities]

 Hi all,

 https://github.com/madkiss/openstack-resource-agents/ is a nice
 repository of Pacemaker High Availability resource agents (RAs) for
 OpenStack, usage of which has been officially recommended in the
 OpenStack High Availability guide.  Here is one of several examples:


 http://docs.openstack.org/high-availability-guide/content/_add_openstack_identity_resource_to_pacemaker.html

 Martin Loschwitz, who owns this repository, has since moved away from
 OpenStack, and no longer maintains it.  I recently proposed moving the
 repository to StackForge, and he gave his consent and in fact said
 that he had the same intention but hadn't got round to it:


 https://github.com/madkiss/openstack-resource-agents/issues/22#issuecomment-113386505

 You can see from that same github issue that several key members of
 the OpenStack Pacemaker sub-community are all in favour.  Therefore
 I am volunteering to do the move to StackForge.

 Another possibility would be to move each RA to its corresponding
 OpenStack project, although this makes a lot less sense to me, since
 it would require the core members of every OpenStack project to care
 enough about Pacemaker to agree to maintain an RA for it.

 This raises the question of maintainership.  SUSE has a vested
 interest in these resource agents, so we would be happy to help
 maintain them.  I believe Red Hat is also using these, so any
 volunteers from there or indeed anywhere else to co-maintain would be
 welcome.  They are already fairly complete, and I don't expect there
 will be a huge amount of change.

 I'm probably getting ahead of myself, but the other big question is
 regarding CI.  Currently there are no tests at all.  Of course we
 could add bashate, and maybe even some functional tests, but
 ultimately some integration tests would be really nice.  However for
 now I propose we focus on the move and defer CI work till later.

 Thoughts?

 Thanks!
 Adam

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Ceph Public Network Setting

2015-06-23 Thread Igor Kalnitsky
Hello,

That makes sense to me. Still, I want to point out that we're going to
implement advanced networking, and with this feature you'll be able to
assign every single network role to any network.

That means you'll be able to assign the ceph network role to the storage,
management or whatever-you-want network. Sounds cool, ha? :)

Feel free to read a design spec [1].

Thanks,
Igor

[1]: https://review.openstack.org/#/c/115340/

On Tue, Jun 23, 2015 at 1:13 PM, Zhou Zheng Sheng / 周征晟
zhengsh...@awcloud.com wrote:
 Hi!

 I notice that in OpenStack deployed by Fuel, Ceph public network is on
 management network. In some environments, not all NICs of a physical
 server are 10Gb. Sometimes 1 or 2 among the NICs on a machine may be
 1Gb. Usually on this type of machine we assign management network to 1Gb
 NIC, and storage network to 10Gb NIC. If Ceph public network is with
 management network, the QEMU accesses Ceph using management network, and
 the performance is not optimal.

 In a small deployment, cloud controller and Ceph OSD may be assigned to
 the same machine, so it would be more effective to keep Ceph client
 traffic separated from MySQL, RabbitMQ, Pacemaker traffic. Maybe it's
 better to place Ceph public network on the storage network. Agree?

 --
 Best wishes!
 Zhou Zheng Sheng, Software Developer
 Beijing AWcloud Software Co., Ltd.




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Ceph Public Network Setting

2015-06-23 Thread Zhou Zheng Sheng / 周征晟
Hi!

I notice that in OpenStack deployed by Fuel, Ceph public network is on
management network. In some environments, not all NICs of a physical
server are 10Gb. Sometimes 1 or 2 among the NICs on a machine may be
1Gb. Usually on this type of machine we assign the management network to a 1Gb
NIC, and the storage network to a 10Gb NIC. If the Ceph public network is on
the management network, QEMU accesses Ceph over the management network, and
the performance is not optimal.

In a small deployment, cloud controller and Ceph OSD may be assigned to
the same machine, so it would be more effective to keep Ceph client
traffic separated from MySQL, RabbitMQ, Pacemaker traffic. Maybe it's
better to place Ceph public network on the storage network. Agree?

-- 
Best wishes!
Zhou Zheng Sheng, Software Developer
Beijing AWcloud Software Co., Ltd.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Proposing Lingxian Kong as a core reviewer

2015-06-23 Thread Renat Akhmerov
Alright, I see no objections. Then my congratulations Lingxian!

Done.

Renat Akhmerov
@ Mirantis Inc.



 On 23 Jun 2015, at 11:51, BORTMAN, Limor (Limor) 
 limor.bort...@alcatel-lucent.com wrote:
 
 +1
  
 From: W Chan [mailto:m4d.co...@gmail.com] 
 Sent: Tuesday, June 23, 2015 12:07 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [mistral] Proposing Lingxian Kong as a core reviewer
  
 +1
  
 Lingxian, keep up with the good work. :D
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron] Requirements validations

2015-06-23 Thread Gary Kotton


On 6/23/15, 12:52 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 06/23/2015 11:16 AM, Gary Kotton wrote:
 Hi, In the vmware_nsx project we have done the following:
 
 1. In the test_requirements file we have a link to the neutron master [1].
 The purpose for this is that the master branch needs to be in sync with the
 neutron branch and all unit tests have to pass. So each time there is a
 change in neutron or vmware_nsx we validate that nothing is broken.
 2. On the infra side we have:
    1. the bot that posts updates for requirements. This keeps the
    requirements file in sync.
    2. The requirements validation script running.
 
 We have now hit an issue where the requirements validation is failing. For
 example [2]. The problem is that the requirements validation does not like:
 
 pkg_resources.RequirementParseError: Expected version spec in
 -e git://git.openstack.org/openstack/neutron.git at
 git://git.openstack.org/openstack/neutron.git
 
 Any suggestions for addressing this issue? Has anyone else hit this?
 

http://lists.openstack.org/pipermail/openstack-dev/2015-June/065747.html

Move the dep to tox.ini, as e.g. *aas repos do.
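
Roughly along these lines (a sketch only; the exact stanza varies per repo),
so the git dependency stays out of test-requirements.txt:

    [testenv]
    deps = -r{toxinidir}/requirements.txt
           -r{toxinidir}/test-requirements.txt
           -e git+https://git.openstack.org/openstack/neutron#egg=neutron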

Brilliant! And it works too.
Gracias


Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJViSxlAAoJEC5aWaUY1u57VbIIANj7MEcjpHJeMblQED6FhfBu
i1fDbmCrCNphOjqIq63vG6JsUnM09gyTqFs08y7R7IqzQMFqRmBf8AzYB9dHUZvf
PjK8vGjLZgFJJH/wgVzXoD5gpASNrD43nFpQ+dT5xRXnJx5J7fxnNJUBd6WelepG
iU9K8NOd9d7S/7GwetryZRiPnGdVagZZ+XOnBnGkJuj40FPTVxP/YeyFfeYRzTdv
bhob1dcG9SO61lJsbrvBFC3V0EFDZ5ov5lXcXhJuSrERsCDcc0p8APwULJyzYtiX
Ps7W9r1aw24SHw1g2/NFXTw8PUl83iOPCwIVAmqGi5MakCb+PXhrvvhNAJBuFn4=
=qvn7
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][stable] Kevin Benton added to neutron-stable-maint

2015-06-23 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi all,

Just a heads-up that Kevin Benton is added to neutron-stable-maint
team so now he has all the powers to +2/+A (and -2) backports.

Kevin is very active at voting for neutron backports (well, where is
he NOT active?), so here you go.

Thanks
Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJViTblAAoJEC5aWaUY1u57lAIH/2lBqAQv5sL0avDmWYHljUXO
zolTmsaK8+qs9FXUlr+Ca3TU1KqOH5p27m49pkJS2n3Sy1ojL0TkzmQxA5sB0/Bg
ufVq2aMGzC1L0k9c8VbMiHpX6/CHOEnL/bpp4Gh6LRpovVOCGRXnlPabd+h0PPJm
krDhG428ZB6wMnd5S+ZuV77Mlr2Lrrv8o0mzd0joO1munJFepk7ar7BLwYV+QeZq
kpi8dInh7gODI3ciQ3OWuWZWk4Dsc0Dup2ARsdUlhDN0/Sfc/ElXKWmYIam+flCR
ToxtzQBrw2LT/mT/mOpT8bJRBbP8KGtcunXvDEeoGOxOTF1+2dpfMbHYlohlCX8=
=q6il
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [HA] RFC: moving Pacemaker openstack-resource-agents to stackforge

2015-06-23 Thread Adam Spiers
[cross-posting to openstack-dev and pacemaker lists; please consider
trimming the recipients list if your reply is not relevant to both
communities]

Hi all,

https://github.com/madkiss/openstack-resource-agents/ is a nice
repository of Pacemaker High Availability resource agents (RAs) for
OpenStack, usage of which has been officially recommended in the
OpenStack High Availability guide.  Here is one of several examples:


http://docs.openstack.org/high-availability-guide/content/_add_openstack_identity_resource_to_pacemaker.html

Martin Loschwitz, who owns this repository, has since moved away from
OpenStack, and no longer maintains it.  I recently proposed moving the
repository to StackForge, and he gave his consent and in fact said
that he had the same intention but hadn't got round to it:


https://github.com/madkiss/openstack-resource-agents/issues/22#issuecomment-113386505

You can see from that same github issue that several key members of
the OpenStack Pacemaker sub-community are all in favour.  Therefore
I am volunteering to do the move to StackForge.

Another possibility would be to move each RA to its corresponding
OpenStack project, although this makes a lot less sense to me, since
it would require the core members of every OpenStack project to care
enough about Pacemaker to agree to maintain an RA for it.

This raises the question of maintainership.  SUSE has a vested
interest in these resource agents, so we would be happy to help
maintain them.  I believe Red Hat is also using these, so any
volunteers from there or indeed anywhere else to co-maintain would be
welcome.  They are already fairly complete, and I don't expect there
will be a huge amount of change.

I'm probably getting ahead of myself, but the other big question is
regarding CI.  Currently there are no tests at all.  Of course we
could add bashate, and maybe even some functional tests, but
ultimately some integration tests would be really nice.  However for
now I propose we focus on the move and defer CI work till later.

Thoughts?

Thanks!
Adam

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] How to properly detect and fence a compromised host (and why I dislike TrustedFilter)

2015-06-23 Thread Sylvain Bauza

Hi team,

Some discussion occurred over IRC about a bug which was publicly open
related to TrustedFilter [1].
I want to take the opportunity to raise my concerns about that specific
filter, why I dislike it, and how I think we could improve the situation
(and clarify everyone's thoughts).


The current situation is this: Nova only checks whether a host is
compromised when the scheduler is called, i.e. only when
booting/migrating/evacuating/unshelving an instance (well, not exactly
all the evacuate/live-migrate cases, but let's not discuss that now).
When the request goes to the scheduler, all the hosts are checked
against all the enabled filters, and the TrustedFilter makes an
external HTTP(S) call to the Attestation API service (not handled by
Nova) for *each host* to see if the host is valid (not compromised) or not.


To be clear, that's the only in-tree scheduler filter which explicitly
makes an external call to a separate service that Nova is not managing. I
can see at least 3 reasons why that's bad :


#1 : that's a terrible bottleneck for performance, because we're
IO-blocking N times given N hosts (and we're not even multiplexing the HTTP
requests)
#2 : all the filters check an internal Nova state for the host
(called HostState) except the TrustedFilter, which means that
conceptually we defer the decision to a 3rd-party engine
#3 : the Attestation API service becomes a de facto dependency for
Nova (since it's an in-tree filter) while it's not listed as a
dependency and thus not gated.



All of these reasons could be acceptable if they covered the use case
exposed in [1] (ie. I want to make sure that if my host gets
compromised, my instances will not be running on that host), but that
just doesn't work, due to the situation I mentioned above.


So, given that, here are my thoughts :
a/ if a host gets compromised, we can just disable its service to 
prevent its election as a valid destination host. There is no need for a 
specialised filter.
b/ if a host is compromised, we can assume that the instances have to 
resurrect elsewhere, ie. we can call a nova evacuate
c/ checking if a host is compromised or not is not a Nova
responsibility since it's already perfectly done by [2]


In other words, I consider this security usecase analogous to the HA
usecase [3], where we need a 3rd-party tool responsible for periodically
checking the state of the hosts and, if a host is compromised, calling
the Nova API to fence the host and evacuate the compromised instances.
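
As a rough, hypothetical sketch of what such an external tool could do once
its own check (e.g. polling OpenAttestation) flags a host; CLI calls are used
for brevity and exact arguments vary by release:

    # Hypothetical sketch of the out-of-tree "fence and evacuate" flow; the
    # monitoring/attestation part is assumed to happen elsewhere.
    import subprocess

    def fence_compromised_host(host, instance_uuids):
        # take the compute service out of future scheduling decisions
        subprocess.check_call(['nova', 'service-disable', host, 'nova-compute'])
        # resurrect the instances elsewhere (a target host can also be given)
        for uuid in instance_uuids:
            subprocess.check_call(['nova', 'evacuate', uuid])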


Given that, I'm proposing to deprecate TrustedFilter and explicitly
mention to drop it from in-tree in a later cycle 
https://review.openstack.org/194592


Thoughts ?
-Sylvain



[1] https://bugs.launchpad.net/nova/+bug/1456228
[2] https://github.com/OpenAttestation/OpenAttestation
[3] http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] The methods of dealing with the extend share failure

2015-06-23 Thread Zhongjun (A)
I have a question about extend share errors.
If extending a share fails, the share status changes to ‘extending_error’ and
the share can no longer be used.
I don’t think this behaviour is very appropriate. Maybe a failed extend should
not affect the usability of the share: we could report the extend failure to
the user without changing the share status, so that the status stays
‘available’.

PS: A similar rationale applies to shrink share.
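
As a rough sketch of that proposed behaviour (hypothetical, not the actual
Manila manager code; the driver interface is assumed):

    import logging
    LOG = logging.getLogger(__name__)

    def extend_share(share, new_size, driver):
        try:
            driver.extend_share(share, new_size)   # driver interface assumed
            share['size'] = new_size
        except Exception:
            # today the manager would set status to 'extending_error';
            # the proposal is to surface the failure but keep the share usable
            LOG.exception("extend failed for share %s", share['id'])
        finally:
            share['status'] = 'available'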

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] RBAC Policy Basics

2015-06-23 Thread Osanai, Hisashi

On Tuesday, June 23, 2015 12:14 AM, Adam Young wrote:

 It is not an issue if you keep each of the policy files completely
 separate, but it means that each service has its own meaning for the
 same name, and that confuses operators;  owner in Nova means a user
 that has a role on this project where as owner in Keystone means
 Objects associated with a specific user.

I understand your thought comes from usability.

But it might increase development complexity; I think each component
doesn't want to have to spell out its own component name in policy.json,
because it is already implied there.
Hmm... please forget it (it might be too much of a developer's concern) :-)

I want to focus on the following topic:

  My concern now is:
  * Service Tokens was implemented in Juno [1] but now we are not able
to implement it with Oslo policy without extensions so far.
  * I think to implement spec[2] needs more time.
 
  [1] 
  https://github.com/openstack/keystone-specs/blob/master/specs/keystonemiddleware/implemented/service-tokens.rst
  [2] https://review.openstack.org/#/c/133855/
 
  Is there any way to support spec[1] in Oslo policy? Or
  Should I wait for spec[2]?
 
 I'm sorry, I am not sure what you are asking.

I'm sorry let me explain this again.

(1) Keystone has supported service tokens [1] since the Juno release.
(2) Oslo policy graduated in the Kilo release.
(3) Oslo policy doesn't have the ability to deal with service tokens.
I'm not 100% sure, but in order to support service tokens Oslo policy
needs to handle service_roles in addition to the roles stored in a credential.
Current logic:
If a rule starts with 'role:', RoleCheck uses 'roles' in the
credential.
code:
https://github.com/openstack/oslo.policy/blob/master/oslo_policy/_checks.py#L249

My solution for now is to create a ServiceRoleCheck class to handle
'service_roles' in the credential. This check will be used when a rule
starts with 'srole:'.

https://review.openstack.org/#/c/149930/15/swift/common/middleware/keystoneauth.py
L759-L767

I think it's better to handle this in Oslo policy because it's a common
issue, so I would like to know whether there is a plan to handle it.
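
For reference, a simplified standalone sketch of that idea (not the actual
oslo.policy or swift code; a real check would plug into oslo.policy's check
registry):

    # Simplified sketch: an 'srole:<match>' rule checks 'service_roles'
    # in the credential the same way 'role:<match>' checks 'roles'.

    class ServiceRoleCheck(object):
        def __init__(self, kind, match):
            self.kind = kind    # 'srole'
            self.match = match  # e.g. 'service'

        def __call__(self, target, creds, enforcer=None):
            try:
                match = self.match % target   # allow 'srole:%(...)s' style rules
            except KeyError:
                return False
            roles = [r.lower() for r in creds.get('service_roles', [])]
            return match.lower() in roles

    creds = {'roles': ['admin'], 'service_roles': ['service']}
    print(ServiceRoleCheck('srole', 'service')({}, creds))  # True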

Thanks in advance,
Hisashi Osanai


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Proposing Lingxian Kong as a core reviewer

2015-06-23 Thread Lingxian Kong
Thanks everyone, it's really an honor to work with all you guys!

On Tue, Jun 23, 2015 at 6:18 PM, Renat Akhmerov rakhme...@mirantis.com wrote:
 Alright, I see no objections. Then my congratulations Lingxian!

 Done.

 Renat Akhmerov
 @ Mirantis Inc.



 On 23 Jun 2015, at 11:51, BORTMAN, Limor (Limor)
 limor.bort...@alcatel-lucent.com wrote:

 +1

 From: W Chan [mailto:m4d.co...@gmail.com]
 Sent: Tuesday, June 23, 2015 12:07 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [mistral] Proposing Lingxian Kong as a core
 reviewer

 +1

 Lingxian, keep up with the good work. :D
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements] converging the openstack-infra/project-config and openstack/requirements requirements checks

2015-06-23 Thread Robert Collins
We have inconsistent rules being applied to requirements files.

In openstack/requirements we handle:
# comments
packages[specifiers][markers]
-e / -f [ARBITRARY]
https://tarballs.openstack.org/

^- blanklines

In openstack-infra/project-config, we check that everything is one of:
- a trailing \n on the file
packages[specifiers]
# comments
http://tarballs.openstack.org/

^- blanklines

We also have a strict/nonstrict mode which appears to be the same to me.

So - I'd like to do two things here.

Firstly, I want to move all the linting code into
openstack/requirements, so we don't have two different parsers that
*can* vary in what they can handle. That seems mechanical.

More interestingly though, I want to converge the rules.

So far I have this list of variance:
1) trailing \n is mandatory in infra [requirements *generates files,
so to date hasn't cared]
2) -e and -f line are not supported in infra (but don't error in
non-strict mode)
3) infra doesn't handle markers at all

I propose the following to reconcile:
 - I'll add a lint command to openstack/requirements
 - it will check for \n
 - it will accept the set that openstack/requirements accepts in each
of non-strict and strict mode
 - anything that parses will be checked against global-requirements
for equivalence.

This is conservative in that it accepts the broader of the two sets of
things we had. I could be conservative in the other way and clamp
update-requirements down to reject -e and -f in non-strict mode, if
thats desired: but I don't really want to have to chase some number of
projects to be stricter as part of this.

While this is discussed, I'm going to prep the basic patch set, and
can fine tune based on the resulting discussion.
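
To make the accepted forms concrete, here is a rough standalone sketch (not
the actual lint code) of the kind of per-line check described above:

    # Rough sketch only: accepted line forms per the rules described above.
    import re

    COMMENT_OR_BLANK = re.compile(r'^\s*(#.*)?$')
    EDITABLE = re.compile(r'^-e\s+\S+$')      # non-strict only
    FINDLINKS = re.compile(r'^-f\s+\S+$')     # non-strict only
    TARBALLS = re.compile(r'^https?://tarballs\.openstack\.org/\S+$')
    # package[extras][specifiers][;markers]
    PACKAGE = re.compile(r'^[A-Za-z0-9._-]+(\[[^\]]*\])?([<>=!~].*)?(;.*)?$')

    def lint_line(line, strict=False):
        if (COMMENT_OR_BLANK.match(line) or TARBALLS.match(line)
                or PACKAGE.match(line)):
            return True
        return not strict and bool(EDITABLE.match(line) or FINDLINKS.match(line))

    def lint_file(path, strict=False):
        with open(path) as f:
            text = f.read()
        # trailing newline is mandatory, and every line must parse
        return text.endswith('\n') and all(
            lint_line(l, strict) for l in text.splitlines())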

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to properly detect and fence a compromised host (and why I dislike TrustedFilter)

2015-06-23 Thread Wang, Shane
AFAIK, TrustedFilter is using a sort of cache to cache the trusted state, which 
is designed to solve the performance issue mentioned here.

My thoughts for deprecating it are:
#1. We already have customers here in China who are using that filter. How are 
they going to do upgrade in the future?
#2. Dependency should not be a reason to deprecate a module in OpenStack, Nova 
is not a stand-alone module, and it depends on various technologies and 
libraries.

Intel is setting up the third party CI for TCP/OAT in Liberty, which is to 
address the concerns mentioned in the thread. And also, OAT is an open source 
project which is being maintained as the long-term strategy.

For the situation where a host gets compromised: OAT checks trusted or
untrusted at the point of boot/reboot, so it is hard for OAT to detect whether
a host gets compromised while it is running. I don't know how to detect that
without the filter.
Back to Michael's question: the verification is done automatically by software
when a host boots or reboots, so wouldn't having a separate job be an overhead
for the admin?

Thanks.
--
Shane

-Original Message-
From: Michael Still [mailto:mi...@stillhq.com] 
Sent: Wednesday, June 24, 2015 7:49 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] How to properly detect and fence a 
compromised host (and why I dislike TrustedFilter)

I agree. I feel like this is another example of functionality which is 
trivially implemented outside nova, and where it works much better if we don't 
do it. Couldn't an admin just have a cron job which verifies hosts, and then 
adds them to a compromised-hosts host aggregate if they're owned? I assume 
without testing it that you can migrate instances _out_ of a host aggregate you 
can't boot in?

Michael

On Tue, Jun 23, 2015 at 8:41 PM, Sylvain Bauza sba...@redhat.com wrote:
 Hi team,

 Some discussion occurred over IRC about a bug which was publicly open 
 related to TrustedFilter [1] I want to take the opportunity for 
 raising my concerns about that specific filter, why I dislike it and 
 how I think we could improve the situation - and clarify everyone's 
 thoughts)

 The current situation is that way : Nova only checks if one host is 
 compromised only when the scheduler is called, ie. only when 
 booting/migrating/evacuating/unshelving an instance (well, not exactly 
 all the evacuate/live-migrate cases, but let's not discuss about that 
 now). When the request goes in the scheduler, all the hosts are 
 checked against all the enabled filters and the TrustedFilter is 
 making an external HTTP(S) call to the Attestation API service (not 
 handled by Nova) for *each host* to see if the host is valid (not 
 compromised) or not.

 To be clear, that's the only in-tree scheduler filter which explicitly 
 does an external call to a separate service that Nova is not managing. 
 I can see at least 3 reasons for thinking about why it's bad :

 #1 : that's a terrible bottleneck for performance, because we're 
 IO-blocking N times given N hosts (we're even not multiplexing the 
 HTTP requests)
 #2 : all the filters are checking an internal Nova state for the host 
 (called HostState) but that the TrustedFilter, which means that 
 conceptually we defer the decision to a 3rd-party engine
 #3 : that Attestation API services becomes a de facto dependency for 
 Nova (since it's an in-tree filter) while it's not listed as a 
 dependency and thus not gated.


 All of these reasons could be acceptable if that would cover the 
 exposed usecase given in [1] (ie. I want to make sure that if my host 
 gets compromised, my instances will not be running on that host) but 
 that just doesn't work, due to the situation I mentioned above.

 So, given that, here are my thoughts :
 a/ if a host gets compromised, we can just disable its service to 
 prevent its election as a valid destination host. There is no need for 
 a specialised filter.
 b/ if a host is compromised, we can assume that the instances have to 
 resurrect elsewhere, ie. we can call a nova evacuate c/ checking if an 
 host is compromised or not is not a Nova responsibility since it's 
 already perfectly done by [2]

 In other words, I'm considering that security usecase as something 
 analog as the HA usecase [3] where we need a 3rd-party tool 
 responsible for periodically checking the state of the hosts, and if 
 compromised then call the Nova API for fencing the host and evacuating the 
 compromised instances.

 Given that, I'm proposing to deprecate TrustedFilter and explicitly
 mention to drop it from in-tree in a later cycle 
 https://review.openstack.org/194592

 Thoughts ?
 -Sylvain



 [1] https://bugs.launchpad.net/nova/+bug/1456228
 [2] https://github.com/OpenAttestation/OpenAttestation
 [3] 
 http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposa
 l/


 

Re: [openstack-dev] [nova][keystone] Nova calls to Keystone

2015-06-23 Thread Sajeesh Cimson Sasi


From: Adam Young [ayo...@redhat.com]
Sent: 23 June 2015 08:00:48
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][keystone] Nova calls to Keystone

On 06/22/2015 10:13 PM, Sajeesh Cimson Sasi wrote:



From: Adam Young [ayo...@redhat.commailto:ayo...@redhat.com]
Sent: 23 June 2015 00:01:48
To: openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][keystone] Nova calls to Keystone

On 06/20/2015 02:46 PM, Sajeesh Cimson Sasi wrote:
Hi All,
   I need your advice for the implementation of the following blueprint.
https://review.openstack.org/#/c/160605 .
   All the use cases mentioned in the blueprint have been implemented and the
complete code is up for review.
   https://review.openstack.org/#/c/149828/
   However, we have an issue on which we need your input. In the nova quota api
call, keystone calls are made to get the parent_id and the child project or
sub project list. This is required because nova doesn't store any information
regarding the hierarchy. Hierarchy information is taken during run time from
keystone. Since the keystone calls are made inside the api call, it is not
possible to give any dummy or fake values while writing the unit tests. If the
keystone call was made outside the api call, we could have given fake values
in the test cases. However, the keystone calls for parent_id and child
projects are made inside the api call.
   Can anyone suggest an elegant solution to this problem? What is the proper
way to implement this?
   Did anybody encounter and solve a similar problem? Many thanks for any
suggestions!
   best regards
   sajeesh



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribemailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


If you are talking to a live Keystone server, make sure you are using valid 
data.

If you are not talking to a live keystone server in a unit test, use 
RequestsMock or equivalent (varied by project)  to handle the HTTP request and 
response.

A worst case approach is to monkey patch the Keystoneclient.  Please don't do 
that if you can avoid it;  better to provide a mock alternative.


Hi Adam,
   Thanks a lot. I am not planning to talk to the live keystone 
server in the unit test.
I don't think that I need to monkey patch the KeystoneClient. In
the nova api code, there are two methods (get_parent_project and
get_immediate_child_list), which use keystoneclient. I can monkey patch those
two methods to return fixed data according to a fake hierarchy. Am I right?

It's not great, but not horrible.  It seems to match the scope of what you are
testing.  However, you might want to consider doing a mock for the whole
Keystoneclient call, as that really should be outside of the unit test for the
Nova code.

  Thank you Adam. I will check it
  best regards
   sajeesh



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribemailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] [EDP] about get_job_status in oozie engine

2015-06-23 Thread lu jander
Hi Trevor
I am huichun

I agree with you: currently the Sahara EDP engine and DB are mixed with lots
of Oozie-specific info, and the abstraction is incomplete.

I found this issue because I want to add recurring EDP jobs in Sahara, and
with the Oozie implementation I find I need this Oozie information, not only
the status value.

So I will make a small change to the current code with little impact, but for
the long term I think we should have a discussion about refactoring the
current EDP engine and the Oozie implementation.
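
As a rough illustration of the get_job_status()/get_job_info() split discussed
in the quoted mail below (a hypothetical sketch, not the actual Sahara code;
attribute and client names are assumptions):

    # Hypothetical sketch: keep get_job_status() thin for callers that only
    # need the state, and expose the full Oozie payload via get_job_info().

    class OozieJobEngine(object):
        def __init__(self, oozie_client):
            # assumed to expose get_job_info(oozie_job_id) returning a dict
            self.client = oozie_client

        def get_job_info(self, job_execution):
            # full Oozie job document: status, actions, timestamps, ...
            return self.client.get_job_info(job_execution.engine_job_id)

        def get_job_status(self, job_execution):
            # only the piece most of the job_manager cares about
            return {'status': self.get_job_info(job_execution)['status']}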

2015-06-23 21:17 GMT+08:00 Trevor McKay tmc...@redhat.com:

 Hi Lu,

   yes, you're right.  Return is a dictionary and for the other EDP
 engines only status is returned (and we primarily care about
 status).  For Oozie, there is more information.

   I'm fine with changing the name to get_job_info() throughout the
 job_manager and EDP.

   It actually raises the question for me about whether or not in the
 Oozie case we really even need the extra Oozie information in the Sahara
 database.  I don't think we use it anywhere, not even sure the UI
 displays it (but it might) or how much comes through the REST responses.

   Maybe we should have get_job_status() which returns only status, and
 an optional get_job_info() that returns more? But that may be a bigger
 discussion.

 Best,

 Trevor

 On Tue, 2015-06-23 at 15:18 +0800, lu jander wrote:
  Hi Trevor
 
 
  in sahara oozie engine (sahara/service/edp/oozie/engine.py
  sahara/service/edp/oozie/oozie.py)
 
 
  function get_job_status actually returns not only the status of the
  job, but it returns all the info about the job, so i think that we
  should  rename this function as get_job_info maybe more convenient for
  us? cause I want add a function named get_job_info but i find that it
  already exists here with a confused name.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][requirements] Whats with the python_version=='2.7' stuff thats showing up from today?

2015-06-23 Thread Robert Collins
You may have seen your requirements update proposals start to include
things like:
MySQL-python;python_version=='2.7'

like in
https://review.openstack.org/#/c/194325/
https://review.openstack.org/#/c/194325/3/test-requirements.txt

This is programmatic annotation of the Python versions that we need
these requirements for -- and the version(s) that we know works.

The syntax is PEP-426 environment markers:
https://www.python.org/dev/peps/pep-0426/#environment-markers

and we document it in pbr and openstack/requirements - this email is
just a heads up for folk that aren't tracking those repositories.
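
As a quick illustration of how such a marker is evaluated (using the
standalone 'packaging' library, which pip and recent setuptools vendor;
purely illustrative, not part of our tooling):

    from packaging.requirements import Requirement

    req = Requirement("MySQL-python;python_version=='2.7'")
    print(req.name)               # MySQL-python
    print(str(req.marker))        # python_version == "2.7"
    print(req.marker.evaluate())  # True only when running under Python 2.7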

To work with these, modern versions of pip (6 is the minimum) and
setuptools (I recommend the latest version always) and pbr (1.2.0) are
needed. Running with older versions is likely to fail - but these
tools are very backwards compatible.

The introduction of this is a necessary step to be able to calculate a
frozen set of requirements for each Python version we test on - if we
can't install all of global-requirements.txt on e.g. 2.7, then we
can't see what versions would be installed to calculate the delta for
the next upgrade. Calculating that frozen set of requirements is a
necessary condition to remove the variance of upstream releases from
impacting CI jobs and causing wedges.

We haven't (yet) introduced this machine readable data to stable/kilo,
but are likely to do so later this cycle, once we have the tooling in
place and locked in for constraints everywhere in master.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to properly detect and fence a compromised host (and why I dislike TrustedFilter)

2015-06-23 Thread Michael Still
I agree. I feel like this is another example of functionality which is
trivially implemented outside nova, and where it works much better if
we don't do it. Couldn't an admin just have a cron job which verifies
hosts, and then adds them to a compromised-hosts host aggregate if
they're owned? I assume without testing it that you can migrate
instances _out_ of a host aggregate you can't boot in?

Michael

On Tue, Jun 23, 2015 at 8:41 PM, Sylvain Bauza sba...@redhat.com wrote:
 Hi team,

 Some discussion occurred over IRC about a bug which was publicly open
 related to TrustedFilter [1]
 I want to take the opportunity for raising my concerns about that specific
 filter, why I dislike it and how I think we could improve the situation -
 and clarify everyone's thoughts)

 The current situation is that way : Nova only checks if one host is
 compromised only when the scheduler is called, ie. only when
 booting/migrating/evacuating/unshelving an instance (well, not exactly all
 the evacuate/live-migrate cases, but let's not discuss about that now). When
 the request goes in the scheduler, all the hosts are checked against all the
 enabled filters and the TrustedFilter is making an external HTTP(S) call to
 the Attestation API service (not handled by Nova) for *each host* to see if
 the host is valid (not compromised) or not.

 To be clear, that's the only in-tree scheduler filter which explicitly does
 an external call to a separate service that Nova is not managing. I can see
 at least 3 reasons for thinking about why it's bad :

 #1 : that's a terrible bottleneck for performance, because we're IO-blocking
 N times given N hosts (we're even not multiplexing the HTTP requests)
 #2 : all the filters are checking an internal Nova state for the host
 (called HostState) but that the TrustedFilter, which means that conceptually
 we defer the decision to a 3rd-party engine
 #3 : that Attestation API services becomes a de facto dependency for Nova
 (since it's an in-tree filter) while it's not listed as a dependency and
 thus not gated.


 All of these reasons could be acceptable if that would cover the exposed
 usecase given in [1] (ie. I want to make sure that if my host gets
 compromised, my instances will not be running on that host) but that just
 doesn't work, due to the situation I mentioned above.

 So, given that, here are my thoughts :
 a/ if a host gets compromised, we can just disable its service to prevent
 its election as a valid destination host. There is no need for a specialised
 filter.
 b/ if a host is compromised, we can assume that the instances have to
 resurrect elsewhere, ie. we can call a nova evacuate
 c/ checking if an host is compromised or not is not a Nova responsibility
 since it's already perfectly done by [2]

 In other words, I'm considering that security usecase as something analog
 as the HA usecase [3] where we need a 3rd-party tool responsible for
 periodically checking the state of the hosts, and if compromised then call
 the Nova API for fencing the host and evacuating the compromised instances.

 Given that, I'm proposing to deprecate TrustedFilter and explicitly mention
 to drop it from in-tree in a later cycle https://review.openstack.org/194592

 Thoughts ?
 -Sylvain



 [1] https://bugs.launchpad.net/nova/+bug/1456228
 [2] https://github.com/OpenAttestation/OpenAttestation
 [3] http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-23 Thread Morgan Fainberg
Dean,

If we change how Kilo works, then we'll just move the error state from
Kilo -> Master to Juno -> Kilo since the same issue will now occur in
upgrading from Juno in grenade. It's unfortunate.

--Morgan

On Tue, Jun 23, 2015 at 2:22 PM, Dean Troyer dtro...@gmail.com wrote:

 On Tue, Jun 23, 2015 at 4:08 PM, Morgan Fainberg 
 morgan.fainb...@gmail.com wrote:

 We likely need to back port a simplified version of the wsgi files and/or
 make the Juno (and kilo) versions of dev stack use the same simplified /
 split files. Grenade doesn't re-run stack - so new files that are outside
 pip's purview won't be used afaik.


 You should only need to go back to kilo, the juno->kilo should continue to
 work the old way, the kilo->master run should start the new way.

 dt

 --

 Dean Troyer
 dtro...@gmail.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-23 Thread Morgan Fainberg
Aha! Right, Thanks Sean. I'll just go back under my rock (too many things
going on to remember them all) and see what we can do about getting
Kilo -> Master fixed up.

On Tue, Jun 23, 2015 at 4:53 PM, Sean Dague s...@dague.net wrote:

 On 06/23/2015 07:49 PM, Morgan Fainberg wrote:
  Dean,
 
   If we change how Kilo works, then we'll just move the error state from
   Kilo -> Master to Juno -> Kilo since the same issue will now occur in
   upgrading from Juno in grenade. It's unfortunate.

  Actually Juno -> Kilo is back on eventlet because of the other bug with
 wsgi stuff and keystone.

 https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L341

 So a backport to Kilo is all that's required.

 -Sean

 --
 Sean Dague
 http://dague.net


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ClusterLabs] [HA] RFC: moving Pacemaker openstack-resource-agents to stackforge

2015-06-23 Thread Adam Spiers
Digimer li...@alteeve.ca wrote:
 Resending to the Cluster Labs mailing list, this list is deprecated

Thanks, I only realised that after getting a deprecation warning :-(

 On 23/06/15 06:27 AM, Adam Spiers wrote:
  [cross-posting to openstack-dev and pacemaker lists; please consider
  trimming the recipients list if your reply is not relevant to both
  communities]
  
  Hi all,
  
  https://github.com/madkiss/openstack-resource-agents/ is a nice
  repository of Pacemaker High Availability resource agents (RAs) for
  OpenStack, usage of which has been officially recommended in the
  OpenStack High Availability guide.  Here is one of several examples:
  
  
  http://docs.openstack.org/high-availability-guide/content/_add_openstack_identity_resource_to_pacemaker.html
  
  Martin Loschwitz, who owns this repository, has since moved away from
  OpenStack, and no longer maintains it.  I recently proposed moving the
  repository to StackForge, and he gave his consent and in fact said
  that he had the same intention but hadn't got round to it:
  
  
  https://github.com/madkiss/openstack-resource-agents/issues/22#issuecomment-113386505
  
  You can see from that same github issue that several key members of
  the OpenStack Pacemaker sub-community are all in favour.  Therefore
  I am volunteering to do the move to StackForge.
 
  There is a ClusterLabs group on github that most of the HA cluster
 projects have or are moving under. Why not use that?

This question was asked and answered in the github issue:

https://github.com/madkiss/openstack-resource-agents/issues/22#issuecomment-114147300

Cheers,
Adam

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mercador] Meeting schedule

2015-06-23 Thread Geoff Arnold
My apologies for the delay in getting things set up. The weekly meeting for the 
Mercador project will be held each Friday at 1700 UTC. The first meeting will 
be this Friday, June 26.  The meetings will take place on IRC in 
#openstack-meeting. The agenda will be tracked at 

https://wiki.openstack.org/wiki/Meetings/MercadorTeamMeeting

(Yes, it’s not in Eavesdrop yet, but the update has merged.)

Geoff 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-23 Thread Sean Dague
On 06/23/2015 07:49 PM, Morgan Fainberg wrote:
 Dean,
 
  If we change how Kilo works, then we'll just move the error state from
  Kilo -> Master to Juno -> Kilo since the same issue will now occur in
  upgrading from Juno in grenade. It's unfortunate.

Actually Juno -> Kilo is back on eventlet because of the other bug with
wsgi stuff and keystone.
https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L341

So a backport to Kilo is all that's required.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] oslo.messaging version and RabbitMQ heartbeat support

2015-06-23 Thread Mike Dorman
As a follow up to https://review.openstack.org/#/c/194399/ and the meeting 
discussion earlier today, I’ve determined that everybody (RDO, Ubuntu, Debian) 
is packaging oslo.messaging 1.8.2 or 1.8.3 with the Kilo build.  (This is also 
the version we get on our internal Anvil-based build.)  This is considerably 
lower than 1.11.0 where the default rabbit_heartbeat_timeout_threshold changes 
(from 60 to 0.)

If we go forward using the default rabbit_heartbeat_timeout_threshold value of 
60, that will be the correct default behavior up through oslo.messaging 1.10.x.

When people upgrade to 1.11.0 or higher, we’ll no longer match the upstream 
default behavior.  But, it should maintain the _actual_ behavior (heartbeating 
enabled) for people doing an upgrade.  Once Liberty is cut, we should 
reevaluate to make sure we’re matching whatever the default is at that time.

However, the larger problem I see is that oslo.messaging requirements.txt in 
<=1.10.x does not enforce the needed versions of kombu and amqp for heartbeat 
to work:
https://github.com/openstack/oslo.messaging/blob/1.8.2/requirements.txt#L25-L26 
 This is confusing as heartbeat is enabled by default!

I am not sure what the behavior is when heartbeat is enabled with older kombu 
or amqp.  Does anyone know?  If it silently fails, maybe we don’t care.  But if 
enabling heartbeat (by default, rabbit_heartbeat_timeout_threshold=60) actively 
breaks, that would be bad.

I see two options here:

1)  Make default rabbit_heartbeat_timeout_threshold=60 in the Puppet modules, 
to strictly follow the upstream default in Kilo.  Reevaluate this default value 
for Liberty.  Ignore the kombu/amqp library stuff and hope “it just works 
itself out naturally.”

2)  Add a rabbit_enable_heartbeat parameter to explicitly enable/disable the 
feature, and default to disable.  This goes against the current default 
behavior, but will match it for oslo.messaging >=1.11.0. I think this is the 
safest path, as we won’t be automatically enabling heartbeat for people who 
don’t have a new enough kombu or amqp.

Personally, I like #1, because I am going to enable this feature, anyway.  And 
I can’t really imagine why one would _not_ want to enable it.  But I am fine 
implementing it either way, I just want to get it done so I can get off my 
local forks. :)

Thoughts?

Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Ceph Public Network Setting

2015-06-23 Thread Andrew Woodward
As part of moving towards granular network roles, the roles should be
extended as I describe in [1]. I quickly hacked it together; if you
apply both patches you will be able to change your nailgun side to the
values you need, and when the complete code comes along you will be able to
use the same network role mapping to control the placement in future
versions.

[1] https://bugs.launchpad.net/fuel/+bug/1467700

On Tue, Jun 23, 2015 at 10:07 AM Sergey Vasilenko svasile...@mirantis.com
wrote:


 I notice that in OpenStack deployed by Fuel, Ceph public network is on
 management network.


  As far as I know, separating the ceph/public and management networks is in
  the scope of the 7.0 release.


 /sv

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev