Re: [openstack-dev] How to implement and configure a new Neutron vpnaas driver from scratch?

2014-02-19 Thread Julio Carlos Barrera Juez
Thank you very much, Bo. I will try all your advice and check whether it works!

Julio C. Barrera Juez
Office phone: +34 93 357 99 27
Distributed Applications and Networks Area (DANA)
i2CAT Foundation, Barcelona, Spain
http://dana.i2cat.net


On 18 February 2014 09:18, Bo Lin l...@vmware.com wrote:

 I wonder whether your neutron server code includes the VPNaaS
 integration with the service type framework change at
 https://review.openstack.org/#/c/41827/21 ; if not, the service_provider
 option is useless. You need to include that change before developing your
 own driver.

 Q&A (in my opinion; something may be missing):
 - What is the difference between service drivers and device drivers?
 Service drivers are driven by the VPN service plugin; they are responsible
 for casting RPC requests (CRUD of vpnservices) to, and handling callbacks
 from, the VPN agent.
 Device drivers are driven by the VPN agent; they are responsible for
 implementing the device-specific VPN operations and reporting the VPN
 running status.

 - Could I implement only one of them?
 A device driver must be implemented for your own device. Unless the
 default ipsec service driver is definitely appropriate, I suggest you
 implement both of them. After including the VPNaaS integration with the
 service type framework, the service driver work is simple.
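 For illustration, a minimal service-driver skeleton could look like the
 following. This is only a sketch: in-tree the class would subclass
 neutron.services.vpn.service_drivers.VpnDriver (stubbed here so the
 example is self-contained), and MyVPNDriver and its method bodies are
 placeholders, not actual in-tree code.

```python
# Sketch of a minimal VPNaaS service driver. The base class is stubbed
# here; in a real tree it comes from neutron.services.vpn.service_drivers.
class VpnDriver(object):
    """Stand-in for neutron's VpnDriver base class."""
    def __init__(self, service_plugin):
        self.service_plugin = service_plugin


class MyVPNDriver(VpnDriver):
    """Relays vpnservice CRUD from the plugin to the VPN agent over RPC."""

    @property
    def service_type(self):
        return 'VPN'

    def create_vpnservice(self, context, vpnservice):
        pass  # cast an RPC request to the VPN agent here

    def update_vpnservice(self, context, old_vpnservice, vpnservice):
        pass  # notify the agent of the change

    def delete_vpnservice(self, context, vpnservice):
        pass  # tell the agent to tear the service down
```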

 - Where do I need to put my Python implementation in my OpenStack instance?
Do you mean making your instance run your new code? The default source
 code dir is /opt/stack/neutron; put your new changes into that dir and
 restart the neutron server.

 - How could I configure my OpenStack instance to use this implementation?
1. Add your new code into the source dir.
2. Add an appropriate vpnaas service_provider to neutron.conf and an
 appropriate vpn_device_driver option to vpn_agent.ini.
3. Restart n-svc and q-vpn.
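 As an illustration, the two config entries might look like this (the
 module paths and driver names are placeholders for your own code, not
 shipped drivers):

```ini
# /etc/neutron/neutron.conf -- register the service driver
[service_providers]
service_provider = VPN:MyVPN:neutron.services.vpn.service_drivers.my_driver.MyVPNDriver:default

# /etc/neutron/vpn_agent.ini -- register the device driver
[vpnagent]
vpn_device_driver = neutron.services.vpn.device_drivers.my_device.MyDeviceDriver
```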

 Hope this helps.

 --
 *From: *Julio Carlos Barrera Juez juliocarlos.barr...@i2cat.net
 *To: *OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
 *Sent: *Monday, February 17, 2014 7:18:44 PM
 *Subject: *[openstack-dev] How to implement and configure a new Neutron
 vpnaas driver from scratch?


 Hi.

 I have asked in the QA website without success (
 https://ask.openstack.org/en/question/12072/how-to-implement-and-configure-a-new-vpnaas-driver-from-scratch/
 ).

 I want to develop a vpnaas implementation. It seems that since Havana,
 there are plugin, service and device implementations. I like the plugin
 and its current API, so I don't need to reimplement it. Now I want to
 implement a vpnaas driver, and I see I have two main parts to take into
 account: the service_drivers and the device_drivers. The IPsec/OpenSwan
 implementation is the only sample I've found.

 I'm using devstack to test my experiments.

 I tried to implement a VpnDriver Python class extending the main API
 methods like IPsecVPNDriver does. I placed basic implementation files at
 the same level as the IPsec/OpenSwan ones and configured Neutron by adding
 this line to the /etc/neutron/neutron.conf file:

 service_provider =
 VPN:VPNaaS:neutron.services.vpn.service_drivers.our_python_filename.OurClassName:default

 I restarted the Neutron-related services in my devstack instance, but it
 didn't seem to work.



 - What is the difference between service drivers and device drivers?
 - Could I implement only one of them?
 - Where do I need to put my Python implementation in my OpenStack instance?
 - How could I configure my OpenStack instance to use this implementation?



 I didn't find almost any documentation about these topics.

 Thank you very much.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance

2014-02-19 Thread Dong Liu

Jay, which MAC does it belong to? Is it a VM MAC, or the MAC of a floating IP?
If it is a VM MAC, you can associate any floating IP with the VM port.
If it is a floating IP MAC, I have no idea.

2014-02-19 11:44, Jay Lau :

Thanks Liu Dong.

In case you did not get my previous question, I am posting it again to see
if you can help.

Is it possible to bind MAC to a FLOATING IP?

Thanks,

Jay



2014-02-19 10:38 GMT+08:00 Dong Liu willowd...@gmail.com:

Yes, it does not work via the dashboard.

Dong Liu

On 2014-02-19 8:11, Jay Lau wrote:

Thanks Dong for the great help, it does work with the command line!

This seems not available via dashboard, right?

Thanks,

Jay



2014-02-19 1:11 GMT+08:00 Dong Liu willowd...@gmail.com:


 Hi Jay,

 In the neutron API, you can create a port with a specified mac_address
 and fixed_ip, and then create the VM with this port.
 But you need to manage the mapping between them yourself.
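As a sketch of that flow, the port-create request body pinning both the
MAC and the IP could be built like this (the IDs and the helper name are
illustrative; the dict is the shape that a POST to /v2.0/ports expects):

```python
def build_port_request(network_id, subnet_id, mac, ip):
    """Body for a Neutron port-create that pins both MAC and fixed IP.

    Pass the result to python-neutronclient's create_port(), then boot
    the VM with nova's --nic port-id=<port_uuid> pointing at the port.
    """
    return {'port': {'network_id': network_id,
                     'mac_address': mac,
                     'fixed_ips': [{'subnet_id': subnet_id,
                                    'ip_address': ip}]}}

# One entry from the MAC/IP pool in the example below:
body = build_port_request('net-id', 'subnet-id',
                          '78:2b:cb:af:78:b0', '192.168.0.10')
```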


 On 18 February 2014, at 22:41, Jay Lau jay.lau@gmail.com wrote:


   Greetings,
  
   Not sure if it is suitable to ask this question on the
 openstack-dev list. Here comes a question related to networking; I want
 to get some input or comments from you experts.
  
   My case is this: for security reasons, I want to put both the
 MAC and the internal IP address into a pool, and when creating a VM, get
 a MAC and its mapped IP address and assign them to the VM.
  
   For example, suppose I have following MAC and IP pool:
   1) 78:2b:cb:af:78:b0, 192.168.0.10
   2) 78:2b:cb:af:78:b1, 192.168.0.11
   3) 78:2b:cb:af:78:b2, 192.168.0.12
   4) 78:2b:cb:af:78:b3, 192.168.0.13
  
   Then I can create four VMs using the above MAC and IP
 addresses; each row above maps to one VM.
  
   Does any of you have any idea for the solution of this?
  
   --
   Thanks,
  
   Jay




--
Thanks,

Jay






--
Thanks,

Jay






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Meetup Summary

2014-02-19 Thread Khanh-Toan Tran
 Agreed. I'm just thinking about the opportunity of providing a REST API
 on top of the scheduler RPC API with a 1:1 matching, so that the Gantt
 project would step up by itself. I don't think it's hard, given that I
 already did that for Climate (providing a Pecan/WSME API). What do you
 think about it? Even if it's not top priority, that's a quick win.



Well, I'm not sure about "quick win", though. :-) I think that we should
focus on the main objective of having a self-contained Gantt working with
Nova first. Some of the interaction issues still worry me, especially the
host_state and host_update queries. These issues will have an impact on the
Gantt API (at least for Nova to use), so I'm not sure the current RPC API
will hold up either. But I will not discourage any personal effort. :-)







From: Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Sent: Tuesday, 18 February 2014 22:41
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] Meetup Summary



Hi Don,





2014-02-18 21:28 GMT+01:00 Dugger, Donald D donald.d.dug...@intel.com:

Sylvain-



As you can tell from the meeting today, the scheduler sub-group meeting
really is the gantt group meeting; I try to make sure that messages for
things like the agenda include both 'gantt' and 'scheduler' in the subject
so it's clear we're talking about the same thing.





That's the main reason why I was unable to attend the previous scheduler
meetings...

Now that I have attended today's meeting, it's quite clear to me. I
apologize for the misunderstanding, but as I can't dedicate all my time to
Gantt/Nova, I have to make sure the time I spend on it is worth it.



Now that we have agreed on a plan for the next steps, I think it's
important to put the info into Gantt blueprints, even if most of the
changes are related to Nova. The current etherpad is huge, and IMHO it
frightens people who would want to contribute.





Note that our ultimate goal is to create a scheduler that is usable by
other projects, not just nova, but that is a second task.  The first task
is to create a separate scheduler that will be usable by nova at a
minimum.  (World domination will follow later. :-))





Agreed. I'm just thinking about the opportunity of providing a REST API on
top of the scheduler RPC API with a 1:1 matching, so that the Gantt
project would step up by itself. I don't think it's hard, given that I
already did that for Climate (providing a Pecan/WSME API). What do you
think about it? Even if it's not top priority, that's a quick win.



-Sylvain



--

Don Dugger

Censeo Toto nos in Kansa esse decisse. - D. Gale

Ph: 303/443-3786



From: Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Sent: Monday, February 17, 2014 4:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] Meetup Summary



Hi Russell and Don,



2014-02-17 23:41 GMT+01:00 Russell Bryant rbry...@redhat.com:

Greetings,


2) Gantt  - We discussed the progress of the Gantt effort.  After
discussing the problems encountered so far and the other scheduler work
going on, the consensus was that we really need to focus on decoupling
the scheduler from the rest of Nova while it's still in the Nova tree.

Don was still interested in working on the existing gantt tree to learn
what he can about the coupling of the scheduler to the rest of Nova.
Nobody had a problem with that, but it doesn't sound like we'll be ready
to regenerate the gantt tree to be the real gantt tree soon.  We
probably need another cycle of development before it will be ready.

As a follow-up to this, I wonder if we should rename the current gantt
repository from openstack/gantt to stackforge/gantt to avoid any
possible confusion.  We should make it clear that we don't expect the
current repo to be used yet.



There is currently no dedicated meeting timeslot for Gantt other than the
Nova scheduler subteam one. Would it be possible to get a status update on
the current path for Gantt, so that people interested in joining the
effort can get in?



There is currently a discussion on how Gantt and Nova should interact, in
particular regarding HostState and how Nova computes could update their
status so that Gantt would be able to filter on them. There are also other
discussions about testing, the API, etc., so I'm just wondering how to
help and where.



On a side note, if Gantt is becoming a Stackforge project targeting Nova
scheduling first, could we also assume that the service could be
implemented for use by other projects (such as Climate) in parallel with
Nova?

The current utilization-aware-scheduling blueprint is nearly done, so it
can serve queries other than just Nova scheduling; but unfortunately, as
the scheduler is still part of Nova and has no REST API, it can't be
leveraged by third-party projects.





Thanks,

-Sylvain



[1] :

Re: [openstack-dev] [solum] Question about solum-minimal-cli BP

2014-02-19 Thread Shaunak Kashyap
Thanks Angus but I think I have managed to get confused again :)

So let me take a step back. From a user's perspective, what is the least
number of steps they would need to take in order to have a running
application with Solum? I understand there might be two variations on this
- git-push and git-pull - and the answer may be different for each.

If this is documented somewhere, I'm happy to peruse that instead; just
point me to it.

Thanks,

Shaunak

From: Angus Salkeld [angus.salk...@rackspace.com]
Sent: Tuesday, February 18, 2014 6:13 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [solum] Question about solum-minimal-cli BP

On 18/02/14 14:19 +, Shaunak Kashyap wrote:
Thanks Angus and Devdatta. I think I understand.

Angus -- what you said seems to mirror the Heroku CLI usage: a) User runs 
app/plan create (to create the remote repo), then b) user runs git push 
... (which pushes the code to the remote repo and creates 1 assembly, 
resulting in a running application). If this is the intended flow for the 
user, it makes sense to me.

Just to be clear, I am not totally sure we are going to glue git repo
generation to create plan (it *could* be part of create assembly).


One follow up question: under what circumstances will the user need to 
explicitly run assembly create? Would it be used exclusively for adding more 
assemblies to an already running app?

This applies if you are not using the git-push mechanism but git-pull:
here you have your own repo (say on github) and there is no
git-repo-generation phase.

-Angus


Thanks,

Shaunak


From: Angus Salkeld [angus.salk...@rackspace.com]
Sent: Monday, February 17, 2014 5:54 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [solum] Question about solum-minimal-cli BP

On 17/02/14 21:47 +, Shaunak Kashyap wrote:
Hey folks,

I was reading through 
https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-implementation
 and have a question.

If I’m understanding “app create” and “assembly create” correctly, the user 
will have to run “app create” first, followed by “assembly create” to have a 
running application. Is this correct? If so, what is the reason for “app 
create” not automatically creating one assembly as well?

On that page it seems that app create is the same as plan create.

The only reason I can see for separating the plan from the assembly is
when you have git-push.
Then you need to have something create the git repo for you.

1. plan create (with a reference to a git-push requirement) would create
   the remote git repo for you.
2. You clone and populate the repo with your app code.
3. You push, and that causes the assembly create/update.

Adrian might want to correct me here though.

-Angus


Thanks,
Shaunak

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][nova] Neutron plugin authors: Does port status indicate liveness?

2014-02-19 Thread Mathieu Rohon
Hi Aaron,

You seem to have abandoned this patch:
https://review.openstack.org/#/c/74218/

You want neutron to update the port in nova; can you please tell us how
you plan to do that?

I think we should use such a mechanism for live-migration: live-migration
should occur only once the port is set up on the destination host. This
could potentially resolve this bug:

https://bugs.launchpad.net/neutron/+bug/1274160

Best,

Mathieu

On Tue, Feb 18, 2014 at 2:55 AM, Aaron Rosen aaronoro...@gmail.com wrote:
 Hi Maru,

 Thanks for getting this thread started. I've filed the following blueprint
 for this:

 https://blueprints.launchpad.net/nova/+spec/check-neutron-port-status

 and have a have a prototype of it working here:

 https://review.openstack.org/#/c/74197/
 https://review.openstack.org/#/c/74218/

 One part that threw me a little while getting this working is that, when
 using ovs and the new libvirt_vif_driver LibvirtGenericVifDriver, nova no
 longer calls ovs-vsctl to set external_ids:iface-id; libvirt automatically
 does that for you. Unfortunately, this data seems to only make it to ovsdb
 when the instance is powered on. Because of this I needed to add back
 those calls, as neutron needs this data to be set in ovsdb before it can
 start wiring the ports.

 I'm hoping this change will help with
 https://bugs.launchpad.net/neutron/+bug/1253896 but we'll see. I'm not
 sure if it's too late to merge this in icehouse, but it might be worth
 considering if we find that it helps reduce gate failures.

 Best,

 Aaron


 On Thu, Feb 13, 2014 at 3:31 AM, Mathieu Rohon mathieu.ro...@gmail.com
 wrote:

 +1 for this feature which could potentially resolve a race condition
 that could occur after port-binding refactoring in ML2 [1].
 in ML2, the port could be ACTIVE once a MD has bound the port. the
 vif_type could then be known by nova, and nova could create the
 network correctly thanks to vif_type and vif_details ( with
 vif_security embedded [2])


 [1]http://lists.openstack.org/pipermail/openstack-dev/2014-February/026750.html
 [2]https://review.openstack.org/#/c/72452/

 On Thu, Feb 13, 2014 at 3:13 AM, Maru Newby ma...@redhat.com wrote:
  Booting a Nova instance when Neutron is enabled is often unreliable due
  to the lack of coordination between Nova and Neutron apart from port
  allocation.  Aaron Rosen and I have been talking about fixing this by 
  having
  Nova perform a check for port 'liveness' after vif plug and before vm boot.
  The idea is to have Nova fail the instance if its ports are not seen to be
  'live' within a reasonable timeframe after plug.  Our initial thought is
  that the compute node would call Nova's networking subsystem which could
  query Neutron for the status of the instance's ports.
 
  The open question is whether the port 'status' field can be relied upon
  to become ACTIVE for all the plugins currently in the tree.  If this is not
  the case, please reply to this thread with an indication of how one would 
  be
  able to tell the 'liveness' of a port managed by the plugin you maintain.
 
  In the event that one or more plugins cannot reliably indicate port
  liveness, we'll need to ensure that the port liveness check can be
  optionally disabled so that the existing behavior of racing vm boot is
  maintained for plugins that need it.
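The check being described could be sketched as a simple polling loop
(names and parameters here are illustrative, not the proposed Nova code):

```python
import time

def wait_for_ports_active(list_ports, device_id, timeout=300, interval=2):
    """Poll Neutron until every port of the instance reports ACTIVE.

    list_ports is any callable returning the instance's ports as dicts
    with a 'status' key. Returns False when the deadline passes, letting
    the caller fail the boot instead of racing it.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        ports = list_ports(device_id)
        if ports and all(p['status'] == 'ACTIVE' for p in ports):
            return True
        time.sleep(interval)
    return False
```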
 
  Thanks in advance,
 
 
  Maru
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] async / threading for python 2 and 3

2014-02-19 Thread Julien Danjou
On Wed, Feb 19 2014, Angus Salkeld wrote:

 2) use tulip and give up python 2

+ use trollius to have Python 2 support.

  https://pypi.python.org/pypi/trollius

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova]improvement-of-accessing-to-glance

2014-02-19 Thread Eiichi Aikawa
Hello,

Thanks for your comment and sorry for my late response.

IMHO, we should be adding more info to the endpoint lists (like location)
in keystone and use that info from the Glance client to determine which
glance-api the compute should talk to.

I understand your idea is one possibility, but we do not think it is
enough as a solution to the problem.

We think keystone cannot control this, because keystone cannot know
which Glance servers and Nova nodes are located in the same chassis.

We think the suggestion in our bp - that each Nova node has a server list
so it uses the Glance API server in the same chassis first - is a
reasonable and simple way to control this.

I added additional material to the bp page. Please see it.

Regards,
E.Aikawa (aik...@mxk.nes.nec.co.jp)



-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: Tuesday, February 11, 2014 6:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Takemori, Seishi(武森, 清市)
Subject: Re: [openstack-dev] [glance][nova]improvement-of-accessing-to-glance

On 10/02/14 05:28 +, Eiichi Aikawa wrote:
Hi, all,

Thanks for your comment.

Please let me re-explain.
The main purpose of our blueprint is to use network resources more efficiently.

To achieve this purpose, we suggested the method of using two lists.
We think, as I wrote before, that by listing nearby glance API servers and
using them, the total amount of data transferred across the network can be
reduced. Especially when using a microserver, the communication
destination can be limited to within the same chassis.

In addition, we think we can recover a failed server while the glance API
servers on the secondary list are being used. As a result, we can provide
higher availability than the current spec.
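A sketch of the two-list selection being proposed (the function and
parameter names are invented for illustration; this is not the
blueprint's actual code):

```python
import random

def pick_glance_server(primary, secondary, is_alive):
    """Prefer a live glance-api server from the primary (same-chassis)
    list; fall back to the secondary list only when none responds."""
    for pool in (primary, secondary):
        live = [server for server in pool if is_alive(server)]
        if live:
            return random.choice(live)
    raise RuntimeError('no reachable glance-api server')
```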

This bp can provide high efficiency and high availability, but it seems
you think our idea was not so good.

Please let me know which component you think should be changed.

I understood that. I just don't think Nova is the right place to do it. I
think this requires more than a list of weighted glance-api nodes in the
compute server. If we want to do this right, IMHO, we should add more info
to the endpoint lists (like location) in keystone and use that info from
the Glance client to determine which glance-api the compute should talk to.

I'm assuming you're planning to add a new configuration option to Nova in
which you'll specify a list of Glance nodes. If this is true, I'd highly
discourage doing that. Nova has enough configuration options already, and
the compute nodes' configs are already quite different. Adding this would
mean making nova do things it shouldn't do and making its configuration
more complex than it already is.

That said, I think the idea of selecting the image nodes that nova speaks
to is a great one, so by all means keep investigating it; just try to make
it not nova-specific.

[...]

Cheers,
Fla.


--
@flaper87
Flavio Percoco
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance

2014-02-19 Thread Jay Lau
Thanks Liu Dong.

It is a VM MAC address, so do you have any idea how I can make sure the
VM MAC address is bound to a floating IP address?

Also, what do you mean by a "floatingip mac"?

Thanks very much for your kind help; it has really helped me a lot!

Thanks,

Jay



2014-02-19 16:21 GMT+08:00 Dong Liu willowd...@gmail.com:

 Jay, what the mac belong to? Is it a vm mac, or a mac of floatingip.
 If it is a vm mac, you can associate any floatingip to vm port.
 If it is a floatingip mac, I have no idea.


-- 
Thanks,

Jay

Re: [openstack-dev] [Neutron][LBaaS] L7 data types

2014-02-19 Thread Avishay Balderman
Hi

- I will add HTTP_METHOD to the 'type' enum of L7Rule.

- GT, LT, GE, LE: at this phase I prefer to keep the string-based
'compare_type' values and not add these number-based compare types.

- FILE_NAME, FILE_TYPE: these two are the result of URL fragmentation.
Example: for http://myserver/something/images/mypic.png, FILE_NAME =
mypic and FILE_TYPE = png.

thanks

Avishay

From: Oleg Bondarev [mailto:obonda...@mirantis.com]
Sent: Wednesday, February 19, 2014 9:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] L7 data types

Hi folks,
please see a few comments inline.

On Wed, Feb 19, 2014 at 12:51 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:
A couple quick suggestions (additions):


 Entity: L7Rule

 - Attribute: type
   - Possible values:
     - HTTP_METHOD

 - Attribute: compare_type
   - Possible values:
     - GT (greater than)
     - LT (less than)
     - GE (greater than or equal to)
     - LE (less than or equal to)
Will we be doing syntax checking based on the L7Rule type being presented?
(e.g. if we're going to check that HEADER X has a value greater than Y,
will we make sure that Y is an integer? Or if we're going to check that
the PATH STARTS_WITH Z, will we make sure that Z is a non-zero-length
string?)
I think we should do these checks at the plugin level (the API level
doesn't support such checks at the moment).
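A plugin-level check along those lines could look like the following
sketch; the attribute and enum names mirror the proposal in this thread,
not a merged API, and the helper name is invented:

```python
NUMERIC_COMPARES = ('GT', 'LT', 'GE', 'LE')
STRING_COMPARES = ('CONTAINS', 'STARTS_WITH', 'ENDS_WITH')

def validate_l7rule_value(compare_type, value):
    """Reject rule values that cannot work with the chosen compare_type."""
    if compare_type in NUMERIC_COMPARES:
        try:
            int(value)
        except (TypeError, ValueError):
            raise ValueError('%s comparison requires an integer value'
                             % compare_type)
    elif compare_type in STRING_COMPARES and not value:
        raise ValueError('%s comparison requires a non-empty string'
                         % compare_type)
    return True
```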

Thanks,
Stephen

On Tue, Feb 18, 2014 at 3:58 AM, Avishay Balderman avish...@radware.com wrote:
Here are the suggested values for the attributes below:

Entity: L7Rule

- Attribute: type
  - Possible values:
    - HOST_NAME
    - PATH
    - FILE_NAME
    - FILE_TYPE
Can somebody please clarify what FILE_NAME and FILE_TYPE mean? I just
can't find corresponding matching criteria in haproxy.

    - HEADER
    - COOKIE

- Attribute: compare_type
  - Possible values:
    - EQUAL
    - CONTAINS
    - REGEX
    - STARTS_WITH
    - ENDS_WITH

Entity: L7VipPolicyAssociation

- Attribute: action
  - Possible values:
    - POOL (must have a pool id)
    - REDIRECT (must have a URL to be used as the redirect destination)
    - REJECT


From: Oleg Bondarev [mailto:obonda...@mirantis.com]
Sent: Monday, February 17, 2014 9:17 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] L7 data types

Hi,

I would add another candidate for being a closed set: 
L7VipPolicyAssociation.action (use_backend, block, etc.)

Thanks,
Oleg

On Sun, Feb 16, 2014 at 3:53 PM, Avishay Balderman avish...@radware.com wrote:
(removing extra space from the subject – let email clients apply their filters)

From: Avishay Balderman
Sent: Sunday, February 16, 2014 9:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] L7 data types

Hi
There are 2 fields in the L7 model that are candidates for being a closed set 
(Enum).
I would like to hear your opinion.

Entity: L7Rule
Field: type
Description: this field holds the part of the request where we should look
for a value.
Possible values: URL, HEADER, BODY, (?)

Entity: L7Rule
Field: compare_type
Description: the way we compare the value against a given value.
Possible values: REG_EXP, EQ, GT, LT, EQ_IGNORE_CASE, (?)
Note: with REG_EXP we can cover the rest of the values.

In general, an L7Rule lets one express the following (example):
“check whether the value of the header named ‘Jack’ starts with X” – if this is
true, the rule “returns” true
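If these fields do end up as closed sets, they map naturally onto enumerations. Below is a minimal, illustrative Python sketch of the candidate values; the names follow the proposals in this thread, the evaluate() helper is purely hypothetical, and FILE_NAME/FILE_TYPE are omitted pending clarification:

```python
import re
from enum import Enum

class L7RuleType(Enum):
    """Candidate closed set: which part of the request the rule inspects."""
    HOST_NAME = "HOST_NAME"
    PATH = "PATH"
    HEADER = "HEADER"
    COOKIE = "COOKIE"

class L7CompareType(Enum):
    """Candidate closed set: how the extracted value is compared."""
    EQUAL = "EQUAL"
    CONTAINS = "CONTAINS"
    REGEX = "REGEX"
    STARTS_WITH = "STARTS_WITH"
    ENDS_WITH = "ENDS_WITH"

def evaluate(compare_type, value, expected):
    """Evaluate one comparison, e.g. 'header Jack starts with X'."""
    if compare_type is L7CompareType.EQUAL:
        return value == expected
    if compare_type is L7CompareType.CONTAINS:
        return expected in value
    if compare_type is L7CompareType.REGEX:
        return re.search(expected, value) is not None
    if compare_type is L7CompareType.STARTS_WITH:
        return value.startswith(expected)
    if compare_type is L7CompareType.ENDS_WITH:
        return value.endswith(expected)
    raise ValueError(compare_type)

# The example from the thread: header named 'Jack' starts with "X".
print(evaluate(L7CompareType.STARTS_WITH, "X-token-123", "X"))  # True
```

As Avishay notes, REGEX alone could subsume the others; keeping the explicit values mainly buys readability and easier mapping onto backends that have native equivalents.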


Thanks

Avishay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Swift] Profiling issue, need help!

2014-02-19 Thread Hua ZZ Zhang


Hi Stackers,

I'm working on patch 53270 for the Swift profiling middleware and was
blocked by a very annoying problem in the python eventlet profiler, which
inherits from the python standard profiler. It sometimes raises an
AssertionError of 'Bad call' or 'Bad return' in the hooked
trace_dispatch_call and trace_dispatch_return functions. The eventlet
profiler extends the return calls of the standard profiler at line 116.
The problem goes away if I switch back to the python standard profiler, so
I guess it is specific to the eventlet profiler; it may not correctly
handle some special cases.

The 2 places in the code of the standard profiler that complain:
1) https://github.com/python-git/python/blob/master/Lib/profile.py#L299
assert rframe.f_back is frame.f_back, ("Bad call", rfn, rframe,
rframe.f_back, frame, frame.f_back)

2) https://github.com/python-git/python/blob/master/Lib/profile.py#L330
assert frame is self.cur[-2].f_back, ("Bad return", self.cur[-3])

I don't understand why assert is used there. What does it mean when one of
them fires unexpectedly?
- The profiler crashes and the profiling results can't be used because of
an unexpected call? (That would match the practice of using assert to
check a call contract.)
- The results can still be used, just less precisely, and you need to
catch the AssertionError?
- Or is it actually a bug that needs to be fixed here?
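For what it's worth, one way to experiment with the "catch the AssertionError" option is to subclass the standard profiler and tolerate the mismatched frame. A sketch follows; note this only degrades the data for the affected call instead of aborting the run, it is a workaround rather than a fix for the eventlet issue, and the dispatch table must be re-registered because profile.Profile looks handlers up in a class-level dict:

```python
import profile

class TolerantProfile(profile.Profile):
    # Swallow the 'Bad return' assertion so that a frame mismatch (as seen
    # under eventlet's greenlet switching) drops the sample instead of
    # crashing the whole profiling run.
    def trace_dispatch_return(self, frame, t):
        try:
            return profile.Profile.trace_dispatch_return(self, frame, t)
        except AssertionError:
            return 1  # pretend the event was handled; timing is degraded

    # profile.Profile dispatches events through this class-level table,
    # so the override above must be installed here as well.
    dispatch = dict(profile.Profile.dispatch)
    dispatch["return"] = trace_dispatch_return

p = TolerantProfile()
result = p.runcall(sum, range(10))
print(result)  # 45
```

Whether silently dropping samples is acceptable obviously depends on the answer to the contract question above.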

When I looked into the eventlet profiler, I was also very curious about two
lines of code in the eventlet profile module. They don't make sense to me
since they are not reachable in any case. Can anybody explain why?
   
https://github.com/eventlet/eventlet/blob/master/eventlet/green/profile.py#L103
   
https://github.com/eventlet/eventlet/blob/master/eventlet/green/profile.py#L110


Here's an example of stack trace:
---
Traceback (most recent call last):

  File "/opt/stack/swift/bin/swift-proxy-server", line 23, in <module>
    sys.exit(run_wsgi(conf_file, 'proxy-server', default_port=8080,
**options))

  File /opt/stack/swift/swift/common/wsgi.py, line 407, in run_wsgi
run_server(conf, logger, sock)

  File /opt/stack/swift/swift/common/wsgi.py, line 335, in run_server
wsgi.server(sock, app, NullLogger(), custom_pool=pool)

  File /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py, line 693,
in server
client_socket = sock.accept()

  File /usr/local/lib/python2.7/dist-packages/eventlet/greenio.py, line
183, in accept
timeout_exc=socket.timeout(timed out))

  File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/__init__.py,
line 155, in trampoline
return hub.switch()

  File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line
187, in switch
return self.greenlet.switch()

  File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line
236, in run
self.wait(sleep_time)

  File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py, line
113, in wait
self.block_detect_post()

  File /usr/lib/python2.7/profile.py, line 211, in trace_dispatch

if self.dispatch[event](self, frame,t):

  File /opt/stack/swift/swift/common/middleware/profile.py, line 239, in
trace_dispatch_return_extend_back
return self.trace_dispatch_return(frame, t);

  File "/usr/lib/python2.7/profile.py", line 312, in trace_dispatch_return
    assert frame is self.cur[-2].f_back, ("Bad return", frame.f_code,
self.cur[-2].f_back.f_code)

AssertionError: ('Bad return', <code object wait at 0x1a15030, file
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 75>,
<code object wait at 0x1191030, file
"/usr/local/lib/python2.7/dist-packages/eventlet/queue.py", line 123>)
---

Another paste example of stack trace is here.

-Edward Zhang
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] [neutron] [ml2] Neutron and ML2 - adding new network type

2014-02-19 Thread Sławek Kapłoński

Hello,

I added my own type_driver and it looks like it works, because I can create
a network with my type. But how do I add my own mechanism_driver (for any
network type)? I think this should be possible too. Can someone
send me some info about that, or maybe a link to some documentation about
it?


Thanks in advance
Slawek

On 2014-02-18 22:53, Sławek Kapłoński wrote:

Hello,

Thanks for the answer.
I want to add my own network type which will be very similar to a flat
network (in the type_driver I think it will be the same) but will assign
IPs to instances in a different way (not exactly with some L2 protocol).
I want to add my own network type because I want it to have its own name
so that I can distinguish it.

Maybe there is another reason to do that.

--
Best regards
Sławek Kapłoński

On Tuesday, 18 February 2014 at 10:08:50, you wrote:

[Moving to -dev list]

On Feb 18, 2014, at 9:12 AM, Sławek Kapłoński sla...@kaplonski.pl 
wrote:

 Hello,

 I'm trying to make something with neutron and the ML2 plugin. Now I need to
 add my own external network type (as there are Flat, VLAN, GRE and
 so on). I searched for manuals on that but I couldn't find anything. Can
 one of you explain how I should do that? Is it enough to add my own
 type_driver and mechanism_driver to ML2? Or should I do something else
 as well?
Hi Sławek:

Can you explain more about what you’re looking to achieve here? I’m just
curious how the existing TypeDrivers won’t cover your use case. ML2 was
designed to remove segmentation management from the MechanismDrivers
so they could all share segment types. Perhaps understanding what you’re
trying to achieve would help better understand the approach to take here.


Thanks,
Kyle

 Thanks in advance
 --
 Sławek Kapłoński
 sla...@kaplonski.pl

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Meetup Summary

2014-02-19 Thread Sylvain Bauza
Hi Toan-Tran,


2014-02-19 9:40 GMT+01:00 Khanh-Toan Tran khanh-toan.t...@cloudwatt.com:

  Agreed. I'm just thinking about the opportunity of providing a REST API
  on top of the scheduler RPC API with a 1:1 mapping, so that the Gantt
  project could step up by itself. I don't think it's hard stuff, provided
  I already did that stuff for Climate (providing a Pecan/WSME API). What
  do you think about it? Even if it's not top priority, that's a quick win.



 Well, I'm not sure about a quick win, though :) I think that we should
 focus on the main objective of having a self-contained Gantt working with
 Nova first. Some of the interaction issues still worry me, especially the
 host_state & host_update queries. These issues will have an impact on the
 Gantt API (at least for Nova to use), so I'm not sure the current RPC API
 will hold up either. But I will not discourage any personal effort :)




Well, about the two things: first, the REST API would just be a 1:1 mapping
of the scheduler RPC API, i.e. select_destinations(), run_instance() and
prep_resize(). The arguments passed to the object would just be serialized
in a POST query as JSON/XML data and deserialized/passed to the RPC API.
Think of it as a REST wrapper, no hard stuff.

About host_state, that's something that was discussed yesterday: Gantt
first has to provide a Python lib for the calls to update_from_compute_node,
but that could also be managed through a CLI Python binding later on.
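To make the "REST wrapper, no hard stuff" point concrete, here is a rough, self-contained sketch of the 1:1 mapping idea; in a real implementation Pecan/WSME would handle routing and the handler would call the scheduler RPC client, which is stubbed here, and all names and return values are placeholders:

```python
import json

# Stub standing in for the scheduler RPC client; the real wrapper would
# forward the deserialized arguments to nova's scheduler rpcapi.
def select_destinations(context, request_spec, filter_properties):
    return [{"host": "compute-1", "nodename": "compute-1"}]

RPC_METHODS = {"select_destinations": select_destinations}

def handle_post(path, body):
    """Map POST /<method> with a JSON body onto the matching RPC call."""
    method = path.strip("/").rsplit("/", 1)[-1]
    kwargs = json.loads(body)            # deserialize the POSTed arguments
    result = RPC_METHODS[method](**kwargs)
    return json.dumps(result)            # serialize the RPC result back out

resp = handle_post(
    "/select_destinations",
    json.dumps({"context": {}, "request_spec": {}, "filter_properties": {}}))
print(resp)
```

The same dispatch shape would cover run_instance() and prep_resize(), which is why the wrapper stays thin regardless of how the host_state question is resolved.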

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] [neutron] [ml2] Neutron and ML2 - adding new network type

2014-02-19 Thread Akihiro Motoki
Hi,

I think you are trying to use different IP allocation algorithm
for a network based on some attribute of the network.
network_type of the provider network specifies how layer2 network
is segmented and ML2 type drivers are defined per network_type.
I think it is different from your need.

IMO it looks better to introduce a new attribute to select
IP allocation algorithm to network resource.
The idea of Pluggable IP allocation algorithms exists for
a long time in Neutron community but the progress is not good.
Once pluggable mechanism is implemented, we need a way to
map networks to IP allocation algorithms and this kind of
new attribute is one possible choice.

Thanks,
Akihiro

(2014/02/19 6:53), Sławek Kapłoński wrote:
 Hello,

 Thanks for the answer.
 I want to add my own network type which will be very similar to a flat network (in
 the type_driver I think it will be the same) but will assign IPs to instances in
 a different way (not exactly with some L2 protocol). I want to add my own network
 type because I want it to have its own name so that I can distinguish it.
 Maybe there is another reason to do that.

 --
 Best regards
 Sławek Kapłoński

 On Tuesday, 18 February 2014 at 10:08:50, you wrote:
 [Moving to -dev list]

 On Feb 18, 2014, at 9:12 AM, Sławek Kapłoński sla...@kaplonski.pl wrote:
 Hello,

 I'm trying to make something with neutron and the ML2 plugin. Now I need to
 add my own external network type (as there are Flat, VLAN, GRE and
 so on). I searched for manuals on that but I couldn't find anything. Can
 one of you explain how I should do that? Is it enough to add my own
 type_driver and mechanism_driver to ML2? Or should I do something else
 as well?
 Hi Sławek:

 Can you explain more about what you’re looking to achieve here? I’m just
 curious how the existing TypeDrivers won’t cover your use case. ML2 was
 designed to remove segmentation management from the MechanismDrivers
 so they could all share segment types. Perhaps understanding what you’re
 trying to achieve would help better understand the approach to take here.

 Thanks,
 Kyle

 Thanks in advance
 --
 Sławek Kapłoński
 sla...@kaplonski.pl

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] [neutron] [ml2] Neutron and ML2 - adding new network type

2014-02-19 Thread Sławek Kapłoński

Hello,

In fact I want to make something similar to a flat network, but when an IP
is assigned to an instance (on port binding, yes?) it should be announced
in the network not with arping but with BGP (a BGP server is installed on
the host). I know that BGP is not an L2 protocol, but I want to try it. So
I want to write and use my own mechanism driver (or maybe I'm wrong and
there is something else for making such things). So do you know how to add
my own mechanism driver and use it in neutron?
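On the mechanism-driver half of the question: an ML2 mechanism driver is a class implementing the ML2 driver interface, registered as a stevedore entry point (group neutron.ml2.mechanism_drivers in your package's setup.cfg) and enabled via the mechanism_drivers option in ml2_conf.ini. Below is a self-contained sketch of the BGP idea; the base class here is only a stand-in for neutron.plugins.ml2.driver_api.MechanismDriver so the example runs on its own, and announce_route is entirely hypothetical:

```python
class MechanismDriver(object):
    """Stand-in for neutron's ML2 MechanismDriver base class."""
    def initialize(self):
        pass
    def create_port_postcommit(self, context):
        pass

ANNOUNCED = []  # stands in for routes pushed to the host's BGP speaker

def announce_route(ip):
    # Placeholder: here you would talk to the local BGP daemon
    # instead of sending a gratuitous ARP for the address.
    ANNOUNCED.append(ip)

class BgpAnnounceDriver(MechanismDriver):
    def create_port_postcommit(self, context):
        # In the real ML2 API, context.current is the port dict.
        for fixed_ip in context.current.get("fixed_ips", []):
            announce_route(fixed_ip["ip_address"])

class FakePortContext(object):
    current = {"fixed_ips": [{"ip_address": "10.0.1.2"}]}

BgpAnnounceDriver().create_port_postcommit(FakePortContext())
print(ANNOUNCED)  # ['10.0.1.2']
```

With a real driver, enabling it would then look like mechanism_drivers = openvswitch,bgp_announce in the [ml2] section of ml2_conf.ini, assuming the entry point is named bgp_announce.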


Thanks in advance
Slawek

On 2014-02-19 11:26, Akihiro Motoki wrote:

Hi,

I think you are trying to use different IP allocation algorithm
for a network based on some attribute of the network.
network_type of the provider network specifies how layer2 network
is segmented and ML2 type drivers are defined per network_type.
I think it is different from your need.

IMO it looks better to introduce a new attribute to select
IP allocation algorithm to network resource.
The idea of Pluggable IP allocation algorithms exists for
a long time in Neutron community but the progress is not good.
Once pluggable mechanism is implemented, we need a way to
map networks to IP allocation algorithms and this kind of
new attribute is one possible choice.

Thanks,
Akihiro

(2014/02/19 6:53), Sławek Kapłoński wrote:

Hello,

Thanks for the answer.
I want to add my own network type which will be very similar to a flat
network (in the type_driver I think it will be the same) but will assign
IPs to instances in a different way (not exactly with some L2 protocol).
I want to add my own network type because I want it to have its own name
so that I can distinguish it.

Maybe there is another reason to do that.

--
Best regards
Sławek Kapłoński

On Tuesday, 18 February 2014 at 10:08:50, you wrote:

[Moving to -dev list]

On Feb 18, 2014, at 9:12 AM, Sławek Kapłoński sla...@kaplonski.pl 
wrote:

Hello,

I'm trying to make something with neutron and the ML2 plugin. Now I need to
add my own external network type (as there are Flat, VLAN, GRE and
so on). I searched for manuals on that but I couldn't find anything. Can
one of you explain how I should do that? Is it enough to add my own
type_driver and mechanism_driver to ML2? Or should I do something else
as well?

Hi Sławek:

Can you explain more about what you’re looking to achieve here? I’m just
curious how the existing TypeDrivers won’t cover your use case. ML2 was
designed to remove segmentation management from the MechanismDrivers
so they could all share segment types. Perhaps understanding what you’re
trying to achieve would help better understand the approach to take here.


Thanks,
Kyle


Thanks in advance
--
Sławek Kapłoński
sla...@kaplonski.pl

___
Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Storing license information in openstack/requirements

2014-02-19 Thread Thierry Carrez
David Koo wrote:
 
 Should we store licensing information as a comment in the
 *-requirements files ? Can it be stored on the same line ? Something
 like:

 oslo.messaging=1.3.0a4  # Apache-2.0
 
 Since it's licenses we're tracking shouldn't we be tracking indirect
 dependencies too (i.e. packages pulled in by required packages)? And if
 we want to do that then the method above won't be sufficient.
 
 And, of course, we want an automated way of generating this info -
 dependencies (can) change from version to version. Do we have such a
 tool?

I think tracking licensing for first-level dependencies is a good start.
Basically, if we require a license-incompatible dependency it's clearly
our fault, whereas if a second-layer dependency requires a
license-incompatible dependency itself, we are just affected by their
mistake.

This is a first step, but it covers most of the issues we are trying to
prevent.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Storing license information in openstack/requirements

2014-02-19 Thread Thierry Carrez
Sean Dague wrote:
 Honestly, if we are going to track this, we should probably do the set
 of things that reviewers tend to do when running through these.
 
 License:
 Upstream Location:
 Ubuntu/Debian Package: Y/N? (url)
 Fedora Package: Y/N? (url)
 Suse Package: Y/N? (url)
 Last Release: Date (in case of abandonware)
 Python 3 support: Y/N? (informational only)
 
 I'd honestly stick that in a yaml file instead, and have something
 sanity check it on new requirements add.

Licensing is the only legally-binding issue at stake, the rest are
technically-binding issues that we consider when we accept or reject a
new dependency. I'm not saying there is no value in tracking that extra
information, just saying that we really need to track licensing. I don't
want perfection to get in the way of making baby steps towards making
things better.

Tracking licensing is a good first step, and having full licensing
coverage will take some time. We shouldn't block on YAML conversion or
full technical information... As a first step let's just accept patches
that mention licensing information in trailing comments, then if someone
wants to convert the requirements files to YAML so that they can contain
more information, great!
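Trailing license comments of the form David suggested stay easy to consume mechanically later, which keeps the door open for the YAML conversion. A small sketch of pulling them back out of a requirements file (the helper name is illustrative, not an existing tool):

```python
def parse_requirement(line):
    """Split 'pkg>=ver  # License' into (requirement, license-or-None)."""
    req, _, comment = line.partition("#")
    req = req.strip()
    if not req:                 # blank line or pure comment line
        return None
    return req, (comment.strip() or None)

lines = [
    "oslo.messaging>=1.3.0a4  # Apache-2.0",
    "some-lib>=1.0",            # no license recorded yet
]
parsed = [parse_requirement(line) for line in lines]
print(parsed)
```

A sanity-check job could then simply flag every entry whose license component is None, giving the gradual coverage Thierry describes without blocking on a format change.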

-- 
Thierry Carrez (ttx)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] Question about solum-minimal-cli BP

2014-02-19 Thread Angus Salkeld

On 19/02/14 08:52 +, Shaunak Kashyap wrote:

Thanks Angus but I think I have managed to get confused again :)

So let me take a step back. From a user's perspective, what is the smallest number
of steps they would need to take in order to have a running application with
Solum? I understand there might be two variations on this - git-push and
git-pull - and the answer may be different for each.

If this is documented somewhere, I'm happy to read through that instead; just
point me to it.


https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/GitIntegration



Thanks,

Shaunak

From: Angus Salkeld [angus.salk...@rackspace.com]
Sent: Tuesday, February 18, 2014 6:13 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [solum] Question about solum-minimal-cli BP

On 18/02/14 14:19 +, Shaunak Kashyap wrote:

Thanks Angus and Devdatta. I think I understand.

Angus -- what you said seems to mirror the Heroku CLI usage: a) User runs app/plan 
create (to create the remote repo), then b) user runs git push ... (which pushes 
the code to the remote repo and creates 1 assembly, resulting in a running application). If this is 
the intended flow for the user, it makes sense to me.


Just to be clear, I am not totally sure we are going to glue git repo
generation to create plan (it *could* be part of create assembly).



One follow up question: under what circumstances will the user need to explicitly run 
assembly create? Would it be used exclusively for adding more assemblies to 
an already running app?


If you are not using the git-push mechanism, but the git-pull.
Here you have your own repo (say on github) and there is not
a git-repo-generation phase.

-Angus



Thanks,

Shaunak


From: Angus Salkeld [angus.salk...@rackspace.com]
Sent: Monday, February 17, 2014 5:54 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [solum] Question about solum-minimal-cli BP

On 17/02/14 21:47 +, Shaunak Kashyap wrote:

Hey folks,

I was reading through 
https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-implementation
 and have a question.

If I’m understanding “app create” and “assembly create” correctly, the user 
will have to run “app create” first, followed by “assembly create” to have a 
running application. Is this correct? If so, what is the reason for “app 
create” not automatically creating one assembly as well?


On that page it seems that app create is the same as plan create.

The only reason I can see for separating the plan from the assembly is
when you have git-push.
Then you need to have something create the git repo for you.

1 plan create (with a reference to a git-push requirement) would create
  the remote git repo for you.
2 you clone and populate the repo with your app code
3 you push, and that causes the assembly create/update.

Adrian might want to correct me here tho'

-Angus



Thanks,
Shaunak



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [Nova] Nova feature proposal freeze has passed

2014-02-19 Thread John Garbutt
Hi,

Lets keep track of things for Nova here:
https://etherpad.openstack.org/p/nova-icehouse-blueprint-cull

In a few hours I will start reviewing all the blueprints that are not in
"Needs Code Review" and pushing them into next. Anything with some
live code will probably go into Juno-1, if that makes sense.

Then I will look at "Needs Code Review" blueprints to check they are
just blocked waiting for a review.

Do shout up if you feel like we misunderstood the current status of
your blueprint. If you think your blueprint has the wrong status,
please update it very soon, ideally yesterday.

Hopefully I can then publish a list of reviews we should concentrate
on, in the hope of getting as many of those blueprints with code
reviewed, as quickly as possible. (My script is almost ready for
that).

Thanks in advance for not hating me :)

johnthetubaguy


On 17 February 2014 10:07, Thierry Carrez thie...@openstack.org wrote:
 Hi everyone,

 Just a quick reminder that some of our projects have a deadline tomorrow
 (end of day, Feb 18th) for proposing Icehouse feature code for review.
 To my knowledge, this affects Nova, Neutron, Keystone and Cinder.

 This means that starting on Wednesday, Feb 19, blueprints for those
 projects that are not at Needs Code review status might get deferred
 to the Juno cycle. This shall allow better prediction of the features
 that are likely to make it for Icehouse, as well as a better review
 focus for the two weeks that separate us from Icehouse Feature Freeze
 (March 4).

 See https://wiki.openstack.org/wiki/FeatureProposalFreeze for more
 information.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance

2014-02-19 Thread Dong Liu
Sorry for replying so late.

Yes, that is what I mean. By the way, if you only need the floating IP to bind
to the VM MAC, you do not need to specify --fixed-ip; just specifying
--mac-address is enough.

What I mean by the floating IP's MAC is that when you create a floating IP,
neutron automatically creates a port using that public IP; this port has a
MAC address, and that is the one I mean.


On 19 February 2014, at 18:22, Jay Lau jay.lau@gmail.com wrote:

 Hi Liu Dong,
 
 Just found a solution for this as following, the method is using fixed ip as 
 a bridge for MAC and floating ip.
 
 Can you please help check if it is the way that you want me to do? If not, 
 can you please give some suggestion for your idea?
 
 Thanks,
 
 Jay
 
 ==My steps==
 Suppose I want to bind MAC fa:16:3e:9d:e9:11 to floating ip 9.21.52.22, I was 
 doing as following:
 
 1) Create a port for fixed ip with the MAC address fa:16:3e:9d:e9:11
 [root@db01b05 ~(keystone_admin)]#  neutron port-create IntAdmin  
 --mac-address fa:16:3e:9d:e9:11 --fixed-ip ip_address=10.0.1.2 
 Created a new port:
 +---+-+
 | Field | Value   
 |
 +---+-+
 | admin_state_up| True
 |
 | allowed_address_pairs | 
 |
 | binding:capabilities  | {port_filter: true}   
 |
 | binding:host_id   | 
 |
 | binding:vif_type  | ovs 
 |
 | device_id | 
 |
 | device_owner  | 
 |
 | fixed_ips | {subnet_id: 
 0fff20f4-142a-4e89-add1-5c5a79c6d54d, ip_address: 10.0.1.2} |
 | id| b259770d-7f9c-485a-8f84-bf7b1bbc5706
 |
 | mac_address   | fa:16:3e:9d:e9:11   
 |
 | name  | 
 |
 | network_id| fb1a75f9-e468-408b-a172-5d2b3802d862
 |
 | security_groups   | aa3f3025-ba71-476d-a126-25a9e3b34c9a
 |
 | status| DOWN
 |
 | tenant_id | f181a9c2b1b4443dbd91b1b7de716185
 |
 +---+-+
 [root@db01b05 ~(keystone_admin)]# neutron port-list | grep 10.0.1.2
 | b259770d-7f9c-485a-8f84-bf7b1bbc5706 |  | fa:16:3e:9d:e9:11 | 
 {subnet_id: 0fff20f4-142a-4e89-add1-5c5a79c6d54d, ip_address: 
 10.0.1.2}   |
 
 2) Create a floating ip with the port id created in step 1)
 [root@db01b05 ~(keystone_admin)]# neutron floatingip-create --port-id 
 b259770d-7f9c-485a-8f84-bf7b1bbc5706 Ex
 Created a new floatingip:
 +-+--+
 | Field   | Value|
 +-+--+
 | fixed_ip_address| 10.0.1.2 |
 | floating_ip_address | 9.21.52.22   |
 | floating_network_id | 9b758062-2be8-4244-a5a9-3f878f74e006 |
 | id  | 7c0db4ff-8378-4b91-9a6e-87ec06016b0f |
 | port_id | b259770d-7f9c-485a-8f84-bf7b1bbc5706 |
 | router_id   | 43ceb267-2a4b-418a-bc9a-08d39623d3c0 |
 | tenant_id   | f181a9c2b1b4443dbd91b1b7de716185 |
 +-+--+
 
 3) Boot the VM with the port id in step 1)
 [root@db01b05 ~(keystone_admin)]#  nova boot --image centos64-x86_64-cfntools 
 --flavor 2 --key-name adminkey --nic 
 port-id=b259770d-7f9c-485a-8f84-bf7b1bbc5706 vm0001
 +--+--+
 | Property | Value
 |
 +--+--+
 | OS-EXT-STS:task_state| scheduling   
 |
 | image| centos64-x86_64-cfntools 
 |
 | OS-EXT-STS:vm_state  | building 
 |
 

[openstack-dev] [Nova] Including Domains in Nova

2014-02-19 Thread Henrique Truta
Hi everyone.



It is necessary to make Nova support domain quotas and create a new
administrative perspective. Here are some reasons why Nova should support
domains:



1 - It's interesting to keep the main OpenStack components sharing the same
concept, since it has already been adopted in Keystone. In Keystone, the domain
defines administrative boundaries and makes management of its entities
easier.



2 - Nova shouldn't be so tied to projects. Keystone was created to
abstract concepts like these for other modules, like Nova. In addition, Nova
needs to be flexible enough to work with the new functionality that
Keystone will provide. If we keep Nova tied to projects (or domains),
we will be far from Nova's focus, which is providing compute services.



3 - There is also the Domain Quota Driver BP (
https://blueprints.launchpad.net/nova/+spec/domain-quota-driver),
whose implementation has already begun. This blueprint allows the user to
handle quotas at the domain level. Nova requires domains to make this
feature work properly, right above the project level. There is also an
implementation that includes the domain information in the token context;
it has to be included as well: https://review.openstack.org/#/c/55870/ .



4 - The Nova API must be extended in order to enable domain-level
operations that currently only work at project level, such as:

- Listing, viewing and deleting images;

- Deleting and listing servers;

- Perform server actions like changing passwords, reboot, rebuild and
resize;

- CRUD and listing on server metadata;

In addition, quota management should be provided through the API, along
with the establishment of a new administrative scope.



In order to accomplish these features, the token must contain the domain
information, which will be included as mentioned in item 3. The Nova
API calls will then be changed to consider the domain information when a
call referring to a project (e.g. servers) is made.



What do you think about it? Any additional suggestions?



Thanks.


Henrique Truta
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] status of quota class

2014-02-19 Thread Mehdi Abaakouk
Hi, 

I have recently dug into the quota class code in nova and some related
subjects on the ML, and discovered that quota class code exists but is
not usable.

An API v2 extension exists to manipulate quota classes; these are
stored in the database.

The quota driver engine handles quota classes too, and expects the
'quota_class' attribute of the nova RequestContext to be set.

But 'quota_class' is never set when a nova RequestContext is created.

The quota class API v3 has recently been removed due to the unfinished work:
https://github.com/openstack/nova/commit/1b15b23b0a629e00913a40c5def42e5ca887071c


So my question: what is the plan to finish the 'quota class' feature?

Can I propose a blueprint for the next cycle to store the mapping between
a project and a quota_class in nova itself, to finish this feature?

I.e.: add a new API endpoint to set a quota_class for a project, store that
in the db, and change the quota engine to read the quota_class from the
db instead of from the RequestContext.
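To make the proposed change concrete, the lookup the quota engine would perform once the mapping lives in the database might look roughly like this; the table contents and names below are entirely hypothetical:

```python
# Hypothetical stored data: per-class limits and the project -> class map
# that today would have to arrive via RequestContext.quota_class.
QUOTA_CLASS_LIMITS = {
    "default": {"instances": 10, "cores": 20},
    "gold": {"instances": 100, "cores": 200},
}
PROJECT_QUOTA_CLASS = {"proj-a": "gold"}

def get_project_limits(project_id):
    """Resolve limits from the stored mapping, falling back to 'default'."""
    quota_class = PROJECT_QUOTA_CLASS.get(project_id, "default")
    return QUOTA_CLASS_LIMITS[quota_class]

print(get_project_limits("proj-a"))   # gold-class limits
print(get_project_limits("proj-b"))   # falls back to the default class
```

The new API endpoint would then simply write entries into the project-to-class mapping, and the context-based path could be retired.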



Best Regards, 

-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-19 Thread Samuel Bercovici
Hi,

I think we are mixing different aspects of operations and trying to solve a
non-problem.

From the APIs/operations perspective we are mixing the following models:

1.   Logical model (which as far as I understand is the topic of this
discussion) - tenants define what they need logically: vip<->default_pool, l7
association, ssl, etc.

2.   Physical model - operator / vendor installs and specifies how the backend
gets implemented.

3.   Deploying 1 on 2 - this is currently the driver's responsibility. We
can consider making it better, but this should not impact the logical model.

Another problem, which all the new proposals are trying to solve, is placing
a pool which can be the root/default pool of one vip<->pool relationship while
also being part of an l7-policy association of another vip<->pool that is
configured on another backend.
I think this is not a problem.
In the logical model, a pool which is part of an L7 policy is a logical object
which could be placed on any backend alongside any existing vip<->pool, with
the backends that those vip<->pool pairs are deployed on configured accordingly.
If the same pool that was part of an l7 association is also connected to a
vip as a default pool, then by all means this new vip<->pool pair can be
instantiated on some backend.
The proposal to not allow this (e.g. only allowing pools that are connected to
the same lb-instance to be used for l7 association) brings the physical model
into the logical model.

I think the current logical model is fine, with the exception that the two-way
reference between vip and pool (vip<->pool) should be modified so that only the
vip points to a pool (vip->pool), which allows reusing the pool with multiple
vips. This also means that all those vips will be placed in the same place as
the pool they point to as their default pool.
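The one-way reference can be sketched in a few lines; note how a single pool serves as the default for several vips precisely because the pool holds no back-reference (illustrative only, not the proposed API):

```python
class Pool(object):
    def __init__(self, name):
        self.name = name  # no reference back to any vip

class Vip(object):
    # Only the vip holds the reference (vip -> pool); since the pool does
    # not point back, any number of vips can share it as a default pool.
    def __init__(self, name, default_pool):
        self.name = name
        self.default_pool = default_pool

web_pool = Pool("web-servers")
vips = [Vip("vip-http", web_pool), Vip("vip-https", web_pool)]
print(all(v.default_pool is web_pool for v in vips))  # True
```

Under the current two-way model the second Vip above would be rejected, which is exactly the restriction being argued against.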

Regards,
-Sam.





From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Tuesday, February 18, 2014 9:35 PM
To: OpenStack Development Mailing List
Cc: Youcef Laribi; Samuel Bercovici; sbaluk...@bluebox.net; Mark McClain; 
Salvatore Orlando
Subject: [Neutron][LBaaS] Object Model discussion

Hi folks,

Recently we were discussing LBaaS object model with Mark McClain in order to 
address several problems that we faced while approaching L7 rules and multiple 
vips per pool.

To cut a long story short: with the existing workflow and model it's 
impossible to use L7 rules, because each pool being created is an 'instance' 
object in itself; it defines another logical configuration and can't be 
attached to an existing configuration.
To address this problem, plus create a base for multiple vips per pool, the 
'loadbalancer' object was introduced (see 
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance ).
However, this approach raised the concern of whether we want to make the user 
care about an 'instance' object.

My personal opinion is that letting the user work with a 'loadbalancer' entity 
is no big deal (and might even be useful for terminological clarity; the Libra 
and AWS APIs have it), especially if the existing simple workflow is 
preserved, so that the 'loadbalancer' entity is only required when working 
with L7 or multiple vips per pool.

The alternative solution proposed by Mark is described here under #3:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion
In (3) the root object of the configuration is the VIP, where all kinds of 
bindings are made (such as provider, agent, device, router). To address the 
'multiple vips' case another entity, 'Listener', is introduced, which receives 
most attributes of the former 'VIP' (attribute sets are not finalized in those 
pictures, so don't pay much attention to them).
If you take a closer look at the #2 and #3 proposals, you'll see that they are 
essentially similar, where in #3 the VIP object takes the 
instance/loadbalancer role from #2.
Both the #2 and #3 proposals make sense to me because they address both 
problems, L7 and multiple vips (or listeners).
My concern about #3 is that it redefines lots of workflow and API aspects, and 
even if we manage to make the transition to #3 in a backward-compatible way, 
it will be more complex in terms of code/testing than #2 (which is on review 
already and works).

The whole thing is an important design decision, so please share your 
thoughts, everyone.

Thanks,
Eugene.


Re: [openstack-dev] [Network] Allocate MAC and IP address for a VM instance

2014-02-19 Thread Jay Lau
Thanks Liu Dong. Clear now! ;-)


2014-02-19 20:17 GMT+08:00 Dong Liu willowd...@gmail.com:

 Sorry for replying so late.

 Yes, that is what I meant. By the way, if you only need the floating IP to
 bind to the VM's MAC, you do not need to specify --fixed-ip; just specifying
 --mac-address is enough.

 What I meant by the floating IP's MAC is that when you create a floating IP,
 Neutron automatically creates a port using that public IP; this port has a
 MAC address, and that is the one I mean.


 On 19 February 2014, at 18:22, Jay Lau jay.lau@gmail.com wrote:

 Hi Liu Dong,

 Just found a solution for this, as follows; the method uses a fixed IP
 as a bridge between the MAC and the floating IP.

 Can you please help check if it is the way that you want me to do? If not,
 can you please give some suggestion for your idea?

 Thanks,

 Jay

 ==My steps==
 Suppose I want to bind MAC fa:16:3e:9d:e9:11 to floating ip 9.21.52.22, I
 was doing as following:

 *1) Create a port for fixed ip with the MAC address fa:16:3e:9d:e9:11*
 [root@db01b05 ~(keystone_admin)]#  neutron port-create IntAdmin
 --mac-address fa:16:3e:9d:e9:11 --fixed-ip ip_address=10.0.1.2
 Created a new port:

 +-----------------------+----------------------------------------------------------------------------------+
 | Field                 | Value                                                                            |
 +-----------------------+----------------------------------------------------------------------------------+
 | admin_state_up        | True                                                                             |
 | allowed_address_pairs |                                                                                  |
 | binding:capabilities  | {"port_filter": true}                                                            |
 | binding:host_id       |                                                                                  |
 | binding:vif_type      | ovs                                                                              |
 | device_id             |                                                                                  |
 | device_owner          |                                                                                  |
 | fixed_ips             | {"subnet_id": "0fff20f4-142a-4e89-add1-5c5a79c6d54d", "ip_address": "10.0.1.2"} |
 | id                    | b259770d-7f9c-485a-8f84-bf7b1bbc5706                                             |
 | mac_address           | fa:16:3e:9d:e9:11                                                                |
 | name                  |                                                                                  |
 | network_id            | fb1a75f9-e468-408b-a172-5d2b3802d862                                             |
 | security_groups       | aa3f3025-ba71-476d-a126-25a9e3b34c9a                                             |
 | status                | DOWN                                                                             |
 | tenant_id             | f181a9c2b1b4443dbd91b1b7de716185                                                 |
 +-----------------------+----------------------------------------------------------------------------------+
 [root@db01b05 ~(keystone_admin)]# neutron port-list | grep 10.0.1.2
 | b259770d-7f9c-485a-8f84-bf7b1bbc5706 |  | fa:16:3e:9d:e9:11 | {"subnet_id": "0fff20f4-142a-4e89-add1-5c5a79c6d54d", "ip_address": "10.0.1.2"} |

 *2) Create a floating ip with the port id created in step 1)*
 [root@db01b05 ~(keystone_admin)]# neutron floatingip-create --port-id
 b259770d-7f9c-485a-8f84-bf7b1bbc5706 Ex
 Created a new floatingip:
 +-+--+
 | Field   | Value|
 +-+--+
 | fixed_ip_address| 10.0.1.2 |
 | floating_ip_address | 9.21.52.22   |
 | floating_network_id | 9b758062-2be8-4244-a5a9-3f878f74e006 |
 | id  | 7c0db4ff-8378-4b91-9a6e-87ec06016b0f |
 | port_id | b259770d-7f9c-485a-8f84-bf7b1bbc5706 |
 | router_id   | 43ceb267-2a4b-418a-bc9a-08d39623d3c0 |
 | tenant_id   | f181a9c2b1b4443dbd91b1b7de716185 |
 +-+--+

 *3) Boot the VM with the port id in step 1)*
 [root@db01b05 ~(keystone_admin)]#  nova boot --image
 centos64-x86_64-cfntools --flavor 2 --key-name adminkey --nic
 port-id=b259770d-7f9c-485a-8f84-bf7b1bbc5706 vm0001

 +--------------------------------------+--------------------------------------+
 | Property                             | Value                                |
 +--------------------------------------+--------------------------------------+
 | OS-EXT-STS:task_state                | scheduling                           |
 | image                                | centos64-x86_64-cfntools             |
 | OS-EXT-STS:vm_state                  | building                             |
 | OS-EXT-SRV-ATTR:instance_name        | instance-0026                        |
 | OS-SRV-USG:launched_at               | None                                 |
 | flavor                               | m1.small                             |
 | id                                   | c0cebd6b-94ae-4305-8619-c013d45f0727 |
 | security_groups                      | [{u'name': u'default'}]              |
 | user_id                              | 345dd87da2364fa78ffe97ed349bb71b     |
 | OS-DCF:diskConfig                    | MANUAL                               |
 | accessIPv4                           |                                      |
 | accessIPv6                           |                                      |
 | progress                             | 0                                    |
 | OS-EXT-STS:power_state               | 0                                    |
 | OS-EXT-AZ:availability_zone          | nova
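The three steps above can also be sketched as the JSON request bodies a Neutron/Nova v2 client would send. Field names follow the v2 APIs; the IDs and names are the deployment-specific ones from the session above, and image/flavor are shown by name for readability even though the API expects IDs:

```python
# Sketch of the MAC <-> floating-IP binding flow as API request bodies.
MAC = "fa:16:3e:9d:e9:11"
PORT_ID = "b259770d-7f9c-485a-8f84-bf7b1bbc5706"

# 1) neutron port-create IntAdmin --mac-address ... --fixed-ip ip_address=10.0.1.2
create_port_body = {
    "port": {
        "network_id": "fb1a75f9-e468-408b-a172-5d2b3802d862",  # IntAdmin
        "mac_address": MAC,
        "fixed_ips": [{"ip_address": "10.0.1.2"}],
    }
}

# 2) neutron floatingip-create --port-id <port> Ex
create_floatingip_body = {
    "floatingip": {
        "floating_network_id": "9b758062-2be8-4244-a5a9-3f878f74e006",  # Ex
        "port_id": PORT_ID,
    }
}

# 3) nova boot --nic port-id=<port> vm0001 -- the server attaches to the
# pre-created port, so the VM comes up with the chosen MAC already mapped
# to the floating IP through the fixed IP 10.0.1.2.
boot_server_body = {
    "server": {
        "name": "vm0001",
        "imageRef": "centos64-x86_64-cfntools",
        "flavorRef": "2",
        "networks": [{"port": PORT_ID}],
    }
}

# The fixed IP is the 'bridge': the port pins MAC <-> 10.0.1.2, and the
# floating IP pins 9.21.52.22 <-> 10.0.1.2.
assert create_port_body["port"]["mac_address"] == MAC
assert create_floatingip_body["floatingip"]["port_id"] == PORT_ID
```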

Re: [openstack-dev] call for help with nova bug management

2014-02-19 Thread Gary Kotton
I will help out.
Thanks
Gary

From: Tracy Jones tjo...@vmware.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, February 18, 2014 9:48 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] call for help with nova bug management

So I have been rather underwhelmed by the enthusiastic response to help out :-)



So far only wendar and johnthetubaguy have signed up. I was hoping for at 
least 3-5 people to help with the initial triage. Please sign up this week if 
you can help, and I'll schedule the meetings starting next week.




On Feb 14, 2014, at 2:16 PM, Tracy Jones tjo...@vmware.com wrote:

Hi Folks - I’ve offered to help Russell out with managing nova’s bug queue.  
The charter of this is as follows


 1.  Triage the 125 new bugs
 2.  Ensure that the critical bugs are assigned properly and are making progress

Once this part is done we will shift our focus to things like

 *   Bugs in the incomplete state with no update from the reporter - these 
should be set to invalid if the requester does not update them in a timely 
manner.
 *   Bugs which say they are in progress but where no progress is being made. 
If a bug is assigned and simply being ignored, we should remove the assignment 
so others can grab it and work on it.

The bug triage policy is defined here: 
https://wiki.openstack.org/wiki/BugTriage


What can you do???  First, I need a group of folks to volunteer to help with 1 
and 2. I will start a weekly IRC meeting where we work on the triage and check 
progress on critical (or even high) priority bugs. If you can help out, please 
sign up at the end of this etherpad and include your timezone. Once I have a 
few people to help, I will schedule the meeting at a time that I hope is 
convenient for all.

https://etherpad.openstack.org/p/nova-bug-management

Thanks in advance for your help.

Tracy



[openstack-dev] [neutron] Fixes for the alembic migration (sqlite + postgress) aren't being reviewed

2014-02-19 Thread Thomas Goirand
Hi,

I've seen this one:
https://review.openstack.org/#/c/68611/

which is supposed to fix something for Postgres. This is funny, because
I was doing the exact same patch to fix it for SQLite, though that was
before the last summit in HK.

Since then, I just gave up on having my Debian-specific patch [1]
upstreamed. No review, despite my insistence. Mark, at the HK summit,
told me that it was pending a discussion about what the policy for
SQLite would be.

Guys, this is disappointing. This is the 2nd time the same patch has been
blocked, with no explanation.

Could 2 core reviewers take a *serious* look at this patch and explain
why it's not OK for it to be approved? If nobody says why, could it
then be approved, so we can move on?

Cheers,

Thomas Goirand (zigo)

[1]
http://anonscm.debian.org/gitweb/?p=openstack/neutron.git;a=blob;f=debian/patches/fix-alembic-migration-with-sqlite3.patch;h=9108b45aaaf683e49b15338bacd813e50e9f563d;hb=b44e96d9e1d750e35513d63877eb05f167a175d8



Re: [openstack-dev] help the oslo team help you

2014-02-19 Thread Doug Hellmann
On Tue, Feb 18, 2014 at 9:46 PM, 黎林果 lilinguo8...@gmail.com wrote:

 +1
 But if we don't sync from oslo, sometimes we have to modify the files
 in openstack/common/ in order to make Jenkins pass.


If projects are making local changes to openstack/common in order to have
their tests pass, we're failing. Those changes should be made in the
incubator first.

Do you have any specific changes like this that you can point to?

Doug




 2014-02-19 10:03 GMT+08:00 Joe Gordon joe.gord...@gmail.com:
  On Wed, Feb 12, 2014 at 12:16 PM, Doug Hellmann
  doug.hellm...@dreamhost.com wrote:
  If you have a change in your project that is blocked waiting for a
 patch to
  land in oslo (in the incubator, or any of the libraries we manage)
 *please*
  either open a blueprint or mark the associated bug as also affecting the
  relevant oslo project, then let me know about it so I can put it on our
  review priority list. We have a lot going on in oslo right now, but
 will do
  our best to prioritize reviews that affect features landing in other
  projects -- if you let us know about them.
 
 
  While I don't think this is what you meant when you said let oslo help
  you, I do have a request:
 
  While trying to do a basic oslo-incubator update ('./update.sh
  --nodeps --modules fixture --base nova --dest-dir ../nova')  I hit a
  bug https://bugs.launchpad.net/oslo/+bug/1281860
 
  Due to the nature of oslo-incubator (it may break at any time) it is
  hard for downstream projects (nova, cinder, etc.) to keep their
  oslo-incubator copies up to date, so when someone wants to sync across
  a new change they have to deal with many unrelated changes, some of
  which may break things. For example:
 
  oslo-incubator$ ./update.sh --config-file
  ../cinder/openstack-common.conf --base cinder --dest-dir ../cinder
  cinder$ git diff --stat HEAD
  52 files changed, 3568 insertions(+), 961 deletions(-)
 
 
  I would like to propose making the oslo team responsible for syncing
  across oslo-incubator code, they know the code base best and can fix
  things when they break.  This doesn't mean no one else can use
  update.sh it just means that the oslo team would make sure that syncs
  are done in a timely fashion, so the diffs don't get too big.
 
 
  Doug
 
 



Re: [openstack-dev] [oslo] Some questions about Rest API and log messages translation

2014-02-19 Thread Doug Hellmann
On Tue, Feb 18, 2014 at 11:56 PM, Peng Wu peng.e...@gmail.com wrote:

 Hi,

   Currently I am analyzing the blueprint for translated message-id
 generation. [1]
   Recently I found that there is an implementation that generates both
 English and translated log messages. [2]
   I think that if both English and translated log messages are provided,
 then we don't need to generate a message id for log messages.

   My question is about REST API message translation. If we return
 both the English and the translated REST API message, then we don't need
 to generate a message id for REST API messages, either.


I don't think we plan to return both messages. My understanding was we
would return messages in the locale specified by the headers sent from the
client (assuming those translations are available).


   And currently the message id generation blueprint covers only log
 messages and translated REST API messages. If we provide both English and
 translated messages, then we don't need to generate any message id,
 because we can just read the English log and REST API
 messages.


There may still be utility in documenting messages with a message id. For
example, a message id wouldn't change even if the wording of a message
changed slightly (to add more context information, for example).

Doug
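A small illustrative sketch (not oslo code) of that point: if a stable message id keys each catalog entry, the wording and its translations can change freely without breaking documentation or support tooling that matches on the id. The catalog contents and the "NEUTRON-1001" id are made up:

```python
# Hypothetical message catalog keyed by a stable message id.
CATALOG = {
    "NEUTRON-1001": {
        "en": "Port {port} could not be bound on host {host}",
        # The wording may later gain context, e.g. "... (vif_type={vif_type})",
        # and translations may be added, without "NEUTRON-1001" ever changing.
    },
}


def render(msg_id, locale="en", **kwargs):
    """Render a message in the requested locale, falling back to English."""
    template = CATALOG[msg_id].get(locale, CATALOG[msg_id]["en"])
    return "%s: %s" % (msg_id, template.format(**kwargs))


print(render("NEUTRON-1001", port="p1", host="compute-3"))
# NEUTRON-1001: Port p1 could not be bound on host compute-3
```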




   Feel free to comment on it.

 Thanks,
   Peng Wu

 Refer URL:
 [1] https://blueprints.launchpad.net/oslo/+spec/log-messages-id
 [2]
 https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain






Re: [openstack-dev] help the oslo team help you

2014-02-19 Thread Doug Hellmann
On Tue, Feb 18, 2014 at 9:03 PM, Joe Gordon joe.gord...@gmail.com wrote:

 On Wed, Feb 12, 2014 at 12:16 PM, Doug Hellmann
 doug.hellm...@dreamhost.com wrote:
  If you have a change in your project that is blocked waiting for a patch
 to
  land in oslo (in the incubator, or any of the libraries we manage)
 *please*
  either open a blueprint or mark the associated bug as also affecting the
  relevant oslo project, then let me know about it so I can put it on our
  review priority list. We have a lot going on in oslo right now, but will
 do
  our best to prioritize reviews that affect features landing in other
  projects -- if you let us know about them.


 While I don't think this is what you meant when you said let oslo help
 you, I do have a request:

 While trying to do a basic oslo-incubator update ('./update.sh
 --nodeps --modules fixture --base nova --dest-dir ../nova')  I hit a
 bug https://bugs.launchpad.net/oslo/+bug/1281860

 Due to the nature of oslo-incubator (it may break at any time) it is
 hard for downstream projects (nova, cinder, etc.) to keep their
 oslo-incubator copies up to date, so when someone wants to sync across
 a new change they have to deal with many unrelated changes, some of
 which may break things. For example:

 oslo-incubator$ ./update.sh --config-file
 ../cinder/openstack-common.conf --base cinder --dest-dir ../cinder
 cinder$ git diff --stat HEAD
 52 files changed, 3568 insertions(+), 961 deletions(-)
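For context, the openstack-common.conf that update.sh consumes is a small ini-style file with repeated module= lines plus a base= project name. A rough parsing sketch (the sample content is made up; configparser is avoided because it collapses the duplicate module= keys):

```python
# Hypothetical example of an openstack-common.conf and a minimal parser.
SAMPLE = """\
[DEFAULT]
# modules copied from oslo-incubator
module=log
module=timeutils
module=fixture
base=cinder
"""


def parse_common_conf(text):
    """Collect the repeated module= entries and the base= project name."""
    modules, base = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("module="):
            modules.append(line.split("=", 1)[1])
        elif line.startswith("base="):
            base = line.split("=", 1)[1]
    return modules, base


mods, base = parse_common_conf(SAMPLE)
print(mods, base)  # ['log', 'timeutils', 'fixture'] cinder
```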


 I would like to propose making the oslo team responsible for syncing
 across oslo-incubator code, they know the code base best and can fix
 things when they break.  This doesn't mean no one else can use
 update.sh it just means that the oslo team would make sure that syncs
 are done in a timely fashion, so the diffs don't get too big.


The intent has always been for the person making a change in the incubator
to be responsible for updating consuming projects (whether that person is
an oslo core reviewer or not). That hasn't always happened -- some changes
are only synced into the project where the change was wanted, some changes
aren't synced at all.

Although it doesn't look like it from the outside, we have made significant
progress in creating the tools and processes we need to move code out of
the incubator into libraries. I expect to accelerate those moves late in
this cycle and early in the next so the libraries can be adopted early in
the next cycle.

Doug


Re: [openstack-dev] [neutron][ml2] Port binding information, transactions, and concurrency

2014-02-19 Thread Robert Kukura
On 02/10/2014 05:46 AM, Mathieu Rohon wrote:
 Hi,
 
 one other comment inline :

Hi Mathieu, see below:

 
 On Wed, Feb 5, 2014 at 5:01 PM, Robert Kukura rkuk...@redhat.com wrote:
 On 02/05/2014 09:10 AM, Henry Gessau wrote:
 Bob, this is fantastic, I really appreciate all the detail. A couple of
 questions ...

 On Wed, Feb 05, at 2:16 am, Robert Kukura rkuk...@redhat.com wrote:

 A couple of interrelated issues with the ML2 plugin's port binding have
 been discussed over the past several months in the weekly ML2 meetings.
 These affect drivers being implemented for icehouse, and therefore need
 to be addressed in icehouse:

 * MechanismDrivers need detailed information about all binding changes,
 including unbinding on port deletion
 (https://bugs.launchpad.net/neutron/+bug/1276395)
 * MechanismDrivers' bind_port() methods are currently called inside
 transactions, but in some cases need to make remote calls to controllers
 or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
 * Semantics of concurrent port binding need to be defined if binding is
 moved outside the triggering transaction.

 I've taken the action of writing up a unified proposal for resolving
 these issues, which follows...

 1) An original_bound_segment property will be added to PortContext. When
 the MechanismDriver update_port_precommit() and update_port_postcommit()
 methods are called and a binding previously existed (whether it's being
 torn down or not), this property will provide access to the network
 segment used by the old binding. In these same cases, the portbinding
 extension attributes (such as binding:vif_type) for the old binding will
 be available via the PortContext.original property. It may be helpful to
 also add bound_driver and original_bound_driver properties to
 PortContext that behave similarly to bound_segment and
 original_bound_segment.

 2) The MechanismDriver.bind_port() method will no longer be called from
 within a transaction. This will allow drivers to make remote calls on
 controllers or devices from within this method without holding a DB
 transaction open during those calls. Drivers can manage their own
 transactions within bind_port() if needed, but need to be aware that
 these are independent from the transaction that triggered binding, and
 concurrent changes to the port could be occurring.

 3) Binding will only occur after the transaction that triggers it has
 been completely processed and committed. That initial transaction will
 unbind the port if necessary. Four cases for the initial transaction are
 possible:

 3a) In a port create operation, whether the binding:host_id is supplied
 or not, all drivers' port_create_precommit() methods will be called, the
 initial transaction will be committed, and all drivers'
 port_create_postcommit() methods will be called. The drivers will see
 this as creation of a new unbound port, with PortContext properties as
 shown. If a value for binding:host_id was supplied, binding will occur
 afterwards as described in 4 below.

 PortContext.original: None
 PortContext.original_bound_segment: None
 PortContext.original_bound_driver: None
 PortContext.current['binding:host_id']: supplied value or None
 PortContext.current['binding:vif_type']: 'unbound'
 PortContext.bound_segment: None
 PortContext.bound_driver: None

 3b) Similarly, in a port update operation on a previously unbound port,
 all drivers' port_update_precommit() and port_update_postcommit()
 methods will be called, with PortContext properties as shown. If a value
 for binding:host_id was supplied, binding will occur afterwards as
 described in 4 below.

 PortContext.original['binding:host_id']: previous value or None
 PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
 PortContext.original_bound_segment: None
 PortContext.original_bound_driver: None
 PortContext.current['binding:host_id']: current value or None
 PortContext.current['binding:vif_type']: 'unbound'
 PortContext.bound_segment: None
 PortContext.bound_driver: None

 3c) In a port update operation on a previously bound port that does not
 trigger unbinding or rebinding, all drivers' update_port_precommit() and
 update_port_postcommit() methods will be called with PortContext
 properties reflecting unchanged binding states as shown.

 PortContext.original['binding:host_id']: previous value
 PortContext.original['binding:vif_type']: previous value
 PortContext.original_bound_segment: previous value
 PortContext.original_bound_driver: previous value
 PortContext.current['binding:host_id']: previous value
 PortContext.current['binding:vif_type']: previous value
 PortContext.bound_segment: previous value
 PortContext.bound_driver: previous value

 3d) In a port update operation on a previously bound port that does
 trigger unbinding or rebinding, all drivers' update_port_precommit() and
 update_port_postcommit() methods will be called with PortContext
 properties reflecting the previously bound and currently 
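A hedged sketch of how a mechanism driver might consume the proposed PortContext fields. StubContext here is a test double, not ML2's real PortContext class, and the driver's return strings are purely for illustration:

```python
# Illustrative mechanism driver reacting to binding changes, including
# teardown of an old binding, via original_bound_segment.
class StubContext(object):
    def __init__(self, current, original=None,
                 bound_segment=None, original_bound_segment=None):
        self.current = current
        self.original = original
        self.bound_segment = bound_segment
        self.original_bound_segment = original_bound_segment


class ExampleMechanismDriver(object):
    """Distinguishes unbind, bind, and no-change cases from the context."""

    def update_port_postcommit(self, context):
        if (context.original_bound_segment and
                context.original_bound_segment != context.bound_segment):
            return "unbind segment %s" % context.original_bound_segment["id"]
        if context.bound_segment and not context.original_bound_segment:
            return "bind segment %s" % context.bound_segment["id"]
        return "no binding change"


driver = ExampleMechanismDriver()
# Case 3b/3d-style teardown: the port had a binding, and it is being removed.
ctx = StubContext(
    current={"binding:host_id": None, "binding:vif_type": "unbound"},
    original={"binding:host_id": "host1", "binding:vif_type": "ovs"},
    original_bound_segment={"id": "seg-1"},
)
print(driver.update_port_postcommit(ctx))  # unbind segment seg-1
```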

Re: [openstack-dev] Storing license information in openstack/requirements

2014-02-19 Thread Doug Hellmann
On Wed, Feb 19, 2014 at 6:13 AM, Thierry Carrez thie...@openstack.orgwrote:

 Sean Dague wrote:
  Honestly, if we are going to track this, we should probably do the set
  of things that reviewers tend to do when running through these.
 
  License:
  Upstream Location:
  Ubuntu/Debian Package: Y/N? (url)
  Fedora Package: Y/N? (url)
  Suse Package: Y/N? (url)
  Last Release: Date (in case of abandonware)
  Python 3 support: Y/N? (informational only)
 
   I'd honestly stick that in a yaml file instead, and have something
   sanity-check it when a new requirement is added.
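A sketch of what such a sanity check could look like: each requirement carries a metadata mapping with the fields listed above, and a validator gates new additions. The field names mirror the list; the entry and checker are illustrative, not the openstack/requirements tooling:

```python
# Hypothetical validator for per-requirement metadata entries.
REQUIRED_FIELDS = {"license", "upstream", "last_release"}
DISTRO_FIELDS = {"ubuntu_debian", "fedora", "suse"}


def check_entry(name, meta):
    """Return a list of problems; an empty list means the entry is OK."""
    problems = []
    missing = REQUIRED_FIELDS - meta.keys()
    if missing:
        problems.append("%s: missing %s" % (name, sorted(missing)))
    if not meta.get("license"):
        problems.append("%s: license must be non-empty" % name)
    for distro in DISTRO_FIELDS:
        if distro in meta and not isinstance(meta[distro], bool):
            problems.append("%s: %s should be Y/N (bool)" % (name, distro))
    return problems


entry = {
    "license": "Apache-2.0",
    "upstream": "https://example.org/example-lib",
    "last_release": "2014-01-15",   # guards against abandonware
    "ubuntu_debian": True,
    "fedora": True,
    "python3": False,               # informational only
}
print(check_entry("example-lib", entry))  # []
```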

 Licensing is the only legally-binding issue at stake, the rest are
 technically-binding issues that we consider when we accept or reject a
 new dependency. I'm not saying there is no value in tracking that extra
 information, just saying that we really need to track licensing. I don't
 want perfection to get in the way of making baby steps towards making
 things better.

 Tracking licensing is a good first step, and having full licensing
 coverage will take some time. We shouldn't block on YAML conversion or
 full technical information... As a first step let's just accept patches
 that mention licensing information in trailing comments, then if someone
 wants to convert the requirements files to YAML so that they can contain
 more information, great!


I added a note about this to the review criteria list [1].

Doug

[1] https://wiki.openstack.org/wiki/Requirements#Review_Criteria






 --
 Thierry Carrez (ttx)






Re: [openstack-dev] [Nova] Including Domains in Nova

2014-02-19 Thread Tiwari, Arvind
Hi Henrique,

I agree with your thoughts, and in my opinion every OpenStack service has to 
be domain-aware. It will be especially helpful in large-scale OpenStack 
deployments where IAM resources are scoped to a domain but other services 
(e.g. Nova) are simply not aware of domains.

Thanks,
Arvind



From: Henrique Truta [mailto:henriquecostatr...@gmail.com]
Sent: Wednesday, February 19, 2014 5:21 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova] Including Domains in Nova


Hi everyone.



It is necessary to make Nova support domain quotas and create a new 
administrative perspective. Here are some reasons why Nova should support 
domains:



1 - It's worthwhile to keep the main OpenStack components sharing the same 
concepts, since this one has already been adopted in Keystone. In Keystone, 
the domain defines administrative boundaries and makes management of its 
entities easier.



2 - Nova shouldn't be so tightly tied to projects. Keystone was created to 
abstract concepts like these for other modules, like Nova. In addition, Nova 
needs to be flexible enough to work with the new functionality that Keystone 
will provide. If we keep Nova tied to projects (or domains), we will be far 
from Nova's focus, which is providing compute services.



3 - There is also the Domain Quota Driver BP 
(https://blueprints.launchpad.net/nova/+spec/domain-quota-driver), whose 
implementation has already begun. This blueprint allows the user to handle 
quotas at the domain level. Nova requires domains to make this feature work 
properly, right above the project level. There is also an implementation that 
includes the domain information in the token context; that implementation has 
to be included as well: https://review.openstack.org/#/c/55870/ .



4 - The Nova API must be extended in order to enable domain-level versions of 
operations that currently only work at the project level, such as:

- Listing, viewing and deleting images;

- Deleting and listing servers;

- Perform server actions like changing passwords, reboot, rebuild and 
resize;

- CRUD and listing on server metadata;

In addition, it should provide quota management through the API and establish 
a new administrative scope.



In order to accomplish these features, the token must contain the domain 
information, which will be included as mentioned in item 3. The Nova API calls 
will then be changed to consider the domain information whenever a call 
referring to a project is made (e.g. servers).



What do you think about it? Any additional suggestions?



AT: Keystone also has to enforce domain scoping more strongly, as in the 
current model Keystone resources are not required to be scoped to a domain.



Thanks.



Henrique Truta


Re: [openstack-dev] [neutron][ml2] Port binding information, transactions, and concurrency

2014-02-19 Thread Robert Kukura
On 02/05/2014 10:47 AM, Mathieu Rohon wrote:
 Hi,
 
 thanks for this great proposal

Just following up on the one comment below:

 
 
 On Wed, Feb 5, 2014 at 3:10 PM, Henry Gessau ges...@cisco.com wrote:
 Bob, this is fantastic, I really appreciate all the detail. A couple of
 questions ...

 On Wed, Feb 05, at 2:16 am, Robert Kukura rkuk...@redhat.com wrote:

 A couple of interrelated issues with the ML2 plugin's port binding have
 been discussed over the past several months in the weekly ML2 meetings.
 These affect drivers being implemented for icehouse, and therefore need
 to be addressed in icehouse:

 * MechanismDrivers need detailed information about all binding changes,
 including unbinding on port deletion
 (https://bugs.launchpad.net/neutron/+bug/1276395)
 * MechanismDrivers' bind_port() methods are currently called inside
 transactions, but in some cases need to make remote calls to controllers
 or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
 * Semantics of concurrent port binding need to be defined if binding is
 moved outside the triggering transaction.

 I've taken the action of writing up a unified proposal for resolving
 these issues, which follows...

 1) An original_bound_segment property will be added to PortContext. When
 the MechanismDriver update_port_precommit() and update_port_postcommit()
 methods are called and a binding previously existed (whether it's being
 torn down or not), this property will provide access to the network
 segment used by the old binding. In these same cases, the portbinding
 extension attributes (such as binding:vif_type) for the old binding will
 be available via the PortContext.original property. It may be helpful to
 also add bound_driver and original_bound_driver properties to
 PortContext that behave similarly to bound_segment and
 original_bound_segment.

 2) The MechanismDriver.bind_port() method will no longer be called from
 within a transaction. This will allow drivers to make remote calls on
 controllers or devices from within this method without holding a DB
 transaction open during those calls. Drivers can manage their own
 transactions within bind_port() if needed, but need to be aware that
 these are independent from the transaction that triggered binding, and
 concurrent changes to the port could be occurring.

 3) Binding will only occur after the transaction that triggers it has
 been completely processed and committed. That initial transaction will
 unbind the port if necessary. Four cases for the initial transaction are
 possible:

 3a) In a port create operation, whether the binding:host_id is supplied
 or not, all drivers' port_create_precommit() methods will be called, the
 initial transaction will be committed, and all drivers'
 port_create_postcommit() methods will be called. The drivers will see
 this as creation of a new unbound port, with PortContext properties as
 shown. If a value for binding:host_id was supplied, binding will occur
 afterwards as described in 4 below.

 PortContext.original: None
 PortContext.original_bound_segment: None
 PortContext.original_bound_driver: None
 PortContext.current['binding:host_id']: supplied value or None
 PortContext.current['binding:vif_type']: 'unbound'
 PortContext.bound_segment: None
 PortContext.bound_driver: None

 3b) Similarly, in a port update operation on a previously unbound port,
 all drivers' port_update_precommit() and port_update_postcommit()
 methods will be called, with PortContext properties as shown. If a value
 for binding:host_id was supplied, binding will occur afterwards as
 described in 4 below.

 PortContext.original['binding:host_id']: previous value or None
 PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
 PortContext.original_bound_segment: None
 PortContext.original_bound_driver: None
 PortContext.current['binding:host_id']: current value or None
 PortContext.current['binding:vif_type']: 'unbound'
 PortContext.bound_segment: None
 PortContext.bound_driver: None

 3c) In a port update operation on a previously bound port that does not
 trigger unbinding or rebinding, all drivers' update_port_precommit() and
 update_port_postcommit() methods will be called with PortContext
 properties reflecting unchanged binding states as shown.

 PortContext.original['binding:host_id']: previous value
 PortContext.original['binding:vif_type']: previous value
 PortContext.original_bound_segment: previous value
 PortContext.original_bound_driver: previous value
 PortContext.current['binding:host_id']: previous value
 PortContext.current['binding:vif_type']: previous value
 PortContext.bound_segment: previous value
 PortContext.bound_driver: previous value

 3d) In a port update operation on a previously bound port that does
 trigger unbinding or rebinding, all drivers' update_port_precommit() and
 update_port_postcommit() methods will be called with PortContext
 properties reflecting the previously bound and currently unbound binding
 states 
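
The four cases above differ only in which PortContext fields are populated when the driver callbacks run. A self-contained sketch of a precommit hook that classifies the transition (the PortContext stub mirrors the fields listed above; the classifier itself is illustrative and not part of ML2):

```python
# Minimal stand-in for ML2's PortContext, exposing only the fields
# discussed in cases 3a-3d above.
class PortContext:
    def __init__(self, original=None, current=None,
                 original_bound_segment=None, bound_segment=None):
        self.original = original                  # previous port dict, None on create
        self.current = current                    # current port dict
        self.original_bound_segment = original_bound_segment
        self.bound_segment = bound_segment


def classify(ctx):
    """Map a precommit PortContext to one of the cases 3a-3d."""
    if ctx.original is None:
        return '3a: port create (unbound)'
    if ctx.original_bound_segment is None and ctx.bound_segment is None:
        return '3b: update of a previously unbound port'
    if ctx.original_bound_segment == ctx.bound_segment:
        return '3c: update of a bound port, binding unchanged'
    return '3d: update triggering unbind or rebind'
```

For example, a context with `original_bound_segment` equal to `bound_segment` (both non-None) falls into case 3c, the no-op binding case.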

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-19 Thread Eugene Nikanorov
Hi Sam,

My comments inline:


On Wed, Feb 19, 2014 at 4:57 PM, Samuel Bercovici samu...@radware.comwrote:

  Hi,



  I think we are mixing different aspects of operations, and trying to solve a
  non-problem.

Not really. The advanced features we're trying to introduce are incompatible in
both object model and API.

 From APIs/Operations we are mixing the following models:

  1.   Logical model (which as far as I understand is the topic of this
 discussion) - tenants define what they need logically: vip → default_pool,
 l7 association, ssl, etc.

That's correct. A tenant may or may not care about how it is grouped on the
backend. We need to support both cases.

  2.   Physical model - operator / vendor install and specify how
 backend gets implemented.

 3.   Deploying 1 on 2 - this is currently the driver's
 responsibility. We can consider making it better but this should not impact
 the logical model.

I think grouping vips and pools is an important part of the logical model, even
if some users may not care about it.


 I think this is not a problem.

 In a logical model a pool which is part of an L7 policy is a logical object
 which could be placed at any backend alongside any existing vip ↔ pool pair,
 and the backends that those vip ↔ pool pairs are deployed on configured
 accordingly.

 That's not how it currently works - that's why we're trying to address it.
Making the pool shareable between backends at least requires moving the
'instance' role from the pool to some other entity, and that also changes a
number of API aspects.

 If the same pool that was part of an l7 association will also be connected
 to a vip as a default pool, then by all means this new vip ↔ pool pair can
 be instantiated into some backend.

 The proposal to not allow this (e.g. only allowing pools that are connected to
 the same lb-instance to be used for l7 association) brings the physical
 model into the logical model.

So the proposal tries to address 2 issues:
1) in many cases it is desirable to know about the grouping of logical objects
on the backend
2) currently a physical model is implied when working with pools, because the
pool is the root object and corresponds to a backend with a 1:1 mapping



 I think that the current logical model is fine with the exception that the
 two-way reference between vip and pool (vip ↔ pool) should be modified
 so that only the vip points to a pool (vip → pool), which allows reusing
 the pool with multiple vips.

Reusing pools by vips is not as simple as it seems.
If those vips belong to 1 backend (which by itself requires the tenant to
know about that) - that's no problem, but if they don't, then:
1) what 'status' attribute of the pool would mean?
2) how health monitors for the pool will be deployed? and what their
statuses would mean?
3) what pool statistics would mean?
4) If the same pool is used on

To preserve the existing meaningful health monitor, member and statistics
APIs, we will need to create associations for everything, or else change the
API in a backward-incompatible way.
My opinion is that it makes sense to limit this ability (reusing pools by
vips deployed on different backends) in favor of simpler code; IMO it's not
really a big deal. The pool is lightweight enough not to need sharing as an
object.
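
A tiny sketch of the status ambiguity raised in point 1 above: once a pool can be shared by vips on different backends, a single pool-level status only stays meaningful if every backend agrees, so status would have to move onto each vip-pool association (a hypothetical model for illustration, not the actual Neutron LBaaS schema):

```python
class PoolAssociation:
    """Binding of one vip (on one backend) to a shared pool."""
    def __init__(self, vip_id, backend, status='ACTIVE'):
        self.vip_id = vip_id
        self.backend = backend
        self.status = status


class SharedPool:
    def __init__(self, name):
        self.name = name
        self.associations = []  # one entry per vip using this pool

    def status(self):
        """A pool-level status is only well defined when all backends
        agree; otherwise it has to be some aggregate value."""
        statuses = {a.status for a in self.associations}
        return statuses.pop() if len(statuses) == 1 else 'DEGRADED'
```

The same ambiguity applies to health monitor deployment and statistics, which is why sharing a pool across backends forces per-association records everywhere.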

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Keystone working with V3

2014-02-19 Thread Vinod Kumar Boppanna
Dear All,

I am doing some development in Nova, and in this regard I have to write code 
where Nova requests some data through the V3 API of Keystone. But 
keystoneclient always falls back to V2 (keystoneclient/v3/client.py). Due 
to this, the Keystone V3 API which I am using fails, as it is not available 
in V2.

I did a hack to solve it

https://review.openstack.org/#/c/74678/1/keystoneclient/v3/client.py

But I was told that it is not acceptable to do it this way. So, is anybody from 
Keystone doing anything to allow V3 APIs in keystoneclient (without falling 
back to V2)? If so, please let me know, as my code in Nova requires this 
feature, and only then can I submit the code to Gerrit.
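
The fallback Vinod describes happens during version discovery against the Keystone root endpoint. The selection logic can be sketched like this (a hypothetical helper working on a discovery document shaped like Keystone's `GET /` response; this is not keystoneclient's actual code):

```python
def pick_identity_endpoint(discovery_doc, preferred='v3'):
    """Pick an identity API endpoint from a version-discovery document,
    preferring v3 and only falling back to v2 when v3 is absent."""
    # Index versions by major version, e.g. 'v3.0' -> 'v3'.
    versions = {v['id'].split('.')[0]: v
                for v in discovery_doc['versions']['values']}
    for want in (preferred, 'v2'):
        if want in versions:
            links = versions[want]['links']
            return next(l['href'] for l in links if l['rel'] == 'self')
    raise RuntimeError('no supported identity API version found')
```

With a document advertising both v3.0 and v2.0 this returns the v3 endpoint; the bug Vinod hit is the opposite behavior, silently settling on v2.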

Thanks & Regards,
Vinod Kumar Boppanna


Re: [openstack-dev] [Nova] status of quota class

2014-02-19 Thread Kevin L. Mitchell
On Wed, 2014-02-19 at 13:47 +0100, Mehdi Abaakouk wrote:
 But 'quota_class' is never set when a nova RequestContext is created.

When I created quota classes, I envisioned the authentication component
of the WSGI stack setting the quota_class on the RequestContext, but
there was no corresponding concept in Keystone.  We need some means of
identifying groups of tenants.

 So my question, what is the plan to finish the 'quota class' feature ? 

I currently have no plan to work on that, and I am not aware of any such
work.

 Can I propose a blueprint for the next cycle to store the mapping between
 project and a quota_class into nova itself, to finish this feature ? 

Of course; anyone can propose a blueprint.  Who will you have work on
the feature?

 ie: add a new API endpoint to set a quota_class to a project, store that
 into the db and change the quota engine to read the quota_class from the
 db instead of the RequestContext.

Reading the quota class from the db sounds like a bad fit to me; this
really feels like something that should be stored in Keystone, since
it's authentication-related data.  Additionally, if the attribute is in
Keystone, other services may take advantage of it.  The original goal of
quota classes was to make it easier to update the quotas of a given
tenant based on some criteria, such as the service level they've paid
for; if a customer upgrades (or downgrades) their service level, their
quotas should change to match.  This could be done by manually updating
each quota that affects them, but a single change to a single attribute
makes better sense.
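
The service-level idea sketches naturally as a three-level lookup: a per-tenant override wins, otherwise the tenant's quota class supplies the value, otherwise a global default applies (a toy illustration of the concept, not Nova's quota driver; all names here are hypothetical):

```python
# Hypothetical global defaults for two quota resources.
DEFAULTS = {'instances': 10, 'cores': 20}


def effective_quota(resource, tenant_id, quota_class, class_quotas, overrides):
    """Resolve a quota value: per-tenant override > quota class > default.

    Upgrading a tenant's service level then means changing one attribute
    (its quota class) instead of touching every individual quota.
    """
    if (tenant_id, resource) in overrides:
        return overrides[(tenant_id, resource)]
    if quota_class and resource in class_quotas.get(quota_class, {}):
        return class_quotas[quota_class][resource]
    return DEFAULTS[resource]
```

The open question in the thread is where `quota_class` lives: on the RequestContext (populated from Keystone) or in a service-local mapping table.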
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace




Re: [openstack-dev] [solum] Question about solum-minimal-cli BP

2014-02-19 Thread Adrian Otto

 On Feb 17, 2014, at 3:58 PM, Angus Salkeld angus.salk...@rackspace.com 
 wrote:
 
 On 17/02/14 21:47 +, Shaunak Kashyap wrote:
 Hey folks,
 
 I was reading through 
 https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-implementation
  and have a question.
 
 If I’m understanding “app create” and “assembly create” correctly, the user 
 will have to run “app create” first, followed by “assembly create” to have a 
 running application. Is this correct? If so, what is the reason for “app 
 create” not automatically creating one assembly as well?
 
 On that page it seems that app create is the same as plan create.
 
 The only reason I can see for separating the plan from the assembly is
 when you have git-push.
 Then you need to have something create the git repo for you.
 
 1 plan create (with a reference to a git-push requirement) would create
  the remote git repo for you.
 2 you clone and populate the repo with your app code
 3 you push, and that causes the assembly create/update.
 
 Adrian might want to correct me here tho'

Angus is right. This is the key difference between a flow that uses app/plan 
create and one that just does assembly create and passes in a plan file by 
value.

The main idea of a plan resource is so you have an in-system reference of the 
deployment plan so you can easily reference it (like a template) to create an 
arbitrary number of matching assemblies from it. It's also a convenient place 
to hang initialization requirements that need to be met before you start 
generating assemblies, like the creation of a git repo.

Adrian

 -Angus
 
 
 Thanks,
 Shaunak
 


Re: [openstack-dev] Keystone working with V3

2014-02-19 Thread Dolph Mathews
On Wed, Feb 19, 2014 at 10:21 AM, Vinod Kumar Boppanna 
vinod.kumar.boppa...@cern.ch wrote:

  Dear All,

 I am doing some development in Nova, and in this regard I have to write
 code where Nova requests some data through the V3 API of Keystone. But
 keystoneclient always falls back to V2 (keystoneclient/v3/client.py).
 Due to this, the Keystone V3 API which I am using fails, as it is not
 available in V2.

 I did a hack to solve it

 https://review.openstack.org/#/c/74678/1/keystoneclient/v3/client.py

 But I was told that it is not acceptable to do it this way. So, is anybody
 from Keystone doing anything to allow V3 APIs in keystoneclient (without
 falling back to V2)? If so, please let me know, as my code in Nova requires
 this feature, and only then can I submit the code to Gerrit.


There's another mailing list discussion on the same topic:
http://lists.openstack.org/pipermail/openstack-dev/2014-February/026177.html

The short of it is that we probably need such a built-in hack for icehouse.
I'm going to experiment with your patch today, thanks!


  Thanks & Regards,
 Vinod Kumar Boppanna



Re: [openstack-dev] [Nova] Including Domains in Nova

2014-02-19 Thread Ulrich Schwickerath

Hi, all,

we've been making good progress on this blueprint:

https://blueprints.launchpad.net/nova/+spec/domain-quota-driver-api

which relies on the domain quota driver stuff. Maybe you'd like to have 
a look at that as well.


Kind regards,
Ulrich


On 19.02.2014 16:45, Tiwari, Arvind wrote:


Hi Henrique,

I agree with your thoughts and in my opinion every OpenStack service 
has to be Domain aware. Specially it will be more helpful in large 
scale OpenStack deployments where IAM resources are scoped to a domain 
but other services (e.g. Nova) are just not aware of domains.


Thanks,

Arvind

*From:*Henrique Truta [mailto:henriquecostatr...@gmail.com]
*Sent:* Wednesday, February 19, 2014 5:21 AM
*To:* openstack-dev@lists.openstack.org
*Subject:* [openstack-dev] [Nova] Including Domains in Nova

Hi everyone.

It is necessary to make Nova support domain quotas and to create a 
new administrative perspective. Here are some reasons why Nova should 
support domains:


1 - It's interesting to keep the main OpenStack components sharing the 
same concept, since it has already been adopted in Keystone. In Keystone, 
the domain defines administrative boundaries and makes management 
of its entities easier.


2 - Nova shouldn't be so tied to projects. Keystone was created to 
abstract concepts like these for other modules, like Nova. In addition, 
Nova needs to be flexible enough to work with the new functionalities 
that Keystone will provide. If we keep Nova tied to projects 
(or domains), we will be far from Nova's focus, which is providing 
compute services.


3 - There is also the Domain Quota Driver BP 
(https://blueprints.launchpad.net/nova/+spec/domain-quota-driver), 
whose implementation has already begun. This blueprint allows the user 
to handle quotas at domain level. Nova requires domains to make this 
feature work properly, right above the project level. There is also an 
implementation that includes the domain information in the token 
context. This implementation has to be included as well: 
https://review.openstack.org/#/c/55870/.


4 - The Nova API must be extended in order to enable domain-level 
operations that currently only work at project level, such as:


- Listing, viewing and deleting images;

- Deleting and listing servers;

- Perform server actions like changing passwords, reboot, rebuild and 
resize;


- CRUD and listing on server metadata;

In addition, it must provide quota management through the API and the 
establishment of a new administrative scope.


In order to accomplish these features, the token must contain the 
domain information, which will be included as mentioned in item 3. 
Then, the Nova API calls will be changed to consider the domain 
information when a call referring to a project is made (e.g. servers).


What do you think about it? Any additional suggestions?

AT: Keystone also has to enforce the domain scoping more strongly, as 
in the current model Keystone resources are not required to be scoped 
to a domain.


Thanks.

Henrique Truta





Re: [openstack-dev] [solum] Question about solum-minimal-cli BP

2014-02-19 Thread Adrian Otto

 On Feb 18, 2014, at 4:41 PM, Angus Salkeld angus.salk...@rackspace.com 
 wrote:
 
 On 18/02/14 14:19 +, Shaunak Kashyap wrote:
 Thanks Angus and Devdatta. I think I understand.
 
 Angus -- what you said seems to mirror the Heroku CLI usage: a) User runs 
 app/plan create (to create the remote repo), then b) user runs git push 
 ... (which pushes the code to the remote repo and creates 1 assembly, 
 resulting in a running application). If this is the intended flow for the 
 user, it makes sense to me.
 
 Just to be clear, I am not totally sure we are going to glue git repo
 generation to create plan (it *could* be part of create assembly).

Yes, it's possible to hang repo creation on create assembly, but you only need 
that when you first set up the app. After that point, the repo exists, and you 
can just make updates to the plan (when making a new release, for example), and 
create new assemblies from it.

If the user calls assembly create using a plan file that has a requirement 
like git-init, that requirement could be fulfilled by a matching git-init 
service. If the repo does not exist yet, the matching service can create one. 
Otherwise, it can return without taking any action. This feature is not 
necessary if plan create has the equivalent, but both would work exactly the 
same way. This 
might be convenient for users who prefer not to bother with creating any plan 
resources. This cuts one step out of the workflow, but it requires you to keep 
track of your plan file forever and keep passing it in by value every time you 
create an assembly, rather than just at initial app creation time. Subsequent 
assembly creations would be faster when using a reference to a plan that Solum 
already has, rather than passing it in by value, or causing Solum to download 
it from a code repo each time.

 One follow up question: under what circumstances will the user need to 
 explicitly run assembly create? Would it be used exclusively for adding 
 more assemblies to an already running app?
 
 If you are not using the git-push mechanism, but the git-pull.
 Here you have your own repo (say on github) and there is not
 a git-repo-generation phase.

See my remarks above on this. You might want a plan even in a git-pull scenario 
if you expect to create multiple matching assemblies.

Adrian

 -Angus
 
 
 Thanks,
 
 Shaunak
 
 
 From: Angus Salkeld [angus.salk...@rackspace.com]
 Sent: Monday, February 17, 2014 5:54 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [solum] Question about solum-minimal-cli BP
 
 On 17/02/14 21:47 +, Shaunak Kashyap wrote:
 Hey folks,
 
 I was reading through 
 https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-implementation
  and have a question.
 
 If I’m understanding “app create” and “assembly create” correctly, the user 
 will have to run “app create” first, followed by “assembly create” to have 
 a running application. Is this correct? If so, what is the reason for “app 
 create” not automatically creating one assembly as well?
 
 On that page it seems that app create is the same as plan create.
 
  The only reason I can see for separating the plan from the assembly is
 when you have git-push.
 Then you need to have something create the git repo for you.
 
 1 plan create (with a reference to a git-push requirement) would create
  the remote git repo for you.
 2 you clone and populate the repo with your app code
 3 you push, and that causes the assembly create/update.
 
  Adrian might want to correct me here tho'
 
 -Angus
 
 
 Thanks,
 Shaunak
 


Re: [openstack-dev] VPC Proposal

2014-02-19 Thread Martin, JC
Comments in line.

JC
On Feb 18, 2014, at 5:21 PM, Rudra Rugge rru...@juniper.net wrote:

 Please see inline:
 
 On Feb 18, 2014, at 2:57 PM, Martin, JC jch.mar...@gmail.com wrote:
 
 Maybe I should explain this one a bit.
 
 Shared network: If a user has defined a shared network, and they used your 
 API to create a VPC, the instances within the VPC will automatically get an 
 interface on the shared network. I don't think that this is the expected 
 behavior
 
 
 When a user launches a VM in a VPC (AWS) the user needs to specify a subnet 
 (network in openstack terminology) for each of the interfaces. Hence the 
 instances will only get interfaces on the passed subnets/networks. The 
 network being shared or not is not relevant for the VM launch. AWS APIs need 
 the subnet/network to be passed for a VM launch in VPC.

Thanks, this makes sense. 

 
 
 FIP in scope of VPC: I was not talking about the EIP for Internet access, 
 sorry if it was confusing. Since you are not really describing how you 
 create the external networks, it's not clear how you implement the multiple 
 gateways (public and private) that AWS supports, and how you connects 
 networks to routers and external networks. i.e. are the CIDRs used in the 
 VPC, NAT'ED to be routed in the customer datacenter, in which case, there is 
 a floating IP pool that is private to each private gateway and VPC (not the 
 'public' one).
 
 Gateways are built using Openstack neutron router resource. Networks are 
 connected to the router interfaces. For internet access cloud administrator 
 needs to provision a floating IP pool for the router to use. For CIDRs used 
 in the VPC we need to implement a route-table extension which holds the 
 prefix list. The prefix-list or route-table is attached to a 
 subnet(AWS)/network(Openstack).  All internal(private) routing is managed by 
 the Openstack router. NAT and VPN are used as next-hops to exit the VPC. In 
 these cases similar to AWS we need to launch NAT and VPN capable instances as 
 supported by Openstack FWAAS and VPNAAS. 

I looked in the code referenced but did not find any router attachment call. 
Did I miss something?
Also, what about these calls: CreateInternetGateway, AttachInternetGateway, 
CreateCustomerGateway, … don't you need those to define how the VPC attaches 
to the outside?

What about mapping the optional attributes too (e.g. InstanceTenancy)? What's 
the point of providing only partial compatibility?

 
 
 It would be useful for you to describe the pre-setup required to do make 
 this works.
 
 The only pre-setup needed by the cloud admin is to provide a public pool for 
 floating IP. 
 
 Rudra
 
 
 
 JC
 
 
 On Feb 18, 2014, at 1:09 PM, Harshad Nakil hna...@contrailsystems.com 
 wrote:
 
  2. It does give full AWS compatibility (except for network ACL, which was 
  deferred). Shared networks and FIP within the scope of a VPC are not 
  something AWS provides. So it is not partial support.
 
 


[openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-19 Thread Ladislav Smola

Hello,

I would like to have your opinion about how to deal with passwords in 
Tuskar-API


The background is that Tuskar-API stores heat template parameters in its 
database, in preparation for more complex workflows where we will need to 
store the data before the actual heat stack-create.

So right now the state is unacceptable: we are storing sensitive data (all 
the heat passwords and keys) in raw form in the Tuskar-API database. That 
is wrong, right?

So, is anybody aware of reasons why we would need to store the passwords? 
Storing them for a short amount of time (ideally in a session) should be 
fine, so we can use them for later initialization of the stack.

Do we need to store them for heat stack-update? Because heat throws them away.

If yes, this bug should change to encrypting all of the sensitive data, 
right? It might be just me, but dealing with sensitive data like this is 
the 8th deadly sin.

The second thing is, if users update their passwords, the info in 
Tuskar-API will be obsolete and can't be used anyway.

There is a bug filled for it:
https://bugs.launchpad.net/tuskar/+bug/1282066
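
A minimal sketch of the "don't persist secrets" option discussed above: split the Heat parameters into storable and sensitive sets before writing anything to the database, keeping the sensitive ones only in the session (the name-matching heuristic and the parameter names below are illustrative, not Tuskar code):

```python
# Substrings that flag a parameter as sensitive (illustrative heuristic).
SENSITIVE_MARKERS = ('password', 'secret', 'key', 'token')


def split_parameters(params):
    """Return (storable, sensitive); only `storable` is written to the DB,
    while `sensitive` stays in the session until heat stack-create."""
    storable, sensitive = {}, {}
    for name, value in params.items():
        target = (sensitive
                  if any(m in name.lower() for m in SENSITIVE_MARKERS)
                  else storable)
        target[name] = value
    return storable, sensitive
```

As Dougal notes later in the thread, the weak point of any such heuristic is knowing reliably which parameters are passwords; flagging them via the UI/client pushes that decision onto the user.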

Thanks for the feedback, seems like the bug is not as straightforward as 
I thought.


Kind Regards,
Ladislav


Re: [openstack-dev] help the oslo team help you

2014-02-19 Thread Doug Hellmann
On Wed, Feb 19, 2014 at 11:41 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Wed, Feb 19, 2014 at 7:06 AM, Doug Hellmann
 doug.hellm...@dreamhost.com wrote:
 
 
 
  On Tue, Feb 18, 2014 at 9:03 PM, Joe Gordon joe.gord...@gmail.com
 wrote:
 
  On Wed, Feb 12, 2014 at 12:16 PM, Doug Hellmann
  doug.hellm...@dreamhost.com wrote:
   If you have a change in your project that is blocked waiting for a
 patch
   to
   land in oslo (in the incubator, or any of the libraries we manage)
   *please*
   either open a blueprint or mark the associated bug as also affecting
 the
   relevant oslo project, then let me know about it so I can put it on
 our
   review priority list. We have a lot going on in oslo right now, but
 will
   do
   our best to prioritize reviews that affect features landing in other
   projects -- if you let us know about them.
 
 
  While I don't think this is what you meant when you said let oslo help
  you, I do have a request:
 
  While trying to do a basic oslo-incubator update ('./update.sh
  --nodeps --modules fixture --base nova --dest-dir ../nova')  I hit a
  bug https://bugs.launchpad.net/oslo/+bug/1281860
 
  Due to the nature of oslo-incubator (it may break at any time) it is
  hard for downstream projects (nova, cinder etc.) to keep their
  oslo-incubator copies up to date, so when someone wants to sync across
  a new change they have to deal with many unrelated changes, some of
  which may break things. For example
 
  oslo-incubator$ ./update.sh --config-file
  ../cinder/openstack-common.conf --base cinder --dest-dir ../cinder
  cinder$ git diff --stat HEAD
  52 files changed, 3568 insertions(+), 961 deletions(-)
 
 
  I would like to propose making the oslo team responsible for syncing
  across oslo-incubator code, they know the code base best and can fix
  things when they break.  This doesn't mean no one else can use
   update.sh; it just means that the oslo team would make sure that syncs
  are done in a timely fashion, so the diffs don't get too big.
 
 
  The intent has always been for the person making a change in the
 incubator
  to be responsible for updating consuming projects (whether that person
 is an
  oslo core reviewer or not). That hasn't always happened -- some changes
 are
  only synced into the project where the change was wanted, some changes
  aren't synced at all.

 I had no idea that is how it is supposed to work today. That means for
 every patch I make to oslo-incubator I should be making 10+ other
 patches as well, that is a lot of overhead.


It is. It's a bad idea to have that many projects copying code from the
incubator, and it was never the plan that the incubator would be used that
way. Making syncing easier isn't going to solve the problem. It only
perpetuates the situation where syncing is needed at all.

 Although it doesn't look like it from the outside, we have made
 significant
  progress in creating the tools and processes we need to move code out of
 the
  incubator into libraries. I expect to accelerate those moves late in this
  cycle and early in the next so the libraries can be adopted early in the
  next cycle.

 Does this mean, there are plans to stop doing code syncs via
 oslo-incubator all together?


No, although I hope to restore it to its original purpose. As I said on
IRC, oslo is meant to be a place to collaborate to evolve an API and
create libraries. Somehow the perception has become that the incubator is a
place to dump code for someone else to maintain.

I am really happy to see the accelerated move to libraries, but that
 is not an immediate solution, we still have to deal with icehouse. It
 sounds like we both agree the status quo is broken and the long term
 solution is move code out of incubator. We need a short term solution
 as well, having cinder 4k lines off of oslo or nova being almost 10k
 lines off is unacceptable.


So far, every discussion on the subject eventually comes back to how big
the required patches are, though, and no one has come up with any proposals
for addressing that. I don't have a solution either, but am open to
suggestions. I do know that whatever solution we come up with will require
work from both sides of each sync -- oslo and the receiving project.

In the mean time, I have been focusing on addressing the underlying issue
by moving code into libraries. In past releases, we graduated the config
library and the messaging library, each with a certain amount of pain. That
pain taught us valuable lessons, which we have applied when creating
processes and tools to make things go more smoothly for other libraries. We
are verifying them now with oslo.vmware and oslo.test, and when we have the
kinks worked out we will move on to other libraries [1].

Doug

[1] https://wiki.openstack.org/wiki/Oslo/GraduationStatus




 
  Doug
 

[openstack-dev] [Neutron][QoS] It's back from the dead!

2014-02-19 Thread Collins, Sean
Hi,

I'd like to apologize for the long delays in updating the QoS API
patch sets - I am working on cleaning them up so that they are ready for
review.

-- 
Sean M. Collins


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-19 Thread Dougal Matthews

On 19/02/14 17:10, Ladislav Smola wrote:

Hello,

I would like to have your opinion about how to deal with passwords in
Tuskar-API

The background is that Tuskar-API stores heat template parameters in its
database, in preparation for more complex workflows where we will need to
store the data before the actual heat stack-create.

So right now the state is unacceptable: we are storing sensitive data (all
the heat passwords and keys) in raw form in the Tuskar-API database. That
is wrong, right?


I agree, this situation needs to change.

I'm +1 for not storing the passwords if we can avoid it. This would 
apply to all situations and not just Tuskar.


The question for me is: what passwords will we have, and when do we need 
them? Are any of the passwords required long term?


If we do need to store passwords it becomes a somewhat thorny issue: how 
does Tuskar know what a password is? If this is flagged up by the 
UI/client then we are relying on the user to tell us, which isn't wise.




Re: [openstack-dev] call for help with nova bug management

2014-02-19 Thread Russell Bryant
On 02/18/2014 02:48 PM, Tracy Jones wrote:
 So I have been rather underwhelmed by the enthusiastic response to help
 out :-)
 
 
 
 So far only wendar and johnthetubaguy have signed up.  I was hoping for
 at least 3-5 people to help with the initial triage.  Please sign up
 this week if you can help and I'll schedule the meetings starting next week.

I'll be sure to bring this up in the weekly Nova meeting tomorrow, as well.

-- 
Russell Bryant



Re: [openstack-dev] [Heat] [TripleO] Better handling of lists in Heat - a proposal to add a map function

2014-02-19 Thread Randall Burt
This may also be relevant: 
https://blueprints.launchpad.net/heat/+spec/override-resource-name-in-resource-group

On Feb 19, 2014, at 1:48 AM, Clint Byrum cl...@fewbar.com
 wrote:

 Since picking up Heat and trying to think about how to express clusters
 of things, I've been troubled by how poorly the CFN language supports
 using lists. There has always been the Fn::Select function for
 dereferencing arrays and maps, and recently we added a nice enhancement
 to HOT to allow referencing these directly in get_attr and get_param.
 
 However, this does not help us when we want to do something with all of
 the members of a list.
 
 In many applications I suspect the template authors will want to do what
 we want to do now in TripleO. We have a list of identical servers and
 we'd like to fetch the same attribute from them all, join it with other
 attributes, and return that as a string.
 
 The specific case is that we need to have all of the hosts in a cluster
 of machines addressable in /etc/hosts (please, Designate, save us,
 eventually. ;). The way to do this if we had just explicit resources
 named NovaCompute0, NovaCompute1, would be:
 
  str_join:
- \n
- - str_join:
- ' '
- get_attr:
  - NovaCompute0
  - networks.ctlplane.0
- get_attr:
  - NovaCompute0
  - name
  - str_join:
- ' '
- get_attr:
  - NovaCompute1
  - networks.ctlplane.0
- get_attr:
  - NovaCompute1
  - name
 
 Now, what I'd really like to do is this:
 
 map:
  - str_join:
- \n
- - str_join:
  - ' '
  - get_attr:
- $1
- networks.ctlplane.0
  - get_attr:
- $1
- name
  - - NovaCompute0
- NovaCompute1
 
 This would be helpful for the instances of resource groups too, as we
 can make sure they return a list. The above then becomes:
 
 
 map:
  - str_join:
- \n
- - str_join:
  - ' '
  - get_attr:
- $1
- networks.ctlplane.0
  - get_attr:
- $1
- name
  - get_attr:
  - NovaComputeGroup
  - member_resources
 
 Thoughts on this idea? I will throw together an implementation soon but
 wanted to get this idea out there into the hive mind ASAP.
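For readers following along outside Heat, the proposed semantics can be sketched in a few lines of plain Python; `resolve_map` and its handling of the `$1` placeholder are illustrative assumptions, not Heat code:

```python
# Illustrative sketch of a "map" intrinsic: substitute each list item for
# the "$1" placeholder inside the template expression and collect the
# results. resolve_map is a hypothetical name, not part of Heat.

def resolve_map(expr, items):
    def substitute(node, value):
        if isinstance(node, dict):
            return {k: substitute(v, value) for k, v in node.items()}
        if isinstance(node, list):
            return [substitute(v, value) for v in node]
        if node == '$1':
            return value
        return node
    return [substitute(expr, item) for item in items]

# One get_attr expression expanded once per server resource name.
expr = {'get_attr': ['$1', 'name']}
resolved = resolve_map(expr, ['NovaCompute0', 'NovaCompute1'])
```

A later resolution pass would then evaluate each expanded get_attr as usual, so the template author writes the expression once instead of once per server.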
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-19 Thread Chris Behrens
+1. I'd like to leave it experimental as well. I think the task work is 
important to the future of nova-api and I'd like to make sure we're not rushing 
anything. We're going to need to live with old API versions for a long time, so 
it's important that we get it right. I'm also not convinced there's a 
compelling enough reason for one to move to v3 as it is. Extension versioning 
is important, but I'm not sure it can't be backported to v2 in the meantime.

- Chris

 On Feb 19, 2014, at 9:36 AM, Russell Bryant rbry...@redhat.com wrote:
 
 Greetings,
 
 The v3 API effort has been going for a few release cycles now.  As we
 approach the Icehouse release, we are faced with the following question:
 Is it time to mark v3 stable?
 
 My opinion is that I think we need to leave v3 marked as experimental
 for Icehouse.
 
 There are a number of reasons for this:
 
 1) Discussions about the v2 and v3 APIs at the in-person Nova meetup
 last week made me come to the realization that v2 won't be going away
 *any* time soon.  In some cases, users have long term API support
 expectations (perhaps based on experience with EC2).  In the best case,
 we have to get all of the SDKs updated to the new API, and then get to
 the point where everyone is using a new enough version of all of these
 SDKs to use the new API.  I don't think that's going to be quick.
 
 We really don't want to be in a situation where we're having to force
 any sort of migration to a new API.  The new API should be compelling
 enough that everyone *wants* to migrate to it.  If that's not the case,
 we haven't done our job.
 
 2) There's actually quite a bit still left on the existing v3 todo list.
 We have some notes here:
 
 https://etherpad.openstack.org/p/NovaV3APIDoneCriteria
 
 One thing is nova-network support.  Since nova-network is still not
 deprecated, we certainly can't deprecate the v2 API without nova-network
 support in v3.  We removed it from v3 assuming nova-network would be
 deprecated in time.
 
 Another issue is that we discussed the tasks API as the big new API
 feature we would include in v3.  Unfortunately, it's not going to be
 complete for Icehouse.  It's possible we may have some initial parts
 merged, but it's much smaller scope than what we originally envisioned.
 Without this, I honestly worry that there's not quite enough compelling
 functionality yet to encourage a lot of people to migrate.
 
 3) v3 has taken a lot more time and a lot more effort than anyone
 thought.  This makes it even more important that we're not going to need
 a v4 any time soon.  Due to various things still not quite wrapped up,
 I'm just not confident enough that what we have is something we all feel
 is Nova's API of the future.
 
 
 Let's all take some time to reflect on what has happened with v3 so far
 and what it means for how we should move forward.  We can regroup for Juno.
 
 Finally, I would like to thank everyone who has helped with the effort
 so far.  Many hours have been put in to code and reviews for this.  I
 would like to specifically thank Christopher Yeoh for his work here.
 Chris has done an *enormous* amount of work on this and deserves credit
 for it.  He has taken on a task much bigger than anyone anticipated.
 Thanks, Chris!
 
 -- 
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-19 Thread Jason Rist
On Wed 19 Feb 2014 10:29:32 AM MST, Dougal Matthews wrote:
 On 19/02/14 17:10, Ladislav Smola wrote:
 Hello,

 I would like to have your opinion about how to deal with passwords in
 Tuskar-API

 The background is that tuskarAPI is storing heat template parameters in
 its database. It's a
 preparation for more complex workflows, when we will need to store the
 data before the actual
 heat stack-create.

 So right now, the state is unacceptable: we are storing sensitive
 data (all the heat passwords and keys)
 in a raw form in the TuskarAPI database. That is wrong, right?

 I agree, this situation needs to change.

 I'm +1 for not storing the passwords if we can avoid it. This would
 apply to all situations and not just Tuskar.

 The question for me is: what passwords will we have and when do we
 need them? Are any of the passwords required long term?

 If we do need to store passwords, it becomes a somewhat thorny issue:
 how does Tuskar know what a password is? If this is flagged up by the
 UI/client then we are relying on the user to tell us, which isn't wise.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Would it be possible to create some token for use throughout? Forgive 
my naivete.

--
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
+1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-19 Thread Dougal Matthews

On 19/02/14 18:29, Jason Rist wrote:

Would it be possible to create some token for use throughout? Forgive
my naivete.


I don't think so, the token would need to be understood by all the
services that we store passwords for. I may be misunderstanding however.

Dougal

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] It's back from the dead!

2014-02-19 Thread Collins, Sean
No surprise, the code is quite stale, so bear with me while I debug the
failures.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gerrit co-authors and ticket stealing

2014-02-19 Thread Dan Prince
Perhaps one of the lesser known Gerrit features is the ability to overwrite 
someone else's patchset/review with a new revision. This can be a handy thing 
for collaboration, or perhaps to make minor edits (spelling fixes for example) 
to help expedite the review process. Generally I think things are fine and 
friendly on this front. There are a couple of side-effect behaviors that can occur.

Things like: Changing the author or adding yourself as a co-author. Changing 
the original author should almost never happen (I'm not sure that it has). 
Adding yourself as a co-author is less of an issue, but is also somewhat 
questionable if, for example, all you've done is re-worded something or fixed a 
spelling issue. So long as the original author is in the know here I think it 
is probably fine to add yourself as a co-author. But making more meaningful 
changes, even to a commit message, should be checked ahead of time so as not to 
disrupt the intent of the original author's patch, IMO. Leaving clear Gerrit 
feedback on the most recent patchset/commit with a -1 should do just fine in 
most cases if you would like a meaningful change and aren't closely 
collaborating (already) on the fix...

It has also come to my attention that co-authoring a patch steals the Launchpad 
ticket. I believe this is something that we should watch closely (and perhaps 
fix if we can).

Not trying to point the finger at anyone specifically here. I've probably been 
guilty of clobbering violations and/or accidental ticket stealing myself. We 
just need to be careful with these more advanced collaborative coding workflows 
so as not to step on each other's toes.

Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][nova] Neutron plugin authors: Does port status indicate liveness?

2014-02-19 Thread Aaron Rosen
Hi Mathieu,

The current train of thought is to have neutron notify nova via a callback
when ports are ready. This model should hopefully scale better as
nova-compute won't need to poll neutron checking on the port status. Dan
Smith already has a patch out that adds an API to nova for it to receive
external events: https://review.openstack.org/#/c/74565/ , which neutron can
use for this.

I abandoned that patch as it takes the approach of polling neutron from
nova-compute which we don't want.
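As a rough sketch of that push model (the event name matches the proposal in the review above; the payload field names are assumptions for illustration only):

```python
# Hypothetical sketch of the callback: when a port transitions, Neutron
# would build an event for Nova's proposed external-events API instead of
# Nova polling port status. Field names here are illustrative assumptions.

def build_port_event(port):
    return {
        'name': 'network-vif-plugged',
        'server_uuid': port['device_id'],   # instance that owns the port
        'tag': port['id'],                  # which VIF the event refers to
        'status': 'completed' if port['status'] == 'ACTIVE' else 'failed',
    }

event = build_port_event({'id': 'port-1', 'device_id': 'vm-1',
                          'status': 'ACTIVE'})
```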

Aaron


On Wed, Feb 19, 2014 at 12:58 AM, Mathieu Rohon mathieu.ro...@gmail.com wrote:

 Hi Aaron,

 You seem to have abandonned this patch :
 https://review.openstack.org/#/c/74218/

 You want neutron to update port in nova, can you please tell us how do
 you want to do that?

 I think that we should use such a mechanism for live-migration.
 live-migration should occur once the port is set up on the destination
 host. This could potentially resolve this bug :

 https://bugs.launchpad.net/neutron/+bug/1274160

 Best,

 Mathieu

 On Tue, Feb 18, 2014 at 2:55 AM, Aaron Rosen aaronoro...@gmail.com
 wrote:
  Hi Maru,
 
  Thanks for getting this thread started. I've filed the following
 blueprint
  for this:
 
  https://blueprints.launchpad.net/nova/+spec/check-neutron-port-status
 
  and have a have a prototype of it working here:
 
  https://review.openstack.org/#/c/74197/
  https://review.openstack.org/#/c/74218/
 
  One part that threw me a little while getting this working is that if
  using ovs and the new libvirt_vif_driver LibvirtGenericVifDriver, nova no
  longer calls ovs-vsctl to set external_ids:iface-id; libvirt automatically
  does that for you. Unfortunately, this data seems to only make it to ovsdb
  when the instance is powered on. Because of this I needed to add back those
  calls, as neutron needs this data to be set in ovsdb before it can start
  wiring the ports.
 
  I'm hoping this change should help out with
  https://bugs.launchpad.net/neutron/+bug/1253896 but we'll see. I'm not sure
  if it's too late to merge this in icehouse but it might be worth considering
  if we find that it helps reduce gate failures.
 
  Best,
 
  Aaron
 
 
  On Thu, Feb 13, 2014 at 3:31 AM, Mathieu Rohon mathieu.ro...@gmail.com
  wrote:
 
  +1 for this feature which could potentially resolve a race condition
  that could occur after port-binding refactoring in ML2 [1].
  in ML2, the port could be ACTIVE once a MD has bound the port. the
  vif_type could then be known by nova, and nova could create the
  network correctly thanks to vif_type and vif_details ( with
  vif_security embedded [2])
 
 
  [1]
 http://lists.openstack.org/pipermail/openstack-dev/2014-February/026750.html
  [2]https://review.openstack.org/#/c/72452/
 
  On Thu, Feb 13, 2014 at 3:13 AM, Maru Newby ma...@redhat.com wrote:
    Booting a Nova instance when Neutron is enabled is often unreliable due
    to the lack of coordination between Nova and Neutron apart from port
    allocation.  Aaron Rosen and I have been talking about fixing this by
    having Nova perform a check for port 'liveness' after vif plug and
    before vm boot.  The idea is to have Nova fail the instance if its
    ports are not seen to be 'live' within a reasonable timeframe after
    plug.  Our initial thought is that the compute node would call Nova's
    networking subsystem which could query Neutron for the status of the
    instance's ports.
   
    The open question is whether the port 'status' field can be relied upon
    to become ACTIVE for all the plugins currently in the tree.  If this is
    not the case, please reply to this thread with an indication of how one
    would be able to tell the 'liveness' of a port managed by the plugin
    you maintain.
   
    In the event that one or more plugins cannot reliably indicate port
    liveness, we'll need to ensure that the port liveness check can be
    optionally disabled so that the existing behavior of racing vm boot is
    maintained for plugins that need it.
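A minimal sketch of what such an optional liveness check might look like on the Nova side, assuming only a generic `list_ports` query; names and defaults here are illustrative, not the proposed implementation:

```python
import time

# Sketch of the optional liveness check discussed above: poll the plugin's
# port status for the instance and fail the boot if the ports do not all go
# ACTIVE within a timeout. list_ports stands in for a Neutron port query;
# names and defaults are illustrative assumptions.

def wait_for_ports_live(list_ports, instance_id, timeout=300, interval=2):
    deadline = time.time() + timeout
    while time.time() < deadline:
        ports = list_ports(device_id=instance_id)
        if ports and all(p['status'] == 'ACTIVE' for p in ports):
            return True
        time.sleep(interval)
    return False

# A plugin whose ports report ACTIVE immediately passes the check.
ok = wait_for_ports_live(lambda device_id: [{'status': 'ACTIVE'}],
                         'vm-1', timeout=1, interval=0)
```

Disabling the check for plugins that cannot report liveness would then just mean skipping this call and keeping today's racing behavior.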
  
   Thanks in advance,
  
  
   Maru
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] Question about solum-minimal-cli BP

2014-02-19 Thread Shaunak Kashyap
That's exactly what I was looking for. Thank you Angus!

Shaunak


From: Angus Salkeld [angus.salk...@rackspace.com]
Sent: Wednesday, February 19, 2014 5:15 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [solum] Question about solum-minimal-cli BP

On 19/02/14 08:52 +, Shaunak Kashyap wrote:
Thanks Angus but I think I have managed to get confused again :)

So let me take a step back. From a user's perspective, what is the least 
number of steps they would need to take in order to have a running application 
with Solum? I understand there might be two variations on this - git-push and 
git-pull - and the answer may be different for each.

If this is documented somewhere, I'm happy to peruse through that instead; 
just point me to it.

https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/GitIntegration


Thanks,

Shaunak

From: Angus Salkeld [angus.salk...@rackspace.com]
Sent: Tuesday, February 18, 2014 6:13 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [solum] Question about solum-minimal-cli BP

On 18/02/14 14:19 +, Shaunak Kashyap wrote:
Thanks Angus and Devdatta. I think I understand.

Angus -- what you said seems to mirror the Heroku CLI usage: a) User runs 
app/plan create (to create the remote repo), then b) user runs git push 
... (which pushes the code to the remote repo and creates 1 assembly, 
resulting in a running application). If this is the intended flow for the 
user, it makes sense to me.

Just to be clear, I am not totally sure we are going to glue git repo
generation to create plan (it *could* be part of create assembly).


One follow up question: under what circumstances will the user need to 
explicitly run assembly create? Would it be used exclusively for adding 
more assemblies to an already running app?

If you are not using the git-push mechanism, but git-pull, then you have
your own repo (say on github) and there is no
git-repo-generation phase.

-Angus


Thanks,

Shaunak


From: Angus Salkeld [angus.salk...@rackspace.com]
Sent: Monday, February 17, 2014 5:54 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [solum] Question about solum-minimal-cli BP

On 17/02/14 21:47 +, Shaunak Kashyap wrote:
Hey folks,

I was reading through 
https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-implementation
 and have a question.

If I’m understanding “app create” and “assembly create” correctly, the user 
will have to run “app create” first, followed by “assembly create” to have a 
running application. Is this correct? If so, what is the reason for “app 
create” not automatically creating one assembly as well?

On that page it seems that app create is the same as plan create.

The only reason I can see for separating the plan from the assembly is
when you have git-push.
Then you need to have something create the git repo for you.

1 plan create (with a reference to a git-push requirement) would create
   the remote git repo for you.
2 you clone and populate the repo with your app code
3 you push, and that causes the assembly create/update.

Adrian might want to correct me here tho'

-Angus


Thanks,
Shaunak

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit co-authors and ticket stealing

2014-02-19 Thread Dolph Mathews
On Wed, Feb 19, 2014 at 12:33 PM, Dan Prince dpri...@redhat.com wrote:

 Perhaps one of the lesser known Gerrit features is the ability to
 overwrite someone else's patchset/review with a new revision. This can be a
 handy thing for collaboration, or perhaps to make minor edits (spelling
 fixes for example) to help expedite the review process. Generally I think
 things are fine and friendly on this front. There are a couple of side-effect
 behaviors that can occur.


o/ I do this regularly to help authors land their intended changes
(hopefully with less frustration than they would otherwise experience).
Most frequently, if the only thing holding me back from a +1 / +2 is a few
nits, I'll leave some brief review feedback on the current patchset, and
submit a subsequent patchset with the nits fixed, and leave a +1 / +2.


 Things like: Changing the author or adding yourself as a co-author.
 Changing the original author should almost never happen (I'm not sure that
 it has). Adding yourself as a co-author is less of an issue, but is also
 somewhat questionable if, for example, all you've done is re-worded something
 or fixed a spelling issue. So long as the original author is in the know
 here I think it is probably fine to add yourself as a co-author. But making
 more meaningful changes, even to a commit message, should be checked ahead
 of time so as not to disrupt the intent of the original author's patch IMO.


+1 absolutely agree with these guidelines. Continuing the above, when I
want to make more meaningful changes, I either A) suggest a pastebin diff
to the author, or B) go ahead and make the changes but ask that the
original author review the latest patchset themselves and express a +1 to
acknowledge the result.

Leaving clear Gerrit feedback on the most recent patchset/commit with a -1
 should do just fine in most cases if you would like a meaningful change and
 aren't closely collaborating (already) on the fix...

 It has also come to my attention that co-authoring a patch steals the
 Launchpad ticket. I believe this is something that we should watch closely
 (and perhaps fix if we can).


+1 I used to make a habit of jumping to the bug and assigning the bug
back, but depending on your definition of steal (what does it actually
impact?), I'm not sure it's worth the effort? Regardless, I'd appreciate it
if the LP bot implementing this behavior used the Author (which as you
alluded, must be manually revised, e.g. `git commit --amend --author`) on
the commit rather than the Committer.



 Not trying to point the finger at anyone specifically here. I've probably
 been guilty of clobbering violations and/or accidental ticket stealing
 myself. We just need to be careful with these more advanced collaborative
 coding workflows so as not to step on each others toes.


Thanks for bringing this up! Gerrit provides for some powerful workflows
and I'd love it if the community was more comfortable taking advantage of
them.



 Dan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-19 Thread Dougal Matthews

On 19/02/14 18:49, Hugh O. Brock wrote:

On Wed, Feb 19, 2014 at 06:31:47PM +, Dougal Matthews wrote:

On 19/02/14 18:29, Jason Rist wrote:

Would it be possible to create some token for use throughout? Forgive
my naivete.


I don't think so, the token would need to be understood by all the
services that we store passwords for. I may be misunderstanding however.


Hmm... isn't this approximately what Keystone does? Accept a password
once from the user and then return session tokens?


Right - but I think the heat template expects passwords, not tokens. I
don't know how easily we can change that.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday February 20th at 22:00UTC

2014-02-19 Thread Matthew Treinish
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, February 20th at 22:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 22:00 UTC is in other timezones, tomorrow's
meeting will be at:

17:00 EST
07:00 JST
08:30 ACDT
23:00 CET
16:00 CST
14:00 PST

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Sent the first batch of invitations to Atlanta's Summit

2014-02-19 Thread Chris Behrens

On Jan 28, 2014, at 12:45 PM, Stefano Maffulli stef...@openstack.org wrote:

 A few minutes ago we sent the first batch of invites to people who
 contributed to any of the official OpenStack programs[1] from 00:00 UTC
 on April 4, 2014 (Grizzly release day) until present.

Something tells me that this date is not correct? :)

- Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Fixed recent gate issues

2014-02-19 Thread Alan Pevec
 Yeah it's pip weirdness where things fall apart because of the version cap. It's
 basically installing bin/swift from 1.9 when it sees the version requirement
 but it leaves everything in the python-swiftclient namespace from master.

 So I've actually been looking at this since late yesterday; the conclusion
 we've reached is to just skip the exercises on grizzly. Removing the version
 cap isn't going to be simple on grizzly because global requirements weren't
 enforced back in grizzly. We'd have to change the requirement for glance,
 horizon, and swift, and being ~3 weeks away from eol for grizzly I don't
 think we should mess with that. This failure is only an issue with the cli
 swiftclient on grizzly (and one swift functional test), which as it sits
 now is just the devstack exercises on grenade. So if we just don't run
 those exercises on the grizzly side of a grenade run there shouldn't be an
 issue. I've got 2 patches to do this here:

 https://review.openstack.org/#/c/74419/

 https://review.openstack.org/#/c/74451/

Looks like only the latter is needed; devstack-gate cores, please
approve it to unblock stable/havana.

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Do you think tenant_id should be verified

2014-02-19 Thread Dolph Mathews
There's an open bug [1] against nova and neutron to handle notifications [2]
from keystone about such events. I'd love to see that happen during Juno!

[1] https://bugs.launchpad.net/nova/+bug/967832
[2] http://docs.openstack.org/developer/keystone/event_notifications.html

On Mon, Feb 17, 2014 at 6:35 AM, Yongsheng Gong gong...@unitedstack.com wrote:

 It is not easy to enhance it. If we check the tenant_id on creation,
 should we also do something when keystone deletes the tenant?


 On Mon, Feb 17, 2014 at 6:41 AM, Dolph Mathews dolph.math...@gmail.com wrote:

 keystoneclient.middlware.auth_token passes a project ID (and name, for
 convenience) to the underlying application through the WSGI environment,
 and already ensures that this value can not be manipulated by the end user.

 Project ID's (redundantly) passed through other means, such as URLs, are
 up to the service to independently verify against keystone (or
 equivalently, against the WSGI environment), but can be directly
 manipulated by the end user if no checks are in place.

 Without auth_token in place to manage multitenant authorization, I'd
 still expect services to blindly trust the values provided in the
 environment (useful for both debugging the service and alternative
 deployment architectures).
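A minimal sketch of that verification, assuming the usual WSGI header-to-environ mapping (X-Project-Id becomes HTTP_X_PROJECT_ID); this is illustrative, not keystoneclient code:

```python
# Sketch of the check described above: trust the project ID that auth_token
# put in the WSGI environment, and reject a tenant_id supplied in the
# request body or URL when it disagrees. The header-to-environ names follow
# the standard WSGI convention; treat them as assumptions here.

def tenant_matches(environ, supplied_tenant_id):
    trusted = (environ.get('HTTP_X_PROJECT_ID')
               or environ.get('HTTP_X_TENANT_ID'))
    if trusted is None:
        # No auth_token in front of us (e.g. standalone debugging):
        # fall back to trusting the caller, as described above.
        return True
    return supplied_tenant_id == trusted

allowed = tenant_matches({'HTTP_X_PROJECT_ID': 'abc123'}, 'abc123')
```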

 On Sun, Feb 16, 2014 at 8:52 AM, Dong Liu willowd...@gmail.com wrote:

 Hi stackers:

 I found that when creating network, subnet and other resources, the
 attribute tenant_id
 can be set by the admin tenant. But we do not verify whether the tenant_id
 is real in keystone.

 I know that we could use neutron without keystone, but do you think
 tenant_id should
 be verified when we use neutron with keystone?

 thanks
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Sent the first batch of invitations to Atlanta's Summit

2014-02-19 Thread Dolph Mathews
I just noticed the subject of this email referred to the first batch of
invitations -- are there going to be subsequent batches of invites? If so,
who was not included in the first batch that will be in subsequent batches?

On Tue, Jan 28, 2014 at 2:45 PM, Stefano Maffulli stef...@openstack.org wrote:

 A few minutes ago we sent the first batch of invites to people who
 contributed to any of the official OpenStack programs[1] from 00:00 UTC
 on April 4, 2014 (Grizzly release day) until present.

 We'll send more invites *after each milestone* from now on and until
 feature freeze (March 6th, according to release schedule[2])

 IMPORTANT CHANGE

 Contrary to previous times, the code is a *$600 discount*. If you don't
 use it before March 22, when registration prices will increase, *you
 will be charged*.

  Use it! Now!

 And apply for the Travel Support Program if you need to:
 https://wiki.openstack.org/wiki/Travel_Support_Program

 Cheers,
 stef

 [1]
 
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml
 
 [2] https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
 --
 Ask and answer questions on https://ask.openstack.org

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Cleaning OpenStack resources

2014-02-19 Thread David Kranz
I was looking at https://review.openstack.org/#/c/73274/1 which makes it 
configurable whether a brute-force cleanup of resources is done after 
success. This got me wondering how this should really be done. As admin, 
there are some resources that can be cleaned and some that I don't know 
how to clean. For example, as admin you can list all servers and delete them 
with the --all-tenants flag. But for floating ips I don't see a way to list 
all of them even as admin through the apis. Is there a way that an admin 
can, through the api, locate all resources used by a particular tenant?
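A sketch of such an admin sweep, with a duck-typed client standing in for python-novaclient (whose all_tenants search option covers the server case); the client interface here is an assumption for illustration:

```python
# Sketch of the admin sweep described above, library-agnostic via a
# duck-typed client. Servers can be listed across tenants (novaclient
# exposes an all_tenants search option); per-tenant floating IP discovery
# is exactly the gap raised in this thread, so it is left out here.

def cleanup_tenant(compute, tenant_id):
    deleted = []
    for server in compute.list_servers(all_tenants=True):
        if server['tenant_id'] == tenant_id:
            compute.delete_server(server['id'])
            deleted.append(server['id'])
    return deleted

class FakeCompute(object):
    """Stand-in for a real compute client, for illustration only."""
    def __init__(self, servers):
        self.servers = servers
    def list_servers(self, all_tenants=False):
        return list(self.servers)
    def delete_server(self, server_id):
        self.servers = [s for s in self.servers if s['id'] != server_id]

compute = FakeCompute([{'id': 's1', 'tenant_id': 't1'},
                       {'id': 's2', 'tenant_id': 't2'}])
removed = cleanup_tenant(compute, 't1')
```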


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Subteam meeting 20.02.2014 at 1400 UTC

2014-02-19 Thread Eugene Nikanorov
Hi neutron folks and everyone interested in LBaaS,


Let's meet as usual on #openstack-meeting at 14-00 UTC.

The meeting agenda will be mostly around schema change.
Please look over ML discussion and this link:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion

Currently we are evaluating the approach #3.
I'd urge everyone to focus on the discussion of the basic object
relationships
(vips/pools/listeners/healthmons/members).
I think the best outcome of the meeting should be:
1) define attribute sets for pool, vip, listener
2) agree on compatibility mode in which we will introduce the new model
3) discuss and agree on an API limitation in favor of code simplicity.
The limitation is that we will require the user to manually wrap complex
configurations that involve
multiple vips and pools into a single instance.
That limitation can be lifted later, once we figure out how best to
deal with multiple backends serving the configuration.

I think (3) is pretty important: there is already a line of patches
depending on the model change.
A more complex change will put more pressure on developers, reviewers,
and all those who depend on the change, so a simpler approach at the cost
of an API limitation is something we may consider.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-19 Thread Rudra Rugge
JC,

We have a complete implementation which I had submitted earlier. But since the 
code was too large, the community decided to move forward in a phased approach. 
The plan is to provide close to complete compatibility in a multi-phase manner 
as mentioned in the blueprint. Phase 4 (internet gateway, VPN, NAT, etc.) was 
not added to the blueprint as it was dependent on VPNaaS, FWaaS, and NATaaS.

Comments inline:


On Feb 19, 2014, at 9:05 AM, Martin, JC jch.mar...@gmail.com wrote:

Comments in line.

JC
On Feb 18, 2014, at 5:21 PM, Rudra Rugge rru...@juniper.net wrote:

Please see inline:

On Feb 18, 2014, at 2:57 PM, Martin, JC jch.mar...@gmail.com wrote:

Maybe I should explain this one a bit.

Shared network: if a user has defined a shared network, and they used your API 
to create a VPC, the instances within the VPC will automatically get an 
interface on the shared network. I don't think that this is the expected 
behavior.


When a user launches a VM in a VPC (AWS), the user needs to specify a subnet 
(network in OpenStack terminology) for each of the interfaces. Hence the 
instances will only get interfaces on the passed subnets/networks. Whether the 
network is shared or not is not relevant for the VM launch. AWS APIs need the 
subnet/network to be passed for a VM launch in a VPC.

Thanks, this makes sense.



FIP in scope of VPC: I was not talking about the EIP for Internet access, sorry 
if it was confusing. Since you are not really describing how you create the 
external networks, it's not clear how you implement the multiple gateways 
(public and private) that AWS supports, and how you connect networks to 
routers and external networks. I.e., are the CIDRs used in the VPC NAT'ed to be 
routed in the customer datacenter, in which case there is a floating IP pool 
that is private to each private gateway and VPC (not the 'public' one)?

Gateways are built using the OpenStack Neutron router resource. Networks are 
connected to the router interfaces. For Internet access, the cloud administrator 
needs to provision a floating IP pool for the router to use. For CIDRs used in 
the VPC we need to implement a route-table extension which holds the prefix 
list. The prefix list or route table is attached to a 
subnet (AWS)/network (OpenStack). All internal (private) routing is managed by 
the OpenStack router. NAT and VPN are used as next hops to exit the VPC. In 
these cases, similar to AWS, we need to launch NAT- and VPN-capable instances as 
supported by OpenStack FWaaS and VPNaaS.

I looked in the referenced code but did not find any router attachment call. 
Did I miss something?
Also, what about these calls: CreateInternetGateway, AttachInternetGateway, 
CreateCustomerGateway, … don't you need those to define how the VPC attaches to the outside?

[Rudra] We are going with a phased approach as I noted above. The code 
submitted is only for phase 1 of the blueprint.


What about mapping the optional attributes too (e.g. InstanceTenancy)? What's 
the point of providing only partial compatibility?

[Rudra] As mentioned above, full compatibility is available, but we need to 
handle it in multiple phases.


Rudra




It would be useful for you to describe the pre-setup required to make this 
work.

The only pre-setup needed by the cloud admin is to provide a public pool of 
floating IPs.

Rudra



JC


On Feb 18, 2014, at 1:09 PM, Harshad Nakil 
hna...@contrailsystems.com wrote:

2. It does give full AWS compatibility (except for network ACLs, which were 
deferred). Shared networks and FIPs within the scope of a VPC are not something 
AWS provides, so it is not partial support.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Cleaning OpenStack resources

2014-02-19 Thread Jay Pipes
On Wed, 2014-02-19 at 16:15 -0500, David Kranz wrote:
 I was looking at https://review.openstack.org/#/c/73274/1 which makes it 
 configurable whether a brute-force cleanup of resources is done after 
 success. This got me wondering how this should really be done. As admin, 
 there are some resources that can be cleaned and some that I don't know 
 how to clean. For example, as admin you can list all servers and delete 
 them with the --all-tenants flag. But for floating IPs I don't see a way 
 to list all of them even as admin through the APIs. Is there a way that an 
 admin can, through the API, locate all resources used by a particular tenant?

Unfortunately, I don't think this is consistently possible between all
the services :(

-jay
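To make the gap concrete: the server side of such a cleanup is tractable because servers expose their owner, while resources listed without tenant information cannot be attributed. Below is a minimal sketch of the planning step, operating on plain dicts rather than live API responses; the function name and data shape are illustrative, not Tempest's actual code.

```python
def plan_cleanup(resources, tenant_id):
    """Split resource IDs into those owned by tenant_id and those that
    carry no ownership information and so cannot be attributed.

    With admin credentials, servers could be listed across tenants
    (e.g. novaclient's servers.list(search_opts={'all_tenants': 1})),
    but as noted above there is no equivalent listing for floating IPs.
    """
    owned, unattributable = [], []
    for res in resources:
        if 'tenant_id' not in res:
            # No owner recorded: mirrors the floating-IP API gap.
            unattributable.append(res['id'])
        elif res['tenant_id'] == tenant_id:
            owned.append(res['id'])
    return owned, unattributable
```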


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] async / threading for python 2 and 3

2014-02-19 Thread Angus Salkeld

On 19/02/14 10:09 +0100, Julien Danjou wrote:

On Wed, Feb 19 2014, Angus Salkeld wrote:


2) use tulip and give up python 2


+ use trollius to have Python 2 support.

 https://pypi.python.org/pypi/trollius


So I have been giving this a go.

We use pecan and wsme (like ceilometer). I wanted to use
an HTTP server library in place of wsgiref.server, so I had a
look at a couple and can't use them, as they all have yield from
all over the place (i.e. Python 3 only). The question I have
is:
how useful is trollius if we can't use other third-party libraries
written for asyncio?
https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/server.py#L171

Maybe I am missing something?

http://code.google.com/p/tulip/wiki/ThirdParty

-Angus



--
Julien Danjou
/* Free Software hacker
  http://julien.danjou.info */





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][neutron] API tests in the Neutron tree

2014-02-19 Thread Jay Pipes
On Wed, 2014-02-12 at 14:17 -0800, Maru Newby wrote:
 On Feb 12, 2014, at 1:59 PM, Sean Dague s...@dague.net wrote:
 
  On 02/12/2014 04:25 PM, Maru Newby wrote:
  
  On Feb 12, 2014, at 12:36 PM, Sean Dague s...@dague.net wrote:
  
  On 02/12/2014 01:48 PM, Maru Newby wrote:
  At the last 2 summits, I've suggested that API tests could be maintained 
  in the Neutron tree and reused by Tempest.  I've finally submitted some 
  patches that demonstrate this concept:
  
  https://review.openstack.org/#/c/72585/  (implements a unit test for the 
  lifecycle of the network resource)
  https://review.openstack.org/#/c/72588/  (runs the test with tempest 
  rest clients)
  
  My hope is to make API test maintenance a responsibility of the Neutron 
  team.  The API compatibility of each Neutron plugin has to be validated 
  by Neutron tests anyway, and if the tests are structured as I am 
  proposing, Tempest can reuse those efforts rather than duplicating them.
  
  I've added this topic to this week's agenda, and I would really 
  appreciate it interested parties would take a look at the patches in 
  question to prepare themselves to participate in the discussion.
  
  
  m.
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  Realistically, having API tests duplicated in the Tempest tree is a
  feature, not a bug.
  
   tempest/api is there for double-entry book-keeping, and it has been
  really effective at preventing accidental breakage of our APIs (which
  used to happen all the time), so I don't think putting API testing in
  neutron obviates that.
  
  Given how limited our testing resources are, might it be worth considering 
   whether 'double-entry accounting' is actually the best way to prevent 
  accidental breakage going forward?  Might reasonable alternatives exist, 
  such as clearly separating api tests from other tests in the neutron tree 
  and giving review oversight only to qualified individuals?
  
  Our direct experience is that if we don't do this, within 2 weeks some
  project will have landed API breaking changes. This approach actually
  takes a lot of review load off the core reviewers, so reverting to a
  model which puts more work back on the review team (given the current
  review load), isn't something I think we want.
 
 Just so I'm clear, is there anything I could say that would change your mind?

I'd like to discuss this at tomorrow's IRC meeting, if possible. I think
that the model Maru came up with (PoC code in the two patches
referenced above) is actually a pretty slick way of dealing with this
issue, and along with the ongoing efforts to libify Tempest, I think
we should head in this direction if possible.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting Feb 20 1800 UTC

2014-02-19 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in #openstack-meeting-alt
channel.

Agenda:
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_February.2C_20

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meetingiso=20140220T18

The main topics are project renaming and the Icehouse 3 dev milestone.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][libvirt] Is there anything blocking the libvirt driver from implementing the host_maintenance_mode API?

2014-02-19 Thread Matt Riedemann
The os-hosts OS API extension [1] showed up before I started working on the 
project, and I see that only the VMware and XenAPI drivers implement it. I 
was wondering why the libvirt driver doesn't - is it that no one wants 
it, or is there some technical reason behind not implementing it for 
that driver?


[1] 
http://docs.openstack.org/api/openstack-compute/2/content/PUT_os-hosts-v2_updateHost_v2__tenant_id__os-hosts__host_name__ext-os-hosts.html


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Multiple services per floating IP

2014-02-19 Thread Jay Pipes
On Wed, 2014-02-12 at 16:11 -0800, Stephen Balukoff wrote:

   * This seems less ambiguous from a terminology perspective. The
 name 'VIP' in other contexts means 'virtual IP address', which
 is the same thing as a floating IP, which in other contexts is
 usually considered to be unique to a subset of devices that
 share the IP (or pass it between them). It doesn't necessarily
 have anything to do with layers 4 and above in the OSI model.
 However, if in the context of Neutron LBaaS, VIP has a
 protocol-port attribute, this means it's no longer just a
 floating IP:  It's a floating IP + TCP port (plus other
 attributes that make sense for a TCP service). This feels like
 Neutron LBaaS is trying to redefine what a virtual IP is,
  and is in any case going to be confusing for newcomers
 expecting it to be one thing when it's actually another.

This is an excellent point, Stephen.

-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] Nominate Andrew Lazarew for savanna-core

2014-02-19 Thread Sergey Lukjanov
Hey folks,

I'd like to nominate Andrew Lazarew (alazarev) for savanna-core.

He is among the top reviewers of the Savanna subprojects. Andrew has been
working on Savanna full time since September 2013 and is very familiar with the
current codebase. His code contributions and reviews have demonstrated a good
knowledge of Savanna internals. Andrew has valuable knowledge of both the
core and EDP parts, the IDH plugin, and Hadoop itself. He works on both bug
fixes and new feature implementation.

Some links:

http://stackalytics.com/report/reviews/savanna-group/30
http://stackalytics.com/report/reviews/savanna-group/90
http://stackalytics.com/report/reviews/savanna-group/180
https://review.openstack.org/#/q/owner:alazarev+savanna+AND+-status:abandoned,n,z
https://launchpad.net/~alazarev

Savanna cores, please, reply with +1/0/-1 votes.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][all] config sample tools on os x

2014-02-19 Thread Sergey Lukjanov
Hey stackers,

tools/config/generate_sample.sh isn't working on OS X due to the getopt
usage. Any recipes / proposals to fix it? I have a workaround at least.

TL;DR

So, as I said, tools/config/generate_sample.sh isn't working on OS X.
Specifically, it fails to parse command-line arguments without raising any
errors; it just ignores them. The reason for this behavior is the significant
difference between GNU getopt and the BSD one (used on OS X). It could probably
be fixed easily, but I don't know both variants well enough.

The main issue is that many projects are
using tools/config/check_uptodate.sh in the pep8 tox env to ensure that their
config sample is always up to date. So, the tox -e pep8 command always fails
for such projects.

Workaround:

* install GNU getopt by using homebrew (brew install gnu-getopt) or
macports (port install getopts);
* add it to the PATH before the actual getopt before running tox;
* if you'd like to make it default just add it to your bashrc/zshrc/etcrc,
for example, for brew you should add: export PATH=$(brew --prefix
gnu-getopt)/bin:$PATH
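A more portable fix than shimming getopt would be to move the option parsing into Python itself, since argparse behaves identically on GNU and BSD platforms. A rough sketch follows; the option names here are illustrative, not necessarily the ones generate_sample.sh actually accepts.

```python
import argparse


def parse_args(argv):
    # argparse supports long options everywhere, unlike BSD getopt.
    parser = argparse.ArgumentParser(
        description='Generate a sample configuration file')
    parser.add_argument('-b', '--base-dir', default='.',
                        help='project base directory')
    parser.add_argument('-p', '--package-name',
                        help='top-level package to scan for options')
    parser.add_argument('-o', '--output-dir',
                        help='where to write the sample config')
    return parser.parse_args(argv)
```

Both `-p nova` and `--package-name nova` would then work regardless of which getopt the host OS ships.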

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-19 Thread Youcef Laribi
Hi guys,

I have been catching up on this interesting thread around the object model, so 
sorry in advance for jumping in late in this debate, and if I missed some of the 
subtleties of the points being made so far.

I tend to agree with Sam that the original intention of the current object 
model was never tied to a physical deployment. We seem to be confusing the 
tenant-facing object model which is completely logical (albeit with some 
properties or qualities that a tenant can express) from the 
deployment/implementation aspects of such a logical model (things like 
cluster/HA, one vs. multiple backends, virtual appliance vs. OS process, etc). 
We discussed in the past, the need for an Admin API (separate from the tenant 
API) where a cloud administrator (as opposed to a tenant) could manage the 
deployment aspects, and could construct different offerings that can be exposed 
to a tenant, but in the absence of such an admin API (which would necessarily 
be very technology-specific), this responsibility is currently shouldered by 
the drivers.

IMO a tenant should only care about whether VIPs/Pools are grouped together to 
the extent that the provider allows the tenant to express such a preference. 
Some providers will allow their tenants to express such a preference (e.g. 
because it might impact cost), and others might not as it wouldn't make sense 
in their implementation.

Also the mapping between pool and backend is not necessarily 1:1, and is not 
necessarily at the creation time of pool, as this is purely a driver 
implementation decision (I know that currently implementations are like this, 
but another driver can choose a different approach). A driver could for example 
delay mapping a pool to a backend, until a full LB configuration is completed 
(when pool has members, and a VIP is attached to the pool). A driver can also 
move these resources around between backends, if it finds out, it put them in a 
non-optimal backend initially. As long as the logical model is realized and 
remains consistent from the tenant point of view, implementations should be 
free to achieve that goal in any way they see fit.

Youcef

From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, February 19, 2014 8:23 AM
To: Samuel Bercovici
Cc: OpenStack Development Mailing List; Mark McClain; Salvatore Orlando; 
sbaluk...@bluebox.net; Youcef Laribi; Avishay Balderman
Subject: Re: [Neutron][LBaaS] Object Model discussion

Hi Sam,

My comments inline:

On Wed, Feb 19, 2014 at 4:57 PM, Samuel Bercovici 
samu...@radware.com wrote:
Hi,

I think we mix different aspects of operations and try to solve a
non-problem.
Not really: the advanced features we're trying to introduce are incompatible in 
both object model and API.

From APIs/Operations we are mixing the following models:

1.   Logical model (which as far as I understand is the topic of this 
discussion) - tenants define what they need logically vip--default_pool, l7 
association, ssl, etc.
That's correct. Tenant may or may not care about how it is grouped on the 
backend. We need to support both cases.

2.   Physical model - operator / vendor install and specify how backend 
gets implemented.

3.   Deploying 1 on 2 - this is currently the driver's responsibility. We 
can consider making it better but this should not impact the logical model.
I think grouping vips and pools is important part of logical model, even if 
some users may not care about it.


I think this is not a problem.
In the logical model, a pool which is part of an L7 policy is a logical object 
which could be placed at any backend and any existing vip<->pool pair, and 
accordingly configure the backend that those vip<->pool pairs are deployed on.
 That's not how it currently works - that's why we're trying to address it. 
Having pool shareable between backends at least requires to move 'instance' 
role from the pool to some other entity, and also that changes a number of API 
aspects.

If the same pool that was part of an L7 association is also connected to a 
vip as a default pool, then by all means this new vip<->pool pair can be 
instantiated into some backend.
The proposal to not allow this (ex: only allow pools that are connected to the 
same lb-instance to be used for l7 association), brings the physical model into 
the logical model.
So proposal tries to address 2 issues:
1) in many cases it is desirable to know about grouping of logical objects on 
the backend
2) currently physical model implied when working with pools, because pool is 
the root and corresponds to backend with 1:1 mapping


I think that the current logical model is fine, with the exception that the 
two-way reference between vip and pool (vip<->pool) should be modified to have 
only the vip pointing to a pool (vip-->pool), which allows reusing the pool 
with multiple vips.
Reusing pools by vips is not as simple as it seems.
If those vips belong to 1 backend (that by itself requires tenant to know 

Re: [openstack-dev] [oslo][all] config sample tools on os x

2014-02-19 Thread Doug Hellmann
On Wed, Feb 19, 2014 at 6:17 PM, Sergey Lukjanov slukja...@mirantis.comwrote:

 Hey stackers,

 tools/config/generate_sample.sh isn't working on OS X due to the getopt
 usage. Any recipes / proposals to fix it? I have a workaround at least.


Your workaround looks fine. I wouldn't reject a patch, but since we don't
really use OS X as a platform I don't know if we would spend any time
changing the script to work there.

Doug




 TL;DR

 So, as I said tools/config/generate_sample.sh isn't working on OS X.
 Specifically it just couldn't parse command line arguments w/o any errors,
 just ignoring them. The reason of such behavior is significant difference
 between GNU getopt and BSD one (used in OS X). Probably, it could be easily
 fixed, but I don't know both of them.

 The main issue is that many projects are
 using tools/config/check_uptodate.sh in pep8 tox env to ensure that their
 config sample is always uptodate. So, tox -e pep8 command always failing
 for such projects.

 Workaround:

 * install GNU getopt by using homebrew (brew install gnu-getopt) or
 macports (port install getopts);
 * add it to the PATH before the actual getopt before running tox;
 * if you'd like to make it default just add it to your bashrc/zshrc/etcrc,
 for example, for brew you should add: export PATH=$(brew --prefix
 gnu-getopt)/bin:$PATH

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] client 0.5.0 release

2014-02-19 Thread Sergey Lukjanov
Hi folks,

I'd like to make a 0.5.0 release of savanna client soon, please, share your
thoughts about stuff that should be included to it.

Currently we have the following major changes/fixes:

* mostly implemented CLI;
* unified entry point for python bindings like other OpenStack clients;
* auth improvements;
* base resource class improvements.

Full diff:
https://github.com/openstack/python-savannaclient/compare/0.4.1...master

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] client 0.5.0 release

2014-02-19 Thread Sergey Lukjanov
Additionally, it contains support for the latest EDP features.


On Thu, Feb 20, 2014 at 3:52 AM, Sergey Lukjanov slukja...@mirantis.comwrote:

 Hi folks,

 I'd like to make a 0.5.0 release of savanna client soon, please, share
 your thoughts about stuff that should be included to it.

 Currently we have the following major changes/fixes:

 * mostly implemented CLI;
 * unified entry point for python bindings like other OpenStack clients;
 * auth improvements;
 * base resource class improvements.

 Full diff:
 https://github.com/openstack/python-savannaclient/compare/0.4.1...master

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] config sample tools on os x

2014-02-19 Thread Sergey Lukjanov
Agreed, I'd just like to share/receive thoughts on it - probably there is a
better workaround :)


On Thu, Feb 20, 2014 at 3:48 AM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:




 On Wed, Feb 19, 2014 at 6:17 PM, Sergey Lukjanov 
 slukja...@mirantis.comwrote:

 Hey stackers,

 tools/config/generate_sample.sh isn't working on OS X due to the getopt
 usage. Any recipes / proposals to fix it? I have a workaround at least.


 Your workaround looks fine. I wouldn't reject a patch, but since we don't
 really use OS X as a platform I don't know if we would spend any time
 changing the script to work there.

 Doug




 TL;DR

 So, as I said tools/config/generate_sample.sh isn't working on OS X.
 Specifically it just couldn't parse command line arguments w/o any errors,
 just ignoring them. The reason of such behavior is significant difference
 between GNU getopt and BSD one (used in OS X). Probably, it could be easily
 fixed, but I don't know both of them.

 The main issue is that many projects are
 using tools/config/check_uptodate.sh in pep8 tox env to ensure that their
 config sample is always uptodate. So, tox -e pep8 command always failing
 for such projects.

 Workaround:

 * install GNU getopt by using homebrew (brew install gnu-getopt) or
 macports (port install getopts);
 * add it to the PATH before the actual getopt before running tox;
 * if you'd like to make it default just add it to your
 bashrc/zshrc/etcrc, for example, for brew you should add: export
 PATH=$(brew --prefix gnu-getopt)/bin:$PATH

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-announce] python-heatclient 0.2.7 released

2014-02-19 Thread Chmouel Boudjnah
On Wed, Feb 19, 2014 at 1:50 AM, Steve Baker sba...@redhat.com wrote:
 Changes in this release:
 https://launchpad.net/python-heatclient/+milestone/v0.2.7

It is probably worth mentioning [1] that python-heatclient is now using the
requests library instead of its homegrown httpclient library, which should
make things more robust and secure(tm).

Chmouel.

[1] I am not sure why it's not listed in that link
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] config sample tools on os x

2014-02-19 Thread Chmouel Boudjnah
On Thu, Feb 20, 2014 at 12:17 AM, Sergey Lukjanov slukja...@mirantis.comwrote:

 tools/config/generate_sample.sh isn't working on OS X due to the getopt
 usage. Any recipes / proposals to fix it? I have a workaround at least.



Thanks for the workaround. I had a look at this while reporting bug
https://bugs.launchpad.net/heat/+bug/1261370; the easy way would be to use
the bash builtin getopts, but that would mean we would have to drop long
options.

Is dropping long options an option (pun not sure if it's intended)?

Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Some questions about Rest API and log messages translation

2014-02-19 Thread Jay S Bryant
Responses are marked inline below.

Jay



From:   Doug Hellmann doug.hellm...@dreamhost.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   02/19/2014 09:22 AM
Subject:Re: [openstack-dev] [oslo] Some questions about Rest API 
and log messages translation






On Tue, Feb 18, 2014 at 11:56 PM, Peng Wu peng.e...@gmail.com wrote:
Hi,

  Currently I am analyzing the blueprint for translated message id
generation.[1]
  Recently I found that there is an implementation to generate both
English and translated log messages.
  I think if both English and translated log messages are provided, then we
don't need to generate a message id for log messages.

  My question is about REST API message translation. If we return
both the English and the translated REST API message, then we don't need to
generate a message id for REST API messages, either.

I don't think we plan to return both messages. My understanding was we 
would return messages in the locale specified by the headers sent from the 
client (assuming those translations are available).
  You are correct Doug.  The REST API responses will be in the default 
locale unless a different 'Accept-Language: ' is set.  Then the translated 
response will be returned.

  And currently the message id generation blueprint is only for log messages
and translated REST API messages. If we provide both English and
translated messages, then we don't need to generate any message id for
messages, because we just need to read the English log and REST API
messages.

There may still be utility in documenting messages with a message id. For 
example, a message id wouldn't change even if the wording of a message 
changed slightly (to add more context information, for example).

Doug

 

  Feel free to comment it.

Thanks,
  Peng Wu

Refer URL:
[1] https://blueprints.launchpad.net/oslo/+spec/log-messages-id
[2]
https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
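For the Accept-Language handling Doug and Jay describe, the server has to pick the best available translation from the client's header. In a real service this is handled by WSGI/framework code rather than hand-rolled logic, but a simplified sketch of the selection step (ignoring wildcards and malformed quality values) looks like this:

```python
def best_locale(accept_language, available, default='en'):
    """Pick the best available locale from an Accept-Language header.

    Simplified sketch: ignores '*' wildcards and assumes well-formed
    ';q=' quality values.
    """
    candidates = []
    for part in accept_language.split(','):
        pieces = part.strip().split(';q=')
        lang = pieces[0].strip()
        quality = float(pieces[1]) if len(pieces) > 1 else 1.0
        candidates.append((quality, lang))
    # Try languages in descending preference order.
    for _, lang in sorted(candidates, reverse=True):
        if lang in available:
            return lang
        base = lang.split('-')[0]  # fall back from de-DE to de
        if base in available:
            return base
    return default
```

The translated catalog for the chosen locale would then be loaded (e.g. via gettext), falling back to the default locale exactly as described above.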




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] config sample tools on os x

2014-02-19 Thread Sergey Lukjanov
Another option is to conditionally use different getopt args like:

if OSX: use getopt w/o long opts
else: use powerful getopt w/ long opts

It adds some inconsistency to the script but makes it useful for OS X users.

Anyway, it's still possible to just run it w/o args to ensure the config is up to date.


On Thu, Feb 20, 2014 at 4:01 AM, Chmouel Boudjnah chmo...@enovance.comwrote:


 On Thu, Feb 20, 2014 at 12:17 AM, Sergey Lukjanov 
 slukja...@mirantis.comwrote:

 tools/config/generate_sample.sh isn't working on OS X due to the getopt
 usage. Any recipes / proposals to fix it? I have a workaround at least.



 thanks for the workaround, I had a look on this while reporting  bug
 https://bugs.launchpad.net/heat/+bug/1261370, the easy way would be to
 use bash builtin getopt but that would mean we would have to drop long
 options.

 Is dropping long options an (pun not sure if it's intended) option?

 Chmouel.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] when icehouse will be frozen

2014-02-19 Thread 马煜
Who knows when the Icehouse version will be frozen?

My blueprint for an ML2 driver has been approved and the code is under review,
but I am having some trouble deploying the third-party CI on which the Tempest
tests run.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Monitoring IP Availability

2014-02-19 Thread Vilobh Meshram
Hello OpenStack Dev,

We wanted to get your input on how different companies/organizations using 
OpenStack are monitoring IP availability, as this can be useful to track the 
used IPs and the total number of IPs.

Please let us know your thoughts.

Thanks,
Vilobh
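One way to compute such a metric is per subnet: the total pool size follows from the CIDR, and the used count from the allocated addresses. In a Neutron deployment the allocated addresses would come from the ports listing; the sketch below is illustrative glue operating on plain IP strings.

```python
import ipaddress


def subnet_utilization(cidr, allocated_ips):
    """Return (used, total) usable-address counts for one subnet."""
    net = ipaddress.ip_network(cidr)
    # Exclude the network and broadcast addresses for IPv4 subnets.
    total = net.num_addresses - 2 if net.version == 4 else net.num_addresses
    used = sum(1 for ip in allocated_ips
               if ipaddress.ip_address(ip) in net)
    return used, total
```

Run against the fixed-IP list of every port in a subnet, this yields the used/total ratio that a monitoring system could alarm on.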

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] We should wait approving code due to bug 1280035

2014-02-19 Thread Nachi Ueno
Hi Neutron core's

We should hold off on approving code due to bug 1280035.
https://bugs.launchpad.net/neutron/+bug/1280035

Unit tests fail at a very high rate in the gate and block the gate queue.

Salvatore is working on the issue.
As a first step, we skipped the failing unit test.
https://review.openstack.org/#/c/73832/

However, unfortunately, we got hit by
another issue, and the situation got worse.

so we are going to revert the change
https://review.openstack.org/#/c/74882/

This revert fix will be merged in 4 hours (hopefully).
However, that only takes the situation back to bug 1280035.

Best
Nachi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift]stable/havana Jenkins failed

2014-02-19 Thread Dong Liu
Hi, Anyone aware of the following:
2014-02-18 11:31:13.124 | + swift stat
2014-02-18 11:31:13.186 | Traceback (most recent call last):
2014-02-18 11:31:13.186 |   File /usr/local/bin/swift, line 35, in
module
2014-02-18 11:31:13.186 | from swiftclient import Connection,
HTTPException
2014-02-18 11:31:13.187 | ImportError: cannot import name HTTPException
2014-02-18 11:31:13.195 | + die 48 'Failure geting status'
2014-02-18 11:31:13.195 | + local exitcode=1
2014-02-18 11:31:13.195 | + set +o xtrace
2014-02-18 11:31:13.231 | [ERROR]
/opt/stack/old/devstack/exercises/swift.sh:48 Failure geting status

I notice that on 2014-02-14 we changed "from swiftclient import Connection,
HTTPException" to "from swiftclient import Connection, RequestException";
I don't know whether that is related.

I have reported a bug for this:
https://bugs.launchpad.net/swift/+bug/1281886
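A backwards-compatible import guard would let the same script run against either python-swiftclient release; the sketch below is illustrative only (the alias name SwiftClientError is mine, and the final fallback exists just to keep the example self-contained when swiftclient is not installed):

```python
# Illustrative pattern: prefer the newer exception name and fall
# back to the older one, so the code tolerates both releases of
# python-swiftclient. The plain-Exception fallback only covers the
# case where swiftclient itself is absent.
try:
    from swiftclient import RequestException as SwiftClientError  # newer releases
except ImportError:
    try:
        from swiftclient import HTTPException as SwiftClientError  # older releases
    except ImportError:
        SwiftClientError = Exception  # swiftclient not installed at all

print(issubclass(SwiftClientError, BaseException))  # → True
```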




Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator sync workflow

2014-02-19 Thread Matt Riedemann



On 2/19/2014 7:13 PM, Joe Gordon wrote:

Hi All,

As many of you know, most oslo-incubator code is wildly out of sync.
Assuming we consider it a good idea to sync up oslo-incubator code
before cutting Icehouse, we have a problem.

Today oslo-incubator code is synced in an ad-hoc manner, resulting in
duplicated effort and wildly out-of-date code. Part of the challenge
today is backwards-incompatible changes and new oslo bugs. I expect
that once we get a single project to have an up to date oslo-incubator
copy it will make syncing a second project significantly easier. So
because I (hopefully) have some karma built up in nova, I would like
to volunteer nova to be the guinea pig.


To fix this I would like to propose starting an oslo-incubator/nova
sync team. They would be responsible for getting nova's oslo code up
to date.  I expect this work to involve:
* Reviewing lots of oslo sync patches
* Tracking the current sync patches
* Syncing over the low hanging fruit, modules that work without changing nova.
* Reporting bugs to oslo team
* Working with oslo team to figure out how to deal with backwards
incompatible changes
   * Update nova code or make oslo module backwards compatible
* Track all this
* Create a roadmap for other projects to follow (re: documentation)

I am looking for volunteers to help with this effort, any takers?


best,
Joe Gordon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Well I'll get the ball rolling...

In the past when this has come up there has always been a debate over 
whether we should just sync continuously because we should always be up 
to date, or whether that is dangerous and we should only sync when there 
is a need (which is what the review guidelines say now [1]).  There are 
pros and cons:


pros:

- we get bug fixes that we didn't know existed
- it should be less painful to sync if we do it more often

cons:

- it's more review overhead and some crazy guy thinks we need a special 
team dedicated to reviewing those changes :)
- there are some changes in o-i that would break nova; I'm specifically 
thinking of the oslo RequestContext which has domain support now (or 
some other keystone thingy) and nova has its own RequestContext - so if 
we did sync that from o-i it would change nova's logging context and 
break on us since we didn't use oslo context.


For that last con, I'd argue that we should move to the oslo 
RequestContext; I'm not sure why we aren't.  Would that module then not 
fall under low-hanging-fruit?


I think the DB API modules have been a concern for auto-syncing before 
too but I can't remember why now...something about possibly changing the 
behavior of how the nova migrations would work?  But if they are already 
using the common code, I don't see the issue.


This is kind of an aside, but I'm kind of confused now about how the 
syncs work with things that fall under oslo.rootwrap or oslo.messaging, 
like this patch [2].  It doesn't completely match the o-i patch, i.e. 
it's not syncing over openstack/common/rootwrap/wrapper.py, and I'm 
assuming because that's in oslo.rootwrap now?  But then why does the 
code still exist in oslo-incubator?


I think the keystone guys are running into a similar issue where they 
want to remove a bunch of now-dead messaging code from keystone but 
can't because there are still some things in oslo-incubator using 
oslo.messaging code, or something weird like that. So maybe those 
modules are considered out of scope for this effort until the o-r/o-m 
code is completely out of o-i?


Finally, just like we'd like to have cores for each virt driver in nova 
and the neutron API in nova, I think this kind of thing, at least 
initially, would benefit from having some oslo cores involved in a team 
that are also familiar to a degree with nova, e.g. bnemec or dims.


[1] https://wiki.openstack.org/wiki/ReviewChecklist#Oslo_Syncing_Checklist
[2] https://review.openstack.org/#/c/73340/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator sync workflow

2014-02-19 Thread Doug Hellmann
On Wed, Feb 19, 2014 at 8:13 PM, Joe Gordon joe.gord...@gmail.com wrote:

 Hi All,

 As many of you know most oslo-incubator code is wildly out of sync.
 Assuming we consider it a good idea to sync up oslo-incubator code
 before cutting Icehouse, then we have a problem.

 Today oslo-incubator code is synced in an ad-hoc manner, resulting in
 duplicated efforts and wildly out of date code. Part of the challenges
 today are backwards incompatible changes and new oslo bugs. I expect
 that once we get a single project to have an up to date oslo-incubator
 copy it will make syncing a second project significantly easier. So
 because I (hopefully) have some karma built up in nova, I would like
 to volunteer nova to be the guinea pig.


Thank you for volunteering to spear-head this, Joe. 


 To fix this I would like to propose starting an oslo-incubator/nova
 sync team. They would be responsible for getting nova's oslo code up
 to date.  I expect this work to involve:
 * Reviewing lots of oslo sync patches
 * Tracking the current sync patches
 * Syncing over the low hanging fruit, modules that work without changing
 nova.
 * Reporting bugs to oslo team
 * Working with oslo team to figure out how to deal with backwards
 incompatible changes
   * Update nova code or make oslo module backwards compatible
 * Track all this
 * Create a roadmap for other projects to follow (re: documentation)

 I am looking for volunteers to help with this effort, any takers?


I will help, especially with reviews and tracking.

We are going to want someone from the team working on the db modules to
participate as well, since we know that's one area where the API has
diverged some (although we did take backwards compatibility into account).
Victor, can you help find us a volunteer?

Doug





 best,
 Joe Gordon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron] We should wait approving code due to bug 1280035

2014-02-19 Thread Nachi Ueno
Hi folks,

Good news: 74882 is merged.

I'm still not sure of the current unit-test failure rate with bug 1280035,
so I think we should wait and see the failure rate.
Please check the current gate status when you approve code.

Best
Nachi


2014-02-19 16:59 GMT-08:00 Nachi Ueno na...@ntti3.com:
 Hi Neutron cores,

 We should hold off on approving code due to bug 1280035:
 https://bugs.launchpad.net/neutron/+bug/1280035

 Unit tests are failing at a very high rate in the gate and blocking the gate queue.

 Salvatore is working on the issue.
 At first, we skipped the failing unit test:
 https://review.openstack.org/#/c/73832/

 Unfortunately, we then hit another issue, and the situation got worse,

 so we are going to revert the change:
 https://review.openstack.org/#/c/74882/

 This revert fix will be merged in 4 hours (hopefully).
 However, that only takes the situation back to bug 1280035.

 Best
 Nachi



Re: [openstack-dev] [oslo][all] config sample tools on os x

2014-02-19 Thread Doug Hellmann
On Wed, Feb 19, 2014 at 7:01 PM, Chmouel Boudjnah chmo...@enovance.comwrote:


 On Thu, Feb 20, 2014 at 12:17 AM, Sergey Lukjanov 
 slukja...@mirantis.comwrote:

 tools/config/generate_sample.sh isn't working on OS X due to the getopt
 usage. Any recipes / proposals to fix it? I have a workaround at least.



 thanks for the workaround. I had a look at this while reporting bug
 https://bugs.launchpad.net/heat/+bug/1261370; the easy way would be to
 use the bash builtin getopt, but that would mean we would have to drop long
 options.

 Is dropping long options an option (pun not sure if it's intended)?


Is there anything in that script that we couldn't do with a python program
directly? I know it sources some files to set environment variables; is
there anything else?

Doug
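To Doug's point, the option parsing itself ports readily; a hedged argparse sketch (the flag names here are assumptions for illustration, not the real generate_sample.sh options) needs no GNU getopt and keeps long options on OS X:

```python
# Hedged sketch: argparse supports long options portably, with no
# dependency on GNU getopt (which the BSD userland on OS X lacks).
# Option names are assumed, not the real generate_sample.sh flags.
import argparse

def parse_args(argv):
    parser = argparse.ArgumentParser(description="Generate a sample config file")
    parser.add_argument("--base-dir", "-b", default=".",
                        help="project base directory to scan for options")
    parser.add_argument("--output-dir", "-o", default="etc",
                        help="directory to write the sample config into")
    return parser.parse_args(argv)

args = parse_args(["--base-dir", "/opt/stack/nova"])
print(args.base_dir, args.output_dir)  # → /opt/stack/nova etc
```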





 Chmouel.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator sync workflow

2014-02-19 Thread Doug Hellmann
On Wed, Feb 19, 2014 at 9:20 PM, Matt Riedemann
mrie...@linux.vnet.ibm.comwrote:



 On 2/19/2014 7:13 PM, Joe Gordon wrote:

 Hi All,

 As many of you know most oslo-incubator code is wildly out of sync.
 Assuming we consider it a good idea to sync up oslo-incubator code
 before cutting Icehouse, then we have a problem.

 Today oslo-incubator code is synced in an ad-hoc manner, resulting in
 duplicated efforts and wildly out of date code. Part of the challenges
 today are backwards incompatible changes and new oslo bugs. I expect
 that once we get a single project to have an up to date oslo-incubator
 copy it will make syncing a second project significantly easier. So
 because I (hopefully) have some karma built up in nova, I would like
 to volunteer nova to be the guinea pig.


 To fix this I would like to propose starting an oslo-incubator/nova
 sync team. They would be responsible for getting nova's oslo code up
 to date.  I expect this work to involve:
 * Reviewing lots of oslo sync patches
 * Tracking the current sync patches
 * Syncing over the low hanging fruit, modules that work without changing
 nova.
 * Reporting bugs to oslo team
 * Working with oslo team to figure out how to deal with backwards
 incompatible changes
* Update nova code or make oslo module backwards compatible
 * Track all this
 * Create a roadmap for other projects to follow (re: documentation)

 I am looking for volunteers to help with this effort, any takers?


 best,
 Joe Gordon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Well I'll get the ball rolling...

 In the past when this has come up there has always been a debate over whether we
 should just sync continuously because we should always be up to date, or whether that is
 dangerous and we should only sync when there is a need (which is what the
 review guidelines say now [1]).  There are pros and cons:

 pros:

 - we get bug fixes that we didn't know existed
 - it should be less painful to sync if we do it more often

 cons:

 - it's more review overhead and some crazy guy thinks we need a special
 team dedicated to reviewing those changes :)
 - there are some changes in o-i that would break nova; I'm specifically
 thinking of the oslo RequestContext which has domain support now (or some
 other keystone thingy) and nova has its own RequestContext - so if we did
 sync that from o-i it would change nova's logging context and break on us
 since we didn't use oslo context.


Another con is that if we do find a critical bug in an incubator module,
and a project that uses that module is far out of date, applying the fix
may be more difficult. (This is also another motivation for moving code
out of the incubator entirely, but as Joe pointed out earlier today, that's
not really a short-term solution.)



 For that last con, I'd argue that we should move to the oslo
 RequestContext, I'm not sure why we aren't.  Would that module then not
 fall under low-hanging-fruit?

 I think the DB API modules have been a concern for auto-syncing before too
 but I can't remember why now...something about possibly changing the
 behavior of how the nova migrations would work?  But if they are already
 using the common code, I don't see the issue.


There has been some recent work on the db code to make it more suitable for
use in some of the other projects that don't have a single global session
pool. There's a compatibility shim, which should make the update painless,
but it's not just a simple file copy.



 This is kind of an aside, but I'm kind of confused now about how the syncs
 work with things that fall under oslo.rootwrap or oslo.messaging, like this
 patch [2].  It doesn't completely match the o-i patch, i.e. it's not
 syncing over openstack/common/rootwrap/wrapper.py, and I'm assuming
 because that's in oslo.rootwrap now?  But then why does the code still
 exist in oslo-incubator?


After a module graduates to a library, we treat the incubator copy as the
stable branch until all of the integrated projects that consume the
module have migrated to the new library. That way if bugs are found, the
fixes can be applied to a project without having to also migrate to the
library.

So, the best action is to port to the library. As a fall back, at least
update to the most current version from the incubator now. I believe all
projects are already updated to use oslo.rootwrap.

I think the keystone guys are running into a similar issue where they want
 to remove a bunch of now-dead messaging code from keystone but can't
 because there are still some things in oslo-incubator using oslo.messaging
 code, or something weird like that. So maybe those modules are considered
 out of scope for this effort until the o-r/o-m code is completely out of
 o-i?


There's a notifier middleware that uses the RPC code from the incubator
still. I believe work on moving that module into 

Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-19 Thread Christopher Yeoh
On Wed, 19 Feb 2014 12:36:46 -0500
Russell Bryant rbry...@redhat.com wrote:

 Greetings,
 
 The v3 API effort has been going for a few release cycles now.  As we
 approach the Icehouse release, we are faced with the following
 question: Is it time to mark v3 stable?
 
 My opinion is that I think we need to leave v3 marked as experimental
 for Icehouse.
 

Although I'm very eager to get the V3 API released, I do agree with you.
As you have said we will be living with both the V2 and V3 APIs for a
very long time. And at this point there would be simply too many last
minute changes to the V3 API for us to be confident that we have it
right enough to release as a stable API.

 We really don't want to be in a situation where we're having to force
 any sort of migration to a new API.  The new API should be compelling
 enough that everyone *wants* to migrate to it.  If that's not the
 case, we haven't done our job.

+1

 Let's all take some time to reflect on what has happened with v3 so
 far and what it means for how we should move forward.  We can regroup
 for Juno.
 
 Finally, I would like to thank everyone who has helped with the effort
 so far.  Many hours have been put in to code and reviews for this.  I
 would like to specifically thank Christopher Yeoh for his work here.
 Chris has done an *enormous* amount of work on this and deserves
 credit for it.  He has taken on a task much bigger than anyone
 anticipated. Thanks, Chris!

Thanks Russell, that's much appreciated. I'm also very thankful to
everyone who has worked on the V3 API either through patches and/or
reviews, especially Alex Xu and Ivan Zhu who have done a lot of work on
it in Havana and Icehouse. 

Chris.



Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator sync workflow

2014-02-19 Thread Joe Gordon
As a side to this, as an exercise I tried an oslo sync in cinder to see
what kind of issues would arise and here are my findings so far:
https://review.openstack.org/#/c/74786/

On Wed, Feb 19, 2014 at 6:20 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On 2/19/2014 7:13 PM, Joe Gordon wrote:

 Hi All,

 As many of you know most oslo-incubator code is wildly out of sync.
 Assuming we consider it a good idea to sync up oslo-incubator code
 before cutting Icehouse, then we have a problem.

 Today oslo-incubator code is synced in an ad-hoc manner, resulting in
 duplicated efforts and wildly out of date code. Part of the challenges
 today are backwards incompatible changes and new oslo bugs. I expect
 that once we get a single project to have an up to date oslo-incubator
 copy it will make syncing a second project significantly easier. So
 because I (hopefully) have some karma built up in nova, I would like
 to volunteer nova to be the guinea pig.


 To fix this I would like to propose starting an oslo-incubator/nova
 sync team. They would be responsible for getting nova's oslo code up
 to date.  I expect this work to involve:
 * Reviewing lots of oslo sync patches
 * Tracking the current sync patches
 * Syncing over the low hanging fruit, modules that work without changing
 nova.
 * Reporting bugs to oslo team
 * Working with oslo team to figure out how to deal with backwards
 incompatible changes
* Update nova code or make oslo module backwards compatible
 * Track all this
 * Create a roadmap for other projects to follow (re: documentation)

 I am looking for volunteers to help with this effort, any takers?


 best,
 Joe Gordon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Well I'll get the ball rolling...

 In the past when this has come up there has always been a debate over whether we
 should just sync continuously because we should always be up to date, or whether that is
 dangerous and we should only sync when there is a need (which is what the
 review guidelines say now [1]).  There are pros and cons:

 pros:

 - we get bug fixes that we didn't know existed
 - it should be less painful to sync if we do it more often

 cons:

 - it's more review overhead and some crazy guy thinks we need a special team
 dedicated to reviewing those changes :)
 - there are some changes in o-i that would break nova; I'm specifically
 thinking of the oslo RequestContext which has domain support now (or some
 other keystone thingy) and nova has its own RequestContext - so if we did
 sync that from o-i it would change nova's logging context and break on us
 since we didn't use oslo context.

 For that last con, I'd argue that we should move to the oslo RequestContext,
 I'm not sure why we aren't.  Would that module then not fall under
 low-hanging-fruit?

I am classifying low hanging fruit as anything that doesn't require
any nova changes to work.


 I think the DB API modules have been a concern for auto-syncing before too
 but I can't remember why now...something about possibly changing the
 behavior of how the nova migrations would work?  But if they are already
 using the common code, I don't see the issue.

AFAIK there is already a team working on db api syncing, so I was
thinking of letting them deal with it.


 This is kind of an aside, but I'm kind of confused now about how the syncs
 work with things that fall under oslo.rootwrap or oslo.messaging, like this
 patch [2].  It doesn't completely match the o-i patch, i.e. it's not syncing
 over openstack/common/rootwrap/wrapper.py, and I'm assuming because that's
 in oslo.rootwrap now?  But then why does the code still exist in
 oslo-incubator?

 I think the keystone guys are running into a similar issue where they want
 to remove a bunch of now-dead messaging code from keystone but can't because
 there are still some things in oslo-incubator using oslo.messaging code, or
 something weird like that. So maybe those modules are considered out of
 scope for this effort until the o-r/o-m code is completely out of o-i?

 Finally, just like we'd like to have cores for each virt driver in nova and
 the neutron API in nova, I think this kind of thing, at least initially,
 would benefit from having some oslo cores involved in a team that are also
 familiar to a degree with nova, e.g. bnemec or dims.

 [1] https://wiki.openstack.org/wiki/ReviewChecklist#Oslo_Syncing_Checklist
 [2] https://review.openstack.org/#/c/73340/

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator sync workflow

2014-02-19 Thread Doug Hellmann
On Wed, Feb 19, 2014 at 9:52 PM, Joe Gordon joe.gord...@gmail.com wrote:

 As a side to this, as an exercise I tried an oslo sync in cinder to see
 what kind of issues would arise and here are my findings so far:
 https://review.openstack.org/#/c/74786/

 On Wed, Feb 19, 2014 at 6:20 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
 
 
  On 2/19/2014 7:13 PM, Joe Gordon wrote:
 
  Hi All,
 
  As many of you know most oslo-incubator code is wildly out of sync.
  Assuming we consider it a good idea to sync up oslo-incubator code
  before cutting Icehouse, then we have a problem.
 
  Today oslo-incubator code is synced in an ad-hoc manner, resulting in
  duplicated efforts and wildly out of date code. Part of the challenges
  today are backwards incompatible changes and new oslo bugs. I expect
  that once we get a single project to have an up to date oslo-incubator
  copy it will make syncing a second project significantly easier. So
  because I (hopefully) have some karma built up in nova, I would like
  to volunteer nova to be the guinea pig.
 
 
  To fix this I would like to propose starting an oslo-incubator/nova
  sync team. They would be responsible for getting nova's oslo code up
  to date.  I expect this work to involve:
  * Reviewing lots of oslo sync patches
  * Tracking the current sync patches
  * Syncing over the low hanging fruit, modules that work without changing
  nova.
  * Reporting bugs to oslo team
  * Working with oslo team to figure out how to deal with backwards
  incompatible changes
 * Update nova code or make oslo module backwards compatible
  * Track all this
  * Create a roadmap for other projects to follow (re: documentation)
 
  I am looking for volunteers to help with this effort, any takers?
 
 
  best,
  Joe Gordon
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  Well I'll get the ball rolling...
 
  In the past when this has come up there has always been a debate over whether we
  should just sync continuously because we should always be up to date, or whether that is
  dangerous and we should only sync when there is a need (which is what the
  review guidelines say now [1]).  There are pros and cons:
 
  pros:
 
  - we get bug fixes that we didn't know existed
  - it should be less painful to sync if we do it more often
 
  cons:
 
  - it's more review overhead and some crazy guy thinks we need a special
 team
  dedicated to reviewing those changes :)
  - there are some changes in o-i that would break nova; I'm specifically
  thinking of the oslo RequestContext which has domain support now (or some
  other keystone thingy) and nova has its own RequestContext - so if we
 did
  sync that from o-i it would change nova's logging context and break on us
  since we didn't use oslo context.
 
  For that last con, I'd argue that we should move to the oslo
 RequestContext,
  I'm not sure why we aren't.  Would that module then not fall under
  low-hanging-fruit?

 I am classifying low hanging fruit as anything that doesn't require
 any nova changes to work.


+1


  I think the DB API modules have been a concern for auto-syncing before
 too
  but I can't remember why now...something about possibly changing the
  behavior of how the nova migrations would work?  But if they are already
  using the common code, I don't see the issue.

 AFAIK there is already a team working on db api syncing, so I was
  thinking of letting them deal with it.


+1

Doug



 
  This is kind of an aside, but I'm kind of confused now about how the
 syncs
  work with things that fall under oslo.rootwrap or oslo.messaging, like
 this
  patch [2].  It doesn't completely match the o-i patch, i.e. it's not
 syncing
  over openstack/common/rootwrap/wrapper.py, and I'm assuming because
 that's
  in oslo.rootwrap now?  But then why does the code still exist in
  oslo-incubator?
 
  I think the keystone guys are running into a similar issue where they
 want
  to remove a bunch of now-dead messaging code from keystone but can't
 because
  there are still some things in oslo-incubator using oslo.messaging code,
 or
  something weird like that. So maybe those modules are considered out of
  scope for this effort until the o-r/o-m code is completely out of o-i?
 
  Finally, just like we'd like to have cores for each virt driver in nova
 and
  the neutron API in nova, I think this kind of thing, at least initially,
  would benefit from having some oslo cores involved in a team that are
 also
  familiar to a degree with nova, e.g. bnemec or dims.
 
  [1]
 https://wiki.openstack.org/wiki/ReviewChecklist#Oslo_Syncing_Checklist
  [2] https://review.openstack.org/#/c/73340/
 
  --
 
  Thanks,
 
  Matt Riedemann
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 

Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator sync workflow

2014-02-19 Thread Lance D Bragstad

To shed a little light on Matt's comment about Keystone removing
oslo-incubator code and the issues we hit: comments below.


Best Regards,

Lance Bragstad
ldbra...@us.ibm.com

Doug Hellmann doug.hellm...@dreamhost.com wrote on 02/19/2014 09:00:29
PM:

 From: Doug Hellmann doug.hellm...@dreamhost.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 02/19/2014 09:12 PM
 Subject: Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator
 sync workflow



 On Wed, Feb 19, 2014 at 9:52 PM, Joe Gordon joe.gord...@gmail.com
wrote:
  As a side to this, as an exercise I tried an oslo sync in cinder to see
 what kind of issues would arise and here are my findings so far:
 https://review.openstack.org/#/c/74786/

 On Wed, Feb 19, 2014 at 6:20 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
 
 
  On 2/19/2014 7:13 PM, Joe Gordon wrote:
 
  Hi All,
 
  As many of you know most oslo-incubator code is wildly out of sync.
  Assuming we consider it a good idea to sync up oslo-incubator code
  before cutting Icehouse, then we have a problem.
 
  Today oslo-incubator code is synced in an ad-hoc manner, resulting in
  duplicated efforts and wildly out of date code. Part of the challenges
  today are backwards incompatible changes and new oslo bugs. I expect
  that once we get a single project to have an up to date oslo-incubator
  copy it will make syncing a second project significantly easier. So
  because I (hopefully) have some karma built up in nova, I would like
  to volunteer nova to be the guinea pig.
 
 
  To fix this I would like to propose starting an oslo-incubator/nova
  sync team. They would be responsible for getting nova's oslo code up
  to date.  I expect this work to involve:
  * Reviewing lots of oslo sync patches
  * Tracking the current sync patches
  * Syncing over the low hanging fruit, modules that work without
changing
  nova.
  * Reporting bugs to oslo team
  * Working with oslo team to figure out how to deal with backwards
  incompatible changes
     * Update nova code or make oslo module backwards compatible
  * Track all this
  * Create a roadmap for other projects to follow (re: documentation)
 
  I am looking for volunteers to help with this effort, any takers?
 
 
  best,
  Joe Gordon
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  Well I'll get the ball rolling...
 
  In the past when this has come up there has always been a debate over whether we
  should just sync continuously because we should always be up to date, or whether that is
  dangerous and we should only sync when there is a need (which is what
the
  review guidelines say now [1]).  There are pros and cons:
 
  pros:
 
  - we get bug fixes that we didn't know existed
  - it should be less painful to sync if we do it more often
 
  cons:
 
  - it's more review overhead and some crazy guy thinks we need a special
team
  dedicated to reviewing those changes :)
  - there are some changes in o-i that would break nova; I'm specifically
  thinking of the oslo RequestContext which has domain support now (or
some
  other keystone thingy) and nova has its own RequestContext - so if we
did
  sync that from o-i it would change nova's logging context and break on
us
  since we didn't use oslo context.
 
  For that last con, I'd argue that we should move to the oslo
RequestContext,
  I'm not sure why we aren't.  Would that module then not fall under
  low-hanging-fruit?

 I am classifying low hanging fruit as anything that doesn't require
 any nova changes to work.

 +1
  I think the DB API modules have been a concern for auto-syncing before
too
  but I can't remember why now...something about possibly changing the
  behavior of how the nova migrations would work?  But if they are
already
  using the common code, I don't see the issue.

 AFAIK there is already a team working on db api syncing, so I was
  thinking of letting them deal with it.

 +1

 Doug

 
  This is kind of an aside, but I'm kind of confused now about how the
syncs
  work with things that fall under oslo.rootwrap or oslo.messaging, like
this
  patch [2].  It doesn't completely match the o-i patch, i.e. it's not
syncing
  over openstack/common/rootwrap/wrapper.py, and I'm assuming because
that's
  in oslo.rootwrap now?  But then why does the code still exist in
  oslo-incubator?
 
  I think the keystone guys are running into a similar issue where they
want
  to remove a bunch of now-dead messaging code from keystone but can't
because
  there are still some things in oslo-incubator using oslo.messaging
code, or
  something weird like that. So maybe those modules are considered out of
  scope for this effort until the o-r/o-m code is completely out of o-i?
 

For the Keystone work specifically, we were looking to remove the
openstack.common.notifier
and openstack.common.rpc modules from Keystone common 

[openstack-dev] [Neutron] Disscussion about to allow specific floating IP Address

2014-02-19 Thread Yuuichi Fujioka
Hi, folks.

I have posted a patch to allow specifying the IP address of a floating IP. [1]

My motivation is to address some use cases.

In the case below, the floating IP address needs to be specified:

 An organization wants to migrate a system to a private cloud (OpenStack).
 The system has an in-office public IP address that is accessed from the
office network.
 There are cases where we cannot change the IP address for some reason.

I understand that the design decision was to prohibit specifying the floating
IP address, but I hope we can rethink it.
I think there are users who want this function.

I propose the following solution ([1] does not enable it yet):

* By default, specifying a floating IP address is prohibited, even for an
admin user.
* When policy.json is set to allow it, the address can be specified.
 e.g. if the policy is rule:admin_only, non-admin users cannot set it.
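
For illustration, a policy.json entry for such a gate might look like the
following (the exact policy key is an assumption on my part; [1] may choose
a different name):

```json
{
    "create_floatingip:floating_ip_address": "rule:admin_only"
}
```

With this rule in place, only admin users could pass floating_ip_address on
floating IP creation; operators could relax it to rule:admin_or_owner if they
want tenants to choose their own addresses.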

I would like to get your feedback.

 [1] https://review.openstack.org/#/c/70286/

Thanks.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-19 Thread Stephen Balukoff
Hi guys!

This is a great discussion, and I'm glad y'all have been participating in
it thus far, eh! Thanks also for your patience digesting my mile-long posts.

My comments are in-line:


On Wed, Feb 19, 2014 at 3:47 PM, Youcef Laribi youcef.lar...@citrix.comwrote:

  Hi guys,



 I have been catching up on this interesting thread around the object
 model, so sorry in advance to jump in late in this debate, and if I missed
 some of the subtleties of the points being made so far.



 I tend to agree with Sam that the original intention of the current object
 model was never tied to a physical deployment. We seem to be confusing the
 tenant-facing object model which is completely logical (albeit with some
 “properties” or “qualities” that a tenant can express) from the
 deployment/implementation aspects of such a logical model (things like
 cluster/HA, one vs. multiple backends, virtual appliance vs. OS process,
 etc). We discussed in the past, the need for an Admin API (separate from
 the tenant API) where a cloud administrator (as opposed to a tenant) could
 manage the deployment aspects, and could construct different offerings that
 can be exposed to a tenant, but in the absence of such an admin API (which
 would necessarily be very technology-specific), this responsibility is
 currently shouldered by the drivers.


Looking at the original object model but not having been here for the
origin of these things, I suspect the original intent was to duplicate the
functionality of one major cloud provider's load balancing service and to
keep things as simple as possible. Keeping things as simple as they can be
is of course a desirable goal, but unfortunately the current object model
is too simplistic to support a lot of really desirable features that cloud
tenants are asking for. (Hence the addition of L7 and SSL necessitating a
model change, for example.)

I'm still of the opinion that HA at least should be one of these features--
and although it does speak to topology considerations, it should still be
doable in a purely logical way for the generic case. And I feel pretty
strongly that intelligence around core features (of which I'd say HA
capability is one-- I know of no commercial load balancer solution that
doesn't support HA in some form) should not be delegated solely to drivers.
In addition to intelligence around HA, not having greater visibility into
the components that do the actual load balancing is going to complicate
other features as well--  like auto-provisioning of load balancing
appliances or pseudo-appliances, statistics and monitoring, and scaling.
 And again, the more of these features we delegate to drivers, the more
clients are likely to experience vendor lock-in due to specific driver
implementations being different.

Maybe we should revisit the discussion around the need for an Admin API?
I'm not convinced that all admin API features would be tied to any specific
technology. :/  Most active-standby HA configurations, for example, use
some form of floating IP to achieve this (in fact, I can't think of any
that don't right now). And although specific implementations of how this is
done will vary, a 'floating IP' is a common feature here.


 IMO a tenant should only care about whether VIPs/Pools are grouped
 together to the extent that the provider allows the tenant to express such
 a preference. Some providers will allow their tenants to express such a
 preference (e.g. because it might impact cost), and others might not as it
 wouldn’t make sense in their implementation.


Remind me to tell you about the futility of telling a client what he or she
should want sometime. :)

In all seriousness, though, we should come to a decision as to whether we
allow a tenant to make such decisions, and if so, exactly how far we let
them trespass onto operational / implementation concerns. Keep in mind that
what we decide here also directly impacts a tenant's ability to deploy load
balancing on a specific vendor's appliance. (Which, I've been lead to
believe, is a feature some tenants are going to demand.)

I've heard some talk of a concept of 'flavors' which might solve this
problem, but I've not seen enough detail about this to be able to register
an opinion on it. In the absence of a better idea, I'm still plugging for
that whole cluster + loadbalancer concept alluded to in my #4.1 diagram
in this e-mail thread. :)


 Also the mapping between pool and backend is not necessarily 1:1, and is
 not necessarily at the creation time of pool, as this is purely a driver
 implementation decision (I know that currently implementations are like
 this, but another driver can choose a different approach). A driver could
 for example delay mapping a pool to a backend, until a full LB
 configuration is completed (when pool has members, and a VIP is attached to
 the pool). A driver can also move these resources around between backends
 if it finds out it put them in a non-optimal backend initially. As long as
 the 

[openstack-dev] tox in Ubuntu 13.04 (raring) or earlier?

2014-02-19 Thread Mike Spreitzer
I just installed DevStack into raring, and that appeared to work.  So I 
went on to try `tox` in /opt/stack/nova.  My invocation of tox created a 
virtual environment using /usr/lib/python2.7/dist-packages/virtualenv.py. 
In raring the latest virtualenv is version 1.9.1, which installs pip 
version 1.3.1 into the virtual environments it creates.  Nova's 
requirements include hacking, which requires pip >= 1.4.  So I crashed and 
burned with a version conflict on pip.  Why isn't everybody having this 
problem?

Thanks,
Mike


[openstack-dev] [Neutron][LBaaS] Feedback on SSL implementation

2014-02-19 Thread Stephen Balukoff
Hi y'all!

Good news! This is the last of the mile-long posts I said I would write
after the initial post last week proposing the major model change. Yay for
small miracles, right?

I'm mostly working off this document in producing this feedback:
https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL

*Regarding storing private keys:*

Please let me know if y'all want suggestions here. In our implementation we
encrypt sensitive data like this using a data encipherment passphrase (i.e. a
long string of gobbledygook) stored on the cloud OS application servers
before storing the encrypted SSL keys in the database. (So, encrypted SSL
key, and the decipherment passphrase are not collocated data at rest.) In
order for this to be effective in an OpenStack environment, the database
needs to not live on the same hardware as the API server. In any case, this
is a pretty solvable problem (and maybe we can delegate it to barbican?)
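
To make the separation concrete, here is a stdlib-only toy sketch of the
idea: the passphrase lives on the API servers, only the salt, nonce, and
ciphertext go in the database. The cipher below (SHA-256 keystream) is purely
illustrative to keep the sketch dependency-free -- a real deployment should
use a vetted AEAD cipher via Barbican or the cryptography library, and all
function names here are hypothetical, not existing Neutron code.

```python
import hashlib
import os


def _keystream(key, nonce, length):
    # SHA-256 in counter mode; illustrative only, NOT production crypto.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def encrypt_private_key(pem_bytes, passphrase):
    # Derive the encryption key from a passphrase stored on the application
    # servers, so the database alone never holds enough to recover the key.
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100000)
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(pem_bytes))
    ciphertext = bytes(a ^ b for a, b in zip(pem_bytes, stream))
    # Store salt, nonce, ciphertext in the DB; the passphrase stays elsewhere.
    return salt, nonce, ciphertext


def decrypt_private_key(salt, nonce, ciphertext, passphrase):
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100000)
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))
```

The point is only the data-at-rest split: an attacker who dumps the database
gets salt + nonce + ciphertext but not the passphrase needed to derive the key.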

Also, it should be pointed out that load balancer vendors should hopefully
take pains to ensure private SSL keys are encrypted at rest when stored on
their appliances. :/

*Future Design Considerations:*  I do not think it's going to be possible
to *not* remember the private key. Specifically, certain API requests will
entail updating various aspects of the load balancer configuration that
will also require restarting processes, etc. in an automated way. It's not
possible to restart haproxy or stunnel with an encrypted private key
without providing the decipherment passphrase, which may not be available
with all API calls.

*SSL Policies Managing:*
What are the fields under the 'Pass info' section? Bits I get, but
shouldn't this be informational?

Also: "Front-End-Https"?  Er... everywhere else I've seen, that field is
"X-Forwarded-Proto: https".

Any thoughts on adding support for HSTS here as well? (
http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security )

*SSL Certificates Managing:*

What are your thoughts on stripping the passphrase the user enters when
adding a certificate? Don't misunderstand: I think private SSL keys should
be stored encrypted, but I'd much rather we rely on a machine-generated 80+
character-long passphrase for this than the abc123 passphrase some users
will definitely have entered when they generated their certificate requests.

And again, I don't see how we can do automated restarts of services without
having a persistent private SSL key.

Also note that when adding or updating a certificate, we should immediately
check the modulus of the public certificate against the modulus of the
private SSL key. If they do not match, we should return an error.
(Mismatching cert and key are a very common problem we encounter.)
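
That modulus check is easy to automate by shelling out to the openssl CLI;
a minimal sketch (function names are illustrative, not an existing Neutron
API) could look like:

```python
import hashlib
import subprocess


def modulus_digest(kind, path):
    # kind is "x509" for the certificate, "rsa" for the private key.
    out = subprocess.check_output(
        ["openssl", kind, "-noout", "-modulus", "-in", path])
    return hashlib.md5(out).hexdigest()


def cert_matches_key(cert_path, key_path):
    # A certificate and key belong together iff their RSA moduli match.
    return modulus_digest("x509", cert_path) == modulus_digest("rsa", key_path)
```

Running this at certificate add/update time lets the API return a clean
validation error instead of a broken listener later.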

And! In the edit screen: We should allow the user to enter a new
certificate and CA chain. It's common for certificate authorities to renew
SSL certificates without re-issuing the private SSL key.

*On all views which present a list of SSL certificates:*
Please also list the common name (CN) of the certificate, all X509v3
Subject Alternative Names, number of bits in the private SSL key, and the
expiration date of the certificate.  (It's also worthwhile highlighting in
red any certificates which are expired.)  It might not hurt to show the
key's fingerprint or modulus as well.

*SSL Trusted Certificates Managing:*
Are these for authenticating web clients connecting to the HTTPS front-end?

*Front-end versus back-end protocols:*
It's actually really common for an HTTPS-enabled front-end to speak HTTP to
the back-end.  The assumption here is that the back-end network is
trusted and therefore we don't need to bother with the (considerable)
extra CPU overhead of encrypting the back-end traffic. To be honest, if
you're going to speak HTTPS on the front-end and the back-end, then the
only possible reason for even terminating SSL on the load balancer is to
insert the X-Forwarded-For header. In this scenario, you lose almost all the
benefit of doing SSL offloading at all!

If we make a policy decision right here not to allow front-end and back-end
protocol to mismatch, this will break a lot of topologies.

*Default cert when using SNI:*
So...  while I can see that SNI is implicitly supported by:

   - vip_ssl_certificate_assoc (new, multiple certificates per vip.
   certificate may be associated with multiple vips)


In any given SNI configuration, you still need to specify a 'default' cert
to use if the client uses a protocol that doesn't support SNI, or if none
of the various hostnames from the configured certificates match the
hostname that the client has requested.  To solve this, I would recommend
adding a 'default' boolean field to the vip_ssl_certificate_assoc object,
have this be set to true for the first certificate associated with the
VIP, and write a validation (in the agent code) that exactly one
certificate associated with a given VIP is marked as default whenever the
VIP-ssl_certificate associations are created or modified.
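
That validation is a one-liner in spirit; here is a sketch, assuming the
association records carry a boolean 'default' flag (the record shape and
function name are hypothetical, since the model change isn't merged):

```python
def validate_default_certificates(associations):
    """Ensure exactly one cert in a VIP's association list is the default.

    `associations` is a list of dicts such as
    {"cert_id": 1, "default": True} -- an assumed shape for the proposed
    vip_ssl_certificate_assoc records.
    """
    defaults = sum(1 for a in associations if a.get("default"))
    if defaults != 1:
        raise ValueError(
            "exactly one certificate must be marked default per VIP "
            "(found %d)" % defaults)
```

The agent would call this whenever VIP-certificate associations are created
or modified, rejecting configurations with zero or multiple defaults.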

In the screen associating 

Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-19 Thread Ladislav Smola

On 02/19/2014 08:05 PM, Dougal Matthews wrote:

On 19/02/14 18:49, Hugh O. Brock wrote:

On Wed, Feb 19, 2014 at 06:31:47PM +, Dougal Matthews wrote:

On 19/02/14 18:29, Jason Rist wrote:

Would it be possible to create some token for use throughout? Forgive
my naivete.


I don't think so, the token would need to be understood by all the
services that we store passwords for. I may be misunderstanding 
however.



Hmm... isn't this approximately what Keystone does? Accept a password
once from the user and then return session tokens?


Right - but I think the heat template expects passwords, not tokens. I
don't know how easily we can change that.



We most probably can't. Most of the passwords are sent to keystone to 
set up services, etc.






Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-19 Thread Ladislav Smola

On 02/19/2014 06:29 PM, Dougal Matthews wrote:

On 19/02/14 17:10, Ladislav Smola wrote:

Hello,

I would like to have your opinion about how to deal with passwords in
Tuskar-API

The background is, that tuskarAPI is storing heat template parameters in
its database, it's a
preparation for more complex workflows, when we will need to store the
data before the actual
heat stack-create.

So right now, the state is unacceptable: we are storing sensitive 
data (all the heat passwords and keys) in a raw form in the TuskarAPI 
database. That is wrong, right?


I agree, this situation needs to change.

I'm +1 for not storing the passwords if we can avoid it. This would 
apply to all situations and not just Tuskar.


The question for me, is what passwords will we have and when do we 
need them? Are any of the passwords required long term.




The only password I know we need right now is the AdminPassword, which 
will be used for the first sign-in to the overcloud Horizon and e.g. the 
CLI. But we should not store that, just display it at some point.

If we do need to store passwords it becomes a somewhat thorny issue: 
how does Tuskar know what a password is? If this is flagged up by the 
UI/client then we are relying on the user to tell us, which isn't wise.


This is set at the template level by the NoEcho attribute. We are already 
using that information.
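
For reference, a minimal CFN-style heat template fragment showing the kind
of NoEcho-flagged parameter Tuskar can inspect (the parameter name here is
illustrative):

```json
{
  "Parameters": {
    "AdminPassword": {
      "Type": "String",
      "NoEcho": "true",
      "Description": "Password for the overcloud admin user"
    }
  }
}
```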



