Re: [openstack-dev] [heat][mistral] Mistral agenda item for Heat community meeting on Apr 17

2014-04-18 Thread Renat Akhmerov
I appreciate that Zane! I’ll set up at least ten alarm clocks to wake up on 
time :)

Renat Akhmerov
@ Mirantis Inc.

On 18 Apr 2014, at 06:44, Zane Bitter zbit...@redhat.com wrote:

 On 17/04/14 00:34, Renat Akhmerov wrote:
 Ooh, I confused the day of meeting :(. My apologies, I’m in a completely 
 different timezone (for me it’s in the middle of night) so I strongly 
 believed it was on a different day. I’ll be there next time.
 
 Yeah, it's really unfortunate that it falls right at midnight UTC, because it 
 makes the dates really confusing :/ It's technically correct though, so it's 
 hard to know what to do to make it less confusing.
 
 We ended up pretty short on time anyway, so it worked out well ;) I just 
 added it to the agenda for next week, and hopefully that meeting time should 
 be marginally more convenient for you anyway.
 
 cheers,
 Zane.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] oslo removal of use_tpool conf option

2014-04-18 Thread Roman Podoliaka
Hi all,

 I objected to this and asked (more like demanded) for it to be added back 
 into oslo. It was not. What I did not realize when I was reviewing this nova 
 patch was that nova had already synced oslo’s change. And now we’ve 
 released Icehouse with a conf option missing that existed in Havana. 
 Whatever projects were using oslo’s DB API code have had this option 
 disappear (unless an alternative was merged). Maybe it’s only nova... I 
 don’t know.

First, I'm very sorry that the Nova Icehouse release was cut with this
option missing. Whether it actually works or not, we should always
ensure we preserve backwards compatibility. I should have insisted on
making this sync from oslo-incubator 'atomic' in the first place, so
that the tpool option was removed from openstack/common code and re-added
to Nova code in one commit, not two. So it's clearly my fault as the
reviewer who made the original change to oslo-incubator.
Nevertheless, the patch re-adding this to Nova has been up for review
since December 3rd. Can we ensure it lands in master ASAP and is
backported to Icehouse?

On removing this option from oslo.db originally: as I've already said in
response to your comment on the review, I believe oslo.db should neither
know nor care whether you use eventlet/gevent/OS threads/multiple
processes/callbacks/etc. for handling concurrency. For the very same
reason, SQLAlchemy doesn't do that. It just can't (and should not) make
such decisions for you. At the same time, eventlet provides a very
similar feature out of the box, and
https://review.openstack.org/#/c/59760/ reuses it in Nova.

 unless you really want to live with DB calls blocking the whole process. I 
 know I don’t

Me neither. But the way we've been dealing with this in Nova and other
projects is to have multiple workers processing those queries. I know
it's not perfect, but it's what we default to (what folks mostly use in
production) and what we test. And, as we all know, something that is
untested is broken. If eventlet tpool were a better option, I believe we
would default to it. On the other hand, this seems to be a fundamental
issue of the MySQLdb-python DB API driver. A pure Python driver (though
it would use more CPU time, of course), as well as psycopg2, would work
just fine. Probably it's MySQLdb-python we should fix, rather than
focusing on using the workaround provided by eventlet.
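
For readers following along, the pattern the use_tpool option enabled can be sketched quickly: offload a blocking DB call to a native OS thread so the event loop is not stalled. In Nova this role is played by eventlet's tpool.execute; the sketch below uses stdlib concurrent.futures instead so it is runnable anywhere, and all names in it are illustrative rather than Nova's actual code.

```python
# Sketch of the use_tpool idea: run a blocking DB call in a real OS
# thread and wait for its result, instead of blocking the whole process.
# eventlet.tpool.execute(func, *args) is the analogous primitive in Nova.
from concurrent.futures import ThreadPoolExecutor
import time

_pool = ThreadPoolExecutor(max_workers=4)

def blocking_db_call(query):
    # stands in for a MySQLdb-python call, which blocks its whole process
    time.sleep(0.01)
    return "rows for %s" % query

def execute_in_tpool(func, *args):
    # hand the blocking call to the thread pool and wait for the result;
    # with eventlet, other green threads keep running in the meantime
    return _pool.submit(func, *args).result()

print(execute_in_tpool(blocking_db_call, "SELECT 1"))
```

The alternative Roman describes, multiple worker processes, sidesteps the problem instead: the OS can schedule around a worker whose process is blocked in the driver.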

Once again, sorry for breaking things. Let's fix this and try not to
repeat the same mistakes in the future.

Thanks,
Roman

On Fri, Apr 18, 2014 at 4:42 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:
 Thanks for the good explanation, was just a curiosity of mine.

 Any idea why it has taken so long for the eventlet folks to fix this (I know
 you proposed a patch/patches a while ago)? Is eventlet really that
 unmaintained? :(

 From: Chris Behrens cbehr...@codestud.com
 Date: Thursday, April 17, 2014 at 4:59 PM
 To: Joshua Harlow harlo...@yahoo-inc.com
 Cc: Chris Behrens cbehr...@codestud.com, OpenStack Development Mailing
 List (not for usage questions) openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] oslo removal of use_tpool conf option


 On Apr 17, 2014, at 4:26 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

 Just an honest question (no negativity intended I swear!).

 If a configuration option exists and only works with a patched eventlet why
 is that option an option to begin with? (I understand the reason for the
 patch, don't get me wrong).


 Right, it’s a valid question. This feature has existed one way or another in
 nova for quite a while. Initially the implementation in nova was wrong. I
 did not know that eventlet was also broken at the time, although I
 discovered it in the process of fixing nova’s code. I chose to leave the
 feature because it’s something that we absolutely need long term, unless you
 really want to live with DB calls blocking the whole process. I know I
 don’t. Unfortunately the bug in eventlet is out of our control. (I made an
 attempt at fixing it, but it’s not 100%. Eventlet folks currently have an
 alternative up that may or may not work… but certainly is not in a release
 yet.)  We have an outstanding bug on our side to track this, also.

 The below is comparing apples/oranges for me.

 - Chris


 Most users would not be able to use such a configuration since they do not
 have this patched eventlet (I assume a newer version of eventlet will someday
 have this patch integrated?), so although I understand the frustration around
 this, I don't understand why it would be an option in the first place. As an
 aside, if the only way to use this option is via a non-standard eventlet, then
 how is this option tested in the community, aka outside of said company?

 An example:

 If yahoo has some patched kernel A that requires an XYZ config turned on in
 openstack, and the only way to take advantage of kernel A is with XYZ config
 'on', then it seems like that’s a yahoo-only patch that is not testable and
 usable for 

Re: [openstack-dev] How to implement and configure a new Neutron vpnaas driver from scratch?

2014-04-18 Thread Bo Lin
Hi Julio, 
+1 for Paul's response. Multiple-provider VPNaaS support is delayed, but you 
can take https://review.openstack.org/#/c/74156/ and 
https://review.openstack.org/#/c/74144/ as examples to write your own vpnaas 
driver without multi-provider support. If you hit any questions or problems in 
your code that keep it from working, just upload your code to the review board 
and we can figure out how to solve it :). 

Thanks! 
---Bo 


- Original Message -

From: Paul Michali (pcm) p...@cisco.com 
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
Sent: Friday, April 11, 2014 2:15:18 AM 
Subject: Re: [openstack-dev] How to implement and configure a new Neutron 
vpnaas driver from scratch? 

By not “working” do you mean you cannot get the plugin to work in a 
multi-provider environment? Multi-provider solutions have been tabled until 
Juno, where more discussion is occurring on what is the best way to support 
different service providers. 

However, you should be able to get the plugin to work as the “sole” VPN service 
provider, which is what the Cisco solution does currently. You can look at how 
I’ve done that in the cisco_ipsec.py modules in the service_drivers and 
device_drivers directories, under neutron/services/vpn/. 


Regards, 

PCM (Paul Michali) 

MAIL …..…. p...@cisco.com 
IRC ……..… pcm_ ( irc.freenode.com ) 
TW ………... @pmichali 
GPG Key … 4525ECC253E31A83 
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 



On Apr 10, 2014, at 1:51 PM, Julio Carlos Barrera Juez  
juliocarlos.barr...@i2cat.net  wrote: 




Hi. 

Eight months after this patch was created, and weeks after it was abandoned ( 
https://review.openstack.org/#/c/41827/ ), I still don't know how we can 
develop a VPNaaS plugin following Bo Lin's instructions. Is there any other 
patch trying to solve the problem? Is there any way to work around the issue 
to get a VPNaaS plugin working? 

Thank you! 


Julio C. Barrera Juez 
Office phone: +34 93 357 99 27 
Distributed Applications and Networks Area (DANA) 
i2CAT Foundation, Barcelona, Spain 
http://dana.i2cat.net 


On 27 February 2014 10:51, Bo Lin  l...@vmware.com  wrote: 


Hi Julio, 
You can take https://review.openstack.org/#/c/74156/ and 
https://review.openstack.org/#/c/74144/ as examples to write your own vpnaas 
driver. For more info about the service type framework, you can also refer to 
the neutron/services/loadbalancer code. 


From: Julio Carlos Barrera Juez  juliocarlos.barr...@i2cat.net  
To: OpenStack Development Mailing List (not for usage questions)  
openstack-dev@lists.openstack.org  
Sent: Thursday, February 27, 2014 5:26:32 PM 
Subject: Re: [openstack-dev] How to implement and configure a new Neutron 
vpnaas driver from scratch? 


I'm following the change you pointed to a week ago. It seems to be working 
now, and will eventually be approved soon. I will be happy when it is approved. 

Anyway, I need more information about how to develop a service driver and a 
device driver for the VPN plugin. Through reverse engineering I realized that I 
need an RPC agent and an RPC channel between them to communicate, plus a kind 
of callback to answer. Where can I find documentation about this and some 
examples? Is there a best-practice guide for the use of this architecture? 

Thank you again! 


Julio C. Barrera Juez 
Office phone: +34 93 357 99 27 
Distributed Applications and Networks Area (DANA) 
i2CAT Foundation, Barcelona, Spain 
http://dana.i2cat.net 


On 19 February 2014 09:13, Julio Carlos Barrera Juez  
juliocarlos.barr...@i2cat.net  wrote: 


Thank you very much Bo. I will try all your advice and check if it works! 


Julio C. Barrera Juez 
Office phone: +34 93 357 99 27 
Distributed Applications and Networks Area (DANA) 
i2CAT Foundation, Barcelona, Spain 
http://dana.i2cat.net 


On 18 February 2014 09:18, Bo Lin  l...@vmware.com  wrote: 


I wonder whether your neutron server code includes the VPNaaS integration 
with service type framework change from 
https://review.openstack.org/#/c/41827/21 ; if not, the service_provider option 
is useless. You need to include that change before developing your own driver. 

Q&A (in my opinion; something may be missing): 
- What is the difference between service drivers and device drivers? 
Service drivers are driven by the VPN service plugin and are responsible for 
casting RPC requests (CRUD of vpnservices) to, and handling callbacks from, 
the VPN agent. Device drivers are driven by the VPN agent and are responsible 
for implementing device-specific VPN operations and reporting VPN running 
status. 

- Could I implement only one of them? 
A device driver must be implemented for your own device. Unless the default 
ipsec service driver is definitely appropriate, I suggest you implement both 
of them. After including the VPNaaS service type framework integration, the 
service driver work is simple. 

- Where do I need to put my Python implementation in my OpenStack instance? 
Do you mean let your 
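
To make the service/device driver split described above concrete, here is a minimal sketch. The class names, method names, and RPC surface are hypothetical, not the actual Neutron VPNaaS interfaces: the point is only that the service driver forwards CRUD over RPC while the device driver does the device-specific work and reports status.

```python
# Hypothetical sketch of the two driver roles Bo describes.

class MyVpnServiceDriver(object):
    """Runs in the VPN service plugin; casts CRUD requests to the agent."""

    def __init__(self, rpc_client):
        self.rpc = rpc_client

    def create_vpnservice(self, context, vpnservice):
        # no device logic here -- just notify the agent over RPC
        self.rpc.cast(context, 'vpnservice_created', vpnservice=vpnservice)


class MyVpnDeviceDriver(object):
    """Runs in the VPN agent; programs the device and reports status."""

    def vpnservice_created(self, context, vpnservice):
        self.apply_config(vpnservice)  # device-specific work
        return {'id': vpnservice['id'], 'status': 'ACTIVE'}

    def apply_config(self, vpnservice):
        # e.g. write an ipsec config and restart the daemon
        pass
```

Implementing only the device driver, as Bo suggests, works when the default service driver's RPC casts already match what your agent expects.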

Re: [openstack-dev] [Ironic] Should we adopt a blueprint design process

2014-04-18 Thread Thierry Carrez
Chris Behrens wrote:
 +1

FWIW we'll have a cross-project workshop at the design summit about
tracking incoming features -- covering blueprint proposal, approval and
prioritization. We'll discuss extending the -specs repositories
experience and see how it fits the whole picture in the long run:

http://summit.openstack.org/cfp/details/3

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [murano] Proposal to add Ruslan Kamaldinov to murano-core team

2014-04-18 Thread Timur Sufiev
Ruslan,

welcome to the Murano core team :)!

On Thu, Apr 17, 2014 at 7:32 PM, Anastasia Kuznetsova
akuznets...@mirantis.com wrote:
 +1


 On Thu, Apr 17, 2014 at 7:11 PM, Stan Lagun sla...@mirantis.com wrote:

 +1

 Sincerely yours,
 Stan Lagun
 Principal Software Engineer @ Mirantis



 On Thu, Apr 17, 2014 at 6:51 PM, Georgy Okrokvertskhov
 gokrokvertsk...@mirantis.com wrote:

 +1


 On Thu, Apr 17, 2014 at 6:01 AM, Dmitry Teselkin dtesel...@mirantis.com
 wrote:

 +1

 Agree


 On Thu, Apr 17, 2014 at 4:51 PM, Alexander Tivelkov
 ativel...@mirantis.com wrote:

 +1

 Totally agree

 --
 Regards,
 Alexander Tivelkov


 On Thu, Apr 17, 2014 at 4:37 PM, Timur Sufiev tsuf...@mirantis.com
 wrote:

 Guys,

 Ruslan Kamaldinov has been doing a lot of things for Murano recently
 (including devstack integration, automation scripts, making Murano
 more compliant with OpenStack standards and doing many reviews). He's
 actively participating in our ML discussions as well. I suggest we add
 him to the core team.

 Murano folks, please say your +1/-1 word.

 --
 Timur Sufiev








 --
 Thanks,
 Dmitry Teselkin
 Deployment Engineer
 Mirantis
 http://www.mirantis.com





 --
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284











-- 
Timur Sufiev



Re: [openstack-dev] [fuel-dev][Fuel] VXLAN tunnels support

2014-04-18 Thread Vladimir Kuklin
Thanks, Oleg.

Obviously, VXLAN support will be useful after we merge support for multiple
L2 segments:
https://blueprints.launchpad.net/fuel/+spec/multiple-cluster-networks which
is also targeted for 5.1.


On Mon, Apr 14, 2014 at 2:08 PM, Mike Scherbakov
mscherba...@mirantis.com wrote:

 Hi Oleg,
 thanks for submitting it.

 It's too late for the 5.0 release, as we have reached Feature Freeze. But it's
 certainly something we want to see in 5.1.
 In accordance with https://wiki.openstack.org/wiki/FeatureProposalFreeze,
 we are concentrating on bug fixes now, and a few exceptional features only.

 We will get back to your patches and review them after code freeze, when
 the RC is created:
 https://wiki.openstack.org/wiki/ReleaseCycle#Release_candidate_1.

 Thanks,


 On Fri, Apr 11, 2014 at 1:00 PM, Oleg Balakirev 
obalaki...@mirantis.com wrote:

 Hello,

 We have implemented support for VXLAN tunnels. Please review.

 Blueprint:
 https://blueprints.launchpad.net/fuel/+spec/neutron-vxlan-support
 Review requests:
 https://review.openstack.org/#/c/86611/
 https://review.openstack.org/#/c/83767/

 --
 ___
 Best Regards
 Oleg Balakirev
 Deployment Engineer
 Mirantis Inc.





 --
 Mike Scherbakov
 #mihgen





-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com


[openstack-dev] [Climate] No weekly meeting today

2014-04-18 Thread Dina Belova
Folks, o/

I'm really sorry, but Sylvain and I can't attend today's meeting, which is
why we decided not to have it.

All Climate-related questions and discussions are welcome in our IRC
channel, where we can discuss anything you want :)

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] [TripleO][design] review based conceptual design process

2014-04-18 Thread Petr Blaho
On Wed, Apr 16, 2014 at 06:44:28AM +1200, Robert Collins wrote:
 I've been watching the nova process, and I think it's working out well
 - it certainly addresses:
  - making design work visible
  - being able to tell who has had input
  - and providing clear feedback to the designers
 
 I'd like to do the same thing for TripleO this cycle.
 
 I'm thinking we can just add docs to incubator, since that's already a
 repository separate from our production code - what do folks think?
 
 -Rob
 
 -- 
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 

+1 for tripleo-spec repo
I like the idea of dedicated repo for design review process.

-- 
Petr Blaho, pbl...@redhat.com
Software Engineer



Re: [openstack-dev] [horizon][sahara] Merging Sahara-UI Dashboard code into horizon

2014-04-18 Thread Nikita Konovalov
Hi,

As for the questions:

1. I’d say there is no reason to keep a separate dashboard along with Project 
and Admin, so the panels should go under the “Data processing” panel group in 
the Project dashboard. Projects like Trove and Heat have already done that.

2. c) looks like the most reasonable way.

Also, it's important that our panels be available only if the Sahara service 
is running and registered in the catalog. The common practice, I guess, is 
adding special files to the openstack_dashboard/enabled (or 
openstack_dashboard/local/enabled) directory, so there is no longer a need to 
modify horizon’s config files to enable the dashboard.
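
As a sketch of that practice, a pluggable-settings file dropped into the enabled directory might look like the following. The file name, slugs, and module path here are hypothetical, not the actual Sahara panel definitions:

```python
# Hypothetical openstack_dashboard/local/enabled/_50_data_processing.py
# Horizon scans the enabled/ directories at startup and registers panels
# from module-level settings like these, so settings.py needs no edits.
PANEL = 'data_processing_clusters'    # slug of the panel being added
PANEL_DASHBOARD = 'project'           # dashboard it is attached to
PANEL_GROUP = 'data_processing'       # panel group within that dashboard
ADD_PANEL = ('openstack_dashboard.dashboards.project.'
             'data_processing.clusters.panel.Clusters')
```

The "only when the service is in the catalog" check Nikita mentions would typically live in the panel class itself rather than in this file.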

Best Regards,
Nikita Konovalov
Mirantis, Inc

On Apr 17, 2014, at 23:06 , Chad Roberts crobe...@redhat.com wrote:

 Per blueprint  
 https://blueprints.launchpad.net/horizon/+spec/merge-sahara-dashboard we are 
 merging the Sahara Dashboard UI code into the Horizon code base.
 
 Over the last week, I have been working on making this merge happen and along 
 the way some interesting questions have come up.  Hopefully, together we can 
 make the best possible decisions.
 
 Sahara is the Data Processing platform for OpenStack.  During incubation and 
 prior to that, a horizon dashboard plugin was developed to work with the data 
 processing api.  Our original implementation was a separate dashboard that we 
 would activate by adding to HORIZON_CONFIG and INSTALLED_APPS.  The layout 
 gave us a root of Sahara on the same level as Admin and Project.  Under 
 Sahara, we have 9 panels that make up the entirety of the functionality for 
 the Sahara dashboard.
 
 Over the past week, at least 2 questions have come up.  
 I'd like to get input from anyone interested.  
 
 1)  Where should the functionality live within the Horizon UI? So far, 2 
 options have been presented.
a)  In a separate dashboard (same level as Admin and Project).  This is 
 what we had in the past, but it doesn't seem to fit the flow of Horizon very 
 well.  I had a review up for this method at one point, but it was shot down, 
 so it is currently abandoned.
   b)  In a panel group under Project.  This is what I have started work on 
 recently. This seems to mimic the way other things have been integrated, but 
 more than one person has disagreed with this approach.
c)  Any other options?
 
 
 2)  Where should the code actually reside?
a)  Under openstack_dashboards/dashboards/sahara  (or data_processing).  
 This was the initial approach when the target was a separate dashboard.
b)  Have all 9 panels reside in openstack_dashboards/dashboards/project.  
 To me, this is likely to eventually make a mess of /project if more and more 
 things are integrated there.
c)  Place all 9 data_processing panels under 
 openstack_dashboards/dashboards/project/data_processing  This essentially 
 groups the code by panel group and might make for a bit less mess.
d)  Somewhere else?
 
 
 The current plan is to discuss this at the next Horizon weekly meeting, but 
 even if you can't be there, please do add your thoughts to this thread.
 
 Thanks,
 Chad Roberts (crobertsrh on irc)



Re: [openstack-dev] [Ironic] Should we adopt a blueprint design process

2014-04-18 Thread Lucas Alvares Gomes
+1 for me as well, I'd like to have a better way to track incoming features.

Also, as part of the migration process I think we need a good
wiki page explaining how to propose a new feature, with a
template of what's mandatory to fill out and what is optional. I
wouldn't like to have something like nova currently has as its
template[1]; I don't know what is mandatory there and what is not.
Many times you think you know a way to do something, but as you go
you realize that doing it in another way is much better, so I
think that in order to propose a new feature we should not assume that
the author knows _everything_ at the start.

[1] https://github.com/openstack/nova-specs/blob/master/specs/template.rst

On Fri, Apr 18, 2014 at 10:25 AM, Thierry Carrez thie...@openstack.org wrote:

 Chris Behrens wrote:
  +1

 FWIW we'll have a cross-project workshop at the design summit about
 tracking incoming features -- covering blueprint proposal, approval and
 prioritization. We'll discuss extending the -specs repositories
 experience and see how it fits the whole picture on the long run:

 http://summit.openstack.org/cfp/details/3


But before we do it, I think it might be worth waiting to see the output
of the session that Thierry pointed out.



Re: [openstack-dev] [TripleO][design] review based conceptual design process

2014-04-18 Thread Ladislav Smola

On 04/15/2014 08:44 PM, Robert Collins wrote:

I've been watching the nova process, and I think its working out well
- it certainly addresses:
  - making design work visible
  - being able to tell who has had input
  - and providing clear feedback to the designers

I'd like to do the same thing for TripleO this cycle..

I'm thinking we can just add docs to incubator, since thats already a
repository separate to our production code - what do folk think?

-Rob



+1 and +1 to separate specs repo



Re: [openstack-dev] [Climate] No weekly meeting today

2014-04-18 Thread Sylvain Bauza
Yes, sorry again about not being able to run the meeting...


2014-04-18 11:44 GMT+02:00 Dina Belova dbel...@mirantis.com:

 Folks, o/

 I'm really sorry, but Sylvain and I can't attend today's meeting, that's
 why it was decided not to have it.

 All Climate related questions and discussions are welcome in our IRC
 channel and we might discuss there all things you want to :)

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.





Re: [openstack-dev] [Devstack] add support for ceph

2014-04-18 Thread Sean Dague
On 04/18/2014 12:03 AM, Scott Devoid wrote:
 So I have had a chance to look over the whole review history again. I
 agree with Sean Dague and Dean Troyer's concerns that the current patch
 affects code outside of lib/storage and extras.d. We should make the
 Devstack extension system more flexible to allow for more extensions,
 although I am not sure this responsibility falls completely in the
 lap of those wishing to integrate Ceph.

Where should it fall? This has been pretty common when trying to bring
in anything major: the general plumbing needs to come from that same
effort. It's also a pretty sane litmus test of whether this is a drive-by
contribution that will get no support in the future (and thus Dean and I
will just be expected to fix things), or something whose author will be
actively contributing to keep things working in the future.

 What is more concerning, though, is the argument that *even when the Ceph
 patch meets these standards, it will still have to be pulled in from
 some external source*. Devstack is a central part of OpenStack's test
 and development system. Core projects depend upon it to develop and test
 drivers. As an operator, I use it to understand how changes might affect
 my production system. Documentation. Bug triage. Outreach. Each of these
 tasks and efforts benefits from having a curated and maintained set of
 extras in the mainline codebase. Particularly extras that are already
 represented by mainline drivers in other projects.

My concern is that there is a lot of code in devstack. And every time I
play with a different set of options we don't enable in the gate, things
get brittle. For instance, Fedora support gets broken all the time,
because it's not tested in the gate.

Something as big as using Ceph as the storage back end across a range of
services is a major change. And while there have been patches, I've yet
to see anyone volunteer 3rd-party testing here to help us keep it
working, or make the long-term commitment of being part of the devstack
community -- reviewing patches and fixing other bugs -- so there is some
confidence that if people try to use this, it works.

Some of the late reverts in nova for Icehouse hit this same kind of
issue: once certain rbd paths were lit up in the code base, within
24 hours we had user reports coming back of things exploding. That makes
me feel like there are a lot of daemons lurking here, and if this is
going to be a devstack mode that people are going to use a lot, then it
needs to be something that's tested.

If the user is pulling the devstack plugin from a 3rd party location,
then it's clear where the support needs to come from. If it's coming
from devstack, people are going to be private message pinging me on IRC
when it doesn't work (which happens all the time).

That being said, there are 2 devstack sessions available at design
summit. So proposing something around addressing the ceph situation
might be a good one. It's a big and interesting problem.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Ironic] Should we adopt a blueprint design process

2014-04-18 Thread Vladimir Kozhukalov
+1
Great idea.

many times you think that you know a way to do something but as you go
you then realize that doing it in another way is much better, so I
think that in order to propose a new feature we should not assume that
the author knows _everything_ at the start.

Absolutely agree.

It would be great to have some kind of design stages:
0) Bare idea, feature scope, use cases.
1) Preliminary architecture and API. Here we can start writing and
reviewing code.
2) Detailed understanding of architecture and API. Writing code, testing,
detailed discussion, debugging.
3) Merging the feature into master, debugging.

Vladimir Kozhukalov


On Fri, Apr 18, 2014 at 2:08 PM, Lucas Alvares Gomes
lucasago...@gmail.com wrote:

 +1 for me as well, I'd like to have a better way to track incoming
 features.

 Also, as part of the migration progress I think that we need a good
 wiki page explaining the process of how propose a new feature, with a
 template of what's mandatory to fill out and what is optional. I
 wouldn't like to have something like nova currently have as its
 template[1], I don't know what is mandatory there and what's is not,
 many times you think that you know a way to do something but as you go
 you then realize that doing it in another way is much better, so I
 think that in order to propose a new feature we should not assume that
 the author knows _everything_ at the start.

 [1] https://github.com/openstack/nova-specs/blob/master/specs/template.rst

 On Fri, Apr 18, 2014 at 10:25 AM, Thierry Carrez thie...@openstack.org
 wrote:
 
  Chris Behrens wrote:
   +1
 
  FWIW we'll have a cross-project workshop at the design summit about
  tracking incoming features -- covering blueprint proposal, approval and
  prioritization. We'll discuss extending the -specs repositories
  experience and see how it fits the whole picture on the long run:
 
  http://summit.openstack.org/cfp/details/3


 But before we do it I think it might worth waiting to see the output
 of this session that
 Thierry pointed out.




Re: [openstack-dev] [Devstack] add support for ceph

2014-04-18 Thread Chmouel Boudjnah
On Fri, Apr 18, 2014 at 6:32 AM, Sean Dague s...@dague.net wrote:

 That being said, there are 2 devstack sessions available at design
 summit. So proposing something around addressing the ceph situation
 might be a good one. It's a big and interesting problem.



I have added a session that does just that here:

http://summit.openstack.org/cfp/details/379

Chmouel


Re: [openstack-dev] [Neutron][LBaaS] Requirements and API revision progress

2014-04-18 Thread Eugene Nikanorov
Folks,

As we're discussing the single-call approach, I think it would be helpful to
actually implement such an API (e.g., practically, in the code) and see how
it works, how compatibility is maintained, and so on.
I think you could start with the basic features available for a single call -
e.g. a single vip and a single pool (as supported by the existing API).
In other words, let's relax the requirement of supporting everything within
one call (it should be the goal eventually); as a first step, something
simpler is enough.
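
For illustration, the relaxed first step could accept a nested request along these lines. The field names are hypothetical, not a settled schema; the point is that one document carries the vip and its single pool:

```python
# Hypothetical single-call create payload: one request describes the vip
# and its single pool, matching the "single vip, single pool" first step.
single_call_request = {
    'load_balancer': {
        'name': 'web-lb',
        'vip': {'address': '203.0.113.10', 'protocol_port': 80},
        'pool': {
            'protocol': 'HTTP',
            'members': [
                {'address': '10.0.0.11', 'protocol_port': 8080},
                {'address': '10.0.0.12', 'protocol_port': 8080},
            ],
        },
    },
}
```

Compatibility with the existing API could then be exercised by mapping such a document onto the existing vip and pool resources internally, which is one way to let the two styles coexist side by side.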

Basically, I would prefer that there not be a competition between API styles,
so I'm strongly against versioning. Let's do it side by side; if the
single-call API proves to work well, it will be a nice addition for those
who expect it.

Thanks,
Eugene.




On Fri, Apr 18, 2014 at 6:36 AM, Brandon Logan
brandon.lo...@rackspace.com wrote:

  We look forward to your proposal and I hope it does get us closer (if
 not all the way) to an agreed-upon revision.  Also, thank you for taking
 the time to fully understand our thought processes on some of the features
 we want and decisions we made in the proposal.

 Thanks,
 Brandon


 On 04/17/2014 09:01 PM, Stephen Balukoff wrote:

 Hi Brandon,

  Yep! I agree that both those definitions are correct: It all depends on
 context.

  I'm usually OK with going with whatever definition is in popular use by
 the user-base. However, load balancer as a term is so ambiguous among
 people actually developing a cloud load balancing system that a definition
 or more specific term is probably called for. :)

  In any case, all I'm really looking for is a glossary of defined terms
 attached to the API proposal, especially for terms like this that can have
 several meanings depending on context.  (That is to say, I don't think it's
 necessary to define IP address for example--  unless, say, the
 distinction between IPv4 or IPv6 becomes important to the conversation
 somehow.)

  In any case note that I actually like your API thus far and think it's a
 pretty good start: Y'all put forth the laudable effort to actually create
 it, there's obviously a lot of forethought put into your proposal, and that
 certainly deserves respect! In fact, my team and I will probably be
 building off of what you've started in creating our proposal (which, again,
 I hope to have in a showable state before next week's meeting, and which
 I'm anticipating won't be the final form this API revision takes anyway.)

  Thanks,
 Stephen

  There are only two truly difficult problems in computer science: Naming
 things, cache invalidation, and off-by-one errors.



 On Thu, Apr 17, 2014 at 6:31 PM, Brandon Logan 
 brandon.lo...@rackspace.com wrote:

  Stephen,
 Thanks for elaborating on this.  I agreed, and still do, that our
 proposal's load balancer falls more in line with that glossary's term for
 listener, and I now can see the discrepancy with load balancer.  Yes, in
 this case the term load balancer would have to be redefined, but that
 doesn't mean it is the wrong thing to do.

 I've always been on the side of the Load Balancing as a Service API using
 a root object called a load balancer.  This just really makes sense to me
 and others, but obviously it doesn't for everyone.  However, in our
 experience end users just understand the service better when the service
 takes in load balancer objects and returns load balancer objects.

 Also, since we have been tasked to define a new API, we felt that it was
 implied that some definitions were going to change, especially those that
 are subjective.  There are definitely many definitions of a load balancer.
 Is a load balancer an appliance (virtual or physical) that load balances
 many protocols and ports and is it also one that load balances a single
 protocol on a single port?  I would say that is definitely subjective.
 Obviously I, and others, feel that both of those are true.  I would like to
 hear arguments as to why one of them is not true, though.

 Either way, we could have named that object a sqonkey and given it a
 definition in that glossary.  Now we can all agree that while that word is
 just an amazing word, it's a terrible name to use in any context for this
 service.  It seems to me that an API can define and also redefine
 subjective terms.

 I'm glad you don't find this as a deal breaker and are okay with
 redefining the term.  I hope we all can come to agreement on an API and I
 hope it satisfies everyone's needs and ideas of a good API.

 Thanks,
 Brandon


 On 04/17/2014 07:03 PM, Stephen Balukoff wrote:

   Hi Brandon!

  Per the meeting this morning, I seem to recall you were looking to have
 me elaborate on why the term 'load balancer' as used in your API proposal
 is significantly different from the term 'load balancer' as used in the
 glossary at:  https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary

  As promised, here's my elaboration on that:

  The glossary above states:  An object that represent a logical load
 balancer that may 

[openstack-dev] [Glance] Ideas needed for v2 registry testing

2014-04-18 Thread Erno Kuvaja

Hi all,

I have been trying to enable functional testing for Glance API v2 using 
data_api = glance.db.registry.api without great success.


The current functionality of the v2 api+registry relies on the fact that 
keystone is used, and our current tests do not facilitate that 
expectation.


I do not like either option I have managed to come up with, so now it is 
time to call for help. Currently the only way I see we could run the 
registry tests is to convert our functional tests to use keystone instead 
of noauth, or to write a test suite that bypasses the API server and 
targets the registry directly. Neither of these is great: starting keystone 
would make the already long-running functional tests even longer and more 
of a resource hog, and on top of that we would need to pull in keystone 
just to run glance tests; on the other hand, bypassing the API server would 
not give us any guarantee that the behavior of glance is the same 
regardless of which data_api is used.


At this point any ideas/discussion on how we can get these tests running 
in both configurations would be more than welcome.
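One possible middle road, sketched below purely as an illustration: instead of booting a real keystone, the functional tests could wrap the API/registry pipeline in a tiny WSGI middleware that injects the identity headers keystone's auth_token middleware would normally set, so the keystone-dependent code paths still run. The header names follow the usual auth_token conventions, but the class and its defaults are hypothetical, not existing glance code.

```python
class FakeKeystoneHeaders(object):
    """WSGI middleware that fakes the headers keystonemiddleware would set.

    This lets keystone-dependent request paths run in functional tests
    without a live keystone service. Illustrative only.
    """

    def __init__(self, app, tenant="test-tenant", user="test-user",
                 roles="admin"):
        self.app = app
        self.tenant = tenant
        self.user = user
        self.roles = roles

    def __call__(self, environ, start_response):
        environ["HTTP_X_IDENTITY_STATUS"] = "Confirmed"
        environ["HTTP_X_TENANT_ID"] = self.tenant
        environ["HTTP_X_USER_ID"] = self.user
        environ["HTTP_X_ROLES"] = self.roles
        return self.app(environ, start_response)


def echo_app(environ, start_response):
    # Trivial app standing in for the glance API/registry pipeline.
    start_response("200 OK", [("Content-Type", "text/plain")])
    body = "%s/%s" % (environ["HTTP_X_TENANT_ID"], environ["HTTP_X_ROLES"])
    return [body.encode("utf-8")]


wrapped = FakeKeystoneHeaders(echo_app)
```

This avoids both drawbacks above: no keystone service to start, and requests still go through the API server.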


Thanks,
Erno

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-18 Thread Eugene Nikanorov

 3. Could you describe the most complicated use case that your single-call
 API supports? Again, please be very specific here.

 Same data can be derived from the link above.


 Ok, I'm actually not seeing any complicated examples, but I'm guessing
 that any attributes at the top of the page could be expanded on according
 to the syntax described.

 Hmmm...  one of the draw-backs I see with a one-call approach is you've
 got to have really good syntax checking for everything right from the
 start, or (if you plan to handle primitives one at a time) a really solid
 roll-back strategy if anything fails or has problems, cleaning up any
 primitives that might already have been created before the whole call
 completes.

 The alternative is to not do this with primitives... but then I don't see
 how that's possible either. (And certainly not easy to write tests for:
  The great thing about small primitives is their methods tend to be easier
 to unit test.)


These are good arguments! That's why I'd like to actually see the code
(even a simplified approach could work as a first step); I think it would
make a lot of things clearer.
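For what it's worth, the roll-back concern raised above can be made concrete with a small sketch: create the primitives one at a time, remember what was created, and undo in reverse order if anything fails. All names here are illustrative, not part of any proposed API.

```python
class PrimitiveStore(object):
    """Toy stand-in for the backend that persists LBaaS primitives."""

    def __init__(self):
        self.objects = []

    def create(self, kind, spec):
        if spec.get("invalid"):
            raise ValueError("bad %s spec" % kind)
        obj = (kind, spec)
        self.objects.append(obj)
        return obj

    def delete(self, obj):
        self.objects.remove(obj)


def single_call_create(store, lb_spec):
    """Create vip, pool and members from one request body, atomically."""
    created = []
    try:
        created.append(store.create("vip", lb_spec["vip"]))
        created.append(store.create("pool", lb_spec["pool"]))
        for member in lb_spec.get("members", []):
            created.append(store.create("member", member))
    except Exception:
        # Roll back whatever was already created, newest first, so a
        # half-finished single call leaves nothing behind.
        for obj in reversed(created):
            store.delete(obj)
        raise
    return created
```

The interesting part is the except branch: the one-call handler has to own the cleanup that, in a primitive-by-primitive API, would be the client's problem.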

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-18 Thread Oleg Bondarev
Hi all,

While investigating possible options for Nova-network to Neutron migration
I faced a couple of issues with libvirt.
One of the key requirements for the migration is that instances should stay
running and not need restarting. In order to meet this requirement we need
to either attach a new nic to the instance or update an existing one to
plug it into the Neutron network.

So what I've discovered is that attaching a new network device is only
applied on the instance after a reboot, although the *VIR_DOMAIN_AFFECT_LIVE*
flag is passed to the libvirt call *attachDeviceFlags()*:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1412
Is that expected? Are there any other options to apply a new nic without a
reboot?
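For reference, here is a sketch of the hot-plug path being described. The flag values mirror libvirt's VIR_DOMAIN_AFFECT_* constants; the bridge name, MAC address and domain name are placeholders. Only the XML builder is executed below; the libvirt calls are shown in comments since they need a running hypervisor.

```python
VIR_DOMAIN_AFFECT_LIVE = 1    # apply to the running domain
VIR_DOMAIN_AFFECT_CONFIG = 2  # persist in the stored domain config


def build_interface_xml(mac, bridge):
    """Build the <interface> device XML passed to attachDeviceFlags()."""
    return (
        "<interface type='bridge'>"
        "<mac address='%s'/>"
        "<source bridge='%s'/>"
        "<model type='virtio'/>"
        "</interface>" % (mac, bridge)
    )


# With a live connection this would look like:
#   import libvirt
#   conn = libvirt.open("qemu:///system")
#   dom = conn.lookupByName("instance-00000001")
#   dom.attachDeviceFlags(build_interface_xml("52:54:00:12:34:56", "br-int"),
#                         VIR_DOMAIN_AFFECT_LIVE | VIR_DOMAIN_AFFECT_CONFIG)
```

Whether the guest actually sees the device without a reboot also depends on the qemu/libvirt versions in use, which may be part of what's going wrong here.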

I also tried to update an existing nic of an instance by using the libvirt
*updateDeviceFlags()* call, but it fails with the following:
*'this function is not supported by the connection driver: cannot modify
network device configuration'*
The libvirt API spec (http://libvirt.org/hvsupport.html) shows 0.8.0 as the
minimal qemu version for the virDomainUpdateDeviceFlags call; kvm --version
on my setup shows
'*QEMU emulator version 1.0 (qemu-kvm-1.0)*'.
Could someone please point out what I am missing here?

Any help on the above is much appreciated!

Thanks,
Oleg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-18 Thread Eugene Nikanorov


  There's certainly something to be said for having a less-disruptive user
 experience. And after all, what we've been discussing is so radical a
 change that it's close to starting over from scratch in many ways.

 Yes, we assumed that starting from scratch would be the case at least as
 far as the API is concerned.


We're going to evolve, folks, not start everything over.
Any 'radical change' needs to prove it works for everyone. The only viable
option for that is to implement and see.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Cryptography audit by OSSG

2014-04-18 Thread Lisa Clark
Barbicaneers,

   Is anyone following the openstack-security list and/or part of the
OpenStack Security Group (OSSG)?  This sounds like another group and list
we should keep our eyes on.

   In the below thread on the security list, Nathan Kinder is conducting a
security audit of the various integrated OpenStack projects.  He's
answering questions such as what crypto libraries are being used in the
projects, algorithms used, sensitive data, and potential improvements that
can be made.  Check the links out in the below thread.

   Though we're not yet integrated, it might be beneficial to put together
our security audit page under Security/Icehouse/Barbican.

   Another thing to consider as you're reviewing the security audit pages
of Keystone and Heat (and others as they are added): Would Barbican help
to solve any of the security concerns/issues that these projects are
experiencing?

-Lisa


Message: 5
Date: Thu, 17 Apr 2014 16:27:30 -0700
From: Nathan Kinder nkin...@redhat.com
To: Bryan D. Payne bdpa...@acm.org, Clark, Robert Graham
   robert.cl...@hp.com
Cc: openstack-secur...@lists.openstack.org
   openstack-secur...@lists.openstack.org
Subject: Re: [Openstack-security] Cryptographic Export Controls and
   OpenStack
Message-ID: 53506362.3020...@redhat.com
Content-Type: text/plain; charset=windows-1252

On 04/16/2014 10:28 AM, Bryan D. Payne wrote:
 I'm not aware of a list of the specific changes, but this seems quite
 related to the work that Nathan has started playing with... discussed on
 his blog here:
 
 https://blog-nkinder.rhcloud.com/?p=51

This is definitely related to the security audit effort that I'm
driving.  It's hard to make recommendations on configurations and
deployment architectures from a security perspective when we don't even
have a clear picture of the current state of things are in the code from
a security standpoint.  This clear picture is what I'm trying to get to
right now (along with keeping this picture up to date so it doesn't get
stale).

Once we know things such as what crypto algorithms are used and how
sensitive data is being handled, we can see what is configurable and
make recommendations.  We'll surely find that not everything is
configurable and sensitive data isn't well protected in areas, which are
things that we can turn into blueprints and bugs and work on improving
in development.

It's still up in the air as to where this information should be
published once it's been compiled.  It might be on the wiki, or possibly
in the documentation (Security Guide seems like a likely candidate).
There was some discussion of this with the PTLs from the Project Meeting
from 2 weeks ago:


http://eavesdrop.openstack.org/meetings/project/2014/project.2014-04-08-21.03.html

I'm not so worried myself about where this should be published, as that
doesn't matter if we don't have accurate and comprehensive information
collected in the first place.  My current focus is on the collection and
maintenance of this info on a project-by-project basis.  Keystone and
Heat have started, which is great:

  https://wiki.openstack.org/wiki/Security/Icehouse/Keystone
  https://wiki.openstack.org/wiki/Security/Icehouse/Heat

If any other OSSG members are developers on any of the projects, it
would be great if you could help drive this effort within your project.

Thanks,
-NGK
 
 Cheers,
 -bryan
 
 
 
 On Tue, Apr 15, 2014 at 1:38 AM, Clark, Robert Graham
  robert.cl...@hp.com wrote:
 
 Does anyone have a documented run-down of changes that must be made
 to OpenStack configurations to allow them to comply with EAR
 requirements?
 http://www.bis.doc.gov/index.php/policy-guidance/encryption
 
 It seems like something we should consider putting into the security
  guide. I realise that most of the time it's just "don't use your own
  libraries, call to others, make algorithms configurable" etc., but
  it's a question I'm seeing more and more; the security guide's
  compliance section looks like a great place to have something about
 EAR.
 
 -Rob
 
 ___
 Openstack-security mailing list
 openstack-secur...@lists.openstack.org
 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-security
 
 
 
 
 ___
 Openstack-security mailing list
 openstack-secur...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-security


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Migration to packages, step 1/2

2014-04-18 Thread Dmitry Pyzhov
Guys,

I've removed the ability to use egg packages on the master node:
https://review.openstack.org/#/c/88012/

The next step is to remove the gems mirror: https://review.openstack.org/#/c/88278/
It will be merged when the osci team fixes the rubygem-yajl-ruby package,
hopefully on Monday.

From that moment all our code will be installed everywhere from packages.
And there will be an option to build packages during the ISO build or to
use pre-built packages from our mirrors.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][db] oslo.db repository review request

2014-04-18 Thread Victor Sergeyev
Hello all,

During the Icehouse release cycle our team has been working on splitting the
openstack common db code out into a separate library, per blueprint [1]. At
the moment the issues mentioned in this bp and in [2] are solved, and we are
moving forward to graduation of oslo.db. You can find the new oslo.db code
at [3].

So, before moving forward, I want to ask the Oslo team to review the
oslo.db repository [3], and especially the commit that allows the unit
tests to pass [4].

Thanks,
Victor

[1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib
[2] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
[3] https://github.com/malor/oslo.db
[4]
https://github.com/malor/oslo.db/commit/276f7570d7af4a7a62d0e1ffb4edf904cfbf0600
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Design Summit schedule for other projects and cross-project workshops tracks

2014-04-18 Thread Thierry Carrez
Hello everyone,

In 25 days we'll gather in Atlanta for the Juno Design Summit (May
13-16). The session schedule for this event is still under construction,
but will gradually be posted as we get closer to the event. Note that
you have until the end of this week to suggest session content. See
details in this previous post:

http://lists.openstack.org/pipermail/openstack-dev/2014-March/029319.html

To facilitate travel approval and future scheduling, the schedule for
the "Other projects" and "Cross-project workshops" tracks on the Tuesday
is already posted at:

http://junodesignsummit.sched.org/

This schedule is still subject to minimal changes as we refine the
content. You can find more information on the Design Summit at:

https://wiki.openstack.org/wiki/Summit

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] Agent manager customization

2014-04-18 Thread zz elle
Hi everyone,


I would like to propose a change to simplify/allow l3 agent manager
customization, and I would like the community's feedback.


Just to clarify my context: I deploy OpenStack for small, specific business
use cases, and I often customize it because of specific use-case needs.
In particular, most of the time I must customize the L3 agent behavior in
order to:
- add custom iptables rules in the router (on router/port post-deployment),
- remove custom iptables rules in the router (on port pre-undeployment),
- update router config through sysctl (on router post-deployment),
- start an application in the router (on router/port post-deployment),
- stop an application in the router (on router/port pre-undeployment),
- etc ...
Currently (Havana, Icehouse), I create my own L3 agent manager which
extends the neutron one, and I replace the neutron-l3-agent binary, since
it's not possible to change/hook the l3 agent manager implementation
through configuration.


What would be the correct way to allow l3 agent manager customization?
 - Allow specifying the l3 agent manager implementation through configuration
  == like the option router_scheduler_driver, which allows changing the
router scheduler implementation
 - Allow hooking the l3 agent manager implementation
  == like the generic hook system in nova (nova.hooks, used in
nova.compute.api)
  == or like the neutron ML2 mechanism-driver hook system
(neutron.plugins.ml2.driver_api:MechanismDriver)
 - Other ideas?
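As a rough illustration of the nova.hooks-style option, a registry mapping event names to callables could look like the sketch below. The event names and functions are hypothetical, not existing neutron interfaces.

```python
_hooks = {}


def add_hook(event, fn):
    """Register fn to run when the named lifecycle event fires."""
    _hooks.setdefault(event, []).append(fn)


def run_hooks(event, *args, **kwargs):
    """Invoke every hook registered for the event, in registration order."""
    for fn in _hooks.get(event, []):
        fn(*args, **kwargs)


def process_router(router):
    """Stand-in for the agent manager's router processing."""
    # ... the agent's existing plumbing would run here ...
    run_hooks("router.post_deploy", router)


# A deployer-provided extension, loaded e.g. from a config option:
applied = []


def add_custom_iptables_rules(router):
    applied.append(router["id"])


add_hook("router.post_deploy", add_custom_iptables_rules)
process_router({"id": "r1"})
```

The appeal of this style is that the agent manager itself stays stock; only the hook functions are deployment-specific.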


It seems the same question could be asked about the dhcp agent?


Thanks,

Cedric (zzelle@irc)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Stephen Balukoff
Howdy, folks!

Could someone explain to me the SSL usage scenario where it makes sense to
re-encrypt traffic destined for members of a back-end pool?  SSL
termination on the load balancer makes sense to me, but I'm having trouble
understanding why one would be concerned about then re-encrypting the
traffic headed toward a back-end app server. (Why not just use straight TCP
load balancing in this case, and save the CPU cycles on the load balancer?)

We terminate a lot of SSL connections on our load balancers, but have yet
to have a customer use this kind of functionality.  (We've had a few ask
about it, usually because they didn't understand what a load balancer is
supposed to do-- and with a bit of explanation they went either with SSL
termination on the load balancer + clear text on the back-end, or just
straight TCP load balancing.)

Thanks,
Stephen


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Voting for the TC Election is now open

2014-04-18 Thread Anita Kuno
Voting for the TC Election is now open and will remain open until after
1300 utc April 24 2014.

We are electing 7 positions from a pool of 17 candidates[0].

Follow the instructions that are available when you vote. If you are
confused and need more instruction, close the webpage without submitting
your vote and then email myself and Tristan[1]. Your ballot will still
be enabled to vote until the election is closed, as long as you don't
submit your ballot before you close your webpage.

You are eligible to vote if you are a Foundation individual member[2] who
has also committed to one of the official programs' projects[3] over the
Havana-Icehouse timeframe (April 4, 2013 06:00 UTC to April 4, 2014
05:59 UTC), or if you are one of the extra ATCs.[4]

What to do if you don't see the email and have a commit in at least one
of the official programs projects[3]:
 * check the trash of your gerrit Preferred Email address[5], in
case it went into trash or spam
 * wait a bit and check again, in case your email server is a bit slow
 * find the sha of at least one commit from the program project
repos[3] and email me and Tristan[1]. If we can confirm that you are
entitled to vote, we will add you to the voters list and you will be
emailed a ballot.

Our democratic process is important to the health of OpenStack, please
exercise your right to vote.

Candidate statements/platforms can be found linked to Candidate names on
this page:
https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates

Happy voting,
Anita. (anteaya)

[0] https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates
[1] Anita: anteaya at anteaya dot info Tristan: tristan dot cacqueray at
enovance dot com
[2] http://www.openstack.org/community/members/
[3]
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml?id=april-2014-elections
[4]
http://git.openstack.org/cgit/openstack/governance/tree/reference/extra-atcs
[5] Sign into review.openstack.org: Go to Settings  Contact
Information. Look at the email listed as your Preferred Email. That is
where the ballot has been sent.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Ideas needed for v2 registry testing

2014-04-18 Thread Mark Washenberger
Hi Erno,

Just looking for a little more information here. What are the particular
areas around keystone integration in the v2 api+registry stack that you
want to test? Is the v2 api + v2 registry stack using keystone differently
than how v1 api + v1 registry stack uses it?

Thanks


On Fri, Apr 18, 2014 at 6:35 AM, Erno Kuvaja kuv...@hp.com wrote:

 Hi all,

 I have been trying to enable functional testing for Glance API v2 using
 data_api = glance.db.registry.api without great success.

 The current functionality of the v2 api+registry relies on the fact that
 keystone is used, and our current tests do not facilitate that expectation.

 I do not like either option I have managed to come up with, so now it is
 time to call for help. Currently the only way I see we could run the
 registry tests is to convert our functional tests to use keystone instead
 of noauth, or to write a test suite that bypasses the API server and
 targets the registry directly. Neither of these is great: starting keystone
 would make the already long-running functional tests even longer and more
 of a resource hog, and on top of that we would need to pull in keystone
 just to run glance tests; on the other hand, bypassing the API server would
 not give us any guarantee that the behavior of glance is the same
 regardless of which data_api is used.

 At this point any ideas/discussion on how we can get these tests running
 in both configurations would be more than welcome.

 Thanks,
 Erno

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Basic zuul startup question: Private key file is encrypted

2014-04-18 Thread Dane Leblanc (leblancd)
Jeremy:

Thanks, this did the trick. I have zuul connecting with gerrit, and it's 
detecting neutron reviews. I still don't have all the pieces working, but I 
think I'm closing in.

Thanks,
Dane

-Original Message-
From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
Sent: Tuesday, April 15, 2014 7:53 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [infra] Basic zuul startup question: Private key 
file is encrypted

On 2014-04-15 23:24:31 + (+), Jeremy Stanley wrote:
[...]
 You'll need to strip the encryption from it with something like...
 
 ssh-keygen -p -f ~/.ssh/id_dsa
[...]

Or more likely, since the patents on RSA expired about 14 years ago...

ssh-keygen -p -f ~/.ssh/id_rsa

--
Jeremy Stanley


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Voting for the TC Election is now open

2014-04-18 Thread Anita Kuno
On 04/18/2014 11:22 AM, Anita Kuno wrote:
 Voting for the TC Election is now open and will remain open until after
 1300 utc April 24 2014.
 
 We are electing 7 positions from a pool of 17 candidates[0].
 
 Follow the instructions that are available when you vote. If you are
 confused and need more instruction, close the webpage without submitting
 your vote and then email myself and Tristan[1]. Your ballot will still
 be enabled to vote until the election is closed, as long as you don't
 submit your ballot before you close your webpage.
 
 You are eligible to vote if you are a Foundation individual member[2] who
 has also committed to one of the official programs' projects[3] over the
 Havana-Icehouse timeframe (April 4, 2013 06:00 UTC to April 4, 2014
 05:59 UTC), or if you are one of the extra ATCs.[4]
 
 What to do if you don't see the email and have a commit in at least one
 of the official programs projects[3]:
  * check the trash of your gerrit Preferred Email address[5], in
 case it went into trash or spam
  * wait a bit and check again, in case your email server is a bit slow
  * find the sha of at least one commit from the program project
 repos[3] and email me and Tristan[1]. If we can confirm that you are
 entitled to vote, we will add you to the voters list and you will be
 emailed a ballot.
 
 Our democratic process is important to the health of OpenStack, please
 exercise your right to vote.
 
 Candidate statements/platforms can be found linked to Candidate names on
 this page:
 https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates
 
 Happy voting,
 Anita. (anteaya)
 
 [0] https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates
 [1] Anita: anteaya at anteaya dot info Tristan: tristan dot cacqueray at
 enovance dot com
 [2] http://www.openstack.org/community/members/
 [3]
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml?id=april-2014-elections
 [4]
 http://git.openstack.org/cgit/openstack/governance/tree/reference/extra-atcs
 [5] Sign into review.openstack.org: Go to Settings  Contact
 Information. Look at the email listed as your Preferred Email. That is
 where the ballot has been sent.
 
I have to extend an apology to Flavio Percoco, whose name is spelled
incorrectly on both the wikipage and on the ballot.

I can't change the ballot now and will leave the wikipage with the
spelling mistake so it is consistent to voters, but I do want folks to
know I am aware of the mistake now, and I do apologize to Flavio for this.

I'm sorry,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Basic zuul startup question: Private key file is encrypted

2014-04-18 Thread Brian Bowen (brbowen)
Nice job

-Original Message-
From: Dane Leblanc (leblancd) 
Sent: Friday, April 18, 2014 11:25 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [infra] Basic zuul startup question: Private key 
file is encrypted

Jeremy:

Thanks, this did the trick. I have zuul connecting with gerrit, and it's 
detecting neutron reviews. I still don't have all the pieces working, but I 
think I'm closing in.

Thanks,
Dane

-Original Message-
From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
Sent: Tuesday, April 15, 2014 7:53 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [infra] Basic zuul startup question: Private key 
file is encrypted

On 2014-04-15 23:24:31 + (+), Jeremy Stanley wrote:
[...]
 You'll need to strip the encryption from it with something like...
 
 ssh-keygen -p -f ~/.ssh/id_dsa
[...]

Or more likely, since the patents on RSA expired about 14 years ago...

ssh-keygen -p -f ~/.ssh/id_rsa

--
Jeremy Stanley


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-18 Thread Zhi Yan Liu
On Fri, Apr 18, 2014 at 10:52 PM, lihuiba magazine.lihu...@163.com wrote:
btw, I see, but at the moment we had fixed it in the network interface
device driver instead of with a workaround to limit/slow down network
traffic.
 Which kind of driver, in host kernel, in guest kernel or in openstack?


In the compute host kernel; it's not related to OpenStack.



There is some work done in Glance
(https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver ),
but more work still needs to be done, I'm sure. There are some things in
drafting, and some dependencies need to be resolved as well.
 I read the blueprints carefully, but still have some doubts.
 Will it store an image as a single volume in cinder? Or store all image
 files in one shared volume (with a file system on the volume, of course)?

Yes, a single volume in cinder.

 Openstack already has support to convert an image to a volume, and to boot
 from a volume. Are these features similar to this blueprint?

Not similar, but they could be leveraged for this case.



I prefer to discuss these details on IRC. (And I read all the VMThunder
code early today (my timezone); I have some questions as well.)

zhiyan


 Huiba Li

 National Key Laboratory for Parallel and Distributed
 Processing, College of Computer Science, National University of Defense
 Technology, Changsha, Hunan Province, P.R. China
 410073


 At 2014-04-18 12:14:25,Zhi Yan Liu lzy@gmail.com wrote:
On Fri, Apr 18, 2014 at 10:53 AM, lihuiba magazine.lihu...@163.com wrote:
It's not 100% true, in my case at least. We fixed this problem in the
network interface driver; it causes kernel panics and readonly issues
under heavy networking workload actually.

 Network traffic control could help. The point is to ensure no instance
 is starved to death. Traffic control can be done with tc.


btw, I see, but at the moment we had fixed it in the network interface
device driver instead of with a workaround to limit/slow down network
traffic.
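For reference, the tc-based approach mentioned above might look like the sketch below: a simple token-bucket (tbf) egress limit per interface. The function only builds the command lines (running them needs root and real devices); the device name and rates are placeholders.

```python
def tc_rate_limit_cmds(dev, rate="100mbit", burst="10k", latency="50ms"):
    """Build tc commands for a TBF egress rate limit on the given device."""
    return [
        # Clear any existing root qdisc first (errors here are harmless).
        ["tc", "qdisc", "del", "dev", dev, "root"],
        # Attach a token-bucket filter capping egress at `rate`.
        ["tc", "qdisc", "add", "dev", dev, "root", "tbf",
         "rate", rate, "burst", burst, "latency", latency],
    ]


# e.g.:
#   import subprocess
#   for cmd in tc_rate_limit_cmds("eth0", rate="50mbit"):
#       subprocess.call(cmd)
```

A per-instance limit like this is one way to ensure no instance starves the others, though it doesn't address driver-level bugs.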



btw, we are doing some work to make Glance integrate Cinder as a
unified block storage backend.
 That sounds interesting. Are there any more materials?


There is some work done in Glance
(https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver ),
but more work still needs to be done, I'm sure. There are some things in
drafting, and some dependencies need to be resolved as well.



 At 2014-04-18 06:05:23,Zhi Yan Liu lzy@gmail.com wrote:
Replied as inline comments.

On Thu, Apr 17, 2014 at 9:33 PM, lihuiba magazine.lihu...@163.com
 wrote:
IMO we'd better use a backend-storage-optimized approach to access
remote images from the compute node instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short on stability under heavy I/O
workload in production environments; it can cause either the VM filesystem
to be marked as readonly or a VM kernel panic.

 Yes, in this situation, the problem lies in the backend storage, so no
 other protocol will perform better. However, P2P transferring will greatly
 reduce workload on the backend storage, so as to increase responsiveness.


It's not 100% true, in my case at least. We fixed this problem in the
network interface driver; it causes kernel panics and readonly issues
under heavy networking workload actually.



As I said, Nova currently already has an image caching mechanism, so in
this case P2P is just an approach that could be used for downloading or
preheating images for the cache.

 Nova's image caching is file level, while VMThunder's is block-level. And
 VMThunder is for working in conjunction with Cinder, not Glance. VMThunder
 currently uses Facebook's flashcache to implement caching, and dm-cache and
 bcache are also options in the future.


Hmm, since you mention bcache, dm-cache and flashcache, I'm just wondering
whether they could be leveraged at the operations/best-practice level.

btw, we are doing some work to make Glance integrate Cinder as a
unified block storage backend.


I think P2P transferring/pre-caching sounds like a good way to go, as I
mentioned as well, but in this area I'd actually like to see something
like zero-copy + CoR. On one hand we can leverage the capability of
downloading image bits on demand via the zero-copy approach; on the
other hand we can avoid reading data from the remote image every time
via CoR.

 Yes, on-demand transferring is what you mean by zero-copy, and caching
 is something close to CoR. In fact, we are working on a kernel module
 called foolcache that realizes a true CoR. See
 https://github.com/lihuiba/dm-foolcache.


Yup. And it's really interesting to me, will take a look, thanks for
 sharing.




 National Key Laboratory for Parallel and Distributed
 Processing, College of Computer Science, National University of Defense
 Technology, Changsha, Hunan Province, P.R. China
 410073


 At 2014-04-17 17:11:48,Zhi Yan Liu lzy@gmail.com wrote:
On Thu, Apr 17, 2014 at 4:41 PM, lihuiba magazine.lihu...@163.com
 wrote:
IMHO, zero-copy approach is better
 VMThunder's on-demand transferring is the same thing as your
 zero-copy
 

Re: [openstack-dev] [Devstack] add support for ceph

2014-04-18 Thread Scott Devoid
On Fri, Apr 18, 2014 at 5:32 AM, Sean Dague s...@dague.net wrote:

 On 04/18/2014 12:03 AM, Scott Devoid wrote:
  So I have had a chance to look over the whole review history again. I
  agree with Sean Dague and Dean Troyer's concerns that the current patch
  affects code outside of lib/storage and extras.d. We should make the
  Devstack extension system more flexible to allow for more extensions.
  Although I am not sure if this responsibility falls completely in the
  lap of those wishing to integrate Ceph.

 Where should it fall? This has been pretty common with trying to bring
 in anything major, the general plumbing needs to come from that same
 effort. It's also a pretty sane litmus test on whether this is a drive-by
 contribution that will get no support in the future (and thus just
 expect Dean and I to go fix things), or something which will have
 someone actively contributing to keep things working in the future.


The issue is that it is very easy to suggest new features and refactoring
when you are very familiar with the codebase. To a newcomer, though, you
are basically asking me to do something that is impossible, so the logical
interpretation is you're telling me to go away.



  What is more concerning though is the argument that /even when the Ceph
  patch meets these standards/ /it will still have to be pulled in from
  some external source. /Devstack is a central part of OpenStack's test
  and development system. Core projects depend upon it to develop and test
  drivers. As an operator, I use it to understand how changes might affect
  my production system. Documentation. Bug Triage. Outreach. Each of these
  tasks and efforts benefits from having a curated and maintained set of
  extras in the mainline codebase. Particularly extras that are already
  represented by mainline drivers in other projects.

 My concern is that there is a lot of code in devstack. And every time I
 play with a different set of options we don't enable in the gate, things
 get brittle. For instance, Fedora support gets broken all the time,
 because it's not tested in the gate.

 Something as big as using ceph as the storage back end across a range of
 services is a big change. And while there have been patches, I've yet to see
 anyone volunteer 3rd party testing here to help us keep it working. Or
 the long term commitment of being part of the devstack community
 reviewing patches and fixing other bugs, so there is some confidence
 that if people try to use this it works.


100% agree. I was under the impression that integration of the ceph patches
into devstack was a precursor to a 3rd party gate on ceph functionality. We
have some VM resources to contribute to 3rd party tests, but I would need
assistance in setting that up.


 Some of the late reverts in nova for icehouse hit this same kind of
 issue: once certain rbd paths were lit up in the code base, within
 24hrs we had user reports coming back of things exploding. That makes me
 feel like there are a lot of daemons lurking here, and if this is going
 to be a devstack mode, and that people are going to use a lot, then it
 needs to be something that's tested.

 If the user is pulling the devstack plugin from a 3rd party location,
 then it's clear where the support needs to come from. If it's coming
 from devstack, people are going to be private message pinging me on IRC
 when it doesn't work (which happens all the time).


I see your motivations here. There are systems to help us with this though:
redirect them to ask.openstack.org or bugs.launchpad.net and have them ping
you with the link. Delegate replies to others. I try to answer any
questions that pop up on #openstack but I need to look at the
ask.openstack.org queue more often. Perhaps we need to put more focus on
organizing community support and offloading that task from PTLs and core
devs.


 That being said, there are 2 devstack sessions available at design
 summit. So proposing something around addressing the ceph situation
 might be a good one. It's a big and interesting problem.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Phillip Toohill
Hello Stephen,

One use case we have, which was actually a highly requested feature for our 
service, was to ensure that traffic within the internal cloud network was not 
passed in the clear. I believe this mainly stems from the customers' security 
requirements. I understand the reasoning as allowing a centralized place to 
correct/prevent potential SSL attacks while still ensuring data is secure all 
the way to the backend. I could probably dig up more details if this isn't 
clear enough, but that is how I understand this particular feature.


Thanks,
Phil

From: Stephen Balukoff sbaluk...@bluebox.netmailto:sbaluk...@bluebox.net
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Friday, April 18, 2014 10:21 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

Howdy, folks!

Could someone explain to me the SSL usage scenario where it makes sense to 
re-encrypt traffic destined for members of a back-end pool?  SSL 
termination on the load balancer makes sense to me, but I'm having trouble 
understanding why one would be concerned about then re-encrypting the traffic 
headed toward a back-end app server. (Why not just use straight TCP load 
balancing in this case, and save the CPU cycles on the load balancer?)

We terminate a lot of SSL connections on our load balancers, but have yet to 
have a customer use this kind of functionality.  (We've had a few ask about it, 
usually because they didn't understand what a load balancer is supposed to do-- 
and with a bit of explanation they went either with SSL termination on the load 
balancer + clear text on the back-end, or just straight TCP load balancing.)

Thanks,
Stephen


--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Cryptography audit by OSSG

2014-04-18 Thread Bryan D. Payne

Is anyone following the openstack-security list and/or part of the
 OpenStack Security Group (OSSG)?  This sounds like another group and list
 we should keep our eyes on.


I'm one of the OSSG leads.  We'd certainly welcome your involvement in
OSSG.  In fact, there has been much interest in OSSG about the Barbican
project.  And I believe that many people from the group are contributing to
Barbican.


In the below thread on the security list, Nathan Kinder is conducting a
 security audit of the various integrated OpenStack projects.  He's
 answering questions such as what crypto libraries are being used in the
 projects, algorithms used, sensitive data, and potential improvements that
 can be made.  Check the links out in the below thread.

Though we're not yet integrated, it might be beneficial to put together
 our security audit page under Security/Icehouse/Barbican.


This would be very helpful.  If there's anything I can do to help
facilitate this, just let me know.

Cheers,
-bryan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] 4/18 state of the gate

2014-04-18 Thread Matt Riedemann
As many of you are probably aware the gate isn't in great shape right 
now. There are a few patches [1][2][3] lined up to fix this, but 
rechecking is probably futile until those are merged.


We could also use another nova core to +A this [4] so we can track down 
some racy ec2 test failures.


And this heat/tempest patch needs some love [5].

FWIW, our overall bug classification rate [6] is relatively good.

[1] https://review.openstack.org/#/c/88488/
[2] https://review.openstack.org/#/c/88494/
[3] https://review.openstack.org/#/c/88324/
[4] https://review.openstack.org/#/c/88541/
[5] https://review.openstack.org/#/c/87993/
[6] http://status.openstack.org/elastic-recheck/data/uncategorized.html

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] add support for ceph

2014-04-18 Thread Dean Troyer
On Fri, Apr 18, 2014 at 10:51 AM, Scott Devoid dev...@anl.gov wrote:

 The issue is that it is very easy to suggest new features and refactoring
 when you are very familiar with the codebase. To a newcomer, though, you
 are basically asking me to do something that is impossible, so the logical
 interpretation is you're telling me to go away.


This patch was dropped on us twice without any conversation or warning or
discussion about what might be the best approach. Suggestions _were_ made
and subsequently ignored.  If there was a lack of understanding, asking
questions is a good way to get past that.  None were asked.  I do not view
that as saying 'go away'.

The plugin mechanism is designed to facilitate these sorts of additions.
 They can be completely drop-in: at minimum one file in extras.d and config
in local.conf.  The question is about the additional modifications required
to the base DevStack scripts.  We can address that if anyone can articulate what
is not possible with the existing setup.  At this point I do not know what
those needs are, nobody has contacted me directly to talk about any of this
outside drive-by Gerrit review comments.  That is not the place for a
design discussion.
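As a sketch of what "drop-in" means here, an extras.d hook is a single sourced file that dispatches on the mode/phase arguments DevStack passes in. The file and service names below are hypothetical, and the logic is wrapped in a function only to keep the example self-contained; real hooks usually test "$1"/"$2" inline:

```shell
# extras.d/80-ceph.sh -- hypothetical drop-in hook, enabled via local.conf.
# DevStack sources every extras.d file at each phase, roughly as:
#   source 80-ceph.sh stack {pre-install|install|post-config|extra}
#   source 80-ceph.sh unstack
#   source 80-ceph.sh clean
ceph_hook() {
    local mode=$1 phase=$2
    if [[ "$mode" == "stack" && "$phase" == "pre-install" ]]; then
        echo "ceph: adding package repos"
    elif [[ "$mode" == "stack" && "$phase" == "install" ]]; then
        echo "ceph: installing ceph packages"
    elif [[ "$mode" == "stack" && "$phase" == "post-config" ]]; then
        echo "ceph: pointing cinder/glance at ceph"
    elif [[ "$mode" == "unstack" ]]; then
        echo "ceph: stopping ceph daemons"
    fi
}

ceph_hook "$1" "$2"
```

Everything the hook needs beyond this lives in the plugin file itself plus settings in local.conf, which is the point of the mechanism.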

I see your motivations here. There are systems to help us with this though:
 redirect them to ask.openstack.org or bugs.launchpad.net and have them
 ping you with the link. Delegate replies to others. I try to answer any
 questions that pop up on #openstack but I need to look at the
 ask.openstack.org queue more often. Perhaps we need to put more focus on
 organizing community support and offloading that task from PTLs and core
 devs.


DevStack is an _opinionated_ OpenStack installer.  It can not and will not
be all things to all people.  The first priority is to address services
required by OpenStack projects (database, queue, web server, etc) and even
then we only use what is provided in the underlying distributions.  (BTW,
does zmq even still work?  I don't think it is tested.)

Layered products that require 3rd party repos have a higher bar to get over
to be included in the DevStack repo.  If an OpenStack project changes to
require such a product, and that change gets through the TC (see MongoDB
discussions for an example), then we'll have to re-evaluate that position.

All this said, I really do want to see Ceph support for Cinder, Glance,
Swift, etc in DevStack as I think it is cool and useful.  But it is not
required to be in the DevStack repo to be useful.

Chmouel has proposed a session on this subject and it is likely to be
accepted as there are no other submissions for the last slot.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-18 Thread Kyle Mestery
On Fri, Apr 18, 2014 at 8:52 AM, Oleg Bondarev obonda...@mirantis.com wrote:
 Hi all,

 While investigating possible options for Nova-network to Neutron migration
 I faced a couple of issues with libvirt.
 One of the key requirements for the migration is that instances should stay
 running and not need restarting. In order to meet this requirement we need
 to either attach new nic to the instance or update existing one to plug it
 to the Neutron network.

Thanks for looking into this Oleg! I just wanted to mention that if
we're trying to plug a new NIC into the VM, this will likely require
modifications in the guest. The new NIC will likely have a new PCI ID,
MAC, etc., and thus the guest would have to switch to this. Therefore,
I think it may be better to try and move the existing NIC from a nova
network onto a neutron network.

 So what I've discovered is that a newly attached network device only shows
 up in the instance after a reboot, although the VIR_DOMAIN_AFFECT_LIVE flag
 is passed to the libvirt call attachDeviceFlags():
 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1412
 Is that expected? Are there any other options to apply a new NIC without
 a reboot?

 I also tried to update an existing NIC of an instance using the libvirt
 updateDeviceFlags() call,
 but it fails with the following:
 'this function is not supported by the connection driver: cannot modify
 network device configuration'
 The libvirt API support matrix (http://libvirt.org/hvsupport.html) shows
 0.8.0 as the minimal QEMU version for the virDomainUpdateDeviceFlags call;
 kvm --version on my setup shows
 'QEMU emulator version 1.0 (qemu-kvm-1.0)'
 Could someone please point what am I missing here?

What does libvirtd -V show for the libvirt version? On my Fedora 20
setup, I see the following:

[kmestery@fedora-mac neutron]$ libvirtd -V
libvirtd (libvirt) 1.1.3.4
[kmestery@fedora-mac neutron]$
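For reference, the live attach being attempted boils down to a python-libvirt call along these lines. This is a minimal sketch: the interface XML, bridge name and MAC are illustrative, and the constants mirror libvirt's VIR_DOMAIN_AFFECT_* values:

```python
# Sketch of the hot-plug path under discussion. With a real connection this
# would use libvirt.open(...) and libvirt.VIR_DOMAIN_AFFECT_*; the numeric
# values below are the same as libvirt's.
VIR_DOMAIN_AFFECT_LIVE = 1    # apply to the running instance
VIR_DOMAIN_AFFECT_CONFIG = 2  # also persist in the domain definition

def build_interface_xml(bridge, mac):
    """Device XML handed to attachDeviceFlags()/updateDeviceFlags()."""
    return ("<interface type='bridge'>"
            "<mac address='%s'/>"
            "<source bridge='%s'/>"
            "<model type='virtio'/>"
            "</interface>" % (mac, bridge))

def attach_nic(domain, bridge, mac):
    """Hot-plug a NIC; with AFFECT_LIVE it should appear without a reboot."""
    xml = build_interface_xml(bridge, mac)
    # domain would be e.g. conn.lookupByName('instance-00000001')
    domain.attachDeviceFlags(
        xml, VIR_DOMAIN_AFFECT_LIVE | VIR_DOMAIN_AFFECT_CONFIG)
```

If a device attached this way still only shows up after a reboot even with AFFECT_LIVE set, that points at the qemu/libvirt versions on the host rather than at the call itself.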

Thanks,
Kyle

 Any help on the above is much appreciated!

 Thanks,
 Oleg


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Eichberger, German
Hi Stephen,

The use case is that the load balancer needs to look at the HTTP requests, be it 
to add an X-Forwarded-For field or to change the timeout – but the network between the 
load balancer and the nodes is not completely private, and the sensitive 
information needs to be transmitted encrypted again. This is admittedly an edge 
case, but we had to implement a similar scheme for HP Cloud’s swift storage.
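For illustration, in haproxy terms that use case looks roughly like the following (names, addresses and certificate paths are hypothetical); the verify/ca-file part is what drags certificate management into the picture:

```
backend app_servers
    mode http
    option forwardfor          # LB must see HTTP to add X-Forwarded-For
    # Re-encrypt toward each member and verify its certificate:
    server web1 10.0.0.11:443 ssl verify required ca-file /etc/lb/backend-ca.pem
    server web2 10.0.0.12:443 ssl verify required ca-file /etc/lb/backend-ca.pem
```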

German

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, April 18, 2014 8:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

Howdy, folks!

Could someone explain to me the SSL usage scenario where it makes sense to 
re-encrypt traffic destined for members of a back-end pool?  SSL 
termination on the load balancer makes sense to me, but I'm having trouble 
understanding why one would be concerned about then re-encrypting the traffic 
headed toward a back-end app server. (Why not just use straight TCP load 
balancing in this case, and save the CPU cycles on the load balancer?)

We terminate a lot of SSL connections on our load balancers, but have yet to 
have a customer use this kind of functionality.  (We've had a few ask about it, 
usually because they didn't understand what a load balancer is supposed to do-- 
and with a bit of explanation they went either with SSL termination on the load 
balancer + clear text on the back-end, or just straight TCP load balancing.)

Thanks,
Stephen


--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Stephen Balukoff
Dang.  I was hoping this wasn't the case.  (I personally think it's a
little silly not to trust your service provider to secure a network when
they have root access to all the machines powering your cloud... but I
digress.)

Part of the reason I was hoping this wasn't the case, isn't just because it
consumes a lot more CPU on the load balancers, but because now we
potentially have to manage client certificates and CA certificates (for
authenticating from the proxy to back-end app servers). And we also have to
decide whether we allow the proxy to use a different client cert / CA per
pool, or per member.

Yes, I realize one could potentially use no client cert or CA (i.e.
encryption but no auth)...  but that actually provides almost no extra
security over the unencrypted case:  If you can sniff the traffic between
proxy and back-end server, it's not much more of a stretch to assume you
can figure out how to be a man-in-the-middle.

Do any of you have a use case where some back-end members require SSL
authentication from the proxy and some don't? (Again, deciding whether
client cert / CA usage should attach to a pool or to a member.)

It's a bit of a rabbit hole, eh.

Stephen



On Fri, Apr 18, 2014 at 10:21 AM, Eichberger, German 
german.eichber...@hp.com wrote:

  Hi Stephen,



  The use case is that the load balancer needs to look at the HTTP requests,
  be it to add an X-Forwarded-For field or to change the timeout – but the
  network between the load balancer and the nodes is not completely private,
  and the sensitive information needs to be transmitted encrypted again. This
  is admittedly an edge case, but we had to implement a similar scheme for HP
  Cloud’s swift storage.



 German



 *From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net]
 *Sent:* Friday, April 18, 2014 8:22 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario
 question



 Howdy, folks!



 Could someone explain to me the SSL usage scenario where it makes sense to
 re-encrypt traffic destined for members of a back-end pool?  SSL
 termination on the load balancer makes sense to me, but I'm having trouble
 understanding why one would be concerned about then re-encrypting the
 traffic headed toward a back-end app server. (Why not just use straight TCP
 load balancing in this case, and save the CPU cycles on the load balancer?)



 We terminate a lot of SSL connections on our load balancers, but have yet
 to have a customer use this kind of functionality.  (We've had a few ask
 about it, usually because they didn't understand what a load balancer is
 supposed to do-- and with a bit of explanation they went either with SSL
 termination on the load balancer + clear text on the back-end, or just
 straight TCP load balancing.)



 Thanks,

 Stephen




 --
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Vijay Venkatachalam

No reasoning is mentioned in the AWS docs, but they do allow re-encryption.

http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-backend-auth.html

For reasons I don’t understand, the workflow allows configuring backend-server 
certificates to be trusted, but it doesn’t accept client certificates or CA 
certificates.

Thanks,
Vijay V.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, April 18, 2014 11:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario 
question

Dang.  I was hoping this wasn't the case.  (I personally think it's a little 
silly not to trust your service provider to secure a network when they have 
root access to all the machines powering your cloud... but I digress.)

Part of the reason I was hoping this wasn't the case, isn't just because it 
consumes a lot more CPU on the load balancers, but because now we potentially 
have to manage client certificates and CA certificates (for authenticating from 
the proxy to back-end app servers). And we also have to decide whether we allow 
the proxy to use a different client cert / CA per pool, or per member.

Yes, I realize one could potentially use no client cert or CA (ie. encryption 
but no auth)...  but that actually provides almost no extra security over the 
unencrypted case:  If you can sniff the traffic between proxy and back-end 
server, it's not much more of a stretch to assume you can figure out how to be 
a man-in-the-middle.

Do any of you have a use case where some back-end members require SSL 
authentication from the proxy and some don't? (Again, deciding whether client 
cert / CA usage should attach to a pool or to a member.)

It's a bit of a rabbit hole, eh.

Stephen


On Fri, Apr 18, 2014 at 10:21 AM, Eichberger, German 
german.eichber...@hp.commailto:german.eichber...@hp.com wrote:
Hi Stephen,

The use case is that the load balancer needs to look at the HTTP requests, be it 
to add an X-Forwarded-For field or to change the timeout – but the network between the 
load balancer and the nodes is not completely private, and the sensitive 
information needs to be transmitted encrypted again. This is admittedly an edge 
case, but we had to implement a similar scheme for HP Cloud’s swift storage.

German

From: Stephen Balukoff 
[mailto:sbaluk...@bluebox.netmailto:sbaluk...@bluebox.net]
Sent: Friday, April 18, 2014 8:22 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

Howdy, folks!

Could someone explain to me the SSL usage scenario where it makes sense to 
re-encrypt traffic destined for members of a back-end pool?  SSL 
termination on the load balancer makes sense to me, but I'm having trouble 
understanding why one would be concerned about then re-encrypting the traffic 
headed toward a back-end app server. (Why not just use straight TCP load 
balancing in this case, and save the CPU cycles on the load balancer?)

We terminate a lot of SSL connections on our load balancers, but have yet to 
have a customer use this kind of functionality.  (We've had a few ask about it, 
usually because they didn't understand what a load balancer is supposed to do-- 
and with a bit of explanation they went either with SSL termination on the load 
balancer + clear text on the back-end, or just straight TCP load balancing.)

Thanks,
Stephen


--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807tel:%28800%29613-4305%20x807

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] add support for ceph

2014-04-18 Thread Scott Devoid
On Fri, Apr 18, 2014 at 11:41 AM, Dean Troyer dtro...@gmail.com wrote:

 On Fri, Apr 18, 2014 at 10:51 AM, Scott Devoid dev...@anl.gov wrote:

 The issue is that it is very easy to suggest new features and refactoring
 when you are very familiar with the codebase. To a newcomer, though, you
 are basically asking me to do something that is impossible, so the logical
 interpretation is you're telling me to go away.


 This patch was dropped on us twice without any conversation or warning or
 discussion about how might be the best approach. Suggestions _were_ made
 and subsequently ignored.  If there was a lack of understanding, asking
 questions is a good way to get past that.  None were asked.  I do not view
 that as saying 'go away'.


I was speaking more from personal experience with other patches, where the
response is oh this is great, but we really need XYZ to happen first.
Nobody is working on XYZ and I sure don't know how to make it happen. But
yeah, my patch is -2 blocked on that. :-/

DevStack is an _opinionated_ OpenStack installer.  It can not and will not
 be all things to all people.  The first priority is to address services
 required by OpenStack projects (database, queue, web server, etc) and even
 then we only use what is provided in the underlying distributions.  (BTW,
 does zmq even still work?  I don't think it is tested.)

 Layered products that require 3rd party repos have a higher bar to get
 over to be included in the DevStack repo.  If an OpenStack project changes
 to require such a product, and that change gets through the TC (see MongoDB
 discussions for an example), then we'll have to re-evaluate that position.

 All this said, I really do want to see Ceph support for Cinder, Glance,
 Swift, etc in DevStack as I think it is cool and useful.  But it is not
 required to be in the DevStack repo to be useful.


I guess the question then is: how can we gate with functional tests for
drivers without touching devstack?

~ Scott
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-04-18 Thread Doug Hellmann
Nice work, Victor!

I left a few comments on the commits that were made after the original
history was exported from the incubator. There were a couple of small
things to address before importing the library, and a couple that can
wait until we have the normal code review system. I'd say just add new
commits to fix the issues, rather than trying to amend the existing
commits.

We haven't really discussed how to communicate when we agree the new
repository is ready to be imported, but it seems reasonable to use the
patch in openstack-infra/config that will be used to do the import:
https://review.openstack.org/#/c/78955/

Doug

On Fri, Apr 18, 2014 at 10:28 AM, Victor Sergeyev
vserge...@mirantis.com wrote:
 Hello all,

 During the Icehouse release cycle our team has been working on splitting the
 OpenStack common DB code out into a separate library, per blueprint [1]. At the
 moment the issues mentioned in the bp and in [2] are solved, and we are moving
 forward to graduation of oslo.db. You can find the new oslo.db code at [3]

 So, before moving forward, I want to ask the Oslo team to review the oslo.db
 repository [3], and especially the commit that allows the unit tests to pass
 [4].

 Thanks,
 Victor

 [1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib
 [2] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
 [3] https://github.com/malor/oslo.db
 [4]
 https://github.com/malor/oslo.db/commit/276f7570d7af4a7a62d0e1ffb4edf904cfbf0600

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Voting for the TC Election is now open

2014-04-18 Thread John Dickinson
I put together links to every candidate's nomination email at 
https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/candidates

--John




On Apr 18, 2014, at 8:29 AM, Anita Kuno ante...@anteaya.info wrote:

 On 04/18/2014 11:22 AM, Anita Kuno wrote:
 Voting for the TC Election is now open and will remain open until after
 1300 utc April 24 2014.
 
 We are electing 7 positions from a pool of 17 candidates[0].
 
 Follow the instructions that are available when you vote. If you are
 confused and need more instruction, close the webpage without submitting
 your vote and then email myself and Tristan[1]. Your ballot will still
 be enabled to vote until the election is closed, as long as you don't
  submit your ballot before you close your webpage.
 
  You are eligible to vote if you are a Foundation individual member[2] who
  has also committed to one of the official programs' projects[3] over the
  Havana-Icehouse timeframe (April 4, 2013 06:00 UTC to April 4, 2014
  05:59 UTC), or if you are one of the extra-atcs.[4]
 
 What to do if you don't see the email and have a commit in at least one
 of the official programs projects[3]:
 * check the trash of your gerrit Preferred Email address[5], in
 case it went into trash or spam
 * wait a bit and check again, in case your email server is a bit slow
 * find the sha of at least one commit from the program project
 repos[3] and email me and Tristan[1]. If we can confirm that you are
 entitled to vote, we will add you to the voters list and you will be
 emailed a ballot.
 
 Our democratic process is important to the health of OpenStack, please
 exercise your right to vote.
 
 Candidate statements/platforms can be found linked to Candidate names on
 this page:
 https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates
 
 Happy voting,
 Anita. (anteaya)
 
 [0] https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates
 [1] Anita: anteaya at anteaya dot info Tristan: tristan dot cacqueray at
 enovance dot com
 [2] http://www.openstack.org/community/members/
 [3]
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml?id=april-2014-elections
 [4]
 http://git.openstack.org/cgit/openstack/governance/tree/reference/extra-atcs
 [5] Sign into review.openstack.org: Go to Settings  Contact
 Information. Look at the email listed as your Preferred Email. That is
 where the ballot has been sent.
 
 I have to extend an apology to Flavio Percoco, whose name is spelled
 incorrectly on both the wikipage and on the ballot.
 
 I can't change the ballot now and will leave the wikipage with the
 spelling mistake so it is consistent to voters, but I do want folks to
 know I am aware of the mistake now, and I do apologize to Flavio for this.
 
 I'm sorry,
 Anita.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Clint Byrum
Excerpts from Stephen Balukoff's message of 2014-04-18 10:36:11 -0700:
 Dang.  I was hoping this wasn't the case.  (I personally think it's a
 little silly not to trust your service provider to secure a network when
 they have root access to all the machines powering your cloud... but I
 digress.)
 

No one person or even group of people on the operator's network will have
full access to everything. Security is best when it comes in layers. Area
51 doesn't just have a guard shack and then you drive right into the
hangars with the UFO's and alien autopsies. There are sensors, mobile
guards, secondary checkpoints, locks on the outer doors, and locks on
the inner doors. And perhaps most importantly, the MP who approves your
entry into the first gate, does not even have access to the next one.

Your SSL terminator is a gate. What happens once an attacker (whoever
that may be, your disgruntled sysadmin, or rogue hackers) is behind that
gate _may_ be important.

 Part of the reason I was hoping this wasn't the case, isn't just because it
 consumes a lot more CPU on the load balancers, but because now we
 potentially have to manage client certificates and CA certificates (for
 authenticating from the proxy to back-end app servers). And we also have to
 decide whether we allow the proxy to use a different client cert / CA per
 pool, or per member.
 
 Yes, I realize one could potentially use no client cert or CA (ie.
 encryption but no auth)...  but that actually provides almost no extra
 security over the unencrypted case:  If you can sniff the traffic between
 proxy and back-end server, it's not much more of a stretch to assume you
 can figure out how to be a man-in-the-middle.


A passive attack where the MITM does not have to witness the initial
handshake or decrypt/reencrypt to sniff things is quite a bit easier to
pull off and would be harder to detect. So almost no extra security
is not really accurate. But this is just one point of data for risk
assessment.

 Do any of you have a use case where some back-end members require SSL
 authentication from the proxy and some don't? (Again, deciding whether
 client cert / CA usage should attach to a pool or to a member.)
 
 It's a bit of a rabbit hole, eh.


Security turns into an endless rat hole when you just look at it as a
product, such as "a secure load balancer".

If, however, you consider that it is really just a process of risk
assessment and mitigation, then you can find a sweet spot that works
in your business model. How much does it cost to mitigate the risk
of unencrypted backend traffic from the load balancer?  What is the
potential loss if the traffic is sniffed? How likely is it that it will
be sniffed? .. Those are ongoing questions that need to be asked and
then reevaluated, but they don't have a fruitless stream of what-if's
that have to be baked in like the product discussion. It's just part of
your process, and processes go on until they aren't needed anymore.

IMO a large part of operating a cloud is decoupling the ability to setup
a system from the ability to enable your business with a system. So
if you can communicate the risks of doing without backend encryption,
and charge the users appropriately when they choose that the risk is
worth the added cost, then I think it is worth it to automate the setup
of CA's and client certs and put that behind an API. Luckily, you will
likely find many in the OpenStack community who can turn that into a
business opportunity and will help.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Voting for the TC Election is now open

2014-04-18 Thread John Dickinson
I had completely missed the links Anita had put together. Use her list (ie the 
officially updated one).

https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates


Sorry about that, Anita!

--John



On Apr 18, 2014, at 11:33 AM, John Dickinson m...@not.mn wrote:

 I put together links to every candidate's nomination email at 
 https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/candidates
 
 --John
 
 
 
 
 On Apr 18, 2014, at 8:29 AM, Anita Kuno ante...@anteaya.info wrote:
 
 On 04/18/2014 11:22 AM, Anita Kuno wrote:
 Voting for the TC Election is now open and will remain open until after
 1300 utc April 24 2014.
 
 We are electing 7 positions from a pool of 17 candidates[0].
 
 Follow the instructions that are available when you vote. If you are
 confused and need more instruction, close the webpage without submitting
 your vote and then email myself and Tristan[1]. Your ballot will still
 be enabled to vote until the election is closed, as long as you don't
 submit your ballot before your close your webpage.
 
 You are eligible to vote if are a Foundation individual member[2] that
 also has committed to one of the official programs projects[3] over the
 Havana-Icehouse timeframe (April 4, 2013 06:00 UTC to April 4, 2014
 05:59 UTC) Or if you are one of the extra-atcs.[4]
 
 What to do if you don't see the email and have a commit in at least one
 of the official programs projects[3]:
* check the trash of your gerrit Preferred Email address[5], in
 case it went into trash or spam
* wait a bit and check again, in case your email server is a bit slow
* find the sha of at least one commit from the program project
 repos[3] and email me and Tristan[1]. If we can confirm that you are
 entitled to vote, we will add you to the voters list and you will be
 emailed a ballot.
 
 Our democratic process is important to the health of OpenStack, please
 exercise your right to vote.
 
 Candidate statements/platforms can be found linked to Candidate names on
 this page:
 https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates
 
 Happy voting,
 Anita. (anteaya)
 
 [0] https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates
 [1] Anita: anteaya at anteaya dot info Tristan: tristan dot cacqueray at
 enovance dot com
 [2] http://www.openstack.org/community/members/
 [3]
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml?id=april-2014-elections
 [4]
 http://git.openstack.org/cgit/openstack/governance/tree/reference/extra-atcs
 [5] Sign into review.openstack.org: Go to Settings  Contact
 Information. Look at the email listed as your Preferred Email. That is
 where the ballot has been sent.
 
 I have to extend an apology to Flavio Percoco, whose name is spelled
 incorrectly on both the wikipage and on the ballot.
 
 I can't change the ballot now and will leave the wikipage with the
 spelling mistake so it is consistent to voters, but I do want folks to
 know I am aware of the mistake now, and I do apologize to Flavio for this.
 
 I'm sorry,
 Anita.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Voting for the TC Election is now open

2014-04-18 Thread Anita Kuno
On 04/18/2014 03:07 PM, John Dickinson wrote:
 I had completely missed the links Anita had put together. Use her
 list (ie the officially updated one).
 
 https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates
 
 
 Sorry about that, Anita!
 
 --John
No worries, John.

I'll do a better job of drawing attention to the list of candidates
with linked platforms in future.

Thanks for letting me know it was hard to find.

Anita.
 
 
 
 On Apr 18, 2014, at 11:33 AM, John Dickinson m...@not.mn wrote:
 
 I put together links to every candidate's nomination email at
 https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/candidates


 
--John
 
 
 
 
 On Apr 18, 2014, at 8:29 AM, Anita Kuno ante...@anteaya.info
 wrote:
 
 On 04/18/2014 11:22 AM, Anita Kuno wrote:
 Voting for the TC Election is now open and will remain open
 until after 1300 utc April 24 2014.
 
 We are electing 7 positions from a pool of 17 candidates[0].
 
 Follow the instructions that are available when you vote. If
 you are confused and need more instruction, close the webpage
 without submitting your vote and then email myself and
 Tristan[1]. Your ballot will still be enabled to vote until
 the election is closed, as long as you don't submit your
 ballot before your close your webpage.
 
 You are eligible to vote if are a Foundation individual
 member[2] that also has committed to one of the official
 programs projects[3] over the Havana-Icehouse timeframe
 (April 4, 2013 06:00 UTC to April 4, 2014 05:59 UTC) Or if
 you are one of the extra-atcs.[4]
 
 What to do if you don't see the email and have a commit in at
 least one of the official programs projects[3]: * check the
 trash of your gerrit Preferred Email address[5], in case it
 went into trash or spam * wait a bit and check again, in case
 your email server is a bit slow * find the sha of at least
 one commit from the program project repos[3] and email me and
 Tristan[1]. If we can confirm that you are entitled to vote,
 we will add you to the voters list and you will be emailed a
 ballot.
 
 Our democratic process is important to the health of
 OpenStack, please exercise your right to vote.
 
 Candidate statements/platforms can be found linked to
 Candidate names on this page: 
 https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates


 
Happy voting,
 Anita. (anteaya)
 
 [0]
 https://wiki.openstack.org/wiki/TC_Elections_April_2014#Candidates

 
[1] Anita: anteaya at anteaya dot info Tristan: tristan dot cacqueray at
 enovance dot com [2]
 http://www.openstack.org/community/members/ [3] 
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml?id=april-2014-elections

 
[4]
 http://git.openstack.org/cgit/openstack/governance/tree/reference/extra-atcs

 
[5] Sign into review.openstack.org: Go to Settings  Contact
 Information. Look at the email listed as your Preferred
 Email. That is where the ballot has been sent.
 
 I have to extend an apology to Flavio Percoco, whose name is
 spelled incorrectly on both the wikipage and on the ballot.
 
 I can't change the ballot now and will leave the wikipage with
 the spelling mistake so it is consistent to voters, but I do
 want folks to know I am aware of the mistake now, and I do
 apologize to Flavio for this.
 
 I'm sorry, Anita.
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
___
 OpenStack-dev mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][qa] Compatibility of extra values returned in json dicts and headers

2014-04-18 Thread David Kranz
Recently, as a result of the nova 2.1/3.0 discussion, tempest has been 
adding validation of the json dictionaries and headers returned by nova 
api calls. This is done by specifying json schema for these values. As 
proposed, these schema do not specify additionalProperties: False, 
which means that if a header is added or a new key is added to a 
returned dict, the tempest test will not fail. The current api change 
guidelines say this:



 Generally Considered OK

 * The change is the only way to fix a security bug
 * Fixing a bug so that a request which resulted in an error response
   before is now successful
 * Adding a new response header
 * Changing an error response code to be more accurate
 * OK when conditionally added as a new API extension
 o Adding a property to a resource representation
 o Adding an optional property to a resource representation which
   may be supplied by clients, assuming the API previously would
   ignore this property


This seems to say that you need an api extension to add a value to a 
returned dict but not to add a new header. So that would imply that 
checking the headers should allow additional properties but checking the 
body should not. Is that the desired behavior? Would there be harm in 
allowing values to be added to a returned dict as well as the headers? 
Saying that application code should check if there is an extension to 
add a new value before trying to use the value seems pretty close to 
just checking for the existence of the value. In any event, we need to 
decide what the correct value is for these schemas.
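[Editor's note: a toy validator illustrating the additionalProperties trade-off discussed above. This is a simplification for the thread, not tempest's actual schema machinery; the schema and response contents are made up.]

```python
def validate(schema, doc):
    """Minimal check mimicking how JSON schema treats extra keys.

    Only two facets are modeled: 'required' property names and the
    'additionalProperties' flag; real JSON schema is much richer.
    """
    required = schema.get("required", [])
    allowed = set(schema.get("properties", {}))
    if any(k not in doc for k in required):
        return False
    if schema.get("additionalProperties", True) is False:
        # Strict mode: any key outside 'properties' is a failure,
        # so adding a value to a returned dict breaks the test.
        return all(k in allowed for k in doc)
    return True  # Lenient mode: extra keys are tolerated.

server_schema = {
    "properties": {"id": {}, "name": {}},
    "required": ["id", "name"],
    # additionalProperties defaults to True: new keys won't fail.
}

response = {"id": "42", "name": "vm1", "new_field": "added-later"}
```

With the default (lenient) schema the response above passes; flipping additionalProperties to False makes the same response fail.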


 -David
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Updates to the template for Neutron BPs

2014-04-18 Thread Kyle Mestery
Hi folks:

I just wanted to let people know that we've merged a few patches [1]
to the neutron-specs repository over the past week which have updated
the template.rst file. Specifically, Nachi has provided some
instructions for using Sphinx diagram tools in lieu of asciiflow.com.
Either approach is fine for any Neutron BP submissions, but Nachi's
patch has some examples of using both approaches. Bob merged a patch
which shows an example of defining REST APIs with attribute tables.

Just an update for anyone proposing BPs for Juno at the moment.

Thanks!
Kyle

[1] 
https://review.openstack.org/#/q/status:merged+project:openstack/neutron-specs,n,z

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Migration to packages, step 1/2

2014-04-18 Thread Mike Scherbakov
That's cool actually.
I have a few specific questions:

   1. How does this impact the development process? If I change the code of,
   let's say, shotgun, and then run make iso, will I get an ISO with my
   version of shotgun? What about other packages whose sources I did not
   touch (let's say nailgun)?
   2. How far are we from implementing a simple command to build an
   OpenStack package from source?
   3. What is the time difference? Has the ISO build become faster? Can you
   provide numbers?
   4. We still have puppet modules unpackaged, right? Do we have plans to
   package them too?

I assume we will document the usage of this somewhere in dev docs too.

Thanks,


On Fri, Apr 18, 2014 at 6:06 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Guys,

 I've removed ability to use eggs packages on master node:
 https://review.openstack.org/#/c/88012/

 Next step is to remove gems mirror:
 https://review.openstack.org/#/c/88278/
 It will be merged when osci team fix rubygem-yajl-ruby package. Hopefully
 on Monday.

 From that moment all our code will be installed everywhere from packages.
 And there will be option to build packages during iso build or use
 pre-built packages from our mirrors.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Trying to make sense of allow_overlapping_ips in neutron

2014-04-18 Thread Ryan Moats


Apologies if this is posted to the wrong place, but after talking with Kyle
Mestery (mest...@cisco.com), he suggested that I bring my question here...

I'm trying to make sense of the allow_overlapping_ips configuration
parameter in neutron.

When this entry is true, then a tenant can have subnets with overlapping
IPs in different networks (i.e. the scope of the subnet validation search
is the other subnets associated with the network) which makes sense.

But, when this entry is false, then the validation search appears to cover
all subnets across all tenants.  I'm trying to understand the logic of
this, because I would have expected that in this case, the search scope
would be all subnets across a single tenant.

As it is now, it looks like if an install has this configuration parameter
set to false, then there is no way for that install to reuse the net 10
address space.
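[Editor's note: the two validation scopes described above can be sketched with the stdlib ipaddress module. This is an illustration of the reading of the code, not Neutron's actual implementation; all names are hypothetical.]

```python
import ipaddress

def overlap_conflicts(candidate, existing, allow_overlapping_ips):
    """Return the subnets that block 'candidate' from being created.

    existing: list of (tenant_id, network_id, cidr) tuples.
    When allow_overlapping_ips is False, the search spans ALL subnets
    across ALL tenants; when True, only subnets on the same network
    are checked, so other networks may reuse the address space.
    """
    cand_tenant, cand_net, cand_cidr = candidate
    new = ipaddress.ip_network(cand_cidr)
    conflicts = []
    for tenant, net, cidr in existing:
        if allow_overlapping_ips and net != cand_net:
            continue  # overlap allowed: only same-network subnets matter
        if new.overlaps(ipaddress.ip_network(cidr)):
            conflicts.append((tenant, net, cidr))
    return conflicts
```

With the flag off, any existing net-10 subnet anywhere in the install blocks a new net-10 subnet, which is the reuse problem described above.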

Can somebody lend me a hand and either (a) tell me I'm reading the code
wrong or (b) explain why that choice was made?

Thanks in advance,
Ryan Moats
rmo...@us.ibm.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Jorge Miramontes
+1 for German's use cases. We need SSL re-encryption for decisions the
load balancer needs to make at the l7 layer as well. Thanks Clint, for
your thorough explanation from a security standpoint.

Cheers,
--Jorge




On 4/18/14 1:38 PM, Clint Byrum cl...@fewbar.com wrote:

Excerpts from Stephen Balukoff's message of 2014-04-18 10:36:11 -0700:
 Dang.  I was hoping this wasn't the case.  (I personally think it's a
 little silly not to trust your service provider to secure a network when
 they have root access to all the machines powering your cloud... but I
 digress.)
 

No one person or even group of people on the operator's network will have
full access to everything. Security is best when it comes in layers. Area
51 doesn't just have a guard shack and then you drive right into the
hangars with the UFO's and alien autopsies. There are sensors, mobile
guards, secondary checkpoints, locks on the outer doors, and locks on
the inner doors. And perhaps most importantly, the MP who approves your
entry into the first gate, does not even have access to the next one.

Your SSL terminator is a gate. What happens once an attacker (whoever
that may be, your disgruntled sysadmin, or rogue hackers) is behind that
gate _may_ be important.

 Part of the reason I was hoping this wasn't the case, isn't just
because it
 consumes a lot more CPU on the load balancers, but because now we
 potentially have to manage client certificates and CA certificates (for
 authenticating from the proxy to back-end app servers). And we also
have to
 decide whether we allow the proxy to use a different client cert / CA
per
 pool, or per member.
 
 Yes, I realize one could potentially use no client cert or CA (ie.
 encryption but no auth)...  but that actually provides almost no extra
 security over the unencrypted case:  If you can sniff the traffic
between
 proxy and back-end server, it's not much more of a stretch to assume you
 can figure out how to be a man-in-the-middle.


A passive attack where the MITM does not have to witness the initial
handshake or decrypt/reencrypt to sniff things is quite a bit easier to
pull off and would be harder to detect. So almost no extra security
is not really accurate. But this is just one point of data for risk
assessment.

 Do any of you have a use case where some back-end members require SSL
 authentication from the proxy and some don't? (Again, deciding whether
 client cert / CA usage should attach to a pool or to a member.)
 
 It's a bit of a rabbit hole, eh.


Security turns into an endless rat hole when you just look at it as a
product, such as A secure load balancer.

If, however, you consider that it is really just a process of risk
assessment and mitigation, then you can find a sweet spot that works
in your business model. How much does it cost to mitigate the risk
of unencrypted backend traffic from the load balancer?  What is the
potential loss if the traffic is sniffed? How likely is it that it will
be sniffed? .. Those are ongoing questions that need to be asked and
then reevaluated, but they don't have a fruitless stream of what-if's
that have to be baked in like the product discussion. It's just part of
your process, and processes go on until they aren't needed anymore.

IMO a large part of operating a cloud is decoupling the ability to setup
a system from the ability to enable your business with a system. So
if you can communicate the risks of doing without backend encryption,
and charge the users appropriately when they choose that the risk is
worth the added cost, then I think it is worth it to automate the setup
of CA's and client certs and put that behind an API. Luckily, you will
likely find many in the OpenStack community who can turn that into a
business opportunity and will help.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-18 Thread Carlos Garza

On Apr 17, 2014, at 8:39 PM, Stephen Balukoff sbaluk...@bluebox.net wrote:

Hello German and Brandon!

Responses in-line:


On Thu, Apr 17, 2014 at 3:46 PM, Brandon Logan 
brandon.lo...@rackspace.com wrote:
Stephen,
I have responded to your questions below.


On 04/17/2014 01:02 PM, Stephen Balukoff wrote:
Howdy folks!

Based on this morning's IRC meeting, it seems to me there's some contention and 
confusion over the need for single call functionality for load balanced 
services in the new API being discussed. This is what I understand:

* Those advocating single call are arguing that this simplifies the API for 
users, and that it more closely reflects the users' experience with other load 
balancing products. They don't want to see this functionality necessarily 
delegated to an orchestration layer (Heat), because coordinating how this works 
across two OpenStack projects is unlikely to see success (ie. it's hard enough 
making progress with just one project). I get the impression that people 
advocating for this feel that their current users would not likely make the 
leap to Neutron LBaaS unless some kind of functionality or workflow is 
preserved that is no more complicated than what they currently have to do.
Another reason, which I've mentioned many times before and keeps getting 
ignored, is because the more primitives you add the longer it will take to 
provision a load balancer.  Even if we relied on the orchestration layer to 
build out all the primitives, it still will take much more time to provision a 
load balancer than a single create call provided by the API.  Each request and 
response has an inherent time to process.  Many primitives will also have an 
inherent build time.  Combine this with an environment that becomes more and more 
dense, and build times become very unfriendly to end users whether they are 
using the API directly, going through a UI, or going through an orchestration 
layer.  This industry is always trying to improve build/provisioning times and 
there are no reasons why we shouldn't try to achieve the same goal.

Noted.

* Those (mostly) against the idea are interested in seeing the API provide 
primitives and delegating higher level single-call stuff to Heat or some 
other orchestration layer. There was also the implication that if single-call 
is supported, it ought to support both simple and advanced set-ups in that 
single call. Further, I sense concern that if there are multiple ways to 
accomplish the same thing supported in the API, this redundancy breeds 
complication as more features are added, and in developing test coverage. And 
existing Neutron APIs tend to expose only primitives. I get the impression that 
people against the idea could be convinced if more compelling reasons were 
illustrated for supporting single-call, perhaps other than we don't want to 
change the way it's done in our environment right now.
I completely disagree with "we don't want to change the way it's done in our 
environment right now."  Our proposal has changed the way our current API works 
right now.  We do not have the notion of primitives in our current API and our 
proposal included the ability to construct a load balancer with primitives 
individually.  We kept that in so that those operators and users who do like 
constructing a load balancer that way can continue doing so.  What we are 
asking for is to keep our users happy when we do deploy this in a production 
environment and maintain a single create load balancer API call.


There's certainly something to be said for having a less-disruptive user 
experience. And after all, what we've been discussing is so radical a change 
that it's close to starting over from scratch in many ways.


It's not disruptive. There is nothing preventing them from continuing to use 
the multiple primitive operations philosophy, so they can keep that 
approach.


I've mostly stayed out of this debate because our solution as used by our 
customers presently isn't single-call and I don't really understand the 
requirements around this.

So! I would love it if some of you could fill me in on this, especially since 
I'm working on a revision of the proposed API. Specifically, what I'm looking 
for is answers to the following questions:

1. Could you please explain what you understand single-call API functionality 
to be?
Single-call API functionality is a call that supports adding multiple features 
to an entity (load balancer in this case) in one API request.  Whether this 
supports all features of a load balancer or a subset is up for debate.  I 
prefer all features to be supported.  Yes it adds complexity, but complexity is 
always introduced by improving the end user experience and I hope a good user 
experience is a goal.

Got it. I think we all want to improve the user experience.

2. Could you describe the simplest use case that uses single-call API in your 

Re: [openstack-dev] [Neutron] Trying to make sense of allow_overlapping_ips in neutron

2014-04-18 Thread Mark McClain

On Apr 18, 2014, at 17:03, Ryan Moats 
rmo...@us.ibm.com wrote:


Apologies if this is posted to the wrong place, but after talking with Kyle 
Mestery (mest...@cisco.com), he suggested that I 
bring my question here...

I'm trying to make sense of the allow_overlapping_ips configuration parameter 
in neutron.

When this entry is true, then a tenant can have subnets with overlapping IPs in 
different networks (i.e. the scope of the subnet validation search is the other 
subnets associated with the network) which makes sense.

But, when this entry is false, then the validation search appears to cover all 
subnets across all tenants.  I'm trying to understand the logic of this, 
because I would have expected that in this case, the search scope would be all 
subnets across a single tenant.

As it is now, it looks like if an install has this configuration parameter set 
to false, then there is no way for that install to reuse the net 10 address 
space.

Can somebody lend me a hand and either (a) tell me I'm reading the code wrong 
or (b) explain why that choice was made?

You are reading the code correctly. This feature is largely a legacy option for 
older distros that do not support namespaces or when the deployer chooses to 
run in a flat mode without tenant isolation.

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Rochelle.RochelleGrober
+1 for the discussion

Remember, a cloud does not always have all its backend co-located.  There are 
sometimes AZs and often other hidden network hops.  

And, to ask the obvious, what do you think the response is when you whisper 
NSA in a crowded Google data center?

--Rocky

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
Sent: Friday, April 18, 2014 2:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario 
question

+1 for German's use cases. We need SSL re-encryption for decisions the
load balancer needs to make at the l7 layer as well. Thanks Clint, for
your thorough explanation from a security standpoint.

Cheers,
--Jorge




On 4/18/14 1:38 PM, Clint Byrum cl...@fewbar.com wrote:

Excerpts from Stephen Balukoff's message of 2014-04-18 10:36:11 -0700:
 Dang.  I was hoping this wasn't the case.  (I personally think it's a
 little silly not to trust your service provider to secure a network when
 they have root access to all the machines powering your cloud... but I
 digress.)
 

No one person or even group of people on the operator's network will have
full access to everything. Security is best when it comes in layers. Area
51 doesn't just have a guard shack and then you drive right into the
hangars with the UFO's and alien autopsies. There are sensors, mobile
guards, secondary checkpoints, locks on the outer doors, and locks on
the inner doors. And perhaps most importantly, the MP who approves your
entry into the first gate, does not even have access to the next one.

Your SSL terminator is a gate. What happens once an attacker (whoever
that may be, your disgruntled sysadmin, or rogue hackers) is behind that
gate _may_ be important.

 Part of the reason I was hoping this wasn't the case, isn't just
because it
 consumes a lot more CPU on the load balancers, but because now we
 potentially have to manage client certificates and CA certificates (for
 authenticating from the proxy to back-end app servers). And we also
have to
 decide whether we allow the proxy to use a different client cert / CA
per
 pool, or per member.
 
 Yes, I realize one could potentially use no client cert or CA (ie.
 encryption but no auth)...  but that actually provides almost no extra
 security over the unencrypted case:  If you can sniff the traffic
between
 proxy and back-end server, it's not much more of a stretch to assume you
 can figure out how to be a man-in-the-middle.


A passive attack where the MITM does not have to witness the initial
handshake or decrypt/reencrypt to sniff things is quite a bit easier to
pull off and would be harder to detect. So almost no extra security
is not really accurate. But this is just one point of data for risk
assessment.

 Do any of you have a use case where some back-end members require SSL
 authentication from the proxy and some don't? (Again, deciding whether
 client cert / CA usage should attach to a pool or to a member.)
 
 It's a bit of a rabbit hole, eh.


Security turns into an endless rat hole when you just look at it as a
product, such as A secure load balancer.

If, however, you consider that it is really just a process of risk
assessment and mitigation, then you can find a sweet spot that works
in your business model. How much does it cost to mitigate the risk
of unencrypted backend traffic from the load balancer?  What is the
potential loss if the traffic is sniffed? How likely is it that it will
be sniffed? .. Those are ongoing questions that need to be asked and
then reevaluated, but they don't have a fruitless stream of what-if's
that have to be baked in like the product discussion. It's just part of
your process, and processes go on until they aren't needed anymore.

IMO a large part of operating a cloud is decoupling the ability to setup
a system from the ability to enable your business with a system. So
if you can communicate the risks of doing without backend encryption,
and charge the users appropriately when they choose that the risk is
worth the added cost, then I think it is worth it to automate the setup
of CA's and client certs and put that behind an API. Luckily, you will
likely find many in the OpenStack community who can turn that into a
business opportunity and will help.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Carlos Garza

On Apr 18, 2014, at 10:21 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 Howdy, folks!
 
 Could someone explain to me the SSL usage scenario where it makes sense to 
re-encrypt traffic destined for members of a back-end pool?  SSL 
 termination on the load balancer makes sense to me, but I'm having trouble 
 understanding why one would be concerned about then re-encrypting the traffic 
 headed toward a back-end app server. (Why not just use straight TCP load 
 balancing in this case, and save the CPU cycles on the load balancer?)
 

1. Some customers want their servers to be external to our data centers: for 
example, the load balancer is in Chicago, with Rackspace hosting the load 
balancers and the back-end pool members running on Amazon AWS servers. (We 
don't know why they would do this, but a lot are doing it.) They can't simply 
audit the links between AWS and our data centers for PCI, since lots of 
backbones are being crossed, so they just want encryption to their back-end 
pool members. Also note that Amazon has chosen to support encryption: 
http://aws.amazon.com/about-aws/whats-new/2011/10/04/amazon-s3-announces-server-side-encryption-support/
They've had it for a while now, and for whatever reason a lot of customers are 
now demanding it from us as well.  

I agree they could simply use HTTPS load balancing, but they seem to think 
providers that don't offer encryption are inferior feature-wise.

2. Users on providers that are incapable of one-armed-with-source-NAT 
load balancing (see link below) are at the mercy of X-Forwarded-For-style 
headers to determine the original source of a connection (a must if they want 
to know where abusive connections are coming from). Under traditional NAT 
routing the source IP will always be the load balancer's IP, so X-Forwarded-For 
has been the traditional method of showing the server the real source (this 
applies to HTTP load balancing as well). But in the case of SSL, the load 
balancer, unless it is decrypting traffic, won't be able to inject these 
headers. And when the pool members are on an external network it is prudent to 
allow for encryption; this pretty much forces them to use a trusted load 
balancer as a man in the middle to decrypt, add X-Forwarded-For, then 
encrypt to the back end. 

http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_One_Arm_Mode_with_Source_NAT_on_the_Cisco_Application_Control_Engine_Configuration_Example
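The decrypt, inject X-Forwarded-For, re-encrypt step described above can be sketched in a few lines. This is an illustrative fragment only; the function name and framing are invented here, not taken from any Neutron or provider code:

```python
def inject_x_forwarded_for(request_bytes: bytes, client_ip: str) -> bytes:
    """Insert an X-Forwarded-For header into a decrypted HTTP/1.1 request.

    A TLS-terminating proxy would call this between decrypting the client
    connection and re-encrypting toward the back-end pool member.
    """
    head, sep, body = request_bytes.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    # Append to an existing X-Forwarded-For chain if one is already present.
    for i, line in enumerate(lines[1:], start=1):
        if line.lower().startswith(b"x-forwarded-for:"):
            lines[i] = line + b", " + client_ip.encode("ascii")
            break
    else:
        lines.append(b"X-Forwarded-For: " + client_ip.encode("ascii"))
    return b"\r\n".join(lines) + sep + body
```

The point of the sketch is only that the header can't be added without first decrypting, which is exactly why pure TCP passthrough can't provide it.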


3. Unless I'm mistaken, it looks like encryption was already a part of the API or 
was accepted as a requirement for Neutron LBaaS.
https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL#Current_design 
is this document still valid?

4. We also assumed we were expected to support the use cases described in
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?pli=1
where case 7 specifically asks for re-encryption.


 We terminate a lot of SSL connections on our load balancers, but have yet to 
 have a customer use this kind of functionality.  (We've had a few ask about 
 it, usually because they didn't understand what a load balancer is supposed 
 to do-- and with a bit of explanation they went either with SSL termination 
 on the load balancer + clear text on the back-end, or just straight TCP load 
 balancing.)

We terminate a lot of SSL connections on our load balancers as well, and we 
get a lot of pressure for this kind of functionality. I think you have no 
customers using that functionality because you are unable to offer it, which is 
the case for us as well. But due to a significant amount of pressure we have a 
solution already ready and waiting for testing on our CLB1.0 offering. 

We wish this were the case for us, that only a few users are requesting 
this feature, but we have customers that really do want their back-end pool 
members on a separate non-secure network or, worse, want this as a more 
advanced form of HTTPS passthrough (TCP load balancing, as you're describing it). 

Providers may be able to secure their load balancers, but they may not always 
be able to secure their back-end connections. Users who want end-to-end 
encrypted connectivity, but also want the load balancer to be capable of making 
intelligent decisions (requiring decryption at the load balancer) as well as 
injecting useful headers going to the back-end pool member, still need 
encryption functionality.

    When your customers do straight TCP load balancing, are you noticing you 
can only offer IP-based session persistence at that point? If you only allow 
IP-based persistence, customers that share a NAT router will all hit the same 
node every time. We have lots of customers behind corporate NAT routers, and 
they notice very quickly that hundreds of clients are all being shoved onto one 
back-end pool member. As of now they only have the option to turn off session 
persistence, but that breaks applications that require locally maintained 
sessions. We could offer TLS 

Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Carlos Garza

On Apr 18, 2014, at 12:59 PM, Vijay Venkatachalam 
vijay.venkatacha...@citrix.com wrote:


There is no reasoning mentioned in AWS, but they do allow re-encryption.


Is there also no reason to mention:

BigIp's F5 LoadBalancers 
http://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm_implementation/sol_http_ssl.html
A10 LoadBalaners 
http://www.a10networks.com/resources/files/CS-Earth_Class_Mail.pdf
Netscaler 
http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-ssl-offloading-end-to-end-encypt-tsk.html
Finally Stingray https://splash.riverbed.com/thread/5473

  All big players in load balancing. That would be the reasoning.

http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-backend-auth.html

For reasons I don’t understand, the workflow allows configuring backend-server 
certificates to be trusted, and it doesn’t accept client certificates or CA 
certificates.

Thanks,
Vijay V.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, April 18, 2014 11:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario 
question

Dang.  I was hoping this wasn't the case.  (I personally think it's a little 
silly not to trust your service provider to secure a network when they have 
root access to all the machines powering your cloud... but I digress.)

Part of the reason I was hoping this wasn't the case, isn't just because it 
consumes a lot more CPU on the load balancers, but because now we potentially 
have to manage client certificates and CA certificates (for authenticating from 
the proxy to back-end app servers). And we also have to decide whether we allow 
the proxy to use a different client cert / CA per pool, or per member.

Yes, I realize one could potentially use no client cert or CA (ie. encryption 
but no auth)...  but that actually provides almost no extra security over the 
unencrypted case:  If you can sniff the traffic between proxy and back-end 
server, it's not much more of a stretch to assume you can figure out how to be 
a man-in-the-middle.

Do any of you have a use case where some back-end members require SSL 
authentication from the proxy and some don't? (Again, deciding whether client 
cert / CA usage should attach to a pool or to a member.)

It's a bit of a rabbit hole, eh.

Stephen


On Fri, Apr 18, 2014 at 10:21 AM, Eichberger, German 
german.eichber...@hp.com wrote:
Hi Stephen,

The use case is that the Load Balancer needs to look at the HTTP requests be it 
to add an X-Forwarded-For field or change the timeout – but the network between the 
load balancer and the nodes is not completely private and the sensitive 
information needs to be again transmitted encrypted. This is admittedly an edge 
case but we had to implement a similar scheme for HP Cloud’s swift storage.

German

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, April 18, 2014 8:22 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

Howdy, folks!

Could someone explain to me the SSL usage scenario where it makes sense to 
re-encrypt traffic destined for members of a back-end pool?  SSL 
termination on the load balancer makes sense to me, but I'm having trouble 
understanding why one would be concerned about then re-encrypting the traffic 
headed toward a back-end app server. (Why not just use straight TCP load 
balancing in this case, and save the CPU cycles on the load balancer?)

We terminate a lot of SSL connections on our load balancers, but have yet to 
have a customer use this kind of functionality.  (We've had a few ask about it, 
usually because they didn't understand what a load balancer is supposed to do-- 
and with a bit of explanation they went either with SSL termination on the load 
balancer + clear text on the back-end, or just straight TCP load balancing.)

Thanks,
Stephen


--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Carlos Garza

On Apr 18, 2014, at 12:36 PM, Stephen Balukoff 
sbaluk...@bluebox.net wrote:

Dang.  I was hoping this wasn't the case.  (I personally think it's a little 
silly not to trust your service provider to secure a network when they have 
root access to all the machines powering your cloud... but I digress.)

Part of the reason I was hoping this wasn't the case, isn't just because it 
consumes a lot more CPU on the load balancers, but because now we potentially 
have to manage client certificates and CA certificates (for authenticating from 
the proxy to back-end app servers). And we also have to decide whether we allow 
the proxy to use a different client cert / CA per pool, or per member.

   If you choose to support re-encryption on your service then you are free to 
charge for the extra CPU cycles. I'm not convinced re-encryption and SSL 
termination in general need to be mandatory, but I think the API should allow 
them to be specified.

Yes, I realize one could potentially use no client cert or CA (ie. encryption 
but no auth)...  but that actually provides almost no extra security over the 
unencrypted case:  If you can sniff the traffic between proxy and back-end 
server, it's not much more of a stretch to assume you can figure out how to be 
a man-in-the-middle.

Yes, but considering you have no problem advocating pure SSL termination for 
your customers (decryption on the front end and plain text on the back end), 
I'm actually surprised this disturbs you. I would recommend users use straight 
SSL passthrough or re-encryption, but I wouldn't force this on them should they 
choose naked encryption with no checking.


Do any of you have a use case where some back-end members require SSL 
authentication from the proxy and some don't? (Again, deciding whether client 
cert / CA usage should attach to a pool or to a member.)

When you say client cert, are you referring to the end user's X509 certificate 
(to be rejected by the back-end server), or are you referring to the back-end 
server's X509 certificate, which the load balancer would reject if it 
discovered the back-end server had a bad signature or mismatched key? I am 
speaking of the case where the user wants re-encryption but wants to be able to 
install CA certificates that sign back-end servers' keys via PKIX path 
building. I would even like to offer the customer the ability to skip hostname 
validation, since not everyone wants to expose DNS entries for IPs that are not 
publicly routable anyway. Unless you're suggesting that we should force this on 
the user, which likewise forces us to host a name server that maps hosts to the 
X509s' subject CN fields. Users should be free to validate back-end hostnames, 
just the subject name and key, or no validation at all. It should be up to them.
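In Python `ssl` terms, that middle option — verify the back-end's chain via PKIX against an installed CA, but skip hostname checking — looks roughly like the following sketch. Illustrative only; the function name is invented and the file path argument is hypothetical:

```python
import ssl

def backend_ctx_skip_hostname(ca_file=None):
    """TLS context for the proxy-to-backend leg: chain validation without
    hostname matching. When ca_file is None the context is returned without
    a CA loaded so the sketch stays self-contained."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False           # no DNS entry / CN match required
    ctx.verify_mode = ssl.CERT_REQUIRED  # but the chain must verify to the CA
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)
    return ctx
```

Note that `check_hostname` has to be disabled before relaxing anything else, since `PROTOCOL_TLS_CLIENT` turns hostname checking on by default.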




It's a bit of a rabbit hole, eh.
Stephen



On Fri, Apr 18, 2014 at 10:21 AM, Eichberger, German 
german.eichber...@hp.com wrote:
Hi Stephen,

The use case is that the Load Balancer needs to look at the HTTP requests be it 
to add an X-Forwarded-For field or change the timeout – but the network between the 
load balancer and the nodes is not completely private and the sensitive 
information needs to be again transmitted encrypted. This is admittedly an edge 
case but we had to implement a similar scheme for HP Cloud’s swift storage.

German

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, April 18, 2014 8:22 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question


Howdy, folks!

Could someone explain to me the SSL usage scenario where it makes sense to 
re-encrypt traffic destined for members of a back-end pool?  SSL 
termination on the load balancer makes sense to me, but I'm having trouble 
understanding why one would be concerned about then re-encrypting the traffic 
headed toward a back-end app server. (Why not just use straight TCP load 
balancing in this case, and save the CPU cycles on the load balancer?)

We terminate a lot of SSL connections on our load balancers, but have yet to 
have a customer use this kind of functionality.  (We've had a few ask about it, 
usually because they didn't understand what a load balancer is supposed to do-- 
and with a bit of explanation they went either with SSL termination on the load 
balancer + clear text on the back-end, or just straight TCP load balancing.)

Thanks,
Stephen


--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list

[openstack-dev] [neutron][ovs-neutron-agent] Issue with initializing enable_tunneling after setup_rpc()

2014-04-18 Thread Nader Lahouti
Hi,

It seems there could be a potential issue in OVSNeutronAgent where
self.enable_tunneling is initialized.
Here is the code (from neutron/plugins/openvswitch/agent/ovs_neutron_agent.py):

116 class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
117                       l2population_rpc.L2populationRpcCallBackMixin):
118     '''Implements OVS-based tunneling, VLANs and flat networks.

...

149     def __init__(self, integ_br, tun_br, local_ip,
150                  bridge_mappings, root_helper,
151                  polling_interval, tunnel_types=None,
152                  veth_mtu=None, l2_population=False,
153                  minimize_polling=False,
154                  ovsdb_monitor_respawn_interval=(
155                      constants.DEFAULT_OVSDBMON_RESPAWN)):
156         '''Constructor.

...

194         self.int_br = ovs_lib.OVSBridge(integ_br, self.root_helper)
195         self.setup_rpc()  # Initialize the RPC
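A stripped-down, hypothetical miniature of the ordering hazard being described (not the real agent code): any callback registered by setup_rpc() that reads enable_tunneling can fire before __init__ finishes assigning it:

```python
class AgentSketch:
    """Hypothetical miniature of the pattern above: setup_rpc() registers
    callbacks before enable_tunneling is assigned, so a callback invoked
    early enough hits an AttributeError."""

    def __init__(self, tunnel_types=None):
        self.setup_rpc()                            # callbacks live from here on
        self.enable_tunneling = bool(tunnel_types)  # ...but this runs later

    def setup_rpc(self):
        # Stand-in for registering RPC handlers with a dispatcher.
        self.callbacks = [self.tunnel_update]

    def tunnel_update(self):
        # The real agent consults self.enable_tunneling in its handlers.
        return self.enable_tunneling
```

The obvious mitigation is to assign all attributes the handlers read before calling setup_rpc(), or to delay consuming RPC messages until the constructor completes.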

[openstack-dev] Why does my Windows 7 VM running under Linux' KVM not use all the virtual processors?

2014-04-18 Thread shenwei9008
I have a problem in a Windows 7 VM with the number of vCPUs. I set up a 
Windows instance and gave it 8 cores, and in the Windows 7 Device Manager I can 
see 8 cores. But running “wmic cpu get *” in cmd.exe shows 2 single-core CPUs, 
and I don't know why. I modified /etc/libvirt/qemu/instance-*.xml as follows:

<vcpu>8</vcpu>
<cpu>
  <topology sockets='1' cores='4' threads='2'/>
</cpu>

But this can only be modified after the VM is set up. What I need is for the 
VM to be given 8 cores, not 2 single cores, before it is set up.




shenwei9008
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] qa-specs Repo and QA Program Juno Blueprint Review Process

2014-04-18 Thread Matthew Treinish
Hi Everyone,

Just like Nova [1] the QA program has adopted the proposal [2] to use gerrit to
review blueprint specifications.

The openstack/qa-specs repo is now ready for submissions. Changes are submitted
to it like any other gerrit project. The README and a template for submitting
new specifications can be found here:

http://git.openstack.org/cgit/openstack/qa-specs/tree/README.rst

http://git.openstack.org/cgit/openstack/qa-specs/tree/template.rst

Please note that *all* Juno blueprints, including ones that were previously
approved for a previous cycle, must go through this new process.  This will
help ensure that blueprints previously approved still make sense, as well as
ensure that all Juno specs follow a more complete and consistent format. All
outstanding Tempest blueprints from Icehouse have already been moved back into
the 'New' state on Launchpad in preparation for a specification proposal using
the new process.

Everyone, not just tempest and grenade cores, should feel welcome to provide
reviews and feedback on these specification proposals. Just like for code
reviews we really appreciate anyone who takes the time to provide an insightful
review.

Since this is still a new process for all the projects I fully expect this
process to evolve throughout the Juno cycle. But, I can honestly say that we
have already seen positive effects from this new process even with only a
handful of specifications going through the process.


Thanks,

Matt Treinish

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/030576.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2014-March/029232.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Stephen Balukoff
Hi y'all!

Carlos: When I say 'client cert' I'm talking about the certificate / key
combination the load balancer will be using to initiate the SSL connection
to the back-end server. The implication here is that if the back-end server
doesn't like the client cert, it will reject the connection (as being not
from a trusted source). By 'CA cert' I'm talking about the certificate
(sans key) that the load balancer will be using to authenticate the
back-end server. If the back-end server's server certificate isn't signed
by the CA, then the load balancer should reject the connection.

Of course, the use of a client cert or CA cert on the load balancer should
be optional: As Clint pointed out, for some users, just using SSL without
doing any particular authentication (on either the part of the load
balancer or back-end) is going to be good enough.
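A rough sketch of these two certificate roles on the proxy side, using Python's `ssl` module purely for illustration (the function and parameter names are invented, and all file paths are hypothetical):

```python
import ssl

def proxy_to_backend_context(ca_cert=None, client_cert=None, client_key=None):
    """TLS context the load balancer uses toward a back-end member.

    ca_cert authenticates the back-end's server certificate; client_cert /
    client_key are presented so the back-end can authenticate the proxy.
    With everything None you get 'encryption but no auth', the weakest
    option discussed above."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    if ca_cert:
        ctx.load_verify_locations(cafile=ca_cert)
    else:
        # Encrypt without authenticating the backend at all.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    if client_cert:
        ctx.load_cert_chain(certfile=client_cert, keyfile=client_key)
    return ctx
```

Whether that context (and the certs loaded into it) hangs off the pool or off each member is exactly the question that follows.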

Anyway, the case for supporting re-encryption on the load-balancers has
been solidly made, and the API proposal we're making will reflect this
capability. Next question:

When specific client certs / CAs are used for re-encryption, should these
be associated with the pool or member?

I could see an argument for either case:

*Pool* (ie. one client cert / CA cert will be used for all members in a
pool):
* Consistency of back-end nodes within a pool is probably both extremely
common and a best practice. It's likely all will be accessed the same way.
* Less flexible than certs associated with members, but also less
complicated config.
* For CA certs, assumes user knows how to manage their own PKI using a CA.

*Member* (ie. load balancer will potentially use a different client cert /
CA cert for each member individually):
* Customers will sometimes run with inconsistent back-end nodes (eg.
local nodes in a pool treated differently than remote nodes in a pool).
* More flexible than certs associated with pools, but with more complicated
configuration.
* If back-end certs are all individually self-signed (ie. no single CA used
for all nodes), then certs must be associated with members.
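Hypothetical request-body shapes illustrating the two placements; the field names are invented for discussion, not part of any agreed API:

```python
# Pool-scoped: one client cert / CA used toward every member.
pool_scoped = {
    "pool": {
        "id": "web-pool",
        "tls_reencrypt": {
            "client_cert_id": "cert-123",
            "ca_cert_id": "ca-456",
        },
        "members": [
            {"address": "10.0.0.10"},
            {"address": "10.0.0.11"},
        ],
    }
}

# Member-scoped: each member may carry its own cert references, e.g. a
# self-signed remote node alongside CA-signed local nodes.
member_scoped = {
    "pool": {
        "id": "web-pool",
        "members": [
            {"address": "10.0.0.10", "ca_cert_id": "ca-456"},
            {"address": "203.0.113.7", "ca_cert_id": "self-signed-789"},
        ],
    }
}
```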

What are people seeing in the wild? Are your users using
inconsistently-signed or per-node self-signed certs in a single pool?

Thanks,
Stephen





On Fri, Apr 18, 2014 at 5:56 PM, Carlos Garza carlos.ga...@rackspace.com wrote:


  On Apr 18, 2014, at 12:36 PM, Stephen Balukoff sbaluk...@bluebox.net
 wrote:

  Dang.  I was hoping this wasn't the case.  (I personally think it's a
 little silly not to trust your service provider to secure a network when
 they have root access to all the machines powering your cloud... but I
 digress.)

  Part of the reason I was hoping this wasn't the case, isn't just because
 it consumes a lot more CPU on the load balancers, but because now we
 potentially have to manage client certificates and CA certificates (for
 authenticating from the proxy to back-end app servers). And we also have to
 decide whether we allow the proxy to use a different client cert / CA per
 pool, or per member.


 If you choose to support re-encryption on your service then you are
 free to charge for the extra CPU cycles. I'm not convinced re-encryption and
 SSL termination in general need to be mandatory, but I think the API should
 allow them to be specified.

  Yes, I realize one could potentially use no client cert or CA (ie.
 encryption but no auth)...  but that actually provides almost no extra
 security over the unencrypted case:  If you can sniff the traffic between
 proxy and back-end server, it's not much more of a stretch to assume you
 can figure out how to be a man-in-the-middle.


  Yes, but considering you have no problem advocating pure SSL
 termination for your customers (decryption on the front end and plain text
 on the back end), I'm actually surprised this disturbs you. I would recommend
 users use straight SSL passthrough or re-encryption, but I wouldn't force
 this on them should they choose naked encryption with no checking.


  Do any of you have a use case where some back-end members require SSL
 authentication from the proxy and some don't? (Again, deciding whether
 client cert / CA usage should attach to a pool or to a member.)


  When you say client cert, are you referring to the end user's X509
 certificate (to be rejected by the back-end server), or are you referring to
 the back-end server's X509 certificate, which the load balancer would reject
 if it discovered the back-end server had a bad signature or mismatched key? I
 am speaking of the case where the user wants re-encryption but wants to be
 able to install CA certificates that sign back-end servers' keys via PKIX
 path building. I would even like to offer the customer the ability to skip
 hostname validation, since not everyone wants to expose DNS entries for IPs
 that are not publicly routable anyway. Unless you're suggesting that we
 should force this on the user, which likewise forces us to host a name
 server that maps hosts to the X509s' subject CN fields. Users should be
 free to validate back-end 

Re: [openstack-dev] [neutron] Neutron BP review process for Juno

2014-04-18 Thread Nader Lahouti
Do I need any permission to upload a design specification in the
'specs/juno' folder in neutron-specs?

I tried to upload and get this message:
fatal: unable to access 'https://github.com/openstack/neutron-specs.git/':
The requested URL returned error: 403

Please advise.

Thanks,
Nader.



On Thu, Apr 17, 2014 at 12:18 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 Wow, easiest merge ever!  Can we get this repository counted in our
 stats?!  ;)

 Carl

 On Thu, Apr 17, 2014 at 1:09 PM, Kyle Mestery mest...@noironetworks.com
 wrote:
  On Thu, Apr 17, 2014 at 1:18 PM, Carl Baldwin c...@ecbaldwin.net
 wrote:
  Sure thing [1].  The easiest change I saw was to remove the
  restriction that the number of sub titles is exactly 9.  This won't
  require any of the other blueprints already posted for review to
  change.  See what you think.
 
  This was a good change, and in fact it's already been merged. Thanks!
 
  Kyle
 
  Carl
 
  [1] https://review.openstack.org/#/c/88381/
 
  On Wed, Apr 16, 2014 at 3:43 PM, Kyle Mestery 
 mest...@noironetworks.com wrote:
  On Wed, Apr 16, 2014 at 4:26 PM, Carl Baldwin c...@ecbaldwin.net
 wrote:
  Neutron (and Nova),
 
  I have had one thing come up as I've been using the template.  I find
  that I would like to add just a little document structure in the form
  of a sub-heading or two under the Proposed change heading but before
  the required Alternatives sub-heading.  However, this is not allowed
  by the tests.
 
  Proposed change
  =
 
  I want to add a little bit of document structure here but I cannot
  because any sub-headings would be counted among the exactly 9
  sub-headings I'm required to have starting with Alternatives.  This
  seems a bit unnatural to me.
 
  Alternatives
  
  ...
 
 
  The sub-headings allow structure underneath but the first heading
  doesn't.  Could be do it a little bit differentely?  Maybe something
  like this?
 
  Proposed change
  =
 
  Overview
  
 
  I could add structure under here.
 
  Alternatives
  
  ...
 
  Thoughts?  Another idea might be to change the test to require at
  least the nine required sub-headings but allow for the addition of
  another.
 
  I'm fine with either of these proposed changes to be honest. Carl,
  please submit a patch to neutron-specs and we can review it there.
 
  Also, I'm in the process of adding some jenkins jobs for neutron-specs
  similar to nova-specs.
 
  Thanks,
  Kyle
 
  Carl
 
  On Tue, Apr 15, 2014 at 4:07 PM, Kyle Mestery 
 mest...@noironetworks.com wrote:
  Given the success the Nova team has had in handling reviews using
  their new nova-specs gerrit repository, I think it makes a lot of
  sense for Neutron to do the same. With this in mind, I've added
  instructions to the BP wiki [1] for how to do. Going forward in Juno,
  this is how Neutron BPs will be handled by the Neutron core team. If
  you are currently working on a BP or code for Juno which is attached
  to a BP, please file the BP using the process here [1].
 
  Given this is our first attempt at using this for reviews, I
  anticipate there may be a few hiccups along the way. Please reply on
  this thread or reach out in #openstack-neutron and we'll sort through
  whatever issues we find.
 
  Thanks!
  Kyle
 
  [1] https://wiki.openstack.org/wiki/Blueprints#Neutron
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-18 Thread Stephen Balukoff
Hi y'all!

Yes-- I agree that it is a very bad idea to delete a primitive that's being
shared by other load balancing configurations. The only case where this
seems acceptable to me is the case that German mentioned where all assets
on a given user's account are being wrapped up. But even in this case,
assuming each load balancing service is being deleted from the root level,
eventually none of the primitives will be shared anymore...

In fact, what do y'all think of this policy?  If a primitive is shared at
all, an attempt to delete it directly should return an error indicating
it's still in use.
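A sketch of what that policy could look like at the API layer: an immediate reference check before the delete proceeds. The data structures and exception name here are invented for illustration, not an actual Neutron interface:

```python
# Illustrative sketch: refuse to delete a primitive that other load
# balancing configurations still reference.  ``refcounts`` stands in
# for whatever reference tracking the real backend would use.

class PrimitiveInUse(Exception):
    """Raised when a delete targets a primitive that is still shared."""

def delete_primitive(primitives, refcounts, primitive_id):
    if refcounts.get(primitive_id, 0) > 0:
        raise PrimitiveInUse(
            "primitive %s is referenced by %d other object(s)"
            % (primitive_id, refcounts[primitive_id]))
    del primitives[primitive_id]
```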

Also, I've been making this assumption the whole time, but it's probably
worth mentioning: I'm assuming primitives cannot be shared among tenants.
(It seems to me that we'd be opening up a whole nasty can of worms
security-wise if we allowed tenants to share primitives.)

Here's an alternate idea for handling the case of abandoned primitives
suggested by my team-mate Dustin Lundquist:

Instead of doing any kind of cascading delete when a root object gets
deleted, follow Brandon's idea and simply detach the primitives from the
object being deleted. Later, a periodic garbage-collection process goes
through and removes any orphaned primitives a reasonable interval after they
were orphaned (this interval could be configurable per operator, per tenant,
per user... whatever). If we do this:

* Each primitive would need an additional 'orphaned_at' attribute
* Primitives (other than the root object) created outside of a single-call
interface would be created in an orphaned state.
* Viewing the attributes of a primitive should also show whether it's an
orphan, and if so, when the garbage collector will delete it.
* Update and creation tasks that reference any connected primitives would
clear the above orphaned_at attributes of said primitives
* Deletion tasks would need to check connected primitives and update their
orphaned_at attribute if said deletion orphans the primitive (ie. do an
immediate reference check, set orphaned_at if references = 0)
* It probably also makes sense to allow some primitives to have a flag set
by the user to prevent garbage collection from ever cleaning them up (ex.
SSL certificates are probably a good candidate for this). Maybe a persist
boolean attribute or something?
* Garbage collection never cleans up root objects.
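Taken together, the rules above describe a fairly simple periodic sweep. A minimal sketch, with all class and function names assumed for illustration rather than taken from any real implementation:

```python
# Illustrative sketch of the periodic garbage collector described above.
# A primitive is reclaimed only if it has been orphaned for longer than
# the grace interval, is not a root object, and is not flagged "persist".

class Primitive(object):
    def __init__(self, name, is_root=False, persist=False):
        self.name = name
        self.is_root = is_root      # root objects are never collected
        self.persist = persist      # user opt-out (e.g. SSL certificates)
        self.orphaned_at = None     # timestamp set when refcount hits zero

def collect_garbage(primitives, now, grace_seconds=3600):
    """Return the primitives that survive this collection pass."""
    survivors = []
    for p in primitives:
        if p.is_root or p.persist or p.orphaned_at is None:
            survivors.append(p)             # not eligible for collection
        elif now - p.orphaned_at < grace_seconds:
            survivors.append(p)             # orphaned, but still in grace
        # otherwise: reclaimed
    return survivors
```

Creating or attaching a primitive would reset its orphaned_at to None, which automatically pulls it back out of the collector's reach on the next pass.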

It seems to me the above:
* Would prevent the CI user from having a bazillion orphaned primitives.
* Does not immediately nuke anything the customer wasn't planning on nuking
* Probably follows the principle of least surprise a little better than my
previous conditional cascading delete suggestion
* Still allows the simple act of deleting all the root objects to
eventually delete all primitives on an account (ie. the 'delete this
account' task German was talking about.)
* Has a secondary side-benefit of not immediately destroying all the
supporting primitives of an object if the user accidentally nukes the wrong
object.

Also, it may be appropriate for some primitives to skip the garbage
collection process and simply get deleted when their immediate parent
primitive is deleted. This applies to primitives that aren't allowed to be
shared, like:
* Members (ie, deleting a pool deletes all its members, immediately)
* join objects, like the 'rule' object that associates a listener with a
non-default pool in the case of L7 switching
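For these never-shared children, the delete can simply cascade one level down without involving the garbage collector at all. A minimal sketch, with an invented data layout standing in for the real stores:

```python
# Illustrative sketch: deleting a pool immediately deletes its members,
# since members can never be shared with another pool.  ``pools`` maps a
# pool id to its list of member ids; both dicts are stand-ins for the
# real persistence layer.

def delete_pool(pools, members, pool_id):
    for member_id in pools.pop(pool_id, []):
        members.pop(member_id, None)    # members die with their pool
```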

I still like the idea of also having a cascading delete of some kind, in
case the user wants to see all the primitives go away immediately (though
they could always step through these themselves), or, again, as an
immediate way to ensure all objects associated with a customer account are
cleaned up quickly when the account is deleted. Though, I'm also thinking
this should be an explicit flag the user has to set. (And again, even with
this flag set, shared primitives are not deleted.)

What do y'all think of the above idea?

Thanks,
Stephen



On Fri, Apr 18, 2014 at 2:35 PM, Carlos Garza carlos.ga...@rackspace.com wrote:


  On Apr 17, 2014, at 8:39 PM, Stephen Balukoff sbaluk...@bluebox.net
  wrote:

  Hello German and Brandon!

  Responses in-line:


 On Thu, Apr 17, 2014 at 3:46 PM, Brandon Logan 
 brandon.lo...@rackspace.com wrote:

 Stephen,
 I have responded to your questions below.


 On 04/17/2014 01:02 PM, Stephen Balukoff wrote:

  Howdy folks!

  Based on this morning's IRC meeting, it seems to me there's some
 contention and confusion over the need for single call functionality for
 load balanced services in the new API being discussed. This is what I
 understand:

  * Those advocating single call are arguing that this simplifies the
 API for users, and that it more closely reflects the users' experience with
 other load balancing products. They don't want to see this functionality
 necessarily delegated to an orchestration layer (Heat), because
 coordinating how this works across two OpenStack projects is unlikely to
 see success (ie. it's hard