[openstack-dev] Arbitrary extra specs for compute nodes?

2014-06-07 Thread Joe Cropper
Hi Folks,

I was wondering if there was any such mechanism in the compute node
structure to hold arbitrary key-value pairs, similar to flavors'
extra_specs concept?

It appears there are entries for things like pci_stats, stats and the recently
added extra_resources -- but these all tend to have more specific usages,
as opposed to arbitrary data that an operator may want to maintain about the
compute node over the course of its lifetime.

Unless I'm overlooking an existing construct for this, would this be
something that folks would welcome a Juno blueprint for -- i.e., adding an
extra_specs-style column with a JSON-formatted string that could be loaded
as a dict of key-value pairs?
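
For illustration only, here is a rough sketch of the kind of thing I have in
mind -- a hypothetical JSON-backed extra_specs column loaded as a dict (the
table and column names below are placeholders, not an actual Nova schema
change):

    # Hypothetical sketch only -- not an existing Nova column or API.  It just
    # shows the "JSON text column loaded as a dict of key-value pairs" idea.
    import json

    from sqlalchemy import Column, Integer, String, Text, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import Session

    Base = declarative_base()


    class ComputeNode(Base):
        __tablename__ = 'compute_nodes_example'   # placeholder table name

        id = Column(Integer, primary_key=True)
        hypervisor_hostname = Column(String(255))
        extra_specs = Column(Text, default='{}')  # JSON-encoded key/value pairs

        def get_extra_specs(self):
            return json.loads(self.extra_specs or '{}')

        def set_extra_specs(self, specs):
            self.extra_specs = json.dumps(specs)


    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)

    session = Session(bind=engine)
    node = ComputeNode(hypervisor_hostname='compute-01')
    node.set_extra_specs({'rack': 'r42', 'maintenance_window': 'sunday'})
    session.add(node)
    session.commit()

    print(session.query(ComputeNode).first().get_extra_specs())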

Thoughts?

Thanks,
Joe
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Mid cycle meetup

2014-06-07 Thread Clint Byrum
Excerpts from Devananda van der Veen's message of 2014-06-06 12:04:08 -0700:
 I have just announced the Ironic mid-cycle in Beaverton, co-located
 with Nova. That's the main one for Ironic.
 
 However, there are many folks working on both TripleO and Ironic, so I
 wouldn't be surprised if there is a (small?) group at the TripleO
 sprint hacking on Ironic, even if there's nothing official, and even
 if the dates overlap (which I really hope they don't). I'm going to
 try to attend the TripleO sprint if at all possible, as that project
 remains one of the largest users of Ironic that I'm aware of.
 

Yes, we desperately need expertise as our intention is to push forward
on scale testing, and we'll need experts on Ironic's internals to push
optimizations where they're needed. I hope that the Ironic team is
large enough that there can be some at the Nova sprint, and some at the
TripleO sprint if they happen to be concurrent.

I believe we would like for the TripleO sprint to be a bit later in the
cycle though, and I'm seeing dates proposed that would reflect that.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova-compute vfsguestfs

2014-06-07 Thread Richard W.M. Jones
On Tue, May 27, 2014 at 03:25:10PM +0530, abhishek jain wrote:
 Hi Daniel
 
 Thanks for the help.
 The end result of my setup is that the VM is stuck in the Spawning state on
 my compute node, whereas it is working fine on the controller node.
 Therefore I'm comparing the nova-compute logs of both the compute node and
 the controller node and trying to proceed step by step.
 I have all the above packages enabled.

 Do you have any idea why the VM is stuck in the spawning state?

The most common reason is that nested virt is broken.  libguestfs is the canary
in the mine here, not the cause of the problem.

Rich.

 
 
 On Tue, May 27, 2014 at 2:38 PM, Daniel P. Berrange 
 berra...@redhat.comwrote:
 
  On Tue, May 27, 2014 at 12:04:23PM +0530, abhishek jain wrote:
   Hi
   Below is the code I'm going to refer to:
  
vim /opt/stack/nova/nova/virt/disk/vfs/api.py
  
   #
  
    try:
        LOG.debug(_("Trying to import guestfs"))
        importutils.import_module("guestfs")
        hasGuestfs = True
    except Exception:
        pass

    if hasGuestfs:
        LOG.debug(_("Using primary VFSGuestFS"))
        return importutils.import_object(
            "nova.virt.disk.vfs.guestfs.VFSGuestFS",
            imgfile, imgfmt, partition)
    else:
        LOG.debug(_("Falling back to VFSLocalFS"))
        return importutils.import_object(
            "nova.virt.disk.vfs.localfs.VFSLocalFS",
            imgfile, imgfmt, partition)
  
   ###
  
    When I'm launching a VM from the controller node onto the compute node, the
    nova-compute logs on the compute node display "Falling back to VFSLocalFS"
    and the result is that the VM is stuck in the spawning state.
    However, when I'm trying to launch a VM onto the controller node from the
    controller node itself, the nova-compute logs on the controller node
    display "Using primary VFSGuestFS" and I'm able to launch the VM on the
    controller node.
    Is there any module in the kernel or any package that I need to
    enable? Please help regarding this.
 
  VFSGuestFS requires the libguestfs Python module and the corresponding native
  package to be present, and it only works with KVM/QEMU-enabled hosts.

  VFSLocalFS requires the loopback module, the nbd module, qemu-nbd, kpartx and
  a few other miscellaneous host tools.

  Neither of these should cause a VM to get stuck in the spawning
  state, even if the things they need are missing.
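
(A quick way to sanity-check a host is to run the same import test by hand; a
minimal sketch, not nova code:)

    # Mirrors the fallback check in the api.py snippet above: if the libguestfs
    # Python bindings import cleanly, nova picks VFSGuestFS, else VFSLocalFS.
    try:
        import guestfs  # noqa
        print("guestfs importable -> VFSGuestFS would be used")
    except ImportError as exc:
        print("guestfs import failed (%s) -> falling back to VFSLocalFS" % exc)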
 
  Regards,
  Daniel
  --
  |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/:|
  |: http://libvirt.org  -o- http://virt-manager.org:|
  |: http://autobuild.org   -o- http://search.cpan.org/~danberr/:|
  |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc:|
 


-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-07 Thread Richard W.M. Jones
On Sat, May 31, 2014 at 01:25:04AM +0800, Qin Zhao wrote:
 Hi all,
 
 When I run Icehouse code, I encountered a strange problem. The nova-compute
 service becomes stuck when I boot instances. I reported this bug in
 https://bugs.launchpad.net/nova/+bug/1313477.
 
 After thinking about it for several days, I feel I know its root cause. This
 bug should be a deadlock problem caused by pipe fd leaking.  I drew a diagram
 to illustrate this problem.
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720
 
 However, I have not found a very good solution to prevent this deadlock.
 This problem is related to the Python runtime, libguestfs, and eventlet. The
 situation is a little complicated. Is there any expert who can help me
 look for a solution? I would appreciate your help!

Thanks for the useful diagram.  libguestfs itself is very careful to
open all file descriptors with O_CLOEXEC (atomically if the OS
supports that), so I'm fairly confident that the bug is in Python 2,
not in libguestfs.

Another thing to say is that g.shutdown() sends a kill 9 signal to the
subprocess.  Furthermore you can obtain the qemu PID (g.get_pid()) and
send any signal you want to the process.
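
A minimal sketch of those calls (illustrative only; the disk image path is a
placeholder):

    # Illustrative use of the handle APIs mentioned above; not Nova code.
    import guestfs

    g = guestfs.GuestFS()
    g.add_drive_opts("/tmp/test-disk.img", readonly=1)
    g.launch()

    qemu_pid = g.get_pid()   # PID of the qemu subprocess behind this handle
    print("qemu is running as pid %d" % qemu_pid)
    # os.kill(qemu_pid, some_signal) would deliver any signal you want to it.

    g.shutdown()             # sends SIGKILL to the subprocess and cleans up
    g.close()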

I wonder if a simpler way to fix this wouldn't be something like
adding a tiny C extension to the Python code to use pipe2 to open the
Python pipe with O_CLOEXEC atomically?  Are we allowed Python
extensions in OpenStack?
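
For reference, the pure-Python workaround (and why it isn't a complete fix)
looks roughly like this:

    # Pure-Python 2 workaround: mark both pipe ends close-on-exec after
    # creating them.  This is NOT atomic -- another thread or greenthread
    # could fork between os.pipe() and the fcntl calls -- which is exactly
    # why an atomic pipe2()-based C extension is suggested above.
    import fcntl
    import os

    r, w = os.pipe()
    for fd in (r, w):
        flags = fcntl.fcntl(fd, fcntl.F_GETFD)
        fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)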

BTW do feel free to CC libgues...@redhat.com on any libguestfs
problems you have.  You don't need to subscribe to the list.

Rich.


-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines.  Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Tracking bug and patch statuses

2014-06-07 Thread Matt Riedemann



On 6/6/2014 1:40 AM, Joe Gordon wrote:

Hi All,

In the nova meeting this week, we discussed some of the shortcomings of
our recent bug day. One of the ideas that was brought up was to do a
better job of keeping track of stale bugs (assigned but not worked on)
[0]. To that end I put something together, based on what infra uses for
their bug days, to go through all the open bugs in a project and list the
related gerrit patches and their state [1].

I ran this on nova [2] (just the first 750 bugs or so) and
python-novaclient [3].  From the looks of it we can be doing a much
better job of keeping bug states in sync with patches etc.

[0]
http://eavesdrop.openstack.org/meetings/nova/2014/nova.2014-06-05-21.01.log.html
[1] https://github.com/jogo/openstack-infra-scripts
[2] http://paste.openstack.org/show/83055/
[3] http://paste.openstack.org/show/83057


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Can you paste 2 and 3 somewhere besides p.o.o?  That doesn't seem to 
work anymore.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-07 Thread Mike Bayer

On Jun 6, 2014, at 8:12 PM, Devananda van der Veen devananda@gmail.com 
wrote:

 I think some things are broken in the oslo-incubator db migration code.
 
 Ironic moved to this when Juno opened and things seemed fine, until recently 
 when Lucas tried to add a DB migration and noticed that it didn't run... So I 
 looked into it a bit today. Below are my findings.
 
 Firstly, I filed this bug and proposed a fix, because I think that tests that 
 don't run any code should not report that they passed -- they should report 
 that they were skipped.
   https://bugs.launchpad.net/oslo/+bug/1327397
   No notice given when db migrations are not run due to missing engine
 
 Then, I edited the test_migrations.conf file appropriately for my local mysql 
 service, ran the tests again, and verified that migration tests ran -- and 
 they passed. Great!
 
 Now, a little background... Ironic's TestMigrations class inherits from 
 oslo's BaseMigrationTestCase, then opportunistically checks each back-end, 
 if it's available. This opportunistic checking was inherited from Nova so 
 that tests could pass on developer workstations where not all backends are 
 present (eg, I have mysql installed, but not postgres), and still 
 transparently run on all backends in the gate. I couldn't find such 
 opportunistic testing in the oslo db migration test code, unfortunately - but 
 maybe it's well hidden.
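 
 (For context, the opportunistic pattern being described looks roughly like
 this -- illustrative names and connection URL, not the actual oslo or Ironic
 test code:)
 
     # Probe the backend and skip (rather than fail) when it is not available
     # on the developer's machine; the gate, with all backends present, runs
     # everything.
     import sqlalchemy
     import testtools

     MYSQL_TEST_URL = "mysql://openstack_citest:openstack_citest@localhost/test"


     def _have_mysql():
         try:
             engine = sqlalchemy.create_engine(MYSQL_TEST_URL)
             engine.connect().close()
             return True
         except Exception:
             return False


     class TestMigrations(testtools.TestCase):

         def test_mysql_opportunistically(self):
             if not _have_mysql():
                 self.skipTest("mysql backend not available")
             # ... walk all migrations against the mysql backend here ...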
 
 Anyhow. When I stopped the local mysql service (leaving the configuration 
 unchanged), I expected the tests to be skipped, but instead I got two 
 surprise failures:
 - test_mysql_opportunistically() failed because setUp() raises an exception 
 before the test code could call _have_mysql()
 - test_mysql_connect_fail() actually failed! Again, because setUp() raises an 
 exception before running the test itself
 
 Unfortunately, there's one more problem... when I run the tests in parallel, 
 they fail randomly because sometimes two test threads run different migration 
 tests, and the setUp() for one thread (remember, it calls _reset_databases) 
 blows up the other test.
 
 Out of 10 runs, it failed three times, each with different errors:
   NoSuchTableError: `chassis`
   ERROR 1007 (HY000) at line 1: Can't create database 'test_migrations'; 
 database exists
   ProgrammingError: (ProgrammingError) (1146, Table 
 'test_migrations.alembic_version' doesn't exist)
 
 As far as I can tell, this is all coming from:
   
 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/test_migrations.py#L86;L111

Hello -

Just an introduction, I’m Mike Bayer, the creator of SQLAlchemy and Alembic 
migrations. I’ve just joined on as a full time Openstack contributor, and 
trying to help improve processes such as these is my primary responsibility.

I’ve had several conversations already about how migrations are run within test 
suites in various openstack projects.   I’m kind of surprised by this approach 
of dropping and recreating the whole database for individual tests.   Running 
tests in parallel is obviously made very difficult by this style, but even 
beyond that, a lot of databases don’t respond well to lots of 
dropping/rebuilding of tables and/or databases in any case; while SQLite and 
MySQL are probably the most forgiving of this, a backend like Postgresql is 
going to lock tables from being dropped more aggressively, if any open 
transactions or result cursors against those tables remain, and on a backend 
like Oracle, the speed of schema operations starts to become prohibitively 
slow.   Dropping and creating tables is in general not a very speedy task on 
any backend, and on a test suite that runs many tests against a fixed schema, I 
don’t see why a full drop is necessary.

If you look at SQLAlchemy’s own tests, they do in fact create tables on each 
test, or just as often for a specific suite of tests.  However, this is due to 
the fact that SQLAlchemy tests are testing SQLAlchemy itself, so the database 
schemas used for these tests are typically built explicitly for small groups or 
individual tests, and there are ultimately thousands of small “mini schemas” 
built up and torn down for these tests.   A lot of framework code is involved 
within the test suite to keep more picky databases like Postgresql and Oracle 
happy when building up and dropping tables so frequently.

However, when testing an application that uses a fixed set of tables, as should 
be the case for the majority if not all Openstack apps, there’s no reason that 
these tables need to be recreated for every test.   Typically, the way I 
recommend is that the test suite includes a “per suite” activity which creates 
the test schema just once (with or without using CREATE DATABASE; I’m not a fan 
of tests running CREATE DATABASE as this is not a command so easily available 
in some environments).   The tests themselves then run within a transactional 
container, such that each test performs all of its work within a transaction
that is rolled back at the end of the test.
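
A minimal sketch of that arrangement -- schema created once per suite, each
test wrapped in a rolled-back transaction -- might look like the following; it
is illustrative, not oslo.db's actual fixture code:

    # Build the fixed schema once "per suite", then run each test inside a
    # transaction that is rolled back, so tables are never dropped or
    # recreated between tests.
    import sqlalchemy as sa
    from sqlalchemy import orm
    from sqlalchemy.ext.declarative import declarative_base
    import testtools

    Base = declarative_base()


    class Chassis(Base):                      # stand-in for one fixed app table
        __tablename__ = "chassis"
        id = sa.Column(sa.Integer, primary_key=True)


    engine = sa.create_engine("sqlite://")    # URL is a placeholder backend
    Base.metadata.create_all(engine)          # per-suite: create the schema once


    class TransactionalTestCase(testtools.TestCase):
        """Each test runs inside a transaction that is rolled back afterwards."""

        def setUp(self):
            super(TransactionalTestCase, self).setUp()
            self.conn = engine.connect()
            self.trans = self.conn.begin()
            self.session = orm.Session(bind=self.conn)

        def tearDown(self):
            self.session.close()
            self.trans.rollback()             # undo the test's writes; schema stays
            self.conn.close()
            super(TransactionalTestCase, self).tearDown()


    class TestSomething(TransactionalTestCase):
        def test_add_chassis(self):
            self.session.add(Chassis(id=1))
            self.session.flush()              # visible to this test only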

Re: [openstack-dev] [openstack-sdk-php] Transport Clients, Service Clients, and state

2014-06-07 Thread Matthew Farina
My comments are inline below...


On Fri, Jun 6, 2014 at 8:47 AM, Jamie Hannaford 
jamie.hannaf...@rackspace.com wrote:

Whether the same one is used for each service or a new one is used for
 each service doesn't matter.


  Yes, it does matter IMO - and here are the reasons why:

  1. By sharing a global transport object you’re introducing the risk of
 side effects. A transport object contains state that can be modified by its
 service object. Somewhere along the line, a Swift service could introduce a
 state modification that’s completely incompatible with a Nova service.
 What’s worse is that it will be a nightmare to debug - you’d have to trawl
 the entire service to see points where it interacts with the transport
 client. This is why people don’t use singletons - it’s incredibly risky and
 hard to debug. Object instantiations, on the other hand, are cheap and they
 offer protection through isolation.


There are two things here.

First, if the transport client has no state for the service then it doesn't
get mixed up on state. A Swift client would never introduce state for Swift
to the transport client, because the transport client has no state for this.
It's for transporting.

Second, it's not a singleton. You could have the same transport client for
all of them, a different transport client for each, or any permutation in
between. If the transport client contains no state for a service then it
doesn't matter.

To quote wikipedia, the singleton pattern is a design pattern that
restricts the instantiation of a class to one object. A singleton is an
intended restriction. This isn't a restriction. It's about options.

If the service client is responsible for the state of the service, and the
transport client is responsible for transporting information and the state
of transport (e.g., is the info going through a proxy), then you don't run
into issues where the transport client knows the state of a service, because
that's the responsibility of the service client, not the transport client.




   2. Certain services rely on custom transport configurations. Each
 transport client has a base URL that is used for issuing HTTP requests -
 every time you execute a request, you’re effectively adding relevant paths
 for that API operation. A Swift service will have different URL endpoints
 from a Nova one - so there’s no point sharing. Another example is custom
 headers. Marconi requests custom headers to be sent, as does Glance. You
 save these as default headers on the transport client, that are sent for
 all requests that the service executes. These custom headers are not
 applicable to any other service except Marconi/Glance.


If a transport client knows the base URL then it knows state about the
service. The separation of concerns is broken. Why does it need to know the
URL? Why does it need to know about custom headers? Customizations and
state for a service are the responsibility of the service client, not
the transport client.

Why do a service client and a transport client both need to know the state
of the service? The responsibility becomes blurred here.




  In the use-cases you mentioned, you’d easily handle that. You’d pass in
 proxy settings through the OpenStack entry point (like you do with your
 username and password), which would then percolate down into the transport
 clients as they’re created. These settings would be injected into each
 transport client. So if you require a different set-up for public clouds -
 that’s fine - you define different settings and fire up another $openstack
 object.


How things get passed around isn't an issue. I don't think we need to
debate how we pass settings around right now. The issue is separation of
concerns between the service clients and the transport clients.




   *-OR-* you could define different transport settings for different
  services - by passing them into the $openstack->get('compute',
  ['custom_settings' => true]); call. This is great because it gives users
 the ability to apply custom transport options to certain services. So if I
 want to interact with a private Compute instance, I’d pass in a custom
 transport configuration for that service; if I wanted to use a proxy with
 my Swift service, I can pass details into that service when creating it.
 You can only do this (provide custom transport settings for 1 service) if
 each transport client is isolated, i.e. if there’s a 1-to-1 relationship
 between service and transport client. If you have a global one, you
 couldn’t introduce custom settings per service because it’d affect ALL
 others, which is a bad user experience.


We're not talking about an application. We're talking about an SDK with a
simple entry point for ease and building blocks you can do a lot with. This
isn't about a 1-to-1 relationship between a service and transport client OR
a global one. It's different than that.

They should have different responsibilities. Entirely different. A
transport client moves data. It doesn't know 

Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-06-07 Thread Eugene Nikanorov
Hi folks,

There was a small discussion about the better way of doing sql operations
for vni synchronization with the config.
Initial proposal was to handle those in chunks. Carl also suggested to
issue a single sql query.
I've done some testing with MySQL and Postgres.
I've tested the following scenario: the vxlan range is changed from
5:15 to 0:10 and vice versa.
That involves adding and deleting 50k vnis in each test.

Here are the numbers:
50k vnis to add/delete (seconds)   Pg adding vnis  Pg deleting vnis  Pg total  Mysql adding vnis  Mysql deleting vnis  Mysql total
non-chunked sql                          23              22             45            14                 20               34
chunked in 100                           20              17             37            14                 17               31

I've done about 5 tries for each number to minimize random floating
factors (due to swap, disk or cpu activity, or other factors).
It might be surprising that issuing multiple sql statements instead of one
big one is a little bit more efficient, so I would appreciate it if someone
could reproduce those numbers.
Also I'd like to note that the part of the code that iterates over the vnis
fetched from the db takes 10 seconds on both mysql and postgres and is
included in the 'deleting vnis' numbers.
In other words, the difference between multiple DELETE sql statements and a
single one is even bigger (in percent) than these numbers show.

The code which I used to test is here:
http://paste.openstack.org/show/83298/
Right now the chunked version is commented out, so to switch between
versions some lines should be commented and others uncommented.
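
For reference, the two variants being compared look roughly like this (a
sketch; the table and column names are assumed to follow ML2's vxlan
allocation table, and the paste above is the authoritative code):

    # Sketch of the two approaches compared in the numbers above: inserting
    # the missing vnis with one big multi-row INSERT vs. in chunks of 100.
    import sqlalchemy as sa

    metadata = sa.MetaData()
    vxlan_allocations = sa.Table(
        "ml2_vxlan_allocations", metadata,
        sa.Column("vxlan_vni", sa.Integer, primary_key=True,
                  autoincrement=False),
        sa.Column("allocated", sa.Boolean, nullable=False, default=False),
    )


    def add_vnis_single(engine, vnis):
        """One big multi-row INSERT for all missing vnis."""
        engine.execute(vxlan_allocations.insert(),
                       [{"vxlan_vni": vni, "allocated": False} for vni in vnis])


    def add_vnis_chunked(engine, vnis, chunk_size=100):
        """The same inserts, issued in chunks of 100 rows."""
        vnis = list(vnis)
        for i in range(0, len(vnis), chunk_size):
            engine.execute(vxlan_allocations.insert(),
                           [{"vxlan_vni": vni, "allocated": False}
                            for vni in vnis[i:i + chunk_size]])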

Thanks,
Eugene.


P.S. I'm also afraid that issuing one big sql statement (and it will be
megabytes big) could lead to timeouts/deadlocks just because it will take
too much time. However, I'm not 100% sure about that; it's just a bare
concern of mine.


On Thu, Jun 5, 2014 at 1:06 PM, Xurong Yang ido...@gmail.com wrote:

 Great.
 I will do more tests based on Eugene Nikanorov's modification.

 Thanks,


 2014-06-05 11:01 GMT+08:00 Isaku Yamahata isaku.yamah...@gmail.com:

 Wow, great.
 I think the same applies to the gre type driver,
 so we should create a similar one after the vxlan case is resolved.

 thanks,
 thanks,


 On Thu, Jun 05, 2014 at 12:36:54AM +0400,
 Eugene Nikanorov enikano...@mirantis.com wrote:

  We hijacked the vxlan initialization performance thread with ipam! :)
  I've tried to address initial problem with some simple sqla stuff:
  https://review.openstack.org/97774
  With sqlite it gives ~3x benefit over existing code in master.
  Need to do a little bit more testing with real backends to make sure
  parameters are optimal.
 
  Thanks,
  Eugene.
 
 
  On Thu, Jun 5, 2014 at 12:29 AM, Carl Baldwin c...@ecbaldwin.net
 wrote:
 
   Yes, memcached is a candidate that looks promising.  First things
 first,
   though.  I think we need the abstraction of an ipam interface merged.
  That
   will take some more discussion and work on its own.
  
   Carl
   On May 30, 2014 4:37 PM, Eugene Nikanorov enikano...@mirantis.com
   wrote:
  
I was thinking it would be a separate process that would
 communicate over
   the RPC channel or something.
   memcached?
  
   Eugene.
  
  
   On Sat, May 31, 2014 at 2:27 AM, Carl Baldwin c...@ecbaldwin.net
 wrote:
  
   Eugene,
  
   That was part of the whole new set of complications that I
   dismissively waved my hands at.  :)
  
   I was thinking it would be a separate process that would communicate
   over the RPC channel or something.  More complications come when you
   think about making this process HA, etc.  It would mean going over
 RPC
   to rabbit to get an allocation which would be slow.  But the current
   implementation is slow.  At least going over RPC is greenthread
   friendly where going to the database doesn't seem to be.
  
   Carl
  
   On Fri, May 30, 2014 at 2:56 PM, Eugene Nikanorov
   enikano...@mirantis.com wrote:
Hi Carl,
   
The idea of in-memory storage was discussed for similar problem,
 but
   might
not work for multiple server deployment.
Some hybrid approach though may be used, I think.
   
Thanks,
Eugene.
   
   
On Fri, May 30, 2014 at 8:53 PM, Carl Baldwin c...@ecbaldwin.net
 
   wrote:
   
This is very similar to IPAM...  There is a space of possible
 ids or
addresses that can grow very large.  We need to track the
 allocation
of individual ids or addresses from that space and be able to
 quickly
come up with a new allocations and recycle old ones.  I've had
 this in
the back of my mind for a week or two now.
   
A similar problem came up when the database would get populated
 with
the entire free space worth of ip addresses to reflect the
availability of all of the individual addresses.  With a large
 space
(like an ip4 /8 or practically any ip6 subnet) this would take a
 very
long time or never finish.
   
Neutron was a little smarter about this.  It compressed
 availability
in to availability ranges in a separate table.  This solved the
original problem but is not problem free.  It turns out that
 writing
database operations to manipulate both 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-07 Thread Jain, Vivek
+1 for #2.

In addition, I think it would be nice if barbican maintained versioned data
on updates, which means a consumer of the barbican APIs could request data
from an older version if needed. This can address the concerns expressed by
German. For example, if certificates were updated in barbican but somehow the
update is not compatible with the load balancer device, then the lbaas API
user gets an option to fall back to the older working certificate. That will
avoid downtime of lbaas-managed applications.

Thanks,
Vivek

On 6/6/14, 3:52 PM, Eichberger, German german.eichber...@hp.com wrote:

Jorge + John,

I am most concerned with a user changing his secret in barbican and then
the LB trying to update and causing downtime. Some users like to control
when the downtime occurs.

For #1 it was suggested that once the event is delivered it would be up
to a user to enable an auto-update flag.

In the case of #2 I am a bit worried about error cases: e.g. uploading
the certificates succeeds but registering the loadbalancer(s) fails. So
using the barbican system for those warnings might not be as foolproof as
we are hoping. 

One thing I like about #2 over #1 is that it pushes a lot of the
information to Barbican. I think a user would expect when he uploads a
new certificate to Barbican that the system warns him right away about
load balancers using the old cert. With #1 he might get an e-mail from
LBaaS telling him things changed (and we helpfully updated all affected
load balancers) -- which isn't as immediate as #2.

If we implement an auto-update flag for #1 we can have both. Users who
like #2 just hit the flag. Then the discussion changes to what we should
implement first, and I agree with Jorge + John that this should likely be
#2.

German

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Friday, June 06, 2014 3:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

Hey John,

Correct, I was envisioning that the Barbican request would not be
affected, but rather that the GUI operator or API user could use the
registration information to do so, should they want to.

Cheers,
--Jorge




On 6/6/14 4:53 PM, John Wood john.w...@rackspace.com wrote:

Hello Jorge,

Just noting that for option #2, it seems to me that the registration
feature in Barbican would not be required for the first version of this
integration effort, but we should create a blueprint for it nonetheless.

As for your question about services not registering/unregistering, I
don't see an issue as long as the presence or absence of registered
services on a Container/Secret does not **block** actions from
happening, but rather is information that can be used to warn clients
through their processes. For example, Barbican would still delete a
Container/Secret even if it had registered services.

Does that all make sense though?

Thanks,
John


From: Youcef Laribi [youcef.lar...@citrix.com]
Sent: Friday, June 06, 2014 2:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

+1 for option 2.

In addition as an additional safeguard, the LBaaS service could check
with Barbican when failing to use an existing secret to see if the
secret has changed (lazy detection).

Youcef

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Friday, June 06, 2014 12:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on
how Barbican and Neutron LBaaS will interact. There are currently two
ideas in play and both will work. If you have another idea please feel free
to add it so that we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets
from Barbican. For those that aren't up to date with the Neutron LBaaS
API Revision, the project/tenant/user provides a secret (container?) id
when enabling SSL/TLS functionality.

* Example: If a user makes a change to a secret/container in Barbican
then Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will
be supported.
 - Decisions are made on behalf of the user which lessens the amount of
calls the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to
ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2, I
think.

2. Push orchestration decisions to API users. This idea comes with two
assumptions. The 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-07 Thread Eugene Nikanorov
  If a user makes a change to a secret
Can we just disable that by making LBaaS a separate user, so it would store
secrets under an LBaaS 'fake' tenant id?

Eugene.

