Re: [openstack-dev] [infra][nova][all] Pillow breaking gate?

2015-10-01 Thread Carlos Garza
I fixed this on my local Ubuntu 14.04 box by doing "apt-get install
libjpeg-dev".
Can we just make that a low-level package dependency on the gate images
so that we can move forward?

On 10/1/15, 5:48 PM, "Kevin L. Mitchell" 
wrote:

>It looks like Pillow (pulled in by blockdiag, pulled in by
>sphinxcontrib-seqdiag, in test-requirements.txt of nova and probably
>others) had a 3.0.0 release today, and now the gate is breaking because
>libjpeg isn't available in the image... thoughts on how best to address
>this problem?
>-- 
>Kevin L. Mitchell 
>Rackspace
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] tox -egenconfig not working

2015-10-01 Thread Carlos Garza

   If it's because of an error like "ValueError: --enable-jpeg requested
but jpeg not found, aborting" triggered by a dependency pull of
Python Pillow, then I got around it by doing an "apt-get install
libjpeg-dev" on my Ubuntu build.
I agree it seemed odd that it happened on tox but not on pip, but I
noticed it was during the documentation tests.
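(For what it's worth, a quick way to confirm that a rebuilt Pillow actually
picked up libjpeg is a small Python check; this is just a sanity-check sketch
and assumes a Pillow version that ships the PIL.features module:)

    # Run inside the tox/virtualenv where Pillow was rebuilt.
    from PIL import features
    # True only if Pillow was compiled against the libjpeg headers.
    print(features.check("jpg"))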


On 10/1/15, 7:20 PM, "Michal Rostecki"  wrote:

>Hi,
>
>On Wed, Sep 30, 2015 at 2:27 PM, Vikas Choudhary
> wrote:
>> Hi,
>>
>> I tried to generate a sample kuryr config using "tox -e genconfig", but it is
>> failing:
>>
>> genconfig create: /home/vikas/kuryr/.tox/genconfig
>> genconfig installdeps: -r/home/vikas/kuryr/requirements.txt,
>> -r/home/vikas/kuryr/test-requirements.txt
>> ERROR: could not install deps [-r/home/vikas/kuryr/requirements.txt,
>> -r/home/vikas/kuryr/test-requirements.txt]
>> ___
>>summary
>> ___
>> ERROR:   genconfig: could not install deps
>> [-r/home/vikas/kuryr/requirements.txt,
>> -r/home/vikas/kuryr/test-requirements.txt]
>>
>> 
>
>Command "tox -e genconfig" is working perfectly for me. Please:
>- ensure you have up-to-date repo
>- try to remove .tox/ directory and run the command again
>
>>
>> But if I run "pip install -r requirements.txt", it gives no error.
>
>Does "pip install -r test-requirements.txt" give no error as well?
>
>>
>> How to generate a sample config file? Please suggest.
>>
>>
>> -Vikas
>
>Regards,
>Michal Rostecki
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] new common module for Barbican TLS containers interaction

2014-07-30 Thread Carlos Garza

This is not sufficient per our designs. For example, the container consumer 
registration was supposed to be atomic and not handled in separate calls, and I 
don't like the inflexibility of using tls_ids themselves for 
every call. (For example, we are forcing calls to the slow Barbican API for 
pretty much everything that is being done in the API and driver.) I'm 
discussing with Adam a more appropriate set of stub modules. 

Which brings up another point: I want some separation between the Barbican 
interaction code and the low-level X509 parsing code. Hold on though, I'm still 
discussing/negotiating a proposal with Adam Harwell. 


On Jul 27, 2014, at 7:21 AM, Evgeny Fedoruk evge...@radware.com wrote:

 Carlos,
 The module skeleton, including API functions and their brief description, was 
 committed at https://review.openstack.org/#/c/109849/
 
 checkContainerExistance - should be used by the LBaaS API; I will merge it 
 into the TLS implementation change.
   - Throws a TLSContainerNotFound exception.
 validateContainer - should be used by the LBaaS API instead of 
 checkContainerExistance if we are able to implement it for Juno.
   - Throws TLSContainerNotFound or TLSContainerInvalid exceptions.
 _getContainerAndRegisterConsumer - internal. Used by checkContainerExistance 
 and validateContainer. Gets the container by posting the service as a 
 container consumer.
 unregisterContainerConsumer - should be used by the LBaaS API when the 
 container is not used by listeners anymore. I will implement it in the TLS 
 implementation change.
   - Also used by checkContainerExistance and validateContainer in order not 
 to leave containers consumed in Barbican before the driver does the real 
 consumer registration with getCertificateX509 and/or 
 extractCertificateHostNames.
 getCertificateX509 - should be used by the specific vendor driver. Gets the 
 container by posting the service as a container consumer. Returns the 
 certificate's X509.
 extractCertificateHostNames - should be used by the specific vendor driver. 
 Gets the certificate's X509 by using getCertificateX509 and returns a dict of 
 SCN and SAN names.
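 (Rough Python sketch of what such a stub module might look like; only the 
 function names and exceptions come from the review above, the signatures and 
 docstrings are guesses:)

     class TLSContainerNotFound(Exception):
         pass

     class TLSContainerInvalid(Exception):
         pass

     def checkContainerExistance(container_id, lb_id):
         """Ensure the Barbican TLS container exists (LBaaS API).
         Raises TLSContainerNotFound if it does not."""

     def validateContainer(container_id, lb_id):
         """Existence check plus content validation (LBaaS API).
         Raises TLSContainerNotFound or TLSContainerInvalid."""

     def _getContainerAndRegisterConsumer(container_id, lb_id):
         """Internal: fetch the container, registering the service as a
         consumer via Barbican's consumers API."""

     def unregisterContainerConsumer(container_id, lb_id):
         """Remove the consumer registration once no listener uses the
         container anymore (LBaaS API)."""

     def getCertificateX509(container_id, lb_id):
         """Return the certificate's X509 from the container (driver-facing)."""

     def extractCertificateHostNames(x509):
         """Return a dict of SubjectCommonName and SubjectAltName dNSName
         entries extracted from the X509 (driver-facing)."""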
 
 I will appreciate your opinion on this API. 
 
 Thanks,
 Evg
 
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
 Sent: Thursday, July 24, 2014 7:08 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] new common module for Barbican 
 TLS containers interaction
 
 Sorry, I meant to say I'm pretty agreeable; just park a stub module so I can 
 populate it.
 On Jul 24, 2014, at 11:06 AM, Carlos Garza carlos.ga...@rackspace.com
 wrote:
 
 Just park a module with a stub call that I can populate with pyasn1.
 On Jul 24, 2014, at 10:38 AM, Evgeny Fedoruk evge...@radware.com
 wrote:
 
 Hi,
 
 Following our talk on the TLS work items split, we need to decide how 
 we will validate/extract certificates from Barbican TLS containers.
 As we agreed on IRC, the first priority should be certificate fetching.
 
 TLS RST describes a new common module that will be used by LBaaS API and 
 LBaaS drivers.
 Its proposed front-end API is currently:
 1. Ensuring Barbican TLS container existence (used by LBaaS API)
 2. Validating Barbican TLS container (used by LBaaS API)
  This API will also register LBaaS as a container's consumer in 
 Barbican's repository.
  POST request:
  http://admin-api/v1/containers/{container-uuid}/consumers
  {
   type: LBaaS,
   URL: https://lbaas.myurl.net/loadbalancers/lbaas_loadbalancer_id/
  }
 
 3. Extracting SubjectCommonName and SubjectAltName information
   from certificates' X509 (used by LBaaS front-end API)
  As for now, only dNSName (and optionally directoryName) types will be 
 extracted from
   SubjectAltName sequence,
 
 4. Fetching certificate's data from Barbican TLS container
   (used by provider/driver code)
 
 5. Unregistering LBaaS as a consumer of the container when container is not
used by any listener any more (used by LBaaS front-end API)
 
 So this new module's front-end is used by LBaaS API/drivers and its 
 back-end is facing Barbican API.
 Please give your feedback on module API, should we merge 1 and 2?
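 (For illustration, a rough Python sketch of the consumer-registration POST 
 from item 2 above; the endpoint shape and body come from the text, everything 
 else, like the token, container id and URL, is hypothetical:)

     import requests

     token = "..."                     # Keystone token, hypothetical
     barbican = "http://admin-api/v1"
     container_id = "container-uuid"   # hypothetical

     resp = requests.post(
         "%s/containers/%s/consumers" % (barbican, container_id),
         json={
             "type": "LBaaS",
             "URL": "https://lbaas.myurl.net/loadbalancers/lbaas_loadbalancer_id/",
         },
         headers={"X-Auth-Token": token},
     )
     # Succeeds only if the container exists; LBaaS is now registered as a consumer.
     resp.raise_for_status()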
 
 I will be able to start working on the new module skeleton on Sunday 
 morning. It will include API functions.
 
 The TLS implementation patch has a spot where container validation should 
 happen: https://review.openstack.org/#/c/109035/3/neutron/db/loadbalancer/loadbalancer_dbv2.py
 line 518. After submitting the module skeleton I can make the TLS 
 implementation patch depend on that module patch and use its API.
 
 As an alternative, we might leave this job to the drivers if the common 
 module is not implemented.
 
 What are your thoughts/suggestions/plans?

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

2014-07-24 Thread Carlos Garza
Are you close to adding the stub modules for the X509 parsing and Barbican 
integration, etc.?

On Jul 24, 2014, at 6:38 AM, Evgeny Fedoruk evge...@radware.com wrote:

 Hi Doug,
 I agree with Brandon: since there is no flavors framework yet, each driver 
 not supporting TLS is in charge of throwing the unsupported exception.
 The driver can do it once it gets a listener with the TERMINATED_HTTPS protocol.
 
 Evg
 
 
 -Original Message-
 From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
 Sent: Wednesday, July 23, 2014 9:09 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] FW: [Neutron][LBaaS] TLS capability - work 
 division
 
 @Evgeny: Did you intend on adding another patchset to the reviews I've been 
 working on? If so, I don't really see any changes, so if there are some 
 changes you needed in there let me know.
 
 @Doug: I think if the drivers see the TERMINATED_HTTPS protocol then they can 
 throw an exception.  I don't think a driver interface change is needed.
 
 Thanks,
 Brandon
 
 
 On Wed, 2014-07-23 at 17:02 +, Doug Wiegley wrote:
 Do we want any driver interface changes for this?  At one level, with 
 the current interface, conforming drivers could just reference 
 listener.sni_containers, with no changes.  But, do we want something 
 in place so that the API can return an unsupported error for non-TLS 
 v2 drivers?  Or must all v2 drivers support TLS?
 
 doug
 
 
 
 On 7/23/14, 10:54 AM, Evgeny Fedoruk evge...@radware.com wrote:
 
 My code is here:
 https://review.openstack.org/#/c/109035/1
 
 
 
 -Original Message-
 From: Evgeny Fedoruk
 Sent: Wednesday, July 23, 2014 6:54 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - work 
 division
 
 Hi Carlos,
 
 As I understand, you are working on the common module for Barbican 
 interactions.
 I will commit my code later today and would appreciate it if you and 
 anybody else who is interested would review this change.
 There is one specific spot for the common Barbican interactions 
 module API integration.
 After the IRC meeting tomorrow, we can discuss the work items and 
 decide who is interested/available to do them.
 Does it make sense?
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
 Sent: Wednesday, July 23, 2014 6:15 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work 
 division
 
   Do you have any idea as to how we can split up the work?
 
 On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk evge...@radware.com
 wrote:
 
 Hi,
 
 I'm working on TLS integration with loadbalancer v2 extension and db.
 Based on Brandon's patches 
 https://review.openstack.org/#/c/105609 , 
 https://review.openstack.org/#/c/105331/  , 
 https://review.openstack.org/#/c/105610/
 I will abandon previous 2 patches for TLS which are 
 https://review.openstack.org/#/c/74031/ and 
 https://review.openstack.org/#/c/102837/
 Managing to submit my change later today. It will include lbaas 
 extension v2 modification, lbaas db v2 modifications, alembic 
 migration for schema changes and new tests in unit testing for lbaas db v2.
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
 Sent: Wednesday, July 23, 2014 3:54 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work 
 division
 
    Since it looks like the TLS blueprint was approved, I'm sure we're 
 all eager to start coding, so how should we divide up work on the source 
 code?
 I have pull requests in pyopenssl
 (https://github.com/pyca/pyopenssl/pull/143) and a few one-liners 
 in pyca/cryptography to expose the needed low-level access that I'm hoping 
 will be added pretty soon so that PR 143's tests can pass. In case it 
 doesn't, we will fall back to using pyasn1_modules, as it already 
 has a means to fetch what we want at a lower level.
 I'm just hoping that we can split the work up so that we can 
 collaborate together on this without over-serializing the work, where 
 people become dependent on waiting for someone else to complete 
 their work, or worse, one person ends up doing all the work.
 
 
  Carlos D. Garza 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list

Re: [openstack-dev] [Neutron][LBaaS] new common module for Barbican TLS containers interaction

2014-07-24 Thread Carlos Garza
Just park a module with a stub call that I can populate with 
pyasn1.
On Jul 24, 2014, at 10:38 AM, Evgeny Fedoruk evge...@radware.com
 wrote:

 Hi,
  
 Following our talk on TLS work items split,
 We need to decide how we will validate/extract certificates from Barbican TLS 
 containers.
 As we agreed on IRC, the first priority should be certificates fetching.
  
 TLS RST describes a new common module that will be used by LBaaS API and 
 LBaaS drivers.
 Its proposed front-end API is currently:
 1. Ensuring Barbican TLS container existence (used by LBaaS API)
 2. Validating Barbican TLS container (used by LBaaS API)
This API will also register LBaaS as a container's consumer in 
 Barbican's repository.
POST request:
http://admin-api/v1/containers/{container-uuid}/consumers
{
 type: LBaaS,
 URL: https://lbaas.myurl.net/loadbalancers/lbaas_loadbalancer_id/
}
  
 3. Extracting SubjectCommonName and SubjectAltName information
 from certificates’ X509 (used by LBaaS front-end API)
As for now, only dNSName (and optionally directoryName) types will be 
 extracted from
 SubjectAltName sequence,
  
 4. Fetching certificate’s data from Barbican TLS container
 (used by provider/driver code)
  
 5. Unregistering LBaaS as a consumer of the container when container is not
  used by any listener any more (used by LBaaS front-end API)
  
 So this new module’s front-end is used by LBaaS API/drivers and its back-end 
 is facing Barbican API.
 Please give your feedback on module API, should we merge 1 and 2?
  
 I will be able to start working on the new module skeleton on Sunday morning. 
 It will include API functions.
  
 TLS implementation patch has a spot where container validation should 
 happen:https://review.openstack.org/#/c/109035/3/neutron/db/loadbalancer/loadbalancer_dbv2.py
  line 518
 After submitting the module skeleton I can make the TLS implementation patch 
 depend on that module patch and use its API.
  
 As an alternative, we might leave this job to the drivers if the common module 
 is not implemented.
  
 What are your thoughts/suggestions/plans?
  
 Thanks,
 Evg
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] new common module for Barbican TLS containers interaction

2014-07-24 Thread Carlos Garza
Sorry, I meant to say I'm pretty agreeable; just park a stub module so I can 
populate it.
On Jul 24, 2014, at 11:06 AM, Carlos Garza carlos.ga...@rackspace.com
 wrote:

 Just park a module with a stub call that I can populate with 
 pyasn1.
 On Jul 24, 2014, at 10:38 AM, Evgeny Fedoruk evge...@radware.com
 wrote:
 
 Hi,
 
 Following our talk on TLS work items split,
 We need to decide how we will validate/extract certificates from Barbican TLS 
 containers.
 As we agreed on IRC, the first priority should be certificates fetching.
 
 TLS RST describes a new common module that will be used by LBaaS API and 
 LBaaS drivers.
 Its proposed front-end API is currently:
 1. Ensuring Barbican TLS container existence (used by LBaaS API)
 2. Validating Barbican TLS container (used by LBaaS API)
   This API will also register LBaaS as a container's consumer in 
 Barbican's repository.
   POST request:
   http://admin-api/v1/containers/{container-uuid}/consumers
   {
type: LBaaS,
URL: https://lbaas.myurl.net/loadbalancers/lbaas_loadbalancer_id/
   }
 
 3. Extracting SubjectCommonName and SubjectAltName information
from certificates’ X509 (used by LBaaS front-end API)
   As for now, only dNSName (and optionally directoryName) types will be 
 extracted from
SubjectAltName sequence,
 
 4. Fetching certificate’s data from Barbican TLS container
(used by provider/driver code)
 
 5. Unregistering LBaaS as a consumer of the container when container is not
 used by any listener any more (used by LBaaS front-end API)
 
 So this new module’s front-end is used by LBaaS API/drivers and its back-end 
 is facing Barbican API.
 Please give your feedback on module API, should we merge 1 and 2?
 
 I will be able to start working on the new module skeleton on Sunday 
 morning. It will include API functions.
 
 TLS implementation patch has a spot where container validation should 
 happen:https://review.openstack.org/#/c/109035/3/neutron/db/loadbalancer/loadbalancer_dbv2.py
  line 518
 After submitting the module skeleton I can make the TLS implementation patch 
 depend on that module patch and use its API.
 
 As an alternative, we might leave this job to the drivers if the common module 
 is not implemented.
 
 What are your thoughts/suggestions/plans?
 
 Thanks,
 Evg
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Carlos Garza
Do you have any idea as to how we can split up the work?

On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk evge...@radware.com
 wrote:

 Hi,
 
 I'm working on TLS integration with loadbalancer v2 extension and db.
 Based on Brandon's patches https://review.openstack.org/#/c/105609 , 
 https://review.openstack.org/#/c/105331/  , 
 https://review.openstack.org/#/c/105610/
 I will abandon previous 2 patches for TLS which are 
 https://review.openstack.org/#/c/74031/ and 
 https://review.openstack.org/#/c/102837/ 
 Managing to submit my change later today. It will include lbaas extension v2 
 modification, lbaas db v2 modifications, alembic migration for schema changes 
 and new tests in unit testing for lbaas db v2.
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
 Sent: Wednesday, July 23, 2014 3:54 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
 
   Since it looks like the TLS blueprint was approved, I'm sure we're all 
 eager to start coding, so how should we divide up work on the source code? I 
 have pull requests in pyopenssl (https://github.com/pyca/pyopenssl/pull/143) 
 and a few one-liners in pyca/cryptography to expose the needed low-level 
 access that I'm hoping will be added pretty soon so that PR 143's tests can 
 pass. In case it doesn't, we will fall back to using pyasn1_modules, as it 
 already has a means to fetch what we want at a lower level. 
 I'm just hoping that we can split the work up so that we can collaborate 
 together on this without over-serializing the work, where people become 
 dependent on waiting for someone else to complete their work, or worse, one 
 person ends up doing all the work.
 
   
  Carlos D. Garza ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Carlos Garza
Yes we can discuss this during the meeting as well.  

On Jul 23, 2014, at 10:53 AM, Evgeny Fedoruk evge...@radware.com
 wrote:

 Hi Carlos,
 
 As I understand, you are working on the common module for Barbican interactions.
 I will commit my code later today and would appreciate it if you and anybody 
 else who is interested would review this change.
 There is one specific spot for the common Barbican interactions module API 
 integration.
 After the IRC meeting tomorrow, we can discuss the work items and decide who 
 is interested/available to do them.
 Does it make sense?
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
 Sent: Wednesday, July 23, 2014 6:15 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
 
Do you have any idea as to how we can split up the work?
 
 On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk evge...@radware.com
 wrote:
 
 Hi,
 
 I'm working on TLS integration with loadbalancer v2 extension and db.
 Based on Brandon's patches https://review.openstack.org/#/c/105609 , 
 https://review.openstack.org/#/c/105331/  , 
 https://review.openstack.org/#/c/105610/
 I will abandon previous 2 patches for TLS which are 
 https://review.openstack.org/#/c/74031/ and 
 https://review.openstack.org/#/c/102837/ 
 Managing to submit my change later today. It will include lbaas extension v2 
 modification, lbaas db v2 modifications, alembic migration for schema 
 changes and new tests in unit testing for lbaas db v2.
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
 Sent: Wednesday, July 23, 2014 3:54 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
 
  Since it looks like the TLS blueprint was approved, I'm sure we're all 
 eager to start coding, so how should we divide up work on the source code? I 
 have pull requests in pyopenssl 
 (https://github.com/pyca/pyopenssl/pull/143) and a few one-liners in 
 pyca/cryptography to expose the needed low-level access that I'm hoping will be 
 added pretty soon so that PR 143's tests can pass. In case it doesn't, we will 
 fall back to using pyasn1_modules, as it already has a means to 
 fetch what we want at a lower level. 
 I'm just hoping that we can split the work up so that we can collaborate 
 together on this without over-serializing the work, where people become 
 dependent on waiting for someone else to complete their work, or worse, one 
 person ends up doing all the work.
 
  
  Carlos D. Garza ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-22 Thread Carlos Garza

On Jul 17, 2014, at 4:59 PM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 From the comments there, I think the reason for storing the subjectAltNames 
 was to minimize the number of calls we will need to make to barbican, and 
 because the barbican container is immutable, and therefore the list of 
 subjectAltNames won't change so long as the container exists, and we don't 
 have to worry about cache invalidation. (Because really, storing the 
 subjectAltNames locally is a cache.)  We could accomplish the same thing by 
 storing the cert (NOT the key) in our database as well and extracting the 
 information from the x509 cert that we want on the fly. But this also seems 
 like we're doing more work than necessary to keep extracting the same data 
 from the same certificate that will never change.

I'm more forgiving of duplicating work at API time, but not on every backend 
LB HTTPS request. That's insane.
The question for me is more of a balancing act between attempting a Single 
Source of Truth design versus painful repeated computation, which is why I thought 
repeating the computations at the API layer was acceptable. If we are 
disciplined enough to guarantee the cert won't fall out of sync with the 
entries in the database, I'm fine with not re-parsing the X509 on the fly. I 
just don't know what level of trust we have for ourselves.


 How we store this in the database is something I'm less opinionated about, 
 but your idea that storing this data in a separate table seems to make sense.
 
 Do you really see a need to be concerned with anything but GEN_DNS entries 
 here? Or put another way, is there an application that would likely be used 
 in load balancing that makes use of any subjectAltName entries that are not 
 DNSNames? (I'm pretty sure that's all that all the major browsers look at 
 anyway-- and I don't see them changing any time soon since this satisfies the 
 need for implementing SNI.)  Secondary to this, does supporting other 
 subjectAltName types in our code cause any extra significant complication?  

Well, no, GEN_DNS is it for now, but to make the code 
(https://github.com/pyca/pyopenssl/pull/143) more attractive for merging to the 
pyOpenSSL folks I implemented most of the other entry types as well, but we can 
ignore all but the dNSName entries.

 In practice, I think anything that does TERMINATED_HTTPS as the listener 
 protocol is only going to care about dNSName entries and ignore the rest-- 
 but if supporting the rest opens the door for more general-purpose forms of 
 TLS, I don't see harm in extracting these other subjectAltName types from the 
 x509 cert. It certainly feels more correct to treat these for what they 
 are: the tuples you've described.
 
 Thanks,
 Stephen
 
 
 
 On Thu, Jul 17, 2014 at 2:29 PM, Carlos Garza carlos.ga...@rackspace.com 
 wrote:
 I added the following comments to patch 14. I'm not -1'ing it, but I think it's 
 a mistake
 to assume the subjectAltName is a string type. See below.
 
 --- Comments on patch 14 below 
 
 SubjectAltNames are not a string and should be thought
  of as an array of tuples. Example
  [('dNSName','www.somehost.com'),
 ('dirNameCN','www.somehostFromAltCN.org'),
 ('dirNameCN','www.anotherHostFromAltCN.org')]
 
 For right now we only care about entries that are of type dNSName,
 or the entries that are of type DirName that also contain a CN in the DirName 
 container. All other AltNames can be ignored as they don't seem to be a part 
 of hostname validation in PKIX.
 
 Also we don't need to store these in the object model. since these
 can be extracted from the X509 on the fly. Just be aware though that
 the SubjectAltName should not be treated as a simple string but as a
 list of (general_name_type,general_name_value) tuples
 
 We're really close to the end but we can't mess this one up.
 
 I'm flexible on whether you want these values stored in the database
 or not. If we do store them in a database, we need a table called
 general_names that contains varchars for type and value for
 now, with whatever you guys want to use for the keys to
 map back to the tls_container_id, unless we come to a
 firm decision on which strings in type should map to
 GEN_DNS and GEN_DIRNAME CN entries from the
 OpenSSL layer.
 
 For now we can skip GEN_DIRNAME entries since RFC 2818 doesn't mandate their 
 support and I'm not sure if fetching the CN from the DirName is done in practice 
 now. I'm leery of using CNs from DirName entries as I can imagine people 
 signing different X509Names as a DirName with no intention of host name 
 validation. Example:
 (dirName, 'cn=john.garza,ou=people,o=somecompany')
 
 dNSName and DirName encodings are mentioned in RFC 2459, if you want a more 
 formal definition.
 
 On Jul 17, 2014, at 10:19 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:
 
  Ok, folks!
 
  Per the IRC meeting this morning, we came to the following consensus 
  regarding how TLS certificates are handled, how SAN is handled, and how 
  hostname

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - certificates data persistency

2014-07-22 Thread Carlos Garza

On Jul 20, 2014, at 6:32 AM, Evgeny Fedoruk evge...@radware.com wrote:

 Hi folks,
  
 In the current version of the TLS capabilities RST, certificate SubjectCommonName 
 and SubjectAltName information is cached in the database.
 This may be not necessary and here is why:
  
 1.   TLS containers are immutable, meaning once a container was 
 associated to a listener and was validated, it’s not necessary to validate 
 the container anymore.
 This is relevant for both, default container and containers used for SNI.
 2.   LBaaS front-end API can check if TLS containers ids were changed for 
 a listener as part of an update operation. Validation of containers will be 
 done for
 new containers only. This is stated in the “Performance Impact” section of the 
 RST, except for the last statement that proposes persistency for SCN and SAN.
 3.   Any interaction with Barbican API for getting containers data will 
 be performed via a common module API only. This module’s API is mentioned in
 “SNI certificates list management” section of the RST.
 4.   In case when driver really needs to extract certificate information 
 prior to the back-end system provisioning, it will do it via the common 
 module API.
 5.   Back-end provisioning system may cache any certificate data, except 
 private key, in case of a specific need of the vendor.
  
 IMO, there is no real need to store certificate data in the Neutron database and 
 manage its life cycle.
 Does anyone see a reason why caching certificates’ data in the Neutron database 
 is critical?

It's not so much caching the certificate. Let's just say when an LB change 
comes into the API that wants to add an X509, we need to parse the 
SubjectNames and SubjectAltNames from the previous X509s, which aren't available 
to us, so we must grab them all from Barbican over the REST interface. Like I 
said in an earlier email, it's a balancing act between Single Source of Truth 
vs. how much lag we're willing to deal with.



 Thank you,
 Evg
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] TLS capability - work division

2014-07-22 Thread Carlos Garza
Since it looks like the TLS blueprint was approved, I'm sure
we're all eager to start coding, so how should we divide up work on the
source code? I have pull requests in pyopenssl 
(https://github.com/pyca/pyopenssl/pull/143) and a few one-liners
in pyca/cryptography to expose the needed low-level access that I'm hoping 
will be added pretty soon so that PR 143's tests can pass. In case it
doesn't, we will fall back to using pyasn1_modules, as it 
already has a means to fetch what we want at a lower level. 
I'm just hoping that we can split the work up so that we can
collaborate together on this without over-serializing the work,
where people become dependent on waiting for someone else to
complete their work, or worse, one person ends up doing all
the work.


Carlos D. Garza
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-17 Thread Carlos Garza
I added the following comments to patch 14. I'm not -1'ing it, but I think it's a 
mistake
to assume the subjectAltName is a string type. See below.

--- Comments on patch 14 below 

SubjectAltNames are not a string and should be thought
 of as an array of tuples. Example
 [('dNSName','www.somehost.com'),
('dirNameCN','www.somehostFromAltCN.org'),
('dirNameCN','www.anotherHostFromAltCN.org')]

For right now we only care about entries that are of type dNSName,
or the entries that are of type DirName that also contain a CN in the DirName 
container. All other AltNames can be ignored as they don't seem to be a part of 
hostname validation in PKIX.

Also we don't need to store these in the object model. since these 
can be extracted from the X509 on the fly. Just be aware though that 
the SubjectAltName should not be treated as a simple string but as a 
list of (general_name_type,general_name_value) tuples

We're really close to the end but we can't mess this one up.

I'm flexible on whether you want these values stored in the database
or not. If we do store them in a database, we need a table called
general_names that contains varchars for type and value for
now, with whatever you guys want to use for the keys to
map back to the tls_container_id, unless we come to a
firm decision on which strings in type should map to
GEN_DNS and GEN_DIRNAME CN entries from the
OpenSSL layer.

For now we can skip GEN_DIRNAME entries since RFC 2818 doesn't mandate their 
support and I'm not sure if fetching the CN from the DirName is done in practice 
now. I'm leery of using CNs from DirName entries as I can imagine people 
signing different X509Names as a DirName with no intention of host name 
validation. Example:
(dirName, 'cn=john.garza,ou=people,o=somecompany')

dNSName and DirName encodings are mentioned in RFC 2459, if you want a more 
formal definition.
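(For reference, a rough sketch of extracting the dNSName entries as such tuples 
using pyOpenSSL's existing high-level API; this is the string-based workaround, 
not the lower-level access PR 143 would add, and the helper name is just for 
illustration:)

    from OpenSSL import crypto

    def get_dns_alt_names(cert_pem):
        """Return [('dNSName', value), ...] from a PEM-encoded X509."""
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
        names = []
        for i in range(cert.get_extension_count()):
            ext = cert.get_extension(i)
            if ext.get_short_name() == b'subjectAltName':
                # str(ext) looks like "DNS:www.somehost.com, DNS:alt.somehost.com"
                for entry in str(ext).split(','):
                    entry = entry.strip()
                    if entry.startswith('DNS:'):
                        names.append(('dNSName', entry[len('DNS:'):]))
        return names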

On Jul 17, 2014, at 10:19 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 Ok, folks!
 
 Per the IRC meeting this morning, we came to the following consensus 
 regarding how TLS certificates are handled, how SAN is handled, and how 
 hostname conflict resolution is handled. I will be responding to all three of 
 the currently ongoing mailing list discussions with this info:
 
   • Driver does not have to use SAN that is passed from API layer, but 
 SAN will be available to drivers at the API layer. This will be mentioned 
 explicitly in the spec.
   • Order is a mandatory attribute. It's intended to be used as a hint 
 for hostname conflict resolution, but it's ultimately up to the driver to 
 decide how to resolve the conflict. (In other words, although it is a 
 mandatory attribute in our model, drivers are free to ignore it.)
   • Drivers are allowed to vary their behavior when choosing how to 
 implement hostname conflict resolution since there is no single algorithm 
 here that all vendors are able to support. (This is anticipated to be a rare 
 edge case anyway.)
 I think Evgeny will be updating the specs to reflect this decision so that it 
 is documented--  we hope to get ultimate approval of the spec in the next day 
 or two.
 
 Thanks,
 Stephen
 
 
 
 
 On Wed, Jul 16, 2014 at 7:31 PM, Stephen Balukoff sbaluk...@bluebox.net 
 wrote:
 Just saw this thread after responding to the other:
 
 I'm in favor of Evgeny's proposal. It sounds like it should resolve most (if 
 not all) of the operators', vendors' and users' concerns with regard to 
 handling TLS certificates.
 
 Stephen
 
 
 On Wed, Jul 16, 2014 at 12:35 PM, Carlos Garza carlos.ga...@rackspace.com 
 wrote:
 
 On Jul 16, 2014, at 10:55 AM, Vijay Venkatachalam 
 vijay.venkatacha...@citrix.com
  wrote:
 
  Apologies for the delayed response.
 
  I am OK with displaying the certificates contents as part of the API, that 
  should not harm.
 
  I think the discussion has to be split into 2 topics.
 
  1.   Certificate conflict resolution. Meaning what is expected when 2 
  or more certificates become eligible during SSL negotiation
  2.   SAN support
 
 
 Ok, cool, that makes more sense. #2 seems to be met by Evgeny's proposal. 
 I'll let you folks decide the conflict resolution issue #1.
 
 
  I will send out 2 separate mails on this.
 
 
  From: Samuel Bercovici [mailto:samu...@radware.com]
  Sent: Tuesday, July 15, 2014 11:52 PM
  To: OpenStack Development Mailing List (not for usage questions); Vijay 
  Venkatachalam
  Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
  Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
 
  OK.
 
  Let me be more precise, extracting the information for view sake / 
  validation would be good.
  Providing values that are different than what is in the x509 is what I am 
  opposed to.
 
  +1 for Carlos on the library and that it should be ubiquitously used.
 
  I will wait for Vijay to speak for himself in this regard…
 
  -Sam.
 
 
  From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
  Sent: Tuesday, July 15, 2014 8:35 PM

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-16 Thread Carlos Garza

On Jul 16, 2014, at 10:55 AM, Vijay Venkatachalam 
vijay.venkatacha...@citrix.com
 wrote:

 Apologies for the delayed response.

 I am OK with displaying the certificates contents as part of the API, that 
 should not harm.
  
 I think the discussion has to be split into 2 topics.
  
 1.   Certificate conflict resolution. Meaning what is expected when 2 or 
 more certificates become eligible during SSL negotiation
 2.   SAN support
  

Ok, cool, that makes more sense. #2 seems to be met by Evgeny's proposal. I'll 
let you folks decide the conflict resolution issue #1.


 I will send out 2 separate mails on this.
  
  
 From: Samuel Bercovici [mailto:samu...@radware.com] 
 Sent: Tuesday, July 15, 2014 11:52 PM
 To: OpenStack Development Mailing List (not for usage questions); Vijay 
 Venkatachalam
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
 OK.
  
 Let me be more precise, extracting the information for view sake / validation 
 would be good.
 Providing values that are different than what is in the x509 is what I am 
 opposed to.
  
 +1 for Carlos on the library and that it should be ubiquitously used.
  
 I will wait for Vijay to speak for himself in this regard…
  
 -Sam.
  
  
 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
 Sent: Tuesday, July 15, 2014 8:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
 +1 to German's and  Carlos' comments.
  
 It's also worth pointing out that some UIs will definitely want to show SAN 
 information and the like, so either having this available as part of the API, 
 or as a standard library we write which then gets used by multiple drivers is 
 going to be necessary.
  
 If we're extracting the Subject Common Name in any place in the code then we 
 also need to be extracting the Subject Alternative Names at the same place. 
 From the perspective of the SNI standard, there's no difference in how these 
 fields should be treated, and if we were to treat SANs differently then we're 
 both breaking the standard and setting a bad precedent.
  
 Stephen
  
 
 On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza carlos.ga...@rackspace.com 
 wrote:
 
 On Jul 15, 2014, at 10:55 AM, Samuel Bercovici samu...@radware.com
  wrote:
 
  Hi,
 
 
  Obtaining the domain name from the x509 is probably more of a 
  driver/backend/device capability, it would make sense to have a library 
  that could be used by anyone wishing to do so in their driver code.
 
 You can do whatever you want in *your* driver. The code to extract this 
 information will be a part of the API and needs to be mentioned in the spec 
 now. PyOpenSSL with PyASN1 are the most likely candidates.
 
 Carlos D. Garza
 
  -Sam.
 
 
 
  From: Eichberger, German [mailto:german.eichber...@hp.com]
  Sent: Tuesday, July 15, 2014 6:43 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
  Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
 
  Hi,
 
  My impression was that the frontend would extract the names and hand them 
  to the driver.  This has the following advantages:
 
  · We can be sure all drivers can extract the same names
  · No duplicate code to maintain
  · If we ever allow the user to specify the names on UI rather in 
  the certificate the driver doesn’t need to change.
 
  I think I saw Adam say something similar in a comment to the code.
 
  Thanks,
  German
 
  From: Evgeny Fedoruk [mailto:evge...@radware.com]
  Sent: Tuesday, July 15, 2014 7:24 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
  SubjectCommonName and/or SubjectAlternativeNames from X509
 
  Hi All,
 
  Since this issue came up from TLS capabilities RST doc review, I opened a 
  ML thread for it to make the decision.
  Currently, the document says:
 
  “
  For SNI functionality, tenant will supply list of TLS containers in specific
  Order.
  In case when specific back-end is not able to support SNI capabilities,
  its driver should throw an exception. The exception message should state
  that this specific back-end (provider) does not support SNI capability.
  The clear sign of listener's requirement for SNI capability is
  a none empty SNI container ids list.
  However, reference implementation must support SNI capability.
 
  Specific back-end code may retrieve SubjectCommonName and/or altSubjectNames
  from the certificate which will determine the hostname(s) the certificate
  is associated with.
 
  The order of SNI containers list may be used by specific back-end code,
  like Radware's, for specifying priorities

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SubjectAlternativeNames (SAN)

2014-07-16 Thread Carlos Garza

On Jul 16, 2014, at 12:30 PM, Vijay Venkatachalam 
vijay.venkatacha...@citrix.com
 wrote:

    We will have the code that parses the X509 in the API scope of the 
code. The validation I'm referring to is making sure the key matches the cert 
used, and that we mandate that at a minimum the backend driver support RSA. 
Since the X509 validation is happening at the API layer, this same module 
will also handle the extraction of the SANs. I am proposing that the methods 
that can extract the SAN and SCN from the X509 be present in the API portion of the 
code and that drivers can call these methods if they need to. In fact, I'm 
already working to get these extraction methods contributed to the PyOpenSSL 
project so that they will already be available at a more fundamental layer than 
our Neutron/LBaaS code. At the very least I want the spec to declare that SAN 
and SCN parsing must be made available from the API layer. If PyOpenSSL has 
the methods available at that time then we can simply write wrappers for this 
in the API or simply write higher-level methods in the API module. Bottom 
line I 

 I am partially open to the idea of letting the driver handle the behavior 
of the cert parsing, although I defer this to the rest of the folks, as I get 
the feeling that having different implementations exhibiting different behavior 
may sound scary. 
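(A rough pyOpenSSL sketch of the "key matches the cert" check mentioned above; 
the helper name is just for illustration:)

    from OpenSSL import SSL, crypto

    def key_matches_cert(key_pem, cert_pem):
        """Return True if the private key pairs with the X509 certificate."""
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
        key = crypto.load_privatekey(crypto.FILETYPE_PEM, key_pem)
        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.use_certificate(cert)
        ctx.use_privatekey(key)
        try:
            ctx.check_privatekey()   # wraps SSL_CTX_check_private_key()
            return True
        except SSL.Error:
            return False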

  
 I think it is best not to mention SAN in the OpenStack 
 TLS spec. It is expected that the backend should implement according to the 
 SSL/SNI IETF spec.
 Let’s leave the implementation/validation part to the driver.  For ex. 
 NetScaler does not support SAN and the NetScaler driver could either throw an 
 error if certs with SAN are used or ignore it.

    How is NetScaler making the decision when choosing the cert to associate 
with the SNI handshake?

  
 Does anyone see a requirement for detailing?
  
  
 Thanks,
 Vijay V.
  
  
 From: Vijay Venkatachalam 
 Sent: Wednesday, July 16, 2014 8:54 AM
 To: 'Samuel Bercovici'; 'OpenStack Development Mailing List (not for usage 
 questions)'
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
 Apologies for the delayed response.

 I am OK with displaying the certificates contents as part of the API, that 
 should not harm.
  
 I think the discussion has to be split into 2 topics.
  
 1.   Certificate conflict resolution. Meaning what is expected when 2 or 
 more certificates become eligible during SSL negotiation
 2.   SAN support
  
 I will send out 2 separate mails on this.
  
  
 From: Samuel Bercovici [mailto:samu...@radware.com] 
 Sent: Tuesday, July 15, 2014 11:52 PM
 To: OpenStack Development Mailing List (not for usage questions); Vijay 
 Venkatachalam
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
 OK.
  
 Let me be more precise, extracting the information for view sake / validation 
 would be good.
 Providing values that are different than what is in the x509 is what I am 
 opposed to.
  
 +1 for Carlos on the library and that it should be ubiquitously used.
  
 I will wait for Vijay to speak for himself in this regard…
  
 -Sam.
  
  
 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
 Sent: Tuesday, July 15, 2014 8:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
 +1 to German's and  Carlos' comments.
  
 It's also worth pointing out that some UIs will definitely want to show SAN 
 information and the like, so either having this available as part of the API, 
 or as a standard library we write which then gets used by multiple drivers is 
 going to be necessary.
  
 If we're extracting the Subject Common Name in any place in the code then we 
 also need to be extracting the Subject Alternative Names at the same place. 
 From the perspective of the SNI standard, there's no difference in how these 
 fields should be treated, and if we were to treat SANs differently then we're 
 both breaking the standard and setting a bad precedent.
  
 Stephen
  
 
 On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza carlos.ga...@rackspace.com 
 wrote:
 
 On Jul 15, 2014, at 10:55 AM, Samuel Bercovici samu...@radware.com
  wrote:
 
  Hi,
 
 
  Obtaining the domain name from the x509 is probably more of a 
  driver/backend/device capability, it would make sense to have a library 
  that could be used by anyone wishing to do so in their driver code.
 
 You can do whatever you want in *your* driver. The code to extract this 
 information will be a part of the API and needs to be mentioned in the spec 
 now. PyOpenSSL with PyASN1 are the most likely candidates.
 
 Carlos D. Garza
 
  -Sam.
 
 
 
  From

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SubjectAlternativeNames (SAN)

2014-07-16 Thread Carlos Garza

On Jul 16, 2014, at 3:49 PM, Carlos Garza carlos.ga...@rackspace.com wrote:

 
 On Jul 16, 2014, at 12:30 PM, Vijay Venkatachalam 
 vijay.venkatacha...@citrix.com
 wrote:
 
   We will have the code that parses the X509 in the API scope of the 
 code. The validation I'm referring to is making sure the key matches the cert 
 used, and that we mandate that at a minimum the backend driver support RSA. 
 Since the X509 validation is happening at the API layer, this same 
 module will also handle the extraction of the SANs. I am proposing that the 
 methods that can extract the SAN and SCN from the X509 be present in the API 
 portion of the code and that drivers can call these methods if they need to. 
 In fact, I'm already working to get these extraction methods contributed to the 
 PyOpenSSL project so that they will already be available at a more fundamental 
 layer than our Neutron/LBaaS code. At the very least I want the spec to 
 declare that SAN and SCN parsing must be made available from the API layer. 
 If PyOpenSSL has the methods available at that time then we can simply 
 write wrappers for this in the API or simply write higher-level methods 
 in the API module.  

I meant to say: bottom line, I want the parsing code exposed in the API and 
not duplicated in everyone else's driver.

 I am partially open to the idea of letting the driver handle the 
 behavior of the cert parsing, although I defer this to the rest of the folks, 
 as I get the feeling that having different implementations exhibiting different 
 behavior may sound scary. 
 
 
I think it is best not to mention SAN in the OpenStack 
 TLS spec. It is expected that the backend should implement according to the 
 SSL/SNI IETF spec.
 Let’s leave the implementation/validation part to the driver.  For ex. 
 NetScaler does not support SAN and the NetScaler driver could either throw 
 an error if certs with SAN are used or ignore it.
 
   How is NetScaler making the decision when choosing the cert to associate 
with the SNI handshake?
 
 
 Does anyone see a requirement for detailing?
 
 
 Thanks,
 Vijay V.
 
 
 From: Vijay Venkatachalam 
 Sent: Wednesday, July 16, 2014 8:54 AM
 To: 'Samuel Bercovici'; 'OpenStack Development Mailing List (not for usage 
 questions)'
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
 
 Apologies for the delayed response.
 
 I am OK with displaying the certificates contents as part of the API, that 
 should not harm.
 
 I think the discussion has to be split into 2 topics.
 
 1.   Certificate conflict resolution. Meaning what is expected when 2 or 
 more certificates become eligible during SSL negotiation
 2.   SAN support
 
 I will send out 2 separate mails on this.
 
 
 From: Samuel Bercovici [mailto:samu...@radware.com] 
 Sent: Tuesday, July 15, 2014 11:52 PM
 To: OpenStack Development Mailing List (not for usage questions); Vijay 
 Venkatachalam
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
 
 OK.
 
 Let me be more precise, extracting the information for view sake / 
 validation would be good.
 Providing values that are different than what is in the x509 is what I am 
 opposed to.
 
 +1 for Carlos on the library and that it should be ubiquitously used.
 
 I will wait for Vijay to speak for himself in this regard…
 
 -Sam.
 
 
 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
 Sent: Tuesday, July 15, 2014 8:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
 
 +1 to German's and  Carlos' comments.
 
 It's also worth pointing out that some UIs will definitely want to show SAN 
 information and the like, so either having this available as part of the 
 API, or as a standard library we write which then gets used by multiple 
 drivers is going to be necessary.
 
 If we're extracting the Subject Common Name in any place in the code then we 
 also need to be extracting the Subject Alternative Names at the same place. 
 From the perspective of the SNI standard, there's no difference in how these 
 fields should be treated, and if we were to treat SANs differently then 
 we're both breaking the standard and setting a bad precedent.
 
 Stephen
 
 
 On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza carlos.ga...@rackspace.com 
 wrote:
 
 On Jul 15, 2014, at 10:55 AM, Samuel Bercovici samu...@radware.com
 wrote:
 
 Hi,
 
 
 Obtaining the domain name from the x509 is probably more of a 
 driver/backend/device capability, it would make sense to have a library 
 that could be used by anyone wishing to do so in their driver code.
 
You can do whatever you want in *your* driver. The code to extract this 
 information

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - Certificate conflict resolution

2014-07-16 Thread Carlos Garza

On Jul 16, 2014, at 11:07 AM, Vijay Venkatachalam 
vijay.venkatacha...@citrix.com wrote:

  
 Do you know if the SSL/SNI IETF spec says anything about conflict resolution? I am 
 assuming not.
 

    The specs I have seen just describe SNI as a way
of passing an intended host name in the clear during the TLS handshake. The 
specs do not
describe the behavior of what the server should do with the SNI host or what 
peer certificate
it should return based on it. The whole idea of SNI was that the server, or 
something like a
load balancer (like we are doing), could make decisions based on this unencrypted 
value on the
server side without even knowing the private key. I.e., a load balancer doesn't 
even need to interact
with the handshake (I've seen at least one tool that doesn't even use an SSL 
library to peek at the
SNI host (looking at Blue Box)) and can simply forward the TCP stream to an 
appropriate back-end node, at
which point the back end interacts with the TLS handshake.
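(As a concrete illustration of server-side selection on the SNI value, here is a 
rough sketch using Python's ssl module; it assumes Python 3.4+ and hypothetical 
certificate paths and host names:)

    import ssl

    contexts = {}
    for host in ('www.example.com', 'api.example.com'):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
        ctx.load_cert_chain('/etc/certs/%s.crt' % host, '/etc/certs/%s.key' % host)
        contexts[host] = ctx

    default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
    default_ctx.load_cert_chain('/etc/certs/default.crt', '/etc/certs/default.key')

    def pick_cert(sock, server_name, initial_ctx):
        # server_name is the cleartext SNI host from the ClientHello.
        sock.context = contexts.get(server_name, default_ctx)

    default_ctx.set_servername_callback(pick_cert)
    # default_ctx would then be used when wrapping the listening socket.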

 In short, the SAN/SCN cruft was added to the spec as a convenience method so 
that users could just
upload their X509 set for SNI vs. the original plan to upload a set of 
(hostname, X509ContainerId) tuples. The RFC
seems to imply that it intends to deprecate the use of the SubjectCN to store 
the hostname for web certificates, 
but since it's so popular I'm guessing that'll never happen.


By the way:
   RFC 2818 (HTTP over TLS) does dictate that if a subjectAltName extension with a 
dNSName entry exists then the
dNSName entries should be used for PKIX validation and not the SubjectCN, so 
PKIX validation that ignores
the subjectAltName is already breaking RFC 2818.
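(Python's ssl.match_hostname implements exactly that RFC 2818 rule, which makes 
for a quick illustration; the host names are made up, and it assumes a Python 
version that still ships ssl.match_hostname, since it was removed in 3.12:)

    import ssl

    # Decoded-certificate dict in the format ssl.getpeercert() returns.
    cert = {
        'subject': ((('commonName', 'ignored.example.com'),),),
        'subjectAltName': (('DNS', 'www.finance.abc.com'),),
    }
    # Succeeds: the dNSName entry matches and the CN is never consulted.
    ssl.match_hostname(cert, 'www.finance.abc.com')
    # Raises ssl.CertificateError: once SAN dNSNames exist, the CN is ignored.
    ssl.match_hostname(cert, 'ignored.example.com')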

 Because of this ambiguity each backend employs its own mechanism to resolve 
 conflicts.
  
 There are 3 choices now
 1.   The LBaaS extension does not allow conflicting certificates to be 
 bound using validation
 2.   Allow each backend conflict resolution mechanism to get into the spec
 3.   Does not specify anything in the spec, no mechanism introduced and 
 let the driver deal with it. 

I propose another option; specifically, #1 is not acceptable. 
  4. The spec should mandate that each driver document their SNI behavior and, 
more specifically, 
behavior on conflict resolution. The vendor documentation doesn't have to be 
in the same spec or even in
the lbaas project; it just has to be documented somewhere central, side by side 
with other vendors' docs.

 Both HAProxy and Radware use configuration as a mechanism to resolve. 
 Radware uses order while HAProxy uses externally specified DNS names.
 The NetScaler implementation uses a best-possible-match algorithm.
  
 For ex, let’s say 3 certs are bound to the same endpoint with the following 
 SNs
 www.finance.abc.com
 *.finance.abc.com
 *.*.abc.com
 If the host request is  payroll.finance.abc.com  we shall  use  
 *.finance.abc.com
 If it is  payroll.engg.abc.com  we shall use  *.*.abc.com
  
 NetScaler won’t allow 2 certs to have the same SN.

In this case NetScaler could document the behavior of their driver for that 
case.
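(For illustration, a rough sketch of a "most specific match" selection along the 
lines Vijay describes; the names come from his example, but the matching here is 
simplistic fnmatch globbing rather than real TLS wildcard rules:)

    import fnmatch

    def best_match(host, cert_names):
        """Pick the most specific certificate name that matches the host."""
        def specificity(pattern):
            # Fewer wildcard labels and more literal characters == more specific.
            return (-pattern.count('*'), len(pattern.replace('*', '')))
        candidates = [n for n in cert_names if fnmatch.fnmatch(host, n)]
        return max(candidates, key=specificity) if candidates else None

    names = ['www.finance.abc.com', '*.finance.abc.com', '*.*.abc.com']
    best_match('payroll.finance.abc.com', names)   # -> '*.finance.abc.com'
    best_match('payroll.engg.abc.com', names)      # -> '*.*.abc.com'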

 From: Samuel Bercovici [mailto:samu...@radware.com] 
 Sent: Tuesday, July 15, 2014 11:52 PM
 To: OpenStack Development Mailing List (not for usage questions); Vijay 
 Venkatachalam
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
 OK.
  
 Let me be more precise, extracting the information for view sake / validation 
 would be good.
 Providing values that are different than what is in the x509 is what I am 
 opposed to.
  
 +1 for Carlos on the library and that it should be ubiquitously used.
  
 I will wait for Vijay to speak for himself in this regard…
  
 -Sam.
  
  
 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
 Sent: Tuesday, July 15, 2014 8:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
 +1 to German's and  Carlos' comments.
  
 It's also worth pointing out that some UIs will definitely want to show SAN 
 information and the like, so either having this available as part of the API, 
 or as a standard library we write which then gets used by multiple drivers is 
 going to be necessary.
  
 If we're extracting the Subject Common Name in any place in the code then we 
 also need to be extracting the Subject Alternative Names at the same place. 
 From the perspective of the SNI standard, there's no difference in how these 
 fields should be treated, and if we were to treat SANs differently then we're 
 both breaking the standard and setting a bad precedent.
  
 Stephen
  
 
 On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza carlos.ga...@rackspace.com 
 wrote:
 
 On Jul 15, 2014, at 10:55 AM, Samuel Bercovici samu...@radware.com
  wrote:
 
  Hi,
 
 
  Obtaining the domain name from the x509 is probably more of a 
  driver

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-15 Thread Carlos Garza

On Jul 15, 2014, at 9:24 AM, Evgeny Fedoruk evge...@radware.com wrote:

 The question is about SCN and SAN extraction from X509.
 1.   Extraction of SCN/ SAN should be done while provisioning and not 
 during TLS handshake
   Yes that makes the most sense. If some strange backend really wants to
repeatedly extract this during the TLS handshake
I guess they are free to do so, although it's pretty brain-damaged since the
extracted fields will always be the same.

 2.   Every back-end code/driver must(?) extract SCN and(?) SAN and use it 
 for certificate determination for host

    No need for this to be in driver code. It was my understanding that the
X509 was going to be pulled apart in the API code via pyOpenSSL (which is what
I'm working on now). Since we would be validating the key and x509 at the API
layer already, it made more sense to extract the SubjectAltName and SubjectCN
there as well. If you want to do it in the driver as well, at least use the same
code that's already in the API layer.
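
To give an idea of the scope, the API-layer extraction amounts to something
like the following sketch with pyOpenSSL (names are placeholders, and parsing
str(ext) for the SAN entries is admittedly crude):

from OpenSSL import crypto

def extract_cn_and_dns_sans(pem_cert):
    """Return (subject CN, [SAN dNSName entries]) from a PEM X509 cert."""
    cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem_cert)
    cn = cert.get_subject().commonName
    sans = []
    for i in range(cert.get_extension_count()):
        ext = cert.get_extension(i)
        if ext.get_short_name() == b'subjectAltName':
            # str(ext) renders like "DNS:www.example.com, IP Address:10.0.0.1"
            for part in str(ext).split(','):
                kind, _, value = part.strip().partition(':')
                if kind == 'DNS':
                    sans.append(value)
    return cn, sans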


  
 Please give your feedback
  
 Thanks,
 Evg
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-15 Thread Carlos Garza

On Jul 15, 2014, at 10:55 AM, Samuel Bercovici samu...@radware.com
 wrote:

 Hi,
  
 
 Obtaining the domain name from the x509 is probably more of a 
 driver/backend/device capability, it would make sense to have a library that 
 could be used by anyone wishing to do so in their driver code.

    You can do whatever you want in *your* driver. The code to extract this
information will be a part of the API and needs to be mentioned in the spec now.
PyOpenSSL with PyASN1 are the most likely candidates.

Carlos D. Garza
  
 -Sam.
  
  
  
 From: Eichberger, German [mailto:german.eichber...@hp.com] 
 Sent: Tuesday, July 15, 2014 6:43 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
 Hi,
  
 My impression was that the frontend would extract the names and hand them to 
 the driver.  This has the following advantages:
  
 · We can be sure all drivers can extract the same names
 · No duplicate code to maintain
 · If we ever allow the user to specify the names on UI rather in the 
 certificate the driver doesn’t need to change.
  
 I think I saw Adam say something similar in a comment to the code.
  
 Thanks,
 German
  
 From: Evgeny Fedoruk [mailto:evge...@radware.com] 
 Sent: Tuesday, July 15, 2014 7:24 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
 SubjectCommonName and/or SubjectAlternativeNames from X509
  
 Hi All,
  
 Since this issue came up from TLS capabilities RST doc review, I opened a ML 
 thread for it to make the decision.
 Currently, the document says:
  
 “
 For SNI functionality, tenant will supply list of TLS containers in specific
 Order.
 In case when specific back-end is not able to support SNI capabilities,
 its driver should throw an exception. The exception message should state
 that this specific back-end (provider) does not support SNI capability.
 The clear sign of listener's requirement for SNI capability is
 a none empty SNI container ids list.
 However, reference implementation must support SNI capability.
  
 Specific back-end code may retrieve SubjectCommonName and/or altSubjectNames
 from the certificate which will determine the hostname(s) the certificate
 is associated with.
  
 The order of SNI containers list may be used by specific back-end code,
 like Radware's, for specifying priorities among certificates.
 In case when two or more uploaded certificates are valid for the same DNS name
 and the tenant has specific requirements around which one wins this collision,
 certificate ordering provides a mechanism to define which cert wins in the
 event of a collision.
 Employing the order of certificates list is not a common requirement for
 all back-end implementations.
 “
  
 The question is about SCN and SAN extraction from X509.
 1.   Extraction of SCN/ SAN should be done while provisioning and not 
 during TLS handshake
 2.   Every back-end code/driver must(?) extract SCN and(?) SAN and use it 
 for certificate determination for host
  
 Please give your feedback
  
 Thanks,
 Evg
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] subjAltName and CN extraction from x509 certificates

2014-06-27 Thread Carlos Garza
   Too late guys. I'm already grabbing the fields with pyasn1. I'm not writing
an ASN.1
parser; I'm using the one from pyasn1_modules.rfc2459.

   I am in favor of using a common crypto lib, which is why I was planning to use
the cryptography package that barbican already depends on to handle the
decrypting of keys etc.
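
For anyone curious what that looks like, a rough sketch of the pyasn1 approach
for just the SubjectCN (the SubjectAltName walk is similar, with an extra
OCTET STRING layer to unwrap):

from pyasn1.codec.der import decoder
from pyasn1_modules import pem, rfc2459

def subject_cn(pem_path):
    """Pull the subject CN out of a PEM cert using pyasn1 +
    pyasn1_modules.rfc2459, with no hand-rolled ASN.1 parsing."""
    der = pem.readPemFromFile(open(pem_path))
    cert, _ = decoder.decode(der, asn1Spec=rfc2459.Certificate())
    subject = cert.getComponentByName('tbsCertificate').getComponentByName('subject')
    for rdn in subject.getComponent():          # Name is a CHOICE wrapping RDNSequence
        for attr in rdn:
            if str(attr.getComponentByName('type')) == '2.5.4.3':   # id-at-commonName
                cn, _ = decoder.decode(attr.getComponentByName('value').asOctets(),
                                       asn1Spec=rfc2459.DirectoryString())
                return str(cn.getComponent())
    return None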


On Jun 27, 2014, at 12:48 PM, Dustin Lundquist dus...@null-ptr.net wrote:

 It doesn't look like NSS is currently used within Neutron or Keystone. 
 Another alternative would be to write the certificate to a temp file and then 
 invoke openssl x509 -text -noout -in $TEMP_FILE and parse the output, 
 Keystone currently does similar (keystone/common/openssl.py). Given renewed 
 focus by security researchers on cryptographic libraries, I think we should 
 avoid requiring additional cryptographic libraries and use what is already in 
 use within OpenStack.

    I'd really like to avoid piping out to the command line and then writing
another parser for the output. I'm kinda shocked that keystone is actually
doing this. :( Anyway, there's plenty of hooks to get into the low level OpenSSL
lib if we need to.

 
 -Dustin
 
 
 On Fri, Jun 27, 2014 at 7:26 AM, John Dennis jden...@redhat.com wrote:
 On 06/27/2014 12:21 AM, Carlos Garza wrote:
I don't know where we can check in experimental code so I have a 
  demonstration
  of how to extract CNs subjAltNames or what ever we want from x509 
  certificates. Later on
  I plan to use the OpenSSL libraries to verify certs coming from barbican 
  are valid and
  actually do sign the private_key it is associated with.
 
  https://github.com/crc32a/ssl_exp.git
 
 
 I'm always leary of reinventing the wheel, we already have code to
 manage pem files (maybe this should be in oslo, it was proposed once)
 
 keystone/common/pemutils.py
 
 I'm also leary of folks writing their own ASN.1 parsing as opposed to
 using existing libraries. Why? It's really hard to get right so you
 correctly handle all the cases, long established robust libraries are
 better at this.
 
 python-nss (which is a Python binding to the NSS crypto library) has
 easy to use code to extract just about anything from a cert, here is an
 example python script using your example pem file. If using NSS isn't an
 option I'd rather see us provide the necessary binding in pyopenssl than
 handcraft one-off routines. FWIW virtually everything you see in the
 cert output below can be accessed as Pythonically as a Python object(s)
 when using python-nss.
 
 #!/usr/bin/python
 
 import sys
 import nss.nss as nss
 
 nss.nss_init_nodb()
 
 filename = sys.argv[1]
 
 # Read the PEM file
 try:
     binary_cert = nss.read_der_from_file(filename, True)
 except Exception as e:
     print e
     sys.exit(1)
 else:
     print "loaded cert from file: %s" % filename
 
 # Create a Certificate object from the binary data
 cert = nss.Certificate(binary_cert)
 
 # Dump some basic information
 print
 print "cert subject: %s" % cert.subject
 print "cert CN: %s" % cert.subject_common_name
 print "cert validity:"
 print "Not Before: %s" % cert.valid_not_before_str
 print "Not After: %s" % cert.valid_not_after_str
 
 print
 print "\ncert has %d extensions" % len(cert.extensions)
 
 for extension in cert.extensions:
     print "%s (critical: %s)" % (extension.name, extension.critical)
 
 print
 extension = cert.get_extension(nss.SEC_OID_X509_SUBJECT_ALT_NAME)
 if extension:
     print "Subject Alt Names:"
     for name in nss.x509_alt_name(extension.value):
         print "%s" % name
 else:
     print "cert does not have a subject alt name extension"
 
 # Dump entire cert in friendly format
 print
 print " Entire cert contents "
 print cert
 
 sys.exit(0)
 
 Yields this output:
 
 loaded cert from file: cr1.pem
 
 cert subject: CN=www.digicert.com,O=DigiCert, 
 Inc.,L=Lehi,ST=Utah,C=US,postalCode=84043,STREET=2600 West Executive 
 Parkway,STREET=Suite 
 500,serialNumber=5299537-0142,incorporationState=Utah,incorporationCountry=US,businessCategory=Private
  Organization
 cert CN: www.digicert.com
 cert validity:
 Not Before: Thu Mar 20 00:00:00 2014 UTC
 Not After: Sun Jun 12 12:00:00 2016 UTC
 
 
 cert has 10 extensions
 Certificate Authority Key Identifier (critical: False)
 Certificate Subject Key ID (critical: False)
 Certificate Subject Alt Name (critical: False)
 Certificate Key Usage (critical: True)
 Extended Key Usage (critical: False)
 CRL Distribution Points (critical: False)
 Certificate Policies (critical: False)
 Authority Information Access (critical: False)
 Certificate Basic Constraints (critical: True)
 OID.1.3.6.1.4.1.11129.2.4.2 (critical: False)
 
 Subject Alt Names:
 www.digicert.com
 content.digicert.com
 digicert.com
 www.origin.digicert.com
 login.digicert.com
 
  Entire cert contents 
 Data:
 Version:   3 (0x2)
 Serial Number: 13518267578909330747227050733614153347 
 (0xa2b860cca01f45fd7ee63601b1c3e83

Re: [openstack-dev] [Neutron][LBaaS] subjAltName and CN extraction from x509 certificates

2014-06-27 Thread Carlos Garza

On Jun 27, 2014, at 9:26 AM, John Dennis jden...@redhat.com wrote:

 On 06/27/2014 12:21 AM, Carlos Garza wrote:
  I don't know where we can check in experimental code so I have a 
 demonstration
 of how to extract CNs subjAltNames or what ever we want from x509 
 certificates. Later on
 I plan to use the OpenSSL libraries to verify certs coming from barbican are 
 valid and
 actually do sign the private_key it is associated with. 
 
 https://github.com/crc32a/ssl_exp.git
 
 
 I'm always leary of reinventing the wheel, we already have code to
 manage pem files (maybe this should be in oslo, it was proposed once)
 
 keystone/common/pemutils.py
 
 I'm also leary of folks writing their own ASN.1 parsing as opposed to
 using existing libraries. Why? It's really hard to get right so you
 correctly handle all the cases, long established robust libraries are
 better at this.

    I'm not writing an ASN.1 parser. I'm using pyasn1 and the
pyasn1_modules.rfc2459 module to read interesting fields from
the x509.


 python-nss (which is a Python binding to the NSS crypto library) has
 easy to use code to extract just about anything from a cert, here is an
 example python script using your example pem file. If using NSS isn't an
 option I'd rather see us provide the necessary binding in pyopenssl than
 handcraft one-off routines. FWIW virtually everything you see in the
 cert output below can be accessed as Pythonically as a Python object(s)
 when using python-nss.

Looks to me like pip install python-nss is broken for python3 :(

I am planning on using another library to handle signature verification etc.,
but for the most part pyasn1, a pure Python module,
is pretty good at extracting the fields I need.

 #!/usr/bin/python
 
 import sys
 import nss.nss as nss
 
 nss.nss_init_nodb()
 
 filename = sys.argv[1]
 
 # Read the PEM file
 try:
     binary_cert = nss.read_der_from_file(filename, True)
 except Exception as e:
     print e
     sys.exit(1)
 else:
     print "loaded cert from file: %s" % filename
 
 # Create a Certificate object from the binary data
 cert = nss.Certificate(binary_cert)
 
 # Dump some basic information
 print
 print "cert subject: %s" % cert.subject
 print "cert CN: %s" % cert.subject_common_name
 print "cert validity:"
 print "Not Before: %s" % cert.valid_not_before_str
 print "Not After: %s" % cert.valid_not_after_str
 
 print
 print "\ncert has %d extensions" % len(cert.extensions)
 
 for extension in cert.extensions:
     print "%s (critical: %s)" % (extension.name, extension.critical)
 
 print
 extension = cert.get_extension(nss.SEC_OID_X509_SUBJECT_ALT_NAME)
 if extension:
     print "Subject Alt Names:"
     for name in nss.x509_alt_name(extension.value):
         print "%s" % name
 else:
     print "cert does not have a subject alt name extension"
 
 # Dump entire cert in friendly format
 print
 print " Entire cert contents "
 print cert
 
 sys.exit(0)
 
 Yields this output:
 
 loaded cert from file: cr1.pem
 
 cert subject: CN=www.digicert.com,O=DigiCert, 
 Inc.,L=Lehi,ST=Utah,C=US,postalCode=84043,STREET=2600 West Executive 
 Parkway,STREET=Suite 
 500,serialNumber=5299537-0142,incorporationState=Utah,incorporationCountry=US,businessCategory=Private
  Organization 
 cert CN: www.digicert.com 
 cert validity:
Not Before: Thu Mar 20 00:00:00 2014 UTC
Not After: Sun Jun 12 12:00:00 2016 UTC
 
 
 cert has 10 extensions
Certificate Authority Key Identifier (critical: False)
Certificate Subject Key ID (critical: False)
Certificate Subject Alt Name (critical: False)
Certificate Key Usage (critical: True)
Extended Key Usage (critical: False)
CRL Distribution Points (critical: False)
Certificate Policies (critical: False)
Authority Information Access (critical: False)
Certificate Basic Constraints (critical: True)
OID.1.3.6.1.4.1.11129.2.4.2 (critical: False)
 
 Subject Alt Names:
www.digicert.com
content.digicert.com
digicert.com
www.origin.digicert.com
login.digicert.com
 
 Entire cert contents 
 Data:
Version:   3 (0x2)
Serial Number: 13518267578909330747227050733614153347 
 (0xa2b860cca01f45fd7ee63601b1c3e83)
Signature Algorithm:
Algorithm: PKCS #1 SHA-256 With RSA Encryption
Issuer: CN=DigiCert SHA2 Extended Validation Server 
 CA,OU=www.digicert.com,O=DigiCert Inc,C=US
Validity:
Not Before: Thu Mar 20 00:00:00 2014 UTC
Not After:  Sun Jun 12 12:00:00 2016 UTC
Subject: CN=www.digicert.com,O=DigiCert, 
 Inc.,L=Lehi,ST=Utah,C=US,postalCode=84043,STREET=2600 West Executive 
 Parkway,STREET=Suite 
 500,serialNumber=5299537-0142,incorporationState=Utah,incorporationCountry=US,businessCategory=Private
  Organization
Subject Public Key Info:
Public Key Algorithm:
Algorithm: PKCS #1 RSA Encryption
RSA Public Key:
Modulus:
a8:89:b3:3b:91:94:57:87:72:09:5b:5f:cb:2c:42:2a

Re: [openstack-dev] [Neutron][LBaaS] subjAltName and CN extraction from x509 certificates

2014-06-27 Thread Carlos Garza

On Jun 28, 2014, at 12:01 AM, Carlos Garza carlos.ga...@rackspace.com
 wrote:
 
 example python script using your example pem file. If using NSS isn't an
 option I'd rather see us provide the necessary binding in pyopenssl than
 handcraft one-off routines.

Are you saying you would prefer we contribute the helper functions to pyOpenSSL,
upstream them, and
then consume those helper functions from pyOpenSSL in the lbaas project?

   If you're just talking about adding the binding alone, then you're implying I
still have to write those helper functions that
reach down to the lower level OpenSSL library. Is that what you're suggesting?
Help me understand your proposal.

I eventually want to do this but I feel our due date of August is
prohibitive.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-16 Thread Carlos Garza
Sorry for responding so late, but I don't think we should be doing ref
counting at all.
In a closed system it's hard enough to guarantee the counts are correct, but in an open
distributed system I really doubt every service will bother decrementing and
incrementing the counters properly.

On Jun 16, 2014, at 1:35 PM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 I would like to see something more sophisticated than a simple counter (it's 
 so easy for a counter to get off when dealing with non-atomic asynchronous 
 commands). But a counter is a good place to start.
 
 On Jun 13, 2014 6:54 AM, Jain, Vivek vivekj...@ebay.com wrote:
 +2. I totally agree with your comments Doug. It defeats the purpose if 
 Barbican does not want to deal with consumers of its service.
 
 Barbican can simply have a counter field on each container to signify how 
 many consumers are using it. Every time a consumer uses a container, it 
 increases the counter using barbican API.  If counter is 0, container is safe 
 to delete.
 
 —vivek
 
 From: Doug Wiegley do...@a10networks.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 10, 2014 at 2:41 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 Of what use is a database that randomly delete rows?  That is, in effect, 
 what you’re allowing.
 
 The secrets are only useful when paired with a service.  And unless I’m 
 mistaken, there’s no undo.  So you’re letting users shoot themselves in the 
 foot, for what reason, exactly?  How do you expect openstack to rely on a 
 data store that is fundamentally random at the whim of users?  Every single 
 service that uses Barbican will now have to hack in a defense mechanism of 
 some kind, because they can’t trust that the secret they rely on will still 
 be there later.  Which defeats the purpose of this mission statement:  
 Barbican is a ReST API designed for the secure storage, provisioning and 
 management of secrets.”
 
 (And I don’t think anyone is suggesting that blind refcounts are the answer.  
 At least, I hope not.)
 
 Anyway, I hear this has already been decided, so, so be it.  Sounds like 
 we’ll hack around it.
 
 Thanks,
 doug
 
 
 From: Douglas Mendizabal douglas.mendiza...@rackspace.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 10, 2014 at 3:26 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 I think that having Barbican decide whether the user is or isn’t allowed to 
 delete a secret that they own based on a reference count that is not directly 
 controlled by them is unacceptable.   This is indeed policy enforcement, and 
 we’d rather not go down that path.
 
 I’m opposed to the idea of reference counting altogether, but a couple of 
 other Barbican-core members are open to it, as long as it does not affect the 
 delete behaviors.
 
 -Doug M.
 
 From: Adam Harwell adam.harw...@rackspace.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 10, 2014 at 4:17 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 Doug: Right, we actually have a blueprint draft for EXACTLY this, but the 
 Barbican team gave us a flat not happening, we reject this change on 
 causing a delete to fail. The shadow-copy solution I proposed only came about 
 because the option you are proposing is not possible. :(
 
 I also realized that really, this whole thing is an issue for the backend, 
 not really for the API itself — the LBaaS API will be retrieving the key/cert 
 from Barbican and passing it to the backend, and the backend it what's 
 responsible for handling it from that point (F5, Stingray etc would never 
 actually call back to Barbican). So, really, the Service-VM solution we're 
 architecting is where the shadow-copy solution needs to live, at which point 
 it no longer is really an issue we'd need to discuss on this mailing list, I 
 think. Stephen, does that make sense to you?
 --Adam
 
 https://keybase.io/rm_you
 
 
 From: Doug Wiegley do...@a10networks.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 10, 2014 4:10 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 A third option, that is neither shadow copying nor policy 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-16 Thread Carlos Garza

On Jun 16, 2014, at 3:22 PM, Doug Wiegley do...@a10networks.com wrote:

 nobody is calling Barbican a database. It is a place to store
 
 … did you at least feel a heavy sense of irony as you typed those two
 statements?  "It's not a database, it just stores things!"  :-)
 
 The real irony here is that in this rather firm stand of keeping the user
 in control of their secrets, you are actually making the user LESS in
 control of their secrets.  Copies of secrets will have to be made, whether
 stored under another tenant, or shadow copied somewhere.  And the user
 will have no way to delete them, or even know that they exist.
 
 The force flag would eliminate the common mistake cases enough that I'd
 wager lbaas and most others would cease to worry, not duplicate, and just
 reference barbican id's and nothing else.  (Not including backends that
 will already make a copy of the secret, but things like servicevm will not
 need to dup it.)  The earlier assertion that we have to deal with the
 missing secrets case even with a force flag is, I think, false, because
 once the common errors have been eliminated, the potential window of
 accidental pain is reduced to those that really ask for it.
 

The force flag is not an option and should no longer be considered.
I'm thinking we should leave it to the user to properly delete their secrets
and not try to open a can of behavioral worms by enforcing policy on the
barbican side. :(



 Thanks,
 Doug
 
 
 
 
 
 
 On 6/16/14, 1:56 PM, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Doug Wiegley's message of 2014-06-10 14:41:29 -0700:
 Of what use is a database that randomly delete rows?  That is, in
 effect, what you're allowing.
 
 The secrets are only useful when paired with a service.  And unless I'm
 mistaken, there's no undo.  So you're letting users shoot themselves in
 the foot, for what reason, exactly?  How do you expect openstack to rely
 on a data store that is fundamentally random at the whim of users?
 Every single service that uses Barbican will now have to hack in a
 defense mechanism of some kind, because they can't trust that the secret
 they rely on will still be there later.  Which defeats the purpose of
 this mission statement:  "Barbican is a ReST API designed for the secure
 storage, provisioning and management of secrets."
 
 (And I don't think anyone is suggesting that blind refcounts are the
 answer.  At least, I hope not.)
 
 Anyway, I hear this has already been decided, so, so be it.  Sounds
 like we'll hack around it.
 
 
 
 Doug, nobody is calling Barbican a database. It is a place to store
 secrets.
 
 The idea is to loosely couple things, and if you need more assurances,
 use something like Heat to manage the relationships.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-16 Thread Carlos Garza

On Jun 16, 2014, at 4:06 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Doug Wiegley's message of 2014-06-16 13:22:26 -0700:
 nobody is calling Barbican a database. It is a place to store
 
 … did you at least feel a heavy sense of irony as you typed those two
 statements?  "It's not a database, it just stores things!"  :-)
 
 
 Not at all, though I understand that, clipped as so, it may look a bit
 ironic.
 
 I was using shorthand of database to mean a general purpose database. I
 should have qualified it to avoid any confusion. It is a narrow purpose
 storage service with strong access controls. We can call that a database
 if you like, but I think it has one very tiny role, and that is to audit
 and control access to secrets.
 
 The real irony here is that in this rather firm stand of keeping the user
 in control of their secrets, you are actually making the user LESS in
 control of their secrets.  Copies of secrets will have to be made, whether
 stored under another tenant, or shadow copied somewhere.  And the user
 will have no way to delete them, or even know that they exist.
 
 
 Why would you need to make copies outside of the in-RAM copy that is
 kept while the service runs? You're trying to do too much instead of
 operating in a nice loosely coupled fashion.

Because the service may be restarted?


 
 The force flag would eliminate the common mistake cases enough that I'd
 wager lbaas and most others would cease to worry, not duplicate, and just
 reference barbican id's and nothing else.  (Not including backends that
 will already make a copy of the secret, but things like servicevm will not
 need to dup it.)  The earlier assertion that we have to deal with the
 missing secrets case even with a force flag is, I think, false, because
 once the common errors have been eliminated, the potential window of
 accidental pain is reduced to those that really ask for it.
 
 The accidental pain thing makes no sense to me. I'm a user and I take
 responsibility for my data. If I don't want to have that responsibility,
 I will use less privileged users and delegate the higher amount of
 privilege to a system that does manage those relationships for me.
 
 Do we have mandatory file locking in Unix? No we don't. Why? Because some
 users want the power to remove files _no matter what_. We build in the
 expectation that things may disappear no matter what you do to prevent
 it. I think your LBaaS should be written with the same assumption. It
 will be more resilient and useful to more people if they do not have to
 play complicated games to remove a secret.
 
 Anyway, nobody has answered this. What user would indiscriminately delete
 their own data and expect that things depending on that data will continue
 to work indefinitely?

    Users that are expecting barbican operations to only occur during the
initial loadbalancer provisioning. I.e. users that don't realize their LB configs
don't natively store the private keys and would be retrieving keys on the
fly during every migration, HA spin-up, service restart (from power failure), etc.
But I agree we shouldn't do force-flag locking, as the barbican team has already
dismissed the possibility of adding policy enforcement on behalf of other
services. Shadow copying (into an lbaas-owned account on barbican) was just so
that our lbaas backend can access the keys outside of the user's control if need
be. :|



 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-11 Thread Carlos Garza

On Jun 11, 2014, at 9:31 AM, Evgeny Fedoruk evge...@radware.com wrote:

 Regarding the case when back-end system tries to retrieve secret from deleted 
 Barbican TLS container,
 Is this a real use case? I mean, is there a back-end system which will get 
 container ID from somewhere, try to retrieve secrets from Barbican by itself 
 and hope for good?

    I'm of the opinion that the backend systems should not be talking to
barbican and that any key passing should happen from the API to the back end.
I see it being very complex trying to code the backend so that it's configurable
with barbican, since I would have assumed the backend won't even have knowledge
of OpenStack.


 In my understanding, there is a plugin and a driver who can always check TLS 
 container existence before even start configuring the back-end system. In 
 case of a problem tenant will receive a clear error message and back-end 
 system will remain up and running.

    This is the case when the API is spinning up the back end system. The
concern is when a backend tries to be HA by duplicating a loadbalancer for
redundancy. But I would argue that the front end, as it's being treated now, would
not be managing details for HA, so the back end providing HA would
duplicate the keys between backend loadbalancers. For example, an F5 must store the
private key on its side, and if it's providing HA it would have access to the key
already.

 In case when back-end system itself triggers secret retrieval (outside of 
 OpenStack scope)  – first it should check container existence and only after 
 that destroy previous TLS setup and perform a new setup.
 LBaaS back-end system may not get a container ID at all,  but get its content 
 and not interact with Barbican by itself.
 In case when new LBaaS back-end system is created (HA event, for example), 
 whoever created an instance and gave it container ID, should check its 
 existence.
  
 Is there a specific use case when:
 back-end system, having container ID, up and running, offloading encrypted 
 traffic with a certificate from that container (by this time deleted from 
 Barbican),
 at some time, goes and tries to retrieve the secret, fails, loses its 
 previous TLS settings and causing downtime?
  
 Regards,
 Evgeny
  
  
  
 From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com] 
 Sent: Wednesday, June 11, 2014 4:14 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [Caution: Message contains Suspicious URL content] Re: 
 [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas
  
 +1 – Warning on a deletion of certificate in use can be considered as a 
 “nice-to-have” feature and not “must-have”!
  
 From: Samuel Bercovici [mailto:samu...@radware.com] 
 Sent: Wednesday, June 11, 2014 4:16 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
  
 Hi,
  
 For Juno -
 I think that the existing capabilities in Barbican should be enough to start 
 with.
 A good detection and error message in LBaaS should also be sufficient to 
 start with.
  
 After Juno -
 We can consider a fix enhancement to Barbican later, IF deleting a 
 certificate in use and expressing an explicit error, will be common and 
 become an issue.
  
 Regards,
 -Sam.
  
  
  
 From: Doug Wiegley [mailto:do...@a10networks.com] 
 Sent: Wednesday, June 11, 2014 12:41 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [Caution: Message contains Suspicious URL content] Re: 
 [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas
  
 Of what use is a database that randomly delete rows?  That is, in effect, 
 what you’re allowing.
  
 The secrets are only useful when paired with a service.  And unless I’m 
 mistaken, there’s no undo.  So you’re letting users shoot themselves in the 
 foot, for what reason, exactly?  How do you expect openstack to rely on a 
 data store that is fundamentally random at the whim of users?  Every single 
 service that uses Barbican will now have to hack in a defense mechanism of 
 some kind, because they can’t trust that the secret they rely on will still 
 be there later.  Which defeats the purpose of this mission statement:  
 Barbican is a ReST API designed for the secure storage, provisioning and 
 management of secrets.”
  
 (And I don’t think anyone is suggesting that blind refcounts are the answer.  
 At least, I hope not.)
  
 Anyway, I hear this has already been decided, so, so be it.  Sounds like 
 we’ll hack around it.
  
 Thanks,
 doug
  
  
 From: Douglas Mendizabal douglas.mendiza...@rackspace.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 10, 2014 at 3:26 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] 

Re: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

2014-06-10 Thread Carlos Garza
 Ok, but we still need input from Stephen Balukoff and Jorge to see how this
will integrate with the API being proposed. I'm not sure if they were intending
to use the attributes you're discussing, or which object was going to
contain them.
On Jun 10, 2014, at 6:13 AM, Evgeny Fedoruk evge...@radware.com
wrote:

 Hi All,
 
 Carlos, Vivek, German, thanks for reviewing the RST doc.
 There are some issues I want to pinpoint final decision on them here, in ML, 
 before writing it down in the doc.
 Other issues will be commented on the document itself.
 
 1.   Support/No support in JUNO
 Referring to summit’s etherpad 
 https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7,
 a.   SNI certificates list was decided to be supported. Was decision made 
 not to support it?
 Single certificate with multiple domains can only partly address the need for 
 SNI, still, different applications 
 on back-end will need different certificates.
 b.  Back-end re-encryption was decided to be supported. Was decision made 
 not to support it?
 c.   With front-end client authentication and back-end server 
 authentication not supported, 
 Should certificate chains be supported?
 2.   Barbican TLS containers
 a.   TLS containers are immutable.
 b.  TLS container is allowed to be deleted, always.
 i.  Even when 
 it is used by LBaaS VIP listener (or other service).
   ii.  Meta data 
 on TLS container will help tenant to understand that container is in use by 
 LBaaS service/VIP listener
  iii.  If every 
 VIP listener will “register” itself in meta-data while retrieving container, 
 how that “registration” will be removed when VIP listener stops using the 
 certificate?
 
 Please comment on these points and review the document on gerrit 
 (https://review.openstack.org/#/c/98640)
 I will update the document with decisions on above topics.
 
 Thank you!
 Evgeny
 
 
 From: Evgeny Fedoruk 
 Sent: Monday, June 09, 2014 2:54 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit
 
 Hi All,
 
 A Spec. RST  document for LBaaS TLS support was added to Gerrit for review
 https://review.openstack.org/#/c/98640
 
 You are welcome to start commenting it for any open discussions.
 I tried to address each aspect being discussed, please add comments about 
 missing things.
 
 Thanks,
 Evgeny
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Carlos Garza
I understand this concern and was advocating that a configuration option be
available to disable or enable auto-updating of SSL certificates. But since
everyone is in favor of storing metadata on the barbican container directly, I
guess this is a moot point now.

On Jun 6, 2014, at 5:52 PM, Eichberger, German german.eichber...@hp.com 
wrote:

 Jorge + John,
 
 I am most concerned with a user changing his secret in barbican and then the 
 LB trying to update and causing downtime. Some users like to control when the 
 downtime occurs.
 
 For #1 it was suggested that once the event is delivered it would be up to a 
 user to enable an auto-update flag.
 
 In the case of #2 I am a bit worried about error cases: e.g. uploading the 
 certificates succeeds but registering the loadbalancer(s) fails. So using the 
 barbican system for those warnings might not as fool proof as we are hoping. 
 
 One thing I like about #2 over #1 is that it pushes a lot of the information 
 to Barbican. I think a user would expect when he uploads a new certificate to 
 Barbican that the system warns him right away about load balancers using the 
 old cert. With #1 he might get an e-mails from LBaaS telling him things 
 changed (and we helpfully updated all affected load balancers) -- which isn't 
 as immediate as #2. 
 
 If we implement an auto-update flag for #1 we can have both. User's who 
 like #2 juts hit the flag. Then the discussion changes to what we should 
 implement first and I agree with Jorge + John that this should likely be #2.
 
 German
 
 -Original Message-
 From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
 Sent: Friday, June 06, 2014 3:05 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 Hey John,
 
 Correct, I was envisioning that the Barbican request would not be affected, 
 but rather, the GUI operator or API user could use the registration 
 information to do so should they want to do so.
 
 Cheers,
 --Jorge
 
 
 
 
 On 6/6/14 4:53 PM, John Wood john.w...@rackspace.com wrote:
 
 Hello Jorge,
 
 Just noting that for option #2, it seems to me that the registration 
 feature in Barbican would not be required for the first version of this 
 integration effort, but we should create a blueprint for it nonetheless.
 
 As for your question about services not registering/unregistering, I 
 don't see an issue as long as the presence or absence of registered 
 services on a Container/Secret does not **block** actions from 
 happening, but rather is information that can be used to warn clients 
 through their processes. For example, Barbican would still delete a 
 Container/Secret even if it had registered services.
 
 Does that all make sense though?
 
 Thanks,
 John
 
 
 From: Youcef Laribi [youcef.lar...@citrix.com]
 Sent: Friday, June 06, 2014 2:47 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 +1 for option 2.
 
 In addition as an additional safeguard, the LBaaS service could check 
 with Barbican when failing to use an existing secret to see if the 
 secret has changed (lazy detection).
 
 Youcef
 
 -Original Message-
 From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
 Sent: Friday, June 06, 2014 12:16 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 Hey everyone,
 
 Per our IRC discussion yesterday I'd like to continue the discussion on 
 how Barbican and Neutron LBaaS will interact. There are currently two 
 ideas in play and both will work. If you have another idea please free 
 to add it so that we may evaluate all the options relative to each other.
 Here are the two current ideas:
 
 1. Create an eventing system for Barbican that Neutron LBaaS (and other
 services) consumes to identify when to update/delete updated secrets 
 from Barbican. For those that aren't up to date with the Neutron LBaaS 
 API Revision, the project/tenant/user provides a secret (container?) id 
 when enabling SSL/TLS functionality.
 
 * Example: If a user makes a change to a secret/container in Barbican 
 then Neutron LBaaS will see an event and take the appropriate action.
 
 PROS:
 - Barbican is going to create an eventing system regardless so it will 
 be supported.
 - Decisions are made on behalf of the user which lessens the amount of 
 calls the user has to make.
 
 CONS:
 - An eventing framework can become complex especially since we need to 
 ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2…I
 think.
 
 2. Push orchestration decisions to API users. This idea comes with two 
 assumptions. The first assumption is that most providers' customers use 
 the cloud via 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Carlos Garza
   The barbican team was considering making the container mutable, but I don't
think it matters now
since everyone has chimed in and wants the container to be immutable. The
current discussion now is that
the TLS container will be immutable but the metadata will not be.

I'm not sure what is meant by versioning. If Vivek cares to elaborate, that
would be helpful.


On Jun 9, 2014, at 2:30 PM, Samuel Bercovici samu...@radware.com wrote:

 As far as I understand the Current Barbican implementation is immutable.
 Can anyone from Barbican comment on this?
 
 -Original Message-
 From: Jain, Vivek [mailto:vivekj...@ebay.com] 
 Sent: Monday, June 09, 2014 8:34 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 +1 for the idea of making certificate immutable.
 However, if Barbican allows updating certs/containers then versioning is a 
 must.
 
 Thanks,
 Vivek
 
 
 On 6/8/14, 11:48 PM, Samuel Bercovici samu...@radware.com wrote:
 
 Hi,
 
 I think that option 2 should be preferred at this stage.
 I also think that certificate should be immutable, if you want a new 
 one, create a new one and update the listener to use it.
 This removes any chance of mistakes, need for versioning etc.
 
 -Sam.
 
 -Original Message-
 From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
 Sent: Friday, June 06, 2014 10:16 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 Hey everyone,
 
 Per our IRC discussion yesterday I'd like to continue the discussion on 
 how Barbican and Neutron LBaaS will interact. There are currently two 
 ideas in play and both will work. If you have another idea please free 
 to add it so that we may evaluate all the options relative to each other.
 Here are the two current ideas:
 
 1. Create an eventing system for Barbican that Neutron LBaaS (and other
 services) consumes to identify when to update/delete updated secrets 
 from Barbican. For those that aren't up to date with the Neutron LBaaS 
 API Revision, the project/tenant/user provides a secret (container?) id 
 when enabling SSL/TLS functionality.
 
 * Example: If a user makes a change to a secret/container in Barbican 
 then Neutron LBaaS will see an event and take the appropriate action.
 
 PROS:
 - Barbican is going to create an eventing system regardless so it will 
 be supported.
 - Decisions are made on behalf of the user which lessens the amount of 
 calls the user has to make.
 
 CONS:
 - An eventing framework can become complex especially since we need to 
 ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2…I
 think.
 
 2. Push orchestration decisions to API users. This idea comes with two 
 assumptions. The first assumption is that most providers' customers use 
 the cloud via a GUI, which in turn can handle any orchestration 
 decisions that need to be made. The second assumption is that power API 
 users are savvy and can handle their decisions as well. Using this 
 method requires services, such as LBaaS, to register in the form of 
 metadata to a barbican container.
 
 * Example: If a user makes a change to a secret the GUI can see which 
 services are registered and opt to warn the user of consequences. Power 
 users can look at the registered services and make decisions how they 
 see fit.
 
 PROS:
 - Very simple to implement. The only code needed to make this a 
 reality is at the control plane (API) level.
 - This option is more loosely coupled that option #1.
 
 CONS:
 - Potential for services to not register/unregister. What happens in 
 this case?
 - Pushes complexity of decision making on to GUI engineers and power 
 API users.
 
 
 I would like to get a consensus on which option to move forward with 
 ASAP since the hackathon is coming up and delivering Barbican to 
 Neutron LBaaS integration is essential to exposing SSL/TLS 
 functionality, which almost everyone has stated is a #1/#2 priority.
 
 I'll start the decision making process by advocating for option #2. My 
 reason for choosing option #2 has to deal mostly with the simplicity of 
 implementing such a mechanism. Simplicity also means we can implement 
 the necessary code and get it approved much faster which seems to be a 
 concern for everyone. What option does everyone else want to move 
 forward with?
 
 
 
 Cheers,
 --Jorge
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Carlos Garza
   The use case was that a cert inside the container could be updated while the
private key stays the same. I.e. a new cert would be a re-signing of the same old
key. By immutable we mean to say that the same UUID would be used on the lbaas
side. This is a heavy-handed way of expecting the user to manually update their
lbaas instances when they update a cert.

Yes, we can live with an immutable container, which seems to be the direction
we are going now.

On Jun 9, 2014, at 2:54 PM, Tiwari, Arvind arvind.tiw...@hp.com wrote:

 As per current implementation, containers are immutable. 
 Do we have any use case to make it mutable? Can we live with new container 
 instead of updating an existing container?
 
 Arvind 
 
 -Original Message-
 From: Samuel Bercovici [mailto:samu...@radware.com] 
 Sent: Monday, June 09, 2014 1:31 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 As far as I understand the Current Barbican implementation is immutable.
 Can anyone from Barbican comment on this?
 
 -Original Message-
 From: Jain, Vivek [mailto:vivekj...@ebay.com]
 Sent: Monday, June 09, 2014 8:34 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 +1 for the idea of making certificate immutable.
 However, if Barbican allows updating certs/containers then versioning is a 
 must.
 
 Thanks,
 Vivek
 
 
 On 6/8/14, 11:48 PM, Samuel Bercovici samu...@radware.com wrote:
 
 Hi,
 
 I think that option 2 should be preferred at this stage.
 I also think that certificate should be immutable, if you want a new 
 one, create a new one and update the listener to use it.
 This removes any chance of mistakes, need for versioning etc.
 
 -Sam.
 
 -Original Message-
 From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
 Sent: Friday, June 06, 2014 10:16 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 Hey everyone,
 
 Per our IRC discussion yesterday I'd like to continue the discussion on 
 how Barbican and Neutron LBaaS will interact. There are currently two 
 ideas in play and both will work. If you have another idea please free 
 to add it so that we may evaluate all the options relative to each other.
 Here are the two current ideas:
 
 1. Create an eventing system for Barbican that Neutron LBaaS (and other
 services) consumes to identify when to update/delete updated secrets 
 from Barbican. For those that aren't up to date with the Neutron LBaaS 
 API Revision, the project/tenant/user provides a secret (container?) id 
 when enabling SSL/TLS functionality.
 
 * Example: If a user makes a change to a secret/container in Barbican 
 then Neutron LBaaS will see an event and take the appropriate action.
 
 PROS:
 - Barbican is going to create an eventing system regardless so it will 
 be supported.
 - Decisions are made on behalf of the user which lessens the amount of 
 calls the user has to make.
 
 CONS:
 - An eventing framework can become complex especially since we need to 
 ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2…I
 think.
 
 2. Push orchestration decisions to API users. This idea comes with two 
 assumptions. The first assumption is that most providers' customers use 
 the cloud via a GUI, which in turn can handle any orchestration 
 decisions that need to be made. The second assumption is that power API 
 users are savvy and can handle their decisions as well. Using this 
 method requires services, such as LBaaS, to register in the form of 
 metadata to a barbican container.
 
 * Example: If a user makes a change to a secret the GUI can see which 
 services are registered and opt to warn the user of consequences. Power 
 users can look at the registered services and make decisions how they 
 see fit.
 
 PROS:
 - Very simple to implement. The only code needed to make this a 
 reality is at the control plane (API) level.
 - This option is more loosely coupled that option #1.
 
 CONS:
 - Potential for services to not register/unregister. What happens in 
 this case?
 - Pushes complexity of decision making on to GUI engineers and power 
 API users.
 
 
 I would like to get a consensus on which option to move forward with 
 ASAP since the hackathon is coming up and delivering Barbican to 
 Neutron LBaaS integration is essential to exposing SSL/TLS 
 functionality, which almost everyone has stated is a #1/#2 priority.
 
 I'll start the decision making process by advocating for option #2. My 
 reason for choosing option #2 has to deal mostly with the simplicity of 
 implementing such a mechanism. Simplicity also means we can implement 
 the necessary code and get it approved much faster which seems to be a 
 

Re: [openstack-dev] [Neutron][LBaaS]TLS API support for authentication

2014-05-28 Thread Carlos Garza

On May 27, 2014, at 9:13 PM, Stephen Balukoff sbaluk...@bluebox.net
 wrote:

 Hi y'all!
 
 I would advocate that if the user asks the front-end API for the private key 
 information (ie. GET request), what they get back is the key's modulus and 
 nothing else. This should work to verify whether a given key matches a given 
 cert, and doesn't break security requirements for those who are never allowed 
 to actually touch private key data. And if a user added the key themselves 
 through the front-end API, I think it's safe to assume the responsibility for 
 keeping a copy of the key they can access lies with the user.

    I'm thinking at this point all user interaction with their cert and key
should be handled by barbican directly instead of through our API. It seems
like we've punted everything but the IDs to barbican. Returning the modulus is
still RSA-centric though.
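
For reference, the modulus comparison is a one-liner with the cryptography
package (the same lib barbican already pulls in); a hedged sketch, and as noted
it only makes sense for RSA keys (an EC key would need a different comparison):

from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.serialization import load_pem_private_key

def key_matches_cert(key_pem, cert_pem, password=None):
    """True if the RSA private key's modulus matches the cert public key's modulus."""
    key = load_pem_private_key(key_pem, password=password, backend=default_backend())
    cert = x509.load_pem_x509_certificate(cert_pem, default_backend())
    return key.public_key().public_numbers().n == cert.public_key().public_numbers().n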


 
 Having said this, of course anything which spins up virtual appliances, or 
 configures physical appliances is going to need access to the actual private 
 key. So any back-end API(s) will probably need to have different behavior 
 here.
 
 One thing I do want to point out is that with the 'transient' nature of 
 back-end guests / virtual servers, it's probably going to be important to 
 store the private key data in something non-volatile, like barbican's store. 
 While it may be a good idea to add a feature that generates a private key and 
 certificate signing request via our API someday for certain organizations' 
 security requirements, one should never have the only store for this private 
 key be a single virtual server. This is also going to be important if a 
 certificate + key combination gets re-used in another listener in some way, 
 or when horizontal scaling features get added.

    I don't think our API needs to handle the CSRs; it looks like barbican
aspires to do this, so our API really is pretty well insulated.
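
If key + CSR generation ever does land somewhere (barbican or otherwise), the
mechanics are small. A hypothetical pyOpenSSL sketch, not part of the proposed
LBaaS API, where only the CSR would ever leave the key store:

from OpenSSL import crypto

key = crypto.PKey()
key.generate_key(crypto.TYPE_RSA, 2048)

req = crypto.X509Req()
req.get_subject().commonName = 'www.example.com'   # placeholder subject
req.set_pubkey(key)
req.sign(key, 'sha256')

key_pem = crypto.dump_privatekey(crypto.FILETYPE_PEM, key)           # stays in the key store
csr_pem = crypto.dump_certificate_request(crypto.FILETYPE_PEM, req)  # handed to the CA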

 
 Thanks,
 Stephen
 
 -- 
 Stephen Balukoff 
 Blue Box Group, LLC 
 (800)613-4305 x807
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]TLS API support for authentication

2014-05-23 Thread Carlos Garza
Right, so are you advocating that the front end API never return a
private key back to the user, regardless of whether the key was generated
on the back end or sent in to the API by the user? We kind of are
already implying that they can refer to the key via a private
key id.


On May 23, 2014, at 9:11 AM, John Dennis jden...@redhat.com
 wrote:

 Using standard formats such as PEM and PKCS12 (most people don't use
 PKCS8 directly) is a good approach.

We had to deal with PKCS8 manually in our CLB1.0 offering at rackspace. Too 
many customers complained.

 Be mindful that some cryptographic
 services do not provide *any* direct access to private keys (makes
 sense, right?). Private keys are shielded in some hardened container and
 the only way to refer to the private key is via some form of name
 association.

    We're anticipating referring to the keys via a barbican key id, which will be
named later.


 Therefore your design should never depend on having access
 to a private key and

But we need access enough to transport the key to the back end 
implementation though.

 should permit having the private key stored in some
 type of secure key storage.

   A secure repository for the private key is already a requirement that
we are attempting to meet with barbican.


 -- 
 John
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]LBaaS 2nd Session etherpad

2014-05-21 Thread Carlos Garza
   I'm crc32 on Freenode. My time zone is U.S. Central (UTC-5).
Let me know when we can clear this up. I need to know what the intent was for
the Trusted certificates before we can decide what fields are needed for
it.



On May 21, 2014, at 9:14 AM, Samuel Bercovici 
samu...@radware.com wrote:

Hi Carlos,

What is your IRC nick?
In what time zone you are located?

Regards,
-Sam.






From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
Sent: Wednesday, May 21, 2014 2:52 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]LBaaS 2nd Session etherpad

I'm reading through the https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL
docs as well as the https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7
document that you're referencing below, and I think whoever wrote the documents
may have misunderstood the association between X509 certificates and private
and public keys.
I think we should clean those up and unambiguously declare that:

A certificate shall be defined as a PEM encoded X509 certificate.
For example:

Certificate:
-----BEGIN CERTIFICATE-----
   blah blah blah base64 stuff goes here
-----END CERTIFICATE-----

A private key shall be a PEM encoded private key that is not necessarily an 
RSA key. For example, it could be an elliptic curve key, but most likely it 
will be RSA.


A public key shall mean an actual PEM encoded public key and not the X509 
certificate that contains it. Example:
-BEGIN PUBLIC KEY-
bah blah blah base64 stuff goes here
-END PUBLIC KEY-

A Private key shall mean a PEM encoded private key.
Example
-BEGIN RSA PRIVATE KEY-
blah blah blah base64 goes here.
-END RSA PRIVATE KEY-

Also the same key could be encoded as pkcs8

-BEGIN PRIVATE KEY-
base64 stuff here
-END PRIVATE KEY-

I would think that we should allow for PKCS8 so that users are not restricted 
to PKCS1 RSA keys via BEGIN PRIVATE KEY. I'm ok with forcing the user to not 
use PKCS8 to send both
the certificate and key.
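
To make the distinction concrete, a small sketch (Python cryptography library;
file names are made up) showing that the X509 certificate and the private key
are separate artifacts, and that both the PKCS#1 and PKCS#8 encodings load as
the same kind of key object, so accepting either costs us nothing on the
validation side:

    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import load_pem_private_key

    # -----BEGIN CERTIFICATE----- block
    cert = x509.load_pem_x509_certificate(open("server.crt", "rb").read())

    # Accepts "BEGIN RSA PRIVATE KEY" (PKCS#1) and "BEGIN PRIVATE KEY" (PKCS#8).
    key = load_pem_private_key(open("server.key", "rb").read(), password=None)

    # Sanity check that the uploaded key matches the certificate; this is also
    # why a separate public key upload is unnecessary.
    assert cert.public_key().public_numbers() == key.public_key().public_numbers()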

There seems to be confusion in the neutron-lbaas-ssl-l7 etherpad doc as well 
as the doc at URL https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7.
The confusion is that the terms public key and certificate are being used 
interchangeably.

For example, in the wiki page
under Resource change:
SSL certificate (new) declares

certificate_chain : list of PEM-formatted public keys, not mandatory
This should be changed to
certificate_chain: list of PEM-formatted x509 certificates, not mandatory

Also in the CLI portion of the doc there are entries like
neutron ssl-certificate-create --public-key CERTIFICATE-FILE --private-key 
PRIVATE-KEY-FILE --passphrase PASSPHRASE --cert-chain 
INTERMEDIATE-KEY-FILE-1, INTERMEDIATE-KEY-FILE-2 certificate name
The option --public-key should be changed to --cert since it specifies the 
X509. Also, the names INTERMEDIATE-KEY-FILE-1 etc. should be changed to 
INTERMEDIATE-CERT-FILE-1, since these are X509 certificates and not keys.


The line below makes no sense to me.
neutron ssl-trusted-certificate-create --key PUBLIC-KEY-FILE key name

Are you trying to give the certificate a name? We also will never need to work 
with public keys in general, as the public key can be extracted from the X509 
or the private key file.
Or was the intent to use ssl-trusted-certificates to specify the private keys 
that the loadbalancer will use when communicating with back end servers that 
are doing client auth?

The rationale portion of the doc declares that trusted certificates are for 
back end encryption, but doesn't mention whether this is for client auth 
either. Was the intent to use a specific key for the SSL session between the 
load balancer and the back end server, or was the intention to present the 
client cert to the backend server so that the back end server can authenticate 
against whatever CA it (the server) trusts?

In either case both the private key and the certificate or chain should be 
used in this configuration, since the loadbalancer needs the private key 
during the SSL session.
The command should look something along the lines of
neutron ssl-trusted-certificate-create --key PRIVATE_KEY_FILE --cert 
CERTIFICATE-file.


    I would like to help out with this, but I need to know the intent of the 
person who initially interchanged the terms key and certificate, and it's much 
better to fix this sooner rather than later.


On May 15, 2014, at 10:58 PM, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com wrote:

Hi Everyone,

https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7

Feel free to modify and update, please make sure you use your name so we will 
know who have added the modification.

Regards,
-Sam.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http

Re: [openstack-dev] [Neutron][LBaaS]LBaaS 2nd Session etherpad

2014-05-20 Thread Carlos Garza
I'm reading through the https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL 
docs as well as the https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7
document that you're referencing below, and I think whoever wrote the documents 
may have misunderstood the association between X509 certificates and private 
and public keys.
I think we should clean those up and unambiguously declare that.

A certificate shall be defined as a PEM encoded X509 certificate.
For example

Certificate:
-BEGIN CERTIFICATE-
   blah blah blah base64 stuff goes here
-END CERTIFICATE-

A private key shall be a PEM encoded private key that is not necessarily an 
RSA key. For example, it could be an elliptic curve key, but most likely it 
will be RSA.


A public key shall mean an actual PEM encoded public key and not the X509 
certificate that contains it. Example:
-BEGIN PUBLIC KEY-
bah blah blah base64 stuff goes here
-END PUBLIC KEY-

A Private key shall mean a PEM encoded private key.
Example
-BEGIN RSA PRIVATE KEY-
blah blah blah base64 goes here.
-END RSA PRIVATE KEY-

Also the same key could be encoded as pkcs8

-BEGIN PRIVATE KEY-
base64 stuff here
-END PRIVATE KEY-

I would think that we should allow for PKCS8 so that users are not restricted 
to PKCS1 RSA keys via BEGIN PRIVATE KEY. I'm ok with forcing the user to not 
use PKCS8 to send both
the certificate and key.

There seems to be confusion in the neutron-lbaas-ssl-l7 etherpad doc as well 
as the doc at URL https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7.
The confusion is that the terms public key and certificate are being used 
interchangeably.

For example, in the wiki page
under Resource change:
SSL certificate (new) declares

certificate_chain : list of PEM-formatted public keys, not mandatory
This should be changed to
certificate_chain: list of PEM-formatted x509 certificates, not mandatory

Also in the CLI portion of the doc there are entries like
neutron ssl-certificate-create --public-key CERTIFICATE-FILE --private-key 
PRIVATE-KEY-FILE --passphrase PASSPHRASE --cert-chain 
INTERMEDIATE-KEY-FILE-1, INTERMEDIATE-KEY-FILE-2 certificate name
The option --public-key should be changed to --cert since it specifies the 
X509. Also, the names INTERMEDIATE-KEY-FILE-1 etc. should be changed to 
INTERMEDIATE-CERT-FILE-1, since these are X509 certificates and not keys.


The line below makes no sense to me.
neutron ssl-trusted-certificate-create --key PUBLIC-KEY-FILE key name

Are you trying to give the certificate a name? We also will never need to work 
with public keys in general, as the public key can be extracted from the X509 
or the private key file.
Or was the intent to use ssl-trusted-certificates to specify the private keys 
that the loadbalancer will use when communicating with back end servers that 
are doing client auth?

The rationale portion of the doc declares that trusted certificates are for 
back end encryption, but doesn't mention whether this is for client auth 
either. Was the intent to use a specific key for the SSL session between the 
load balancer and the back end server, or was the intention to present the 
client cert to the backend server so that the back end server can authenticate 
against whatever CA it (the server) trusts?

In either case both the private key and the certificate or chain should be 
used in this configuration, since the loadbalancer needs the private key 
during the SSL session.
The command should look something along the lines of
neutron ssl-trusted-certificate-create --key PRIVATE_KEY_FILE --cert 
CERTIFICATE-file.


    I would like to help out with this, but I need to know the intent of the 
person who initially interchanged the terms key and certificate, and it's much 
better to fix this sooner rather than later.


On May 15, 2014, at 10:58 PM, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com wrote:

Hi Everyone,

https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7

Feel free to modify and update, please make sure you use your name so we will 
know who have added the modification.

Regards,
-Sam.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

2014-05-10 Thread Carlos Garza

On May 10, 2014, at 1:52 AM, Eugene Nikanorov enikano...@mirantis.com
 wrote:

 Hi Carlos,
 I think you had a chance to hear this argument yourself (from several 
 different core members: Mark McClain, Salvatore Orlando, Kyle Mestery) on 
 those meetings we had in past 2 months.
 I was advocating 'loadbalancer' (in it's extended version) once too, 
 receiving negative opinions as well.
 In general this approach puts too much of control of a backend to user's 
 hands and this goes in opposite direction than neutron project.
 
 If it's just about the name of the root object - VIP suits this role too, so 
 I'm fine with that. I also find VIP/Listeners model a bit more clearer per 
 definitions in our glossary.
 
 Thanks,
 Eugene.

    I was in those meetings, Eugene, and I read the IRC logs of the two I 
missed, and I was under the impression they were open to discussion. I'll 
defer this topic until the summit.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-09 Thread Carlos Garza

On May 9, 2014, at 3:26 AM, Eugene Nikanorov 
enikano...@mirantis.commailto:enikano...@mirantis.com
 wrote:

Carlos,

The general objection is that if we don't need multiple VIPs (different ip, not 
just tcp ports) per single logical loadbalancer, then we don't need 
loadbalancer because everything else is addressed by VIP playing a role of 
loadbalancer.

    That's pretty much our objection. You seem to be masquerading VIPs as if 
they were loadbalancers. APIs that don't model reality are not a good fit as 
far as we're concerned.

We do not recognize the logical connection in "we will use a loadbalancer top 
level object if and only if it will contain multiple ports or VIPs". We view 
this as a straw man attempt to get those in favor of a loadbalancer top level 
object to somehow reframe their argument as one that we now need multiple 
ports, VIPs, etc., which isn't what we are arguing at all.

I have no doubt that even if we ever did have a use case for this, you'll just 
reject the use case or come up with another bizarre constraint as to why we 
don't need a loadbalancer top level object.
That was never the argument we were trying to make in the first place.

Regarding conclusions - I think we've heard enough negative opinions on the 
idea of 'container' to at least postpone this discussion to the point when 
we'll get some important use cases that could not be addressed by 'VIP as 
loadbalancer'

    We haven't really heard any negative opinions other than what is coming 
from you and Sam. And it looks like Sam's objection is that he has predefined 
physical loadbalancers already sitting on a rack. For example, if he has a rack 
of 8 physical loadbalancers then he only has 8 loadbalancer_ids that are 
shared by many users, and for some reason this is locking him into the belief 
that he shouldn't expose loadbalancer objects directly to the customer. This is 
somewhat alien to us, as we also have physicals in our CLB1.0 product, but we 
still use the notion of loadbalancer objects that are shared across a single 
Stingray host. We don't equate a loadbalancer with an actual Stingray host.

If Sam needs help wrapping a virtual loadbalancer object in his API, let us 
know; we would like to help with that, as we firmly know it's awkward to take 
something such as Neutron LBaaS and interpret it to be "virtual IPs as a 
service." We've done that with our API in CLB1.0.

Carlos.

Eugene.

On Fri, May 9, 2014 at 8:33 AM, Carlos Garza 
carlos.ga...@rackspace.commailto:carlos.ga...@rackspace.com wrote:

On May 8, 2014, at 2:45 PM, Eugene Nikanorov 
enikano...@mirantis.commailto:enikano...@mirantis.com wrote:

Hi Carlos,

Are you saying that we should have a loadbalancer resource only in the case 
where we want it to span multiple L2 networks, as if it were a router? I 
don't see how you arrived at that conclusion. Can you explain further?
No, I mean that loadbalancer instance is needed if we need several *different* 
L2 endpoints for several front ends.
That's basically 'virtual appliance' functionality that we've discussed on 
today's meeting.

   From looking at the IRC log it looks like nothing conclusive came out of the 
meeting. I don't understand a lot of the conclusions you arrive at. For example, 
you're rejecting the notion of a concrete loadbalancer object unless it's needed 
to include multi-L2-network support. Will you make an honest effort to describe 
your objections here on the ML, because if we can't resolve it here it's going 
to spill over into the summit. I certainly don't want this to dominate the summit.



Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert implementation for LBaaS and VPN

2014-05-08 Thread Carlos Garza

On May 8, 2014, at 5:19 AM, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com
 wrote:

Hi,

Please note as commented also by other XaaS services that managing SSL 
certificates is not a sole LBaaS challenge.
This calls for either an OpenStack wide service or at least a Neutron wide 
service to implement such use cases.

So it here are the options as far as I have seen so far.
1.   Barbican as a central service to store and retrieve SSL certificates. 
I think the Certificates generation is currently a lower priority requirement
2.   Using Heat as a centralized service
3.   Implementing such service in Neutron
4.   LBaaS private implementation

BTW, on all the options above, Barbican can optionally be used to store the 
certificates or the private part of the certificates.

   Is your statement equivalent to "On all the options above, Barbican can 
optionally be used to store the (X509, private_key) or just the private_key"?
If that's what you mean then we are on the same page. "Private part of a 
certificate" is not a valid statement for me, since X509 certs don't contain 
private parts.

I'm advocating the latter, where Barbican stores the key only and we store the 
X509 in our own database.
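
A rough sketch of that flow using python-barbicanclient (the client calls are
as I understand that library; the local certificate table and its columns are
hypothetical):

    from barbicanclient import client as barbican_client

    def store_tls_material(keystone_session, private_key_pem, certificate_pem, db):
        """Keep only the private key in Barbican; keep the public X509 ourselves."""
        barbican = barbican_client.Client(session=keystone_session)

        # The private key is the only secret-worthy artifact.
        secret = barbican.secrets.create(name="lb-tls-private-key",
                                         payload=private_key_pem,
                                         payload_content_type="text/plain")
        key_ref = secret.store()  # Barbican returns a URL-style reference

        # The X509 is public data, so it lives in our own (hypothetical) table
        # next to the Barbican reference.
        db.execute("INSERT INTO lb_certificates (certificate_pem, private_key_ref) "
                   "VALUES (%s, %s)", (certificate_pem, key_ref))
        return key_ref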

I think that we either follow option 3 if SSL management is only a Neutron 
requirement (LBaaS, VPNaaS, FWaaS) and maybe as a transition project to an 
OpenStack wide solution (1 or 2).
Option 1 or 2 might be the ultimate goal.

Regards,
-Sam.





From: Clark, Robert Graham [mailto:robert.cl...@hp.comhttp://hp.com]
Sent: Thursday, May 08, 2014 12:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert 
implementation for LBaaS and VPN

The certificate management that LBaaS requires might be slightly different to 
the normal flow of things in OpenStack services, after all you are talking 
about externally provided certificates and private keys.

There’s already a standard for a nice way to bundle those two elements 
together, it’s described in PKCS#12 and you’ve likely come across it in the 
form of ‘.pfx’ files. I’d suggest that perhaps it would make sense for LBaaS to 
store pfx files in the LBaaS DB and store the key for the pfx files in 
Barbican. You could probably store them together in Barbican, using containers 
if you really wanted to 
(https://blueprints.launchpad.net/barbican/+spec/crud-endpoints-secret-container)

-Rob
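
For illustration, a sketch of the pfx bundling Rob describes, using the Python
cryptography library's PKCS#12 serializer (available in newer releases of that
library; file names and the passphrase handling are made up):

    from cryptography import x509
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.serialization import pkcs12

    key = serialization.load_pem_private_key(open("server.key", "rb").read(),
                                             password=None)
    cert = x509.load_pem_x509_certificate(open("server.crt", "rb").read())
    chain = [x509.load_pem_x509_certificate(open("intermediate.crt", "rb").read())]

    # Bundle key + cert + chain into one .pfx blob; under this scheme only the
    # passphrase would need to live in Barbican.
    pfx = pkcs12.serialize_key_and_certificates(
        name=b"lb-listener-cert",
        key=key,
        cert=cert,
        cas=chain,
        encryption_algorithm=serialization.BestAvailableEncryption(
            b"passphrase-from-barbican"),
    )
    open("listener.pfx", "wb").write(pfx)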

From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
Sent: 08 May 2014 04:30
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert 
implementation for LBaaS and VPN


On May 7, 2014, at 10:53 AM, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com wrote:

Hi John,

If the user already has an SSL certificate that was acquired outside of the 
barbican Ordering system, how can he store it securely in Barbican as a SSL 
Certificate?
The Container stored this information in a very generic way, I think that there 
should be an API that formalizes a specific way in which SSL certificates can 
be stored and read back as SSL Certificates and not as loosely coupled 
container structure.
This such API should have RBAC that allows getting back only the public part of 
an SSL certificate vs. allowing to get back all the details of the SSL 
certificate.

    Why the need for that complexity? The X509s are public by nature and don't 
need to be stored in Barbican, so there isn't really a private part of the 
certificate. The actual private key itself is what needs to be secured, so I 
would advocate that the private key is what will be stored in Barbican. I also 
think we should declare that the private key need not be an RSA key, as 
X509 supports other asymmetric encryption types, so storing it as a generic 
blob in Barbican is probably a good idea.





-Sam.



From: John Wood [mailto:john.w...@rackspace.comhttp://RACKSPACE.COM]
Sent: Thursday, May 01, 2014 11:28 PM
To: OpenStack Development Mailing List (not for usage 
questions);os.v...@gmail.commailto:os.v...@gmail.com
Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert 
implementation for LBaaS and VPN

Hello Samuel,

Just noting that the link below shows current-state Barbican. We are in the 
process of designing SSL certificate support for Barbican via blueprints such 
as this one: 
https://wiki.openstack.org/wiki/Barbican/Blueprints/ssl-certificates
We intend to discuss this feature in Atlanta to enable coding in earnest for 
Juno.

The Container resource is intended to capture/store the final certificate 
details.

Thanks,
John



From: Samuel Bercovici [samu...@radware.commailto:samu...@radware.com]
Sent: Thursday, May 01, 2014 12:50 PM
To: OpenStack Development Mailing List (not for usage questions); 
os.v...@gmail.commailto:os.v...@gmail.com
Subject: Re: [openstack-dev

Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-08 Thread Carlos Garza

On May 8, 2014, at 8:01 AM, Eugene Nikanorov 
enikano...@mirantis.commailto:enikano...@mirantis.com
 wrote:

Hi Adam,

My comments inline:

1. We shouldn't be looking at the current model and deciding which object is 
the root object, or what object to rename as a  loadbalancer... That's 
totally backwards! *We don't define which object is named the loadbalancer by 
looking for the root object -- we define which object is the root by looking 
for the object named loadbalancer.* I had hoped that was clear from the JSON 
examples in our API proposal, but I think maybe there was too much focus on the 
object model chart, where this isn't nearly as clearly communicated.

2. As I believe I have also said before, if I'm using X as a Service then I 
expect to get back an object of type X. I would be very frustrated/confused 
if, as a user, LBaaS returned me an object of type VIP when I POST a Create 
for my new load balancer. On this last point, I feel like I've said this enough 
times that I'm beating a dead horse...

I think we definitely should be looking at existing API/BBG proposal for the 
root object.
The question about whether we need additional 'Loadbalancer' resource or not is 
not a question about terminology, so (2) is not a valid argument.

    It's pretty awkward to have a REST API that doesn't have a resource 
representation of the object it's supposed to be creating and handing out. It's 
really awkward to identify a loadbalancer by VIP ID.
That's like going to a car dealership API and only being able to look up 
a car by its parking spot ID.

Do you believe Neutron/LBaaS is actually "LoadBalancerVip as a Service"? 
That would entirely explain the disconnect we are having with you.

What really matters in answering the question about 'loadbalancer' resource is 
do we need multiple L2 ports per single loadbalancer. If we do - that could be 
a justification to add it. Right now the common perception is that this is not 
needed and hence, 'loadbalancer' is not required in the API or obj model.

Are you saying that we should have a loadbalancer resource only in the case 
where we want it to span multiple L2 networks, as if it were a router? I 
don't see how you arrived at that conclusion. Can you explain further?

Thanks Carlos.


Thanks,
Eugene.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-08 Thread Carlos Garza

On May 8, 2014, at 2:45 PM, Eugene Nikanorov 
enikano...@mirantis.commailto:enikano...@mirantis.com wrote:

Hi Carlos,

Are you saying that we should have a loadbalancer resource only in the case 
where we want it to span multiple L2 networks, as if it were a router? I 
don't see how you arrived at that conclusion. Can you explain further?
No, I mean that loadbalancer instance is needed if we need several *different* 
L2 endpoints for several front ends.
That's basically 'virtual appliance' functionality that we've discussed on 
today's meeting.

   From looking at the IRC log it looks like nothing conclusive came out of the 
meeting. I don't understand a lot of the conclusions you arrive at. For example, 
you're rejecting the notion of a concrete loadbalancer object unless it's needed 
to include multi-L2-network support. Will you make an honest effort to describe 
your objections here on the ML, because if we can't resolve it here it's going 
to spill over into the summit. I certainly don't want this to dominate the summit.



Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert implementation for LBaaS and VPN

2014-05-07 Thread Carlos Garza
    I thought the requirement came from the need to ensure the backend is 
secure, i.e. people would throw a fit if they found out you're storing keys in 
SQLite or MySQL. Wasn't the purpose to find a secure repository?

On May 7, 2014, at 10:38 AM, Zang MingJie zealot0...@gmail.com wrote:

 +1 to implement a modular framework where user can choose whether to
 use barbican or sqldb
 
 On Fri, May 2, 2014 at 4:28 AM, John Wood john.w...@rackspace.com wrote:
 Hello Samuel,
 
 Just noting that the link below shows current-state Barbican. We are in the
 process of designing SSL certificate support for Barbican via blueprints
 such as this one:
 https://wiki.openstack.org/wiki/Barbican/Blueprints/ssl-certificates
 We intend to discuss this feature in Atlanta to enable coding in earnest for
 Juno.
 
 The Container resource is intended to capture/store the final certificate
 details.
 
 Thanks,
 John
 
 
 
 From: Samuel Bercovici [samu...@radware.com]
 Sent: Thursday, May 01, 2014 12:50 PM
 To: OpenStack Development Mailing List (not for usage questions);
 os.v...@gmail.com
 Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert
 implementation for LBaaS and VPN
 
 Hi Vijay,
 
 
 
 I have looked at the Barbican APIs –
 https://github.com/cloudkeep/barbican/wiki/Application-Programming-Interface
 
 I was not able to see a “native” API that will accept an SSL certificate
 (private key, public key, CSR, etc.) and will store it.
 
 We can either store the whole certificate as a single file as a secret or
 use a container and store all the certificate parts as secrets.
 
 
 
 I think that having LBaaS reference Certificates as IDs using some service
 is the right way to go so this might be achived by either:
 
 1.   Adding to Barbican and API to store / generate certificates
 
 2.   Create a new “module” that might start by being hosted in neutron
 or keystone that will allow to manage certificates and will use Barbican
 behind the scenes to store them.
 
 3.   Decide on a container structure to use in Babican but implement the
 way to access and arrange it as a neutron library
 
 
 
 Was any decision made on how to proceed?
 
 
 
 Regards,
 
-Sam.
 
 
 
 
 
 
 
 
 
 From: Vijay B [mailto:os.v...@gmail.com]
 Sent: Wednesday, April 30, 2014 3:24 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron] [LBaaS][VPN] SSL cert implementation for
 LBaaS and VPN
 
 
 
 Hi,
 
 
 
 It looks like there are areas of common effort in multiple efforts that are
 proceeding in parallel to implement SSL for LBaaS as well as VPN SSL in
 neutron.
 
 
 
 Two relevant efforts are listed below:
 
 
 
 
 
 https://review.openstack.org/#/c/74031/
 (https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL)
 
 
 
 https://review.openstack.org/#/c/58897/
 (https://blueprints.launchpad.net/openstack/?searchtext=neutron-ssl-vpn)
 
 
 
 
 
 
 
 Both VPN and LBaaS will use SSL certificates and keys, and this makes it
 better to implement SSL entities as first class citizens in the OS world.
 So, three points need to be discussed here:
 
 
 
 1. The VPN SSL implementation above is putting the SSL cert content in a
 mapping table, instead of maintaining certs separately and referencing them
 using IDs. The LBaaS implementation stores certificates in a separate table,
 but implements the necessary extensions and logic under LBaaS. We propose
 that both these implementations move away from this and refer to SSL
 entities using IDs, and that the SSL entities themselves are implemented as
 their own resources, serviced either by a core plugin or a new SSL plugin
 (assuming neutron; please also see point 3 below).
 
 
 
 2. The actual data store where the certs and keys are stored should be
 configurable at least globally, such that the SSL plugin code will
 singularly refer to that store alone when working with the SSL entities. The
 data store candidates currently are Barbican and a sql db. Each should have
 a separate backend driver, along with the required config values. If further
 evaluation of Barbican shows that it fits all SSL needs, we should make it a
 priority over a sqldb driver.
 
 
 
 3. Where should the primary entries for the SSL entities be stored? While
 the actual certs themselves will reside on Barbican or SQLdb, the entities
 themselves are currently being implemented in Neutron since they are being
 used/referenced there. However, we feel that implementing them in keystone
 would be most appropriate. We could also follow a federated model where a
 subset of keys can reside on another service such as Neutron. We are fine
 with starting an initial implementation in neutron, in a modular manner, and
 move it later to keystone.
 
 
 
 
 
 Please provide your inputs on this.
 
 
 
 
 
 Thanks,
 
 Regards,
 
 Vijay
 
 
 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert implementation for LBaaS and VPN

2014-05-07 Thread Carlos Garza

On May 7, 2014, at 10:53 AM, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com wrote:

Hi John,

If the user already has an SSL certificate that was acquired outside of the 
barbican Ordering system, how can he store it securely in Barbican as a SSL 
Certificate?
The Container stored this information in a very generic way, I think that there 
should be an API that formalizes a specific way in which SSL certificates can 
be stored and read back as SSL Certificates and not as loosely coupled 
container structure.
This such API should have RBAC that allows getting back only the public part of 
an SSL certificate vs. allowing to get back all the details of the SSL 
certificate.

    Why the need for that complexity? The X509s are public by nature and don't 
need to be stored in Barbican, so there isn't really a private part of the 
certificate. The actual private key itself is what needs to be secured, so I 
would advocate that the private key is what will be stored in Barbican. I also 
think we should declare that the private key need not be an RSA key, as 
X509 supports other asymmetric encryption types, so storing it as a generic 
blob in Barbican is probably a good idea.





-Sam.



From: John Wood [mailto:john.w...@rackspace.comhttp://RACKSPACE.COM]
Sent: Thursday, May 01, 2014 11:28 PM
To: OpenStack Development Mailing List (not for usage questions); 
os.v...@gmail.commailto:os.v...@gmail.com
Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert 
implementation for LBaaS and VPN

Hello Samuel,

Just noting that the link below shows current-state Barbican. We are in the 
process of designing SSL certificate support for Barbican via blueprints such 
as this one: 
https://wiki.openstack.org/wiki/Barbican/Blueprints/ssl-certificates
We intend to discuss this feature in Atlanta to enable coding in earnest for 
Juno.

The Container resource is intended to capture/store the final certificate 
details.

Thanks,
John



From: Samuel Bercovici [samu...@radware.commailto:samu...@radware.com]
Sent: Thursday, May 01, 2014 12:50 PM
To: OpenStack Development Mailing List (not for usage questions); 
os.v...@gmail.commailto:os.v...@gmail.com
Subject: Re: [openstack-dev] [Neutron] [LBaaS][VPN][Barbican] SSL cert 
implementation for LBaaS and VPN
Hi Vijay,

I have looked at the Barbican APIs 
–https://github.com/cloudkeep/barbican/wiki/Application-Programming-Interface
I was not able to see a “native” API that will accept an SSL certificate 
(private key, public key, CSR, etc.) and will store it.
We can either store the whole certificate as a single file as a secret or use a 
container and store all the certificate parts as secrets.

I think that having LBaaS reference Certificates as IDs using some service is 
the right way to go so this might be achived by either:
1.   Adding to Barbican and API to store / generate certificates
2.   Create a new “module” that might start by being hosted in neutron or 
keystone that will allow to manage certificates and will use Barbican behind 
the scenes to store them.
3.   Decide on a container structure to use in Babican but implement the 
way to access and arrange it as a neutron library

Was any decision made on how to proceed?

Regards,
-Sam.




From: Vijay B [mailto:os.v...@gmail.com]
Sent: Wednesday, April 30, 2014 3:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron] [LBaaS][VPN] SSL cert implementation for 
LBaaS and VPN

Hi,

It looks like there are areas of common effort in multiple efforts that are 
proceeding in parallel to implement SSL for LBaaS as well as VPN SSL in neutron.

Two relevant efforts are listed below:


https://review.openstack.org/#/c/74031/   
(https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL)

https://review.openstack.org/#/c/58897/   
(https://blueprints.launchpad.net/openstack/?searchtext=neutron-ssl-vpn)



Both VPN and LBaaS will use SSL certificates and keys, and this makes it better 
to implement SSL entities as first class citizens in the OS world. So, three 
points need to be discussed here:

1. The VPN SSL implementation above is putting the SSL cert content in a 
mapping table, instead of maintaining certs separately and referencing them 
using IDs. The LBaaS implementation stores certificates in a separate table, 
but implements the necessary extensions and logic under LBaaS. We propose that 
both these implementations move away from this and refer to SSL entities using 
IDs, and that the SSL entities themselves are implemented as their own 
resources, serviced either by a core plugin or a new SSL plugin (assuming 
neutron; please also see point 3 below).

2. The actual data store where the certs and keys are stored should be 
configurable at least globally, such that the SSL plugin code will singularly 
refer to that store alone when working with the SSL entities. The data store 
candidates 

Re: [openstack-dev] [Neutron][LBaaS]User Stories and sruvey

2014-05-01 Thread Carlos Garza
 Balukoff: I'm liking your API spec so far, but can you elaborate on what 
this loadbalancer object you refer to is? You declare it's immutable and 
refer to it like an actual primitive object, yet I don't
see a schema for it. I see a loadbalancer_id in the VIP request that references 
it. The top part of the doc declares a loadbalancer is the first object created, 
according to the definition in the glossary at
https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary#Loadbalancer, where it is 
defined as the root object that is first created or can be fully populated, but 
in your API proposal it looks like the VIP object is the created top level 
primitive, with the flavor attribute as part of a VIP. Are you intending to 
rename what we call a loadbalancer to a VIP? Could you provide a workflow of a 
created loadbalancer? It looks good either way.

Is it cool if we rename ca_certificate_id to client_ca or client_ca_certificate, 
to make it clear the purpose of the CA is to snub clients? Later on, if we need 
to do encryption to back end pool members that have X509s signed by their own 
CA, we can then use a parameter like reencryption_ca_certificate.

Consider the following cases.

The user wants SSL_ID based persistence on an HTTPS loadbalancer, where the 
loadbalancer does not know the key or cert but has access to the unencrypted 
session ID (RFC 5246, section 7.4.1.2)
to identify persistence to the back end HTTPS pool member.

On the pool side of the loadbalancer, can a loadbalancer still encrypt if no 
ca_certificate_id or client_certificate_id is present? How would they signal to 
the API that they intend to encrypt without host name validation, or even cert 
validation at all? Not sure why they would want to, other than that they don't 
feel the need to pay for certs on their backend nodes, or worse yet, pay for a 
signing cert.

The user feels secure on their network and wants SSL termination at the 
loadbalancer, so the loadbalancer has the cert and key and intends to use plain 
old HTTP to the pool members with some headers injected. Would the 
protocol on the listener be HTTPS, and would placing a cert and key imply 
decryption should happen?

Also, I've been burned in an earlier project when I started noticing some CAs 
were using ECDSA certs instead of RSA. Should we take non-RSA X509s into 
account as well? Right now it looks like the API assumes everything is RSA.
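
On that last point, a small sketch (Python cryptography library, made-up file
name) of how a driver could detect whether an uploaded certificate is RSA or
ECDSA instead of silently assuming RSA:

    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import ec, rsa

    cert = x509.load_pem_x509_certificate(open("uploaded.crt", "rb").read())
    pub = cert.public_key()

    if isinstance(pub, rsa.RSAPublicKey):
        print("RSA certificate, %d bits" % pub.key_size)
    elif isinstance(pub, ec.EllipticCurvePublicKey):
        print("ECDSA certificate on curve %s" % pub.curve.name)
    else:
        print("some other key type: %s" % type(pub).__name__)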


On May 1, 2014, at 5:35 PM, Stephen Balukoff 
sbaluk...@bluebox.netmailto:sbaluk...@bluebox.net wrote:

German,

They certainly are essential-- but as far as I can tell, we haven't been 
concentrating on them, so the list there is likely very incomplete.

Stephen


On Thu, May 1, 2014 at 1:04 PM, Eichberger, German 
german.eichber...@hp.commailto:german.eichber...@hp.com wrote:
Stephen,

I would prefer if we can vote on them, too. They are essential and I would like 
to make sure they are considered first-class citizen when it comes to use cases.

Thanks,
German

From: Stephen Balukoff 
[mailto:sbaluk...@bluebox.netmailto:sbaluk...@bluebox.net]
Sent: Thursday, May 01, 2014 12:52 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]User Stories and sruvey


Yep, I'm all for this as well!

Note: We're just talking about user use cases in this survey, correct?  
(We'll leave the operator use cases for later when we have more of a story 
and/or model to work with on how we're going to approach those, yes?)

Thanks,
Stephen

On Thu, May 1, 2014 at 11:54 AM, Jorge Miramontes 
jorge.miramon...@rackspace.commailto:jorge.miramon...@rackspace.com wrote:
That sounds good to me. The only thing I would caution is that we have 
prioritized certain requirements (like HA and SSL Termination) and I want to 
ensure we use the survey to compliment what we have already mutually agreed 
upon. Thanks for spearheading this!

Cheers,
--Jorge

From: Samuel Bercovici samu...@radware.commailto:samu...@radware.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, May 1, 2014 12:39 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][LBaaS]User Stories and sruvey

Hi Everyone!

To assist in evaluating the use cases that matter and since we now have ~45 use 
cases, I would like to propose to conduct a survey using something like 
surveymonkey.
The idea is to have a non-anonymous survey listing the use cases and ask you 
identify and vote.
Then we will publish the results and can prioritize based on this.

To do so in a timely manner, I would like to freeze the document for editing 
and allow only comments by Monday May 5th 08:00AMUTC and publish the survey 
link to ML ASAP after that.

Please let me know if this is acceptable.

Regards,
-Sam.





Re: [openstack-dev] [Neutron][LBaaS] Use Case Question

2014-05-01 Thread Carlos Garza
   Our Stingray nodes don't allow you to specify. It's just an enable or disable 
option.
On May 1, 2014, at 7:35 PM, Stephen Balukoff 
sbaluk...@bluebox.netmailto:sbaluk...@bluebox.net
 wrote:

Question for those of you using the SSL session ID for persistency: About how 
long do you typically set these sessions to persist?

Also, I think this is a cool way to handle this kind of persistence 
efficiency-- I'd never seen it done that way before, eh!

It should also almost go without saying that of course in the case where the 
SSL session is not terminated on the load balancer, you can't do anything else 
with the content (like insert X-Forwarded-For headers or do anything else that 
has to do with L7).

Stephen


On Wed, Apr 30, 2014 at 9:39 AM, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com wrote:
Hi,

As stated, this could either be handled by SSL session ID persistency or by SSL 
termination and using cookie based persistency options.
If there is no need to inspect the content hence to terminate the SSL 
connection on the load balancer for this sake, than using SSL session ID based 
persistency is obviously a much more efficient way.
The reference to source client IP changing was to negate the use of source IP 
as the stickiness algorithm.


-Sam.


From: Trevor Vardeman 
[mailto:trevor.varde...@rackspace.commailto:trevor.varde...@rackspace.com]
Sent: Thursday, April 24, 2014 7:26 PM
To: openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][LBaaS] Use Case Question

Hey,

I'm looking through the use-cases doc for review, and I'm confused about one of 
them.  I'm familiar with HTTP cookie based session persistence, but to satisfy 
secure-traffic for this case would there be decryption of content, injection of 
the cookie, and then re-encryption?  Is there another session persistence type 
that solves this issue already?  I'm copying the doc link and the use case 
specifically; not sure if the document order would change so I thought it would 
be easiest to include both :)

Use Cases:  
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis

Specific Use Case:  A project-user wants to make his secured web based 
application (HTTPS) highly available. He has n VMs deployed on the same private 
subnet/network. Each VM is installed with a web server (ex: apache) and 
content. The application requires that a transaction which has started on a 
specific VM will continue to run against the same VM. The application is also 
available to end-users via smart phones, a case in which the end user IP might 
change. The project-user wishes to represent them to the application users as a 
web application available via a single IP.

-Trevor Vardeman

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

2014-05-01 Thread Carlos Garza

On May 1, 2014, at 7:48 PM, Stephen Balukoff 
sbaluk...@bluebox.netmailto:sbaluk...@bluebox.net
 wrote:

Hi Trevor,

I was the one who wrote that use case based on discussion that came out of the 
question I wrote the list last week about SSL re-encryption:  Someone had 
stated that sometimes pool members are local, and sometimes they are hosts 
across the internet, accessible either through the usual default route, or via 
a VPN tunnel.

The point of this use case is to make the distinction that if we associate a 
neutron_subnet with the pool (rather than with the member), then some members 
of the pool that don't exist in that neutron_subnet might not be accessible 
from that neutron_subnet.  However, if the behavior of the system is such that 
attempting to reach a host through the subnet's default route still works 
(whether that leads to communication over a VPN or the usual internet routes), 
then this might not be a problem.

The other option is to associate the neutron_subnet with a pool member. But in 
this case there might be problems too. Namely:

  *   The device or software that does the load balancing may need to have an 
interface on each of the member subnets, and presumably an IP address from 
which to originate requests.
  *   How does one resolve cases where subnets have overlapping IP ranges?

In the end, it may be simpler not to associate neutron_subnet with a pool at 
all. Maybe it only makes sense to do this for a VIP, and then the assumption 
would be that any member addresses one adds to pools must be accessible from 
the VIP subnet.  (Which is easy, if the VIP exists on the same neutron_subnet. 
But this might require special routing within Neutron itself if it doesn't.)
This topology question (ie. what is feasible, what do people actually want to 
do, and what is supported by the model) is one of the more difficult ones to 
answer, especially given that users of OpenStack that I've come in contact with 
barely understand the Neutron networking model, if at all.

I would think we'd want to use a single subnet with a pool, and if the user 
specifies a pool member that's not routable there's not much we can do. Should 
we introduce the concept of routers into the pool object to bridge the 
subnets if need be? Or we leave it up to the user to add the appropriate 
host_routes on their loadbalancer's subnet, and have an interface or 
port_id (with an IP) specified on the pool object. I don't know if attaching a 
neutron port to a pool and using host_routes makes the flow any easier, but 
routing constructs in Neutron are available. I know networking, but not a whole 
lot of the Neutron perspective on it. I've yet to look over how the VPN stuff 
is handled. If the pools do happen to have IP collisions then the first match 
in the loadbalancer's subnet host_routes wins.

subnet.host_routes = [{'destination': CIDR, 'nexthop': IP address}, ...], 
according to 
https://wiki.openstack.org/wiki/Neutron/APIv2-specification#High-level_flow, 
with the IP address being the pool's neutron port on your side of the 
loadbalancer.
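
For concreteness, a sketch of setting such a route with python-neutronclient
(the subnet UUID, CIDR, and next hop are made-up values):

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username="admin", password="secret",
                                    tenant_name="demo",
                                    auth_url="http://keystone:5000/v2.0")

    # Send traffic for an off-subnet pool member CIDR through the port that
    # fronts the loadbalancer's side of the link.
    neutron.update_subnet("SUBNET-UUID", {"subnet": {"host_routes": [
        {"destination": "192.168.50.0/24", "nexthop": "10.0.0.10"},
    ]}})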




On Thu, May 1, 2014 at 1:52 PM, Trevor Vardeman 
trevor.varde...@rackspace.commailto:trevor.varde...@rackspace.com wrote:
Hello,

After going back through the use-cases to double check some of my
understanding, I realized I didn't quite understand the ones I had
already answered.  I'll use a specific use-case as an example of my
misunderstanding here, and hopefully the clarification can be easily
adapted to the rest of the use-cases that are similar.

Use Case 13:  A project-user has an HTTPS application in which some of
the back-end servers serving this application are in the same subnet,
and others are across the internet, accessible via VPN. He wants this
HTTPS application to be available to web clients via a single IP
address.

In this use-case, is the Load Balancer going to act as a node in the
VPN?  What I mean here, is the Load Balancer supposed to establish a
connection to this VPN for the client, and simulate itself as a computer
on the VPN?  If this is not the case, wouldn't the VPN have a subnet ID,
and simply be added to a pool during its creation?  If the latter is
accurate, would this not just be a basic HTTPS Load Balancer creation?
After looking through the VPNaaS API, you would provide a subnet ID to
the create VPN service request, and it establishes a VPN on said subnet.
Couldn't this be provided to the Load Balancer pool as its subnet?

Forgive me for requiring so much distinction here, but what may be clear
to the creator of this use-case, it has left me confused.  This same
type of clarity would be very helpful across many of the other
VPN-related use-cases.  Thanks again!

-Trevor
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Stephen Balukoff
Blue Box Group, LLC

Re: [openstack-dev] [Neutron][LBaaS] Use Case Question

2014-04-29 Thread Carlos Garza
   I misquoted: it should be RFC 5246, not 5264.
On Apr 25, 2014, at 2:50 AM, Carlos Garza 
carlos.ga...@rackspace.commailto:carlos.ga...@rackspace.com wrote:

Trevor is referring to our plans to use the SSL session ID of the 
ClientHello to provide session persistence.
See RFC 5264 section 7.4.1.2, which has the SSL session ID sent in the clear 
(unencrypted) so that a load balancer without the decrypting key can use it to 
make decisions on which
back end node to send the request to. Users' browsers will typically use the 
same session ID for a while between connections.

Also note this is supported in TLS 1.1 in the same section, according to 
RFC 4346, and in TLS 1.0 (RFC 2246) as well.

So we have the ability to offer HTTP cookie based persistence as you 
described only if we have the key, but if not we can also offer SSL session ID 
based persistence.
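
For reference, a rough sketch of pulling that plaintext session ID out of the
first ClientHello record without any key material (offsets follow the record
and handshake headers in the RFC; this assumes the whole ClientHello sits in
one record):

    def extract_session_id(record):
        # 5-byte record header, 4-byte handshake header, 2-byte client_version,
        # 32-byte random, then a length-prefixed session ID.
        if len(record) < 44 or record[0] != 0x16 or record[5] != 0x01:
            raise ValueError("not a TLS ClientHello record")
        sid_len = record[43]
        return record[44:44 + sid_len]  # empty if the client offered no session

    # A balancer could then hash this value to pick, and stick to, a back end:
    #   member = pool[hash(extract_session_id(first_bytes)) % len(pool)]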



On Apr 24, 2014, at 7:53 PM, Stephen Balukoff 
sbaluk...@bluebox.netmailto:sbaluk...@bluebox.net wrote:

Hi Trevor,

If the use case here requires the same client (identified by session cookie) to 
go to the same back-end, the only way to do this with HTTPS is to decrypt on 
the load balancer. Re-encryption of the HTTP request may or may not happen on 
the back-end depending on the user's needs. Again, if the client can 
potentially change IP addresses, and the session still needs to go to the same 
back-end, the only way the load balancer is going to know this is by decrypting 
the HTTPS request. I know of no other way to make this work.

Stephen


On Thu, Apr 24, 2014 at 9:25 AM, Trevor Vardeman 
trevor.varde...@rackspace.commailto:trevor.varde...@rackspace.com wrote:
Hey,

I'm looking through the use-cases doc for review, and I'm confused about one of 
them.  I'm familiar with HTTP cookie based session persistence, but to satisfy 
secure-traffic for this case would there be decryption of content, injection of 
the cookie, and then re-encryption?  Is there another session persistence type 
that solves this issue already?  I'm copying the doc link and the use case 
specifically; not sure if the document order would change so I thought it would 
be easiest to include both :)

Use Cases:  
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis

Specific Use Case:  A project-user wants to make his secured web based 
application (HTTPS) highly available. He has n VMs deployed on the same private 
subnet/network. Each VM is installed with a web server (ex: apache) and 
content. The application requires that a transaction which has started on a 
specific VM will continue to run against the same VM. The application is also 
available to end-users via smart phones, a case in which the end user IP might 
change. The project-user wishes to represent them to the application users as a 
web application available via a single IP.

-Trevor Vardeman

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal

2014-04-29 Thread Carlos Garza

On Apr 27, 2014, at 8:35 AM, Eugene Nikanorov 
enikano...@mirantis.commailto:enikano...@mirantis.com wrote:

On Fri, Apr 25, 2014 at 4:03 AM, Eugene Nikanorov 
enikano...@mirantis.commailto:enikano...@mirantis.com wrote:

Speaking of SSL - we have a few few project-wise issues such as lack of secure 
storage, lack of secure messaging, requirement to have opensource impl of SSL 
API (which yet to be added).
There's also a patch on review that is worth looking at, because someone has 
already put efforts into both design and code.
And before throwing away all those efforts we need to be sure it completely not 
suitable with the rest of API.


    Where are these proposed patches you're referring to? I don't see them in 
openstack-neutron, in neutron-specs, or in the Icehouse code on GitHub. The 
closest thing I find is Radware mentioning it as a part of the restClient driver.




3. L7 is also a separate work, it will not be accounted in 'API improvement' 
blueprint. You can sync with Samuel for this as we already have pretty detailed 
blueprints on that.

Please provide an example of how multiple pools will actually be used without 
L7.
As said above, the bp targets just the baseline for L7: removing 'root object' 
role from the Pool, to actually allow multiple pools in configuration when L7 
rules are added.


And again: Let's move the discussion off the mailing list where it's been 
happening over to the system where apparently others have been pressing forward 
without the obvious knowledge or consent of the people having the discussion. 
Oh, and by the way, because we've already written code here, a lot of what you 
propose is tacitly no longer up for discussion.

4. Attribute differences in REST resources.
This falls into two categories:
- existing attributes that should belong to one or another resource,

The devil's in the details here. I was very specific about which attributes 
belong to which objects because getting this wrong has serious implications for 
use cases. (For example, if neutron_subnet is an attribute of a pool, then 
this implies all members of the pool must be in the same neutron_subnet. And 
this breaks the use case that was described on the mailing list during the SSL 
re-encryption discussion where some members are in the same subnet, and some 
are across the internet on the other side of a VPN or otherwise.)
Right. I should have said it's a work item for me - to look closely through 
your doc and address the differences. Also, that is exactly the kind of work 
item where gerrit helps a lot.


You also need to define which attributes are immutable after creation of the 
object, and which can be changed with later updates (and how). This also has 
implications for use cases.
Well, this is usually defined in the code, I can add this to the spec as well. 
I hoped to avoid duplicate work.



If your proposal doesn't define this level of detail, then your proposal isn't 
addressing the use cases and is therefore incomplete.
I'm working on that.


- attributes that should be added (e.g. they didn't exist in current API)

Right. As I said last week, my intention was to try to address as many concerns 
and feature requests from the mailing list discussions, wiki pages, and google 
documents / spreadsheets as possible. I was hoping to take a stab at HA as 
well, but the result of that discussion so far is that we're nowhere near 
having any idea how to accomplish this in a way that is going to be generically 
possible for all operators.

I mean: You all do realize that if there's a key missing feature that one 
operator absolutely needs in order to continue their business operations, that 
isn't addressed in our proposals... then that operator is NOT going to use our 
product, right?

And that doesn't mean you need to offer all features out of the gate-- but you 
ought to have put some thought into how you're going to do it when the time 
comes. This proposal is trying to be that plan for how we're eventually going 
to do it. It will, of course, be developed incrementally.
So, what are we arguing about? I'm just doing the small part that fixes 
relationship issue with the obj model.


It is true that I haven't had the time to fill out all the use cases we thought 
about, or that became apparent from mailing list discussions as we were writing 
our API revision proposal--  our main drive was to get the proposal out the 
door. My plan was (and still is) to back-fill these use cases in the right 
place (which I'm no longer sure is the google doc that Samuel created, or the 
gerrit system) once we got the API proposal out. So I do apologize that I'm 
making reference to stuff that has been considered but not shared thus far and 
realize that not having it in the shared document weakens my position. I had to 
sleep.

The first class is better to be addressed in the blueprint review. The second 
class could be a small action items/blueprints or even bugs.
Example:
  1) custom_503 - that attribute 

Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal

2014-04-29 Thread Carlos Garza
   This blueprint was marked abandoned.

On Apr 29, 2014, at 3:55 PM, Vijay B 
os.v...@gmail.commailto:os.v...@gmail.com
 wrote:

Hi Sam, Evgeny,

I've reviewed https://review.openstack.org/#/c/74031 with my comments. I am not 
sure if there is a request with code newer than this link - please do let me 
know if that is the case.

Thanks,
Regards,
Vijay


On Mon, Apr 28, 2014 at 2:12 PM, Eichberger, German 
german.eichber...@hp.commailto:german.eichber...@hp.com wrote:
Sam,

The use cases where pretty complete the last time I checked so let's move them 
to gerrit so we can all vote.

Echoing Kyle I would love to see us focusing on getting things ready for the 
summit.

German

-Original Message-
From: Samuel Bercovici [mailto:samu...@radware.commailto:samu...@radware.com]
Sent: Monday, April 28, 2014 11:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal

Hi,

I was just working to push the use cases into the new format .rst but I agree 
that using google doc would be more intuitive.
Let me know what you prefer to do with the use cases document:
1. leave it at google docs at - 
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?pli=1
2. Move it to the new format under - 
http://git.openstack.org/cgit/openstack/neutron-specs, I have already files a 
blue print https://blueprints.launchpad.net/neutron/+spec/lbaas-use-cases and 
can complete the .rst process by tomorrow.

Regards,
-Sam.






-Original Message-
From: Kyle Mestery 
[mailto:mest...@noironetworks.com]
Sent: Monday, April 28, 2014 4:33 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal

Folks, sorry for the top post here, but I wanted to make sure to gather 
people's attention in this thread.

I'm very happy to see all the passion around LBaaS in Neutron for this cycle. 
As I've told a few people, seeing all the interest from operators and providers 
is fantastic, as it gives us valuable input from that side of things before we 
embark on designing and coding.
I've also attended the last few LBaaS IRC meetings, and I've been catching up 
on the LBaaS documents and emails. There is a lot of great work and passion by 
many people. However, the downside of what I've seen is that there is a logjam 
around progress here. Given we're two weeks out from the Summit, I'm going to 
start running the LBaaS meetings with Eugene to try and help provide some focus 
there.
Hopefully we can use this week and next week's meetings to drive to a 
consistent Summit agenda and lay the groundwork for LBaaS in Juno and beyond.

Also, while our new neutron-specs BP repository has been great so far for 
developers, based on feedback from operators, it may not be ideal for those who 
are not used to contributing using gerrit. I don't want to lose the voice of 
those people, so I'm pondering what to do. This is really affecting the LBaaS 
discussion at the moment. I'm thinking that we should ideally try to use Google 
Docs for these initial discussions and then move the result of that into a BP 
on neutron-specs. What do people think of that?

If we go down this path, we need to decide on a single Google Doc for people to 
collaborate on. I don't want to put Stephen on the spot, but his document may 
be a good starting point.

I'd like to hear what others think on this plan as well.

Thanks,
Kyle


On Sun, Apr 27, 2014 at 6:06 PM, Eugene Nikanorov 
enikano...@mirantis.com wrote:
 Hi,


 You knew from the action items that came out of the IRC meeting of
 April
 17 that my team would be working on an API revision proposal. You
 also knew that this proposal was to be accompanied by an object model
 diagram and glossary, in order to clear up confusion. You were in
 that meeting, you saw the action items being created. Heck, you even
 added the to prepare API for SSL and L7 directive for my team yourself!

 The implied but not stated assumption about this work was that it
 would be fairly evaluated once done, and that we would be given a short 
 window (ie.
 about a week) in which to fully prepare and state our proposal.

 Your actions, though, were apparently to produce your own version of
 the same in blueprint form without notifying anyone in the group that
 you were going to be doing this, let alone my team. How could you
 have given my API proposal a fair shake prior to publishing your
 blueprint, if both came out on the same day? (In fact, I'm lead to
 believe that you and other Neutron LBaaS developers hadn't even
 looked at my proposal before the meeting on 4/24, where y'all started
 determining product direction, apparently by
 edict.)


 Therefore, looking honestly at your actions on this and trying to
 give you the benefit of the doubt, I still must assume that you 

Re: [openstack-dev] [Neutron][LBaaS] Use Case Question

2014-04-25 Thread Carlos Garza
Trevor is referring to our plans to use the SSL session ID of the ClientHello
to provide session persistence.
See RFC 5246 section 7.4.1.2: the SSL session ID is sent in the clear
(unencrypted), so a load balancer without the decrypting key can still use it
to decide which back-end node to send the request to. Users' browsers will
typically reuse the same session ID across connections for a while.

Also note this is supported in TLS 1.1 as well (RFC 4346, same section), and in
TLS 1.0 (RFC 2246).

So we can offer HTTP cookie based persistence as you described only if we
have the key, but if not we can still offer SSL session ID based persistence.
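
(For illustration only, and not part of any proposal: a rough Python 3 sketch
of how a pass-through load balancer could pull the session ID out of a
ClientHello to use as a persistence key. Offsets assume a single, unfragmented
handshake record.)

    def client_hello_session_id(record):
        # record is a bytes object holding the raw TLS record from the client.
        # record layer header: type(1) + version(2) + length(2); 0x16 = handshake
        if len(record) < 44 or record[0] != 0x16:
            raise ValueError("not a TLS handshake record")
        # handshake header: type(1) + length(3); 0x01 = ClientHello
        if record[5] != 0x01:
            raise ValueError("not a ClientHello")
        # client_version(2) + random(32) precede the session_id length byte
        sid_len = record[43]
        if sid_len > 32 or len(record) < 44 + sid_len:
            raise ValueError("malformed session_id field")
        # an empty result means the client is not resuming an earlier session,
        # so the balancer would fall back to something like source IP hashing
        return record[44:44 + sid_len]

A balancer could hash that value to pick a back-end node without ever holding
the key.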



On Apr 24, 2014, at 7:53 PM, Stephen Balukoff 
sbaluk...@bluebox.net wrote:

Hi Trevor,

If the use case here requires the same client (identified by session cookie) to 
go to the same back-end, the only way to do this with HTTPS is to decrypt on 
the load balancer. Re-encryption of the HTTP request may or may not happen on 
the back-end depending on the user's needs. Again, if the client can 
potentially change IP addresses, and the session still needs to go to the same 
back-end, the only way the load balancer is going to know this is by decrypting 
the HTTPS request. I know of no other way to make this work.

Stephen


On Thu, Apr 24, 2014 at 9:25 AM, Trevor Vardeman 
trevor.varde...@rackspace.com wrote:
Hey,

I'm looking through the use-cases doc for review, and I'm confused about one of 
them.  I'm familiar with HTTP cookie based session persistence, but to satisfy 
secure-traffic for this case would there be decryption of content, injection of 
the cookie, and then re-encryption?  Is there another session persistence type 
that solves this issue already?  I'm copying the doc link and the use case 
specifically; not sure if the document order would change so I thought it would 
be easiest to include both :)

Use Cases:  
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis

Specific Use Case:  A project-user wants to make his secured web based 
application (HTTPS) highly available. He has n VMs deployed on the same private 
subnet/network. Each VM is installed with a web server (ex: apache) and 
content. The application requires that a transaction which has started on a 
specific VM will continue to run against the same VM. The application is also 
available to end-users via smart phones, a case in which the end user IP might 
change. The project-user wishes to represent them to the application users as a 
web application available via a single IP.

-Trevor Vardeman

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-21 Thread Carlos Garza

On Apr 21, 2014, at 1:51 PM, Eichberger, German 
german.eichber...@hp.com
 wrote:

Hi,

Despite there being some good use cases for re-encryption, I think it’s out of
scope for a load balancer. We can defer that functionality to the VPN – as long
as we have a mechanism to insert a load balancer as a VPN node we should get
all kinds of encryption infrastructure “for free”.

   I think the feature should be a part of the API, but it should be up to the
vendor to implement the feature or not, since some vendors can't.
Plus an end user might not be able to append a VPN tunnel on the tail of the
load balancer.

I like the Unix philosophy of little programs doing one task very well and can 
be chained. So in our case we might want to chain a firewall to a load balancer 
to a VPN to get the functionality we want.

   I like that philosophy as well, but must admit that the chains do break when
versions or interactions of these components change. GNU's Autotools, for
example, is a nightmare compared to Maven for Java. Even changes to simpler
tools like sort and tail have broken tools I used to use. Monolithic tools like
emacs likewise seem to be doing fairly well.

    I get the impression that the simple chained-tool philosophy came from the
era where individual programs had to be small enough to fit in memory and data
would be spooled to tape as the intermediary pipe. Still a nice idea for
admins, though.

Thoughts?

German

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, April 18, 2014 9:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario 
question

Hi y'all!

Carlos: When I say 'client cert' I'm talking about the certificate / key 
combination the load balancer will be using to initiate the SSL connection to 
the back-end server. The implication here is that if the back-end server 
doesn't like the client cert, it will reject the connection (as being not from 
a trusted source). By 'CA cert' I'm talking about the certificate (sans key) 
that the load balancer will be using to authenticate the back-end server. If 
the back-end server's server certificate isn't signed by the CA, then the 
load balancer should reject the connection.

Of course, the use of a client cert or CA cert on the load balancer should be 
optional: As Clint pointed out, for some users, just using SSL without doing 
any particular authentication (on either the part of the load balancer or 
back-end) is going to be good enough.

Anyway, the case for supporting re-encryption on the load-balancers has been 
solidly made, and the API proposal we're making will reflect this capability. 
Next question:

When specific client certs / CAs are used for re-encryption, should these be 
associated with the pool or member?

I could see an argument for either case:

Pool (ie. one client cert / CA cert will be used for all members in a pool):
* Consistency of back-end nodes within a pool is probably both extremely 
common, and a best practice. It's likely all will be accessed the same way.
* Less flexible than certs associated with members, but also less complicated 
config.
* For CA certs, assumes user knows how to manage their own PKI using a CA.

Member (ie. load balancer will potentially use a different client cert / CA 
cert for each member individually):
* Customers will sometimes run with inconsistent back-end nodes (eg. local 
nodes in a pool treated differently than remote nodes in a pool).
* More flexible than certs associated with members, more complicated 
configuration.
* If back-end certs are all individually self-signed (ie. no single CA used for 
all nodes), then certs must be associated with members.

What are people seeing in the wild? Are your users using 
inconsistently-signed or per-node self-signed certs in a single pool?

Thanks,
Stephen




On Fri, Apr 18, 2014 at 5:56 PM, Carlos Garza 
carlos.ga...@rackspace.com wrote:

On Apr 18, 2014, at 12:36 PM, Stephen Balukoff 
sbaluk...@bluebox.net wrote:


Dang.  I was hoping this wasn't the case.  (I personally think it's a little 
silly not to trust your service provider to secure a network when they have 
root access to all the machines powering your cloud... but I digress.)

Part of the reason I was hoping this wasn't the case, isn't just because it 
consumes a lot more CPU on the load balancers, but because now we potentially 
have to manage client certificates and CA certificates (for authenticating from 
the proxy to back-end app servers). And we also have to decide whether we allow 
the proxy to use a different client cert / CA per pool, or per member.

   If you choose to support re-encryption on your service then you are free to
charge for the extra CPU cycles. I'm not convinced re-encryption and SSL
termination in general need to be mandatory, but I think the API

Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-20 Thread Carlos Garza

On Apr 18, 2014, at 11:06 PM, Stephen Balukoff 
sbaluk...@bluebox.netmailto:sbaluk...@bluebox.net
 wrote:

Hi y'all!

Carlos: When I say 'client cert' I'm talking about the certificate / key 
combination the load balancer will be using to initiate the SSL connection to 
the back-end server. The implication here is that if the back-end server 
doesn't like the client cert, it will reject the connection (as being not from 
a trusted source). By 'CA cert' I'm talking about the certificate (sans key) 
that the load balancer will be using to authenticate the back-end server. If 
the back-end server's server certificate isn't signed by the CA, then the 
load balancer should reject the connection.

I see no problem with server auth as well as client auth making its way 
into the API.



Of course, the use of a client cert or CA cert on the load balancer should be 
optional: As Clint pointed out, for some users, just using SSL without doing 
any particular authentication (on either the part of the load balancer or 
back-end) is going to be good enough.

It should be optional for the API implementers to support it or not. This is
an advanced feature which would lock out many vendors if they can't support it.


Anyway, the case for supporting re-encryption on the load-balancers has been 
solidly made, and the API proposal we're making will reflect this capability. 
Next question:

When specific client certs / CAs are used for re-encryption, should these be 
associated with the pool or member?

I could see an argument for either case:

Pool (ie. one client cert / CA cert will be used for all members in a pool):
* Consistency of back-end nodes within a pool is probably both extremely 
common, and a best practice. It's likely all will be accessed the same way.
* Less flexible than certs associated with members, but also less complicated 
config.
* For CA certs, assumes user knows how to manage their own PKI using a CA.

Member (ie. load balancer will potentially use a different client cert / CA 
cert for each member individually):
* Customers will sometimes run with inconsistent back-end nodes (eg. local 
nodes in a pool treated differently than remote nodes in a pool).
* More flexible than certs associated with members, more complicated 
configuration.
* If back-end certs are all individually self-signed (ie. no single CA used for 
all nodes), then certs must be associated with members.


I'm not invested in an argument this far in.


What are people seeing in the wild? Are your users using 
inconsistently-signed or per-node self-signed certs in a single pool?
Thanks,
Stephen





On Fri, Apr 18, 2014 at 5:56 PM, Carlos Garza 
carlos.ga...@rackspace.com wrote:

On Apr 18, 2014, at 12:36 PM, Stephen Balukoff 
sbaluk...@bluebox.net wrote:

Dang.  I was hoping this wasn't the case.  (I personally think it's a little 
silly not to trust your service provider to secure a network when they have 
root access to all the machines powering your cloud... but I digress.)

Part of the reason I was hoping this wasn't the case, isn't just because it 
consumes a lot more CPU on the load balancers, but because now we potentially 
have to manage client certificates and CA certificates (for authenticating from 
the proxy to back-end app servers). And we also have to decide whether we allow 
the proxy to use a different client cert / CA per pool, or per member.

   If you choose to support re-encryption on your service then you are free to
charge for the extra CPU cycles. I'm not convinced re-encryption and SSL
termination in general need to be mandatory, but I think the API should allow
them to be specified.

Yes, I realize one could potentially use no client cert or CA (ie. encryption 
but no auth)...  but that actually provides almost no extra security over the 
unencrypted case:  If you can sniff the traffic between proxy and back-end 
server, it's not much more of a stretch to assume you can figure out how to be 
a man-in-the-middle.

Yes, but considering you have no problem advocating pure SSL termination for
your customers (decryption on the front end and plain text on the back end),
I'm actually surprised this disturbs you. I would recommend users use straight
SSL passthrough or re-encryption, but I wouldn't force this on them should they
choose naked encryption with no checking.


Do any of you have a use case where some back-end members require SSL 
authentication from the proxy and some don't? (Again, deciding whether client 
cert / CA usage should attach to a pool or to a member.)

When you say client cert, are you referring to the end user's X509 certificate
(to be rejected by the back-end server), or are you referring to the back-end
server's X509 certificate, which the load balancer would reject if it
discovered the back-end server had a bad signature or mismatched key? I am
speaking of the case where the user wants re-encryption but wants to be able

Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-18 Thread Carlos Garza

On Apr 17, 2014, at 8:39 PM, Stephen Balukoff 
sbaluk...@bluebox.net
 wrote:

Hello German and Brandon!

Responses in-line:


On Thu, Apr 17, 2014 at 3:46 PM, Brandon Logan 
brandon.lo...@rackspace.com wrote:
Stephen,
I have responded to your questions below.


On 04/17/2014 01:02 PM, Stephen Balukoff wrote:
Howdy folks!

Based on this morning's IRC meeting, it seems to me there's some contention and 
confusion over the need for single call functionality for load balanced 
services in the new API being discussed. This is what I understand:

* Those advocating single call are arguing that this simplifies the API for 
users, and that it more closely reflects the users' experience with other load 
balancing products. They don't want to see this functionality necessarily 
delegated to an orchestration layer (Heat), because coordinating how this works 
across two OpenStack projects is unlikely to see success (ie. it's hard enough 
making progress with just one project). I get the impression that people 
advocating for this feel that their current users would not likely make the 
leap to Neutron LBaaS unless some kind of functionality or workflow is 
preserved that is no more complicated than what they currently have to do.
Another reason, which I've mentioned many times before and keeps getting 
ignored, is because the more primitives you add the longer it will take to 
provision a load balancer.  Even if we relied on the orchestration layer to 
build out all the primitives, it still will take much more time to provision a 
load balancer than a single create call provided by the API.  Each request and 
response has an inherent time to process.  Many primitives will also have an 
inherent build time.  Combine this in an environment that becomes more and more 
dense, build times will become very unfriendly to end users whether they are 
using the API directly, going through a UI, or going through an orchestration 
layer.  This industry is always trying to improve build/provisioning times and 
there are no reasons why we shouldn't try to achieve the same goal.

Noted.

* Those (mostly) against the idea are interested in seeing the API provide 
primitives and delegating higher level single-call stuff to Heat or some 
other orchestration layer. There was also the implication that if single-call 
is supported, it ought to support both simple and advanced set-ups in that 
single call. Further, I sense concern that if there are multiple ways to 
accomplish the same thing supported in the API, this redundancy breeds 
complication as more features are added, and in developing test coverage. And 
existing Neutron APIs tend to expose only primitives. I get the impression that 
people against the idea could be convinced if more compelling reasons were 
illustrated for supporting single-call, perhaps other than we don't want to 
change the way it's done in our environment right now.
I completely disagree with we don't want to change the way it's done in our 
environment right now.  Our proposal has changed the way our current API works 
right now.  We do not have the notion of primitives in our current API and our 
proposal included the ability to construct a load balancer with primitives 
individually.  We kept that in so that those operators and users who do like 
constructing a load balancer that way can continue doing so.  What we are 
asking for is to keep our users happy when we do deploy this in a production 
environment and maintain a single create load balancer API call.


There's certainly something to be said for having a less-disruptive user 
experience. And after all, what we've been discussing is so radical a change 
that it's close to starting over from scratch in many ways.


It's not disruptive. There is nothing preventing them from continuing to use
the multiple primitive operations philosophy, so they can continue that
approach.


I've mostly stayed out of this debate because our solution as used by our 
customers presently isn't single-call and I don't really understand the 
requirements around this.

So! I would love it if some of you could fill me in on this, especially since 
I'm working on a revision of the proposed API. Specifically, what I'm looking 
for is answers to the following questions:

1. Could you please explain what you understand single-call API functionality 
to be?
Single-call API functionality is a call that supports adding multiple features 
to an entity (load balancer in this case) in one API request.  Whether this 
supports all features of a load balancer or a subset is up for debate.  I 
prefer all features to be supported.  Yes it adds complexity, but complexity is 
always introduced by improving the end user experience and I hope a good user 
experience is a goal.

Got it. I think we all want to improve the user experience.

2. Could you describe the simplest use case that uses single-call API in your 

Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Carlos Garza

On Apr 18, 2014, at 10:21 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 Howdy, folks!
 
 Could someone explain to me the SSL usage scenario where it makes sense to 
 re-encrypt traffic destined for members of a back-end pool?  SSL 
 termination on the load balancer makes sense to me, but I'm having trouble 
 understanding why one would be concerned about then re-encrypting the traffic 
 headed toward a back-end app server. (Why not just use straight TCP load 
 balancing in this case, and save the CPU cycles on the load balancer?)
 

1. Some customers want their servers to be external to our data centers: for
example, the load balancer is in Chicago with Rackspace hosting the load
balancers and the back-end pool members being on Amazon AWS servers. (We don't
know why they would do this, but a lot are doing it.) They can't simply audit
the links between AWS and our data centers for PCI compliance, with lots of
backbones being crossed, so they just want encryption to their back-end pool
members. Also take note that Amazon has chosen to support encryption:
http://aws.amazon.com/about-aws/whats-new/2011/10/04/amazon-s3-announces-server-side-encryption-support/
They've had it for a while now, and for whatever reason a lot of customers are
now demanding it from us as well.

I agree they could simply use HTTPS load balancing, but they seem to think
providers that don't offer encryption are inferior feature-wise.

2. Users on providers that are incapable of one-armed-with-source-NAT load
balancing (see link below) are at the mercy of using X-Forwarded-For style
headers to determine the original source of a connection (a must if they want
to know where abusive connections are coming from). Under traditional NAT
routing the source IP will always be the load balancer's IP, so X-Forwarded-For
has been the traditional method of showing the server this (this applies to
HTTP load balancing as well). But in the case of SSL the load balancer, unless
it is decrypting traffic, won't be able to inject these headers. And when the
pool members are on an external network it is prudent to allow for encryption;
this pretty much forces them to use a trusted load balancer as a man in the
middle to decrypt, add X-Forwarded-For, then re-encrypt to the back end.

http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_One_Arm_Mode_with_Source_NAT_on_the_Cisco_Application_Control_Engine_Configuration_Example
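
(To make the decrypt / inject / re-encrypt hop concrete, here is a minimal
Python sketch of the back-end half of that flow. It assumes the front-end TLS
has already been terminated and the request parsed; the member host, port, and
CA file parameters are placeholders for this example, not anything from the
proposals.)

    import socket
    import ssl

    def send_to_member(request, client_ip, member_host, member_port, ca_file):
        # add the original client address before re-encrypting toward the member
        head, _, body = request.partition(b"\r\n\r\n")
        head += b"\r\nX-Forwarded-For: " + client_ip.encode()
        # trust anchor for the member's certificate on the re-encrypted hop
        ctx = ssl.create_default_context(cafile=ca_file)
        with socket.create_connection((member_host, member_port)) as raw:
            with ctx.wrap_socket(raw, server_hostname=member_host) as tls:
                tls.sendall(head + b"\r\n\r\n" + body)
                return tls.recv(65536)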


3. Unless I'm mistaken, it looks like encryption was already a part of the API
or was accepted as a requirement for Neutron LBaaS.
https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL#Current_design
Is this document still valid?

4. We also assumed we were expected to support the use cases described in
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?pli=1
where case 7 specifically is asking for Re-encryption.


 We terminate a lot of SSL connections on our load balancers, but have yet to 
 have a customer use this kind of functionality.  (We've had a few ask about 
 it, usually because they didn't understand what a load balancer is supposed 
 to do-- and with a bit of explanation they went either with SSL termination 
 on the load balancer + clear text on the back-end, or just straight TCP load 
 balancing.)

We terminate a lot of SSL connections on our load balancers as well, and we
get a lot of pressure for this kind of functionality. I think you have no
customers using that functionality because you are unable to offer it, which is
the case for us as well. But due to a significant amount of pressure we have a
solution already ready and waiting for testing on our CLB 1.0 offering.

We wish this were the case for us, that only a few users are requesting
this feature, but we have customers that really do want their back-end pool
members on a separate non-secure network, or worse, want this as a more
advanced form of HTTPS passthrough (TCP load balancing as you're describing it).

Providers may be able to secure their load balancers, but they may not always
be able to secure the connections in front of and behind them. Users who still
want end-to-end encrypted connectivity, but want the load balancer to be
capable of making intelligent decisions (requiring decryption at the load
balancer) as well as injecting useful headers going to the back-end pool
member, still need re-encryption functionality.

 When your customers do straight TCP load balancing, are you noticing you
can only offer IP-based session persistence at that point? If you only allow
IP-based persistence, customers that share a NAT router will all hit the same
node every time. We have lots of customers behind corporate NAT routers and
they notice very quickly that hundreds of clients are all being shoved onto one
back-end pool member. They as of now only have the option to turn off session
persistence, but that breaks applications that require locally maintained
sessions. We could offer TLS

Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Carlos Garza

On Apr 18, 2014, at 12:59 PM, Vijay Venkatachalam 
vijay.venkatacha...@citrix.com wrote:


There is no reasoning mentioned in AWS, but they do allow re-encryption.


Is there also no reason to mention:

F5's BIG-IP load balancers
http://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm_implementation/sol_http_ssl.html
A10 load balancers
http://www.a10networks.com/resources/files/CS-Earth_Class_Mail.pdf
NetScaler
http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-ssl-offloading-end-to-end-encypt-tsk.html
Finally, Stingray: https://splash.riverbed.com/thread/5473

  All big players in load balancing, which would be the reasoning.

http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-backend-auth.html

For reasons I don’t understand, the workflow allows configuring back-end server
certificates to be trusted, but it doesn’t accept client certificates or CA
certificates.

Thanks,
Vijay V.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, April 18, 2014 11:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario 
question

Dang.  I was hoping this wasn't the case.  (I personally think it's a little 
silly not to trust your service provider to secure a network when they have 
root access to all the machines powering your cloud... but I digress.)

Part of the reason I was hoping this wasn't the case, isn't just because it 
consumes a lot more CPU on the load balancers, but because now we potentially 
have to manage client certificates and CA certificates (for authenticating from 
the proxy to back-end app servers). And we also have to decide whether we allow 
the proxy to use a different client cert / CA per pool, or per member.

Yes, I realize one could potentially use no client cert or CA (ie. encryption 
but no auth)...  but that actually provides almost no extra security over the 
unencrypted case:  If you can sniff the traffic between proxy and back-end 
server, it's not much more of a stretch to assume you can figure out how to be 
a man-in-the-middle.

Do any of you have a use case where some back-end members require SSL 
authentication from the proxy and some don't? (Again, deciding whether client 
cert / CA usage should attach to a pool or to a member.)

It's a bit of a rabbit hole, eh.

Stephen


On Fri, Apr 18, 2014 at 10:21 AM, Eichberger, German 
german.eichber...@hp.com wrote:
Hi Stephen,

The use case is that the Load Balancer needs to look at the HTTP requests be it 
to add an X-Forward field or change the timeout – but the network between the 
load balancer and the nodes is not completely private and the sensitive 
information needs to be again transmitted encrypted. This is admittedly an edge 
case but we had to implement a similar scheme for HP Cloud’s swift storage.

German

From: Stephen Balukoff 
[mailto:sbaluk...@bluebox.net]
Sent: Friday, April 18, 2014 8:22 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

Howdy, folks!

Could someone explain to me the SSL usage scenario where it makes sense to 
re-encrypt traffic destined for members of a back-end pool?  SSL 
termination on the load balancer makes sense to me, but I'm having trouble 
understanding why one would be concerned about then re-encrypting the traffic 
headed toward a back-end app server. (Why not just use straight TCP load 
balancing in this case, and save the CPU cycles on the load balancer?)

We terminate a lot of SSL connections on our load balancers, but have yet to 
have a customer use this kind of functionality.  (We've had a few ask about it, 
usually because they didn't understand what a load balancer is supposed to do-- 
and with a bit of explanation they went either with SSL termination on the load 
balancer + clear text on the back-end, or just straight TCP load balancing.)

Thanks,
Stephen


--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-18 Thread Carlos Garza

On Apr 18, 2014, at 12:36 PM, Stephen Balukoff 
sbaluk...@bluebox.net wrote:

Dang.  I was hoping this wasn't the case.  (I personally think it's a little 
silly not to trust your service provider to secure a network when they have 
root access to all the machines powering your cloud... but I digress.)

Part of the reason I was hoping this wasn't the case, isn't just because it 
consumes a lot more CPU on the load balancers, but because now we potentially 
have to manage client certificates and CA certificates (for authenticating from 
the proxy to back-end app servers). And we also have to decide whether we allow 
the proxy to use a different client cert / CA per pool, or per member.

   If you choose to support re-encryption on your service then you are free to
charge for the extra CPU cycles. I'm not convinced re-encryption and SSL
termination in general need to be mandatory, but I think the API should allow
them to be specified.

Yes, I realize one could potentially use no client cert or CA (ie. encryption 
but no auth)...  but that actually provides almost no extra security over the 
unencrypted case:  If you can sniff the traffic between proxy and back-end 
server, it's not much more of a stretch to assume you can figure out how to be 
a man-in-the-middle.

Yes, but considering you have no problem advocating pure SSL termination for
your customers (decryption on the front end and plain text on the back end),
I'm actually surprised this disturbs you. I would recommend users use straight
SSL passthrough or re-encryption, but I wouldn't force this on them should they
choose naked encryption with no checking.


Do any of you have a use case where some back-end members require SSL 
authentication from the proxy and some don't? (Again, deciding whether client 
cert / CA usage should attach to a pool or to a member.)

When you say client cert, are you referring to the end user's X509 certificate
(to be rejected by the back-end server), or are you referring to the back-end
server's X509 certificate, which the load balancer would reject if it
discovered the back-end server had a bad signature or mismatched key? I am
speaking of the case where the user wants re-encryption but wants to be able
to install CA certificates that sign the back-end servers' keys via PKIX path
building. I would even like to offer the customer the ability to skip hostname
validation, since not everyone wants to expose DNS entries for IPs that are not
publicly routable anyway. Unless you're suggesting that we should force this on
the user, which likewise forces us to host a name server that maps hosts to the
X509 subject CN fields. Users should be free to validate back-end hostnames,
just the subject name and key, or do no validation at all. It should be up to
them.
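
(Purely as an illustration of those three levels -- full hostname validation,
chain/key validation only, or none -- something like the following could sit
behind a per-pool option; the parameter names are made up for this sketch and
are not part of any proposal.)

    import ssl

    def member_context(ca_file=None, verify_hostname=False):
        if ca_file is None:
            # "no validation at all": encrypt the hop but accept any member cert
            return ssl.SSLContext(ssl.PROTOCOL_TLS)
        # PKIX path building against the CA cert uploaded for this pool
        ctx = ssl.create_default_context(cafile=ca_file)
        # hostname checking stays a user choice; signature/key checks remain on
        ctx.check_hostname = verify_hostname
        return ctx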




It's a bit of a rabbit hole, eh.
Stephen



On Fri, Apr 18, 2014 at 10:21 AM, Eichberger, German 
german.eichber...@hp.com wrote:
Hi Stephen,

The use case is that the Load Balancer needs to look at the HTTP requests be it 
to add an X-Forward field or change the timeout – but the network between the 
load balancer and the nodes is not completely private and the sensitive 
information needs to be again transmitted encrypted. This is admittedly an edge 
case but we had to implement a similar scheme for HP Cloud’s swift storage.

German

From: Stephen Balukoff 
[mailto:sbaluk...@bluebox.net]
Sent: Friday, April 18, 2014 8:22 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question


Howdy, folks!

Could someone explain to me the SSL usage scenario where it makes sense to 
re-encrypt traffic destined for members of a back-end pool?  SSL 
termination on the load balancer makes sense to me, but I'm having trouble 
understanding why one would be concerned about then re-encrypting the traffic 
headed toward a back-end app server. (Why not just use straight TCP load 
balancing in this case, and save the CPU cycles on the load balancer?)

We terminate a lot of SSL connections on our load balancers, but have yet to 
have a customer use this kind of functionality.  (We've had a few ask about it, 
usually because they didn't understand what a load balancer is supposed to do-- 
and with a bit of explanation they went either with SSL termination on the load 
balancer + clear text on the back-end, or just straight TCP load balancing.)

Thanks,
Stephen


--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list

Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-17 Thread Carlos Garza

On Apr 17, 2014, at 2:11 PM, Stephen Balukoff 
sbaluk...@bluebox.net
 wrote:

Oh! One other question:

5. Should single-call stuff work for the lifecycle of a load balancing 
service? That is to say, should delete functionality also clean up all 
primitives associated with the service?


We were advocating leaving the primitives behind for the user to delete out 
of respect for shared objects.
The proposal mentions this too.


On Thu, Apr 17, 2014 at 11:44 AM, Stephen Balukoff 
sbaluk...@bluebox.net wrote:
Hi Sri,

Yes, the meeting minutes  etc. are all available here, usually a few minutes 
after the meeting is over:  
http://eavesdrop.openstack.org/meetings/neutron_lbaas/2014/

(You are also, of course, welcome to join!)

Stephen


On Thu, Apr 17, 2014 at 11:34 AM, Sri 
sri.networ...@gmail.com wrote:
hello Stephen,


I am interested in LBaaS and want to know if we post the weekly meeting's
chat transcripts online,
or maybe update an etherpad?


Can you please share the links?

thanks,
SriD



--
View this message in context: 
http://openstack.10931.n7.nabble.com/Neutron-LBaas-Single-call-API-discussion-tp38533p38542.html
Sent from the Developer mailing list archive at Nabble.com.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] HA functionality discussion

2014-04-17 Thread Carlos Garza

On Apr 17, 2014, at 5:49 PM, Stephen Balukoff 
sbaluk...@bluebox.net
 wrote:

Heyas, y'all!

So, given both the prioritization and usage info on HA functionality for 
Neutron LBaaS here:  
https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWcusp=sharing

It's clear that:

A. HA seems to be a top priority for most operators
B. Almost all load balancer functionality deployed is done so in an 
Active/Standby HA configuration

I know there's been some round-about discussion about this on the list in the 
past (which usually got stymied in implementation details disagreements), but 
it seems to me that with so many players putting a high priority on HA 
functionality, this is something we need to discuss and address.

This is also apropos, as we're talking about doing a major revision of the API, 
and it probably makes sense to seriously consider if or how HA-related stuff 
should make it into the API. I'm of the opinion that almost all the HA stuff 
should be hidden from the user/tenant, but that the admin/operator at the very 
least is going to need to have some visibility into HA-related functionality. 
The hope here is to discover what things make sense to have as a least common 
denominator and what will have to be hidden behind a driver-specific 
implementation.

I certainly have a pretty good idea how HA stuff works at our organization, but 
I have almost no visibility into how this is done elsewhere, leastwise not 
enough detail to know what makes sense to write API controls for.

So! Since gathering data about actual usage seems to have worked pretty well 
before, I'd like to try that again. Yes, I'm going to be asking about 
implementation details, but this is with the hope of discovering any least 
common denominator factors which make sense to build API around.

For the purposes of this document, when I say load balancer devices I mean 
either physical or virtual appliances, or software executing on a host 
somewhere that actually does the load balancing. It need not directly 
correspond with anything physical... but probably does. :P

And... all of these questions are meant to be interpreted from the perspective 
of the cloud operator.

Here's what I'm looking to learn from those of you who are allowed to share 
this data:

1. Are your load balancer devices shared between customers / tenants, not 
shared, or some of both?
 If by shared you mean the ability to add and delete load balancers: our
load balancers are not shared by different customers (which we call accounts).
If you're referring to networking, then yes, they are on the same VLAN. Our
clusters are basically a physical grouping of 4 or 5 Stingray devices that
share IPs on the VIP side. The configs are created on all Stingray nodes in a
cluster. If a Stingray load balancer goes down, all its VIPs will be taken over
by one of the other 4 or 5 machines. We achieve HA by moving noisy customers'
IPs to another Stingray node. The machine taking over an IP will send a
gratuitous ARP response for the router to train its ARP table on. Usually we
have 2 Stingray nodes available for failover. We could have spread the load
across all boxes evenly, but we felt that if we were near the end of the
capacity for a given cluster and one of the nodes tanked, this would have
degraded performance, as the other nodes were already nearing capacity.

We also have the usual dual-switch, dual-router configuration in case one
dies.

1a. If shared, what is your strategy to avoid or deal with collisions of 
customer rfc1918 address space on back-end networks? (For example, I know of no 
load balancer device that can balance traffic for both customer A and customer 
B if both are using the 10.0.0.0/24 subnet for their 
back-end networks containing the nodes to be balanced, unless an extra layer of 
NATing is happening somewhere.)

We order a set of CIDR blocks from our backbone and route them to our
cluster via a 10 Gb/s link, which in our bigger clusters can be upgraded via
link bonding.
Downstream we have two routes: one route for our own internal ServiceNet
10.0.0.0/8 space and the public Internet for everything not on our ServiceNet.
Our pool members are specified by CIDR block only, with no association to a
layer 2 network. When customers create their cloud servers they will be
assigned an IP within the address space of 10.0.0.0/24 and also get a publicly
routable IP address. At that point the customer can achieve isolation via
iptables or whatever tools their VM supports. In theory a user could mistakenly
punch in an IP address for a node that doesn't belong to them, but that just
means the LB will route to only one machine, and the load balancer would be
useless at that point. We don't charge our users for bandwidth going across
ServiceNet, since each DC has its own ServiceNet and our customers want to
have the load balancer close to their servers anyway. If 

Re: [openstack-dev] [Neutron][LBaaS] Requirements and API revision progress

2014-04-16 Thread Carlos Garza

On Apr 14, 2014, at 8:20 PM, Stephen Balukoff 
sbaluk...@bluebox.net wrote:

Hello y'all!

Over the last few months, I feel like we've seen a renewed vigor for 
participation in making the LBaaS project successful. After the (still 
unresolved) object model discussion started in January, based on feedback we 
were getting from Neutron core developers (mainly Mark McClain, from what I 
understand) this was followed up by a requirements discussion, a use cases 
discussion, and as of the last weekly IRC meeting, I think there are people in 
this group now working on proposals for API revision. We've coordinated this 
using various documents, and I think the ones that have carried the most weight 
are:

* Object model revision summary as started by Eugene:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion

(Feedback from core was the 'load balancer' object was an implementation 
detail. I think most people on this project don't think so, but it's clear more 
work was needed here.)

* Requirements document as started by Jorge:
https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements

(Feedback was that requirements needed to be stated in the form of user or 
operator requirements, and not in the form of what a load balancer should do, 
per se.)

* Samuel then created this google document to describe several use cases from 
the user's point of view:
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?usp=sharing

* And to prioritize what features are needed, Jorge started this document to 
collect operator feature usage data (with current load balancer deployments, 
presumably outside of OpenStack, since Neutron LBaaS doesn't presently have 
many of these features):
https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWcusp=sharing

(Feedback on this is that everyone agrees the legacy API is really confusing, 
and that a clean break for the API for Juno is probably prudent, possibly 
preserving some backward compatibility with a versioned API. Further, it was 
clear we needed an example of what the new API should look like.)

There are also these proposal documents for L7 and SSL functionality, 
presumably on hold until either the API draft being made is closer to reality, 
or until we come to an agreement on the required changes to the object model 
the proposals imply:
https://wiki.openstack.org/wiki/Neutron/LBaaS/l7
https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL


So! On this front:

1. Does is make sense to keep filling out use cases in Samuel's document above? 
I can think of several more use cases that our customers actually use on our 
current deployments which aren't considered in the 8 cases in Samuel's document 
thus far. Plus nobody has created any use cases from the cloud operator
perspective yet.

We have been using the document when discussing our API proposal. The use
cases had some surprising implications for our API proposal which we had to
rethink, particularly the L7 URL routing use case #7.

As far as operator requirements go, I know our team is advocating a management
API that is separate from the public API (separate meaning regular users can't
reach its endpoint) but still a part of the core codebase.

2. It looks like we've started to get real-world data on Load Balancer features 
in use in the real world. If you've not added your organization's data, please 
be sure to do so soon so we can make informed decisions about product 
direction. On this front, when will we be making these decisions?

Would it be prudent to make these decisions at the Atlanta summit or
thereafter?


3. Jorge-- I know an action item from the last meeting was to draft a revision 
of the API (probably starting from something similar to the Atlas API). Have 
you had a chance to get started on this, and are you open for collaboration on 
this document at this time? Alternatively, I'd be happy to take a stab at it 
this week (though I'm not very familiar with the Atlas API-- so my proposal 
might not look all that similar).

Our team (Jorge's team) is investigating the API from the perspective of
supporting single load balancer creation calls that are still compatible with
the ability to create separate components such as VIPs, pools, and SSL
configs, and lastly making a call that joins them to a load balancer. We wanted
to iron out some of the gotchas we've been encountering before we presented the
proposals. Most recently we are looking at allowing in-place object creation
(which I'm calling literals) vs. the ability to create parent objects with the
IDs of previously created objects. I'll let Brandon Logan follow up on this
later today. We are still in meetings right now about the API.

What format or template should we be following to create the API
documentation? (I see this here:
http://docs.openstack.org/api/openstack-network/2.0/content/ch_preface.html
but this 

Re: [openstack-dev] [Neutron][LBaaS] Requirements and API revision progress

2014-04-16 Thread Carlos Garza

On Apr 16, 2014, at 4:31 PM, Eugene Nikanorov 
enikano...@mirantis.com wrote:

Hi folks,

I've briefly looked over the doc.

I think the whole idea of basing the API on Atlas misses the content switching
use case, which is very important:
we need multiple pools within a load balancer, and the API doesn't seem to
allow that. If it did, then you'd face another problem: you need to reference
those pools somehow inside the JSON you use in POST.
There are two options here: use names or IDs; both put constraints on and
create complexity for both the user of such an API and for the implementation.

That particular problem becomes worse when it comes to objects which might not
have names, while it's better not to provide an ID in POST and to rely on their
random generation. E.g. when you need to create references between objects in
JSON input, you'll need to create artificial attributes just for the parser to
understand what such input means.

So that makes me think that right now a 'single-call API' is not flexible 
enough to comply with our requirements.

We have demonstrated that you can create load balancers in separate
transactions and in a single-call fashion, using both reference IDs to
previously created pools and transient names to create objects in the same
single call and reference them later in other objects. The single-call API is
very flexible in that it allows you to create sub-objects on the fly (we
proposed transient IDs to allow the user to avoid creating duplicate objects
with different IDs) as well as reference preexisting objects by ID. The
allowance for transient IDs adds flexibility to the API, not taking away from
it as you declared. I would like you to be really clear on what our
requirements are: what requirement is our single API call violating?

We have thus far attempted to support a single-call API that doesn't
interfere with multiple smaller object creation calls. If we are just adding to
the API in a demonstrably workable fashion, what is the real objection?
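
(To make the transient-ID idea concrete, here is a sketch in Python dict form
of a single-call body where the pool is defined in-line and the listener
references it within the same request. The field names are illustrative only,
not the proposal's final schema.)

    single_call_body = {
        "loadbalancer": {
            "name": "web-lb",
            "vip_subnet_id": "<existing-subnet-uuid>",
            "listeners": [{
                "protocol": "HTTPS",
                "protocol_port": 443,
                # transient reference, resolved server-side within this call
                "default_pool_id": "#web-pool",
            }],
            "pools": [{
                "id": "#web-pool",  # transient id, meaningful only in this request
                "protocol": "HTTP",
                "lb_algorithm": "ROUND_ROBIN",
                "members": [
                    {"address": "10.0.0.11", "protocol_port": 80},
                    {"address": "10.0.0.12", "protocol_port": 80},
                ],
            }],
        }
    }

A user who prefers the primitive calls would simply put a real UUID from an
earlier pool-create into default_pool_id; the two styles coexist.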


While I understand that it might be simpler to use such an API for some cases,
it makes complex configurations fall back to our existing approach, which is
creating configuration on a per-object basis.
While the problem with complex configurations is not sorted out, I'd prefer
that we focus on the existing 'object-oriented' approach.

You're basically saying:
P1: The single API call proposal doesn't support *ALL* complex configurations.
P2: If the single API proposal doesn't support *ALL* complex configurations,
the proposal should be rejected.

We have demonstrated that the proposed single API call can handle complex
configurations via transient IDs. So we already disagree with premise 1.

We don't agree with premise 2 either:
we believe it is unfair to punish the API end user due to the religious belief
that the single API call must support all possible configurations or you as the
customer can't be allowed to use the single API call even for simpler
configurations.

We want the single API call proposal to be as useful as possible, so we are
likewise looking at ways to have it solve ALL complex configurations, and so
far we feel transient IDs solve this problem already.

Is the real objection that a single API call makes the implementation too
complex? We are advocating that a single API call makes it easier on the end
user of the API, and we are of the impression that it's better to have a
complex implementation inside our neutron/lbaas code rather than passing that
complexity down to the end user of the API.

We don't object to multiple smaller object creation transactions; we just
want the addition of a single API call.


On the other hand, without the single-call API the rest of the proposal seems
to be similar to approaches discussed in
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion
Since you linked the object model proposals, could you also link the rest
of the proposals, or are you referring to our draft as the rest of the
proposal?


Thanks,
Eugene.





On Thu, Apr 17, 2014 at 12:59 AM, Brandon Logan 
brandon.lo...@rackspace.com wrote:
Sorry about that.  It should be readable now.

From: Eugene Nikanorov [enikano...@mirantis.com]
Sent: Wednesday, April 16, 2014 3:51 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements and API revision 
progress

Hi Brandon,

Seems that doc has not been made public, so please share.

Thanks,
Eugene.


On Thu, Apr 17, 2014 at 12:39 AM, Brandon Logan 
brandon.lo...@rackspace.com wrote:
Here is Jorge and team’s API proposal based on Atlas.  The document has some 
questions and answers about why decisions were made.  Feel free to open up a 
discussion about these questions and answers and really about 

Re: [openstack-dev] [Neutron][LBaaS] Requirements and API revision progress

2014-04-16 Thread Carlos Garza
 submitted it to the mailing list which, let's
be honest, was going to be rejected early on if we didn't, which is why you
don't see it being half-baked in this document we issued today. We would also
hope that the community would likewise not jump to conclusions and dismiss the
single API call simply because they don't have a requirement for it.

Currently we were thinking of having a DecryptSSL object that could be
attached to the Listener/LoadBalancer (depending on what term is agreed upon),
as well as a ReEncryptSSL object that could be attached to the pool, to meet
item 7 in the doc
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?pli=1
with content switching on the URI determining which pool to use. I am
interested in your SSL proposal.
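
(Roughly, the two objects would carry something like the following. This is a
sketch only; the names and fields are illustrative, and the certificate
references would presumably be Barbican container IDs, though none of this is
settled.)

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class DecryptSSL:
        # attached to the listener/loadbalancer: terminates the front-end TLS
        certificate_id: str                      # e.g. a Barbican container reference
        protocols: List[str] = field(default_factory=lambda: ["TLSv1.2"])

    @dataclass
    class ReEncryptSSL:
        # attached to a pool: governs the hop from the LB back to the members
        ca_certificate_id: Optional[str] = None  # trust anchor for member certs, if any
        verify_hostname: bool = False            # user-selectable, per the earlier thread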


As an aside, it seems everyone's number one feature request at this time is HA. 
(more so than SSL and L7, yo!)

   I would agree HA tops the list, but I don't think the API proposals are
ignoring this. I would think HA would be strongly influenced by the low-level
implementation of the driver (or provider, or whatever you wish to call it) and
much less so by the API message format discussions.

   I think concerns about HA are really stemming from people desiring an
implementation that scales and has failover capabilities with floating IPs in
the same network. I.e. they want an option that lets a load balancer fail over
its IP to another load balancer should the host machine it's on (whether a VM,
a physical box, a process, or an LXC container) fail. It feels at this point
like the user wants to associate an IP with two ports on the same Neutron
network. In the real world the heartbeat peer steals the IP of the failed
machine by bringing up the IP on its interface, which causes it to advertise
new ARP responses since the dead node can't. This has strong implications for
the low-level driver.

Note that I certainly won't have this ready for tomorrow's meeting, but could 
probably have a draft to show y'all at next week's meeting if y'all think it 
would be helpful to produce such a thing. Anyway, we can discuss this at 
tomorrow's meeting…

Yes, we are very interested in your concrete ideas. So far all we saw on SSL
is an entry in the VIP table of a proposed object model but nothing else.
We would like to hear your ideas.
Thanks,
Stephen




On Wed, Apr 16, 2014 at 4:17 PM, Carlos Garza 
carlos.ga...@rackspace.com wrote:

On Apr 16, 2014, at 4:31 PM, Eugene Nikanorov 
enikano...@mirantis.com wrote:

Hi folks,

I've briefly looked over the doc.

I think the whole idea of basing the API on Atlas misses the content switching
use case, which is very important:
we need multiple pools within a load balancer, and the API doesn't seem to
allow that. If it did, then you'd face another problem: you need to reference
those pools somehow inside the JSON you use in POST.
There are two options here: use names or IDs; both put constraints on and
create complexity for both the user of such an API and for the implementation.

That particular problem becomes worse when it comes to objects which might not
have names, while it's better not to provide an ID in POST and to rely on their
random generation. E.g. when you need to create references between objects in
JSON input, you'll need to create artificial attributes just for the parser to
understand what such input means.

So that makes me think that right now a 'single-call API' is not flexible 
enough to comply with our requirements.

We have demonstrated that you can create load balancers in separate
transactions and in a single-call fashion, using both reference IDs to
previously created pools and transient names to create objects in the same
single call and reference them later in other objects. The single-call API is
very flexible in that it allows you to create sub-objects on the fly (we
proposed transient IDs to allow the user to avoid creating duplicate objects
with different IDs) as well as reference preexisting objects by ID. The
allowance for transient IDs adds flexibility to the API, not taking away from
it as you declared. I would like you to be really clear on what our
requirements are: what requirement is our single API call violating?

We have thus far attempted to support a single call API that doesn't 
interfere with multiple smaller object creation calls. If we are just adding to 
the API  in a demonstrably workable fashion what is the real objection.


While I understand that it might be simpler to use such an API for some cases,
it makes complex configurations fall back to our existing approach, which is
creating configuration on a per-object basis.
While the problem with complex configurations is not sorted out, I'd prefer
that we focus on the existing 'object-oriented' approach.

You're basically saying:
P1: The single API call proposal doesn't support *ALL* complex configurations.
P2: If the single API proposal doesn't