Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-17 Thread Thomas Herve

  The check url is already a part of Neutron LBaaS IIRC.
 
 Yep. LBaaS is a work in progress, right?

You mean more than OpenStack in general? :) The LBaaS API in Neutron has been 
working fine since Havana. It certainly has shortcomings, and it seems a big 
refactoring is planned, though.

 Those of us using Nova networking are not feeling the love, unfortunately.

That's to be expected. nova-network is going to be supported, but you won't get 
new features for it.

 As far as Heat goes, there is no LBaaS resource type. The
 OS::Neutron::LoadBalancer resource type does not have any health checking
 properties.

There are 4 resources related to neutron load balancing. 
OS::Neutron::LoadBalancer is probably the least useful and the one you can 
*not* use, as it's only there for compatibility with 
AWS::AutoScaling::AutoScalingGroup. OS::Neutron::HealthMonitor does the health 
checking part, although maybe not in the way you want it.
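For reference, a minimal HOT sketch wiring a monitor into a pool looks roughly 
like this (property values are illustrative; check the resource reference for 
your release):

    resources:
      monitor:
        type: OS::Neutron::HealthMonitor
        properties:
          type: HTTP
          delay: 5
          max_retries: 3
          timeout: 5
          url_path: /healthcheck    # hypothetical check URL
      pool:
        type: OS::Neutron::Pool
        properties:
          protocol: HTTP
          subnet_id: { get_param: subnet_id }
          lb_method: ROUND_ROBIN
          vip: { protocol_port: 80 }
          monitors: [ { get_resource: monitor } ]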

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] glance.store repo created

2014-07-17 Thread Flavio Percoco
Greetings,

I'd like to announce that we finally got the glance.store repo created.

This library pulls the store-related code out of Glance. Not many
changes were made to the API during this process. The main goal, for
now, is to switch Glance over and keep backwards compatibility with
Glance to reduce the number of changes required. We'll improve and
revamp the store API during K - FWIW, I have a spec draft with ideas for it.

The library still needs some work and this is a perfect moment for
anyone interested to chime in and contribute to the library. Some things
that are missing:

- Swift store (Nikhil is working on this)
- Syncing the latest changes made to the store code.

If you've recently made changes to any of the stores, please go ahead
and contribute them back to `glance.store` or let me know so I can do it.

I'd also like to ask reviewers to request that contributions to the `store`
code in Glance also be proposed to `glance.store`. This way, we'll
be able to keep parity.

I'll be releasing an alpha version soon so we can start reviewing the
glance switch-over. Obviously, we won't merge it until we have feature
parity.

Any feedback is obviously very welcome,
Cheers,
Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] Weekly nova-network / neutron parity meeting

2014-07-17 Thread Oleg Bondarev
Thanks for setting this up, Kyle. Wednesday 1500 UTC works for me.

Thanks,
Oleg


On Thu, Jul 17, 2014 at 6:35 AM, Kyle Mestery mest...@mestery.com wrote:

 On Wed, Jul 16, 2014 at 9:28 PM, Michael Still mi...@stillhq.com wrote:
  That time is around 1am for me. I'm ok with that as long as someone on
  the nova side can attend in my place.
 
  Michael
 
 Some of the neutron contributors to this effort are in Europe and
 Russia, so finding a time slot to get everyone could prove tricky.
 I'll leave this slot for now and hope we can get someone else from nova to
 attend, Michael. If not, we'll move this to another time.

 Thanks!
 Kyle

  On Thu, Jul 17, 2014 at 12:22 PM, Kyle Mestery mest...@mestery.com
 wrote:
  As we're getting down to the wire in Juno, I'd like to propose we have
  a weekly meeting on the nova-network and neutron parity effort. I'd
  like to start this meeting next week, and I'd like to propose
  Wednesday at 1500 UTC on #openstack-meeting-3 as the time and
  location. If this works for people, please reply on this thread, or
  suggest an alternate time. I've started a meeting page [1] to track
  agenda for the first meeting next week.
 
  Thanks!
  Kyle
 
  [1] https://wiki.openstack.org/wiki/Meetings/NeutronNovaNetworkParity
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  --
  Rackspace Australia
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceph-users] [Nova] [RBD] Copy-on-write cloning for RBD-backed disks

2014-07-17 Thread Dennis Kramer (DT)
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi Dmitry,

I've been using Ubuntu 14.04 LTS + Icehouse with Ceph as a storage
backend for glance, cinder and nova (kvm/libvirt). I *really* would
love to see this patch series land in Juno. It's been a real performance
issue because of the unnecessary re-copy to and from Ceph when using
the default boot-from-image option. It seems that your fix would
be the solution to all of this. IMHO this is one of the most important
features when using Ceph RBD as a backend for OpenStack Nova.

Can you point me in the right direction on how to apply this patch of
yours on a default Ubuntu 14.04 LTS + Icehouse installation? I'm using
the default Ubuntu packages, since Icehouse lives in core, and I'm not
sure how to apply the patch series. I would love to test and review it.
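For reference, the standard Gerrit flow for trying an unmerged change seems to 
be to fetch it and cherry-pick it onto a nova checkout (the patchset ref below 
is a placeholder; each change page in Gerrit shows the exact ref):

    git clone https://git.openstack.org/openstack/nova
    cd nova
    git fetch https://review.openstack.org/openstack/nova refs/changes/22/91722/N
    git cherry-pick FETCH_HEAD    # repeat for each patch in the series

I don't know whether that applies cleanly against the packaged Icehouse code, 
though.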

With regards,

Dennis

On 07/16/2014 11:18 PM, Dmitry Borodaenko wrote:
 I've got a bit of good news and bad news about the state of landing
 the rbd-ephemeral-clone patch series for Nova in Juno.
 
 The good news is that the first patch in the series 
 (https://review.openstack.org/91722 fixing a data loss inducing bug
 with live migrations of instances with RBD backed ephemeral drives)
 was merged yesterday.
 
 The bad news is that after 2 months of sitting in the review queue and 
 only getting its first +1 from a core reviewer on the spec 
 approval freeze day, the spec for the blueprint 
 rbd-clone-image-handler (https://review.openstack.org/91486)
 wasn't approved in time. Because of that, today the blueprint was
 rejected along with the rest of the commits in the series, even
 though the code itself was reviewed and approved a number of
 times.
 
 Our last chance to avoid putting this work on hold for yet another 
 OpenStack release cycle is to petition for a spec freeze exception 
 in the next Nova team meeting: 
 https://wiki.openstack.org/wiki/Meetings/Nova
 
 If you're using Ceph RBD as a backend for ephemeral disks in Nova and
 are interested in this patch series, please speak up. Since the 
 biggest concern raised about this spec so far has been lack of CI 
 coverage, please let us know if you're already using this patch 
 series with Juno, Icehouse, or Havana.
 
 I've put together an etherpad with a summary of where things are 
 with this patch series and how we got here: 
 https://etherpad.openstack.org/p/nova-ephemeral-rbd-clone-status
 
 Previous thread about this patch series on ceph-users ML: 
 http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-March/028097.html

___
ceph-users mailing list
ceph-us...@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.15 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlPHc3EACgkQiJDTKUBxIRtAEgCgiNRTedwsydYOWY4rkC6v2vbS
FTEAn34qSiwTyBNCDrXGWOmGPpFu+4PQ
=tK4K
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Use Launcher/ProcessLauncher in glance

2014-07-17 Thread Tailor, Rajesh
Hi all,

Why is glance not using Launcher/ProcessLauncher (oslo-incubator) for its WSGI 
service, as is done in other OpenStack projects, i.e. nova, cinder, keystone, 
etc.?

As of now, when a SIGHUP signal is sent to the glance-api parent process, it 
calls the callback handler and then throws an OSError.
The OSError is thrown because the os.wait system call was interrupted by the 
SIGHUP callback handler.
As a result, the parent process closes the server socket.
All the child processes then get terminated without completing in-flight API 
requests, because the server socket is already closed, and the service doesn't 
restart.

Ideally, when a SIGHUP signal is received by the glance-api process, it should 
process all the pending requests and then restart the glance-api service.

If the (oslo-incubator) Launcher/ProcessLauncher is used in glance, then it 
will handle the service restart on SIGHUP properly.

Can anyone please let me know what the positive/negative impact of using 
Launcher/ProcessLauncher (oslo-incubator) in glance would be?
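For context, a rough sketch of the pattern nova and cinder use (the module 
path assumes the oslo-incubator service module synced into the tree; the 
WsgiService wrapper is illustrative, not existing glance code):

    from glance.openstack.common import service

    class WsgiService(service.Service):
        """Wraps the WSGI server so the launcher can start/stop it."""
        def start(self):
            super(WsgiService, self).start()
            # start the eventlet WSGI server here

    def main():
        launcher = service.ProcessLauncher()
        # On SIGHUP to the parent, children drain in-flight requests and
        # are respawned, instead of the parent dying on an OSError.
        launcher.launch_service(WsgiService(), workers=4)
        launcher.wait()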

Thank You,
Rajesh Tailor


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Neutron ML2 Blueprints

2014-07-17 Thread Andrew Woodward
[2] still has no positive progress; simply making puppet stop the
services isn't all that useful, we will need to move towards always
using override files.
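For example, something along the lines of the stock Ubuntu upstart override
mechanism (job name illustrative):

    # keep the packaged job from autostarting; pacemaker manages it instead
    echo manual > /etc/init/neutron-server.override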
[3] is closed as it hasn't occurred in two days.
[4] may be closed as it's not occurring in CI or in my testing anymore.

[5] is closed, was due to [7]

[7] https://bugs.launchpad.net/puppet-neutron/+bug/1343009

CI is passing on CentOS now, and only failing on Ubuntu in OSTF. This
appears to be due to services not being properly managed by
corosync/pacemaker.

On Tue, Jul 15, 2014 at 11:24 PM, Andrew Woodward xar...@gmail.com wrote:
 [2] appears to be made worse, if not caused, by neutron services
 autostarting on Debian; no patch yet, need to add a mechanism to the ha
 layer to generate override files.
 [3] appears to have stopped with this morning's master
 [4] deleting the cluster and restarting mostly removed this; was
 getting an issue with $::osnailyfacter::swift_partition/.. not existing
 (/var/lib/glance), but this is fixed in rev 29

 [5] is still the critical issue blocking progress; I'm super at a loss
 as to why this is occurring. Changes to ordering have no effect. Next
 steps probably involve pre-hacking keystone, neutron, and
 nova-client to be more verbose about their key usage. As a hack we
 could simply restart neutron-server, but I'm not convinced the issue
 can't come back, since we don't know how it started.



 On Tue, Jul 15, 2014 at 6:34 AM, Sergey Vasilenko
 svasile...@mirantis.com wrote:
 [1] fixed in https://review.openstack.org/#/c/107046/
 Thanks for reporting the bug.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Andrew
 Mirantis
 Ceph community



-- 
Andrew
Mirantis
Ceph community

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] i18n switch-over heads up

2014-07-17 Thread Flavio Percoco
Greetings,

As part of [0], glance has been switched over to oslo.i18n. This has no
big impact on the way things are done in Glance. However, you should be
aware that wherever you would use `openstack.common.gettextutils` you
should now use `glance.i18n`.
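In practice the change is just the import, e.g. (the old path assumes the
usual oslo-incubator sync location):

    # before (oslo-incubator sync)
    from glance.openstack.common.gettextutils import _

    # after
    from glance.i18n import _

    msg = _("This string is now marked for translation via glance.i18n")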

The work is not complete yet. Glance still relies on `i18n.install`,
which registers `_` as a builtin. This is on its way to being
deprecated, which means we need to move glance away from that and use
explicit imports.

Keep an eye on the reviews queue, the patch is coming soon.
Cheers,
Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Heat Db Model updates

2014-07-17 Thread Steven Hardy
On Thu, Jul 17, 2014 at 12:31:05AM -0400, Zane Bitter wrote:
 On 16/07/14 23:48, Manickam, Kanagaraj wrote:
 I have gone through the Heat database and found the drawbacks in the
 existing model as listed below.  Could you review and add anything
 missing here. Thanks.
 
 The Heat database model has the following drawbacks:
 
 1.Duplicate information
 
 2.Incomplete naming of columns
 
 3.Inconsistency in the identifiers (id) and deleted_at columns across
 the tables
 
 4.The resource table is specific to nova; make it generic
 
 5.Pre-defined constants are not using enum.
 
 And the section provided below describes these problems table by table.
 
 *Stack*
 
 Duplicate info
 
 Tenant & stack_user_project_id
 
 These are different things; stack_user_project_id is the project/tenant in
 which Heat creates users (in a different domain); tenant is the
 project/tenant in which the stack itself was created.

+1

See this blog post for further info on the stack user project:

http://hardysteven.blogspot.co.uk/2014/04/heat-auth-model-updates-part-2-stack.html

A better cleanup related to the tenant entry imo would be to migrate all
references to project_id, but there are some issues with that:

- The code inconsistently stores the tenant ID and name in the tenant
  fields (in the stack and user_creds tables respectively)
- Some clients (the swift Connection class in particular) seem to need the
  tenant *name*, which seems wrong, and probably won't work for non-default
  domains, so we need to investigate this and figure out if we can migrate
  to just storing and using the project_id everywhere.

 Credentials_id & username, owner_id.
 
 Tenant is also part of user_creds, and Stack always has credentials_id,
 so what is the need of having tenant info in the stack table? In the stack
 table the credentials_id alone is sufficient.
 
 tenant is in the Stack table because we routinely query by tenant and we
 don't want to have to do a join.
 
 There may be a legitimate reason for the UserCreds table to exist separately
 from the Stack table but I don't know what it is, so merging the two is an
 option.

Yes, there is IMO - the user_creds record is not necessarily unique to a
stack - we've been creating a new row for all stacks (including backup and
nested stacks), but really you only need one set of stored credentials for
the top level stack which is then inherited by all child stacks.

I posted two patches yesterday implementing this:

https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bug/1342593_2,n,z

So when they land, we won't (easily) be able to merge the user_creds and
stack tables.  Note I'm also working on a cleanup which abstracts the
user_creds data behind a class, similar to Stack/Template etc.

 
 Status & action should be enum of predefined status
 
 +1. I assume it is still easy to add more actions later?

Well, they're already defined via tuples in the respective classes, but OK.
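For the avoidance of doubt, a minimal sketch of what the columns could look
like (values illustrative, not the full set Heat defines):

    import sqlalchemy as sa

    action = sa.Column(sa.Enum('CREATE', 'UPDATE', 'DELETE',
                               name='stack_action'))
    status = sa.Column(sa.Enum('IN_PROGRESS', 'COMPLETE', 'FAILED',
                               name='stack_status'))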

 *User_creds*
 
 correct the spelling in Truststore_id
 
 trustor_user_id is correct.

Yeah, see the trust documentation and my trusts blog post:

https://github.com/openstack/identity-api/blob/master/v3/src/markdown/identity-api-v3-os-trust-ext.md

http://hardysteven.blogspot.co.uk/2014/04/heat-auth-model-updates-part-1-trusts.html

 *Resource*
 
 Status & action should be enum of predefined status
 
 +1
 
 Rsrc_metadata - make full name resource_metadata
 
 -0. I don't see any benefit here.

Agreed

 Why only a nova_instance column? How about other services like cinder and
 glance? The column could be renamed to be generic enough.
 
 +1 this should have been called physical_resource_id.

+1, the nova_instance thing is just a historical mistake which never got
cleaned up.

 
 Last_evaluated - append _at
 
 I really don't see the point.

Me neither, particularly since we plan to deprecate the WatchRule
implementation (possibly for Juno):

https://blueprints.launchpad.net/heat/+spec/deprecate-stack-watch

 
 State should be an enum
 
 +1
 
 *Event*
 
 Why uuid and id both used?
 
 I believe it's because you should always use an integer as the primary key.
 I'm not sure if it makes a difference even though we _never_ do a lookup by
 the (integer) id.
 
 Resource_action is being used in both event and resource table, so it
 should be moved to common table
 
 I'm not sure what this means. Do you mean a common base class?

The resource action/status in the event is a snapshot of the resource
action/status at the time of the event, ergo we can't reference some common
data (such as the resource table) as the data will change.

 Resource_status should be any enum
 
 +1

Ok, but note we're doing a weird thing in resource.Resource.signal which
may need to be fixed first (using signal as the action, which isn't a
real resource action and gives odd output combined with the status)

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [cinder][nova] cinder querying nova-api

2014-07-17 Thread Abbass MAROUNI

Thanks Thomas,

What I'm trying to achieve is the following :
To be able to create a VM on a host (that's a compute and volume host at 
the same time), then call cinder and let it find the host and create and 
attach volumes there.


I guess the biggest problem is to be able to identify the host; as you 
said, the host_id is of little use here. Unless I try to query the nova 
API as admin and get a list of hypervisors and their VMs. But then I'll 
have to match the nova host name with the cinder host name.
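Something like this novaclient sketch is what I have in mind (admin 
credentials assumed; all names are illustrative):

    from novaclient.v1_1 import client

    nc = client.Client('admin', 'password', 'admin',
                       'http://controller:5000/v2.0')
    # list hypervisors matching a name fragment, with their instances
    for hyp in nc.hypervisors.search('compute', servers=True):
        print("%s: %s" % (hyp.hypervisor_hostname,
                          [s['name'] for s in getattr(hyp, 'servers', [])]))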


Any thoughts on this? Is there any blueprint for attaching local 
volumes in cinder?


Thanks,

Abbass,

On 07/16/2014 05:44 PM, openstack-dev-requ...@lists.openstack.org wrote:

So I see a couple of issues here:

1) reliability - need to decide what the scheduler does if the nova
api isn't responding - hanging and ignoring future scheduling requests
is not a good option... a timeout and putting the volume into error
might be fine.

2) Nova doesn't expose hostnames as identifiers unless I'm mistaken; it
exposes some abstract host_id. Need to figure out the mapping between
those and cinder backends.

With those two caveats in mind, I don't see why not, nor indeed any
other way of solving the problem unless / until the
grand-unified-scheduler-of-everything happens.


Starting a cinder spec on the subject might be the best place to
collect people's thoughts?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Gap 0 (database migrations) closed!

2014-07-17 Thread Paddu Krishnan (padkrish)
Thanks Jakub. Certainly helps, appreciate it.

-Paddu

On 7/16/14 7:41 AM, Jakub Libosvar libos...@redhat.com wrote:

On 07/16/2014 04:29 PM, Paddu Krishnan (padkrish) wrote:
 Hello,
 A follow-up development question related to this:
 
 As a part of https://review.openstack.org/#/c/105563/, which was
 introducing a new table in the Neutron DB, I was trying to send for review a
 new file in neutron/db/migration/alembic_migrations/versions/:
 https://review.openstack.org/#/c/105563/4/neutron/db/migration/alembic_migrations/versions/1be5bdeb1d9a_ml2_network_overlay_type_driver.py
 which got generated through the neutron-db-manage script. This also
 updated neutron/db/migration/alembic_migrations/versions/HEAD.
 I was trying to send this file for review as well.

 git review failed and I saw merge errors
 in neutron/db/migration/alembic_migrations/versions/HEAD.
 
 Without the HEAD file modified, Jenkins was failing. I am working to fix this
 and saw this e-mail.
 
 I had to go through all the links in this thread in detail. But,
 meanwhile, the two points mentioned below look related to the
 patch/issues I am facing.
 So, if I add a new table, I don't need to run the neutron-db-manage
 script to generate the file and modify the HEAD anymore? Does (2) below
 need to be done manually?
Hi Paddu,

the process is the same (create migration script, update HEAD file), but
all migrations should have

migration_for_plugins = ['*']


Because you created a new DB model in new module, you also need to add

from neutron.plugins.ml2.drivers import type_network_overlay

to neutron/db/migration/models/head.py module.
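For illustration, the generated script should end up shaped roughly like this
(revision ids and the table definition are placeholders, not your actual
script):

    # 1be5bdeb1d9a_ml2_network_overlay_type_driver.py
    revision = '1be5bdeb1d9a'
    down_revision = 'xxx'  # filled in by neutron-db-manage

    migration_for_plugins = ['*']

    from alembic import op
    import sqlalchemy as sa

    def upgrade(active_plugins=None, options=None):
        op.create_table(
            'network_overlay_types',  # hypothetical table
            sa.Column('network_type', sa.String(length=32), nullable=False))

    def downgrade(active_plugins=None, options=None):
        op.drop_table('network_overlay_types')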

I hope it helps.

Kuba

 
 Thanks,
 Paddu



 
 From: Anna Kamyshnikova akamyshnik...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Wednesday, July 16, 2014 1:14 AM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Gap 0 (database migrations)
closed!
 
 Hello everyone!
 
 I would like to bring the next two points to everybody's attention:
 
 1) As Henry mentioned if you add new migration you should make it
 unconditional. Conditional migrations should not be merged since now.
 
 2) If you add some new models you should ensure that module containing
 it is imported in /neutron/db/migration/models/head.py.
 
 The second point is important for testing which I hope will be merged
 soon: https://review.openstack.org/76520.
 
 Regards,
 Ann
 
 
 
 On Wed, Jul 16, 2014 at 5:54 AM, Kyle Mestery mest...@mestery.com wrote:
 
 On Tue, Jul 15, 2014 at 5:49 PM, Henry Gessau ges...@cisco.com wrote:
  I am happy to announce that the first (zero'th?) item in the
Neutron Gap
  Coverage[1] has merged[2]. The Neutron database now contains all
tables for
  all plugins, and database migrations are no longer conditional on
the
  configuration.
 
  In the short term, Neutron developers who write migration scripts
need to set
migration_for_plugins = ['*']
  but we will soon clean up the template for migration scripts so
that this will
  be unnecessary.
 
  I would like to say special thanks to Ann Kamyshnikova and Jakub
Libosvar for
  their great work on this solution. Also thanks to Salvatore
Orlando and Mark
  McClain for mentoring this through to the finish.
 
  [1]
  
https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap
_Coverage
  [2] https://review.openstack.org/96438
 
 This is great news! Thanks to everyone who worked on this particular
 gap. We're making progress on the other gaps identified in that
plan,
 I'll send an email out once Juno-2 closes with where we're at.
 
 Thanks,
 Kyle
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Daniel P. Berrange
On Thu, Jul 17, 2014 at 08:46:12AM +1000, Michael Still wrote:
 Top posting to the original email because I want this to stand out...
 
 I've added this to the agenda for the nova mid cycle meetup, I think
 most of the contributors to this thread will be there. So, if we can
 nail this down here then that's great, but if we think we'd be more
 productive in person chatting about this then we have that option too.

FYI, I'm afraid I won't be at the mid-cycle meetup since it clashed with
my being on holiday. So I'd really prefer if we keep the discussion on
this mailing list where everyone has a chance to participate.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Sean Dague
On 07/17/2014 12:45 AM, Michael Still wrote:
 On Thu, Jul 17, 2014 at 3:27 AM, Vishvananda Ishaya
 vishvana...@gmail.com wrote:
 On Jul 16, 2014, at 8:28 AM, Daniel P. Berrange berra...@redhat.com wrote:
 On Wed, Jul 16, 2014 at 08:12:47AM -0700, Clark Boylan wrote:

 I am worried that we would just regress to the current process because
 we have tried something similar to this previously and were forced to
 regress to the current process.

 IMHO the longer we wait between updating the gate to new versions
 the bigger the problems we create for ourselves. eg we were switching
 from 0.9.8 released Dec 2011, to  1.1.1 released Jun 2013, so we
 were exposed to over 1 + 1/2 years worth of code churn in a single
 event. The fact that we only hit a couple of bugs in that, is actually
 remarkable given the amount of feature development that had gone into
 libvirt in that time. If we had been tracking each intervening libvirt
 release I expect the majority of updates would have had no ill effect
 on us at all. For the couple of releases where there was a problem we
 would not be forced to rollback to a version years older again, we'd
 just drop back to the previous release at most 1 month older.

 This is a really good point. As someone who has to deal with packaging
 issues constantly, it is odd to me that libvirt is one of the few places
 where we depend on upstream packaging. We constantly pull in new python
 dependencies from pypi that are not packaged in ubuntu. If we had to
 wait for packaging before merging the whole system would grind to a halt.

 I think we should be updating our libvirt version more frequently by
 installing from source or our own ppa instead of waiting for the ubuntu
 team to package it.
 
 I agree with Vish here, although I do recognise it's a bunch of work
 for someone. One of the reasons we experienced bugs in the gate is
 that we jumped 18 months in libvirt versions in a single leap. If we
 had flexibility of packaging, we could have stepped through each major
 version along the way, and that would have helped us identify problems
 in a more controlled manner.

We've talked about the 'CI the world plan' for a while, which this would
be part of. That's a ton of work that no one is signed up for.

But more importantly setting up and running the tests is < 10% of the
time cost. Triage and fixing bugs long term is a real cost. As we've
seen with the existing gate bugs we can't even close the bugs that are
preventing ourselves from merging code -
http://status.openstack.org/elastic-recheck/, so I'm not sure which band
of magical elves we'd expect to debug and fix these things. :)

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Daniel P. Berrange
On Wed, Jul 16, 2014 at 12:38:44PM -0600, Chris Friesen wrote:
 On 07/16/2014 11:59 AM, Monty Taylor wrote:
 On 07/16/2014 07:27 PM, Vishvananda Ishaya wrote:
 
 This is a really good point. As someone who has to deal with packaging
 issues constantly, it is odd to me that libvirt is one of the few places
 where we depend on upstream packaging. We constantly pull in new python
 dependencies from pypi that are not packaged in ubuntu. If we had to
 wait for packaging before merging the whole system would grind to a halt.
 
  I think we should be updating our libvirt version more frequently by
  installing from source or our own ppa instead of waiting for the ubuntu
  team to package it.
 
 Shrinking in terror from what I'm about to say ... but I actually agree
 with this. There are SEVERAL logistical issues we'd need to sort, not
 the least of which involve the actual mechanics of us doing that and
 properly gating,etc. But I think that, like the python depends where we
 tell distros what version we _need_ rather than using what version they
 have, libvirt, qemu, ovs and maybe one or two other things are areas in
 which we may want or need to have a strongish opinion.
 
 I'll bring this up in the room tomorrow at the Infra/QA meetup, and will
 probably be flayed alive for it - but maybe I can put forward a
 straw-man proposal on how this might work.
 
 How would this work...would you have them uninstall the distro-provided
 libvirt/qemu and replace them with newer ones?  (In which case what happens
 if the version desired by OpenStack has bugs in features that OpenStack
 doesn't use, but that some other software that the user wants to run does
 use?)

Having upstream testing the latest version of libvirt doesn't mean that
the latest version of libvirt is a mandatory requirement for distros. We
already have places where we use a feature of libvirt from say, 1.1.0, but
our reported min libvirt is still 0.9.6. In some cases Nova will take
alternative code paths for compat, in other cases attempts to use the
feature will just  be reported as an error to the caller.

 Or would you have OpenStack versions of them installed in parallel in an
 alternate location?

If the distros do not have the latest version of libvirt though, they are
responsible for running the OpenStack CI tests against their version and
figuring out whether it is still functional to the level they require to
satisfy their users/customers demands.

We're already in this situation today - eg gate now tests Ubuntu 14.04
with libvirt 1.2.2, but many people shipping OpenStack are certainly not
running on libvirt 1.2.2. So this is just business as usual for distros
and downstream vendors.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [specs] how to continue spec discussion

2014-07-17 Thread Daniel P. Berrange
On Wed, Jul 16, 2014 at 03:30:56PM +0100, John Garbutt wrote:
 My intention was that once the specific project is open for K specs,
 people will restore their original patch set, and move the spec to the
 K directory, thus keeping all the history.
 
 For Nova, the open reviews, with a -2, are ones that are on the
 potential exception list, and so still might need some reviews. If
 they gain an exception, the -2 will be removed. The list of possible
 exceptions is currently included in bottom of this etherpad:
 https://etherpad.openstack.org/p/nova-juno-spec-priorities
 
 At some point we will open nova-specs for K, right now we are closed
 for all spec submissions. We already have more blueprints approved
 than we will be able to merge during the rest of Juno.
 
 The idea is that everyone can now focus more on fixing bugs, reviewing
 bug fixes, and reviewing remaining higher priority features, rather
 than reviewing designs for K features. It is syncing a lot of
 reviewers time looking at nova-specs, and it feels best to divert
 attention.
 
 We could leave the reviews open in gerrit, but we are trying hard to
 set expectations around the likelihood of being reviewed and/or
 accepted. In the past people have got very frustrated and
 complained about not finding out about what is happening (or not) with
 what they have up for reviews.

The main thing I'd say in favour of allowing people to submit
K^H^H^H^H^H Kilo specs now is that it gives everyone a good
heads up on what people are thinking about. We've had a number
of cases where people have basically been working on specs for
the same feature. If we don't open up Kilo specs now, people
will write their specs up & keep them private, meaning there is
less opportunity for people to discover they are working on the
same problem. IMHO anything we can do to facilitate collaboration
and communication between people with interests in the same area
is worthwhile.

I agree though, that if we do allow Kilo specs now, we should
explicitly state that we will not be reviewing them yet. Perhaps
we could even add a boilerplate comment to Kilo specs that are
submitted before review period opens to this effect so that
people aren't disappointed by lack of formal review.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Sean Dague
On 07/16/2014 05:08 PM, Mark McLoughlin wrote:
 On Wed, 2014-07-16 at 16:15 +0200, Sean Dague wrote:
 ..
 Based on these experiences, libvirt version differences seem to be as
 substantial as major hypervisor differences. There is a proposal here -
 https://review.openstack.org/#/c/103923/ to hold newer versions of
 libvirt to the same standard we hold xen, vmware, hyperv, docker,
 ironic, etc.
 
 That's a bit of a mis-characterization - in terms of functional test
 coverage, the libvirt driver is the bar that all the other drivers
 struggle to meet.
 
 And I doubt any of us pay too close attention to the feature coverage
 that the 3rd party CI test jobs have.
 
 I'm somewhat concerned that the -2 pile on in this review is a double
 standard of libvirt features, and features exploiting really new
 upstream features. I feel like a lot of the language being used here
 about the burden of doing this testing is exactly the same as was
 presented by the docker team before their driver was removed, which was
 ignored by the Nova team at the time.
 
 Personally, I wasn't very comfortable with the docker driver move. It
 certainly gave an outward impression that we're an unfriendly community.
 The mitigating factor was that a lot of friendly, collaborative,
 coaching work went on in the background for months. Expectations were
 communicated well in advance.
 
 Kicking the docker driver out of the tree has resulted in an uptick in
 the amount of work happening on it, but I suspect most people involved
 have a bad taste in their mouths. I guess there's incentives at play
 which mean they'll continue plugging away at it, but those incentives
 aren't always at play.

I agree. The whole history of the docker driver is sordid. The fact that
it was rushed in, was broken about 4 weeks later, the pleas for getting
the install path in devstack fixed were ignored, and it remained basically
broken until it was under threat of removal. I think there is a bad taste
in everyone's mouth around it.

 It was the concern raised by the freebsd
 team, which was also ignored; they were told to go land libvirt
 patches instead.

 I'm ok with us as a project changing our mind and deciding that the test
 bar needs to be taken down a notch or two because it's too burdensome to
 contributors and vendors, but if we are doing that, we need to do it for
 everyone. A lot of other organizations have put a ton of time and energy
 into this, and are carrying a maintenance cost of running these systems
 to get results back in a timely basis.
 
 I don't agree that we need to apply the same rules equally to everyone.
 
 At least part of the reasoning behind the emphasis on 3rd party CI
 testing was that projects (Neutron in particular) were being overwhelmed
 by contributions to drivers from developers who never contributed in any
 way to the core. The corollary of that is the contributors who do
 contribute to the core should be given a bit more leeway in return.
 
 There's a natural building of trust and element of human relationships
 here. As a reviewer, you learn to trust contributors with a good track
 record and perhaps prioritize contributions from them.

I agree with this. However, I'm not sure the current 3rd party CI
model fixed the issue. The folks doing CI work at most of these entities
aren't the developers in the core, and are often not even in the same
groups as the developers in the projects.

 As we seem deadlocked in the review, I think the mailing list is
 probably a better place for this.

 If we want to reduce the standards for libvirt we should reconsider
 what's being asked of 3rd party CI teams, and things like the docker
 driver, as well as the A, B, C driver classification. Because clearly
 libvirt 1.2.5+ isn't actually class A supported.
 
 No, there are features or code paths of the libvirt 1.2.5+ driver that
 aren't as well tested as the class A designation implies. And we have
 a proposal to make sure these aren't used by default:
 
   https://review.openstack.org/107119

That's interesting, I had not seen that go through. There is also auto
feature selection by qemu; do we need that as well?

 i.e. to stray off the class A path, an operator has to opt into it by
 changing a configuration option that explains they will be enabling code
 paths which aren't yet tested upstream.
 
 These features have value to some people now, they don't risk regressing
 the class A driver and there's a clear path to them being elevated to
 class A in time. We should value these contributions and nurture these
 contributors.
 
 Appending some of my comments from the review below. The tl;dr is that I
 think we're losing sight of the importance of welcoming and nurturing
 contributors, and valuing whatever contributions they can make. That
 terrifies me. 

Honestly, I agree, which is why I started this thread mostly on a level
playing field. Because we're now 6 months into the 3rd Party CI
requirements experiment for Nova, and while some things 

Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Daniel P. Berrange
On Thu, Jul 17, 2014 at 10:59:27AM +0200, Sean Dague wrote:
 We've talked about the 'CI the world plan' for a while, which this would
 be part of. That's a ton of work that no one is signed up for.
 
 But more importantly setting up and running the tests is < 10% of the
 time cost. Triage and fixing bugs long term is a real cost. As we've
 seen with the existing gate bugs we can't even close the bugs that are
 preventing ourselves from merging code -
 http://status.openstack.org/elastic-recheck/, so I'm not sure which band
 of magical elves we'd expect to debug and fix these things. :)

Yep, this is really a critical blocking problem we need to figure out and
resolve before we attempt any plan to make our testing requirements
stricter. Making our testing requirements stricter without first improving
our overall test reliability will inflict untold pain & misery on both our
code contributors and the people who are maintaining the CI systems.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Third Party CI systems don't email patch watchers or reviewers any more

2014-07-17 Thread Joe Gordon
We just updated gerrit to prevent third party CI comments from sending
emails to change reviewers and watchers. Instead, only the authors of the
change and those who starred it will be emailed.

https://gerrit-review.googlesource.com/Documentation/access-control.html#capability_emailReviewers
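For reference, the knob is the emailReviewers global capability; a sketch of
the corresponding project.config stanza in All-Projects (group name
hypothetical):

    [capability]
      emailReviewers = deny group Third-Party CI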


best,
Joe
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Integrated with iSCSI target Question

2014-07-17 Thread Johnson Cheng
Dear All,

I installed the iSCSI target on my controller node (IP: 192.168.106.20),
#iscsitarget open-iscsi iscsitarget-dkms

then modified my cinder.conf on the controller node as below,
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
#iscsi_helper = tgtadm
iscsi_helper = ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
#state_path = /var/lib/cinder
#lock_path = /var/lock/cinder
#volumes_dir = /var/lib/cinder/volumes
iscsi_ip_address=192.168.106.20

rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = demo

glance_host = controller

enabled_backends=lvmdriver-1,lvmdriver-2
[lvmdriver-1]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
[lvmdriver-2]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI_b
[database]
connection = mysql://cinder:demo@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = demo

Now I use the following command to create a cinder volume, and it can be 
created successfully.
# cinder create --volume-type lvm_controller --display-name vol 1

Unfortunately it seems it is not attached to an iSCSI LUN automatically, 
because I cannot discover it from the iSCSI initiator,
# iscsiadm -m discovery -t st -p 192.168.106.20

Am I missing something?


Regards,
Johnson


From: Manickam, Kanagaraj [mailto:kanagaraj.manic...@hp.com]
Sent: Thursday, July 17, 2014 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] Integrated with iSCSI target Question

I think it should be on the cinder node, which is usually deployed on the 
controller node.

From: Johnson Cheng [mailto:johnson.ch...@qsantechnology.com]
Sent: Thursday, July 17, 2014 10:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Cinder] Integrated with iSCSI target Question

Dear All,

I have three nodes, a controller node and two compute nodes(volume node).
The default value for iscsi_helper in cinder.conf is tgtadm; I will change it 
to ietadm to integrate with the iSCSI target.
Unfortunately I am not sure whether iscsitarget should be installed on the 
controller node or the compute node.
Is there any reference?


Regards,
Johnson

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Keystone multi-domain ldap + sql in Icehouse

2014-07-17 Thread foss geek
Dear All,

We are using LDAP as identity back end and SQL as assignment back end.

Now I am trying to evaluate Keystone multi-domain support with LDAP
(identity) + SQL (assignment)

Has anyone managed to set up an LDAP/SQL multi-domain environment in
Havana/Icehouse?

Does keystone have a suggested LDAP DIT for domains?

I have gone through the threads below, [1] and [2]; it seems Keystone
multi-domain with LDAP+SQL is not ready in Icehouse.

Hope someone will help.

Thanks for your time.

[1]http://www.gossamer-threads.com/lists/openstack/dev/37705

[2]http://lists.openstack.org/pipermail/openstack/2014-January/004900.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Third Party CI systems don't email patch watchers or reviewers any more

2014-07-17 Thread Robert Collins
<3

-Rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][CI] DB migration error

2014-07-17 Thread trinath.soman...@freescale.com
Hi Kevin-

The fix given in the bug report is not working for my CI. I think I need to 
wait for the real fix upstream.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

From: Kevin Benton [mailto:blak...@gmail.com]
Sent: Wednesday, July 16, 2014 10:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][CI] DB migration error


This bug is also affecting Ryu and the Big Switch CI.
There is a patch linked in the bug report to bump the version requirement for 
alembic, which should fix it. If we can't get that merged we may have to revert 
the healing patch.

https://bugs.launchpad.net/bugs/1342507
On Jul 16, 2014 9:27 AM, trinath.soman...@freescale.com wrote:
Hi-

With the neutron Update to my CI, I get the following error while configuring 
Neutron in devstack.

2014-07-16 16:12:06.349 | INFO  [alembic.autogenerate.compare] Detected server 
default on column 'poolmonitorassociations.status'
2014-07-16 16:12:06.411 | INFO  
[neutron.db.migration.alembic_migrations.heal_script] Detected added foreign 
key for column 'id' on table u'ml2_brocadeports'
2014-07-16 16:12:14.853 | Traceback (most recent call last):
2014-07-16 16:12:14.853 |   File /usr/local/bin/neutron-db-manage, line 10, in <module>
2014-07-16 16:12:14.853 | sys.exit(main())
2014-07-16 16:12:14.854 |   File 
/opt/stack/new/neutron/neutron/db/migration/cli.py, line 171, in main
2014-07-16 16:12:14.854 | CONF.command.func(config, CONF.command.name)
2014-07-16 16:12:14.854 |   File 
/opt/stack/new/neutron/neutron/db/migration/cli.py, line 85, in 
do_upgrade_downgrade
2014-07-16 16:12:14.854 | do_alembic_command(config, cmd, revision, 
sql=CONF.command.sql)
2014-07-16 16:12:14.854 |   File 
/opt/stack/new/neutron/neutron/db/migration/cli.py, line 63, in 
do_alembic_command
2014-07-16 16:12:14.854 | getattr(alembic_command, cmd)(config, *args, 
**kwargs)
2014-07-16 16:12:14.854 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/command.py, line 124, in 
upgrade
2014-07-16 16:12:14.854 | script.run_env()
2014-07-16 16:12:14.854 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/script.py, line 199, in run_env
2014-07-16 16:12:14.854 | util.load_python_file(self.dir, 'env.py')
2014-07-16 16:12:14.854 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/util.py, line 205, in 
load_python_file
2014-07-16 16:12:14.854 | module = load_module_py(module_id, path)
2014-07-16 16:12:14.854 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/compat.py, line 58, in 
load_module_py
2014-07-16 16:12:14.854 | mod = imp.load_source(module_id, path, fp)
2014-07-16 16:12:14.854 |   File 
/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py, line 
106, in <module>
2014-07-16 16:12:14.854 | run_migrations_online()
2014-07-16 16:12:14.855 |   File 
/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py, line 
90, in run_migrations_online
2014-07-16 16:12:14.855 | options=build_options())
2014-07-16 16:12:14.855 |   File <string>, line 7, in run_migrations
2014-07-16 16:12:14.855 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/environment.py, line 681, in 
run_migrations
2014-07-16 16:12:14.855 | self.get_context().run_migrations(**kw)
2014-07-16 16:12:14.855 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/migration.py, line 225, in 
run_migrations
2014-07-16 16:12:14.855 | change(**kw)
2014-07-16 16:12:14.856 |   File 
/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py,
 line 32, in upgrade
2014-07-16 16:12:14.856 | heal_script.heal()
2014-07-16 16:12:14.856 |   File 
/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 78, in heal
2014-07-16 16:12:14.856 | execute_alembic_command(el)
2014-07-16 16:12:14.856 |   File 
/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 93, in execute_alembic_command
2014-07-16 16:12:14.856 | parse_modify_command(command)
2014-07-16 16:12:14.856 |   File 
/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 126, in parse_modify_command
2014-07-16 16:12:14.856 | op.alter_column(table, column, **kwargs)
2014-07-16 16:12:14.856 |   File <string>, line 7, in alter_column
2014-07-16 16:12:14.856 |   File <string>, line 1, in <lambda>
2014-07-16 16:12:14.856 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/util.py, line 322, in go
2014-07-16 16:12:14.857 | return fn(*arg, **kw)
2014-07-16 16:12:14.857 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/operations.py, line 300, in 
alter_column
2014-07-16 16:12:14.857 | existing_autoincrement=existing_autoincrement
2014-07-16 16:12:14.857 |   File 

Re: [openstack-dev] [Neutron] minimal device driver for VPN

2014-07-17 Thread Julio Carlos Barrera Juez
I have __init__.py in the directory. Sorry, my code is not public, but I can
show you some contents; anyway, it is an experiment with no functional code.

My /etc/neutron/vpn_agent.ini:


   [DEFAULT]
   [vpnagent]
   # implementation location:
/opt/stack/neutron/neutron/services/vpn/junos_vpnaas/device_drivers/fake_device_driver.py
   
vpn_device_driver=neutron.services.vpn.junos_vpnaas.device_drivers.fake_device_driver.FakeDeviceDriver


FakeDeviceDriver is an empty class with a constructor, located in the file
/opt/stack/neutron/neutron/services/vpn/junos_vpnaas/device_drivers/fake_device_driver.py.

I don't have access to my devstack instance, but the error was
produced by the q-vpn service:

DeviceDriverImportError: Can not load driver
:neutron.services.vpn.junos_vpnaas.device_drivers.fake_device_driver.FakeDeviceDriver

I can provide full stack this afternoon.
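For completeness, this is roughly the shape I understand a minimal driver
needs (constructor signature and abstract methods as in the in-tree reference
drivers; an untested sketch):

    from neutron.services.vpn import device_drivers

    class FakeDeviceDriver(device_drivers.DeviceDriver):
        def __init__(self, agent, host):
            # the VPN agent instantiates drivers with (agent, host); a
            # no-arg constructor makes import_object blow up, which the
            # agent reports as DeviceDriverImportError
            super(FakeDeviceDriver, self).__init__(agent, host)

        def sync(self, context, processes):
            pass

        def create_router(self, process_id):
            pass

        def destroy_router(self, process_id):
            pass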

Thank you.


http://dana.i2cat.net   http://www.i2cat.net/en
Julio C. Barrera Juez
http://es.linkedin.com/in/jcbarrera/en
Office phone: (+34) 93 357 99 27 (ext. 527)
Office mobile phone: (+34) 625 66 77 26
Distributed Applications and Networks Area (DANA)
i2CAT Foundation, Barcelona


On 16 July 2014 20:59, Paul Michali (pcm) p...@cisco.com wrote:

 Do you have a repo with the code that is visible to the public?

 What does the /etc/neutron/vpn_agent.ini look like?

 Can you put the log output of the actual error messages seen?

 Regards,

 PCM (Paul Michali)

 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



 On Jul 16, 2014, at 2:43 PM, Julio Carlos Barrera Juez 
 juliocarlos.barr...@i2cat.net wrote:

 I have been fighting with this for months. I want to develop a VPN Neutron
 plugin, but it is almost impossible to work out how to achieve it. This is a
 thread I opened months ago in which Paul Michali helped me a lot:
 http://lists.openstack.org/pipermail/openstack-dev/2014-February/028389.html

 I want to know the minimum requirements to develop a device driver and a
 service driver for a VPN Neutron plugin. I tried adding an empty device
 driver and I got this error:

 DeviceDriverImportError: Can not load driver
 :neutron.services.vpn.junos_vpnaas.device_drivers.fake_device_driver.FakeDeviceDriver

 Both the Python file and the class exist, but the implementation is empty. What
 is the problem? What do I need to include in this file/class to avoid this
 error?

 Thank you.

  http://dana.i2cat.net/   http://www.i2cat.net/en
 Julio C. Barrera Juez
 http://es.linkedin.com/in/jcbarrera/en
 Office phone: (+34) 93 357 99 27 (ext. 527)
 Office mobile phone: (+34) 625 66 77 26
 Distributed Applications and Networks Area (DANA)
 i2CAT Foundation, Barcelona
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Keystone multi-domain ldap + sql in Icehouse

2014-07-17 Thread Henry Nash
Hi

So the bad news is that you are correct: multi-domain LDAP is not ready in 
Icehouse (it is marked as experimental, and it has serious flaws).  The good 
news is that this is fixed for Juno - the support has already been merged - 
and will be in the Juno milestone 2 release.  Here's the spec that describes 
the work done:

https://github.com/openstack/keystone-specs/blob/master/specs/juno/multi-backend-uuids.rst

This support uses the domain-specific config files approach that is already in 
Icehouse - so the way you define the LDAP parameters for each domain does not 
change.
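For reference, the shape of that configuration (option names as in Icehouse;
the paths and domain name are illustrative):

    # /etc/keystone/keystone.conf
    [identity]
    domain_specific_drivers_enabled = True
    domain_config_dir = /etc/keystone/domains

    # /etc/keystone/domains/keystone.<domain_name>.conf
    [identity]
    driver = keystone.identity.backends.ldap.Identity
    [ldap]
    url = ldap://ldap.example.com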

Henry
On 17 Jul 2014, at 10:52, foss geek thefossg...@gmail.com wrote:

 Dear All,
 
 We are using LDAP as identity back end and SQL as assignment back end.
 
 Now I am trying to evaluate Keystone multi-domain support with LDAP 
 (identity) + SQL (assignment)
 
 Does any one managed to setup LDAP/SQL multi-domain environment in 
 Havana/Icehouse?
 
 Does keystone have suggested LDAP DIT for domains?
 
 I gone through the below thread  [1] and [2], it seems Keystone multi-domain 
 with LDAP+SQL is not ready in Icehouse. 
 
 Hope some one will help.
 
 Thanks for your time. 
 
 [1]http://www.gossamer-threads.com/lists/openstack/dev/37705
 
 [2]http://lists.openstack.org/pipermail/openstack/2014-January/004900.html
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] [Swift] Composite Auth question

2014-07-17 Thread McCabe, Donagh
Hi,

I'm working on the Swift implications of using composite authorization [1] [2].

My question for Keystone developers is: what project-id do we expect the 
service token to be scoped to - the service's project or the end-user's 
project? When reviewing the Keystone spec, I had assumed the former. However, 
now that I'm looking at it in more detail, I would like to check my 
understanding.

The implications are:

1/ If scoped to the service's project, the role used must be exclusive to 
Glance/Cinder. I.e. an end-user must never be assigned this role. In effect, a 
role on one project grants the service user some privileges on every project.

2/ If scoped to the end-user's project, the glance/cinder service user must 
have a role on every project that uses them (including across domains); this 
seems infeasible.

Regards,

Donagh

[1] swift-specs: https://review.openstack.org/105228
[2] keystone-specs: https://review.openstack.org/#/c/96315/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] multiple backend issue

2014-07-17 Thread Johnson Cheng
Dear All,

I have two machines as below,
Machine1 (192.168.106.20): controller node (cinder node and volume node)
Machine2 (192.168.106.30): compute node (volume node)

I can successfully create a cinder volume, but there is an error in 
cinder-volume.log.
2014-07-17 18:49:01.105 5765 AUDIT cinder.service [-] Starting cinder-volume 
node (version 2014.1)
2014-07-17 18:49:01.113 5765 INFO cinder.volume.manager 
[req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Starting volume driver L
VMISCSIDriver (2.0.0)
2014-07-17 18:49:01.114 5764 AUDIT cinder.service [-] Starting cinder-volume 
node (version 2014.1)
2014-07-17 18:49:01.124 5764 INFO cinder.volume.manager 
[req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] Starting volume driver L
VMISCSIDriver (2.0.0)
2014-07-17 18:49:01.965 5765 ERROR cinder.volume.manager 
[req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Error encountered during initialization of driver: LVMISCSIDriver
2014-07-17 18:49:01.971 5765 ERROR cinder.volume.manager 
[req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Unexpected error while running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs 
--noheadings -o name cinder-volumes-2
Exit code: 5
Stdout: ''
Stderr: '  Volume group cinder-volumes-2 not found\n'
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Traceback (most recent 
call last):
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager   File 
/usr/lib/python2.7/dist-packages/cinder/volume/manager.py, line 243, in init_host
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager 
self.driver.check_for_setup_error()
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager   File 
/usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py, line 83, in check_for_setup_error
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager 
executor=self._execute)
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager   File 
/usr/lib/python2.7/dist-packages/cinder/brick/local_dev/lvm.py, line 81, in __init__
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager if 
self._vg_exists() is False:
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager   File 
/usr/lib/python2.7/dist-packages/cinder/brick/local_dev/lvm.py, line 106, in _vg_exists
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager self.vg_name, 
root_helper=self._root_helper, run_as_root=True)
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager   File 
/usr/lib/python2.7/dist-packages/cinder/utils.py, line 136, in execute
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager return 
processutils.execute(*cmd, **kwargs)
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager   File 
/usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py, line 173, in execute
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager cmd=' '.join(cmd))
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager ProcessExecutionError: 
Unexpected error while running command.
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Command: sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o name cinder-volumes-2
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Exit code: 5
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Stdout: ''
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Stderr: '  Volume 
group cinder-volumes-2 not found\n'
2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager
2014-07-17 18:49:03.236 5765 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connected to AMQP server on controller:5672
2014-07-17 18:49:03.890 5764 INFO cinder.volume.manager 
[req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] volume 5811b9af-b24a-44fe-a424-a61f011f7a4c: skipping export
2014-07-17 18:49:03.891 5764 INFO cinder.volume.manager 
[req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] volume 8266e05b-6c87-421a-a625-f5d6e94f2c9f: skipping export
2014-07-17 18:49:03.892 5764 INFO cinder.volume.manager 
[req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] Updating volume status
2014-07-17 18:49:04.081 5764 INFO oslo.messaging._drivers.impl_rabbit 
[req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] Connected to AMQP server on controller:5672
2014-07-17 18:49:04.136 5764 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connected to AMQP server on controller:5672
2014-07-17 18:49:18.258 5765 INFO cinder.volume.manager 
[req-00ee01b9-9601-42f5-baf7-169086ac53bb - - - - -] Updating volume status
2014-07-17 18:49:18.259 5765 WARNING cinder.volume.manager 
[req-00ee01b9-9601-42f5-baf7-169086ac53bb - - - - -] Unable to update stats, LVMISCSIDriver -2.0.0 (config name lvmdriver-2) driver is uninitialized.


Should I ignore it?

Here is my cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
#iscsi_helper = tgtadm
iscsi_helper = ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
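
(For comparison, a working multi-backend setup defines the second group
explicitly - a sketch, with backend section names taken from the log above
("config name lvmdriver-2") and everything else an assumption:

[DEFAULT]
enabled_backends = lvmdriver-1,lvmdriver-2

[lvmdriver-1]
volume_group = cinder-volumes
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI

[lvmdriver-2]
volume_group = cinder-volumes-2
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI_2

Every volume_group referenced must exist on the host running that backend.)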

Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

2014-07-17 Thread Sylvain Bauza
Le 17/07/2014 01:24, Robert Collins a écrit :
 On 15 July 2014 06:10, Jay Pipes jaypi...@gmail.com wrote:

 Frankly, I don't think a lot of the NFV use cases are well-defined.

 Even more frankly, I don't see any benefit to a split-out scheduler to a
 single NFV use case.


 Don't you see at each Summit the many talks (and people attending
 them) about how OpenStack should look at Pets vs. Cattle,
 saying that the scheduler should be out of Nova?

 There's been no concrete benefits discussed to having the scheduler outside
 of Nova.

 I don't really care how many people say that the scheduler should be out of
 Nova unless those same people come to the table with concrete reasons why.
 Just saying something is a benefit does not make it a benefit, and I think
 I've outlined some of the very real dangers -- in terms of code and payload
 complexity -- of breaking the scheduler out of Nova until the interfaces are
 cleaned up and the scheduler actually owns the resources upon which it
 exercises placement decisions.
 I agree with the risks if we get it wrong.

 In terms of benefits, I want to do cross-domain scheduling: 'Give me
 five Galera servers with no shared non-HA infrastructure and
 resiliency to no less than 2 separate failures'. By far the largest
 push back I get is 'how do I make Ironic pick the servers I want it
 to' when talking to ops folk about using Ironic. And when you dig into
 that, it falls into two buckets:
  - role based mappings (e.g. storage optimised vs cpu optimised) -
 which Ironic can trivially do
  - failure domain and performance domain optimisation
- which Nova cannot do at all today.

 I want this very very very badly, and I would love to be pushing
 directly on it, but its just under a few other key features like
 'working HA' and 'efficient updates' that sadly matter more in the
 short term.


I share your views on what the scheduler should be, once pushed out of Nova.
As I said, there are various concerns and requested features for the
scheduler that are missing here and now, and which are too big to fit
in Nova.

In order to make sure we're going in the right direction, we decided
during the last Gantt meeting to collect use cases where an external
scheduler could be interesting. Don't hesitate to add your baremetal
use cases (for deployment with TripleO or others) in there:
https://etherpad.openstack.org/p/SchedulerUseCases

Take it as a first attempt to identify what would be the mission
statement for Gantt, if you wish.



 Sorry, I'm not following you. Who is saying to Gantt I want to store this
 data?

 All I am saying is that the thing that places a resource on some provider of
 that resource should be the thing that owns the process of a requester
 *claiming* the resources on that provider, and in order to properly track
 resources in a race-free way in such a system, then the system needs to
 contain the resource tracker.
 Trying to translate that:
  - the scheduler (thing that places a resource)
  - should own the act of claiming a resource
  - to avoid races the scheduler should own the tracker

 So I think we need to acknowledge that Right Now we have massive races.
 We can choose where we put our efforts - we can try to fix them in the
 current architecture, we can try to fix them by changing the
 architecture.

 I think you agree that the current architecture is wrong; and that
 from a risk perspective the gantt extraction should not change the
 architecture - as part of making it graceful and cinder-like with
 immediate use by Nova.

 But once extracted the architecture can change - versioned APIs FTW.

 To my mind the key question is not whether the thing will be *better*
 with gantt extracted, it is whether it will be *no worse*, while
 simultaneously enabling a bunch of pent up demand in another part of
 the community.

 That seems hard to answer categorically, but it seems to me the key
 risk is whether changing the architecture will be too hard / unsafe
 post extraction.

 However in Nova it takes months and months to land things (and I'm not
 poking here - TripleO has the same issue at the moment) - I think
 there is a very real possibility that gantt can improve much faster
 and efficiently as a new project, once forklifted out. Patches to Nova
 to move to newer APIs can be posted and worked with while folk work on
 other bits of key plumbing like performance (e.g. not loading every
 host in the entire cloud into ram on every scheduling request),
 scalability (e.g. elegantly solving the current racy behaviour between
 different scheduler instances) and begin the work to expose the
 scheduler to neutron and cinder.


Unless I misunderstood (and that happens, I'm only human - and French
-), I'm giving a +2 to your statement: yes, there are race conditions
in the scheduler (and I saw the bug you filed and I'm hesitating to
handle it now), yes the scheduler is not that perfect now, yes we should
ensure that claiming a resource is 'ACID' 

Re: [openstack-dev] [Fuel] Neutron ML2 Blueprints

2014-07-17 Thread Vladimir Kuklin
Andrew, we have extended system tests passing with our current pacemaker
corosync code. Either it is your environment or some bug we cannot
reproduce. Also, it may be related to puppet ordering issues, i.e. trying to
start some services before others. As [2] is the only issue you are
pointing at now, let's create a bug and track it in Launchpad.


On Thu, Jul 17, 2014 at 11:20 AM, Andrew Woodward xar...@gmail.com wrote:

 [2] still has no positive progress; simply making puppet stop the
 services isn't all that useful, we will need to move towards always
 using override files
 [3] is closed as it hasn't occurred in two days
 [4] may be closed as it's not occurring in CI or in my testing anymore

 [5] is closed, was due to [7]

 [7] https://bugs.launchpad.net/puppet-neutron/+bug/1343009

 CI is passing CentOS now, and only failing Ubuntu in OSTF. This
 appears to be due to services not being properly managed in
 corosync/pacemaker

 On Tue, Jul 15, 2014 at 11:24 PM, Andrew Woodward xar...@gmail.com
 wrote:
  [2] appears to be made worse, if not caused, by neutron services
  autostarting on Debian; no patch yet, need to add a mechanism to the ha
  layer to generate override files.
  [3] appears to have stopped with this mornings master
  [4] deleting the cluster, and restarting mostly removed this, was
  getting issue with $::osnailyfacter::swift_partition/.. not existing
  (/var/lib/glance), but is fixed in rev 29
 
  [5] is still the critical issue blocking progress; I'm at a complete loss
  as to why this is occurring. Changes to ordering have no effect. Next
  steps probably involve pre-hacking keystone and neutron and
  nova-client to be more verbose about their key usage. As a hack we
  could simply restart neutron-server, but I'm not convinced the issue
  can't come back since we don't know how it started.
 
 
 
  On Tue, Jul 15, 2014 at 6:34 AM, Sergey Vasilenko
  svasile...@mirantis.com wrote:
  [1] fixed in https://review.openstack.org/#/c/107046/
  Thanks for report a bug.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Andrew
  Mirantis
  Ceph community



 --
 Andrew
 Mirantis
 Ceph community

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] multiple backend issue

2014-07-17 Thread git harry
You are using multibackend but it appears you haven't created both volume 
groups:

Stderr: ' Volume group cinder-volumes-2 not found\n'

The fact that you can create volumes suggests the other backend is correctly 
configured. So you can ignore the error if you want, but you will not be able 
to use the second backend you have attempted to set up.
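
The usual fix is to create the missing group - a sketch, where the backing
device /dev/sdb is an assumption (use whatever spare disk or loopback device
you have):

  pvcreate /dev/sdb
  vgcreate cinder-volumes-2 /dev/sdb

then restart cinder-volume so the driver re-runs its setup check.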


 From: johnson.ch...@qsantechnology.com 
 To: openstack-dev@lists.openstack.org 
 Date: Thu, 17 Jul 2014 11:03:41 + 
 Subject: [openstack-dev] [Cinder] multiple backend issue 
 
 
 Dear All, 
 
 
 
 I have two machines as below, 
 
 Machine1 (192.168.106.20): controller node (cinder node and 
 volume node) 
 
 Machine2 (192.168.106.30): compute node (volume node) 
 
 
 
 I can successfully create a cinder volume, but there is an error in 
 cinder-volume.log. 
 
 2014-07-17 18:49:01.105 5765 AUDIT cinder.service [-] Starting cinder-volume node (version 2014.1)
 2014-07-17 18:49:01.113 5765 INFO cinder.volume.manager [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Starting volume driver LVMISCSIDriver (2.0.0)
 2014-07-17 18:49:01.114 5764 AUDIT cinder.service [-] Starting cinder-volume node (version 2014.1)
 2014-07-17 18:49:01.124 5764 INFO cinder.volume.manager [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] Starting volume driver LVMISCSIDriver (2.0.0)
 2014-07-17 18:49:01.965 5765 ERROR cinder.volume.manager [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Error encountered during initialization of driver: LVMISCSIDriver
 2014-07-17 18:49:01.971 5765 ERROR cinder.volume.manager [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Unexpected error while running command.
 Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o name cinder-volumes-2
 Exit code: 5
 Stdout: ''
 Stderr: '  Volume group cinder-volumes-2 not found\n'
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Traceback (most recent call last):
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager   File /usr/lib/python2.7/dist-packages/cinder/volume/manager.py, line 243, in init_host
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager     self.driver.check_for_setup_error()
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager   File /usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py, line 83, in check_for_setup_error
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager     executor=self._execute)
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager   File /usr/lib/python2.7/dist-packages/cinder/brick/local_dev/lvm.py, line 81, in __init__
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager     if self._vg_exists() is False:
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager   File /usr/lib/python2.7/dist-packages/cinder/brick/local_dev/lvm.py, line 106, in _vg_exists
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager     self.vg_name, root_helper=self._root_helper, run_as_root=True)
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager   File /usr/lib/python2.7/dist-packages/cinder/utils.py, line 136, in execute
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager     return processutils.execute(*cmd, **kwargs)
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager   File /usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py, line 173, in execute
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager     cmd=' '.join(cmd))
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager ProcessExecutionError: Unexpected error while running command.
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o name cinder-volumes-2
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Exit code: 5
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Stdout: ''
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Stderr: '  Volume group cinder-volumes-2 not found\n'
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager
 2014-07-17 18:49:03.236 5765 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672
 2014-07-17 18:49:03.890 5764 INFO cinder.volume.manager [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] volume 5811b9af-b24a-44fe-a424-a61f011f7a4c: skipping export
 2014-07-17 18:49:03.891 5764 INFO cinder.volume.manager [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] volume 8266e05b-6c87-421a-a625-f5d6e94f2c9f: skipping export
 2014-07-17 18:49:03.892 5764 INFO cinder.volume.manager [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] Updating volume status
 2014-07-17 18:49:04.081 5764 INFO oslo.messaging._drivers.impl_rabbit [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] Connected to 

Re: [openstack-dev] [neutron] [nova] Weekly nova-network / neutron parity meeting

2014-07-17 Thread Thierry Carrez
Kyle Mestery wrote:
 As we're getting down to the wire in Juno, I'd like to propose we have
 a weekly meeting on the nova-network and neutron parity effort. I'd
 like to start this meeting next week, and I'd like to propose
 Wednesday at 1500 UTC on #openstack-meeting-3 as the time and
 location.

This conflicts on the agenda with the PHP SDK Team Meeting.
That said, I'm not sure they still run this one regularly.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Mark McLoughlin
On Thu, 2014-07-17 at 09:58 +0100, Daniel P. Berrange wrote:
 On Thu, Jul 17, 2014 at 08:46:12AM +1000, Michael Still wrote:
  Top posting to the original email because I want this to stand out...
  
  I've added this to the agenda for the nova mid cycle meetup, I think
  most of the contributors to this thread will be there. So, if we can
  nail this down here then that's great, but if we think we'd be more
  productive in person chatting about this then we have that option too.
 
 FYI, I'm afraid I won't be at the mid-cycle meetup since it clashed with
 my being on holiday. So I'd really prefer if we keep the discussion on
 this mailing list where everyone has a chance to participate.

Same here. Pre-arranged vacation, otherwise I'd have been there.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor framework proposal

2014-07-17 Thread Eugene Nikanorov
Folks,

Initial implementation is here: https://review.openstack.org/#/c/105982/
It's pretty much complete (in terms of code parts) but may require some
adjustments and of course fixes.

I'm working on the client to test this end-to-end.

Thanks,
Eugene.

P.S. Almost got it under 1k lines!



On Thu, Jul 17, 2014 at 7:36 AM, Stephen Balukoff sbaluk...@bluebox.net
wrote:

 Hi Salvatore!

 Thank you for reading through my book-length e-mail and responding to all
 my points!

 Unfortunately, I have more responses for you, inline:

 On Wed, Jul 16, 2014 at 4:22 PM, Salvatore Orlando sorla...@nicira.com
 wrote:

 Hi Stephen,

 Thanks for your exhaustive comments!


 I'm always happy to exhaust others with my comments. ;)


 I think your points are true and valid for most cloud operators; besides
 the first, all the points you provided indeed pertain to operators and vendors.
 However you can't prove, I think, the opposite - that is to say that no
 cloud operator will find multi-service flavors useful. At the end of the
 day OpenStack is always about choice - in this case the choice of having
 flavours spanning services or flavours limited to a single service.
 This discussion however will just end up slowly drifting into the realm
 of the theoretical and hypothetical and therefore won't bring anything good
 to our cause. Who knows, in a few posts we might just end up invoking Godwin's
 law!


 That's certainly true.  But would you be willing to agree that both the
 model and logic behind single-service_type flavors is likely to be simpler
 to implement, troubleshoot, and maintain than multi-service_type flavors?

 If you agree, then I would say: Let's go with single-service_type flavors
 for now so that we can actually get an implementation done by Juno (and
 thus free up development that is currently being blocked by lack of
 flavors), and leave the more complicated multi-service_type flavors for
 some later date when there's a more obvious need for them.

 For what it's worth, I'm not against multi-service_type flavors if someone
 can come up with a good usage scenario that is best solved using the same.
 But I think it's more complication than we want or need right now, and
 shooting for it now is likely to ensure we wouldn't get flavors in time for
 Juno.



   There are other considerations which could be made, but since they're
 dependent on features which do not yet exist (NFV, service insertion,
 chaining and steering) I think there is no point in arguing over it.


 Agreed. Though, I don't think single-service flavors paint us into a
 corner here at all. Again, things get complicated enough when it comes to
 service insertion, chaining, steering, etc. that what we'll really need at
 that point is actual orchestration. Flavors alone will not solve these
 problems, and orchestration can work with many single-service flavors to
 provide the illusion of multi-service flavors.


 Don't take it the wrong way - but this is what I mean by theoretical and
 hypothetical. I agree with you. I think that's totally possible. But there
 are so many pieces still missing from the puzzle that this
 discussion is probably worthless. Anyway, I started it, and I'm the one to
 be punished for it!


 Hah! Indeed. Ok, I'll stop speculating down that path for now, eh. ;)


 In conclusion I think the idea makes sense, and is a minor twist in the
 current design which should neither make the feature too complex nor
 prevent any other use case for which the flavours are being conceived. For
 the very same reason however, it is worth noting that this is surely not an
 aspect which will cause me or somebody else to put a veto on this work 
 item.


 I don't think this is a minor twist in the current design, actually:
 * We'll have to deal with cases like the above where no valid service
 profiles can be found for a given kind of flavor (which we can avoid
 entirely if a flavor can have service profiles valid for only one kind of
 service).


 Point taken, but it does not require a major change to the design, since a
 service flavour like this should probably be caught by a validation
 routine. Still, you'd need more pervasive validation at different points of
 the API.


 ... which sounds like significantly more complication to me. But at this
 point, we're arguing over what a minor twist is, which is not likely to
 lead to anywhere useful...


  * When and if tags/capabilities/extensions get introduced, we would
 need to provide an additional capabilities list on the service profiles in
 order to be able to select which service profiles provide the capabilities
 requested.


 Might be... but I don't see how that would be worse with multiple service
 types, especially if profiles are grouped by type.


 Presumably, with single-service_type flavors, all service profiles
 associated with the flavor should be capable of providing all the features
 advertised as being provided by the flavor (first in the 'description' and
 possibly 

Re: [openstack-dev] [Neutron] minimal device driver for VPN

2014-07-17 Thread Paul Michali (pcm)
See line @PCM


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Jul 17, 2014, at 6:32 AM, Julio Carlos Barrera Juez 
juliocarlos.barr...@i2cat.net wrote:

 I have __init__.py in the directory. Sorry, my code is not public, but I can 
 show you some contents; anyway, it is an experiment with no functional code.

@PCM Could you provide a patch with the files so we could patch it into a local 
repo and try things? I’m assuming since it is an experiment with no functional 
code that maybe there’s nothing proprietary? :)



 
 My /etc/neutron/vpn_agent.ini:
 
 
 [DEFAULT]
 
 [vpnagent]
 # implementation location: 
 /opt/stack/neutron/neutron/services/vpn/junos_vpnaas/device_drivers/fake_device_driver.py
 vpn_device_driver=neutron.services.vpn.junos_vpnaas.device_drivers.fake_device_driver.FakeDeviceDriver

@PCM Hmmm… Just a wild guess... I'm wondering if this is the issue. Your class 
would need to inherit from the base device driver class. Does your 
fake_device_driver.py have the correct import paths? I say that because your 
hierarchy is different. For example, the layout currently is…

neutron/services/vpn/  - plugin
neutron/services/vpn/device_drivers/ - reference and Cisco device drivers
neutron/services/vpn/service_drivers/ - reference and Cisco service drivers

Your hierarchy has another level…

neutron/services/vpn/junos_vpnaas/device_drivers/

I’m wondering if there is some import wrong. For example, the reference device 
driver has:

from neutron.services.vpn import device_drivers
…
@six.add_metaclass(abc.ABCMeta)
class IPsecDriver(device_drivers.DeviceDriver):
VPN Device Driver for IPSec.

Where the import is used to access the base class DeviceDriver. If you’re doing 
the same, that file may be failing to load.
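
For reference, roughly the smallest driver that should satisfy the loader
looks like this (the abstract method set below is from my reading of the
current device_drivers.DeviceDriver base class - double-check it against
your tree):

from neutron.services.vpn import device_drivers


class FakeDeviceDriver(device_drivers.DeviceDriver):
    """Do-nothing VPN device driver that can still be loaded."""

    def __init__(self, agent, host):
        pass

    def sync(self, context, processes):
        pass

    def create_router(self, process_id):
        pass

    def destroy_router(self, process_id):
        pass

A class that doesn't subclass DeviceDriver, or leaves one of the abstract
methods unimplemented, will blow up when the agent instantiates it and
surface as the DeviceDriverImportError you're seeing.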

Regards,

PCM

 
 
 
 FakeDeviceDriver is an empty class with a constructor located in file 
 /opt/stack/neutron/neutron/services/vpn/junos_vpnaas/device_drivers/fake_device_driver.py.
 
 
 
 I don't have access to my devstack instance, but the error was produced in 
 the q-vpn service:
 DeviceDriverImportError: Can not load driver 
 :neutron.services.vpn.junos_vpnaas.device_drivers.fake_device_driver.FakeDeviceDriver
 
 
 I can provide full stack this afternoon.
 
 
 
 Thank you.
 
 
 
   
 Julio C. Barrera Juez  
 Office phone: (+34) 93 357 99 27 (ext. 527)
 Office mobile phone: (+34) 625 66 77 26
 Distributed Applications and Networks Area (DANA)
 i2CAT Foundation, Barcelona
 
 
 On 16 July 2014 20:59, Paul Michali (pcm) p...@cisco.com wrote:
 Do you have a repo with the code that is visible to the public?
 
 What does the /etc/neutron/vpn_agent.ini look like?
 
 Can you put the log output of the actual error messages seen?
 
 Regards,
 
 PCM (Paul Michali)
 
 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
 
 On Jul 16, 2014, at 2:43 PM, Julio Carlos Barrera Juez 
 juliocarlos.barr...@i2cat.net wrote:
 
 I have been fighting with this for months. I want to develop a VPN Neutron plugin, 
 but it is almost impossible to figure out how to achieve it. This is a thread I 
 opened months ago and Paul Michali helped me a lot: 
 http://lists.openstack.org/pipermail/openstack-dev/2014-February/028389.html
 
 I want to know the minimum requirements to develop a device driver and a 
 service driver for a VPN Neutron plugin. I tried adding an empty device 
 driver and I got this error:
 
 DeviceDriverImportError: Can not load driver 
 :neutron.services.vpn.junos_vpnaas.device_drivers.fake_device_driver.FakeDeviceDriver
 
 Both the Python file and the class exist, but the implementation is empty. What is 
 the problem? What do I need to include in this file/class to avoid this error?
 
 Thank you.
 
   
 Julio C. Barrera Juez  
 Office phone: (+34) 93 357 99 27 (ext. 527)
 Office mobile phone: (+34) 625 66 77 26
 Distributed Applications and Networks Area (DANA)
 i2CAT Foundation, Barcelona
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] request to tag novaclient 2.18.0

2014-07-17 Thread Joe Gordon
On Wed, Jul 16, 2014 at 11:28 PM, Steve Baker sba...@redhat.com wrote:

  On 12/07/14 09:25, Joe Gordon wrote:




 On Fri, Jul 11, 2014 at 4:42 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-07-11 11:21:19 +0200 (+0200), Matthias Runge wrote:
  this broke horizon stable and master; heat stable is affected as
  well.
  [...]

 I guess this is a plea for applying something like the oslotest
 framework to client libraries so they get backward-compat jobs run
 against unit tests of all dependant/consuming software... branchless
 tempest already alleviates some of this, but not the case of changes
 in a library which will break unit/functional tests of another
 project.


 We actually do have some tests for backwards compatibility, and they all
 passed. Presumably because both heat and horizon have poor integration tests.

  We ran


- check-tempest-dsvm-full-havana

 http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-full-havana/8e09faa
 SUCCESS in 40m 47s (non-voting)
- check-tempest-dsvm-neutron-havana

 http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-neutron-havana/b4ad019
 SUCCESS in 36m 17s (non-voting)
- check-tempest-dsvm-full-icehouse

 http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-full-icehouse/c0c62e5
 SUCCESS in 53m 05s
- check-tempest-dsvm-neutron-icehouse

 http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-neutron-icehouse/a54aedb
 SUCCESS in 57m 28s


  on the offending patches (https://review.openstack.org/#/c/94166/)


  Infra patch that added these tests:
 https://review.openstack.org/#/c/80698/


   Heat-proper would have continued working fine with novaclient 2.18.0.
 The regression was with raising novaclient exceptions, which is only
 required in our unit tests. I saw this break coming and switched to raising
 via from_response
 https://review.openstack.org/#/c/97977/22/heat/tests/v1_1/fakes.py
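
 For anyone else fixing up fakes: the idea is to build exceptions through the
 public factory instead of their constructors. A rough sketch - the fake
 response attributes are assumptions, check from_response() in your
 novaclient for what it actually reads:

   from novaclient import exceptions

   class FakeResponse(object):
       status_code = 404
       headers = {}

   # from_response() picks the exception subclass matching the status
   # code, so tests stop depending on exception __init__ signatures
   raise exceptions.from_response(
       FakeResponse(), {'itemNotFound': {'message': 'nope'}},
       'http://nova/servers/x', 'GET')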

 Unit tests tend to deal with more internals of client libraries just for
 mocking purposes, and there have been multiple breaks in unit tests for
 heat and horizon when client libraries make internal changes.

 This could be avoided if the client gate jobs run the unit tests for the
 projects which consume them.


That may work but isn't this exactly what integration testing is for?


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Heat Db Model updates

2014-07-17 Thread Ryan Brown

On 07/17/2014 03:33 AM, Steven Hardy wrote:

On Thu, Jul 17, 2014 at 12:31:05AM -0400, Zane Bitter wrote:

On 16/07/14 23:48, Manickam, Kanagaraj wrote:

SNIP
*Resource*

Status  action should be enum of predefined status


+1


Rsrc_metadata - make full name resource_metadata


-0. I don't see any benefit here.


Agreed



I'd actually be in favor of the change from rsrc to resource; I feel like 
rsrc is a pretty opaque abbreviation.


--
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][CI] DB migration error

2014-07-17 Thread Jakub Libosvar
On 07/17/2014 12:18 PM, trinath.soman...@freescale.com wrote:
 Hi Kevin-
 
  
 
 The fix given in the bug report is not working for my CI. I think I need
 to wait for the real fix to land upstream.
What version of the alembic library did you have at the time of the error?
Are you sure you re-ran
pip install -r requirements.txt after you changed the version?
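
For example (assuming a pip-installed alembic; substitute whatever minimum
version the review mentioned in the bug report settles on):

  pip show alembic                 # check the installed alembic version
  pip install -r requirements.txt  # reinstall after requirements.txt changes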

Kuba
 
  
 
 --
 
 Trinath Somanchi - B39208
 
 trinath.soman...@freescale.com| extn: 4048
 
  
 
 *From:*Kevin Benton [mailto:blak...@gmail.com]
 *Sent:* Wednesday, July 16, 2014 10:01 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][CI] DB migration error
 
  
 
 This bug is also affecting Ryu and the Big Switch CI.
 There is a patch to bump the version requirement for alembic linked in
 the bug report that should fix it. If we can't get that merged we may
 have to revert the healing patch.
 
 https://bugs.launchpad.net/bugs/1342507
 
 On Jul 16, 2014 9:27 AM, trinath.soman...@freescale.com wrote:
 
 Hi-
 
  
 
 With the neutron Update to my CI, I get the following error while
 configuring Neutron in devstack.
 
  
 
 2014-07-16 16:12:06.349 | INFO  [alembic.autogenerate.compare]
 Detected server default on column 'poolmonitorassociations.status'
 
 2014-07-16 16:12:06.411 | INFO 
 [neutron.db.migration.alembic_migrations.heal_script] Detected added
 foreign key for column 'id' on table u'ml2_brocadeports'
 
 2014-07-16 16:12:14.853 | Traceback (most recent call last):
 
 2014-07-16 16:12:14.853 |   File /usr/local/bin/neutron-db-manage,
 line 10, in module
 
 2014-07-16 16:12:14.853 | sys.exit(main())
 
 2014-07-16 16:12:14.854 |   File
 /opt/stack/new/neutron/neutron/db/migration/cli.py, line 171, in main
 
 2014-07-16 16:12:14.854 | CONF.command.func(config,
 CONF.command.name)
 
 2014-07-16 16:12:14.854 |   File
 /opt/stack/new/neutron/neutron/db/migration/cli.py, line 85, in
 do_upgrade_downgrade
 
 2014-07-16 16:12:14.854 | do_alembic_command(config, cmd,
 revision, sql=CONF.command.sql)
 
 2014-07-16 16:12:14.854 |   File
 /opt/stack/new/neutron/neutron/db/migration/cli.py, line 63, in
 do_alembic_command
 
 2014-07-16 16:12:14.854 | getattr(alembic_command, cmd)(config,
 *args, **kwargs)
 
 2014-07-16 16:12:14.854 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/command.py, line
 124, in upgrade
 
 2014-07-16 16:12:14.854 | script.run_env()
 
 2014-07-16 16:12:14.854 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/script.py, line
 199, in run_env
 
 2014-07-16 16:12:14.854 | util.load_python_file(self.dir, 'env.py')
 
 2014-07-16 16:12:14.854 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/util.py, line 205,
 in load_python_file
 
 2014-07-16 16:12:14.854 | module = load_module_py(module_id, path)
 
 2014-07-16 16:12:14.854 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/compat.py, line 58,
 in load_module_py
 
 2014-07-16 16:12:14.854 | mod = imp.load_source(module_id, path, fp)
 
 2014-07-16 16:12:14.854 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py,
 line 106, in module
 
 2014-07-16 16:12:14.854 | run_migrations_online()
 
 2014-07-16 16:12:14.855 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py,
 line 90, in run_migrations_online
 
 2014-07-16 16:12:14.855 | options=build_options())
 
 2014-07-16 16:12:14.855 |   File string, line 7, in run_migrations
 
 2014-07-16 16:12:14.855 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/environment.py,
 line 681, in run_migrations
 
 2014-07-16 16:12:14.855 | self.get_context().run_migrations(**kw)
 
 2014-07-16 16:12:14.855 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/migration.py, line
 225, in run_migrations
 
 2014-07-16 16:12:14.855 | change(**kw)
 
 2014-07-16 16:12:14.856 |   File
 
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py,
 line 32, in upgrade
 
 2014-07-16 16:12:14.856 | heal_script.heal()
 
 2014-07-16 16:12:14.856 |   File
 
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 78, in heal
 
 2014-07-16 16:12:14.856 | execute_alembic_command(el)
 
 2014-07-16 16:12:14.856 |   File
 
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 93, in execute_alembic_command
 
 2014-07-16 16:12:14.856 | parse_modify_command(command)
 
 2014-07-16 16:12:14.856 |   File
 
 

Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-17 Thread Joe Gordon
On Wed, Jul 16, 2014 at 5:07 PM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 Reposted now with a lot fewer bad quote issues. Thanks for being patient
 with the re-send!

 --
 From: Joe Gordon joe.gord...@gmail.com
 Reply: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: July 16, 2014 at 02:27:42
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [devstack][keystone] Devstack, auth_token
 and keystone v3

  On Tue, Jul 15, 2014 at 7:20 AM, Morgan Fainberg
  wrote:
 
  
  
   On Tuesday, July 15, 2014, Steven Hardy wrote:
  
   On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
On 07/14/2014 11:47 AM, Steven Hardy wrote:
Hi all,

I'm probably missing something, but can anyone please tell me when
   devstack
will be moving to keystone v3, and in particular when API
 auth_token
   will
be configured such that auth_version is v3.0 by default?

Some months ago, I posted this patch, which switched auth_version
 to
   v3.0
for Heat:

https://review.openstack.org/#/c/80341/

That patch was nack'd because there was apparently some version
   discovery
code coming which would handle it, but AFAICS I still have to
 manually
configure auth_version to v3.0 in the heat.conf for our API to work
properly with requests from domains other than the default.

The same issue is observed if you try to use non-default-domains
 via
python-heatclient using this soon-to-be-merged patch:

https://review.openstack.org/#/c/92728/

Can anyone enlighten me here, are we making a global devstack move
 to
   the
non-deprecated v3 keystone API, or do I need to revive this
 devstack
   patch?

The issue for Heat is we support notifications from stack domain
   users,
who are created in a heat-specific domain, thus won't work if the
auth_token middleware is configured to use the v2 keystone API.

Thanks for any information :)

Steve
There are reviews out there in client land now that should work. I
 was
testing discover just now and it seems to be doing the right thing.
 If
   the
AUTH_URL is chopped of the V2.0 or V3 the client should be able to
   handle
everything from there on forward.
  
   Perhaps I should restate my problem, as I think perhaps we still have
   crossed wires:
  
   - Certain configurations of Heat *only* work with v3 tokens, because
 we
   create users in a non-default domain
   - Current devstack still configures versioned endpoints, with v2.0
   keystone
   - Heat breaks in some circumstances on current devstack because of
 this.
   - Adding auth_version='v3.0' to the auth_token section of heat.conf
 fixes
   the problem.
  
   So, back in March, client changes were promised to fix this problem,
 and
   now, in July, they still have not - do I revive my patch, or are
 fixes for
   this really imminent this time?
  
   Basically I need the auth_token middleware to accept a v3 token for a
 user
   in a non-default domain, e.g validate it *always* with the v3 API not
   v2.0,
   even if the endpoint is still configured versioned to v2.0.
  
   Sorry to labour the point, but it's frustrating to see this still
 broken
   so long after I proposed a fix and it was rejected.
  
  
   We just did a test converting over the default to v3 (and falling back
 to
   v2 as needed, yes fallback will still be needed) yesterday (Dolph
 posted a
   couple of test patches and they seemed to succeed - yay!!) It looks
 like it
   will just work. Now there is a big caveate, this default will only
 change
   in the keystone middleware project, and it needs to have a patch or
 three
   get through gate converting projects over to use it before we accept
 the
   code.
  
   Nova has approved the patch to switch over, it is just fighting with
 Gate.
   Other patches are proposed for other projects and are in various
 states of
   approval.
  
 
  I assume you mean switch over to keystone middleware project [0], not

 Correct, switch to middleware (a requirement before we landed this patch
 in middleware). I was unclear in that statement. Sorry didn’t mean to make
 anyone jumpy that something was approved in Nova that shouldn’t have been
 or that did massive re-workings internal to Nova.

  switch over to keystone v3. Based on [1] my understanding is no changes
 to
  nova are needed to use the v2 compatible parts of the v3 API, But are
  changes needed to support domains or is this not a problem because the
 auth
  middleware uses uuids for user_id and project_id, so nova doesn't need to
  have any concept of domains? Are any nova changes needed to support the
 v3
  API?
 

 This change simply makes it so the middleware will prefer v3 over v2 if
 both are available
 for validating UUID tokens and fetching 

Re: [openstack-dev] [cinder][nova] cinder querying nova-api

2014-07-17 Thread Duncan Thomas
On 17 July 2014 08:36, Abbass MAROUNI abbass.maro...@virtualscale.fr wrote:
 Thanks Thomas,

 What I'm trying to achieve is the following:
 to be able to create a VM on a host (that's a compute and volume host at the
 same time), then call cinder and let it find the host and create and attach
 volumes there.

 I guess the biggest problem is to be able to identify the host, as you said
 the host_id is of little use here. Unless I try to query the nova API as
 admin and get a list of hypervisors and their VMs. But then I'll have to
 match the nova host name with the cinder host name.

 Any thoughts on this ? Is there any blueprint for attaching local volumes in
 cinder ?

You're far from the only person trying to achieve that result, there's
plenty of interest in it, and a few approaches being suggested.

I can't see a current cinder spec open for this, so it might be worth
you starting one, you're more likely to get detailed feedback about
the pitfalls there.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Sean Dague
On 07/17/2014 02:13 PM, Mark McLoughlin wrote:
 On Thu, 2014-07-17 at 09:58 +0100, Daniel P. Berrange wrote:
 On Thu, Jul 17, 2014 at 08:46:12AM +1000, Michael Still wrote:
 Top posting to the original email because I want this to stand out...

 I've added this to the agenda for the nova mid cycle meetup, I think
 most of the contributors to this thread will be there. So, if we can
 nail this down here then that's great, but if we think we'd be more
 productive in person chatting about this then we have that option too.

 FYI, I'm afraid I won't be at the mid-cycle meetup since it clashed with
 my being on holiday. So I'd really prefer if we keep the discussion on
 this mailing list where everyone has a chance to participate.
 
 Same here. Pre-arranged vacation, otherwise I'd have been there.

I'll be there, but I agree that we should do this somewhere we have a
record for later. Recorded memory is important so that in a year's time
whatever reasoning we came to is somewhere we can look up in the archives.

Which is also why I think this ought to remain on email and not IRC, as
we have a record of it here.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Sean Dague
On 07/16/2014 08:15 PM, Eric Windisch wrote:
 
 
 
 On Wed, Jul 16, 2014 at 12:55 PM, Roman Bogorodskiy
 rbogorods...@mirantis.com wrote:
 
   Eric Windisch wrote:
 
  This thread highlights more deeply the problems for the FreeBSD folks.
  First, I still disagree with the recommendation that they
 contribute to
  libvirt. It's a classic example of creating two or more problems
 from one.
  Once they have support in libvirt, how long before their code is in a
  version of libvirt acceptable to Nova? When they hit edge-cases or
 bugs,
  requiring changes in libvirt, how long before those fixes are
 accepted by
  Nova?
 
 Could you please elaborate on why you disagree with the contributing patches
 to libvirt approach, and what alternative approach do you propose?
 
 
 I don't necessarily disagree with contributing patches to libvirt. I
 believe that the current system makes it difficult to perform quick,
 iterative development. I wish to see this thread attempt to solve that
 problem and reduce the barrier to getting stuff done.
  
 
 Also, could you please elaborate on what is 'version of libvirt
 acceptable to Nova'? Cannot we just say that e.g. Nova requires libvirt
 X.Y to be deployed on FreeBSD?
 
 
 This is precisely my point, that we need to support different versions
 of libvirt and to test those versions. If we're going to support
  different versions of libvirt on FreeBSD, Ubuntu, and RedHat - those
 should be tested, possibly as third-party options.
 
 The primary testing path for libvirt upstream should be with the latest
 stable release with a non-voting test against trunk. There might be
 value in testing against a development snapshot as well, where we know
 there are features we want in an unreleased version of libvirt but where
 we cannot trust trunk to be stable enough for gate.
  
 
 Anyway, speaking about FreeBSD support I assume we actually talking
 about Bhyve support. I think it'd be good to break the task and
 implement FreeBSD support for libvirt/Qemu first
 
 
 I believe Sean was referring to Bhyve support; this is how I
 interpreted it.

Yes, I meant Bhyve.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [specs] how to continue spec discussion

2014-07-17 Thread Russell Bryant
On 07/16/2014 10:30 AM, John Garbutt wrote:
 On 16 July 2014 14:07, Thierry Carrez thie...@openstack.org wrote:
 Daniel P. Berrange wrote:
 On Wed, Jul 16, 2014 at 11:57:33AM +, Tim Bell wrote:
 It seems a pity to archive the comments and reviewer lists along
 with losing a place to continue the discussions even if we are not
 expecting to see code in Juno.
 
 Agreed we should keep those comments.
 
 Agreed, that is sub-optimal to say the least.

 The spec documents themselves are in a release specific directory
 though. Any which are to be postponed to Kxxx would need to move
 into a specs/k directory instead of specs/juno, but we don't
 know what the k directory needs to be called yet :-(

 The poll ends in 18 hours, so that should no longer be a blocker :)
 
 Aww, there goes our lame excuse for punting making a decision on this.
 
 I think what we really don't want is to abandon those specs and lose
 comments and history... but we want to shelve them in a place where they
 do not interrupt core developers' workflow as they concentrate on Juno
 work. It will be difficult to efficiently ignore them if they are filed
 in a next or a kxxx directory, as they would still clutter /most/ Gerrit
 views.
 
 +1
 
 My intention was that once the specific project is open for K specs,
 people will restore their original patch set, and move the spec to the
 K directory, thus keeping all the history.
 
 For Nova, the open reviews, with a -2, are ones that are on the
 potential exception list, and so still might need some reviews. If
 they gain an exception, the -2 will be removed. The list of possible
 exceptions is currently included in bottom of this etherpad:
 https://etherpad.openstack.org/p/nova-juno-spec-priorities

I think we can track potential exceptions without abandoning patches.  I
think having them still open helps retain a dashboard of outstanding
specs.  I'm also worried about how contributors feel having their spec
abandoned when it's not especially clear why in the review.  Anyway, I'd
prefer just leaving them all open (with a -2 is fine) unless we think
it's a dead end.

For exceptions, I think we should require a core review sponsor for any
exception, similar to how we've handled feature freeze exceptions in the
past.  I don't think it makes much sense to provide an exception unless
we're confident we can get it in.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - Location for common third-party libs?

2014-07-17 Thread Ihar Hrachyshka

On 16/07/14 20:42, Kevin Benton wrote:
 I have filed a bug in Red Hat [1]; however, I'm not sure if it's in
 the right place.
 
 Ihar, can you verify that it's correct or move it to the
 appropriate location?

Thank you, it's correct. Let's follow up in bugzilla.

 
 1. https://bugzilla.redhat.com/show_bug.cgi?id=1120332
 
 
 On Wed, Jul 9, 2014 at 3:29 AM, Ihar Hrachyshka
 ihrac...@redhat.com wrote:
 
 Reviving the old thread.
 
 On 17/06/14 11:23, Kevin Benton wrote:
 Hi Ihar,
 
 What is the reason to break up neutron into so many packages? A 
 quick disk usage stat shows the plugins directory is currently 
 3.4M. Is that considered to be too much space for a package, or
 was it for another reason?
 
 I think the reasoning was that we don't want to pollute systems
 with unneeded files, and it seems to be easily achievable by
 splitting files into separate packages. It turns out it's not
 that easy now that we have dependencies between ml2 mechanisms and
 separate plugins.
 
 So I would be in favor of merging plugin packages back into 
 python-neutron package. AFAIK there is still no bug for that in
 Red Hat Bugzilla, so please report one.
 
 
 Thanks, Kevin Benton
 
 
 On Mon, Jun 16, 2014 at 3:37 PM, Ihar Hrachyshka 
 ihrac...@redhat.com mailto:ihrac...@redhat.com wrote:
 
 On 17/06/14 00:10, Anita Kuno wrote:
 On 06/16/2014 06:02 PM, Kevin Benton wrote:
 Hello,
 
 In the Big Switch ML2 driver, we rely on quite a bit of 
 code from the Big Switch plugin. This works fine for 
 distributions that include the entire neutron code base. 
 However, some break apart the neutron code base into 
 separate packages. For example, in CentOS I can't use
 the Big Switch ML2 driver with just ML2 installed because
 the Big Switch plugin directory is gone.
 
 Is there somewhere where we can put common third party
 code that will be safe from removal during packaging?
 
 
 Hi,
 
 I'm a neutron packager for redhat based distros.
 
 AFAIK the main reason is to avoid installing lots of plugins to 
 systems that are not going to use them. No one really spent too 
 much time going file by file and determining internal 
 interdependencies.
 
 In your case, I would move those Brocade specific ML2 files to 
 Brocade plugin package. I would suggest to report the bug in Red 
 Hat bugzilla. I think this won't get the highest priority, but
 once packagers will have spare cycles, this can be fixed.
 
 Cheers, /Ihar
 
 ___ OpenStack-dev 
 mailing list OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
 
 
 
 
 
 ___ OpenStack-dev 
 mailing list OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- Kevin Benton
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gate] Automatic elastic rechecks

2014-07-17 Thread Matthew Booth
Elastic recheck is a great tool. It leaves me messages like this:

===
I noticed jenkins failed, I think you hit bug(s):

check-devstack-dsvm-cells: https://bugs.launchpad.net/bugs/1334550
gate-tempest-dsvm-large-ops: https://bugs.launchpad.net/bugs/1334550

We don't automatically recheck or reverify, so please consider doing
that manually if someone hasn't already. For a code review which is not
yet approved, you can recheck by leaving a code review comment with just
the text:

recheck bug 1334550

For bug details see: http://status.openstack.org/elastic-recheck/
===

In an ideal world, every person seeing this would diligently check that
the fingerprint match was accurate before submitting a recheck request.

In the real world, how about we just do it automatically?

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][API] Programmatic access to hypervisor servers

2014-07-17 Thread Abbass MAROUNI

Hi all,

I'm trying to get the equivalent of the following nova client command 
programmatically:


root@cloudController1:~# nova hypervisor-servers computeHost1
+--------------------------------------+-------------------+---------------+---------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+---------------+---------------------+
| b22fa835-4118-434d-a536-a8q95720a21a | instance-0030     | 1             | compute1            |
| c1778664-e31d-455e-81f2-8776c8a8fa0e | instance-0031     | 1             | compute1            |
| 318d61db-f67c-475c-84a7-a9580bkh53cc | instance-0035     | 1             | compute1            |
| 5f5baaed-f8ca-4161-95eb-cbpopc41b70b | instance-0039     | 1             | compute1            |
+--------------------------------------+-------------------+---------------+---------------------+

I'm in the process of writing a filter and I need to get the hypervisor 
that's running a given VM. I have access to the novaclient API and a 
context object, but cannot figure out how to get the above result.
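
A sketch of one way to do this with python-novaclient, assuming admin
credentials - hypervisors.search() with servers=True is what backs the CLI
command above, but double-check the attribute names against your client
version:

  from novaclient.v1_1 import client

  nova = client.Client(USER, PASSWORD, TENANT, AUTH_URL)
  # each matching hypervisor carries the list of instances running on it
  for hyp in nova.hypervisors.search('computeHost1', servers=True):
      for server in getattr(hyp, 'servers', []):
          print(server['uuid'], server['name'])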


Thanks,

--
--
Abbass MAROUNI
VirtualScale

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-17 Thread Dolph Mathews
On Thu, Jul 17, 2014 at 7:56 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Wed, Jul 16, 2014 at 5:07 PM, Morgan Fainberg 
 morgan.fainb...@gmail.com wrote:

 Reposted now with a lot fewer bad quote issues. Thanks for being patient
 with the re-send!

 --
 From: Joe Gordon joe.gord...@gmail.com
 Reply: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: July 16, 2014 at 02:27:42
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [devstack][keystone] Devstack, auth_token
 and keystone v3

  On Tue, Jul 15, 2014 at 7:20 AM, Morgan Fainberg
  wrote:
 
  
  
   On Tuesday, July 15, 2014, Steven Hardy wrote:
  
   On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
    On 07/14/2014 11:47 AM, Steven Hardy wrote:
    Hi all,

    I'm probably missing something, but can anyone please tell me when
    devstack will be moving to keystone v3, and in particular when API
    auth_token will be configured such that auth_version is v3.0 by
    default?

    Some months ago, I posted this patch, which switched auth_version
    to v3.0 for Heat:

    https://review.openstack.org/#/c/80341/

    That patch was nack'd because there was apparently some version
    discovery code coming which would handle it, but AFAICS I still
    have to manually configure auth_version to v3.0 in the heat.conf
    for our API to work properly with requests from domains other than
    the default.

    The same issue is observed if you try to use non-default-domains
    via python-heatclient using this soon-to-be-merged patch:

    https://review.openstack.org/#/c/92728/

    Can anyone enlighten me here, are we making a global devstack move
    to the non-deprecated v3 keystone API, or do I need to revive this
    devstack patch?

    The issue for Heat is we support notifications from stack domain
    users, who are created in a heat-specific domain, thus won't work
    if the auth_token middleware is configured to use the v2 keystone
    API.

    Thanks for any information :)

    Steve

    There are reviews out there in client land now that should work. I
    was testing discover just now and it seems to be doing the right
    thing. If the AUTH_URL is chopped off the V2.0 or V3 the client
    should be able to handle everything from there on forward.

   Perhaps I should restate my problem, as I think perhaps we still
   have crossed wires:

   - Certain configurations of Heat *only* work with v3 tokens, because
     we create users in a non-default domain
   - Current devstack still configures versioned endpoints, with v2.0
     keystone
   - Heat breaks in some circumstances on current devstack because of
     this.
   - Adding auth_version='v3.0' to the auth_token section of heat.conf
     fixes the problem.

   So, back in March, client changes were promised to fix this problem,
   and now, in July, they still have not - do I revive my patch, or are
   fixes for this really imminent this time?

   Basically I need the auth_token middleware to accept a v3 token for
   a user in a non-default domain, e.g. validate it *always* with the
   v3 API not v2.0, even if the endpoint is still configured versioned
   to v2.0.
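
   Concretely, that workaround is a one-line change in heat.conf (a
   sketch of the setting mentioned above; section name per the
   auth_token middleware):

     [keystone_authtoken]
     auth_version = v3.0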
  
   Sorry to labour the point, but it's frustrating to see this still
   broken so long after I proposed a fix and it was rejected.


  We just did a test converting over the default to v3 (and falling
  back to v2 as needed, yes fallback will still be needed) yesterday
  (Dolph posted a couple of test patches and they seemed to succeed -
  yay!!) It looks like it will just work. Now there is a big caveat:
  this default will only change in the keystone middleware project,
  and it needs to have a patch or three get through gate converting
  projects over to use it before we accept the code.

  Nova has approved the patch to switch over, it is just fighting with
  Gate. Other patches are proposed for other projects and are in
  various states of approval.

 I assume you mean switch over to keystone middleware project [0], not

Correct, switch to middleware (a requirement before we landed this
patch in middleware). I was unclear in that statement. Sorry, didn't
mean to make anyone jumpy that something was approved in Nova that
shouldn't have been, or that did massive re-workings internal to Nova.

 switch over to keystone v3. Based on [1] my understanding is no
 changes to nova are needed to use the v2 compatible parts of the v3
 API. But are changes needed to support domains, or is this not a
 problem because the auth middleware uses uuids for user_id and
 project_id, so nova doesn't need to have any concept of domains? Are
 any nova changes needed to support the v3 API?
 

 This change simply makes it so the middleware will 

Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 0.0.0.0:5000

2014-07-17 Thread Rich Megginson

On 07/16/2014 10:40 PM, Joe Jiang wrote:

Hi all,
Thanks for your responds.

I tried running # sudo semanage port -l|grep 5000 in my environment 
and got the same information.

 ...
 commplex_main_port_t tcp 5000
 commplex_main_port_t udp 5000
Then I wanted to remove port 5000 from the SELinux policy rules list 
with this command (semanage port -d -p tcp -t commplex_port_t 5000), 
but the console echo is /usr/sbin/semanage: Port tcp/5000 is defined 
in policy, cannot be deleted, and 'udp/5000' gives the same reply.
Some sources [1] say this port is declared in the corenetwork source 
policy, which is compiled into the base module.

So, do I have to recompile the SELinux module?


I think that's the only way to do it if you want to relabel port 5000.





Thanks.
Joe.

[1]
http://www.redhat.com/archives/fedora-selinux-list/2009-September/msg00056.html





 Another problem with port 5000 in Fedora, and probably more recent
 versions of RHEL, is the selinux policy:

 # sudo semanage port -l|grep 5000
 ...
 commplex_main_port_t tcp 5000
 commplex_main_port_t udp 5000

 There is some service called commplex that has already claimed port
 5000 for its use, at least as far as selinux goes.






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Swift] Composite Auth question

2014-07-17 Thread Dolph Mathews
On Thu, Jul 17, 2014 at 5:43 AM, McCabe, Donagh donagh.mcc...@hp.com
wrote:

  Hi,



 I’m working on the Swift implications of using composite authorization [1]
 [2].



 My question for Keystone developers is : what  project-id do we expect the
 service token to be scoped to - the service's project or the end-user's
 project? When reviewing the Keystone spec, I had assumed the former.
 However, now that I'm looking at it in more detail, I would like to check
 my understanding.

FWIW, prior to reading the below, I would have said I don't think it should
matter to Swift. The project in the primary X-Auth-Token (rather than the
secondary X-Service-Token) should be the one that conveys the scope. But...
I'm probably wrong, and I think I prefer option 2 below.



 The implications are:



 1/ If scoped to the service's project, the role used must be exclusive to
 Glance/Cinder.

I.e. an end-user must never be assigned this role.

In effect, a role on one project grants the service user some privileges on
 every project.

Keystone would never make this last behavior ^ explicit, without
hierarchical multitenancy (and a single root project)... because we already
have the solution I describe below...



 2/ if scoped to the end-user's project, the glance/cinder service user
 must have a role on every project that uses them (including across
 domains); this seems infeasible.

Keystone does have domain-level role assignments that are inherited to
projects... so a service could be assigned a role on a domain with
inheritance, and then the service can generate project-scoped tokens (with
the domain-level role applied) for any project in the domain. I think that
would make option 2 much more feasible, but yes, you still need a role
assignment per domain in the system.
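
For reference, that inherited assignment is done through the OS-INHERIT
extension, roughly (IDs are placeholders):

  PUT /v3/OS-INHERIT/domains/{domain_id}/users/{service_user_id}/roles/{role_id}/inherited_to_projects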



 Regards,

 Donagh



 [1] swift-specs: https://review.openstack.org/105228

 [2] keystone-specs: https://review.openstack.org/#/c/96315/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] glance.store repo created

2014-07-17 Thread Jay Pipes

On 07/17/2014 02:42 AM, Flavio Percoco wrote:

Greeting,

I'd like to announce that we finally got the glance.store repo created.

This library pulls out of glance the code related to stores. Not many
changes were made to the API during this process. The main goal, for
now, is to switch glance over and keep backwards compatibility with
Glance to reduce the number of changes required. We'll improve and
revamp the store API during K - FWIW, I've a spec draft with ideas for it.


Link to that spec? It's important for some other in-flight nova-specs 
around zero-copy image handling and refactoring the nova.image module.



The library still needs some work and this is a perfect moment for
anyone interested to chime in and contribute to the library. Some things
that are missing:

- Swift store (Nikhil is working on this)


? Why wasn't the existing glance.store.swift used?

Best,
-jay


- Sync latest changes made to the store code.

If you've recently made changes to any of the stores, please go ahead
and contribute them back to `glance.store` or let me know so I can do it.

I'd also like to ask reviewers to request contributions to the `store`
code in glance to be proposed to `glance.store` as well. This way, we'll
be able to keep parity.

I'll be releasing an alpha version soon so we can start reviewing the
glance switch-over. We won't obviously merge it until we have feature
parity.

Any feedback is obviously very welcome,
Cheers,
Flavio



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-refresh-config run frequency

2014-07-17 Thread Michael Kerrin
On Thursday 26 June 2014 12:20:30 Clint Byrum wrote:
 Excerpts from Macdonald-Wallace, Matthew's message of 2014-06-26 04:13:31 
-0700:
  Hi all,
  
  I've been working more and more with TripleO recently and whilst it does
  seem to solve a number of problems well, I have found a couple of
  idiosyncrasies that I feel would be easy to address.
  
  My primary concern lies in the fact that os-refresh-config does not run on
  every boot/reboot of a system.  Surely a reboot *is* a configuration
  change and therefore we should ensure that the box has come up in the
  expected state with the correct config?
  
  This is easily fixed through the addition of an @reboot entry in
  /etc/crontab to run o-r-c or (less easily) by re-designing o-r-c to run
  as a service.
  
  My secondary concern is that through not running os-refresh-config on a
  regular basis by default (i.e. every 15 minutes or something in the same
  style as chef/cfengine/puppet), we leave ourselves exposed to someone
  trying to make a quick fix to a production node and taking that node
  offline the next time it reboots because the config was still left as
  broken owing to a lack of updates to HEAT (I'm thinking a quick change
  to allow root access via SSH during a major incident that is then left
  unchanged for months because no-one updated HEAT).
  
  There are a number of options to fix this including Modifying
  os-collect-config to auto-run os-refresh-config on a regular basis or
  setting os-refresh-config to be its own service running via upstart or
  similar that triggers every 15 minutes
  
  I'm sure there are other solutions to these problems, however I know from
  experience that claiming this is solved through education of users or
  (more severely!) via HR is not a sensible approach to take as by the time
  you realise that your configuration has been changed for the last 24
  hours it's often too late!
 So I see two problems highlighted above.
 
 1) We don't re-assert ephemeral state set by o-r-c scripts. You're right,
 and we've been talking about it for a while. The right thing to do is
 have os-collect-config re-run its command on boot. I don't think a cron
 job is the right way to go, we should just have a file in /var/run that
 is placed there only on a successful run of the command. If that file
 does not exist, then we run the command.
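 
 Roughly (a sketch of that check, not the eventual implementation; the
 path is illustrative):
 
   import os
   import subprocess
 
   SENTINEL = '/var/run/os-collect-config/last-run-ok'
 
   def run_command_once_per_boot(cmd):
       # /var/run is cleared at boot, so a missing sentinel means the
       # command has not succeeded since the machine came up.
       if os.path.exists(SENTINEL):
           return
       subprocess.check_call(cmd)
       if not os.path.isdir(os.path.dirname(SENTINEL)):
           os.makedirs(os.path.dirname(SENTINEL))
       open(SENTINEL, 'w').close()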
 
 I've just opened this bug in response:
 
 https://bugs.launchpad.net/os-collect-config/+bug/1334804
 

I have been looking into bug #1334804 and I have a review up to resolve it. I 
want to highlight something.

Currently on a reboot we start all services via upstart (on debian anyways) 
and there have been quite a lot of issues around this - missing upstart 
scripts and timing issues. I don't know the issues on fedora.

So with a fix to #1334804, on a reboot upstart will start all the services 
first (with potentially out-of-date configuration), then o-c-c will start 
o-r-c and will now configure all services and restart them, or start them if 
upstart isn't configured properly.

I would like to turn off all boot scripts for services we configure and leave 
all this to o-r-c. I think this will simplify things and put us in control of 
starting services. I believe that it will also narrow the gap between fedora 
and debian or debian and debian so what works on one should work on the other 
and make it easier for developers.

Having the ability to service nova-api stop|start|restart is very handy, but 
this will be a manual thing and I intend to leave that there.

What do people think, and how best do I push this forward? I feel that this 
leads into the re-assert-system-state spec, but mainly I think this is a 
bug and doesn't require a spec.

I will be at the tripleo mid-cycle meetup next week and am willing to discuss 
this with anyone interested and put together the necessary bits to make 
this happen.

Michael

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 0.0.0.0:5000

2014-07-17 Thread Ray Chen
Try disabling the SELinux module; I can set up a devstack env on my Fedora
machine with SELinux disabled.

On my Fedora machine, SELinux is disabled, and port 5000 still looks like
it is claimed by the SELinux policy:
[ray@fedora devstack]$ sudo semanage port -l|grep 5000
cluster_port_t tcp  5149, 40040, 50006-50008
cluster_port_t udp  5149, 50006-50008
commplex_main_port_t   tcp  5000
commplex_main_port_t   udp  5000

[ray@fedora devstack]$ netstat -anp | grep 5000

tcp0  0 0.0.0.0:50000.0.0.0:*
LISTEN  6171/python
[ray@fedora devstack]$ ps -ef | grep python
ray   6171  5695  0 21:34 pts/300:00:07 python
/opt/stack/keystone/bin/keystone-all --config-file
/etc/keystone/keystone.conf --debug




On Thu, Jul 17, 2014 at 10:23 PM, Rich Megginson rmegg...@redhat.com
wrote:

  On 07/16/2014 10:40 PM, Joe Jiang wrote:

  Hi all,
 Thanks for your responds.

  I try to running # sudo semanage port -l|grep 5000 in my envrionment and
 get same infomation.
  ...
  commplex_main_port_t tcp 5000
  commplex_main_port_t udp 5000
 then, I wanna remove this port(5000) from SELinux policy rules list use
 this command(semanage port -d -p tcp -t commplex_port_t 5000),
 the console echo is /usr/sbin/semanage: Port tcp/5000 is defined in
 policy, cannot be deleted, and 'udp/5000' is same reply.
 Some sounds[1] say, this port is declared in the corenetwork source policy
 which is compiled in the base module.
 So, Have to recompile selinux module?


 I think that's the only way to do it if you want to relabel port 5000.





  Thanks.
  Joe.

  [1]

 http://www.redhat.com/archives/fedora-selinux-list/2009-September/msg00056.html





  Another problem with port 5000 in Fedora, and probably more recent
  versions of RHEL, is the selinux policy:
 
  # sudo semanage port -l|grep 5000
  ...
  commplex_main_port_t tcp 5000
  commplex_main_port_t udp 5000
 
  There is some service called commplex that has already claimed port
  5000 for its use, at least as far as selinux goes.







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Keystone multi-domain ldap + sql in Icehouse

2014-07-17 Thread foss geek
Hi Henry,

Thanks for the update. I will wait for Juno.

Thanks for your time.



On Thu, Jul 17, 2014 at 3:50 PM, Henry Nash hen...@linux.vnet.ibm.com
wrote:

 Hi

 So the bad news is that you are correct, multi-domain LDAP is not ready in
 Icehouse (it is marked as experimental, and it has serious flaws). The
 good news is that this is fixed for Juno - and this support has already
 been merged - and will be in the Juno milestone 2 release.  Here's the spec
 that describes the work done:


 https://github.com/openstack/keystone-specs/blob/master/specs/juno/multi-backend-uuids.rst

 This support uses the domain-specifc config files approach that is already
 in IceHouse - so the way you define the LDAP parameters for each domain
 does not change.

 Henry
 On 17 Jul 2014, at 10:52, foss geek thefossg...@gmail.com wrote:

 Dear All,

 We are using LDAP as identity back end and SQL as assignment back end.

 Now I am trying to evaluate Keystone multi-domain support with LDAP
 (identity) + SQL (assignment)

  Has anyone managed to set up an LDAP/SQL multi-domain environment in
  Havana/Icehouse?

  Does Keystone have a suggested LDAP DIT for domains?

  I have gone through threads [1] and [2] below; it seems Keystone
  multi-domain with LDAP+SQL is not ready in Icehouse.

 Hope some one will help.

 Thanks for your time.

 [1]http://www.gossamer-threads.com/lists/openstack/dev/37705

 [2]http://lists.openstack.org/pipermail/openstack/2014-January/004900.html


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-17 Thread Stephen Balukoff
Ok, folks!

Per the IRC meeting this morning, we came to the following consensus
regarding how TLS certificates are handled, how SAN is handled, and how
hostname conflict resolution is handled. I will be responding to all three
of the currently ongoing mailing list discussions with this info:


   - Driver does not have to use SAN that is passed from API layer, but SAN
   will be available to drivers at the API layer. This will be mentioned
   explicitly in the spec.
   - Order is a mandatory attribute. It's intended to be used as a hint
   for hostname conflict resolution, but it's ultimately up to the driver to
   decide how to resolve the conflict. (In other words, although it is a
   mandatory attribute in our model, drivers are free to ignore it.)
   - Drivers are allowed to vary their behavior when choosing how to
   implement hostname conflict resolution since there is no single algorithm
   here that all vendors are able to support. (This is anticipated to be a
   rare edge case anyway.)

I think Evgeny will be updating the specs to reflect this decision so that
it is documented--  we hope to get ultimate approval of the spec in the
next day or two.

Thanks,
Stephen




On Wed, Jul 16, 2014 at 7:31 PM, Stephen Balukoff sbaluk...@bluebox.net
wrote:

 Just saw this thread after responding to the other:

 I'm in favor of Evgeny's proposal. It sounds like it should resolve most
 (if not all) of the operators', vendors' and users' concerns with regard to
 handling TLS certificates.

 Stephen


 On Wed, Jul 16, 2014 at 12:35 PM, Carlos Garza carlos.ga...@rackspace.com
  wrote:


 On Jul 16, 2014, at 10:55 AM, Vijay Venkatachalam 
 vijay.venkatacha...@citrix.com
  wrote:

  Apologies for the delayed response.
 
  I am OK with displaying the certificate's contents as part of the API;
 that should do no harm.
 
  I think the discussion has to be split into 2 topics.
 
  1.   Certificate conflict resolution. Meaning what is expected when
 2 or more certificates become eligible during SSL negotiation
  2.   SAN support
 

 Ok cool that makes more sense. #2 seems to be met by Evgeny proposal.
 I'll let you folks decide the conflict resolution issue #1.


  I will send out 2 separate mails on this.
 
 
  From: Samuel Bercovici [mailto:samu...@radware.com]
  Sent: Tuesday, July 15, 2014 11:52 PM
  To: OpenStack Development Mailing List (not for usage questions); Vijay
 Venkatachalam
  Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI -
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
 
  OK.
 
  Let me be more precise, extracting the information for view sake /
 validation would be good.
  Providing values that are different than what is in the x509 is what I
 am opposed to.
 
  +1 for Carlos on the library and that it should be ubiquitously used.
 
  I will wait for Vijay to speak for himself in this regard…
 
  -Sam.
 
 
  From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
  Sent: Tuesday, July 15, 2014 8:35 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI -
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
 
  +1 to German's and  Carlos' comments.
 
  It's also worth pointing out that some UIs will definitely want to show
 SAN information and the like, so either having this available as part of
 the API, or as a standard library we write which then gets used by multiple
 drivers is going to be necessary.
 
  If we're extracting the Subject Common Name in any place in the code
 then we also need to be extracting the Subject Alternative Names at the
 same place. From the perspective of the SNI standard, there's no difference
 in how these fields should be treated, and if we were to treat SANs
 differently then we're both breaking the standard and setting a bad
 precedent.
 
  Stephen
 
 
  On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza 
 carlos.ga...@rackspace.com wrote:
 
  On Jul 15, 2014, at 10:55 AM, Samuel Bercovici samu...@radware.com
   wrote:
 
   Hi,
  
  
   Obtaining the domain name from the x509 is probably more of a
 driver/backend/device capability, it would make sense to have a library
 that could be used by anyone wishing to do so in their driver code.
 
  You can do whatever you want in *your* driver. The code to extract
 this information will be a part of the API and needs to be mentioned in
 the spec now. PyOpenSSL with PyASN1 are the most likely candidates.
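
 Something along these lines (a sketch with pyOpenSSL, not the final
 API):

   from OpenSSL import crypto

   def get_host_names(pem_data):
       cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem_data)
       cn = cert.get_subject().commonName
       alt_names = []
       for i in range(cert.get_extension_count()):
           ext = cert.get_extension(i)
           if ext.get_short_name() == 'subjectAltName':
               # str(ext) renders the names as, e.g.,
               # 'DNS:example.com, DNS:www.example.com'
               alt_names = [e.split(':', 1)[1]
                            for e in str(ext).split(', ')]
       return cn, alt_names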
 
  Carlos D. Garza
  
   -Sam.
  
  
  
   From: Eichberger, German [mailto:german.eichber...@hp.com]
   Sent: Tuesday, July 15, 2014 6:43 PM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI -
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
   Hi,
  
   My impression was that the frontend would extract the names and hand
 them to the driver.  This has the 

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SubjectAlternativeNames (SAN)

2014-07-17 Thread Stephen Balukoff
Ok, folks!

Per the IRC meeting this morning, we came to the following consensus
regarding how TLS certificates are handled, how SAN is handled, and how
hostname conflict resolution is handled. I will be responding to all three
of the currently ongoing mailing list discussions with this info:


   - Driver does not have to use SAN that is passed from API layer, but SAN
   will be available to drivers at the API layer. This will be mentioned
   explicitly in the spec.
   - Order is a mandatory attribute. It's intended to be used as a hint
   for hostname conflict resolution, but it's ultimately up to the driver to
   decide how to resolve the conflict. (In other words, although it is a
   mandatory attribute in our model, drivers are free to ignore it.)
   - Drivers are allowed to vary their behavior when choosing how to
   implement hostname conflict resolution since there is no single algorithm
   here that all vendors are able to support. (This is anticipated to be a
   rare edge case anyway.)

I think Evgeny will be updating the specs to reflect this decision so that
it is documented--  we hope to get ultimate approval of the spec in the
next day or two.

Thanks,
Stephen


On Wed, Jul 16, 2014 at 7:41 PM, Stephen Balukoff sbaluk...@bluebox.net
wrote:

 Vijay--

 I'm confused: if NetScaler doesn't actually look at the SAN, doesn't
 that mean it's not actually following the SNI standard? (RFC2818 page 4,
 paragraph 2, as I think Carlos pointed out in another thread.) Or at
 least, isn't that ignoring how every major browser on the market that
 supports SNI operates?

 Anyway, per the other thread we've had on this, and Evgeny's proposal
 there, do you see harm in having SAN available at the API level
 (informationally, at least)? In any case, duplication of code is something
 I think we can all agree is not desirable, and because so many other
 implementations are likely to need the SAN info, it should be available to
 drivers via a universal library (as Carlos is describing).

 Stephen


 On Wed, Jul 16, 2014 at 3:43 PM, Eichberger, German 
 german.eichber...@hp.com wrote:

 +1 for not duplicating code

 For me it's scary as well if different implementations exhibit different
 behavior. This is very contrary to what we would like to do with exposing
 LBs only as flavors...

 German

 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
 Sent: Wednesday, July 16, 2014 2:05 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability -
 SubjectAlternativeNames (SAN)


 On Jul 16, 2014, at 3:49 PM, Carlos Garza carlos.ga...@rackspace.com
 wrote:

 
  On Jul 16, 2014, at 12:30 PM, Vijay Venkatachalam
  vijay.venkatacha...@citrix.com
  wrote:
 
  We will have the code that parses the X509 in the API scope of the
 code. The validation I'm referring to is making sure the key matches
 the cert used, and mandating that, at a minimum, the backend driver
 support RSA. Since the X509 validation is happening at the API layer,
 this same module will also handle the extraction of the SANs. I am
 proposing that the methods that can extract the SAN and SCN from the
 x509 be present in the API portion of the code and that drivers can
 call these methods if they need to. In fact, I'm already working to get
 these extraction methods contributed to the PyOpenSSL project so that
 they will be available at a more fundamental layer than our
 neutron/LBaaS code. At the very least I want the spec to declare that
 SAN and SCN parsing must be made available from the API layer. If
 PyOpenSSL has the methods available at that time then we can simply
 write wrappers for them in the API, or write higher-level methods in
 the API module.
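
 For the key-matches-cert check, something like this should do (a
 sketch; pyOpenSSL's Context.check_privatekey does the real work):

   from OpenSSL import SSL, crypto

   def key_matches_cert(cert_pem, key_pem):
       cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
       key = crypto.load_privatekey(crypto.FILETYPE_PEM, key_pem)
       ctx = SSL.Context(SSL.TLSv1_METHOD)
       ctx.use_certificate(cert)
       ctx.use_privatekey(key)
       try:
           ctx.check_privatekey()  # raises SSL.Error on a mismatch
           return True
       except SSL.Error:
           return False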

 I meant to say bottom line I want the parsing code exposed in the API
 and not duplicated in everyone else's driver.

  I am partially open to the idea of letting the driver handle the
 behavior of the cert parsing, although I defer this to the rest of the
 folks, as I get this feeling that having different implementations
 exhibiting different behavior may sound scary.
 
 
  I think it is best not to mention SAN in the OpenStack TLS spec. It
 is expected that the backend should implement it according to the
 SSL/SNI IETF spec.
  Let's leave the implementation/validation part to the driver.  For ex.
 NetScaler does not support SAN and the NetScaler driver could either throw
 an error if certs with SAN are used or ignore it.
 
  How is NetScaler making the decision when choosing the cert to
 associate with the SNI handshake?
 
 
  Does anyone see a requirement for detailing?
 
 
  Thanks,
  Vijay V.
 
 
  From: Vijay Venkatachalam
  Sent: Wednesday, July 16, 2014 8:54 AM
  To: 'Samuel Bercovici'; 'OpenStack Development Mailing List (not for
 usage questions)'
  Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI -
  Extracting SubjectCommonName and/or 

Re: [openstack-dev] [neutron] [nova] Weekly nova-network / neutron parity meeting

2014-07-17 Thread Kyle Mestery
On Thu, Jul 17, 2014 at 6:42 AM, Thierry Carrez thie...@openstack.org wrote:
 Kyle Mestery wrote:
 As we're getting down to the wire in Juno, I'd like to propose we have
 a weekly meeting on the nova-network and neutron parity effort. I'd
 like to start this meeting next week, and I'd like to propose
 Wednesday at 1500 UTC on #openstack-meeting-3 as the time and
 location.

 This conflicts on the agenda with the PHP SDK Team Meeting.
 That said, I'm not sure they still run this one regularly.

We could do Wednesday or Friday at 1500 UTC if that works better for
folks. I'd prefer Wednesday, though. We could also do 1400 UTC on
Wednesday.

Perhaps I'll split the difference and do 1430 UTC Wednesday. Does that
sound ok to everyone?

Thanks,
Kyle

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - Certificate conflict resolution

2014-07-17 Thread Stephen Balukoff
Ok, folks!

Per the IRC meeting this morning, we came to the following consensus
regarding how TLS certificates are handled, how SAN is handled, and how
hostname conflict resolution is handled. I will be responding to all three
of the currently ongoing mailing list discussions with this info:


   - Driver does not have to use SAN that is passed from API layer, but SAN
   will be available to drivers at the API layer. This will be mentioned
   explicitly in the spec.
   - Order is a mandatory attribute. It's intended to be used as a hint
   for hostname conflict resolution, but it's ultimately up to the driver to
   decide how to resolve the conflict. (In other words, although it is a
   mandatory attribute in our model, drivers are free to ignore it.)
   - Drivers are allowed to vary their behavior when choosing how to
   implement hostname conflict resolution since there is no single algorithm
   here that all vendors are able to support. (This is anticipated to be a
   rare edge case anyway.)

I think Evgeny will be updating the specs to reflect this decision so that
it is documented--  we hope to get ultimate approval of the spec in the
next day or two.

Thanks,
Stephen


On Wed, Jul 16, 2014 at 7:26 PM, Stephen Balukoff sbaluk...@bluebox.net
wrote:

 Hi Vijay,



 On Wed, Jul 16, 2014 at 9:07 AM, Vijay Venkatachalam 
 vijay.venkatacha...@citrix.com wrote:



  Do you know if the SSL/SNI IETF spec gives details about conflict
 resolution? I am assuming not.



 Because of this ambiguity each backend employs its own mechanism to
 resolve conflicts.



 There are 3 choices now

 1.   The LBaaS extension does not allow conflicting certificates to
 be bound using validation

 2.   Allow each backend conflict resolution mechanism to get into
 the spec

  3.   Do not specify anything in the spec, no mechanism introduced,
  and let the driver deal with it.



  Both HAProxy and Radware use configuration as a mechanism to resolve.
  Radware uses order while HAProxy uses externally specified DNS names.

  The NetScaler implementation uses a best-possible-match algorithm



 For ex, let’s say 3 certs are bound to the same endpoint with the
 following SNs

 www.finance.abc.com

 *.finance.abc.com

 *.*.abc.com



  If the host request is payroll.finance.abc.com we shall use
  *.finance.abc.com

  If it is payroll.engg.abc.com we shall use *.*.abc.com



  NetScaler won't allow 2 certs to have the same SN.
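
  To illustrate the best-match idea (a sketch only, not NetScaler's
  actual algorithm; here '*' matches exactly one DNS label):

    import re

    def best_match(hostname, cert_names):
        def matches(name):
            pattern = re.escape(name).replace(r'\*', '[^.]+')
            return re.match('^%s$' % pattern, hostname)
        candidates = [n for n in cert_names if matches(n)]
        if not candidates:
            return None
        # Most specific match wins: fewest wildcards, longest name.
        return min(candidates, key=lambda n: (n.count('*'), -len(n)))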




 Did you mean CN as in Common Name above?

 In any case, it sounds like:

 1. Conflicts are going to be relatively rare in any case
 2. Conflict resolution as such can probably be left to the vendor. Since
 the Neutron LBaaS reference implementation uses HAProxy, it seems logical
 that this should be considered normal behavior for the Neutron LBaaS
 service-- though again the slight variations in vendor implementations for
 conflict resolution are unlikely to cause serious issues for most users.

 If NetScaler runs into a situation where the SCN of a cert conflicts
 with the SCN or SAN of another cert, then perhaps they can return an
 'UnsupportedConfiguration' error or whatnot? (I believe we're trying to
 get the ability to return such errors with the flavor framework,
 correct?)

 In any case, is there any reason to delay going forward with a model and
 code that:
 A. Uses an 'order' attribute on the SNI-related objects to resolve name
 conflicts.
 B. Includes a ubiquitous library for extracting SCN and SAN that any
 driver may use if they don't use the 'order' attribute?

 Thanks,
 Stephen


 --
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Treating notifications as a contract

2014-07-17 Thread Chris Dent

On Tue, 15 Jul 2014, Mark McLoughlin wrote:


So you're proposing that all payloads should contain something like:
[...] a class, type, id, value, unit and a space to put additional metadata.


That's the gist of it, but I'm only presenting that as a way to get
somebody else to point out what's wrong with it so we can get
closer to what's actually needed...


On the subject of notifications as a contract, calling the additional
metadata field 'extra' suggests to me that there are no stability
promises being made about those fields. Was that intentional?


...and as you point out, if everything that doesn't fit in the
known fields goes in 'extra' then the goal of contractual
stability may be lost.

What I think that shows us is that what we probably want is three
levels of contract. Currently we have one:

* There is a thing called a notification and it has a small number
  of top-level fields include 'payload'

At the second level would be:

* There is a list of things _in_ the payload which are events and
  they have some known general structure that allows ingestion (as
  data) by lots of consumers.

And the third level would be:

* Each event has an internal structure (below the general structure)
  which is based on its type. In the simplest cases (some meters for
  example) a third level could either be unnecessary or at least
  very small[1]. This is the badly named 'extra' above.
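
Concretely, I'm imagining something shaped like this (the field names
are the strawman from earlier in the thread, not a settled schema):

  notification = {
      'event_type': 'compute.instance.exists',  # level one: envelope
      'payload': [                              # level two: event list
          {
              'class': 'gauge',
              'type': 'instance.memory',
              'id': 'uuid-of-the-resource',
              'value': 512,
              'unit': 'MB',
              'extra': {'host': 'compute1'},    # level three: per-type
          },
      ],
  }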

Basically: if people are willing to pay the price (in terms of changes)
for contractual stability, we may as well get some miles out of it.
Three layers of abstraction means there can be three distinct levels
in applications or tools, each of which are optimized to a different
level of the topology: transporting messages, persisting/publishing
messages, extracting meaning from messages.

That's kind of present already, but it is done in a way that
requires a lot of duplication of knowledge between producer and
consumer and within different parts of the consumer, which makes
effective testing and scaling more complex than it ought to be.

I know from various IRC chatter that this is a perennial topic round
these parts, frequently foundering, but perhaps each time we get a
bit further, learn a bit more?

In any case, as Eoghan and I stated elsewhere in the thread I'm
going to try to drive this forward, but I'm not going to rush it as
there's no looming deadline.

[1] Part of the reason I drove us off into the so-called weeds
earlier in the thread is a sense that a large number of
events/samples/notification-payloads are capable of being classed as
nearly the same thing (except for their name and units) and thus
would not warrant their own specific schema. These are just events
that should have a very similar structure. If that structure is well
known and well accepted we will likely find that many existing
events can be bent to fit that structure for the sake of less code
and more reuse. As part of this process I'll try to figure out if this
is true.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] Reminder: Mid-cycle Meetup - Attendance Confirmation

2014-07-17 Thread Jordan O'Mara
Here's the logistics plan for next week:

Each day, show up at the Red Hat Tower main lobby before 9am. You'll need to 
check in with security and get a visitor sticker. All of you have been 
pre-registered, so you just need to check in and it should be very quick. 
Someone from Red Hat (myself, Jarda, Mike Orazi or Tzu-mainn) will then escort 
you to the Annex (outside of RHT) where we're having our meeting. You can say 
you're with the OpenStack Midcycle meetup group, they are supposed to have the 
list ready :)

Here's the map to the lobby entrance - it's about 5 minutes from the Marriott: 
http://goo.gl/maps/sBDEP. If you get lost, it's the 19-story building with the 
giant red roof.

Looking forward to seeing everyone!




- Original Message -
 On 30/06/14 13:02 -0400, Jordan OMara wrote:
  On 25/06/14 14:32 -0400, Jordan OMara wrote:
  On 25/06/14 18:20 +, Carlino, Chuck (OpenStack TripleO, Neutron)
  wrote:
  Is $179/day the expected rate?
 
  Thanks,
  Chuck
 
  Yes, that's the best rate available from both of the downtown
  (walkable) hotels.
 
  Just an update that we only have a few rooms left in our block at the
  Marriott. Please book ASAP if you haven't
 
 Final reminder: our group rate expires tomorrow!
 
 
 --
 Jordan O'Mara jomara at redhat.com
 Red Hat Engineering, Raleigh
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Jordan O'Mara jomara at redhat.com
Red Hat Engineering, Raleigh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Daniel P. Berrange
On Wed, Jul 16, 2014 at 09:44:55AM -0700, Johannes Erdfelt wrote:
 On Wed, Jul 16, 2014, Mark McLoughlin mar...@redhat.com wrote:
  No, there are features or code paths of the libvirt 1.2.5+ driver that
  aren't as well tested as the class A designation implies. And we have
  a proposal to make sure these aren't used by default:
  
https://review.openstack.org/107119
  
  i.e. to stray off the class A path, an operator has to opt into it by
  changing a configuration option that explains they will be enabling code
  paths which aren't yet tested upstream.
 
 So that means the libvirt driver will be a mix of tested and untested
 features, but only the tested code paths will be enabled by default?
 
 The gate not only tests code as it gets merged, it tests to make sure it
 doesn't get broken in the future by other changes.
 
 What happens when it comes time to bump the default version_cap in the
 future? It looks like there could potentially be a scramble to fix code
 that has been merged but doesn't work now that it's being tested. Which
 potentially further slows down development since now unrelated code
 needs to be fixed.
 
 This sounds like we're actively weakening the gate we currently have.

If the gate has libvirt 1.2.2 and a feature is added to Nova that
depends on libvirt 1.2.5, then the gate is already not testing that
codepath since it lacks the libvirt version necessary to test it.
The version cap should not be changing that, it is just making it
more explicit that it hasn't been tested.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] multiple backend issue

2014-07-17 Thread Johnson Cheng
Dear git-harry,

I have created a volume group cinder-volume-1 at my controller node, and 
another volume group cinder-volume-2 at my compute node.

I can create volumes successfully on the dedicated backend.
Of course I can ignore the error message, but I need to know if there are any side-effects.

Regards,
Johnson

-Original Message-
From: git harry [mailto:git-ha...@live.co.uk] 
Sent: Thursday, July 17, 2014 7:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] multiple backend issue

You are using multibackend but it appears you haven't created both volume 
groups:

Stderr: ' Volume group cinder-volumes-2 not found\n'

If you can create volumes, it suggests the other backend is correctly configured. 
So you can ignore the error if you want, but you will not be able to use the 
second backend you have attempted to set up.
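
For reference, a typical multi-backend layout in cinder.conf looks
roughly like this (backend and VG names are illustrative); every backend
named in enabled_backends needs its volume group to exist on the node
running that cinder-volume service:

  [DEFAULT]
  enabled_backends = lvm-1,lvm-2

  [lvm-1]
  volume_group = cinder-volumes-1
  volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver

  [lvm-2]
  volume_group = cinder-volumes-2
  volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver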


 From: johnson.ch...@qsantechnology.com
 To: openstack-dev@lists.openstack.org
 Date: Thu, 17 Jul 2014 11:03:41 +
 Subject: [openstack-dev] [Cinder] multiple backend issue
 
 
 Dear All,
 
 
 
 I have two machines as below,
 
 Machine1 (192.168.106.20): controller node (cinder node and volume 
 node)
 
 Machine2 (192.168.106.30): compute node (volume node)
 
 
 
 I can successfully create a cinder volume, but there is an error in 
 cinder-volume.log.
 
 2014-07-17 18:49:01.105 5765 AUDIT cinder.service [-] Starting 
 cinder-volume node (version 2014.1)
 2014-07-17 18:49:01.113 5765 INFO cinder.volume.manager 
 [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Starting volume 
 driver LVMISCSIDriver (2.0.0)
 2014-07-17 18:49:01.114 5764 AUDIT cinder.service [-] Starting 
 cinder-volume node (version 2014.1)
 2014-07-17 18:49:01.124 5764 INFO cinder.volume.manager 
 [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] Starting volume 
 driver LVMISCSIDriver (2.0.0)
 2014-07-17 18:49:01.965 5765 ERROR cinder.volume.manager 
 [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Error encountered 
 during initialization of driver: LVMISCSIDriver
 2014-07-17 18:49:01.971 5765 ERROR cinder.volume.manager 
 [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Unexpected error 
 while running command.
 Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C 
 vgs --noheadings -o name cinder-volumes-2
 Exit code: 5
 Stdout: ''
 Stderr: ' Volume group cinder-volumes-2 not found\n'
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Traceback 
 (most recent call last):
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File 
 /usr/lib/python2.7/dist-packages/cinder/volume/manager.py, line 243, 
 in init_host
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager 
 self.driver.check_for_setup_error()
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File 
 /usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py, line 
 83, in check_for_setup_error
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager 
 executor=self._execute)
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File 
 /usr/lib/python2.7/dist-packages/cinder/brick/local_dev/lvm.py, line 
 81, in __init__
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager if 
 self._vg_exists() is False:
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File 
 /usr/lib/python2.7/dist-packages/cinder/brick/local_dev/lvm.py, line 
 106, in _vg_exists
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager self.vg_name, 
 root_helper=self._root_helper, run_as_root=True)
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File 
 /usr/lib/python2.7/dist-packages/cinder/utils.py, line 136, in execute
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager return 
 processutils.execute(*cmd, **kwargs)
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File 
 /usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py, 
 line 173, in execute
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager cmd=' 
 '.join(cmd))
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager 
 ProcessExecutionError: Unexpected error while running command.
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Command: sudo 
 cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs 
 --noheadings -o name cinder-volumes-2
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Exit code: 5
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Stdout: ''
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Stderr: ' 
 Volume group cinder-volumes-2 not found\n'
 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager
 2014-07-17 18:49:03.236 5765 INFO oslo.messaging._drivers.impl_rabbit 
 [-] Connected to AMQP server on controller:5672
 2014-07-17 18:49:03.890 5764 INFO cinder.volume.manager 
 [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] volume 
 5811b9af-b24a-44f
 
 

Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 0.0.0.0:5000

2014-07-17 Thread Ryan Hallisey
Hi,

You can handle this one of two ways.

1)
semanage port -m -t <the new label you choose> -p tcp 5000

Which will relabel port 5000 as whatever you choose.

2)
Or you could allow the label you choose to bind to commplex_main_port_t:

allow <your label> commplex_main_port_t:tcp_socket name_bind;

This will allow your label to connect to any port labeled 
commplex_main_port_t.

Sincerely,
Ryan

- Original Message -
From: Ray Chen chenrano2...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Thursday, July 17, 2014 10:57:41 AM
Subject: Re: [openstack-dev] [devstack][keystone] (98)Address already in use: 
make_sock: could not bind to address [::]:5000  0.0.0.0:5000

try to disable the selinux module. I can setup devstack env on my fedora 
machine with selinux disabled 

on my fedora machine, selinux is disable, and port 5000 look likes are still 
used by selinux, 
[ray@fedora devstack]$ sudo semanage port -l|grep 5000 
cluster_port_t tcp 5149, 40040, 50006-50008 
cluster_port_t udp 5149, 50006-50008 
commplex_main_port_t tcp 5000 
commplex_main_port_t udp 5000 

[ray@fedora devstack]$ netstat -anp | grep 5000 

tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 6171/python 
[ray@fedora devstack]$ ps -ef | grep python 
ray 6171 5695 0 21:34 pts/3 00:00:07 python 
/opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf 
--debug 




On Thu, Jul 17, 2014 at 10:23 PM, Rich Megginson  rmegg...@redhat.com  wrote: 



On 07/16/2014 10:40 PM, Joe Jiang wrote: 



Hi all, 
Thanks for your responds. 

I try to running # sudo semanage port -l|grep 5000 in my envrionment and get 
same infomation. 
 ... 
 commplex_main_port_t tcp 5000 
 commplex_main_port_t udp 5000 
then, I wanna remove this port(5000) from SELinux policy rules list use this 
command(semanage port -d -p tcp -t commplex_port_t 5000), 
the console echo is /usr/sbin/semanage: Port tcp/5000 is defined in policy, 
cannot be deleted , and 'udp/5000' is same reply. 
Some sounds[1] say, this port is declared in the corenetwork source policy 
which is compiled in the base module. 
So, Have to recompile selinux module? 

I think that's the only way to do it if you want to relabel port 5000. 








Thanks. 
Joe. 

[1] 
http://www.redhat.com/archives/fedora-selinux-list/2009-September/msg00056.html 




 Another problem with port 5000 in Fedora, and probably more recent
 versions of RHEL, is the selinux policy:
  
 # sudo semanage port -l|grep 5000
 ...
 commplex_main_port_t tcp 5000
 commplex_main_port_t udp 5000
  
 There is some service called commplex that has already claimed port
 5000 for its use, at least as far as selinux goes. 




___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Swift] Composite Auth question

2014-07-17 Thread McCabe, Donagh
Dolph,

 The project in the primary X-Auth-Token (rather than the secondary 
 X-Service-Token) should be the one that conveys the scope

That had been my assumption and I had hoped that was OK. It certainly seemed 
good enough for Swift.

 …so a service could be assigned a role on a domain with inheritance, and then 
 the service can generate project-scoped tokens...

This creates two issues:

A/ There is an extra overhead on domain owners to set up the service user with 
the appropriate role. In a multi-domain environment, this means that the glance 
service user is probably in a different domain than the end-user's domain 
(e.g., the glance user is set up in domain A; the owners of domains B, C, D need 
to know details of a domain A user). This is more feasible than doing it for 
every project, but there is still some overhead. [footnote]

B/ The glance service needs to ask Keystone for a token for every request -- it 
cannot simply re-use a glance-user token. So there are several extra calls 
needed: glance asks for a token scoped to the end-user's project; Keystone 
creates the token; and Swift won't find it in memcache, so it will need to 
validate the token. I may be overstating this. If glance/cinder is the 
use-case, this is not much overhead for a comparatively infrequent event.

I'm toying with the idea that Swift would know what projects were needed to use 
an X-Service-Token. In effect, Swift would know that container X is owned by 
project X, but is also owned-for-update by the glance project. With this scheme, 
the composite auth is teamed with composite ownership. Specifically, the 
X-Service-Token is scoped to the glance project, the X-Auth-Token is scoped to 
the end-user's project, and Swift checks both.
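
For illustration, a request under that scheme would carry both tokens, 
along these lines (the path and token scopes here are illustrative):

  PUT /v1/AUTH_enduserproject/container/object
  X-Auth-Token: <token scoped to the end-user's project>
  X-Service-Token: <token scoped to the glance service project>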

I didn't want to offer up this idea because it's more complexity for the 
deployer (but once done, the domain admins would not have to do anything).

A general question: I understand the Swift use-case. I believe there are other 
use cases for composite auth. Do they have similar project-role-scope issues? I 
guess Swift is different in that the user gets to name the project (in the 
path) whereas other projects derive the target project from the X-Auth-Token.

[footnote] I use glance as an example...applies to Cinder and *every* service 
that plans to use composite auth


From: Dolph Mathews [mailto:dolph.math...@gmail.com] 
Sent: 17 July 2014 15:30
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone] [Swift] Composite Auth question


On Thu, Jul 17, 2014 at 5:43 AM, McCabe, Donagh donagh.mcc...@hp.com wrote:
Hi,
 
I’m working on the Swift implications of using composite authorization [1] [2].
 
My question for Keystone developers is : what  project-id do we expect the 
service token to be scoped to - the service's project or the end-user's 
project? When reviewing the Keystone spec, I had assumed the former. However, 
now that I'm looking at it in more detail, I would like to check my 
understanding.
FWIW, prior to reading the below, I would have said I don't think it should 
matter to Swift. The project in the primary X-Auth-Token (rather than the 
secondary X-Service-Token) should be the one that conveys the scope. But... I'm 
probably wrong, and I think I prefer option 2 below.
 
The implications are:
 
1/ If scoped to the service's project, the role used must be exclusive to 
Glance/Cinder.
I.e. an end-user must never be assigned this role.
In effect, a role on one project grants the service user some privileges on 
every project.
Keystone would never make this last behavior ^ explicit, without hierarchical 
multitenancy (and a single root project)... because we already have the 
solution I describe below...
 
2/ if scoped to the end-user's project, the glance/cinder service user must 
have a role on every project that uses them (including across domains); this 
seems infeasible.
Keystone does have domain-level role assignments that are inherited to 
projects... so a service could be assigned a role on a domain with inheritance, 
and then the service can generate project-scoped tokens (with the domain-level 
role applied) for any project in the domain. I think that would make option 2 
much more feasible, but yes- you still need a role assignment for per domain in 
the system.
 
Regards,
Donagh
 
[1] swift-specs: https://review.openstack.org/105228
[2] keystone-specs: https://review.openstack.org/#/c/96315/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Chuck Short
On Thu, Jul 17, 2014 at 8:13 AM, Mark McLoughlin mar...@redhat.com wrote:

 On Thu, 2014-07-17 at 09:58 +0100, Daniel P. Berrange wrote:
  On Thu, Jul 17, 2014 at 08:46:12AM +1000, Michael Still wrote:
   Top posting to the original email because I want this to stand out...
  
   I've added this to the agenda for the nova mid cycle meetup, I think
   most of the contributors to this thread will be there. So, if we can
   nail this down here then that's great, but if we think we'd be more
   productive in person chatting about this then we have that option too.
 
  FYI, I'm afraid I won't be at the mid-cycle meetup since it clashed with
  my being on holiday. So I'd really prefer if we keep the discussion on
  this mailing list where everyone has a chance to participate.

 Same here. Pre-arranged vacation, otherwise I'd have been there.

 Mark.


I'll be there.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-17 Thread Evgeny Fedoruk
Thanks for summarizing it, Stephen.

I made changes and pushed a new patch for review.
https://review.openstack.org/#/c/98640/14

Please review it.

Thank you all !
Evg


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Thursday, July 17, 2014 6:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

Ok, folks!

Per the IRC meeting this morning, we came to the following consensus regarding 
how TLS certificates are handled, how SAN is handled, and how hostname conflict 
resolution is handled. I will be responding to all three of the currently 
ongoing mailing list discussions with this info:


  *   Driver does not have to use SAN that is passed from API layer, but SAN 
will be available to drivers at the API layer. This will be mentioned 
explicitly in the spec.
  *   Order is a mandatory attribute. It's intended to be used as a hint for 
hostname conflict resolution, but it's ultimately up to the driver to decide 
how to resolve the conflict. (In other words, although it is a mandatory 
attribute in our model, drivers are free to ignore it.)
  *   Drivers are allowed to vary their behavior when choosing how to implement 
hostname conflict resolution since there is no single algorithm here that all 
vendors are able to support. (This is anticipated to be a rare edge case 
anyway.)
I think Evgeny will be updating the specs to reflect this decision so that it 
is documented--  we hope to get ultimate approval of the spec in the next day 
or two.

Thanks,
Stephen



On Wed, Jul 16, 2014 at 7:31 PM, Stephen Balukoff 
sbaluk...@bluebox.netmailto:sbaluk...@bluebox.net wrote:
Just saw this thread after responding to the other:

I'm in favor of Evgeny's proposal. It sounds like it should resolve most (if 
not all) of the operators', vendors' and users' concerns with regard to 
handling TLS certificates.

Stephen

On Wed, Jul 16, 2014 at 12:35 PM, Carlos Garza 
carlos.ga...@rackspace.commailto:carlos.ga...@rackspace.com wrote:

On Jul 16, 2014, at 10:55 AM, Vijay Venkatachalam 
vijay.venkatacha...@citrix.commailto:vijay.venkatacha...@citrix.com
 wrote:

 Apologies for the delayed response.

 I am OK with displaying the certificates contents as part of the API, that 
 should not harm.

 I think the discussion has to be split into 2 topics.

 1.   Certificate conflict resolution. Meaning what is expected when 2 or 
 more certificates become eligible during SSL negotiation
 2.   SAN support

Ok cool, that makes more sense. #2 seems to be met by Evgeny's proposal. I'll 
let you folks decide the conflict resolution issue #1.


 I will send out 2 separate mails on this.


 From: Samuel Bercovici [mailto:samu...@radware.com]
 Sent: Tuesday, July 15, 2014 11:52 PM
 To: OpenStack Development Mailing List (not for usage questions); Vijay 
 Venkatachalam
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

 OK.

 Let me be more precise, extracting the information for view sake / validation 
 would be good.
 Providing values that are different than what is in the x509 is what I am 
 opposed to.

 +1 for Carlos on the library and that it should be ubiquitously used.

 I will wait for Vijay to speak for himself in this regard…

 -Sam.


 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
 Sent: Tuesday, July 15, 2014 8:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

 +1 to German's and  Carlos' comments.

 It's also worth pointing out that some UIs will definitely want to show SAN 
 information and the like, so either having this available as part of the API, 
 or as a standard library we write which then gets used by multiple drivers is 
 going to be necessary.

 If we're extracting the Subject Common Name in any place in the code then we 
 also need to be extracting the Subject Alternative Names at the same place. 
 From the perspective of the SNI standard, there's no difference in how these 
 fields should be treated, and if we were to treat SANs differently then we're 
 both breaking the standard and setting a bad precedent.

 Stephen


 On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza
 carlos.ga...@rackspace.com wrote:

 On Jul 15, 2014, at 10:55 AM, Samuel Bercovici
 samu...@radware.com wrote:

  Hi,
 
 
  Obtaining the domain name from the x509 is probably more of a 
  driver/backend/device capability, it would make sense to have a library 
  that could be used by anyone wishing to do so in their driver code.

 You can do whatever you want in *your* driver. The code to 
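
For illustration, a rough sketch of the kind of shared extraction helper
being discussed, using pyOpenSSL (this is not Neutron code and the function
name is invented). It deliberately treats the Subject CN and the DNS entries
of subjectAltName uniformly, per Stephen's point above:

from OpenSSL import crypto


def get_certificate_hostnames(pem_data):
    # Load the PEM certificate and collect every hostname it asserts.
    cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem_data)
    names = []
    cn = cert.get_subject().commonName
    if cn:
        names.append(cn)
    for i in range(cert.get_extension_count()):
        ext = cert.get_extension(i)
        if ext.get_short_name() == 'subjectAltName':
            # str(ext) renders like "DNS:example.com, DNS:www.example.com"
            for part in str(ext).split(','):
                kind, _, value = part.strip().partition(':')
                if kind == 'DNS':
                    names.append(value)
    return names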

[openstack-dev] [Rally] PTL Elections

2014-07-17 Thread Sergey Lukjanov
Hi folks,

due to the requirement to have a PTL for the program, we're running
elections for the Rally PTL for the Juno cycle. The schedule and policies
are fully aligned with the official OpenStack PTL elections.

You can find more info on the official Juno elections wiki page [0] and
the same page for the Rally elections [1], plus some more info
in the past official nominations opening email [2].

Timeline:

till 05:59 UTC July 23, 2014: Open candidacy to PTL positions
July 23, 2014 - 1300 UTC July 30, 2014: PTL elections

To announce your candidacy please start a new openstack-dev at
lists.openstack.org mailing list thread with the following subject:
[Rally] PTL Candidacy.

[0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
[1] https://wiki.openstack.org/wiki/Rally/PTL_Elections_Juno
[2] http://lists.openstack.org/pipermail/openstack-dev/2014-March/031239.html

Thank you.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Treating notifications as a contract

2014-07-17 Thread Chris Dent

On Tue, 15 Jul 2014, Sandy Walsh wrote:


This looks like a particular schema for one event-type (let's say
foo.sample).  It's hard to extrapolate this one schema to a generic
set of common metadata applicable to all events. Really the only common
stuff we can agree on is the stuff already there: tenant, user, server,
message_id, request_id, timestamp, event_type, etc.


This is pretty much what I'm trying to figure out. We can, relatively
easily, agree on a small set of stuff (like what you mention).
Presumably there are three more sets:

* special keys that could be changed to something more generally
  meaningful if we tried hard enough

* special keys that really are special and must be saved as such

* special keys that nobody cares about and can be tossed

Everybody thinks their own stuff is special[1] but it is often the case
that it's not.
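
To make the small agreed set concrete, here is a hypothetical minimal
envelope (field names taken from the list above; the values are invented
and everything producer-specific lives under payload):

common_envelope = {
    'message_id': '9c8e2f2a-1111-2222-3333-444455556666',
    'event_type': 'foo.sample',
    'timestamp': '2014-07-17T12:00:00.000000',
    'tenant_id': 'tenant-1',
    'user_id': 'user-1',
    'request_id': 'req-1',
    'payload': {},  # producer-specific keys go here
}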

In your other message you linked to http://paste.openstack.org/show/54140/
which shows some very complicated payloads (but only gets through
the first six events).

Is there related data (even speculative) for how many of those keys
are actually used?

And just looking at the paste (and the problem) generally, does it make
sense for the accessors in the dictionaries (the keys) to be terms which
are specific to the producer? Obviously that will increase the
appearance of disjunction between different events. A different
representation might not be as problematic.

Or maybe I'm completely wrong, just thinking out loud.


This way, we can keep important notifications in a priority queue and
handle them accordingly (since they hold important data), but let the
samples get routed across less-reliable transports (like UDP) via the
RoutingNotifier.


Presumably a more robust, uh, contract, for notifications, will
allow them to be dispatched (and re-dispatched) more effectively.


Also, send the samples one-at-a-time and let them either a) drop on the
floor (udp) or b) let the aggregator roll them up into something smaller
(sliding window, etc). Making these large notifications contain a list
of samples means we had to store state somewhere on the server until
transmission time. Ideally something we wouldn't want to rely on.


I've wondered about this too. Is there a history for why some of the
notifications which include samples are rolled-up lists instead of
fired off one at a time? Seems like that will hurt parallelism
opportunities.

[1] There's vernacular here that I'd prefer to use but this is a
family mailing list.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] Weekly nova-network / neutron parity meeting

2014-07-17 Thread Ihar Hrachyshka

On 17/07/14 17:12, Kyle Mestery wrote:
 On Thu, Jul 17, 2014 at 6:42 AM, Thierry Carrez
 thie...@openstack.org wrote:
 Kyle Mestery wrote:
 As we're getting down to the wire in Juno, I'd like to propose
 we have a weekly meeting on the nova-network and neutron parity
 effort. I'd like to start this meeting next week, and I'd like
 to propose Wednesday at 1500 UTC on #openstack-meeting-3 as the
 time and location.
 
 This conflicts on the agenda with the PHP SDK Team Meeting. 
 That said, I'm not sure they still run this one regularly.
 
 We could do Wednesday or Friday at 1500 UTC if that works better
 for folks. I'd prefer Wednesday, though. We could also do 1400UTC 
 Wednesday.

I'm for Wednesday. Friday is so lazy. ;)

 
 Perhaps I'll split the difference and do 1430 UTC Wednesday. Does
 that sound ok to everyone?
 
 Thanks, Kyle
 
 -- Thierry Carrez (ttx)
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] glance.store repo created

2014-07-17 Thread Flavio Percoco
On 07/17/2014 04:56 PM, Jay Pipes wrote:
 On 07/17/2014 02:42 AM, Flavio Percoco wrote:
 Greeting,

 I'd like to announce that we finally got the glance.store repo created.

 This library pulls out of glance the code related to stores. Not many
 changes were made to the API during this process. The main goal, for
 now, is to switch glance over and keep backwards compatibility with
 Glance to reduce the number of changes required. We'll improve and
 revamp the store API during K - FWIW, I've a spec draft with ideas for
 it.
 
 Link to that spec? It's important for some other in-flight nova-specs
 around zero-copy image handling and refactoring the nova.image module.

https://blueprints.launchpad.net/glance/+spec/create-store-package

 
 The library still needs some work and this is a perfect moment for
 anyone interested to chime in and contribute to the library. Some things
 that are missing:

 - Swift store (Nikhil is working on this)
 
 ? Why wasn't the existing glance.store.swift used?
 

It'll be used, it hasn't been ported yet. Porting means applying the
small changes we did to the store API. It's in the works and it will
hopefully be up soon.

Cheers,
Flavio

 Best,
 -jay
 
 - Sync latest changes made to the store code.

 If you've recently made changes to any of the stores, please go ahead
 and contribute them back to `glance.store` or let me know so I can do it.

 I'd also like to ask reviewers to request contributions to the `store`
 code in glance to be proposed to `glance.store` as well. This way, we'll
 be able to keep parity.

 I'll be releasing an alpha version soon so we can start reviewing the
 glance switch-over. We won't obviously merge it until we have feature
 parity.

 Any feedback is obviously very welcome,
 Cheers,
 Flavio

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Meeting time change

2014-07-17 Thread Flavio Percoco
On 07/16/2014 06:31 PM, Malini Kamalambal wrote:
 
 On 7/16/14 4:43 AM, Flavio Percoco fla...@redhat.com wrote:
 
 On 07/15/2014 06:20 PM, Kurt Griffiths wrote:
 Hi folks, we've been talking about this in IRC, but I wanted to bring it
 to the ML to get broader feedback and make sure everyone is aware. We'd
 like to change our meeting time to better accommodate folks that live
 around the globe. Proposals:

 Tuesdays, 1900 UTC
 Wednesdays, 2000 UTC
 Wednesdays, 2100 UTC

 I believe these time slots are free, based
 on: https://wiki.openstack.org/wiki/Meetings

 Please respond with ONE of the following:

 A. None of these times work for me
 B. An ordered list of the above times, by preference
 C. I am a robot

 I don't like the idea of switching days :/

 Since the reason we're using Wednesday is because we don't want the
 meeting to overlap with the TC and projects meeting, what if we change
 the day of both meeting times in order to keep them on the same day (and
 perhaps also channel) but on different times?

 I think changing day and time will be more confusing than just changing
 the time.
 
 If we can find an agreeable time on a non-Tuesday, I take the ownership of
 pinging & getting you to #openstack-meeting-alt ;)
 
From a quick look, #openstack-meeting-alt is free on Wednesdays on both
 times: 15 UTC and 21 UTC. Does this sound like a good day/time/idea to
 folks?
 
 1500 UTC might still be too early for our NZ folks - I thought we wanted
 to have the meeting at/after 1900 UTC.
 That being said, I will be able to attend only part of the meeting any
 time after 1900 UTC - unless it is @ Thursday 1900 UTC
 Sorry for making this a puzzle :(

We'll have 2 times. The idea is to keep the current time and have a
second time slot that is good for NZ folks. What I'm proposing is to
pick a day in the week that is good for both times and just rotate on
the time instead of time+day_of_the_week.

Again, the proposal is not to have 1 time but just 1 day and alternate
times on that day. For example, Glance meetings are *always* on
Thursdays and the time alternates every other week. We can do the same for
Marconi on Mondays, Wednesdays or Fridays.

Thoughts?


Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Clint Byrum
Excerpts from Chris Friesen's message of 2014-07-16 11:38:44 -0700:
 On 07/16/2014 11:59 AM, Monty Taylor wrote:
  On 07/16/2014 07:27 PM, Vishvananda Ishaya wrote:
 
  This is a really good point. As someone who has to deal with packaging
  issues constantly, it is odd to me that libvirt is one of the few places
  where we depend on upstream packaging. We constantly pull in new python
  dependencies from pypi that are not packaged in ubuntu. If we had to
  wait for packaging before merging the whole system would grind to a halt.
 
  I think we should be updating our libvirt version more frequently by
  installing from source or our own ppa instead of waiting for the ubuntu
  team to package it.
 
  Shrinking in terror from what I'm about to say ... but I actually agree
  with this. There are SEVERAL logistical issues we'd need to sort, not
  the least of which involve the actual mechanics of us doing that and
  properly gating, etc. But I think that, like the python depends where we
  tell distros what version we _need_ rather than using what version they
  have, libvirt, qemu, ovs and maybe one or two other things are areas in
  which we may want or need to have a strongish opinion.
 
  I'll bring this up in the room tomorrow at the Infra/QA meetup, and will
  probably be flayed alive for it - but maybe I can put forward a
  straw-man proposal on how this might work.
 
 How would this work...would you have them uninstall the distro-provided 
 libvirt/qemu and replace them with newer ones?  (In which case what 
 happens if the version desired by OpenStack has bugs in features that 
 OpenStack doesn't use, but that some other software that the user wants 
 to run does use?)
 
 Or would you have OpenStack versions of them installed in parallel in an 
 alternate location?

Yes. See: docker, lxc, chroot. (Listed in descending hipsterness order).

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] Weekly nova-network / neutron parity meeting

2014-07-17 Thread Miguel Lavalle
Mestery,

+2. Excellent idea. As I said last week, I'm not an early riser, so this
works just great for me: 10am US central time

Cheers


On Wed, Jul 16, 2014 at 9:22 PM, Kyle Mestery mest...@mestery.com wrote:

 As we're getting down to the wire in Juno, I'd like to propose we have
 a weekly meeting on the nova-network and neutron parity effort. I'd
 like to start this meeting next week, and I'd like to propose
 Wednesday at 1500 UTC on #openstack-meeting-3 as the time and
 location. If this works for people, please reply on this thread, or
 suggest an alternate time. I've started a meeting page [1] to track
 agenda for the first meeting next week.

 Thanks!
 Kyle

 [1] https://wiki.openstack.org/wiki/Meetings/NeutronNovaNetworkParity

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] glance.store repo created

2014-07-17 Thread Jay Pipes

On 07/17/2014 12:19 PM, Flavio Percoco wrote:

On 07/17/2014 04:56 PM, Jay Pipes wrote:

On 07/17/2014 02:42 AM, Flavio Percoco wrote:

Greeting,

I'd like to announce that we finally got the glance.store repo created.

This library pulls out of glance the code related to stores. Not many
changes were made to the API during this process. The main goal, for
now, is to switch glance over and keep backwards compatibility with
Glance to reduce the number of changes required. We'll improve and
revamp the store API during K - FWIW, I've a spec draft with ideas for
it.


Link to that spec? It's important for some other in-flight nova-specs
around zero-copy image handling and refactoring the nova.image module.


https://blueprints.launchpad.net/glance/+spec/create-store-package


Thanks!


The library still needs some work and this is a perfect moment for
anyone interested to chime in and contribute to the library. Some things
that are missing:

- Swift store (Nikhil is working on this)


? Why wasn't the existing glance.store.swift used?



It'll be used, it hasn't been ported yet. Porting means applying the
small changes we did to the store API. It's in the works and it will
hopefully be up soon.


Got it,
-jay


Cheers,
Flavio


Best,
-jay


- Sync latest changes made to the store code.

If you've recently made changes to any of the stores, please go ahead
and contribute them back to `glance.store` or let me know so I can do it.

I'd also like to ask reviewers to request contributions to the `store`
code in glance to be proposed to `glance.store` as well. This way, we'll
be able to keep parity.

I'll be releasing an alpha version soon so we can start reviewing the
glance switch-over. We won't obviously merge it until we have feature
parity.

Any feedback is obviously very welcome,
Cheers,
Flavio



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vmware] Mine Sweeper failures, logs are 404s

2014-07-17 Thread Matthew Booth
I've been seeing a lot of Mine Sweeper failures lately, and the link to
the logs is always a 404. Example here:

https://review.openstack.org/#/c/106082/

Log link is to:

http://208.91.1.172/logs/106082/3

Interestingly, http://208.91.1.172/logs/106082/1 *does* exist. Does
anybody know what's going on here?

Thanks,

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-refresh-config run frequency

2014-07-17 Thread Clint Byrum
Excerpts from Michael Kerrin's message of 2014-07-17 07:54:26 -0700:
 On Thursday 26 June 2014 12:20:30 Clint Byrum wrote:
  Excerpts from Macdonald-Wallace, Matthew's message of 2014-06-26 04:13:31 
 -0700:
   Hi all,
   
   I've been working more and more with TripleO recently and whilst it does
   seem to solve a number of problems well, I have found a couple of
   idiosyncrasies that I feel would be easy to address.
   
   My primary concern lies in the fact that os-refresh-config does not run on
   every boot/reboot of a system.  Surely a reboot *is* a configuration
   change and therefore we should ensure that the box has come up in the
   expected state with the correct config?
   
   This is easily fixed through the addition of an @reboot entry in
   /etc/crontab to run o-r-c or (less easily) by re-designing o-r-c to run
   as a service.
   
   My secondary concern is that through not running os-refresh-config on a
   regular basis by default (i.e. every 15 minutes or something in the same
   style as chef/cfengine/puppet), we leave ourselves exposed to someone
   trying to make a quick fix to a production node and taking that node
   offline the next time it reboots because the config was still left as
   broken owing to a lack of updates to HEAT (I'm thinking a quick change
   to allow root access via SSH during a major incident that is then left
   unchanged for months because no-one updated HEAT).
   
   There are a number of options to fix this including Modifying
   os-collect-config to auto-run os-refresh-config on a regular basis or
   setting os-refresh-config to be its own service running via upstart or
   similar that triggers every 15 minutes
   
   I'm sure there are other solutions to these problems, however I know from
   experience that claiming this is solved through education of users or
   (more severely!) via HR is not a sensible approach to take as by the time
   you realise that your configuration has been changed for the last 24
   hours it's often too late!
  So I see two problems highlighted above.
  
  1) We don't re-assert ephemeral state set by o-r-c scripts. You're right,
  and we've been talking about it for a while. The right thing to do is
  have os-collect-config re-run its command on boot. I don't think a cron
  job is the right way to go, we should just have a file in /var/run that
  is placed there only on a successful run of the command. If that file
  does not exist, then we run the command.
  
  I've just opened this bug in response:
  
  https://bugs.launchpad.net/os-collect-config/+bug/1334804
  
 
 I have been looking into bug #1334804 and I have a review up to resolve it. I 
 want to highlight something.
 
 Currently on a reboot we start all services via upstart (on debian anyways) 
 and there have been quite a lot of issues around this - missing upstart 
 scripts and timing issues. I don't know the issues on fedora.
 
 So with a fix to #1334804, on a reboot upstart will start all the services 
 first (with potentially out-of-date configuration), then o-c-c will start 
 o-r-c and will now configure all services and restart them, or start them if 
 upstart isn't configured properly.
 
 I would like to turn off all boot scripts for services we configure and leave 
 all this to o-r-c. I think this will simplify things and put us in control of 
 starting services. I believe that it will also narrow the gap between fedora 
 and debian or debian and debian so what works on one should work on the other 
 and make it easier for developers.

Agreed, and that is actually really simple. I hate to steal your thunder,
but this is the patch:

https://review.openstack.org/107772
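
For anyone following along, a rough sketch of the run-once-per-boot guard
described in bug #1334804 above (this is not the actual os-collect-config
code, and the sentinel path is hypothetical):

import os
import subprocess

SENTINEL = '/var/run/os-collect-config.ran'  # hypothetical path


def run_command_once_per_boot(command):
    # /var/run is cleared on reboot, so a missing sentinel means the
    # command has not completed successfully since the machine came up.
    if os.path.exists(SENTINEL):
        return
    subprocess.check_call(command)
    open(SENTINEL, 'w').close()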

 
 Having the ability to service nova-api stop|start|restart is very handy but 
 this will be a manual thing and I intend to leave that there.
 
 What do people think, and how best do I push this forward? I feel that this 
 leads into the re-assert-system-state spec, but mainly I think this is a 
 bug and doesn't require a spec.
 
 I will be at the upcoming tripleo mid-cycle meetup and am willing to discuss 
 this with anyone interested and to put together the necessary bits to make 
 this happen.

As I said, it is simple. :) I suggest testing the patch above and adding
anything I missed to it.

Systemd based systems will likely need something different. I'm still
burying my head in the sand and not learning systemd, but perhaps a
follow-up patch from somebody who understands it can make those systems
do the same thing.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] Weekly nova-network / neutron parity meeting

2014-07-17 Thread Maru Newby

On Jul 17, 2014, at 8:12 AM, Kyle Mestery mest...@mestery.com wrote:

 On Thu, Jul 17, 2014 at 6:42 AM, Thierry Carrez thie...@openstack.org wrote:
 Kyle Mestery wrote:
 As we're getting down to the wire in Juno, I'd like to propose we have
 a weekly meeting on the nova-network and neutron parity effort. I'd
 like to start this meeting next week, and I'd like to propose
 Wednesday at 1500 UTC on #openstack-meeting-3 as the time and
 location.
 
 This conflicts on the agenda with the PHP SDK Team Meeting.
 That said, I'm not sure they still run this one regularly.
 
 We could do Wednesday or Friday at 1500 UTC if that works better for
 folks. I'd prefer Wednesday, though. We could also do 1400UTC
 Wednesday.
 
 Perhaps I'll split the difference and do 1430 UTC Wednesday. Does that
 sound ok to everyone?
 
 Thanks,
 Kyle

+1 from me.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] RE: Barbican doesn't authenticate with Keystoine for one particular tenant

2014-07-17 Thread John Wood
Hello Robert,

You should get 400 errors for unauthenticated requests, so it seems barbican 
still isn't running with keystone. Can you reply back with your 
/etc/barbican/barbican-api-paste.ini file (with passwords removed)?

Thanks,
John



From: Robert Marshall -X (robemars - TEKSYSTEMS INC at Cisco) 
[robem...@cisco.com]
Sent: Thursday, July 17, 2014 11:28 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Barbican doesn't authenticate with Keystoine for one 
particular tenant

We have configured Barbican and Keystone according to the documentation and the 
direction provided by John Wood. We see Barbican authenticating with Keystone. 
The problem we see is that when we use the “service” tenant, we can retrieve 
secrets without authenticating with Keystone, where the “service” tenant is the 
tenant that we associate with the “barbican” service in Keystone:

(barbican27)[root@iac-int-ma15 ~]# curl -XGET -d '{}' -H "Content-type: application/json" -H "X_IDENTITY_STATUS: Confirmed" -H "X_AUTH_TOKEN: " http://localhost:9311/v1/474bc6aec82f447ca5e4452516e0b2aa/secrets
  <= Empty token - "admin" tenant UID
{"secrets": [], "total": 0}
  <= Doesn't authenticate, therefore no secrets


(barbican27)[root@iac-int-ma15 ~]# curl -XGET -d '{}' -H "Content-type: application/json" -H "X_IDENTITY_STATUS: Confirmed" -H "X_AUTH_TOKEN: " http://localhost:9311/v1/4d9cc78fdd7746c1895d0c0fc37d8a24/secrets
  <= Empty token - "service" tenant UID
  <= Shouldn't authenticate, but secrets returned
{"secrets": [{"status": "ACTIVE", "secret_ref": "http://localhost:9311/v1/4d9cc78fdd7746c1895d0c0fc37d8a24/secrets/88a99c9f-5c32-4446-8868-f732ca0c8df3", "updated": "2014-07-09T14:21:53.845324", "name": "test", "algorithm": null, "created": "2014-07-09T14:21:53.845315", "content_types": {"default": "text/plain"}, "mode": null, "bit_length": null, "expiration": null}, {"status": "ACTIVE", "secret_ref": "http://localhost:9311/v1/4d9cc78fdd7746c1895d0c0fc37d8a24/secrets/005c4bba-959e-4deb-9be3-1115516ff20f", "updated": "2014-07-09T14:33:13.908756", "name": "ppm", "algorithm": null, "created": "2014-07-09T14:33:13.908746", "content_types": {"default": "text/plain"}, "mode": null, "bit_length": null, "expiration": null}, {"status": "ACTIVE", "secret_ref": "http://localhost:9311/v1/4d9cc78fdd7746c1895d0c0fc37d8a24/secrets/1c095c70-c8bb-452d-b4c4-db0685d448b5", "updated": "2014-07-09T14:34:44.042212", "name": "nsapi", ... <snip>

The following comes from the “populate-data.sh” script that we use to populate 
the empty Keystone DB at Keystone/Barbican setup:
# Add Roles to Users in Tenants
keystone user-role-add --user $ADMIN_USER --role $ADMIN_ROLE --tenant-id 
$ADMIN_TENANT
keystone user-role-add --user $SERVICE_USER --role $SERVICE_ROLE --tenant-id 
$SERVICE_TENANT
keystone user-role-add --user $BARBICAN_USER --role $ADMIN_ROLE  --tenant-id 
$SERVICE_TENANT

# Create BARBICAN Service
BARBICAN_SERVICE=$(get_id keystone service-create --name=barbican 
--type=keystore --description=Barbican Key Management Service)

# Create BARBICAN Endpoint
keystone endpoint-create --region RegionOne --service_id $BARBICAN_SERVICE 
--publicurl http://localhost:9311/v1; --adminurl http://localhost:9312/v1; 
--internalurl http://localhost:9313/v1


Some questions:

1.  Should the “service” tenant bypass authentication and return secrets, 
due to its registered association with the Barbican Service?

2.  If so, is there a way to turn this off, so that the service tenant 
can’t be used to gather secrets?

3.  How do we turn on DEBUG in Keystone, so that we can see authentication 
occurring in Keystone in real time?

We are planning on distributing Barbican and Keystone as a secure keystore in 
an upcoming release of Cisco Systems' cloud automation software, and are hoping 
to nail this down soon to get this release ready, so we are grateful for any 
help we can get to wrap this up.

Bob Marshall
Cloud Developer
Cisco Systems
Austin Campus
210-853-7041


From: John Wood [mailto:john.w...@rackspace.com]
Sent: Wednesday, July 16, 2014 5:30 PM
To: Robert Marshall -X (robemars - TEKSYSTEMS INC at Cisco)
Cc: Matt Brown -X (mattbro2 - KFORCE INC at Cisco); Yogi Porla -X (yporla - 
KFORCE INC at Cisco); Greg Brown (gbrown2)
Subject: RE: Barbican issue: Failure to authenticate with Keystone: Security 
Issue?

Hello Robert,

Please feel free to send such emails out to the public list at:
openstack-dev@lists.openstack.org and also add '[barbican]' before the
subject name.

To 

Re: [openstack-dev] [neutron] [nova] Weekly nova-network / neutron parity meeting

2014-07-17 Thread Carl Baldwin
Kyle,

Somehow I missed this thread until now.  I will encourage the dvr team to
be there.  Is this still the time?

Carl
On Jul 16, 2014 8:24 PM, Kyle Mestery mest...@mestery.com wrote:

 As we're getting down to the wire in Juno, I'd like to propose we have
 a weekly meeting on the nova-network and neutron parity effort. I'd
 like to start this meeting next week, and I'd like to propose
 Wednesday at 1500 UTC on #openstack-meeting-3 as the time and
 location. If this works for people, please reply on this thread, or
 suggest an alternate time. I've started a meeting page [1] to track
 agenda for the first meeting next week.

 Thanks!
 Kyle

 [1] https://wiki.openstack.org/wiki/Meetings/NeutronNovaNetworkParity

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-17 Thread Morgan Fainberg


  I wasn't aware that PKI tokens had domains in them. What happens to nova
  in this case? It just works?
 
  
 Both PKI and UUID responses from v3 contain:
  
 1. the user's domain
  
 And if it's a project scoped token:
  
 2. the project's domain
  
 Or if it's a domain-scoped token:
  
 3. a domain scope
  
 The answer to your question is that if nova receives a project-scoped token
 (1 & 2), it doesn't need to be domain-aware: project IDs are globally
 unique and nova doesn't need to know about project-domain relationships.
  
 If nova receives a domain-scoped token (1 & 3), the policy layer can balk
 with an HTTP 401 because there's no project in scope, and it's not
 domain-aware. From nova's perspective, this is identical to the scenario
 where the policy layer returns an HTTP 401 because nova was presented with
 an unscoped token (1 only) from keystone.
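
(Abridged, illustrative token bodies for the three cases above; real v3
tokens carry many more fields and these IDs are invented:)

project_scoped = {
    'user': {'id': 'u1', 'domain': {'id': 'd1'}},     # 1: user's domain
    'project': {'id': 'p1', 'domain': {'id': 'd1'}},  # 2: project's domain
}

domain_scoped = {
    'user': {'id': 'u1', 'domain': {'id': 'd1'}},     # 1
    'domain': {'id': 'd1'},                           # 3: domain scope
}

unscoped = {
    'user': {'id': 'u1', 'domain': {'id': 'd1'}},     # 1 only
}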

Let me add some specifics based upon the IRC discussion I had with Joe Gordon.

In addition to what Dolph has outlined here we have this document 
http://docs.openstack.org/developer/keystone/http-api.html#how-do-i-migrate-from-v2-0-to-v3
 that should help with what is needed to do the conversion. The change to use 
v3 largely relies on a deployer enabling the V3 API in Keystone.

By and large, the change is all in the middleware. The middleware will handle 
either token, so it really comes down to when a V3 token is requested by the 
end user and subsequently used to interact with the various OpenStack services. 
This part requires no change on Nova's (or any other service's) part (with 
the exception of the Domain-Scoped tokens outlined above and the needed changes 
to policy if those are to be supported).

Each of the client libraries will need to be updated to utilize the V3 API. 
This has been in process for a while (you’ve seen the code from Jamie Lennox 
and Guang Yee) and is mostly supported by converting each of the libraries to 
utilize the Session object from keystoneclient instead of the many various 
implementations to talk to auth.
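
As a rough illustration of that conversion (argument values invented, and
assuming a reasonably recent python-keystoneclient), a program builds one v3
session and hands it to the individual clients instead of each of them
re-implementing auth:

from keystoneclient import session
from keystoneclient.auth.identity import v3

auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                   username='demo',
                   password='secret',
                   user_domain_name='Default',
                   project_name='demo',
                   project_domain_name='Default')
sess = session.Session(auth=auth)
# Client libraries that accept a session object can then speak v3
# without any auth code of their own.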

Last but not least, here are a couple of bullet points that make V3 much better 
than the V2 Keystone API (all the details of what V3 brings to the table can be 
found here: 
https://github.com/openstack/identity-api/tree/master/v3/src/markdown ). A lot 
of these benefits are operator-specific.

* Federated Identity. V3 Keystone supports the use of SAML (via shibboleth) 
from a number of sources as a form of Identity (instead of having to keep the 
users all within Keystone’s Identity backend). The federation support relies 
heavily upon the domain constructs in Keystone (which are part of V3). There is 
work to expand the support beyond SAML (including a proposal to support 
keystone-to-keystone federation).

* Pluggable Auth. V3 Keystone supports pluggable authentication mechanisms (a 
lightweight module that can authenticate the user). This is a bit more 
friendly than needing to subclass the entire Identity backend with a bunch of 
conditional logic. Plugins are configured via the Keystone configuration file.

* Better admin-scoping support. Domains allow us to better handle “admin” vs 
“non-admin” and limit bleeding those roles across projects (a big complaint in 
v2: you were either an admin or not an admin globally). Due to backwards 
compatibility requirements, we have largely left this as it was, but the 
support is there and can be seen via the policy.v3cloudsample.json file 
provided in the Keystone tree.

* The hierarchical multi tenancy work is being done against the V3 Keystone 
API. This is again related to the domain construct and support. This will 
likely require changes to more than just Keystone to make full use of the new 
functionality, but specifics are still up in the air as this is under active 
development.

These are just some of the benefits of V3; there are a lot of improvements over 
V2 that are not on this list (or are truly transparent to the end-user and 
deployer).


Cheers,
Morgan Fainberg







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Russell Bryant
On 07/17/2014 11:40 AM, Daniel P. Berrange wrote:
 On Wed, Jul 16, 2014 at 09:44:55AM -0700, Johannes Erdfelt wrote:
 On Wed, Jul 16, 2014, Mark McLoughlin mar...@redhat.com wrote:
 No, there are features or code paths of the libvirt 1.2.5+ driver that
 aren't as well tested as the class A designation implies. And we have
 a proposal to make sure these aren't used by default:

   https://review.openstack.org/107119

 i.e. to stray off the class A path, an operator has to opt into it by
 changing a configuration option that explains they will be enabling code
 paths which aren't yet tested upstream.

 So that means the libvirt driver will be a mix of tested and untested
 features, but only the tested code paths will be enabled by default?

 The gate not only tests code as it gets merged, it tests to make sure it
 doesn't get broken in the future by other changes.

 What happens when it comes time to bump the default version_cap in the
 future? It looks like there could potentially be a scramble to fix code
 that has been merged but doesn't work now that it's being tested. Which
 potentially further slows down development since now unrelated code
 needs to be fixed.

 This sounds like we're actively weakening the gate we currently have.
 
 If the gate has libvirt 1.2.2 and a feature is added to Nova that
 depends on libvirt 1.2.5, then the gate is already not testing that
 codepath since it lacks the libvirt version necessary to test it.
 The version cap should not be changing that, it is just making it
 more explicit that it hasn't been tested.

And hopefully it will make future updates a little smoother.  We can
turn on the new features in only a subset of jobs to minimize potential
disruption.
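
To sketch the version_cap idea in the abstract (this is not Nova's actual
implementation): libvirt reports its version as a single integer, and the
cap simply clamps which features are considered available.

# e.g. 1002005 means libvirt 1.2.5
def version_tuple(v):
    return (v // 1000000, (v % 1000000) // 1000, v % 1000)


def feature_available(host_version, version_cap, required):
    # Features newer than the configured cap are treated as unavailable,
    # even if the host libvirt would support them.
    effective = min(host_version, version_cap) if version_cap else host_version
    return version_tuple(effective) >= version_tuple(required)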

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Barbican] Is there an agreed way for plugins to log output

2014-07-17 Thread Clark, Robert Graham
As above, couldn’t see any conventions.

Thanks
-Rob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] minimal device driver for VPN

2014-07-17 Thread Julio Carlos Barrera Juez
We have followed your advice:

- We created our fake device driver at the same level as the other
device drivers
(/opt/stack/neutron/neutron/services/vpn/device_drivers/fake_device_driver.py):

import abc
import six

from neutron.openstack.common import log
from neutron.services.vpn import device_drivers


LOG = log.getLogger(__name__)


@six.add_metaclass(abc.ABCMeta)
class FakeDeviceDriver(device_drivers.DeviceDriver):
    '''
    classdocs
    '''

    def __init__(self, agent, host):
        pass

    def sync(self, context, processes):
        pass

    def create_router(self, process_id):
        pass

    def destroy_router(self, process_id):
        pass


- Our service driver located in
/opt/stack/neutron/neutron/services/vpn/service_drivers/fake_service_driver.py:

from neutron.openstack.common import log

LOG = log.getLogger(__name__)


class FakeServiceDriver():
    '''
    classdocs
    '''

    def get_vpnservices(self, context, filters=None, fields=None):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def get_vpnservice(self, context, vpnservice_id, fields=None):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def create_vpnservice(self, context, vpnservice):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def update_vpnservice(self, context, vpnservice_id, vpnservice):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def delete_vpnservice(self, context, vpnservice_id):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def get_ipsec_site_connections(self, context, filters=None,
                                   fields=None):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def get_ipsec_site_connection(self, context,
                                  ipsecsite_conn_id, fields=None):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def get_ikepolicy(self, context, ikepolicy_id, fields=None):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def get_ikepolicies(self, context, filters=None, fields=None):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def create_ikepolicy(self, context, ikepolicy):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def update_ikepolicy(self, context, ikepolicy_id, ikepolicy):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def delete_ikepolicy(self, context, ikepolicy_id):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def get_ipsecpolicies(self, context, filters=None, fields=None):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def get_ipsecpolicy(self, context, ipsecpolicy_id, fields=None):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def create_ipsecpolicy(self, context, ipsecpolicy):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def update_ipsecpolicy(self, context, ipsecpolicy_id, ipsecpolicy):
        LOG.info('XX Calling method: ' + __name__)
        pass

    def delete_ipsecpolicy(self, context, ipsecpolicy_id):
        LOG.info('XX Calling method: ' + __name__)
        pass



- Our /etc/neutron/vpn_agent.ini:

[DEFAULT]
# VPN-Agent configuration file
# Note vpn-agent inherits l3-agent, so you can use configs on l3-agent also

[vpnagent]
# vpn device drivers which vpn agent will use
# If we want to use multiple drivers, we need to define this option
# multiple times.
# vpn_device_driver=neutron.services.vpn.device_drivers.ipsec.OpenSwanDriver
#
vpn_device_driver=neutron.services.vpn.device_drivers.cisco_ipsec.CiscoCsrIPsecDriver
# vpn_device_driver=another_driver

# custom config
# implementation location:
#   /opt/stack/neutron/neutron/services/vpn/device_drivers/fake_device_driver.py
vpn_device_driver=neutron.services.vpn.device_drivers.fake_device_driver.FakeDeviceDriver

[ipsec]
# Status check interval
# ipsec_status_check_interval=60


It seems everything is working now and q-vpn starts. In one line of its log
we see:

2014-07-16 21:59:45.009 DEBUG neutron.openstack.common.service
[req-fb6ed9ca-0e71-4783-804b-81ea34b16679 None None]
service_providers.service_provider =
['VPN:fake_junos_vpnaas:neutron.services.vpn.service_drivers.fake_service_driver.FakeServiceDriver:default']
from (pid=14423) log_opt_values
/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py:1988

But now we don't know how to continue. We don't see any of our log
messages in q-vpn when we execute commands like:

neutron vpn-ipsecpolicy-create test-ike-policy
neutron vpn-ikepolicy-list
neutron vpn-service-list

We don't see any errors anywhere, either.

How should we proceed?

Thank you.

 http://dana.i2cat.net   http://www.i2cat.net/en
Julio C. Barrera Juez
http://es.linkedin.com/in/jcbarrera/en
Office phone: 

Re: [openstack-dev] [Barbican] Is there an agreed way for plugins to log output

2014-07-17 Thread John Wood
Hello Robert,

We don't have logging conventions for plugins, so any recommendations you have 
would be welcome.

Thanks,
John



From: Clark, Robert Graham [robert.cl...@hp.com]
Sent: Thursday, July 17, 2014 1:14 PM
To: OpenStack List
Subject: [openstack-dev] [Barbican] Is there an agreed way for plugins to   
log output

As above, couldn’t see any conventions.

Thanks
-Rob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Johannes Erdfelt
On Thu, Jul 17, 2014, Daniel P. Berrange berra...@redhat.com wrote:
 On Wed, Jul 16, 2014 at 09:44:55AM -0700, Johannes Erdfelt wrote:
  So that means the libvirt driver will be a mix of tested and untested
  features, but only the tested code paths will be enabled by default?
  
  The gate not only tests code as it gets merged, it tests to make sure it
  doesn't get broken in the future by other changes.
  
  What happens when it comes time to bump the default version_cap in the
  future? It looks like there could potentially be a scramble to fix code
  that has been merged but doesn't work now that it's being tested. Which
  potentially further slows down development since now unrelated code
  needs to be fixed.
  
  This sounds like we're actively weakening the gate we currently have.
 
 If the gate has libvirt 1.2.2 and a feature is added to Nova that
 depends on libvirt 1.2.5, then the gate is already not testing that
 codepath since it lacks the libvirt version necessary to test it.
 The version cap should not be changing that, it is just making it
 more explicit that it hasn't been tested.

It kind of helps. It's still implicit in that you need to look at what
features are enabled at what version and determine if it is being
tested.

But the behavior is still broken since code is still getting merged that
isn't tested. Saying that it is by design doesn't help the fact that
potentially broken code exists.

Also, this explanation doesn't answer my question about what happens
when the gate finally gets around to actually testing those potentially
broken code paths.

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] minimal device driver for VPN

2014-07-17 Thread Paul Michali (pcm)
So you have your driver loading… great!

The service driver will log in screen-q-svc.log, provided you have the service 
driver called out in neutron.conf (as the only one for VPN).

Later, you’ll need the supporting RPC classes to send messages from service 
driver to device driver…


Regards,


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Jul 17, 2014, at 2:18 PM, Julio Carlos Barrera Juez 
juliocarlos.barr...@i2cat.net wrote:

 We have followed your advice:
 
 - We created our fake device driver located in the same level as other device 
 drivers 
 (/opt/stack/neutron/neutron/services/vpn//device_drivers/fake_device_driver.py):
 
 [fake device driver, fake service driver, and vpn_agent.ini quoted in
 full in the original message above; snipped here]
 
 It seems everything is working now and q-vpn starts. In one line of its log 
 we see:
 
 2014-07-16 21:59:45.009 DEBUG 

Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Russell Bryant
On 07/17/2014 02:31 PM, Johannes Erdfelt wrote:
 On Thu, Jul 17, 2014, Daniel P. Berrange berra...@redhat.com wrote:
 On Wed, Jul 16, 2014 at 09:44:55AM -0700, Johannes Erdfelt wrote:
 So that means the libvirt driver will be a mix of tested and untested
 features, but only the tested code paths will be enabled by default?

 The gate not only tests code as it gets merged, it tests to make sure it
 doesn't get broken in the future by other changes.

 What happens when it comes time to bump the default version_cap in the
 future? It looks like there could potentially be a scramble to fix code
 that has been merged but doesn't work now that it's being tested. Which
 potentially further slows down development since now unrelated code
 needs to be fixed.

 This sounds like we're actively weakening the gate we currently have.

 If the gate has libvirt 1.2.2 and a feature is added to Nova that
 depends on libvirt 1.2.5, then the gate is already not testing that
 codepath since it lacks the libvirt version necessary to test it.
 The version cap should not be changing that, it is just making it
 more explicit that it hasn't been tested.
 
 It kind of helps. It's still implicit in that you need to look at what
 features are enabled at what version and determine if it is being
 tested.
 
 But the behavior is still broken since code is still getting merged that
 isn't tested. Saying that is by design doesn't help the fact that
 potentially broken code exists.

Well, it may not be tested in our CI yet, but that doesn't mean it's not
tested some other way, at least.

I think there are some good ideas in other parts of this thread to look
at how we can more reguarly rev libvirt in the gate to mitigate this.

There's also been work going on to get Fedora enabled in the gate. Fedora
is a distro that regularly carries a much more recent version of libvirt
(among other things), so that's another angle that may help.

 Also, this explanation doesn't answer my question about what happens
 when the gate finally gets around to actually testing those potentially
 broken code paths.

I think we would just test out the bump and make sure it's working fine
before it's enabled for every job.  That would keep potential breakage
localized to people working on debugging/fixing it until it's ready to go.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Is there an agreed way for plugins to log output

2014-07-17 Thread Doug Hellmann

On Jul 17, 2014, at 2:14 PM, Clark, Robert Graham robert.cl...@hp.com wrote:

 As above, couldn’t see any conventions.
 
 Thanks
 -Rob

While not directly related to plugins, Sean Dague is shepherding a logging 
guidelines spec in the nova-specs repository. If we think plugins, drivers, 
etc. need special handling, we should add those details to the broader spec.

https://review.openstack.org/#/c/91446/
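
In the meantime, one common convention across OpenStack (offered as a
suggestion rather than an agreed Barbican rule; the plugin class below is
invented) is a module-level logger keyed on __name__, so plugin output can
be filtered per module:

import logging

LOG = logging.getLogger(__name__)


class MyKeyPlugin(object):
    def generate_key(self):
        LOG.debug('generating key in %s', __name__)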

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceph-users] [Nova] [RBD] Copy-on-write cloning for RBD-backed disks

2014-07-17 Thread Dmitry Borodaenko
In case of Icehouse on Ubuntu 14.04, you should be able to test this
patch series by grabbing this branch from github:
https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse

and replacing contents of /usr/share/pyshared/nova with contents of
nova/ from that branch. You may also need to clean out related .pyc
files from /usr/lib/python2.7/dist-packages/nova/.
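
If it helps, the overlay step can be scripted. A rough helper along those
lines (paths as above, assuming a stock Ubuntu 14.04 layout; adjust to
taste):

import os
import shutil

SRC = 'nova'                                   # nova/ from the branch checkout
DST = '/usr/share/pyshared/nova'
PYC_DIR = '/usr/lib/python2.7/dist-packages/nova'


def overlay_patched_tree():
    # Copy the patched modules over the installed ones.
    for root, dirs, files in os.walk(SRC):
        rel = os.path.relpath(root, SRC)
        target = DST if rel == '.' else os.path.join(DST, rel)
        if not os.path.isdir(target):
            os.makedirs(target)
        for name in files:
            shutil.copy2(os.path.join(root, name), target)
    # Clear stale bytecode so the patched modules actually get loaded.
    for root, dirs, files in os.walk(PYC_DIR):
        for name in files:
            if name.endswith('.pyc'):
                os.remove(os.path.join(root, name))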


On Wed, Jul 16, 2014 at 11:22 PM, Dennis Kramer (DT) den...@holmes.nl wrote:

 Hi Dmitry,

 I've been using Ubuntu 14.04LTS + Icehouse /w CEPH as a storage
 backend for glance, cinder and nova (kvm/libvirt). I *really* would
 love to see this patch series land in Juno. It's been a real performance
 issue because of the unnecessary re-copy from-and-to CEPH when using
 the default boot from image-option. It seems that the your fix would
 be the solution to all. IMHO this is one of the most important
 features when using CEPH RBD as a backend for Openstack Nova.

 Can you point me in the right direction in how to apply this patch of
 yours on a default Ubuntu14.04LTS + Icehouse installation? I'm using
 the default ubuntu packages since Icehouse lives in core and I'm not
 sure how to apply the patch series. I would love to test and review it.

 With regards,

 Dennis

 On 07/16/2014 11:18 PM, Dmitry Borodaenko wrote:
 I've got a bit of good news and bad news about the state of
 landing the rbd-ephemeral-clone patch series for Nova in Juno.

 The good news is that the first patch in the series
 (https://review.openstack.org/91722 fixing a data loss inducing
 bug with live migrations of instances with RBD backed ephemeral
 drives) was merged yesterday.

 The bad news is that after 2 months of sitting in the review queue and
 only getting its first +1 from a core reviewer on the spec
 approval freeze day, the spec for the blueprint
 rbd-clone-image-handler (https://review.openstack.org/91486) wasn't
 approved in time. Because of that, today the blueprint was rejected
 along with the rest of the commits in the series, even though the
 code itself was reviewed and approved a number of times.

 Our last chance to avoid putting this work on hold for yet another
 OpenStack release cycle is to petition for a spec freeze exception
 in the next Nova team meeting:
 https://wiki.openstack.org/wiki/Meetings/Nova

 If you're using Ceph RBD as the backend for ephemeral disks in Nova
 and are interested in this patch series, please speak up. Since the
 biggest concern raised about this spec so far has been lack of CI
 coverage, please let us know if you're already using this patch
 series with Juno, Icehouse, or Havana.

 I've put together an etherpad with a summary of where things are
 with this patch series and how we got here:
 https://etherpad.openstack.org/p/nova-ephemeral-rbd-clone-status

 Previous thread about this patch series on ceph-users ML:
 http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-March/028097.html




-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceph-users] [Nova] [RBD] Copy-on-write cloning for RBD-backed disks

2014-07-17 Thread Dmitry Borodaenko
The meeting is in 2 hours, so you still have a chance to participate
or at least lurk :)

On Wed, Jul 16, 2014 at 11:55 PM, Somhegyi Benjamin
somhegyi.benja...@wigner.mta.hu wrote:
 Hi Dmitry,

 Will you please share with us how things went on the meeting?

 Many thanks,
 Benjamin



 -Original Message-
 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
 Dmitry Borodaenko
 Sent: Wednesday, July 16, 2014 11:18 PM
 To: ceph-us...@lists.ceph.com
 Cc: OpenStack Development Mailing List (not for usage questions)
 Subject: [ceph-users] [Nova] [RBD] Copy-on-write cloning for RBD-backed
 disks

 I've got a bit of good news and bad news about the state of landing the
 rbd-ephemeral-clone patch series for Nova in Juno.

 The good news is that the first patch in the series
 (https://review.openstack.org/91722 fixing a data loss inducing bug with
 live migrations of instances with RBD backed ephemeral drives) was
 merged yesterday.

 The bad news is that after 2 months of sitting in the review queue and only
 getting its first +1 from a core reviewer on the spec approval freeze
 day, the spec for the blueprint rbd-clone-image-handler
 (https://review.openstack.org/91486) wasn't approved in time. Because of
 that, today the blueprint was rejected along with the rest of the
 commits in the series, even though the code itself was reviewed and
 approved a number of times.

 Our last chance to avoid putting this work on hold for yet another
 OpenStack release cycle is to petition for a spec freeze exception in
 the next Nova team meeting:
 https://wiki.openstack.org/wiki/Meetings/Nova

 If you're using Ceph RBD as a backend for ephemeral disks in Nova and are
 interested in this patch series, please speak up. Since the biggest concern
 raised about this spec so far has been lack of CI coverage, please let
 us know if you're already using this patch series with Juno, Icehouse,
 or Havana.

 I've put together an etherpad with a summary of where things are with
 this patch series and how we got here:
 https://etherpad.openstack.org/p/nova-ephemeral-rbd-clone-status

 Previous thread about this patch series on ceph-users ML:
 http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-
 March/028097.html

 --
 Dmitry Borodaenko
 ___
 ceph-users mailing list
 ceph-us...@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-17 Thread Johannes Erdfelt
On Thu, Jul 17, 2014, Russell Bryant rbry...@redhat.com wrote:
 On 07/17/2014 02:31 PM, Johannes Erdfelt wrote:
  It kind of helps. It's still implicit in that you need to look at what
  features are enabled at what version and determine if it is being
  tested.
  
  But the behavior is still broken since code is still getting merged that
  isn't tested. Saying that is by design doesn't help the fact that
  potentially broken code exists.
 
 Well, it may not be tested in our CI yet, but that doesn't mean it's not
 tested some other way, at least.

I'm skeptical. Unless it's tested continuously, it'll likely break at
some point.

We seem to be picking and choosing when the 'continuous' part of CI
applies. I'd understand if it were done reluctantly because of immediate
problems, but this reads like it's considered acceptable long-term too.

 I think there are some good ideas in other parts of this thread to look
 at how we can more regularly rev libvirt in the gate to mitigate this.
 
 There's also been work going on to get Fedora enabled in the gate, which
 is a distro that regularly carries a much more recent version of libvirt
 (among other things), so that's another angle that may help.

That's an improvement, but I'm still not sure I understand what the
workflow will be for developers.

Do they now need to wait for Fedora to ship a new version of libvirt?
Fedora is likely to help because of how quickly it generally ships new
packages and because of its release schedule, but it would still hold
back some features, wouldn't it?

  Also, this explanation doesn't answer my question about what happens
  when the gate finally gets around to actually testing those potentially
  broken code paths.
 
 I think we would just test out the bump and make sure it's working fine
 before it's enabled for every job.  That would keep potential breakage
 localized to people working on debugging/fixing it until it's ready to go.

The downside is that new features for libvirt could be held back by
needing to fix other unrelated features. This is certainly not a bigger
problem than users potentially running untested code simply because they
are on a newer version of libvirt.

I understand we have an immediate problem and I see the short-term value
in the libvirt version cap.

I try to look at the long term, and unless it's clear to me that a solution
is proposed as short-term with some understood trade-offs, I'll question its
long-term implications.

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceph-users] [Nova] [RBD] Copy-on-write cloning for RBD-backed disks

2014-07-17 Thread Russell Bryant
On 07/17/2014 03:07 PM, Dmitry Borodaenko wrote:
 The meeting is in 2 hours, so you still have a chance to participate
 or at least lurk :)

Note that this spec has 4 members of nova-core sponsoring it for an
exception on the etherpad tracking potential exceptions.  We'll review
further in the meeting.

https://etherpad.openstack.org/p/nova-juno-spec-priorities

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Gap 0 (database migrations) closed!

2014-07-17 Thread Carl Baldwin
Anna,

Your second point is going to be a bit of a maintenance headache.  I
was reviewing a patch [1] where you caught the lack of an import in
head.py.  I'm not sure that I can be trusted to check consistently for
this in future reviews.  I'm not sure that we can rely on you catching
them all either.  Jenkins didn't seem to have any problem with the
patch without the import.

So, how can we guarantee good maintenance of this head.py file?
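
For reference, head.py today is essentially just a list of imports, one line
per module that defines models, so maintaining it means remembering to add a
line for every new model-bearing module; a minimal sketch of its shape
(module names illustrative):

  # neutron/db/migration/models/head.py (sketch)
  # import every module defining models so the metadata sees all tables
  from neutron.db import agents_db  # noqa
  from neutron.db import l3_db  # noqa
  # ...one import per module that defines models...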

Carl

[1] 
https://review.openstack.org/#/c/102101/40/neutron/db/migration/models/head.py

On Wed, Jul 16, 2014 at 2:14 AM, Anna Kamyshnikova
akamyshnik...@mirantis.com wrote:
 Hello everyone!

 I would like to bring the next two points to everybody's attention:

 1) As Henry mentioned, if you add a new migration you should make it
 unconditional. Conditional migrations will no longer be merged.

 2) If you add new models, you should ensure that the module containing them
 is imported in /neutron/db/migration/models/head.py.

 The second point is important for the testing change which I hope will be
 merged soon: https://review.openstack.org/76520.

 Regards,
 Ann



 On Wed, Jul 16, 2014 at 5:54 AM, Kyle Mestery mest...@mestery.com wrote:

 On Tue, Jul 15, 2014 at 5:49 PM, Henry Gessau ges...@cisco.com wrote:
  I am happy to announce that the first (zero'th?) item in the Neutron Gap
  Coverage[1] has merged[2]. The Neutron database now contains all tables
  for
  all plugins, and database migrations are no longer conditional on the
  configuration.
 
  In the short term, Neutron developers who write migration scripts need
  to set
migration_for_plugins = ['*']
  but we will soon clean up the template for migration scripts so that
  this will
  be unnecessary.
 
  I would like to say special thanks to Ann Kamyshnikova and Jakub
  Libosvar for
  their great work on this solution. Also thanks to Salvatore Orlando and
  Mark
  McClain for mentoring this through to the finish.
 
  [1]
 
  https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage
  [2] https://review.openstack.org/96438
 
 This is great news! Thanks to everyone who worked on this particular
 gap. We're making progress on the other gaps identified in that plan,
 I'll send an email out once Juno-2 closes with where we're at.

 Thanks,
 Kyle

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group-based Policy code sprint

2014-07-17 Thread Kevin Benton
Is there somewhere we should RSVP to this?


On Tue, Jul 15, 2014 at 12:33 PM, Sumit Naiksatam sumitnaiksa...@gmail.com
wrote:

 Hi All,

 The Group Policy team is planning to meet on July 24th to focus on
 making progress with the pending items for Juno, and also to
 facilitate the vendor drivers. The specific agenda will be posted on
 the Group Policy wiki:
 https://wiki.openstack.org/wiki/Neutron/GroupPolicy

 Prasad Vellanki from One Convergence has graciously offered to host
 this for those planning to attend in person in the bay area:
 Address:
 2290 N First Street
 Suite # 304
 San Jose, CA 95131

 Time: 9.30 AM

 For those not being able to attend in person, we will post remote
 attendance details on the above Group Policy wiki.

 Thanks for your participation.

 ~Sumit.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Gap 0 (database migrations) closed!

2014-07-17 Thread Henry Gessau
Carl Baldwin c...@ecbaldwin.net wrote:
 Anna,
 
 Your second point is going to be a bit of a maintenance headache.  I
 was reviewing a patch [1] where you caught the lack of an import in
 head.py.  I'm not sure that I can be trusted to check consistently for
 this in future reviews.  I'm not sure that we can rely on you catching
 them all either.  Jenkins didn't seem to have any problem with the
 patch without the import.
 
 So, how can we guarantee good maintenance of this head.py file?

We have discussed moving all models out of their current diverse locations to
one directory, like maybe

  neutron/db/models/*.py

The idea is to move just the model classes (not the entire modules that they
currently reside in) here. Then head.py would be able to

  from neutron.db.models import *  # noqa

and this would have much less baggage than importing all the current modules.
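
In that world the models package itself gathers the classes, so the wildcard
import stays cheap; a sketch (module names illustrative):

  # neutron/db/models/__init__.py (sketch)
  # each module here holds only model classes, nothing else
  from neutron.db.models.l3 import *  # noqa
  from neutron.db.models.securitygroup import *  # noqa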

I think the convention of putting all models in one directory will be quite
easy to follow and maintain. I don't yet have a timeline for getting this
done, but it will be discussed in the Neutron DB meeting[1] on Monday.

[1] https://wiki.openstack.org/wiki/Meetings/NeutronDB

-- 
Henry

 
 Carl
 
 [1] 
 https://review.openstack.org/#/c/102101/40/neutron/db/migration/models/head.py
 
 On Wed, Jul 16, 2014 at 2:14 AM, Anna Kamyshnikova
 akamyshnik...@mirantis.com wrote:
 Hello everyone!

 I would like to bring the next two points to everybody's attention:

 1) As Henry mentioned, if you add a new migration you should make it
 unconditional. Conditional migrations will no longer be merged.

 2) If you add new models, you should ensure that the module containing them
 is imported in /neutron/db/migration/models/head.py.

 The second point is important for the testing change which I hope will be
 merged soon: https://review.openstack.org/76520.

 Regards,
 Ann



 On Wed, Jul 16, 2014 at 5:54 AM, Kyle Mestery mest...@mestery.com wrote:

 On Tue, Jul 15, 2014 at 5:49 PM, Henry Gessau ges...@cisco.com wrote:
 I am happy to announce that the first (zero'th?) item in the Neutron Gap
 Coverage[1] has merged[2]. The Neutron database now contains all tables
 for
 all plugins, and database migrations are no longer conditional on the
 configuration.

 In the short term, Neutron developers who write migration scripts need
 to set
   migration_for_plugins = ['*']
 but we will soon clean up the template for migration scripts so that
 this will
 be unnecessary.

 I would like to say special thanks to Ann Kamyshnikova and Jakub
 Libosvar for
 their great work on this solution. Also thanks to Salvatore Orlando and
 Mark
 McClain for mentoring this through to the finish.

 [1]

 https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage
 [2] https://review.openstack.org/96438

 This is great news! Thanks to everyone who worked on this particular
 gap. We're making progress on the other gaps identified in that plan,
 I'll send an email out once Juno-2 closes with where we're at.

 Thanks,
 Kyle

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Multi-attach nova-specs exception?

2014-07-17 Thread Boring, Walter
Hey folks,
  I would like to request a spec freeze exception for the volume multi-attach
nova spec. My spec is here:
https://review.openstack.org/#/c/90239/

I'm actively working on the code and working out some of the issues raised in
the spec process to ensure a backwards-compatible mode for the Cinder API.

Thanks,
Walt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-17 Thread Carlos Garza
I added the following comments to patch 14. I'm not -1'ing it, but I think
it's a mistake to assume the subjectAltName is a string type. See below.

--- Comments on patch 14 below 

SubjectAltNames are not strings and should be thought
of as an array of tuples. Example:
[('dNSName', 'www.somehost.com'),
 ('dirNameCN', 'www.somehostFromAltCN.org'),
 ('dirNameCN', 'www.anotherHostFromAltCN.org')]

For right now we only care about entries of type dNSName,
or entries of type DirName that also contain a CN in the DirName
container. All other AltNames can be ignored, as they don't seem to be a part
of hostname validation in PKIX.

Also, we don't need to store these in the object model, since they
can be extracted from the X509 on the fly. Just be aware that
the SubjectAltName should not be treated as a simple string but as a
list of (general_name_type, general_name_value) tuples.

We're really close to the end, but we can't mess this one up.

I'm flexible on whether you want these values stored in the database
or not. If we do store them, we need a table called
general_names that contains varchars for type and value,
with whatever you want to use for the keys, mapping
back to the tls_container_id. We would also need a
firm decision on which strings in type should map to
GEN_DNS and GEN_DIRNAME CN entries from the
OpenSSL layer.

For now we can skip GEN_DIRNAME entries, since RFC 2818 doesn't mandate their
support and I'm not sure fetching the CN from the DirName is in practice
now. I'm leery of using CNs from DirName entries, as I can imagine people
signing different X509Names as a DirName with no intention of hostname
validation. Example:
(dirName, 'cn=john.garza,ou=people,o=somecompany')

dNSName and DirName encodings are mentioned in RFC 2459, if you want a more
formal definition.
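
For illustration, extracting exactly those (type, value) tuples takes only a
few lines with the pyca/cryptography library; this is a sketch under that
assumption, not necessarily the library the patch set uses:

  # sketch: pull dNSName SAN entries out of a PEM cert as (type, value) tuples
  from cryptography import x509
  from cryptography.hazmat.backends import default_backend
  from cryptography.x509.oid import ExtensionOID

  def get_subject_alt_names(pem_data):
      # pem_data is the certificate bytes, never the private key
      cert = x509.load_pem_x509_certificate(pem_data, default_backend())
      try:
          ext = cert.extensions.get_extension_for_oid(
              ExtensionOID.SUBJECT_ALTERNATIVE_NAME)
      except x509.ExtensionNotFound:
          return []
      # keep only dNSName entries, per the PKIX hostname-validation note above
      return [('dNSName', name)
              for name in ext.value.get_values_for_type(x509.DNSName)]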

On Jul 17, 2014, at 10:19 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 Ok, folks!
 
 Per the IRC meeting this morning, we came to the following consensus 
 regarding how TLS certificates are handled, how SAN is handled, and how 
 hostname conflict resolution is handled. I will be responding to all three of 
 the currently ongoing mailing list discussions with this info:
 
   • Driver does not have to use SAN that is passed from API layer, but 
 SAN will be available to drivers at the API layer. This will be mentioned 
 explicitly in the spec.
   • Order is a mandatory attribute. It's intended to be used as a hint 
 for hostname conflict resolution, but it's ultimately up to the driver to 
 decide how to resolve the conflict. (In other words, although it is a 
 mandatory attribute in our model, drivers are free to ignore it.)
   • Drivers are allowed to vary their behavior when choosing how to 
 implement hostname conflict resolution since there is no single algorithm 
 here that all vendors are able to support. (This is anticipated to be a rare 
 edge case anyway.)
 I think Evgeny will be updating the specs to reflect this decision so that it 
 is documented--  we hope to get ultimate approval of the spec in the next day 
 or two.
 
 Thanks,
 Stephen
 
 
 
 
 On Wed, Jul 16, 2014 at 7:31 PM, Stephen Balukoff sbaluk...@bluebox.net 
 wrote:
 Just saw this thread after responding to the other:
 
 I'm in favor of Evgeny's proposal. It sounds like it should resolve most (if 
 not all) of the operators', vendors' and users' concerns with regard to 
 handling TLS certificates.
 
 Stephen
 
 
 On Wed, Jul 16, 2014 at 12:35 PM, Carlos Garza carlos.ga...@rackspace.com 
 wrote:
 
 On Jul 16, 2014, at 10:55 AM, Vijay Venkatachalam 
 vijay.venkatacha...@citrix.com
  wrote:
 
  Apologies for the delayed response.
 
 I am OK with displaying the certificate's contents as part of the API; that
 should not do any harm.
 
  I think the discussion has to be split into 2 topics.
 
  1.   Certificate conflict resolution. Meaning what is expected when 2 
  or more certificates become eligible during SSL negotiation
  2.   SAN support
 
 
 Ok cool that makes more sense. #2 seems to be met by Evgeny proposal. 
 I'll let you folks decide the conflict resolution issue #1.
 
 
  I will send out 2 separate mails on this.
 
 
  From: Samuel Bercovici [mailto:samu...@radware.com]
  Sent: Tuesday, July 15, 2014 11:52 PM
  To: OpenStack Development Mailing List (not for usage questions); Vijay 
  Venkatachalam
  Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
  Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
 
  OK.
 
  Let me be more precise, extracting the information for view sake / 
  validation would be good.
  Providing values that are different than what is in the x509 is what I am 
  opposed to.
 
  +1 for Carlos on the library and that it should be ubiquitously used.
 
  I will wait for Vijay to speak for himself in this regard…
 
  -Sam.
 
 
  From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
  Sent: Tuesday, July 15, 2014 8:35 PM

Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

2014-07-17 Thread Mooney, Sean K
Hi

Following the discussion in yesterday's ML2 meeting we have updated the nova
blueprint and submitted a neutron blueprint for the ML2 changes.

https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost
https://review.openstack.org/#/c/95805/

https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost
https://review.openstack.org/#/c/107797/1

I would like to thank everyone for their feedback so far, especially Irena and
Kyle for their input on the implementation and for recommending that we
bring this feature to the ML2 sub-meeting.

I look forward to progressing this feature for the Juno release.

Regards,
Sean

From: Czesnowicz, Przemyslaw
Sent: Wednesday, July 16, 2014 1:55 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Cc: Mooney, Sean K; Czesnowicz, Przemyslaw
Subject: RE: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

Hi,

We were looking at this solution in the beginning, but it won't work with
OpenDaylight.
With OpenDaylight there is no agent running on the node, so this info would
have to be provided by OpenDaylight.

Thanks
Przemek
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Sunday, July 13, 2014 8:31 AM
To: Czesnowicz, Przemyslaw; OpenStack Development Mailing List (not for usage 
questions)
Cc: Mooney, Sean K
Subject: RE: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

Hi,
For the agent to notify the server about node-specific info, you can leverage
the periodic state report that the neutron agent sends to the neutron server.
As an option, the ML2 mechanism driver can check that agent report and,
depending on the datapath_type, update vif_details.
This can be done similarly to bridge_mappings:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_openvswitch.py#43
BR,
Irena
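
A rough sketch of what that could look like in the ML2 OVS mechanism driver
(hypothetical, modeled on the bridge_mappings check linked above; assumes the
usual driver_api import as api):

  # sketch: consult the agent's state report before filling in vif_details
  def try_to_bind_segment_for_agent(self, context, segment, agent):
      if self.check_segment_for_agent(segment, agent):
          vif_details = dict(self.vif_details)
          # 'netdev' is the userspace (dpdk) datapath type in Open vSwitch
          if agent['configurations'].get('datapath_type') == 'netdev':
              vif_details['use_dpdk'] = True
          context.set_binding(segment[api.ID], self.vif_type, vif_details)
          return True
      return False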


From: Czesnowicz, Przemyslaw [mailto:przemyslaw.czesnow...@intel.com]
Sent: Thursday, July 10, 2014 6:20 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Cc: Mooney, Sean K
Subject: RE: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

Hi,

Thanks for Your answers.

Yep, using binding:vif_details makes more sense. We would like to reuse
VIF_TYPE_OVS and modify nova to use the userspace vhost when the 'use_dpdk'
flag is present.
What we are missing is how to inform the ML2 plugin/mechanism drivers when to
put that 'use_dpdk' flag into vif_details.

On the node, ovs_neutron_agent could look up datapath_type in ovsdb, but how
can we provide that info to the plugin?
Currently there is no mechanism to get node-specific info into the ML2 plugin
(or at least we don't see one).

Any ideas on how this could be implemented?
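
For the lookup itself, something like this is what we had in mind on the
agent side (a sketch, shelling out to ovs-vsctl; the value would go into the
configurations dict the agent already sends in its periodic state report):

  # sketch: ovs_neutron_agent reporting the datapath type of br-int
  import subprocess

  def get_datapath_type(bridge='br-int'):
      out = subprocess.check_output(
          ['ovs-vsctl', 'get', 'Bridge', bridge, 'datapath_type'])
      # ovsdb returns '""' when unset; treat that as the kernel datapath
      return out.strip().strip('"') or 'system'

  # in _report_state():
  # self.agent_state['configurations']['datapath_type'] = get_datapath_type()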

Regards
Przemek
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Thursday, July 10, 2014 8:08 AM
To: OpenStack Development Mailing List (not for usage questions); Czesnowicz, 
Przemyslaw
Cc: Mooney, Sean K
Subject: RE: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

Hi,
For passing information from neutron to the nova VIF driver, you should use
the binding:vif_details dictionary. You may not require a new VIF_TYPE; you
can leverage the existing VIF_TYPE_OVS and add 'use_dpdk' to the vif_details
dictionary. This will require some rework of the existing libvirt
VIF_TYPE_OVS vif_driver.

binding:profile is an input dictionary used to pass information required for
port binding on the server side. You may use binding:profile to pass in a
dpdk ovs request, so it will be taken into consideration during port binding
by the ML2 plugin.

I am not sure about a new vnic_type, since it would require the port owner to
pass in the requested type. Is that your intention? Should the port owner be
aware of dpdk ovs usage?
There is also a VM scheduling consideration: if a certain vnic_type is
requested, the VM should be scheduled on a node that can satisfy the request.

Regards,
Irena


From: loy wolfe [mailto:loywo...@gmail.com]
Sent: Thursday, July 10, 2014 6:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Mooney, Sean K
Subject: Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

I think both a new vnic_type and a new vif_type should be added. Right now
vnic has three types: normal, direct, and macvtap; we would need a new type,
uservhost.

As for vif_type, we now have VIF_TYPE_OVS, VIF_TYPE_QBH/QBG, and VIF_HW_VEB,
so we would need a new VIF_TYPE_USEROVS.

I don't think it's a good idea to directly reuse the ovs agent, as we have to
consider use cases where ovs and userovs co-exist. For now it's a little
painful to fork and write a new agent, but it will be easier once the ML2
agent BP is merged (https://etherpad.openstack.org/p/modular-l2-agent-outline).

On Wed, Jul 9, 2014 at 11:08 PM, Czesnowicz, Przemyslaw 
przemyslaw.czesnow...@intel.commailto:przemyslaw.czesnow...@intel.com wrote:
Hi

We (Intel Openstack team) would like to add support for dpdk based userspace 
openvswitch 

[openstack-dev] Virtio-scsi settings nova-specs exception

2014-07-17 Thread Mike Perez
As requested in the Nova meeting in #openstack-meeting, I'm posting my
nova-spec exception proposal to the ML.

Spec: 
https://review.openstack.org/#/c/103797/3/specs/juno/virtio-scsi-settings.rst
Code: https://review.openstack.org/#/c/107650/

- Nikola Dipanov was kind to be the first core sponsor. [1]
- This is an optional feature, which should make it a low risk for Nova.
- The spec was posted before the spec freeze deadline.
- Code change is reasonable and available now.
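
For context, the feature is opt-in per image; with the image properties
proposed in the spec (property names assumed here, see the spec for the
authoritative list), enabling it would look like:

  # sketch: tag an image so its instances get a virtio-scsi controller
  glance image-update <image-id> \
      --property hw_scsi_model=virtio-scsi \
      --property hw_disk_bus=scsi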

Thank you!

[1] - https://etherpad.openstack.org/p/nova-juno-spec-priorities

--
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][glance] Allow Nova to use either Glance V1 or V2: Call for reviews to get the spec merged

2014-07-17 Thread Arnaud Legendre
The spec to allow Nova to support Glance v1 and v2 got an exception for the 
Juno Spec Freeze: see [1].

The spec was created in April and has gone through 16 iterations.
It has been discussed several times already, especially at the last Glance
meetup in Washington D.C.

A couple of updates:
- The performance issues found in V2 have been fixed [2]
- We will use Glance V1 as the default API in Juno (see comments about that
here [3])
- The changes made to the Nova codebase won't be destructive, meaning that
every Nova feature will work equally with V1 and V2.

I think this is a good time to get this approved and go forward.  Please review 
the spec as soon as possible to get it merged: 
https://review.openstack.org/#/c/84887/

Thanks,
Arnaud

[1] https://etherpad.openstack.org/p/nova-juno-spec-priorities
[2] 
http://git.openstack.org/cgit/openstack/glance/commit/?id=0711daa7c27af0332d39dd7a2d906205611cce8b
[3] https://review.openstack.org/#/c/84887/14/specs/juno/use-glance-v2-api.rst

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-17 Thread Stephen Balukoff
From the comments there, I think the reason for storing the subjectAltNames
was to minimize the number of calls we will need to make to barbican. Because
the barbican container is immutable, the list of subjectAltNames won't change
so long as the container exists, and we don't have to worry about cache
invalidation. (Because really, storing the subjectAltNames locally is a
cache.) We could accomplish the same thing by storing the cert (NOT the key)
in our database as well and extracting the information we want from the x509
cert on the fly. But that seems like more work than necessary, repeatedly
extracting the same data from the same certificate that will never change.

How we store this in the database is something I'm less opinionated about,
but your idea that storing this data in a separate table seems to make
sense.

Do you really see a need to be concerned with anything but GEN_DNS entries
here? Or put another way, is there an application that would likely be used
in load balancing that makes use of any subjectAltName entries that are not
DNSNames? (I'm pretty sure that's all that all the major browsers look at
anyway-- and I don't see them changing any time soon since this satisfies
the need for implementing SNI.)  Secondary to this, does supporting other
subjectAltName types in our code cause any extra significant complication?
 In practice, I think anything that does TERMINATED_HTTPS as the listener
protocol is only going to care about dNSName entries and ignore the rest--
but if supporting the rest opens the door for more general-purpose forms of
TLS, I don't see harm in extracting these other subjectAltName types from
the x509 cert. It certainly feels more correct to treat these for what
they are: the tuples you've described.

Thanks,
Stephen



On Thu, Jul 17, 2014 at 2:29 PM, Carlos Garza carlos.ga...@rackspace.com
wrote:

 I added the following comments to patch 14. I'm not -1'ing it, but I think
 it's a mistake to assume the subjectAltName is a string type. See below.

 --- Comments on patch 14 below 

 SubjectAltNames are not strings and should be thought
 of as an array of tuples. Example:
 [('dNSName', 'www.somehost.com'),
 ('dirNameCN', 'www.somehostFromAltCN.org'),
 ('dirNameCN', 'www.anotherHostFromAltCN.org')]

 For right now we only care about entries of type dNSName,
 or entries of type DirName that also contain a CN in the
 DirName container. All other AltNames can be ignored, as they don't seem to
 be a part of hostname validation in PKIX.

 Also, we don't need to store these in the object model, since they
 can be extracted from the X509 on the fly. Just be aware that
 the SubjectAltName should not be treated as a simple string but as a
 list of (general_name_type, general_name_value) tuples.

 We're really close to the end, but we can't mess this one up.

 I'm flexible on whether you want these values stored in the database
 or not. If we do store them, we need a table called
 general_names that contains varchars for type and value,
 with whatever you want to use for the keys, mapping
 back to the tls_container_id. We would also need a
 firm decision on which strings in type should map to
 GEN_DNS and GEN_DIRNAME CN entries from the
 OpenSSL layer.

 For now we can skip GEN_DIRNAME entries, since RFC 2818 doesn't mandate their
 support and I'm not sure fetching the CN from the DirName is in practice
 now. I'm leery of using CNs from DirName entries, as I can imagine people
 signing different X509Names as a DirName with no intention of hostname
 validation. Example:
 (dirName, 'cn=john.garza,ou=people,o=somecompany')

 dNSName and DirName encodings are mentioned in RFC 2459, if you want a more
 formal definition.

 On Jul 17, 2014, at 10:19 AM, Stephen Balukoff sbaluk...@bluebox.net
 wrote:

  Ok, folks!
 
  Per the IRC meeting this morning, we came to the following consensus
 regarding how TLS certificates are handled, how SAN is handled, and how
 hostname conflict resolution is handled. I will be responding to all three
 of the currently ongoing mailing list discussions with this info:
 
• Driver does not have to use SAN that is passed from API layer,
 but SAN will be available to drivers at the API layer. This will be
 mentioned explicitly in the spec.
• Order is a mandatory attribute. It's intended to be used as a
 hint for hostname conflict resolution, but it's ultimately up to the
 driver to decide how to resolve the conflict. (In other words, although it
 is a mandatory attribute in our model, drivers are free to ignore it.)
• Drivers are allowed to vary their behavior when choosing how to
 implement hostname conflict resolution since there is no single algorithm
 here that all vendors are able to support. (This is anticipated to be a
 rare edge case anyway.)
  I think Evgeny will be updating the specs to reflect this decision so
 that it is documented--  we hope to get 

Re: [openstack-dev] [heat]Heat Db Model updates

2014-07-17 Thread Angus Salkeld
On Thu, 2014-07-17 at 08:51 -0400, Ryan Brown wrote:
 On 07/17/2014 03:33 AM, Steven Hardy wrote:
  On Thu, Jul 17, 2014 at 12:31:05AM -0400, Zane Bitter wrote:
  On 16/07/14 23:48, Manickam, Kanagaraj wrote:
  SNIP
  *Resource*
 
  Status and action should each be an enum of predefined statuses
 
  +1
 
  Rsrc_metadata - make full name resource_metadata
 
  -0. I don't see any benefit here.
 
  Agreed
 
 
 I'd actually be in favor of the change from rsrc to resource; I feel like
 rsrc is a pretty opaque abbreviation.
 
It really has nothing to do with resource; when I added this field
I wanted to call it metadata, but that conflicts with an internal
attribute in SQLAlchemy, so I had to change the name somehow.
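
The conflict is easy to reproduce with any declarative model; 'metadata' is a
reserved attribute name on SQLAlchemy's declarative base (a sketch):

  # sketch: why a column attribute can't be named 'metadata'
  from sqlalchemy import Column, Integer, Text
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  class Resource(Base):
      __tablename__ = 'resource'
      id = Column(Integer, primary_key=True)
      # metadata = Column(Text)   # InvalidRequestError: attribute name
      #                           # 'metadata' is reserved for declarative
      rsrc_metadata = Column(Text)  # hence the abbreviated name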

Are we planning data migrations for these changes?

-Angus


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Requesting spec freeze exception: LVM: Support a volume-group on shared storage spec

2014-07-17 Thread Mitsuhiro Tanino
Hi Nova cores,

I would like to request a spec freeze exception for my spec on
LVM: Support a volume-group on shared storage.

Please see the spec here:
https://review.openstack.org/#/c/97602/4/specs/juno/lvm-driver-for-shared-storage.rst

This is for the blueprint:
https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage.

This feature request involves both nova and cinder pieces.
The cinder piece is the following BP:
https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage

Both the nova spec and the cinder blueprint were approved at one point;
however, after some discussion about the cinder piece, that blueprint was
moved from approved back to discussion status. I'm therefore preparing to
explain the benefits of my proposal to the cinder community again.

As for the nova patch, I have already got a +2 from a core reviewer, so once
the cinder piece is approved again, I think I will be able to move both the
cinder piece and the nova piece forward to get into the Juno release.

- Nova piece
Blueprint    : 
https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage
nova-spec: https://review.openstack.org/97602
nova patch   : https://review.openstack.org/92443

- Cinder piece
Blueprint    : 
https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage
cinder patch : https://review.openstack.org/92479

Regards,
Mitsuhiro Tanino mitsuhiro.tan...@hds.com
 HITACHI DATA SYSTEMS
 c/o Red Hat, 314 Littleton Road, Westford, MA 01886

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Heat Db Model updates

2014-07-17 Thread Clint Byrum
Excerpts from Manickam, Kanagaraj's message of 2014-07-16 20:48:04 -0700:
 Event
 Why uuid and id both used?

The event uuid is the user-facing ID. However, we need to return events
to the user in insertion order. So we use an auto-increment primary key,
and order by that in 'heat event-list stack_name'.

We don't want to expose that integer to the user though, because knowing
the rate at which these integers increase would reveal a lot about the
goings-on inside Heat.
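
A sketch of the resulting model and query (illustrative, not the exact Heat
code):

  # sketch: internal ordering key vs. user-facing identifier
  import uuid
  from sqlalchemy import Column, Integer, String
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  class Event(Base):
      __tablename__ = 'event'
      id = Column(Integer, primary_key=True)  # auto-increment; never exposed
      uuid = Column(String(36), unique=True,
                    default=lambda: str(uuid.uuid4()))  # what users see

  # 'heat event-list' can then order by the hidden integer:
  # session.query(Event).filter_by(stack_id=sid).order_by(Event.id)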

 Resource_action is being used in both the event and resource tables, so it
 should be moved to a common table

If we're already joining to resource, OK, but it is worth noting that
there is a desire not to use a SQL table for event storage. Maintaining
those events for a large, busy stack will be expensive. The simpler
solution is to write batches of event files into swift.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >