Re: [openstack-dev] [api] [glance] conclusion needed on functional API

2015-02-15 Thread Flavio Percoco

On 15/02/15 06:31 +0000, Brian Rosmaita wrote:

This is a follow-up to the discussion at the 12 February API-WG meeting
[1] concerning functional API in Glance [2].  We made some progress, but
need to close this off so the spec can be implemented in Kilo.

I believe this is where we left off:
1. The general consensus was that POST is the correct verb.

2. Did not agree on what to POST.  Three options are in play:
(A) POST /images/{image_id}?action=deactivate
   POST /images/{image_id}?action=reactivate

(B) POST /images/{image_id}/actions
   with payload describing the action, e.g.,
   { "action": "deactivate" }
   { "action": "reactivate" }

(C) POST /images/{image_id}/actions/deactivate
   POST /images/{image_id}/actions/reactivate

The spec proposes to use (C), following the discussion at the Atlanta
summit.

As a quick summary of why (C) was proposed (since all the above were
actually discussed at the summit), I'd like to quote from Hemanth's ML
posting right after the summit [4]:

1. Discoverability of operations.  It'll be easier to expose permitted
actions through schemas [or] a json home document living at
/images/{image_id}/actions/.
2. More conducive for rate-limiting. It'll be easier to rate-limit actions
in different ways if the action type is available in the URL.
3. Makes more sense for functional actions that don't require a request
body (e.g., image deactivation).


I like option C as well. It also leaves some room for sending
parameters in the body when the action requires it.
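
For concreteness, here is a rough client-side sketch of what (C) could look
like, including an optional JSON body for actions that take parameters. The
endpoint, token, image id and the 'reason' field are illustrative assumptions,
not something defined by the spec:

    import requests

    GLANCE = 'http://glance.example.com:9292/v2'   # placeholder endpoint
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}      # placeholder token
    image_id = '84c7d8a3-98d9-4c1f-9b0e-1f2a3b4c5d6e'

    # Functional action that needs no request body:
    requests.post('%s/images/%s/actions/deactivate' % (GLANCE, image_id),
                  headers=HEADERS)

    # The same pattern leaves room for parameters when an action needs them:
    requests.post('%s/images/%s/actions/reactivate' % (GLANCE, image_id),
                  headers=HEADERS, json={'reason': 'maintenance complete'})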

Thanks for the summary, Brian.
Flavio



If you have a strong opinion, please reply to this message, and I will
report on the conclusion at the API-WG meeting at 00:00 UTC on 2015-02-19
[5].  This will be the third API-WG meeting at which this topic was
discussed; I would really like to see us reach a conclusion at this
meeting.

Thank you!

[1]
http://eavesdrop.openstack.org/meetings/api_wg/2015/api_wg.2015-02-12-16.00
.log.html
[2] https://review.openstack.org/#/c/135122
[3]
https://etherpad.openstack.org/p/glance-adding-functional-operations-to-api
[4] http://lists.openstack.org/pipermail/openstack-dev/2014-May/036416.html
[5] https://wiki.openstack.org/wiki/Meetings/API-WG




--
@flaper87
Flavio Percoco




Re: [openstack-dev] [cinder] documenting volume replication

2015-02-15 Thread Ronen Kat
Hi Ruijing,

Thanks for the comments.
Re (1) - a driver can implement replication by any means the driver sees
fit. It can be exported and made available to the scheduler/driver via the
capabilities or driver extra-spec prefixes.
Re (3) - Not sure I see how this relates to storage-side replication; do you
refer to host-side replication?

Ronen



From: Guo, Ruijing ruijing@intel.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 15/02/2015 03:41 AM
Subject: Re: [openstack-dev] [cinder] documenting volume replication




Hi, Ronen,

I don't know how to edit
https://etherpad.openstack.org/p/cinder-replication-redoc,
so I am adding some comments in this email.

1. We may add asynchronous and synchronous types for replication.
2. We may add CG (consistency group) support for replication.
3. We may add initialize-connection support for replication.

Thanks,
-Ruijing

From: Ronen Kat [mailto:ronen...@il.ibm.com]

Sent: Tuesday, February 3, 2015 9:41 PM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [cinder] documenting volume replication

As some of you are aware, the spec for replication is not up to date.
The current developer documentation, http://docs.openstack.org/developer/cinder/api/cinder.volume.driver.html,
covers replication, but some folks indicated that it needs additional details.


In order to get the spec and documentation up to date I created an Etherpad
to be a base for the update.

The Etherpad page is on https://etherpad.openstack.org/p/cinder-replication-redoc


I would appreciate it if interested parties would take a look at the Etherpad
and add comments, details, questions and feedback.


Ronen,






Re: [openstack-dev] Summit Voting and ATC emails?

2015-02-15 Thread Jeremy Stanley
On 2015-02-15 07:00:53 +0000 (+0000), Gary Kotton wrote:
 Yes, I think that they go out in batches. It would be best to check with
 Stefano if you have any issues.

Also a reminder, you need to be the owner of a change in Gerrit
which merged on or after October 16, 2014 (or have an unexpired
entry in the extra-atcs file within the governance repo) to be in
the list of people who automatically get complimentary pass
discounts.

The time period was scaled down so that you now have to be active in
the current release cycle, rather than prior conferences where you
could qualify by only having a change in the previous cycle and none
in the current one.
-- 
Jeremy Stanley




Re: [openstack-dev] [api] [glance] conclusion needed on functional API

2015-02-15 Thread Jay Pipes

On 02/15/2015 01:31 AM, Brian Rosmaita wrote:

This is a follow-up to the discussion at the 12 February API-WG meeting
[1] concerning functional API in Glance [2].  We made some progress, but
need to close this off so the spec can be implemented in Kilo.

I believe this is where we left off:
1. The general consensus was that POST is the correct verb.


Yes, POST is correct (though the resource is wrong).


2. Did not agree on what to POST.  Three options are in play:
(A) POST /images/{image_id}?action=deactivate
 POST /images/{image_id}?action=reactivate

(B) POST /images/{image_id}/actions
 with payload describing the action, e.g.,
 { "action": "deactivate" }
 { "action": "reactivate" }

(C) POST /images/{image_id}/actions/deactivate
 POST /images/{image_id}/actions/reactivate


d) POST /images/{image_id}/tasks with payload:
   { "action": "deactivate|activate" }

An action isn't created. An action is taken. A task is created. A task 
contains instructions on what action to take.
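
A purely hypothetical sketch of what (d) could look like on the wire; the
/images/{image_id}/tasks sub-resource does not exist in the current Images v2
API, and the endpoint, token and image id below are placeholders:

    import requests

    image_id = '84c7d8a3-98d9-4c1f-9b0e-1f2a3b4c5d6e'
    requests.post('http://glance.example.com:9292/v2/images/%s/tasks' % image_id,
                  headers={'X-Auth-Token': 'ADMIN_TOKEN'},
                  json={'action': 'deactivate'})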


Best,
-jay


The spec proposes to use (C), following the discussion at the Atlanta
summit.

As a quick summary of why (C) was proposed (since all the above were
actually discussed at the summit), I'd like to quote from Hemanth's ML
posting right after the summit [4]:

1. Discoverability of operations.  It'll be easier to expose permitted
actions through schemas [or] a json home document living at
/images/{image_id}/actions/.
2. More conducive for rate-limiting. It'll be easier to rate-limit actions
in different ways if the action type is available in the URL.
3. Makes more sense for functional actions that don't require a request
body (e.g., image deactivation).

If you have a strong opinion, please reply to this message, and I will
report on the conclusion at the API-WG meeting at 00:00 UTC on 2015-02-19
[5].  This will be the third API-WG meeting at which this topic was
discussed; I would really like to see us reach a conclusion at this
meeting.

Thank you!

[1]
http://eavesdrop.openstack.org/meetings/api_wg/2015/api_wg.2015-02-12-16.00
.log.html
[2] https://review.openstack.org/#/c/135122
[3]
https://etherpad.openstack.org/p/glance-adding-functional-operations-to-api
[4] http://lists.openstack.org/pipermail/openstack-dev/2014-May/036416.html
[5] https://wiki.openstack.org/wiki/Meetings/API-WG




Re: [openstack-dev] [neutron] moving openvswitch ports between namespaces considered harmful

2015-02-15 Thread Thomas Graf
[Sorry for the resend, I had to subscribe to openstack-dev first,
 maybe worth removing the subscribe requirement for outsiders]

[Copying ovs-dev]

On 02/13/15 at 01:47pm, Miguel Ángel Ajo wrote:
 Sorry, I forgot about   
 
 5)  If we put all our OVS/OF bridge logic in just one bridge (instead of N: 
 br-tun, br-int, br-ex, br-xxx),
  the performance should be yet higher, since, as far as I understood, 
 flow rule lookup could be more
  optimized into the kernel megaflows without forwarding and re-starting 
 evaluation due to patch ports.
  (Please correct me here where I’m wrong, I just have very high level 
 view of this).

Some preliminary numbers were presented at the OVS Fall Conference 2014
which indicate that a pure OVS ACL solution scales better as the
number of rules changes. You can find the numbers on slide 9 here:

http://www.openvswitch.org/support/ovscon2014/17/1030-conntrack_nat.pdf

Another obvious advantage is that, since we have to go through the OVS
flow table anyway, traversing any additional (linear) ruleset is
likely to add overhead.

FYI: Ivar (CCed) is also working on collecting numbers to compare both
architectures to kick off a discussion at the next summit. Ivar, can
you link to the talk proposal?

 On Friday, 13 de February de 2015 at 13:42, Miguel Ángel Ajo wrote:
 
  I’m working on the following items:
   
  1) Doing Openflow traffic filtering (stateful firewall) based on OVS+CT[1] 
  patch, which may
  eventually merge. Here I want to build a good amount of benchmarks to 
  be able to compare
  the current network iptables+LB solution to just openflow.
   
   Openflow filtering should be fast, as it’s quite smart at using hashes 
  to match OF rules
   in the kernel megaflows (thanks Jiri & T. Graf for explaining this to me)
  
   The only bad part is that we would have to dynamically change more 
  rules based on security
  group changes (now we do that via ip sets without reloading all the rules).

Worth pointing out that it is entirely possible to integrate ipset
with OVS in the datapath in case representing ipsets with individual
wildcard flows is not sufficient. I guess we'll know when we have more
numbers.

To do this properly, we may want to make the OVS plugin a real OF 
  controller to be able to
  push OF rules to the bridge without the need of calling ovs-ofctl on the 
  command line all the time.

We should synchronize this effort with the OVN effort. There is a lot
of overlap.

  2) Using OVS+OF to do QoS
   
  other interesting stuff to look at:
   
  3) Doing routing in OF too, thanks to the NAT capabilities of having OVS+CT 
   

Just want to point out that this is still WIP with several issues
outstanding. I think everybody agrees that it's tremendously useful
to have, but we need to be able to implement it properly. I'll let you
and anybody else interested know as soon as it's ready for testing.

  4) The namespace problem, what kinds of statistics get broken by moving 
  ports into namespaces now?,
  the short-term fix could be using veths, but “namespaceable” OVS ports 
  would be perfect, yet I understand
  the change is a big feature.
   
  If we had 1 & 3, maybe 4 wouldn’t be a problem anymore.

Implementing VRF in OVS will hide (4) for OpenStack but we should
still fix it in OVS as Ben suggested in the Bugzilla. It looks
feasible to support netns properly in OVS.



Re: [openstack-dev] [all] Replace eventlet with asyncio

2015-02-15 Thread Mike Bayer

I’ve spent most of the past week deeply reviewing the asyncio system,
including that I’ve constructed a comprehensive test suite designed to
discover exactly what kinds of latencies and/or throughput advantages or
disadvantages we may see from each of: threaded database code, gevent-based
code using Psycopg2’s asynchronous API, and asyncio using aiopg. I’ve
written a long blog post describing a bit of background about non-blocking
IO and its use in Python, and listed out detailed and specific reasons why I
don’t think asyncio is an appropriate fit for those parts of Openstack that
are associated with relational databases. We in fact don’t get much benefit
from eventlet either in this regard, and with the current situation of
non-eventlet compatible DBAPIs, our continued use of eventlet for
database-oriented code is hurting Openstack deeply. 

My recommendations are that whether or not we use eventlet or asyncio in
order to receive HTTP connections, the parts of our application that focus
on querying and updating databases should at least be behind a thread pool.
I’ve also responded to the notions that asyncio-style programming will lead
to fewer bugs and faster production of code, and in that area I think there
are also some misconceptions regarding code that’s designed to deal with
relational databases.
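
As a minimal sketch (not taken from the test suite) of the "database work
behind a thread pool" recommendation, with an illustrative table name,
connection URL and pool sizes:

    from concurrent.futures import ThreadPoolExecutor

    import sqlalchemy as sa

    engine = sa.create_engine('postgresql://nova:secret@localhost/nova',
                              pool_size=20)
    db_pool = ThreadPoolExecutor(max_workers=20)

    def _get_instance(instance_id):
        # Plain blocking DBAPI code, executed in a real worker thread.
        with engine.connect() as conn:
            row = conn.execute(
                sa.text('SELECT * FROM instances WHERE uuid = :uuid'),
                {'uuid': instance_id}).first()
            return dict(row) if row else None

    def get_instance(instance_id):
        # Whatever receives the HTTP request (eventlet greenthread or asyncio
        # coroutine) waits on the future instead of running the blocking call
        # on the event loop itself.
        return db_pool.submit(_get_instance, instance_id).result()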

The blog post is at
http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/ and
you’ll find links to the test suite, which is fully runnable, within that
post.

Victor Stinner vstin...@redhat.com wrote:

 Hi,
 
 I wrote a second version of my cross-project specification "Replace eventlet
 with asyncio". It's now open for review:
 
 https://review.openstack.org/#/c/153298/
 
 I copied it below if you prefer to read it and/or comment on it by email. Sorry,
 I'm not sure that the spec will be correctly formatted in this email. Use the
 URL if that's not the case.
 
 Victor
 
 ..
 This work is licensed under a Creative Commons Attribution 3.0 Unported
 License.
 
 http://creativecommons.org/licenses/by/3.0/legalcode
 
 =============================
 Replace eventlet with asyncio
 =============================
 
 This specification proposes to replace eventlet, implicit async programming,
 with asyncio, explicit async programming. It should fix eventlet issues,
 prepare OpenStack for the future (asyncio is now part of the Python language)
 and may improve overall OpenStack performance. It also makes usage of native
 threads simpler and more natural.
 
 Even if the title contains asyncio, the spec proposes to use trollius. The
 name asyncio is used in the spec because it is better known than trollius,
 and because trollius is almost the same thing as asyncio.
 
 The spec doesn't change OpenStack components running WSGI servers like
 nova-api.  Compatibility issues between WSGI and asyncio should be solved 
 first.
 
 The spec is focused on Oslo Messaging and Ceilometer projects. More OpenStack
 components may be modified later if the Ceilometer port to asyncio is
 successful. Ceilometer will be used to find and solve technical issues with
 asyncio, so the same solutions can be used on other OpenStack components.
 
 Blueprint: 
 https://blueprints.launchpad.net/oslo.messaging/+spec/greenio-executor
 
 Note: Since Trollius will be used, this spec is unrelated to Python 3. See the
 `OpenStack Python 3 wiki page <https://wiki.openstack.org/wiki/Python3>`_ to
 get the status of the port.
 
 
 Problem description
 ===================
 
 OpenStack components are designed to scale. There are different options
 to support a lot of concurrent requests: implicit asynchronous programming,
 explicit asynchronous programming, threads, processes, and combinations of
 these options.
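
 As a small illustration of the two asynchronous styles being contrasted here
 (not part of the spec; the host is a placeholder), eventlet's monkey-patching
 makes ordinary blocking calls switch greenthreads implicitly, while trollius
 marks every suspension point explicitly with ``yield From(...)``::

     import eventlet
     eventlet.monkey_patch()
     import urllib2

     def fetch_implicit(url):
         # Looks blocking; the patched socket yields to the eventlet hub.
         return urllib2.urlopen(url).read()

     import trollius
     from trollius import From, Return

     @trollius.coroutine
     def fetch_explicit(host):
         # Every point where the coroutine may suspend is explicit.
         reader, writer = yield From(trollius.open_connection(host, 80))
         writer.write(b'GET / HTTP/1.0\r\nHost: ' + host.encode() + b'\r\n\r\n')
         data = yield From(reader.read())
         writer.close()
         raise Return(data)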
 
 In the past, the Nova project used Tornado, then Twisted, and it is now using
 eventlet, which also became the de facto standard in OpenStack. The rationale to
 switch from Twisted to eventlet in Nova can be found in the old `eventlet vs
 Twisted
 <https://wiki.openstack.org/wiki/UnifiedServiceArchitecture#eventlet_vs_Twisted>`_
 article.
 
 Eventlet issues
 ---
 
 This section only gives some examples of eventlet issues. There are more
 eventlet issues, but the tricky ones are not widely discussed and so not well
 known. The most interesting issues are caused by the design of eventlet,
 especially the monkey-patching of the Python standard library.
 
 Eventlet itself is not really evil. Most issues come from the monkey-patching.
 The problem is that eventlet is almost always used with monkey-patching in
 OpenStack.
 
 The implementation of the monkey-patching is fragile. It's easy to forget to
 patch a function or have issues when the standard library is modified. The
 eventlet port to Python 3 showed how heavily the patcher depends on the standard
 library. A recent eventlet change (v0.16) turns off __builtin__ monkey
 patching by default to fix a tricky race condition: see `eventlet recursion
 error after RPC timeout
 

Re: [openstack-dev] [Neutron] PLUMgrid CI maintenance

2015-02-15 Thread trinath.soman...@freescale.com
Hi-

This is not the ML for these notifications. Please use the third-party announce
ML instead.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

From: Fawad Khaliq [mailto:fa...@plumgrid.com]
Sent: Sunday, February 15, 2015 12:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron] PLUMgrid CI maintenance

Folks,

PLUMgrid CI is down because of unforeseen hardware breakdown issues. We are 
working on getting it sorted out asap. I will respond to this thread when it's 
back up.

Thanks,
Fawad Khaliq



Re: [openstack-dev] Summit Voting and ATC emails?

2015-02-15 Thread Stefano Maffulli
On Sun, 2015-02-15 at 14:34 +0000, Jeremy Stanley wrote:
 Also a reminder, you need to be the owner of a change in Gerrit
 which merged on or after October 16, 2014 (or have an unexpired
 entry in the extra-atcs file within the governance repo) to be in
 the list of people who automatically get complimentary pass
 discounts.
[...]

All correct and all explained also here:

https://ask.openstack.org/en/question/45531/atc-pass-for-the-openstack-summit/

The next batch of invites will be sent out after the next OpenStack
milestone is released.

/stef




Re: [openstack-dev] [keystone] Deprecation of the auth_token fragments

2015-02-15 Thread Thomas Goirand
On 02/16/2015 12:24 AM, Morgan Fainberg wrote:
 So, let me just say that while I do not have a timeline on the removal
 of auth fragments, since it is deprecated assume that this should no
 longer be used if there is an alternative (which there is). I am willing
 to continue with the discussion on reversing course, but consider the
 deprecation notice the “far in advance” warning that they are going away
 (isn’t that what deprecation is?). 
 
 —Morgan

Well, the thing is, I'd like to write some code to actually *remove* the
auth fragments from the configuration files openstack-pkg-tools sees,
once that support is actually removed upstream. Until then, it
isn't nice to do this maintenance work if an admin is using them (despite
the deprecation).

So yes, if we really are to remove it (which again, I'd prefer not to
happen), then I would need a more specific time-frame. No, the actual
message of deprecation isn't helpful enough for me to decide when to
implement the switch, especially considering that we have been seeing it for
two cycles and no feature removal has happened.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [cinder] documenting volume replication

2015-02-15 Thread Guo, Ruijing
Hi, Ronen

3) I mean storage-based replication. Normally, volume replication uses FC
or iSCSI. We need to set up FC or iSCSI before we do volume replication.

Case 1)

Host --FC-- Storage A --iSCSI-- Storage B --FC-- Host

Case 2)

Host --FC-- Storage A --FC-- Storage B --FC-- Host

As shown in the diagram above, we need to set up the connection (iSCSI or FC)
between Storage A and Storage B.

For FC, we need to zone Storage A & Storage B in the FC switch.

Thanks,
-Ruijing

From: Ronen Kat [mailto:ronen...@il.ibm.com]
Sent: Sunday, February 15, 2015 4:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] documenting volume replication

Hi Ruijing,

Thanks for the comments.
Re (1) - a driver can implement replication by any means the driver sees fit. It
can be exported and made available to the scheduler/driver via the capabilities
or driver extra-spec prefixes.
Re (3) - Not sure I see how this relates to storage-side replication; do you
refer to host-side replication?

Ronen



From: Guo, Ruijing ruijing@intel.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 15/02/2015 03:41 AM
Subject: Re: [openstack-dev] [cinder] documenting volume replication




Hi, Ronen,

I don't know how to edit
https://etherpad.openstack.org/p/cinder-replication-redoc, so I am adding some
comments in this email.

1. We may add asynchronous and synchronous types for replication.
2. We may add CG (consistency group) support for replication.
3. We may add initialize-connection support for replication.

Thanks,
-Ruijing

From: Ronen Kat [mailto:ronen...@il.ibm.com]
Sent: Tuesday, February 3, 2015 9:41 PM
To: OpenStack Development Mailing List
(openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [cinder] documenting volume replication

As some of you are aware, the spec for replication is not up to date.
The current developer documentation,
http://docs.openstack.org/developer/cinder/api/cinder.volume.driver.html, covers
replication, but some folks indicated that it needs additional details.

In order to get the spec and documentation up to date I created an Etherpad to 
be a base for the update.
The Etherpad page is on 
https://etherpad.openstack.org/p/cinder-replication-redoc

I would appreciate it if interested parties would take a look at the Etherpad and
add comments, details, questions and feedback.

Ronen, 


Re: [openstack-dev] [keystone] Deprecation of the auth_token fragments

2015-02-15 Thread Morgan Fainberg

On February 15, 2015 at 3:54:24 PM, Thomas Goirand (z...@debian.org) wrote:

On 02/16/2015 12:24 AM, Morgan Fainberg wrote:
 So, let me just say that while I do not have a timeline on the removal
 of auth fragments, since it is deprecated assume that this should no
 longer be used if there is an alternative (which there is). I am willing
 to continue with the discussion on reversing course, but consider the
 deprecation notice the “far in advance” warning that they are going away
 (isn’t that what deprecation is?).  
  
 —Morgan

Well, the thing is, I'd like to write some code to actually *remove* the
auth fragments from configuration files openstack-pkg-tools will see,
when that support will actually be removed upstream. Until then, it
isn't nice to do this maintenance work if an admin is using it (despite
the deprecation).

So yes, if we really are to remove it (which again, I'd prefer not to
happen), then I would need a more specific time-frame. No, the actual
message of deprecation isn't helpful enough for me to decide when to
implement the switch, especially considering that it's been 2 cycles
we're seeing it, and no feature removal happened.

Cheers,

Thomas Goirand (zigo)



I just discussed this in IRC with Thomas (a bit faster than back-and-forth via 
email), and the following things came out of the conversation (context for 
those playing along at home via the ML and not in the IRC channels):

* The auth fragments should not be used if it is possible to use the new form 
(based upon the path taken today).

* There is no concrete timeline for removing the use of the auth fragments. For 
compatibility reasons, there are still other topics to discuss prior to 
removing deprecated items such as the auth fragments.

* If the auth fragment mode is significantly better than the direction we are 
headed, it is possible to consider undeprecating the auth fragment 
configuration. Discussion pertaining to reversing this direction is something 
we should continue both on the ML and during meetings (as we explore the best 
approach to handle these types of deprecations).

Cheers,
—Morgan


Re: [openstack-dev] [nova][libvirt] Block migrations and Cinder volumes

2015-02-15 Thread Robert Collins
On 19 June 2014 at 20:38, Daniel P. Berrange berra...@redhat.com wrote:
 On Wed, Jun 18, 2014 at 11:09:33PM -0700, Rafi Khardalian wrote:
 I am concerned about how block migration functions when Cinder volumes are
 attached to an instance being migrated.  We noticed some unexpected
 behavior recently, whereby attached generic NFS-based volumes would become
 entirely unsparse over the course of a migration.  After spending some time
 reviewing the code paths in Nova, I'm more concerned that this was actually
 a minor symptom of a much more significant issue.

 For those unfamiliar, NFS-based volumes are simply RAW files residing on an
 NFS mount.  From Libvirt's perspective, these volumes look no different
 than root or ephemeral disks.  We are currently not filtering out volumes
 whatsoever when making the request into Libvirt to perform the migration.
  Libvirt simply receives an additional flag (VIR_MIGRATE_NON_SHARED_INC)
 when a block migration is requested, which applies to the entire migration
 process, not differentiated on a per-disk basis.  Numerous guards exist within
 Nova to prevent a block-based migration from being allowed if the instance
 disks exist on the destination; yet volumes remain attached and within the
 defined XML during a block migration.

 Unless Libvirt has a lot more logic around this than I am lead to believe,
 this seems like a recipe for corruption.  It seems as though this would
 also impact any type of volume attached to an instance (iSCSI, RBD, etc.),
 NFS just happens to be what we were testing.  If I am wrong and someone can
 correct my understanding, I would really appreciate it.  Otherwise, I'm
 surprised we haven't had more reports of issues when block migrations are
 used in conjunction with any attached volumes.

 Libvirt/QEMU has no special logic. When told to block-migrate, it will do
 so for *all* disks attached to the VM in read-write-exclusive mode. It will
 only skip those marked read-only or read-write-shared mode. Even that
 distinction is somewhat dubious and so not reliably what you would want.

 It seems like we should just disallow block migrate when any cinder volumes
 are attached to the VM, since there is never any valid use case for doing
 block migrate from a cinder volume to itself.

 Regards,
 Daniel

Just ran across this from bug
https://bugs.launchpad.net/nova/+bug/1398999. Is there some way to
signal to libvirt that some block devices shouldn't be migrated by it
but instead are known to be networked etc? Or put another way, how can
we have our cake and eat it too? It's not uncommon for a VM to be
cinder booted but have local storage for swap... and AIUI the fix we
put in for this bug stops those VMs being migrated. Do you think it
is tractable (but needs libvirt work), or is it something endemic to
the problem (e.g. dirty page synchronisation with the VM itself) that
will be in the way?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [neutron] moving openvswitch ports between namespaces considered harmful

2015-02-15 Thread Kevin Benton
What is the status of the conntrack integration with respect to
availability in distributions? The lack of state tracking has blocked the
ability for us to get rid of namespaces for the L3 agent (because of SNAT)
and the filtering bridge between the VM and OVS (stateful firewall for
security groups).

It has been known for a long time that these are suboptimal, but our hands
are sort of tied because we don't want to require kernel code changes to
use Neutron.

Are Ubuntu 14.04 or CentOS 7 shipping openvswitch kernel modules with
conntrack integration? If not, I don't see a feasible way of eliminating
any of these problems with a pure OVS solution. (faking a stateful firewall
with flag matching doesn't count)

In the short term, we should probably switch back to veth pairs to handle
the namespace issue for the dhcp agent and the L3 agent.
[Sorry for the resend, I had to subscribe to openstack-dev first,
 maybe worth removing the subscribe requirement for outsiders]

[Copying ovs-dev]

On 02/13/15 at 01:47pm, Miguel Ángel Ajo wrote:
 Sorry, I forgot about

 5)  If we put all our OVS/OF bridge logic in just one bridge (instead of
N: br-tun, br-int, br-ex, br-xxx),
  the performance should be yet higher, since, as far as I understood,
flow rule lookup could be more
  optimized into the kernel megaflows without forwarding and
re-starting evaluation due to patch ports.
  (Please correct me here where I’m wrong, I just have very high level
view of this).

Some preliminary numbers were presented at the OVS Fall Conference 2014
which indicate that a pure OVS ACL solution scales better as the
number of rules changes. You can find the number on slide 9 here:

http://www.openvswitch.org/support/ovscon2014/17/1030-conntrack_nat.pdf

Another obvious advantage is that, since we have to go through the OVS
flow table anyway, traversing any additional (linear) ruleset is
likely to add overhead.

FYI: Ivar (CCed) is also working on collecting numbers to compare both
architectures to kick off a discussion at the next summit. Ivar, can
you link to the talk proposal?

 On Friday, 13 de February de 2015 at 13:42, Miguel Ángel Ajo wrote:

  I’m working on the following items:
 
  1) Doing Openflow traffic filtering (stateful firewall) based on
OVS+CT[1] patch, which may
  eventually merge. Here I want to build a good amount of benchmarks
to be able to compare
  the current network iptables+LB solution to just openflow.
 
   Openflow filtering should be fast, as it’s quite smart at using
hashes to match OF rules
   in the kernel megaflows (thanks Jiri & T. Graf for explaining this
to me)
 
   The only bad part is that we would have to dynamically change more
rules based on security
  group changes (now we do that via ip sets without reloading all the
rules).

Worth pointing out that it is entirely possible to integrate ipset
with OVS in the datapath in case representing ipsets with individual
wildcard flows is not sufficient. I guess we'll know when we have more
numbers.

To do this properly, we may want to make the OVS plugin a real OF
controller to be able to
  push OF rules to the bridge without the need of calling ovs-ofctl on
the command line all the time.

We should synchronize this effort with the OVN effort. There is a lot
of overlap.

  2) Using OVS+OF to do QoS
 
  other interesting stuff to look at:
 
  3) Doing routing in OF too, thanks to the NAT capabilities of having
OVS+CT

Just want to point out that this is still WIP with several issues
outstanding. I think everybody agrees that it's tremendously useful
to have, we need to be able to implement it properly. I'll let you
and anybody else interested know as soon as it's ready for testing.

  4) The namespace problem, what kinds of statistics get broken by moving
ports into namespaces now?,
  the short-term fix could be using veths, but “namespaceable” OVS
ports would be perfect, yet I understand
  the change is a big feature.
 
  If we had 1 & 3, maybe 4 wouldn’t be a problem anymore.

Implementing VRF in OVS will hide (4) for OpenStack but we should
still fix it in OVS as Ben suggested in the Bugzilla. It looks
feasible to support netns properly in OVS.



Re: [openstack-dev] [neutron][security][rootwrap] Proposal to replace rootwrap/sudo with privsep helper process (for neutron, but others too)

2015-02-15 Thread Angus Lees
On Fri Feb 13 2015 at 10:35:49 PM Miguel Ángel Ajo majop...@redhat.com
wrote:

  We have an ongoing effort in neutron to move to rootwrap-daemon.


 https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/rootwrap-daemon-mode,n,z

 Thanks for replying. I should have mentioned rootwrap-daemon in my
original post:

The main difference between this privsep proposal and rootwrap (in
daemon-mode or rootwrap classic) is that rootwrap is oriented around
command lines, and privsep is oriented around python function calls.

Using functions rather than command lines means we can move to using native
libraries more easily (as I've done here with pyroute2), which allows a
more closely integrated, performant and safer interface.  It also means we
can describe more customised behaviour in the exposed function API, leading
to fewer cross-process calls and less scope for exposing unintended
behaviour.

To pick one example, starting up a long-running daemon (assuming we wanted
to do this) from the privsep process just works without any need to work
around the privsep mechanism.

To speed up multiple system calls, and be able to spawn daemons inside
 namespaces.


Yep, rootwrap-daemon certainly removes the python rootwrap startup time;
privsep *also* removes the subprocess exec startup time for cases where we
have moved to native libraries (again, see the pyroute2 example).

I have to read a bit about the good & bad points of privsep.

 The advantage of rootwrap-daemon, is that we don’t need to change all our
 networking libraries across neutron,
 and we kill the sudo/rootwrap spawn for every call, yet keeping
 the rootwrap permission granularity.


The good news is that privsep and rootwrap (in either mode) can coexist
just fine.  A staged migration to privsep might involve spawning the
privsep daemon via sudo on the first call to something that needs it.  This
approach means we wouldn't require adding the privsep_daemon.start() call
early in any relevant main() - and the only downside is that we retain a
(single) dependency on sudo/sudoers.

 - Gus

Miguel Ángel Ajo

 On Friday, 13 de February de 2015 at 10:54, Angus Lees wrote:

 On Fri Feb 13 2015 at 5:45:36 PM Eric Windisch e...@windisch.us wrote:



 from neutron.agent.privileged.commands import ip_lib as priv_ip

 def foo():
     # Need to create a new veth interface pair - that usually requires
     # root/NET_ADMIN
     priv_ip.CreateLink('veth', 'veth0', peer='veth1')

 Because we now have elevated privileges directly (on the privileged daemon
 side) without having to shell out through sudo, we can do all sorts of
 nicer things like just using netlink directly to configure networking.
 This avoids the overhead of executing subcommands, the ugliness (and
 danger) of generating command lines and regex parsing output, and make us
 less reliant on specific versions of command line tools (since the kernel
 API should be very stable).


 One of the advantages of spawning a new process is being able to use flags
 to clone(2) and to set capabilities. This basically means to create
 containers, by some definition. Anything you have in a privileged daemon
 or privileged process ideally should reduce its privilege set for any
 operation it performs. That might mean it clones itself and executes
 Python, or it may execvp an executable, but either way, the new process
 would have less-than-full-privilege.

 For instance, writing a file might require root access, but does not need
 the ability to load kernel modules. Changing network interfaces does not
 need access to the filesystem, no more than changes to the filesystem needs
 access to the network. The capabilities and namespaces mechanisms resolve
 these security conundrums and simplify principle of least privilege.


 Agreed wholeheartedly, and I'd appreciate your thoughts on how I'm using
 capabilities in this change.  The privsep daemon limits itself to a
 particular set of capabilities (and drops root). The assumption is that
 most OpenStack services commonly need the same small set of capabilities to
 perform their duties (neutron - net_admin+sys_admin for example), so it
 makes sense to reuse the same privileged process.

 If we have a single service that somehow needs to frequently use a broad
 swathe of capabilities then we might want to break it up further somehow
 between the different internal aspects (multiple privsep helpers?) - but is
 there such a case?   There's also no problems with mixing privsep for
 frequent operations with the existing sudo/rootwrap approach for
 rare/awkward cases.

  - Gus


Re: [openstack-dev] [TripleO] stepping down as core reviewer

2015-02-15 Thread Clint Byrum
Thanks Robert. I share most of your views on this. The project will
certainly miss your reviews. I'll go ahead and remove you from the
permissions and stats.

Excerpts from Robert Collins's message of 2015-02-15 13:40:02 -0800:
 Hi, I've really not been pulling my weight as a core reviewer in
 TripleO since late last year when personal issues really threw me for
 a while. While those are behind me now, and I had a good break over
 the christmas and new year period, I'm sufficiently out of touch with
 the current (fantastic) progress being made that I don't feel
 comfortable +2'ing anything except the most trivial things.
 
 Now the answer to that is to get stuck back in, page in the current
 blueprints and charge ahead - but...
 
 One of the things I found myself reflecting on during my break was the
 extreme fragility of the things we were deploying in TripleO - most of
 our time is spent fixing fallout from unintended, unexpected
 consequences in the system. I think its time to put some effort
 directly in on that in a proactive fashion rather than just reacting
 to whichever failure du jour is breaking deployments / scale /
 performance.
 
 So for the last couple of weeks I've been digging into the Nova
 (initially) bugtracker and code with an eye to 'how did we get this
 bug in the first place', and refreshing my paranoid
 distributed-systems-ops mindset: I'll be writing more about that
 separately, but its clear to me that there's enough meat there - both
 analysis, discussion, and hopefully execution - that it would be
 self-deceptive for me to think I'll be able to meaningfully contribute
 to TripleO in the short term.
 
 I'm super excited by Kolla - I think that containers really address
 the big set of hurdles we had with image based deployments, and if we
 can one-way-or-another get cinder and Ironic running out of
 containers, we should have a pretty lovely deployment story. But I
 still think helping on the upstream stuff more is more important for
 now. We'll see where we're at in a cycle or two :)
 
 -Rob
 



Re: [openstack-dev] [api] [glance] conclusion needed on functional API

2015-02-15 Thread Jay Pipes

On 02/15/2015 01:13 PM, Brian Rosmaita wrote:

On 2/15/15, 10:10 AM, Jay Pipes jaypi...@gmail.com wrote:


On 02/15/2015 01:31 AM, Brian Rosmaita wrote:

This is a follow-up to the discussion at the 12 February API-WG
meeting [1] concerning functional API in Glance [2].  We made
some progress, but need to close this off so the spec can be
implemented in Kilo.

I believe this is where we left off: 1. The general consensus was
that POST is the correct verb.


Yes, POST is correct (though the resource is wrong).


2. Did not agree on what to POST.  Three options are in play: (A)
POST /images/{image_id}?action=deactivate POST
/images/{image_id}?action=reactivate

(B) POST /images/{image_id}/actions with payload describing the
action, e.g., { action: deactivate } { action: reactivate
}

(C) POST /images/{image_id}/actions/deactivate POST
/images/{image_id}/actions/reactivate


d) POST /images/{image_id}/tasks with payload: { action:
deactivate|activate }

An action isn't created. An action is taken. A task is created. A
task contains instructions on what action to take.


The Images API v2 already has tasks (schema available at
/v2/schemas/tasks ), which are used for long-running asynchronous
operations (right now, image import and image export).  I think we
want to keep those distinct from what we're talking about here.

Does something really need to be created for this call?  The idea
behind the functional API was to have a place for things that don't
fit neatly into the CRUD-centric paradigm.  Option (C) seems like a
good fit for this.


Why not just use the existing tasks/ interface, then? :) Seems like a 
perfect fit to me.


Best,
-jay



Re: [openstack-dev] [api] [glance] conclusion needed on functional API

2015-02-15 Thread Brian Rosmaita
On 2/15/15, 10:10 AM, Jay Pipes jaypi...@gmail.com wrote:

On 02/15/2015 01:31 AM, Brian Rosmaita wrote:
 This is a follow-up to the discussion at the 12 February API-WG meeting
 [1] concerning functional API in Glance [2].  We made some progress,
but
 need to close this off so the spec can be implemented in Kilo.

 I believe this is where we left off:
 1. The general consensus was that POST is the correct verb.

Yes, POST is correct (though the resource is wrong).

 2. Did not agree on what to POST.  Three options are in play:
 (A) POST /images/{image_id}?action=deactivate
  POST /images/{image_id}?action=reactivate

 (B) POST /images/{image_id}/actions
  with payload describing the action, e.g.,
  { "action": "deactivate" }
  { "action": "reactivate" }

 (C) POST /images/{image_id}/actions/deactivate
  POST /images/{image_id}/actions/reactivate

d) POST /images/{image_id}/tasks with payload:
{ "action": "deactivate|activate" }

An action isn't created. An action is taken. A task is created. A task
contains instructions on what action to take.

The Images API v2 already has tasks (schema available at /v2/schemas/tasks
), which are used for long-running asynchronous operations (right now,
image import and image export).  I think we want to keep those distinct
from what we're talking about here.

Does something really need to be created for this call?  The idea behind
the functional API was to have a place for things that don't fit neatly
into the CRUD-centric paradigm.  Option (C) seems like a good fit for this.

cheers,
brian

 





[openstack-dev] [cinder] FFE driver-private-data + pure-iscsi-chap-support

2015-02-15 Thread Patrick East
Hi All,

I would like to request a FFE for the following blueprints:

https://blueprints.launchpad.net/cinder/+spec/driver-private-data
https://blueprints.launchpad.net/cinder/+spec/pure-iscsi-chap-support

The first being a dependency for the second.

The new database table for driver data feature was discussed at the Cinder
mid-cycle meetup and seemed to be generally approved by the team in person
at the meeting as something we can get into Kilo.

There is currently a spec up for review for it here:
https://review.openstack.org/#/c/15/, but it doesn't look like it will be
approved by the end of the day for the deadline. I have code pretty much
ready to go for review as soon as the spec is approved, it is a relatively
small patch set.

Thanks!

Patrick East
patrick.e...@purestorage.com


Re: [openstack-dev] [keystone] Deprecation of the auth_token fragments

2015-02-15 Thread Thomas Goirand
On 02/15/2015 02:56 AM, Clint Byrum wrote:
 Excerpts from Thomas Goirand's message of 2015-02-14 16:48:01 -0800:
 Hi,

 I've seen messages in the logs telling that we should move to the
 identity_uri.

 I don't really like the identity_uri which contains all of the
 information in a single directive, which means that a script that would
 edit it would need a lot more parsing work than simply a key/value pair
 logic. This is error prone. The fragments don't have this issue.

 So, could we decide to:
 1/ Not remove the auth fragments
 2/ Remove the deprecation warnings

 
 Automation has tended away from parsing and editing files in place for
 a long time now. Typically you'd have a source of truth with all the
 values, and a tool to turn that into a URL during file generation. This
 isn't error prone in my experience.

That's true for Chef / Puppet based deployments. But for what I do with
debconf, I do insist on editing only what's needed in a pre-existing
configuration file. And yes, it should be up to the packages to maintain
configuration files, not stuff like puppet / chef. If everyone is doing
what you wrote above, it's precisely because, up to now, packages were
not doing a good enough job of configuration maintenance, which I intend
to fix in the near future.

As of right now, I'm able to deploy a full all-in-one server using only
debconf to configure the packages. This works pretty well! And I'm now
using that to run my tempest CI (with which I am having very decent
results).

My next step is to do multi-node setups using preseed as well. When
done, I'm also confident that using a preseeded Debian installer will also
work, and probably a custom Debian CD will also be possible (and then,
in the long run, make it a Debian pure blend, and integrate it into
Tasksel).

I'm really hoping that, in the long run, hard-wiring configuration
management in the packages directly will be better than puppet hacks.

 I don't really know why the single URL is preferred, but I don't think
 the argument that it makes parsing and editing the config file with
 external tools harder is strong enough to roll this deprecation back.

On 02/15/2015 06:02 AM, Morgan Fainberg wrote:
 Thomas, Can you be more specific about what use-case you’re running
 into that makes the fragments better (instead of a URI form such as
 the SQL connection string would be)?

Let me describe what's done currently in my Debian packages.

What I do in my Debian packages is use my shell script function
pkgos_inifile (available inside the openstack-pkg-tools package), which
takes get / set as its 2nd argument, then the config file, the section and
finally the directive, plus the new value in the case of set. This function
gets included in the .config and .postinst scripts of all core OpenStack
projects.

Then on top of that utility function, for the configuration part of the
package, I wrote a pkgos_read_admin_creds() function, which reads the
existing configuration (let's say in /etc/cinder/cinder.conf), and
populates the debconf cache with the values set there. Then questions
are prompted to the user.

Then in the .postinst script, values are written to the cinder.conf,
thanks to the call: pkgos_write_admin_creds ${CINDER_CONF}
keystone_authtoken cinder.

All of this currently uses the auth_token fragments, rather than the
URI. While I could write code to parse the URI, the only bit which needs
to be edited is the auth_host directive, so I find it silly to have to
parse (when reading) to extract the host value only, and parse again
(when writing) to only modify the host field. Also, the URI doesn't even
contain the admin_user, admin_tenant_name and admin_password (correct me
if I'm wrong here), so it's not even self-contained like a DSN would be.
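
To illustrate, here is a quick Python sketch (using the Python 2 stdlib, not
the actual shell pkgos_inifile helper; the file path is just an example) of
the difference: with the fragments, updating the Keystone host is a single
key/value edit, while with identity_uri the value has to be parsed and
reassembled:

    import ConfigParser
    import urlparse

    CONF = '/etc/cinder/cinder.conf'
    cfg = ConfigParser.RawConfigParser()
    cfg.read(CONF)

    # Fragment style: touch exactly one key.
    cfg.set('keystone_authtoken', 'auth_host', 'keystone.example.com')

    # URI style: read, parse, rebuild and write back the whole value.
    uri = urlparse.urlparse(cfg.get('keystone_authtoken', 'identity_uri'))
    new_netloc = 'keystone.example.com:%s' % (uri.port or 35357)
    cfg.set('keystone_authtoken', 'identity_uri',
            urlparse.urlunparse(uri._replace(netloc=new_netloc)))

    with open(CONF, 'w') as f:
        cfg.write(f)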

Last, removing the support of the auth fragments would mean that I would
have to write some configuration maintenance code that would go in both
the config and postinst scripts, which is always annoying and dangerous
to write.

So, finally, I'd prefer to have the auth fragments to stay. But if they
have to go, please let me know *EARLY* when they really do, as it will
impact me a lot.

I hope this gives more insights,
Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [keystone] Deprecation of the auth_token fragments

2015-02-15 Thread Morgan Fainberg
On February 15, 2015 at 3:16:04 PM, Thomas Goirand (z...@debian.org) wrote:
On 02/15/2015 02:56 AM, Clint Byrum wrote: 
 Excerpts from Thomas Goirand's message of 2015-02-14 16:48:01 -0800: 
 Hi, 
 
 I've seen messages in the logs telling that we should move to the 
 identity_uri. 
 
 I don't really like the identity_uri which contains all of the 
 information in a single directive, which means that a script that would 
 edit it would need a lot more parsing work than simply a key/value pair 
 logic. This is error prone. The fragments don't have this issue. 
 
 So, could we decide to: 
 1/ Not remove the auth fragments 
 2/ Remove the deprecation warnings 
 
 
 Automation has tended away from parsing and editing files in place for 
 a long time now. Typically you'd have a source of truth with all the 
 values, and a tool to turn that into a URL during file generation. This 
 isn't error prone in my experience. 

That's truth for Chef / Puppet based deployments. But for what I do with 
debconf, I do insist on editing only what's needed to a pre-existing 
configuration file. And yes, it should be up to the packages to maintain 
configuration files, not stuff like puppet / chef. If everyone is doing 
what you wrote above, it's precisely because, up to now, packages were 
not doing a good enough (configuration maintenance) work, which I intend 
to fix in the near future. 

As of right now, I'm able to deploy a full all-in-one server using only 
debconf to configure the packages. This works pretty well! And I'm now 
using that to run my tempest CI (with which I am having very decent 
results). 

So, let me just say that while I do not have a timeline on the removal of auth 
fragments, since it is deprecated assume that this should no longer be used if 
there is an alternative (which there is). I am willing to continue with the 
discussion on reversing course, but consider the deprecation notice the “far in 
advance” warning that they are going away (isn’t that what deprecation is?). 

—Morgan



Re: [openstack-dev] [Manila] using one Manila service for two clouds

2015-02-15 Thread Jake Kugel
Thanks for the reply, and you're right - what I was interested in was allowing
people outside of the cloud to use Manila; it is great to hear that is a
supported use case.  I still have some learning to do around per-tenant
share servers and per-tenant networks, but in general I think it will work
well.

Thanks,
Jake


Ben Swartzlander b...@swartzlander.org wrote on 02/13/2015 07:30:57 PM:

 From: Ben Swartzlander b...@swartzlander.org
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 02/13/2015 07:33 PM
 Subject: Re: [openstack-dev] [Manila] using one Manila service for two 
clouds
 
 On 02/13/2015 05:58 PM, Jake Kugel wrote:
  Hi,
 
  this might be a dumb question, is it possible to have a stand-alone 
Manila
  service that could be used by clients outside of a specific OpenStack
  cloud?  For example, a shared Manila service that VMs in two clouds 
could
  both use?
 
 We've tried to design Manila to not have any hard dependencies on any 
 OpenStack services (except for keystone). The use case is exactly what 
 you suggest -- people should be able to use Manila outside of the cloud 
 if they wish.
 
  I am guessing that there would be two drawbacks to this scenario -- 
(1)
  users would need two keystone credentials - a keystone credential in 
the
  cloud hosting their VM, and then a keystone credential that is used 
with
  the stand-alone Manila service to create a share.  And (2), the shared
  Manila service wouldn't be able to isolate network traffic for a
  particular tenant - all users of the service would share the same 
network.
Do these capture the problems with it?
 
 Possibly yes. I would like to think it would be possible to use one
 keystone for both purposes, but I'm no expert on keystone and I'm not
 familiar with what you're trying to do.
 
 Regarding isolation of network traffic, Manila doesn't actually do that
 for you. What Manila does is allow you to create per-tenant share
 servers and connect them to various per-tenant networks, assuming the
 networks are already segmented by something else. That something else 
 doesn't have to be neutron or a cloud, necessarily. As the code is 
 written today, the segmentation is assumed to be either neutron or 
 nova-network based, but it shouldn't be terribly hard to add something 
 else in the future.
 
  Thanks,
  Jake
 
 
  


[openstack-dev] [TripleO] stepping down as core reviewer

2015-02-15 Thread Robert Collins
Hi, I've really not been pulling my weight as a core reviewer in
TripleO since late last year when personal issues really threw me for
a while. While those are behind me now, and I had a good break over
the Christmas and New Year period, I'm sufficiently out of touch with
the current (fantastic) progress being made that I don't feel
comfortable +2'ing anything except the most trivial things.

Now the answer to that is to get stuck back in, page in the current
blueprints and charge ahead - but...

One of the things I found myself reflecting on during my break was the
extreme fragility of the things we were deploying in TripleO - most of
our time is spent fixing fallout from unintended, unexpected
consequences in the system. I think it's time to put some effort
directly into that in a proactive fashion rather than just reacting
to whichever failure du jour is breaking deployments / scale /
performance.

So for the last couple of weeks I've been digging into the Nova
(initially) bugtracker and code with an eye to 'how did we get this
bug in the first place', and refreshing my paranoid
distributed-systems-ops mindset: I'll be writing more about that
separately, but it's clear to me that there's enough meat there -
analysis, discussion, and hopefully execution - that it would be
self-deceptive for me to think I'll be able to meaningfully contribute
to TripleO in the short term.

I'm super excited by Kolla - I think that containers really address
the big set of hurdles we had with image-based deployments, and if we
can, one way or another, get cinder and Ironic running out of
containers, we should have a pretty lovely deployment story. But I
still think helping on the upstream stuff more is more important for
now. We'll see where we're at in a cycle or two :)

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Block migrations and Cinder volumes

2015-02-15 Thread Tony Breeds
On Mon, Feb 16, 2015 at 01:39:21PM +1300, Robert Collins wrote:

 Just ran across this from bug
 https://bugs.launchpad.net/nova/+bug/1398999. Is there some way to
 signal to libvirt that some block devices shouldn't be migrated by it
 but instead are known to be networked etc? Or put another way, how can
 we have our cake and eat it too? It's not uncommon for a VM to be
 cinder-booted but have local storage for swap... and AIUI the fix we
 put in for this bug stops those VMs being migrated. Do you think it
 is tractable (but needs libvirt work), or is it something endemic to
 the problem (e.g. dirty page synchronisation with the VM itself) that
 will be in the way?


I have a half-drafted email for the libvirt devel list proposing this
exact thing: allow an element in the XML that tells libvirt how/if it
can/should migrate a device.  As noted previously, I'm happy to do
qemu/libvirt work to help openstack.
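For illustration only - the element name, attribute, and values below are
invented here and nothing like them exists in libvirt or Nova today - a
rough sketch of the kind of per-disk hint that could be emitted when
generating the domain XML:

    # Hypothetical sketch: the <migration/> element and its 'mode' values
    # are made up to illustrate the proposal; libvirt defines no such element.
    def migration_hint(disk):
        # Cinder/network-backed disks would be skipped by block migration,
        # while local disks (e.g. swap) would still be copied.
        mode = 'skip' if disk['cinder_backed'] else 'copy'
        return '<migration mode="%s"/>' % mode

    disks = [{'dev': 'vda', 'cinder_backed': True},   # boot volume
             {'dev': 'vdb', 'cinder_backed': False}]  # local swap disk
    for disk in disks:
        print(disk['dev'], migration_hint(disk))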

Doing this would solve a few other issues in openstack without breaking
existing users.

So now I look like I'm jumping on your bandwagon. Phooey!
 
Yours Tony.


pgpryzA1nk4rz.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress][Policy][Copper]Collaboration between OpenStack Congress and OPNFV Copper

2015-02-15 Thread Zhipeng Huang
2 - 4 pm Tuesday (which means 6 - 8 am China Wed) should work for me. Let's
see if Bryan is ok with the time slot. And yes you don't need an account :)

On Mon, Feb 16, 2015 at 1:36 PM, Tim Hinrichs thinri...@vmware.com wrote:

  I just realized Monday is a holiday. What about 8a, 10a, 2-4p Tuesday?

  I'm happy to try out gotomeeting.  Looks like I don't need an account,
 right?

 Tim

  P. S. Pardon the brevity. Sent from my mobile.

 On Feb 13, 2015, at 4:25 PM, Zhipeng Huang zhipengh...@gmail.com wrote:

   Hi Tim,

 Monday 9am PST should be ok for me, it is better if u guys could do 4 - 5
 pm, but let's settle on 9 am for now, see if Bryan and others would be ok.
 Regarding meeting tools, in opnfv we use GoToMeeting a lot, would u guys be
 ok with that? Or should we have a Google Hangout?
 On Feb 14, 2015 8:04 AM, Tim Hinrichs thinri...@vmware.com wrote:

 Hi Zhipeng,

  Sure we can talk online Mon/Tue.  If you come up with some times the
 Copper team is available, I’ll do the same for a few of the Congress team.
 We’re usually available 9-4p Pacific, and for me Monday looks better than
 Tuesday.

  If we schedule an hour meeting in Santa Rosa for Wed at say 1:30 or 2p,
 I’ll do my best to make it there.  Even if I can’t make it, you’ll always
 have Sean to talk with.

  Tim




  On Feb 12, 2015, at 6:49 PM, Zhipeng Huang zhipengh...@gmail.com
 wrote:

  THX Tim!

  I think It'd be great if we could have some online discussion ahead of
 F2F LFC summit.We could have the crash course early next week (Monday or
 Tuesday), and then Bryan could discuss with Sean in detail when they met,
 with specific questions.

  Would this be ok for everyone ?

 On Fri, Feb 13, 2015 at 7:21 AM, Tim Hinrichs thinri...@vmware.com
 wrote:

 Bryan and Zhipeng,

  Sean Roberts (CCed) is planning to be in Santa Rosa.   Sean’s
 definitely there on Wed.  Less clear about Thu/Fri.

  I don’t know if I’ll make the trip yet, but I’m guessing Wed early
 afternoon if I can.

  Tim


  On Feb 11, 2015, at 9:04 PM, SULLIVAN, BRYAN L bs3...@att.com wrote:

   Hi Tim,


 It would be great to meet with members of the Congress project if
 possible at our meetup in Santa Rosa. I plan by then to have a basic
 understanding of Congress and some test driver apps / use cases to demo at
 the meetup. The goal is to assess the current state of Congress support for
 the use cases on the OPNFV wiki: https://wiki.opnfv.org/copper/use_cases


 I would be doing the same with ODL but I’m not as far on getting ready
 with it. So the opportunity to discuss the use cases under Copper and the
 other policy-related projects

 (fault management https://wiki.opnfv.org/doctor,
 resource management https://wiki.opnfv.org/promise,
 resource scheduler https://wiki.opnfv.org/requirements_projects/resource_scheduler)
 with Congress experts would be great.


 Once we understand the gaps in what we are trying to build in OPNFV, the
 goal for our first OPNFV release is to create blueprints for new work in
 Congress. We might also just find some bugs and get directly involved in
 Congress to address them, or start a collaborative development project in
 OPNFV for that. TBD


 Thanks,

 Bryan Sullivan | Service Standards | ATT


 *From:* Tim Hinrichs [mailto:thinri...@vmware.com thinri...@vmware.com]

 *Sent:* Wednesday, February 11, 2015 10:22 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Cc:* SULLIVAN, BRYAN L; HU, BIN; Rodriguez, Iben; Howard Huang
 *Subject:* Re: [openstack-dev] [congress][Policy][Copper]Collaboration
 between OpenStack Congress and OPNFV Copper



 Hi Zhipeng,



 We’d be happy to meet.  Sounds like fun!



 I don’t know of anyone on the Congress team who is planning to attend
 the LF collaboration summit.  But we might be able to send a couple of
 people if it’s the only real chance to have a face-to-face.  Otherwise,
 there are a bunch of us in and around Palo Alto.  And of course,
 phone/google hangout/irc are fine options as well.



 Tim








[openstack-dev] [cinder] volume replication

2015-02-15 Thread Ronen Kat
Hi Ruijing,

Are you discussing the network/fabric between Storage A and Storage B?
If so, the assumption in Cinder is that this is done in advance by the
storage administrator.

The design discussions for replication concluded that the driver is fully
responsible for replication and that it is up to the driver to implement
and manage replication on its own. Hence, all vendor-specific setup
actions, like creating volume pools or setting up the network on the
storage side, are considered prerequisites and are outside the scope of
the Cinder flows.

If someone feels that is not the case,
or should not be the case, feel free to chime in.
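
To make that concrete, here is a minimal, purely illustrative sketch of how
a driver could surface its replication support to the scheduler; the
capability keys below are assumptions for the example, not keys fixed by
the spec:

    # Illustrative only: the capability key names are assumptions, not the
    # canonical Cinder replication keys.
    class ExampleReplicatingDriver(object):
        def get_volume_stats(self, refresh=False):
            # Reported to the scheduler; a volume type carrying a matching
            # (capability or vendor-prefixed) extra spec would then steer
            # volumes onto a replication-capable backend.
            return {
                'volume_backend_name': 'backend_a',
                'storage_protocol': 'FC',
                'replication': True,          # assumed capability key
                'replication_type': 'async',  # assumed: async vs. sync
            }

    print(ExampleReplicatingDriver().get_volume_stats())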

Or does this relate to setting up the data path for accessing both
Storage A and Storage B? Should that be set up in advance, when we attach
the primary volume to the VM, or when promoting the replica to primary?

-- Ronen



From: Guo, Ruijing ruijing@intel.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 16/02/2015 02:29 AM
Subject: Re: [openstack-dev] [cinder] documenting volume replication




Hi, Ronen

3) I mean storage-based replication. Normally, volume replication uses FC
or iSCSI, so we need to set up FC or iSCSI before we do volume replication.

Case 1)

Host --FC-- Storage A --iSCSI-- Storage B --FC-- Host

Case 2)

Host --FC-- Storage A --FC-- Storage B --FC-- Host

As the diagrams above show, we need to set up the connection (iSCSI or FC)
between Storage A and Storage B.

For FC, we need to zone Storage A and Storage B in the FC switch.

Thanks,
-Ruijing

From: Ronen Kat [mailto:ronen...@il.ibm.com]

Sent: Sunday, February 15, 2015 4:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] documenting volume replication

Hi Ruijing,


Thanks for the comments. 
Re (1) - driver can implement replication in any means the driver see fit.
It can be exported and be available to the scheduler/drive via the capabilities
or driver extra-spec prefixes.

Re (3) - Not sure I see how this relates to storage side replication, do
you refer to host side replication?


Ronen 



From:Guo,
Ruijing ruijing@intel.com

To:OpenStack
Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org

Date:15/02/2015
03:41 AM 
Subject:Re:
[openstack-dev] [cinder] documenting volume replication






Hi, Ronen, 
 
I dont know how to edit https://etherpad.openstack.org/p/cinder-replication-redoc
and add some comments in email.

 
1.   We may add asynchronized and synchronized type for replication.

2.   We may add CG for replication

3.   We may add to initialize connection for replication

 
Thanks, 
-Ruijing 
 
From: Ronen Kat [mailto:ronen...@il.ibm.com]

Sent: Tuesday, February 3, 2015 9:41 PM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [cinder] documenting volume replication

 
As some of you are aware the spec for replication is not up to date, 
The current developer documentation, http://docs.openstack.org/developer/cinder/api/cinder.volume.driver.html,
cover replication but some folks indicated that it need additional details.


In order to get the spec and documentation up to date I created an Etherpad
to be a base for the update.

The Etherpad page is on https://etherpad.openstack.org/p/cinder-replication-redoc


I would appreciate if interested parties would take a look at the Etherpad,
add comments, details, questions and feedback.


Ronen, __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] volume replication

2015-02-15 Thread Zhipeng Huang
Hi Ronen,

Xingyang mentioned there's another etherpad on replication and CG; which
etherpad should we mainly follow?

On Mon, Feb 16, 2015 at 2:38 PM, Ronen Kat ronen...@il.ibm.com wrote:

 Ruijing, hi,

 Are you discussing the network/fabric between Storage A and Storage B?
 If so, assumption in Cinder is that this is done in advance by the storage
 administrator.
 The design discussions for replication resulted in that the driver is
 fully responsible for replication and it is up to the driver to implement
 and manage replication on its own.
 Hence, all vendor specific setup actions like creating volume pools, setup
 network on the storage side are considered prerequisite actions and outside
 the scope of the Cinder flows.

 If someone feels that is not the case, or should not be the case, feel
 free to chime in.

 Or does this relates to setting up the data path for accessing both
 Storage A and Storage B?
 Should this be setup in advance? When we attach the primary volume to the
 VM? Or when promoting the replica to be primary?

 -- Ronen



 From:Guo, Ruijing ruijing@intel.com
 To:OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date:16/02/2015 02:29 AM
 Subject:Re: [openstack-dev] [cinder] documenting volume
 replication
 --



 Hi, Ronen

 3) I mean storage based replication. In normal, volume replication support
 FC or iSCSI. We need to setup FC or iSCSI before we do volume replication.

 Case 1)

 Host --FC--Storage A ---iSCSI  Storage B FC-
 Host

 Case 2)

 Host --FC--Storage A ---FC  Storage B FC- Host

 As above diagram, we need to setup connection (iSCSI or FC) between
 storage A and Storage B.

 For FC, we need to zone storage A  storage B in FC switch.

 Thanks,
 -Ruijing

 *From:* Ronen Kat [mailto:ronen...@il.ibm.com ronen...@il.ibm.com]
 * Sent:* Sunday, February 15, 2015 4:46 PM
 * To:* OpenStack Development Mailing List (not for usage questions)
 * Subject:* Re: [openstack-dev] [cinder] documenting volume replication

 Hi Ruijing,

 Thanks for the comments.
 Re (1) - driver can implement replication in any means the driver see fit.
 It can be exported and be available to the scheduler/drive via the
 capabilities or driver extra-spec prefixes.
 Re (3) - Not sure I see how this relates to storage side replication, do
 you refer to host side replication?

 Ronen



 From:Guo, Ruijing *ruijing@intel.com*
 ruijing@intel.com
 To:OpenStack Development Mailing List (not for usage questions)
 *openstack-dev@lists.openstack.org* openstack-dev@lists.openstack.org
 Date:15/02/2015 03:41 AM
 Subject:Re: [openstack-dev] [cinder] documenting volume
 replication
 --




 Hi, Ronen,

 I don’t know how to edit
 *https://etherpad.openstack.org/p/cinder-replication-redoc*
 https://etherpad.openstack.org/p/cinder-replication-redoc and add some
 comments in email.

 1. We may add asynchronized and synchronized type for replication.
 2. We may add CG for replication
 3. We may add to initialize connection for replication

 Thanks,
 -Ruijing

 * From:* Ronen Kat [*mailto:ronen...@il.ibm.com* ronen...@il.ibm.com]
 * Sent:* Tuesday, February 3, 2015 9:41 PM
 * To:* OpenStack Development Mailing List (
 *openstack-dev@lists.openstack.org* openstack-dev@lists.openstack.org)
 * Subject:* [openstack-dev] [cinder] documenting volume replication

 As some of you are aware the spec for replication is not up to date,
 The current developer documentation,
 *http://docs.openstack.org/developer/cinder/api/cinder.volume.driver.html*
 http://docs.openstack.org/developer/cinder/api/cinder.volume.driver.html,
 cover replication but some folks indicated that it need additional details.

 In order to get the spec and documentation up to date I created an
 Etherpad to be a base for the update.
 The Etherpad page is on
 *https://etherpad.openstack.org/p/cinder-replication-redoc*
 https://etherpad.openstack.org/p/cinder-replication-redoc

 I would appreciate if interested parties would take a look at the
 Etherpad, add comments, details, questions and feedback.

 Ronen,
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 *openstack-dev-requ...@lists.openstack.org?subject:unsubscribe*
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 *http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev*
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 

Re: [openstack-dev] Fwd: [Neutron][DVR]Neutron distributed SNAT

2015-02-15 Thread Kevin Benton
Has there been any work to use conntrack synchronization similar to L3 HA
in DVR so failover is fast on the SNAT node?

On Sat, Feb 14, 2015 at 1:31 PM, Carl Baldwin c...@ecbaldwin.net wrote:


 On Feb 10, 2015 2:36 AM, Wilence Yao wilence@gmail.com wrote:
 
 
  Hi all,
After OpenStack Juno, floating ip is handled by dvr, but SNAT is still
 handled by l3agent on network node. The distributed SNAT is in future plans
 for DVR. In my opinion, SNAT can move to DVR as well as floating ip. I have
  searched the blueprints, and there is little about distributed SNAT. Is there
  any difference between distributed floating ip and distributed SNAT?

 The difference is that a shared snat address is shared among instances on
 multiple compute nodes.  A floating ip is exclusive to a single instance on
 one compute node.  I'm interested to hear your ideas for distributing it.

 Carl

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress][Policy][Copper]Collaboration between OpenStack Congress and OPNFV Copper

2015-02-15 Thread Tim Hinrichs
I just realized Monday is a holiday. What about 8a, 10a, 2-4p Tuesday?

I'm happy to try out gotomeeting.  Looks like I don't need an account, right?

Tim

P. S. Pardon the brevity. Sent from my mobile.

On Feb 13, 2015, at 4:25 PM, Zhipeng Huang 
zhipengh...@gmail.commailto:zhipengh...@gmail.com wrote:


Hi Tim,

Monday 9am PST should be ok for me, it is better if u guys could do 4 - 5 pm, 
but let's settle on 9 am for now, see if Bryan and others would be ok. 
Regarding meeting tools, in opnfv we use GoToMeeting a lot, would u guys be ok 
with that? Or should we have a Google Hangout?

On Feb 14, 2015 8:04 AM, Tim Hinrichs 
thinri...@vmware.commailto:thinri...@vmware.com wrote:
Hi Zhipeng,

Sure we can talk online Mon/Tue.  If you come up with some times the Copper 
team is available, I’ll do the same for a few of the Congress team.  We’re 
usually available 9-4p Pacific, and for me Monday looks better than Tuesday.

If we schedule an hour meeting in Santa Rosa for Wed at say 1:30 or 2p, I’ll do 
my best to make it there.  Even if I can’t make it, you’ll always have Sean to 
talk with.

Tim




On Feb 12, 2015, at 6:49 PM, Zhipeng Huang 
zhipengh...@gmail.commailto:zhipengh...@gmail.com wrote:

THX Tim!

I think It'd be great if we could have some online discussion ahead of F2F LFC 
summit.We could have the crash course early next week (Monday or Tuesday), and 
then Bryan could discuss with Sean in detail when they met, with specific 
questions.

Would this be ok for everyone ?

On Fri, Feb 13, 2015 at 7:21 AM, Tim Hinrichs 
thinri...@vmware.commailto:thinri...@vmware.com wrote:
Bryan and Zhipeng,

Sean Roberts (CCed) is planning to be in Santa Rosa.   Sean’s definitely there 
on Wed.  Less clear about Thu/Fri.

I don’t know if I’ll make the trip yet, but I’m guessing Wed early afternoon if 
I can.

Tim


On Feb 11, 2015, at 9:04 PM, SULLIVAN, BRYAN L 
bs3...@att.commailto:bs3...@att.com wrote:

Hi Tim,

It would be great to meet with members of the Congress project if possible at 
our meetup in Santa Rosa. I plan by then to have a basic understanding of 
Congress and some test driver apps / use cases to demo at the meetup. The goal 
is to assess the current state of Congress support for the use cases on the 
OPNFV wiki: 
https://wiki.opnfv.org/copper/use_cases

I would be doing the same with ODL but I’m not as far on getting ready with it. 
So the opportunity to discuss the use cases under Copper and the other 
policy-related projects
(fault management https://wiki.opnfv.org/doctor,
 resource management https://wiki.opnfv.org/promise,
 resource scheduler https://wiki.opnfv.org/requirements_projects/resource_scheduler)
 with Congress experts would be great.

Once we understand the gaps in what we are trying to build in OPNFV, the goal 
for our first OPNFV release is to create blueprints for new work in Congress. 
We might also just find some bugs and get directly involved in Congress to 
address them, or start a collaborative development project in OPNFV for that. 
TBD

Thanks,
Bryan Sullivan | Service Standards | ATT

From: Tim Hinrichs [mailto:thinri...@vmware.com]
Sent: Wednesday, February 11, 2015 10:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: SULLIVAN, BRYAN L; HU, BIN; Rodriguez, Iben; Howard Huang
Subject: Re: [openstack-dev] [congress][Policy][Copper]Collaboration between 
OpenStack Congress and OPNFV Copper

Hi Zhipeng,

We’d be happy to meet.  Sounds like fun!

I don’t know of anyone on the Congress team who is planning to attend the LF 
collaboration summit.  But we might be able to send a couple of people if it’s 
the only real chance to have a face-to-face.  Otherwise, there are a bunch of 
us in and around Palo Alto.  And of course, phone/google hangout/irc are fine 
options as well.

Tim



On Feb 11, 2015, at 8:59 AM, Zhipeng Huang 
zhipengh...@gmail.commailto:zhipengh...@gmail.com wrote:

Hi Congress Team,

As you might already know, we had a project in OPNFV covering deployment policy 

Re: [openstack-dev] [keystone] SPFE: Authenticated Encryption (AE) Tokens

2015-02-15 Thread Lance Bragstad
FWIW, the latest patch set has logic built in that determines the purpose
of the key repository. If you want your deployment to sign tokens, you can
point Keystone to a key repository for that purpose. Likewise, tokens will
only be encrypted if you tell Keystone to use a key repository for
encryption.

On Sun, Feb 15, 2015 at 12:03 AM, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:

 On February 14, 2015 at 9:53:14 PM, Adam Young (ayo...@redhat.com) wrote:

 On 02/13/2015 04:19 PM, Morgan Fainberg wrote:

  On February 13, 2015 at 11:51:10 AM, Lance Bragstad (lbrags...@gmail.com)
 wrote:

  Hello all,


 I'm proposing the Authenticated Encryption (AE) Token specification [1] as
 an SPFE. AE tokens increase scalability of Keystone by removing token
 persistence. This provider has been discussed prior to, and at the Paris
 summit [2]. There is an implementation that is currently up for review [3],
 that was built off a POC. Based on the POC, there has been some performance
 analysis done with respect to the token formats available in Keystone
 (UUID, PKI, PKIZ, AE) [4].

 The Keystone team spent some time discussing limitations of the current
 POC implementation at the mid-cycle. One case that still needs to be
 addressed (and is currently being worked), is federated tokens. When
 requesting unscoped federated tokens, the token contains unbound groups
 which would need to be carried in the token. This case can be handled by AE
 tokens but it would be possible for an unscoped federated AE token to
 exceed an acceptable AE token length (i.e. more than 255 characters). Long story
 short, a federation migration could be used to ensure federated AE tokens
 never exceed a certain length.

 Feel free to leave your comments on the AE Token spec.

 Thanks!

 Lance

 [1] https://review.openstack.org/#/c/130050/
 [2] https://etherpad.openstack.org/p/kilo-keystone-authorization
 [3] https://review.openstack.org/#/c/145317/
 [4] http://dolphm.com/benchmarking-openstack-keystone-token-formats/

 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 I am for granting this exception as long as it’s clear that the following
 is clear/true:

 * All current use-cases for tokens (including federation) will be
 supported by the new token provider.

 * The federation tokens being possibly over 255 characters can be
 addressed in the future if they are not addressed here (a “federation
 migration” does not clearly state what is meant.

 I think the length of the token is not a real issue.  We need to keep them
 within header lengths.  That is 8k.  Anything smaller than that will work.

  I think we start with federation unscoped tokens allowing a list of
  groups.  These tokens only go back and forth from user to Keystone anyway,
  and should not go to other services.


 Scoped tokens via Federation (iirc this is possible) could end up
 including the groups. I think you hit the nail on the head though, the 255
 limit is artificial, and we’ve made a lot of efforts here to limit the
 token size already. These tokens need to handle 100% of the current token
 use cases, and limiting federated tokens to unscoped only is likely going
 to break that requirement. Future enhancements can ensure federated tokens
 fall below the 255 character limit (IIRC Dolph said he had a fix for this
 but it’s not something that can hit Kilo and will be proposed in the
 future).

  I also have a concern with the requirement for new cryptography.
  Specifically, the requirement for symmetric crypto and key management can
  be a significant barrier to organizations that have to meet compliance
  rules.  Since PKI tokens have already forced this issue, I suggest we
  switch AE tokens to using PKI instead of symmetric crypto for the default
  case.  Putting in an optimization that uses symmetric crypto should then
  be a future enhancement.  Asymmetric crypto will
 mitigate the issues with multiple keystone servers sharing keys, and will
 remove the need for a key sharing mechanism.  Since this mechanism is in
 Keystone already, I think it is a realistic approach.


  I would rather see this spec drop crypto altogether and strictly rely on
  HMAC, with any form of crypto (asymmetric or symmetric) being the add-on.
  I am worried that if we go down the path of starting with PKI (or even
  symmetric crypto) instead of just HMAC signature validation (live-validated,
  such as what keystone does today with UUID tokens), we are going to back
  ourselves into a similar corner to the one we're in today.
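
  As a minimal, self-contained illustration of the HMAC-only idea (stdlib
  only; this is not the proposed provider's actual token format or key
  handling):

      # Toy example of HMAC signing and validation; key distribution and
      # the real payload layout are deliberately out of scope here.
      import hashlib
      import hmac

      SIGNING_KEY = b'shared-signing-key'

      def sign(payload):
          return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

      def validate(payload, signature):
          return hmac.compare_digest(sign(payload), signature)

      payload = b'user-id|project-id|expires-at'
      signature = sign(payload)
      assert validate(payload, signature)
      assert not validate(b'tampered-payload', signature)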

 I agree that adding new crypto and key management is a good deal more
 overhead than the basic implementation. As I commented on the spec, I’m not
  sure encryption is buying us a lot here. So, let's make the adjustment to
  avoid encrypting data and rely on the 

[openstack-dev] [nova][vmware] MS update

2015-02-15 Thread Gary Kotton
Hi,
MS is back up and running.
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] volume replication

2015-02-15 Thread Ronen Kat
Good question, I have:

https://etherpad.openstack.org/p/cinder-replication-redoc
https://etherpad.openstack.org/p/cinder-replication-cg
https://etherpad.openstack.org/p/volume-replication-fix-planning

Jay seems to be the champion for moving replication forward; I will let
Jay point the way.

-- Ronen



From: Zhipeng Huang zhipengh...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 16/02/2015 09:14 AM
Subject: Re: [openstack-dev] [cinder] volume replication




Hi Ronen,

Xingyang mentioned there's another etherpad on rep and
CG, which etherpad should we mainly follow ?

On Mon, Feb 16, 2015 at 2:38 PM, Ronen Kat ronen...@il.ibm.com
wrote:
Ruijing,
hi, 

Are you discussing the network/fabric between Storage A and Storage B?

If so, assumption in Cinder is that this is done in advance by the storage
administrator. 
The design discussions for replication resulted in that the driver is fully
responsible for replication and it is up to the driver to implement and
manage replication on its own. 
Hence, all vendor specific setup actions like creating volume pools, setup
network on the storage side are considered prerequisite actions and outside
the scope of the Cinder flows. 

If someone feels that is not the case, or should not be the case, feel
free to chime in. 

Or does this relates to setting up the data path for accessing both Storage
A and Storage B? 
Should this be setup in advance? When we attach the primary volume to the
VM? Or when promoting the replica to be primary? 

-- Ronen 



From:Guo,
Ruijing ruijing@intel.com

To:OpenStack
Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org

Date:16/02/2015
02:29 AM 
Subject:Re:
[openstack-dev] [cinder] documenting volume replication





Hi, Ronen 
 
3) I mean storage based replication. In normal, volume replication support
FC or iSCSI. We need to setup FC or iSCSI before we do volume replication.

 
Case 1)  
 
Host --FC--Storage A ---iSCSI  Storage B FC-
Host 
 
Case 2) 
 
Host --FC--Storage A ---FC  Storage B FC- Host

 
As above diagram, we need to setup connection (iSCSI or FC) between storage
A and Storage B. 
 
For FC, we need to zone storage A  storage B in FC switch.

 
Thanks, 
-Ruijing 
 
From: Ronen Kat [mailto:ronen...@il.ibm.com]

Sent: Sunday, February 15, 2015 4:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] documenting volume replication

 
Hi Ruijing,


Thanks for the comments. 
Re (1) - driver can implement replication in any means the driver see fit.
It can be exported and be available to the scheduler/drive via the capabilities
or driver extra-spec prefixes.

Re (3) - Not sure I see how this relates to storage side replication, do
you refer to host side replication?


Ronen 



From:Guo,
Ruijing ruijing@intel.com

To:OpenStack
Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org

Date:15/02/2015
03:41 AM 
Subject:Re:
[openstack-dev] [cinder] documenting volume replication







Hi, Ronen, 

I dont know how to edit https://etherpad.openstack.org/p/cinder-replication-redoc
and add some comments in email.


1.   We may add asynchronized and synchronized type for replication.

2.   We may add CG for replication

3.   We may add to initialize connection for replication


Thanks, 
-Ruijing 

From: Ronen Kat [mailto:ronen...@il.ibm.com]

Sent: Tuesday, February 3, 2015 9:41 PM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [cinder] documenting volume replication


As some of you are aware the spec for replication is not up to date, 
The current developer documentation, http://docs.openstack.org/developer/cinder/api/cinder.volume.driver.html,
cover replication but some folks indicated that it need additional details.


In order to get the spec and documentation up to date I created an Etherpad
to be a base for the update.

The Etherpad page is on https://etherpad.openstack.org/p/cinder-replication-redoc


I would appreciate if interested parties would take a look at the Etherpad,
add comments, details, questions and feedback.


Ronen, __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not 

Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-15 Thread Nikola Đipanov
On 02/14/2015 08:25 AM, Alex Xu wrote:
 
 
 2015-02-14 1:41 GMT+08:00 Nikola Đipanov ndipa...@redhat.com
 mailto:ndipa...@redhat.com:
 
 On 02/12/2015 04:10 PM, Chris Friesen wrote:
  On 02/12/2015 03:44 AM, Sylvain Bauza wrote:
 
  Any action done by the operator is always more important than what the
  Scheduler could decide. So, if in an emergency situation the operator
  wants to force a migration to a host, we need to accept it and do it,
  even if it doesn't match what the Scheduler could decide (and could
  violate any policy).

  That's a *force* action, so please let the operator decide.
 
  Are we suggesting that the operator would/should only ever specify a
  specific host if the situation is an emergency?
 
  If not, then perhaps it would make sense to have it go through the
  scheduler filters even if a host is specified.  We could then have a
  --force flag that would proceed anyways even if the filters don't 
 match.
 
  There are some cases (provider networks or PCI passthrough for example)
  where it really makes no sense to try and run an instance on a compute
  node that wouldn't pass the scheduler filters.  Maybe it would make the
  most sense to specify a list of which filters to override while still
  using the others.
 
 
 Actually this kind of already happens on the compute node when doing
 claims. Even if we do force the host, the claim will fail on the compute
 node and we will end up with a consistent scheduling.
 
 
 
  Agree with Nikola, the claim already checks that, and the instance boot
  must fail if there isn't a PCI device. But I still think it should go
  through the filters, because in the future we may move the claim into
  the scheduler. And we don't need any new options; I don't see any
  behavior change.
 

I think that it's not as simple as just re-running all the filters. When
we want to force a host - there are certain things we may want to
disregard (like aggregates? affinity?) that the admin de-facto overrides
by saying they want a specific host, and there are things we definitely
need to re-run to set the limits and for the request to even make sense
(like NUMA, PCI, maybe some others).

So what I am thinking is that we need a subset of filters that we flag
as "re-run this even for force-host", and then run those on every
request - something like the sketch below.
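
(The flag name below is invented for illustration; today's filters carry
no such attribute.)

    # Hypothetical sketch: 'run_on_forced_host' does not exist in Nova today.
    class BaseHostFilter(object):
        # Most filters keep the default and stay skipped when the operator
        # forces a host (aggregates, affinity, ...).
        run_on_forced_host = False

    class NUMATopologyFilter(BaseHostFilter):
        # Resource-defining filters (NUMA, PCI, ...) opt in so that limits
        # are still computed even for a forced host.
        run_on_forced_host = True

    def filters_to_run(filters, forced_host):
        return [f for f in filters
                if not forced_host or f.run_on_forced_host]

    print([f.__name__ for f in
           filters_to_run([BaseHostFilter, NUMATopologyFilter],
                          forced_host=True)])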

thoughts?

N.

 
 
 This sadly breaks down for stuff that needs to use limits, as limits
 won't be set by the filters.
 
 Jay had a BP before to move limits onto compute nodes, which would solve
 this issue, as you would not need to run the filters at all - all the
 stuff would be known to the compute host that could then easily say
 nice of you to want this here, but it ain't happening.
 
 It will also likely need a check in the retry logic to make sure we
 don't hit the host 'retry' number of times.
 
 N.
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev