Re: [openstack-dev] Taking a break..

2014-10-23 Thread Sam Morrison
Thanks for all the help Chris and all the best.

Now on the lookout for another cells core I can harass and pass obscure bugs 
to. It was always reassuring knowing you’d probably already come across the 
issue and could point me to a review or git branch with a fix.

Cheers,
Sam


 On 23 Oct 2014, at 4:37 am, Chris Behrens cbehr...@codestud.com wrote:
 
 Hey all,
 
 Just wanted to drop a quick note to say that I decided to leave Rackspace to 
 pursue another opportunity. My last day was last Friday. I won’t have much 
 time for OpenStack, but I’m going to continue to hang out in the channels. 
 Having been involved in the project since day 1, I’m going to find it 
 difficult to fully walk away. I really don’t know how much I’ll continue to 
 stay involved. I am completely burned out on nova. However, I’d really like 
 to see versioned objects broken out into oslo and Ironic synced with nova’s 
 object advancements. So, if I work on anything, it’ll probably be related to 
 that.
 
 Cells will be left in a lot of capable hands. I have shared some thoughts 
 with people on how I think we can proceed to make it ‘the way’ in nova. I’m 
 going to work on documenting some of this in an etherpad so the thoughts 
 aren’t lost.
 
 Anyway, it’s been fun… the project has grown like crazy! Keep on trucking... 
 And while I won’t be active much, don’t be afraid to ping me!
 
 - Chris
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Workgroup git repository

2014-10-23 Thread Christopher Yeoh
On Thu, 23 Oct 2014 05:53:21 +
Shaunak Kashyap shaunak.kash...@rackspace.com wrote:

 Thanks Chris. And am I correct in assuming that no weekly api-wg
 meetings are currently going on? I don’t see anything here anyway:
 https://wiki.openstack.org/wiki/Meetings.
 

They haven't started yet. Not sure if we'll be squeezing one in before
summit or not. The proposed meeting time patch only just recently
merged (Thursdays UTC )

https://review.openstack.org/#/c/128332/2

so perhaps that's why it's not there yet.

Chris

 Shaunak
 
 On Oct 22, 2014, at 10:39 PM, Christopher Yeoh
 cbky...@gmail.com wrote:
 
 On Thu, Oct 23, 2014 at 3:51 PM, Shaunak Kashyap
 shaunak.kash...@rackspace.com
 wrote: If I have a proposal to make for a guideline, how do I do it?
 Do I simply create a gerrit review for a file under
 http://git.openstack.org/cgit/openstack/api-wg/tree/guidelines?
 
 
 Yes, either additions to the existing files or add new ones if there
 is nothing currently appropriate.
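 
 (Roughly, and assuming you already have git-review installed and a gerrit
 account set up, the workflow would look something like:
 
   git clone https://git.openstack.org/openstack/api-wg
   cd api-wg
   git checkout -b my-guideline
   # add or edit a file under guidelines/, then
   git commit -a
   git review
 
 with the exact clone URL/paths as per the cgit link above - this is just a
 sketch, not official instructions.)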
 
 We still need to talk about the exact process for accepting a patch,
 but I was thinking about something along the lines of (and
 haven't really discussed it with anyone yet):
 
 - must be up for at least a week
 - discussed at a weekly api-wg meeting
 - no more than one negative vote (ok we might need something to break
   a deadlock if we really can't get consensus over something but
   I think there should be a lot of incentive to try very hard
   to do so).
 
 Though for now any setup work in the repository is getting approved
 by Jay or me if it looks reasonable.
 
 Chris
 
 
 Shaunak
 
 On Oct 22, 2014, at 3:34 PM, Christopher Yeoh
  cbky...@gmail.com wrote:
 
  On Wed, 22 Oct 2014 20:36:27 +
  Everett Toews
   everett.to...@rackspace.com
  wrote:
 
  I notice at the top of the GitHub mirror page [1] it reads, “API
  Working Group http://openstack.org/”
 
  Can we get that changed to “API Working Group
  https://wiki.openstack.org/wiki/API_Working_Group”?
 
  That URL would be much more helpful to people who come across the
  GitHub repo. It's not a code change so we would need a repo owner
  to actually make the change. Who should I contact about that?
 
  I think this will do it:
 
  https://review.openstack.org/130377
 
  Chris
 
 
  ___
  OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Enable LVM ephemeral storage for Nova

2014-10-23 Thread John Griffith
On Tue, Oct 21, 2014 at 2:48 PM, Dan Genin daniel.ge...@jhuapl.edu wrote:

  So then it is probably best to leave existing Cinder LVM code in
 lib/cinder_backends/lvm alone and create a similar set of lvm scripts for
 Nova,
 perhaps in lib/nova_backends/lvm?

 Dan

 On 10/21/2014 03:10 PM, Duncan Thomas wrote:

 Sharing the vg with cinder is likely to cause some pain when testing proposed
 features like cinder reconciling the backend with the cinder db. Creating a second
 vg sharing the same backend pv is easy and avoids all such problems.

 Duncan Thomas
 On Oct 21, 2014 4:07 PM, Dan Genin daniel.ge...@jhuapl.edu wrote:

 Hello,

 I would like to add to DevStack the ability to stand up Nova with LVM
 ephemeral
 storage. Below is a draft of the blueprint describing the proposed
 feature.

 Suggestions on architecture, implementation and the blueprint in general
 are very
 welcome.

 Best,
 Dan

 
 Enable LVM ephemeral storage for Nova
 

 Currently DevStack supports only file based ephemeral storage for Nova,
 e.g.,
 raw and qcow2. This is an obstacle to Tempest testing of Nova with LVM
 ephemeral
 storage, which in the past has been inadvertently broken
 (see for example, https://bugs.launchpad.net/nova/+bug/1373962), and to
 Tempest
 testing of new features based on LVM ephemeral storage, such as LVM
 ephemeral
 storage encryption.

 To enable Nova to come up with LVM ephemeral storage it must be provided a
 volume group. Based on an initial discussion with Dean Troyer, this is
 best
 achieved by creating a single volume group for all services that
 potentially
 need LVM storage; at the moment these are Nova and Cinder.

 Implementation of this feature will:

  * move code in lib/cinder/cinder_backends/lvm to lib/lvm with appropriate
modifications

  * rename the Cinder volume group to something generic, e.g., devstack-vg

  * modify the Cinder initialization and cleanup code appropriately to use
the new volume group

  * initialize the volume group in stack.sh, shortly before services are
launched

  * cleanup the volume group in unstack.sh after the services have been
shutdown

 The question of how large to make the common Nova-Cinder volume group in
 order
 to enable LVM ephemeral Tempest testing will have to be explored, although,
 given the tiny instance disks used in Nova Tempest tests, the current
 Cinder volume group size may already be adequate.

 No new configuration options will be necessary, assuming the volume group
 size
 will not be made configurable.
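 
 (For illustration only: the shared volume group itself would presumably be
 the usual loopback-backed VG dance, roughly:
 
   truncate -s 10G /opt/stack/data/devstack-vg-backing-file
   DEV=$(sudo losetup -f --show /opt/stack/data/devstack-vg-backing-file)
   sudo pvcreate $DEV
   sudo vgcreate devstack-vg $DEV
 
 with the file path, size and VG name being placeholders here, not the final
 implementation.)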


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


One thing to keep in mind when using AWS as an example is that the
equivalent isn't "use LVM on the compute node"; the model is more "use EBS (in
our case Cinder) volumes for instances".  We most certainly have this
currently and there is some testing in the gate.  It seems to be coming up
in all sorts of tangential conversations lately.   Personally I'd LOVE to
see it more widely used, which in turn would likely lead to improvements.

As far as the comment made about poor performance I'm not sure where that
comes from.  Sure, if you run iSCSI over a 1G network to a Cinder Volume on
a loop back device I wouldn't expect great perf.  If you run over a 10 Gig
Network to a Cinder Volume backed by a decent/real disk I don't think the
performance is as significant as some seem to think (full disclosure it's
been a while since I've actually tested and looked at this closely).  That
being said, there's a good deal of work that should be done here to tweak
it and improve the LVM driver in Cinder for example.  LVM in general tends
to be perceived as providing poor performance, but meh... some of that is
based more in fud than in data (note I said some of it).

All of that is kinda off topic but it did come up so I thought I'd chime
in.  The real question of LVM on Compute Nodes... I'd mostly be
interested in more feedback from folks that use that and why?  I don't have
any background or experience with it so I'd love some insight on why one
chooses LVM on the Compute Node versus File System based instances?  At
some point consolidating the LVM code in Cinder and Nova should really
happen as well (i.e. a shared library or having LVM code in oslo).  Yet
another topic and fortunately there are a few folks working on that for
Kilo.

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread John Griffith
On Tue, Oct 21, 2014 at 9:17 AM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 For LVM-thin I believe it is already disabled? It is only really
 needed on LVM-thick, where the returning zeros behaviour is not done.

 On 21 October 2014 08:29, Avishay Traeger avis...@stratoscale.com wrote:
  I would say that wipe-on-delete is not necessary in most deployments.
 
  Most storage backends exhibit the following behavior:
  1. Delete volume A that has data on physical sectors 1-10
  2. Create new volume B
  3. Read from volume B before writing, which happens to map to physical
  sector 5 - backend should return zeroes here, and not data from volume A
 
  In case the backend doesn't provide this rather standard behavior, data
 must
  be wiped immediately.  Otherwise, the only risk is physical security,
 and if
  that's not adequate, customers shouldn't be storing all their data there
  regardless.  You could also run a periodic job to wipe deleted volumes to
  reduce the window of vulnerability, without making delete_volume take a
  ridiculously long time.
 
  Encryption is a good option as well, and of course it protects the data
  before deletion as well (as long as your keys are protected...)
 
  Bottom line - I too think the default in devstack should be to disable
 this
  option, and think we should consider making the default False in Cinder
  itself.  This isn't the first time someone has asked why volume deletion
  takes 20 minutes...
 
  As for queuing backup operations and managing bandwidth for various
  operations, ideally this would be done with a holistic view, so that for
  example Cinder operations won't interfere with Nova, or different Nova
  operations won't interfere with each other, but that is probably far down
  the road.
 
  Thanks,
  Avishay
 
 
  On Tue, Oct 21, 2014 at 9:16 AM, Chris Friesen 
 chris.frie...@windriver.com
  wrote:
 
  On 10/19/2014 09:33 AM, Avishay Traeger wrote:
 
  Hi Preston,
  Replies to some of your cinder-related questions:
  1. Creating a snapshot isn't usually an I/O intensive operation.  Are
  you seeing I/O spike or CPU?  If you're seeing CPU load, I've seen the
  CPU usage of cinder-api spike sometimes - not sure why.
  2. The 'dd' processes that you see are Cinder wiping the volumes during
  deletion.  You can either disable this in cinder.conf, or you can use a
  relatively new option to manage the bandwidth used for this.
 
  IMHO, deployments should be optimized to not do very long/intensive
  management operations - for example, use backends with efficient
  snapshots, use CoW operations wherever possible rather than copying
 full
  volumes/images, disabling wipe on delete, etc.
 
 
  In a public-cloud environment I don't think it's reasonable to disable
  wipe-on-delete.
 
  Arguably it would be better to use encryption instead of wipe-on-delete.
  When done with the backing store, just throw away the key and it'll be
  secure enough for most purposes.
 
  Chris
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Duncan Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


We disable this in the gates: CINDER_SECURE_DELETE=False

ThinLVM (which hopefully will be default upon release of Kilo) doesn't need
it because internally it returns zeros when reading unallocated blocks so
it's a non-issue.
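
(Quick illustration of that behavior - not gate code, just a sketch assuming
a spare VG named "vg" and lvm2 installed:

  sudo lvcreate -L 1G -T vg/pool
  sudo lvcreate -V 100M -T vg/pool -n vol1
  sudo dd if=/dev/vg/vol1 bs=1M count=1 | hexdump -C | head

reads from the unallocated thin LV come back as zeros, so there's nothing to
scrub at delete time.)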

The debate of whether or not to wipe LVs is a long-running issue.  The default
behavior in Cinder is to leave it enabled and IMHO that's how it should
stay.  The fact is, anything that might be construed as less secure and
has been defaulted to the more secure setting should be left as it is.
It's simple to turn this off.

Also, nobody seemed to mention that in the case of Cinder operations like
copy-volume and the delete process you also have the ability to set
bandwidth limits on these operations, and in the case of delete even
specify different schemes (not just enabled/disabled but other options that
may be less or more IO intensive).
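
To give a rough idea (from memory, so double check against the driver code
referenced below), the relevant knobs in cinder.conf look something like:

  volume_clear = zero          # or 'shred', or 'none' to disable wiping
  volume_clear_size = 0        # MB to wipe; 0 means the whole volume
  volume_copy_bps_limit = 0    # throttle for dd-based copy/clear, 0 = unlimited

and on the devstack side it's just CINDER_SECURE_DELETE=False in localrc.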

For further reference check out the config options [1]

Thanks,
John

[1]:
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L69
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Cells conversation starter

2014-10-23 Thread Tim Bell
 -Original Message-
 From: Andrew Laski [mailto:andrew.la...@rackspace.com]
 Sent: 22 October 2014 21:12
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] Cells conversation starter
 
 
 On 10/22/2014 12:52 AM, Michael Still wrote:
  Thanks for this.
 
  It would be interesting to see how much of this work you think is
  achievable in Kilo. How long do you see this process taking? In line
  with that, is it just you currently working on this? Would calling for
  volunteers to help be meaningful?
 
 I think that getting a single cell setup tested in the gate is achievable.  I 
 think
 feature parity might be a stretch but could be achievable with enough hands to
 work on it.  Honestly I think that making cells the default implementation is
 going to take more than a cycle. But I think we can get some specifics worked
 out as to the direction and may be able to get to a point where the remaining
 work is mostly mechanical.
 

I think getting to feature parity would be a good Kilo objective. Moving to 
default is another step, which would need migration scripts from the non-cells 
setups and heavy testing. Aiming for L for that would seem reasonable given 
that we are not drowning in volunteers.

 At the moment it is mainly me working on this with some support from a couple
 of people.  Volunteers would certainly be welcomed on this effort though.  If 
 it
 would be useful perhaps we could even have a cells subgroup to track progress
 and direction of this effort.
 

CERN and BARC (Bhabha Atomic Research Centre in Mumbai) would be interested in 
helping to close the gap. 

Tim

 
  Michael
 
  On Tue, Oct 21, 2014 at 5:00 AM, Andrew Laski
  andrew.la...@rackspace.com wrote:
  One of the big goals for the Kilo cycle by users and developers of
  the cells functionality within Nova is to get it to a point where it
  can be considered a first class citizen of Nova.  Ultimately I think
  this comes down to getting it tested by default in Nova jobs, and
  making it easy for developers to work with.  But there's a lot of
  work to get there.  In order to raise awareness of this effort, and
  get the conversation started on a few things, I've summarized a little bit
 about cells and this effort below.
 
 
  Goals:
 
  Testing of a single cell setup in the gate.
  Feature parity.
  Make cells the default implementation.  Developers write code once
  and it works for  cells.
 
  Ultimately the goal is to improve maintainability of a large feature
  within the Nova code base.
 
 
  Feature gaps:
 
  Host aggregates
  Security groups
  Server groups
 
 
  Shortcomings:
 
  Flavor syncing
   This needs to be addressed now.
 
  Cells scheduling/rescheduling
  Instances can not currently move between cells
   These two won't affect the default one cell setup so they will
  be addressed later.
 
 
  What does cells do:
 
  Schedule an instance to a cell based on flavor slots available.
  Proxy API requests to the proper cell.
  Keep a copy of instance data at the global level for quick retrieval.
  Sync data up from a child cell to keep the global level up to date.
 
 
  Simplifying assumptions:
 
  Cells will be treated as a two level tree structure.
 
 
  Plan:
 
  Fix flavor breakage in child cell which causes boot tests to fail.
  Currently the libvirt driver needs flavor.extra_specs which is not
  synced to the child cell.  Some options are to sync flavor and extra
  specs to child cell db, or pass full data with the request.
  https://review.openstack.org/#/c/126620/1
  offers a means of passing full data with the request.
 
  Determine proper switches to turn off Tempest tests for features that
  don't work with the goal of getting a voting job.  Once this is in
  place we can move towards feature parity and work on internal refactorings.
 
  Work towards adding parity for host aggregates, security groups, and
  server groups.  They should be made to work in a single cell setup,
  but the solution should not preclude them from being used in multiple
  cells.  There needs to be some discussion as to whether a host
  aggregate or server group is a global concept or per cell concept.
 
  Work towards merging compute/api.py and compute/cells_api.py so that
  developers only need to make changes/additions in once place.  The
  goal is for as much as possible to be hidden by the RPC layer, which
  will determine whether a call goes to a compute/conductor/cell.
 
  For syncing data between cells, look at using objects to handle the
  logic of writing data to the cell/parent and then syncing the data to the
 other.
 
  A potential migration scenario is to consider a non cells setup to be
  a child cell and converting to cells will mean setting up a parent
  cell and linking them.  There are periodic tasks in place to sync
  data up from a child already, but a manual kick off mechanism will need to 
  be
 added.
 
 
  Future plans:
 
  Something 

Re: [openstack-dev] [oslo] proposed summit session topics

2014-10-23 Thread Flavio Percoco
Looks good, sorry for missing the meeting!

On 10/22/2014 09:11 PM, Doug Hellmann wrote:
 
 On Oct 20, 2014, at 1:22 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 After today’s meeting, we have filled our seven session slots. Here’s the 
 proposed list, in no particular order. If you think something else needs to 
 be on the list, speak up today because I’ll be plugging all of this into the 
 scheduling tool in the next day or so.

 https://etherpad.openstack.org/p/kilo-oslo-summit-topics

 * oslo.messaging
  * need more reviewers
  * what to do about keeping drivers up to date / moving them out of the main 
 tree
  * python 3 support

 * Graduation schedule

 * Python 3
  * what other than oslo.messaging / eventlet should (or can) we be working 
 on?

 * Alpha versioning

 * Namespace packaging

 * Quota management
  * What should the library do?
  * How do we manage database schema info from the incubator or a library if 
 the app owns the migration scripts?

 * taskflow
  * needs more reviewers
  * removing duplication with other oslo libraries
 
 I’ve pushed our schedule to http://kilodesignsummit.sched.org but it will 
 take a little while for the sync to happen. In the mean time, here’s what I 
 came up with:
 
 2014-11-05 11:00  - Oslo graduation schedule 
 2014-11-05 11:50  - oslo.messaging 
 2014-11-05 13:50  - A Common Quota Management Library 
 2014-11-06 11:50  - taskflow 
 2014-11-06 13:40  - Using alpha versioning for Oslo libraries 
 2014-11-06 16:30  - Python 3 support in Oslo 
 2014-11-06 17:20  - Moving Oslo away from namespace packages 
 
 That should allow the QA and Infra teams to participate in the versioning and 
 packaging discussions, Salvatore to be present for the quota library session 
 (and lead it, I hope), and the eNovance guys who also work on ceilometer to 
 be there for the Python 3 session.
 
 If you know you have a conflict with one of these times, let me know and I’ll 
 see if we can juggle a little.
 
 Doug
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Flavio Percoco
On 10/22/2014 08:15 PM, Doug Hellmann wrote:
 The application projects are dropping python 2.6 support during Kilo, and 
 I’ve had several people ask recently about what this means for Oslo. Because 
 we create libraries that will be used by stable versions of projects that 
 still need to run on 2.6, we are going to need to maintain support for 2.6 in 
 Oslo until Juno is no longer supported, at least for some of our projects. 
 After Juno’s support period ends we can look again at dropping 2.6 support in 
 all of the projects.
 
 
 I think these rules cover all of the cases we have:
 
 1. Any Oslo library in use by an API client that is used by a supported 
 stable branch (Icehouse and Juno) needs to keep 2.6 support.
 
 2. If a client library needs a library we graduate from this point forward, 
 we will need to ensure that library supports 2.6.
 
 3. Any Oslo library used directly by a supported stable branch of an 
 application needs to keep 2.6 support.
 
 4. Any Oslo library graduated during Kilo can drop 2.6 support, unless one of 
 the previous rules applies.
 
 5. The stable/icehouse and stable/juno branches of the incubator need to 
 retain 2.6 support for as long as those versions are supported.
 
 6. The master branch of the incubator needs to retain 2.6 support until we 
 graduate all of the modules that will go into libraries used by clients.
 
 
 A few examples:
 
 - oslo.utils was graduated during Juno and is used by some of the client 
 libraries, so it needs to maintain python 2.6 support.
 
 - oslo.config was graduated several releases ago and is used directly by the 
 stable branches of the server projects, so it needs to maintain python 2.6 
 support.
 
 - oslo.log is being graduated in Kilo and is not yet in use by any projects, 
 so it does not need python 2.6 support.
 
 - oslo.cliutils and oslo.apiclient are on the list to graduate in Kilo, but 
 both are used by client projects, so they need to keep python 2.6 support. At 
 that point we can evaluate the code that remains in the incubator and see if 
 we’re ready to turn off 2.6 support there. 
 
 
 Let me know if you have questions about any specific cases not listed in the 
 examples.

The rules look ok to me but I'm a bit worried that we might miss
something in the process due to all these rules being in place. Would it
be simpler to just say we'll keep py2.6 support in oslo for Kilo and
drop it in Igloo (or L?) ?

Once Igloo development begins, Kilo will be stable (without py2.6
support except for Oslo) and Juno will be in security maintenance (with
py2.6 support).

I guess the TL;DR of what I'm proposing is to keep 2.6 support in oslo
until we move the rest of the projects off it, just to keep the process simpler.
Probably longer, but hopefully simpler.

I'm sure I'm missing something so please, correct me here.
Flavio


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Andreas Jaeger
Doug, thanks for writing this up.

Looking at your list, I created a patch and only changed oslo.log:

https://review.openstack.org/130444

Please double check that I didn't miss anything,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

2014-10-23 Thread Flavio Percoco
On 10/22/2014 04:46 PM, Zhi Yan Liu wrote:
 Replied inline.
 
 On Wed, Oct 22, 2014 at 9:33 PM, Flavio Percoco fla...@redhat.com wrote:
 On 10/22/2014 02:30 PM, Zhi Yan Liu wrote:
 Greetings,

 On Wed, Oct 22, 2014 at 4:56 PM, Flavio Percoco fla...@redhat.com wrote:
 Greetings,

 Back in Havana a, partially-implemented[0][1], Cinder driver was merged
 in Glance to provide an easier and hopefully more consistent interaction
 between glance, cinder and nova when it comes to manage volume images
 and booting from volumes.

 In my idea, it is not only for the VM provisioning and consuming feature
 but also for implementing a consistent and unified block storage
 backend for the image store.  For historical reasons, we have implemented
 a lot of duplicated block storage drivers between glance and cinder.
 IMO, cinder could be regarded as a full-functional block storage backend
 from OpenStack's perspective (I mean it contains both the data and control
 planes), and glance could just leverage cinder as a unified block storage
 backend. Essentially, Glance has two kinds of drivers, block storage
 drivers and object storage drivers (e.g. the swift and s3 drivers).  From
 the above opinion, I consider giving glance a cinder driver very
 sensible; it could provide a unified and consistent way to access
 different kinds of block backends instead of implementing duplicated
 drivers in both projects.

 Let me see if I got this right. You're suggesting to have a cinder
 driver in Glance so we can basically remove the
 'create-volume-from-image' functionality from Cinder. Is this right?

 
 I don't think we need to remove any feature that is an existing/reasonable
 use case from the end user's perspective; 'create-volume-from-image' is a
 useful function and needs to be kept as-is, in my view. But I consider we can
 make some changes to the internal implementation if we have a cinder driver for
 glance. E.g., for this use case, if glance already stores the image as a volume
 then cinder can create the volume efficiently by leveraging such a
 capability of the backend storage; I think this case is just like ceph's
 current way of handling this situation (so a duplication example again).
 
 I know some people don't like to see similar drivers implemented in
 different projects again and again, but at least I think this is a
 harmless and beneficial feature/driver.

 It's not as harmless as it seems. There are many users confused as to
 what the use case of this driver is. For example, should users create
 volumes from images? Or should they create images that are then stored in
 a volume? What's the difference?
 
 I'm not sure I understood all the concerns from those folks, but for your
 examples, one key reason I think is that they still think of it in too
 technical a way. I mean, create-image-from-volume and
 create-volume-from-image are useful and reasonable _use cases_ from the end
 user's perspective, because volumes and images are totally different
 concepts for the end user in a cloud context (at least, in the OpenStack
 context). The benefit/purpose of leveraging a cinder store/driver in
 glance is not to change those concepts and existing use cases for the end
 user/operator but to try to help us implement those features
 efficiently inside glance and cinder, IMO, including lowering the
 duplication as much as possible, as I mentioned before. So, in
 short, I see the impact of this idea at the _implementation_ level,
 instead of at the exposed _use case_ level.

While I agree it has a major impact in the implementation of things, I
still think it has an impact from an use-case perspective, even an
operations perspective.

For example, If I were to deploy Glance on top of Cinder, I need to
first make Cinder accessible from Glance, which it might not be. This is
probably not a big deal. However, I also need to figure out things like:
How should I expose this to my users? Do they need it? or should I keep
it internal?

Furthermore, I'd need to answer questions like: Now that images can be
created in volumes, I need to take that into account when doing my size
planning.

Not saying these are blockers for this feature but I just want to make
clear that from a users perspective, it's not as simple as enabling a
new driver in Glance.


 Technically, the answer is probably none, but from a deployment and
 usability perspective, there's a huge difference that needs to be
 considered.
 
 According to my above explanations, IMO, this driver/idea couldn't
 (and shouldn't) break existing concepts and use cases for the end
 user/operator, but if I'm still missing something please let me know.

According to the use-cases explained in this thread (also in the emails
from John and Mathieu) this is something that'd be good to have. I'm
looking forward to seeing the driver completed.

As John mentioned in his email, we should probably sync again in K-1 to
see if there's been some progress on the bricks side and the other
things this driver depends on. If there hasn't, we should probably get
rid of it and add it back once it can actually be full-featured.

Cheers,
Flavio



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread henry hly
Hi Phil,

Thanks for your feedback, and patience of this long history reading :)
See comments inline.

On Wed, Oct 22, 2014 at 5:59 PM, Day, Phil philip@hp.com wrote:
 -Original Message-
 From: henry hly [mailto:henry4...@gmail.com]
 Sent: 08 October 2014 09:16
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
 cascading

 Hi,

 Good questions: why not just keep multiple endpoints, and leave the
 orchestration effort on the client side?

 From the feedback of some large data center operators, they want the cloud
 exposed to tenants as a single region with multiple AZs, while each AZ may be
 distributed in different or the same locations, very similar to the AZ concept of AWS.
 And the OpenStack API is indispensable for the cloud to be eco-system
 friendly.

 The cascading is mainly doing one thing: map each standalone child
 Openstack to an AZ in the parent Openstack, hide the separate child endpoints,
 and thus converge them into a single standard OS-API endpoint.

 One of the obvious benefits of doing so is the networking: we can create a single
 Router/LB, with subnet/port members from different children, just like in a 
 single
 OpenStack instance. Without the parent OpenStack working as the
 aggregation layer, it is not so easy to do so; an explicit VPN endpoint may be
 required in each child.

 I've read through the thread and the various links, and to me this still 
 sounds an awful lot like having multiple regions in Keystone.

 First of all I think we're in danger of getting badly mixed up in terminology 
 here around AZs which is an awfully overloaded term - esp when we make 
 comparisons to AWS AZs.  Whether we like the current Openstack usage of 
 these terms or not, let's at least stick to how they are currently defined and 
 used in Openstack:

 AZs - A scheduling concept in Nova and Cinder.  Simply provides some 
 isolation semantic about a compute host or storage server.  Nothing to do 
 with explicit physical or geographical location, although some degree of that 
 (separate racks, power, etc) is usually implied.

 Regions - A keystone concept for a collection of Openstack Endpoints.   They 
 may be distinct (a completely isolated set of Openstack services) or overlap 
 (some shared services).  Openstack clients support explicit user selection of 
 a region.

 Cells - A scalability / fault-isolation concept within Nova.  Because Cells 
 aspires to provide all Nova features transparently across cells this kind of 
 acts like multiple regions where only the Nova service is distinct 
 (Networking has to be common, Glance has to be common or at least federated 
 in a transparent way, etc).   The difference from regions is that the user 
 doesn’t have to make an explicit region choice - they get a single Nova URL 
 for all cells.   From what I remember Cells originally started out also using 
 the existing APIs as the way to connect the Cells together, but had to move 
 away from that because of the performance overhead of going through multiple 
 layers.



Agreed, it's very clear now. However, isolation is not all about
hardware and facility faults; a REST API is preferred in terms of system-level
isolation despite the theoretical protocol serialization
overhead.


 Now with Cascading it seems that we're pretty much building on the Regions 
 concept, wrapping it behind a single set of endpoints for user convenience, 
 overloading the term AZ

Sorry, I'm not very certain of the meaning of "overloading". It's just a
configuration choice by the admin in the wrapper Openstack. As you
mentioned, there is no explicit definition of what an AZ should be, so
cascading chooses to map it to a child Openstack. Surely we could use
another concept or invent a new concept instead of AZ, but AZ is the
most appropriate one because it shares the same semantics of isolation
with those children.

 to re-expose those sets of services to allow the user to choose between them 
 (doesn't this kind of negate the advantage of not having to specify the 
 region in the client - is that really such a big deal for users ?), and doing 
 something to provide a sort of federated Neutron service - because as we all 
 know the hard part in all of this is how you handle the Networking.

 It kind of feels to me that if we just concentrated on the part of this that 
 is working out how to distribute/federate Neutron then we'd have a solution 
 that could be mapped easily onto cells and/or regions - and I wonder then 
 why we really need yet another aggregation concept ?


I agree that it's not so huge a gap between cascading AZs and
standalone endpoints for Nova and Cinder. However, wrapping is
strongly needed, per customer feedback, for Neutron, especially for those
who operate multiple internally connected DCs. They don't like to force
tenants to create multiple routing domains, connected with explicit
vpnaas. Instead they prefer a simple L3 router connecting subnets and
ports from 

Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread Preston L. Bannister
John,

As a (new) OpenStack developer, I just discovered the
CINDER_SECURE_DELETE option.

As an *implicit* default, I entirely approve.  Production OpenStack
installations should *absolutely* ensure there is no information leakage
from one instance to the next.

As an *explicit* default, I am not so sure. Low-end storage requires you do
this explicitly. High-end storage can ensure information never leaks.
Counting on the high-end storage can make the upper levels more efficient,
which can be a good thing.

The debate about whether to wipe LVs pretty much depends on the
intelligence of the underlying store. If the lower-level storage never
returns accidental information ... explicit zeroes are not needed.



On Wed, Oct 22, 2014 at 11:15 PM, John Griffith john.griffi...@gmail.com
wrote:



 On Tue, Oct 21, 2014 at 9:17 AM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 For LVM-thin I believe it is already disabled? It is only really
 needed on LVM-thick, where the returning zeros behaviour is not done.

 On 21 October 2014 08:29, Avishay Traeger avis...@stratoscale.com
 wrote:
  I would say that wipe-on-delete is not necessary in most deployments.
 
  Most storage backends exhibit the following behavior:
  1. Delete volume A that has data on physical sectors 1-10
  2. Create new volume B
  3. Read from volume B before writing, which happens to map to physical
  sector 5 - backend should return zeroes here, and not data from volume A
 
  In case the backend doesn't provide this rather standard behavior, data
 must
  be wiped immediately.  Otherwise, the only risk is physical security,
 and if
  that's not adequate, customers shouldn't be storing all their data there
  regardless.  You could also run a periodic job to wipe deleted volumes
 to
  reduce the window of vulnerability, without making delete_volume take a
  ridiculously long time.
 
  Encryption is a good option as well, and of course it protects the data
  before deletion as well (as long as your keys are protected...)
 
  Bottom line - I too think the default in devstack should be to disable
 this
  option, and think we should consider making the default False in Cinder
  itself.  This isn't the first time someone has asked why volume deletion
  takes 20 minutes...
 
  As for queuing backup operations and managing bandwidth for various
  operations, ideally this would be done with a holistic view, so that for
  example Cinder operations won't interfere with Nova, or different Nova
  operations won't interfere with each other, but that is probably far
 down
  the road.
 
  Thanks,
  Avishay
 
 
  On Tue, Oct 21, 2014 at 9:16 AM, Chris Friesen 
 chris.frie...@windriver.com
  wrote:
 
  On 10/19/2014 09:33 AM, Avishay Traeger wrote:
 
  Hi Preston,
  Replies to some of your cinder-related questions:
  1. Creating a snapshot isn't usually an I/O intensive operation.  Are
  you seeing I/O spike or CPU?  If you're seeing CPU load, I've seen the
  CPU usage of cinder-api spike sometimes - not sure why.
  2. The 'dd' processes that you see are Cinder wiping the volumes
 during
  deletion.  You can either disable this in cinder.conf, or you can use
 a
  relatively new option to manage the bandwidth used for this.
 
  IMHO, deployments should be optimized to not do very long/intensive
  management operations - for example, use backends with efficient
  snapshots, use CoW operations wherever possible rather than copying
 full
  volumes/images, disabling wipe on delete, etc.
 
 
  In a public-cloud environment I don't think it's reasonable to disable
  wipe-on-delete.
 
  Arguably it would be better to use encryption instead of
 wipe-on-delete.
  When done with the backing store, just throw away the key and it'll be
  secure enough for most purposes.
 
  Chris
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Duncan Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 We disable this in the Gates CINDER_SECURE_DELETE=False

 ThinLVM (which hopefully will be default upon release of Kilo) doesn't
 need it because internally it returns zeros when reading unallocated blocks
 so it's a non-issue.

 The debate of to wipe LV's or not to is a long running issue.  The default
 behavior in Cinder is to leave it enable and IMHO that's how it should
 stay.  The fact is anything that might be construed as less secure and
 has been defaulted to the more secure setting should be left as it is.
 It's simple to turn this off.

 Also, nobody seemed to mention that in the case of Cinder operations like
 

Re: [openstack-dev] [rally][users]: Synchronizing between multiple scenario instances.

2014-10-23 Thread Behzad Dastur (bdastur)
Hi Boris,
I am still getting my feet wet with rally so some concepts are new, and did not 
quite get your statement regarding the different load generators. I am 
presuming you are referring to the Scenario runner and the different “types” of 
runs.

What I was looking at is the runner, where we specify the type, times and 
concurrency.  We could have an additional field(s) which would specify the 
synchronization property.

Essentially, what I have found most useful in the cases where we run 
scenarios/tests in parallel is some sort of “barrier”, where at a certain 
point in the run you want all the parallel tasks to reach a specific point 
before continuing.

I am also considering cases where synchronization is needed within a 
single benchmark case, where the same benchmark scenario 
creates some VMs, performs some tasks, and deletes the VMs.

Just for simplicity as a POC, I tried something with shared memory 
(multiprocessing.Value), which looks something like this:

import multiprocessing
import time


class Barrier(object):
    def __init__(self, concurrency):
        # Shared counter, initialized to the number of parallel runners.
        self.shmem = multiprocessing.Value('I', concurrency)
        self.lock = multiprocessing.Lock()

    def wait_at_barrier(self):
        # Block until every runner has decremented the counter.
        while self.shmem.value > 0:
            time.sleep(1)

    def decrement_shm_concurrency_cnt(self):
        with self.lock:
            self.shmem.value -= 1

And from the scenario, it can be called as:

scenario (where barrobj is a shared Barrier instance):

    # -- do some action --
    barrobj.decrement_shm_concurrency_cnt()
    barrobj.wait_at_barrier()
    # -- do some action -- all processes will do this action at almost the
    # same time

I would be happy to discuss more to get a good common solution.

regards,
Behzad





From: bo...@pavlovic.ru [mailto:bo...@pavlovic.ru] On Behalf Of Boris Pavlovic
Sent: Tuesday, October 21, 2014 3:23 PM
To: Behzad Dastur (bdastur)
Cc: OpenStack Development Mailing List (not for usage questions); Pradeep 
Chandrasekar (pradeech); John Wei-I Wu (weiwu)
Subject: Re: [openstack-dev] [rally][users]: Synchronizing between multiple 
scenario instances.

Behzad,

Unfortunately at this point there is no support of locking between scenarios.


It will be quite tricky to implement, because we have different load 
generators, and we will need to find
a common solution for all of them.

If you have any ideas about how to implement it in such a way, I will be more 
than happy to get this upstream.


One of the ways that I see is to have some kind of chain of benchmarks:

1) The first benchmark runs N VMs
2) The second benchmark does something with all those VMs
3) The third benchmark deletes all these VMs

(where the chain elements are atomic actions)

Probably this will be the better long-term solution.
The only thing that we should understand is how to store those results and how to 
display them.


If you would like to help with this let's start discussing it, in some kind of 
google docs.

Thoughts?


Best regards,
Boris Pavlovic


On Wed, Oct 22, 2014 at 2:13 AM, Behzad Dastur (bdastur) 
bdas...@cisco.com wrote:
Does rally provide any synchronization mechanism to synchronize between 
multiple scenarios when running in parallel? Rally spawns multiple processes, 
with each process running the scenario.  We need a way to synchronize between 
these to start a perf test operation at the same time.


regards,
Behzad


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread joehuang
Hi,

Because I am not able to find a meeting room for a deep dive on OpenStack 
cascading before the design summit, you are welcome to have a f2f conversation 
about cascading before the summit. I plan to stay in Paris from 
Oct. 30 to Nov. 8; if you have any doubt or question, please feel free to contact 
me. All the conversation is for clarification / idea-exchange purposes, not for 
any secret agreement purpose. It is necessary before the design summit: the design 
summit session is only 40 minutes, and if all 40 minutes are spent on basic 
questions and clarification, then no valuable conclusion can be drawn in the 
meeting. So I want to work in client-server mode: anyone who is interested in 
talking cascading with me, just tell me when you will come to the hotel where I 
am staying in Paris, and a chat can be arranged to reduce misunderstanding, get a 
clearer picture, and focus on what needs to be discussed and agreed during 
the design summit session. 

It kind of feels to me that if we just concentrated on the part of this 
that is working out how to distribute/federate Neutron then we'd have a 
solution that could be mapped easily onto cells and/or regions - and I wonder 
then why we really need yet another aggregation concept ?

My answer is that it seems to be feasible but cannot meet the multi-site cloud 
demand (that's the driving force for cascading): 
1) Large cloud operators ask multiple vendors to build the distributed but unified 
multi-site cloud together, and each vendor has his own OpenStack-based solution. 
If shared Nova/Cinder with federated Neutron is used, the cross-data-center 
integration through RPC messages for multi-vendor infrastructure is very 
difficult, and there is no clear responsibility boundary, which leads to difficulty for 
troubleshooting, upgrades, etc.
2) A RESTful API/CLI is required for each site to keep the cloud always workable 
and manageable. If shared Nova/Cinder with federated Neutron is used, then some data 
centers are not able to expose a RESTful API/CLI for management purposes.
3) The unified cloud needs to expose an open and standard API. If shared Nova / 
Cinder with federated Neutron is used, this point can be achieved.

Best Regards

Chaoyi Huang ( joehuang )

-Original Message-
From: henry hly [mailto:henry4...@gmail.com] 
Sent: Thursday, October 23, 2014 3:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hi Phil,

Thanks for your feedback, and patience of this long history reading :) See 
comments inline.

On Wed, Oct 22, 2014 at 5:59 PM, Day, Phil philip@hp.com wrote:
 -Original Message-
 From: henry hly [mailto:henry4...@gmail.com]
 Sent: 08 October 2014 09:16
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack cascading

 Hi,

 Good questions: why not just keeping multiple endpoints, and leaving 
 orchestration effort in the client side?

 From feedback of some large data center operators, they want the 
 cloud exposed to tenant as a single region with multiple AZs, while 
 each AZ may be distributed in different/same locations, very similar with AZ 
 concept of AWS.
 And the OpenStack API is indispensable for the cloud for eco-system 
 friendly.

 The cascading is mainly doing one thing: map each standalone child 
 Openstack to AZs in the parent Openstack, hide separated child 
 endpoints, thus converge them into a single standard OS-API endpoint.

 One of the obvious benefit doing so is the networking: we can create 
 a single Router/LB, with subnet/port member from different child, 
 just like in a single OpenStack instance. Without the parent 
 OpenStack working as the aggregation layer, it is not so easy to do 
 so. Explicit VPN endpoint may be required in each child.

 I've read through the thread and the various links, and to me this still 
 sounds an awful lot like having multiple regions in Keystone.

 First of all I think we're in danger of getting badly mixed up in terminology 
 here around AZs which is an awfully overloaded term - esp when we make 
 comparisons to AWS AZs.  Whether we think the current Openstack usage of 
 these terms or not, lets at least stick to how they are currently defined and 
 used in Openstack:

 AZs - A scheduling concept in Nova and Cinder.Simply provides some 
 isolation schemantic about a compute host or storage server.  Nothing to do 
 with explicit physical or geographical location, although some degree of that 
 (separate racks, power, etc) is usually implied.

 Regions - A keystone concept for a collection of Openstack Endpoints.   They 
 may be distinct (a completely isolated set of Openstack service) or overlap 
 (some shared services).  Openstack clients support explicit user selection of 
 a region.

 Cells - A scalability / fault-isolation concept within Nova.  Because Cells 
 aspires to provide all Nova features 

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-23 Thread Alan Kavanagh
+1 many thanks to Kyle for putting this as a priority, its most welcome.
/Alan

-Original Message-
From: Erik Moe [mailto:erik@ericsson.com] 
Sent: October-22-14 5:01 PM
To: Steve Gordon; OpenStack Development Mailing List (not for usage questions)
Cc: iawe...@cisco.com
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


Hi,

Great that we can have more focus on this. I'll attend the meeting on Monday 
and also attend the summit, looking forward to these discussions.

Thanks,
Erik


-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com]
Sent: den 22 oktober 2014 16:29
To: OpenStack Development Mailing List (not for usage questions)
Cc: Erik Moe; iawe...@cisco.com; calum.lou...@metaswitch.com
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

- Original Message -
 From: Kyle Mestery mest...@mestery.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 There are currently at least two BPs registered for VLAN trunk support 
 to VMs in neutron-specs [1] [2]. This is clearly something that I'd 
 like to see us land in Kilo, as it enables a bunch of things for the 
 NFV use cases. I'm going to propose that we talk about this at an 
 upcoming Neutron meeting [3]. Given the rotating schedule of this 
 meeting, and the fact the Summit is fast approaching, I'm going to 
 propose we allocate a bit of time in next Monday's meeting to discuss 
 this. It's likely we can continue this discussion F2F in Paris as 
 well, but getting a head start would be good.
 
 Thanks,
 Kyle
 
 [1] https://review.openstack.org/#/c/94612/
 [2] https://review.openstack.org/#/c/97714
 [3] https://wiki.openstack.org/wiki/Network/Meetings

Hi Kyle,

Thanks for raising this, it would be great to have a converged plan for 
addressing this use case [1] for Kilo. I plan to attend the Neutron meeting and 
I've CC'd Erik, Ian, and Calum to make sure they are aware as well.

Thanks,

Steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-23 Thread Maish Saidel-Keesing
Thanks for creating the page.

I have added a section to it with information about Kosher restaurants
as well.

Well done and thank you for the invaluable information

Maish
On 14/10/2014 20:02, Anita Kuno wrote:
 On 10/14/2014 12:40 PM, Sylvain Bauza wrote:
 On 14/10/2014 18:29, Anita Kuno wrote:
 On 10/14/2014 11:35 AM, Adrien Cunin wrote:
 Hi everyone,

 Inspired by the travels tips published for the HK summit, the
 French OpenStack user group wrote a similar wiki page for Paris:

 https://wiki.openstack.org/wiki/Summit/Kilo/Travel_Tips

 Also note that if you want some local informations or want to talk
 about user groups during the summit we will have a booth in the
 market place expo hall (location: E47).

 Adrien, On behalf of OpenStack-fr



 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 This is awesome, thanks Adrien.

 I have a request. Is there any way to expand the Food section to
 include how to find vegetarian restaurants? Any help here appreciated.
 Well, this is a tough question. We usually make use of TripAdvisor or
 other French rating websites for finding good places to eat, but some
 small restaurants don't provide this kind of information. There is no
 official requirement to provide these details, for example.

 What I can suggest is, when looking at the menu (it is mandatory to post
 it outside the restaurant), to check for the word 'Végétarien'.

 Will amend the wiki tho with these details.

 -Sylvain
 Thanks Sylvain, I appreciate the pointers. Will wander around and look
 at menus outside restaurants. Not hard to do since I love wandering
 around the streets of Paris, so easy to walk, nice wide sidewalks.

 I'll also check back on the wikipage after you have edited.

 Thank you!
 Anita.
 Thanks so much for creating this wikipage,
 Anita.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Maish Saidel-Keesing


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Fuel-library][CI] New voting gates for provision and master node smoke checks

2014-10-23 Thread Bogdan Dobrelya
Hello.
We have only one voting gate for changes submitted to fuel-library, which
checks deployment of OpenStack nodes with Puppet manifests from
fuel-library.
But we must check two more important potential 'smoke sources':

1) Master node build:
  * nailgun::*_only classes for docker builds must be smoke tested
as well as the modules for OpenStack nodes.
  * repo consistency - whether the packages used by Puppet manifests
were included in the repos on the master node or not.

2) Node provisioning:
  * consistency checks for packages shipped with the ISO as well - if some
package was missed and a node required it at the provisioning stage,
the gate would have shown that.

Would like to see comments and ideas from our DevOps and QA teams, please.

-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Error in ssh key pair log in

2014-10-23 Thread Khayam Gondal
I am trying to log in to a VM from the host using an ssh key pair instead of a
password. I have created the VM using keypair *khayamkey* and then tried to
log in to the VM using the following command

ssh -l tux -i khayamkey.pem 10.3.24.56

where *tux* is the username for the VM, but I got the following error


WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!

IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
52:5c:47:33:dd:d0:7a:cd:0e:78:8d:9b:66:d8:74:a3.
Please contact your system administrator.
Add correct host key in /home/openstack/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/openstack/.ssh/known_hosts:1
  remove with: ssh-keygen -f /home/openstack/.ssh/known_hosts -R 10.3.24.56
RSA host key for 10.3.24.56 has changed and you have requested strict checking.
Host key verification failed.

P.S: I know if I run ssh-keygen -f /home/openstack/.ssh/known_hosts -R
10.3.24.56 the problem can be solved, but then I have to provide a password to log
in to the VM, but my goal is to use key pairs, NOT a password.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Backup of information about nodes.

2014-10-23 Thread Tomasz Napierala

On 22 Oct 2014, at 21:03, Adam Lawson alaw...@aqorn.com wrote:

 What is current best practice to restore a failed Fuel node?

It’s documented here:
http://docs.mirantis.com/openstack/fuel/fuel-5.1/operations.html#restoring-fuel-master

Regards,
-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-23 Thread Elena Ezhova
Hi!

I am working on a bug "ping still working once connected even after related
security group rule is deleted"
(https://bugs.launchpad.net/neutron/+bug/1335375). The gist of the problem
is the following: when we delete a security group rule, the corresponding
rule in iptables is also deleted, but the connection that was allowed by
that rule is not destroyed.
The reason for this behavior is that in iptables we have the following
structure of the chain that filters input packets for an interface of an
instance:

Chain neutron-openvswi-i830fa99f-3 (1 references)
 pkts bytes target                        prot opt in  out  source     destination
    0     0 DROP                          all  --  *   *    0.0.0.0/0  0.0.0.0/0   state INVALID /* Drop packets that are not associated with a state. */
    0     0 RETURN                        all  --  *   *    0.0.0.0/0  0.0.0.0/0   state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */
    0     0 RETURN                        udp  --  *   *    10.0.0.3   0.0.0.0/0   udp spt:67 dpt:68
    0     0 RETURN                        all  --  *   *    0.0.0.0/0  0.0.0.0/0   match-set IPv43a0d3610-8b38-43f2-8 src
    0     0 RETURN                        tcp  --  *   *    0.0.0.0/0  0.0.0.0/0   tcp dpt:22   <-- rule that allows ssh on port 22
    1    84 RETURN                        icmp --  *   *    0.0.0.0/0  0.0.0.0/0
    0     0 neutron-openvswi-sg-fallback  all  --  *   *    0.0.0.0/0  0.0.0.0/0   /* Send unmatched traffic to the fallback chain. */

So, if we delete the rule that allows tcp on port 22, then connections that
are already established won't be closed, because all of their packets will
satisfy the rule:

    0     0 RETURN                        all  --  *   *    0.0.0.0/0  0.0.0.0/0   state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */

I am seeking advice on how to deal with this problem. There are a couple
of ideas how to do it (more or less realistic):

   - Kill the connection using conntrack (a rough sketch of this idea
     follows below the list)

     The problem here is that it is sometimes impossible to tell which
connection should be killed. For example, there may be two instances running
in different namespaces that have the same IP addresses. As the compute node
doesn't know anything about namespaces, it cannot distinguish between the
two seemingly identical connections:
 $ sudo conntrack -L | grep 10.0.0.5
 tcp      6 431954 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60723 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60723 [ASSURED] mark=0 use=1
 tcp      6 431976 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60729 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60729 [ASSURED] mark=0 use=1

I wonder whether there is any way to search for a connection by destination
MAC?

   - Delete the iptables rule that directs packets associated with a known
     session to the RETURN chain

     This will force all packets to go through the full chain each time,
and it will definitely make the connection close. But it will strongly
affect performance. A timeout could probably be introduced after which
this rule would be restored, but it is uncertain how long it should be.
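
As a rough illustration of the first idea only (a hypothetical helper, not
existing Neutron code, and it deliberately ignores the duplicate-IP/namespace
problem described above), killing the tracked connections of an instance
could look something like this in Python:

    import subprocess

    def kill_conntrack_entries(instance_ip, protocol='tcp'):
        # 'conntrack -D' deletes every tracked connection matching the
        # filter; '-d' matches the original destination address and '-p'
        # the protocol.
        return subprocess.call(
            ['conntrack', '-D', '-p', protocol, '-d', instance_ip])

    # e.g. after removing the ssh rule of the instance with fixed IP 10.0.0.5:
    # kill_conntrack_entries('10.0.0.5')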

Please share your thoughts on how it would be better to handle it.

Thanks in advance,
Elena
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] [qa] [oslo] Declarative HTTP Tests

2014-10-23 Thread Chris Dent


I've proposed a spec to Ceilometer

   https://review.openstack.org/#/c/129669/

for a suite of declarative HTTP tests that would be runnable both in
gate check jobs and in local dev environments.

There's been some discussion that this may be generally applicable
and could be best served by a generic tool. My original assertion
was "let's make something work and then see if people like it", but I
thought I had better also check with the larger world:

* Is this a good idea?

* Do other projects have similar ideas in progress?

* Is this concept something for which a generic tool should be
  created _prior_ to implementation in an individual project?

* Is there prior art? What's a good format?
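
To make the idea a bit more concrete, one (purely illustrative) shape for
such tests is a data-driven list of request/expectation pairs plus a tiny
runner -- this is not the format proposed in the spec, just a sketch, and it
assumes a locally running ceilometer-api on its default port:

    import requests

    TESTS = [
        {'desc': 'API root returns a version document',
         'method': 'GET',
         'url': 'http://localhost:8777/',
         'expect_status': 200},
    ]

    def run(tests):
        for test in tests:
            resp = requests.request(test['method'], test['url'])
            assert resp.status_code == test['expect_status'], test['desc']

    if __name__ == '__main__':
        run(TESTS)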

Thanks.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Pluggable framework in Fuel: first prototype ready

2014-10-23 Thread Mike Scherbakov


1. I feel like we should not require user to unpack the plugin before
installing it. Moreover, we may chose to distribute plugins in our own
format, which we may potentially change later. E.g. lbaas-v2.0.fp. I'd
rather stick with two actions:


- Assembly (externally): fpb --build name


- Installation (on master node): fuel --install-plugin name

  I like the idea of putting plugin installation functionality in fuel client,
 which is installed
 on master node.
 But in the current version plugin installation requires files operations
 on the master,
 as result we can have problems if user's fuel-client is installed on
 another env.


I suggest keeping it simple for now, as we have the issue mentioned by
Evgeniy: fuel client is supposed to work from other nodes, and we would need
additional verification code in there. Also, to make it smooth, we would
have to end up with a few more checks - like what if the tarball is broken,
what if we can't find the install script in it, etc.
I'd suggest running it simply for 6.0, and then we will see how it's being
used and what other limitations / issues we have around plugin installation
and usage. We can consider making this functionality part of fuel
client a bit later.

Thanks,

On Tue, Oct 21, 2014 at 6:57 PM, Vitaly Kramskikh vkramsk...@mirantis.com
wrote:

 Hi,

 As for a separate section for plugins, I think we should not force it and
 leave this decision to a plugin developer, so he can create just a single
 checkbox or a section of the settings tab or a separate tab depending on
 plugin functionality. Plugins should be able to modify arbitrary release
 fields. For example, if Ceph was a plugin, it should be able to extend
 wizard config to add new options to Storage pane. If vCenter was a plugin,
 it should be able to set maximum amount of Compute nodes to 0.

 2014-10-20 21:21 GMT+07:00 Evgeniy L e...@mirantis.com:

 Hi guys,

 *Romans' questions:*

  I feel like we should not require user to unpack the plugin before
 installing it.
  Moreover, we may chose to distribute plugins in our own format, which
 we
  may potentially change later. E.g. lbaas-v2.0.fp.

 I like the idea of putting plugin installation functionality in fuel
 client, which is installed
 on master node.
 But in the current version plugin installation requires files operations
 on the master,
 as result we can have problems if user's fuel-client is installed on
 another env.
 What we can do is to try to determine where fuel-client is installed, if
 it's master
 node, we can perform installation, if it isn't master node, we can show
 user the
 message, that in the current version remote plugin installation is not
 supported.
 In the next versions if we implement plugin manager (which is separate
 service
 for plugins management) we will be able to do it remotely.

  How are we planning to distribute fuel plugin builder and its updates?


 Yes, as Mike mentioned our plan is to release it on PyPi which is python
 packages
 repository, so any developer will be able to run `pip install fpb` and
 get the tool.

  What happens if an error occurs during plugin installation?

 Plugins installation process is very simple, our plan is to have some
 kind of transaction,
 to make it atomic.

 1. register plugin via API
 2. copy the files

 In case of error on the 1st step, we can do nothing, in case of error on
 the 2nd step,
 remove files if there are any, and delete a plugin via rest api. And show
 user a message.

  What happens if an error occurs during plugin execution?

 In the first iteration we are going to interrupt deployment if there are
 any errors for plugin's
 tasks, also we are thinking how to improve it, for example we wanted to
 provide a special
 flag for each task, like fail_deploument_on_error, and only if it's true,
 we fail deployment in
 case of failed task. But it can be tricky to implement, it requires to
 change the current
 orchestrator/nailgun error handling logic. So, I'm not sure if we can
 implement this logic in
 the first release.

 Regarding to meaningful error messages, yes, we want to show the
 user, which plugin
 causes the error.

  Shall we consider a separate place in UI (tab) for plugins?

 +1 to Mike's answer

  When are we planning to focus on the 2 plugins which were identified
 as must-haves
  for 6.0? Cinder & LBaaS

 For Cinder we are going to implement plugin which configures GlusterFS as
 cinder backend,
 so, if user has installed GlusterFS cluster, we can configure our cinder
 to work with it,
 I want to mention that we don't install GlusterFS nodes, we just
 configure cinder to work
 with user's GlusterFS cluster.
 Stanislaw B. already did some scripts which configures cinder to work
 with GlusterFS, so
 we are on testing stage.

 Regarding to LBaaS, Stanislaw B. did multinode implementation, ha
 implementation is tricky
 and requires some additional work, we are working on it.

 Nathan's questions:

 Looks like Mike answered UI related 

Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread Duncan Thomas
On 23 October 2014 08:30, Preston L. Bannister pres...@bannister.us wrote:
 John,

 As a (new) OpenStack developer, I just discovered the CINDER_SECURE_DELETE
 option.

 As an *implicit* default, I entirely approve.  Production OpenStack
 installations should *absolutely* insure there is no information leakage
 from one instance to the next.

 As an *explicit* default, I am not so sure. Low-end storage requires you do
 this explicitly. High-end storage can insure information never leaks.
 Counting on high level storage can make the upper levels more efficient, can
 be a good thing.

 The debate about whether to wipe LV's pretty much massively depends on the
 intelligence of the underlying store. If the lower level storage never
 returns accidental information ... explicit zeroes are not needed.

The security requirements regarding wiping are totally and utterly
site dependent - some places care and are happy to pay the cost (some
even using an entirely pointless multi-write scrub out of historically
rooted paranoia) whereas some don't care in the slightest. LVM thin,
which John mentioned, is no worse or better than most 'smart' arrays -
unless you happen to hit a bug, it won't return previous info.

That's a good default; if your site needs better, then there are lots
of config options to go looking into for a whole variety of things,
and you should probably be doing your own security audits of the code
base and other deep analysis, as well as reading and contributing to
the security guide.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread Day, Phil
Hi,

 -Original Message-
 From: joehuang [mailto:joehu...@huawei.com]
 Sent: 23 October 2014 09:59
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
 cascading
 
 Hi,
 
 Because I am not able to find a meeting room to have deep diving OpenStack
 cascading before design summit. You are welcome to have a f2f conversation
 about the cascading before design summit. I planned to stay at Paris from
 Oct.30 to Nov.8, if you have any doubt or question, please feel free to
 contact me. All the conversation is for clarification / idea exchange purpose,
 not for any secret agreement purpose. It is necessary before design summit,
 for design summit session, it's only 40 minutes, if all 40 minutes are spent 
 on
 basic question and clarification, then no valuable conclusion can be drawn in
 the meeting. So I want to work as client-server mode, anyone who is
 interested in talking cascading with me, just tell me when he will come to the
 hotel where I stay at Paris, then a chat could be made to reduce
 misunderstanding, get more clear picture, and focus on what need to be
 discussed and consensuses during the design summit session.
 
Sure, I'll certainly try and find some time to meet and talk.


 It kind of feels to me that if we just concentrated on the part of this 
 that
 is working out how to distribute/federate Neutron then we'd have a solution
 that could be mapped as easily cells and/or regions - and I wonder if then
 why really need yet another aggregation concept ?
 
 My answer is that it seems to be feasible but can not meet the muti-site
 cloud demand (that's the drive force for cascading):
 1) large cloud operator ask multi-vendor to build the distributed but unified
 multi-site cloud together and each vendor has his own OpenStack based
 solution. If shared Nova/Cinder with federated Neutron used, the cross data
 center integration through RPC message for multi-vendor infrastrcuture is
 very difficult, and no clear responsibility boundry, it leads to difficulty 
 for
 trouble shooting, upgrade, etc.

So if the scope of what you're doing is to provide a single API across
multiple clouds that are being built and operated independently, then I'm not
sure how you can impose enough consistency to guarantee any operations.  What
if one of those clouds has Nova AZs configured, and you're using AZs (from
what I understand) to try and route to a specific cloud?  How do you get image
and flavor consistency across the clouds?

I picked up on the Network aspect because that seems to be something you've 
covered in some depth here 
https://docs.google.com/presentation/d/1wIqWgbZBS_EotaERV18xYYA99CXeAa4tv6v_3VlD2ik/edit?pli=1#slide=id.g390a1cf23_2_149
 so I'd assumed it was an intrinsic part of your proposal.  Now I'm even less 
clear on the scope of what you're trying to achieve ;-( 

If this is a federation layer for, in effect, arbitrary OpenStack clouds then
it kind of feels like it can't be anything other than an aggregator of queries
(list the VMs in all of the clouds you know about, and show the results in one
output).   If you have to make API calls into many clouds (when only one of
them may have any results) then that feels like it would be a performance
issue.  If you're going to cache the results somehow then in effect you need
the Cells approach for propagating up results, which means the sub-clouds have
to be co-operating.

Maybe I missed it somewhere, but is there a clear write-up of the restrictions 
/ expectations of sub-clouds to work in this model ?

Kind Regards
Phil

 2) restful API /CLI is required for each site to make the cloud always 
 workable
 and manageable. If shared Nova/Cinder with federated Neutron, then some
 data center is not able to expose restful API/CLI for management purpose.
 3) the unified cloud need to expose open and standard api. If shared Nova /
 Cinder with federated Neutron, this point can be arhieved.
 
 Best Regards
 
 Chaoyi Huang ( joehuang )
 
 -Original Message-
 From: henry hly [mailto:henry4...@gmail.com]
 Sent: Thursday, October 23, 2014 3:13 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
 cascading
 
 Hi Phil,
 
 Thanks for your feedback, and patience of this long history reading :) See
 comments inline.
 
 On Wed, Oct 22, 2014 at 5:59 PM, Day, Phil philip@hp.com wrote:
  -Original Message-
  From: henry hly [mailto:henry4...@gmail.com]
  Sent: 08 October 2014 09:16
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by
  OpenStack cascading
 
  Hi,
 
  Good questions: why not just keeping multiple endpoints, and leaving
  orchestration effort in the client side?
 
  From feedback of some large data center operators, they want the
  cloud 

Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-23 Thread Miguel Angel Ajo Pelayo

Hi!

  This is an interesting topic. I don't know if there's any way to
target connection-tracker entries by MAC, but that would be the ideal
solution.

  I also understand the RETURN for RELATED,ESTABLISHED is there for
performance reasons, and removing it would lead to longer table evaluation
and degraded packet throughput.

  Temporarily removing this entry doesn't seem like a good solution
to me, as we can't really know how long we would need to remove this rule to
induce the connection to close at both ends (it will only close if any
new activity happens and the timeout is exhausted afterwards).

  Also, I'm not sure whether removing all the conntrack entries that match a
certain filter would be enough, as it may only lead to a full re-evaluation
of the rules for the next packet of the cleared connections (maybe I'm
missing some corner detail).


Best regards,
Miguel Ángel.



- Original Message - 

 Hi!

 I am working on a bug  ping still working once connected even after related
 security group rule is deleted (
 https://bugs.launchpad.net/neutron/+bug/1335375 ). The gist of the problem
 is the following: when we delete a security group rule the corresponding
 rule in iptables is also deleted, but the connection, that was allowed by
 that rule, is not being destroyed.
 The reason for such behavior is that in iptables we have the following
 structure of a chain that filters input packets for an interface of an
 istance:

 Chain neutron-openvswi-i830fa99f-3 (1 references)
 pkts bytes target prot opt in out source destination
 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID /* Drop packets that
 are not associated with a state. */
 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED /* Direct
 packets associated with a known session to the RETURN chain. */
 0 0 RETURN udp -- * * 10.0.0.3 0.0.0.0/0 udp spt:67 dpt:68
 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set IPv43a0d3610-8b38-43f2-8
 src
 0 0 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22  rule that allows
 ssh on port 22
 1 84 RETURN icmp -- * * 0.0.0.0/0 0.0.0.0/0
 0 0 neutron-openvswi-sg-fallback all -- * * 0.0.0.0/0 0.0.0.0/0 /* Send
 unmatched traffic to the fallback chain. */

 So, if we delete rule that allows tcp on port 22, then all connections that
 are already established won't be closed, because all packets would satisfy
 the rule:
 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED /* Direct
 packets associated with a known session to the RETURN chain. */

 I seek advice on the way how to deal with the problem. There are a couple of
 ideas how to do it (more or less realistic):

 * Kill the connection using conntrack

 The problem here is that it is sometimes impossible to tell which connection
 should be killed. For example there may be two instances running in
 different namespaces that have the same ip addresses. As a compute doesn't
 know anything about namespaces, it cannot distinguish between the two
 seemingly identical connections:
 $ sudo conntrack -L | grep 10.0.0.5
 tcp 6 431954 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60723 dport=22
 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60723 [ASSURED] mark=0 use=1
 tcp 6 431976 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60729 dport=22
 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60729 [ASSURED] mark=0 use=1

 I wonder whether there is any way to search for a connection by destination
 MAC?

 * Delete iptables rule that directs packets associated with a known session
 to the RETURN chain

 It will force all packets to go through the full chain each time and this
 will definitely make the connection close. But this will strongly affect the
 performance. Probably there may be created a timeout after which this rule
 will be restored, but it is uncertain how long should it be.

 Please share your thoughts on how it would be better to handle it.

 Thanks in advance,
 Elena

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [stable] Tool to aid in scalability problems mitigation.

2014-10-23 Thread Miguel Angel Ajo Pelayo


Recently, we have identified clients with problems due to the
bad scalability of security groups in Havana and Icehouse, which
was addressed during Juno here [1] [2].

This situation is identified by blinking agents (going UP/DOWN),
high AMQP load, high neutron-server load, and timeouts from openvswitch
agents when trying to call the neutron-server
security_group_rules_for_devices RPC.

Backporting [1] involves many dependent patches related
to the general RPC refactor in neutron (which modifies all plugins),
and subsequent ones fixing a few bugs. That sounds risky to me. [2] introduces
new features and depends on features which aren't available on
all systems.

To remediate this on production systems, I wrote a quick tool
to help report security groups and mitigate the problem
by writing almost-equivalent rules [3].

We believe this tool would be better available to the wider community,
under better review and testing, and, since it doesn't modify any behavior
or actual code in neutron, I'd like to propose it for inclusion into, at least,
the Icehouse stable branch, where it's most relevant.

I know the usual way is to go master -> Juno -> Icehouse, but at this moment
the tool is only interesting for Icehouse (and Havana), although I believe
it could be extended to clean up orphaned resources, or any other cleanup
tasks; in that case it could make sense to be available for K -> J -> I.
 
As a reference, I'm leaving links to outputs from the tool [4][5]
  
Looking forward to getting some feedback,
Miguel Ángel.


[1] https://review.openstack.org/#/c/111876/ security group rpc refactor
[2] https://review.openstack.org/#/c/111877/ ipset support
[3] https://github.com/mangelajo/neutrontool
[4] http://paste.openstack.org/show/123519/
[5] http://paste.openstack.org/show/123525/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread joehuang
Hi, Phil,

I am sorry that there is not enough information in the document for you to
understand the cascading. If we can talk f2f, I can explain in much more
detail. But in short, I will give a simplified picture of how a virtual
machine is booted:

The general process to boot a VM is like this:
Nova API -> Nova Scheduler -> Nova Compute (manager) -> Nova Compute (libvirt
driver) -> Nova Compute (KVM)

After OpenStack cascading is introduced, the process is a little different and
can be divided into two parts:
1. inside the cascading OpenStack: Nova API -> Nova Scheduler -> Nova Proxy ->
2. inside the cascaded OpenStack: Nova API -> Nova Scheduler -> Nova Compute
(manager) -> Nova Compute (libvirt driver) -> Nova Compute (KVM)

After scheduling to a Nova proxy, the instance object is persisted in the DB
in the cascading layer, and VM queries to the cloud are answered by the
cascading Nova API from the cascading layer DB. There is no need to touch the
cascaded Nova. (It's not a bad thing to persist the data in the cascading
layer: quota control, system healing and consistency correction, fast user
experience, etc.)

All VM generation in the cascaded OpenStack is no different from the general
VM boot process, and it is asynchronous from the cascading layer's point of
view.

How does the scheduler in the cascading layer select the proper Nova proxy?
The answer is that if the hosts of a cascaded Nova were added to AZ1 (AZ:
availability zone, in short), then the Nova proxy (which is a host in the
cascading layer) is also added to AZ1 in the cascading layer, and this Nova
proxy is configured to send all requests to the endpoint of the corresponding
cascaded Nova. The scheduler is configured to use the availability zone filter
only; we know every VM boot request has an AZ parameter, and that's the key
for scheduling in the cascading layer. Host aggregates could be handled in the
same way.

After the Nova proxy receives the RPC message from the Nova scheduler, it does
not boot a VM on the local host the way the libvirt driver would; it picks up
all the request parameters and calls the Python client to send a RESTful nova
boot request to the cascaded Nova.
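
To make that concrete, here is a purely illustrative sketch (hypothetical
class and method names, not the actual PoC code) of such a proxy forwarding
a boot request to the cascaded Nova with python-novaclient:

    from novaclient import client as nova_client

    class CascadingComputeProxy(object):
        # One proxy "host" in the cascading layer, bound to one cascaded Nova.

        def __init__(self, user, password, tenant, cascaded_auth_url):
            # Talk to the cascaded OpenStack through its public REST API.
            self.nova = nova_client.Client('2', user, password, tenant,
                                           cascaded_auth_url)

        def spawn(self, name, image_id, flavor_id, nics=None):
            # Instead of driving a local hypervisor, re-issue the boot as a
            # nova boot call against the cascaded Nova endpoint.
            return self.nova.servers.create(name, image_id, flavor_id,
                                            nics=nics)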

How is the flavor synchronized to the cascaded Nova? The flavor is
synchronized only if it does not exist in the cascaded Nova, or if it was
recently updated but has not yet been synchronized. Because the VM boot
request has already been answered after scheduling, everything done in the
nova-proxy is an asynchronous operation, just like a VM booting on a host: it
takes seconds to minutes on a typical host, but in the cascading case some API
calls are made by the nova-proxy to the cascaded Nova, or to the cascaded
Cinder & Neutron.

I wrote a few blogs to explain some things in detail, but I am too busy and
have not been able to write up everything we have done in the PoC. [1]

[1] blog about cascading:  http://www.linkedin.com/today/author/23841540

Best Regards

Chaoyi Huang


From: Day, Phil [philip@hp.com]
Sent: 23 October 2014 19:24
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hi,

 -Original Message-
 From: joehuang [mailto:joehu...@huawei.com]
 Sent: 23 October 2014 09:59
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
 cascading

 Hi,

 Because I am not able to find a meeting room to have deep diving OpenStack
 cascading before design summit. You are welcome to have a f2f conversation
 about the cascading before design summit. I planned to stay at Paris from
 Oct.30 to Nov.8, if you have any doubt or question, please feel free to
 contact me. All the conversation is for clarification / idea exchange purpose,
 not for any secret agreement purpose. It is necessary before design summit,
 for design summit session, it's only 40 minutes, if all 40 minutes are spent 
 on
 basic question and clarification, then no valuable conclusion can be drawn in
 the meeting. So I want to work as client-server mode, anyone who is
 interested in talking cascading with me, just tell me when he will come to the
 hotel where I stay at Paris, then a chat could be made to reduce
 misunderstanding, get more clear picture, and focus on what need to be
 discussed and consensuses during the design summit session.

Sure, I'll certainly try and find some time to meet and talk.


 It kind of feels to me that if we just concentrated on the part of this 
 that
 is working out how to distribute/federate Neutron then we'd have a solution
 that could be mapped as easily cells and/or regions - and I wonder if then
 why really need yet another aggregation concept ?

 My answer is that it seems to be feasible but can not meet the muti-site
 cloud demand (that's the drive force for cascading):
 1) large cloud operator ask multi-vendor to build the 

Re: [openstack-dev] [oslo] proposed summit session topics

2014-10-23 Thread Julien Danjou
On Wed, Oct 22 2014, Doug Hellmann wrote:

 2014-11-05 11:00  - Oslo graduation schedule 
 2014-11-05 11:50  - oslo.messaging 
 2014-11-05 13:50  - A Common Quota Management Library 
 2014-11-06 11:50  - taskflow 
 2014-11-06 13:40  - Using alpha versioning for Oslo libraries 
 2014-11-06 16:30  - Python 3 support in Oslo 
 2014-11-06 17:20  - Moving Oslo away from namespace packages 

 That should allow the QA and Infra teams to participate in the versioning and
 packaging discussions, Salvatore to be present for the quota library session
 (and lead it, I hope), and the eNovance guys who also work on ceilometer to be
 there for the Python 3 session.

 If you know you have a conflict with one of these times, let me know
 and I’ll see if we can juggle a little.

LGTM!

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread Adam Young

On 10/09/2014 03:36 PM, Duncan Thomas wrote:

On 9 October 2014 07:49, henry hly henry4...@gmail.com wrote:

Hi Joshua,

...in fact hierarchical scale
depends on square of single child scale. If a single child can deal
with 00's to 000's, cascading on it would then deal with 00,000's.

That is faulty logic - maybe the cascading solution needs to deal with
global quota and other aggregations that will rapidly break down your


There should not be Global quota in a cascading deployment.  If I own a 
cloud, I should manage my own Quota.


Keystone needs to be able to merge the authorization data across 
multiple OpenStack instances.  I have a spec proposal for this:


https://review.openstack.org/#/c/123782/

There are many issues to be resolved due to the organic growth nature 
of OpenStack deployments.  We see a recurring pattern where people need 
to span across multiple deployments, and not just for bursting.


Quota then becomes essential: it is the way of limiting what a user can 
do in one deployment, separate from what they could do in a different 
one.  The quotas really reflect the contract between the user and the 
deployment.




scaling factor, or maybe there are few such problems can the cascade
part can scale way better than the underlying part. They are two
totally different scaling cases, so and suggestion that they are
anything other than an unknown multiplier is bogus.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] making Daneyon Hansen core

2014-10-23 Thread Jeff Peeler

On 10/22/2014 11:04 AM, Steven Dake wrote:

A few weeks ago in IRC we discussed the criteria for joining the core
team in Kolla.  I believe Daneyon has met all of these requirements by
reviewing patches along with the rest of the core team and providing
valuable comments, as well as implementing neutron and helping get
nova-networking implementation rolling.

Please vote +1 or -1 if your kolla core.  Recall a -1 is a veto.  It
takes 3 votes.  This email counts as one vote ;)


definitely +1


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Flavio Percoco
On 10/23/2014 08:56 AM, Flavio Percoco wrote:
 On 10/22/2014 08:15 PM, Doug Hellmann wrote:
 The application projects are dropping python 2.6 support during Kilo, and 
 I’ve had several people ask recently about what this means for Oslo. Because 
 we create libraries that will be used by stable versions of projects that 
 still need to run on 2.6, we are going to need to maintain support for 2.6 
 in Oslo until Juno is no longer supported, at least for some of our 
 projects. After Juno’s support period ends we can look again at dropping 2.6 
 support in all of the projects.


 I think these rules cover all of the cases we have:

 1. Any Oslo library in use by an API client that is used by a supported 
 stable branch (Icehouse and Juno) needs to keep 2.6 support.

 2. If a client library needs a library we graduate from this point forward, 
 we will need to ensure that library supports 2.6.

 3. Any Oslo library used directly by a supported stable branch of an 
 application needs to keep 2.6 support.

 4. Any Oslo library graduated during Kilo can drop 2.6 support, unless one 
 of the previous rules applies.

 5. The stable/icehouse and stable/juno branches of the incubator need to 
 retain 2.6 support for as long as those versions are supported.

 6. The master branch of the incubator needs to retain 2.6 support until we 
 graduate all of the modules that will go into libraries used by clients.


 A few examples:

 - oslo.utils was graduated during Juno and is used by some of the client 
 libraries, so it needs to maintain python 2.6 support.

 - oslo.config was graduated several releases ago and is used directly by the 
 stable branches of the server projects, so it needs to maintain python 2.6 
 support.

 - oslo.log is being graduated in Kilo and is not yet in use by any projects, 
 so it does not need python 2.6 support.

 - oslo.cliutils and oslo.apiclient are on the list to graduate in Kilo, but 
 both are used by client projects, so they need to keep python 2.6 support. 
 At that point we can evaluate the code that remains in the incubator and see 
 if we’re ready to turn of 2.6 support there.


 Let me know if you have questions about any specific cases not listed in the 
 examples.
 
 The rules look ok to me but I'm a bit worried that we might miss
 something in the process due to all these rules being in place. Would it
 be simpler to just say we'll keep py2.6 support in oslo for Kilo and
 drop it in Igloo (or L?) ?
 
 Once Igloo development begins, Kilo will be stable (without py2.6
 support except for Oslo) and Juno will be in security maintenance (with
 py2.6 support).

OMFG, did I really say Igloo? I should really consider taking a break.
Anyway, just read Igloo as the L release.

Seriously, WTF?
Flavio

 
 I guess the TL;DR of what I'm proposing is to keep 2.6 support in oslo
 until we move the rest of the projects just to keep the process simpler.
 Probably longer but hopefully simpler.
 
 I'm sure I'm missing something so please, correct me here.
 Flavio
 
 


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Vadivel Poonathan
Kyle,
Gentle reminder... when you get a chance!..

Anne,
In case I need to send this to a different group or email-id to reach Kyle
Mestery, pls. let me know. Thanks for your help.

Regards,
Vad
--


On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan 
vadivel.openst...@gmail.com wrote:

 Hi Kyle,

 Can you pls. comment on this discussion and confirm the requirements for
 getting out-of-tree mechanism_driver listed in the supported plugin/driver
 list of the Openstack Neutron docs.

 Thanks,
 Vad
 --

 On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle a...@openstack.org wrote:



 On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan 
 vadivel.openst...@gmail.com wrote:

 Hi,







 On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton blak...@gmail.com wrote:
 I think you will probably have to wait until after the summit so we can
 see the direction that will be taken with the rest of the in-tree
 drivers/plugins. It seems like we are moving towards removing all of them
 so we would definitely need a solution to documenting out-of-tree drivers
 as you suggested.

 [Vad] while i 'm waiting for the conclusion on this subject, i 'm trying
 to setup the third-party CI/Test system and meet its requirements to get my
 mechanism_driver listed in the Kilo's documentation, in parallel.

 Couple of questions/confirmations before i proceed further on this
 direction...

 1) Is there anything more required other than the third-party CI/Test
 requirements ??.. like should I still need to go-through the entire
 development process of submit/review/approval of the blue-print and code of
 my ML2 driver which was already developed and in-use?...


 The neutron PTL Kyle Mestery can answer if there are any additional
 requirements.


 2) Who is the authority to clarify and confirm the above (and how do i
 contact them)?...


 Elections just completed, and the newly elected PTL is Kyle Mestery,
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html
 .



 Thanks again for your inputs...

 Regards,
 Vad
 --

 On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle a...@openstack.org wrote:



 On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan 
 vadivel.openst...@gmail.com wrote:

 Agreed on the requirements of test results to qualify the vendor
 plugin to be listed in the upstream docs.
 Is there any procedure/infrastructure currently available for this
 purpose?..
 Pls. fwd any link/pointers on those info.


 Here's a link to the third-party testing setup information.

 http://ci.openstack.org/third_party.html

 Feel free to keep asking questions as you dig deeper.
 Thanks,
 Anne


 Thanks,
 Vad
 --

 On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki amot...@gmail.com
 wrote:

 I agree with Kevin and Kyle. Even if we decided to use separate tree
 for neutron
 plugins and drivers, they still will be regarded as part of the
 upstream.
 These plugins/drivers need to prove they are well integrated with
 Neutron master
 in some way and gating integration proves it is well tested and
 integrated.
 I believe it is a reasonable assumption and requirement that a vendor
 plugin/driver
 is listed in the upstream docs. This is a same kind of question as
 what vendor plugins
 are tested and worth documented in the upstream docs.
 I hope you work with the neutron team and run the third party
 requirements.

 Thanks,
 Akihiro

 On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery mest...@mestery.com
 wrote:
  On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton blak...@gmail.com
 wrote:
 The OpenStack dev and docs team dont have to worry about
  gating/publishing/maintaining the vendor specific plugins/drivers.
 
  I disagree about the gating part. If a vendor wants to have a link
 that
  shows they are compatible with openstack, they should be reporting
 test
  results on all patches. A link to a vendor driver in the docs
 should signify
  some form of testing that the community is comfortable with.
 
  I agree with Kevin here. If you want to play upstream, in whatever
  form that takes by the end of Kilo, you have to work with the
 existing
  third-party requirements and team to take advantage of being a part
 of
  things like upstream docs.
 
  Thanks,
  Kyle
 
  On Mon, Oct 13, 2014 at 11:33 AM, Vadivel Poonathan
  vadivel.openst...@gmail.com wrote:
 
  Hi,
 
  If the plan is to move ALL existing vendor specific
 plugins/drivers
  out-of-tree, then having a place-holder within the OpenStack
 domain would
  suffice, where the vendors can list their plugins/drivers along
 with their
  documentation as how to install and use etc.
 
  The main Openstack Neutron documentation page can explain the
 plugin
  framework (ml2 type drivers, mechanism drivers, serviec plugin
 and so on)
  and its purpose/usage etc, then provide a link to refer the
 currently
  supported vendor specific plugins/drivers for more details.  That
 way the
  documentation will be accurate to what is in-tree and limit the
  documentation of external plugins/drivers to 

Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread John Griffith
On Thu, Oct 23, 2014 at 1:30 AM, Preston L. Bannister pres...@bannister.us
wrote:

 John,

 As a (new) OpenStack developer, I just discovered the
 CINDER_SECURE_DELETE option.

 As an *implicit* default, I entirely approve.  Production OpenStack
 installations should *absolutely* insure there is no information leakage
 from one instance to the next.

 As an *explicit* default, I am not so sure. Low-end storage requires you
 do this explicitly. High-end storage can insure information never leaks.
 Counting on high level storage can make the upper levels more efficient,
 can be a good thing.


Not entirely sure of the distinction intended as far as
implicit/explicit... but one other thing I should probably point out: this
ONLY applies to the LVM driver, maybe that's what you're getting at.  It would
probably be better to advertise it as an LVM driver option (easy enough to do
in the config options help message).

Anyway, I just wanted to point to some of the options like using io-nice,
clear-size, blkio cgroups, bps_limit..
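
For reference, the knobs involved live in cinder.conf for the LVM backend;
from memory (double-check cinder.conf.sample for the authoritative names and
defaults), the relevant options look roughly like:

    [DEFAULT]
    # 'zero', 'shred' or 'none' -- what to do to old volumes on delete
    volume_clear = zero
    # MiB to wipe at the start of old volumes; 0 means the whole volume
    volume_clear_size = 0
    # ionice arguments applied to the wiping process, e.g. idle-only priority
    volume_clear_ionice = -c3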

It doesn't suck as bad as you might have thought or some of the other
respondents on this thread seem to think.  There's certainly room for
improvement and growth but it hasn't been completely ignored on the Cinder
side.



 The debate about whether to wipe LV's pretty much massively depends on the
 intelligence of the underlying store. If the lower level storage never
 returns accidental information ... explicit zeroes are not needed.



 On Wed, Oct 22, 2014 at 11:15 PM, John Griffith john.griffi...@gmail.com
 wrote:



 On Tue, Oct 21, 2014 at 9:17 AM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 For LVM-thin I believe it is already disabled? It is only really
 needed on LVM-thick, where the returning zeros behaviour is not done.

 On 21 October 2014 08:29, Avishay Traeger avis...@stratoscale.com
 wrote:
  I would say that wipe-on-delete is not necessary in most deployments.
 
  Most storage backends exhibit the following behavior:
  1. Delete volume A that has data on physical sectors 1-10
  2. Create new volume B
  3. Read from volume B before writing, which happens to map to physical
  sector 5 - backend should return zeroes here, and not data from volume
 A
 
  In case the backend doesn't provide this rather standard behavior,
 data must
  be wiped immediately.  Otherwise, the only risk is physical security,
 and if
  that's not adequate, customers shouldn't be storing all their data
 there
  regardless.  You could also run a periodic job to wipe deleted volumes
 to
  reduce the window of vulnerability, without making delete_volume take a
  ridiculously long time.
 
  Encryption is a good option as well, and of course it protects the data
  before deletion as well (as long as your keys are protected...)
 
  Bottom line - I too think the default in devstack should be to disable
 this
  option, and think we should consider making the default False in Cinder
  itself.  This isn't the first time someone has asked why volume
 deletion
  takes 20 minutes...
 
  As for queuing backup operations and managing bandwidth for various
  operations, ideally this would be done with a holistic view, so that
 for
  example Cinder operations won't interfere with Nova, or different Nova
  operations won't interfere with each other, but that is probably far
 down
  the road.
 
  Thanks,
  Avishay
 
 
  On Tue, Oct 21, 2014 at 9:16 AM, Chris Friesen 
 chris.frie...@windriver.com
  wrote:
 
  On 10/19/2014 09:33 AM, Avishay Traeger wrote:
 
  Hi Preston,
  Replies to some of your cinder-related questions:
  1. Creating a snapshot isn't usually an I/O intensive operation.  Are
  you seeing I/O spike or CPU?  If you're seeing CPU load, I've seen
 the
  CPU usage of cinder-api spike sometimes - not sure why.
  2. The 'dd' processes that you see are Cinder wiping the volumes
 during
  deletion.  You can either disable this in cinder.conf, or you can
 use a
  relatively new option to manage the bandwidth used for this.
 
  IMHO, deployments should be optimized to not do very long/intensive
  management operations - for example, use backends with efficient
  snapshots, use CoW operations wherever possible rather than copying
 full
  volumes/images, disabling wipe on delete, etc.
 
 
  In a public-cloud environment I don't think it's reasonable to disable
  wipe-on-delete.
 
  Arguably it would be better to use encryption instead of
 wipe-on-delete.
  When done with the backing store, just throw away the key and it'll be
  secure enough for most purposes.
 
  Chris
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Duncan Thomas

 

Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread John Griffith
On Thu, Oct 23, 2014 at 8:50 AM, John Griffith john.griffi...@gmail.com
wrote:



 On Thu, Oct 23, 2014 at 1:30 AM, Preston L. Bannister 
 pres...@bannister.us wrote:

 John,

 As a (new) OpenStack developer, I just discovered the
 CINDER_SECURE_DELETE option.


OHHH... Most importantly, I almost forgot.  Welcome!!!


 As an *implicit* default, I entirely approve.  Production OpenStack
 installations should *absolutely* insure there is no information leakage
 from one instance to the next.

 As an *explicit* default, I am not so sure. Low-end storage requires you
 do this explicitly. High-end storage can insure information never leaks.
 Counting on high level storage can make the upper levels more efficient,
 can be a good thing.


 Not entirely sure of the distinction intended as far as
 implicit/explicit... but one other thing I should probably point out; this
 ONLY applies to the LVM driver, maybe that's what you're getting at.  Would
 be better probably to advertise as an LVM Driver option (easy enough to do
 in the config options help message).

 Anyway, I just wanted to point to some of the options like using io-nice,
 clear-size, blkio cgroups, bps_limit..

 It doesn't suck as bad as you might have thought or some of the other
 respondents on this thread seem to think.  There's certainly room for
 improvement and growth but it hasn't been completely ignored on the Cinder
 side.



 The debate about whether to wipe LV's pretty much massively depends on
 the intelligence of the underlying store. If the lower level storage never
 returns accidental information ... explicit zeroes are not needed.



 On Wed, Oct 22, 2014 at 11:15 PM, John Griffith john.griffi...@gmail.com
  wrote:



 On Tue, Oct 21, 2014 at 9:17 AM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 For LVM-thin I believe it is already disabled? It is only really
 needed on LVM-thick, where the returning zeros behaviour is not done.

 On 21 October 2014 08:29, Avishay Traeger avis...@stratoscale.com
 wrote:
  I would say that wipe-on-delete is not necessary in most deployments.
 
  Most storage backends exhibit the following behavior:
  1. Delete volume A that has data on physical sectors 1-10
  2. Create new volume B
  3. Read from volume B before writing, which happens to map to physical
  sector 5 - backend should return zeroes here, and not data from
 volume A
 
  In case the backend doesn't provide this rather standard behavior,
 data must
  be wiped immediately.  Otherwise, the only risk is physical security,
 and if
  that's not adequate, customers shouldn't be storing all their data
 there
  regardless.  You could also run a periodic job to wipe deleted
 volumes to
  reduce the window of vulnerability, without making delete_volume take
 a
  ridiculously long time.
 
  Encryption is a good option as well, and of course it protects the
 data
  before deletion as well (as long as your keys are protected...)
 
  Bottom line - I too think the default in devstack should be to
 disable this
  option, and think we should consider making the default False in
 Cinder
  itself.  This isn't the first time someone has asked why volume
 deletion
  takes 20 minutes...
 
  As for queuing backup operations and managing bandwidth for various
  operations, ideally this would be done with a holistic view, so that
 for
  example Cinder operations won't interfere with Nova, or different Nova
  operations won't interfere with each other, but that is probably far
 down
  the road.
 
  Thanks,
  Avishay
 
 
  On Tue, Oct 21, 2014 at 9:16 AM, Chris Friesen 
 chris.frie...@windriver.com
  wrote:
 
  On 10/19/2014 09:33 AM, Avishay Traeger wrote:
 
  Hi Preston,
  Replies to some of your cinder-related questions:
  1. Creating a snapshot isn't usually an I/O intensive operation.
 Are
  you seeing I/O spike or CPU?  If you're seeing CPU load, I've seen
 the
  CPU usage of cinder-api spike sometimes - not sure why.
  2. The 'dd' processes that you see are Cinder wiping the volumes
 during
  deletion.  You can either disable this in cinder.conf, or you can
 use a
  relatively new option to manage the bandwidth used for this.
 
  IMHO, deployments should be optimized to not do very long/intensive
  management operations - for example, use backends with efficient
  snapshots, use CoW operations wherever possible rather than copying
 full
  volumes/images, disabling wipe on delete, etc.
 
 
  In a public-cloud environment I don't think it's reasonable to
 disable
  wipe-on-delete.
 
  Arguably it would be better to use encryption instead of
 wipe-on-delete.
  When done with the backing store, just throw away the key and it'll
 be
  secure enough for most purposes.
 
  Chris
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  

[openstack-dev] [Fuel] Fuel standards

2014-10-23 Thread Vladimir Kozhukalov
All,

Recently we launched a couple of new Fuel-related projects
(fuel_plugin_builder, fuel_agent, fuel_upgrade, etc.). Those projects are
written in Python and they use different approaches to organizing CLI,
configuration, third-party libraries, etc. Besides, we have some
old Fuel projects which are also not standardized.

The idea is to have a set of standards for all Fuel-related projects covering
architecture in general, third-party libraries, API, user interface,
documentation, etc. When I take a look at any OpenStack project I usually
know a priori how the project's code is organized. For example, the CLI is
likely based on the python cliff library, configuration is based on
oslo.config, the database layer is based on oslo.db, and so on.

Let's do the same for Fuel. Frankly, I'd say we could take the OpenStack
standards as-is and use them for Fuel. But maybe there are other opinions.
Let's discuss this and decide what to do. Do we actually need such
standards at all?

Just to keep the scope narrow let's consider fuelclient project as an
example. If we decide something about it, we can then try to spread those
decisions on other Fuel related projects.

0) Standard for project naming.
Currently most Fuel projects are named like fuel-whatever or even just
whatever. Is that OK? Or maybe we need some formal rules for naming. For
example, all OpenStack clients are named python-someclient. Do we need to
rename fuelclient to python-fuelclient?

1) Standard for architecture.
Most OpenStack services are split into several independent parts
(roughly service-api, service-engine, python-serviceclient) and those parts
interact with each other via REST and AMQP. python-serviceclient is usually
located in a separate repository. Do we actually need to do the same for
Fuel? For fuelclient this means it should be moved into a separate
repository. Fortunately, it already uses the REST API for interacting with
nailgun. But it should be possible to use it not only as a CLI tool, but
also as a library.

2) Standard for project directory structure (directory names for api, db
models,  drivers, cli related code, plugins, common code, etc.)
Do we actually need to standardize a directory structure?

3) Standard for third-party libraries
Since Fuel is a deployment tool for OpenStack, let's make a decision
about using OpenStack components wherever possible.
3.1) oslo.config for configuration (a short illustration follows below)
3.2) oslo.db for the database layer
3.3) oslo.messaging for the AMQP layer
3.4) cliff for CLI (should we refactor fuelclient so as to base it on
cliff?)
3.5) oslo.log for logging
3.6) stevedore for plugins
etc.
What about third-party components which are not OpenStack related? What
could be the requirements for an arbitrary PyPI package?
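
As an illustration of 3.1, a minimal oslo.config-based configuration module
could look like the sketch below (illustrative only; the option names are
made up and are not real fuelclient settings):

    import sys

    from oslo.config import cfg

    # Hypothetical options, just to show the pattern.
    opts = [
        cfg.StrOpt('server_address', default='127.0.0.1',
                   help='Address of the Nailgun API server.'),
        cfg.IntOpt('server_port', default=8000,
                   help='Port of the Nailgun API server.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(opts, group='api')

    if __name__ == '__main__':
        # Picks up --config-file / --config-dir plus any CLI overrides.
        CONF(sys.argv[1:], project='fuel')
        print('%s:%s' % (CONF.api.server_address, CONF.api.server_port))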

4) Standard for testing.
It requires a separate discussion.

5) Standard for documentation.
It requires a separate discussion.


Vladimir Kozhukalov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [oslo.db] model_query() future and neutron specifics

2014-10-23 Thread Kyle Mestery
On Mon, Oct 20, 2014 at 2:44 PM, Mike Bayer mba...@redhat.com wrote:
 As I’ve established oslo.db blueprints which will roll out new SQLAlchemy 
 connectivity patterns for consuming applications within both API [1] and 
 tests [2], one of the next big areas I’m to focus on is that of querying.   
 If one looks at how SQLAlchemy ORM queries are composed across Openstack, the 
 most prominent feature one finds is the prevalent use of the model_query() 
 initiation function.This is a function that is implemented in a specific 
 way for each consuming application; its purpose is to act as a factory for 
 new Query objects, starting from the point of acquiring a Session, starting 
 up the Query against a selected model, and then augmenting that Query right 
 off with criteria derived from the given application context, typically 
 oriented around the widespread use of so-called “soft-delete” columns, as 
 well as a few other fixed criteria.

 There’s a few issues with model_query() that I will be looking to solve, 
 starting with the proposal of a new blueprint.   Key issues include that it 
 will need some changes to interact with my new connectivity specification, it 
 may need a big change in how it is invoked in order to work with some new 
 querying features I also plan on proposing at some point (see 
 https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Baked_Queries), and 
 also it’s current form in some cases tends to slightly discourage the 
 construction of appropriate queries.

 In order to propose a new system for model_query(), I have to do a survey of 
 how this function is implemented and used across projects.  Which is why we 
 find me talking about Neutron today - Neutron’s model_query() system is a 
 much more significant construct compared to that of all other projects.   It 
 is interesting because it makes clear some use cases that SQLAlchemy may very 
 well be able to help with.  It also seems to me that in its current form it 
 leads to SQL queries that are poorly formed - as I see this, on one hand we 
 can blame the structure of neutron’s model_query() for how this occurs, but 
 on the other, we can blame SQLAlchemy for not providing more tools oriented 
 towards what Neutron is trying to do.   The use case Neutron has here is very 
 common throughout many Python applications, but as yet I’ve not had the 
 opportunity to address this kind of pattern in a comprehensive way.

 I first sketched out my concerns on a Neutron issue 
 https://bugs.launchpad.net/neutron/+bug/1380823, however I was encouraged to 
 move it over to the mailing list.

 Specifically with Neutron’s model_query(), we're talking here about the 
 plugin architecture in neutron/db/common_db_mixin.py, where the 
 register_model_query_hook() method presents a way of applying modifiers to 
 queries. This system appears to be used by: db/external_net_db.py, 
 plugins/ml2/plugin.py, db/portbindings_db.py, 
 plugins/metaplugin/meta_neutron_plugin.py.

 What the use of the hook has in common in these cases is that a LEFT OUTER 
 JOIN is applied to the Query early on, in anticipation of either the 
 filter_hook or result_filters being applied to the query, but only 
 *possibly*, and then even within those hooks as supplied, again only 
 *possibly*. It's these two *possiblies* that leads to the use of LEFT OUTER 
 JOIN - this extra table is present in the query's FROM clause, but if we 
 decide we don't need to filter on it, the idea is that it's just a left outer 
 join, which will not change the primary result if not added to what’s being 
 filtered. And even, in the case of external_net_db.py, maybe we even add a 
 criteria WHERE extra model id IS NULL, that is doing a not contains off 
 of this left outer join.

 The result is that we can get a query like this:

 SELECT a.* FROM a LEFT OUTER JOIN b ON a.id=b.aid WHERE b.id IS NOT NULL

 this can happen for example if using External_net_db_mixin, the outerjoin to 
 ExternalNetwork is created, _network_filter_hook applies 
 expr.or_(ExternalNetwork.network_id != expr.null()), and that's it.

 The database will usually have a much easier time if this query is expressed 
 correctly [3]:

SELECT a.* FROM a INNER JOIN b ON a.id=b.aid

 the reason this bugs me is because the SQL output is being compromised as a 
 result of how the plugin system is organized. Preferable would be a system 
 where the plugins are either organized into fewer functions that perform all 
 the checking at once, or if the plugin system had more granularity to know 
 that it needs to apply an optional JOIN or not.   My thoughts for new 
 SQLAlchemy/oslo.db features are being driven largely by Neutron’s use case 
 here.
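
 To make the difference concrete, here is a rough SQLAlchemy sketch of the two 
 shapes (the table and model names are purely illustrative, not Neutron's):

     from sqlalchemy import Column, ForeignKey, Integer, create_engine
     from sqlalchemy.ext.declarative import declarative_base
     from sqlalchemy.orm import Session

     Base = declarative_base()

     class A(Base):
         __tablename__ = 'a'
         id = Column(Integer, primary_key=True)

     class B(Base):
         __tablename__ = 'b'
         id = Column(Integer, primary_key=True)
         aid = Column(Integer, ForeignKey('a.id'))

     engine = create_engine('sqlite://')
     Base.metadata.create_all(engine)
     session = Session(bind=engine)

     # what the hook-based composition tends to produce: an outer join added
     # "just in case", then a filter that effectively makes it an inner join
     q1 = session.query(A).outerjoin(B, B.aid == A.id).filter(B.id.isnot(None))

     # the equivalent query the database would rather receive
     q2 = session.query(A).join(B, B.aid == A.id)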

 Towards my goal of proposing a better system of model_query(), along with 
 Neutron’s heavy use of generically added criteria, I’ve put some thoughts 
 down on a new SQLAlchemy feature which would also be backported to oslo.db. 
 The initial sketch is at 
 

Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Kyle Mestery
Vad:

The third-party CI is required for your upstream driver. I think
what's different from my reading of this thread is the question of
what is the requirement to have a driver listed in the upstream
documentation which is not in the upstream codebase. To my knowledge,
we haven't done this. Thus, IMHO, we should NOT be utilizing upstream
documentation to document drivers which are themselves not upstream.
When we split out the drivers which are currently upstream in neutron
into a separate repo, they will still be upstream. So my opinion here
is that if your driver is not upstream, it shouldn't be in the
upstream documentation. But I'd like to hear others' opinions as well.

Thanks,
Kyle

On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan
vadivel.openst...@gmail.com wrote:
 Kyle,
 Gentle reminder... when you get a chance!..

 Anne,
 In case, if i need to send it to different group or email-id to reach Kyle
 Mestery, pls. let me know. Thanks for your help.

 Regards,
 Vad
 --


 On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan
 vadivel.openst...@gmail.com wrote:

 Hi Kyle,

 Can you pls. comment on this discussion and confirm the requirements for
 getting out-of-tree mechanism_driver listed in the supported plugin/driver
 list of the Openstack Neutron docs.

 Thanks,
 Vad
 --

 On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle a...@openstack.org wrote:



 On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan
 vadivel.openst...@gmail.com wrote:

 Hi,

  On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton blak...@gmail.com
  wrote:
 
  I think you will probably have to wait until after the summit so
  we can
  see the direction that will be taken with the rest of the in-tree
  drivers/plugins. It seems like we are moving towards removing all
  of them so
  we would definitely need a solution to documenting out-of-tree
  drivers as
  you suggested.

 [Vad] while i 'm waiting for the conclusion on this subject, i 'm trying
 to setup the third-party CI/Test system and meet its requirements to get my
 mechanism_driver listed in the Kilo's documentation, in parallel.

 Couple of questions/confirmations before i proceed further on this
 direction...

 1) Is there anything more required other than the third-party CI/Test
 requirements ??.. like should I still need to go-through the entire
 development process of submit/review/approval of the blue-print and code of
 my ML2 driver which was already developed and in-use?...


 The neutron PTL Kyle Mestery can answer if there are any additional
 requirements.


 2) Who is the authority to clarify and confirm the above (and how do i
 contact them)?...


 Elections just completed, and the newly elected PTL is Kyle Mestery,
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html.



 Thanks again for your inputs...

 Regards,
 Vad
 --

 On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle a...@openstack.org wrote:



 On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan
 vadivel.openst...@gmail.com wrote:

 Agreed on the requirements of test results to qualify the vendor
 plugin to be listed in the upstream docs.
 Is there any procedure/infrastructure currently available for this
 purpose?..
 Pls. fwd any link/pointers on those info.


 Here's a link to the third-party testing setup information.

 http://ci.openstack.org/third_party.html

 Feel free to keep asking questions as you dig deeper.
 Thanks,
 Anne


 Thanks,
 Vad
 --

 On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki amot...@gmail.com
 wrote:

 I agree with Kevin and Kyle. Even if we decided to use separate tree
 for neutron
 plugins and drivers, they still will be regarded as part of the
 upstream.
 These plugins/drivers need to prove they are well integrated with
 Neutron master
 in some way and gating integration proves it is well tested and
 integrated.
 I believe it is a reasonable assumption and requirement that a vendor
 plugin/driver
 is listed in the upstream docs. This is a same kind of question as
 what vendor plugins
 are tested and worth documented in the upstream docs.
 I hope you work with the neutron team and run the third party
 requirements.

 Thanks,
 Akihiro

 On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery mest...@mestery.com
 wrote:
  On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton blak...@gmail.com
  wrote:
 The OpenStack dev and docs team dont have to worry about
  gating/publishing/maintaining the vendor specific
  plugins/drivers.
 
  I disagree about the gating part. If a vendor wants to have a link
  that
  shows they are compatible with openstack, they should be reporting
  test
  results on all patches. A link to a vendor driver in the docs
  should signify
  some form of testing that the community is comfortable with.
 
  I agree with Kevin here. If you want to play upstream, in whatever
  form that takes by the end of Kilo, you have to work with the
  existing
  third-party requirements and team to take advantage of being a part
  of
  things like upstream docs.
 
  Thanks,
  Kyle
 
 

Re: [openstack-dev] [Fuel] Pluggable framework in Fuel: first prototype ready

2014-10-23 Thread Evgeniy L
Hi Mike,

I would like to add a bit more details about current implementation and how
it can be done.

*Implement installation as a scripts inside of tar ball:*
Cons:
* install script is really simple right now, but it will be much more
complicated
** it requires to implement logic where we can ask user for login/password
** use some config, where we will be able to get endpoints, like where is
keystone, nailgun
** validate that it's possible to install plugin on the current version of
master
** handle error cases (to make installation process more atomic)
* it will be impossible to deprecate the installation logic/method, because
it's on the plugin's side
  and you cannot change a plugin which the user downloaded some time ago; when
we get a
  plugin manager, we would probably want the user to use the plugin manager
instead of some scripts
* the plugin installation process is not as simple as it could be (untar, cd
plugin, ./install)

Pros:
* the plugin developer can change the installation scripts (I'm not sure if it's a
pro)

*Add installation to fuel client:*
Cons:
* requires changes in fuel client, which are not good for fuel client by
design (fuel client
  should be able to work remotely from the user's machine); the current
implementation requires
  local operations on files. This will change in future releases, so
fuel-client will
  be able to do it via the API; also, we can check /etc/fuel/version.yaml to determine
whether we are on the master node
  and, if not, show the user a message which says that in the current version it's
not possible
  to install the plugin remotely (see the sketch below)
* the plugin developer won't be able to change the installation process (I'm not
sure if it's a con)

Pros:
* it's easier for user to install the plugin `fuel --install-plugin
plugin_name-1.0.1.fpb'
* all of the authentication logic already implemented in fuel client
* fuel client uses config with endpoints which is generated by puppet
* it will be easier to deprecate the previous installation approach; we can
just install a new
  fuel client on the master which uses the API
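
As a rough sketch of the master-node check mentioned in the cons above (the
helper name and message are mine, not the real fuelclient code), the client
could simply refuse remote installation for now:

    import os
    import sys

    import yaml

    VERSION_FILE = '/etc/fuel/version.yaml'  # present only on the master node

    def ensure_master_node():
        """Exit early if fuel client is not running on the Fuel master."""
        if not os.path.exists(VERSION_FILE):
            sys.exit('In the current version plugins can only be installed '
                     'by running fuel client on the master node.')
        with open(VERSION_FILE) as f:
            return yaml.safe_load(f)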

Personally I like the second approach, and I think we should try to
implement it,
when we get time.

Thanks,

On Thu, Oct 23, 2014 at 3:02 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:


1. I feel like we should not require the user to unpack the plugin before
installing it. Moreover, we may choose to distribute plugins in our own
format, which we may potentially change later. E.g. lbaas-v2.0.fp. I'd
rather stick with two actions:


- Assembly (externally): fpb --build name


- Installation (on master node): fuel --install-plugin name

  I like the idea of putting plugin installation functionality in fuel client,
 which is installed
 on master node.
 But in the current version plugin installation requires files operations
 on the master,
 as result we can have problems if user's fuel-client is installed on
 another env.


 I suggest keeping it simple for now, as we have the issue mentioned by
 Evgeny: fuel client is supposed to work from other nodes, and we will need
 additional verification code in there. Also, to make it smooth, we will
 have to end up with a few more checks - like what if the tarball is broken,
 what if we can't find the install script in it, etc.
 I'd suggest we keep it simple for 6.0, and then we will see how it's being
 used and what other limitations / issues we have around plugin installation
 and usage. We can consider making this functionality part of fuel
 client a bit later.

 Thanks,

 On Tue, Oct 21, 2014 at 6:57 PM, Vitaly Kramskikh vkramsk...@mirantis.com
  wrote:

 Hi,

 As for a separate section for plugins, I think we should not force it and
 leave this decision to a plugin developer, so he can create just a single
 checkbox or a section of the settings tab or a separate tab depending on
 plugin functionality. Plugins should be able to modify arbitrary release
 fields. For example, if Ceph was a plugin, it should be able to extend
 wizard config to add new options to Storage pane. If vCenter was a plugin,
 it should be able to set maximum amount of Compute nodes to 0.

 2014-10-20 21:21 GMT+07:00 Evgeniy L e...@mirantis.com:

 Hi guys,

 *Romans' questions:*

  I feel like we should not require user to unpack the plugin before
 installing it.
  Moreover, we may chose to distribute plugins in our own format, which
 we
  may potentially change later. E.g. lbaas-v2.0.fp.

 I like the idea of putting plugin installation functionality in fuel
 client, which is installed
 on master node.
 But in the current version plugin installation requires files operations
 on the master,
 as result we can have problems if user's fuel-client is installed on
 another env.
 What we can do is to try to determine where fuel-client is installed, if
 it's master
 node, we can perform installation, if it isn't master node, we can show
 user the
 message, that in the current version remote plugin installation is not
 supported.
 In the next versions if we implement plugin manager (which is 

[openstack-dev] [Nova] questions on object/db usage

2014-10-23 Thread Chen CH Ji

Hi
   When I fix some bugs, I find that in some code in
nova/compute/api.py

  sometimes we use the db and sometimes we use objects. Do we have any
criteria for it? I know we can't access the db in compute layer code, but how about
the other layers? Should we prefer objects or direct db access? Thanks

    def service_delete(self, context, service_id):
        """Deletes the specified service."""
        objects.Service.get_by_id(context, service_id).destroy()

    def instance_get_all_by_host(self, context, host_name):
        """Return all instances on the given host."""
        return self.db.instance_get_all_by_host(context, host_name)

    def compute_node_get_all(self, context):
        return self.db.compute_node_get_all(context)

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] kilo design session

2014-10-23 Thread Tim Hinrichs
Works for me. 

Tim

P. S. Pardon the brevity. Sent from my mobile. 

 On Oct 22, 2014, at 5:01 PM, Sean Roberts seanrobert...@gmail.com wrote:
 
 We are scheduled for Monday, 03 Nov, 14:30 - 16:00. I have a conflict with 
 the “Meet the Influencers” talk that runs from 14:30-18:30, plus the GBP 
 session is on Tuesday, 04 Nov, 12:05-12:45. I was thinking we would want to 
 co-locate the Congress and GBP talks as much as possible.
 
 The BOSH team has the Tuesday, 04 Nov, 16:40-18:10 slot and wants to switch. 
 
 Does this switch work for everyone?
 
 Maybe we can get some space in one of the pods or cross-project workshops on 
 Tuesday between the GBP and the potential Congress session to make it even 
 better.
 
 ~sean
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Fuel standards

2014-10-23 Thread Anton Zemlyanov
I have another example: nailgun and the UI are bundled in FuelWeb while being quite
independent components. Nailgun is a Python REST API, while the UI is HTML/CSS/JS
+ libs. I also support the idea of making the CLI a separate project; it is
similar to the FuelWeb UI in that it uses the same REST API. A fuelclient lib is also a
good idea; the REPL can be separated from the command execution logic.

Multiple simple components are usually easier to maintain, bigger
components tend to become complex and tightly coupled.

I also fully support standards of naming files and directories, although it
relates to Python stuff mostly.

Anton Zemlyanov


 1) Standard for an architecture.
 Most of OpenStack services are split into several independent parts
 (roughly service-api, service-engine, python-serviceclient) and those parts
 interact with each other via REST and AMQP. python-serviceclient is usually
 located in a separate repository. Do we actually need to do the same for
 Fuel? According to fuelclient it means it should be moved into a separate
 repository. Fortunately, it already uses REST API for interacting with
 nailgun. But it should be possible to use it not only as a CLI tool, but
 also as a library.

 2) Standard for project directory structure (directory names for api, db
 models,  drivers, cli related code, plugins, common code, etc.)
 Do we actually need to standardize a directory structure?

 3) Standard for third party libraries
 As far as Fuel is a deployment tool for OpenStack, let's make a decision
 about using OpenStack components wherever it is possible.
 3.1) oslo.config for configuring.
 3.2) oslo.db for database layer
 3.3) oslo.messaging for AMQP layer
 3.4) cliff for CLI (should we refactor fuelclient so as to make it based on
 cliff?)
 3.5) oslo.log for logging
 3.6) stevedore for plugins
 etc.
 What about third party components which are not OpenStack related? What
 could be the requirements for an arbitrary PyPi package?

 4) Standard for testing.
 It requires a separate discussion.

 5) Standard for documentation.
 It requires a separate discussion.


 Vladimir Kozhukalov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Anne Gentle
On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery mest...@mestery.com wrote:

 Vad:

 The third-party CI is required for your upstream driver. I think
 what's different from my reading of this thread is the question of
 what is the requirement to have a driver listed in the upstream
 documentation which is not in the upstream codebase. To my knowledge,
 we haven't done this. Thus, IMHO, we should NOT be utilizing upstream
 documentation to document drivers which are themselves not upstream.
 When we split out the drivers which are currently upstream in neutron
 into a separate repo, they will still be upstream. So my opinion here
 is that if your driver is not upstream, it shouldn't be in the
 upstream documentation. But I'd like to hear others opinions as well.


This is my sense as well.

The hypervisor drivers are documented on the wiki, sometimes they're
in-tree, sometimes they're not, but the state of testing is documented on
the wiki. I think we could take this approach for network and storage
drivers as well.

https://wiki.openstack.org/wiki/HypervisorSupportMatrix

Anne


 Thanks,
 Kyle

 On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan
 vadivel.openst...@gmail.com wrote:
  Kyle,
  Gentle reminder... when you get a chance!..
 
  Anne,
  In case, if i need to send it to different group or email-id to reach
 Kyle
  Mestery, pls. let me know. Thanks for your help.
 
  Regards,
  Vad
  --
 
 
  On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan
  vadivel.openst...@gmail.com wrote:
 
  Hi Kyle,
 
  Can you pls. comment on this discussion and confirm the requirements for
  getting out-of-tree mechanism_driver listed in the supported
 plugin/driver
  list of the Openstack Neutron docs.
 
  Thanks,
  Vad
  --
 
  On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle a...@openstack.org
 wrote:
 
 
 
  On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan
  vadivel.openst...@gmail.com wrote:
 
  Hi,
 
   On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton blak...@gmail.com
 
   wrote:
  
   I think you will probably have to wait until after the summit so
   we can
   see the direction that will be taken with the rest of the
 in-tree
   drivers/plugins. It seems like we are moving towards removing
 all
   of them so
   we would definitely need a solution to documenting out-of-tree
   drivers as
   you suggested.
 
  [Vad] while i 'm waiting for the conclusion on this subject, i 'm
 trying
  to setup the third-party CI/Test system and meet its requirements to
 get my
  mechanism_driver listed in the Kilo's documentation, in parallel.
 
  Couple of questions/confirmations before i proceed further on this
  direction...
 
  1) Is there anything more required other than the third-party CI/Test
  requirements ??.. like should I still need to go-through the entire
  development process of submit/review/approval of the blue-print and
 code of
  my ML2 driver which was already developed and in-use?...
 
 
  The neutron PTL Kyle Mestery can answer if there are any additional
  requirements.
 
 
  2) Who is the authority to clarify and confirm the above (and how do i
  contact them)?...
 
 
  Elections just completed, and the newly elected PTL is Kyle Mestery,
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html.
 
 
 
  Thanks again for your inputs...
 
  Regards,
  Vad
  --
 
  On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle a...@openstack.org
 wrote:
 
 
 
  On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan
  vadivel.openst...@gmail.com wrote:
 
  Agreed on the requirements of test results to qualify the vendor
  plugin to be listed in the upstream docs.
  Is there any procedure/infrastructure currently available for this
  purpose?..
  Pls. fwd any link/pointers on those info.
 
 
  Here's a link to the third-party testing setup information.
 
  http://ci.openstack.org/third_party.html
 
  Feel free to keep asking questions as you dig deeper.
  Thanks,
  Anne
 
 
  Thanks,
  Vad
  --
 
  On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki amot...@gmail.com
 
  wrote:
 
  I agree with Kevin and Kyle. Even if we decided to use separate
 tree
  for neutron
  plugins and drivers, they still will be regarded as part of the
  upstream.
  These plugins/drivers need to prove they are well integrated with
  Neutron master
  in some way and gating integration proves it is well tested and
  integrated.
  I believe it is a reasonable assumption and requirement that a
 vendor
  plugin/driver
  is listed in the upstream docs. This is a same kind of question as
  what vendor plugins
  are tested and worth documented in the upstream docs.
  I hope you work with the neutron team and run the third party
  requirements.
 
  Thanks,
  Akihiro
 
  On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery 
 mest...@mestery.com
  wrote:
   On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton blak...@gmail.com
 
   wrote:
  The OpenStack dev and docs team dont have to worry about
   gating/publishing/maintaining the vendor specific
   plugins/drivers.
  

Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Andrey Kurilin
Just a joke: Can we drop supporting Python 2.6, when several projects still
have hooks for Python 2.4?

https://github.com/openstack/python-novaclient/blob/master/novaclient/exceptions.py#L195-L203
https://github.com/openstack/python-cinderclient/blob/master/cinderclient/exceptions.py#L147-L155

On Wed, Oct 22, 2014 at 9:15 PM, Doug Hellmann d...@doughellmann.com
wrote:

 The application projects are dropping python 2.6 support during Kilo, and
 I’ve had several people ask recently about what this means for Oslo.
 Because we create libraries that will be used by stable versions of
 projects that still need to run on 2.6, we are going to need to maintain
 support for 2.6 in Oslo until Juno is no longer supported, at least for
 some of our projects. After Juno’s support period ends we can look again at
 dropping 2.6 support in all of the projects.


 I think these rules cover all of the cases we have:

 1. Any Oslo library in use by an API client that is used by a supported
 stable branch (Icehouse and Juno) needs to keep 2.6 support.

 2. If a client library needs a library we graduate from this point
 forward, we will need to ensure that library supports 2.6.

 3. Any Oslo library used directly by a supported stable branch of an
 application needs to keep 2.6 support.

 4. Any Oslo library graduated during Kilo can drop 2.6 support, unless one
 of the previous rules applies.

 5. The stable/icehouse and stable/juno branches of the incubator need to
 retain 2.6 support for as long as those versions are supported.

 6. The master branch of the incubator needs to retain 2.6 support until we
 graduate all of the modules that will go into libraries used by clients.


 A few examples:

 - oslo.utils was graduated during Juno and is used by some of the client
 libraries, so it needs to maintain python 2.6 support.

 - oslo.config was graduated several releases ago and is used directly by
 the stable branches of the server projects, so it needs to maintain python
 2.6 support.

 - oslo.log is being graduated in Kilo and is not yet in use by any
 projects, so it does not need python 2.6 support.

 - oslo.cliutils and oslo.apiclient are on the list to graduate in Kilo,
 but both are used by client projects, so they need to keep python 2.6
 support. At that point we can evaluate the code that remains in the
 incubator and see if we're ready to turn off 2.6 support there.


 Let me know if you have questions about any specific cases not listed in
 the examples.

 Doug

 PS - Thanks to fungi and clarkb for helping work out the rules above.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best regards,
Andrey Kurilin.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Kevin L. Mitchell
On Thu, 2014-10-23 at 18:56 +0300, Andrey Kurilin wrote:
 Just a joke: Can we drop supporting Python 2.6, when several project
 still have hooks for Python 2.4?
 
 https://github.com/openstack/python-novaclient/blob/master/novaclient/exceptions.py#L195-L203
 https://github.com/openstack/python-cinderclient/blob/master/cinderclient/exceptions.py#L147-L155

It may have been intended as a joke, but it's worth pointing out that
the Xen plugins for nova (at least) have to be compatible with Python
2.4, because they run on the Xenserver, which has an antiquated Python
installed :)

As for the clients, we could probably drop that segment now; it's not
like we *test* against 2.4, right?  :)
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Edgar Magana
I second Anne’s and Kyle comments. Actually, I like very much the wiki part to 
provide some visibility for out-of-tree plugins/drivers but not into the 
official documentation.

Thanks,

Edgar

From: Anne Gentle a...@openstack.org
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, October 23, 2014 at 8:51 AM
To: Kyle Mestery mest...@mestery.com
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update about 
new vendor plugin, but without code in repository?



On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery 
mest...@mestery.com wrote:
Vad:

The third-party CI is required for your upstream driver. I think
what's different from my reading of this thread is the question of
what is the requirement to have a driver listed in the upstream
documentation which is not in the upstream codebase. To my knowledge,
we haven't done this. Thus, IMHO, we should NOT be utilizing upstream
documentation to document drivers which are themselves not upstream.
When we split out the drivers which are currently upstream in neutron
into a separate repo, they will still be upstream. So my opinion here
is that if your driver is not upstream, it shouldn't be in the
upstream documentation. But I'd like to hear others opinions as well.


This is my sense as well.

The hypervisor drivers are documented on the wiki, sometimes they're in-tree, 
sometimes they're not, but the state of testing is documented on the wiki. I 
think we could take this approach for network and storage drivers as well.

https://wiki.openstack.org/wiki/HypervisorSupportMatrix

Anne

Thanks,
Kyle

On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan
vadivel.openst...@gmail.com wrote:
 Kyle,
 Gentle reminder... when you get a chance!..

 Anne,
 In case, if i need to send it to different group or email-id to reach Kyle
 Mestery, pls. let me know. Thanks for your help.

 Regards,
 Vad
 --


 On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan
 vadivel.openst...@gmail.com wrote:

 Hi Kyle,

 Can you pls. comment on this discussion and confirm the requirements for
 getting out-of-tree mechanism_driver listed in the supported plugin/driver
 list of the Openstack Neutron docs.

 Thanks,
 Vad
 --

 On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle 
 a...@openstack.org wrote:



 On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan
 vadivel.openst...@gmail.com wrote:

 Hi,

  On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton 
  blak...@gmail.com
  wrote:
 
  I think you will probably have to wait until after the summit so
  we can
  see the direction that will be taken with the rest of the in-tree
  drivers/plugins. It seems like we are moving towards removing all
  of them so
  we would definitely need a solution to documenting out-of-tree
  drivers as
  you suggested.

 [Vad] while i 'm waiting for the conclusion on this subject, i 'm trying
 to setup the third-party CI/Test system and meet its requirements to get my
 mechanism_driver listed in the Kilo's documentation, in parallel.

 Couple of questions/confirmations before i proceed further on this
 direction...

 1) Is there anything more required other than the third-party CI/Test
 requirements ??.. like should I still need to go-through the entire
 development process of submit/review/approval of the blue-print and code of
 my ML2 driver which was already developed and in-use?...


 The neutron PTL Kyle Mestery can answer if there are any additional
 requirements.


 2) Who is the authority to clarify and confirm the above (and how do i
 contact them)?...


 Elections just completed, and the newly elected PTL is Kyle Mestery,
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html.



 Thanks again for your inputs...

 Regards,
 Vad
 --

 On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle 
 a...@openstack.org wrote:



 On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan
 vadivel.openst...@gmail.com wrote:

 Agreed on the requirements of test results to qualify the vendor
 plugin to be listed in the upstream docs.
 Is there any procedure/infrastructure currently available for this
 purpose?..
 Pls. fwd any link/pointers on those info.


 Here's a link to the third-party testing setup information.

 http://ci.openstack.org/third_party.html

 Feel free to keep asking questions as you dig deeper.
 Thanks,
 Anne


 Thanks,
 Vad
 --

 On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki 
 amot...@gmail.com
 wrote:

 I agree with Kevin and Kyle. Even if we 

Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Edgar Magana
I forgot to mention that I can help to coordinate the creation and maintenance 
of the wiki for non-upstreamed drivers for Neutron.
We need to be sure that we DO NOT confuse users with the current information 
here:
https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers

I have been maintaining that wiki and I would like to keep just for upstreamed 
vendor-specific plugins/drivers.

Edgar

From: Edgar Magana edgar.mag...@workday.com
Date: Thursday, October 23, 2014 at 9:46 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Kyle Mestery mest...@mestery.com
Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update about 
new vendor plugin, but without code in repository?

I second Anne’s and Kyle comments. Actually, I like very much the wiki part to 
provide some visibility for out-of-tree plugins/drivers but not into the 
official documentation.

Thanks,

Edgar

From: Anne Gentle a...@openstack.org
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, October 23, 2014 at 8:51 AM
To: Kyle Mestery mest...@mestery.com
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update about 
new vendor plugin, but without code in repository?



On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery 
mest...@mestery.com wrote:
Vad:

The third-party CI is required for your upstream driver. I think
what's different from my reading of this thread is the question of
what is the requirement to have a driver listed in the upstream
documentation which is not in the upstream codebase. To my knowledge,
we haven't done this. Thus, IMHO, we should NOT be utilizing upstream
documentation to document drivers which are themselves not upstream.
When we split out the drivers which are currently upstream in neutron
into a separate repo, they will still be upstream. So my opinion here
is that if your driver is not upstream, it shouldn't be in the
upstream documentation. But I'd like to hear others opinions as well.


This is my sense as well.

The hypervisor drivers are documented on the wiki, sometimes they're in-tree, 
sometimes they're not, but the state of testing is documented on the wiki. I 
think we could take this approach for network and storage drivers as well.

https://wiki.openstack.org/wiki/HypervisorSupportMatrix

Anne

Thanks,
Kyle

On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan
vadivel.openst...@gmail.com wrote:
 Kyle,
 Gentle reminder... when you get a chance!..

 Anne,
 In case, if i need to send it to different group or email-id to reach Kyle
 Mestery, pls. let me know. Thanks for your help.

 Regards,
 Vad
 --


 On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan
 vadivel.openst...@gmail.com wrote:

 Hi Kyle,

 Can you pls. comment on this discussion and confirm the requirements for
 getting out-of-tree mechanism_driver listed in the supported plugin/driver
 list of the Openstack Neutron docs.

 Thanks,
 Vad
 --

 On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle 
 a...@openstack.org wrote:



 On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan
 vadivel.openst...@gmail.com wrote:

 Hi,

  On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton 
  blak...@gmail.com
  wrote:
 
  I think you will probably have to wait until after the summit so
  we can
  see the direction that will be taken with the rest of the in-tree
  drivers/plugins. It seems like we are moving towards removing all
  of them so
  we would definitely need a solution to documenting out-of-tree
  drivers as
  you suggested.

 [Vad] while i 'm waiting for the conclusion on this subject, i 'm trying
 to setup the third-party CI/Test system and meet its requirements to get my
 mechanism_driver listed in the Kilo's documentation, in parallel.

 Couple of questions/confirmations before i proceed further on this
 direction...

 1) Is there anything more required other than the third-party CI/Test
 requirements ??.. like should I still need to go-through the entire
 development process of submit/review/approval of the blue-print and code of
 my ML2 driver which was already developed and in-use?...


 The neutron PTL Kyle Mestery can answer if there are any additional
 requirements.


 2) Who is the authority to clarify and confirm the above (and how do i
 contact them)?...


 Elections just completed, and the newly elected PTL is Kyle Mestery,
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html.



 Thanks 

Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Vadivel Poonathan
Hi Kyle and Anne,

Thanks for the clarifications... understood and it makes sense.

However, per my understanding, the drivers (aka plugins) are meant to be
developed and supported by third-party vendors, outside of the OpenStack
community, and they are supposed to work as plug-n-play... they are not
part of the core OpenStack development, nor any of its components. If that
is the case, then why should the OpenStack community include and maintain them
as part of it, for every release?...  Wouldn't it be enough to limit the
scope to the plugin framework and built-in drivers such as LinuxBridge or
OVS etc.?... not extending to commercial vendors?...  (It is just a curious
question, forgive me if I missed something and correct me!).

At the same time, IMHO, there must be some reference or a page within the
scope of OpenStack documentation (not necessarily the core docs, but some
wiki page or reference link or so - as Anne suggested) to mention the list
of the drivers/plugins supported as of given release and may be an external
link to know more details about the driver, if the link is provided by
respective vendor.


Anyway, besides my opinion, a wiki page similar to the hypervisor driver one
would be good for now at least, until the direction/policy level decision is
made on maintaining out-of-tree plugins/drivers.


Thanks,
Vad
--




On Thu, Oct 23, 2014 at 9:46 AM, Edgar Magana edgar.mag...@workday.com
wrote:

  I second Anne’s and Kyle comments. Actually, I like very much the wiki
 part to provide some visibility for out-of-tree plugins/drivers but not
 into the official documentation.

  Thanks,

  Edgar

   From: Anne Gentle a...@openstack.org
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, October 23, 2014 at 8:51 AM
 To: Kyle Mestery mest...@mestery.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update
 about new vendor plugin, but without code in repository?



 On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery mest...@mestery.com
 wrote:

 Vad:

 The third-party CI is required for your upstream driver. I think
 what's different from my reading of this thread is the question of
 what is the requirement to have a driver listed in the upstream
 documentation which is not in the upstream codebase. To my knowledge,
 we haven't done this. Thus, IMHO, we should NOT be utilizing upstream
 documentation to document drivers which are themselves not upstream.
 When we split out the drivers which are currently upstream in neutron
 into a separate repo, they will still be upstream. So my opinion here
 is that if your driver is not upstream, it shouldn't be in the
 upstream documentation. But I'd like to hear others opinions as well.


  This is my sense as well.

  The hypervisor drivers are documented on the wiki, sometimes they're
 in-tree, sometimes they're not, but the state of testing is documented on
 the wiki. I think we could take this approach for network and storage
 drivers as well.

  https://wiki.openstack.org/wiki/HypervisorSupportMatrix

  Anne


 Thanks,
 Kyle

 On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan
  vadivel.openst...@gmail.com wrote:
  Kyle,
  Gentle reminder... when you get a chance!..
 
  Anne,
  In case, if i need to send it to different group or email-id to reach
 Kyle
  Mestery, pls. let me know. Thanks for your help.
 
  Regards,
  Vad
  --
 
 
  On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan
  vadivel.openst...@gmail.com wrote:
 
  Hi Kyle,
 
  Can you pls. comment on this discussion and confirm the requirements
 for
  getting out-of-tree mechanism_driver listed in the supported
 plugin/driver
  list of the Openstack Neutron docs.
 
  Thanks,
  Vad
  --
 
  On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle a...@openstack.org
 wrote:
 
 
 
  On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan
  vadivel.openst...@gmail.com wrote:
 
  Hi,
 
   On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton 
 blak...@gmail.com
   wrote:
  
   I think you will probably have to wait until after the summit
 so
   we can
   see the direction that will be taken with the rest of the
 in-tree
   drivers/plugins. It seems like we are moving towards removing
 all
   of them so
   we would definitely need a solution to documenting out-of-tree
   drivers as
   you suggested.
 
  [Vad] while i 'm waiting for the conclusion on this subject, i 'm
 trying
  to setup the third-party CI/Test system and meet its requirements to
 get my
  mechanism_driver listed in the Kilo's documentation, in parallel.
 
  Couple of questions/confirmations before i proceed further on this
  direction...
 
  1) Is there anything more required other than the third-party CI/Test
  requirements ??.. like should I still need to go-through the entire
  development process of submit/review/approval of the blue-print and
 code of
  my ML2 driver which was 

[openstack-dev] [Keystone] python-keystoneclient release 0.11.2

2014-10-23 Thread Morgan Fainberg
The Keystone team has released python-keystoneclient 0.11.2 [1]. This version 
includes a number of bug fixes.

Details of new features and bug fixes included in the 0.11.2 release of 
python-keystoneclient can be found on the milestone information page [2].


[1] https://pypi.python.org/pypi/python-keystoneclient/0.11.2
[2] https://launchpad.net/python-keystoneclient/+milestone/0.11.2
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Convergence prototyping

2014-10-23 Thread Zane Bitter

Hi folks,
I've been looking at the convergence stuff, and become a bit concerned 
that we're more or less flying blind (or at least I have been) in trying 
to figure out the design, and also that some of the first implementation 
efforts seem to be around the stuff that is _most_ expensive to change 
(e.g. database schemata).


What we really want is to experiment on stuff that is cheap to change 
with a view to figuring out the big picture without having to iterate on 
the expensive stuff. To that end, I started last week to write a little 
prototype system to demonstrate the concepts of convergence. (Note that 
none of this code is intended to end up in Heat!) You can find the code 
here:


https://github.com/zaneb/heat-convergence-prototype

Note that this is a *very* early prototype. At the moment it can create 
resources, and not much else. I plan to continue working on it to 
implement updates and so forth. My hope is that we can develop a test 
framework and scenarios around this that can eventually be transplanted 
into Heat's functional tests. So the prototype code is throwaway, but 
the tests we might write against it in future should be useful.


I'd like to encourage anyone who needs to figure out any part of the 
design of convergence to fork the repo and try out some alternatives - 
it should be very lightweight to do so. I will also entertain pull 
requests (though I see my branch primarily as a vehicle for my own 
learning at this early stage, so if you want to go in a different 
direction it may be best to do so on your own branch), and the issue 
tracker is enabled if there is something you want to track.


I have learned a bunch of stuff already:

* The proposed spec for persisting the dependency graph 
(https://review.openstack.org/#/c/123749/1) is really well done. Kudos 
to Anant and the other folks who had input to it. I have left comments 
based on what I learned so far from trying it out.



* We should isolate the problem of merging two branches of execution 
(i.e. knowing when to trigger a check on one resource that depends on 
multiple others). Either in a library (like taskflow) or just a separate 
database table (like my current prototype). Baking it into the 
orchestration algorithms (e.g. by marking nodes in the dependency graph) 
would be a colossal mistake IMHO.
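
For the sake of illustration, the essence of the "separate table" approach is 
just a record of outstanding predecessors per merge point, with the branch 
that clears the last one triggering the check (a toy sketch, not the 
prototype's actual code):

    import threading

    class SyncPoints(object):
        """Toy illustration of merging branches of execution."""

        def __init__(self):
            self._lock = threading.Lock()
            self._outstanding = {}

        def expect(self, resource, predecessors):
            # record which branches must complete before 'resource' is checked
            self._outstanding[resource] = set(predecessors)

        def branch_complete(self, resource, predecessor):
            # True for exactly one caller: the one that satisfies the last
            # outstanding dependency and should therefore trigger the check
            with self._lock:
                remaining = self._outstanding[resource]
                remaining.discard(predecessor)
                return not remaining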



* Our overarching plan is backwards.

There are two quite separable parts to this architecture - the worker 
and the observer. Up until now, we have been assuming that implementing 
the observer would be the first step. Originally we thought that this 
would give us the best incremental benefits. At the mid-cycle meetup we 
came to the conclusion that there were actually no real incremental 
benefits to be had until everything was close to completion. I am now of 
the opinion that we had it exactly backwards - the observer 
implementation should come last. That will allow us to deliver 
incremental benefits from the observer sooner.


The problem with the observer is that it requires new plugins. (That 
sucks BTW, because a lot of the value of Heat is in having all of these 
tested, working plugins. I'd love it if we could take the opportunity to 
design a plugin framework such that plugins would require much less 
custom code, but it looks like a really hard job.) Basically this means 
that convergence would be stalled until we could rewrite all the 
plugins. I think it's much better to implement a first stage that can 
work with existing plugins *or* the new ones we'll eventually have with 
the observer. That allows us to get some benefits soon and further 
incremental benefits as we convert plugins one at a time. It should also 
mean a transition period (possibly with a performance penalty) for 
existing plugin authors, and for things like HARestarter (can we please 
please deprecate it now?).


So the two phases I'm proposing are:
 1. (Workers) Distribute tasks for individual resources among workers; 
implement update-during-update (no more locking).
 2. (Observers) Compare against real-world values instead of template 
values to determine when updates are needed. Make use of notifications 
and such.


I believe it's quite realistic to aim to get #1 done for Kilo. There 
could also be a phase 1.5, where we use the existing stack-check 
mechanism to detect the most egregious divergences between template and 
reality (e.g. whole resource is missing should be easy-ish). I think 
this means that we could have a feasible Autoscaling API for Kilo if 
folks step up to work on it - and in any case now is the time to start 
on that to avoid it being delayed more than it needs to be based purely 
on the availability of underlying features. That's why I proposed a 
session on Autoscaling for the design summit.



* This thing is going to _hammer_ the database

The advantage is that we'll be able to spread the access across an 
arbitrary number of workers, but it's still going to be brutal because 
there's only one 

Re: [openstack-dev] [Nova] Cells conversation starter

2014-10-23 Thread Andrew Laski


On 10/22/2014 08:11 PM, Sam Morrison wrote:

On 23 Oct 2014, at 5:55 am, Andrew Laski andrew.la...@rackspace.com wrote:


 While I agree that N > 2 is a bit interesting, I have seen N=3 in production

[central API]--[state/region1]--[state/region DC1]
\-[state/region DC2]
   --[state/region2 DC]
   --[state/region3 DC]
   --[state/region4 DC]

I would be curious to hear any information about how this is working out.  Does 
everything that works for N=2 work when N=3?  Are there fixes that needed to be 
added to make this work?  Why do it this way rather than bring [state/region 
DC1] and [state/region DC2] up a level?

We (NeCTAR) have 3 tiers, our current setup has one parent, 6 children then 3 
of the children have 2 grandchildren each. All compute nodes are at the lowest 
level.

Everything works fine and we haven’t needed to do any modifications.

We run in a 3 tier system because it matches how our infrastructure is 
logically laid out, but I don’t see a problem in just having a 2 tier system 
and getting rid of the middle man.


There's no reason an N-tier system where N > 2 shouldn't be feasible, 
but it's not going to be tested in this initial effort. So while we will 
try not to break it, it's hard to guarantee that. That's why my 
preference would be to remove that code and build up an N-tier system in 
conjunction with testing later.  But with a clear user of this 
functionality I don't think that's an option.




Sam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] questions on object/db usage

2014-10-23 Thread Dan Smith
When I fix some bugs, I found that some code in
 nova/compute/api.py
   sometimes we use db ,sometimes we use objects do we have
 any criteria for it? I knew we can't access db in compute layer code,
 how about others ? prefer object or db direct access? thanks

Prefer objects, and any remaining db.* usage anywhere (other than the
object code itself) is not only a candidate for cleanup, it's much
appreciated :)
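
For instance, the db calls quoted above would move to the object layer along
these lines (a sketch only - double-check the exact object APIs against the
current tree):

    def instance_get_all_by_host(self, context, host_name):
        """Return all instances on the given host."""
        return objects.InstanceList.get_by_host(context, host_name)

    def compute_node_get_all(self, context):
        return objects.ComputeNodeList.get_all(context)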

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Vadivel Poonathan
On Thu, Oct 23, 2014 at 9:49 AM, Edgar Magana edgar.mag...@workday.com
wrote:

  I forgot to mention that I can help to coordinate the creation and
 maintenance of the wiki for non-upstreamed drivers for Neutron.

[vad] Edgar, that would be nice!... but not sure whether it has to wait
till the outcome of the design discussion on this topic in the upcoming
summit??!...

Thanks,
Vad
--


 We need to be sure that we DO NOT confuse users with the current
 information here:
 https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers

  I have been maintaining that wiki and I would like to keep just for
 upstreamed vendor-specific plugins/drivers.

  Edgar

   From: Edgar Magana edgar.mag...@workday.com
 Date: Thursday, October 23, 2014 at 9:46 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, Kyle Mestery mest...@mestery.com

 Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update
 about new vendor plugin, but without code in repository?

   I second Anne’s and Kyle comments. Actually, I like very much the wiki
 part to provide some visibility for out-of-tree plugins/drivers but not
 into the official documentation.

  Thanks,

  Edgar

   From: Anne Gentle a...@openstack.org
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, October 23, 2014 at 8:51 AM
 To: Kyle Mestery mest...@mestery.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update
 about new vendor plugin, but without code in repository?



 On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery mest...@mestery.com
 wrote:

 Vad:

 The third-party CI is required for your upstream driver. I think
 what's different from my reading of this thread is the question of
 what is the requirement to have a driver listed in the upstream
 documentation which is not in the upstream codebase. To my knowledge,
 we haven't done this. Thus, IMHO, we should NOT be utilizing upstream
 documentation to document drivers which are themselves not upstream.
 When we split out the drivers which are currently upstream in neutron
 into a separate repo, they will still be upstream. So my opinion here
 is that if your driver is not upstream, it shouldn't be in the
 upstream documentation. But I'd like to hear others opinions as well.


  This is my sense as well.

  The hypervisor drivers are documented on the wiki, sometimes they're
 in-tree, sometimes they're not, but the state of testing is documented on
 the wiki. I think we could take this approach for network and storage
 drivers as well.

  https://wiki.openstack.org/wiki/HypervisorSupportMatrix

  Anne


 Thanks,
 Kyle

 On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan
  vadivel.openst...@gmail.com wrote:
  Kyle,
  Gentle reminder... when you get a chance!..
 
  Anne,
  In case, if i need to send it to different group or email-id to reach
 Kyle
  Mestery, pls. let me know. Thanks for your help.
 
  Regards,
  Vad
  --
 
 
  On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan
  vadivel.openst...@gmail.com wrote:
 
  Hi Kyle,
 
  Can you pls. comment on this discussion and confirm the requirements
 for
  getting out-of-tree mechanism_driver listed in the supported
 plugin/driver
  list of the Openstack Neutron docs.
 
  Thanks,
  Vad
  --
 
  On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle a...@openstack.org
 wrote:
 
 
 
  On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan
  vadivel.openst...@gmail.com wrote:
 
  Hi,
 
   On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton 
 blak...@gmail.com
   wrote:
  
   I think you will probably have to wait until after the summit
 so
   we can
   see the direction that will be taken with the rest of the
 in-tree
   drivers/plugins. It seems like we are moving towards removing
 all
   of them so
   we would definitely need a solution to documenting out-of-tree
   drivers as
   you suggested.
 
  [Vad] while i 'm waiting for the conclusion on this subject, i 'm
 trying
  to setup the third-party CI/Test system and meet its requirements to
 get my
  mechanism_driver listed in the Kilo's documentation, in parallel.
 
  Couple of questions/confirmations before i proceed further on this
  direction...
 
  1) Is there anything more required other than the third-party CI/Test
  requirements ??.. like should I still need to go-through the entire
  development process of submit/review/approval of the blue-print and
 code of
  my ML2 driver which was already developed and in-use?...
 
 
  The neutron PTL Kyle Mestery can answer if there are any additional
  requirements.
 
 
  2) Who is the authority to clarify and confirm the above (and how do
 i
  contact them)?...
 
 
  Elections just completed, and the newly elected PTL is Kyle Mestery,
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html
 .
 
 
 
  Thanks again for 

Re: [openstack-dev] [Fuel] Pluggable framework in Fuel: first prototype ready

2014-10-23 Thread Dmitry Borodaenko
Preventing plugin developers from implementing their own installer is
a pro, not a con; you've already listed one reason in the cons against
install scripts inside the plugin tarball: if we centralize plugin
installation and management logic in fuel, we can change it once for
all plugins and don't have to worry about old plugins using an
obsolete installer.

I think priorities here should be 1) ease of plugin development; and
2) ease of use. Pluggable architecture won't do us much good if we end
up being the only ones being able to use it efficiently. Adding a
little more complexity to fuelclient to allow moving a lot of fuel
complexity from core to plugins is a good tradeoff.


On Thu, Oct 23, 2014 at 8:32 AM, Evgeniy L e...@mirantis.com wrote:
 Hi Mike,

 I would like to add a bit more details about current implementation and how
 it can be done.

 Implement installation as a scripts inside of tar ball:
 Cons:
 * install script is really simple right now, but it will be much more
 complicated
 ** it requires to implement logic where we can ask user for login/password
 ** use some config, where we will be able to get endpoints, like where is
 keystone, nailgun
 ** validate that it's possible to install plugin on the current version of
 master
 ** handle error cases (to make installation process more atomic)
 * it will be impossible to deprecate the installation logic/method, because
 it's on the plugin's side
   and you cannot change a plugin which user downloaded some times ago, when
 we get
   plugin manager, we probably would like user to use plugin manager, instead
 of some scripts
 * plugin installation process is not so simple as it could be (untar, cd
 plugin, ./install)

 Pros:
 * plugin developer can change installation scripts (I'm not sure if it's a
 pros)

 Add installation to fuel client:
 Cons:
 * requires changes in fuel client, which are not good for fuel client by
 design (fuel client
   should be able to work remotely from user's machine), current
 implementation requires
   local operations on files, it will be changed in the future releases, so
 fuel-client will
   be able to do it via api, also we can determine if it's not master node by
 /etc/fuel/version.yaml
   and show the user a message which says that in the current version it's
 not possible
   to install the plugin remotely
 * plugin developer won't be able to change installation process (I'm not
 sure if it's a cons)

 Pros:
 * it's easier for the user to install the plugin (`fuel --install-plugin
 plugin_name-1.0.1.fpb')
 * all of the authentication logic is already implemented in fuel client
 * fuel client uses a config with endpoints which is generated by puppet
 * it will be easier to deprecate the previous installation approach; we can
 just install a new
   fuel client on the master which uses the API

 Personally I like the second approach, and I think we should try to
 implement it
 when we get time.
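
 For concreteness, the user-facing flow being discussed would look roughly like
 this sketch (the command names are the ones proposed in this thread; the exact
 flags, paths and package extension are assumptions rather than a final
 interface):

     # build the plugin package with the fuel plugin builder, on any machine
     fpb --build my_plugin/
     # copy the result to the Fuel master node and install it there
     scp my_plugin-1.0.1.fpb root@fuel-master:/tmp/
     ssh root@fuel-master 'fuel --install-plugin /tmp/my_plugin-1.0.1.fpb'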

 Thanks,

 On Thu, Oct 23, 2014 at 3:02 PM, Mike Scherbakov mscherba...@mirantis.com
 wrote:

 I feel like we should not require the user to unpack the plugin before
 installing it. Moreover, we may choose to distribute plugins in our own
 format, which we may potentially change later, e.g. lbaas-v2.0.fp. I'd
 rather stick with two actions:

 Assembly (externally): fpb --build name

 Installation (on master node): fuel --install-plugin name

  I like the idea of putting plugin installation functionality in fuel
 client, which is installed
 on the master node.
 But in the current version plugin installation requires file operations
 on the master node,
 and as a result we can have problems if the user's fuel-client is installed
 in another env.


 I suggest keeping it simple for now, as we have the issue mentioned by
 Evgeniy: fuel client is supposed to work from other nodes, and we would need
 additional verification code in there. Also, to make it smooth, we would have
 to end up with a few more checks - like what if the tarball is broken, what if
 we can't find the install script in it, etc.
 I'd suggest keeping it simple for 6.0, and then we will see how it's being
 used and what other limitations / issues we have around plugin installation
 and usage. We can consider making this functionality part of fuel client
 a bit later.

 Thanks,

 On Tue, Oct 21, 2014 at 6:57 PM, Vitaly Kramskikh
 vkramsk...@mirantis.com wrote:

 Hi,

 As for a separate section for plugins, I think we should not force it and
 leave this decision to a plugin developer, so he can create just a single
 checkbox or a section of the settings tab or a separate tab depending on
 plugin functionality. Plugins should be able to modify arbitrary release
 fields. For example, if Ceph was a plugin, it should be able to extend
 wizard config to add new options to Storage pane. If vCenter was a plugin,
 it should be able to set maximum amount of Compute nodes to 0.

 2014-10-20 21:21 GMT+07:00 Evgeniy L e...@mirantis.com:

 Hi guys,

 Romans' questions:

  I feel like we should not require user to unpack the plugin 

[openstack-dev] [neutron] Lightning talks during the Design Summit!

2014-10-23 Thread Kyle Mestery
As discussed during the neutron-drivers meeting this week [1], we're
going to use one of the Neutron 40-minute design summit slots for
lightning talks. The basic idea is we will have 6 lightning talks,
each 5 minutes long. We will enforce a 5-minute hard limit here. We'll
do the lightning talk round first thing Thursday morning.

To submit a lightning talk, please add it to the etherpad linked here
[2]. I'll be collecting ideas until after the Neutron meeting on
Monday, 10-27-2014. At that point, I'll take all the ideas and add
them into a Survey Monkey form and we'll vote for which talks people
want to see. The top 6 talks will get a lightning talk slot.

I'm hoping the lightning talks allow people to discuss some ideas
which didn't get summit time, and allow for even new contributors to
discuss their ideas face to face with folks.

Thanks!
Kyle

[1] 
http://eavesdrop.openstack.org/meetings/neutron_drivers/2014/neutron_drivers.2014-10-22-15.02.log.html
[2] https://etherpad.openstack.org/p/neutron-kilo-lightning-talks

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-23 Thread Vishvananda Ishaya
If you exec conntrack inside the namespace with ip netns exec does it still 
show both connections?

Vish
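
For reference, that check would look roughly like the following (the namespace
name is a placeholder, conntrack-tools must be installed on the node, and on
reasonably recent kernels conntrack tables are kept per network namespace):

    ip netns list
    sudo ip netns exec qrouter-<router-uuid> conntrack -L | grep 10.0.0.5
    # if the stale entry shows up there, it can also be removed explicitly:
    sudo ip netns exec qrouter-<router-uuid> conntrack -D -d 10.0.0.5 -p tcp --dport 22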

On Oct 23, 2014, at 3:22 AM, Elena Ezhova eezh...@mirantis.com wrote:

 Hi!
 
 I am working on a bug, ping still working once connected even after related
 security group rule is deleted
 (https://bugs.launchpad.net/neutron/+bug/1335375). The gist of the problem is
 the following: when we delete a security group rule the corresponding rule in
 iptables is also deleted, but the connection that was allowed by that rule
 is not destroyed.
 The reason for this behavior is that in iptables we have the following
 structure of a chain that filters input packets for an interface of an
 instance:
 
 Chain neutron-openvswi-i830fa99f-3 (1 references)
  pkts bytes target     prot opt in  out  source      destination
     0     0 DROP       all  --  *   *    0.0.0.0/0   0.0.0.0/0   state INVALID /* Drop packets that are not associated with a state. */
     0     0 RETURN     all  --  *   *    0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */
     0     0 RETURN     udp  --  *   *    10.0.0.3    0.0.0.0/0   udp spt:67 dpt:68
     0     0 RETURN     all  --  *   *    0.0.0.0/0   0.0.0.0/0   match-set IPv43a0d3610-8b38-43f2-8 src
     0     0 RETURN     tcp  --  *   *    0.0.0.0/0   0.0.0.0/0   tcp dpt:22    <-- rule that allows ssh on port 22
     1    84 RETURN     icmp --  *   *    0.0.0.0/0   0.0.0.0/0
     0     0 neutron-openvswi-sg-fallback  all  --  *  *  0.0.0.0/0   0.0.0.0/0   /* Send unmatched traffic to the fallback chain. */
 
 So, if we delete the rule that allows tcp on port 22, then connections that
 are already established won't be closed, because all of their packets satisfy
 the rule:
  0 0 RETURN all  --  *  *   0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
  /* Direct packets associated with a known session to the RETURN chain. */
 
 I seek advice on how to deal with the problem. There are a couple of
 ideas for how to do it (more or less realistic):
 1. Kill the connection using conntrack.
    The problem here is that it is sometimes impossible to tell which
 connection should be killed. For example, there may be two instances running
 in different namespaces that have the same IP addresses. As a compute node
 doesn't know anything about namespaces, it cannot distinguish between the two
 seemingly identical connections:
  $ sudo conntrack -L  | grep 10.0.0.5
  tcp  6 431954 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60723 
 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60723 [ASSURED] mark=0 use=1
  tcp  6 431976 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60729 
 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60729 [ASSURED] mark=0 use=1
 
 I wonder whether there is any way to search for a connection by destination 
 MAC?
 2. Delete the iptables rule that directs packets associated with a known
 session to the RETURN chain.
    This will force all packets to go through the full chain each time
 and will definitely make the connection close. But it will strongly
 affect performance. A timeout could be added after which
 this rule is restored, but it is uncertain how long it should be.
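
 Concretely, that second idea amounts to something like the following (chain
 name taken from the listing above; deleting by rule number is shown because the
 comment match makes deleting by specification awkward; this is purely an
 illustration of the tradeoff, not a recommendation):

     # show rule numbers in the per-port chain
     sudo iptables -L neutron-openvswi-i830fa99f-3 --line-numbers -n
     # drop the RELATED,ESTABLISHED shortcut (rule 2 in the listing above)
     sudo iptables -D neutron-openvswi-i830fa99f-3 2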
 
 Please share your thoughts on how it would be better to handle it.
 
 Thanks in advance,
 Elena
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [qa] [oslo] Declarative HTTP Tests

2014-10-23 Thread Doug Hellmann

On Oct 23, 2014, at 6:27 AM, Chris Dent chd...@redhat.com wrote:

 
 I've proposed a spec to Ceilometer
 
   https://review.openstack.org/#/c/129669/
 
 for a suite of declarative HTTP tests that would be runnable both in
 gate check jobs and in local dev environments.
 
 There's been some discussion that this may be generally applicable
 and could be best served by a generic tool. My original assertion
 was let's make something work and then see if people like it but I
 thought I also better check with the larger world:
 
 * Is this a good idea?
 
 * Do other projects have similar ideas in progress?
 
 * Is this concept something for which a generic tool should be
  created _prior_ to implementation in an individual project?
 
 * Is there prior art? What's a good format?

WebTest isn’t quite what you’re talking about, but does provide a way to talk 
to a WSGI app from within a test suite rather simply. Can you expand a little 
on why “declarative” tests are better suited for this than the more usual sorts 
of tests we write?

I definitely don’t think the ceilometer team should build something completely 
new for this without a lot more detail in the spec about which projects on PyPI 
were evaluated and rejected as not meeting the requirements. If we do need/want 
something like this I would expect it to be built within the QA program. I 
don’t know if it’s appropriate to put it in tempestlib or if we need a 
completely new tool.

Doug

 
 Thanks.
 
 -- 
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Doug Hellmann

On Oct 23, 2014, at 2:56 AM, Flavio Percoco fla...@redhat.com wrote:

 On 10/22/2014 08:15 PM, Doug Hellmann wrote:
 The application projects are dropping python 2.6 support during Kilo, and 
 I’ve had several people ask recently about what this means for Oslo. Because 
 we create libraries that will be used by stable versions of projects that 
 still need to run on 2.6, we are going to need to maintain support for 2.6 
 in Oslo until Juno is no longer supported, at least for some of our 
 projects. After Juno’s support period ends we can look again at dropping 2.6 
 support in all of the projects.
 
 
 I think these rules cover all of the cases we have:
 
 1. Any Oslo library in use by an API client that is used by a supported 
 stable branch (Icehouse and Juno) needs to keep 2.6 support.
 
 2. If a client library needs a library we graduate from this point forward, 
 we will need to ensure that library supports 2.6.
 
 3. Any Oslo library used directly by a supported stable branch of an 
 application needs to keep 2.6 support.
 
 4. Any Oslo library graduated during Kilo can drop 2.6 support, unless one 
 of the previous rules applies.
 
 5. The stable/icehouse and stable/juno branches of the incubator need to 
 retain 2.6 support for as long as those versions are supported.
 
 6. The master branch of the incubator needs to retain 2.6 support until we 
 graduate all of the modules that will go into libraries used by clients.
 
 
 A few examples:
 
 - oslo.utils was graduated during Juno and is used by some of the client 
 libraries, so it needs to maintain python 2.6 support.
 
 - oslo.config was graduated several releases ago and is used directly by the 
 stable branches of the server projects, so it needs to maintain python 2.6 
 support.
 
 - oslo.log is being graduated in Kilo and is not yet in use by any projects, 
 so it does not need python 2.6 support.
 
 - oslo.cliutils and oslo.apiclient are on the list to graduate in Kilo, but 
 both are used by client projects, so they need to keep python 2.6 support. 
 At that point we can evaluate the code that remains in the incubator and see 
 if we’re ready to turn off 2.6 support there.
 
 
 Let me know if you have questions about any specific cases not listed in the 
 examples.
 
 The rules look ok to me but I'm a bit worried that we might miss
 something in the process due to all these rules being in place. Would it
 be simpler to just say we'll keep py2.6 support in oslo for Kilo and
 drop it in Igloo (or L?) ?

I think we have to actually wait for M, don’t we (K & L represents 1 year where 
J is supported, M is the first release where J is not supported and 2.6 can be 
fully dropped).

But to your point of keeping it simple and saying we support 2.6 in all of Oslo 
until no stable branches use it, that could work. I think in practice we’re not 
in any hurry to drop the 2.6 tests from existing Oslo libs, and we just won’t 
add them to new ones, which gives us basically the same result.

Doug

 
 Once Igloo development begins, Kilo will be stable (without py2.6
 support except for Oslo) and Juno will be in security maintenance (with
 py2.6 support).
 
 I guess the TL;DR of what I'm proposing is to keep 2.6 support in oslo
 until we move the rest of the projects just to keep the process simpler.
 Probably longer but hopefully simpler.
 
 I'm sure I'm missing something so please, correct me here.
 Flavio
 
 
 -- 
 @flaper87
 Flavio Percoco
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Doug Hellmann

On Oct 23, 2014, at 12:30 PM, Kevin L. Mitchell kevin.mitch...@rackspace.com 
wrote:

 On Thu, 2014-10-23 at 18:56 +0300, Andrey Kurilin wrote:
 Just a joke: Can we drop supporting Python 2.6, when several projects
 still have hooks for Python 2.4?
 
 https://github.com/openstack/python-novaclient/blob/master/novaclient/exceptions.py#L195-L203
 https://github.com/openstack/python-cinderclient/blob/master/cinderclient/exceptions.py#L147-L155
 
 It may have been intended as a joke, but it's worth pointing out that
 the Xen plugins for nova (at least) have to be compatible with Python
 2.4, because they run on the Xenserver, which has an antiquated Python
 installed :)
 
 As for the clients, we could probably drop that segment now; it's not
 like we *test* against 2.4, right?  :)

I’m not aware of any Oslo code that presents a problem for those plugins. We 
wouldn’t want to cause a problem, but as you say, we don’t have anywhere to 
test 2.4 code. Do you know if the Xen driver uses any of the Oslo code?

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [poppy] Summit Design Session Planning

2014-10-23 Thread Amit Gandhi
Hi

The summit planning etherpad [0] is now available to continue discussion topics 
for the Poppy design session in Paris (to be held on Tuesday Nov 4th, at 2pm)

The planning etherpad will be kept open until next Thursday, during which we 
will finalize what will be discussed during the Poppy Design session[1].  The 
Poppy team started the planning discussion at todays weekly Poppy meeting[2].

One of the initial design session topics we plan to discuss at the summit is 
how Poppy can provision CDN services over Swift Containers.

I would like to invite any Swift developers who are attending the Kilo Summit 
to attend the Poppy design session, so that we can discuss in detail how this 
feature would work and any issues we would need to consider.


For more information on Poppy (CDN), and the Design Session, please visit the 
Poppy wiki page [3]

[0] https://etherpad.openstack.org/p/poppy-design-session-paris
[1] 
http://kilodesignsummit.sched.org/event/5c9eed173199565ce840100e37ebd754#.VElwU4exE1d
[2] https://wiki.openstack.org/wiki/Meetings/Poppy
[3] https://wiki.openstack.org/wiki/Poppy

Thanks,

Amit Gandhi
Rackspace.

@amitgandhinz on Freenode
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] [Devstack]

2014-10-23 Thread David Lyle
In order to help ease an ongoing struggle with session size limit issues,
Horizon is planning on changing the default session store from signed
cookie to simple server side session storage using sqlite. The size limit
for cookie based sessions is 4K and when this value is overrun, the result
is truncation of the session data in the cookie or a complete lack of
session data updates.

Operators will have the flexibility to replace the sqlite backend with the
DB of their choice, or memcached.
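
As a rough illustration (the setting names are standard Django, while the file
paths are assumptions that vary by distro), the switch on an existing deployment
would amount to something like:

    # relevant lines in Horizon's local_settings.py (path assumed):
    #   SESSION_ENGINE = 'django.contrib.sessions.backends.db'
    #   DATABASES = {'default': {'ENGINE': 'django.db.backends.sqlite3',
    #                            'NAME': '/var/lib/openstack-dashboard/horizon.sqlite3'}}
    # then create the session table and clean out old rows periodically:
    python manage.py syncdb --noinput
    python manage.py clearsessions    # e.g. from a daily cron job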

We gain: support for non-trivial service catalogs, support for higher
number of regions, space for holding multiple tokens (domain scoped and
project scoped), better support for PKI and PKIZ tokens, and frees up
cookie space for user preferences.

The drawbacks are that we lose HA as a default and that configuration becomes
slightly more complicated. Once the cookie size limit is removed, cookie based
storage would no longer be supported.

Additionally, this will require a few config changes to devstack to
configure the session store DB and clean it up periodically.

Concerns?

David
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread Preston L. Bannister
On Thu, Oct 23, 2014 at 7:51 AM, John Griffith john.griffi...@gmail.com
wrote:

 On Thu, Oct 23, 2014 at 8:50 AM, John Griffith john.griffi...@gmail.com
 wrote:

 On Thu, Oct 23, 2014 at 1:30 AM, Preston L. Bannister 
 pres...@bannister.us wrote:

 John,

 As a (new) OpenStack developer, I just discovered the
 CINDER_SECURE_DELETE option.


 OHHH... Most importantly, I almost forgot.  Welcome!!!


Thanks! (I think...)




 It doesn't suck as bad as you might have thought or some of the other
 respondents on this thread seem to think.  There's certainly room for
 improvement and growth but it hasn't been completely ignored on the Cinder
 side.


To be clear, I am fairly impressed with what has gone into OpenStack as a
whole. Given the breadth, complexity, and growth ... not everything is
going to be perfect (yet?).

So ... not trying to disparage past work, but noting what does not seem
right. (Also know I could easily be missing something.)





 The debate about whether to wipe LV's pretty much massively depends on the
 intelligence of the underlying store. If the lower level storage never
 returns accidental information ... explicit zeroes are not needed.


Yes, that is pretty much the key.

Does LVM let you read physical blocks that have never been written? Or zero
out virgin segments on read? If not, then dd of zeroes is a way of doing
the right thing (if *very* expensive).
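
For anyone following along, the wipe in question boils down to roughly the
following (the volume group and LV names are placeholders; Cinder exposes options
such as volume_clear, and more recently volume_clear_ionice, to control and
throttle it):

    ionice -c3 dd if=/dev/zero of=/dev/stack-volumes/volume-<id> bs=1M oflag=direct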
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Error in ssh key pair log in

2014-10-23 Thread Patil, Tushar
Hi Khayam,

Read below warning message carefully.

Open the /home/openstack/.ssh/known_hosts file on the machine from which you are
trying to connect to the VM, delete line #1, and try again.

TP

From: Khayam Gondal khayam.gon...@gmail.commailto:khayam.gon...@gmail.com
Date: Thursday, October 23, 2014 at 2:32 AM
To: openst...@lists.openstack.orgmailto:openst...@lists.openstack.org 
openst...@lists.openstack.orgmailto:openst...@lists.openstack.org, 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [Openstack] Error in ssh key pair log in


I am trying to log in to a VM from the host using an ssh key pair instead of a
password. I created the VM using the keypair khayamkey and then tried to log in
to the VM using the following command:

ssh -l tux -i khayamkey.pem 10.3.24.56

where tux is the username for the VM, but I got the following error:

WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
52:5c:47:33:dd:d0:7a:cd:0e:78:8d:9b:66:d8:74:a3.
Please contact your system administrator.
Add correct host key in /home/openstack/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/openstack/.ssh/known_hosts:1
  remove with: ssh-keygen -f /home/openstack/.ssh/known_hosts -R 10.3.24.56
RSA host key for 10.3.24.56 has changed and you have requested strict checking.
Host key verification failed.

P.S: I know that if I run ssh-keygen -f /home/openstack/.ssh/known_hosts -R
10.3.24.56 the problem can be solved, but then I have to provide a password to log
in to the VM, and my goal is to use keypairs NOT a password.

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [Devstack]

2014-10-23 Thread Gabriel Hurley
All in all this has been a long time coming. The cookie-based option was useful
as a batteries-included, simplest-case scenario. Moving to SQLite is a
reasonable second choice since most systems Horizon might be deployed on
support sqlite out of the box.

I would make a couple notes:


1)  If you’re going to store very large amounts of data in the session, 
then session cleanup is going to become an important issue to prevent excessive 
data growth from old sessions.

2)  SQLite is far worse to go into production with than cookie-based 
sessions (which are far from perfect). The more we can do to ensure people 
don’t make that mistake, the better.

3)  There should be a clear deprecation for cookie-based sessions. Don’t 
just drop them in a single release, as tempting as it is.

Otherwise, seems good to me.


-  Gabriel

From: David Lyle [mailto:dkly...@gmail.com]
Sent: Thursday, October 23, 2014 2:44 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Horizon] [Devstack]

In order to help ease an ongoing struggle with session size limit issues, 
Horizon is planning on changing the default session store from signed cookie to 
simple server side session storage using sqlite. The size limit for cookie 
based sessions is 4K and when this value is overrun, the result is truncation 
of the session data in the cookie or a complete lack of session data updates.

Operators will have the flexibility to replace the sqlite backend with the DB 
of their choice, or memcached.

We gain: support for non-trivial service catalogs, support for higher number of 
regions, space for holding multiple tokens (domain scoped and project scoped), 
better support for PKI and PKIZ tokens, and frees up cookie space for user 
preferences.

The drawbacks are that we lose HA as a default and that configuration becomes
slightly more complicated. Once the cookie size limit is removed, cookie based
storage would no longer be supported.

Additionally, this will require a few config changes to devstack to configure 
the session store DB and clean it up periodically.

Concerns?

David


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-23 Thread Ian Wells
There are two categories of problems:

1. some networks don't pass VLAN tagged traffic, and it's impossible to
detect this from the API
2. it's not possible to pass traffic from multiple networks to one port on
one machine as (e.g.) VLAN tagged traffic

(1) is addressed by the VLAN trunking network blueprint, XXX. Nothing else
addresses this, particularly in the case that one VM is emitting tagged
packets that another one should receive and Openstack knows nothing about
what's going on.

We should get this in, and ideally in quickly and in a simple form where it
simply tells you if a network is capable of passing tagged traffic.  In
general, this is possible to calculate but a bit tricky in ML2 - anything
using the OVS mechanism driver won't pass VLAN traffic, anything using
VLANs should probably also claim it doesn't pass VLAN traffic (though
actually it depends a little on the switch), and combinations of L3 tunnels
plus Linuxbridge seem to pass VLAN traffic just fine.  Beyond that, it's
got a backward compatibility mode, so it's possible to ensure that any
plugin that doesn't implement VLAN reporting is still behaving correctly
per the specification.

(2) is addressed by several blueprints, and these have overlapping ideas
that all solve the problem.  I would summarise the possibilities as follows:

A. Racha's L2 gateway blueprint,
https://blueprints.launchpad.net/neutron/+spec/gateway-api-extension, which
(at its simplest, though it's had features added on and is somewhat
OVS-specific in its detail) acts as a concentrator to multiplex multiple
networks onto one as a trunk.  This is a very simple approach and doesn't
attempt to resolve any of the hairier questions like making DHCP work as
you might want it to on the ports attached to the trunk network.
B. Isaku's L2 gateway blueprint, https://review.openstack.org/#/c/100278/,
which is more limited in that it refers only to external connections.
C. Erik's VLAN port blueprint,
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms, which tries
to solve the addressing problem mentioned above by having ports within
ports (much as, on the VM side, interfaces passing trunk traffic tend to
have subinterfaces that deal with the traffic streams).
D. Not a blueprint, but an idea I've come across: create a network that is
a collection of other networks, each 'subnetwork' being a VLAN in the
network trunk.
E. Kyle's very old blueprint,
https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api -
where we attach a port, not a network, to multiple networks.  Probably
doesn't work with appliances.

I would recommend we try and find a solution that works with both external
hardware and internal networks.  (B) is only a partial solution.

Considering the others, note that (C) and (D) add significant complexity to
the data model, independently of the benefits they bring.  (A) adds one new
functional block to networking (similar to today's routers, or even today's
Nova instances).

Finally, I suggest we consider the most prominent use case for multiplexing
networks.  This seems to be condensing traffic from many networks to either
a service VM or a service appliance.  It's useful, but not essential, to
have Neutron control the addresses on the trunk port subinterfaces.

So, that said, I personally favour (A) as the simplest way to solve our
current needs, and I recommend paring (A) right down to its basics: a block
that has access ports that we tag with a VLAN ID, and one trunk port that
has all of the access networks multiplexed onto it.  This is a slightly
dangerous block, in that you can actually set up forwarding blocks with it,
and that's a concern; but it's a simple service block like a router, it's
very, very simple to implement, and it solves our immediate problems so
that we can make forward progress.  It also doesn't affect the other
solutions significantly, so someone could implement (C) or (D) or (E) in
the future.
-- 
Ian.


On 23 October 2014 02:13, Alan Kavanagh alan.kavan...@ericsson.com wrote:

 +1 many thanks to Kyle for putting this as a priority, its most welcome.
 /Alan

 -Original Message-
 From: Erik Moe [mailto:erik@ericsson.com]
 Sent: October-22-14 5:01 PM
 To: Steve Gordon; OpenStack Development Mailing List (not for usage
 questions)
 Cc: iawe...@cisco.com
 Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints


 Hi,

 Great that we can have more focus on this. I'll attend the meeting on
 Monday and also attend the summit, looking forward to these discussions.

 Thanks,
 Erik


 -Original Message-
 From: Steve Gordon [mailto:sgor...@redhat.com]
 Sent: den 22 oktober 2014 16:29
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Erik Moe; iawe...@cisco.com; calum.lou...@metaswitch.com
 Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints

 - Original Message -
  From: Kyle Mestery mest...@mestery.com
  To: OpenStack 

Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Kevin L. Mitchell
On Thu, 2014-10-23 at 17:19 -0400, Doug Hellmann wrote:
 I’m not aware of any Oslo code that presents a problem for those
 plugins. We wouldn’t want to cause a problem, but as you say, we don’t
 have anywhere to test 2.4 code. Do you know if the Xen driver uses any
 of the Oslo code?

I missed the [oslo] tag in the subject line and was thinking generally;
so no, none of the Xen plugins use anything from oslo, because of the
need to support 2.4.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [stable] Tool to aid in scalability problems mitigation.

2014-10-23 Thread Salvatore Orlando
Hi Miguel,

while we'd need to hear from the stable team, I think it's not such a bad
idea to make this tool available to users of pre-juno openstack releases.
As far as upstream repos are concerned, I don't know if this tool violates
the criteria for stable branches. Even if it would be a rather large change
for stable/icehouse, it is pretty much orthogonal to the existing code, so
it could be ok. However, please note that stable/havana has now reached its
EOL, so there will be no more stable release for it.

The orthogonal nature of this tool, however, also makes the case for making it
widely available on pypi. I think it should be ok to describe the
scalability issue in the official OpenStack Icehouse docs and point out to
this tool for mitigation.

Salvatore

On 23 October 2014 14:03, Miguel Angel Ajo Pelayo mangel...@redhat.com
wrote:



 Recently, we have identified clients with problems due to the
 bad scalability of security groups in Havana and Icehouse, which
 was addressed during Juno here [1] [2].

 This situation shows up as blinking agents (going UP/DOWN),
 high AMQP load, high neutron-server load, and timeouts from openvswitch
 agents when calling neutron-server's
 security_group_rules_for_devices.

 Doing a [1] backport involves many dependent patches related
 to the general RPC refactor in neutron (which modifies all plugins),
 and subsequent ones fixing a few bugs. Sounds risky to me. [2] Introduces
 new features and it's dependent on features which aren't available on
 all systems.

 To remediate this on production systems, I wrote a quick tool
 to help report security groups and mitigate the problem
 by writing almost-equivalent rules [3].

 We believe this tool would be better made available to the wider community,
 under broader review and testing, and, since it doesn't modify any behavior
 or actual code in neutron, I'd like to propose it for inclusion into, at
 least, the Icehouse stable branch where it's most relevant.

 I know the usual way is to go master-Juno-Icehouse, but at the moment
 the tool is only interesting for Icehouse (and Havana), although I believe
 it could be extended to clean up orphaned resources or other cleanup
 tasks; in that case it could make sense to be available for K-J-I.

 As a reference, I'm leaving links to outputs from the tool [4][5]

 Looking forward to get some feedback,
 Miguel Ángel.


 [1] https://review.openstack.org/#/c/111876/ security group rpc refactor
 [2] https://review.openstack.org/#/c/111877/ ipset support
 [3] https://github.com/mangelajo/neutrontool
 [4] http://paste.openstack.org/show/123519/
 [5] http://paste.openstack.org/show/123525/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread John Griffith
On Thu, Oct 23, 2014 at 3:44 PM, Preston L. Bannister pres...@bannister.us
wrote:


 On Thu, Oct 23, 2014 at 7:51 AM, John Griffith john.griffi...@gmail.com
 wrote:

 On Thu, Oct 23, 2014 at 8:50 AM, John Griffith john.griffi...@gmail.com
 wrote:

 On Thu, Oct 23, 2014 at 1:30 AM, Preston L. Bannister 
 pres...@bannister.us wrote:

 John,

 As a (new) OpenStack developer, I just discovered the
 CINDER_SECURE_DELETE option.


 OHHH... Most importantly, I almost forgot.  Welcome!!!


 Thanks! (I think...)

:)





 It doesn't suck as bad as you might have thought or some of the other
 respondents on this thread seem to think.  There's certainly room for
 improvement and growth but it hasn't been completely ignored on the Cinder
 side.


 To be clear, I am fairly impressed with what has gone into OpenStack as a
 whole. Given the breadth, complexity, and growth ... not everything is
 going to be perfect (yet?).

 So ... not trying to disparage past work, but noting what does not seem
 right. (Also know I could easily be missing something.)

Sure, I didn't mean anything by that at all, and certainly didn't take it
that way.






 The debate about whether to wipe LV's pretty much massively depends on
 the intelligence of the underlying store. If the lower level storage never
 returns accidental information ... explicit zeroes are not needed.


 Yes, that is pretty much the key.

 Does LVM let you read physical blocks that have never been written? Or
 zero out virgin segments on read? If not, then dd of zeroes is a way of
 doing the right thing (if *very* expensive).


Yeah... so that's the crux of the issue on LVM (thick).  It's quite
possible for a new LV to be allocated from the VG such that it includes
blocks from a previous LV.  So in essence, if somebody were to sit there
in a cloud env and just create volumes and read the blocks over and over
and over, they could gather some previous or other tenants' data (or pieces
of it at any rate).  Wiping is definitely the right thing to do if you're in
an env where you need some level of security between tenants.  There are other
ways to solve it of course, but this is what we've got.




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread Preston L. Bannister
On Thu, Oct 23, 2014 at 3:04 PM, John Griffith john.griffi...@gmail.com
wrote:

The debate about whether to wipe LV's pretty much massively depends on the
 intelligence of the underlying store. If the lower level storage never
 returns accidental information ... explicit zeroes are not needed.



 On Thu, Oct 23, 2014 at 3:44 PM, Preston L. Bannister 
 pres...@bannister.us wrote:


 Yes, that is pretty much the key.

 Does LVM let you read physical blocks that have never been written? Or
 zero out virgin segments on read? If not, then dd of zeroes is a way of
 doing the right thing (if *very* expensive).


 Yeah... so that's the crux of the issue on LVM (Thick).  It's quite
 possible for a new LV to be allocated from the VG and a block from a
 previous LV can be allocated.  So in essence if somebody were to sit there
 in a cloud env and just create volumes and read the blocks over and over
 and over they could gather some previous or other tenants data (or pieces
 of it at any rate).  It's def the right thing to do if you're in an env
 where you need some level of security between tenants.  There are other
 ways to solve it of course but this is what we've got.



Has anyone raised this issue with the LVM folks? Returning zeros on
unwritten blocks would require a bit of extra bookkeeping, but would be a lot
more efficient overall.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-23 Thread Jorge Miramontes
Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more insight into you usage requirements? Also, I'd like to clarify a few
points related to using logging.

I am advocating that logs be used for multiple purposes, including
billing. Billing requirements are different from connection logging
requirements. However, connection logging is a very accurate mechanism to
capture billable metrics and thus is related. My vision for this is
something like the following:

- Capture logs in a scalable way (i.e. capture logs and put them on a
separate scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and
send them on their merry way to ceilometer or whatever service an operator
will be using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything
from indefinitely to not at all. Rackspace is planning on keeping them for
a certain period of time for the following reasons:

A) We have connection logging as a planned feature. If a customer turns
on the connection logging feature for their load balancer it will already
have a history. One important aspect of this is that customers (at least
ours) tend to turn on logging after they realize they need it (usually
after a tragic lb event). By already capturing the logs I'm sure customers
will be extremely happy to see that there are already X days worth of logs
they can immediately sift through.
B) Operators and their support teams can leverage logs when providing
service to their customers. This is huge for finding issues and resolving
them quickly.
C) Albeit a minor point, building support for logs from the get-go
mitigates capacity management uncertainty. My example earlier was the
extreme case of every customer turning on logging at the same time. While
unlikely, I would hate to manage that!

I agree that there are other ways to capture billing metrics but, from my
experience, those tend to be more complex than what I am advocating and
without the added benefits listed above. An understanding of HP's desires
on this matter will hopefully get this to a point where we can start
working on a spec.

Cheers,
--Jorge

P.S. Real-time stats is a different beast and I envision there being an
API call that returns real-time data such as this ==
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.
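
For example, HAProxy's stats socket already exposes the per-listener counters
such an API could surface; a hedged sketch (the socket path is
deployment-specific and assumed here):

    echo "show stat" | socat stdio /var/lib/haproxy/stats
    # the CSV output includes scur/stot (current/total sessions) and bin/bout (bytes in/out)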


From:  Eichberger, German german.eichber...@hp.com
Reply-To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:  Wednesday, October 22, 2014 2:41 PM
To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements


Hi Jorge,
 
Good discussion so far + glad to have you back :)
 
I am not a big fan of using logs for billing information since ultimately
(at least at HP) we need to pump it into ceilometer. So I am envisioning
either having the amphora (via a proxy) pump it straight into that system,
or collecting it on the controller and pumping it from there.
 
Allowing/enabling logging creates some requirements on the hardware,
mainly that it can handle the IO coming from logging. Some operators
might choose to hook up very cheap, low-performing disks which might not
be able to deal with the log traffic. So I would suggest some rate
limiting on the log output to help with that.

 
Thanks,
German
 
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]

Sent: Wednesday, October 22, 2014 6:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements


 
Hey Stephen (and Robert),

 

For real-time usage I was thinking something similar to what you are
proposing. Using logs for this would be overkill IMO so your suggestions
were what I was
 thinking of starting with.

 

As far as storing logs is concerned I was definitely thinking of
offloading these onto separate storage devices. Robert, I totally hear
you on the scalability
 part as our current LBaaS setup generates TB of request logs. I'll start
planning out a spec and then I'll let everyone chime in there. I just
wanted to get a general feel for the ideas I had mentioned. I'll also
bring it up in today's meeting.

 

Cheers,

--Jorge




 

From:
Stephen Balukoff sbaluk...@bluebox.net
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, October 22, 2014 4:04 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

 

Hi Jorge!

 

Welcome back, eh! You've been missed.

 

Anyway, I just wanted to say that your proposal sounds great to me, and
it's good to finally be closer to having 

Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread Chris Friesen

On 10/23/2014 04:24 PM, Preston L. Bannister wrote:

On Thu, Oct 23, 2014 at 3:04 PM, John Griffith john.griffi...@gmail.com
mailto:john.griffi...@gmail.com wrote:

The debate about whether to wipe LV's pretty much massively
depends on the intelligence of the underlying store. If the
lower level storage never returns accidental information ...
explicit zeroes are not needed.

On Thu, Oct 23, 2014 at 3:44 PM, Preston L. Bannister
pres...@bannister.us mailto:pres...@bannister.us wrote:


Yes, that is pretty much the key.

Does LVM let you read physical blocks that have never been
written? Or zero out virgin segments on read? If not, then dd
of zeroes is a way of doing the right thing (if *very* expensive).

Yeah... so that's the crux of the issue on LVM (Thick).  It's quite
possible for a new LV to be allocated from the VG and a block from a
previous LV can be allocated.  So in essence if somebody were to sit
there in a cloud env and just create volumes and read the blocks
over and over and over they could gather some previous or other
tenants data (or pieces of it at any rate).  It's def the right
thing to do if you're in an env where you need some level of
security between tenants.  There are other ways to solve it of
course but this is what we've got.



Has anyone raised this issue with the LVM folk? Returning zeros on
unwritten blocks would require a bit of extra bookkeeping, but a lot
more efficient overall.


For Cinder volumes, I think that if you have new enough versions of 
everything you can specify lvm_type = thin and it will use thin 
provisioning.  Among other things this should improve snapshot 
performance and also avoid the need to explicitly wipe on delete (since 
the next user of the storage will be provided zeros for a read of any 
page it hasn't written).
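
A minimal cinder.conf sketch of that setup (the backend section name is an
assumption; option names as documented for the LVM driver around Juno):

    [lvmdriver-1]
    # thin-provisioned LVs: unwritten extents read back as zeros
    lvm_type = thin
    # the explicit wipe on delete is then unnecessary
    volume_clear = none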


As far as I know this is not supported for ephemeral storage.

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] Networking API Create network missing Request parameters

2014-10-23 Thread Danny Choi (dannchoi)
Hi,

In neutron, a user with the “admin” role can specify the provider network
parameters when creating a network.

—provider:network_type
—provider:physical_network
—provider:segmentation_id


localadmin@qa4:~/devstack$ neutron net-create test-network 
--provider:network_type vlan --provider:physical_network physnet1 
--provider:segmentation_id 400

Created a new network:

+---+--+

| Field | Value|

+---+--+

| admin_state_up| True |

| id| 389caa09-da54-4713-b869-12f7389cb9c6 |

| name  | test-network |

| provider:network_type | vlan |

| provider:physical_network | physnet1 |

| provider:segmentation_id  | 400  |

| router:external   | False|

| shared| False|

| status| ACTIVE   |

| subnets   |  |

| tenant_id | 92edf0cd20bf4085bb9dbe1b9084aadb |

+---+--+
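
For reference, the equivalent raw API request does accept these attributes (they
come from the provider extension; the endpoint and token below are placeholders):

    curl -s -X POST http://controller:9696/v2.0/networks \
      -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
      -d '{"network": {"name": "test-network",
                       "provider:network_type": "vlan",
                       "provider:physical_network": "physnet1",
                       "provider:segmentation_id": 400}}'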

However, the Networking API v2.0 
(http://developer.openstack.org/api-ref-networking-v2.html) “Create network”
does not list them as Request parameters.

Is this a print error?

Thanks,
Danny
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Networking API Create network missing Request parameters

2014-10-23 Thread Mathieu Gagné

On 2014-10-23 7:00 PM, Danny Choi (dannchoi) wrote:


In neutron, user with “admin” role can specify the provider network
parameters when creating a network.

—provider:network_type
—provider:physical_network
—provider:segmentation_id

localadmin@qa4:~/devstack$ neutron net-create test-network
--provider:network_type vlan --provider:physical_network physnet1
--provider:segmentation_id 400

However, the Networking API v2.0
(http://developer.openstack.org/api-ref-networking-v2.html) “Create network”
does not list them as Request parameters.

Is this a print error?



I see them under the Networks multiple provider extension (networks) 
section. [1]


Open the detail for Create network with multiple segment mappings to 
see them.


Is this what you were looking for?

[1] 
http://developer.openstack.org/api-ref-networking-v2.html#network_multi_provider-ext


--
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] [Summit] proposed item for the crossproject and/ or Nova meetings in the Design summit

2014-10-23 Thread Elzur, Uri
Today, OpenStack makes placement decisions mainly based on Compute demands
(the Scheduler is part of Nova). It also uses some info provided about the
platform's Compute capabilities. But for a given application (consisting of some
VMs, some network appliances, some storage, etc.), Nova/Scheduler has no way to
figure out the relative placement of network devices (virtual appliances, SFC)
and/or storage devices (which are also network-attached in many cases) in
reference to the Compute elements. This makes it harder to provide SLAs and to
support certain policies (e.g. HA, keeping all of these elements within a
physical boundary of your choice or within a given physical network boundary,
and guaranteeing storage proximity, for example). It also makes it harder to
optimize resource utilization levels, which increases cost and may cause
OpenStack to be less competitive on TCO.

Another aspect of the issue is that, in order to lower the cost per unit of
compute (or, better said, per unit of application), it is essential to pack
tighter. This increases infrastructure utilization but also makes interference
a more important phenomenon (aka noisy neighbor). SLA requests, SLA guarantees,
and placement based on the ability to provide the desired SLA are required.

We'd like to suggest moving a bit faster on making OpenStack a more compelling
stack for Compute/Network/Storage, capable of supporting Telco/NFV and other
usage models, and creating the foundation for a very low cost platform that is
more competitive with large cloud deployments.

The concern is that any scheduler change will take a long time. Folks closer to
the Scheduler work have already pointed out that we first need to stabilize the
API between Nova and the Scheduler before we can talk about a split (e.g. Gantt).
So it may take till late in 2016 (best case?) to get this kind of broader,
application-level functionality into the OpenStack scheduler.

We'd like to bring it up at the coming design summit. Where do you think it
needs to be discussed: the cross-project track? The Scheduler discussion? Other?

I've just added a proposed item 17.1 to
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics:
"present Application's Network and Storage requirements, coupled with
infrastructure capabilities and status (e.g. up/dn, utilization levels) and
placement policy (e.g. proximity, HA) to get optimized placement decisions
accounting for all application elements (VMs, virt Network appliances, Storage)
vs. Compute only"


Thx

Uri (Oo-Ree)
C: 949-378-7568
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Jeremy Stanley
On 2014-10-23 17:18:04 -0400 (-0400), Doug Hellmann wrote:
 I think we have to actually wait for M, don’t we (K & L represents
 1 year where J is supported, M is the first release where J is not
 supported and 2.6 can be fully dropped).
[...]

Roughly speaking, probably. It's more accurate to say we need to
keep it until stable/juno reaches end of support, which won't
necessarily coincide exactly with any particular release cycle
ending (it will instead coincide with whenever the stable branch
management team decides the final 2014.2.x point release is, which I
don't think has been settled quite yet).
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Deprecation of Python 2.6 CI Testing

2014-10-23 Thread Clark Boylan
Hello,

At the Atlanta summit there was a session on removing python2.6
testing/support from the OpenStack Kilo release [0]. The Infra team is
working on enacting this change in the near future.

The way that this will work is python26 jobs will be removed from
running on master and feature branches of projects that have
stable/icehouse and/or stable/juno branches. The python26 jobs will
still continue to run against the stable branches. Any project that is a
library consumed by stable releases but does not have stable branches
will have python26 run against that project's master branch. This is
necessary to ensure we don't break backward compatibility with stable
releases.

This essentially boils down to: no python26 jobs against server project
master branches, but python26 jobs continue to run against stable
branches. Python-*client and oslo projects[1] will continue to have
python26 jobs run against their master branches. All other projects will
have python26 jobs completely removed (including stackforge).

If you are a project slated to have python26 removed and would prefer to
continue testing python26 that is doable, but we ask that you propose a
change atop the removal change [2] that adds python26 back to your
project. This way it is clear through git history and review that this
is a desired state. Also, this serves as a warning to the future where
we will drop all python26 jobs when stable/juno is no longer supported.
At that point we will stop building slaves capable of running python26
jobs.

Rough timeline for making these changes is early next week for OpenStack
projects. Then at the end of November (November 30th) we will make the 
changes to stackforge. This should give us plenty of time to work out 
which stackforge projects wish to continue testing python26.

[0] https://etherpad.openstack.org/p/juno-cross-project-future-of-python
[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-October/048999.html
[2] https://review.openstack.org/129434

Let me or the Infra team know if you have any questions,
Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Summit] proposed item for the crossproject and/ or Nova meetings in the Design summit

2014-10-23 Thread Jay Pipes

On 10/23/2014 07:57 PM, Elzur, Uri wrote:

Today, OpenStack makes placement decision mainly based on Compute
demands (Scheduler is part of Nova). It also uses some info provided
about platform’s Compute capabilities. But for a given application
(consists of some VMs, some Network appliances, some storage etc),
Nova/Scheduler has no way to figure out relative placement of Network
devices (virtual appliances, SFC) and/or Storage devices (which is also
network born in many cases) in reference to the Compute elements. This
makes it harder to provide SLA, support certain policies (e.g. HA or
keeping all of these elements within a physical boundary of your choice,
or within a given network physical boundary and guarantee storage
proximity, for example. It also makes it harder to optimize resource
utilization level, which increases the cost and may cause Openstack to
be less competitive on TCO.

Another aspect of the issue, is that in order, to lower the cost per
unit of compute (or said better per unit of Application), it is
essential to pack tighter. This increases infrastructure utilization but
also makes interference a more important phenomenon (aka Nosy neighbor).
SLA requests, SLA guarantees and placement based on ability to provide
desired SLA are required.

We’d like to suggest moving a bit faster on making OpenStack a more
compelling stack for Compute/Network/Storage, capable of supporting
Telco/NFV and other usage models, and creating the foundation for
providing very low cost platform, more competitive with large cloud
deployment.


How do you suggest moving faster?

Also, when you say things like more competitive with large cloud 
deployment you need to tell us what you are comparing OpenStack to, and 
what cost factors you are using. Otherwise, it's just a statement with 
no context.



The concern is that any scheduler change will take long time. Folks
closer to the Scheduler work, have already pointed out we first need to
stabilize the API between Nova and the Scheduler, before we can talk
about a split (e.g. Gantt). So it may take till  late in 2016 (best
case?), to get this kind of broader Application level functionality in
the OpenStack scheduler .


I'm not entirely sure where late in 2016 comes from? Could you elaborate?


We’d like to bring it up in the coming design summit. Where do you think
it needs to be discussed: cross project tack? Scheduler discussion? Other?

I’ve just added a proposed item 17.1 to the
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

1.

2.“present Application’s Network and Storage requirements, coupled with
infrastructure capabilities and status (e.g. up/dn


This is the kind of thing that was nixed as an idea last go around with 
the nic-state-aware-scheduler:


https://review.openstack.org/#/c/87978/

You are coupling service state monitoring with placement decisions, and 
by doing so, you will limit the scale of the system considerably. We 
need improvements to our service state monitoring, for sure, including 
the ability to have much more fine-grained definition of what a service 
is. But I am 100% against adding the concept of checking service state 
*during* placement decisions.


Service state monitoring (it's called the servicegroup API in Nova) can 
and should notify the scheduler of important changes to the state of 
resource providers, but I'm opposed to making changes to the scheduler 
that would essentially make a placement decision and then immediately go 
and check a link for UP/DOWN state before finalizing the claim of 
resources on the resource provider.



, utilization levels) and placement policy (e.g. proximity, HA)


I understand proximity (affinity/anti-affinity), but what does HA have 
to do with placement policy? Could you elaborate a bit more on that?


 to get optimized placement

decisions accounting for all application elements (VMs, virt Network
appliances, Storage) vs. Compute only”


Yep. These are all simply inputs to the scheduler's placement decision 
engine. We need:


 a) A way of providing these inputs to the launch request without 
polluting a cloud user's view of the cloud -- remember we do NOT want 
users of the Nova API to essentially need to understand the exact layout 
of the cloud provider's datacenter. That's definitively anti-cloudy :) 
So, we need a way of providing generic inputs to the scheduler that the 
scheduler can translate into specific inputs because the scheduler would 
know the layout of the datacenter...


 b) A simple condition engine that can combine those inputs (requested
proximity to a storage cluster used by applications running in the
instance, for example) with information the scheduler can query about
the topology of the datacenter's network and storage.
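
As a rough illustration of a) and b) together, the sketch below shows a
generic, user-facing hint being translated into a concrete host set by
something that knows the topology; the hint name and the topology interface
are made up for the example, not existing Nova APIs:

def translate_hints(hints, topology):
    """Turn abstract hints into a set of acceptable hosts (None means no constraint)."""
    cluster = hints.get('near_storage_cluster')  # hypothetical generic hint
    if cluster is None:
        return None
    # topology.hosts_near_storage() stands in for whatever topology
    # information the scheduler can query about the datacenter.
    return set(topology.hosts_near_storage(cluster))


def filter_hosts(all_hosts, hints, topology):
    acceptable = translate_hints(hints, topology)
    if acceptable is None:
        return list(all_hosts)
    return [h for h in all_hosts if h in acceptable]

The cloud user only ever says something like near_storage_cluster=NAME;
the mapping to racks, switches or hosts stays on the operator's side.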


Work on b) involves the following foundational blueprints:

https://review.openstack.org/#/c/127609/
https://review.openstack.org/#/c/127610/
https://review.openstack.org/#/c/127612/

Looking forward to a 

Re: [openstack-dev] [api] Networking API Create network missing Request parameters

2014-10-23 Thread Anne Gentle
On Thu, Oct 23, 2014 at 6:12 PM, Mathieu Gagné mga...@iweb.com wrote:

 On 2014-10-23 7:00 PM, Danny Choi (dannchoi) wrote:


 In neutron, a user with the “admin” role can specify the provider network
 parameters when creating a network.

 --provider:network_type
 --provider:physical_network
 --provider:segmentation_id

 localadmin@qa4:~/devstack$ neutron net-create test-network
 --provider:network_type vlan --provider:physical_network physnet1
 --provider:segmentation_id 400

 However, the Networking API v2.0
 (http://developer.openstack.org/api-ref-networking-v2.html) “Create
 network”
 does not list them as Request parameters.

 Is this a documentation error?


 I see them under the “Networks multiple provider extension (networks)”
 section. [1]

 Open the detail for “Create network with multiple segment mappings” to see
 them.

 Is this what you were looking for?

 [1] http://developer.openstack.org/api-ref-networking-v2.html#network_multi_provider-ext
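
For what it's worth, here is a quick sketch of the corresponding API request
bodies using python-neutronclient (the credentials and endpoint below are
placeholders); the first form mirrors the CLI example above, the second uses
the multi-provider "segments" list that [1] documents:

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# Single-provider form, equivalent to the net-create command above.
neutron.create_network({'network': {
    'name': 'test-network',
    'provider:network_type': 'vlan',
    'provider:physical_network': 'physnet1',
    'provider:segmentation_id': 400}})

# Multi-provider form, i.e. "Create network with multiple segment mappings".
neutron.create_network({'network': {
    'name': 'test-network-multi',
    'segments': [{'provider:network_type': 'vlan',
                  'provider:physical_network': 'physnet1',
                  'provider:segmentation_id': 400}]}})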


We have a couple of doc bugs on this:

https://bugs.launchpad.net/openstack-api-site/+bug/1373418

https://bugs.launchpad.net/openstack-api-site/+bug/1373423

Hope that helps -- please triage those bugs if you find out more.

Thanks,
Anne


 --
 Mathieu

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-23 Thread Brian Haley
On 10/23/14 6:22 AM, Elena Ezhova wrote:
 Hi!
 
 I am working on the bug “ping still working once connected even after
 related security group rule is
 deleted” (https://bugs.launchpad.net/neutron/+bug/1335375). The gist of
 the problem is the following: when we delete a security group rule, the
 corresponding rule in iptables is also deleted, but the connection that
 was allowed by that rule is not destroyed.
 The reason for this behavior is that in iptables we have the following
 structure for the chain that filters input packets on an interface of an
 instance:
snip

Like Miguel said, there's no easy way to identify this on the compute
node since neither the MAC nor the interface are going to be in the
conntrack command output.  And you don't want to drop the wrong tenant's
connections.

Just wondering: if you remove the conntrack entries using the IP/port
from the router namespace, does it drop the connection?  Or will it just
start working again on the next packet?  It doesn't help for VM-to-VM
packets, but those are probably less interesting.  It's just my
first guess.
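
For anyone who wants to try that experiment, a rough sketch follows; the
namespace name, address and port are placeholders, and this is not existing
Neutron agent code:

import subprocess

def kill_conntrack_entries(router_ns, vm_ip, dport, proto='tcp'):
    # Delete conntrack entries for flows destined to the VM, run from
    # inside the router namespace. conntrack exits non-zero when nothing
    # matched, so the exit code is just handed back to the caller.
    cmd = ['ip', 'netns', 'exec', router_ns,
           'conntrack', '-D', '-d', vm_ip, '-p', proto, '--dport', str(dport)]
    return subprocess.call(cmd)

if __name__ == '__main__':
    kill_conntrack_entries('qrouter-example', '10.0.0.5', 22)

Then watch whether the existing session actually drops, or whether the next
packet simply re-creates the entry (which would answer the second question).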

-Brian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] How can we get more feedback from users?

2014-10-23 Thread Angus Salkeld
Hi all

I have heard some grumblings about usability issues with Heat
templates/client/etc., and wanted a way for users to come and give us feedback
easily (a low barrier). I started an etherpad
(https://etherpad.openstack.org/p/heat-useablity-improvements) -- the first
win is that “usability” is spelt wrong in the URL :-O

We now have some great feedback there in a very short time; most of it we
should be able to address.

This led me to think: should OpenStack have a more general mechanism for
users to provide feedback? The idea is that this is not for bugs or support,
but for users to express pain points, request features, and ask for docs/howtos.

It's not easy to improve your software unless you are listening to your
users.

Ideas?

-Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev