[openstack-dev] [Horizon] Rethinking the launch-instance wizard model

2015-03-10 Thread Richard Jones
Hi folks,

Currently the launch instance model file does all the fetching of various
bits of data. Combined with all of the controllers also being loaded at
wizard startup, this results in some very difficult synchronisation issues*.

An issue I've run into is the initialisation of the controller based on
model data. Specifically, loading the allocated and available lists
into the security groups transfer table. I can load a reference to the
model securityGroups array as the available set, but since that data is
generally not loaded (by some other code) when the controller is setting
up, I can't also select the first group in the array as the default group
in allocated.

So, I propose that the model data for a particular pane be loaded *by that
pane*, so that pane can then attach a callback to run once the data is
loaded, to handle situations like this (which will be common, IIUC). Or the
model needs to provide promises for the pane controllers to attach
callbacks to.


  Richard

* one issue is the problem of the controllers running for the life of the
wizard and not knowing when they're active (having them only be temporarily
active would solve the issue of having to watch the transfer tables for
changes of data - we could just read the allocated lists when the
controller exits).


Re: [openstack-dev] [murano] how can deploy environment with useEnvironmentNetwork=false

2015-03-10 Thread Ekaterina Chernova
Hi Choe!

Why do you want to set this option to false?
That way the new instance will not be connected to the environment network.

Do you have any issues with the deployment?

If you still want to try the deployment without the default network handling,
you need to set the default value of that parameter to 'false' in the Instance
class [1] and re-import the Murano core library.

You can contact us at #murano channel.

[1] -
https://github.com/stackforge/murano/blob/master/meta/io.murano/Classes/resources/Instance.yaml

Regards,
Kate.

On Mon, Mar 9, 2015 at 4:26 PM, Choe, Cheng-Dae white...@gmail.com wrote:

 hi there

 In Murano, useEnvironmentNetwork=true is the default when deploying an environment.

 How can I deploy with useEnvironmentNetwork=false?

 I'm currently using the sample Apache web server package


 --
 Choe, Cheng-Dae
 Blog: http://blog.woosum.net





Re: [openstack-dev] [Fuel] Recent issues with our review workflow

2015-03-10 Thread Tomasz Napierala

 On 09 Mar 2015, at 18:21, Ryan Moe r...@mirantis.com wrote:
 
 Hi All,
 
 I've noticed a few times recently where reviews have been abandoned by people 
 who were not the original authors. These reviews were only days old and there 
 was no prior notice or discussion. This is both rude and discouraging to 
 contributors. Reasons for abandoning should be discussed on the review and/or 
 in email before any action is taken.

Hi Ryan,

I was trying to find examples, and the only one I see is:
https://review.openstack.org/#/c/152674/

I spoke to Bogdan and he agreed it was not the proper way to do it, but they were
in a rush - I know, that does not really explain anything.

Do you have any other examples? I'd like to clarify them.

Regards,
-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com









[openstack-dev] [vmware] Alternate meeting times

2015-03-10 Thread Gary Kotton
Hi,
As mentioned a few weeks ago, we would like to have alternate meeting times for 
the VMware driver(s) meeting. So, for all interested, let's meet tomorrow at 10:00 
UTC on #openstack-meeting-4.
Thanks
Gary




Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-10 Thread Attila Fazekas




- Original Message -
 From: Jay Pipes jaypi...@gmail.com
 To: openstack-dev@lists.openstack.org
 Sent: Wednesday, March 4, 2015 9:22:43 PM
 Subject: Re: [openstack-dev] [nova] blueprint about multiple workers 
 supported in nova-scheduler
 
 On 03/04/2015 01:51 AM, Attila Fazekas wrote:
  Hi,
 
  I wonder what is the planned future of the scheduling.
 
   The scheduler does a lot of queries touching many fields,
   which is CPU expensive when you are using sqlalchemy-orm.
   Has anyone tried to switch those operations to sqlalchemy-core?
 
 Actually, the scheduler does virtually no SQLAlchemy ORM queries. Almost
 all database access is serialized from the nova-scheduler through the
 nova-conductor service via the nova.objects remoting framework.
 

It does not help you.

   The scheduler does a lot of things in the application, like filtering,
   which could be done at the DB level more efficiently. Why is it not done
   on the DB side?
 
 That's a pretty big generalization. Many filters (check out NUMA
 configuration, host aggregate extra_specs matching, any of the JSON
 filters, etc) don't lend themselves to SQL column-based sorting and
 filtering.
 

What a basic SQL query can do and what the limits of SQL are
are two different things.
Even if you do not move everything to the DB side,
the dataset the application needs to deal with could be reduced.
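
As a rough sketch of that idea (illustrative only, not Nova code; the table and
column names below are assumptions), even a simple WHERE clause shrinks the set
of rows the Python-side filters have to iterate over:

import sqlalchemy as sa

metadata = sa.MetaData()
compute_nodes = sa.Table(
    "compute_nodes", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("disabled", sa.Boolean),
    sa.Column("free_ram_mb", sa.Integer),
    sa.Column("vcpus_free", sa.Integer),
)

def candidate_hosts(conn, ram_mb, vcpus):
    # Only rows that can possibly satisfy the request reach the
    # application-side filters (NUMA, JSON filters, ...).
    query = (sa.select([compute_nodes])
             .where(compute_nodes.c.disabled == sa.false())
             .where(compute_nodes.c.free_ram_mb >= ram_mb)
             .where(compute_nodes.c.vcpus_free >= vcpus))
    return conn.execute(query).fetchall()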

   There are use cases where the scheduler would need to know even more data.
   Is there a plan for keeping `everything` in every scheduler process's memory
   up-to-date?
   (Maybe ZooKeeper?)
 
 Zookeeper has nothing to do with scheduling decisions -- only whether or
 not a compute node's service descriptor is active or not. The end goal
 (after splitting the Nova scheduler out into Gantt hopefully at the
 start of the L release cycle) is to have the Gantt database be more
 optimized to contain the resource usage amounts of all resources
 consumed in the entire cloud, and to use partitioning/sharding to scale
 the scheduler subsystem, instead of having each scheduler process handle
 requests for all resources in the cloud (or cell...)
 
What ZooKeeper is currently optionally used for
and what it could be used for are two very different things.
Resource tracking with it is possible.

   The opposite way would be to move most operations to the DB side,
   since the DB already knows everything.
   (stored procedures?)
 
 See above. This assumes that the data the scheduler is iterating over is
 well-structured and consistent, and that is a false assumption.

With stored procedures you can do almost anything,
and in many cases they are more readable than a complex query.

 
 Best,
 -jay
 
  Best Regards,
  Attila
 
 
  - Original Message -
  From: Rui Chen chenrui.m...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Sent: Wednesday, March 4, 2015 4:51:07 AM
  Subject: [openstack-dev] [nova] blueprint about multiple workers supported
 in nova-scheduler
 
  Hi all,
 
   I want to make it easy to launch a bunch of scheduler processes on a host;
   multiple scheduler workers will make use of the host's multiple processors
   and enhance the performance of nova-scheduler.
  
   I have registered a blueprint and committed a patch to implement it:
   https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support
  
   This patch has been applied in our performance environment and has passed some
   test cases, like concurrently booting multiple instances; so far we have not
   found any inconsistency issues.
  
   IMO, nova-scheduler should be easy to scale horizontally, and multiple workers
   should be supported as an out-of-the-box feature.
 
  Please feel free to discuss this feature, thanks.
 
  Best Regards
 
 
 
 
 
 
 



Re: [openstack-dev] [Fuel] Recent issues with our review workflow

2015-03-10 Thread Bartłomiej Piotrowski
On 03/09/2015 06:21 PM, Ryan Moe wrote:
 Hi All,
 
 I've noticed a few times recently where reviews have been abandoned by
 people who were not the original authors. These reviews were only days
 old and there was no prior notice or discussion. This is both rude and
 discouraging to contributors. Reasons for abandoning should be discussed
 on the review and/or in email before any action is taken.
 
 I would also like to talk about issues with our backporting procedure
 [0]. Over the past few weeks I've seen changes proposed to stable
 branches before the change in master was merged. This serves no purpose
 other than to increase our workload. We also run the risk of
 inconsistency between the same commit on master and stable branches.
 Please, do not propose backports until the change has been merged to master.
 
 [0] 
 https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series
 
 Thanks,
 Ryan
 
 
 

Could we avoid beating around the bush and talk about exact examples of
said behavior?

Best regards,
Bartłomiej



Re: [openstack-dev] [nova] Unshelve Instance Performance Optimization Questions

2015-03-10 Thread Kekane, Abhishek
Hi Devs,

As another alternative, we can use the start/stop APIs instead of shelving/unshelving 
the instance.

API             | CPU/memory released | Disk released                                    | Fast respawning | Notes
----------------+---------------------+--------------------------------------------------+-----------------+------------------------------------------------------
start/stop      | No                  | No                                               | Yes             |
shelve/unshelve | Yes                 | Yes (not released if shelved_offload_time = -1)  | No              | Instance does not respawn faster if booted from image


In order to make unshelve fast enough, we need to preserve the instance root disk 
on the compute node, which I have proposed in the shelve-partial-offload spec [1].

In the case of the start/stop APIs, CPU/memory are not released/reassigned. We can 
modify these APIs to release the CPU and memory while stopping the instance and to 
reassign them while starting the instance. In this case the rescheduling logic also 
needs to be modified to reschedule the instance on a different host if the required 
resources are not available when starting it. This is similar to what I have 
implemented in [2], Improving the performance of unshelve API.

[1] https://review.openstack.org/#/c/135387/
[2] https://review.openstack.org/#/c/150344/

Please let me know your opinion on whether we can modify the start/stop APIs as an 
alternative to the shelve/unshelve APIs.

Thank You,

Abhishek Kekane

From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
Sent: 24 February 2015 12:47
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Unshelve Instance Performance Optimization 
Questions

Hi Duncan,

Thank you for the inputs.

@Community-Members
I want to know if there are any other alternatives to improve the performance 
of unshelve api ((booted from image only).
Please give me your opinion on the same.

Thank You,

Abhishek Kekane



From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
Sent: 16 February 2015 16:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Unshelve Instance Performance Optimization 
Questions

There has been some talk in cinder meetings about making cinder-glance 
interactions more efficient. They are already optimised in some deployments, 
e.g. ceph glance and ceph cinder, and some backends cache glance images so that 
many volumes created from the same image becomes very efficient. (search the 
meeting logs or channel logs for 'public snapshot' to get some entry points 
into the discussions)

I'd like to see more work done on this, and perhaps re-examine a cinder backend 
to glance. This would give some of what you're suggesting (particularly fast, 
low traffic un-shelve), and there is more that can be done to improve that 
performance, particularly if we can find a better performing generic CoW 
technology than QCOW2.
As suggested in the review, in the short term you might be better experimenting 
with moving to boot-from-volume instances if you have a suitable cinder 
deployed, since that gives you some of the performance improvements already.

On 16 February 2015 at 12:10, Kekane, Abhishek 
abhishek.kek...@nttdata.com wrote:
Hi Devs,

Problem Statement: Performance and storage efficiency of shelving/unshelving 
instance booted from image is far worse than instance booted from volume.

When you unshelve hundreds of instances at the same time, instance spawning 
time varies and it mainly depends on the size of the instance snapshot and
the network speed between glance and nova servers.

If you have configured file store (shared storage) as a backend in Glance for 
storing images/snapshots, then it's possible to improve the performance of
unshelve instance dramatically by configuring nova.image.download.FileTransfer 
in nova. In this case, it simply copies the instance snapshot as if it is
stored on the local filesystem of the compute node. But then again in this 
case, it is observed the network traffic between shared storage servers and
nova increases enormously resulting in slow spawning of the instances.

I would like to gather some thoughts about how we can improve the performance 
of unshelve api (booted from image only) in terms of downloading large
size instance snapshots from glance.

I have proposed a nova-specs [1] to address this performance issue. Please take 
a look at it.

During the last nova mid-cycle summit, Michael Still 
(https://review.openstack.org/#/q/owner:mikal%2540stillhq.com+status:open,n,z) 
has suggested alternative solutions to tackle this issue.

Storage solutions like Ceph (software based) and NetApp (hardware based) support 
exposing images from glance to nova-compute and cinder-volume with a
copy-on-write feature. This way there will be no need to download the instance 
snapshot, and the unshelve API will be much faster than getting it
from glance.

Do you think the above performance issue should be handled in the OpenStack 
software as described in nova-specs [1] or storage solutions like
ceph/NetApp should be used in production environment? Apart from ceph/NetApp 
solutions, are there any other options available in 

Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-10 Thread Christopher Yeoh
On Mon, 09 Mar 2015 16:14:21 -0400
Sean Dague s...@dague.net wrote:

 On 03/09/2015 03:37 PM, Jay Pipes wrote:
  On 03/08/2015 08:10 AM, Alex Xu wrote:
  Thanks for Jay point this out! If we have agreement on this and
  document it, that will be great for guiding developer how to add
  new API.
 
  I know we didn't want extension for API. But I think we still
  need modularity. I don't think we should put everything in a single
  file, that file will become huge in the future and hard to
  maintenance.
  
  I don't think everything should be in a single file either. In fact,
  I've never advocated for that.
  
  We can make the 'extension' not configurable. Replace 'extension'
  with another name, deprecate the extension info api int he
  future But that is not mean we should put everything in a file.
  
  I didn't say that in my email. I'm not sure where you got the
  impression I want to put everything in one file?
  
  For modularity, we need define what should be in a separated
  module(it is extension now.) There are three cases:
 
  1. Add new resource
   This is totally worth to put in a separated module.
  
  Agreed.
  
  2. Add new sub-resource
   like server-tags, I prefer to put in a separated module, I
  don't think put another 100 lines code in the servers.py is good
  choice.
  
  Agreed, which is exactly what I said in my email:
  
  Similarly, new microversion API functionality should live in a
  module, as a top-level (or subcollection) Controller in
  /nova/api/openstack/compute/, and should not be in the
  /nova/api/openstack/compute/plugins/ directory. Why? Because it's
  not a plugin.
  

Ok so I'm getting confused about what we're disagreeing about then.

However, the first microversion change
https://review.openstack.org/#/c/140313/32 is one case where we didn't
need to create a new extension, relying only on microversions to add a
new parameter to the response, whereas server tags does add a new
controller (os-server-tags), which is non-trivial, so I think it needs
one. Do we disagree about that?

BTW, in a situation where (I think) we are saying we are not going
to do any work for third parties to add their own API plugins, and we have a
well-planned API, we don't need the prefixes on parameter names, as
there won't be name clashes that we fail to notice during testing. And
we certainly don't need the os- prefix in the plugin alias, but we do
need the aliases to be unique across the API, I believe, because of the
way we store information about them.

  3. extend attributes and methods for a existed resource
  like add new attributes for servers, we can choice one of
  existed module to put it in. Just like this patch
  https://review.openstack.org/#/c/155853/
  But for servers-tags, it's sub-resource, we can put it in
  its-own module.
  
  Agreed, and that's what I put in my email.
  
  If we didn't want to support extension right now, we can begin
  from not show servers-tags in extension info API first. That means
  extension info is freeze now. We deprecated the extension info api
  in later version.


It can be suppressed from /extensions by adding it to the suppress list
in the extensions code. That is probably a good idea, to stop v2.1+microversions
REST API users from accidentally relying on it.
  
  I don't understand what you're saying here. Could you elaborate?
  What I am asking for is for new functionality (like the server-tags
  subcollection resource), just add a new module called
  /nova/api/openstack/compute/server_tags.py, create a Controller
  object in that file with the new server tags resource, and don't
  use any of the API extensions framework whatsoever.
  
  In addition to that, for the changes to the main GET
  /servers/{server_id} resource, use microversions to decorate the
  /nova/api/openstack/compute/servers.py.Controller.show() method for
  2.4 and add a tags key to the dict (note: NOT a
  os-server-tags:tags key) returned by GET /servers/{server_id}. No
  use of API extensions needed.
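
As a toy, self-contained illustration of the idea described above (this is not
Nova's actual microversion framework; the request object and version handling
below are simplified assumptions), the controller method simply gates the new
plain 'tags' key on the request version:

class Request(object):
    def __init__(self, version):
        # e.g. "2.4" -> (2, 4)
        self.api_version = tuple(int(p) for p in version.split('.'))

class ServersController(object):
    def show(self, req, server_id):
        body = {'server': {'id': server_id, 'name': 'demo'}}
        if req.api_version >= (2, 4):
            # plain 'tags', not an 'os-server-tags:tags' prefixed key
            body['server']['tags'] = ['web', 'production']
        return body

print(ServersController().show(Request('2.3'), 'abc'))  # no tags key
print(ServersController().show(Request('2.4'), 'abc'))  # tags included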



So that's doable, but I think if we compare the two, wsgi.extends
is cleaner and less code, and we have to have the separate module file
for the controller anyway. We can discuss this more later.

Incidentally, as has been mentioned before, we need new names for the
API, and where the files live needs cleaning up. For example, v3
and v2.1 need to be worked out of the directory paths. This cleanup not
only involves but directly affects all the related unit and
functional tests. Nor should we have contrib, or have v2 live directly
in nova/api/openstack/compute. But we also need a name for v2.1
microversions and should spend some time on that (does 'api modules' work
for anyone?).

I think this discussion indicates we should start with a bit of planning
first rather than just jump in and start shuffling things around now.
Work out what we want the final thing to look like so we can see what
the dependencies look like and minimise the number of times 

Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-10 Thread Nikola Đipanov
On 03/06/2015 03:19 PM, Attila Fazekas wrote:
 Looks like we need some kind of _per compute node_ mutex in the critical 
 section,
 multiple scheduler MAY be able to schedule to two compute node at same time,
 but not for scheduling to the same compute node.
 
 If we don't want to introduce another required component or
 reinvent the wheel there are some possible trick with the existing globally 
 visible
 components like with the RDMS.
 
 `Randomized` destination choose is recommended in most of the possible 
 solutions,
 alternatives are much more complex.
 
 One SQL example:
 
 * Add `sched_cnt`, defaul=0, Integer field; to a hypervisors related table.
 
 When the scheduler picks one (or multiple) node, he needs to verify is the 
 node(s) are 
 still good before sending the message to the n-cpu.
 
 It can be done by re-reading the ONLY the picked hypervisor(s) related data.
 with `LOCK IN SHARE MODE`.
 If the destination hyper-visors still OK:
 
 Increase the sched_cnt value exactly by 1,
 test is the UPDATE really update the required number of rows,
 the WHERE part needs to contain the previous value.
 
 You also need to update the resource usage on the hypervisor,
  by the expected cost of the new vms.
 
 If at least one selected node was ok, the transaction can be COMMITed.
 If you were able to COMMIT the transaction, the relevant messages 
  can be sent.
 
 The whole process needs to be repeated with the items which did not passed the
 post verification.
 
 If a message sending failed, `act like` migrating the vm to another host.
 
 If multiple scheduler tries to pick multiple different host in different 
 order,
 it can lead to a DEADLOCK situation.
 Solution: Try to have all scheduler to acquire to Shared RW locks in the same 
 order,
 at the end.
 
 Galera multi-writer (Active-Active) implication:
 As always, retry on deadlock. 
 
 n-sch + n-cpu crash at the same time:
 * If the scheduling is not finished properly, it might be fixed manually,
 or we need to solve which still alive scheduler instance is 
 responsible for fixing the particular scheduling..
 

So if I am reading the above correctly - you are basically proposing to
move claims to the scheduler (we would atomically check if there were
changes since the time we picked the host with the UPDATE .. WHERE using
LOCK IN SHARE MODE (assuming REPEATABLE READS is the used isolation
level) and then updating the usage, a.k.a doing the claim in the same
transaction.
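
For concreteness, the compare-and-swap step being described could look roughly
like the following sketch (illustrative only, not Nova code; the table and
column names are assumptions):

import sqlalchemy as sa

metadata = sa.MetaData()
hypervisors = sa.Table(
    "hypervisors", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("sched_cnt", sa.Integer, nullable=False, server_default="0"),
    sa.Column("free_ram_mb", sa.Integer, nullable=False),
)

def try_claim(conn, host_id, seen_sched_cnt, ram_mb):
    # Succeeds only if nothing changed since this scheduler read the row.
    with conn.begin():
        result = conn.execute(
            hypervisors.update()
            .where(hypervisors.c.id == host_id)
            .where(hypervisors.c.sched_cnt == seen_sched_cnt)
            .where(hypervisors.c.free_ram_mb >= ram_mb)
            .values(sched_cnt=hypervisors.c.sched_cnt + 1,
                    free_ram_mb=hypervisors.c.free_ram_mb - ram_mb))
        # rowcount == 0 means another scheduler won the race: pick again.
        return result.rowcount == 1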

The issue here is that we still have a window between sending the
message, and the message getting picked up by the compute host (or
timing out) or the instance outright failing, so for sure we will need
to ack/nack the claim in some way on the compute side.

I believe something like this has come up before under the umbrella term
of moving claims to the scheduler, and was discussed in some detail on
the latest Nova mid-cycle meetup, but only artifacts I could find were a
few lines on this etherpad Sylvain pointed me to [1] that I am copying here:


* White board the scheduler service interface
 ** note: this design won't change the existing way/logic of reconciling
nova db != hypervisor view
 ** gantt should just return claim ids, not entire claim objects
 ** claims are acked as being in use via the resource tracker updates
from nova-compute
 ** we still need scheduler retries for exceptional situations (admins
doing things outside openstack, hardware changes / failures)
 ** retry logic in conductor? probably a separate item/spec


As you can see - not much to go on (but that is material for a separate
thread that I may start soon).

The problem I have with this particular approach is that while it claims
to fix some of the races (and probably does), it does so by 1) turning
the current scheduling mechanism on its head and 2) not providing any
thought on the trade-offs that it will make. For example, we may get
more correct scheduling in the general case, and the correctness will not
be affected by the number of workers, but how does the fact that we now
do locking DB access on every request fare against the retry mechanism
for some of the more common usage patterns? What is the increased
overhead of calling back to the scheduler to confirm the claim? In the
end - how do we even measure that we are going in the right direction
with the new design?

I personally think that different workloads will have different needs
from the scheduler in terms of response times and tolerance to failure,
and that we need to design for that. So as an example a cloud operator
with very simple scheduling requirements may want to go for the no
locking approach and optimize for response times allowing for a small
number of instances to fail under high load/utilization due to retries,
while some others with more complicated scheduling requirements, or less
tolerance for data inconsistency might want to trade in response times
by doing locking claims in the scheduler. Some similar trade-offs and
how to 

Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-10 Thread Salvatore Orlando
Thanks for bringing up this use case, Miguel - these are the use cases we
need in order to make informed decisions.
Some answers inline.

Salvatore

On 10 March 2015 at 07:53, Miguel Ángel Ajo majop...@redhat.com wrote:

 Thanks to everybody working on this,

 Answers inline:

 On Tuesday, 10 March 2015 at 0:34, Tidwell, Ryan wrote:

  Thanks Salvatore.  Here are my thoughts, hopefully there’s some merit to
 them:



 With implicit allocations, the thinking is that this is where a subnet is
 created in a backward-compatible way with no subnetpool_id and the subnets
 API’s continue to work as they always have.



 In the case of a specific subnet allocation request (create-subnet passing
 a pool ID and specific CIDR), I would look in the pool’s available prefix
 list and carve out a subnet from one of those prefixes and ask for it to be
 reserved for me.  In that case I know the CIDR I’ll be getting up front.
 In such a case, I’m not sure I’d ever specify my gateway using notation
 like 0.0.0.1, even if I was allowed to.  If I know I’ll be getting
 10.10.10.0/24, I can simply pass gateway_ip as 10.10.10.1 and be done
 with it.  I see no added value in supporting that wildcard notation for a
 gateway on a specific subnet allocation.



 In the case of an “any” subnet allocation request (create-subnet passing a
 pool ID, but no specific CIDR), I’m already delegating responsibility for
 addressing my subnet to Neutron.  As such, it seems reasonable to not have
 strong opinions about details like gateway_ip when making the request to
 create a subnet in this manner.



 To me, this all points to not supporting wildcards for gateway_ip and
 allocation_pools on subnet create (even though it found its way into the
 spec).  My opinion (which I think lines up with yours) is that on an any
 request it makes sense to let the pool fill in allocation_pools and
 gateway_ip when requesting an “any” allocation from a subnet pool.  When
 creating a specific subnet from a pool, gateway IP and allocation pools
 could still be passed explicitly by the user.



 -Ryan



 *From:* Salvatore Orlando [mailto:sorla...@nicira.com]
 *Sent:* Monday, March 09, 2015 6:06 AM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [api][neutron] Best API for generating subnets
 from pool



 Greetings!



 Neutron is adding a new concept of subnet pool. To put it simply, it is
 a collection of IP prefixes from which subnets can be allocated. In this
 way a user does not have to specify a full CIDR, but simply a desired
 prefix length, and then let the pool generate a CIDR from its prefixes. The
 full spec is available at [1], whereas two patches are up for review at [2]
 (CRUD) and [3] (integration between subnets and subnet pools).

 While [2] is quite straightforward, I must admit I am not really sure that
 the current approach chosen for generating subnets from a pool might be the
 best one, and I'm therefore seeking your advice on this matter.



 A subnet can be created with or without a pool.

 Without a pool the user will pass the desired cidr:



 POST /v2.0/subnets
 {'network_id': 'meh',
  'cidr': '192.168.0.0/24'}



 Instead, with a pool the user will pass the pool id and desired prefix length:

 POST /v2.0/subnets
 {'network_id': 'meh',
  'prefix_len': 24,
  'pool_id': 'some_pool'}



 The response to the previous call would populate the subnet cidr.

 So far it looks quite good. Prefix_len is a bit of duplicated information,
 but that's tolerable.

 It gets a bit awkward when the user also specifies attributes such as a
 desired gateway ip or allocation pools, as they have to be specified in a
 cidr-agnostic way. For instance:



 POST /v2.0/subnets
 {'network_id': 'meh',
  'gateway_ip': '0.0.0.1',
  'prefix_len': 24,
  'pool_id': 'some_pool'}



 would indicate that the user wishes to use the first address in the range
 as the gateway IP, and the API would return something like this:



 POST /v2.0/subnets
 {'network_id': 'meh',
  'cidr': '10.10.10.0/24',
  'gateway_ip': '10.10.10.1',
  'prefix_len': 24,
  'pool_id': 'some_pool'}



 The problem with this approach is, in my opinion, that attributes such as
 gateway_ip are used with different semantics in requests and responses;
 this might also require users to write client applications expecting that the
 values in the response might differ from those in the request.



 I have been considering alternatives, but could not find any that I would
 regard as winner.

 I therefore have some questions for the neutron community and the API
 working group:



 1) (this is more for neutron people) Is there a real use case for
 requesting specific gateway IPs and allocation pools when allocating from a
 pool? If not, maybe we should let the pool set a default gateway IP and
 allocation pools. The user can then update them with another call. Another
 option would be to provide subnet templates from which a user can choose.
 For instance one template 

[openstack-dev] [murano] Application Usage Information Tracking

2015-03-10 Thread Darshan Mn
Hi everyone,

I would like to know if the application usage information is tracked by the
murano-agent? If not, how is it done? Is ceilometer used at all, anywhere?

Regards
Darshan


Re: [openstack-dev] [Ironic] proposing rameshg87 to ironic-core

2015-03-10 Thread Faizan Barmawer
Though my vote does not count, definitely a +1

On Tue, Mar 10, 2015 at 6:37 AM, Ruby Loo rlooya...@gmail.com wrote:

 +1 for sure!

 On 9 March 2015 at 18:03, Devananda van der Veen devananda@gmail.com
 wrote:

 Hi all,

 I'd like to propose adding Ramakrishnan (rameshg87) to ironic-core.

 He's been consistently providing good code reviews, and been in the top
 five active reviewers for the last 90 days and top 10 for the last 180
 days. Two cores have recently approached me to let me know that they, too,
 find his reviews valuable.

 Furthermore, Ramakrishnan has made significant code contributions to
 Ironic over the last year. While working primarily on the iLO driver, he
 has also done a lot of refactoring of the core code, touched on several
 other drivers, and maintains the proliantutils library on stackforge. All
 in all, I feel this demonstrates a good and growing knowledge of the
 codebase and architecture of our project, and feel he'd be a valuable
 member of the core team.

 Stats, for those that want them, are below the break.

 Best Regards,
 Devananda



 http://stackalytics.com/?release=all&module=ironic-group&user_id=rameshg87

 http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt
 http://russellbryant.net/openstack-stats/ironic-reviewers-180.txt








Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-10 Thread Attila Fazekas




- Original Message -
 From: Nikola Đipanov ndipa...@redhat.com
 To: openstack-dev@lists.openstack.org
 Sent: Tuesday, March 10, 2015 10:53:01 AM
 Subject: Re: [openstack-dev] [nova] blueprint about multiple workers 
 supported in nova-scheduler
 
 On 03/06/2015 03:19 PM, Attila Fazekas wrote:
  Looks like we need some kind of _per compute node_ mutex in the critical
  section,
  multiple scheduler MAY be able to schedule to two compute node at same
  time,
  but not for scheduling to the same compute node.
  
  If we don't want to introduce another required component or
  reinvent the wheel there are some possible trick with the existing globally
  visible
  components like with the RDMS.
  
  `Randomized` destination choose is recommended in most of the possible
  solutions,
  alternatives are much more complex.
  
  One SQL example:
  
  * Add `sched_cnt`, defaul=0, Integer field; to a hypervisors related table.
  
  When the scheduler picks one (or multiple) node, he needs to verify is the
  node(s) are
  still good before sending the message to the n-cpu.
  
  It can be done by re-reading the ONLY the picked hypervisor(s) related
  data.
  with `LOCK IN SHARE MODE`.
  If the destination hyper-visors still OK:
  
  Increase the sched_cnt value exactly by 1,
  test is the UPDATE really update the required number of rows,
  the WHERE part needs to contain the previous value.
  
  You also need to update the resource usage on the hypervisor,
   by the expected cost of the new vms.
  
  If at least one selected node was ok, the transaction can be COMMITed.
  If you were able to COMMIT the transaction, the relevant messages
   can be sent.
  
  The whole process needs to be repeated with the items which did not passed
  the
  post verification.
  
  If a message sending failed, `act like` migrating the vm to another host.
  
  If multiple scheduler tries to pick multiple different host in different
  order,
  it can lead to a DEADLOCK situation.
  Solution: Try to have all scheduler to acquire to Shared RW locks in the
  same order,
  at the end.
  
  Galera multi-writer (Active-Active) implication:
  As always, retry on deadlock.
  
  n-sch + n-cpu crash at the same time:
  * If the scheduling is not finished properly, it might be fixed manually,
  or we need to solve which still alive scheduler instance is
  responsible for fixing the particular scheduling..
  
 
 So if I am reading the above correctly - you are basically proposing to
 move claims to the scheduler (we would atomically check if there were
 changes since the time we picked the host with the UPDATE .. WHERE using
 LOCK IN SHARE MODE (assuming REPEATABLE READS is the used isolation
 level) and then updating the usage, a.k.a doing the claim in the same
 transaction.
 
 The issue here is that we still have a window between sending the
 message, and the message getting picked up by the compute host (or
 timing out) or the instance outright failing, so for sure we will need
 to ack/nack the claim in some way on the compute side.
 
 I believe something like this has come up before under the umbrella term
 of moving claims to the scheduler, and was discussed in some detail on
 the latest Nova mid-cycle meetup, but only artifacts I could find were a
 few lines on this etherpad Sylvain pointed me to [1] that I am copying here:
 

 
 * White board the scheduler service interface
  ** note: this design won't change the existing way/logic of reconciling
 nova db != hypervisor view
  ** gantt should just return claim ids, not entire claim objects
  ** claims are acked as being in use via the resource tracker updates
 from nova-compute
  ** we still need scheduler retries for exceptional situations (admins
 doing things outside openstack, hardware changes / failures)
  ** retry logic in conductor? probably a separate item/spec
 
 
 As you can see - not much to go on (but that is material for a separate
 thread that I may start soon).

In my example, the resources need to be considered as used before we get
anything back from the compute.
The resources can be `freed` during error handling,
hopefully by migrating to another node.
 
 The problem I have with this particular approach is that while it claims
 to fix some of the races (and probably does) it does so by 1) turning
 the current scheduling mechanism on it's head 2) and not providing any
 thought into the trade-offs that it will make. For example, we may get
 more correct scheduling in the general case and the correctness will not
 be affected by the number of workers, but how does the fact that we now
 do locking DB access on every request fare against the retry mechanism
 for some of the more common usage patterns. What is the increased
 overhead of calling back to he scheduler to confirm the claim? In the
 end - how do we even measure that we are going in the right direction
 with the new design.
 
 I personally think that different workloads will have different needs
 from the scheduler in 

Re: [openstack-dev] [oslo][messaging][zmq] Discussion on zmq driver design issues

2015-03-10 Thread ozamiatin

Hi Li Ma,

Thank you very much for your reply

On 06.03.15 05:01, Li Ma wrote:

Hi all, actually I'm writing the same mail topic for zeromq driver,
but I haven't done it yet. Thank you for proposing this topic,
ozamiatin.

1. ZeroMQ functionality

Actually I proposed a session topic in the coming summit to show our
production system, named 'Distributed Messaging System for OpenStack
at Scale'. I don't know whether it will be allowed to present.
Otherwise, if it is possible, I can share my experience in design
summit.

Currently, AWCloud (the company I'm working) deployed more than 20
private clouds and 3 public clouds for our customers in production,
scaling from 10 to 500 physical nodes without any performance issue.
The performance dominates all the existing drivers in every aspect.
All is using ZeroMQ driver. We started improving ZeroMQ driver in
Icehouse and currently the modified driver has switched to
oslo.messaging.

As everyone knows, ZeroMQ has been unmaintained for a long time. My colleagues
and I continuously contribute patches upstream. The progress is a
little bit slow because we are doing everything just in our spare time
and the review procedure is also not efficient.

Here are two important patches [1], [2], for matchmaker redis. When
they are landed, I think ZeroMQ driver is capable of running in small
deployments.

It's good to hear you have a live deployment with the zmq driver.
Is there a big divergence between your production and upstream versions
of the driver? Besides the [1] and [2] fixes for redis, we have [5] and [6],
critical multi-backend related issues for using the driver in a real-world
deployment.

The only functionality for large-scale deployment that lacks in the
current upstream codebase is socket pool scheduling (to provide
lifecycle management, including recycle and reuse zeromq sockets). It
was done several months ago and we are willing to contribute. I plan
to propose a blueprint in the next release.
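
As a minimal sketch of the kind of per-endpoint socket reuse being described
(assuming pyzmq; the class and method names below are made up and not the
driver's), something like this keeps one connected socket per peer:

import zmq

class SocketPool(object):
    """Keep one connected PUSH socket per endpoint and reuse it."""

    def __init__(self):
        self.context = zmq.Context.instance()
        self._sockets = {}

    def get(self, endpoint):
        sock = self._sockets.get(endpoint)
        if sock is None:
            sock = self.context.socket(zmq.PUSH)
            sock.connect(endpoint)
            self._sockets[endpoint] = sock
        return sock

    def close(self):
        for sock in self._sockets.values():
            sock.close(linger=0)
        self._sockets.clear()

pool = SocketPool()
# pool.get("tcp://127.0.0.1:9501").send(b"hello")  # the socket is reused next time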

Pool, recycle and reuse sounds good for performance.
We also need a refactoring of the driver to reduce redundant entities
or reconsider them (like ZmqClient or InternalContext) and to reduce 
code replications (like with topics).

There is also some topics management needed.
Clear code == less bugs == easy understand == easy contribute.
We need a discussion (with related spec and UMLs) about what the driver 
architecture should be (I'm in progress with that).


2. Why ZeroMQ matters for OpenStack

ZeroMQ is the only driver that depends on a stable library not an open
source product. This is the most important thing that comes up my
mind. When we deploy clouds with RabbitMQ or Qpid, we need
comprehensive knowledge from their community, from deployment best
practice to performance tuning for different scales. As an open source
product, no doubt that bugs are always there. You need to push lots of
things in different communities rather than OpenStack community.
Finally, that does not really work, you all know it, right?

ZeroMQ library itself is just encapsulation of sockets and is stable
enough and widely used in large-scale cluster communication for long.
We can build our own messaging system for inter-component RPC. We can
improve it for OpenStack and have the governance for codebase. We
don't need to rely on different products out of the community.
Actually, only ZeroMQ provides the possibility.

IMO, we can just keep it and improve it and finally it becomes another
choice for operators.

Zmq is also an open source product and it has its own community,
but I agree that rabbit is a complicated black-box piece of software we depend on,
while zmq is just a library and it is much simpler (and more reliable)
inside.
The zmq driver is lower-level than rabbit and qpid, therefore it
provides more flexibility.

By now it is the only driver where brokerless implementation is possible.


3. ZeroMQ integration

I've been working on the integration of ZeroMQ and DevStack for a
while and actually it is working right now. I updated the deployment
guide [3].

That's true it works! :)

I think it is the time to bring a non-voting gate for ZeroMQ and we
can make the functional tests work.

You can trigger it with 'check experimental'. It is broken now.


4. ZeroMQ blueprints

We'd love to provide blueprints to improve ZeroMQ, as ozamiatin does.
According to my estimation, ZeroMQ can be another choice for
production in 1-2 release cycles due to bp review and patch review
procedure.

5. ZeroMQ discussion

Here I'd like to say sorry for this driver. Due to spare time and
timezone, I'm not available for IRC or other meeting or discussions.
But if it is possible, should we create a subgroup for ZeroMQ and
schedule meetings for it? If we can schedule in advance or at a fixed
date and time, I'm in.

That's great idea
+1 for zmq subgroup and meetings

6. Feedback to ozamiatin's suggestions

I'm with you on almost all the proposals, but for packages, I think we
can just separate all the components into sub-directories. This step 

Re: [openstack-dev] [murano] how can deploy environment with useEnvironmentNetwork=false

2015-03-10 Thread Serg Melikyan
Hi Cheng-Dae,

We are working on improving supported networking schemes, please take
a look on following commits:
* https://review.openstack.org/152643 - Adds ability to join instances
to existing Neutron networks
* https://review.openstack.org/152747 - Configurable environment's
default network config

You may also find details about the current networking support in the
following e-mail:
http://lists.openstack.org/pipermail/openstack-dev/2014-September/047111.html

On Mon, Mar 9, 2015 at 4:26 PM, Choe, Cheng-Dae white...@gmail.com wrote:
 hi there

 In Murano, useEnvironmentNetwork=true is the default when deploying an environment.

 How can I deploy with useEnvironmentNetwork=false?

 I'm currently using the sample Apache web server package


 --
 Choe, Cheng-Dae
 Blog: http://blog.woosum.net





-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836



Re: [openstack-dev] [nova] Deprecation of ComputeFilter

2015-03-10 Thread Murray, Paul (HP Cloud)
Hi Sylvain,

The list of filters does not only determine what conditions are checked, it 
also specifies the order in which they are checked.

If I read the code right this change creates the worst case efficiency for this 
filter. Normally you would filter first on something that removes as many nodes 
as possible to cut down the list. It is not normal for large numbers of hosts 
to be disabled, so this filter should normally come low down the list. This 
change effectively puts it at the top. 

As an example, imagine you are using AvailabilityZoneFilter and ComputeFilter. 
Let's say you have three AZs and at any one time a small percentage of your 
nodes are disabled. These are realistic circumstances. In this case you would 
filter on the AvaiabilityZoneFilter first and ComputeFilter last. The AZ will 
cut the number of hosts being considered by two thirds with the ComputeFilter 
only being executed against the remaining third. If the order is reversed both 
filters are run against almost all nodes.
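
As a toy illustration of that ordering effect (not Nova code; the numbers and
names below are made up), each filter only sees the hosts that survived the
previous one, so the expensive per-host check runs far fewer times when it
comes last:

def run_filters(hosts, filters):
    for f in filters:
        # each filter only sees what survived the previous one
        hosts = [h for h in hosts if f(h)]
    return hosts

def az_filter(host):          # cheap and selective: keeps ~1/3 of the hosts
    return host["az"] == "az1"

def compute_filter(host):     # stands in for the per-host servicegroup check
    return not host["disabled"]

hosts = [{"az": "az%d" % (i % 3 + 1), "disabled": i % 50 == 0}
         for i in range(300)]

# AZ first: compute_filter runs against ~100 hosts.
# ComputeFilter first: it would run against all 300.
print(len(run_filters(hosts, [az_filter, compute_filter])))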

Note the following:
1: the default driver for the servicegroup api is db, so this adds a db lookup 
for every node that would otherwise only be called for remaining nodes after 
executing other filters.
2: if the driver uses a local in memory cache this is not so bad - but that's 
not the default

Even if this filter seems dumb, it is still a filtering operation, so why not 
leave it as a filter in the same model as all the others and under the 
operators control?

Paul

-Original Message-
From: Sylvain Bauza [mailto:sba...@redhat.com] 
Sent: 06 March 2015 15:19
To: OpenStack Development Mailing List (not for usage questions)
Cc: openstack-operat...@lists.openstack.org
Subject: [openstack-dev] [nova] Deprecation of ComputeFilter

Hi,

First, sorry for cross-posting on both dev and operator MLs but I also would 
like to get operators feedback.

So, I was reviewing the scheduler ComputeFilter and I was wondering why the 
logic should be in a filter.
We indeed already have a check on the service information each time that a 
request is coming in, which is done by
HostManager.get_all_host_states() - basically called by
FilterScheduler._schedule()

Instead, I think it is error-prone to leave that logic in a filter because it 
can easily be accidentally removed from the list of filters. 
Besides, the semantics of the filter are not well known, and operators might not 
understand that it is filtering on a Service RPC status, not the real compute 
node behind it.

In order to keep the possibility for operators to explicitly ask the 
FilterScheduler to also filter on disabled hosts, I propose to add a config 
option which would be self-explanatory.

So, I made a quick POC for showing how we could move the logic to HostManager 
[1]. Feel free to review it and share your thoughts both on the change and 
here, because I want to make sure that we get a consensus on the removal before 
really submitting anything.


[1] https://review.openstack.org/#/c/162180/

-Sylvain





Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE CASCADE

2015-03-10 Thread Mike Bayer


Adam Young ayo...@redhat.com wrote:

 On 03/09/2015 01:26 PM, Mike Bayer wrote:
 Im about -1000 on disabling foreign key constraints.
 So was I.  We didn't do it out of performance.
 
 Since I am responsible for tipping over this particular cow, let me explain.
 
 No, is too much. Let me sum up.
 
 In the murky past, Keystone was primarily the identity back end. It started 
 with users, then tenants, and then it grew to have roles.
 
 If this had all stayed in a SQL back end, you bet your sweet bippy I'd have 
 left the integrity constraints in place. There is a big reason it didn't.
 
 My first hack in Keystone was putting LDAP support back in, after the 
 Keystone Light rewrite pulled it out. Back then, I was warned that LDAP was 
 different, and I kind of knew that it was, but I tried to do everything in 
 LDAP we were doing in SQL, and, while the solution was bogus, it kind of 
 worked if you squinted and were able to accept putting service users in your 
 Active Directory.
 
 Oh, and we didn't want to write to it. I mean, sure, there was writable LDAP, 
 but people don't use LDAP that way. LDAP is maintained by corporate 
 IT...which really means HR. The bottom line is that the OpenStack lab people are 
 not going to be writing projects into their LDAP servers.
 
 At the same time, the abstractions were growing.  We added groups, domains, 
 and role assignments.  Federation was in the distance, and mapping had to go 
 somewhere.
 
 At the Portland summit, a few conversations made it clear that we needed to 
 split the Identity backend into a read-only LDAP portion and a SQL writable 
 portion. Oh, sure, you could still keep users in SQL, and many people wanted 
 to, but LDAP was the larger concern, and, again, we knew federation was 
 coming with similar constraints. So, a FK from the role-assignments table 
 into the project table would be OK, but not to either users or groups: if 
 those were in LDAP, there would be nothing there, and the constraint could 
 not be met.
 
 
 We've gone even further this release. The assignments backend itself is 
 being split up. TBH, I don't know if this is an essential split, but some 
 of the main Keystone developers have worked really hard to make it work, and 
 to show how Keystone-specific data (role assignments) can be kept separate 
 from the projects and domains.
 
 So, no, we are not talking performance. We are talking architecture and 
 functionality. Keystone, with few exceptions, does not own the user 
 database. Keystone consumes it. As time goes on, Keystone will do a better 
 job of consuming pre-existing data and minimizing the amount of custom data it 
 manages.
 
 Does that make more sense?

Somewhat vaguely. If by “So, a FK from the role-assignments table into the
project table would be OK, but not to either users or groups: if those were
in LDAP, there would be nothing there, and the constraint could not be
met”, we mean that we start with this:

create table project (
   id integer primary key
)

create table users (
   id integer primary key
)

create table groups (
   id integer primary key
)

create table role_assignments (
    id integer primary key,
    project_id integer references project(id)
)


and then we change it, such that we are really doing this:

create table role_assignments (
    id integer primary key,
    project_or_group_or_user_id integer
)

if *that’s* what you mean, that’s known as a “polymorphic foreign key”, and
it is not actually a foreign key at all, it is a terrible antipattern started by
the PHP/Rails community and carried forth by projects like Django. For
details on how to correct for this pattern, I wrote about it many years ago
here:
http://techspot.zzzeek.org/2007/05/29/polymorphic-associations-with-sqlalchemy/.
SQLAlchemy has an example suite that illustrates several approaches to
mitigating this anti pattern which you can see here:
http://docs.sqlalchemy.org/en/rel_0_9/orm/examples.html#module-examples.generic_associations.
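
For illustration only, one of the mitigations those examples cover, sketched
here with SQLAlchemy Core and made-up table names, replaces the single
polymorphic id column with one association table per target type, each
carrying a real, enforceable foreign key:

import sqlalchemy as sa

metadata = sa.MetaData()

project = sa.Table("project", metadata,
                   sa.Column("id", sa.Integer, primary_key=True))
users = sa.Table("users", metadata,
                 sa.Column("id", sa.Integer, primary_key=True))

# One association table per target type, instead of a single
# project_or_group_or_user_id column with no enforceable constraint.
user_role_assignments = sa.Table(
    "user_role_assignments", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("project_id", sa.Integer,
              sa.ForeignKey("project.id"), nullable=False),
    sa.Column("user_id", sa.Integer,
              sa.ForeignKey("users.id"), nullable=False))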

So if that’s what we mean, then that is exactly what I’m trying to target as
a “don’t do that” situation; it’s unnecessary and incorrect. LDAP and SQL
databases are obviously very different, so in order to achieve parity
between them, a lot of work has to be done on the SQL side in particular as
it is much more structured than LDAP. If we are diluting our SQL databases
to turn into amorphous, unstructured blobs, then I think that’s a very bad
idea and I’m not sure why relational databases have any place, when
unstructured solutions like MongoDB are readily available. I’d note that
LDAP servers themselves will often use relational storage as their actual
backend, and you can be assured these systems can make correct use of
normalization internally.


Re: [openstack-dev] [murano] Application Usage Information Tracking

2015-03-10 Thread Serg Melikyan
Hi Darshan,

Unfortunately, application usage is not tracked in Murano in any way.
We only have a special logging message [1] that can help to organize
usage tracking using some sort of log analysis tool (e.g.
Logstash).

[1] 
https://github.com/stackforge/murano/blob/73f8368024acc2f79ef4494b1fbfcc3d2452e494/murano/common/server.py#L113

On Tue, Mar 10, 2015 at 2:00 PM, Darshan Mn darshan.m...@gmail.com wrote:
 Hi everyone,

 I would like to know if the application usage information is tracked by the
 murano-agent? If not, how is it done? Is ceilometer used at all, anywhere?

 Regards
 Darshan


-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com



Re: [openstack-dev] [cinder]difference between spec merged and BP approval

2015-03-10 Thread Stefano Maffulli
Hi David,

On Sat, 2015-03-07 at 02:22 +0000, Chen, Wei D wrote:
 I thought the feature should be approved as long as the SPEC [1] is
 merged, but it seems I was wrong from the beginning [2]; both of
 them (SPEC merged and BP approval [4][5]) are necessary and mandatory
 for getting effective reviews, right? Can anyone help to
 confirm that?

Since Cinder uses BP+spec, the process is described on the wiki page:

https://wiki.openstack.org/wiki/Blueprints#Spec_.2B_Blueprints_lifecycle

If it helps, I'd consider the spec and the blueprint as one element
made of two pieces. The spec needs to be approved and its
corresponding blueprint needs to be approved and have a priority,
deadline/milestone assigned. If any of these attributes is missing, the
feature is not going to be reviewed.

Blueprints and their attributes 'priority' and 'milestone' are used to
track the status of the release. The reviewers use BPs to identify the
code that they need to review. For example,
https://launchpad.net/cinder/+milestone/kilo-3

I've tried to piece the history of your experience from the links you
provided:

- you submitted the spec in November 2014
- the spec was approved on Jan 6, 2015 (from
https://review.openstack.org/#/c/136253/)
- the spec references two blueprints, one for Cinder, one of
Cinder-client; both BPs were created at the end of February
- none of the BP have a milestone set
- you submitted code related to the approved spec between Jan 6 and
today

I have the impression that you may have missed a step in the BP+spec
process. I have tried to find the documentation for this process myself
and I had a hard time, too.

 Besides, who is eligible to define/modify the priority in the list[3],
 only PTL or any core? I am trying to understand the
 acceptable procedure for the coming 'L'.

The project team leaders (PTL) are ultimately responsible to set the
priorities, although the decision is always a consensual decision of the
core teams.

Have you considered joining OpenStack Upstream Training? 
https://www.openstack.org/blog/2015/02/openstack-upstream-training-in-vancouver/

Cheers,
stef


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE CASCADE

2015-03-10 Thread Mike Bayer


Clint Byrum cl...@fewbar.com wrote:

 
 Please try to refrain from using false equivalence. ACID stands for
 Atomicity, Consistency, Isolation, Durability. Nowhere in there does it
 stand for referential integrity”. 

This point is admittedly controversial as I’ve had this debate before, but
it is common that the database concept of integrity constraints is
considered under the umbrella of “consistency” as one of the facets of this
guarantee. Just check the second sentence of Wikipedia’s page (which I have
been told is itself incorrect, which if so, I would greatly appreciate
someone editing this page as well as their ACID page to remove all
references to “constraints, cascades, and triggers” and perhaps clarify that
these concepts have nothing to do with ACID):
http://en.wikipedia.org/wiki/Consistency_%28database_systems%29

 
 I'm not entirely sure what you've said above actually prevents coders
 from relying on the constraints. Being careful about deleting all of the
 child rows before a parent is good practice. I have seen code like this
 in the past though:
 
 try:
  parent.delete()
 except ForeignKeyFailure:
  parent.children.delete()
  parent.delete()
 
 This means if you don't have the FK's, you may never delete the
 children. Is this a bug? YES. Is it super obvious that it is the wrong
 thing to do? No.

So the point you’re making here is that, if foreign key constraints are
removed, poorly written code might silently fail. I’m glad we agree this is
an issue!  It’s the only point I’m making.
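
To illustrate the difference concretely with a generic parent/child pair
(this is deliberately not Keystone's schema, just a sketch), here is the
behavior we get for free once the constraint is actually declared and
enforced:

    from sqlalchemy import (MetaData, Table, Column, Integer, ForeignKey,
                            create_engine, event)
    from sqlalchemy.exc import IntegrityError

    metadata = MetaData()
    parent = Table('parent', metadata,
                   Column('id', Integer, primary_key=True))
    child = Table(
        'child', metadata,
        Column('id', Integer, primary_key=True),
        # ON DELETE RESTRICT: deleting a still-referenced parent is refused.
        # With ondelete='CASCADE' the database would remove the children.
        Column('parent_id', Integer,
               ForeignKey('parent.id', ondelete='RESTRICT'), nullable=False))

    engine = create_engine('sqlite://')

    @event.listens_for(engine, 'connect')
    def _enable_fks(dbapi_conn, record):
        # SQLite only enforces FKs when asked to; MySQL/InnoDB and
        # PostgreSQL enforce them by default.
        dbapi_conn.execute('PRAGMA foreign_keys=ON')

    metadata.create_all(engine)
    with engine.begin() as conn:
        conn.execute(parent.insert().values(id=1))
        conn.execute(child.insert().values(id=1, parent_id=1))

    try:
        with engine.begin() as conn:
            conn.execute(parent.delete().where(parent.c.id == 1))
    except IntegrityError:
        # The constraint turns the coding error into a loud failure instead
        # of silently leaving orphaned child rows behind.
        print('delete refused: children still reference this parent')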



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging][zmq] Discussion on zmq driver design issues

2015-03-10 Thread ozamiatin

Hi, Eric

Thanks a lot for your comments.

On 06.03.15 06:21, Eric Windisch wrote:
On Wed, Mar 4, 2015 at 12:10 PM, ozamiatin ozamia...@mirantis.com wrote:


Hi,

By this e-mail I'd like to start a discussion about current zmq
driver internal design problems I've found out.
I wish to collect here all proposals and known issues. I hope this
discussion will be continued on Liberty design summit.
And hope it will drive our further zmq driver development efforts.

ZMQ Driver issues list (I address all issues with # and references
are in []):

1. ZMQContext per socket (blocker is neutron improper usage of
messaging via fork) [3]
2. Too many different contexts.
We have InternalContext used for ZmqProxy, RPCContext used in
ZmqReactor, and ZmqListener.
There is also zmq.Context which is zmq API entity. We need to
consider a possibility to unify their usage over inheritance
(maybe stick to RPCContext)
or to hide them as internal entities in their modules (see
refactoring #6)


The code, when I abandoned it, was moving toward fixing these issues, 
but for backwards compatibility was doing so in a staged fashion 
across the stable releases.


I agree it's pretty bad. Fixing this now, with the driver in a less 
stable state should be easier, as maintaining compatibility is of less 
importance.


3. Topic related code everywhere. We have no topic entity. It is
all string operations.
We need some topics management entity and topic itself as an
entity (not a string).
It causes issues like [4], [5]. (I'm already working on it).
There was a spec related [7].


Good! It's ugly. I had proposed a patch at one point, but I believe 
the decision was that it was better and cleaner to move toward the 
oslo.messaging abstraction as we solve the topic issue. Now that 
oslo.messaging exists, I agree it's well past time to fix this 
particular ugliness.


4. Manual implementation of messaging patterns.
   Now we can observe poor usage of zmq features in zmq driver.
Almost everything is implemented over PUSH/PULL.

4.1 Manual polling - use zmq.Poller (listening and replying
for multiple sockets)
4.2 Manual request/reply implementation for call [1].
Using of REQ/REP (ROUTER/DEALER) socket solves many
issues. A lot of code may be reduced.
4.3 Timeouts waiting


There are very specific reasons for the use of PUSH/PULL. I'm firmly 
of the belief that it's the only viable solution for an OpenStack RPC 
driver. This has to do with how asynchronous programming in Python is 
performed, with how edge-triggered versus level-triggered events are 
processed, and general state management for REQ/REP sockets.


I could be proven wrong, but I burned quite a bit of time in the 
beginning of the ZMQ effort looking at REQ/REP before realizing that 
PUSH/PULL was the only reasonable solution. Granted, this was over 3 
years ago, so I would not be too surprised if my assumptions are no 
longer valid.


I agree that REQ/REP is very limited because of its synchronous nature 
and 1-to-1 connection model.
But ROUTER/DEALER proxy sockets are recommended for use together with 
REQ/REP to compose 1-to-N and N-to-N asynchronous patterns.
I'm still researching this and haven't made a final decision yet. When 
everything is clear to me I'll come back with a spec on that.
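
Just to illustrate the direction I mean (a minimal pyzmq sketch, not driver
code -- socket names and the address are only for the example), a single
ROUTER socket can serve many DEALER clients asynchronously, which plain
REQ/REP cannot do:

    import zmq

    ctx = zmq.Context.instance()

    # Server side: one ROUTER multiplexes many clients; the identity frame
    # lets us route each reply back without REQ/REP lock-step blocking.
    router = ctx.socket(zmq.ROUTER)
    router.bind('tcp://127.0.0.1:5570')

    # Client side: DEALER can pipeline requests before reading any reply.
    dealer = ctx.socket(zmq.DEALER)
    dealer.connect('tcp://127.0.0.1:5570')

    dealer.send_multipart([b'', b'call: ping'])   # empty delimiter + payload

    identity, _empty, request = router.recv_multipart()
    router.send_multipart([identity, b'', b'reply: pong'])

    print(dealer.recv_multipart()[-1])            # b'reply: pong'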


5. Add possibility to work without eventlet [2]. #4.1 is also
related here, we can reuse many of the implemented solutions
   like zmq.Poller over asynchronous sockets in one separate
thread (instead of spawning on each new socket).
   I will update the spec [2] on that.


Great. This was one of the motivations behind oslo.messaging and it 
would be great to see this come to fruition.
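
To make #4.1/#5 a bit more concrete, what I have in mind is roughly one
zmq.Poller serving all listening sockets from a single native thread (no
eventlet, no per-socket spawn); the socket types and endpoints below are
just illustrative:

    import threading
    import zmq

    def reactor(endpoints):
        ctx = zmq.Context.instance()
        poller = zmq.Poller()
        for endpoint in endpoints:
            sock = ctx.socket(zmq.PULL)
            sock.bind(endpoint)
            poller.register(sock, zmq.POLLIN)
        while True:
            # one poll call covers every socket; timeout is in milliseconds
            for sock, _event in poller.poll(timeout=1000):
                msg = sock.recv_multipart()
                # here we would dispatch msg to the listener/executor
                print(msg)

    t = threading.Thread(target=reactor,
                         args=(['ipc:///tmp/rpc-a', 'ipc:///tmp/rpc-b'],))
    t.daemon = True
    t.start()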


6. Put all zmq driver related stuff (matchmakers, most classes
from zmq_impl) into a separate package.
   Don't keep all classes (ZmqClient, ZmqProxy, Topics management,
ZmqListener, ZmqSocket, ZmqReactor)
   in one impl_zmq.py module.


Seems fine. In fact, I think a lot of code could be shared with an 
AMQP v1 driver...


I'll check what can be shared. Actually, I haven't dug into the AMQP v1 
driver enough yet.


7. Need more technical documentation on the driver like [6].
   I'm willing to prepare a current driver architecture overview
with some graphics UML charts, and to continue discuss the driver
architecture.


Documentation has always been a sore point. +2
--
Regards,
Eric Windisch


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Regards,
Oleksii Zamiatin

Re: [openstack-dev] Driver documentation for Kilo [cinder] [neutron] [nova] [trove]

2015-03-10 Thread Anne Gentle
On Tue, Mar 10, 2015 at 8:28 AM, Erlon Cruz sombra...@gmail.com wrote:

 Hi Anne,

 How about driver documentation that is in the old format? Will it be
 removed in Kilo?



Hi Erlon,
The spec doesn't have a specific person assigned for removal, and the only
drivers the docs team signed up for through the blueprint are these:


   - For cinder: volume drivers: document LVM and NFS; backup drivers:
   document swift
   - For glance: Document local storage, cinder, and swift as backends
   - For neutron: document ML2 plug-in with the mechanisms drivers
   OpenVSwitch and LinuxBridge
   - For nova: document KVM (mostly), send Xen open source call for help
   - For sahara: apache hadoop
   - For trove: document all supported Open Source database engines like
   MySQL.





 The wiki says: Bring all driver sections that are currently just ‘bare
 bones’ up to the standard mentioned. Will this be performed by core team?


Andreas has done some of that work, for example here:
https://review.openstack.org/#/c/157086/

We can use more hands of course, just coordinate the work here on the list.
And Andreas, if there aren't any more to do, let us know. :)
Thanks,
Anne




 Thanks,
 Erlon

 On Fri, Mar 6, 2015 at 4:58 PM, Anne Gentle annegen...@justwriteclick.com
  wrote:

 Hi all,

 We have been working on streamlining driver documentation for Kilo
 through a specification, on the mailing lists, and in my weekly What's Up
 Doc updates. Thanks for the reviews while we worked out the solutions.
 Here's the final spec:

 http://specs.openstack.org/openstack/docs-specs/specs/kilo/move-driver-docs.html

 Driver documentation caretakers, please note the following summary:

 - At a minimum, driver docs are published in the Configuration Reference
 at with tables automatically generated from the code. There's a nice set of
 examples in this patch: https://review.openstack.org/#/c/157086/

 - If you want full driver docs on docs.openstack.org, please add a
 contact person's name and email to this wiki page:
 https://wiki.openstack.org/wiki/Documentation/VendorDrivers

 - To be included in the April 30 release of the Configuration Reference,
 driver docs are due by April 9th.

 Thanks all for your collaboration and attention.

 Anne


 --
 Anne Gentle
 annegen...@justwriteclick.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
annegen...@justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Driver documentation for Kilo [cinder] [neutron] [nova] [trove]

2015-03-10 Thread Erlon Cruz
Hi Anne,

How about driver documentation that is in the old format? Will it be
removed in Kilo? The wiki says: Bring all driver sections that are
currently just ‘bare bones’ up to the standard mentioned. Will this be
performed by core team?

Thanks,
Erlon

On Fri, Mar 6, 2015 at 4:58 PM, Anne Gentle annegen...@justwriteclick.com
wrote:

 Hi all,

 We have been working on streamlining driver documentation for Kilo through
 a specification, on the mailing lists, and in my weekly What's Up Doc
 updates. Thanks for the reviews while we worked out the solutions. Here's
 the final spec:

 http://specs.openstack.org/openstack/docs-specs/specs/kilo/move-driver-docs.html

 Driver documentation caretakers, please note the following summary:

 - At a minimum, driver docs are published in the Configuration Reference
 at with tables automatically generated from the code. There's a nice set of
 examples in this patch: https://review.openstack.org/#/c/157086/

 - If you want full driver docs on docs.openstack.org, please add a
 contact person's name and email to this wiki page:
 https://wiki.openstack.org/wiki/Documentation/VendorDrivers

 - To be included in the April 30 release of the Configuration Reference,
 driver docs are due by April 9th.

 Thanks all for your collaboration and attention.

 Anne


 --
 Anne Gentle
 annegen...@justwriteclick.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Heat] Expression of Bay Status

2015-03-10 Thread Zane Bitter

On 09/03/15 23:47, Angus Salkeld wrote:

On Tue, Mar 10, 2015 at 8:53 AM, Adrian Otto adrian.o...@rackspace.com wrote:

Magnum Team,

In the following review, we have the start of a discussion about how
to tackle bay status:

https://review.openstack.org/159546

I think a key issue here is that we are not subscribing to an event
feed from Heat to tell us about each state transition, so we have a
low degree of confidence that our state will match the actual state
of the stack in real-time. At best, we have an eventually consistent
state for Bay following a bay creation.


Hi Adrian

Currently Heat does not have an event stream, but instead an event table
and a REST resource. This sucks as you have to poll it.
We have long been wanting some integration with Zaqar - we are all
convinced, it just needs someone to do the work.
So the idea here is we send user-related events via a Zaqar queue and
the user (Magnum) subscribes and gets events.
 From last summit
https://etherpad.openstack.org/p/kilo-heat-summit-topics (see line 73).


+1


Here are some options for us to consider to solve this:

1) Propose enhancements to Heat (or learn about existing features)
to emit a set of notifications upon state changes to stack resources
so the state can be mirrored in the Bay resource.


See above, you have anyone to drive this?


2) Spawn a task to poll the Heat stack resource for state changes,
and express them in the Bay status, and allow that task to exit once
the stack reaches its terminal (completed) state.

3) Don’t store any state in the Bay object, and simply query the
heat stack for status as needed.


If it's not too frequent then this might be your best bet until we get 1).

Hope this helps
-Angus


Are each of these options viable? Are there other options to
consider? What are the pro/con arguments for each?

Thanks,

Adrian
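
FWIW, until the Zaqar integration exists, option 2/3 can be as simple as
the sketch below (the python-heatclient usage and the status strings are
from memory, so treat the details as assumptions):

    import time

    from heatclient.client import Client as HeatClient

    TERMINAL = ('CREATE_COMPLETE', 'CREATE_FAILED',
                'UPDATE_COMPLETE', 'UPDATE_FAILED',
                'DELETE_COMPLETE', 'DELETE_FAILED')

    def sync_bay_status(bay, heat_endpoint, token, poll_interval=10):
        # 'bay' is a hypothetical object holding stack_id and status
        heat = HeatClient('1', endpoint=heat_endpoint, token=token)
        while True:
            stack = heat.stacks.get(bay.stack_id)
            bay.status = stack.stack_status    # mirror into the Bay record
            if stack.stack_status in TERMINAL:
                return bay.status
            time.sleep(poll_interval)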



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-10 Thread Attila Fazekas




- Original Message -
 From: Attila Fazekas afaze...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, March 10, 2015 12:48:00 PM
 Subject: Re: [openstack-dev] [nova] blueprint about multiple workers 
 supported in nova-scheduler
 
 
 
 
 
 - Original Message -
  From: Nikola Đipanov ndipa...@redhat.com
  To: openstack-dev@lists.openstack.org
  Sent: Tuesday, March 10, 2015 10:53:01 AM
  Subject: Re: [openstack-dev] [nova] blueprint about multiple workers
  supported in nova-scheduler
  
  On 03/06/2015 03:19 PM, Attila Fazekas wrote:
   Looks like we need some kind of _per compute node_ mutex in the critical
   section,
   multiple scheduler MAY be able to schedule to two compute node at same
   time,
   but not for scheduling to the same compute node.
   
   If we don't want to introduce another required component or
   reinvent the wheel there are some possible trick with the existing
   globally
   visible
   components like with the RDMS.
   
   `Randomized` destination choose is recommended in most of the possible
   solutions,
   alternatives are much more complex.
   
   One SQL example:
   
   * Add `sched_cnt`, defaul=0, Integer field; to a hypervisors related
   table.
   
   When the scheduler picks one (or multiple) node, he needs to verify is
   the
   node(s) are
   still good before sending the message to the n-cpu.
   
   It can be done by re-reading the ONLY the picked hypervisor(s) related
   data.
   with `LOCK IN SHARE MODE`.
   If the destination hyper-visors still OK:
   
   Increase the sched_cnt value exactly by 1,
   test is the UPDATE really update the required number of rows,
   the WHERE part needs to contain the previous value.
   
   You also need to update the resource usage on the hypervisor,
by the expected cost of the new vms.
   
   If at least one selected node was ok, the transaction can be COMMITed.
   If you were able to COMMIT the transaction, the relevant messages
can be sent.
   
   The whole process needs to be repeated with the items which did not
   passed
   the
   post verification.
   
   If a message sending failed, `act like` migrating the vm to another host.
   
   If multiple scheduler tries to pick multiple different host in different
   order,
   it can lead to a DEADLOCK situation.
   Solution: Try to have all scheduler to acquire to Shared RW locks in the
   same order,
   at the end.
   
   Galera multi-writer (Active-Active) implication:
   As always, retry on deadlock.
   
   n-sch + n-cpu crash at the same time:
   * If the scheduling is not finished properly, it might be fixed manually,
   or we need to solve which still alive scheduler instance is
   responsible for fixing the particular scheduling..
   
  
  So if I am reading the above correctly - you are basically proposing to
  move claims to the scheduler (we would atomically check if there were
  changes since the time we picked the host with the UPDATE .. WHERE using
  LOCK IN SHARE MODE (assuming REPEATABLE READS is the used isolation
  level) and then updating the usage, a.k.a doing the claim in the same
  transaction.
  
  The issue here is that we still have a window between sending the
  message, and the message getting picked up by the compute host (or
  timing out) or the instance outright failing, so for sure we will need
  to ack/nack the claim in some way on the compute side.
  
  I believe something like this has come up before under the umbrella term
  of moving claims to the scheduler, and was discussed in some detail on
  the latest Nova mid-cycle meetup, but only artifacts I could find were a
  few lines on this etherpad Sylvain pointed me to [1] that I am copying
  here:
  
 
  
  * White board the scheduler service interface
   ** note: this design won't change the existing way/logic of reconciling
  nova db != hypervisor view
   ** gantt should just return claim ids, not entire claim objects
   ** claims are acked as being in use via the resource tracker updates
  from nova-compute
   ** we still need scheduler retries for exceptional situations (admins
  doing things outside openstack, hardware changes / failures)
   ** retry logic in conductor? probably a separate item/spec
  
  
  As you can see - not much to go on (but that is material for a separate
  thread that I may start soon).
 
 In my example, the resources need to be considered used before we get
 anything back from the compute node.
 The resources can be `freed` during error handling,
 hopefully by migrating to another node.
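
Roughly, the compare-and-swap UPDATE described above could look like this
with SQLAlchemy (the table and column names are only illustrative, not the
real Nova schema):

    from sqlalchemy import (MetaData, Table, Column, Integer, and_,
                            create_engine)

    metadata = MetaData()
    compute_nodes = Table(
        'compute_nodes', metadata,
        Column('id', Integer, primary_key=True),
        Column('sched_cnt', Integer, nullable=False),
        Column('free_ram_mb', Integer, nullable=False))

    def try_claim(conn, node_id, prev_sched_cnt, ram_mb):
        # The UPDATE only matches if nobody else bumped sched_cnt (or
        # consumed the RAM) since our earlier read of the row.
        result = conn.execute(
            compute_nodes.update()
            .where(and_(compute_nodes.c.id == node_id,
                        compute_nodes.c.sched_cnt == prev_sched_cnt,
                        compute_nodes.c.free_ram_mb >= ram_mb))
            .values(sched_cnt=compute_nodes.c.sched_cnt + 1,
                    free_ram_mb=compute_nodes.c.free_ram_mb - ram_mb))
        return result.rowcount == 1   # 1 row updated == we won the claim

    engine = create_engine('sqlite://')   # stand-in for the Galera cluster
    metadata.create_all(engine)
    with engine.begin() as conn:
        conn.execute(compute_nodes.insert().values(
            id=1, sched_cnt=0, free_ram_mb=4096))
        if try_claim(conn, node_id=1, prev_sched_cnt=0, ram_mb=1024):
            pass  # COMMIT happens on leaving the block; then send to n-cpu
        # on deadlock (Galera multi-writer) the whole transaction is retried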
  
  The problem I have with this particular approach is that while it claims
  to fix some of the races (and probably does) it does so by 1) turning
  the current scheduling mechanism on it's head 2) and not providing any
  thought into the trade-offs that it will make. For example, we may get
  more correct scheduling in the general case and the correctness will not
  be affected by the 

Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-10 Thread Andrew Laski



On 03/09/2015 06:04 PM, melanie witt wrote:

On Mar 9, 2015, at 13:14, Sean Dague s...@dague.net wrote:


So possibly another way to think about this is our prior signaling of
what was supported by Nova was signaled by the extension list. Our code
was refactored into a way that supported optional loading by that unit.

As we're making things less optional it probably makes sense to evolve
the API code tree to look more like our REST resource tree. Yes, that
means servers.py ends up being big, but it is less confusing that all
servers related code is in that file vs all over a bunch of other files.

So I'd agree that in this case server tags probably should just be in
servers.py. I also think long term we should do some plugin collapse
for stuff that's all really just features on one resource tree so that
the local filesystem code structure looks a bit more like the REST url tree.

I think this makes a lot of sense. When I read the question, "why is server tags being added 
as an extension?" the answer that comes to mind first is, "because the extension framework 
is there and that's how things have been done so far."

I think the original thinking on extensions was, make everything optional so 
users can enable/disable as they please, operators can disable any feature by 
removing the extension. Another benefit is the ability for anyone to add a 
(non-useful to the community at-large) feature without having to patch in 
several places.

I used to be for extensions for the aforementioned benefits, but now I tend to 
think it's too flexible and complex. It's so flexible that you can easily get 
yourself into a situation where your deployment can't work with other useful 
tools/libraries/etc which expect a certain contract from the Nova API. It 
doesn't make sense to let the API we provide be so arbitrary. It's certainly 
not friendly to API users.

We still have the ability to disable or limit features based on policy -- I 
don't think we need to do it via extensions.

The only problem that seems to be left is, how can we allow people to add un-upstreamable 
features to the API in their internal deployments? I know the ideal answer is don't 
do that but the reality is some things will never be agreed upon upstream and I do 
see value in the extension framework for that. I don't think anything in-tree should be 
implemented as extensions, though.


At the moment this is provided for by an experimental flag in the 
response headers: 
https://git.openstack.org/cgit/openstack/nova-specs/tree/specs/kilo/approved/api-microversions.rst#n182 
.  It is intended to be used for transitioning from the current state of 
extensions to a place where optional API extensions aren't allowed, but 
that discussion can continue if there's a case for allowing some 
optional components for deployers.  I'm in favor of having a mechanism 
for adding features to a deployment as long as it's exposed in a way 
that makes it clear that it's separate from the standard API, e.g. an 
entirely separate tree, not just resource prefixes.




melanie (melwitt)






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] 3rd Party CI failures ignored, caused driver to break

2015-03-10 Thread Erlon Cruz
Agreed, CI systems are not reliable. Most failures are related to
misconfiguration or devstack problems, not driver problems themselves. What
happens then is that people just don't care if there's a red FAILURE in the
CI results. A fourth option would be to rate CIs according to their
trustworthiness (maybe a composite counter of uptime and false negatives);
then developers would pay more attention if they broke a reliable CI.
Also, this rating could be used as a parameter to decide whether a CI should
vote or not.

On Thu, Feb 26, 2015 at 5:24 PM, John Griffith john.griffi...@gmail.com
wrote:



 On Thu, Feb 26, 2015 at 1:10 PM, Walter A. Boring IV walter.bor...@hp.com
  wrote:

 Hey folks,
Today we found out that a patch[1] that was made against our lefthand
 driver caused the driver to fail.   The 3rd party CI that we have setup
 to test our driver (hp_lefthand_rest_proxy.py) caught the CI failure and
 reported it correctly[2].  The patch that broke the driver was reviewed and
 approved without a mention of 3rd Party CI failures.

 This is a perfect example of 3rd Party CI working perfectly and catching
 a failure, and being completely ignored by everyone
 involved in the review process for the patch

 I know that 3rd party CI isn't perfect, and has been ripe with false
 failures, which is one of the reasons why they aren't voting today.
 But, that being said, if patch submitters aren't even looking at the
 failures for CI when they are touching drivers that they don't maintain,
 and reviewers
 aren't looking at the CI failures, then why are we even doing 3rd party
 CI?

 Our team is partially responsible for not seeing the failure as well.  We
 should be watching the CI failures closely, but we are doing the best we
 can.  There are enough patches for Cinder ongoing at any one time, that
 we simply can't watch every single one of them for failures. We did
 eventually
 see that every single patchset in gerrit was now failing against our
 driver, and this is how we caught it.  Yes, it was a bit after the fact,
 but we did notice
 it and now have a new patch up that fixes it.   So, in that regard 3rd
 party CI did eventually vet out a problem that our team caught.

 How can we prevent this in the future?
 1) Make 3rd party CI voting.  I don't believe this is ready yet.


 ​Agreed, look at the history on that driver (and many others) and you'll
 see we are in no way ready for that.​


 2) Authors and reviews need to look at 3rd party CI failures when a patch
 touches a driver.  If a failure is seen, contact the CI maintainer and work
 with them and
 see if the failure is related to the patch, if it's not obvious. In this
 case, the failure was obvious.  The import changed, and now the package
 can't find the module.


 ​If things were more stable yeah, I might, but the reality is as I've
 pointed out we have a serious signal to noise ratio problem IMO
 ​


 3) CI maintainers watch every single patchset and report -1's on
 reviews?  (ouch)
 4) ?


 ​Option 4 in my opinion is exactly what I've been doing since this
 started.  I receive a notification for any change that my CI setup fails
 on, and then it's up to me to go and verify if something is truly broken or
 if it's my system that's messed up.  It's not perfect and it's not really a
 true CI but it is a continuous test system which for me is what's most
 important here.  I completely understand that's not the case for other
 things.

 This is where the churn of typo fixes, hacking changes, etc. can bite us.
 Sure, while reviewing that probably could've/should've been caught.  The
 problem is for me at least if we're going to do this sorts of semantic
 changes that touch so many files I'm likely to be half asleep before I get
 to the cinder/volume section.  Doesn't make it right, but kinda how it is.

 Anyway, yeah it sucks, but I'd argue this worked out GREAT.  A change was
 made that broke things in LHN driver, but historically nobody would've
 known until a customer tried to use it.  In this case, the problem was
 found in less than a day and fixed.  That's a pretty huge win in my opinion!

 Thanks,
 John​





 Here is the patch that broke the lefthand driver[1]
 Here is the reported failure in the c-vol log for the patch by our 3rd
 party CI system[2]
 Here is my new patch that fixes the lefthand driver again.[3]

 [1] https://review.openstack.org/#/c/145780/
 [2] http://15.126.198.151/80/145780/15/check/lefthand-
 iscsi-driver-master-client-pip-dsvm/3927e3d/logs/screen-
 c-vol.txt.gz?level=ERROR
 [3] https://review.openstack.org/#/c/159586/

 $0.02
 Walt

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List 

[openstack-dev] [python-novaclient] Better wording for secgroup-*-default-rules? help text

2015-03-10 Thread Chris St. Pierre
I've just filed a bug on the confusing wording of help text for the
secgroup-{add,delete,list}-default-rules? commands:
https://bugs.launchpad.net/python-novaclient/+bug/1430354

As I note in the bug, though, I'm not sure the best way to fix this. In an
unconstrained world, I'd like to see something like:

secgroup-add-default-rule   Add a rule to the set of rules that will be
added to the 'default' security group in a newly-created tenant.

But that's obviously excessively verbose. And the help strings are pulled
from the docstrings of the functions that implement the commands, so we're
limited to what can fit in a one-line docstring. (We could add another
source of help documentation -- e.g., `desc = getattr(callback, 'help',
callback.__doc__) or ''` on novaclient/shell.py line 531 -- but that seems
like it should be a last resort.)

How can we clean up the wording to make it clear that the default security
group is, in fact, not the 'default' security group or the security
group which is default, but rather another beast entirely which isn't even
actually a security group?

Naming: still the hardest problem in computer science. :(

-- 
Chris St. Pierre
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Testtools 1.7.0 may error if you installed it before reading this email

2015-03-10 Thread Robert Collins
On 11 March 2015 at 13:59, Robert Collins robe...@robertcollins.net wrote:
 On 11 March 2015 at 10:27, Dolph Mathews dolph.math...@gmail.com wrote:
 Great to hear that this has been addressed, as this impacted a few tests in
 keystone.

 (but why was the fix not released as 1.7.1?)

 There will be a new release indeed later today to fix a small UI issue
 on pypy3 which affects testtools CI, but the actual bug affecting
 1.7.0 on Python2 was entirely in the build process, not in the release
 itself.

 The built wheel was corrupt (missing files), as opposed to there being
 a bug in the code itself.

And indeed now that we've worked around Sphinx breaking pypy3 and
Python 3.2, I've been able to cut a new release of testtools, which
will bring with it a new wheel, and that will work around the bug in
capping in stable (because that's how 1.6.1 was working - wheels better
:)).

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-novaclient] Better wording for secgroup-*-default-rules? help text

2015-03-10 Thread Chris St. Pierre
On Tue, Mar 10, 2015 at 4:50 PM, melanie witt melwi...@gmail.com wrote:

 I don't think your suggestion for the help text is excessively verbose.
 There are already longer help texts for some commands than that, and I
 think it's important to accurately explain what commands do. You can use a
 multiline docstring to have a longer help text.


Ah, look at that! In some other projects, flake8 complains about a
docstring whose first line doesn't end in a period, so I didn't think it'd
be possible. If you don't think that's excessively verbose, there'll be a
patch in shortly. Thanks!
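
For reference, something like this is what I have in mind (sketch only --
the decorators and argument handling of the real do_* function are omitted,
so treat the exact shape as illustrative):

    def do_secgroup_add_default_rule(cs, args):
        """Add a rule to the set of rules that will be added to the
        'default' security group for new tenants.

        These default rules are only a template: changing them has no
        effect on any existing security group or on traffic between
        running guests; they are applied when a new tenant's 'default'
        group is first created.
        """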

Why do you say the default security group isn't actually a security
 group? The fact that it's per-tenant and therefore not necessarily
 consistent?


That's precisely the confusion -- the security group name 'default' is, of
course, a security group. But the default security group, as referenced
by the help text for these commands, is actually a sort of
meta-security-group object that is only used to populate the 'default'
security group in new tenants. It is not, in and of itself, an actual
security group. That is, adding a new rule with 'nova
secgroup-add-default-rules' has absolutely no effect on what network
traffic is allowed between guests; it only affects new tenants created
afterwards.

-- 
Chris St. Pierre
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread John Griffith
On Tue, Mar 10, 2015 at 10:29 AM, Russell Bryant rbry...@redhat.com wrote:

 The TC is in the middle of implementing a fairly significant change in
 project governance.  You can find an overview from Thierry on the
 OpenStack blog [1].

 Part of the change is to recognize more projects as being part of the
 OpenStack community.  Another critical part was replacing the integrated
 release with a set of tags.  A project would be given a tag if it meets
 some defined set of criteria.

 I feel that we're at a very vulnerable part of this transition.  We've
 abolished the incubation process and integrated release.  We've
 established a fairly low bar for new projects [2].  However, we have not
 yet approved *any* tags other than the one that reflects which projects
 are included in the final integrated release (Kilo) [3].  Despite the
 previously discussed challenges with the integrated release,
 it did at least mean that a project has met a very useful set of
 criteria [4].

 We now have several new project proposals.  However, I propose not
 approving any new projects until we have a tagging system that is at
 least far enough along to represent the set of criteria that we used to
 apply to all OpenStack projects (with exception for ones we want to
 consciously drop).  Otherwise, I think it's a significant setback to our
 project governance as we have yet to provide any useful way to navigate
 the growing set of projects.

 The resulting set of tags doesn't have to be focused on replicating our
 previous set of criteria.  The focus must be on what information is
 needed by various groups of consumers and tags are a mechanism to
 implement that.  In any case, we're far from that point because today we
 have nothing.

 I can't think of any good reason to rush into approving projects in the
 short term.  If we're not able to work out this rich tagging system in a
 reasonable amount of time, then maybe the whole approach is broken and
 we need to rethink the whole approach.

 Thanks,

 [1]
 http://www.openstack.org/blog/2015/02/tc-update-project-reform-progress/
 [2]
 http://governance.openstack.org/reference/new-projects-requirements.html
 [3] http://governance.openstack.org/reference/tags/index.html
 [4]

 http://governance.openstack.org/reference/incubation-integration-requirements.html

 --
 Russell Bryant

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


​I think these are great points Russell and agree completely.  I'd also say
that not only is there risk in rushing approvals in the short term, but I'd also
say that in my opinion there's little or no advantage either.  I like the
idea of suspending things temporarily until we get the tagging and other
details worked out a bit more.

Thanks,
John​
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Doug Hellmann


On Tue, Mar 10, 2015, at 12:46 PM, Doug Hellmann wrote:
 
 
 On Tue, Mar 10, 2015, at 12:29 PM, Russell Bryant wrote:
  The TC is in the middle of implementing a fairly significant change in
  project governance.  You can find an overview from Thierry on the
  OpenStack blog [1].
  
  Part of the change is to recognize more projects as being part of the
  OpenStack community.  Another critical part was replacing the integrated
  release with a set of tags.  A project would be given a tag if it meets
  some defined set of criteria.
  
  I feel that we're at a very vulnerable part of this transition.  We've
  abolished the incubation process and integrated release.  We've
  established a fairly low bar for new projects [2].  However, we have not
  yet approved *any* tags other than the one that reflects which projects
  are included in the final integrated release (Kilo) [3].  Despite the
  previously discussed challenges with the integrated release,
  it did at least mean that a project has met a very useful set of
  criteria [4].
  
  We now have several new project proposals.  However, I propose not
  approving any new projects until we have a tagging system that is at
  least far enough along to represent the set of criteria that we used to
  apply to all OpenStack projects (with exception for ones we want to
  consciously drop).  Otherwise, I think it's a significant setback to our
  project governance as we have yet to provide any useful way to navigate
  the growing set of projects.
  
  The resulting set of tags doesn't have to be focused on replicating our
  previous set of criteria.  The focus must be on what information is
  needed by various groups of consumers and tags are a mechanism to
  implement that.  In any case, we're far from that point because today we
  have nothing.
  
  I can't think of any good reason to rush into approving projects in the
  short term.  If we're not able to work out this rich tagging system in a
  reasonable amount of time, then maybe the whole approach is broken and
  we need to rethink the whole approach.
 
 I think we made it pretty clear that we would be taking approvals
 slowly, and that we might not approve any new projects before the
 summit, precisely for the reasons you state here. I have found the
 submitted proposals 

Oops

I have found the existing applications useful for thinking about what
tags we need, and what other criteria we might be missing (Joe's
proposal to add a team employer diversity requirement is one example).

Doug

 
  
  Thanks,
  
  [1]
  http://www.openstack.org/blog/2015/02/tc-update-project-reform-progress/
  [2]
  http://governance.openstack.org/reference/new-projects-requirements.html
  [3] http://governance.openstack.org/reference/tags/index.html
  [4]
  http://governance.openstack.org/reference/incubation-integration-requirements.html
  
  -- 
  Russell Bryant
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-10 Thread Carl Baldwin
Honestly, I'm a little frustrated that this is coming up now when we
tried very hard to discuss this during the spec review and we thought
we got to a resolution.  It seems a little late to go back to the
drawing board.

On Mon, Mar 9, 2015 at 7:05 AM, Salvatore Orlando sorla...@nicira.com wrote:
 The problem with this approach is, in my opinion, that attributes such as
 gateway_ip are used with different semantics in requests and responses; this
 might also need users to write client applications expecting the values in
 the response might differ from those in the request.

Is this so strange?  Could you explain why this is a problem with an example?

 1) (this is more for neutron people) Is there a real use case for requesting
 specific gateway IPs and allocation pools when allocating from a pool? If
 not, maybe we should let the pool set a default gateway IP and allocation
 pools. The user can then update them with another call. Another option would
 be to provide subnet templates from which a user can choose. For instance
 one template could have the gateway as first IP, and then a single pool for
 the rest of the CIDR.

If you really don't like this aspect of the design then my vote will
be to drop support for this use case for Kilo.  Neutron will specify
gateway and allocation pools from the subnet and maybe the user can
update the subnet afterward if it needs to change.

 2) Is the action of creating a subnet from a pool better realized as a
 different way of creating a subnet, or should there be some sort of pool
 action? Eg.:

I think this shift in direction will push this work entirely out to
Liberty.  We have one week until Kilo-3.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Unknown resource OS::Heat::ScaledResource

2015-03-10 Thread Steven Hardy
On Tue, Mar 10, 2015 at 04:26:28PM +, Manickam, Kanagaraj wrote:
Hi,
 
 
 
I observed in one of the patches mentioned below that OS::Heat::ScaledResource
is reported as unknown; could anyone help here to resolve the issue?
Thanks.
 
 
 

 http://logs.openstack.org/76/157376/8/check/check-heat-dsvm-functional-mysql/c9a1be3/logs/screen-h-eng.txt.gz
 
 reports OS::Heat::ScaledResource as unknown

Your patch appears to introduce a regression in this test:

https://github.com/openstack/heat/blob/master/heat_integrationtests/functional/test_autoscaling.py#L636

You can see we override the normal mapping from OS::Heat::ScaledResource to
AWS::EC2::Instance - the instance resource is overridden by provider.yaml
in this test, which I assume means you're breaking the environment somehow
in your patch and losing that mapping.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-10 Thread Carl Baldwin
On Mon, Mar 9, 2015 at 5:34 PM, Tidwell, Ryan ryan.tidw...@hp.com wrote:
 With implicit allocations, the thinking is that this is where a subnet is
 created in a backward-compatible way with no subnetpool_id and the subnets
 API’s continue to work as they always have.

Correct.

 In the case of a specific subnet allocation request (create-subnet passing a
 pool ID and specific CIDR), I would look in the pool’s available prefix list
 and carve out a subnet from one of those prefixes and ask for it to be
 reserved for me.  In that case I know the CIDR I’ll be getting up front.  In
 such a case, I’m not sure I’d ever specify my gateway using notation like
 0.0.0.1, even if I was allowed to.  If I know I’ll be getting 10.10.10.0/24,
 I can simply pass gateway_ip as 10.10.10.1 and be done with it.  I see no
 added value in supporting that wildcard notation for a gateway on a specific
 subnet allocation.

Correct.  Not what it was designed for.

 In the case of an “any” subnet allocation request (create-subnet passing a
 pool ID, but no specific CIDR), I’m already delegating responsibility for
 addressing my subnet to Neutron.  As such, it seems reasonable to not have
 strong opinions about details like gateway_ip when making the request to
 create a subnet in this manner.

I'm okay dropping this use case if we need to.

 To me, this all points to not supporting wildcards for gateway_ip and
 allocation_pools on subnet create (even though it found its way into the
 spec).  My opinion (which I think lines up with yours) is that on an any
 request it makes sense to let the pool fill in allocation_pools and
 gateway_ip when requesting an “any” allocation from a subnet pool.  When
 creating a specific subnet from a pool, gateway IP and allocation pools
 could still be passed explicitly by the user.

If this is what we need to do.  I don't think there is high demand for
this use case.
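
For reference, the two request shapes we're talking about would look
roughly like this through python-neutronclient (the field names follow the
spec under discussion and could still change, so treat them as
provisional):

    from neutronclient.v2_0 import client

    neutron = client.Client(auth_url='http://controller:5000/v2.0',
                            username='demo', password='secret',
                            tenant_name='demo')      # illustrative creds

    NET_ID = 'replace-with-network-uuid'
    POOL_ID = 'replace-with-subnetpool-uuid'

    # Specific allocation: the caller picks the CIDR (and gateway) itself.
    neutron.create_subnet({'subnet': {
        'network_id': NET_ID,
        'subnetpool_id': POOL_ID,
        'ip_version': 4,
        'cidr': '10.10.10.0/24',
        'gateway_ip': '10.10.10.1',
    }})

    # "Any" allocation: just ask the pool for a /24 and let IPAM pick the
    # CIDR, gateway and allocation pools.
    neutron.create_subnet({'subnet': {
        'network_id': NET_ID,
        'subnetpool_id': POOL_ID,
        'ip_version': 4,
        'prefixlen': 24,
    }})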

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Second Release of Magnum

2015-03-10 Thread Adrian Otto
We are proud to announce our second release of Magnum [1]. This release [2]
includes numerous improvements: significant test code coverage, multi-tenancy
support, scalable bays, support for CoreOS nodes, 8-bit character support, and
52 other enhancements, bug fixes, and items of technical debt elimination.

To get started with Magnum, see our dev-quickstart.rst document [3].

Regards,

Adrian Otto

References:

[1] https://wiki.openstack.org/wiki/Magnum Magnum Project
[2] https://github.com/stackforge/magnum/releases/tag/2015.1.0b2 Release 2
[3] 
https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] [all] glance_store release 0.3.0

2015-03-10 Thread Nikhil Komawar
The glance_store release management team is pleased to announce:

glance_store version 0.3.0 has been released on Tuesday March 10th around 
1755 UTC.

For more information, please find the details at:

https://launchpad.net/glance-store/+milestone/v0.3.0

Please report the issues through launchpad:

https://bugs.launchpad.net/glance-store
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Doug Hellmann


On Tue, Mar 10, 2015, at 12:29 PM, Russell Bryant wrote:
 The TC is in the middle of implementing a fairly significant change in
 project governance.  You can find an overview from Thierry on the
 OpenStack blog [1].
 
 Part of the change is to recognize more projects as being part of the
 OpenStack community.  Another critical part was replacing the integrated
 release with a set of tags.  A project would be given a tag if it meets
 some defined set of criteria.
 
 I feel that we're at a very vulnerable part of this transition.  We've
 abolished the incubation process and integrated release.  We've
 established a fairly low bar for new projects [2].  However, we have not
 yet approved *any* tags other than the one that reflects which projects
 are included in the final integrated release (Kilo) [3].  Despite the
 previously discussed challenges with the integrated release,
 it did at least mean that a project has met a very useful set of
 criteria [4].
 
 We now have several new project proposals.  However, I propose not
 approving any new projects until we have a tagging system that is at
 least far enough along to represent the set of criteria that we used to
 apply to all OpenStack projects (with exception for ones we want to
 consciously drop).  Otherwise, I think it's a significant setback to our
 project governance as we have yet to provide any useful way to navigate
 the growing set of projects.
 
 The resulting set of tags doesn't have to be focused on replicating our
 previous set of criteria.  The focus must be on what information is
 needed by various groups of consumers and tags are a mechanism to
 implement that.  In any case, we're far from that point because today we
 have nothing.
 
 I can't think of any good reason to rush into approving projects in the
 short term.  If we're not able to work out this rich tagging system in a
 reasonable amount of time, then maybe the whole approach is broken and
 we need to rethink the whole approach.

I think we made it pretty clear that we would be taking approvals
slowly, and that we might not approve any new projects before the
summit, precisely for the reasons you state here. I have found the
submitted proposals 

 
 Thanks,
 
 [1]
 http://www.openstack.org/blog/2015/02/tc-update-project-reform-progress/
 [2]
 http://governance.openstack.org/reference/new-projects-requirements.html
 [3] http://governance.openstack.org/reference/tags/index.html
 [4]
 http://governance.openstack.org/reference/incubation-integration-requirements.html
 
 -- 
 Russell Bryant
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-10 Thread Carl Baldwin
Neutron currently does not enforce the uniqueness, or non-overlap, of
subnet cidrs within the address scope for a single tenant.  For
example, if a tenant chooses to use 10.0.0.0/24 on more than one
subnet, he or she is free to do so.  Problems will arise when trying
to connect a router between these subnets but that is left up to the
tenant to work out.

In the current IPAM rework, we had decided to allow this overlap in
the reference implementation for backward compatibility.  However,
we've hit a snag.  It would be convenient to use the subnet cidr as
the handle with which to refer to a previously allocated subnet when
talking to IPAM.  If overlap is allowed, this is not possible and we
need to come up with another identifier such as Neutron's subnet_id or
another unique IPAM specific ID.  It could be a burden on an external
IPAM system -- which does not allow overlap -- to work with a
completely separate identifier for a subnet.

I do not know of anyone using this capability (or mis-feature) of
Neutron.  I would hope that tenants are aware of the issues with
trying to route between subnets with overlapping address spaces and
would avoid it.  Is this potential overlap something that we should
really be worried about?  Could we just add the assumption that
subnets do not overlap within a tenant's scope?

An important thing to note is that this topic is different than
allowing overlap of cidrs between tenants.  Neutron will continue to
allow overlap of addresses between tenants and support the isolation
of these address spaces.  The IPAM rework will support this.

Carl Baldwin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-10 Thread Fawad Khaliq
On Tue, Mar 10, 2015 at 10:38 PM, Gabriel Bezerra gabri...@lsd.ufcg.edu.br
wrote:

 Em 10.03.2015 14:34, Gabriel Bezerra escreveu:

  Em 10.03.2015 14:24, Carl Baldwin escreveu:
 Neutron currently does not enforce the uniqueness, or non-overlap, of
 subnet cidrs within the address scope for a single tenant.  For
 example, if a tenant chooses to use 10.0.0.0/24 on more than one
 subnet, he or she is free to do so.  Problems will arise when trying
 to connect a router between these subnets but that is left up to the
 tenant to work out.

 In the current IPAM rework, we had decided to allow this overlap in
 the reference implementation for backward compatibility.  However,
 we've hit a snag.  It would be convenient to use the subnet cidr as
 the handle with which to refer to a previously allocated subnet when
 talking to IPAM.  If overlap is allowed, this is not possible and we
 need to come up with another identifier such as Neutron's subnet_id or
 another unique IPAM specific ID.  It could be a burden on an external
 IPAM system -- which does not allow overlap -- to work with a
 completely separate identifier for a subnet.

 I do not know of anyone using this capability (or mis-feature) of
 Neutron.  I would hope that tenants are aware of the issues with
 trying to route between subnets with overlapping address spaces and
 would avoid it.  Is this potential overlap something that we should
 really be worried about?  Could we just add the assumption that
 subnets do not overlap within a tenant's scope?

 An important thing to note is that this topic is different than
 allowing overlap of cidrs between tenants.  Neutron will continue to
 allow overlap of addresses between tenants and support the isolation
 of these address spaces.  The IPAM rework will support this.

 Carl Baldwin


 I'd vote for allowing against such restriction, but throwing an error
 in case of creating a router between the subnets.


 Fixing my previous e-mail:
 I'd vote against such restriction, but throwing an error in case of
 creating a router between the subnets that overlap.

+1 to Gabriel's suggestion. Multiple routers and multiple subnets with
overlapping IPs are a perfectly valid scenario and are used in some
blueprints; for instance, the PLUMgrid plugin supports it. Throwing an error for
overlapping IPs on router interfaces seems like the right approach.




 I can imagine a tenant running multiple instances of an application,
 each one with its own network that uses the same address range, to
 minimize configuration differences between them.


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron] Best API for generating subnets from ool

2015-03-10 Thread Carl Baldwin
On Tue, Mar 10, 2015 at 12:53 AM, Miguel Ángel Ajo majop...@redhat.com wrote:

 a) What if the subnet pools go into an external network, so, the gateway is
 predefined and external, we may want to be able to specify it, we could
 assume the convention that we’re going to expect the gateway to be on the
 first IP of the subnet...

In this case, you're not going to ask IPAM for just any subnet.
You're going to tell it exactly what you want.  So, the point is moot
here.

 b) Thinking of an on-link route, the gateway could be a fixed IP (regardless
 of the subnet CIDR), this case is not fully supported now in neutron
 l3-agent now, but I plan to add it on the next cycle [5] (sorry, I’ve been a
 bit slow at this), it’s a very neat standard where you can route RIPE blocks
 as subnets to a physical net without spending any extra IP for the router.

IPAM doesn't care if your gateway is an on-link route outside of the
subnet.  Neutron will not even tell IPAM about such a case.  So, we're
fine here.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-10 Thread Ryan Moats


Gabriel Bezerra gabri...@lsd.ufcg.edu.br wrote on 03/10/2015 12:34:30 PM:


 Em 10.03.2015 14:24, Carl Baldwin escreveu:
  Neutron currently does not enforce the uniqueness, or non-overlap, of
  subnet cidrs within the address scope for a single tenant.  For
  example, if a tenant chooses to use 10.0.0.0/24 on more than one
  subnet, he or she is free to do so.  Problems will arise when trying
  to connect a router between these subnets but that is left up to the
  tenant to work out.
 
  In the current IPAM rework, we had decided to allow this overlap in
  the reference implementation for backward compatibility.  However,
  we've hit a snag.  It would be convenient to use the subnet cidr as
  the handle with which to refer to a previously allocated subnet when
  talking to IPAM.  If overlap is allowed, this is not possible and we
  need to come up with another identifier such as Neutron's subnet_id or
  another unique IPAM specific ID.  It could be a burden on an external
  IPAM system -- which does not allow overlap -- to work with a
  completely separate identifier for a subnet.
 
  I do not know of anyone using this capability (or mis-feature) of
  Neutron.  I would hope that tenants are aware of the issues with
  trying to route between subnets with overlapping address spaces and
  would avoid it.  Is this potential overlap something that we should
  really be worried about?  Could we just add the assumption that
  subnets do not overlap within a tenant's scope?
 
  An important thing to note is that this topic is different than
  allowing overlap of cidrs between tenants.  Neutron will continue to
  allow overlap of addresses between tenants and support the isolation
  of these address spaces.  The IPAM rework will support this.
 
  Carl Baldwin


 I'd vote for allowing against such restriction, but throwing an error in
 case of creating a router between the subnets.

 I can imagine a tenant running multiple instances of an application,
 each one with its own network that uses the same address range, to
 minimize configuration differences between them.


While I'd personally like to see this be restricted (Carl's position), I
know
of at least one existence proof where management applications are doing
precisely what Gabriel is suggesting - reusing the same address range to
minimize the configuration differences.
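For reference, the router-interface check being proposed is cheap to express
with netaddr, which Neutron already depends on. A minimal sketch only, not
actual Neutron code, with illustrative function and variable names:

    import netaddr

    def cidrs_overlap(cidr_a, cidr_b):
        # Two CIDRs overlap when their integer address ranges intersect.
        a = netaddr.IPNetwork(cidr_a)
        b = netaddr.IPNetwork(cidr_b)
        return a.first <= b.last and b.first <= a.last

    def check_router_interface(new_cidr, attached_cidrs):
        # Reject wiring a subnet into a router that already has an
        # overlapping subnet attached.
        for existing in attached_cidrs:
            if cidrs_overlap(new_cidr, existing):
                raise ValueError('subnet %s overlaps %s on this router'
                                 % (new_cidr, existing))

    check_router_interface('10.0.1.0/24', ['10.0.0.0/24'])      # fine
    # check_router_interface('10.0.0.128/25', ['10.0.0.0/24'])  # raises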

Ryan Moats
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress]How to add tempest tests for testing murano driver

2015-03-10 Thread Wong, Hong
Hi Aaron,

I just want to confirm how CI is running the congress tempest tests in its 
environment as I am about to check in a tempest test for testing murano 
deployment.  If I check in the test script to 
congress/contrib/tempest/tempest/scenario/congress_datasources, will the CI 
take care of running the test by copying it to 
stack/tempest/tempest/scenario/congress_datasources?  So, I don't need to 
worry about adding python-congressclient and python-muranoclient in 
stack/tempest/requirements.txt, right?

Thanks,
Hong



From: Aaron Rosen [mailto:aaronoro...@gmail.com]
Sent: Monday, March 09, 2015 9:28 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress]How to add tempest tests for testing 
murano driver

Hi Hong,

I think you should be able to run the tempest tests with ./run_tempest.sh -N, which by 
default uses site-packages so they should be installed by the devstack script. 
If you want to run tempest via tox and venv you'll need to do:

echo python-congressclient >> requirements.txt
echo python-muranoclient >> requirements.txt

Then have tox build the venv.

Best,

Aaron

On Mon, Mar 9, 2015 at 8:28 PM, Wong, Hong 
hong.w...@hp.com wrote:
Hi Tim and Aaron,

I got the latest changes from r157166 and I see the thirdparty-requirements.txt 
file where you can define the murano client (it's already there), so the unit 
tests for the murano driver can run out of the box.  However, this change is only 
in congress, so the tempest tests (in the tempest/ directory, where the congress 
tempest tests need to be copied to as described in the readme file) that require 
the murano and congress clients will still have issues, as tempest doesn't have 
the thirdparty requirements file concept.  Will the r157166 changes also be 
implemented in the tempest package?

Thanks,
Hong


--



Message: 10

Date: Mon, 2 Mar 2015 15:39:11 +

From: Tim Hinrichs thinri...@vmware.com

To: OpenStack Development Mailing List (not for usage questions)

  
openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [Congress]How to add tempest tests for

  testing murano driver

Message-ID: 
d6dbf6ed-2207-4e19-9eec-c270bce2f...@vmware.com

Content-Type: text/plain; charset=utf-8



Hi Hong,



Aaron started working on this, but we don't have anything in place yet, as far 
as I know.  Here's a starting point:



https://review.openstack.org/#/c/157166/



Tim



On Feb 26, 2015, at 2:56 PM, Wong, Hong 
hong.w...@hp.com wrote:



Hi Aaron,



I am new to congress and trying to write tempest tests for the newly added 
murano datasource driver.  The murano datasource tempest tests require 
both the murano and python-congress clients as dependencies.  I was told that I 
can't just simply add the requirements to the tempest/requirements.txt file, as 
both packages are not in the main branch, so CI will not be able to pick 
them up.  Do you know of any workaround?



Thanks,

Hong


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-10 Thread Gabriel Bezerra

Em 10.03.2015 14:34, Gabriel Bezerra escreveu:

Em 10.03.2015 14:24, Carl Baldwin escreveu:
Neutron currently does not enforce the uniqueness, or non-overlap, of
subnet cidrs within the address scope for a single tenant.  For
example, if a tenant chooses to use 10.0.0.0/24 on more than one
subnet, he or she is free to do so.  Problems will arise when trying
to connect a router between these subnets but that is left up to the
tenant to work out.

In the current IPAM rework, we had decided to allow this overlap in
the reference implementation for backward compatibility.  However,
we've hit a snag.  It would be convenient to use the subnet cidr as
the handle with which to refer to a previously allocated subnet when
talking to IPAM.  If overlap is allowed, this is not possible and we
need to come up with another identifier such as Neutron's subnet_id or
another unique IPAM specific ID.  It could be a burden on an external
IPAM system -- which does not allow overlap -- to work with a
completely separate identifier for a subnet.

I do not know of anyone using this capability (or mis-feature) of
Neutron.  I would hope that tenants are aware of the issues with
trying to route between subnets with overlapping address spaces and
would avoid it.  Is this potential overlap something that we should
really be worried about?  Could we just add the assumption that
subnets do not overlap within a tenant's scope?

An important thing to note is that this topic is different than
allowing overlap of cidrs between tenants.  Neutron will continue to
allow overlap of addresses between tenants and support the isolation
of these address spaces.  The IPAM rework will support this.

Carl Baldwin


I'd vote for allowing against such restriction, but throwing an error
in case of creating a router between the subnets.


Fixing my previous e-mail:
I'd vote against such restriction, but throwing an error in case of 
creating a router between the subnets that overlap.




I can imagine a tenant running multiple instances of an application,
each one with its own network that uses the same address range, to
minimize configuration differences between them.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] FYI : Micro-versioning for Nova API

2015-03-10 Thread Ben Swartzlander



On 03/09/2015 08:48 PM, Li, Chen wrote:


Hello Manila,

I noticed there were some discussions about api extensions in the past 
few weeks.


Looks like nova has similar discussions too.

“Each extension gets a version”, if my understanding of the purpose of the api 
extension discussion is correct.


Not sure if you already known it or not.




I wasn't aware of this, and it is relevant, so thanks for pointing it 
out. I'm still not decided on what approach would be best for Manila. I 
have experience dealing with API compatibility issues from other 
projects, and I think the main thing I want to avoid is supporting too many 
versions at the same time. There will be cases where supporting 2 
versions in parallel will be needed, but I'd prefer never to need more 
than 2. I'm not sure the Nova proposal helps towards that end.




I’m no expert here, so just FYI:

https://wiki.openstack.org/wiki/Nova/ProposalForAPIMicroVersions

http://lists.openstack.org/pipermail/openstack-dev/2014-September/046482.html

They had some discussions recently about where/how to add new API 
functionality:


http://lists.openstack.org/pipermail/openstack-dev/2015-March/058493.html

Thanks.

-chen



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [log] Log working group -- Alternate moderator needed for today

2015-03-10 Thread Kuvaja, Erno
You mean for tomorrow? No worries, I can kick off the meeting and run through 
the agenda if we have something to address.

Take best out of the ops meetup!


-  Erno

From: Rochelle Grober [mailto:rochelle.gro...@huawei.com]
Sent: Tuesday, March 10, 2015 4:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [log] Log working group -- Alternate moderator needed 
for today

Or Cancellation.  I'm in the Ops Midcycle meeting and can't guarantee I can 
join.

Meeting meets Wednesdays at 20:00UTC, which is now 11am PDT.

--Rocky
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] must-fix bugs for final kilo releases

2015-03-10 Thread Doug Hellmann
I have started an etherpad to track bugs we consider critical for final
releases of incubator modules and library code for Kilo. I added the 2
items discussed in yesterday's meeting, but please add other items to
the list as needed so we can track them.

Thanks,
Doug

https://etherpad.openstack.org/p/oslo-kilo-final-bug-list

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Common library for shared code

2015-03-10 Thread James Slagle
On Mon, Mar 9, 2015 at 4:35 PM, Jan Provazník jprov...@redhat.com wrote:
 Hi,
 it would make sense to have a library for the code shared by Tuskar UI and
 CLI (I mean TripleO CLI - whatever it will be, not tuskarclient which is
 just a thing wrapper for Tuskar API). There are various actions which
 consist from more that a single API call to an openstack service, to give
 some examples:

 - nodes registration - for loading a list of nodes from a user defined file,
 this means parsing a CSV file and then feeding Ironic with this data
 - decommission a resource node - this might consist of disabling
 monitoring/health checks on this node, then gracefully shut down the node
 - stack breakpoints - setting breakpoints will allow manual
 inspection/validation of changes during stack-update, user can then update
 nodes one-by-one and trigger rollback if needed

I agree something is needed. In addition to the items above, much of what's
needed is the post-deployment steps from devtest_overcloud.sh. I'd like to see
those be consumable from the UI and CLI.

I think we should be aware though that where it makes sense to add things
to os-cloud-config directly, we should just do that.


 It would be nice to have a place (library) where the code could live and
 where it could be shared both by web UI and CLI. We already have
 os-cloud-config [1] library which focuses on configuring OS cloud after
 first installation only (setting endpoints, certificates, flavors...) so not
 all shared code fits here. It would make sense to create a new library where
 this code could live. This lib could be placed on Stackforge for now and it
 might have very similar structure as os-cloud-config.

 And most important... what is the best name? Some of ideas were:
 - tuskar-common

I agree with Dougal here, -1 on this.

 - tripleo-common
 - os-cloud-management - I like this one, it's consistent with the
 os-cloud-config naming

I'm more or less happy with any of those.

However, if we wanted something to match the os-*-config pattern, we could
go with:
- os-management-config
- os-deployment-config

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Unknown resource OS::Heat::ScaledResource

2015-03-10 Thread Zane Bitter

On 10/03/15 12:26, Manickam, Kanagaraj wrote:

Hi,

I observed in one of the patch mentioned below, OS::Heat::ScaledResource
is reported as unknown, could anyone help here to resolve the issue. Thanks.

http://logs.openstack.org/76/157376/8/check/check-heat-dsvm-functional-mysql/c9a1be3/logs/screen-h-eng.txt.gz


  reports OS::Heat::ScaledResource as unknown


Some more context for anyone looking at this:

* The resource type mapping is stored in the environment (for all 
autoscaling group nested stacks).
* The error is happening when deleting the autoscaling group - i.e. upon 
loading the stack from the database the mapping is no longer present in 
the environment
* https://review.openstack.org/#/c/157376/8 is the patch in question, 
which is changing the way the environment is stored (evidently incorrectly)


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE CASCADE

2015-03-10 Thread Adam Young

On 03/10/2015 10:23 AM, Mike Bayer wrote:

if *that’s* what you mean, that’s known as a “polymorphic foreign key”, and
it is not actually a foreign key at all, it is a terrible antipattern started by
the PHP/Rails community and carried forth by projects like Django.
A) Heh, it is much, much older than that.  SQL databases have been around 
long enough for these antipatterns to be discovered and rediscovered 
by multiple generations.  I'm aware of the means by which we can mitigate 
them.


But that is not what we are doing here.  These are not even parity issues.  
It is distributed data.


Users and Groups are in not just one LDAP server, but many.  With 
Federation, the users will not even be in a system we can enumerate.  
Which is good; we should never have been allowing list users in the 
first place.


What the Assignments table is doing is pulling together the Users and 
Groups from remote systems with the role definitions and project 
definitions in the local database.  The data is not in one database; it 
is in many.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-10 Thread Gabriel Bezerra

Em 10.03.2015 14:24, Carl Baldwin escreveu:

Neutron currently does not enforce the uniqueness, or non-overlap, of
subnet cidrs within the address scope for a single tenant.  For
example, if a tenant chooses to use 10.0.0.0/24 on more than one
subnet, he or she is free to do so.  Problems will arise when trying
to connect a router between these subnets but that is left up to the
tenant to work out.

In the current IPAM rework, we had decided to allow this overlap in
the reference implementation for backward compatibility.  However,
we've hit a snag.  It would be convenient to use the subnet cidr as
the handle with which to refer to a previously allocated subnet when
talking to IPAM.  If overlap is allowed, this is not possible and we
need to come up with another identifier such as Neutron's subnet_id or
another unique IPAM specific ID.  It could be a burden on an external
IPAM system -- which does not allow overlap -- to work with a
completely separate identifier for a subnet.

I do not know of anyone using this capability (or mis-feature) of
Neutron.  I would hope that tenants are aware of the issues with
trying to route between subnets with overlapping address spaces and
would avoid it.  Is this potential overlap something that we should
really be worried about?  Could we just add the assumption that
subnets do not overlap within a tenant's scope?

An important thing to note is that this topic is different than
allowing overlap of cidrs between tenants.  Neutron will continue to
allow overlap of addresses between tenants and support the isolation
of these address spaces.  The IPAM rework will support this.

Carl Baldwin



I'd vote for allowing against such restriction, but throwing an error in 
case of creating a router between the subnets.


I can imagine a tenant running multiple instances of an application, 
each one with its own network that uses the same address range, to 
minimize configuration differences between them.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Thierry Carrez
Russell Bryant wrote:
 [...]
 We now have several new project proposals.  However, I propose not
 approving any new projects until we have a tagging system that is at
 least far enough along to represent the set of criteria that we used to
 apply to all OpenStack projects (with exception for ones we want to
 consciously drop).  Otherwise, I think it's a significant setback to our
 project governance as we have yet to provide any useful way to navigate
 the growing set of projects.
 
 The resulting set of tags doesn't have to be focused on replicating our
 previous set of criteria.  The focus must be on what information is
 needed by various groups of consumers and tags are a mechanism to
 implement that.  In any case, we're far from that point because today we
 have nothing.

I agree that we need tags to represent the various facets of what was in
the integrated release concept.

I'm not sure we should block accepting new project teams until all tags
are defined, though. That sounds like a way to stall forever. So could
you be more specific ? Is there a clear set of tags you'd like to see
defined before we add new project teams ?

 I can't think of any good reason to rush into approving projects in the
 short term.  If we're not able to work out this rich tagging system in a
 reasonable amount of time, then maybe the whole approach is broken and
 we need to rethink the whole approach.

The current plan for the Vancouver Design Summit is to only give space
to OpenStack projects (while non-OpenStack projects may get space in
ecosystem sessions outside of the Design Summit). So it's only fair
for those projects to file for recognition before that happens.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Joe Gordon
On Tue, Mar 10, 2015 at 3:31 PM, James E. Blair cor...@inaugust.com wrote:

 Joe Gordon joe.gord...@gmail.com writes:

  After watching the TC meeting, and double checking with the meeting notes
  [0], it looks like the magnum vote was deferred to next week. But what
  concerns me is the lack of action items assigned that will help make sure
  next weeks discussion isn't just a repeat of what happened today.

 I believe we decided to talk about an application from a project _other_
 than Magnum during the next meeting to help us survey the existing
 applications and identify anything we would like to shore up before we
 proceed.  Then return to Magnum in a later meeting.

 I think that's important so that if we are going to check ourselves with
 the new process, that we do it without focusing unduly on Magnum's
 application, which I think is quite good.

 So I would like us to see where this thread gets us, whether there is
 more input to be digested from the ops summit, talk about other
 applications next week, and then get on to a vote on the Magnum
 application shortly.


Sounds like a good plan to me.

To get the ball rolling on the wider discussion I thought I would take a
quick look at the 4 applications we have today:

Disclaimer: I may be getting some of the details wrong here so take this
all with a large grain of salt.

1. python-openstackclient [0]
2. Magnum - OpenStack Containers Service [1]
3. Murano Application Catalog [2]
4. Group Based Policy (GBP) Project [3]

First let's consider these projects based on the old model [4], consisting of
Scope, Maturity, Process, API, QA, Documentation and Legal. Of those items,
let's look at Scope, Maturity, and QA.

*Scope*:
1. *Yes*. python-openstackclient has a clear scope and place in
OpenStack; in fact we already use it all over the place.
2. *Yes*. Magnum definitely doesn't overlap with any existing OpenStack
projects (especially nova). I think it's safe to say it has a clear scope
and is a measured progression of OpenStack as a whole.
3. *Maybe*. Not sure about the scope, it is fairly broad and there may be
some open ended corners, such as some references to billing. On the other
hand an application catalog sounds really useful and like a measured
progression for OpenStack as a whole. Murano may overlap with glance's
stated mission of "To provide a service where users can upload and
discover data assets that are meant to be used with other services, like
images for Nova and templates for Heat." Murano also relies heavily on the
Mistral service which is still in stackforge itself.
4. *Maybe*. GBP has a clear scope, although a broad and challenging one; it
does not duplicate work and seems like a measured progression.

*Maturity*:
1. *Yes*, 6 or so very active reviewers from different companies. No
reworks planned
2. *Maybe*. Diverse set of contributors. Major architectural work may be
needed to remove single points of failure. Unclear on what the API actually
is today: some of it is a management API to provision kubernetes, some is a
pass-through to kubernetes via the Magnum API (magnum runs 'kubectl create'
for you), and some requires the user to use the native kube API.
3. *Maybe* *Yes*. Active team, but not diverse. Not sure about rewrites.
Perhaps if there is overlap with other things.
4. *Maybe*. active team, Unclear on if a major rewrite is in its future

*QA*:
1. *Yes*
2. *No*. No devstack-gate job set up
3. *Yes*
4. *No*. No devstack-gate job running

When looking at just these 3 requirements, only python-openstackclient clearly
meets all of them; Magnum and GBP clearly fail the QA requirement, and
Murano is up in the air.

Now that we have some context of how these would have played out in the old
model, let's look at the new requirements [5].

*Aligns with the OpenStack Mission*:
Yes to all

*4 Opens:*
1. *Yes*, not sure if Dean was formally 'chosen' as PTL but that is easy to
sort out
2. *Yes*
3. *Yes*, similar detail about PTL
4. *Yes*, similar detail about PTL

*Supports Keystone*:
Yes for all

*Active team*:
Yes to all


[0] https://review.openstack.org/#/c/161885/
[1] https://review.openstack.org/#/c/161080/
[2] https://review.openstack.org/#/c/162745/
[3] https://review.openstack.org/#/c/161902/
[4]
http://governance.openstack.org/reference/incubation-integration-requirements.html
[5] http://governance.openstack.org/reference/new-projects-requirements.html


 -Jim

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Testtools 1.7.0 may error if you installed it before reading this email

2015-03-10 Thread Joe Gordon
On Tue, Mar 10, 2015 at 5:09 PM, Alan Pevec ape...@gmail.com wrote:

  The wheel has been removed from PyPI and anyone installing testtools
  1.7.0 now will install from source which works fine.

 On stable/icehouse devstack fails[*] with
 pkg_resources.VersionConflict: (unittest2 0.5.1
 (/usr/lib/python2.7/dist-packages),
 Requirement.parse('unittest2>=1.0.0'))
 when installing testtools 1.7.0

 unittest2 is not capped in stable/icehouse requirements, why is it not
 upgraded by pip?


Tracking bug: https://bugs.launchpad.net/devstack/+bug/1430592




 Cheers,
 Alan


 [*] e.g.
 http://logs.openstack.org/14/144714/3/check/check-tempest-dsvm-neutron/4d195b5/logs/devstacklog.txt.gz

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Testtools 1.7.0 may error if you installed it before reading this email

2015-03-10 Thread Robert Collins
On 11 March 2015 at 10:27, Dolph Mathews dolph.math...@gmail.com wrote:
 Great to hear that this has been addressed, as this impacted a few tests in
 keystone.

 (but why was the fix not released as 1.7.1?)

There will be a new release indeed later today to fix a small UI issue
on pypy3 which affects testtools CI, but the actual bug affecting
1.7.0 on Python2 was entirely in the build process, not in the release
itself.

The built wheel was corrupt (missing files), as opposed to there being
a bug in the code itself.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-novaclient] Better wording for secgroup-*-default-rules? help text

2015-03-10 Thread melanie witt
On Mar 10, 2015, at 19:28, Chris St. Pierre chris.a.st.pie...@gmail.com wrote:

 Ah, look at that! In some other projects, flake8 complains about a docstring 
 whose first line doesn't end in a period, so I didn't think it'd be possible. 
 If you don't think that's excessively verbose, there'll be a patch in 
 shortly. Thanks!

Oh, right -- I wasn't thinking about that. Probably it's not a restriction in 
novaclient because documentation is generated from the docstrings.

 That's precisely the confusion -- the security group name 'default' is, of 
 course, a security group. But the default security group, as referenced by 
 the help text for these commands, is actually a sort of meta-security-group 
 object that is only used to populate the 'default' security group in new 
 tenants. It is not, in and of itself, an actual security group. That is, 
 adding a new rule with 'nova secgroup-add-default-rules' has absolutely no 
 effect on what network traffic is allowed between guests; it only affects new 
 tenants created afterwards.

Got it. I learned a lot about the default security group in nova-network 
because of your email and bug. It's actually generated if it doesn't exist for 
a tenant when a server is created. If it's found, it's reused and thus won't 
pick up any default rules that had been added since it was created. And then 
you could get into particulars like deleting the 'default' group, then you 
would get all freshest default rules next time you create a server, even if 
your tenant isn't new. Really not easy to understand.

melanie (melwitt)







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE CASCADE

2015-03-10 Thread Mike Bayer


Mike Bayer mba...@redhat.com wrote:

 
 I'm not entirely sure what you've said above actually prevents coders
 from relying on the constraints. Being careful about deleting all of the
 child rows before a parent is good practice. I have seen code like this
 in the past though:
 
  try:
      parent.delete()
  except ForeignKeyFailure:
      parent.children.delete()
      parent.delete()
 
 This means if you don't have the FK's, you may never delete the
 children. Is this a bug? YES. Is it super obvious that it is the wrong
 thing to do? No.
 
 So the point you’re making here is that, if foreign key constraints are
 removed, poorly written code might silently fail. I’m glad we agree this is
 an issue!  It’s the only point I’m making.

I apologize for my snark here. The above code is wrong, and I think it is
obviously wrong. People working on this code should be familiar with
SQLAlchemy basics (at least having read the ORM tutorial), and that includes
the very easy to use features of relationship management.

Even if we are dealing with a version of the above that does not use
SQLAlchemy, it should be apparent that a DELETE should be emitted for the
child rows whether or not they’ve been tested as existing, if we are
deleting on the criteria of “parent_id”. Code like the above should ideally
never get through review, and if code like that exists right now, it should
be fixed.
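For anyone less familiar with the relationship management being referred to,
here is a minimal, self-contained sketch (not Keystone code) of the ORM
emitting the child DELETEs via a cascade configured on the relationship,
independent of any database-level FK behavior:

    from sqlalchemy import Column, ForeignKey, Integer, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship, sessionmaker

    Base = declarative_base()

    class Parent(Base):
        __tablename__ = 'parent'
        id = Column(Integer, primary_key=True)
        # delete-orphan: the ORM deletes the children whenever the
        # parent is deleted; no FK error handling is required.
        children = relationship('Child', cascade='all, delete-orphan')

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer, ForeignKey('parent.id'))

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    parent = Parent(children=[Child(), Child()])
    session.add(parent)
    session.commit()

    session.delete(parent)   # emits DELETEs for both children first
    session.commit()
    assert session.query(Child).count() == 0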

What foreign key guarantees get us above is protection for the much
more common case where someone emits a DELETE for the parent row
without being at all aware that there are dependent rows present.  That
silent failure leaves those child rows as orphans, which will
lead to application failures when accessed, assuming the application
also attempts to access the referenced parent.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Heat] Expression of Bay Status

2015-03-10 Thread Hongbin Lu
Hi Adrian,

On Mon, Mar 9, 2015 at 6:53 PM, Adrian Otto adrian.o...@rackspace.com
wrote:

 Magnum Team,

 In the following review, we have the start of a discussion about how to
 tackle bay status:

 https://review.openstack.org/159546

 I think a key issue here is that we are not subscribing to an event feed
 from Heat to tell us about each state transition, so we have a low degree
 of confidence that our state will match the actual state of the stack in
 real-time. At best, we have an eventually consistent state for Bay
 following a bay creation.

 Here are some options for us to consider to solve this:

 1) Propose enhancements to Heat (or learn about existing features) to emit
 a set of notifications upon state changes to stack resources so the state
 can be mirrored in the Bay resource.


A drawback of this option is that it increases the difficulty of
trouble-shooting. In my experience of using Heat (SoftwareDeployments in
particular), Ironic and Trove, one of the most frequent errors I
encountered is that the provisioning resources stayed in deploying state
(never went to completed). The reason is that they were waiting for a callback
signal from the provisioning resource to indicate its completion, but the
callback signal was blocked for various reasons (e.g. incorrect firewall
rules, incorrect configs, etc.). Trouble-shooting such a problem is
generally harder.



 2) Spawn a task to poll the Heat stack resource for state changes, and
 express them in the Bay status, and allow that task to exit once the stack
 reaches its terminal (completed) state.

 3) Don’t store any state in the Bay object, and simply query the heat
 stack for status as needed.


 Are each of these options viable? Are there other options to consider?
 What are the pro/con arguments for each?

 Thanks,

 Adrian
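
For what it is worth, option 2 amounts to something like the sketch below,
using python-heatclient.  The names and the status callback are illustrative
only, not Magnum code, and error handling is omitted:

    import time

    from heatclient import client as heat_client

    def mirror_stack_status(heat, stack_id, update_bay_status, interval=10):
        # Poll the stack and mirror its status onto the Bay until the
        # stack reaches a terminal state, then let the task exit.
        while True:
            stack = heat.stacks.get(stack_id)
            update_bay_status(stack.stack_status)
            if stack.stack_status.endswith(('COMPLETE', 'FAILED')):
                return stack.stack_status
            time.sleep(interval)

    # heat = heat_client.Client('1', endpoint=heat_url, token=auth_token)
    # mirror_stack_status(heat, bay.stack_id, bay.save_status)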



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][third party] best packaging practices

2015-03-10 Thread Ihar Hrachyshka

Hi all,

The RDO project started to look into packaging, for Delorean, some of the
vendor libraries that were split from the neutron tree during Kilo, and
found some issues with some of the pypi packages that were released in
public.

We feel that communicating with each vendor individually may not be very
effective; instead, we should start collecting guidelines somewhere, for
reference by those that are new to pypi releases due to the recent vendor
split.

So I would like to advertise a wiki page [2] that was started to
collect best practices for releasing vendor code for packaging in
distributions. Both vendors and other packagers are welcome to fill in
the gaps.

Me being the sole author of the page for now, it's probably a little
bit Red Hat specific, and I am sure there are other requirements from
other distributions that are missing there. So I encourage other
packagers to get involved and fill in any gaps in the guidelines that
could help you in your packaging work.

[1]:
https://openstack.redhat.com/packaging/rdo-packaging.html#master-pkg-guide
[2]: https://wiki.openstack.org/wiki/Neutron/VendorSplitPackaging

Thanks,
/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Russell Bryant
The TC is in the middle of implementing a fairly significant change in
project governance.  You can find an overview from Thierry on the
OpenStack blog [1].

Part of the change is to recognize more projects as being part of the
OpenStack community.  Another critical part was replacing the integrated
release with a set of tags.  A project would be given a tag if it meets
some defined set of criteria.

I feel that we're at a very vulnerable part of this transition.  We've
abolished the incubation process and integrated release.  We've
established a fairly low bar for new projects [2].  However, we have not
yet approved *any* tags other than the one that reflects which projects
are included in the final integrated release (Kilo) [3].  Despite the
previously discussed challenges with the integrated release,
it did at least mean that a project has met a very useful set of
criteria [4].

We now have several new project proposals.  However, I propose not
approving any new projects until we have a tagging system that is at
least far enough along to represent the set of criteria that we used to
apply to all OpenStack projects (with exception for ones we want to
consciously drop).  Otherwise, I think it's a significant setback to our
project governance as we have yet to provide any useful way to navigate
the growing set of projects.

The resulting set of tags doesn't have to be focused on replicating our
previous set of criteria.  The focus must be on what information is
needed by various groups of consumers and tags are a mechanism to
implement that.  In any case, we're far from that point because today we
have nothing.

I can't think of any good reason to rush into approving projects in the
short term.  If we're not able to work out this rich tagging system in a
reasonable amount of time, then maybe the whole approach is broken and
we need to rethink the whole approach.

Thanks,

[1] http://www.openstack.org/blog/2015/02/tc-update-project-reform-progress/
[2] http://governance.openstack.org/reference/new-projects-requirements.html
[3] http://governance.openstack.org/reference/tags/index.html
[4]
http://governance.openstack.org/reference/incubation-integration-requirements.html

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all][qa][gabbi][rally][tempest] Extend rally verfiy to unify work with Gabbi, Tempest and all in-tree functional tests

2015-03-10 Thread Timur Nurlygayanov
Hi,

I like this idea. We use Rally for verification of OpenStack clouds at scale,
and it is a real issue how to run all the functional tests from each
project with one script. If Rally will do this, I will use Rally to run
these tests.

Thank you!

On Mon, Mar 9, 2015 at 6:04 PM, Chris Dent chd...@redhat.com wrote:

 On Mon, 9 Mar 2015, Davanum Srinivas wrote:

  2. Is there a test project with Gabbi based tests that you know of?


 In addition to the ceilometer tests that Boris pointed out gnocchi
 is using it as well:

https://github.com/stackforge/gnocchi/tree/master/gnocchi/tests/gabbi

  3. What changes if any are needed in Gabbi to make this happen?


 I was unable to tell from the original what this is and how gabbi
 is involved but the above link ought to be able to show you how
 gabbi can be used. There's also the docs (which could do with some
 improvement, so suggestions or pull requests welcome):

http://gabbi.readthedocs.org/en/latest/
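
For a concrete idea of what the wiring looks like, the documented gabbi
pattern is roughly the following; a sketch only, where the directory name and
endpoint are assumptions rather than anything taken from ceilometer or gnocchi:

    import os

    from gabbi import driver

    TESTS_DIR = os.path.join(os.path.dirname(__file__), 'gabbits')

    def load_tests(loader, tests, pattern):
        # Build one unittest test per YAML file in gabbits/, run against
        # a live HTTP endpoint.
        return driver.build_tests(TESTS_DIR, loader,
                                  host='127.0.0.1', port=8041)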

 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][third party] Major third party CI breakage expected for out-of-tree plugins

2015-03-10 Thread Ihar Hrachyshka

Hi all,

The team is going to merge a patch to migrate to oslo.log [1] in the
very near future. This patch is expected to break all third party CI
for all vendor libraries that were split from the main repo in Kilo
and that rely on neutron.openstack.common.log module to be present.
(The patch removes the module; and no, we don't have an option to
leave it intact since it conflicts with oslo.log configuration
options, among other things).

So this is a heads-up for all out-of-tree vendor library maintainers
that they should stop using this particular oslo-incubator module from
neutron tree. The best short term option you have is copying the
module from neutron tree into your own tree and make all the code
refer to it. It may actually work, but it's not guaranteed. The best
option would be to migrate affected vendor libraries to oslo.log, to
stay in sync with neutron and to avoid potential issues due to mixing
library versions.
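
As a rough illustration (not taken from any particular vendor library), the
per-module change for the oslo.log switch is typically just the import and the
logger setup:

    # Before: relies on the oslo-incubator module being removed
    # from neutron.openstack.common import log as logging

    # After: oslo.log
    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)
    LOG.info('plugin initialized')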

At the very least, the team gives vendors two days to proceed with
solutions that would make their CI work with the patch merged. After
that time, the patch may be merged in the tree and break those who
haven't switched yet.

The team feels that there is no effective way to communicate those
kind of breakages to vendor library maintainers. That's why we've
started a new wiki page [2] to track breaking changes and hope that
all neutron contributors will update the page on demand.

[1]: https://review.openstack.org/159638
[2]: https://wiki.openstack.org/wiki/Neutron/LibraryAPIBreakage

Thanks,
/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Ceilometer] Real world experience with Ceilometer deployments - Feedback requested

2015-03-10 Thread gordon chung
sorry, i apparently don't know how to format emails...

cheers,
gord


From: g...@live.ca
To: openstack-operat...@lists.openstack.org; openstack-dev@lists.openstack.org
Date: Tue, 10 Mar 2015 16:05:47 -0400
Subject: Re: [openstack-dev] [Openstack-operators] [Ceilometer] Real world 
experience with Ceilometer deployments - Feedback requested




hi,

just to follow-up, thanks for the input. the usability of ceilometer is 
obviously a concern of ours and something the team tries to address with the 
resources we have.

as a quick help/update, here are some points of interest that i think might 
help:

- if using Juno+, DO use the notifier:// publisher rather than rpc:// as 
  there is a certain level of overhead that comes with rpc[1]. you can also 
  configure multiple messaging servers if there are load issues.
- a part of the telemetry team has been exploring tsdb and we expect to have 
  a tech preview for Kilo. the project is called Gnocchi[2]
- in Kilo, we expanded notification event handling (existing stacktach 
  integration code) and said events can be published to an external source(s) 
  or to a database (ElasticSearch for full-text querying, in addition to 
  mongo, sql)
- ceilometer does not configure databases. operators are expected to read up 
  on the db of choice and properly configure the db to their needs (ie. don't 
  run a default mongo install on a single node with no sharding to store data 
  from 2000 nodes)[3]
- DO adjust your pipeline to only store events/meters that you use. by 
  default, ceilometer gives you the world and from there you can filter based 
  on requirements.
- it's entirely possible to use ceilometer to gather data and store it 
  externally and avoid ceilometer storage (if you so choose)
- DO NOT use the SQL backend prior to Juno... for any deployment size... any...
- there was some work in Kilo to jitter the polling cycle of agents to 
  distribute load.
- the agents are designed to scale horizontally to increase bandwidth. also, 
  they work independently, so if you want just notifications, it's possible to 
  deploy just the notification agent and nothing else.

we've also been updating -- and still continuing to update -- some of the docs 
to better reflect some of the changes made to Ceilometer in Juno and 
Kilo[4][5]. particularly, i'd probably look at the architecture diagram[6] to 
get an idea of what components of ceilometer you could use to fit your needs.

i've probably missed stuff but i hope the above helps. as always, community 
help is always invited. if you have a patch that will improve ceilometer, the 
community gladly welcomes it.

[1] https://www.rabbitmq.com/tutorials/tutorial-six-python.html
[2] http://www.slideshare.net/EoghanGlynn/rdo-hangout-on-gnocchi
[3] http://blog.sileht.net/using-a-shardingreplicaset-mongodb-with-ceilometer
[4] http://docs.openstack.org/admin-guide-cloud/content/ch_admin-openstack-telemetry.html
[5] http://docs.openstack.org/developer/ceilometer/
[6] http://docs.openstack.org/developer/ceilometer/architecture.html (self-plug 
    for my amazing diagram skills)

cheers,
gord
  

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev   
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Driver documentation for Kilo [cinder] [neutron] [nova] [trove]

2015-03-10 Thread Erlon Cruz
Hi Anne,

Thanks for the quick answer. One thing that is still not clear to me is about
the documentation that is currently there. Will it be removed (converted to
the condensed version) in Kilo? If so, what are the milestones for that?

Erlon

On Tue, Mar 10, 2015 at 10:48 AM, Anne Gentle annegen...@justwriteclick.com
 wrote:



 On Tue, Mar 10, 2015 at 8:28 AM, Erlon Cruz sombra...@gmail.com wrote:

 Hi Anne,

 How about driver documentation that is in the old format? Will it be
 removed in Kilo?



 Hi Erlon,
 The spec doesn't have a specific person assigned for removal, and the only
 drivers the docs team signed up for through the blueprint are these:


- For cinder: volume drivers: document LVM and NFS; backup drivers:
document swift
- For glance: Document local storage, cinder, and swift as backends
- For neutron: document ML2 plug-in with the mechanisms drivers
OpenVSwitch and LinuxBridge
- For nova: document KVM (mostly), send Xen open source call for help
- For sahara: apache hadoop
- For trove: document all supported Open Source database engines like
MySQL.





 The wiki says: Bring all driver sections that are currently just ‘bare
 bones’ up to the standard mentioned. Will this be performed by the core team?


 Andreas has done some of that work, for example here:
 https://review.openstack.org/#/c/157086/

 We can use more hands of course, just coordinate the work here on the
 list. And Andreas, if there aren't any more to do, let us know. :)
 Thanks,
 Anne




 Thanks,
 Erlon

 On Fri, Mar 6, 2015 at 4:58 PM, Anne Gentle 
 annegen...@justwriteclick.com wrote:

 Hi all,

 We have been working on streamlining driver documentation for Kilo
 through a specification, on the mailing lists, and in my weekly What's Up
 Doc updates. Thanks for the reviews while we worked out the solutions.
 Here's the final spec:

 http://specs.openstack.org/openstack/docs-specs/specs/kilo/move-driver-docs.html

 Driver documentation caretakers, please note the following summary:

 - At a minimum, driver docs are published in the Configuration Reference
 with tables automatically generated from the code. There's a nice set of
 examples in this patch: https://review.openstack.org/#/c/157086/

 - If you want full driver docs on docs.openstack.org, please add a
 contact person's name and email to this wiki page:
 https://wiki.openstack.org/wiki/Documentation/VendorDrivers

 - To be included in the April 30 release of the Configuration Reference,
 driver docs are due by April 9th.

 Thanks all for your collaboration and attention.

 Anne


 --
 Anne Gentle
 annegen...@justwriteclick.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Anne Gentle
 annegen...@justwriteclick.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Driver documentation for Kilo [cinder] [neutron] [nova] [trove]

2015-03-10 Thread Anne Gentle
On Tue, Mar 10, 2015 at 3:35 PM, Erlon Cruz sombra...@gmail.com wrote:

 Hi Anne,

 Thanks for the quick answer. One thing that is still not clear to me is
 about the documentation that is currently there. Will it be removed
 (converted to the condensed version) in Kilo? If so, what are the milestones
 for that?


All deadlines revolve around the release of Kilo and time for reviews. I
don't know if we are planning on a purge with all the migration work still
to be done, so please just work on best effort by April 9th so the doc team
can work with you.

Thanks,
Anne



 Erlon

 On Tue, Mar 10, 2015 at 10:48 AM, Anne Gentle 
 annegen...@justwriteclick.com wrote:



 On Tue, Mar 10, 2015 at 8:28 AM, Erlon Cruz sombra...@gmail.com wrote:

 Hi Anne,

 How about driver documentation that is in the old format? Will it be
 removed in Kilo?



 Hi Erlon,
 The spec doesn't have a specific person assigned for removal, and the
 only drivers the docs team signed up for through the blueprint are these:


- For cinder: volume drivers: document LVM and NFS; backup drivers:
document swift
- For glance: Document local storage, cinder, and swift as backends
- For neutron: document ML2 plug-in with the mechanisms drivers
OpenVSwitch and LinuxBridge
- For nova: document KVM (mostly), send Xen open source call for help
- For sahara: apache hadoop
- For trove: document all supported Open Source database engines like
MySQL.





 The wiki says: Bring all driver sections that are currently just ‘bare
 bones’ up to the standard mentioned. Will this be performed by core team?


 Andreas has done some of that work, for example here:
 https://review.openstack.org/#/c/157086/

 We can use more hands of course, just coordinate the work here on the
 list. And Andreas, if there aren't any more to do, let us know. :)
 Thanks,
 Anne




 Thanks,
 Erlon

 On Fri, Mar 6, 2015 at 4:58 PM, Anne Gentle 
 annegen...@justwriteclick.com wrote:

 Hi all,

 We have been working on streamlining driver documentation for Kilo
 through a specification, on the mailing lists, and in my weekly What's Up
 Doc updates. Thanks for the reviews while we worked out the solutions.
 Here's the final spec:

 http://specs.openstack.org/openstack/docs-specs/specs/kilo/move-driver-docs.html

 Driver documentation caretakers, please note the following summary:

 - At a minimum, driver docs are published in the Configuration
 Reference with tables automatically generated from the code. There's a
 nice set of examples in this patch:
 https://review.openstack.org/#/c/157086/

 - If you want full driver docs on docs.openstack.org, please add a
 contact person's name and email to this wiki page:
 https://wiki.openstack.org/wiki/Documentation/VendorDrivers

 - To be included in the April 30 release of the Configuration
 Reference, driver docs are due by April 9th.

 Thanks all for your collaboration and attention.

 Anne


 --
 Anne Gentle
 annegen...@justwriteclick.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Anne Gentle
 annegen...@justwriteclick.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
annegen...@justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-10 Thread Carl Baldwin
On Tue, Mar 10, 2015 at 11:34 AM, Gabriel Bezerra
gabri...@lsd.ufcg.edu.br wrote:
 Em 10.03.2015 14:24, Carl Baldwin escreveu:
 I'd vote for allowing against such restriction, but throwing an error in
 case of creating a router between the subnets.

 I can imagine a tenant running multiple instances of an application, each
 one with its own network that uses the same address range, to minimize
 configuration differences between them.

I see your point but yuck!  This isn't the place to skimp on
configuration changes.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Lauren Sell
Dissolving the integrated release without having a solid plan and replacement 
is difficult to communicate to people who depend on OpenStack. We’re struggling 
on that front.

That said, I’m still optimistic about project structure reform and think it 
could be beneficial to the development community and users if it’s executed 
well. It gives us the opportunity to focus on a tighter core of services that 
are stable, reliable and scalable, while also recognizing more innovation 
that’s already happening in the community, beyond the existing integrated 
release. Coming out of the ops meetup in Philadelphia yesterday, a few things 
were clear:

- Operators don’t want the wild west. They are nervous about dissolving the 
integrated release, because they want a strong filter and rules - dependency 
mapping, release timing, test coverage - around the most widely adopted 
projects. I’m not sure we’re giving them a lot of confidence.
- They also want some kind of bar or filter for community projects, to provide 
guidance beyond whether it's in or out of the community. Tags can help with the 
nuances once they're in the tent, but I think there's some support for a somewhat 
higher bar overall. 
- That said, several people expressed they did not want duplication to prevent 
a project from making it into the tent. They would like to have options beyond 
the core set of projects.
- The layers concept came back into play. It was clear there was a distinct 
drop-off in operators running projects other than nova, keystone, glance, 
cinder, horizon and neutron.
- The operators community is keen to help define and apply some tags, 
especially those relevant to maturity, stability and general operability.

(I know several of you were at the ops meetup, so please jump in if I’ve missed 
or misrepresented some of the feedback. Notes from the session 
https://etherpad.openstack.org/p/PHL-ops-tags.)

Based on feedback and conversations yesterday, I think it’s worth evolving the 
overall project criteria to add 1) a requirement for contributor diversity, 2) 
some criteria for maturity like documentation, test coverage and 
integration/dependency requirements, and 3) a check that there are no trademark 
issues with the project name, since it will be referred to as an OpenStack 
project. I’m also unclear how we’re planning to refer to these projects: as 
“Foo, an OpenStack community project” but not “OpenStack Foo”?

For tags, I think defining a set of projects based on a broad reference 
architecture / use case like “base compute” or “compute kernel” and “object 
storage” is critical. Those tags will imply the projects share common 
dependencies and are released together. If we categorize tags that can be 
applied, “compute kernel” could be a higher level category and more prominent. 
Defining those initial tags should provide enough direction and confidence to 
start considering new projects.

Getting this worked out before the Kilo release would be valuable, because 
having the “last” integrated release without a clear plan forward creates some 
real concerns for those running or productizing the software. Not all tags or 
implementation details need to be defined, of course, but we should be able to 
communicate a solid plan for the layers and categories of tags, as well as the 
different bodies who may be involved in defining tags (ops community, etc) 
before expanding. 


 On Mar 10, 2015, at 2:02 PM, Russell Bryant rbry...@redhat.com wrote:
 
 On 03/10/2015 02:56 PM, Thierry Carrez wrote:
 Russell Bryant wrote:
 One point of clarification:
 
 On 03/10/2015 02:28 PM, Gabriel Hurley wrote:
 Even more concerning is the sentiment of projects we want to
 consciously drop from Russell's original email.
 
 This was in reference to criteria defined in:
 
 http://governance.openstack.org/reference/incubation-integration-requirements.html
 
 For example, we previously had a strict requirement *against*
 duplication of functionality among OpenStack projects unless it was with
 intent and with a clear plan to replace the old thing.  In this new
 model, that would be a requirement we would consciously drop.
 
 It's a requirement we *already* consciously dropped when we approved the
 new projects requirements. Or do you mean you want to come back on that
 decision[1]?
 
 No, I don't want to come back on it.  It was obviously a poorly worded
 comment.  It was an attempt to say that I'd like it if we were closer to
 having tags that covered most of those requirements, except for the
 things we no longer care about, such as the example given.
 
 -- 
 Russell Bryant
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not 

Re: [openstack-dev] [neutron][vpnaas] VPNaaS Subteam meetings

2015-03-10 Thread Paul Michali
Given the votes so far, the proposal is to move the meeting time to 1600
UTC on Tuesday. The channel is openstack-meeting-3 (as the only one
available).

In addition, the meeting will be on-demand, so if you want to have a
meeting, send email to this mailing list, at least 24 hours before the
meeting, and update the agenda on the wiki with the topic(s) you want to
discuss and the date of the meeting being requested.

https://wiki.openstack.org/wiki/Meetings/VPNaaS

Regards,

PCM (Paul Michali)

IRC pc_m (irc.freenode.com)
Twitter... @pmichali


On Mon, Mar 9, 2015 at 3:39 PM, Paul Michali p...@michali.net wrote:

 I guess I'll vote for (D), so that there is the possibility of early (1400
 UTC) and late (2100) on alternating weeks, given we don't have much to
 discuss lately and then changing to (C), if things pick up.

 Let's discuss at Tuesday's meeting (note DST change for US folks), at 1500
 UTC.



 PCM (Paul Michali)

 IRC pc_m (irc.freenode.com)
 Twitter... @pmichali


 On Fri, Mar 6, 2015 at 1:14 AM, Joshua Zhang joshua.zh...@canonical.com
 wrote:

 Hi all,

 I would also vote for (A) with 1500 UTC which is 23:00 in Beijing
 time -:)

 On Fri, Mar 6, 2015 at 1:22 PM, Mohammad Hanif mha...@brocade.com
 wrote:

   Hi all,

   I would also vote for (C) with 1600 UTC or later. This will
  hopefully increase participation from the Pacific time zone.

  Thanks,
 —Hanif.

   From: Mathieu Rohon
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 Date: Thursday, March 5, 2015 at 1:52 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron][vpnaas] VPNaaS Subteam meetings

 Hi,

  I'm fine with C) and 1600 UTC would be more adapted for EU time Zone :)

   However, I agree that the neutron-vpnaas meetings were mainly focused on
  maintaining the current IPSec implementation, by managing the slip out,
  adding StrongSwan support and adding functional tests.
   Maybe we will get a broader audience once we speak about adding
  new use cases such as edge-vpn.
   Edge-vpn use cases overlap with the Telco WG VPN use case [1]. Maybe
  those edge-vpn discussions should occur during the Telco WG meeting?

 [1]
 https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases#VPN_Instantiation

 On Thu, Mar 5, 2015 at 3:02 AM, Sridhar Ramaswamy sric...@gmail.com
 wrote:

 Hi Paul.

  I'd vote for (C) and a slightly later time-slot on Tuesdays - 1630
 UTC (or later).

   The meetings so far were indeed quite useful. I guess the current busy
  Kilo cycle is also contributing to the low turnout. As we pick things up
  going forward, this forum will be quite useful to discuss edge-vpn and,
  perhaps, other vpn variants.

  - Sridhar

  On Tue, Mar 3, 2015 at 3:38 AM, Paul Michali p...@michali.net wrote:

   Hi all! The email that I sent on 2/24 didn't make it to the mailing
  list (no wonder I didn't get responses!). I think I had an issue with the
  email address I used - sorry for the confusion!

  So, I'll hold the meeting today (1500 UTC meeting-4, if it is still
 available), and we can discuss this...


  We've been having very low turnout for meetings for the past several
 weeks, so I'd like to ask those in the community interested in VPNaaS, 
 what
 the preference would be regarding meetings...

  A) hold at the same day/time, but only on-demand.
 B) hold at a different day/time.
 C) hold at a different day/time, but only on-demand.
 D) hold as a on-demand topic in main Neutron meeting.

   Please vote for your preference, and provide a desired day/time if you pick
  B or C. The fallback will be (D) if there's not much interest in meeting
  anymore, or we can't seem to come to a consensus (or super-majority :)

  Regards,

  PCM

  Twitter: @pmichali
 TEXT: 6032894458
 PCM (Paul Michali)

  IRC pc_m (irc.freenode.com)
 Twitter... @pmichali



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Best Regards
 Zhang Hua(张华)
 Software Engineer | Canonical
 IRC:  zhhuabj

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 

Re: [openstack-dev] [stacktach] [oslo] stachtach - kombu - pika ??

2015-03-10 Thread Joshua Harlow
Maybe the plan for oslo.messaging should be to make it resolve some of 
the operators' issues first ;-)


https://etherpad.openstack.org/p/PHL-ops-rabbit-queue

https://etherpad.openstack.org/p/PHL-ops-large-deployments

I'd rather think we should like ummm, be thinking about fixing issues 
instead of adding new things to oslo.messaging. IMHO let some other 
project be the playground for these things (kombu, other...)...


-Josh

gordon chung wrote:

  We're going to be adding support for consuming from and writing to
Kafka as well and will likely use a kafka-specific library for that too.

is the plan to add this support to oslo.messaging? i believe there is
interest from the oslo.messaging team in supporting Kafka and in
addition to adding a Kafka publisher in ceilometer, there is supposed to
be a bp to add support to oslo.messaging.

cheers,
/gord/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Testtools 1.7.0 may error if you installed it before reading this email

2015-03-10 Thread Robert Collins
There was a broken wheel built when testtools 1.7.0 was released. The
wheel was missing the _compat2x.py file used for 2.x only syntax in
exception handling, for an unknown reason. (We know how to trigger it
- build the wheel with Python 3.4).

The wheel has been removed from PyPI and anyone installing testtools
1.7.0 now will install from source which works fine.
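
For anyone unsure whether they already picked up the bad wheel, a quick local
check is to look for the file the wheel left out. This is only a minimal
sketch, assuming the module normally ships as testtools/_compat2x.py inside
the installed package:

import os
import testtools

# Locate the installed testtools package and check for the file that the
# broken 1.7.0 wheel omitted (assumed path: testtools/_compat2x.py).
pkg_dir = os.path.dirname(testtools.__file__)
compat_path = os.path.join(pkg_dir, "_compat2x.py")

if os.path.exists(compat_path):
    print("_compat2x.py present; this install looks fine")
else:
    print("_compat2x.py missing; reinstall testtools 1.7.0 from the sdist")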

Cheers,
Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-10 Thread Salvatore Orlando
On 10 March 2015 at 16:48, Carl Baldwin c...@ecbaldwin.net wrote:

 Honestly, I'm a little frustrated that this is coming up now when we
 tried very hard to discuss this during the spec review and we thought
 we got to a resolution.  It seems a little late to go back to the
 drawing board.


I guess that frustration has now become part of the norm for Openstack.
It is not the first time I frustrate people because I ask to reconsider
decisions approved in specifications.
This is probably bad behaviour on my side. Anyway, I'm not suggesting to go
back to the drawing board, merely trying to get larger feedback, especially
since that patch should always have had the ApiImpact flag.
Needless to say, I'm happy to proceed with things as they've been agreed.



 On Mon, Mar 9, 2015 at 7:05 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  The problem with this approach is, in my opinion, that attributes such as
  gateway_ip are used with different semantics in requests and responses;
 this
  might also need users to write client applications expecting the values
 in
  the response might differ from those in the request.

 Is this so strange?  Could you explain why this is a problem with an
 example?


There is nothing intrinsically wrong with it - in the sense that it does
not impact the functional behaviour of the system.
My comment is about RESTful API guidelines. What we pass to/from the API
endpoint is a resource, in this case the subnet being created.
You expect gateway_ip to be always one thing - a gateway address, whereas
with the wildcarded design it could be an address or an incremental counter
within a range, but with the counter being valid only in request objects.
Differences in entities between requests and responses are however fairly
common in RESTful APIs, so if the wildcards satisfy a concrete and valid
use case I will stop complaining, but I'm not sure I see any use case for
wildcarded gateways and allocation pools.
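
To make the asymmetry concrete, here is a rough sketch of the two request
styles being discussed. The field names and the wildcard notation below are
assumptions for the sake of discussion, not the agreed-upon API:

# Illustrative request bodies for POST /v2.0/subnets; field names and the
# wildcard notation are assumptions used only to illustrate the discussion.

# Plain allocation from a pool: the pool chooses the CIDR, gateway and
# allocation pools, and the response reports the concrete values.
subnet_from_pool = {
    "subnet": {
        "network_id": "NET_ID",
        "subnetpool_id": "POOL_ID",
        "ip_version": 4,
        "prefixlen": 24,
    }
}

# Wildcarded variant: gateway_ip is an offset into a CIDR that is not yet
# known at request time, but a concrete address in the response. This is
# the request/response asymmetry discussed above.
subnet_with_wildcard = {
    "subnet": {
        "network_id": "NET_ID",
        "subnetpool_id": "POOL_ID",
        "ip_version": 4,
        "prefixlen": 24,
        "gateway_ip": "0.0.0.1",  # hypothetical notation for "first address"
    }
}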


  1) (this is more for neutron people) Is there a real use case for
 requesting
  specific gateway IPs and allocation pools when allocating from a pool? If
  not, maybe we should let the pool set a default gateway IP and allocation
  pools. The user can then update them with another call. Another option
 would
  be to provide subnet templates from which a user can choose. For
 instance
  one template could have the gateway as first IP, and then a single pool
 for
  the rest of the CIDR.

 If you really don't like this aspect of the design then my vote will
 be to drop support for this use case for Kilo.  Neutron will specify
 gateway and allocation pools from the subnet and maybe the user can
 update the subnet afterward if it needs to change.


I reckon Ryan is of the same opinion as well. The point is not about what I
like or not. Nobody cares about that.
The point is whether this really makes sense or not. If you already have
use cases for using such wildcards then we'll look at supporting them.


  2) Is the action of creating a subnet from a pool better realized as a
  different way of creating a subnet, or should there be some sort of pool
  action? Eg.:

 I think this shift in direction will push this work entirely out to
 Liberty.  We have one week until Kilo-3.


One week is barely enough for code review alone.
But code-wise implementing support for a slightly different API is fairly
simple.
There might also be backward-compatible ways of switching from one
approach to another, in which case I'm happy to keep things as they are and
relieve Ryan from yet another worry.



 Carl

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Jay Pipes

On 03/10/2015 12:29 PM, Russell Bryant wrote:

The TC is in the middle of implementing a fairly significant change in
project governance.  You can find an overview from Thierry on the
OpenStack blog [1].

Part of the change is to recognize more projects as being part of the
OpenStack community.  Another critical part was replacing the integrated
release with a set of tags.  A project would be given a tag if it meets
some defined set of criteria.


The two things are not mutually exclusive.

Also, the tags are intended to be informative, not granted by the TC. 
As Thierry mentioned elsewhere, the job of defining these tags and 
applying them to projects is a never-ending thing, not something that 
needs to be completed before allowing new projects into the openstack/ 
code namespace.



I feel that we're at a very vulnerable part of this transition.  We've
abolished the incubation process and integrated release.  We've
established a fairly low bar for new projects [2].  However, we have not
yet approved *any* tags other than the one that reflects which projects
are included in the final integrated release (Kilo) [3].  Despite the
previously discussed challenges with the integrated release,
it did at least mean that a project has met a very useful set of
criteria [4].


a) I believe the integrated release moniker held much value previously

b) The existing OpenStack projects that were in the integrated release 
are *already* tagged with the integrated-release tag, and no new 
projects will be tagged with that.


c) There is no connection at all between the bar for projects to get 
into the openstack/ code namespace and the tags. The tags are 
informative and can be applied at any time to a project. They are not a 
blessing by the TC.



We now have several new project proposals.  However, I propose not
approving any new projects until we have a tagging system that is at
least far enough along to represent the set of criteria that we used to
apply to all OpenStack projects (with exception for ones we want to
consciously drop).


Again, tags aren't criteria that we use to determine whether a project 
is worthy of being in the openstack/ code namespace. The entire point of 
the Big Tent model was to move away from the TC blessing projects and 
instead use tags to decorate a project with some useful information. In 
other words, the point of Big Tent was to decouple these tags from the 
application process.



Otherwise, I think it's a significant setback to our
project governance as we have yet to provide any useful way to navigate
the growing set of projects.

The resulting set of tags doesn't have to be focused on replicating our
previous set of criteria.  The focus must be on what information is
needed by various groups of consumers and tags are a mechanism to
implement that.  In any case, we're far from that point because today we
have nothing.

I can't think of any good reason to rush into approving projects in the
short term.  If we're not able to work out this rich tagging system in a
reasonable amount of time, then maybe the whole approach is broken and
we need to rethink the whole approach.


In contrast, I see no reason to prevent new projects from applying. 
There's nothing about the new application requirements that mentions 
tags or the need to tag a project at application time.


Best,
-jay


Thanks,

[1] http://www.openstack.org/blog/2015/02/tc-update-project-reform-progress/
[2] http://governance.openstack.org/reference/new-projects-requirements.html
[3] http://governance.openstack.org/reference/tags/index.html
[4]
http://governance.openstack.org/reference/incubation-integration-requirements.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Jeremy Stanley
On 2015-03-10 14:42:18 -0400 (-0400), Russell Bryant wrote:
[...]
 As to specific tags, I refer back to this:
 
 http://governance.openstack.org/reference/incubation-integration-requirements.html
 
 We worked pretty hard to come up with useful things for projects
 to aim for. In fact, we considered it a minimum. Let's make sure
 we capture the things we still value, which I believe is most of
 it.
[...]

Coming from a horizontal resource and facilitation perspective, we
previously had guidelines like these to help prioritize where effort
is focused. I was hoping that most of the incubation requirements
would become tags in some form so that support decisions could still
be made based on them. Otherwise I worry we're stuck relying on tags
which merely declare the set of projects each horizontal team has
chosen as a priority (in situations where there are ongoing demands
on team members' available time to help those specific projects).

Yes, horizontal teams should provide the means for OpenStack
projects to support themselves where possible, but some activities
do not scale linearly and do necessitate hard support decisions.
Guidance from the community as to where it's most effective to spend
those limited resources is appreciated, and also increases the
chances that in those situations the prioritized subset overlaps
substantially between various limited resources (which provides a
more consistent experience and helps set expectations).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Thierry Carrez
Russell Bryant wrote:
 One point of clarification:
 
 On 03/10/2015 02:28 PM, Gabriel Hurley wrote:
 Even more concerning is the sentiment of projects we want to
 consciously drop from Russell's original email.
 
 This was in reference to criteria defined in:
 
 http://governance.openstack.org/reference/incubation-integration-requirements.html
 
 For example, we previously had a strict requirement *against*
 duplication of functionality among OpenStack projects unless it was with
 intent and with a clear plan to replace the old thing.  In this new
 model, that would be a requirement we would consciously drop.

It's a requirement we *already* consciously dropped when we approved the
new projects requirements. Or do you mean you want to come back on that
decision[1]?

We can refine those rules as we go and consider applications and realize
the rules are incomplete... But denying their existence and asking to
freeze until they are defined sounds a bit weird. The rules are there.
Slow consideration of additions is the way to make iterative progress,
not freezing.

[1]
http://git.openstack.org/cgit/openstack/governance/commit/?id=fcc4046f7d866d0516f2810571aad0c0ce2cc361

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Gabriel Hurley
Blocking the acceptance of new projects seems punitive and against the spirit 
of the big tent. Classification (tagging) can be done at any point, and is 
hardly fixed in stone. You can refine tags as needed.

To put it harshly: it is a failure of both leadership and process to have 
stripped out the old process and set a low bar only to insist that no one may 
be accepted under the new criteria because you haven't defined the rest of the 
process yet.

Even more concerning is the sentiment of projects we want to consciously drop 
from Russell's original email. I realize that was meant to apply to whatever 
becomes the integrated release tag, yet still... the point of the big tent is 
not to exclude; the big tent is meant to *include and classify* so that the 
community, operators, distros, and vendors could make the best choices for 
themselves.

So I agree that these projects are a great litmus test for what kind of tags 
you need, but at this point I don't think you have a leg to stand on for not 
accepting projects that meet the current criteria. The bar for acceptance is in 
the governance documents.

A freeze seems unjustifiable and dragging your feet seems unnecessary, at least 
unless you all plan on changing the governance yet again.

- Gabriel

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: Tuesday, March 10, 2015 11:00 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Avoiding regression in project governance

Russell Bryant wrote:
 [...]
 We now have several new project proposals.  However, I propose not 
 approving any new projects until we have a tagging system that is at 
 least far enough along to represent the set of criteria that we used 
 to apply to all OpenStack projects (with exception for ones we want to 
 consciously drop).  Otherwise, I think it's a significant setback to 
 our project governance as we have yet to provide any useful way to 
 navigate the growing set of projects.
 
 The resulting set of tags doesn't have to be focused on replicating 
 our previous set of criteria.  The focus must be on what information 
 is needed by various groups of consumers and tags are a mechanism to 
 implement that.  In any case, we're far from that point because today 
 we have nothing.

I agree that we need tags to represent the various facets of what was in the 
integrated release concept.

I'm not sure we should block accepting new project teams until all tags are 
defined, though. That sounds like a way to stall forever. So could you be more 
specific ? Is there a clear set of tags you'd like to see defined before we add 
new project teams ?

 I can't think of any good reason to rush into approving projects in 
 the short term.  If we're not able to work out this rich tagging 
 system in a reasonable amount of time, then maybe the whole approach 
 is broken and we need to rethink the whole approach.

The current plan for the Vancouver Design Summit is to only give space to 
OpenStack projects (while non-OpenStack projects may get space in ecosystem 
sessions outside of the Design Summit). So it's only fair for those projects to 
file for recognition before that happens.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Russell Bryant
One point of clarification:

On 03/10/2015 02:28 PM, Gabriel Hurley wrote:
 Even more concerning is the sentiment of projects we want to
 consciously drop from Russell's original email.

This was in reference to criteria defined in:

http://governance.openstack.org/reference/incubation-integration-requirements.html

For example, we previously had a strict requirement *against*
duplication of functionality among OpenStack projects unless it was with
intent and with a clear plan to replace the old thing.  In this new
model, that would be a requirement we would consciously drop.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Controlling data sent to client

2015-03-10 Thread Rick Jones

On 03/10/2015 11:45 AM, Omkar Joshi wrote:

Hi,

I am using open stack swift server. Now say multiple clients are
requesting 5GB object from server. The rate at which server can push
data into server socket is much more than the rate at which client can
read it from proxy server. Is there configuration / setting which we use
to control / cap the pending data on server side socket? Because
otherwise this will cause server to go out of memory.


The Linux networking stack will have a limit to the size of the 
SO_SNDBUF, which will limit how much the proxy server code will be able 
to shove into a given socket at one time.  The Linux networking stack 
may autotune that setting if the proxy server code itself isn't making 
an explicit setsockopt(SO_SNDBUF) call.  Such autotuning will be 
controlled via the sysctl net.ipv4.tcp_wmem


If the proxy server code does make an explicit setsockopt(SO_SNDBUF) 
call, that will be limited to no more than what is set in net.core.wmem_max.
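
As a minimal illustration of the explicit path (a bare Python socket here, not
whatever wrapper the proxy server code actually uses):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request a 256 KiB send buffer; on Linux the kernel clamps the request to
# net.core.wmem_max and typically doubles it to account for bookkeeping.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 256 * 1024)

# Reading the option back shows what the kernel actually granted.
print("effective SO_SNDBUF: %d" % sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
sock.close()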


But I am guessing you are asking about something different because 
virtually every TCP/IP stack going back to the beginning has had bounded 
socket buffers.  Are you asking about something else?  Are you asking 
about the rate at which data might come from the object server(s) to the 
proxy and need to be held on the proxy while it is sent-on to the clients?


rick

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Controlling data sent to client

2015-03-10 Thread Omkar Joshi
Thanks Rick for a quick reply.

Are you asking about the rate at which data might come from the object
server(s) to the proxy and need to be held on the proxy while it is sent-on
to the clients? Yes... the object server will push faster, and therefore the
accumulation of data in the proxy server will grow if the client is not able
to catch up. Shouldn't there be back pressure, from the client to the proxy
server and then from the proxy server to the object server?

Something like: don't cache more than 10M at a time per client?

On Tue, Mar 10, 2015 at 11:59 AM, Rick Jones rick.jon...@hp.com wrote:

 On 03/10/2015 11:45 AM, Omkar Joshi wrote:

 Hi,

 I am using open stack swift server. Now say multiple clients are
 requesting 5GB object from server. The rate at which server can push
 data into server socket is much more than the rate at which client can
 read it from proxy server. Is there configuration / setting which we use
 to control / cap the pending data on server side socket? Because
 otherwise this will cause server to go out of memory.


 The Linux networking stack will have a limit to the size of the SO_SNDBUF,
 which will limit how much the proxy server code will be able to shove into
 a given socket at one time.  The Linux networking stack may autotune that
 setting if the proxy server code itself isn't making an explicit
 setsockopt(SO_SNDBUF) call.  Such autotuning will be controlled via the
 sysctl net.ipv4.tcp_wmem

 If the proxy server code does make an explicit setsockopt(SO_SNDBUF) call,
 that will be limited to no more than what is set in net.core.wmem_max.

 But I am guessing you are asking about something different because
 virtually every TCP/IP stack going back to the beginning has had bounded
 socket buffers.  Are you asking about something else?  Are you asking about
 the rate at which data might come from the object server(s) to the proxy
 and need to be held on the proxy while it is sent-on to the clients?

 rick

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,
Omkar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stacktach] [oslo] stachtach - kombu - pika ??

2015-03-10 Thread Sandy Walsh
No, we're adding this to Yagi first and perhaps Notabene later. We don't need 
rpc support, so it's too big a change for us to take on.




From: gordon chung g...@live.ca
Sent: Tuesday, March 10, 2015 3:58 PM
To: OpenStack Development Mailing List not for usage questions
Subject: Re: [openstack-dev] [stacktach] [oslo] stachtach - kombu - pika ??

 We're going to be adding support for consuming from and writing to Kafka as 
 well and will likely use a kafka-specific library for that too.

is the plan to add this support to oslo.messaging? i believe there is interest 
from the oslo.messaging team in supporting Kafka and in addition to adding a 
Kafka publisher in ceilometer, there is supposed to be a bp to add support to 
oslo.messaging.

cheers,
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Russell Bryant
On 03/10/2015 02:00 PM, Thierry Carrez wrote:
 Russell Bryant wrote:
 [...]
 We now have several new project proposals.  However, I propose not
 approving any new projects until we have a tagging system that is at
 least far enough along to represent the set of criteria that we used to
 apply to all OpenStack projects (with exception for ones we want to
 consciously drop).  Otherwise, I think it's a significant setback to our
 project governance as we have yet to provide any useful way to navigate
 the growing set of projects.

 The resulting set of tags doesn't have to be focused on replicating our
 previous set of criteria.  The focus must be on what information is
 needed by various groups of consumers and tags are a mechanism to
 implement that.  In any case, we're far from that point because today we
 have nothing.
 
 I agree that we need tags to represent the various facets of what was in
 the integrated release concept.
 
 I'm not sure we should block accepting new project teams until all tags
 are defined, though. That sounds like a way to stall forever. So could
 you be more specific ? Is there a clear set of tags you'd like to see
 defined before we add new project teams ?

I'd like to have enough tags that I don't feel like we're communicating
drastically less about the state of OpenStack projects.  Approving now
means we'll have a big pool of projects with absolutely no attempt to
communicate anything useful about the difference between Nova, Swift,
and the newest experiment.  I'd rather feel like we've replaced one
thing with something that's an improvement.  Today feels like we've
replaced something with close to nothing, which seems like the worst
time to open the gates for new projects.

As to specific tags, I refer back to this:

http://governance.openstack.org/reference/incubation-integration-requirements.html

We worked pretty hard to come up with useful things for projects to aim
for.  In fact, we considered it a minimum.  Let's make sure we capture
the things we still value, which I believe is most of it.

 I can't think of any good reason to rush into approving projects in the
 short term.  If we're not able to work out this rich tagging system in a
 reasonable amount of time, then maybe the whole approach is broken and
 we need to rethink the whole approach.
 
 The current plan for the Vancouver Design Summit is to only give space
 to OpenStack projects (while non-OpenStack projects may get space in
 ecosystem sessions outside of the Design Summit). So it's only fair
 for those projects to file for recognition before that happens.

I hear you.  That's a real problem that must be dealt with.  I don't
think it's justification for governance change, though.  If nothing
else, the TC could just make a list of the projects we're willing to
give space to given our collective view of their momentum in the
community.  We're elected to make decisions with a broad view like that
after all.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-10 Thread Carl Baldwin
On Tue, Mar 10, 2015 at 12:24 PM, Salvatore Orlando sorla...@nicira.com wrote:
 I guess that frustration has now become part of the norm for Openstack.
 It is not the first time I frustrate people because I ask to reconsider
 decisions approved in specifications.

I'm okay revisiting decisions.  It is just the timing that is difficult.

 This is probably bad behaviour on my side. Anyway, I'm not suggesting to go
 back to the drawing board, merely trying to get larger feedback, especially
 since that patch should always have had the ApiImpact flag.

It did have the ApiImpact flag since PS1 [1].

 Needless to say, I'm happy to proceed with things as they've been agreed.

I'm happy to discuss and I value your input very highly.  I was just
hoping that it had come at a better time to react.

 There is nothing intrinsically wrong with it - in the sense that it does not
 impact the functional behaviour of the system.
 My comment is about RESTful API guidelines. What we pass to/from the API
 endpoint is a resource, in this case the subnet being created.
 You expect gateway_ip to be always one thing - a gateway address, whereas
 with the wildcarded design it could be an address or an incremental counter
 within a range, but with the counter being valid only in request objects.
 Differences in entities between requests and response are however fairly
 common in RESTful APIs, so if the wildcards sastisfy a concrete and valid
 use case I will stop complaining, but I'm not sure I see any use case for
 wildcarded gateways and allocation pools.

Let's drop the use case and the wildcards as we've discussed.

 Also, there might also be backward-compatible ways of switching from one
 approach to another, in which case I'm happy to keep things as they are and
 relieve Ryan from yet another worry.

I think dropping the use case for now allows us the most freedom and
doesn't commit us to supporting backward compatibility for a decision
that may end up proving to be a mistake in API design.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Controlling data sent to client

2015-03-10 Thread Omkar Joshi
Hi,

I am using an OpenStack Swift server. Now say multiple clients are requesting
a 5GB object from the server. The rate at which the server can push data into
the server socket is much higher than the rate at which the client can read it
from the proxy server. Is there a configuration setting which we can use to
control / cap the pending data on the server-side socket? Otherwise this will
cause the server to go out of memory.

-- 
Thanks,
Omkar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Zane Bitter

On 10/03/15 12:29, Russell Bryant wrote:


I feel that we're at a very vulnerable part of this transition.  We've
abolished the incubation process and integrated release.  We've
established a fairly low bar for new projects [2].  However, we have not
yet approved*any*  tags other than the one that reflects which projects
are included in the final integrated release (Kilo) [3].  Despite the
previously discussed challenges with the integrated release,
it did at least mean that a project has met a very useful set of
criteria [4].

We now have several new project proposals.  However, I propose not
approving any new projects until we have a tagging system that is at
least far enough along to represent the set of criteria that we used to
apply to all OpenStack projects (with exception for ones we want to
consciously drop).  Otherwise, I think it's a significant setback to our
project governance as we have yet to provide any useful way to navigate
the growing set of projects.


I appreciate the concerns here, but I'm also uncomfortable with having 
an open-ended hold on making projects an official part of OpenStack. 
There are a lot of projects on StackForge that are by any reasonable 
definition a part of this community; it seems wrong to put them on 
indefinite hold when the Big Tent model has already been agreed upon.


Here is a possible compromise: invite applications now and set a fixed 
date on which the new system will become operational. That way it's the 
TC's responsibility to get the house in order by the deadline, rather 
than making it everyone else's problem. If we see a wildly inappropriate 
application then that's valuable data about where the requirements are 
unclear. To avoid mass confusion in the absence of a mature set of tags, 
I think it's probably appropriate that the changes kick in after the 
Kilo release, but let's make it as soon as possible after that.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Joe Gordon
On Tue, Mar 10, 2015 at 9:29 AM, Russell Bryant rbry...@redhat.com wrote:

 The TC is in the middle of implementing a fairly significant change in
 project governance.  You can find an overview from Thierry on the
 OpenStack blog [1].

 Part of the change is to recognize more projects as being part of the
 OpenStack community.  Another critical part was replacing the integrated
 release with a set of tags.  A project would be given a tag if it meets
 some defined set of criteria.

 I feel that we're at a very vulnerable part of this transition.  We've
 abolished the incubation process and integrated release.  We've
 established a fairly low bar for new projects [2].  However, we have not
 yet approved *any* tags other than the one that reflects which projects
 are included in the final integrated release (Kilo) [3].  Despite the
 previously discussed challenges with the integrated release,
 it did at least mean that a project has met a very useful set of
 criteria [4].

 We now have several new project proposals.  However, I propose not
 approving any new projects until we have a tagging system that is at
 least far enough along to represent the set of criteria that we used to
 apply to all OpenStack projects (with exception for ones we want to
 consciously drop).  Otherwise, I think it's a significant setback to our
 project governance as we have yet to provide any useful way to navigate
 the growing set of projects.



I don't follow this argument.

My understanding is that no tags will be required to join 'OpenStack'; they are
just optional things for projects to try to achieve once they are in. So
holding off accepting new projects for something that is not required
during the adding new projects process seems odd.

Perhaps a better way to say the same thing is: While working with the
tagging system to come up with a good set of tags to represent our previous
graduation requirements, we may want to adjust the new project
requirements[0].

[0] governance.openstack.org/reference/new-projects-requirements.html


 The resulting set of tags doesn't have to be focused on replicating our
 previous set of criteria.  The focus must be on what information is
 needed by various groups of consumers and tags are a mechanism to
 implement that.  In any case, we're far from that point because today we
 have nothing.

 I can't think of any good reason to rush into approving projects in the
 short term.  If we're not able to work out this rich tagging system in a
 reasonable amount of time, then maybe the whole approach is broken and
 we need to rethink the whole approach.


I fear this is a real possibility, and sounds like a reason to proceed
carefully with adding new projects.



 Thanks,

 [1]
 http://www.openstack.org/blog/2015/02/tc-update-project-reform-progress/
 [2]
 http://governance.openstack.org/reference/new-projects-requirements.html
 [3] http://governance.openstack.org/reference/tags/index.html
 [4]

 http://governance.openstack.org/reference/incubation-integration-requirements.html

 --
 Russell Bryant

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stacktach] [oslo] stachtach - kombu - pika ??

2015-03-10 Thread Sandy Walsh
Hey J,

Our (old) notification consumer was using carrot, which is dead but worked. 
Lately though there have been conflicts with carrot and msgpack, so we had to 
change. Around the same time, we ran into a bug where we were writing to an 
unnamed exchange (completely valid, but too easy to do under kombu). During 
that debug process we ended up chatting with the rabbitmq core devs and they 
strongly recommended we use Pika instead of Kombu. We're going to follow that 
advice. 
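
For anyone curious what the switch looks like in practice, here is a minimal
pika publishing sketch; the queue name and the localhost broker are made up
for illustration and have nothing to do with StackTach/Yagi configuration:

import pika

# Connect to a local RabbitMQ broker (an assumption for this sketch).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="stacktach-demo")

# With pika the exchange is an explicit argument: exchange="" really is the
# default (unnamed) exchange, so routing goes straight to the queue named in
# routing_key rather than happening by accident.
channel.basic_publish(exchange="",
                      routing_key="stacktach-demo",
                      body="hello from pika")
connection.close()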

We're going to be adding support for consuming from and writing to Kafka as 
well and will likely use a kafka-specific library for that too.  

You can ignore my hastily-written comment about oslo-messaging considering it. 
It's probably not important for your use-cases. 

Sorry for any confusion this may have caused.

-S




From: Joshua Harlow harlo...@outlook.com
Sent: Tuesday, March 10, 2015 1:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [stacktach] [oslo] stachtach - kombu - pika ??

Hi all,

I saw the following on
https://etherpad.openstack.org/p/PHL-ops-rabbit-queue and was wondering
if there was more explanation of why?

The StackTach team is switching from Kombu to Pika (at the
recommendation of core rabbitmq devs). Hopefully oslo-messaging will do
the same.

I'm wondering why/what?

Pika seems to be less supported, has less support for things other than
rabbitmq, and seems less developed (it lacks python 3.3 support apparently).

What's the details on this idea listed there?

Any stachtack folks got any more details?

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat][Mistral] Use and adoption of YAQL

2015-03-10 Thread Stan Lagun
I would suggest doing the migration but not merging it until the official yaql
1.0 release, which is going to happen soon.
As for the docs, it is still very hard to write them since yaql 1.0 has got
tons of new features and hundreds of functions. Any help is appreciated.
But 99% of yaql 1.0 features and functions are covered by unit tests, and
there are ~250 of them, so these are the best source of information at the
moment.
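
For readers who have never seen YAQL, here is a tiny sketch of the kind of
expression involved; it assumes the pre-1.0 parse()/evaluate() entry points
that Mistral has been using, which may look different in the 1.0 API:

import yaql

data = {
    "servers": [
        {"name": "web-1", "status": "ACTIVE"},
        {"name": "db-1", "status": "ERROR"},
    ]
}

# Select the names of servers in ERROR state. The parse()/evaluate() calls
# are an assumption based on the pre-1.0 library; the expression itself is
# plain YAQL.
expression = yaql.parse("$.servers.where($.status = 'ERROR').select($.name)")
print(list(expression.evaluate(data)))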

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

sla...@mirantis.com

On Sat, Feb 7, 2015 at 12:46 AM, Dmitri Zimine dzim...@stackstorm.com
wrote:

 Stan, Alex, Renat:

  Should we migrate to YAQL 1.0 now and stop using the initial one? What’s
  the delta?

 Still short on the docs :) but I understand they’re coming up.
 https://github.com/stackforge/yaql/tree/master/doc/source

 Cheers, Dmitri.

 On Jan 16, 2015, at 6:46 AM, Stan Lagun sla...@mirantis.com wrote:

 Dmitri,

  we are working hard towards a stable YAQL 1.0, which is expected to be
  released during the Kilo cycle. It is going to have proper documentation and
  high unit test coverage, which can also serve as a documentation source.
  YAQL has already migrated to StackForge and adopted the OpenStack development
  process and tools, but the work is still in progress. Any help from the
  Mistral team and/or other YAQL adopters is appreciated.



 Sincerely yours,
 Stan Lagun
 Principal Software Engineer @ Mirantis

 sla...@mirantis.com

 On Thu, Jan 15, 2015 at 10:54 PM, Dmitri Zimine dzim...@stackstorm.com
 wrote:

 Folks,

  We use YAQL in Mistral for referencing variables, expressing conditions,
  etc. Murano is using it extensively, and I saw Heat folks thought of using
  it, at least once :) Maybe others...

  We are learning that YAQL is incredibly powerful compared to alternatives
  like Jinja2 templates used in salt / ansible.

  But with the lack of documentation, it becomes one of the adoption blockers
  for Mistral (we got very vocal user feedback on this).

 This is pretty much all the docs I can offer our users on YAQL so far.
 Not much.
 http://yaql.readthedocs.org/en/latest/
 https://github.com/ativelkov/yaql/blob/master/README.rst
 https://murano.readthedocs.org/en/latest/murano_pl/murano_pl.html#yaql

 Are there any plans to fix it?

  Is there interest from other projects in using YAQL?

 Cheers,
 DZ.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Russell Bryant
On 03/10/2015 02:43 PM, Thierry Carrez wrote:
 Joe Gordon wrote:
 On Tue, Mar 10, 2015 at 9:29 AM, Russell Bryant rbry...@redhat.com
 mailto:rbry...@redhat.com wrote:
 We now have several new project proposals.  However, I propose not
 approving any new projects until we have a tagging system that is at
 least far enough along to represent the set of criteria that we used to
 apply to all OpenStack projects (with exception for ones we want to
 consciously drop).  Otherwise, I think it's a significant setback to our
 project governance as we have yet to provide any useful way to navigate
 the growing set of projects.

 I don't follow this argument.

 My understanding is no tags will be required to join 'OpenStack,' they
 are just optional things for projects to try to achieve once they are
 in. So holding off accepting new projects for something that is not
 required during the adding new projects process seems odd.

 Perhaps a better way to say the same thing is: While working with the
 tagging system to come up with a good set of tags to represent our
 previous graduation requirements, we may want to adjust the new project
 requirements[0].

 [0] governance.openstack.org/reference/new-projects-requirements.html
 http://governance.openstack.org/reference/new-projects-requirements.html  
 
 I totally agree with you. Project team additions are judged on the new
 projects requirements. Tags are just about describing them once they
 are in. I suspect most people advocating for a freeze are actually
 wanting extra rules to be added to the new project requirements (think:
 diversity). We said at the TC that we would refine those requirements as
 we go and learn... So slowly processing applications sounds like a
 better way to make fast iterative progress than freezing altogether.

I completely understand and agree with the difference between project
additions and the categorization.

 I /think/ Russell's point is that we'd end up adding not-yet-categorized
 stuff in the tent and that might create temporary confusion -- I tend to
 think that freezing project addition is actually more detrimental.

Yes, that was my point.  It's an ordering issue.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Russell Bryant
On 03/10/2015 02:56 PM, Thierry Carrez wrote:
 Russell Bryant wrote:
 One point of clarification:

 On 03/10/2015 02:28 PM, Gabriel Hurley wrote:
 Even more concerning is the sentiment of projects we want to
 consciously drop from Russell's original email.

 This was in reference to criteria defined in:

 http://governance.openstack.org/reference/incubation-integration-requirements.html

 For example, we previously had a strict requirement *against*
 duplication of functionality among OpenStack projects unless it was with
 intent and with a clear plan to replace the old thing.  In this new
 model, that would be a requirement we would consciously drop.
 
 It's a requirement we *already* consciously dropped when we approved the
 new projects requirements. Or do you mean you want to come back on that
 decision[1]?

No, I don't want to come back on it.  It was obviously a poorly worded
comment.  It was an attempt to say that I'd like it if we were closer to
having tags that covered most of those requirements, except for the
things we no longer care about, such as the example given.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Kyle Mestery
On Tue, Mar 10, 2015 at 1:32 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 03/10/2015 02:28 PM, Gabriel Hurley wrote:

 Blocking the acceptance of new projects seems punitive and against
 the spirit of the big tent. Classification (tagging) can be done at
 any point, and is hardly fixed in stone. You can refine tags as
 needed.

 To put it harshly: it is a failure of both leadership and process to
 have stripped out the old process and set a low bar only to insist
 that no one may be accepted under the new criteria because you
 haven't defined the rest of the process yet.

 Even more concerning is the sentiment of projects we want to
 consciously drop from Russell's original email. I realize that was
 meant to apply to whatever becomes the integrated release tag, yet
 still... the point of the big tent is not to exclude; the big tent is
 meant to *include and classify* so that the community, operators,
 distros, and vendors could make the best choices for themselves.

 So I agree that these projects are a great litmus test for what kind
 of tags you need, but at this point I don't think you have a leg to
 stand on for not accepting projects that meet the current criteria.
 The bar for acceptance is in the governance documents.

 A freeze seems unjustifiable and dragging your feet seems
 unnecessary, at least unless you all plan on changing the governance
 yet again.


 Amen. +1.

 To be honest, given how OpenStack is always about change, I'm confused
that people are not willing to stop, evaluate where we are, and make sure
it's moving in the intended direction. Seems like taking stock of where we
are as we change the governance model would be a wise thing to do.

As an example, I'd like to compare the governance model OpenStack used
to have with the one OpenDaylight currently has. OpenStack is moving
towards the ODL model with the big tent proposal. The existing ODL
governance model has been to accept anything that is proposed (note: that's
the tl;dr version, read here [1] for more details). Any project proposed in
ODL is accepted and allowed in. Great, right? Except it's not always great,
because there is no check for overlapping functionality, they allow in
vendor-only projects, and they now have 48 accepted projects. Even worse,
at least 5 of those implement network virtualization. As a user of ODL,
trying to figure out which one to use for network virtualization is
challenging. Someone described ODL as a bag of parts you
assemble on your own, and to some extent that's true. Maybe this is a
distribution's job, in which case the bag of parts reference for upstream
may be ok. It is what it is, after all.

Even worse, when you want to do something like integrate ODL with
OpenStack, which network virtualization project do you use? It depends on
who you work for or which project you're involved in. But the answer is
never a consensus one, because with overlapping functionality, integrating
ODL and OpenStack now means different things to different people.

At the end of the day, it's my opinion that consensus is the part of the Big
Tent that worries me. To me, consensus is a big part of what makes
OpenStack awesome. However tags and the big tent evolve, if we lose that,
we've lost part of OpenStack that got us to where we are.

Kyle

[1] https://wiki.opendaylight.org/view/Project_Proposals:Main

-jay


  - Gabriel

 -Original Message- From: Thierry Carrez
 [mailto:thie...@openstack.org] Sent: Tuesday, March 10, 2015 11:00
 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev]
 Avoiding regression in project governance

 Russell Bryant wrote:

 [...] We now have several new project proposals.  However, I
 propose not approving any new projects until we have a tagging
 system that is at least far enough along to represent the set of
 criteria that we used to apply to all OpenStack projects (with
 exception for ones we want to consciously drop).  Otherwise, I
 think it's a significant setback to our project governance as we
 have yet to provide any useful way to navigate the growing set of
 projects.

 The resulting set of tags doesn't have to be focused on
 replicating our previous set of criteria.  The focus must be on
 what information is needed by various groups of consumers and tags
 are a mechanism to implement that.  In any case, we're far from
 that point because today we have nothing.


 I agree that we need tags to represent the various facets of what was
 in the integrated release concept.

 I'm not sure we should block accepting new project teams until all
 tags are defined, though. That sounds like a way to stall forever. So
 could you be more specific ? Is there a clear set of tags you'd like
 to see defined before we add new project teams ?

  I can't think of any good reason to rush into approving projects
 in the short term.  If we're not able to work out this rich
 tagging system in a reasonable amount of time, then maybe the whole
 approach is broken and we need 

[openstack-dev] [Cinder] Cinder-GlusterFS CI update

2015-03-10 Thread Deepak Shetty
Hi All,
 Quick update.

 We added a GlusterFS CI job (gate-tempest-dsvm-full-glusterfs) to the *check
pipeline (non-voting)* after the patch at [1] was merged.

It's been running successfully (so far so good) on Cinder patches; a few
examples are in [2].

I also updated the 3rd party CI status page [3] with the current status.

[1]: https://review.openstack.org/162556
 [2]: https://review.openstack.org/#/c/162532/ ,
https://review.openstack.org/#/c/157956/ ,
https://review.openstack.org/#/c/160682/
 [3]: https://wiki.openstack.org/wiki/Cinder/third-party-ci-status

 thanx,
 deepak & bharat
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Volume Replication and Migration bug triage ...

2015-03-10 Thread Sheng Bo Hou
Hi all,

https://bugs.launchpad.net/cinder/+bug/1255622
For this bug, I have added some comments to explain why an orphaned 
volume ends up at the end of the migration.
@John Griffith, I hope this resolves your confusion a bit.

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCN  E-mail: sb...@cn.ibm.com 
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193



Jay S Bryant/Rochester/IBM@IBMUS 
02/07/2015 07:50 AM

To
Weekly Cinder Meeting
cc
Theresa Backlund/Rochester/IBM@IBMUS, Jacob Morlock/Rochester/IBM@IBMUS, 
Peter Wassel/Endicott/IBM@IBMUS
Subject
Fw: [cinder] Volume Replication and Migration bug triage ...







 All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey

- Forwarded by Jay S Bryant/Rochester/IBM on 02/06/2015 05:47 PM -

From:   Jay S. Bryant jsbry...@electronicjungle.net
To: openStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, Jay S Bryant/Rochester/IBM@IBMUS
Date:   02/06/2015 05:46 PM
Subject:[cinder] Volume Replication and Migration bug triage ...
Sent by:Jay Bryant jungleb...@gmail.com



All,

In discussion with Mike Perez earlier this week the following bugs were 
highlighted in Volume Migration and Volume Replication.  IBM is focusing 
on investigating and resolving these bugs.

I will be putting out updates as we progress towards resolution of these 
issues.

Replication:

https://bugs.launchpad.net/cinder/+bug/1390001 -- Investigated by Tao and 
found to be Invalid
https://bugs.launchpad.net/cinder/+bug/1384040 -- assigned to Tao
https://bugs.launchpad.net/cinder/+bug/1372292 -- assigned to Tao
https://bugs.launchpad.net/cinder/+bug/1370311 -- Requires multi-pool 
scheduler awareness and replica promote supporting multiple pools.  BP 
opened:  
https://blueprints.launchpad.net/cinder/+spec/storwize-support-muli-pool-within-one-backend-relative-features

https://bugs.launchpad.net/cinder/+bug/1383524 -- Currently assigned to 
Ronen with updates from Avishay.  Have a question in to Avishay to see if 
he can keep investigating.

Migration:

https://bugs.launchpad.net/cinder/+bug/1404013 -- Fix released for this.
https://bugs.launchpad.net/cinder/+bug/1403916 -- Question out to the 
reporter to see if this is still an issue. (LVM)
https://bugs.launchpad.net/cinder/+bug/1403912 -- Question out to the 
reporter to see if this is still an issue. (LVM)
https://bugs.launchpad.net/cinder/+bug/1403904 -- Marked Invalid
https://bugs.launchpad.net/cinder/+bug/1391179 --  Assigned to Alon Marx 
as this is an issue that was seen on XIV.
https://bugs.launchpad.net/cinder/+bug/1283313 -- Avishay was looking into 
this.  Asked if he still is doing so.
https://bugs.launchpad.net/cinder/+bug/1255957 -- Currently marked 
incomplete.  May warrant further investigation.
https://bugs.launchpad.net/cinder/+bug/1391172 -- Fix released
https://bugs.launchpad.net/cinder/+bug/1403902 -- A number of patches have 
been proposed around this one.  Will follow up to understand if it is 
still a problem.
https://bugs.launchpad.net/cinder/+bug/1255622 -- John was the last one 
looking at this.  Appears to work in some situations.
https://bugs.launchpad.net/cinder/+bug/1398177 -- Assigned to Vincent Hou
https://bugs.launchpad.net/cinder/+bug/1308822 -- Assigned to Tao or Li 
Min.
https://bugs.launchpad.net/cinder/+bug/1278289 -- Assigned to Jay.  Will 
investigate
https://bugs.launchpad.net/cinder/+bug/1308315 -- Tao is investigating but 
hasn't been able to recreate.

While triaging all the issues I updated the ones for migration with a 
'migration' tag and updated the replication ones with a 'replication' tag.

If you have any questions/concerns about this, please let me know.  
Otherwise we will work towards cleaning these up.

Thanks!
Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron] Best API for generating subnets from ool

2015-03-10 Thread Miguel Ángel Ajo
Thanks to everybody working on this,

Answers inline:  

On Tuesday, 10 de March de 2015 at 0:34, Tidwell, Ryan wrote:

 Thanks Salvatore.  Here are my thoughts, hopefully there’s some merit to them:
   
 With implicit allocations, the thinking is that this is where a subnet is 
 created in a backward-compatible way with no subnetpool_id and the subnets 
 API’s continue to work as they always have.
   
 In the case of a specific subnet allocation request (create-subnet passing a 
 pool ID and specific CIDR), I would look in the pool’s available prefix list 
 and carve out a subnet from one of those prefixes and ask for it to be 
 reserved for me.  In that case I know the CIDR I’ll be getting up front.  In 
 such a case, I’m not sure I’d ever specify my gateway using notation like 
 0.0.0.1, even if I was allowed to.  If I know I’ll be getting 10.10.10.0/24, 
 I can simply pass gateway_ip as 10.10.10.1 and be done with it.  I see no 
 added value in supporting that wildcard notation for a gateway on a specific 
 subnet allocation.
   
 In the case of an “any” subnet allocation request (create-subnet passing a 
 pool ID, but no specific CIDR), I’m already delegating responsibility for 
 addressing my subnet to Neutron.  As such, it seems reasonable to not have 
 strong opinions about details like gateway_ip when making the request to 
 create a subnet in this manner.
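
As a very rough illustration of what such an "any" allocation amounts to (a
sketch only, using the stdlib ipaddress module with made-up pool prefixes,
not the actual Neutron code):

    import ipaddress

    # Hypothetical pool with two prefixes and one subnet already handed out.
    pool_prefixes = [ipaddress.ip_network(u'10.10.0.0/16'),
                     ipaddress.ip_network(u'192.168.128.0/17')]
    allocated = {ipaddress.ip_network(u'10.10.0.0/24')}

    def allocate_any(prefix_len):
        # Return the first subnet of the requested size that has not been
        # handed out yet (overlap handling is ignored for brevity).
        for prefix in pool_prefixes:
            for candidate in prefix.subnets(new_prefix=prefix_len):
                if candidate not in allocated:
                    allocated.add(candidate)
                    return candidate
        raise ValueError('pool exhausted for /%d' % prefix_len)

    print(allocate_any(24))  # -> 10.10.1.0/24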
   
 To me, this all points to not supporting wildcards for gateway_ip and 
 allocation_pools on subnet create (even though it found its way into the 
 spec).  My opinion (which I think lines up with yours) is that on an any 
 request it makes sense to let the pool fill in allocation_pools and 
 gateway_ip when requesting an “any” allocation from a subnet pool.  When 
 creating a specific subnet from a pool, gateway IP and allocation pools could 
 still be passed explicitly by the user.
   
 -Ryan
   
 From: Salvatore Orlando [mailto:sorla...@nicira.com]  
 Sent: Monday, March 09, 2015 6:06 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [api][neutron] Best API for generating subnets from 
 pool  
   
 Greetings!
   
  
 Neutron is adding a new concept of subnet pool. To put it simply, it is a 
 collection of IP prefixes from which subnets can be allocated. In this way a 
 user does not have to specify a full CIDR, but simply a desired prefix 
 length, and then let the pool generate a CIDR from its prefixes. The full 
 spec is available at [1], whereas two patches are up for review at [2] (CRUD) 
 and [3] (integration between subnets and subnet pools).
  
 While [2] is quite straightforward, I must admit I am not really sure that 
 the current approach chosen for generating subnets from a pool might be the 
 best one, and I'm therefore seeking your advice on this matter.
  
   
  
 A subnet can be created with or without a pool.
  
 Without a pool the user will pass the desired cidr:
  
   
  
 POST /v2.0/subnets
 {'network_id': 'meh',
  'cidr': '192.168.0.0/24'}
  
   
  
 Instead, with a pool, the user will pass the pool id and desired prefix length:
  
 POST /v2.0/subnets
 {'network_id': 'meh',
  'prefix_len': 24,
  'pool_id': 'some_pool'}
  
   
  
 The response to the previous call would populate the subnet cidr.
  
 So far it looks quite good. Prefix_len is a bit of duplicated information, 
 but that's tolerable.
  
 It gets a bit awkward when the user also specifies attributes such as desired 
 gateway ip or allocation pools, as they have to be specified in a 
 cidr-agnostic way. For instance:
  
   
  
 POST /v2.0/subnets
 {'network_id': 'meh',
  'gateway_ip': '0.0.0.1',
  'prefix_len': 24,
  'pool_id': 'some_pool'}
  
  
   
  
 would indicate that the user wishes to use the first address in the range as 
 the gateway IP, and the API would return something like this:
  
   
  
 POST /v2.0/subnets
 {'network_id': 'meh',
  'cidr': '10.10.10.0/24',
  'gateway_ip': '10.10.10.1',
  'prefix_len': 24,
  'pool_id': 'some_pool'}
  
  
  
   
  
 The problem with this approach is, in my opinion, that attributes such as 
 gateway_ip are used with different semantics in requests and responses; this 
 might also require users to write client applications that expect the values in 
 the response to differ from those in the request.
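
To make that concrete, here is a tiny sketch (illustrative only, using the
stdlib ipaddress module) of the arithmetic a cidr-agnostic wildcard implies:
the request carries an offset, and the real gateway only exists once the pool
has picked a CIDR:

    import ipaddress

    def resolve_wildcard(cidr, wildcard):
        # e.g. cidr=u'10.10.10.0/24', wildcard=u'0.0.0.1' -> 10.10.10.1
        net = ipaddress.ip_network(cidr)
        offset = int(ipaddress.ip_address(wildcard))
        return net.network_address + offset

    print(resolve_wildcard(u'10.10.10.0/24', u'0.0.0.1'))  # 10.10.10.1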
  
   
  
 I have been considering alternatives, but could not find any that I would 
 regard as a winner.
  
 I therefore have some questions for the neutron community and the API working 
 group:
  
   
  
 1) (this is more for neutron people) Is there a real use case for requesting 
 specific gateway IPs and allocation pools when allocating from a pool? If 
 not, maybe we should let the pool set a default gateway IP and allocation 
 pools. The user can then update them with another call. Another option would 
 be to provide subnet templates from which a user can choose. For instance 
 one template could have the gateway 

[openstack-dev] [kolla] about the image size

2015-03-10 Thread Bohai (ricky)
Hi, stackers

I tried to use the Kolla images and pulled them down from Docker Hub.
I found that the images are bigger than I expected (for example, the 
image for the conductor service is about 1.4 GB).

Is it possible to get smaller images?
Do we have a plan to minimize the images? 

Best regards to you.
Ricky


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Rethinking the launch-instance wizard model

2015-03-10 Thread Tripp, Travis S
Richard,

I have been thinking for some time that each step controller should be able to 
define the data it needs as well as manipulate it.  Perhaps in the morning 
before you get up in Australia I could take a pass at converting that for 
access & security.  I’ll talk it over with Sean, since there are some cross-step 
dependencies that may complicate things a little, and to better understand his 
initialization states.

Travis

From: Richard Jones r1chardj0...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Monday, March 9, 2015 at 10:59 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Horizon] Rethinking the launch-instance wizard model

Hi folks,

Currently the launch instance model file does all the fetching of various bits 
of data. Combined with all of the controllers also being loaded at wizard 
startup, this results in some very difficult synchronisation issues*.

An issue I've run into is the initialisation of the controller based on model 
data. Specifically, loading the allocated and available lists into the 
security groups transfer table. I can load a reference to the model 
securityGroups array as the available set, but since that data is generally 
not loaded (by some other code) when the controller is setting up, I can't also 
select the first group in the array as the default group in allocated.

So, I propose that the model data for a particular pane be loaded *by that 
pane*, so that pane can then attach a callback to run once the data is loaded, 
to handle situations like this (which will be common, IIUC). Or the model needs 
to provide promises for the pane controllers to attach callbacks to.


  Richard

* one issue is the problem of the controllers running for the life of the 
wizard and not knowing when they're active (having them only be temporarily 
active would solve the issue of having to watch the transfer tables for changes 
of data - we could just read the allocated lists when the controller exits).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Avoiding regression in project governance

2015-03-10 Thread Joe Gordon
On Tue, Mar 10, 2015 at 12:30 PM, Zane Bitter zbit...@redhat.com wrote:

 On 10/03/15 12:29, Russell Bryant wrote:


 I feel that we're at a very vulnerable part of this transition.  We've
 abolished the incubation process and integrated release.  We've
 established a fairly low bar for new projects [2].  However, we have not
 yet approved*any*  tags other than the one that reflects which projects
 are included in the final integrated release (Kilo) [3].  Despite the
 previously discussed challenges with the integrated release,
 it did at least mean that a project has met a very useful set of
 criteria [4].

 We now have several new project proposals.  However, I propose not
 approving any new projects until we have a tagging system that is at
 least far enough along to represent the set of criteria that we used to
 apply to all OpenStack projects (with exception for ones we want to
 consciously drop).  Otherwise, I think it's a significant setback to our
 project governance as we have yet to provide any useful way to navigate
 the growing set of projects.


 I appreciate the concerns here, but I'm also uncomfortable with having an
 open-ended hold on making projects an official part of OpenStack. There are
 a lot of projects on StackForge that are by any reasonable definition a
  part of this community; it seems wrong to put them on indefinite hold when
 the Big Tent model has already been agreed upon.

 Here is a possible compromise: invite applications now and set a fixed
 date on which the new system will become operational. That way it's the
 TC's responsibility to get the house in order by the deadline, rather than
 making it everyone else's problem. If we see a wildly inappropriate
 application then that's valuable data about where the requirements are
 unclear. To avoid mass confusion in the absence of a mature set of tags, I
 think it's probably appropriate that the changes kick in after the Kilo
 release, but let's make it as soon as possible after that.


After watching the TC meeting, and double checking with the meeting notes
[0], it looks like the magnum vote was deferred to next week. But what
concerns me is the lack of action items assigned that will help make sure
next week's discussion isn't just a repeat of what happened today.

I get that starting to apply the big tent model to admit new projects will
take time to get right, but deferring a decision for an "it's not you, it's
me" reason without any actionable items doesn't sound like real progress to
me.


[0] http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-03-10-20.06.html



 cheers,
 Zane.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Testtools 1.7.0 may error if you installed it before reading this email

2015-03-10 Thread Dolph Mathews
Great to hear that this has been addressed, as this impacted a few tests in
keystone.

(but why was the fix not released as 1.7.1?)

On Tue, Mar 10, 2015 at 4:10 PM, Robert Collins robe...@robertcollins.net
wrote:

 There was a broken wheel built when testtools 1.7.0 was released. The
 wheel was missing the _compat2x.py file used for 2.x only syntax in
 exception handling, for an unknown reason. (We know how to trigger it
 - build the wheel with Python 3.4).

 The wheel has been removed from PyPI and anyone installing testtools
 1.7.0 now will install from source which works fine.

 Cheers,
 Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Question about is_admin_available()

2015-03-10 Thread David Kranz
In the process of writing a unit test for this I discovered that it can 
call out to keystone for a token with some configurations through the 
call to get_configured_credentials. This surprised me since I thought it 
would just check for the necessary admin credentials in either 
tempest.conf or accounts.yaml. Is this a bug?


 -David


def is_admin_available():
    is_admin = True
    # If tenant isolation is enabled admin will be available
    if CONF.auth.allow_tenant_isolation:
        return is_admin
    # Check whether test accounts file has the admin specified or not
    elif os.path.isfile(CONF.auth.test_accounts_file):
        check_accounts = accounts.Accounts(name='check_admin')
        if not check_accounts.admin_available():
            is_admin = False
    else:
        try:
            cred_provider.get_configured_credentials('identity_admin')
        except exceptions.InvalidConfiguration:
            is_admin = False
    return is_admin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] new failures running Barbican functional tests

2015-03-10 Thread Douglas Mendizabal
Thanks for the insight, other Doug. :)  It appears that this is in part due to 
the fact that Tempest has not yet been updated to oslo_log and is still using 
the incubated oslo.log.  Can someone from the Tempest team chime in on what the 
status of migrating to oslo_log is?

It’s imperative for us to fix our gate, since we’re blocked from landing any 
code, which, just over a week away from a milestone release, is a major 
hindrance.

Thanks,
-Doug Mendizábal


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C

 On Mar 9, 2015, at 8:58 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 
 On Mon, Mar 9, 2015, at 05:39 PM, Steve Heyman wrote:
 We are getting issues running barbican functional tests - seems to have
 started sometime between Thursday last week (3/5) and today (3/9)
 
 Seems that oslo config is giving us DuplicateOptErrors now.  Our functional
 tests use oslo config (via tempest) as well as barbican server code.
 Looking into it...seems that oslo_config is 1.9.1 and oslo_log is 1.0.0
 and a system I have working has oslo_config 1.9 and oslo_log at 0.4 (both
 with same barbican code).
 
 We released oslo.log today with an updated setting for the default log
 levels for third-party libraries. The new option in the library
 conflicts with the old definition of the option in the incubated code,
 so if you have some dependencies using oslo.log and some using the
 incubated library you'll see this error.
 
 Updating from the incubated version to the library is not complex, but
 it's not just a matter of changing a few imports. There are some
 migration notes in this review: https://review.openstack.org/#/c/147312/
 
 Let me know if you run into issues or need a hand with a review.
 
 Also getting Failure: ArgsAlreadyParsedError (arguments already parsed:
 cannot register CLI option) which seems to be related.
 
 This is probably an import order issue. After a ConfigOpts object has
 been called to parse the command line you cannot register new command
 line options. It's possible the problem is actually caused by the same
 module conflict, since both log modules register command line options. I
 would need a full traceback to know for sure.
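
Both failure modes are easy to reproduce with oslo.config alone; a minimal
standalone sketch (the option names below are made up for illustration and
are not the exact ones oslo.log registers):

    from oslo_config import cfg

    conf = cfg.ConfigOpts()

    # Two components registering the "same" option with different definitions
    # (e.g. incubated oslo log vs. the oslo.log library) -> DuplicateOptError.
    conf.register_opt(cfg.ListOpt('default_log_levels', default=['amqp=WARN']))
    try:
        conf.register_opt(cfg.ListOpt('default_log_levels',
                                      default=['amqp=WARN', 'qpid=WARN']))
    except cfg.DuplicateOptError as exc:
        print('duplicate option: %s' % exc)

    # Registering a CLI option after the command line has been parsed
    # -> ArgsAlreadyParsedError.
    conf([])  # parse an (empty) command line
    try:
        conf.register_cli_opt(cfg.BoolOpt('my-verbose', default=False))
    except cfg.ArgsAlreadyParsedError as exc:
        print('too late to add CLI options: %s' % exc)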
 
 Doug
 
 Is this a known issue? Is there a launchpad bug yet?
 
 Thanks!
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 Email had 1 attachment:
 + signature-with-mafia[2].png
  19k (image/png)
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-novaclient] Better wording for secgroup-*-default-rules? help text

2015-03-10 Thread melanie witt
On Mar 10, 2015, at 7:32, Chris St. Pierre chris.a.st.pie...@gmail.com wrote:

 I've just filed a bug on the confusing wording of help text for the 
 secgroup-{add,delete,list}-default-rules? commands: 
 https://bugs.launchpad.net/python-novaclient/+bug/1430354
 
 As I note in the bug, though, I'm not sure the best way to fix this. In an 
 unconstrained world, I'd like to see something like:
 
 secgroup-add-default-rule   Add a rule to the set of rules that will be 
 added to the 'default' security group in a newly-created tenant.
 
 But that's obviously excessively verbose. And the help strings are pulled 
 from the docstrings of the functions that implement the commands, so we're 
 limited to what can fit in a one-line docstring. (We could add another source 
 of help documentation -- e.g., `desc = getattr(callback, 'help', 
 callback.__doc__) or ''` on novaclient/shell.py line 531 -- but that seems 
 like it should be a last resort.)
 
 How can we clean up the wording to make it clear that the default security 
 group is, in fact, not the 'default' security group or the security group 
 which is default, but rather another beast entirely which isn't even 
 actually a security group?
 
 Naming: still the hardest problem in computer science. :(

I don't think your suggestion for the help text is excessively verbose. Some 
commands already have longer help texts than that, and I think it's 
important to accurately explain what commands do. You can use a multiline 
docstring to have a longer help text.
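
For instance, something along these lines (a rough sketch; the wording and
body are illustrative, not the actual novaclient code):

    def do_secgroup_add_default_rule(cs, args):
        """Add a rule to the default-security-group rule template.

        Rules in this set are copied into the 'default' security group of
        every newly created tenant; they do not modify any existing
        tenant's 'default' group.
        """
        pass  # argument handling and API call elided in this sketch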

Why do you say the default security group isn't actually a security group? 
The fact that it's per-tenant and therefore not necessarily consistent?

melanie (melwitt)







signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Question about is_admin_available()

2015-03-10 Thread Andrea Frittoli
Preventing the token request could be an improvement, as the token
request might not be expected to happen in that method.

If the token cannot be obtained because credentials are wrong, an
exception will be triggered.
If we removed the token request from is_admin_available, this scenario
would be detected slightly later, when admin credentials are actually
used.

I don't have a strong preference between the following two
options: leave it as it is (and document the token call), or drop the
token call.

The change required to remove the token call would be really easy:

  cred_provider.get_configured_credentials('identity_admin', fill_in=False)
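
Spelled out in full, the helper would then read roughly like this (a sketch
of that one-line change in context, everything else unchanged):

    def is_admin_available():
        is_admin = True
        # If tenant isolation is enabled admin will be available
        if CONF.auth.allow_tenant_isolation:
            return is_admin
        # Check whether test accounts file has the admin specified or not
        elif os.path.isfile(CONF.auth.test_accounts_file):
            check_accounts = accounts.Accounts(name='check_admin')
            if not check_accounts.admin_available():
                is_admin = False
        else:
            try:
                # fill_in=False validates the configured credentials without
                # requesting a token from keystone.
                cred_provider.get_configured_credentials('identity_admin',
                                                         fill_in=False)
            except exceptions.InvalidConfiguration:
                is_admin = False
        return is_admin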

andrea

On 10 March 2015 at 21:38, David Kranz dkr...@redhat.com wrote:
 In the process of writing a unit test for this I discovered that it can call
 out to keystone for a token with some configurations through the call to
 get_configured_credentials. This surprised me since I thought it would just
 check for the necessary admin credentials in either tempest.conf or
 accounts.yaml. Is this a bug?

  -David


 def is_admin_available():
     is_admin = True
     # If tenant isolation is enabled admin will be available
     if CONF.auth.allow_tenant_isolation:
         return is_admin
     # Check whether test accounts file has the admin specified or not
     elif os.path.isfile(CONF.auth.test_accounts_file):
         check_accounts = accounts.Accounts(name='check_admin')
         if not check_accounts.admin_available():
             is_admin = False
     else:
         try:
             cred_provider.get_configured_credentials('identity_admin')
         except exceptions.InvalidConfiguration:
             is_admin = False
     return is_admin


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

