[openstack-dev] [Nova] Hyper-V CI broken?

2015-03-31 Thread Michael Still
I apologise if there's already been an email about this; I can't see one.

Is the Hyper-V CI broken at the moment? It looks like there are a
number of tests failing for every change, including trivial typo
fixes. An example:

http://64.119.130.115/168500/4/results.html.gz

http://stackalytics.com/report/driverlog?project_id=openstack%2Fnova&vendor=Cloudbase
seems to think that the tests haven't passed in five days, which is
quite a long time for it to be broken.

Comments please?

Thanks,
Michael

-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core API vs extension: the subnet pool feature

2015-03-31 Thread Gary Kotton
Hi,
I am also fine with the shim extension.
Thanks
Gary

On 3/31/15, 1:44 AM, Carl Baldwin c...@ecbaldwin.net wrote:

Thanks for your support, Akihiro.  We will get this up for review very
soon.

Carl

On Mon, Mar 30, 2015 at 2:59 PM, Akihiro Motoki amot...@gmail.com wrote:
 Hi Carl,

 I am now reading the details from Salvatore, but would like to respond
 to this first.

 I don't want to kill this useful feature either, and I want to move things
 forward.
 I am fine with the empty/shim extension approach.
 The subnet pool is regarded as a part of the Core API, so I think this
 extension can always be enabled even if no plugin declares support for it.
 Sorry for interrupting the work at the last stage, and thanks for
 understanding.

 Akihiro

 2015-03-31 5:28 GMT+09:00 Carl Baldwin c...@ecbaldwin.net:
 Akihiro,

 If we go with the empty extension you proposed in the patch will that
be
 acceptable?

 We've got to stop killing new functionality on the very last day like
 this. It just kills progress.  This proposal isn't new.

 Carl

 On Mar 30, 2015 11:37 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi Neutron folks
 (API folks may be interested in this)

 We have another discussion on Core vs extension in the subnet pool
 feature review
 https://review.openstack.org/#/c/157597/.
 We had a similar discussion on VLAN transparency and MTU for a
 network model last week.
 I would like to share my concerns on changing the core API directly.
 I hope this helps us make the discussion productive.
 Note that I don't want to discuss micro-versioning because this thread
 mainly focuses on the Kilo FFE BP.

 I would like to discuss this topic in today's neutron meeting,
 but as I am not so confident I can get up in time, I am sending this
 mail.


 The extension mechanism in Neutron provides two points for
extensibility:
 - (a) visibility of features in API (users can know which features are
 available through the API)
 - (b) opt-in mechanism in plugins (plugin maintainers can decide to
 support some feature after checking the detail)

 My concerns mainly come from the first point (a).
 If we have no way to detect it, users (including Horizon) need to do a
 dirty workaround
 to determine whether some feature is available. I believe this is an
 important point of the API.
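
For illustration, a minimal sketch (client credentials and endpoint below are
made up) of how a client such as Horizon could detect the feature through the
extension list, assuming a shim extension registered under the alias
'subnet_allocation' (see the sketch further down):

from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='admin', password='pw',
                                tenant_name='demo',
                                auth_url='http://127.0.0.1:5000/v2.0')
# list_extensions() returns the extensions currently visible through the API
aliases = [ext['alias'] for ext in neutron.list_extensions()['extensions']]
subnet_pools_supported = 'subnet_allocation' in aliases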

 On the second point, my only concern (not so important) is that we are
 making the core
 API change at this moment of the release. Some plugins do not consume
 db_base_plugin and
 such plugins need to investigate the impact from now on.
 On the other hand, if we use the extension mechanism all plugins need
 to update
 their extension list at the last moment :-(


 My vote at this moment is still to use an extension, but an extension
 layer can be a shim.
 The idea is that all the implementation can stay as-is and we just add an
 extension module
 so that the new feature is visible through the extension list.
 It is not perfect but I think it is a good compromise regarding the
first
 point.
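
As a rough illustration only (the class name, alias and timestamp below are
made up, not the merged Neutron code), such a shim extension module could look
like this; it defines no new resources or attributes and only makes the
feature discoverable via the extensions API:

from neutron.api import extensions


class Subnetallocation(extensions.ExtensionDescriptor):
    """Shim extension: the implementation already lives in the core plugin."""

    @classmethod
    def get_name(cls):
        return "Subnet Allocation"

    @classmethod
    def get_alias(cls):
        return "subnet_allocation"

    @classmethod
    def get_description(cls):
        return "Enables allocation of subnets from a subnet pool"

    @classmethod
    def get_namespace(cls):
        return ""

    @classmethod
    def get_updated(cls):
        return "2015-03-30T10:00:00-00:00"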


 I know there was a suggestion to change this into the core API in the
 spec review
 and I didn't notice it at that time, but I would like to raise this
 before releasing it.

 For the longer term (and the Liberty cycle), we need to define a clearer
 guideline
 on Core vs extension vs micro-versioning in spec reviews.

 Thanks,
 Akihiro

 



 --
 Akihiro Motoki amot...@gmail.com

 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Attaching extra-spec to vol-type using Cinder py-client

2015-03-31 Thread Vipin Balachandran
cinder.volume_types.create returns an instance of VolumeType.
https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/volume_types.py#L118
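
Putting that together, a minimal sketch (auth endpoint and credentials below
are placeholders) of setting the extra spec via the returned object:

from cinderclient import client

cinder = client.Client('2', 'admin', 'pw', 'demo',
                       'http://127.0.0.1:5000/v2.0',
                       service_type='volumev2')

# create() returns a VolumeType; calling set_keys() on that instance has the
# same effect as "cinder type-key nfs set volume_backend_name=myNFSBackend"
vol_type = cinder.volume_types.create('nfs')
vol_type.set_keys({'volume_backend_name': 'myNFSBackend'})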

Thanks,
Vipin

From: Pradip Mukhopadhyay [mailto:pradip.inte...@gmail.com]
Sent: Tuesday, March 31, 2015 10:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [cinder] Attaching extra-spec to vol-type using Cinder 
py-client


Hello,

I am trying to create a volume type and set some extra-spec parameters on it as follows:

cinder type-create nfs

cinder type-key nfs set volume_backend_name=myNFSBackend

I want to achieve the same thing through the python client.

I can create the type as follows:

from cinderclient import client

cinder = client.Client('2', 'admin', 'pw', 'demo', 'http://127.0.0.1:5000/v2.0',
 service_type='volumev2')

cinder.volume_types.create('nfs')

However how can I associate the extra-spec through python-client code with the 
'nfs' volume type (same impact as the CLI 'cinder type-key nfs set 
volume_backend_name=myNFSBackend' does)?

The 'set_keys' etc. methods are there in the v2/volume_types.py in 
python-cinderclient codebase. How to call it? (it's part of VolumeType class, 
not VolumeTypeManager).

Any help would be great.

Thanks, Pradip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-31 Thread Miguel Ángel Ajo
On Tuesday, 31 March 2015 at 7:14, George Shuklin wrote:
  
  
 On 03/30/2015 11:18 AM, Kevin Benton wrote:
  What does fog do? Is it just a client to the Neutron HTTP API? If so,  
  it should not have broken like that because the API has remained  
  pretty stable. If it's a deployment tool, then I could see that  
  because the configuration options tend to suffer quite a bit of  
  churn as tools used by the reference implementation evolve.
   
  
 As far as I understand (I'm not a ruby guy, I'm an openstack guy, but I am
 peeking at ruby guys' attempts to use openstack with fog as a replacement
 for vagrant/virtualbox), the problem lies in the default network selection.

 Fog expects to have one network and use it, and a neutron network-rich
 environment is simply too complex for it. Maybe fog is to blame, but the
 result is simple: some user library worked fine with nova networks but
 struggles after the update to neutron.

 Linux usually covers all those cases to make the transition between versions
 very smooth. Openstack does not.
  
  I agree that these changes are an unpleasant experience for the end  
  users, but that's what the deprecation timeline is for. This feature  
  won't break in L, it will just result in deprecation warnings. If we  
  get feedback from users that this serves an important use case that  
  can't be addressed another way, we can always stop the deprecation at  
  that point.
   
  
 In my opinion it happens too fast and cruelly. For example: it is deprecated
 in the 'L' release and will be kept only if 'L' users complain. But for
 that, many users would have to switch from havana to a newer version. That is not
 true; many skip a few versions before moving to the new one.

 Openstack releases are too wild and untested to be used right after release
 (simple example: the VLAN id bug in neutron, which completely breaks hard
 reboots in neutron, was fixed in the last update of havana; that means all
 havanas were broken from the moment of release to the very last moment),
 so users wait until bugs are fixed. And they deploy the new version after
 that. So there is something like half a year between a new version and
 deployment. And no one wants to upgrade right after they have finished a
 deployment. Add one or two more years. And only then do users find that
 everything is deprecated and removed and openstack is new and shiny
 again, and everyone needs to learn it from scratch. I'm exaggerating a
 bit, but that's true - the older and more mature the installation (like a big
 public cloud), the less they want to upgrade every half year to
 the shiny new bugs.

 TL;DR: The deprecation cycle should take at least a few years to get proper
 feedback from real heavy users.
  
  


From the user POV I can’t help but agree; you pictured it right:
currently we mark something for deprecation, and by the middle/end of the next
cycle it’s deprecated. But most users won’t realize it’s deprecated until
it’s too late, either because they jump to a stable version only after a few 
stable
releases to be safe, or because they skip versions.

From the code point of view, it can sometimes become messy, but we
should take care of our customers…

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-31 Thread Duncan Thomas
On 31 March 2015 at 01:35, John Griffith john.griffi...@gmail.com wrote:

 On Mon, Mar 30, 2015 at 4:06 PM, Doug Wiegley 
 doug...@parksidesoftware.com wrote:

 ​


 - Test relies on some “optional” feature, like overlapping IP subnets that
 the backend doesn’t support.  I’d argue it’s another case of broken tests
 if they require an optional feature, but it still needs skipping in the
 meantime.


 ​This may be something specific to Neutron perhaps?  In Cinder LVM is
 pretty much the lowest common denominator.  I'm not aware of any volume
 tests in Tempest that rely on optional features that don't pick this up
 automatically out of the config (like multi-backend for example).
 ​



That I know of off the top of my head:
- Snapshot of an attached volume works for most drivers but not all
- Backup of a volume that has snapshots fails for some drivers
- Restore to a volume that has snapshots fails on some drivers

I think all of the above are things that we should fix, but they exist
today.

Since one obscure bug can lead to CI failing on every patch, is it better
to say 'no skips without active bugs, and record your open bugs somewhere'?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cross-Project meeting, Tue March 31st, 21:00 UTC

2015-03-31 Thread Thierry Carrez
Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting today at 21:00 UTC, with the
following agenda:

* PTL election season
* Design Summit content
  * Slot allocation
  * Session proposals (etherpads ?)
  * Introducing Cheddar-based sched edition
* Open discussion & announcements

See you there !

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] VTI or GRE over ipsec feature for vpnaas

2015-03-31 Thread wei hu
Hi, all.
Recently, I have a requirement to establish an ipsec connection with a virtual
tunnel interface (VTI).
But I found the existing vpnaas in neutron does not support ipsec with a
virtual tunnel interface.
Do we have a plan to let vpnaas support ipsec with vti or gre over ipsec?

With this feature, we can add route rules on the vpn gateways, so that we
are not limited to connecting
two private subnets with each ipsec connection (we can just add route
rules on the gateway).

(Sorry, I forgot to add a subject in the former email)
Related links:
http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/WAN_and_MAN/P2P_GRE_IPSec/P2P_GRE_IPSec/2_p2pGRE_Phase2.html

http://www.cisco.com/c/en/us/td/docs/ios/sec_secure_connectivity/configuration/guide/15_0/sec_secure_connectivity_15_0_book/sec_ipsec_virt_tunnl.html


--
huwei@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Million level scalability test report from cascading

2015-03-31 Thread joehuang
Hi, all,

During the last cross project meeting[1][2] on the next step for the OpenStack 
cascading solution[3], the conclusion of the meeting was that OpenStack isn't ready 
for the project, and that if he wants it ready sooner rather than later, joehuang needs to 
help make it ready by working on the scaling work being coded now; scaling is 
the first priority for the OpenStack community.

We just finished the 1 million VM semi-simulation test report[4] for the OpenStack 
cascading solution. The most interesting finding during the test is that the 
cascading architecture can support million-level ports in Neutron, and also 
million-level VMs in Nova. The test report also shows that the OpenStack 
cascading solution can manage up to 100k physical hosts without challenge. Some 
scaling issues were found during the test and are listed in the report.

The conclusion of the report is:
According to the Phase I and Phase II test data analysis, due to hardware 
resource limitations, the OpenStack cascading solution with the current 
configuration can support a maximum of 1 million virtual machines and is 
capable of handling 500 concurrent API requests if L3 (DVR) mode is included, or 
1000 concurrent API requests if only L2 networking is needed. It's up to the deployment 
policy to use the OpenStack cascading solution inside one site (one data center) 
or multiple sites (multiple data centers); the maximum number of sites (data centers) supported 
is 100, i.e., 100 cascaded OpenStack instances.

The test report is shared first, let's discuss the next step later.

Hope you have a joyful Easter holiday!

[1]Meeting minutes: 
http://eavesdrop.openstack.org/meetings/crossproject/2014/crossproject.2014-12-16-21.01.html
[2]Meeting log: 
http://eavesdrop.openstack.org/meetings/crossproject/2014/crossproject.2014-12-16-21.01.log.html
[3]OpenStack cascading solution: 
https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[4]1 million VM test report: 
http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers

Best Regards
Chaoyi Huang ( Joe Huang )




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder list and cinder create generating error as unknown column instance_uuid in cinder.volumes

2015-03-31 Thread Duncan Thomas
Does running 'sudo cinder-manage db sync' fix the problem (this runs all migration
scripts needed to bring the db up to the current schema)?

On 31 March 2015 at 10:05, Kamsali, RaghavendraChari (Artesyn) 
raghavendrachari.kams...@artesyn.com wrote:



 Hi,



 I am using openstack/juno and I brought up a setup with a controller node and
 a storage node. When I create volumes or get the list of volumes, it
 generates the error shown below.







 [stack@Storage devstack]$ cinder list

 ERROR: Internal Server Error (HTTP 500) (Request-ID:
 req-519d330d-5b14-4705-9f3d-937de00a61e0)

 [stack@Storage devstack]$





 *c-api.log*





 from (pid=24777) _http_log_response
 /usr/lib/python2.7/site-packages/keystoneclient/session.py:223

 2015-03-31 12:29:39.300 INFO cinder.api.openstack.wsgi
 [req-519d330d-5b14-4705-9f3d-937de00a61e0 155aae03e3214f9e8fc411a6395706f8
 e4e2d713323a4a66be5994f85ce91101] GET
 http://192.168.21.108:8776/v1/e4e2d713323a4a66be5994f85ce91101/volumes/detail

 2015-03-31 12:29:39.301 DEBUG cinder.api.openstack.wsgi
 [req-519d330d-5b14-4705-9f3d-937de00a61e0 155aae03e3214f9e8fc411a6395706f8
 e4e2d713323a4a66be5994f85ce91101] Empty body provided in request from
 (pid=24777) get_body /opt/stack/cinder/cinder/api/openstack/wsgi.py:789

 2015-03-31 12:29:39.324 ERROR cinder.api.middleware.fault
 [req-519d330d-5b14-4705-9f3d-937de00a61e0 155aae03e3214f9e8fc411a6395706f8
 e4e2d713323a4a66be5994f85ce91101] Caught error: (OperationalError) (1054,
 Unknown column 'volumes.instance_uuid' in 'field list') 'SELECT
 volumes.created_at AS volumes_created_at, volumes.updated_at AS
 volumes_updated_at, volumes.deleted_at AS volumes_deleted_at, volumes.id
 AS volumes_id, volumes._name_id AS volumes__name_id, volumes.ec2_id AS
 volumes_ec2_id, volumes.user_id AS volumes_user_id, volumes.project_id AS
 volumes_project_id, volumes.snapshot_id AS volumes_snapshot_id,
 volumes.host AS volumes_host, volumes.size AS volumes_size,
 volumes.availability_zone AS volumes_availability_zone,
 volumes.instance_uuid AS volumes_instance_uuid, volumes.attached_host AS
 volumes_attached_host, volumes.mountpoint AS volumes_mountpoint,
 volumes.attach_time AS volumes_attach_time, volumes.status AS
 volumes_status, volumes.attach_status AS volumes_attach_status,
 volumes.migration_status AS volumes_migration_status, volumes.scheduled_at
 AS volumes_scheduled_at, volumes.launched_at AS volumes_launched_at,
 volumes.terminated_at AS volumes_terminated_at, volumes.display_name AS
 volumes_display_name, volumes.display_description AS
 volumes_display_description, volumes.provider_location AS
 volumes_provider_location, volumes.provider_auth AS volumes_provider_auth,
 volumes.provider_geometry AS volumes_provider_geometry,
 volumes.volume_type_id AS volumes_volume_type_id, volumes.source_volid AS
 volumes_source_volid, volumes.encryption_key_id AS
 volumes_encryption_key_id, volumes.consistencygroup_id AS
 volumes_consistencygroup_id, volumes.deleted AS volumes_deleted,
 volumes.bootable AS volumes_bootable, volumes.replication_status AS
 volumes_replication_status, volumes.replication_extended_status AS
 volumes_replication_extended_status, volumes.replication_driver_data AS
 volumes_replication_driver_data, consistencygroups_1.created_at AS
 consistencygroups_1_created_at, consistencygroups_1.updated_at AS
 consistencygroups_1_updated_at, consistencygroups_1.deleted_at AS
 consistencygroups_1_deleted_at, consistencygroups_1.deleted AS
 consistencygroups_1_deleted, consistencygroups_1.id AS
 consistencygroups_1_id, consistencygroups_1.user_id AS
 consistencygroups_1_user_id, consistencygroups_1.project_id AS
 consistencygroups_1_project_id, consistencygroups_1.host AS
 consistencygroups_1_host, consistencygroups_1.availability_zone AS
 consistencygroups_1_availability_zone, consistencygroups_1.name AS
 consistencygroups_1_name, consistencygroups_1.description AS
 consistencygroups_1_description, consistencygroups_1.volume_type_id AS
 consistencygroups_1_volume_type_id, consistencygroups_1.status AS
 consistencygroups_1_status, volume_metadata_1.created_at AS
 volume_metadata_1_created_at, volume_metadata_1.updated_at AS
 volume_metadata_1_updated_at, volume_metadata_1.deleted_at AS
 volume_metadata_1_deleted_at, volume_metadata_1.deleted AS
 volume_metadata_1_deleted, volume_metadata_1.id AS volume_metadata_1_id,
 volume_metadata_1.`key` AS volume_metadata_1_key, volume_metadata_1.value
 AS volume_metadata_1_value, volume_metadata_1.volume_id AS
 volume_metadata_1_volume_id, volume_types_1.created_at AS
 volume_types_1_created_at, volume_types_1.updated_at AS
 volume_types_1_updated_at, volume_types_1.deleted_at AS
 volume_types_1_deleted_at, volume_types_1.deleted AS
 volume_types_1_deleted, volume_types_1.id AS volume_types_1_id,
 volume_types_1.name AS volume_types_1_name, volume_types_1.qos_specs_id
 AS volume_types_1_qos_specs_id, volume_admin_metadata_1.created_at AS
 volume_admin_metadata_1_created_at, 

Re: [openstack-dev] [Neutron] initial OVN testing

2015-03-31 Thread Miguel Ángel Ajo
That’s super nice ;) !!! :D

I’m prototyping over here [1] to gather some benchmarks for the summit 
presentation
about “Taking Security Groups To Ludicrous Speed with Open vSwitch” with Ivar, 
Justin
and Thomas.


I know Justin and Joe have been making good advances on it ;) [3] lately.

[1] https://review.openstack.org/#/c/167671/
[2] https://github.com/justinpettit/ovs/tree/conntrack
[3] https://github.com/justinpettit/ovs/commits/conntrack

Miguel Ángel Ajo


On Tuesday, 31 March 2015 at 9:34, Kevin Benton wrote:

 Very cool. What's the latest status on data-plane support for the conntrack 
 based things like firewall rules and conntrack integration?
  
 On Mon, Mar 30, 2015 at 7:19 PM, Russell Bryant rbry...@redhat.com 
 (mailto:rbry...@redhat.com) wrote:
  On 03/26/2015 07:54 PM, Russell Bryant wrote:
   Gary and Kyle, I saw in my IRC backlog that you guys were briefly
   talking about testing the Neutron ovn ml2 driver.  I suppose it's time
   to add some more code to the devstack integration to install the current
   ovn branch and set up ovsdb-server to serve up the right database for
   this.  I'll try to work on that tomorrow.  Of course, note that all we
   can set up right now is the northbound database.  None of the code that
   reacts to updates to that database is merged yet.  We can still go ahead
   and test our code and make sure the expected data makes it there, though.
   
  With help from Kyle Mestery, Gary Kotton, and Gal Sagie, some great
  progress has been made over the last few days.  Devstack support has
  merged and the ML2 driver seems to be doing the right thing.
   
  After devstack runs, you can see that the default networks created by
  devstack are in the OVN db:
   
   $ neutron net-list
   +--+-+--+
   | id   | name| subnets
 |
   +--+-+--+
   | 1c4c9a38-afae-40aa-a890-17cd460b314b | private | 
   115f27d1-5330-489e-b81f-e7f7da123a31 10.0.0.0/24 (http://10.0.0.0/24) |
   | 69fc7d7c-6906-43e7-b5e2-77c059cf4143 | public  | 
   6b5c1597-4af8-4ad3-b28b-a4e83a07121b |
   +--+-+--+
   
   $ ovn-nbctl lswitch-list
   47135494-6b36-4db9-8ced-3bdc9b711ca9 
   (neutron-1c4c9a38-afae-40aa-a890-17cd460b314b)
   03494923-48cf-4af5-a391-ed48fe180c0b 
   (neutron-69fc7d7c-6906-43e7-b5e2-77c059cf4143)
   
   $ ovn-nbctl lswitch-get-external-id 
   neutron-1c4c9a38-afae-40aa-a890-17cd460b314b
   neutron:network_id=1c4c9a38-afae-40aa-a890-17cd460b314b
   neutron:network_name=private
   
   $ ovn-nbctl lswitch-get-external-id 
   neutron-69fc7d7c-6906-43e7-b5e2-77c059cf4143
   neutron:network_id=69fc7d7c-6906-43e7-b5e2-77c059cf4143
   neutron:network_name=public
   
  You can also create ports and see those reflected in the OVN db:
   
   $ neutron port-create 1c4c9a38-afae-40aa-a890-17cd460b314b
   Created a new port:
   +---+-+
   | Field | Value   
   |
   +---+-+
   | admin_state_up| True
   |
   | allowed_address_pairs | 
   |
   | binding:vnic_type | normal  
   |
   | device_id | 
   |
   | device_owner  | 
   |
   | fixed_ips | {subnet_id: 
   115f27d1-5330-489e-b81f-e7f7da123a31, ip_address: 10.0.0.3} |
   | id| e7c080ad-213d-4839-aa02-1af217a6548c
   |
   | mac_address   | fa:16:3e:07:9e:68   
   |
   | name  | 
   |
   | network_id| 1c4c9a38-afae-40aa-a890-17cd460b314b
   |
   | security_groups   | be68fd4e-48d8-46f2-8204-8a916ea6f348
   |
   | status| DOWN
   |
   | tenant_id | ed782253a54c4e0a8b46e275480896c9
   |
   

[openstack-dev] [horizon][all] Missing XStatic-Angular-Irdragndrop: CI Check/Gate pipelines currently stuck due to a bad dependency creeping in the system.

2015-03-31 Thread Flavio Percoco

Greetings,

Our gate is currently stuck. Or rather, all the jobs that depend on
horizon, which depends on XStatic-Angular-Irdragndrop>=1.0.2.1 (or
rather, XStatic-Angular-lrdragndrop).

Apparently, what caused this was a typo in the previous name of this
project. After the rename[0], which happened in order to allow the
packager to properly upload the package, we ended up with all gates
failing due to a missing package on pypi[1].

There's a patch proposing the rename[2], which requires urgent
attention. A new release of this library might be required besides the
update of our global-requirements.

I believe the horizon team is already aware of this issue, I hope
we'll be able to fix it asap. Although, this might require some dark
magic.

Flavio

[0] https://review.openstack.org/#/c/167798/
[1] 
http://logs.openstack.org/93/164893/4/gate/gate-tempest-dsvm-redis-zaqar/a04779e/logs/devstacklog.txt.gz#_2015-03-31_06_57_46_107
[2] https://review.openstack.org/#/c/169132/

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cinder list and cinder create generating error as unknown column instance_uuid in cinder.volumes

2015-03-31 Thread Kamsali, RaghavendraChari (Artesyn)

Hi,

I am using openstack/juno and I brought up a setup with a controller node and 
a storage node. When I create volumes or get the list of volumes, it 
generates the error shown below.



[stack@Storage devstack]$ cinder list
ERROR: Internal Server Error (HTTP 500) (Request-ID: 
req-519d330d-5b14-4705-9f3d-937de00a61e0)
[stack@Storage devstack]$


c-api.log


from (pid=24777) _http_log_response 
/usr/lib/python2.7/site-packages/keystoneclient/session.py:223
2015-03-31 12:29:39.300 INFO cinder.api.openstack.wsgi 
[req-519d330d-5b14-4705-9f3d-937de00a61e0 155aae03e3214f9e8fc411a6395706f8 
e4e2d713323a4a66be5994f85ce91101] GET 
http://192.168.21.108:8776/v1/e4e2d713323a4a66be5994f85ce91101/volumes/detail
2015-03-31 12:29:39.301 DEBUG cinder.api.openstack.wsgi 
[req-519d330d-5b14-4705-9f3d-937de00a61e0 155aae03e3214f9e8fc411a6395706f8 
e4e2d713323a4a66be5994f85ce91101] Empty body provided in request from 
(pid=24777) get_body /opt/stack/cinder/cinder/api/openstack/wsgi.py:789
2015-03-31 12:29:39.324 ERROR cinder.api.middleware.fault 
[req-519d330d-5b14-4705-9f3d-937de00a61e0 155aae03e3214f9e8fc411a6395706f8 
e4e2d713323a4a66be5994f85ce91101] Caught error: (OperationalError) (1054, 
Unknown column 'volumes.instance_uuid' in 'field list') 'SELECT 
volumes.created_at AS volumes_created_at, volumes.updated_at AS 
volumes_updated_at, volumes.deleted_at AS volumes_deleted_at, volumes.id AS 
volumes_id, volumes._name_id AS volumes__name_id, volumes.ec2_id AS 
volumes_ec2_id, volumes.user_id AS volumes_user_id, volumes.project_id AS 
volumes_project_id, volumes.snapshot_id AS volumes_snapshot_id, volumes.host AS 
volumes_host, volumes.size AS volumes_size, volumes.availability_zone AS 
volumes_availability_zone, volumes.instance_uuid AS volumes_instance_uuid, 
volumes.attached_host AS volumes_attached_host, volumes.mountpoint AS 
volumes_mountpoint, volumes.attach_time AS volumes_attach_time, volumes.status 
AS volumes_status, volumes.attach_status AS volumes_attach_status, 
volumes.migration_status AS volumes_migration_status, volumes.scheduled_at AS 
volumes_scheduled_at, volumes.launched_at AS volumes_launched_at, 
volumes.terminated_at AS volumes_terminated_at, volumes.display_name AS 
volumes_display_name, volumes.display_description AS 
volumes_display_description, volumes.provider_location AS 
volumes_provider_location, volumes.provider_auth AS volumes_provider_auth, 
volumes.provider_geometry AS volumes_provider_geometry, volumes.volume_type_id 
AS volumes_volume_type_id, volumes.source_volid AS volumes_source_volid, 
volumes.encryption_key_id AS volumes_encryption_key_id, 
volumes.consistencygroup_id AS volumes_consistencygroup_id, volumes.deleted AS 
volumes_deleted, volumes.bootable AS volumes_bootable, 
volumes.replication_status AS volumes_replication_status, 
volumes.replication_extended_status AS volumes_replication_extended_status, 
volumes.replication_driver_data AS volumes_replication_driver_data, 
consistencygroups_1.created_at AS consistencygroups_1_created_at, 
consistencygroups_1.updated_at AS consistencygroups_1_updated_at, 
consistencygroups_1.deleted_at AS consistencygroups_1_deleted_at, 
consistencygroups_1.deleted AS consistencygroups_1_deleted, 
consistencygroups_1.id AS consistencygroups_1_id, consistencygroups_1.user_id 
AS consistencygroups_1_user_id, consistencygroups_1.project_id AS 
consistencygroups_1_project_id, consistencygroups_1.host AS 
consistencygroups_1_host, consistencygroups_1.availability_zone AS 
consistencygroups_1_availability_zone, consistencygroups_1.name AS 
consistencygroups_1_name, consistencygroups_1.description AS 
consistencygroups_1_description, consistencygroups_1.volume_type_id AS 
consistencygroups_1_volume_type_id, consistencygroups_1.status AS 
consistencygroups_1_status, volume_metadata_1.created_at AS 
volume_metadata_1_created_at, volume_metadata_1.updated_at AS 
volume_metadata_1_updated_at, volume_metadata_1.deleted_at AS 
volume_metadata_1_deleted_at, volume_metadata_1.deleted AS 
volume_metadata_1_deleted, volume_metadata_1.id AS volume_metadata_1_id, 
volume_metadata_1.`key` AS volume_metadata_1_key, volume_metadata_1.value AS 
volume_metadata_1_value, volume_metadata_1.volume_id AS 
volume_metadata_1_volume_id, volume_types_1.created_at AS 
volume_types_1_created_at, volume_types_1.updated_at AS 
volume_types_1_updated_at, volume_types_1.deleted_at AS 
volume_types_1_deleted_at, volume_types_1.deleted AS volume_types_1_deleted, 
volume_types_1.id AS volume_types_1_id, volume_types_1.name AS 
volume_types_1_name, volume_types_1.qos_specs_id AS 
volume_types_1_qos_specs_id, volume_admin_metadata_1.created_at AS 
volume_admin_metadata_1_created_at, volume_admin_metadata_1.updated_at AS 
volume_admin_metadata_1_updated_at, volume_admin_metadata_1.deleted_at AS 
volume_admin_metadata_1_deleted_at, volume_admin_metadata_1.deleted AS 
volume_admin_metadata_1_deleted, volume_admin_metadata_1.id AS 
volume_admin_metadata_1_id, 

Re: [openstack-dev] [Openstack-operators] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-31 Thread Kevin Benton
Assaf, can you provide some context on why this option had to be
deprecated? Isn't the no-namespace case a degenerate version of all the
stuff scoped to a namespace, or is it not that simple?

I'm less convinced that deprecating is the right move here if it's just to
make the code easier to manage. We did already get one use case from
Calico...

On Mon, Mar 30, 2015 at 11:11 PM, Miguel Ángel Ajo majop...@redhat.com
wrote:

 On Tuesday, 31 March 2015 at 7:14, George Shuklin wrote:



 On 03/30/2015 11:18 AM, Kevin Benton wrote:

 What does fog do? Is it just a client to the Neutron HTTP API? If so,
 it should not have broken like that because the API has remained
 pretty stable. If it's a deployment tool, then I could see that
 because the configuration options tend to suffer quite a bit of
 churn as tools used by the reference implementation evolve.

 As far as I understand (I'm not a ruby guy, I'm an openstack guy, but I am
 peeking at ruby guys' attempts to use openstack with fog as a replacement
 for vagrant/virtualbox), the problem lies in the default network selection.

 Fog expects to have one network and use it, and a neutron network-rich
 environment is simply too complex for it. Maybe fog is to blame, but the
 result is simple: some user library worked fine with nova networks but
 struggles after the update to neutron.

 Linux usually covers all those cases to make the transition between versions
 very smooth. Openstack does not.

 I agree that these changes are an unpleasant experience for the end
 users, but that's what the deprecation timeline is for. This feature
 won't break in L, it will just result in deprecation warnings. If we
 get feedback from users that this serves an important use case that
 can't be addressed another way, we can always stop the deprecation at
 that point.

 In my opinion it happens too fast and cruelly. For example: it is deprecated
 in the 'L' release and will be kept only if 'L' users complain. But for
 that, many users would have to switch from havana to a newer version. That is not
 true; many skip a few versions before moving to the new one.

 Openstack releases are too wild and untested to be used right after release
 (simple example: the VLAN id bug in neutron, which completely breaks hard
 reboots in neutron, was fixed in the last update of havana; that means all
 havanas were broken from the moment of release to the very last moment),
 so users wait until bugs are fixed. And they deploy the new version after
 that. So there is something like half a year between a new version and
 deployment. And no one wants to upgrade right after they have finished a
 deployment. Add one or two more years. And only then do users find that
 everything is deprecated and removed and openstack is new and shiny
 again, and everyone needs to learn it from scratch. I'm exaggerating a
 bit, but that's true - the older and more mature the installation (like a big
 public cloud), the less they want to upgrade every half year to
 the shiny new bugs.

 TL;DR: The deprecation cycle should take at least a few years to get proper
 feedback from real heavy users.


 From the user POV I can’t help but agree; you pictured it right:
 currently we mark something for deprecation, and by the middle/end of the next
 cycle it’s deprecated. But most users won’t realize it’s deprecated until
 it’s too late, either because they jump to a stable version only after a
 few stable
 releases to be safe, or because they skip versions.

 From the code point of view, it can sometimes become messy, but we
 should take care of our customers…






-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] initial OVN testing

2015-03-31 Thread Kevin Benton
Very cool. What's the latest status on data-plane support for the conntrack
based things like firewall rules and conntrack integration?

On Mon, Mar 30, 2015 at 7:19 PM, Russell Bryant rbry...@redhat.com wrote:

 On 03/26/2015 07:54 PM, Russell Bryant wrote:
  Gary and Kyle, I saw in my IRC backlog that you guys were briefly
  talking about testing the Neutron ovn ml2 driver.  I suppose it's time
  to add some more code to the devstack integration to install the current
  ovn branch and set up ovsdb-server to serve up the right database for
  this.  I'll try to work on that tomorrow.  Of course, note that all we
  can set up right now is the northbound database.  None of the code that
  reacts to updates to that database is merged yet.  We can still go ahead
  and test our code and make sure the expected data makes it there, though.

 With help from Kyle Mestery, Gary Kotton, and Gal Sagie, some great
 progress has been made over the last few days.  Devstack support has
 merged and the ML2 driver seems to be doing the right thing.

 After devstack runs, you can see that the default networks created by
 devstack are in the OVN db:

  $ neutron net-list
 
 +--+-+--+
  | id   | name| subnets
 |
 
 +--+-+--+
  | 1c4c9a38-afae-40aa-a890-17cd460b314b | private |
 115f27d1-5330-489e-b81f-e7f7da123a31 10.0.0.0/24 |
  | 69fc7d7c-6906-43e7-b5e2-77c059cf4143 | public  |
 6b5c1597-4af8-4ad3-b28b-a4e83a07121b |
 
 +--+-+--+

  $ ovn-nbctl lswitch-list
  47135494-6b36-4db9-8ced-3bdc9b711ca9
 (neutron-1c4c9a38-afae-40aa-a890-17cd460b314b)
  03494923-48cf-4af5-a391-ed48fe180c0b
 (neutron-69fc7d7c-6906-43e7-b5e2-77c059cf4143)

  $ ovn-nbctl lswitch-get-external-id
 neutron-1c4c9a38-afae-40aa-a890-17cd460b314b
  neutron:network_id=1c4c9a38-afae-40aa-a890-17cd460b314b
  neutron:network_name=private

  $ ovn-nbctl lswitch-get-external-id
 neutron-69fc7d7c-6906-43e7-b5e2-77c059cf4143
  neutron:network_id=69fc7d7c-6906-43e7-b5e2-77c059cf4143
  neutron:network_name=public

 You can also create ports and see those reflected in the OVN db:

  $ neutron port-create 1c4c9a38-afae-40aa-a890-17cd460b314b
  Created a new port:
 
 +---+-+
  | Field | Value
  |
 
 +---+-+
  | admin_state_up| True
   |
  | allowed_address_pairs |
  |
  | binding:vnic_type | normal
   |
  | device_id |
  |
  | device_owner  |
  |
  | fixed_ips | {subnet_id:
 115f27d1-5330-489e-b81f-e7f7da123a31, ip_address: 10.0.0.3} |
  | id| e7c080ad-213d-4839-aa02-1af217a6548c
   |
  | mac_address   | fa:16:3e:07:9e:68
  |
  | name  |
  |
  | network_id| 1c4c9a38-afae-40aa-a890-17cd460b314b
   |
  | security_groups   | be68fd4e-48d8-46f2-8204-8a916ea6f348
   |
  | status| DOWN
   |
  | tenant_id | ed782253a54c4e0a8b46e275480896c9
   |
 
 +---+-+

 List ports on the logical switch named neutron-1c4c9a38...:

  $ ovn-nbctl lport-list neutron-1c4c9a38-afae-40aa-a890-17cd460b314b
  ...
  96432697-df3c-472a-b48a-9f844764d4bf
 (neutron-e7c080ad-213d-4839-aa02-1af217a6548c)

 We can also see that the proper MAC address was set on that port:

  $ ovn-nbctl lport-get-macs neutron-e7c080ad-213d-4839-aa02-1af217a6548c
  fa:16:3e:07:9e:68

 --
 Russell Bryant





-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [scheduler] [gantt] Please stop using Gantt for discussing about Nova scheduler

2015-03-31 Thread Sylvain Bauza


On 31/03/2015 02:57, Dugger, Donald D wrote:

I actually prefer to use the term Gantt, it neatly encapsulates the discussions 
and it doesn't take much effort to realize that Gantt refers to the scheduler 
and, if you feel there is confusion, we can clarify things in the wiki page to 
emphasize the process: clean up the current scheduler interfaces and then split 
off the scheduler.  The end goal will be the Gantt scheduler and I'd prefer not 
to change the discussion.

Bottom line is I don't see a need to drop the Gantt reference.


While I agree with you that *most* of the scheduler effort is to 
spin off the scheduler as a dedicated repository whose codename is 
Gantt, there are some notes to make:
 1. not all the efforts are related to the split; some are only 
reducing the tech debt within Nova (eg. 
bp/detach-service-from-computenode has very little impact on the 
scheduler itself, but rather on what is passed to the scheduler as 
resources) and may confuse people who could wonder why it is related to 
the split


2. We haven't yet agreed on a migration path for Gantt and what will 
become of the existing nova-scheduler. I seriously doubt that the Nova 
community would accept keeping the existing nova-scheduler as a feature 
duplicate of the future Gantt codebase, but that has not yet been 
discussed and things can be less clear


3. Based on my experience, we are losing contributors or people 
interested in the scheduler area because they just don't know that Gantt 
is actually at the moment the Nova scheduler.



I seriously don't think that if we decide to leave the Gantt codename 
unused while we're working on Nova, it will seriously impact our 
capacity to propose an alternative based on a separate repository, 
ideally as a cross-project service. It will just reflect the reality, 
i.e. that Gantt is at the moment more an idea than a project.


-Sylvain




--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

-Original Message-
From: Sylvain Bauza [mailto:sba...@redhat.com]
Sent: Monday, March 30, 2015 8:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova] [scheduler] [gantt] Please stop using Gantt 
for discussing about Nova scheduler

Hi,

tl;dr: I used the [gantt] tag for this e-mail, but I would prefer if we could 
do this for the last time until we spin-off the project.

   As it is confusing for many people to understand the difference between 
the future Gantt project and the Nova scheduler effort we're doing, I'm 
proposing to stop using that name for all the efforts related to reducing the 
technical debt and splitting out the scheduler. That includes, not 
exhaustively, the topic name for our IRC weekly meetings on Tuesdays, any ML 
thread related to the Nova scheduler or any discussion related to the scheduler 
happening on IRC.
Instead of using [gantt], please use [nova] [scheduler] tags.

That said, any discussion related to the real future of a cross-project scheduler based 
on the existing Nova scheduler makes sense to be tagged as Gantt, of course.


-Sylvain





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Proposal for the Resume Feature

2015-03-31 Thread Renat Akhmerov
Hi,

Thanks guys for bringing this topic up for discussion. In my opinion, this 
feature is extremely important and will move Mistral further toward being a truly 
useful tool. I think it’s one of the “must have” features of Mistral.


 On 31 Mar 2015, at 08:56, Dmitri Zimine dzim...@stackstorm.com wrote:
 
 @Lingxian Kong
  The context for a task is used
 internally. I know the aim of this feature is to make it very easy
 and convenient for users to see the details of the workflow execution,
 but what can users do next with the context? Do you have a plan to
 let users change that context for a task? If the answer is no, I think
 it is not very necessary to expose the context endpoint.
 
 I think the answer is “yes, users will change the context”; this falls out of 
 use case #3. 
 Let’s be specific: a create_vm task failed due to, say, a network connection. 
 As a user, I created the VM manually and now want to continue the workflow. 
 The next step is to attach storage to the VM, which needs the VM ID published 
 variable. So a user needs to 
 modify the outgoing context of the create_vm task.

Agree with Dmitri here.


 Might use case 2 be sufficient? 
 We are also likely to specify multiple tasks: in case a parallel execution of 
 two tasks
 (create VM, create DNS record) failed again due to network conditions - once 
 the network 
 is back I want to continue, but re-run those two exact tasks. 
 
 Another point, maybe obvious but let’s articulate it: we re-run a task, not an 
 individual action within a task.
 In the context of with_items, retry, repeat, it will lead to running actions 
 multiple times.
 
 Finally, workflow execution traceability. We need to get to the point of 
 tracing pause and resume as workflow events. 
 
 @Lingxian Kong
  we can introduce the notification
 system to Mistral, which is heavily used in other OpenStack projects.
 care to elaborate? Thanks! 

I’m curious too. Lingxian, could you please explain in more detail what you mean 
exactly?

 On Fri, Mar 27, 2015 at 11:20 AM, W Chan m4d.co...@gmail.com 
 mailto:m4d.co...@gmail.com wrote:
 We assume the WF is in a paused/errored state when 1) the user manually pauses the WF,
 2) pause is specified on transition (on-condition(s) such as on-error), and
 3) task errored.
 
 The resume feature will support the following use cases.
 1) User resumes WF from manual pause.
 2) In the case of task failure, user fixed the problem manually outside of
 Mistral, and user wants to re-run the failed task.
 3) In the case of task failure, user fixed the problem manually outside of
 Mistral, and user wants to resume from the next task.
 
 Resuming from #1 should be straightforward.

Just to clarify: this already works.

 Resuming from #2, user may want to change the inbound context.
 Resuming from #3, the user is required to manually provide the published vars
 for the failed task(s).


These two cases is basically what we need to implement.

Winson, very good and clear summary (at least to me). I would suggest we 
prepare a little bit more formal (but not too much) spec of what we’re going to 
do here. A few examples would help us understand the topic better. So 
specifically, it would be interesting to see:

- What endpoints we are going to add and how approximately they would look 
(calculating requirements that need to be satisfied in order to resume a 
workflow, task contexts).
- A few typical scenarios of resuming a workflow with explanations of how we 
modify contexts or published vars and how we resume the workflow. The trivial 
case (#1) can be skipped as it’s already implemented.
- Roughly formed suggestions on how that all could be implemented.

This is just my preference to see something like this, but at the same time I 
personally don’t want you to spend much time on it; if it’s possible to 
prepare it within a reasonable amount of time, that would be helpful.

Thanks

Renat Akhmerov
@ Mirantis Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo.db] Repeatable Read considered harmful

2015-03-31 Thread Eugene Nikanorov
Hi Matthew,

I'll add just 2c:

We've tried to move from repeatable-read to read-committed in the Neutron
project.
This change actually caused multiple deadlocks during a regular tempest
test run.
That is a known problem (the issue with eventlet and the current mysql client
library),
but anyway, at least one major openstack project is not ready to move to
read-committed.

Also, a particular transaction isolation level's performance is highly
affected by the DB usage pattern.
Is there any research on how read-committed affects the performance of
openstack projects?

Thanks,
Eugene.

On Fri, Feb 6, 2015 at 7:59 PM, Matthew Booth mbo...@redhat.com wrote:

 I was surprised recently to discover that MySQL uses repeatable read for
 transactions by default. Postgres uses read committed by default, and
 SQLite uses serializable. We don't set the isolation level explicitly
 anywhere, so our applications are running under different isolation
 levels depending on backend. This doesn't sound like a good idea to me.
 It's one thing to support multiple sql syntaxes, but different isolation
 levels have different semantics. Supporting that is much harder, and
 currently we're not even trying.
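
For context, a minimal sketch (connection URL is illustrative, and this is not
oslo.db's actual wiring) of pinning the isolation level explicitly with
SQLAlchemy instead of relying on the backend default:

from sqlalchemy import create_engine, text

engine = create_engine('mysql://user:pass@localhost/nova',
                       isolation_level='READ COMMITTED')

with engine.connect() as conn:
    # every transaction on this engine now runs under READ COMMITTED,
    # whatever the MySQL/PostgreSQL server default happens to be
    conn.execute(text('SELECT 1'))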

 I'm aware that the same isolation level on different databases will
 still have subtly different semantics, but at least they should agree on
 the big things. I think we should pick one, and it should be read
 committed.

 Also note that 'repeatable read' on both MySQL and Postgres is actually
 snapshot isolation, which isn't quite the same thing. For example, it
 doesn't get phantom reads.

 The most important reason I think we need read committed is recovery
 from concurrent changes within the scope of a single transaction. To
 date, in Nova at least, this hasn't been an issue as transactions have
 had an extremely small scope. However, we're trying to expand that scope
 with the new enginefacade in oslo.db:
 https://review.openstack.org/#/c/138215/ . With this expanded scope,
 transaction failure in a library function can't simply be replayed
 because the transaction scope is larger than the function.

 So, 3 concrete examples of how repeatable read will make Nova worse:

 * https://review.openstack.org/#/c/140622/

 This was committed to Nova recently. Note how it involves a retry in the
 case of concurrent change. This works fine, because the retry creates
 a new transaction. However, if the transaction was larger than the scope
 of this function this would not work, because each iteration would
 continue to read the old data. The solution to this is to create a new
 transaction. However, because the transaction is outside of the scope of
 this function, the only thing we can do locally is fail. The caller then
 has to re-execute the whole transaction, or fail itself.

 This is a local concurrency problem which can be very easily handled
 locally, but not if we're using repeatable read.
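
A generic compare-and-swap sketch of the retry pattern being described (the
table, column and URL are hypothetical; this is not the Nova patch itself),
where each attempt runs in its own transaction:

from sqlalchemy import create_engine, text

engine = create_engine('mysql://user:pass@localhost/nova')


def bump_generation(instance_uuid, attempts=5):
    for _ in range(attempts):
        # Each attempt is a fresh transaction, so the re-read can see rows
        # committed by concurrent writers; a single long-lived repeatable-read
        # transaction would keep returning the stale snapshot instead.
        with engine.begin() as conn:
            current = conn.execute(
                text('SELECT generation FROM instances WHERE uuid = :u'),
                {'u': instance_uuid}).scalar()
            result = conn.execute(
                text('UPDATE instances SET generation = :new '
                     'WHERE uuid = :u AND generation = :old'),
                {'u': instance_uuid, 'old': current, 'new': current + 1})
            if result.rowcount == 1:
                return current + 1
    raise RuntimeError('too many concurrent updates')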

 *

 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L4749

 Nova has multiple functions of this type which attempt to update a
 key/value metadata table. I'd expect to find multiple concurrency issues
 with this if I stopped to give it enough thought, but concentrating just
 on what's there, notice how the retry loop starts a new transaction. If
 we want to get to a place where we don't do that, with repeatable read
 we're left failing the whole transaction.

 * https://review.openstack.org/#/c/136409/

 This one isn't upstream, yet. It's broken, and I can't currently think
 of a solution if we're using repeatable read.

 The issue is atomic creation of a shared resource. We want to handle a
 creation race safely. This patch:

 * Attempts to read the default (it will normally exist)
 * Creates a new one if it doesn't exist
 * Goes back to the start if creation failed due to a duplicate

 Seems fine, but it will fail because the re-read will continue to not
 return the new value under repeatable read (no phantom reads). The only
 way to see the new row is a new transaction. As this will no longer be
 in the scope of this function, the only solution will be to fail. Read
 committed could continue without failing.

 Incidentally, this currently works by using multiple transactions, which
 we are trying to avoid. It has also been suggested that in this specific
 instance the default security group could be created with the project.
 However, that would both be more complicated, because it would require
 putting a hook into another piece of code, and less robust, because it
 wouldn't recover if somebody deleted the default security group.


 To summarise, with repeatable read we're forced to abort the current
 transaction to deal with certain relatively common classes of
 concurrency issue, whereas with read committed we can safely recover. If
 we want to reduce the number of transactions we're using, which we do,
 the impact of this is going to dramatically increase. We should
 standardise on read committed.

 Matt
 --
 Matthew 

Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-31 Thread Sean Dague


bin7JAxOZOqRl.bin
Description: PGP/MIME version identification


encrypted.asc
Description: OpenPGP encrypted message
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-31 Thread James Bottomley
On Fri, 2015-03-27 at 17:01 +, Tim Bell wrote:
 From the stats 
 (http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014),
 
 
 -43% of production clouds use OVS (the default for Neutron)
 
 -30% of production clouds are Nova network based
 
 -15% of production clouds use linux bridge
 
 There is therefore a significant share of the OpenStack production
 user community who are interested in a simple provider network linux
 bridge based solution.
  
 I think it is important to make an attractive cloud solution  where
 deployers can look at the balance of function and their skills and
 choose the appropriate combinations.
 
 Whether a simple network model should be the default is a different
 question to should there be a simple option. Personally, one of the
 most regular complaints I get is the steep learning curve for a new
 deployment. If we could make it so that people can do it as a series
 of steps (by making an path to add OVS) rather than a large leap, I
 think that would be attractive.

To be honest, there's a technology gap between the LinuxBridge and OVS
that cannot be filled.  We've found, since we sell technology to hosting
companies, that we got an immediate back reaction when we tried to
switch from a LinuxBridge to OVS in our Cloud Server product.  The
specific problem is that lots of hosting providers have heavily scripted
iptables and traffic control rules on the host side (i.e. on the bridge)
for controlling client networks which simply cannot be replicated in
OVS.  Telling the customers to rewrite everything in OpenFlow causes
incredulity and threats to pull the product.  No currently existing or
planned technology is there to fill this gap (the closest was google's
plan to migrate iptables rules to openflow, which died).  Our net
takeaway is that we have to provide both options for the foreseeable
future (scripting works to convert some use cases, but by no means
all ... and in our case not even a majority).

So the point of this for OpenStack is seeing this as a choice between
LinuxBridge and OVS is going to set up a false dichotomy.  Realistically
the future network technology has to support both, at least until the
trailing edge becomes more comfortable with SDN.

Moving neutron to ML2 instead of L2 helps isolate neutron from the
bridge technology, but it doesn't do anything to help the customer who
is currently poking at L2 to implement specific policies because they
have to care what the bridge technology is.  Telling the customer not to
poke the bridge isn't an option because they see the L2 plane as their
primary interface to diagnose and fix network issues ... which is why
they care about the bridge technology.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Identifying release critical bugs in Kilo

2015-03-31 Thread Michael Still
Hi!

As discussed in the nova meeting last week, now is the time for us to
be focusing on closing release critical bugs in Kilo. The nominal date
for RC1 is 9 April, but it will release sooner than that if we close
all of the bugs targeted to RC1 before then.

This calls for two actions:

 - if you are aware of a bug that you think is release critical, you
need to flag it now. That should be in the form of tagging it as
kilo-rc-potential in launchpad and replying to this email. A reply is
helpful, because we need to actively curate this list.

 - secondly, we need to ensure that only really truly release critical
bugs are release critical, so you can expect core to be keeping an eye
on that list and possibly removing the tag from bugs which shouldn't
block the release of kilo.

Thanks everyone for your work on kilo, we're nearly there!

Michael

-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] FFE request for Automatic cleanup of share_servers

2015-03-31 Thread Julia Varlamova
Hello,

I'd like to request a Feature Freeze Exception for Automatic cleanup of
share_servers
(Launchpad:
https://blueprints.launchpad.net/manila/+spec/automatic-cleanup-of-share-servers
).

Patch can be found here: https://review.openstack.org/#/c/166182

I am looking forward to your decision about granting this change an
FFE.

Thank you!

--
Regards,
Julia Varlamova
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Security] Agenda for next meeting

2015-03-31 Thread Clark, Robert Graham
Security folks,

The agenda for the next security group meeting is up on
https://wiki.openstack.org/wiki/Meetings/OpenStackSecurity#OpenStack_Security_Group_Meetings

As a reminder, this is 1700 UTC on irc.freenode.net #openstack-meeting-alt

Cheers
-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Attaching extra-spec to vol-type using Cinder py-client

2015-03-31 Thread Pradip Mukhopadhyay
Oops. Missed it. Thanks! It worked Vipin.



On Tue, Mar 31, 2015 at 12:06 PM, Vipin Balachandran 
vbalachand...@vmware.com wrote:

  cinder.volume_types.create returns an instance of VolumeType.


 https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/volume_types.py#L118
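
 For example, something along these lines should work (an untested sketch;
 the endpoint, credentials and backend name are placeholders reusing the
 values from your question below):

     from cinderclient import client

     cinder = client.Client('2', 'admin', 'pw', 'demo',
                            'http://127.0.0.1:5000/v2.0',
                            service_type='volumev2')

     # create() returns a VolumeType object, so set_keys() can be called on it
     # directly -- the py-client equivalent of
     # `cinder type-key nfs set volume_backend_name=myNFSBackend`.
     nfs_type = cinder.volume_types.create('nfs')
     nfs_type.set_keys({'volume_backend_name': 'myNFSBackend'})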



 Thanks,

 Vipin



 *From:* Pradip Mukhopadhyay [mailto:pradip.inte...@gmail.com]
 *Sent:* Tuesday, March 31, 2015 10:07 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [cinder] Attaching extra-spec to vol-type
 using Cinder py-client



 Hello,

 I am trying to create and type-set some parameters to a volume-type as
 follows:

 cinder type-create nfs

 cinder type-key nfs set volume_backend_name=myNFSBackend

 The same thing I want to achieve through python client.

 I can create the type as follows:

 from cinderclient import client

 cinder = client.Client('2', 'admin', 'pw', 'demo',
  'http://127.0.0.1:5000/v2.0', service_type='volumev2')

 cinder.volume_types.create('nfs')

 However how can I associate the extra-spec through python-client code to
 the 'nfs' volume (same impact as the CLI 'cinder type-key nfs set
 volume_backend_name=myNFSBackend' does)?

 The 'set_keys' etc. methods are there in the v2/volume_types.py in
 python-cinderclient codebase. How to call it? (it's part of VolumeType
 class, not VolumeTypeManager).

 Any help would be great.

 Thanks, Pradip

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][all] Missing XStatic-Angular-Irdragndrop: CI Check/Gate pipelines currently stuck due to a bad dependency creeping in the system.

2015-03-31 Thread Flavio Percoco

Gate Unstuck!

The deleted package has been uploaded again. This will give us enough
room to do the proper migration to the new one.

Thanks to everyone involved in the fix,
Flavio

On 31/03/15 10:52 +0200, Flavio Percoco wrote:

Greetings,

Our gate is currently stuck. Or rather, all the jobs that depend on
horizon, which depends on XStatic-Angular-Irdragndrop>=1.0.2.1 (or
rather XStatic-Angular-lrdragndrop).

Apparently, what caused this was a typo in the previous name of this
project. After the rename[0], which happened in order to allow the
packager to properly upload the package, we ended up with all gates
failing due to a missing package on pypi[1].

There's a patch proposing the rename[2], which requires urgent
attention. A new release of this library might be required besides the
update of our global-requirements.

I believe the horizon team is already aware of this issue, I hope
we'll be able to fix it asap. Although, this might require some dark
magic.

Flavio

[0] https://review.openstack.org/#/c/167798/
[1] 
http://logs.openstack.org/93/164893/4/gate/gate-tempest-dsvm-redis-zaqar/a04779e/logs/devstacklog.txt.gz#_2015-03-31_06_57_46_107
[2] https://review.openstack.org/#/c/169132/

--
@flaper87
Flavio Percoco





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco


pgphAQfpmYlVH.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-31 Thread Sean Dague
On 03/30/2015 05:58 PM, Sean M. Collins wrote:
 Quick update about OVS vs LB:
 Sean M. Collins pushed up a patch that runs CI on Tempest with LB:
 https://review.openstack.org/#/c/168423/

 So far it's failing pretty badly.
 
 
 I haven't had a chance to debug the failures - it is my hope that
 perhaps there are just more changes I need to make to DevStack to make
 LinuxBridge work correctly. If anyone is successfully using LinuxBridge
 with DevStack, please do review that patch and offer suggestions or
 share their local.conf file. :)

(apologies for the previous encrypted email, enigmail somehow flagged
openstack-dev as encrypt-by-default for me.)

... Right, remember that getting a working neutron config requires a raft of
variables set correctly in the first place. Also, unlike n-net (which
owns setting up its own network), neutron doesn't bootstrap its own
bridges. Devstack has to specifically do ovs commands to create the
bridges for neutron otherwise it face plants. I expect that in this case
we're missing all that extra devstack initialization here, based on what
I saw in the failures.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] swift memory usage in centos7 devstack jobs

2015-03-31 Thread Ian Wienand

On 03/27/2015 08:47 PM, Alan Pevec wrote:

But how come that same recent pyOpenSSL doesn't consume more memory on Ubuntu?


Just to loop back on the final status of this ...

pyOpenSSL 0.14 does seem to use about an order of magnitude more
memory than 0.13 (roughly 2 MB to 20 MB).  For details see [1].

This is due to the way it now goes through cryptography (the
package, not the concept :) which binds to openssl using cffi.  This
ends up parsing a bunch of C to build up the ABI representation, and
it seems pycparser's model of this consumes most of the memory [2].
Whether that is a bug or not remains to be seen.
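
For anyone who wants to reproduce the comparison locally, a rough sketch
(this is not the exact method used for [1], and the numbers vary by
platform and library version):

    # Measure resident-set growth caused by importing pyOpenSSL.
    # ru_maxrss is reported in kB on Linux.
    import resource

    before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    import OpenSSL  # pyOpenSSL 0.14+ pulls in cryptography/cffi/pycparser
    after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

    print("RSS growth from importing pyOpenSSL: %d kB" % (after - before))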

Ubuntu doesn't notice this in our CI environment because it comes with
python-openssl 0.13 pre-installed in the image.  Centos started
hitting this when I merged my change to start using as many libraries
from pip as possible.

I have a devstack workaround for centos out (preinstall the package)
[3] and I think a global solution of avoiding it in requirements [4]
(reviews appreciated).

I'm also thinking about how we can better monitor memory usage for
jobs.  Being able to see exactly what change pushed up memory usage by
a large % would have made finding this easier.  We keep some overall
details for devstack runs in a log file, but there is room to do
better.

-i

[1] https://etherpad.openstack.org/p/oom-in-rax-centos7-CI-job
[2] https://github.com/eliben/pycparser/issues/72
[3] https://review.openstack.org/168217
[4] https://review.openstack.org/169596

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-03-31 Thread Anita Kuno
On 03/31/2015 08:46 PM, Dean Troyer wrote:
 On Tue, Mar 31, 2015 at 5:30 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 Do you feel like a core developer/reviewer (we initially called them core
 developers) [1]:

 In OpenStack a core developer is a developer who has submitted enough high
 quality code and done enough code reviews that we trust their code reviews
 for merging into the base source tree. It is important that we have a
 process for active developers to be added to the core developer team.

 Or a maintainer [1]:

 1. They share responsibility in the project’s success.
 2. They have made a long-term, recurring time investment to improve the
 project.
 3. They spend that time doing whatever needs to be done, not necessarily
 what is the most interesting or fun.


 First, I don't think these two things are mutually exclusive, that's a
 false dichotomy.  They sound like two groups of attributes (or roles), both
 of which must be earned in the eyes of the rest of the project team.
 Frankly, being a PTL is your maintainer list on steroids for some projects,
 except that the PTL is directly elected.
 
 
 Maintainers are often under-appreciated, because their work is harder to
 appreciate. It’s easy to appreciate a really cool and technically advanced
 feature. It’s harder to appreciate the absence of bugs, the slow but steady
 improvement in stability, or the reliability of a release process. But
 those things distinguish a good project from a great one.


 The best maintainers appear to be invisible because stuff Just Works(TM).
 
 It feels to me like a couple of things are being conflated here and need to
 be explicitly stated to break the conversation down into meaningful parts
 that can be discussed without getting side-tracked:
 
 a) How do we scale?  How do we spread the project management load?  How do
 we maintain consistency in subteams/subsystems?
 
 b) How do we avoid the 'aristocracy'?
 
 c) what did I miss?
 
 Taking b) first, the problem being solved needs to be stated.  Is it to
 avoid 'cliques'?  Are feelings being hurt because some are 'more-core' than
 others?  Is it to remove being a core team member as a job-review checkbox
 for some companies?  This seems to be bigger than just increasing core
 reviewer numbers, and tied to some developers being slighted in some way.
 
 A) is an organization structure problem.  We're seeing the boundaries of
 startup-style flat organization, and I think we all know we don't want
  traditional enterprise layers of managers.
 
 It seems like there is a progression of advancement for team members:
  prove yourself and become a core team member/reviewer/whatever.  The next
 step is what I think you want to formalize Joe, and that is those who again
 prove themselves in some manner to unlock the 'maintainer' achievements.
 
 The idea of taking the current becoming-core-team process and repeating it
 based on existing cores and PTL recommendations doesn't seem like too far
 of a stretch.  I mean really, is any project holding back people who want
 to do the maintainer role on more than just one pet part of a project? (I
 know those exist)
 
 
 FWIW, I have not been deeply involved in any of the highly
 political/vendor-driven projects so this may appear totally ignorant to
 those realities, but I think that is a clue that those projects are
 drifting away from the ideals that OpenStack was started with.
 
 dt
I agree with a lot of what both John and Doug have said so far in their
replies to this post but I'll add my thoughts to Dean's post because it
happens to be open.

I am really having a problem with a lack of common vision. Now this may
just be my problem here, and if it is, that is fine, I'll own that.

I had a long talk with Monty today about vision and whether or not
OpenStack had a common vision once and either lost it or is drifting
away from it or never had one in the first place. I won't put words into
other people's mouths, so I'll just stick to my own perspective here.

I have been operating with the belief that OpenStack did have a common
vision, and stated or not stated, it was clear enough to me that I took
from it a sense of direction in my activities, what to work on, what was
important, what furthered and supported OpenStack.

I'm really feeling lost here because I don't feel that anymore. It is
possible that it never existed in the first place and I was operating
within my own bubble and this actually is the reality. Okay fine, if
that is the way it is, that is my problem to deal with.

But other folks, as Dean mentions above, do indicate in their language
that they feel something was present at one point and is either gone now
or is in danger of going.

I don't know exactly what to call it but it goes along with the
unanswered question Anne Gentle posed to the TC a month or so back which
paraphrased was along the line of 'How do we create trust?'. I think I
felt trust before and I recognize that on a daily basis I don't now,
that 

Re: [openstack-dev] [neutron] Design Summit Session etherpad

2015-03-31 Thread Vikram Choudhary
Hi Kyle,

The link [2] https://etherpad.openstack.org/p/liberty-neutron-summit-topics 
shows the next meeting scheduled for Monday (4/7/2015), but 4/7/2015 is a Tuesday, not a Monday. 
The Monday date is (4/6/2015), so I got confused. ☹
Agenda for Next Neutron Team Meeting
Monday (4/7/2015) at 1400 UTC on #openstack-meeting

Thanks
Vikram

From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: 01 April 2015 01:02
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] Design Summit Session etherpad

Hi folks!
Now that we're deep into the feature freeze and the Liberty specs repository is 
open [1], I wanted to let everyone know I've created an etherpad to track 
Design Summit Sessions. If you'd like to propose something, please have a look 
at the etherpad. We'll discuss these next week in the Neutron meeting [3], so 
please join and come with your ideas!
Thanks,
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-March/060183.html
[2] https://etherpad.openstack.org/p/liberty-neutron-summit-topics
[3] https://wiki.openstack.org/wiki/Network/Meetings
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] API WG Meeting Time

2015-03-31 Thread Everett Toews
Ever since daylight saving time it has been increasingly difficult for many API 
WG members to make it to the Thursday 00:00 UTC meeting time.

Do we change it so there’s only the Thursday 16:00 UTC meeting time?

On a related note, I can’t make it to tomorrow’s meeting. Can someone else 
please #startmeeting?

Thanks,
Everett


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] heat-kubernetes is dead, long live heat-coe-templates

2015-03-31 Thread Adrian Otto
Thank you Lars!

--
Adrian

 On Mar 31, 2015, at 6:30 PM, Lars Kellogg-Stedman l...@redhat.com wrote:
 
 Hello folks,
 
  Late last week we completed the migration of
 https://github.com/larsks/heat-kubernetes into stackforge, where you
 can now access it as:
 
  https://github.com/stackforge/heat-coe-templates/
 
 Bug reports can be filed in launchpad:
 
  https://bugs.launchpad.net/heat-coe-templates/+filebug
 
  GitHub pull requests against the original repository will no longer be
 accepted; all changes can now go through the Gerrit review process
 that we all know and love.
 
 The only check implemented right now is a basic YAML linting process
 to ensure that files are syntactically correct.
 
 Cheers,
 
 -- 
 Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
 Cloud Engineering / OpenStack  | http://blog.oddbit.com/
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-31 Thread Kevin Benton
It's worth pointing out here that the in-tree OVS solution controls traffic
using iptables on regular bridges too. The difference between the two
occurs when it comes to how traffic is separated into different networks.

It's also worth noting that DVR requires OVS as well. If nobody is
comfortable with OVS then they can't use DVR and they won't have parity
with Nova network as far as floating IP resilience and performance is
concerned.
On Mar 31, 2015 4:56 AM, James Bottomley 
james.bottom...@hansenpartnership.com wrote:

 On Fri, 2015-03-27 at 17:01 +, Tim Bell wrote:
  From the stats (
 http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014
 ),
 
 
  -43% of production clouds use OVS (the default for Neutron)
 
  -30% of production clouds are Nova network based
 
  -15% of production clouds use linux bridge
 
  There is therefore a significant share of the OpenStack production
  user community who are interested in a simple provider network linux
  bridge based solution.
 
  I think it is important to make an attractive cloud solution  where
  deployers can look at the balance of function and their skills and
  choose the appropriate combinations.
 
  Whether a simple network model should be the default is a different
  question to should there be a simple option. Personally, one of the
  most regular complaints I get is the steep learning curve for a new
  deployment. If we could make it so that people can do it as a series
  of steps (by making an path to add OVS) rather than a large leap, I
  think that would be attractive.

 To be honest, there's a technology gap between the LinuxBridge and OVS
 that cannot be filled.  We've found, since we sell technology to hosting
 companies, that we got an immediate back reaction when we tried to
 switch from a LinuxBridge to OVS in our Cloud Server product.  The
 specific problem is that lots of hosting providers have heavily scripted
 iptables and traffic control rules on the host side (i.e. on the bridge)
 for controlling client networks which simply cannot be replicated in
 OVS.  Telling the customers to rewrite everything in OpenFlow causes
 incredulity and threats to pull the product.  No currently existing or
 planned technology is there to fill this gap (the closest was google's
 plan to migrate iptables rules to openflow, which died).  Our net
 takeaway is that we have to provide both options for the foreseeable
 future (scripting works to convert some use cases, but by no means
 all ... and in our case not even a majority).

 So the point of this for OpenStack is seeing this as a choice between
 LinuxBridge and OVS is going to set up a false dichotomy.  Realistically
 the future network technology has to support both, at least until the
 trailing edge becomes more comfortable with SDN.

 Moving neutron to ML2 instead of L2 helps isolate neutron from the
 bridge technology, but it doesn't do anything to help the customer who
 is currently poking at L2 to implement specific policies because they
 have to care what the bridge technology is.  Telling the customer not to
 poke the bridge isn't an option because they see the L2 plane as their
 primary interface to diagnose and fix network issues ... which is why
 they care about the bridge technology.

 James



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Million level scalability test report from cascading

2015-03-31 Thread Robert Collins
On 31 March 2015 at 22:05, joehuang joehu...@huawei.com wrote:
 Hi, all,

 During the last cross project meeting[1][2] for the next step of OpenStack 
 cascading solution[3], the conclusion of the meeting is that OpenStack isn't 
 ready for the project, and if he wants it ready sooner rather than later, joehuang 
 needs to help make it ready by working on the scaling work being coded now; 
 scaling is the first priority for the OpenStack community.

 We just finished the 1 million VM semi-simulation test report[4] for the 
 OpenStack cascading solution. The most interesting finding during the test 
 is that the cascading architecture can support million-level ports in Neutron 
 and million-level VMs in Nova. The test report also shows that the 
 OpenStack cascading solution can manage up to 100k physical hosts without 
 challenge. Some scaling issues were found during the test and are listed in the 
 report.

 The conclusion of the report is:
 According to the Phase I and Phase II test data analysis, due to the 
 hardware resource limitations, the OpenStack cascading solution with the current 
 configuration can support a maximum of 1 million virtual machines and is 
 capable of handling 500 concurrent API requests if L3 (DVR) mode is included, 
 or 1000 concurrent API requests if only L2 networking is needed. It is up to 
 deployment policy whether to use the OpenStack cascading solution inside one site (one 
 data center) or across multiple sites (multiple data centers); the maximum number of 
 sites (data centers) supported is 100, i.e., 100 cascaded OpenStack instances.

 The test report is shared first, let's discuss the next step later.

Wow, that's beautiful stuff.

The next time someone does a report like this, I'd like to suggest
some extra metrics to capture.
- API failure rate: what % of API calls result in errors.
- VM failure rate: what % of operations lead to a failed VM (e.g. not
deleted on delete, or not started on create, or didn't boot correctly).
- Block device failure rate, similarly.
(A trivial sketch of what I mean follows.)
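
Purely to illustrate what I mean by those rates (the counters are made up,
not data from the report):

    # Illustrative only: deriving the suggested rates from raw counts.
    api_calls, api_errors = 1000000, 1234   # hypothetical totals
    vm_ops, failed_vms = 1000000, 321       # e.g. not booted / not deleted

    print("API failure rate: %.3f%%" % (100.0 * api_errors / api_calls))
    print("VM failure rate:  %.3f%%" % (100.0 * failed_vms / vm_ops))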

Looking at your results, I observe significant load in the
steady-state mode for most of the DBs. That's a little worrying if, as
I assume, steady-state means 'no new API calls being made'.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] swift-dispersion-populate for different storage policies

2015-03-31 Thread Kirubakaran Kaliannan
Hi

I am working on making swift-dispersion-populate and swift-dispersion-report
work with different storage policies (different rings).
It looks like the containers/objects used are hardcoded internally.
Is anyone working on improving this, or am I the first one?
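
My rough idea so far is just to pick the right object ring per policy,
something like the sketch below (the object-N ring naming is the standard
convention; the integration into the dispersion tools is still hypothetical):

    # Sketch: load the object ring for a given storage policy index.
    # Policy 0 uses object.ring.gz, policy N uses object-N.ring.gz.
    from swift.common.ring import Ring

    def object_ring_for_policy(policy_index, swift_dir='/etc/swift'):
        ring_name = 'object' if policy_index == 0 else 'object-%d' % policy_index
        return Ring(swift_dir, ring_name=ring_name)

    ring = object_ring_for_policy(1)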

Thanks
kiru

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-03-31 Thread Joe Gordon
On Tue, Mar 31, 2015 at 5:24 PM, John Griffith john.griffi...@gmail.com
wrote:



 On Tue, Mar 31, 2015 at 4:30 PM, Joe Gordon joe.gord...@gmail.com wrote:

 I am starting this thread based on Thierry's feedback on [0].  Instead of
 writing the same thing twice, you can look at the rendered html from that
 patch [1]. Neutron tried to go from core to maintainer but after input from
 the TC and others, they are keeping the term 'core' but are clarifying what
 it means to be a neutron core [2]. [2] does a very good job of showing how
 what it means to be core is evolving.  From

 everyone is a dev and everyone is a reviewer. No committers or repo
 owners, no aristocracy. Some people just commit to do a lot of reviewing
 and keep current with the code, and have votes that matter more (+2).
 (Thierry)

 To a system where cores are more than people who have votes that matter
 more. Neutron's proposal tries to align that document with what is already
 happening.

 1. They share responsibility in the project's success.
 2. They have made a long-term, recurring time investment to improve the
 project.
 3. They spend their time doing what needs to be done to ensure the
 project's success, not necessarily what is the most interesting or fun.


 I think there are a few issues at the heart of this debate:

 1. Our current concept of a core team has never been able to grow past 20
 or so people, even for really big projects like nova and cinder. Why is
 that?  How do we delegate responsibility for subsystems? How do we keep
 growing?
 2. If everyone is just developers and reviewers, who is actually
 responsible for the project's success? How does that mesh with the ideal of
 no 'aristocracy'? Do our early goals still make sense today?




 Do you feel like a core developer/reviewer (we initially called them core
 developers) [1]:

 In OpenStack a core developer is a developer who has submitted enough
 high quality code and done enough code reviews that we trust their code
 reviews for merging into the base source tree. It is important that we have
 a process for active developers to be added to the core developer team.

 Or a maintainer [1]:

 1. They share responsibility in the project’s success.
 2. They have made a long-term, recurring time investment to improve the
 project.
 3. They spend that time doing whatever needs to be done, not necessarily
 what is the most interesting or fun.

 Maintainers are often under-appreciated, because their work is harder to
 appreciate. It’s easy to appreciate a really cool and technically advanced
 feature. It’s harder to appreciate the absence of bugs, the slow but steady
 improvement in stability, or the reliability of a release process. But
 those things distinguish a good project from a great one.




 [0] https://review.openstack.org/#/c/163660/
 [1]
 http://docs-draft.openstack.org/60/163660/3/check/gate-governance-docs/f386acf//doc/build/html/resolutions/20150311-rename-core-to-maintainers.html
 [2] https://review.openstack.org/#/c/164208/

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Hey Joe,

 I mentioned in last weeks TC meeting that I didn't really see a burning
 need to change or create new labels; but that's probably beside the
 point.  So if I read this right, it really comes down to this: a number of people in the
 community want core to mean something more than special reviewer, is
 that right?  I mean regardless of whether you change the name from core
 to maintainer I really don't care.  If it makes some folks feel better to
 have that title/label associated with themselves that's cool by me (yes I
 get the *extra* responsibilities part you lined out).


As Doug said in his response, for many projects this is about trying to
make the definition of what is expected from a core reflect reality.



 What is missing for me here however is who picks these special people.
 I'm convinced that this does more to promote the idea of special
 contributors than anything else.  Maybe that's actually what you want, but
 it seemed based on your message that wasn't the case.


correct, I would like to see the opposite. I think we need to empower and
trust more people with more than just the standard +1 vote.



 Anyway, core nominations are fairly objective in my opinion and is
 *mostly* based on number of reviews and perceived quality of those reviews
 (measured somewhat by disagreement rates etc).  What are the metrics for
 this special group of folks that you're proposing we empower and title as
 maintainers?  Do I get to be a maintainer, is it reserved for a special
 group of people, a specific company?  What are the criteria? Do *you* get to
 be a maintainer?


Long term I see two levels of maintainers. General maintainers and
subsystem maintainers.   Both 

Re: [openstack-dev] [congress] is an openstack project

2015-03-31 Thread Zhipeng Huang
Congrats !

On Wed, Apr 1, 2015 at 4:40 AM, Aaron Rosen aaronoro...@gmail.com wrote:

 Sure thing!

 Thanks Sean.

 Aaron


 On Tue, Mar 31, 2015 at 1:28 PM, sean roberts seanrobert...@gmail.com
 wrote:

 All that is left now is to patch infra to copy the three repos from
 stackforge. Aaron, can you take that on?

 Congratulations team!

 ~ sean



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-03-31 Thread Joe Gordon
On Tue, Mar 31, 2015 at 5:46 PM, Dean Troyer dtro...@gmail.com wrote:

 On Tue, Mar 31, 2015 at 5:30 PM, Joe Gordon joe.gord...@gmail.com wrote:

 Do you feel like a core developer/reviewer (we initially called them core
 developers) [1]:

 In OpenStack a core developer is a developer who has submitted enough
 high quality code and done enough code reviews that we trust their code
 reviews for merging into the base source tree. It is important that we have
 a process for active developers to be added to the core developer team.

 Or a maintainer [1]:

 1. They share responsibility in the project’s success.
 2. They have made a long-term, recurring time investment to improve the
 project.
 3. They spend that time doing whatever needs to be done, not necessarily
 what is the most interesting or fun.


 First, I don't think these two things are mutually exclusive, that's a
 false dichotomy.  They sound like two groups of attributes (or roles), both
 of which must be earned in the eyes of the rest of the project team.
 Frankly, being a PTL is your maintainer list on steroids for some projects,
 except that the PTL is directly elected.


Yes, these are not orthogonal ideas. The question should be rephrased to
'which description do you identify the most with: core developer/reviewer
or maintainer?'

P.S. if you read the linked spec, you will see the maintainer definition is
straight from docker.




 Maintainers are often under-appreciated, because their work is harder to
 appreciate. It’s easy to appreciate a really cool and technically advanced
 feature. It’s harder to appreciate the absence of bugs, the slow but steady
 improvement in stability, or the reliability of a release process. But
 those things distinguish a good project from a great one.


 The best maintainers appear to be invisible because stuff Just Works(TM).

 It feels to me like a couple of things are being conflated here and need
 to be explicitly stated to break the conversation down into meaningful
 parts that can be discussed without getting side-tracked:

 a) How do we scale?  How do we spread the project management load?  How do
 we maintain consistency in subteams/subsystems?

 b) How do we avoid the 'aristocracy'?

 c) what did I miss?


Well said.


 Taking b) first, the problem being solved needs to be stated.  Is it to
 avoid 'cliques'?  Are feelings being hurt because some are 'more-core' than
 others?  Is it to remove being a core team member as a job-review checkbox
 for some companies?  This seems to be bigger than just increasing core
 reviewer numbers, and tied to some developers being slighted in some way.


I am honestly not actually clear on what this one really means. I think
this originates from some of the oral history of OpenStack. As Thierry said,
No committers or repo owners, no aristocracy, I think this is related to
OpenStack's notion of a flat core team where members of the core team were
supposed to be fungible, and all trust each other.

I don't think this is about removing being a core from a job review
checkbox, this may be about inter company/team politics? Not really sure
though.


 A) is an organization structure problem.  We're seeing the boundaries of
 startup-style flat organization, and I think we all know we don't want
  traditional enterprise layers of managers.


Yes, well said; we are seeing the boundaries of the flat-style organization
in many of the larger projects.


 It seems like there is a progression of advancement for team members:
  prove yourself and become a core team member/reviewer/whatever.  The next
 step is what I think you want to formalize Joe, and that is those who again
 prove themselves in some manner to unlock the 'maintainer' achievements.


Two comments

1. Yes, I think we need to clarify the next step once you prove yourself.
This is exactly what neutron is doing in their patch.
2. There is a really big second part to this, which is figuring out a way to
scale the 'core teams' beyond that magical size of 20 people. See more
below.



 The idea of taking the current becoming-core-team process and repeating it
 based on existing cores and PTL recommendations doesn't seem like too far
 of a stretch.  I mean really, is any project holding back people who want
 to do the maintainer role on more than just one pet part of a project? (I
 know those exist)


I am more concerned about empowering people with the inverse desire:
people who are interested in one subsection of a project should be
empowered to help maintain that piece and share some of the
review/maintenance burden. Take the nova db for example. Pulling the nova
db out into its own repo is a lot more pain than it's worth, but there are
definitely people who are just interested in making sure nova's DB calls
are performant. Today these people can review the code, but ultimately two
cores are needed to review the code, making it hard for people to feel
empowered to own/maintain that code.


FWIW, I have not been deeply involved 

Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-03-31 Thread Doug Wiegley

 On Mar 31, 2015, at 6:24 PM, John Griffith john.griffi...@gmail.com wrote:
 
 
 
 On Tue, Mar 31, 2015 at 4:30 PM, Joe Gordon joe.gord...@gmail.com 
 mailto:joe.gord...@gmail.com wrote:
 I am starting this thread based on Thierry's feedback on [0].  Instead of 
 writing the same thing twice, you can look at the rendered html from that 
 patch [1]. Neutron tried to go from core to maintainer but after input from 
 the TC and others, they are keeping the term 'core' but are clarifying what 
 it means to be a neutron core [2]. [2] does a very good job of showing how 
 what it means to be core is evolving.  From 
 everyone is a dev and everyone is a reviewer. No committers or repo owners, 
 no aristocracy. Some people just commit to do a lot of reviewing and keep 
  current with the code, and have votes that matter more (+2). (Thierry) 
  To a system where cores are more than people who have votes that matter more. 
 Neutron's proposal tries to align that document with what is already 
 happening.
 1. They share responsibility in the project's success.
 2. They have made a long-term, recurring time investment to improve the 
 project.
  3. They spend their time doing what needs to be done to ensure the project's 
 success, not necessarily what is the most interesting or fun.
 
 
 I think there are a few issues at the heart of this debate:
 
 1. Our current concept of a core team has never been able to grow past 20 or 
 so people, even for really big projects like nova and cinder. Why is that?  
 How do we delegate responsibility for subsystems? How do we keep growing?
  2. If everyone is just developers and reviewers, who is actually responsible 
  for the project's success? How does that mesh with the ideal of no 
  'aristocracy'? Do our early goals still make sense today?
 
 
 
 
  Do you feel like a core developer/reviewer (we initially called them core 
 developers) [1]:
 In OpenStack a core developer is a developer who has submitted enough high 
 quality code and done enough code reviews that we trust their code reviews 
 for merging into the base source tree. It is important that we have a process 
 for active developers to be added to the core developer team.
 Or a maintainer [1]:
 1. They share responsibility in the project’s success.
 2. They have made a long-term, recurring time investment to improve the 
 project.
 3. They spend that time doing whatever needs to be done, not necessarily what 
 is the most interesting or fun.
 
 Maintainers are often under-appreciated, because their work is harder to 
 appreciate. It’s easy to appreciate a really cool and technically advanced 
 feature. It’s harder to appreciate the absence of bugs, the slow but steady 
 improvement in stability, or the reliability of a release process. But those 
 things distinguish a good project from a great one.
 
 
 
 [0] https://review.openstack.org/#/c/163660/ 
 https://review.openstack.org/#/c/163660/
 [1] 
 http://docs-draft.openstack.org/60/163660/3/check/gate-governance-docs/f386acf//doc/build/html/resolutions/20150311-rename-core-to-maintainers.html
  
 http://docs-draft.openstack.org/60/163660/3/check/gate-governance-docs/f386acf//doc/build/html/resolutions/20150311-rename-core-to-maintainers.html
 [2] https://review.openstack.org/#/c/164208/ 
 https://review.openstack.org/#/c/164208/
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 Hey Joe,
 
 I mentioned in last weeks TC meeting that I didn't really see a burning need 
 to change or create new labels; but that's probably beside the point.  So 
 if I read this it really comes down to a number of people in the community 
 want core to mean something more than special reviewer is that right?  I 
 mean

Just my $0.02, but I think the intent is the exact opposite. Core reviewers 
already tend to be doing more than just reviewing. Maybe it’s a simple 
by-product of the review expertise translating into faster triage, or the 
relationships developed with other cores making maintenance tasks easier, or an 
indication that the folks with that much time just end up tending to be cores 
also, but regardless, the cores in a project end up being more than just 
reviewers. Which, again, could be coincidence more than title.

The intent, as I understand it, is simply to attempt to document what's already 
going on, in preparation to *split* those responsibilities more, as the current 
scheme is not working/scaling for some projects.

The name change, to me, is just distracting noise.

Thanks,
doug


 regardless of whether you change the name from core to maintainer I 
 really don't care.  If it makes some folks feel better to 

[openstack-dev] [magnum] heat-kubernetes is dead, long live heat-coe-templates

2015-03-31 Thread Lars Kellogg-Stedman
Hello folks,

Late last week we completed the migration of
https://github.com/larsks/heat-kubernetes into stackforge, where you
can now access it as:

  https://github.com/stackforge/heat-coe-templates/

Bug reports can be filed in launchpad:

  https://bugs.launchpad.net/heat-coe-templates/+filebug

GitHub pull requests against the original repository will no longer be
accepted; all changes can now go through the Gerrit review process
that we all know and love.

The only check implemented right now is a basic YAML linting process
to ensure that files are syntactically correct.
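
Roughly equivalent to the following sketch (the actual gate job may differ):

    # Rough sketch of a recursive YAML syntax check.
    import os
    import sys

    import yaml

    errors = 0
    for dirpath, _dirs, files in os.walk('.'):
        for name in files:
            if not name.endswith(('.yaml', '.yml')):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path) as f:
                    yaml.safe_load(f)
            except yaml.YAMLError as exc:
                print('%s: %s' % (path, exc))
                errors += 1

    sys.exit(1 if errors else 0)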

Cheers,

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



pgp4e0rmRGFeO.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One confirm about max fixed ips per port

2015-03-31 Thread Zou, Yun
Hello, Shixiong

The discussion has continued on Gerrit, and I got a clear answer 
from a core reviewer.

Best regards,
Watanabe.isao

 -Original Message-
 From: sparkofwisdom.cl...@gmail.com
 [mailto:sparkofwisdom.cl...@gmail.com]
 Sent: Tuesday, March 31, 2015 11:43 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] One confirm about max fixed ips per
 port
 
 Do we have clarity on this question? I think it will be more important for
 IPv6 enabled port…Any guidance from the Neutron Core?
 
 
 
 Thanks!
 
 
 
 Shixiong
 
 
 
 
 
 Zou, Yun zou@jp.fujitsu.com wrote:
 
 Hello, Oleg Bondarev.
 
 
 
 Sir, I could not find out any merit of multi subnets on one network, except
 the following one.
 
 - Migrate IPv4 to IPv6, so we need both subnet range on one network.
 
 So I am not sure about the necessity of the max_fixed_ips_per_port parameter.
 
 All I know is that only the DB module and the OpenContrail plugin use this
 parameter for validation.
 
 Are there any known use cases for this parameter, please?
 
 I appreciate a lot of your help.
 
 
 
 My question is related to fix [1].
 
 [1]: https://review.openstack.org/#/c/160214/
 
 
 
 Best regards,
 
 Watanabe.isao
 
 
 
 
 
 
 __
 
 OpenStack Development Mailing List (not for usage questions)
 
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican : Use of consumer resource

2015-03-31 Thread Asha Seshagiri
Tons of Thanks to  Adam ,John and all the people in the openstack group who
have been always proactive in responding to the concerns and queries . It
has helped me a lot :)

On Tue, Mar 31, 2015 at 12:20 AM, Adam Harwell adam.harw...@rackspace.com
wrote:

  As John said, the URI is unrestricted (intentionally so) -- this could
 be 'mailto:s...@person.com' just as easily as a reference to another
 OpenStack or external service. Originally, the idea was that Loadbalancers
 would need to use a Container for TLS purposes, so we'd put the LB's URI in
 there as a back-reference (
 https://loadbalancers.myservice.com/lbaas/v2/loadbalancers/12345). That
 way, you could easily show in Horizon that LB 12345 is using this
 container.

  Registering with that POST has the side-effect of receiving the
 container's data as though you'd just done a GET — so, the design was that
 any time a service needed to GET the container data, it would do a POST to
 register instead — which would give you the data, but also mark interest.
 The registration action is idempotent, so you can register once, twice, or
 a hundred times and it has the same effect. The only tricky part is making
 sure that your service de-registers when you stop using the container.
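
  A minimal sketch of that register-then-read pattern (the token, endpoint
 and consumer values below are placeholders, not production code):

     # Register interest in a container (the response body is the container),
     # then de-register when the service stops using it.
     import json
     import requests

     BARBICAN = 'http://localhost:9311/v1'   # placeholder endpoint
     CONTAINER = '888b29a4-c7cf-49d0-bfdf-bd9e6f26d718'
     HEADERS = {'X-Auth-Token': '<token>', 'Content-Type': 'application/json'}
     consumer = {'name': 'foo-service',
                 'URL': 'https://www.fooservice.com/widgets/1234'}

     resp = requests.post('%s/containers/%s/consumers' % (BARBICAN, CONTAINER),
                          headers=HEADERS, data=json.dumps(consumer))
     container = resp.json()

     # ... later, when the service no longer uses the container:
     requests.delete('%s/containers/%s/consumers' % (BARBICAN, CONTAINER),
                     headers=HEADERS, data=json.dumps(consumer))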

  --Adam


   From: John Wood john.w...@rackspace.com
 Date: Tuesday, March 31, 2015 12:06 AM
 To: Asha Seshagiri asha.seshag...@gmail.com, openstack-dev 
 openstack-dev@lists.openstack.org
 Cc: Reller, Nathan S. nathan.rel...@jhuapl.edu, Douglas Mendizabal 
 douglas.mendiza...@rackspace.com, a...@redhat.com a...@redhat.com,
 Paul Kehrer paul.keh...@rackspace.com, Adam Harwell 
 adam.harw...@rackspace.com

 Subject: Re: Barbican : Use of consumer resource

   (Including Adam, who implemented this feature last year to make sure
 I’m not misspeaking here :)

  Hello Asha,

  The consumers feature allows clients/services to register ‘interest’ in
 a given secret or container. The URL provided is unrestricted. Clients that
 wish to delete a secret or consumer may add logic to hold off deleting if
 other services have registered their interest in the resource. However for
 Barbican this data is only informational, with no business logic (such as
 rejecting delete attempts) associated with it.

  I hope that helps.

  Thanks,
 John


   From: Asha Seshagiri asha.seshag...@gmail.com
 Date: Monday, March 30, 2015 at 5:04 PM
 To: openstack-dev openstack-dev@lists.openstack.org
 Cc: John Wood john.w...@rackspace.com, Reller, Nathan S. 
 nathan.rel...@jhuapl.edu, Douglas Mendizabal 
 douglas.mendiza...@rackspace.com, a...@redhat.com a...@redhat.com,
 Paul Kehrer paul.keh...@rackspace.com
 Subject: Re: Barbican : Use of consumer resource

   Including Alee and Paul in the loop

  Refining the above question :

  The consumer resource allows the clients to register with container
 resources. Please find the command and response below

  POST v1/containers/888b29a4-c7cf-49d0-bfdf-bd9e6f26d718/consumers

 Header: content-type=application/json
 X-Project-Id: {project_id}
 {
 name: foo-service,
 URL: https://www.fooservice.com/widgets/1234;
 }

 I would like to know the following :

 1. Who does the client here refer to? OpenStack services, or any other 
 services as well?

 2. Once the client gets registered through the consumer resource, how does the 
 client consume or use the consumer resource?

 Any Help would be appreciated.

 Thanks Asha.





 On Mon, Mar 30, 2015 at 12:05 AM, Asha Seshagiri asha.seshag...@gmail.com
  wrote:

 Hi All,

  Once the consumer resource registers to the containers , how does the
 consumer resource consume the container resource?
 Is there any API supporting the above operation.

  Could any one please help on this?

  --
  *Thanks and Regards,*
 *Asha Seshagiri*




  --
  *Thanks and Regards,*
 *Asha Seshagiri*




-- 
*Thanks and Regards,*
*Asha Seshagiri*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] FFE request for Authomatic cleanup of share_servers

2015-03-31 Thread Ben Swartzlander

On 03/31/2015 08:49 AM, Julia Varlamova wrote:

Hello,

I'd like to request a Feature Freeze Exception for Automatic cleanup 
of share_servers
(Launchpad: 
https://blueprints.launchpad.net/manila/+spec/automatic-cleanup-of-share-servers).


Patch can be found here: https://review.openstack.org/#/c/166182

I am looking forward to your decision about granting this change 
an FFE.


Thank you!



This is a change I'm interested in seeing merged in Kilo.

I believe the risk is small due to the small number of lines of code and 
the good test coverage.


The risk of NOT fixing this in Kilo is that some deployments will leak 
share_servers and waste a possibly large amount of resources. This 
leaking has been observed in my test environment, and the only 
workaround is for the administrator to periodically check share_servers 
and delete them.


I view the current behaviour as a bug, despite the fact that the change 
is advertised as a feature with a blueprint.


Since I'm in favor of this change, I'd like other community members to 
weigh in on the risks of fixing this vs. not fixing this in Kilo.


-Ben Swartzlander




--
Regards,
Julia Varlamova


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One confirm about max fixed ips per port

2015-03-31 Thread sparkofwisdom.cloud
Do we have clarity on this question? I think it will be more important for 
IPv6-enabled ports… Any guidance from the Neutron Core?
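
For context, the dual-stack case is where a port naturally ends up with more
than one fixed IP; a rough python-neutronclient sketch (all IDs and
credentials are placeholders):

    # Hedged sketch: a dual-stack port with one IPv4 and one IPv6 fixed IP,
    # which is the kind of port max_fixed_ips_per_port ends up limiting.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='pw',
                            tenant_name='demo',
                            auth_url='http://127.0.0.1:5000/v2.0')

    port = {'port': {'network_id': '<network-uuid>',
                     'fixed_ips': [{'subnet_id': '<ipv4-subnet-uuid>'},
                                   {'subnet_id': '<ipv6-subnet-uuid>'}]}}
    neutron.create_port(port)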


Thanks!

Shixiong

Zou, Yun zou@jp.fujitsu.com wrote:

Hello, Oleg Bondarev.

Sir, I could not find out any merit of multi subnets on one network, except 
the following one.

- Migrate IPv4 to IPv6, so we need both subnet range on one network.
So I am not sure about the necessity of the max_fixed_ips_per_port parameter.
All I know is that only the DB module and the OpenContrail plugin use this 
parameter for validation.

Are there any known use cases for this parameter, please?
I very much appreciate your help.

My question is related to fix [1].
[1]: https://review.openstack.org/#/c/160214/

Best regards,
Watanabe.isao


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hyper-V CI broken?

2015-03-31 Thread Hashir Abdi
System taken offline while we investigate.

Wiki will be updated again on recovery.

Regards 

Hashir Abdi

Microsoft

-Original Message-
From: Anita Kuno [mailto:ante...@anteaya.info] 
Sent: Tuesday, March 31, 2015 10:52 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Hyper-V CI broken?

On 03/31/2015 10:25 AM, Hashir Abdi wrote:
 Michael:
 
 
 Sorry about this Michael.
 
 We will work with our Cloudbase partners and look into these failing tests.
 
 Will update once a resolution is worked out.
Might I suggest you take your system offline and update your wikipage for your 
system:
https://wiki.openstack.org/wiki/ThirdPartySystems/Hyper-V_CI which actually 
shows you as being offline now, page last edited March 2nd.

Thank you, Hashir,
Anita.
 
 Regards
 
 Hashir Abdi
 
 Microsoft
 
 
 
 _
 From: Michael Still mi...@stillhq.com
 Sent: Tuesday, March 31, 2015 9:38 a.m.
 Subject: [openstack-dev] [Nova] Hyper-V CI broken?
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 
 
 I apologise if there's already been an email about this, I can't see one. Is the
 Hyper-V CI broken at the moment? It looks like there are a number of tests
 failing for every change, including trivial typo fixes. An example:
 http://64.119.130.115/168500/4/results.html.gz
 http://stackalytics.com/report/driverlog?project_id=openstack%2Fnovavendor=Cloudbase
 seems to think that the tests haven't passed in five days, which is quite a long
 time for it to be broken. Comments please? Thanks, Michael -- Rackspace Australia
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hyper-V CI broken?

2015-03-31 Thread Anita Kuno
On 03/31/2015 10:25 AM, Hashir Abdi wrote:
 Michael:
 
 
 Sorry about this Michael.
 
 We will work with our Cloudbase partners and look into these failing tests.
 
 Will update once a resolution is worked out.
Might I suggest you take your system offline and update your wikipage
for your system:
https://wiki.openstack.org/wiki/ThirdPartySystems/Hyper-V_CI which
actually shows you as being offline now, page last edited March 2nd.

Thank you, Hashir,
Anita.
 
 Regards
 
 Hashir Abdi
 
 Microsoft
 
 
 
 _
 From: Michael Still mi...@stillhq.com
 Sent: Tuesday, March 31, 2015 9:38 a.m.
 Subject: [openstack-dev] [Nova] Hyper-V CI broken?
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 
 
 I apologise if there's already been an email about this, I can't see one. Is the
 Hyper-V CI broken at the moment? It looks like there are a number of tests
 failing for every change, including trivial typo fixes. An example:
 http://64.119.130.115/168500/4/results.html.gz
 http://stackalytics.com/report/driverlog?project_id=openstack%2Fnovavendor=Cloudbase
 seems to think that the tests haven't passed in five days, which is quite a long
 time for it to be broken. Comments please? Thanks, Michael -- Rackspace Australia
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] FFE Request: glusterfs_native: negotiate volumes with glusterd

2015-03-31 Thread Ben Swartzlander

On 03/31/2015 10:54 AM, Csaba Henk wrote:

Hi Ben,

please find my answer inline.

- Original Message -

From: Ben Swartzlander b...@swartzlander.org
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Monday, March 30, 2015 7:44:25 PM
Subject: Re: [openstack-dev] [Manila] FFE Request: glusterfs_native: negotiate 
volumes with glusterd

Thanks for going through the formal request process with this change.

One question I have that's not answered here is: what is the risk of
delaying this fix to Liberty? Clearly it needs to be fixed eventually,
but if we hold off and allow Kilo to ship as-is, will anything bad
happen? From the description above it sounds like the driver is
functional, and a somewhat awkward workaround (restarting the backend)
is required to deal with bug 1437176.

The risk is the usability of the driver. To put it bluntly, the driver is
architecturally broken -- storing all possible share backend instances
in a config parameter is not something that should be seen in release code.


Will users be subjected to any upgrade problems going from Kilo to
Liberty if we don't fix this in Kilo? Will there be any significant
maintenance problems in the Kilo code if we don't change it?

OpenStack distributions might be tempted to backport the fix (to arrive at
a usable driver), in which case they take on a maintenance burden.

Csaba


Approved

-Ben Swartzlander


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Weekly subteam status report

2015-03-31 Thread Ruby Loo
Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted.

Drivers
==

iRMC (naohirot)
-
python-scciclient ver 0.1.0 for kilo will be released on PyPI by April 9th.



Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hyper-V CI broken?

2015-03-31 Thread Hashir Abdi
Michael:


Sorry about this Michael.

We will work with our Cloudbase partners and look into these failing tests.

Will update once a resolution is worked out.

Regards

Hashir Abdi

Microsoft



_
From: Michael Still mi...@stillhq.com
Sent: Tuesday, March 31, 2015 9:38 a.m.
Subject: [openstack-dev] [Nova] Hyper-V CI broken?
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org


I apologise if there's already been an email about this, I can't see one.

Is the Hyper-V CI broken at the moment? It looks like there are a
number of tests failing for every change, including trivial typo
fixes. An example:

http://64.119.130.115/168500/4/results.html.gz

http://stackalytics.com/report/driverlog?project_id=openstack%2Fnova&vendor=Cloudbase
seems to think that the tests haven't passed in five days, which is
quite a long time for it to be broken.

Comments please?

Thanks,
Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting

2015-03-31 Thread Peter Pouliot
Hi All,

Due to other conflicts we won't have quorum today. Therefore, the usual Hyper-V 
meeting will be postponed until next week.

p

Peter J. Pouliot CISSP
Microsoft Enterprise Cloud Solutions
C:\OpenStack
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo.db] Repeatable Read considered harmful

2015-03-31 Thread Mike Bayer


Eugene Nikanorov enikano...@mirantis.com wrote:

 Hi Matthew,
 
 I'll add just 2c:
 
 We've tried to move from repeatable-read to read committed in Neutron project.
 This change actually has caused multiple deadlocks during regular tempest 
 test run.
 That is a known problem (the issue with eventlet and current mysql client 
 library),
 but anyway, at least one major openstack project is not ready to move to 
 read-committed.
 
 Also, particular transaction isolation level's performance is highly affected 
 by DB usage pattern.
 Is there any research of how read-committed affects performance of openstack 
 projects?

So I would add that I think the altering of transaction isolation level
should be done on a per-method basis; that is, methods that definitely need
the effects of a certain isolation level should run it locally, so that the
change can be made safely without having to deal with moving the entire
application space over to a new mode of operation. I’ve added methods to
SQLAlchemy specifically to make this achievable at the ORM level in response
to [1], documented at [2].

The addition of specific isolation levels to enginefacade [3] will be
straightforward and very clean. The API as it stands now, assuming decorator
use looks like:

@enginefacade.writer
def some_api_method(context):
context.session.do_something()

To specify specific isolation level would be like this:

@enginefacade.writer.with_read_committed
def some_api_method(context):
context.session.do_something()



[1] https://review.openstack.org/#/c/148339/
[2] 
http://docs.sqlalchemy.org/en/rel_0_9/orm/session_transaction.html#setting-isolation-for-individual-transactions.
[3] https://review.openstack.org/#/c/138215/
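
For concreteness, here is a minimal sketch of what the per-transaction
isolation support documented at [2] looks like at the plain ORM level (the
engine URL and method name below are illustrative only, not oslo.db or Nova
code). The execution option has to be set at the start of the transaction,
before any SQL is emitted:

# Illustrative sketch of Session.connection(execution_options=...) from [2];
# names and the connection URL are placeholders.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("mysql+pymysql://user:secret@localhost/example")
Session = sessionmaker(bind=engine)

def some_api_method_read_committed():
    session = Session()
    # Must be called before any queries run in this transaction.
    session.connection(
        execution_options={"isolation_level": "READ COMMITTED"})
    try:
        # ... ORM work here runs under READ COMMITTED ...
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()

The altered isolation level applies to that one transaction only; the
connection reverts to the engine-wide default once it is returned to the pool.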




 
 Thanks,
 Eugene.
 
 On Fri, Feb 6, 2015 at 7:59 PM, Matthew Booth mbo...@redhat.com wrote:
 I was surprised recently to discover that MySQL uses repeatable read for
 transactions by default. Postgres uses read committed by default, and
 SQLite uses serializable. We don't set the isolation level explicitly
 anywhere, so our applications are running under different isolation
 levels depending on backend. This doesn't sound like a good idea to me.
 It's one thing to support multiple sql syntaxes, but different isolation
 levels have different semantics. Supporting that is much harder, and
 currently we're not even trying.
 
 I'm aware that the same isolation level on different databases will
 still have subtly different semantics, but at least they should agree on
 the big things. I think we should pick one, and it should be read committed.
 
 Also note that 'repeatable read' on both MySQL and Postgres is actually
 snapshot isolation, which isn't quite the same thing. For example, it
 doesn't get phantom reads.
 
 The most important reason I think we need read committed is recovery
 from concurrent changes within the scope of a single transaction. To
 date, in Nova at least, this hasn't been an issue as transactions have
 had an extremely small scope. However, we're trying to expand that scope
 with the new enginefacade in oslo.db:
 https://review.openstack.org/#/c/138215/ . With this expanded scope,
 transaction failure in a library function can't simply be replayed
 because the transaction scope is larger than the function.
 
 So, 3 concrete examples of how repeatable read will make Nova worse:
 
 * https://review.openstack.org/#/c/140622/
 
 This was committed to Nova recently. Note how it involves a retry in the
  case of concurrent change. This works fine, because the retry creates
 a new transaction. However, if the transaction was larger than the scope
 of this function this would not work, because each iteration would
 continue to read the old data. The solution to this is to create a new
 transaction. However, because the transaction is outside of the scope of
 this function, the only thing we can do locally is fail. The caller then
 has to re-execute the whole transaction, or fail itself.
 
 This is a local concurrency problem which can be very easily handled
 locally, but not if we're using repeatable read.
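
To make the pattern concrete, here is a minimal sketch (illustrative only,
not the Nova code under review) of a compare-and-swap update where every
retry opens a fresh transaction:

# Illustrative compare-and-swap with retry; each attempt is its own
# transaction, so a new attempt re-reads the row and sees any value a
# concurrent writer has committed in the meantime.
import sqlalchemy as sa

def compare_and_swap(engine, table, row_id, new_value, max_retries=3):
    for _ in range(max_retries):
        with engine.begin() as conn:      # new transaction per attempt
            current = conn.execute(
                sa.select([table.c.value]).where(table.c.id == row_id)
            ).scalar()
            updated = conn.execute(
                table.update()
                .where(table.c.id == row_id)
                .where(table.c.value == current)   # only if unchanged
                .values(value=new_value)
            ).rowcount
        if updated == 1:
            return True
        # someone else changed the row between our read and write; retry
    return False

Each new attempt sees the concurrent writer's committed value precisely
because it is a new transaction; fold all the attempts into one large
repeatable-read transaction and every re-read returns the original snapshot,
which is the failure mode being described here.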
 
 *
 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L4749
 
 Nova has multiple functions of this type which attempt to update a
 key/value metadata table. I'd expect to find multiple concurrency issues
 with this if I stopped to give it enough thought, but concentrating just
 on what's there, notice how the retry loop starts a new transaction. If
 we want to get to a place where we don't do that, with repeatable read
 we're left failing the whole transaction.
 
 * https://review.openstack.org/#/c/136409/
 
 This one isn't upstream, yet. It's broken, and I can't currently think
 of a solution if we're using repeatable read.
 
 The issue is atomic creation of a shared resource. We want to handle a
 creation race safely. This patch:
 
  * Attempts to read the default (it will normally exist)
 * Creates a new one if it doesn't exist
 * Goes back to the start if creation 

Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-31 Thread Dr. Jens Rosenboom

On 01/04/15 at 04:10, Kevin Benton wrote:

It's worth pointing out here that the in-tree OVS solution controls traffic
using iptables on regular bridges too. The difference between the two
occurs when it comes to how traffic is separated into different networks.

It's also worth noting that DVR requires OVS as well. If nobody is
comfortable with OVS then they can't use DVR and they won't have parity
with Nova network as far as floating IP resilience and performance is
concerned.


It was my understanding that the reason for this was that the first 
implementation for DVR was only done for OVS, probably because it is the 
default. Or is there some reason to assume that DVR also cannot be made 
to work with linuxbridge within Liberty?


FWIW, I think I made some progress in getting [1] to work, though if 
someone could jump in and make a proper patch from my hack, that would 
be great.


[1] https://review.openstack.org/168423

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How to install HealthNMon with OpenStack devstack

2015-03-31 Thread Abhishek Talwar/HYD/TCS
Hi,

I have a devstack installation of OpenStack Juno and I want to install
HealthNMon with that. How can we install it with devstack?

Thanks and Regards
Abhishek Talwar
=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Release management and bug triage

2015-03-31 Thread Emilien Macchi


On 03/31/2015 11:47 AM, Mathieu Gagné wrote:
 On 2015-03-26 1:08 PM, Sebastien Badia wrote:

 About lp script, a short search on github (bug mgmt, merged changes):

  - https://github.com/openstack-infra/release-tools
  - https://github.com/markmc/openstack-lp-scripts
  - https://github.com/dolph/launchpad

 But we wait the publishing of Mathieu scripts :)

 
 Those are great tools. I mainly invested time in the ability to
 massively create/update series and milestones (which is a pain) from a
 projects.yaml file.
 
 https://github.com/mgagne/openstack-puppet-release-tools

This tool is awesome; we may want to contribute to it and share it with other
projects.
Maybe we could move it to stackforge?

Another solution is to keep the github pull-request model (something I don't
like).

Thanks a lot for sharing.
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] PTL Candidacy

2015-03-31 Thread Emilien Macchi
Hi,

As we want to move under the big tent, we decided in the last Puppet
OpenStack meeting that we need a PTL for the next Cycle.
I would like to announce my candidacy.


Qualifications
--

I joined eNovance in 2012 and mainly worked in OpenStack deployments
before I started working on OpenStack automation in 2013 for internal
needs. Since the beginning, all my Puppet work has been upstream
(Puppetlabs or Stackforge) and I strongly believe our community modules
are the way to go for a production-ready OpenStack environment. My
general knowledge in OpenStack and my background in deployments allow me
to understand all components from a high level, so I can see how to deploy
them with Puppet.
Today, I'm leading Puppet OpenStack work at Red Hat and my team and I
are working and focusing on upstream modules. Our involvement in Puppet
OpenStack is because we are developing an open-source product which
deploys OpenStack by using our modules, so each feature we need is in
Stackforge modules and can be leveraged by the rest of the community.
I'm also highly involved in Puppet integration for TripleO which is a
great new opportunity to test different use-cases of deployments.


Work in Puppet OpenStack


* For more than one year, I've been a top contributor to the Puppet
OpenStack modules:
http://stackalytics.com/report/contribution/puppet-group/365
* I started puppet-ironic, puppet-heat, puppet-ceilometer (with Francois
Charlier), puppet-tuskar, puppet-trove (with Sebastien Badia),
puppet-gnocchi, puppet-tripleo modules.
* I participate at meetings, help people on the mailing-list, contribute
to discussions about our community, and regularly am a speaker at
meetups and summits.


Plans for Liberty
-

* CI
I would like to continue the work done by nibalizer to have acceptance
tests in our modules. I'm very interested in using TripleO jobs to gate
our modules, I think it's a good use case, and everything is already in
place.

* Reviews
Some patches would require some specific reviews, from OpenStack
developers and operators. We'll find a way to document or automate the
way we review these patches.
I truly think our modules work because of the invaluable feedback we get
from both operators and from OpenStack developers.

* Release management
As we move under the big tent, I would like to start following OpenStack
release processes and emphasize bug triage.
We will do our best to keep our launchpad updated so we can improve the
way we work together as a community.

* Community
I would like to ensure we organize regular meetings (IRC/video) with our
community to provide support and to make bug triage as a team. We also
will take care of welcoming newcomers and try to help them if they want
to contribute.

* My expectation of core-team
As members of the core team, or those wanting to be, it's my hope that
we: participate at our meetings, make good reviews, discuss on the
mailing list regarding important topics, and help in fixing bugs.

* Big tent
This is a first for all of us, and I'll do my best to have Puppet
OpenStack project involved in OpenStack's Big Tent. I accept this
challenge and I strongly believe that, as a team, we can make it
and succeed *without breaking the way we work together*.

* Code
Regarding our modules, we will continue our daily work on
bugs/features/tests patches, but we should also continue the work on
openstacklib to ensure everything is consistent across our modules.

* Meetups/Summit
As usual, we will ensure to have design sessions and an area where we
can discuss together about Puppet OpenStack. I'll do my best to acquire
all the resources we would need to effectively plan the upcoming cycle.


Thank you for your time and consideration.
Best regards,
-- 
Emilien Macchi







signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] [rally] How will Tempest discover/run tests migrated to specific projects?

2015-03-31 Thread David Kranz

On 03/30/2015 10:44 AM, Matthew Treinish wrote:

On Mon, Mar 30, 2015 at 12:21:18PM +0530, Rohan Kanade wrote:

Since tests can now be removed from Tempest 
https://wiki.openstack.org/wiki/QA/Tempest-test-removal and migrated to
their specific projects.

Does Tempest plan to discover/run these tests in tempest gates? If yes, how
is that going to be done?  Will there be a discovery mechanism in Tempest
to discover tests from individual projects?


No, the idea behind that wiki page is to outline the procedure for finding
something that is out of scope and doesn't belong in tempest and is also safe
to remove from the tempest jobs. The point of going through that entire
procedure is that the test being removed should not be run in the tempest gates
anymore and will become the domain of the other project.

Also, IMO the moved test ideally won't be in the same pattern of a tempest test
or have the same constraints of a tempest test and would ideally be more coupled
to the project under test's internals. So that wouldn't be appropriate to
include in a tempest run either.

For example, the first test we removed with that procedure was:

https://review.openstack.org/#/c/158852/

which removed the flavor negative tests from tempest. These were just testing
operations that would go no deeper than Nova's DB layer. Which was something
we couldn't verify in tempest. They also didn't really belong in tempest because
they were just implicitly verifying Nova's DB layer through API responses. The
replacement tests:

http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/functional/wsgi/test_flavor_manage.py

were able to verify the state of the DB was correct and ensure the correct
behavior both in the api and nova's internals. This kind of testing is something
which doesn't belong in tempest or any other external test suite. It is also
what I feel we should be targeting for with project specific in-tree functional
testing and the kind of thing we should be using the removal process on that
wiki page for.


-Matt Treinish


Matt, while everything you say here is true, I don't think it answers 
the whole question. neutron is also planning to move the tempest 
networking tests into the neutron repo with safeguards to prevent 
incompatible changes, but also keeping the tests in a form that is not 
so different from tempest.


The problem is that deployers/users/refstack/etc. (let's call them 
verifiers) want an OpenStack functional verification suite. Until now 
that has been easy since most of what that requires is in tempest, and 
Rally calls tempest. But to a verifier, the fact that all the tests used 
for verification are in one tempest repo is an implementation detail. 
OpenStack verifiers do not want to lose neutron tests because they moved 
out of tempest. So verifiers will need to do something about this and it 
would be better if we all did it as a community by agreeing on a UX and 
method for locating and running all the tests that should be included in 
an OpenStack functional test suite. Even now, there are tests that are 
useful for verification that are not in tempest.


I think the answer that Boris gave 
http://lists.openstack.org/pipermail/openstack-dev/2015-March/060173.html is 
trying to address this by saying that Rally will take on the role of 
being the OpenStack verification suite (including performance tests). 
I don't know if that is the best answer and tempest/rally could agree on 
a UX/discovery/import mechanism, but I think we are looking at one of 
those two choices.


 -David


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] Question about Sahara db code

2015-03-31 Thread Chen, Ken
Hi all,
I have some confusion about the Sahara conductor code. Maybe the questions are 
silly, but please let me know if you have the answer. Thanks.

1.   In the Sahara conf we have an option db_driver, whose default value is 
sahara.db. Is it possible to not use sahara.db? I think it should be the 
only choice for Sahara, so why do we have this option? For different db engine 
backends we already have another option, db_backend, whose default value is 
sqlalchemy.

2.   In the sahara/db/ directory we have a base.py, which defines a Base class 
that uses the db_driver to initialize self.db in Base. Thus we have the 
calling sequence below (using the cluster_create method as an example):
sahara.conductor.manager.ConductorManager().cluster_create ==> 
sahara.db.Base().db.cluster_create ==> sahara.db.cluster_create ==> 
sahara.db.api.cluster_create ==> IMPL.cluster_create
So why do we not just discard base.py, assign 
sahara.conductor.manager.ConductorManager().db_api = sahara.db.api, and let the 
flow be:
sahara.conductor.manager.ConductorManager().cluster_create ==> 
sahara.db_api.cluster_create ==> IMPL.cluster_create ?
This is what the Heat code does. The current Sahara implementation seems copied 
from nova, where we also have a db_driver whose default value is nova.db. 
In fact I have similar questions about nova (db_driver and base.py seem 
redundant).
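
For illustration, a minimal sketch of the simpler wiring I mean (hypothetical
code, not the current Sahara implementation):

# Hypothetical simplification, for illustration only: bind the conductor
# manager straight to sahara.db.api instead of going through the db_driver
# option and the base.Base indirection.
from sahara.db import api as db_api

class ConductorManager(object):
    def __init__(self):
        # sahara.db.api already dispatches to IMPL (the db_backend choice)
        self.db = db_api

    def cluster_create(self, context, values):
        return self.db.cluster_create(context, values)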

Thanks.
-Ken
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Release management and bug triage

2015-03-31 Thread Mathieu Gagné
On 2015-03-26 1:08 PM, Sebastien Badia wrote:
 
 About lp script, a short search on github (bug mgmt, merged changes):
 
  - https://github.com/openstack-infra/release-tools
  - https://github.com/markmc/openstack-lp-scripts
  - https://github.com/dolph/launchpad
 
 But we wait the publishing of Mathieu scripts :)
 

Those are great tools. I mainly invested time in the ability to
massively create/update series and milestones (which is a pain) from a
projects.yaml file.

https://github.com/mgagne/openstack-puppet-release-tools

-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Question about Sahara API v2

2015-03-31 Thread Chen, Ken
Sergey and Michael, thanks for explaining these.
-Ken

From: Sergey Lukjanov [mailto:slukja...@mirantis.com]
Sent: Tuesday, March 31, 2015 12:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Sahara] Question about Sahara API v2

Agree with Mike, thx for the link.

On Mon, Mar 30, 2015 at 4:55 PM, michael mccune 
m...@redhat.com wrote:
On 03/30/2015 07:02 AM, Sergey Lukjanov wrote:
My personal opinion for API 2.0 - we should discuss design of all object
and endpoint, review how they are used from Horizon or
python-saharaclient and improve them as much as possible. For example,
it includes:

* get rid of tons of extra optional fields
* rename Job - Job Template, Job Execution - Job
* better support for Horizon needs
* hrefs

If you have any ideas about 2.0 - please write them up, there is a
99% chance that we'll discuss an API 2.0 a lot on Vancouver summit.

+1

i've started a pad that we can use to collect ideas for the discussion: 
https://etherpad.openstack.org/p/sahara-liberty-api-v2

things that i'd like to see from the v2 discussion

* a full endpoint review, some of the endpoints might need to be deprecated or 
adjusted slightly (for example, job-binary-internals)

* a technology review, should we consider Pecan or stay with Flask?

* proposals for more radical changes to the api; use of micro-versions akin to 
nova's plan, migrating the project id into the headers, possible use of swagger 
to aid in auto-generation of api definitions.

i think we will have a good amount to discuss and i will be migrating some of 
my local notes into the pad over this week and the next. i invite everyone to 
add their thoughts to the pad for ideas.

mike


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] JetBrains WebStorm License Available

2015-03-31 Thread Andrew Melton
Hi Devs,


Some interest was expressed in a WebStorm license for some of the OpenStack 
projects

that now include some JavaScript. I reached out to JetBrains and they have 
provided us

with a license for WebStorm alongside our existing PyCharm license.


As with the PyCharm license, I can't post it directly to the mailing list. So, 
if you would like

to use WebStorm, please send me a reply including your full name and 
launchpad-id.


Thanks!

Andrew Melton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Design Summit Session etherpad

2015-03-31 Thread Kyle Mestery
Hi folks!

Now that we're deep into the feature freeze and the Liberty specs
repository is open [1], I wanted to let everyone know I've created an
etherpad to track Design Summit Sessions. If you'd like to propose
something, please have a look at the etherpad. We'll discuss these next
week in the Neutron meeting [3], so please join and come with your ideas!

Thanks,
Kyle

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-March/060183.html
[2] https://etherpad.openstack.org/p/liberty-neutron-summit-topics
[3] https://wiki.openstack.org/wiki/Network/Meetings
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Hierarchical Multitenancy quotas

2015-03-31 Thread Daniel Comnea
I see this spec has been merged; however, can anyone point out if this will
make it into the final Kilo release?

Thanks,
Dani

On Wed, Jan 7, 2015 at 5:03 PM, Tim Bell tim.b...@cern.ch wrote:

  Are we yet at the point  in the New Year to register requests for
 exceptions ?



 There is strong interest from CERN and Yahoo! in this feature, and there
 are many +1s and no unaddressed -1s.



 Thanks for consideration,



 Tim



  Joe wrote

  ….

 

 Nova's spec deadline has passed, but I think this is a good candidate for
 an exception.  We will announce the process for asking for a formal spec
 exception shortly after new years.

 



 *From:* Tim Bell [mailto:tim.b...@cern.ch]
 *Sent:* 23 December 2014 19:02
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] Hierarchical Multitenancy



 Joe,



 Thanks… there seems to be good agreement on the spec and the matching
 implementation is well advanced with BARC so the risk is not too high.



 Launching HMT with quota in Nova in the same release cycle would also
 provide a more complete end user experience.



 For CERN, this functionality is very interesting as it allows the central
 cloud providers to delegate the allocation of quotas to the LHC
 experiments. Thus, from a central perspective, we are able to allocate N
 thousand cores to an experiment and delegate their resource co-ordinator to
 prioritise the work within the experiment. Currently, we have many manual
 helpdesk tickets with significant latency to adjust the quotas.



 Tim



 *From:* Joe Gordon [mailto:joe.gord...@gmail.com joe.gord...@gmail.com]
 *Sent:* 23 December 2014 17:35
 *To:* OpenStack Development Mailing List
 *Subject:* Re: [openstack-dev] Hierarchical Multitenancy




 On Dec 23, 2014 12:26 AM, Tim Bell tim.b...@cern.ch wrote:
 
 
 
  It would be great if we can get approval for the Hierachical Quota
 handling in Nova too (https://review.openstack.org/#/c/129420/).

 Nova's spec deadline has passed, but I think this is a good candidate for
 an exception.  We will announce the process for asking for a formal spec
 exception shortly after new years.

 
 
 
  Tim
 
 
 
  From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
  Sent: 23 December 2014 01:22
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] Hierarchical Multitenancy
 
 
 
  Hi Raildo,
 
 
 
  Thanks for putting this post together. I really appreciate all the work
 you guys have done (and continue to do) to get the Hierarchical
 Mulittenancy code into Keystone. It’s great to have the base implementation
 merged into Keystone for the K1 milestone. I look forward to seeing the
 rest of the development land during the rest of this cycle and what the
 other OpenStack projects build around the HMT functionality.
 
 
 
  Cheers,
 
  Morgan
 
 
 
 
 
 
 
  On Dec 22, 2014, at 1:49 PM, Raildo Mascena rail...@gmail.com wrote:
 
 
 
  Hello folks, My team and I developed the Hierarchical Multitenancy
 concept for Keystone in Kilo-1 but What is Hierarchical Multitenancy? What
 have we implemented? What are the next steps for kilo?
 
  To answers these questions, I created a blog post
 http://raildo.me/hierarchical-multitenancy-in-openstack/
 
 
 
  Any question, I'm available.
 
 
 
  --
 
  Raildo Mascena
 
  Software Engineer.
 
  Bachelor of Computer Science.
 
  Distributed Systems Laboratory
  Federal University of Campina Grande
  Campina Grande, PB - Brazil
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] FFE request for Automatic cleanup of share_servers

2015-03-31 Thread Valeriy Ponomaryov
No user-facing changes, only an under-the-hood improvement. I think the result
is worth granting the FFE.

-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Tracking Ideas for Summit Sessions

2015-03-31 Thread Matthew Treinish
Hi Everyone,

I started an etherpad to begin planning and brainstorming ideas for summit
sessions here:

https://etherpad.openstack.org/p/liberty-qa-summit-topics

Feel free to add to it if you have an idea for a session.

Thanks,

-Matt Treinish


pgpQE7CeD5ee1.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] upgrades from juno to kilo

2015-03-31 Thread Eoghan Glynn


 I tracked down the cause of the check-grenade-dsvm failure on
 https://review.openstack.org/#/c/167370 . As I understand it, grenade is
 taking the previous stable release, deploying it, then upgrading to the
 current master (plus the proposed changeset) without changing any of the
 config from the stable deployment. Thus the policy.json file used in that
 test is the file from stable-juno. Then if we look at oslo_policy/policy.py
 we see that if the rule being looked for is missing then the default rule
 will be used, but then if that default rule is also missing a KeyError is
 thrown. Since the default rule was missing with ceilometer's policy.json
 file in Juno, that's what would happen here. I assume that KeyError then
 gets turned into the 403 Forbidden that is causing check-grenade-dsvm
 failure.
 
 I suspect the author of the already-merged
 https://review.openstack.org/#/c/115717 did what they did in
 ceilometer/api/rbac.py rather than what is proposed in
 https://review.openstack.org/#/c/167370 just to get the grenade tests to
 pass. I think they got lucky (unlucky for us), too, because I think they
 actually did break what the grenade tests are meant to catch. The patch set
 which was merged under https://review.openstack.org/#/c/115717 changed the
 rule that is checked in get_limited_to() from context_is_admin to
 segregation. But the segregation rule didn't exist in the Juno version
 of ceilometer's policy.json, so if a method that calls get_limited_to() was
 tested after an upgrade, I believe it would fail with a 403 Forbidden
 tracing back to a KeyError looking for the segregation rule... very
 similar to what we're seeing in https://review.openstack.org/#/c/167370
 
 Am I on the right track here? How should we handle this? Is there a way to
 maintain backward compatibility while fixing what is currently broken (as a
 result of https://review.openstack.org/#/c/115717 ) and allowing for a fix
 for https://bugs.launchpad.net/ceilometer/+bug/1435855 (the goal of
 https://review.openstack.org/#/c/167370 )? Or will we need to document in
 the release notes that the manual step of modifying ceilometer's policy.json
 is required when upgrading from Juno, and then correspondingly modify
 grenade's upgrade_ceilometer file?

Thanks for raising this issue.

IIUC the idea behind the unconventional approach taken by the original
RBAC patch that landed in juno was to ensure that API calls continued to
be allowed by default, as was previously the case.

However, you correctly point out that this missed a case where the new
logic is run against a completely unchanged policy.json from Juno or
before.

As we just discussed on #os-ceilometer IRC, we can achieve the following
three goals with a relatively minor change:

 1. allow API operations if no matching rule *and* no default rule

 2. apply the default rule *if* present

 3. tolerate the absence of the segregation rule

This would require:

 (a) explicitly checking for 'default' in _ENFORCER.rules.keys() before
 applying the enforcement approach in [1], otherwise falling back
 to the prior enforcement approach in [2]

 (b) explicitly checking for 'segregation' in _ENFORCER.rules.keys()
 before [3], otherwise falling back to checking for the literal
 'context_is_admin' as before.
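
Roughly, (a) and (b) could look something like the following (an illustrative
sketch only, not the actual patch; _ENFORCER is assumed to be the already
initialized policy enforcer used elsewhere in ceilometer/api/rbac.py):

# Illustrative sketch of checks (a) and (b) above.
def _is_allowed(rule_name, target, creds):
    rules = _ENFORCER.rules.keys()
    if rule_name in rules or 'default' in rules:
        # goals 1/2: enforce the matching rule, or let the operator-provided
        # default rule apply
        return _ENFORCER.enforce(rule_name, target, creds)
    # goal 1: no matching rule and no default rule -- keep allowing the call
    return True

def _admin_rule_name():
    # goal 3: tolerate a Juno-era policy.json with no 'segregation' rule
    if 'segregation' in _ENFORCER.rules.keys():
        return 'segregation'
    return 'context_is_admin'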

If https://review.openstack.org/167370 is updated to follow this approach,
I think we can land it for kilo-rc1 without an upgrade exception.
 
Cheers,
Eoghan
 
[1] https://review.openstack.org/#/c/167370/5/ceilometer/api/rbac.py line 49

[2] https://review.openstack.org/#/c/115717/18/ceilometer/api/rbac.py line 51

[3] https://review.openstack.org/#/c/115717/18/ceilometer/api/rbac.py line 81

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][tripleo] Optional Ansible deployment coming

2015-03-31 Thread Steven Dake (stdake)
Hey folks,

One of our community members submitted a review to add optional Ansible support 
for deploying OpenStack using the containers within Kolla.  Our main 
objective remains: for third party deployment tools to use Kolla as a building 
block for container content and management.

Since this is scope expansion for our small core team, I required a majority 
vote on the first commit.  See the review here:

https://review.openstack.org/#/c/168637/

A couple follow-on reviews:

https://review.openstack.org/169154
https://review.openstack.org/169152

If folks in the community want to build an Ansible deployment tool that deploys 
thin containers, now is your chance to get involved from nearly the first 
commit.  The core team doesn’t know much about Ansible, so we could really use 
extra expertise :)

Please drop by our irc channel #kolla if you need help getting up to speed or 
just want to chat about Containerizing OpenStack.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Hierarchical Multitenancy quotas

2015-03-31 Thread Michael Still
The blueprint for this work appears to be
https://blueprints.launchpad.net/nova/+spec/nested-quota-driver-api
which shows it didn't make it into Nova in Kilo.

Looking at the reviews, I agree they didn't get enough attention, but
there isn't a lot that we can do about that right now. We can revisit
this in Liberty.

Michael

On Wed, Apr 1, 2015 at 6:29 AM, Daniel Comnea comnea.d...@gmail.com wrote:
 I see this spec has been merged however can anyone point out if this will
 make it into final Kilo release?

 Thanks,
 Dani

 On Wed, Jan 7, 2015 at 5:03 PM, Tim Bell tim.b...@cern.ch wrote:

 Are we yet at the point  in the New Year to register requests for
 exceptions ?



  There is strong interest from CERN and Yahoo! in this feature, and there
 are many +1s and no unaddressed -1s.



 Thanks for consideration,



 Tim



  Joe wrote

  ….

 

 Nova's spec deadline has passed, but I think this is a good candidate for
  an exception.  We will announce the process for asking for a formal spec
  exception shortly after new years.

 



 From: Tim Bell [mailto:tim.b...@cern.ch]
 Sent: 23 December 2014 19:02
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Hierarchical Multitenancy



 Joe,



 Thanks… there seems to be good agreement on the spec and the matching
 implementation is well advanced with BARC so the risk is not too high.



 Launching HMT with quota in Nova in the same release cycle would also
 provide a more complete end user experience.



 For CERN, this functionality is very interesting as it allows the central
 cloud providers to delegate the allocation of quotas to the LHC experiments.
 Thus, from a central perspective, we are able to allocate N thousand cores
 to an experiment and delegate their resource co-ordinator to prioritise the
 work within the experiment. Currently, we have many manual helpdesk tickets
 with significant latency to adjust the quotas.



 Tim



 From: Joe Gordon [mailto:joe.gord...@gmail.com]
 Sent: 23 December 2014 17:35
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] Hierarchical Multitenancy




 On Dec 23, 2014 12:26 AM, Tim Bell tim.b...@cern.ch wrote:
 
 
 
  It would be great if we can get approval for the Hierachical Quota
  handling in Nova too (https://review.openstack.org/#/c/129420/).

 Nova's spec deadline has passed, but I think this is a good candidate for
 an exception.  We will announce the process for asking for a formal spec
 exception shortly after new years.

 
 
 
  Tim
 
 
 
  From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
  Sent: 23 December 2014 01:22
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] Hierarchical Multitenancy
 
 
 
  Hi Raildo,
 
 
 
  Thanks for putting this post together. I really appreciate all the work
  you guys have done (and continue to do) to get the Hierarchical 
  Mulittenancy
  code into Keystone. It’s great to have the base implementation merged into
  Keystone for the K1 milestone. I look forward to seeing the rest of the
  development land during the rest of this cycle and what the other OpenStack
  projects build around the HMT functionality.
 
 
 
  Cheers,
 
  Morgan
 
 
 
 
 
 
 
  On Dec 22, 2014, at 1:49 PM, Raildo Mascena rail...@gmail.com wrote:
 
 
 
  Hello folks, My team and I developed the Hierarchical Multitenancy
  concept for Keystone in Kilo-1 but What is Hierarchical Multitenancy? What
  have we implemented? What are the next steps for kilo?
 
  To answers these questions, I created a blog post
  http://raildo.me/hierarchical-multitenancy-in-openstack/
 
 
 
  Any question, I'm available.
 
 
 
  --
 
  Raildo Mascena
 
  Software Engineer.
 
  Bachelor of Computer Science.
 
  Distributed Systems Laboratory
  Federal University of Campina Grande
  Campina Grande, PB - Brazil
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [cinder] Future of TaskFlow in Cinder

2015-03-31 Thread Ivan Kolodyazhny
Hi Cinder devs,

Following our discussion during the last meeting [1], I created an etherpad [2]
with some pros and cons of TaskFlow usage in Cinder. Let's start the discussion
here before we make a final decision at the Design Summit [3].

Feel free to add notes, comments and questions here and in the etherpad [2].

[1]
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-03-25-16.00.html
[2] https://etherpad.openstack.org/p/cinder-taskflow
[3] https://etherpad.openstack.org/p/cinder-liberty-proposed-sessions

Regards,
Ivan Kolodyazhny,
Software Engineer,
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Regarding finding Mentors

2015-03-31 Thread Stefano Maffulli
On Sat, 2015-03-28 at 19:58 +0530, Ganesh R wrote:
 I am a newbie to OpenStack, very interested in contributing to
 OpenStack development.
 
Welcome.

Jeremy provided some useful suggestions. Just last week I added links to
the mentors page to the How To Contribute page:

https://wiki.openstack.org/wiki/How_To_Contribute#Mentoring_and_finding_mentors

 To proceed further, I thought it will be good to have a mentor with
 whom I can work.

I'd suggest you start by hanging out on IRC in #openstack-101 and asking
questions there.

I've been working on a checklist for first time contributors:

https://etherpad.openstack.org/p/from-zero-to-atc

I would appreciate your help to validate and complete it as you go on
with your quest :)

I'm reed on IRC, happy to talk to you more.

Regards,
Stef


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] initial OVN testing

2015-03-31 Thread George Shuklin
If that thing works, I'll owe you a beer. Every time I debug 
OVS-neutron issues I want to cry. All that constant 'exec ovs-vsctl' 
stuff drives me mad because of the underengineering and overall 
inefficiency.


I will definitely try it on a 'real-life lab installation' with a few 
compute hosts.


On 03/27/2015 01:54 AM, Russell Bryant wrote:

Gary and Kyle, I saw in my IRC backlog that you guys were briefly
talking about testing the Neutron ovn ml2 driver.  I suppose it's time
to add some more code to the devstack integration to install the current
ovn branch and set up ovsdb-server to serve up the right database for
this.  I'll try to work on that tomorrow.  Of course, note that all we
can set up right now is the northbound database.  None of the code that
reacts to updates to that database is merged yet.  We can still go ahead
and test our code and make sure the expected data makes it there, though.

Here's some more detail about the pieces ...

When I was writing ovn-nbctl [1], I was testing using ovs-sandbox.  It's
a script that sets up a handy development environment for ovs.  It has
ovn support if you pass the -o option [2].  To run it, it would be
something like ...

   $ git clone https://github.com/openvswitch/ovs.git
   $ cd ovs
   $ git checkout ovn
   $ ./boot.sh
   $ ./configure
   $ make
   $ make SANDBOXFLAGS=-o sandbox

 From there you can run ovn-nbctl.  Here's a script to demonstrate the
various commands:

   https://gist.github.com/russellb/946953e8675063c0c756

To set this up outside of ovs-sandbox, you need to first create the OVN
northbound database:

   $ ovsdb-tool create ovnnb.db ovs-git-tree/ovn/ovn-nb.ovsschema

Then you need to tell ovsdb-server to use it.  By default ovsdb-server
will only serve up conf.db.  It can take a list of dbs as positional
arguments, though.  You can see that's what the ovs-sandbox script is doing.

So, you can either change the command used to start ovsdb-server on your
system, or start up another instance of it with its own unix socket and
tcp port.

There was also a question on IRC about the format of the database option
for the ML2 driver.  The value is passed directly to ovn-nbctl.  The
format is the same as is used for ovs-vsctl (and probably others).

When running in ovs-sandbox, ovn-nbctl's help output shows:

   --db=DATABASE connect to DATABASE
 (default:
unix:/home/rbryant/src/ovs/tutorial/sandbox/db.sock)

and further down, it provides some more detail:

   Active database connection methods:
 tcp:IP:PORT PORT at remote IP
 ssl:IP:PORT SSL PORT at remote IP
 unix:FILE   Unix domain socket named FILE
   Passive database connection methods:
 ptcp:PORT[:IP]  listen to TCP PORT on IP
 pssl:PORT[:IP]  listen for SSL on PORT on IP
 punix:FILE  listen on Unix domain socket FILE


[1] http://openvswitch.org/pipermail/dev/2015-March/052757.html
[2] http://openvswitch.org/pipermail/dev/2015-March/052353.html




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo.db] Repeatable Read considered harmful

2015-03-31 Thread Salvatore Orlando
On 31 March 2015 at 17:22, Mike Bayer mba...@redhat.com wrote:



 Eugene Nikanorov enikano...@mirantis.com wrote:

  Hi Matthew,
 
  I'll add just 2c:
 
  We've tried to move from repeatable-read to read committed in Neutron
 project.
  This change actually has caused multiple deadlocks during regular
 tempest test run.
  That is a known problem (the issue with eventlet and current mysql
 client library),
  but anyway, at least one major openstack project is not ready to move to
 read-committed.
 
  Also, particular transaction isolation level's performance is highly
 affected by DB usage pattern.
  Is there any research of how read-committed affects performance of
 openstack projects?

 So I would add that I think the altering of transaction isolation level
 should be done on a per-method basis; that is, methods that definitely need
 the effects of a certain isolation level should run it locally, so that the
 change can be made safely without having to deal with moving the entire
 application space over to a new mode of operation. I’ve added methods to
 SQLAlchemy specifically to make this achievable at the ORM level in
 response
 to [1], documented at [2].


I totally agree that the transaction isolation level should be set on a
per-method basis, because ultimately it's just impossible to define a rule
which will fit all cases.
Considering repeatable read a harmful mode is, in my opinion, an
exaggerated generalization. But I guess that "considered harmful" has
nowadays become just a trendy and catchy mailing list subject.
Anyway, a  wilful relaxation of transaction isolation is something that
should be handled carefully - and this is why explicitly setting the
desired isolation level for a transaction represent the optimal solution.
As Eugene said, there are examples where globally lowering it triggers more
issues, and not just because of the infamous eventlet issue.

Said that, I agree that the dichotomy between mysql and postgres' default
isolation methods has always been annoying for me as for every transaction
we write or review we have to be careful about thinking whether it will be
safe both in read committed and repeatable read modes. From what I gather,
the changes in sqlalchemy mentioned by Mike will also enable us to set an
application-specific isolation default, and therefore the applications will
not be dependent anymore on the backend default.

Salvatore




 The addition of specific isolation levels to enginefacade [3] will be
 straightforward and very clean. The API as it stands now, assuming
 decorator
 use looks like:

 @enginefacade.writer
 def some_api_method(context):
 context.session.do_something()

 To specify specific isolation level would be like this:

 @enginefacade.writer.with_read_committed
 def some_api_method(context):
 context.session.do_something()



 [1] https://review.openstack.org/#/c/148339/
 [2]
 http://docs.sqlalchemy.org/en/rel_0_9/orm/session_transaction.html#setting-isolation-for-individual-transactions
 .
 [3] https://review.openstack.org/#/c/138215/




 
  Thanks,
  Eugene.
 
  On Fri, Feb 6, 2015 at 7:59 PM, Matthew Booth mbo...@redhat.com wrote:
  I was surprised recently to discover that MySQL uses repeatable read for
  transactions by default. Postgres uses read committed by default, and
  SQLite uses serializable. We don't set the isolation level explicitly
  anywhere, so our applications are running under different isolation
  levels depending on backend. This doesn't sound like a good idea to me.
  It's one thing to support multiple sql syntaxes, but different isolation
  levels have different semantics. Supporting that is much harder, and
  currently we're not even trying.
 
  I'm aware that the same isolation level on different databases will
  still have subtly different semantics, but at least they should agree on
  the big things. I think we should pick one, and it should be read
 committed.
 
  Also note that 'repeatable read' on both MySQL and Postgres is actually
  snapshot isolation, which isn't quite the same thing. For example, it
  doesn't get phantom reads.
 
  The most important reason I think we need read committed is recovery
  from concurrent changes within the scope of a single transaction. To
  date, in Nova at least, this hasn't been an issue as transactions have
  had an extremely small scope. However, we're trying to expand that scope
  with the new enginefacade in oslo.db:
  https://review.openstack.org/#/c/138215/ . With this expanded scope,
  transaction failure in a library function can't simply be replayed
  because the transaction scope is larger than the function.
 
  So, 3 concrete examples of how repeatable read will make Nova worse:
 
  * https://review.openstack.org/#/c/140622/
 
  This was committed to Nova recently. Note how it involves a retry in the
   case of concurrent change. This works fine, because the retry creates
  a new transaction. However, if the transaction was larger than the scope
  of this 

Re: [openstack-dev] [Neutron] initial OVN testing

2015-03-31 Thread Russell Bryant
On 03/31/2015 01:09 PM, George Shuklin wrote:
 If that thing works, I'll owe you a beer. Every time I debug
 OVS-neutron issues I want to cry. All that constant 'exec ovs-vsctl'
 stuff drives me mad because of the underengineering and overall
 inefficiency.
 
 I will definitely try it on a 'real-life lab installation' with a few
 compute hosts.

Feel free to try it if you're interested in helping with development.
It's not far enough along to actually use, though.  The actual network
connectivity part isn't wired up, which as it turns out, is kind of
important.  :-)

The current goal is to have something working to demo and try by the
Vancouver summit in May.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Puppet] TripleO Puppet CI wiki page

2015-03-31 Thread Dan Prince
Today I started a TripleO Puppet CI wiki page to help describe the
TripleO Puppet CI process, what gets executed, and how to troubleshoot a
failed CI run here:

https://wiki.openstack.org/wiki/TripleOPuppetCI

The goal was to help those who are familiar with either TripleO or
Puppet (but perhaps not both) troubleshoot and understand the CI job we
have running.

Let me know if there are other sections you would like to see added to
this document...

Dan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] is an openstack project

2015-03-31 Thread Aaron Rosen
Sure thing!

Thanks Sean.

Aaron


On Tue, Mar 31, 2015 at 1:28 PM, sean roberts seanrobert...@gmail.com
wrote:

 All that is left now is to patch infra to copy the three repos from stackforge.
 Aaron can you take that on?

 Congratulations team!

 ~ sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-31 Thread Steve Gordon


- Original Message -
 From: Chris Friesen chris.frie...@windriver.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, March 27, 2015 4:48:52 PM
 Subject: Re: [openstack-dev] [nova] how to handle vendor-specific API 
 microversions?
 
 On 03/27/2015 01:40 PM, Steve Gordon wrote:
  - Original Message -
  From: Chris Friesen chris.frie...@windriver.com
 
  So for the case where a customer really wants some functionality, and
  wants it *soon* rather than waiting for it to get merged upstream, what is
  the recommended implementation path for a vendor?
 
  Well, before all else the key is to at least propose it in the community
  and
  see what the appetite for it is. I think part of the problem here is that
  we're still discussing this mostly in the abstract, although you provided
  some high level examples in response to Sean the only link was to a review
  that merged the same day it was proposed (albeit in 2012). I'm interested
  in
  whether there is a specific proposal you can link to that you put forward
  in
  the past and it wasn't accepted or was held up or whether you are working
  on
  a preset assumption here?
 
 Whoops...I had meant to link to https://review.openstack.org/163060 and
 managed to miss the last character.  My bad.  The API change I was talking
 about
 has now been split out to https://review.openstack.org/168418.

That makes a little more sense :).

 I haven't proposed any features (with spec/blueprint) for kilo or earlier.
 I'm
 planning on proposing some for the L release.  (Some are already in for
 review,
 though I realize they're not going to get attention until Kilo is out.)
 
 I may be making invalid assumptions about how long it takes to get things
 done,
 but if so it's coloured by past experience.
 
 Some examples:
 
 I proposed a one-line trivial change in April of last year and it took almost 2
 months before anyone even looked at it.
 
 I reported https://bugs.launchpad.net/nova/+bug/1213224 in 2013 and it hasn't
 been fixed.
 
 I opened https://bugs.launchpad.net/nova/+bug/1289064 over a year ago,
 proposed a fix (which admittedly had flaws), then handed it off to someone else,
 then it bounced around a few other people and still isn't resolved.
 
 I opened https://bugs.launchpad.net/nova/+bug/1284719 over a year ago and it's
 not yet resolved.
 
 I opened https://bugs.launchpad.net/nova/+bug/1298690 a year ago and it hasn't
 been touched.
 
 
 Chris

I'm not going to pick these apart one by one, but at a high level the 
fundamental expectation of a vendor that needs a fix to resolve a customer 
issue is that they drive resolution of it in the community directly. That means 
not just filing the bug but also owning the creation of patches and iterating 
in response to review feedback (and backporting to stable if 
necessary/appropriate), etc. It's not immediately clear whether this was the 
case for the bugs listed above (or even a subset thereof); rather, it seems 
they were raised for the broader community to resolve at its leisure, relative 
to everything else in the queue, and were handled accordingly. 

That's not to say raising bugs to track issues identified in downstream testing 
isn't helpful in and of itself, but if the desire is to ensure fast resolution 
then a deeper investment in contributing to writing and reviewing code is 
required.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [congress] is an openstack project

2015-03-31 Thread sean roberts
All that is left now is to patch infra to copy the three repos from stackforge.
Aaron can you take that on?

Congratulations team!

~ sean
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] FFE Request: glusterfs_native: negotiate volumes with glusterd

2015-03-31 Thread Csaba Henk
Hi Ben,

please find my answer inline.

- Original Message -
 From: Ben Swartzlander b...@swartzlander.org
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, March 30, 2015 7:44:25 PM
 Subject: Re: [openstack-dev] [Manila] FFE Request: glusterfs_native: 
 negotiate volumes with glusterd
 
 Thanks for going through the formal request process with this change.
 
 One question I have that's not answered here is: what is the risk of
 delaying this fix to Liberty? Clearly it needs to be fixed eventually,
 but if we hold off and allow Kilo to ship as-is, will anything bad
 happen? From the description above it sounds like the driver is
 functional, and a somewhat awkward workaround (restarting the backend)
 is required to deal with bug 1437176.

The risk is to the usability of the driver. To put it bluntly, the driver is
architecturally broken -- storing all possible share backend instances
in a config parameter is not something that should be seen in release code.

 Will users be subjected to any upgrade problems going from Kilo to
 Liberty if we don't fix this in Kilo? Will there be any significant
 maintenance problems in the Kilo code if we don't change it?

OpenStack distributions might be tempted to backport the fix (to arrive at
a usable driver), in which case they take on a maintenance burden.

Csaba

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra][cinder] Could you please re-consider Oracle ZFS/SA Cinder drivers (iSCSI and NFS)

2015-03-31 Thread Diem Tran

Hi Mike,

This is just a gentle status check from our team:

We are aware of the requirement that the CI needs to report in a 
stable fashion for at least 5 days prior to 4/6/2015. Hence we'd like to 
know if you and the Cinder team have any issues with the current state of 
the Oracle ZFSSA CI. We want to handle them immediately and stay compliant 
with the requirements.


Thank you,
Diem.


On 03/26/2015 01:28 PM, Diem Tran wrote:
Hi Mike, The CI has been adjusted to run 304 tests since 3/24/2015 
evening: 
https://review.openstack.org/#/q/reviewer:%22Oracle+ZFSSA+CI%22,n,z


Here are examples of recent success runs:
https://review.openstack.org/#/c/167080/
https://review.openstack.org/#/c/165763/
https://review.openstack.org/#/c/166823/
https://review.openstack.org/#/c/166689/
https://review.openstack.org/#/c/167366/
https://review.openstack.org/#/c/166164/

We delayed our response until now because we wanted to gather proof of 
successful runs and make sure our CI complies with the requirements. We 
believe it does now.


Thank you for your contribution and effort in keeping us updated on 
this matter.


Diem.
On 03/24/2015 05:28 PM, Mike Perez wrote:

On 18:20 Mon 23 Mar , Diem Tran wrote:

Hello Cinder team,

Oracle ZFSSA CI has been reporting since March 20th. Below is a link
to the list of results the CI already posted:

https://review.openstack.org/#/q/reviewer:%22Oracle+ZFSSA+CI%22,n,z

Our CI system will be running and reporting results from now on,
hence I kindly request that you accept our CI results and consider
re-integrating our drivers back in Kilo RC.

If there is any concern, please let us know.

Diem,

I appreciate your team getting back to us on the CI. It appears your CI is
running 247 tests, when it should be running 304. Please verify you're
running tempest as described in the instructions here:

https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#What_tests_do_I_use.3F



Once this issue is resolved, I'll continue to monitor the stability and,
based on that, make a decision on re-adding the driver.






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Summit planning etherpad

2015-03-31 Thread Morgan Fainberg
The summit planning etherpad for Keystone is here:
https://etherpad.openstack.org/p/Keystone-liberty-summit-brainstorm

Please brainstorm / toss ideas up / discuss the Liberty cycle goals (since
we're almost at RC for Kilo)

Cheers,
--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Million level scalability test report from cascading

2015-03-31 Thread joehuang
Hi, all,

During the last cross-project meeting[1][2] about the next step for the OpenStack 
cascading solution[3], the conclusion was that OpenStack isn't ready for the 
project, and that if joehuang wants it ready sooner rather than later, he needs 
to help make it ready by working on the scaling efforts being coded now; scaling 
is the first priority for the OpenStack community.

We just finished the 1 million VM semi-simulation test report[4] for the OpenStack 
cascading solution. The most interesting finding from the test is that the 
cascading architecture can support million-level numbers of ports in Neutron, and 
also million-level numbers of VMs in Nova. The test report also shows that the 
OpenStack cascading solution can manage up to 100k physical hosts without 
difficulty. Some scaling issues were found during the test and are listed in the report.

The conclusion of the report is:
According to the Phase I and Phase II test data analysis, and given the hardware 
resource limitations, the OpenStack cascading solution with the current 
configuration can support a maximum of 1 million virtual machines and is capable 
of handling 500 concurrent API requests if L3 (DVR) mode is included, or 1000 
concurrent API requests if only L2 networking is needed. It’s up to deployment 
policy whether to use the OpenStack cascading solution inside one site (one data 
center) or across multiple sites (multiple data centers); the maximum number of 
sites (data centers) supported is 100, i.e., 100 cascaded OpenStack instances.

The test report[4] is shared first, let's discuss the next step later.

Hope you have a joyful Easter holiday!

[1]Meeting minutes: 
http://eavesdrop.openstack.org/meetings/crossproject/2014/crossproject.2014-12-16-21.01.html
[2]Meeting log: 
http://eavesdrop.openstack.org/meetings/crossproject/2014/crossproject.2014-12-16-21.01.log.html
[3]OpenStack cascading solution: 
https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[4]1 million VM test report: 
http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers


Best Regards
Chaoyi Huang ( Joe Huang )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra][cinder] Could you please re-consider Oracle ZFS/SA Cinder drivers (iSCSI and NFS)

2015-03-31 Thread Duncan Thomas
It looks like you're meeting all of the requirements to me

On 1 April 2015 at 00:23, Diem Tran diem.t...@oracle.com wrote:

 Hi Mike,

 This is just a gentle status check from our team:

 We are aware of the requirements where the CI needs to report in a stable
 fashion for at least 5 days prior to 4/6/2015. Hence we'd like to know if
 you and the Cinder team have any issue with the current state of Oracle
 ZFSSA CI. We want to handle it immediately and stay complied with the
 requirements.

 Thank you,
 Diem.



 On 03/26/2015 01:28 PM, Diem Tran wrote:

 Hi Mike, The CI has been adjusted to run 304 tests since 3/24/2015
 evening: https://review.openstack.org/#/q/reviewer:%22Oracle+ZFSSA+
 CI%22,n,z

 Here are examples of recent success runs:
 https://review.openstack.org/#/c/167080/
 https://review.openstack.org/#/c/165763/
 https://review.openstack.org/#/c/166823/
 https://review.openstack.org/#/c/166689/
 https://review.openstack.org/#/c/167366/
 https://review.openstack.org/#/c/166164/

 We delayed our response until now because we want to get proofs of
 success runs and make sure our CI complies with the requirements. We
 believe it does now.

 Thank you for your contribution and effort in keeping us updated on this
 matter.

 Diem.
 On 03/24/2015 05:28 PM, Mike Perez wrote:

 On 18:20 Mon 23 Mar , Diem Tran wrote:

 Hello Cinder team,

 Oracle ZFSSA CI has been reporting since March 20th. Below is a link
 to the list of results the CI already posted:

 https://review.openstack.org/#/q/reviewer:%22Oracle+ZFSSA+CI%22,n,z

 Our CI system will be running and reporting results from now on,
 hence I kindly request that you accept our CI results and consider
 re-integrating our drivers back in Kilo RC.

 If there is any concern, please let us know.

 Diem,

 I appreciate your team getting back to us on the CI. It appears your CI is
 running 247 tests, when it should be running 304. Please verify you're
 running tempest as described in the instructions here:

 https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#What_tests_do_I_use.3F

 Once this issue is resolved, I'll continue to monitor the stability and,
 based on that, make a decision on re-adding the driver.




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to handle vendor-specific API microversions?

2015-03-31 Thread Clint Byrum
Excerpts from Lingxian Kong's message of 2015-03-23 21:11:28 -0700:
 2015-03-21 23:31 GMT+08:00 Monty Taylor mord...@inaugust.com:
 
  I would vote that we not make this pleasant or easy for vendors who are
  wanting to add a feature to the API. As a person who uses several clouds
  daily, I can tell you that a vendor chosing to do that is VERY mean to
  users, and provides absolutely no value to anyone, other than allowing
  someone to make a divergent differentiated fork.
 
  Just don't do it. Seriously. It makes life very difficult for people
  trying to consume these things.
 
  The API is not the place for divergence.
 
 But what if some vendors have already implemented some on-premise
 features using the Nova extension mechanism, to differentiate their
 products based on OpenStack? IMHO, DefCore has already given some advice
 about what OpenStack is (you must pass a lot of predefined tests). If
 vendors cannot provide extra, backwards-compatible features themselves,
 they will lose some competitiveness for their product.
 
 I'm not very sure whether my understanding is right, but I am really
 concerned about what the right direction is for vendors or providers.
 

What is being suggested is that those vendors need to write an API
that stands alone, apart from OpenStack's APIs, with its own client
libraries and programs. This is to make it clear that those things are not
OpenStack. Extensions sort of hide in the shadows, and it is very hard
for a user to distinguish what they can depend on.

Think of the very nice GNU-specific extensions in glibc.
If someone is writing an app that may need to land on many systems, they
must at least know to put those calls behind a layer of indirection that
they can focus on when porting. Same thing here.

Nobody wants to harm the ecosystem or discourage vendors from pushing into
corners where upstream might take too long to catch up. But OpenStack
isn't going to facilitate those things at the expense of the end-user
ecosystem.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][stable][OSSA 2015-005] Nova console Cross-Site WebSocket hijacking (CVE-2015-0259)

2015-03-31 Thread Tristan Cacqueray
On 03/26/2015 04:23 PM, Jeremy Stanley wrote:
 On 2015-03-26 14:29:03 -0400 (-0400), Lars Kellogg-Stedman wrote:
 [...]
 The solution, of course, is to make sure that the value of
 novncproxy_base_url is set explicitly where the nova-novncproxy
 service is running. This is a bit of a hack, since the service
 *really* only cares about the protocol portion of the URL,
 suggesting that maybe a new configuration option would have been a
 less intrusive solution.
 [...]
 
 Thanks for the heads up. The developers working to backport security
 fixes to stable branches try to come up with ways to have them
 automatically applicable without configuration changes on the part
 of the deployers consuming them. Sometimes it's possible, sometimes
 it's not, and sometimes they think it is but turn out in retrospect
 to have introduced an unintended behavior change. Unfortunately I
 think that last possibility is what happened for this bug[1].
 
 It's worth bringing this to the attention of the Nova developers who
 implemented the original fix to see if there's a better stable
 solution which achieves the goal of protecting deployments where
 operators aren't likely to update their configuration while still
 maintaining consistent behavior. To that end, I'm Cc'ing the
 openstack-dev list, setting MFT and tagging the subject accordingly.
 
 [1] https://launchpad.net/bugs/1409142
 

Thanks Lars for bringing this up!

I've submitted a documentation change to document that new behavior[2]
and I'd like to amend the release note[3] with this:

There is a known issue with the new websocket origin access control
(OSSA 2015-005): a ValidationError will prevent VNC and SPICE connections
if the base_urls are not properly configured. The novncproxy_base_url and
html5proxy_base_url now need to match the TLS settings of the connection
origin and need to be set explicitly where the nova proxy service is
running.
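
For anyone hitting the ValidationError, the settings in question live in
nova.conf on the host running the proxy services. A minimal illustration
(the hostname is made up, shown with the usual default ports; the important
part is that the scheme matches how clients actually reach the proxy, over
TLS or not) would be roughly:

  [DEFAULT]
  novncproxy_base_url = https://console.example.com:6080/vnc_auto.html

  [spice]
  html5proxy_base_url = https://console.example.com:6082/spice_auto.html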

Feedback is most welcome...

[2]: https://review.openstack.org/169515
[3]: https://wiki.openstack.org/wiki/ReleaseNotes/2014.1.4



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [api] Erring is Caring

2015-03-31 Thread Everett Toews
Hi All,

An API Working Group Guideline for Errors

https://review.openstack.org/#/c/167793/

Errors are a crucial part of the developer experience when using an API. As 
developers learn the API they inevitably run into errors. The quality and 
consistency of the error messages returned to them will play a large part in 
how quickly they can learn the API, how they can be more effective with the 
API, and how much they enjoy using the API.

We need consistency across all services for the error format returned in the 
response body.


The Way Forward

I did a bit of research into the current state of consistency in errors across 
OpenStack services [1]. Since no services seem to respond with a top-level 
"errors" key, it's possible that they could just include this key in the 
response body along with their usual response, and the two can live side by side 
for some deprecation period. Hopefully those services with unstructured errors 
would be okay with adding some structure. That said, the current error formats 
aren't documented anywhere that I've seen, so this all feels fair game anyway.
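
To make the side-by-side idea concrete, a response body during such a
deprecation period could look roughly like the sketch below: the existing
per-service payload (here a nova-style "badRequest" wrapper) stays as-is,
while the new top-level "errors" key is added alongside it. The field names
inside "errors" are purely illustrative, not the format the guideline will
necessarily settle on.

  {
      "badRequest": {
          "code": 400,
          "message": "Invalid flavor provided."
      },
      "errors": [
          {
              "request_id": "req-<uuid>",
              "code": 400,
              "title": "Invalid flavor",
              "detail": "Invalid flavor provided."
          }
      ]
  }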

How this would get implemented in code is up to you. It could eventually be 
implemented in all projects individually, or perhaps an Oslo utility is called 
for. However, this discussion is not about the implementation. This discussion 
is about the error format.


The Review

I’ve explicitly added all of the API WG and Logging WG CPLs as reviewers to 
that patch but feedback from all is welcome. You can find a more readable 
version of patch set 4 at [2]. I see the id and “code” fields as the 
connection point to what the logging working group is doing.


Thanks,
Everett


[1] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Errors
[2] 
http://docs-draft.openstack.org/93/167793/4/check/gate-api-wg-docs/e2f5b6e//doc/build/html/guidelines/errors.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-03-31 Thread Sławek Kapłoński
Hello,

I think the easiest way could be to have a dedicated mech_driver (AFAIK such
drivers are meant for exactly this kind of usage) to talk with external
devices and tell them which tunnels they should establish.
With the change to tun_ip that Henry proposes, l2pop will be able to
establish tunnels with external devices.
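
Just to illustrate what Henry's proposal could look like on the wire (this is
purely a sketch -- binding:tun_ip does not exist today), a port created for an
external, agentless VTEP might carry something like:

  {
      "port": {
          "network_id": "<vxlan-network-uuid>",
          "name": "external-vtep",
          "binding:host_id": "",
          "binding:tun_ip": "192.0.2.10"
      }
  }

l2pop could then populate fdb entries from binding:tun_ip regardless of
whether an agent sits behind the port.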

On Mon, Mar 30, 2015 at 10:19:38PM +0200, Mathieu Rohon wrote:
 hi henry,
 
 thanks for this interesting idea. It would be interesting to think about
 how external gateway could leverage the l2pop framework.
 
 Currently l2pop sends its fdb messages once the status of the port is
 modified. AFAIK, this status is only modified by agents which send
 update_devce_up/down().
 This issue has also to be addressed if we want agent less equipments to be
 announced through l2pop.
 
 Another way to do it is to introduce some bgp speakers with e-vpn
 capabilities at the control plane of ML2 (as a MD for instance). Bagpipe
 [1] is an opensource bgp speaker which is able to do that.
 BGP is standardized so equipments might already have it embedded.
 
 last summit, we talked about this kind of idea [2]. We were going further
 by introducing the bgp speaker on each compute node, in use case B of [2].
 
 [1]https://github.com/Orange-OpenSource/bagpipe-bgp
 [2]http://www.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
 
 On Thu, Mar 26, 2015 at 7:21 AM, henry hly henry4...@gmail.com wrote:
 
  Hi ML2er,
 
  Today we use agent_ip in L2pop to store endpoints for ports on a
  tunnel type network, such as vxlan or gre. However this has some
  drawbacks:
 
  1) It can only work with backends that have agents;
  2) Only one fixed IP is supported per agent;
  3) It is difficult to interact with other backends and the world outside of OpenStack.
 
  L2pop is already widely accepted and deployed in host-based overlays;
  however, because it uses agent_ip to populate tunnel endpoints, it's very
  hard for it to co-exist and inter-operate with other vxlan backends,
  especially an agentless MD.
 
  A small change is suggested: the tunnel endpoint should not be an
  attribute of the *agent*, but an attribute of the *port*. If we store
  it in something like *binding:tun_ip*, it is much easier for different
  backends to co-exist. The existing ovs and bridge agents need a small
  patch to put the local agent_ip into the port context binding fields
  when doing the port_up rpc.
 
  Several extra benefits may also be obtained this way:
 
  1) we can easily and naturally create an *external vxlan/gre port* which
  is not attached to a Nova-booted VM, with binding:tun_ip set at creation
  time;
  2) we can develop a *proxy agent* which manages a bunch of remote
  external backends, without being restricted by its agent_ip.
 
  Best Regards,
  Henry
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-03-31 Thread Everett Toews
Top posting to continue the discussion in another thread.

[openstack-dev] [all] [api] Erring is Caring
http://lists.openstack.org/pipermail/openstack-dev/2015-March/060314.html

Everett


On Feb 4, 2015, at 10:29 AM, Duncan Thomas duncan.tho...@gmail.com wrote:

Ideally there would need to be a way to replicate errors.openstack.org and 
switch the URL for non-internet-connected deployments, but TBH sites with that 
sort of requirement are used to weird breakages, so it's not a huge issue if it 
can't easily be done.

On 3 February 2015 at 00:35, Jay Pipes jaypi...@gmail.com wrote:
On 01/29/2015 12:41 PM, Sean Dague wrote:
Correct. This actually came up at the Nova mid cycle in a side
conversation with Ironic and Neutron folks.

HTTP error codes are not sufficiently granular to describe what happens
when a REST service goes wrong, especially if it goes wrong in a way
that would let the client do something other than blindly try the same
request, or fail.

Having a standard json error payload would be really nice.

{
  "fault": "ComputeFeatureUnsupportedOnInstanceType",
  "message": "This compute feature is not supported on this kind of
instance type. If you need this feature please use a different instance
type. See your cloud provider for options."
}

That would let us surface more specific errors.
snip

Standardization here from the API WG would be really great.

What about having a separate HTTP header that indicates the OpenStack Error 
Code, along with a generated URI for finding more information about the error?

Something like:

X-OpenStack-Error-Code: 1234
X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234

That way is completely backwards compatible (since we wouldn't be changing 
response payloads) and we could handle i18n entirely via the HTTP help service 
running on errors.openstack.org.

Best,
-jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] SQLAlchemy-related topics in the Vancouver summit

2015-03-31 Thread Mike Bayer
hey all -

Just a heads up that I am booked to attend the Vancouver summit. And I have 
almost nothing to do.  So please reach out and invite me to your 
database-related design sessions, so that I can help out with SQLAlchemy, 
Alembic/Migrate, and oslo.db feature support (with props to dogpile as well). 
I’m hoping to have a fairly populated calendar by the time the summit comes 
around, and I’d most like to attend those sessions where people are actually 
looking for me!

- mike



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] sqlalchemy-migrate 0.9.6 released

2015-03-31 Thread Matt Riedemann

sqlalchemy-migrate 0.9.6 is released:

https://pypi.python.org/pypi/sqlalchemy-migrate/0.9.6

This is a bug fix release for a single change that will unblock DB2 
third party CI for Nova:


mriedem@ubuntu:~/git/sqlalchemy-migrate$ git log --oneline --no-merges 
0.9.5..0.9.6

e57ee4c Fix ibmdb2 index name handling

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] The Evolution of core developer to maintainer?

2015-03-31 Thread Joe Gordon
I am starting this thread based on Thierry's feedback on [0].  Instead of
writing the same thing twice, you can look at the rendered html from that
patch [1]. Neutron tried to go from core to maintainer but after input from
the TC and others, they are keeping the term 'core' but are clarifying what
it means to be a neutron core [2]. [2] does a very good job of showing how
what it means to be core is evolving.  From

everyone is a dev and everyone is a reviewer. No committers or repo
owners, no aristocracy. Some people just commit to do a lot of reviewing
and keep current with the code, and have votes that matter more (+2).
(Thierry)

To a system where cores are more than people who have votes that matter
more. Neutron's proposal tries to align that document with what is already
happening.

1. They share responsibility in the project's success.
2. They have made a long-term, recurring time investment to improve the
project.
3. They spend their time doing what needs to be done to ensure the project's
success, not necessarily what is the most interesting or fun.


I think there are a few issues at the heart of this debate:

1. Our current concept of a core team has never been able to grow past 20
or so people, even for really big projects like nova and cinder. Why is
that?  How do we delegate responsibility for subsystems? How do we keep
growing?
2. If everyone is just a developer and a reviewer, who is actually responsible
for the project's success? How does that mesh with the ideal of no
'aristocracy'? Do our early goals still make sense today?




Do you feel like a core developer/reviewer (we initially called them core
developers) [1]:

In OpenStack a core developer is a developer who has submitted enough high
quality code and done enough code reviews that we trust their code reviews
for merging into the base source tree. It is important that we have a
process for active developers to be added to the core developer team.

Or a maintainer [1]:

1. They share responsibility in the project’s success.
2. They have made a long-term, recurring time investment to improve the
project.
3. They spend that time doing whatever needs to be done, not necessarily
what is the most interesting or fun.

Maintainers are often under-appreciated, because their work is harder to
appreciate. It’s easy to appreciate a really cool and technically advanced
feature. It’s harder to appreciate the absence of bugs, the slow but steady
improvement in stability, or the reliability of a release process. But
those things distinguish a good project from a great one.




[0] https://review.openstack.org/#/c/163660/
[1]
http://docs-draft.openstack.org/60/163660/3/check/gate-governance-docs/f386acf//doc/build/html/resolutions/20150311-rename-core-to-maintainers.html
[2] https://review.openstack.org/#/c/164208/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One confirm about max fixed ips per port

2015-03-31 Thread Kevin Benton
Multiple subnets can be used per port to have discontiguous floating IP
ranges on the external network. They can be used on internal networks to
migrate to a different address space. They can also be added in VLAN-based
environments, where networks are expensive, to increase the number of hosts
allowed on the network later.
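
For illustration, a single port can already request addresses from more than
one subnet on the same network (the network name and subnet IDs below are
placeholders), and max_fixed_ips_per_port is the knob that caps how many such
entries a port may carry; roughly along these lines:

  neutron port-create mynet \
      --fixed-ip subnet_id=<ipv4-subnet-uuid> \
      --fixed-ip subnet_id=<ipv6-subnet-uuid>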

On Tue, Mar 31, 2015 at 7:43 AM, sparkofwisdom.cl...@gmail.com wrote:

  Do we have clarity on this question? I think it will be more important
 for IPv6 enabled port…Any guidance from the Neutron Core?



 Thanks!



 Shixiong




  Zou, Yun zou@jp.fujitsu.com wrote:

 Hello, Oleg Bondarev.



 Sir, I could not find any merit in having multiple subnets on one network,
 except the following one.

 - Migrating from IPv4 to IPv6, where we need both subnet ranges on one network.

 So I don't quite understand the necessity of the max_fixed_ips_per_port parameter.

 All I know is that only the DB module and the opencontrail plugin are using this
 parameter for validation.

 Do we have any use cases for this, please?

 I appreciate a lot of your help.



 My question is related to fix [1].

 [1]: https://review.openstack.org/#/c/160214/



 Best regards,

 Watanabe.isao





 __

 OpenStack Development Mailing List (not for usage questions)

 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-03-31 Thread John Griffith
On Tue, Mar 31, 2015 at 4:30 PM, Joe Gordon joe.gord...@gmail.com wrote:

 I am starting this thread based on Thierry's feedback on [0].  Instead of
 writing the same thing twice, you can look at the rendered html from that
 patch [1]. Neutron tried to go from core to maintainer but after input from
 the TC and others, they are keeping the term 'core' but are clarifying what
 it means to be a neutron core [2]. [2] does a very good job of showing how
 what it means to be core is evolving.  From

 everyone is a dev and everyone is a reviewer. No committers or repo
 owners, no aristocracy. Some people just commit to do a lot of reviewing
 and keep current with the code, and have votes that matter more (+2).
 (Thierry)

 To a system where cores are more than people who have votes that matter
 more. Neutron's proposal tries to align that document with what is already
 happening.

 1. They share responsibility in the project's success.
 2. They have made a long-term, recurring time investment to improve the
 project.
 3. They spend their time doing what needs to be done to ensure the
 project's success, not necessarily what is the most interesting or fun.


 I think there are a few issues at the heart of this debate:

 1. Our current concept of a core team has never been able to grow past 20
 or so people, even for really big projects like nova and cinder. Why is
 that?  How do we delegate responsibility for subsystems? How do we keep
 growing?
 2. If everyone is just a developer and a reviewer, who is actually
 responsible for the project's success? How does that mesh with the ideal of
 no 'aristocracy'? Do our early goals still make sense today?




 Do you feel like a core developer/reviewer (we initially called them core
 developers) [1]:

 In OpenStack a core developer is a developer who has submitted enough high
 quality code and done enough code reviews that we trust their code reviews
 for merging into the base source tree. It is important that we have a
 process for active developers to be added to the core developer team.

 Or a maintainer [1]:

 1. They share responsibility in the project’s success.
 2. They have made a long-term, recurring time investment to improve the
 project.
 3. They spend that time doing whatever needs to be done, not necessarily
 what is the most interesting or fun.

 Maintainers are often under-appreciated, because their work is harder to
 appreciate. It’s easy to appreciate a really cool and technically advanced
 feature. It’s harder to appreciate the absence of bugs, the slow but steady
 improvement in stability, or the reliability of a release process. But
 those things distinguish a good project from a great one.




 [0] https://review.openstack.org/#/c/163660/
 [1]
 http://docs-draft.openstack.org/60/163660/3/check/gate-governance-docs/f386acf//doc/build/html/resolutions/20150311-rename-core-to-maintainers.html
 [2] https://review.openstack.org/#/c/164208/

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hey Joe,

I mentioned in last week's TC meeting that I didn't really see a burning
need to change or create new labels; but that's probably beside the
point.  So if I read this correctly, it really comes down to this: a number of
people in the community want core to mean something more than special
reviewer, is that right?  I mean, regardless of whether you change the name
from core to maintainer, I really don't care.  If it makes some folks feel
better to have that title/label associated with themselves, that's cool by me
(yes, I get the *extra* responsibilities part you outlined).

What is missing for me here however is who picks these special people.
I'm convinced that this does more to promote the idea of special
contributors than anything else.  Maybe that's actually what you want, but
it seemed based on your message that wasn't the case.

Anyway, core nominations are fairly objective in my opinion and are *mostly*
based on the number of reviews and the perceived quality of those reviews
(measured somewhat by disagreement rates etc.).  What are the metrics for this
special group of folks that you're proposing we empower and title as
maintainers?  Do I get to be a maintainer, or is it reserved for a special
group of people, a specific company?  What are the criteria?  Do *you* get to
be a maintainer?

What standards are *Maintainers* held to?  Who decides, and how, whether they
are doing their job?  Are there any rules about representation and interests
(keeping the team of people balanced)?  What about work by those
maintainers that introduces more/new bugs?

My feeling on this is that yes a lot of this sort of thing is happening
naturally on its own and that's a pretty cool thing IMO.  What you're
saying though is you want to formalize it?  Is the problem that people
don't feel 

Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-03-31 Thread Dean Troyer
On Tue, Mar 31, 2015 at 5:30 PM, Joe Gordon joe.gord...@gmail.com wrote:

 Do you feel like a core developer/reviewer (we initially called them core
 developers) [1]:

 In OpenStack a core developer is a developer who has submitted enough high
 quality code and done enough code reviews that we trust their code reviews
 for merging into the base source tree. It is important that we have a
 process for active developers to be added to the core developer team.

 Or a maintainer [1]:

 1. They share responsibility in the project’s success.
 2. They have made a long-term, recurring time investment to improve the
 project.
 3. They spend that time doing whatever needs to be done, not necessarily
 what is the most interesting or fun.


First, I don't think these two things are mutually exclusive, that's a
false dichotomy.  They sound like two groups of attributes (or roles), both
of which must be earned in the eyes of the rest of the project team.
Frankly, being a PTL is your maintainer list on steroids for some projects,
except that the PTL is directly elected.


 Maintainers are often under-appreciated, because their work is harder to
 appreciate. It’s easy to appreciate a really cool and technically advanced
 feature. It’s harder to appreciate the absence of bugs, the slow but steady
 improvement in stability, or the reliability of a release process. But
 those things distinguish a good project from a great one.


The best maintainers appear to be invisible because stuff Just Works(TM).

It feels to me like a couple of things are being conflated here and need to
be explicitly stated to break the conversation down into meaningful parts
that can be discussed without getting side-tracked:

a) How do we scale?  How do we spread the project management load?  How do
we maintain consistency in subteams/subsystems?

b) How do we avoid the 'aristocracy'?

c) what did I miss?

Taking b) first, the problem being solved needs to be stated.  Is it to
avoid 'cliques'?  Are feelings being hurt because some are 'more-core' than
others?  Is it to remove being a core team member as a job-review checkbox
for some companies?  This seems to be bigger than just increasing core
reviewer numbers, and tied to some developers being slighted in some way.

A) is an organizational structure problem.  We're seeing the boundaries of
startup-style flat organization, and I think we all know we don't want
traditional enterprise layers of managers.

It seems like there is a progression of advancement for team members:
prove yourself and become a core team member/reviewer/whatever.  The next
step is what I think you want to formalize, Joe, and that is for those who
again prove themselves in some manner to unlock the 'maintainer' achievements.

The idea of taking the current becoming-core-team process and repeating it
based on existing cores and PTL recommendations doesn't seem like too far
of a stretch.  I mean really, is any project holding back people who want
to do the maintainer role on more than just one pet part of a project? (I
know those exist)


FWIW, I have not been deeply involved in any of the highly
political/vendor-driven projects so this may appear totally ignorant to
those realities, but I think that is a clue that those projects are
drifting away from the ideals that OpenStack was started with.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev