Re: [openstack-dev] multiple external networks not working

2014-02-13 Thread Yongsheng Gong
In order for the router to be scheduled to an L3 agent serving a given
external network, we should create the router and set its gateway on that
external network immediately after the router is created.
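
For example, with the CLI the working order is roughly the following (a
sketch; the subnet ID is a placeholder):

    neutron router-create TenantB-Router
    neutron router-gateway-set TenantB-Router Ext-Net-2
    neutron router-interface-add TenantB-Router <tenant-subnet-id>

Setting the gateway right after creation means the router already carries
external_gateway_info when the scheduler picks an agent.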


On Thu, Feb 13, 2014 at 3:26 PM, Nick Ma skywalker.n...@gmail.com wrote:

 I have an environment with multiple external networks:

 | 6fd43d02-221a-44fe-8088-dc5915512c14 | Ext-Net-2 | 1a530334-9dd7-45f3-aa6a-2bb1a5dad562 192.168.1.0/24 |
 | a2946b29-6be5-4285-9eb9-99625ec2a283 | Ext-Net   | dfbc7f6c-c3dd-4c56-a142-48964e2e474c 192.168.2.0/24 |

 Each external network is served by a single L3 agent. I also
 declare gateway_external_network_id in each l3_agent.ini, and set
 external_network_bridge.
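
 (For reference, the per-agent configuration looks roughly like this; the
 bridge name here is made up:)

     # l3_agent.ini on the agent dedicated to Ext-Net-2
     gateway_external_network_id = 6fd43d02-221a-44fe-8088-dc5915512c14
     external_network_bridge = br-ex2
     # only one L3 agent in the deployment should keep this True
     handle_internal_only_routers = False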

 Then, I set up tenants, routers and gateways as follows:

 TenantA -- TenantA-Router -- Ext-Net -- Internet
 TenantB -- TenantB-Router -- Ext-Net-2 -- Internet

 I find that both qrouter namespaces are associated with the same single
 L3 agent, which is not what I expected.

 I debugged the L3 agent router scheduler to see how it works, and I
 found the following:

 Before the router is scheduled, I print agent objects out:

 neutron.db.agents_db.Agent[object at 2ba9150] {...
 configurations=u'{"router_id": "", "gateway_external_network_id":
 "6fd43d02-221a-44fe-8088-dc5915512c14", ...

 The gateway_external_network_id is actually there.

 And I print the router object out:

 Router: {'status': u'ACTIVE', 'external_gateway_info': None, 'name':
 u'TenantA-R1', 'gw_port_id': None, 'admin_state_up': True, 'tenant_id':
 u'b181fd2406784da5895a966da4b74126', 'routes': [], 'id':
 u'14ff540e-13c6-4aec-8064-a74320f62a0d'}

 The external_gateway_info is None.

 Finally, I run neutron router-show:

 7ab3f8f4-89c8-4735-a5c2-4c0a09103bf1 | TenantA-R1 | {"network_id":
 "6fd43d02-221a-44fe-8088-dc5915512c14", "enable_snat": true}

 The external_gateway_info is there.

 So, I guess that the router scheduler runs before I associate the
 external gateway with the router.

 in neutron/db/l3_agentschedulers_db.py:
 def get_l3_agent_candidates(self, sync_router, l3_agents):
     gateway_external_network_id = agent_conf.get(
         'gateway_external_network_id', None)
     ex_net_id = (sync_router['external_gateway_info'] or {}).get(
         'network_id')

 It compares the two values; if they are equal, the L3 agent is a
 candidate.
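
 (For context, in the Havana tree the check continues roughly along these
 lines; a paraphrase, not the verbatim source:)

     handle_internal_only_routers = agent_conf.get(
         'handle_internal_only_routers', True)
     ...
     # a router with no gateway yet has ex_net_id = None, so it matches any
     # agent that handles internal-only routers; that is how it can land on
     # an agent bound to a different external network
     if ((not ex_net_id and not handle_internal_only_routers) or
         (ex_net_id and gateway_external_network_id and
          ex_net_id != gateway_external_network_id)):
         continue
     candidates.append(l3_agent)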

 I forgot to mention that this happens on the stable Havana release. I
 haven't tested the master branch yet.

 1. Is this a bug, or by design?
 2. How should one run multiple external networks?

 --

 Nick Ma
 skywalker.n...@gmail.com




Re: [openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver problems

2014-02-13 Thread Gary Kotton
Hi,
The commit 
https://github.com/openstack/nova/commit/c4bf32c03283cbedade9ab8ca99e5b13b9b86ccb
 added a warning that the ESX driver is not tested. My understanding is that 
there are a number of people using the ESX driver so it should not be 
deprecated. In order to get the warning removed we will need to have CI on the 
driver. As far as I know there is no official decision to deprecate it.
Thanks
Gary

From: Jay Lau jay.lau@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, February 13, 2014 4:00 AM
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver problems
Subject: [openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver problems

Greetings,

I am doing some integration work with the VMware VCDriver and have some
questions about it.

1) At the Hong Kong Summit it was mentioned that the ESXDriver will be
dropped, so do we have any plan for when to drop this driver?
2) There are many good VMware features not supported by the VCDriver, such as
live migration, cold migration and resize within one vSphere cluster; we also
cannot get individual ESX server details via the VCDriver.

Do we have any plans to make those features work?

--
Thanks,

Jay


Re: [openstack-dev] Glance plugin in Nova should switch to using buffered http

2014-02-13 Thread Sridevi K R Koushik
On Thu, Feb 13, 2014 at 2:16 PM, Sridevi K R Koushik 
sridevi.kous...@thoughtworks.com wrote:

 Hi,

 I would like to get some comments on the
 https://blueprints.launchpad.net/nova/+spec/use-buffered-http-in-glance-plugin
 and https://blueprints.launchpad.net/nova/+spec/should-use-100-continue-header
 blueprints.
 These blueprints will address many of the auth related failures during the
 upload process.

 Thanks,
 Sridevi



Re: [openstack-dev] multiple external networks not working

2014-02-13 Thread Nick Ma
Thanks for your quick reply. I'll try it later.

On 2/13/2014 3:54 PM, Yongsheng Gong wrote:
 In order for the router to be scheduled to an L3 agent serving a given
 external network, we should create the router and set its gateway on
 that external network immediately after the router is created.


 [...]

-- 

Nick Ma
skywalker.n...@gmail.com




Re: [openstack-dev] [Ironic] review days

2014-02-13 Thread Roman Prykhodchenko
8am PST mostly works for me (except Wednesdays), so +1 to that.

- romcheg

On Feb 13, 2014, at 09:44, Ghe Rivero ghe.riv...@gmail.com wrote:

 What time would work for you? How about Thursdays at 8am PST?
 
 Works for me!
 
 Ghe Rivero
 
 





Re: [openstack-dev] [Neutron] ARP Proxy in l2-population Mechanism Driver for OVS

2014-02-13 Thread Mathieu Rohon
Hi,

You can see in the review [1] that doude first proposed an ebtables
manager to implement the ARP responder for ovs. OVS 2.1 is now able to
manage an ARP responder based on flow [2], so he switches his
implementation to a flow based ARP responder (please, have a look at
patches history).
ebtables driver seems more interesting since this implementation would
be compatible with any ovs version, but VM needs to be plugged to a
linux bridge and with ovsfirewalldriver [3], nova won't need to plug
VM to a linux bridge anymore, so ARP responder based on ebtables won't
work.

[1]https://review.openstack.org/#/c/49227/
[2]https://review.openstack.org/#/c/49227/27/neutron/plugins/ml2/drivers/l2pop/README
[3]https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver
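
(For the record, the ebtables approach boiled down to an arpreply rule of
roughly this shape; a sketch with made-up addresses, not the actual patch:)

    # answer ARP requests for a known VM IP locally instead of flooding
    # the tunnels
    ebtables -t nat -A PREROUTING -p arp --arp-opcode Request \
        --arp-ip-dst 10.0.0.2 \
        -j arpreply --arpreply-mac fa:16:3e:01:02:03 --arpreply-target DROP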

On Thu, Feb 13, 2014 at 9:51 AM, Édouard Thuleau thul...@gmail.com wrote:
 Hi,

 On Havana, a local ARP responder is available if you use ML2 with the
 l2-pop MD and the Linux Bridge agent (natively implemented by the Linux
 kernel VXLAN module).
 It's not (yet [1]) available with the OVS agent. The proposed OVS
 implementation uses new OVS flows integrated in branch 2.1.

 Just a few remarks about the ML2 l2-pop MD. Two important bugs persist:
 - One [2] impacts the l2-pop MD with all agents (Linux Bridge and OVS);
 merged on trunk and waiting to be backported [3].
 - Another one [4] impacts only the OVS agent and is still waiting for review.

 [1] https://review.openstack.org/#/c/49227/
 [2] https://review.openstack.org/#/c/63913/
 [3] https://review.openstack.org/#/c/71821/
 [4] https://review.openstack.org/#/c/63917/

 Édouard.


 On Thu, Feb 13, 2014 at 4:57 AM, Nick Ma skywalker.n...@gmail.com wrote:

 Hi all,

 I'm running an OpenStack Havana cloud at the pre-production stage using
 Neutron ML2 with VXLAN. I'd like to incorporate l2-population to get rid
 of tunnel broadcasts.

 However, it seems that the ARP proxy has NOT been implemented yet for
 Open vSwitch, either in Havana or on the latest master branch.

 I found that ebtables' arpreply can do it, and one can then put some
 corresponding flow rules into OVS.

 Could anyone provide more hints on how to implement it in l2-pop?

 thanks,

 --

 Nick Ma
 skywalker.n...@gmail.com




Re: [openstack-dev] [Neutron] ARP Proxy in l2-population Mechanism Driver for OVS

2014-02-13 Thread Nick Ma
I'll check these links. Thanks a lot.

On 2/13/2014 5:13 PM, Mathieu Rohon wrote:
 [...]

-- 

cheers,
Li Ma




Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-02-13 Thread Vinod Kumar Boppanna
Dear All,

At the meeting last week, we (Ulrich and I) were assigned the task of doing a
POC for quota management in the hierarchical multitenancy setup.

So, here it is:

Wiki Page - https://wiki.openstack.org/wiki/POC_for_QuotaManagement
(it explains an example setup and my thoughts)

Code - 
https://github.com/vinodkumarboppanna/POC-For-Quotas/commit/391e9108fa579d292880c8836cadfd7253586f37

Please post your comments or any input; I hope this POC will be discussed in
this week's meeting on Friday at 1600 UTC.


In addition to this, we have completed the implementation of domain quota
management in Nova with the V2 APIs; if anybody is interested, please have a look:

BluePrint - https://blueprints.launchpad.net/nova/+spec/domain-quota-driver-api
Wiki Page - https://wiki.openstack.org/wiki/APIs_for_Domain_Quota_Driver
GitHub Code - https://github.com/vinodkumarboppanna/DomainQuotaAPIs


Thanks & Regards,
Vinod Kumar Boppanna



[openstack-dev] [marconi] API 1.1 feedback

2014-02-13 Thread Jamie Hannaford
I noticed a few things about the new 1.1 spec that I thought I'd give feedback 
on:

1. Set Queue Metadata
A PUT operation is provided, which does a hard replace of metadata values. New 
items are inserted, and existing items that are not specified are wiped.

Nova also provides a POST operation that is more sympathetic, allowing you to
update only the values specified and leaving existing unspecified items
unmodified. Could a similar operation be added to this API? There definitely
seems to be a use case for it (see the sketch after this list).

2. Get a Specific Message
In the response body, the `href` field is provided as a relative URI - why? 
Surely absolute URIs are more convenient for the end-user.

3. Deleting Multiple Messages
a. How does one delete multiple claimed messages? What would the URI template 
look like? It is not specified whether this is possible or not.
b. If I provide a bunch of IDs, and one of them is a claimed message, what 
happens? Will it be silently ignored? The behavior is undefined.

4. Read a Shard
Should these response structures be nested in a top-level shard object? The
same applies to the List Shards collection.

5. The request body for Post Message(s) contains malformed JSON - the `=` 
should be `:`
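
(To illustrate the PUT/POST distinction from point 1, a hedged sketch in
Python terms; the metadata keys are made up:)

    existing = {'ttl': 300, 'owner': 'ops'}
    update = {'ttl': 600}

    # PUT today: hard replace, so 'owner' is wiped
    after_put = update

    # POST as requested: partial update, so 'owner' survives
    after_post = dict(existing, **update)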

Sorry if some of these issues have already been settled or discussed :)

Jamie




Jamie Hannaford
Software Developer II - CH

Tel: +41434303908
Mob: +41791009767





Re: [openstack-dev] Gamification and on-boarding ...

2014-02-13 Thread Thierry Carrez
Sandy Walsh wrote:
 The informal OpenStack motto is "automate everything", so perhaps we should 
 consider some form of gamification [1] to help us? Can we offer badges, 
 quests and challenges to new users to lead them on the way to being strong 
 contributors?
 
 "Fixed your first bug" badge
 "Updated the docs" badge
 "Got your blueprint approved" badge
 "Triaged a bug" badge
 "Reviewed a branch" badge
 "Contributed to 3 OpenStack projects" badge
 "Fixed a Cells bug" badge
 "Constructive in IRC" badge
 "Freed the gate" badge
 "Reverted branch from a core" badge
 etc. 

I think that works if you only keep the ones you can automate.
"Constructive in IRC", for example, sounds a bit subjective to me, and you
don't want to issue those badges one-by-one manually.

Second thing, you don't want the game to start polluting your bug
status, i.e. people randomly setting bugs to "triaged" to earn the
"Triaged a bug" badge. So the badges we keep should be provably useful ;)

A few other suggestions:
"Found a valid security issue" (to encourage security reports)
"Fixed a bug submitted by someone else" (to encourage attacking random bugs)
"Removed code" (to encourage tech debt reduction)
"Backported a fix to a stable branch" (to encourage backporting)
"Fixed a bug that was tagged nobody-wants-to-fix-this-one" (to encourage
people to attack critical / hard bugs)

We might need protected tags to automate this: tags that only some
people could set on bugs/tasks to designate gate-freeing or
nobody-wants-to-fix-this-one bugs that will give you badges if you fix
them.

So overall it's a good idea, but it sounds a bit tricky to automate it
properly to avoid bad side-effects.
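
(If someone wanted to prototype this, most of the provable badges could be
derived from Gerrit's event stream; a rough Python sketch, with the badge
rules left hypothetical:)

    # tally merged changes per contributor from gerrit stream-events
    import json
    import subprocess

    proc = subprocess.Popen(
        ['ssh', '-p', '29418', 'review.openstack.org',
         'gerrit', 'stream-events'],
        stdout=subprocess.PIPE)

    merges = {}
    for line in proc.stdout:
        event = json.loads(line)
        if event.get('type') != 'change-merged':
            continue
        owner = event['change']['owner']['email']
        merges[owner] = merges.get(owner, 0) + 1
        if merges[owner] == 1:
            print('badge: first change merged -> %s' % owner)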

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Cinder] How to run pylint locally?

2014-02-13 Thread Subramanian
Thanks for the response, Dirk. I now see output very similar to what is shown
in the log when running 'tox -e pylint', so I am getting closer. Unfortunately
I am still not able to reproduce the failure shown here:
http://logs.openstack.org/17/62217/5/check/gate-cinder-pylint/1272f0a/console.html

What seems strange to me is that even when I run plain 'pylint -E -i y
cinder' on this branch, I still don't see the two lint errors reported in
the above link!

- Subbu


On Thu, Feb 13, 2014 at 2:45 PM, Dirk Müller d...@dmllr.de wrote:

 Hi,

  Here is what I tried:
  /opt/stack/cinder $ ./tools/lintstack.sh
 
  But this does not report any errors even though I am on the same branch.
  What am I missing?

 You might then be running against local packages, which have a
 different version / output. The proper way to reproduce the gate
 errors is by using tox:

 in this case, tox -e pylint

 Please note that the pylint error is non-voting; the actual -1 comes
 from the Devstack test run failure.



Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-13 Thread Thierry Carrez
John Griffith wrote:
 So we've talked about this a bit and had a number of ideas regarding
 how to test and show compatibility for third-party drivers in Cinder.
 This has been an eye opening experience (the number of folks that have
 NEVER run tempest before, as well as the problems uncovered now that
 they're trying it).
 
 I'm even more convinced now that having vendors run these tests is a
 good thing and should be required.  That being said, there's a ton of
 push back on my proposal to require that results from a successful
 run of the tempest tests accompany any new drivers submitted to
 Cinder.

Could you describe the nature of the pushback ? Is it that the tests are
too deep and reject valid drivers ? Is it that it's deemed unfair to
block new drivers while the existing ones aren't better ? Is it that
it's difficult for them to run those tests and get a report ? Or is it
because they care more about having their name covered in mainline and
not so much about having the code working properly ?

 The consensus from the Cinder community for now is that we'll
 log a bug for each driver after I3, stating that it hasn't passed
 certification tests.  We'll then have a public record showing
 drivers/vendors that haven't demonstrated functional compatibility,
 and in order to close those bugs they'll be required to run the tests
 and submit the results to the bug in Launchpad.
 
 So, this seems to be the approach we're taking for Icehouse at least,
 it's far from ideal IMO, however I think it's still progress and it's
 definitely exposed some issues with how drivers are currently
 submitted to Cinder so those are positive things that we can learn
 from and improve upon in future releases.
 
 To add some controversy, and to keep the original intent of having only
 known tested and working drivers in the Cinder release, I am going to
 propose that any driver that has not submitted successful functional
 test results by RC1 be removed.  I'd at least like to see
 driver maintainers try... if a driver fails a test or two, that's
 something that can be discussed, but it seems that until now most
 drivers simply have not been tested at all.

I think there are multiple stages here.

Stage 0: no one knows if drivers work
Stage 1: we know the (potentially sad) state of the drivers that are in
the release
Stage 2: only drivers that pass tests are added, drivers that don't pass
tests have a gap analysis and a plan to fix it
Stage 3: drivers that fail tests are removed before release
Stage 4: 3rd-party testing rigs must run tests on every change in order
to stay in tree

At the very minimum you should be at stage 1 for the Icehouse release,
so I agree with your last paragraph. I'd recommend that you start the
Juno cycle at stage 2 (for new drivers), and try to reach stage 3 for
the end of the Juno release.

-- 
Thierry Carrez (ttx)



[openstack-dev] [Neutron] set network admin_state_up to false

2014-02-13 Thread Sylvain Afchain
Hi,

I'm working on this bug :

https://bugs.launchpad.net/neutron/+bug/1237807

and I'm wondering how we should address this issue; below are my
suggestions/thoughts.

Currently, when the admin_state_up of a network is set to false, the only
thing that happens is that the DHCP instance of this network is
disabled/destroyed, so the DHCP port is removed from br-int.

So should we:

1. Set all ports' admin_state_up to false, and let the agents (OVS) set the
   ports as dead? This is a different behavior from DHCP, which removes the
   port from br-int.

2. Leave the admin_state_up value of the ports unchanged, and introduce a new
   field in the get_device_details RPC call to indicate that the network's
   admin_state_up is down, then set the port as dead? That way, if the
   network's admin_state_up is restored, all ports recover their original
   status.

In any case, wouldn't it be a good thing to have consistent behavior between
DHCP ports and other network ports?
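
(A standalone sketch of option 2, with hypothetical names, just to make the
RPC change concrete:)

    # server side: expose the network state in the get_device_details
    # payload without touching the port's own admin_state_up
    def get_device_details(port, network):
        return {
            'port_id': port['id'],
            'admin_state_up': port['admin_state_up'],
            'network_admin_state_up': network['admin_state_up'],  # new
        }

    # agent side: act on the flag; the OVS agent would tag the port with
    # the dead VLAN instead of changing the port's stored state
    def treat_device(details, agent):
        if not details.get('network_admin_state_up', True):
            agent.port_dead(details['port_id'])
        else:
            agent.wire_port(details)  # normal path, so state recovers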

Thanks in advance,

-Sylvain




Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Loadbalancer Instance feedback

2014-02-13 Thread Eugene Nikanorov
Hi Stephen,

Please see my comments inline.


On Thu, Feb 13, 2014 at 5:19 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 Hi y'all!

 I've been reading through the LoadBalancerInsance description as outlined
 here and have some feedback:
 https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance

 First off, I agree that we need a container object and that the pool
 shouldn't be the object root. This container object is going to have some
 attributes associated with it which then will apply to all related objects
 further down on the chain.  (I'm thinking, for example, that it may make
 sense for the loadbalancer to have 'network_id' as an attribute, and the
 associated VIPs, pools, etc. will inherit this from the container object.)


In particular, network_id could be different for the VIP and a pool in case
the balancer works in routed mode (i.e. connects to different networks).


 One thing that was not clear to me just yet:  Is the 'loadbalancer' object
 meant, at least eventually, to be associated with an actual load balancer
 device of some kind (be that the neutron node with haproxy, a vendor
 appliance or a software appliance)?


Yes, that is one of the proposed roles of the 'loadbalancer' object, but not
the only one. An appliance is not the only representation of the balancer we
are working with; it could also be a process on a host that is controlled by
the agent. So other types of associations are also necessary (like an
association between the agent and the 'loadbalancer').


 If not, then I think we should use a name other than 'Loadbalancer' so we
 don't confuse people. I realize I might just be harping on one of the two
 truly difficult problems in software engineering (which are: Naming things,
 cache invalidation, and off-by-one errors). But if a 'loadbalancer' object
 isn't meant to actually be synonymous with a load balancer appliance of
 some kind, the object needs a new name.


I don't mind having another name, like 'instance', for example. But an
appliance (be it a device or a process+agent) really is a synonym of what I
am proposing.


 If the object and the device are meant to essentially be synonymous, then
 I think we're starting off too simplistic here, and the model proposed is
 going to need another significant revision when we add additional features
 later on.  I suspect we'll be painting ourselves into a corner with the
 LoadBalancerInstance as proposed. Specifically, I'm thinking about:


- Operational concerns around the life cycle of a physical piece of
infrastructure. If we're going to replace a physical load balancer, it
often makes sense to have both the old and new load balancer defined in the
system at the same time during the transition. If you then swap all the
VIPs from the old to the new, suddenly all the child objects have their
loadbalancer_id changed, which will often wreak havoc on client application
code (who really shouldn't be hard-coding things like loadbalancer_id, but
will do so anyway. :P ) Such transitions are much easier accomplished if
both load balancers can exist within an overarching container object (ie.
cluster in my proposal) which will never need to be swapped out.

 I'd like to understand that better. I guess no model is complex enough to
describe each and every use case. What I'm trying to address with the LB
instance is both the simplistic cases that are currently supported plus some
more complex configurations like multiple pools (L7) and multiple VIPs. At
the same time we need to consider backward compatibility, and we need to
make progress. The bigger the change, the harder it is to make progress. So
we need to find an iterative way of increasing API and model complexity.


- Having tenants know about loadbalancer_id (if it corresponds with
physical hardware) feels inherently un-cloud-like to me. Better that said
tenants know about the container object (which doesn't actually correspond
with any single physical piece of infrastructure) and not concern
themselves with physical hardware.

 Having a loadbalancer_id has nothing to do with the appliance or a
particular backend, so it might not even give a tenant any clue about the
backend type. However, a tenant may want to know something about the
backend, and he/she may want to use a single appliance for their needs (due
to quotas, billing or topology limitations); that's where loadbalancer_id
helps to envelop resources and group them onto just one (some!) physical
backend.


- In an active-standby or active-active HA load balancer topology (ie.
anything other than 'single device' topology), multiple load balancers will
carry the same configuration, as far as VIPs, Pools, Members, etc. are
concerned. Therefore, it doesn't make sense for the 'container' object to
be synonymous with a single device. It might be possible to hide this
complexity from the model by having 

Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Logging configuration

2014-02-13 Thread Samuel Bercovici
I have modified the document access; let me know if you still have issues.

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Thursday, February 13, 2014 4:02 AM
To: Samuel Bercovici
Cc: OpenStack Development Mailing List (not for usage questions); 
rw3...@att.com; David Patterson; Eugene Nikanorov (enikano...@mirantis.com)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - 
Logging configuration

Hi Roger and Sam!

Sam--  Could you please alter the permissions on that google doc so that people 
can add comments (at least)? Right now, it appears I have a read-only view.

Roger:  What you're describing seems to me like it isn't something that would 
apply to ceilometer integration so much as log offloading (and specifically, 
error log offloading). (Ceilometer is more about real-time stats than logs.) 
This isn't one of the features I had fleshed out in my model yet, as in our 
environment we dump the logs to local files (via syslog-ng) and any 
troubleshooting we do thereafter is done by looking at these log files on our 
software appliances directly (on behalf of our customers, in most cases).

But I can see that having the ability to ship logs off the load balancers is 
going to be important.  I see two possible ways of doing this:

1. Periodically archiving logs off-server. In practical terms for a software 
appliance, this would mean haproxy continues to log to disk as usual, then a 
cron job periodically rsyncs logs off the appliance to some user-defined 
destination. I see mostly disadvantages with this approach:

  *   Shipping the logs can consume a lot of bandwidth when it's happening, so 
care would be needed in scheduling this so as not to affect production traffic 
going to the load balancer.
  *   We'll have to deal with credentials for logging into the remote log 
archive server (and communicating and storing those credentials in a secure 
manner.)
  *   Logs are not immediately available, which makes real-time troubleshooting 
of problems...er... problematic.
  *   If the load balancer dies before shipping the logs off, the logs are lost.
  *   A very busy load balancer needs to worry about disk space (and log 
rotation) a lot more.

2. Real-time shipping of the logs via syslog-ng. In this case, haproxy would be 
configured to pass its logs to syslog-ng, which is in turn configured to pass 
them in real time to a logging host somewhere on the network. The main 
advantages of this approach are:

  *   No bandwidth-hogging periodic rsync process to deal with.
  *   Real-time troubleshooting is now possible.
  *   If the load balancer dies, we still get to see its last gasps just before 
it died.
  *   Log rotation, etc. are not a concern. Neither is disk space on the load 
balancer.

The main disadvantages to this approach are:

  *   DOSing the load balancer is easier, as requests now generate extra 
traffic for the load balancer (to the logging host)
  *   If the load balancer gets DOSed, the logging host might be getting DOSed 
as well.

Given the above I'm favoring the second approach. But does anyone else have 
ideas with regard to how to handle this?
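
(For concreteness, the second approach is only a couple of lines of
configuration on each side; a sketch with made-up addresses and facility:)

    # haproxy.cfg: emit logs to the local syslog-ng
    global
        log 127.0.0.1:514 local0

    # syslog-ng.conf: relay them to the remote log host in real time
    source s_haproxy { udp(ip(127.0.0.1) port(514)); };
    destination d_loghost { udp("192.0.2.10" port(514)); };
    log { source(s_haproxy); destination(d_loghost); };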

Other attributes we might want to consider for the logging object:

  *   Verbosity
  *   Log format

Anything else?

In any case, any kind of 'logging' resource (with its various attributes) 
probably makes the most sense to attach in a 1:N relationship with the listener 
in my model (ie. one 'logging' object can be associated with many listeners.)

Thanks,
Stephen

On Wed, Feb 12, 2014 at 1:58 AM, Samuel Bercovici samu...@radware.com wrote:
Hi,

We plan to address LBaaS in ceilometer for Juno.
A blue print was registered 
https://blueprints.launchpad.net/neutron/+spec/lbaas-ceilometer-integration
Please use the following  google document to add include requirements and 
thoughts at: 
https://docs.google.com/document/d/1mrrn6DEQkiySwx4eTaKijr0IJkJpUT3WX277aC12YFg/edit?usp=sharing

Regards,
-Sam.


-Original Message-
From: WICKES, ROGER [mailto:rw3...@att.com]
Sent: Tuesday, February 11, 2014 7:35 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Proposal for model

[Roger] Hi Stephen! Great job! Obviously your experience is both awesome and 
essential here.

I would ask that we add a historical archive (physically implemented as a log
file, probably) object to your model. When you mentioned sending data off to
Ceilometer, that triggered me to think about one problem I have had to deal
with: "what packet went where?"
This comes up when diagnosing errors, usually related to having a bug on 1
out of 5 load-balanced servers, usually because of a deployed version
mismatch, but it could also be due to a virus. When our customer sees "hey,
every now and then this image is broken on a web page", that points us to an
inconsistent farm, and having the ability to trace or see which server got
that 

Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Multiple services per floating IP

2014-02-13 Thread Eugene Nikanorov
Hi,

see my comments inline:


On Thu, Feb 13, 2014 at 4:11 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:



 Is this blueprint not yet implemented?  When I attempt to create multiple
 VIPs using the same IP in my test cluster, I get:

 sbalukoff@testbox:~$ neutron lb-vip-create --name test-vip2
 --protocol-port 8000 --protocol HTTP --subnet-id
 a4370142-dc49-4633-9679-ce5366c482f5 --address 10.48.7.7 test-lb2
 Unable to complete operation for network
 aa370a26-742d-4eb6-a6f3-a8c344c664de. The IP address 10.48.7.7 is in use.

 From that, I gathered there was a uniqueness check on the IP address.


No, it's not yet implemented. Currently VIP creation implies port creation,
so in order to create a VIP on the same IP, the port would have to be shared
between the VIPs, and that is blocked by the existing implementation of the
haproxy driver. We're now working on addressing that.


 Regardless of the above:  I think splitting the concept of a 'VIP' into
 'instance' and 'listener' objects has a couple other benefits as well:


- You can continue to do a simple uniqueness check on the IP address,
as only one instance should have a given IP.

- The 'instance' object can contain a 'tenant_id' attribute, which
means that at the model level, we enforce the idea that a given floating IP
can only be used by one tenant (which is good, from a security 
 perspective).

- This seems less ambiguous from a terminology perspective. The name
'VIP' in other contexts means 'virtual IP address', which is the same thing
as a floating IP, which in other contexts is usually considered to be
unique to a subset of devices that share the IP (or pass it between them).
It doesn't necessarily have anything to do with layers 4 and above in the
OSI model. However, if in the context of Neutron LBaaS, VIP has a
protocol-port attribute, this means it's no longer just a floating IP:
 It's a floating IP + TCP port (plus other attributes that make sense for a
TCP service). This feels like Neutron LBaaS is trying to redefine what a
virtual IP is, and is in any case going to be confusing for new comers
expecting it to be one thing when it's actually another.

 So we have some constraints here because of the existing haproxy driver
implementation; in particular, a VIP created by haproxy is not a floating IP
but an IP on the internal tenant network with a Neutron port. So IP
uniqueness is enforced at the port level and not at the VIP level. We need
to allow VIPs to share the port; that is part of the multiple-vips-per-pool
blueprint.

Thanks,
Eugene.


[openstack-dev] [savanna] Mission Statement wording

2014-02-13 Thread Sergey Lukjanov
Hi folks,

I'm now working on adding Savanna's mission statement to the governance docs
[0]. There are some comments on our current one asking to make it simpler
and remove the marketing-like stuff.

So, current option is:

To provide a scalable data processing stack and associated management
interfaces.

(thanks to Doug for proposing it).

So, please share your objections (and suggestions too). Additionally, I'd
like to talk about it at today's IRC meeting.

Thanks.

[0] https://review.openstack.org/#/c/71045/1/reference/programs.yaml

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


[openstack-dev] Version Discovery Standardization

2014-02-13 Thread Jamie Lennox
Hi all,

I am part of, I think, a number of efforts trying to make clients
interoperable between different versions of an API.

What I would like to talk about specifically here are the inconsistencies in
the version listing of the different servers when you query the root GET '/'
and GET '/vX' addresses for versions. This is a badly formatted sampling of
the policies out there: http://paste.openstack.org/show/64770/

This is my draft of a common solution:
https://wiki.openstack.org/wiki/VersionDiscovery. It has some changes for
everyone, but I hope it can at least be a guide for new services and a
target for the existing ones.

There are a number of major inconsistencies that i hope to address:

1. The 'status' of an API.

Keystone uses the word 'stable' to indicate a stable API, there are a number
of services using 'CURRENT', and I'm not sure what 'SUPPORTED' is supposed to
mean here. In general I think 'stable' makes the most sense, and in many ways
keystone has to be the leader here as it is the first contact. Any ideas how
to convert existing APIs to this scheme?

2. HTTP Status

Some services return 200, some 300; and the status does not appear to depend
on how many versions are present in the response.

3. Keystone uses ['versions']['values']

Yep. Not sure why that is. Sorry. We should be able to have a copy under
'values' and one in the root 'versions' simultaneously for a while, and then
drop the 'values' one in some future release.

4. Glance does a version entry for each minor version. 

Separate entries for v2.2, v2.1, v2.0. They all point to the same place, so
IMO this is unnecessary.

5. Differences between entry in GET '/' and GET '/vX'

There is often a lot more information in GET '/vX', like media-type, that is
not present in the root. I'm not sure if this was on purpose, but I think it
is easier (and fewer lookups) to have this information consistent.

6. GET '/' is unrestricted. GET '/vX' is often token restricted. 

Keystone allows access to /v2.0 and /v3, but most services give an HTTP
Unauthorized. This is a real problem for discovery because we need to be able
to evaluate the endpoints in the service catalog. I think we need to make
these accessible without a token.

Please have a look over the wiki page and how it addresses the above and fits 
into the existing schemes and reply with any comments or problems that you see. 
Is this going to mess with any pre-existing clients?
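
(For anyone who wants to poke at this, the normalization a client has to do
today is roughly the following; a sketch assuming python-requests:)

    import requests

    def get_versions(endpoint):
        # 200 and 300 responses carry the same kind of body
        body = requests.get(endpoint).json()
        versions = body['versions']
        if isinstance(versions, dict):   # keystone nests under 'values'
            versions = versions['values']
        return [(v['id'], v['status']) for v in versions]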

Then, is there somewhere we can put 'new project guidelines' that we can link
this from?


Jamie


PS. This is the script I used for the sampling if you want to test yourself: 
http://paste.openstack.org/show/65015/; it makes assumptions on URL structures 
and it won't pass code review.



[openstack-dev] [Nova] Meeting cancelled this week

2014-02-13 Thread Russell Bryant
The weekly Nova IRC meeting is cancelled since we had an in-person Nova
meetup this week.  We'll meet again next week (Feb 20).

https://wiki.openstack.org/wiki/Meetings/Nova

-- 
Russell Bryant



Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-13 Thread Julien Vey
Hi,

I have some concerns about using Zuul in Solum

I agree gating is a great feature, but it is not useful for every project
and, as Adrian said, not understood by everyone.
I think many Solum users, and PaaS users in general, have a
single-project/single-build/simple git workflow and do not care about gating.

I see 2 drawbacks with Zuul:
- Tenant isolation: how do we allow access to Zuul (and Jenkins) for a
specific tenant in isolation from the other tenants using Solum?
- Build customization: one of the biggest advantages of Jenkins is its
ecosystem and the many build customizations it offers. Using Zuul would
prohibit this.

About Gerrit, I think it is also a little too much. Many users have their
own reviewing system: pull requests with GitHub, Bitbucket or Stash, their
own instance of Gerrit, or even a custom git workflow.
Gerrit would be a great feature for future versions of Solum, but only as
an optional one; we should not force people into it.

Julien

2014-02-13 5:47 GMT+01:00 Clark Boylan clark.boy...@gmail.com:

 On Wed, Feb 12, 2014 at 7:25 PM, Noorul Islam K M noo...@noorul.com
 wrote:
  devdatta kulkarni devdatta.kulka...@rackspace.com writes:
 
  Hi,
 
  I have been looking at Zuul for last few days and had a question
  about its intended role within Solum.
 
  From what I understand, Zuul is a code gating system.
 
  I have been wondering if code gating is something we are considering as
 a feature
  to be provided in Solum? If yes, then Zuul is a perfect fit.
  But if not, then we should discuss what benefits do we gain by using
 Zuul
  as an integral part of Solum.
 
  It feels to me that right now we are treating Zuul as a conduit for
 triggering job(s)
  that would do the following:
  - clone/download source
  - run tests
  - create a deployment unit (DU) if tests pass
  - upload DU to glance
  - trigger the DU deployment workflow
 
  In the language-pack working group we have talked about being able to do
  CI on the submitted code and building the DUs only after tests pass.
  Now, there is a distinction between doing CI on merged code vs.
  doing it before code is permanently merged to master/stable branches.
  The latter is what a 'code gating' system does, and Zuul is a perfect
 fit for this.
  For the former, though, using a code gating system is not needed.
  We can achieve the former with an API endpoint, a queue,
  and a mechanism to trigger job(s) that perform above mentioned steps.
 
  I guess it comes down to Solum's vision. If the vision includes
 supporting, among other things, code gating
  to ensure that Solum-managed code is never broken, then Zuul is a
 perfect fit.
  Of course, in that situation we would want to ensure that the gating
 functionality is pluggable
  so that operators can have a choice of whether to use Zuul or something
 else.
  But if the vision is to be that part of an overall application
 lifecycle management flow which deals with
  creation and scaling of DUs/plans/assemblies but not necessarily be a
 code gate, then we should re-evaluate Zuul's role
  as an integral part of Solum.
 
  Thoughts?
 
 
  Is Zuul tightly coupled with Launchpad? I see that most of the
  information that it displays is coming from Launchpad.
 
  If it is, is it a good idea to force launchpad on users?
 
  Regards,
  Noorul
 

 I can't think of any places that Zuul requires launchpad (or displays
 launchpad info for that matter). It is a bit coupled to Gerrit on one
 end and Gearman on the other, but not in an extreme way (the use of
 Gearman makes a bunch of sense imo, but having additional triggers
 instead of just Gerrit sounds great to me).

 Clark



Re: [openstack-dev] Version Discovery Standardization

2014-02-13 Thread Sean Dague
On 02/13/2014 07:50 AM, Jamie Lennox wrote:
 Hi all,
 
 I am one of i think a number of efforts trying to make clients be 
 interoperable between different versions of an API.
 
 What i would like to talk about specifically here are the inconsistencies in 
 the version listing of the different servers when you query the root GET '/' 
 and GET '/vX' address for versions. This is a badly formatted sampling of the 
 policies out there: http://paste.openstack.org/show/64770/
 
 This is my draft of a common solution 
 https://wiki.openstack.org/wiki/VersionDiscovery which has some changes for 
 everyone, but I at least hope can be a guide for new services and a target 
 for the existing
 
 There are a number of major inconsistencies that i hope to address:
 
 1. The 'status' of an API. 
 
 Keystone uses the word 'stable' to indicate a stable API, there are a number 
 of services using 'CURRENT', and I'm not sure what 'SUPPORTED' is supposed to 
 mean here. In general I think 'stable' makes the most sense, and in many ways 
 keystone has to be the leader here as it is the first contact. Any ideas how 
 to convert existing APIs to this scheme? 

From that link, only Keystone is different. Glance, Cinder, Neutron,
Nova all use CURRENT. So while not ideal, I'm not sure why we'd change
the rest of the world to match keystone, vs. changing keystone to match
the rest of the world.

Also realize changing version discovery itself is an API change, so my
feeling is this should be done in the smallest number of places possible.

 2. HTTP Status
 
 Some services return 200, some 300; and the status does not appear to depend 
 on how many versions are present in the response. 

Ideally - 300 should be returned if there are multiple versions, and 200
otherwise.

 3. Keystone uses ['versions']['values']
 
 Yep. Not sure why that is. Sorry, we should be able to have a copy under 
 'values' and one in the root 'versions' simultaneously for a while and then 
 drop the 'values' in some future release. 

Again, keystone seems to be the odd man out here.

 4. Glance does a version entry for each minor version. 
 
 Separate entries for v2.2, v2.1, v2.0. They all point to the same place, so 
 IMO this is unnecessary. 

Probably agreed. Curious if any Glance folks know of a reason for it.

 5. Differences between entry in GET '/' and GET '/vX'
 
 There is often a lot more information in GET '/vX', like media-type, that is 
 not present in the root. I'm not sure if this was on purpose, but I think it 
 is easier (and fewer lookups) to have this information consistent.

Agreed. I expect it's historical, following Nova, that media-type is
not in the root. I think it's fixable.

 6. GET '/' is unrestricted. GET '/vX' is often token restricted. 
 
 Keystone allows access to /v2.0 and /v3, but most services give an HTTP 
 Unauthorized. This is a real problem for discovery because we need to be able 
 to evaluate the endpoints in the service catalog. I think we need to make 
 these accessible without a token.

Agreed, however due to the way the wsgi stacks work in these projects,
this might not be trivial. I'd set that as a goal to address.

 Please have a look over the wiki page and how it addresses the above and fits 
 into the existing schemes and reply with any comments or problems that you 
 see. Is this going to mess with any pre-existing clients?
 
 Then is there somewhere we can do 'new project guidelines' that we can link 
 this from?
 
 
 Jamie
 
 
 PS. This is the script I used for the sampling if you want to test yourself: 
 http://paste.openstack.org/show/65015/ it makes assumptions on URL structures 
 and it won't pass code review.
 
 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] Gamification and on-boarding ...

2014-02-13 Thread Sean Dague
On 02/13/2014 05:37 AM, Thierry Carrez wrote:
 Sandy Walsh wrote:
 [...]

Gamification is a cool idea, if someone were to implement it, I'd be +1.

Realistically, the biggest issue I see with on-boarding is mentoring
time. Especially with folks completely new to our structure, there are a
lot of confusing things going on. And OpenStack is a ton to absorb. I
get pinged a lot on IRC, answer when I can, and sometimes just have to
ignore things because there are only so many hours in the day.

I think Anita has been doing a great job with the Neutron CI onboarding
and new folks, and that's given me perspective on just how many
dedicated mentors we'd need to bring new folks on. With 400 new people
showing up each release, it's a lot of engagement time. It's also
investment in our future, as some of these folks will become solid
contributors and core reviewers.

So it seems like the only way we'd make real progress here is to get a
chunk of people to devote some dedicated time to mentoring in the next
cycle. Gamification might be useful, but honestly I expect a "Start
Here" page with the consolidated list of low-hanging-fruit bugs, and a
"Review Here" page with all reviews for low-hanging-fruit bugs (so they
don't get lost by the core review team) would be a great start.

The delays on reviews for relatively trivial fixes I think is something
that is probably more demotivating to new folks than the lack of badges.
So some ability to keep on top of that I think would be really great.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-13 Thread CARVER, PAUL
Julien Vey wrote:

About Gerrit, I think it is also a little too much. Many users have their
own reviewing system: pull requests with GitHub, Bitbucket or Stash,
their own instance of Gerrit, or even a custom git workflow.
Gerrit would be a great feature for future versions of Solum, but only
as an optional one; we should not force people into it.

I'm just an observer since I haven't managed to negotiate the CLA hurdle with 
my employer yet, but Gerrit seems to me to work fantastically well.

If there are better options than Gerrit that people are using I'd be interested 
in hearing about them. I'm always interested in learning who has the best in 
class tool for any particular task. However I think that having multiple tools 
for the same job within OpenStack is going to be a bad idea that results in 
confusion and difficulty in cooperation.

Now, if the intention is for Solum to NOT be an OpenStack project that's fine. 
OpenStack can use and be used by lots of projects that aren't part of it. But 
if someone learns the tools and processes for one OpenStack project they ought 
to be able to jump right into any other OpenStack project without having to 
worry about what code review tool, or other tools or processes are different 
from one to the next.


Re: [openstack-dev] [savanna] Mission Statement wording

2014-02-13 Thread Alexander Ignatov
LGTM on what was proposed by Doug.

Regards,
Alexander Ignatov



On 13 Feb 2014, at 16:29, Sergey Lukjanov slukja...@mirantis.com wrote:

 Hi folks,
 
 I'm working now on adding Savanna's mission statement to governance docs [0]. 
 There are some comments on our current one to make it simpler and remove 
 marketing like stuff.
 
 So, current option is:
 
 To provide a scalable data processing stack and associated management 
 interfaces.
 
 (thanks to Doug for proposing it).
 
 So, please, share your objections (and suggestions too). Additionally I'd 
 like to talk about it at today's IRC meeting.
 
 Thanks.
 
 [0] https://review.openstack.org/#/c/71045/1/reference/programs.yaml
 
 -- 
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.



Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-13 Thread Clint Byrum
Excerpts from Julien Vey's message of 2014-02-13 05:18:19 -0800:
 Hi,
 
 I have some concerns about using Zuul in Solum
 
 I agree gating is a great feature but it is not useful for every project
 and as Adrian said, not understood by everyone.
 I think many Solum users, and PaaS users in general, follow a
 single-project/single-build/simple git workflow and do not care about gating.
 

The gate can be a noop. Easier to insert more gate tests when it matters
than it is to swap out technologies at that time.
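For instance, a sketch of what that looks like in a Zuul layout.yaml (the
project name is made up; "noop" is just a job that always succeeds):

    projects:
      - name: stackforge/solum-demo
        check:
          - noop
        gate:
          - noop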

 I see 2 drawbacks with Zuul :
 - Tenant Isolation : How do we allow access on zuul (and jenkins) for a
 specific tenant in isolation to the others tenants using Solum.

Same way Trove has a mysql database per tenant I presume.

 - Build customization : One of the biggest advantage of Jenkins is its
 ecosystem and the many build customization it offers. Using zuul will
 prohibit this.

Nobody is saying you can't use Jenkins. People are saying use Zuul and
get more than Jenkins. But go on and use Jenkins as well.

 
 About Gerrit, I think it is also a little too much. Many users have their
 own reviewing system, Pull requests with github, bitbucket or stash, their
 own instance of gerrit, or even a custom git workflow.
 Gerrit would be a great feature for future versions of Solum, but only as
 an optional one; we should not force people into it.

Agreed on that.



[openstack-dev] bad default values in conf files

2014-02-13 Thread David Kranz
I was recently bitten by a case where some defaults in keystone.conf 
were not appropriate for real deployment, and our puppet modules were 
not providing better values 
https://bugzilla.redhat.com/show_bug.cgi?id=1064061. Since there are 
hundreds (thousands?) of options across all the services, I am wondering 
whether there are other similar issues lurking and if we have done what 
we can to flush them out.


Defaults in conf files seem to be one of the following:

- Generic, appropriate for most situations
- Appropriate for devstack
- Appropriate for small, distro-based deployment
- Appropriate for large deployment

Upstream, I don't think there is a shared view of how defaults should be 
chosen.


Keeping bad defaults can have a huge impact on performance and on when a 
system falls over, but the problems may not be visible until some time 
after a system gets into real use. Have the folks creating our puppet 
modules and install recommendations taken a close look at all the 
options and determined
that the defaults are appropriate for deploying RHEL OSP in the 
configurations we are recommending?


 -David



Re: [openstack-dev] Version Discovery Standardization

2014-02-13 Thread Dean Troyer
FWIW, an early proposal to address this, as well as capability discovery,
still lives at
https://etherpad.openstack.org/p/api-version-discovery-proposal.  I've lost
track of where this went, and even which design summit this is from, but
I've been using it as a sanity check for the discovery bits in OSC.

On Thu, Feb 13, 2014 at 6:50 AM, Jamie Lennox jamielen...@redhat.comwrote:

 6. GET '/' is unrestricted. GET '/vX' is often token restricted.


 Keystone allows access to /v2.0 and /v3 but most services give an HTTP
 Unauthorized. This is a real problem for discovery because we need to be
 able to evaluate the endpoints in the service catalog. I think we need to
 make these unauthorized.


I agree; however, from a client discovery process point of view, you do not
necessarily have an endpoint until after you auth and get a service catalog
anyway.  For example, in the specific case of OpenStackClient Help command
output, the commands listed may depend on the desired API version.  To get
the endpoints to query for version support still requires a service catalog
so nothing really changes there.

And this doesn't even touch on the SC endpoints that include things like
tenant/project id...


 Please have a look over the wiki page and how it addresses the above and
 fits into the existing schemes and reply with any comments or problems that
 you see. Is this going to mess with any pre-existing clients?


* id: Let's either make this a real semantic version so we can parse and
use the major.minor.patch components (and dump the 'v') or make it an
identifier that matches the URL path component.  Right now

* updated: I think it would be a friendly gesture to update this for
unstable changes as the id is likely to not be updated mid-stream.  During
debugging I would want to be able to verify exactly which implementation I
was talking to anyway.
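For reference, this is roughly the shape Keystone publishes today
(abridged), which is the baseline the proposal builds on:

    {"version": {
        "id": "v3.0",
        "status": "stable",
        "updated": "2013-03-06T00:00:00Z",
        "links": [{"rel": "self",
                   "href": "http://identity.example.com:5000/v3/"}]
    }}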


There are two transitional things to also consider:

* We have to produce a discovery mechanism that takes in to account the
historical authentication URLs published by deployments that include a
version.  ayoung's ML thread last week discussed this a bit; we should
document the approaches that we're testing and why they do or do not work.

* There are real-world deployments that do not configure admin_endpoint
and/or public_endpoint in keystone.conf.  Discovery is really useless if
the URL you are given is https://localhost:5000/v2.0.  Do we need to talk
about another horrible horrible hack to deal with these or are these
deployments going to be left out in the cold?

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [marconi] API 1.1 feedback

2014-02-13 Thread Flavio Percoco

On 13/02/14 10:30 +0000, Jamie Hannaford wrote:

I noticed a few things about the new 1.1 spec that I thought I'd give feedback
on:

1. Set Queue Metadata
A PUT operation is provided, which does a hard replace of metadata values. New
items are inserted, and existing items that are not specified are wiped. 


Nova also provides a POST operation that is more sympathetic - allowing you to
update only the values specified, leaving existing unspecified items
unmodified. Could a similar operation be added to this API - since there
definitely seems like a use case for it.


Neither PUT nor POST are the right methods to use here. I think we
discussed this at some point and we agreed on adding a PATCH action
for the queue metadata. This still needs to be discussed a bit further
though; even the utility of queue metadata as a public feature is
questionable.
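To make that concrete, the request under discussion might look something
like this -- purely hypothetical, since neither the verb nor the fields
are settled yet:

    PATCH /v1.1/queues/orders/metadata HTTP/1.1
    Content-Type: application/json

    {"ttl": 300, "notes": "billing queue"}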

I'd love to hear from you what kind of metadata you would put into the
queue and whether you think this is something useful for all users or
just operators.

The idea behind the queue metadata is to allow users to store
information relative to the queue and also customize some of the
queue's limits without exceeding the limits configured server side.

There are more use cases, but I'd love to hear some from you


2. Get a Specific Message
In the response body, the `href` field is provided as a relative URI - why?
Surely absolute URIs are more convenient for the end-user.


This is because the client does the work of joining the relative path
to the host address. We had long discussions about whether we should
return the absolute URL as opposed to the relative URL. Although
relative URLs are less convenient when curling the API, they are more
convenient when the client is doing this work for you and you have
several nodes you could talk to.



3. Deleting Multiple Messages
a. How does one delete multiple claimed messages? What would the URI template
look like? It is not specified whether this is possible or not.
b. If I provide a bunch of IDs, and one of them is a claimed message, what
happens? Will it be silently ignored? The behavior is undefined.


Hehe, this is a good one! :)

So, as of now it is only possible to do bulk deletes, which is a leaky
abstraction, TBH. This is a must-fix issue for v1.1 because there's a
race condition that would allow user A to delete a message that was
claimed by user B after getting it.


So, to answer:

a. you just do a bulk delete
b. it'd delete the message regardless of whether it is claimed. :(



4. Read a Shard
Should these response structures be nested in a top-level shard object? Same
with the List Shards collection.


Yes, there's a plan to make responses more consistent; not sure if it
is mentioned there. :)



5. The request body for Post Message(s) contains malformed JSON - the `=`
should be `:`


Oh mmh, you mean in the spec, right?



Sorry if some of these issues have already been settled or discussed :)




Thanks a lot for raising these questions and for reviewing the API
v1.1 spec.

Cheers,
Fla.

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [savanna] Mission Statement wording

2014-02-13 Thread Andrew Lazarev
The short version looks good to me.

Andrew.


On Thu, Feb 13, 2014 at 4:29 AM, Sergey Lukjanov slukja...@mirantis.comwrote:

 Hi folks,

 I'm working now on adding Savanna's mission statement to governance docs
 [0]. There are some comments on our current one to make it simpler and
 remove marketing like stuff.

 So, current option is:

 To provide a scalable data processing stack and associated management
 interfaces.

 (thanks to Doug for proposing it).

 So, please, share your objections (and suggestions too). Additionally I'd
 like to talk about it at today's IRC meeting.

 Thanks.

 [0] https://review.openstack.org/#/c/71045/1/reference/programs.yaml

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.



Re: [openstack-dev] bad default values in conf files

2014-02-13 Thread Clint Byrum
Excerpts from David Kranz's message of 2014-02-13 06:38:52 -0800:
 I was recently bitten by a case where some defaults in keystone.conf 
 were not appropriate for real deployment, and our puppet modules were 
 not providing better values 
 https://bugzilla.redhat.com/show_bug.cgi?id=1064061.

Just taking a look at that issue, Keystone's PKI and revocation are
causing all kinds of issues with performance that are being tackled with
a bit of a redesign. I doubt we can find a cache timeout setting that
will work generically for everyone, but if we make detecting revocation
scale, we won't have to.

The default probably is too low, but raising it too high will cause
concern with those who want revoked tokens to take effect immediately
and are willing to scale the backend to get that result.

 Since there are 
 hundreds (thousands?) of options across all the services, I am wondering 
 whether there are other similar issues lurking and if we have done what 
 we can to flush them out.
 
 Defaults in conf files seem to be one of the following:
 
 - Generic, appropriate for most situations
 - Appropriate for devstack
 - Appropriate for small, distro-based deployment
 - Appropriate for large deployment
 
 Upstream, I don't think there is a shared view of how defaults should be 
 chosen.
 

I don't know that we have been clear enough about this, but nobody has
ever challenged the assertion we've been making for a while in TripleO
which is that OpenStack _must_ have production defaults. We don't make
OpenStack for devstack.

In TripleO, we consider it a bug when we can't run with a default value
that isn't directly related to whatever makes that cloud unique. So
the virt driver: meh, that's a choice, but leaving file injection on is
really not appropriate for 99% of users in production. Also you'll see
quite a few commits from me in the keystone SQL token driver trying to
speed it up because the old default token backend was KVS (in-memory),
which was fast, but REALLY not useful in production. We found these
things by running defaults and noticing in a long running cloud where
the performance problems are, and we intend to keep doing that.

So perhaps we should encode this assertion in
https://wiki.openstack.org/wiki/ReviewChecklist

 Keeping bad defaults can have a huge impact on performance and on when a 
 system falls over, but the problems may not be visible until some time 
 after a system gets into real use. Have the folks creating our puppet 
 modules and install recommendations taken a close look at all the 
 options and determined
 that the defaults are appropriate for deploying RHEL OSP in the 
 configurations we are recommending?


TripleO is the official deployment program. We are taking the approach
described above. We're standing up several smallish (50 nodes) clouds
with the intention of testing the defaults on real hardware in the gate
of OpenStack eventually.



Re: [openstack-dev] bad default values in conf files

2014-02-13 Thread Boris Pavlovic
David,

Good that you raise this topic. It is actually sad that you have to do a
big investigation of OpenStack config params before you are able to use
OpenStack. I think that this work should be done mostly upstream.


So I have a couple of ideas about how we can simplify investigating how
CONF values impact performance at scale (without having tons of servers).

In the near future you will be able to use Rally [1] for it:
1) (WIP) Deploy a multinode installation in one click inside lxc
containers [2] (you need only 200 MB of RAM for 1 compute node)
2) Use fake virtualization
3) Run rally benchmarks and get performance for different conf parameters
4) Analyze the results and set the best one as the default.
5) (WIP) I am also working on a pure OpenStack cross-service profiler [3]
that will allow us to find slow parts of the code, and analyze the CONF
args related to them (not the whole list of conf params).



[1] https://wiki.openstack.org/wiki/Rally
[2]
https://review.openstack.org/#/c/57240/27/doc/samples/deployments/multihost.rst
[3] https://github.com/pboris/osprofiler

Best regards,
Boris Pavlovic



On Thu, Feb 13, 2014 at 6:38 PM, David Kranz dkr...@redhat.com wrote:

 I was recently bitten by a case where some defaults in keystone.conf were
 not appropriate for real deployment, and our puppet modules were not
 providing better values https://bugzilla.redhat.com/
 show_bug.cgi?id=1064061. Since there are hundreds (thousands?) of options
 across all the services, I am wondering whether there are other similar
 issues lurking and if we have done what we can to flush them out.

 Defaults in conf files seem to be one of the following:

 - Generic, appropriate for most situations
 - Appropriate for devstack
 - Appropriate for small, distro-based deployment
 - Appropriate for large deployment

 Upstream, I don't think there is a shared view of how defaults should be
 chosen.

 Keeping bad defaults can have a huge impact on performance and on when a
 system falls over, but the problems may not be visible until some time after
 a system gets into real use. Have the folks creating our puppet modules and
 install recommendations taken a close look at all the options and determined
 that the defaults are appropriate for deploying RHEL OSP in the
 configurations we are recommending?

  -David



Re: [openstack-dev] bad default values in conf files

2014-02-13 Thread Jay Pipes
On Thu, 2014-02-13 at 09:38 -0500, David Kranz wrote:
 I was recently bitten by a case where some defaults in keystone.conf 
 were not appropriate for real deployment, and our puppet modules were 
 not providing better values 
 https://bugzilla.redhat.com/show_bug.cgi?id=1064061. Since there are 
 hundreds (thousands?) of options across all the services, I am wondering 
 whether there are other similar issues lurking and if we have done what 
 we can to flush them out.
 
 Defaults in conf files seem to be one of the following:
 
 - Generic, appropriate for most situations
 - Appropriate for devstack
 - Appropriate for small, distro-based deployment
 - Appropriate for large deployment
 
 Upstream, I don't think there is a shared view of how defaults should be 
 chosen.
 
 Keeping bad defaults can have a huge impact on performance and on when a 
 system falls over, but the problems may not be visible until some time 
 after a system gets into real use. Have the folks creating our puppet 
 modules and install recommendations taken a close look at all the 
 options and determined
 that the defaults are appropriate for deploying RHEL OSP in the 
 configurations we are recommending?

This is a very common problem in the configuration management space,
frankly. One good example is the upstream mysql Chef cookbook keeping
ludicrously low InnoDB buffer pool, log and data file sizes. The
defaults from MySQL -- which were chosen, frankly, in the 1990s -- are
useful for nothing more than a test environment, but unfortunately they
propagate to far too many deployments with folks unaware of the serious
side-effects on performance and scalability until it's too late [1].

I think it's an excellent idea to do a review of the values in all of
the configuration files and do the following:

* Identify settings that simply aren't appropriate for anything and make
the change to a better default.

* Identify settings that need to scale with the size of the underlying
VM or host capabilities, and provide patches to the configuration file
comments that clearly indicate a recommended scaling factor. Remember
that folks writing Puppet modules, Ansible scripts, Salt SLS files, and
Chef cookbooks look first to the configuration files to get an idea of
how to set the values.
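For example, the comment convention could look something like this (the
numbers are purely illustrative, not recommendations):

    # innodb_buffer_pool_size: on a dedicated database host, scale this
    # to roughly 70-80% of available RAM; the compiled-in default is
    # only appropriate for test environments.
    innodb_buffer_pool_size = 12G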

Best,
-jay

[1] The reason I say it's too late is that for some configuration
values -- notably innodb_log_file_size and innodb_data_file_path -- it is
not possible to change the configuration values after data has been
written to disk. You need to literally dump the contents of the DBs and
reload the database after removing the files and restarting the DBs
after changing the configuration options in my.cnf. See this bug for
details on this pain in the behind:

https://tickets.opscode.com/browse/COOK-2100




Re: [openstack-dev] [Solum] Regarding language pack database schema

2014-02-13 Thread Georgy Okrokvertskhov
Hi Arati,


I would vote for Option #2 as a short-term solution. Later we could
consider using a NoSQL DB, or MariaDB, which has a COLUMN_JSON type for
storing complex types.

Thanks
Georgy


On Thu, Feb 13, 2014 at 8:12 AM, Arati Mahimane 
arati.mahim...@rackspace.com wrote:

  Hi All,

  I have been working on defining the Language pack database schema. Here
 is a link to my review which is still a WIP -
 https://review.openstack.org/#/c/71132/3.
 There are a couple of different opinions on how we should be designing the
 schema.

  Language pack has several complex attributes which are listed here -
 https://etherpad.openstack.org/p/Solum-Language-pack-json-format
 We need to support search queries on language packs based on various
 criteria. One example could be: find a language pack where type='java'
 and version > 1.4.

  Following are the two options that are currently being discussed for the
 DB schema:

 *Option 1:* Having a separate table for each complex attribute, in order
 to achieve normalization. The current schema follows this approach.
 However, this design has certain drawbacks. It will result in a lot of
 complex DB queries, and each new attribute will require a code change.

 *Option 2:* We could have a predefined subset of attributes on which we
 would support search queries. In this case, we would define columns
 (separate tables in the case of complex attributes) only for this subset
 of attributes, and all other attributes would be part of a json blob.
 With this option, we will have to go through a schema change in case we
 decide to support search queries on other attributes at a later stage.
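A minimal sketch of what Option 2 could look like in SQLAlchemy, with real
columns for the searchable subset and a text blob for everything else (all
table and column names invented for illustration):

    from sqlalchemy import Column, String, Text
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class LanguagePack(Base):
        __tablename__ = 'language_pack'
        id = Column(String(36), primary_key=True)
        # The searchable subset gets real, indexed columns...
        type = Column(String(64), index=True)
        version = Column(String(32), index=True)
        # ...and all other attributes live in a json blob.
        attributes = Column(Text)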

  I would like to know everyone's thoughts on these two approaches so that
 we can take a final decision and go ahead with one approach.
 Suggestions regarding any other approaches are welcome too!

  Thanks,
 Arati






-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


[openstack-dev] [GSoC] Call for Mentors and Participants (#openstack-gsoc)

2014-02-13 Thread Alejandro Cabrera
Hey,

Today is the last day to prepare a submission for the Google Summer of
Code project.

If interested, please volunteer by adding your name and project
affiliation here: https://wiki.openstack.org/wiki/GSoC2014#Mentors

Join us @ freenode: #openstack-gsoc for the latest updates or if you
have any questions.

Thanks!
- Alej



Re: [openstack-dev] [Openstack] [TROVE] Manual Installation: problems with trove-manage

2014-02-13 Thread Giuseppe Galeota
Hi Michael,
I'm using the only guide available:
http://docs.openstack.org/developer/trove/dev/manual_install.html.

If you can give me a useful guide, I would be grateful!

 1) Which user do I need use in order to install TROVE, root user or a
 non-root user?

   Installation should be the same as other projects in OpenStack. If it's
   not, we definitely have issues :)

  OK

 2)  Why is it necessary to run a virtual environment (virtualenv)? Is it
 the right way in order to realize a production Openstack environment?

  ? I have helped a few companies install trove, and I have _never_ run in
  a venv. Not saying you should/shouldn't, I'm just saying it's not
  necessary. I'm not sure where you got that.


OK


 3) When I run the command:

 (env)root@hostname:~#trove-manage
 --config-file=/root/trove/etc/trove/trove.conf.sample db_wipe
 trove_test.sqlite mysql fake

 I obtain this output:

 (env)root@hostname:~# trove-manage
 --config-file=/root/trove/etc/trove/trove.conf.sample db_wipe
 trove_test.sqlite mysql fake
 usage: trove-manage [-h] [--config-dir DIR] [--config-file PATH]
[--debug]
 [--log-config-append PATH] [--log-date-format
 DATE_FORMAT]
 [--log-dir LOG_DIR] [--log-file PATH]
 [--log-format FORMAT] [--nodebug] [--nouse-syslog]
 [--noverbose] [--syslog-log-facility
 SYSLOG_LOG_FACILITY]
 [--use-syslog] [--verbose] [--version]

{db_sync,db_upgrade,db_downgrade,datastore_update,datastore_version_update,db_wipe}
 ...
 trove-manage: error: unrecognized arguments: mysql fake

  Looks like you hit a bug :) I'm not sure if anyone has run that via
trove-manage before!!

 I have followed this unique guide (
http://docs.openstack.org/developer/trove/dev/manual_install.html), and
precisely this instructions:

   -

   Initialize the database:

   # trove-manage --config-file=PathToTroveConf db_wipe
trove_test.sqlite mysql fake



  where --config-file=/root/trove/etc/trove/trove.conf.sample

So:
 a) How should I initialize Trove's database?

 b) Why are the config files under
/root/trove/etc instead of /etc/trove/?


   However, if I run that command without *mysql fake*:

(env)root@hostname:~#
trove-manage --config-file=/root/trove/etc/trove/trove.conf.sample
db_wipe trove_test.sqlite

   it seems to work. In Trove's database, in fact, I can see a
lot of tables.


 Furthermore, I obtain the trove-manage: error: unrecognized
arguments: image_update
 when I run the command:

 (env)root@hostname:~#
trove-manage --config-file=/root/trove/etc/trove/trove.conf.sample
image_update mysql `nova --os-username trove --os-password trove
--os-tenant-name trove --os-auth-url http://KeystoneIp:5000/v2.0
image-list | awk '/trove-image/ {print $2}'`



  I don't see image_update in the list of commands above. I'm not sure
where you got image_update from, but I don't see it in the current
trove-manage code. If you got that from a wiki article, it's crazy
wrong!!

I have got image_update from the guide
http://docs.openstack.org/developer/trove/dev/manual_install.html , and in
particular from this section:


   -

   Setup trove to use the uploaded image. Enter the following in a single
   line, note quotes (') and backquotes(`):

   # trove-manage
--config-file=/root/trove/etc/trove/trove.conf.sample image_update
mysql `nova --os-username trove --os-password trove
--os-tenant-name trove --os-auth-url http://KeystoneIp:5000/v2.0
image-list | awk '/trove-image/ {print $2}'`
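Judging by the usage string printed earlier in this thread, image_update
appears to have been replaced by the datastore commands; from memory (so
please verify against trove-manage --help on your install), the equivalent
would be something like:

    trove-manage --config-file=/etc/trove/trove.conf datastore_update \
        mysql ''
    trove-manage --config-file=/etc/trove/trove.conf datastore_version_update \
        mysql 5.5 mysql <glance-image-id> '' 1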



  If you want synchronous help, find us in #openstack-trove.

  I will do it.

Can you help me with a more useful guide that gets Trove working?

Thank you very much!
Giuseppe


2014-02-13 17:18 GMT+01:00 Michael Basnight mbasni...@gmail.com:

 Giuseppe Galeota giuseppegale...@gmail.com writes:

  [...]
 
  1) Which user do I need use in order to install TROVE, root user or a
  non-root user?

 Installation should be the same as other projects in OpenStack. If it's
 not, we definitely have issues :)

 
  2)  Why is it necessary to run a virtual environment (virtualenv)? Is it
  the right way in order to realize a production Openstack environment?

 ? I have helped a few companies install trove, and I have _never_ run in
 a venv. Not saying you should/shouldn't, I'm just saying it's not
 necessary. I'm not sure where you got that.

 
 
 
  3) When I run the command:
 
  (env)root@hostname:~#trove-manage
  --config-file=/root/trove/etc/trove/trove.conf.sample db_wipe
  trove_test.sqlite mysql fake
 
  I obtain this output:
 
  (env)root@hostname:~# trove-manage
  --config-file=/root/trove/etc/trove/trove.conf.sample db_wipe
  trove_test.sqlite mysql fake
  usage: trove-manage [-h] [--config-dir DIR] [--config-file PATH]
 [--debug]
  

Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-13 Thread Walter A. Boring IV

On 02/13/2014 02:51 AM, Thierry Carrez wrote:

John Griffith wrote:

So we've talked about this a bit and had a number of ideas regarding
how to test and show compatibility for third-party drivers in Cinder.
This has been an eye opening experience (the number of folks that have
NEVER run tempest before, as well as the problems uncovered now that
they're trying it).

I'm even more convinced now that having vendors run these tests is a
good thing and should be required.  That being said there's a ton of
push back from my proposal to require that results from a successful
run of the tempest tests to accompany any new drivers submitted to
Cinder.

Could you describe the nature of the pushback? Is it that the tests are
too deep and reject valid drivers ? Is it that it's deemed unfair to
block new drivers while the existing ones aren't better ? Is it that
it's difficult for them to run those tests and get a report ? Or is it
because they care more about having their name covered in mainline and
not so much about having the code working properly ?


The consensus from the Cinder community for now is that we'll
log a bug for each driver after I3, stating that it hasn't passed
certification tests.  We'll then have a public record showing
drivers/vendors that haven't demonstrated functional compatibility,
and in order to close those bugs they'll be required to run the tests
and submit the results to the bug in Launchpad.

So, this seems to be the approach we're taking for Icehouse at least,
it's far from ideal IMO, however I think it's still progress and it's
definitely exposed some issues with how drivers are currently
submitted to Cinder so those are positive things that we can learn
from and improve upon in future releases.

To add some controversy and keep the original intent of having only
known tested and working drivers in the Cinder release, I am going to
propose that any driver that has not submitted successful functional
testing by RC1 be removed.  I'd at least like to see
driver maintainers try... if the test fails a test or two that's
something that can be discussed, but it seems that until now most
drivers just flat out are not even being tested.

I think there are multiple stages here.

Stage 0: no one knows if drivers work
Stage 1: we know the (potentially sad) state of the drivers that are in
the release
Stage 2: only drivers that pass tests are added, drivers that don't pass
tests have a gap analysis and a plan to fix it
Stage 3: drivers that fail tests are removed before release
Stage 4: 3rd-party testing rigs must run tests on every change in order
to stay in tree

At the very minimum you should be at stage 1 for the Icehouse release,
so I agree with your last paragraph. I'd recommend that you start the
Juno cycle at stage 2 (for new drivers), and try to reach stage 3 for
the end of the Juno release.

I have to agree with Thierry here.  I think if we can get drivers to
pass the tests in the Juno timeframe, then it's fine to remove them
during Juno.
I think the idea of having drivers run their code through tempest and work
towards passing all of those tests is a great thing for Cinder and
OpenStack in general.


What I would do different for the Icehouse release is this:

If a driver doesn't pass the certification test by Icehouse RC1, then we
have a bug filed against the driver.  I would also put a warning message
in the log for that driver that it doesn't pass the certification test.
I would not remove it from the codebase.


Also:
   if a driver hasn't even run the certification test by RC1, then we
mark the driver as uncertified and deprecated in the code and throw an
error at driver init time.
We can have an option in cinder.conf that says
ignore_uncertified_drivers=False.
If an admin wants to ignore the error, they set the flag to True, and we
let the driver init at next startup.

The admin then takes full responsibility for running uncertified code.
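A minimal sketch of that init-time guard (the option name comes from the
proposal above; everything else is invented for illustration):

    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opt(cfg.BoolOpt('ignore_uncertified_drivers',
                                  default=False))

    def check_certified(driver_name, certified):
        # Refuse to initialize an uncertified driver unless the admin
        # has explicitly opted in via cinder.conf.
        if certified or CONF.ignore_uncertified_drivers:
            return
        raise RuntimeError('Driver %s has not passed certification tests; '
                           'set ignore_uncertified_drivers=True to run it '
                           'anyway' % driver_name)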

  I think removing the drivers outright is premature for Icehouse, 
since the certification process is a new thing.
For Juno, we remove any drivers that are still marked as uncertified and 
haven't run the tests.


I think the purpose of the tests is to get vendors to actually run their
code through tempest and prove to the community that they are willing to
show that they are fixing their code.  At the end of the day, it better
serves the community and Cinder if we have many working drivers.

My $0.02,
Walt



Re: [openstack-dev] Devstack installation failed with CINDER installation

2014-02-13 Thread Ben Nemec
 

Looks like a transient pypi failure. You can either wait for pypi to get
its act together or configure pip to use the pypi.openstack.org mirror.
This is the relevant part of my ~/.pip/pip.conf file: 

[fedora@openstack .pip]$ cat pip.conf 
[global]
index-url = http://pypi.openstack.org/openstack 

I think that's enough to make it use pypi.openstack.org. 
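Equivalently -- an untested sketch, but pip also honors the PIP_INDEX_URL
environment variable -- you could set it just for the devstack run:

    export PIP_INDEX_URL=http://pypi.openstack.org/openstack
    ./stack.sh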

-Ben 

On 2014-02-13 03:31, trinath.soman...@freescale.com wrote: 

 Hi stackers- 
 
 I have an issue while installing devstack on my machine. 
 
 While configuring cinder root wrap, devstack installation failed. 
 
 Here is the complete log 
 
 .dEYRz -e /opt/stack/cinder 
 
 Obtaining file:///opt/stack/cinder 
 
 Running setup.py egg_info for package from file:///opt/stack/cinder 
 
 [pbr] Reusing existing SOURCES.txt 
 
 Requirement already satisfied (use --upgrade to upgrade): pbr>=0.6,<1.0 in 
 /opt/stack/pbr (from cinder==2014.1.dev166.g893aaa9) 
 
 Requirement already satisfied (use --upgrade to upgrade): amqplib>=0.6.1 in 
 /usr/lib/python2.7/dist-packages (from cinder==2014.1.dev166.g893aaa9) 
 
 Requirement already satisfied (use --upgrade to upgrade): anyjson>=0.3.3 in 
 /usr/lib/python2.7/dist-packages (from cinder==2014.1.dev166.g893aaa9) 
 
 Requirement already satisfied (use --upgrade to upgrade): Babel>=1.3 in 
 /usr/local/lib/python2.7/dist-packages/Babel-1.3-py2.7.egg (from 
 cinder==2014.1.dev166.g893aaa9) 
 
 Requirement already satisfied (use --upgrade to upgrade): eventlet>=0.13.0 in 
 /usr/lib/python2.7/dist-packages (from cinder==2014.1.dev166.g893aaa9) 
 
 Requirement already satisfied (use --upgrade to upgrade): greenlet>=0.3.2 in 
 /usr/lib/python2.7/dist-packages (from cinder==2014.1.dev166.g893aaa9) 
 
 Downloading/unpacking iso8601>=0.1.8 (from cinder==2014.1.dev166.g893aaa9) 
 
 Could not fetch URL https://pypi.python.org/simple/iso8601/: There was a 
 problem confirming the ssl certificate: urlopen error [Errno 1] _ssl.c:504: 
 error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify 
 failed 
 
 Will skip URL https://pypi.python.org/simple/iso8601/ when looking for 
 download links for iso8601>=0.1.8 (from cinder==2014.1.dev166.g893aaa9) 
 
 Could not fetch URL https://pypi.python.org/simple/: There was a problem 
 confirming the ssl certificate: urlopen error [Errno 1] _ssl.c:504: 
 error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify 
 failed 
 
 Will skip URL https://pypi.python.org/simple/ when looking for download links 
 for iso8601>=0.1.8 (from cinder==2014.1.dev166.g893aaa9) 
 
 Cannot fetch index base URL https://pypi.python.org/simple/ 
 
 Could not fetch URL https://pypi.python.org/simple/iso8601/: There was a 
 problem confirming the ssl certificate: urlopen error [Errno 1] _ssl.c:504: 
 error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify 
 failed 
 
 Will skip URL https://pypi.python.org/simple/iso8601/ when looking for 
 download links for iso8601=0.1.8 (from cinder==2014.1.dev166.g893aaa9) 
 
 Could not find any downloads that satisfy the requirement iso8601>=0.1.8 
 (from cinder==2014.1.dev166.g893aaa9) 
 
 Cleaning up... 
 
 No distributions at all found for iso8601>=0.1.8 (from 
 cinder==2014.1.dev166.g893aaa9) 
 
 Storing complete log in /home/stack/.pip/pip.log 
 
 + safe_chown -R stack /opt/stack/cinder/cinder.egg-info 
 
 + _safe_permission_operation chown -R stack /opt/stack/cinder/cinder.egg-info 
 
 + args=($@) 
 
 + local args 
 
 + local last 
 
 + local sudo_cmd 
 
 + local dir_to_check 
 
 + let 'last=4 - 1' 
 
 + dir_to_check=/opt/stack/cinder/cinder.egg-info 
 
 + '[' '!' -d /opt/stack/cinder/cinder.egg-info ']' 
 
 + is_nfs_directory /opt/stack/cinder/cinder.egg-info 
 
 ++ stat -f -L -c %T /opt/stack/cinder/cinder.egg-info 
 
 + local mount_type=ext2/ext3 
 
 + test ext2/ext3 == nfs 
 
 + [[ False = True ]] 
 
 + sudo_cmd=sudo 
 
 + sudo chown -R stack /opt/stack/cinder/cinder.egg-info 
 
 + '[' True = True ']' 
 
 + '[' 0 -eq 0 ']' 
 
 + cd /opt/stack/cinder 
 
 + git reset --hard 
 
 HEAD is now at 893aaa9 Merge GlusterFS: Fix create/restore backup 
 
 + configure_cinder 
 
 + [[ ! -d /etc/cinder ]] 
 
 + sudo chown stack /etc/cinder 
 
 + cp -p /opt/stack/cinder/etc/cinder/policy.json /etc/cinder 
 
 + configure_cinder_rootwrap 
 
 ++ get_rootwrap_location cinder 
 
 ++ local module=cinder 
 
 +++ get_python_exec_prefix 
 
 +++ is_fedora 
 
 +++ [[ -z Ubuntu ]] 
 
 +++ '[' Ubuntu = Fedora ']' 
 
 +++ '[' Ubuntu = 'Red Hat' ']' 
 
 +++ '[' Ubuntu = CentOS ']' 
 
 +++ is_suse 
 
 +++ [[ -z Ubuntu ]] 
 
 +++ '[' Ubuntu = openSUSE ']' 
 
 +++ '[' Ubuntu = 'SUSE LINUX' ']' 
 
 +++ echo /usr/local/bin 
 
 ++ echo /usr/local/bin/cinder-rootwrap 
 
 + CINDER_ROOTWRAP=/usr/local/bin/cinder-rootwrap 
 
 + [[ ! -x /usr/local/bin/cinder-rootwrap ]] 
 
 ++ get_rootwrap_location oslo 
 
 ++ local module=oslo 
 
 +++ get_python_exec_prefix 
 
 +++ is_fedora 
 
 +++ [[ -z Ubuntu ]] 
 
 +++ '[' Ubuntu = Fedora ']' 
 
 +++ '[' Ubuntu = 'Red Hat' ']' 
 

Re: [openstack-dev] [Neutron]A problem produced by accidentally deleting DHCP port

2014-02-13 Thread Carl Baldwin
Hi,

Good find.  This looks like a duplicate of a bug that is in progress
[1].  Stephen Ma has a review up that addresses it [2].

Carl

[1] https://bugs.launchpad.net/neutron/+bug/1244853
[2] https://review.openstack.org/#/c/57954

On Thu, Feb 13, 2014 at 1:19 AM, shihanzhang ayshihanzh...@126.com wrote:
 Howdy folks!
 I am a beginner with neutron, and there is a problem which has confused
 me. In my environment, using the openvswitch plugin, I deleted the dhcp
 port by mistake. Then I found that the VMs in the subnet whose dhcp port
 was deleted could not get an IP. The reason is that when a dhcp port is
 deleted, neutron will recreate the dhcp port automatically, but the old
 VIF TAP device will not be deleted, so at that point the same IP address
 ends up on two TAP devices.
 Even if the problem is caused by an operator error, I think the dhcp
 port should not be allowed to be deleted, because the port is created by
 neutron automatically, not by the tenant. Similarly, the port on a
 router is not allowed to be deleted.
 I want to know whether this is considered a problem?
 This is the bug I have
 filed: https://bugs.launchpad.net/neutron/+bug/1279683
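For what it's worth, the guard being requested would presumably look
something like this sketch (DEVICE_OWNER_DHCP is a real constant in
neutron.common.constants; the function and the error are invented):

    from neutron.common import constants

    def prevent_dhcp_port_deletion(port):
        # Ports that neutron creates and manages itself (like router
        # interface ports) should not be directly deletable by users.
        if port['device_owner'] == constants.DEVICE_OWNER_DHCP:
            raise ValueError('port %s is owned by the DHCP service and '
                             'cannot be deleted directly' % port['id'])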





Re: [openstack-dev] bad default values in conf files

2014-02-13 Thread Ben Nemec

On 2014-02-13 09:01, Clint Byrum wrote:

Excerpts from David Kranz's message of 2014-02-13 06:38:52 -0800:

I was recently bitten by a case where some defaults in keystone.conf
were not appropriate for real deployment, and our puppet modules were
not providing better values
https://bugzilla.redhat.com/show_bug.cgi?id=1064061.

Just taking a look at that issue, Keystone's PKI and revocation are
causing all kinds of issues with performance that are being tackled with
a bit of a redesign. I doubt we can find a cache timeout setting that
will work generically for everyone, but if we make detecting revocation
scale, we won't have to.

The default probably is too low, but raising it too high will cause
concern with those who want revoked tokens to take effect immediately
and are willing to scale the backend to get that result.

Since there are
hundreds (thousands?) of options across all the services, I am wondering
whether there are other similar issues lurking and if we have done what
we can to flush them out.

Defaults in conf files seem to be one of the following:

- Generic, appropriate for most situations
- Appropriate for devstack
- Appropriate for small, distro-based deployment
- Appropriate for large deployment

Upstream, I don't think there is a shared view of how defaults should be
chosen.

I don't know that we have been clear enough about this, but nobody has
ever challenged the assertion we've been making for a while in TripleO
which is that OpenStack _must_ have production defaults. We don't make
OpenStack for devstack.

Especially since devstack has config overrides in place to make sure
everything is set up the way it needs to be.  There's absolutely no
reason to set a default because devstack needs it - just have devstack
set it when it runs.

Of course, what qualifies as production-ready for my single-node
OpenStack installation may not be appropriate for a 1000-node
installation.  Basically what you talked about above with Keystone.
Some of those defaults might not be as easy to set, but it should be a
more manageable subset of options that have that problem.

In TripleO, we consider it a bug when we can't run with a default value
that isn't directly related to whatever makes that cloud unique. So
the virt driver: meh, that's a choice, but leaving file injection on is
really not appropriate for 99% of users in production. Also you'll see
quite a few commits from me in the keystone SQL token driver trying to
speed it up because the old default token backend was KVS (in-memory),
which was fast, but REALLY not useful in production. We found these
things by running defaults and noticing in a long running cloud where
the performance problems are, and we intend to keep doing that.

So perhaps we should encode this assertion in
https://wiki.openstack.org/wiki/ReviewChecklist

+1

Keeping bad defaults can have a huge impact on performance and on when a
system falls over, but the problems may not be visible until some time
after a system gets into real use. Have the folks creating our puppet
modules and install recommendations taken a close look at all the
options and determined
that the defaults are appropriate for deploying RHEL OSP in the
configurations we are recommending?

TripleO is the official deployment program. We are taking the approach
described above. We're standing up several smallish (50 nodes) clouds
with the intention of testing the defaults on real hardware in the gate
of OpenStack eventually.





Re: [openstack-dev] Interested in attracting new contributors?

2014-02-13 Thread Julie Pichon
Dolph Mathews dolph.math...@gmail.com wrote:
 On Wed, Feb 12, 2014 at 8:30 AM, Julie Pichon jpic...@redhat.com wrote:
 
 
  I can definitely sympathise with the comment in Stefano's article that
  there are not enough easy tasks / simple issues for newcomers. There's
  a lot to learn already when you're starting out (git, gerrit, python,
  devstack, ...) and simple bugs are so hard to find - something that
  will take a few minutes to an existing contributor will take much
  longer for someone who's still figuring out where to get the code
  from.
 
 
 My counterargument to this is to jump straight into
 http://review.openstack.org/ (which happens to be publicly available to
 newcomers).
 
 Easy tasks / simple issues (i.e. nits!) are *frequently* cited in code
 review, and although our community tends to get hung up on seeing them
  fixed prior to merging the patchset in question (sometimes with good reason,
 sometimes due to arbitrary opinion), that doesn't always happen (for
 example, it's not worth delaying approval of an important patch to see a
 typo fixed in an inline comment) and isn't always appropriate (such as,
 this other thing over here should be refactored).
 
 There's a lot of such scenarios where new contributors can quickly find
  things to contribute, or at least provide incredibly valuable feedback to
 the project in the form of reviews! As a bonus, new contributors jumping
 straight into reviews tend to get up to speed on the code base *much* more
 quickly than they otherwise would (IMO), as they become directly involved
 in design discussions, etc.

I wouldn't consider this a counterargument but complementary, it's a
great suggestion. I try to find out what people are interested in
helping out with first and not assume that it's necessarily about
submitting a code contribution either (though so far it's mostly been
the case). Different approaches.

I just realised that Reviewing isn't in the table of contents on the
how to contribute landing page [1], so it probably had low visibility. I
made a first stab at adding something more explicit about it, feel free
to expand and clarify.

Thanks to the people who contributed to this thread. I look forward to
seeing new projects popping up on OpenHatch soon :-)

Julie

[1] https://wiki.openstack.org/wiki/HowToContribute

 
 
 
  [1]
  http://opensource.com/business/14/2/analyzing-contributions-to-openstack
  [2] http://openhatch.org/
  [3] http://openhatch.org/+projects/OpenStack%20dashboard%20%28Horizon%29
  [4] https://openhatch.org/wiki/Contacting_new_contributors
 


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-13 Thread Dean Troyer
On Thu, Feb 13, 2014 at 4:51 AM, Thierry Carrez thie...@openstack.orgwrote:

 John Griffith wrote:
  To add some controversy and keep the original intent of having only
  known tested and working drivers in the Cinder release, I am going to
  propose that any driver that has not submitted successful functional
  testing by RC1 be removed.  I'd at least like to see
  driver maintainers try... if the test fails a test or two that's
  something that can be discussed, but it seems that until now most
  drivers just flat out are not even being tested.


+1


 I think there are multiple stages here.

 Stage 0: no one knows if drivers work
 Stage 1: we know the (potentially sad) state of the drivers that are in
 the release
 Stage 2: only drivers that pass tests are added, drivers that don't pass
 tests have a gap analysis and a plan to fix it
 Stage 3: drivers that fail tests are removed before release
 Stage 4: 3rd-party testing rigs must run tests on every change in order
 to stay in tree

 At the very minimum you should be at stage 1 for the Icehouse release,
 so I agree with your last paragraph. I'd recommend that you start the
 Juno cycle at stage 2 (for new drivers), and try to reach stage 3 for
 the end of the Juno release.


Are any of these drivers new for Icehouse?  I think adding broken drivers
in Icehouse is a mistake.  The timing WRT Icehouse release schedule is
unfortunate but so is shipping immature drivers that have to be supported
and possibly deprecated.  Should new drivers that are lacking have some
not-quite-supported status to allow them to be removed in Juno if not
brought up to par?  Or moved into cinder/contrib?

I don't mean to be picking on Cinder here; this seems to be a recurring theme
in OpenStack.  I think we benefit from strengthening the precedent that
makes it harder to get things in that are not ready even if the timing is
inconvenient.  We're seeing this in project incubation and I think we all
benefit in the end.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [Openstack] [TROVE] Manual Installation: problems with trove-manage

2014-02-13 Thread Michael Basnight
Giuseppe Galeota giuseppegale...@gmail.com writes:

 Hi Michael,
 I'm using the only guide available:
 http://docs.openstack.org/developer/trove/dev/manual_install.html.

That developer guide uses virtualenv, but it's by no means necessary.

 Can you help me with a more useful guide that gets Trove working?

I'd love to help you out. #openstack-trove has a bunch of people who have
installed trove for dev and prod use, so let's make the document better!




Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-13 Thread John Griffith
On Thu, Feb 13, 2014 at 9:59 AM, Walter A. Boring IV
walter.bor...@hp.com wrote:
 On 02/13/2014 02:51 AM, Thierry Carrez wrote:

 John Griffith wrote:

 So we've talked about this a bit and had a number of ideas regarding
 how to test and show compatibility for third-party drivers in Cinder.
 This has been an eye opening experience (the number of folks that have
 NEVER run tempest before, as well as the problems uncovered now that
 they're trying it).

 I'm even more convinced now that having vendors run these tests is a
 good thing and should be required.  That being said there's a ton of
 push back from my proposal to require that results from a successful
 run of the tempest tests to accompany any new drivers submitted to
 Cinder.

 Could you describe the nature of the pushback? Is it that the tests are
 too deep and reject valid drivers ? Is it that it's deemed unfair to
 block new drivers while the existing ones aren't better ? Is it that
 it's difficult for them to run those tests and get a report ? Or is it
 because they care more about having their name covered in mainline and
 not so much about having the code working properly ?

 The consensus from the Cinder community for now is that we'll
 log a bug for each driver after I3, stating that it hasn't passed
 certification tests.  We'll then have a public record showing
 drivers/vendors that haven't demonstrated functional compatibility,
 and in order to close those bugs they'll be required to run the tests
 and submit the results to the bug in Launchpad.

 So, this seems to be the approach we're taking for Icehouse at least,
 it's far from ideal IMO, however I think it's still progress and it's
 definitely exposed some issues with how drivers are currently
 submitted to Cinder so those are positive things that we can learn
 from and improve upon in future releases.

 To add some controversy and keep the original intent of having only
 known tested and working drivers in the Cinder release, I am going to
 propose that any driver that has not submitted successful functional
 testing by RC1 be removed.  I'd at least like to see
 driver maintainers try... if the test fails a test or two that's
 something that can be discussed, but it seems that until now most
 drivers just flat out are not even being tested.

 I think there are multiple stages here.

 Stage 0: no one knows if drivers work
 Stage 1: we know the (potentially sad) state of the drivers that are in
 the release
 Stage 2: only drivers that pass tests are added, drivers that don't pass
 tests have a gap analysis and a plan to fix it
 Stage 3: drivers that fail tests are removed before release
 Stage 4: 3rd-party testing rigs must run tests on every change in order
 to stay in tree

 At the very minimum you should be at stage 1 for the Icehouse release,
 so I agree with your last paragraph. I'd recommend that you start the
 Juno cycle at stage 2 (for new drivers), and try to reach stage 3 for
 the end of the Juno release.

 I have to agree with Thierry here.  I think if we can get drivers to pass
 the tests
 in the Juno timeframe, then it's fine to remove them during Juno.
 I think the idea of having drivers run their code through tempest and work
 towards passing all of those tests is a great thing for Cinder and OpenStack
 in general.

 What I would do different for the Icehouse release is this:

 If a driver doesn't pass the certification test by IceHouse RC1, then we
 have a bug filed
 against the driver.   I would also put a warning message in the log for that
 driver that it
 doesn't pass the certification test.  I would not remove it from the
 codebase.

 Also:
if a driver hasn't even run the certification test by RC1, then we mark
 the driver as
 uncertified and deprecated in the code and throw an error at driver init
 time.
 We can have an option in cinder.conf that says
 ignore_uncertified_drivers=False.
 If an admin wants to ignore the error, they set the flag to True, and we let
 the driver init at next startup.
 The admin then takes full responsibility for running uncertified code.

   I think removing the drivers outright is premature for Icehouse, since the
 certification process is a new thing.
 For Juno, we remove any drivers that are still marked as uncertified and
 haven't run the tests.

 I think the purpose of the tests is to get vendors to actually run their
 code through tempest and
 prove to the community that they are willing to show that they are fixing
 their code.  At the end of the day,
 it better serves the community and Cinder if we have many working drivers.

 My $0.02,
 Walt


I'm fine with all of the recommendations above; however, I do want to
point out that having your driver/device work in OpenStack should not
be something new to you.  That's what's so 

Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-13 Thread Mike Perez
On Thu, Feb 13, 2014 at 9:30 AM, Dean Troyer dtro...@gmail.com wrote:

 Are any of these drivers new for Icehouse?  I think adding broken drivers
 in Icehouse is a mistake.  The timing WRT Icehouse release schedule is
 unfortunate but so is shipping immature drivers that have to be supported
 and possibly deprecated.  Should new drivers that are lacking have some
 not-quite-supported status to allow them to be removed in Juno if not
 brought up to par?  Or moved into cinder/contrib?

 I don't mean to be picking on Cinder here, this seems to be recurring
 theme in OpenStack.  I think we benefit from strengthening the precedent
 that makes it harder to get things in that are not ready even if the timing
 is inconvenient.  We're seeing this in project incubation and I think we
 all benefit in the end.

 dt


Since the cert tests were introduced, new drivers have been required to
pass in order to be merged.


-Mike Perez


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-13 Thread Avishay Traeger
Walter A. Boring IV walter.bor...@hp.com wrote on 02/13/2014 06:59:38 PM:
 What I would do different for the Icehouse release is this:

 If a driver doesn't pass the certification test by Icehouse RC1, then we
 have a bug filed against the driver.  I would also put a warning message
 in the log for that driver that it doesn't pass the certification test.
 I would not remove it from the codebase.

 Also:
 if a driver hasn't even run the certification test by RC1, then we
 mark the driver as uncertified and deprecated in the code and throw an
 error at driver init time.
 We can have an option in cinder.conf that says
 ignore_uncertified_drivers=False.
 If an admin wants to ignore the error, they set the flag to True, and we
 let the driver init at next startup.
 The admin then takes full responsibility for running uncertified code.

 I think removing the drivers outright is premature for Icehouse,
 since the certification process is a new thing.
 For Juno, we remove any drivers that are still marked as uncertified and
 haven't run the tests.

 I think the purpose of the tests is to get vendors to actually run their
 code through tempest and prove to the community that they are willing to
 show that they are fixing their code.  At the end of the day, it better
 serves the community and Cinder if we have many working drivers.

 My $0.02,
 Walt


I like this.  Make that $0.04 now :)

Avishay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Interest in discussing vendor plugins for L3 services?

2014-02-13 Thread Kanzhe Jiang
I am interested too. UTC-8.


On Wed, Feb 12, 2014 at 11:38 PM, Gary Duan garyd...@gmail.com wrote:

 I'm interested in the discussion. UTC-8.

 Gary


 On Wed, Feb 12, 2014 at 10:22 AM, Mandeep Dhami 
 dh...@noironetworks.com wrote:


 I would be interested as well (UTC-8).

 Regards,
 Mandeep



 On Wed, Feb 12, 2014 at 8:18 AM, Eugene Nikanorov 
 enikano...@mirantis.com wrote:

 I'd be interested too.

 Thanks,
 Eugene.


 On Wed, Feb 12, 2014 at 7:51 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 Paul,

  I'm interested in joining the discussion.  UTC-7.  Any word on when
 this will take place?

 Carl

 On Mon, Feb 3, 2014 at 3:19 PM, Paul Michali p...@cisco.com wrote:
  I'd like to see if there is interest in discussing vendor plugins for
 L3
  services. The goal is to strive for consistency across vendor
  plugins/drivers and across service types (if possible/sensible). Some
 of
  this could/should apply to reference drivers as well. I'm thinking
 about
  these topics (based on questions I've had on VPNaaS - feel free to
 add to
  the list):
 
  How to handle vendor specific validation (e.g. say a vendor has
 restrictions
  or added capabilities compared to the reference drivers for
 attributes).
  Providing client feedback (e.g. should help and validation be
 extended to
  include vendor capabilities or should it be delegated to server
 reporting?)
  Handling and reporting of errors to the user (e.g. how to indicate to
 the
  user that a failure has occurred establishing a IPSec tunnel in device
  driver?)
  Persistence of vendor specific information (e.g. should new tables be
 used
  or should/can existing reference tables be extended?).
  Provider selection for resources (e.g. should we allow --provider
 attribute
  on VPN IPSec policies to have vendor specific policies or should we
 rely on
  checks at connection creation for policy compatibility?)
  Handling of multiple device drivers per vendor (e.g. have service
 driver
  determine which device driver to send RPC requests, or have agent
 determine
  what driver requests should go to - say based on the router type)
 
  If you have an interest, please reply to me and include some
 days/times that
  would be good for you, and I'll send out a notice on the ML of the
 time/date
  and we can discuss.
 
   Looking forward to hearing from you!
 
  PCM (Paul Michali)
 
  MAIL  p...@cisco.com
  IRCpcm_  (irc.freenode.net)
  TW@pmichali
  GPG key4525ECC253E31A83
  Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-13 Thread John Griffith
On Thu, Feb 13, 2014 at 10:30 AM, Dean Troyer dtro...@gmail.com wrote:
 On Thu, Feb 13, 2014 at 4:51 AM, Thierry Carrez thie...@openstack.org
 wrote:

 John Griffith wrote:
  To add some controversy and keep the original intent of having only
   known tested and working drivers in the Cinder release, I am going to
   propose that any driver that has not submitted successful functional
   testing by RC1 be removed.  I'd at least like to see
   driver maintainers try... if the driver fails a test or two that's
   something that can be discussed, but it seems that until now most
   drivers just flat out are not even being tested.


 +1


 I think there are multiple stages here.

 Stage 0: no one knows if drivers work
 Stage 1: we know the (potentially sad) state of the drivers that are in
 the release
 Stage 2: only drivers that pass tests are added, drivers that don't pass
 tests have a gap analysis and a plan to fix it
 Stage 3: drivers that fail tests are removed before release
 Stage 4: 3rd-party testing rigs must run tests on every change in order
 to stay in tree

 At the very minimum you should be at stage 1 for the Icehouse release,
 so I agree with your last paragraph. I'd recommend that you start the
 Juno cycle at stage 2 (for new drivers), and try to reach stage 3 for
 the end of the Juno release.


 Are any of these drivers new for Icehouse?  I think adding broken drivers in
 Icehouse is a mistake.  The timing WRT Icehouse release schedule is
 unfortunate but so is shipping immature drivers that have to be supported
 and possibly deprecated.  Should new drivers that are lacking have some
 not-quite-supported status to allow them to be removed in Juno if not
 brought up to par?  Or moved into cinder/contrib?

Yes, there are a boatload of new drivers being added.


 I don't mean to be picking on Cinder here, this seems to be recurring theme
 in OpenStack.  I think we benefit from strengthening the precedent that
 makes it harder to get things in that are not ready even if the timing is
 inconvenient.  We're seeing this in project incubation and I think we all
 benefit in the end.

 dt

 --

 Dean Troyer
 dtro...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I have another tack we can take on this in the interim. I like the
contrib dir idea raised by Dean; a hybrid of that and the original
proposal is that we leave certification optional, but publish a
certified driver list.  We can also use the contrib idea with that as
well, so the contrib dir would denote drivers that are not officially
certified.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] first review jam session redux

2014-02-13 Thread Devananda van der Veen
Just a quick follow-up to our first review jam session. We got 5 patches
landed in the server and 3 in the client, and zuul is merging another 5 right
now.

We started an etherpad part-way through
  https://etherpad.openstack.org/p/IronicReviewDay
Let's continue to use that to track work that spins out of these sessions.

I think this was great. We got a lot accomplished in very little time --
let's plan to do this again next Thursday, 8am PST (16:00 GMT).

Let's also have a shorter review session at the same time on Monday
morning, before the meeting.

Cheers!
Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Mission Statement wording

2014-02-13 Thread Sergey Lukjanov
We had a quick vote at the meeting and agreed on this one (To provide a
scalable data processing stack and associated management interfaces).


On Thu, Feb 13, 2014 at 6:53 PM, Andrew Lazarev alaza...@mirantis.comwrote:

 Short version looks good to me.

 Andrew.


 On Thu, Feb 13, 2014 at 4:29 AM, Sergey Lukjanov 
 slukja...@mirantis.com wrote:

 Hi folks,

 I'm working now on adding Savanna's mission statement to governance docs
 [0]. There are some comments on our current one to make it simpler and
 remove marketing-like stuff.

 So, current option is:

 To provide a scalable data processing stack and associated management
 interfaces.

 (thanks for Doug for proposing it).

 So, please, share your objections (and suggestions too). Additionally I'd
 like to talk about it on todays IRC meeting.

 Thanks.

 [0] https://review.openstack.org/#/c/71045/1/reference/programs.yaml

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.





-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Devstack installation failed with CINDER installation

2014-02-13 Thread Asselin, Ramy
Thanks Ben! This worked for me too!
The pip.conf file wasn’t there. I created it (as root) and added the 2 lines 
you suggested.
I can now run Tox without running into pypi timeout / dependency issues.

Ramy



From: Ben Nemec [mailto:openst...@nemebean.com]
Sent: Thursday, February 13, 2014 8:56 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Devstack installation failed with CINDER 
installation


Looks like a transient pypi failure.  You can either wait for pypi to get its 
act together or configure pip to use the pypi.openstack.org mirror.  This is 
the relevant part of my ~/.pip/pip.conf file:

[fedora@openstack .pip]$ cat pip.conf
[global]
index-url = http://pypi.openstack.org/openstack

I think that's enough to make it use pypi.openstack.org.

-Ben

On 2014-02-13 03:31, 
trinath.soman...@freescale.com wrote:
Hi stackers-

I have an issue while installing devstack on my machine.

While configuring cider root wrap, devstack installation failed.

Here is the complete log

.dEYRz -e /opt/stack/cinder
Obtaining file:///opt/stack/cinder
  Running setup.py egg_info for package from file:///opt/stack/cinder
[pbr] Reusing existing SOURCES.txt
Requirement already satisfied (use --upgrade to upgrade): pbr>=0.6,<1.0 in
/opt/stack/pbr (from cinder==2014.1.dev166.g893aaa9)
Requirement already satisfied (use --upgrade to upgrade): amqplib>=0.6.1 in
/usr/lib/python2.7/dist-packages (from cinder==2014.1.dev166.g893aaa9)
Requirement already satisfied (use --upgrade to upgrade): anyjson>=0.3.3 in
/usr/lib/python2.7/dist-packages (from cinder==2014.1.dev166.g893aaa9)
Requirement already satisfied (use --upgrade to upgrade): Babel>=1.3 in
/usr/local/lib/python2.7/dist-packages/Babel-1.3-py2.7.egg (from
cinder==2014.1.dev166.g893aaa9)
Requirement already satisfied (use --upgrade to upgrade): eventlet>=0.13.0 in
/usr/lib/python2.7/dist-packages (from cinder==2014.1.dev166.g893aaa9)
Requirement already satisfied (use --upgrade to upgrade): greenlet>=0.3.2 in
/usr/lib/python2.7/dist-packages (from cinder==2014.1.dev166.g893aaa9)
Downloading/unpacking iso8601>=0.1.8 (from cinder==2014.1.dev166.g893aaa9)
  Could not fetch URL https://pypi.python.org/simple/iso8601/: There was a 
problem confirming the ssl certificate: urlopen error [Errno 1] _ssl.c:504: 
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify 
failed
  Will skip URL https://pypi.python.org/simple/iso8601/ when looking for 
download links for iso8601>=0.1.8 (from cinder==2014.1.dev166.g893aaa9)
  Could not fetch URL https://pypi.python.org/simple/: There was a problem 
confirming the ssl certificate: urlopen error [Errno 1] _ssl.c:504: 
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify 
failed
  Will skip URL https://pypi.python.org/simple/ when looking for download links 
for iso8601>=0.1.8 (from cinder==2014.1.dev166.g893aaa9)
  Cannot fetch index base URL https://pypi.python.org/simple/
  Could not fetch URL https://pypi.python.org/simple/iso8601/: There was a 
problem confirming the ssl certificate: urlopen error [Errno 1] _ssl.c:504: 
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify 
failed
  Will skip URL https://pypi.python.org/simple/iso8601/ when looking for 
download links for iso8601>=0.1.8 (from cinder==2014.1.dev166.g893aaa9)
  Could not find any downloads that satisfy the requirement iso8601>=0.1.8 
(from cinder==2014.1.dev166.g893aaa9)
Cleaning up...
No distributions at all found for iso8601>=0.1.8 (from 
cinder==2014.1.dev166.g893aaa9)
Storing complete log in /home/stack/.pip/pip.log
+ safe_chown -R stack /opt/stack/cinder/cinder.egg-info
+ _safe_permission_operation chown -R stack /opt/stack/cinder/cinder.egg-info
+ args=($@)
+ local args
+ local last
+ local sudo_cmd
+ local dir_to_check
+ let 'last=4 - 1'
+ dir_to_check=/opt/stack/cinder/cinder.egg-info
+ '[' '!' -d /opt/stack/cinder/cinder.egg-info ']'
+ is_nfs_directory /opt/stack/cinder/cinder.egg-info
++ stat -f -L -c %T /opt/stack/cinder/cinder.egg-info
+ local mount_type=ext2/ext3
+ test ext2/ext3 == nfs
+ [[ False = True ]]
+ sudo_cmd=sudo
+ sudo chown -R stack /opt/stack/cinder/cinder.egg-info
+ '[' True = True ']'
+ '[' 0 -eq 0 ']'
+ cd /opt/stack/cinder
+ git reset --hard
HEAD is now at 893aaa9 Merge GlusterFS: Fix create/restore backup
+ configure_cinder
+ [[ ! -d /etc/cinder ]]
+ sudo chown stack /etc/cinder
+ cp -p /opt/stack/cinder/etc/cinder/policy.json /etc/cinder
+ configure_cinder_rootwrap
++ get_rootwrap_location cinder
++ local module=cinder
+++ get_python_exec_prefix
+++ is_fedora
+++ [[ -z Ubuntu ]]
+++ '[' Ubuntu = Fedora ']'
+++ '[' Ubuntu = 'Red Hat' ']'
+++ '[' Ubuntu = CentOS ']'
+++ is_suse
+++ [[ -z Ubuntu ]]
+++ '[' Ubuntu = openSUSE ']'
+++ '[' Ubuntu = 'SUSE LINUX' ']'
+++ echo /usr/local/bin
++ echo /usr/local/bin/cinder-rootwrap
+ CINDER_ROOTWRAP=/usr/local/bin/cinder-rootwrap
+ 

Re: [openstack-dev] [Ironic] first review jam session redux

2014-02-13 Thread Devananda van der Veen
On Thu, Feb 13, 2014 at 11:06 AM, Chris K nobody...@gmail.com wrote:

 I think this was great. We got a lot accomplished in very little time --
 let's plan to do this again next Thursday, 8am PST.
 Totally +1 from me

 Let's also have a shorter review session at the same time on
 Monday morning, before the meeting.
 Would this be another review session or a recap before the meeting?


I'm thinking another review session. Besides landing all the things, we'll
also see if there are important things we're blocked on that need to be
brought up, and possibly solve them before the meeting :)

-Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Interested in attracting new contributors?

2014-02-13 Thread Ben Nemec

On 2014-02-12 15:34, Adrian Otto wrote:

On Feb 12, 2014, at 12:41 PM, Ben Nemec openst...@nemebean.com
 wrote:


On 2014-02-12 13:48, Adrian Otto wrote:

On Feb 12, 2014, at 10:13 AM, Ben Nemec openst...@nemebean.com
wrote:

On 2014-02-12 09:51, Jesse Noller wrote:
On Feb 12, 2014, at 8:30 AM, Julie Pichon jpic...@redhat.com 
wrote:

Hi folks,
Stefano's post on how to make contributions to OpenStack easier 
[1]
finally stirred me into writing about something that vkmc and 
myself
have been doing on the side for a few months to help new 
contributors

to get involved.
Some of you may be aware of OpenHatch [2], a non-profit dedicated 
to
helping newcomers get started in open-source. About 6 months ago 
we

created a project page for Horizon [3], filled in a few high level
details, set ourselves up as mentors. Since then people have been
expressing interest in the project and a number of them got a 
patch
submitted and approved, a couple are sticking around (often 
helping out
with bug triaging, as confirming new bugs is one of the few tasks 
one

can help out with when only having limited time).
I can definitely sympathise with the comment in Stefano's article 
that
there are not enough easy tasks / simple issues for newcomers. 
There's
a lot to learn already when you're starting out (git, gerrit, 
python,
devstack, ...) and simple bugs are so hard to find - something 
that

will take a few minutes to an existing contributor will take much
longer for someone who's still figuring out where to get the code
from. Unfortunately it's not uncommon for existing contributors to 
take
on tasks marked as low-hanging-fruit because it's only 5 minutes 
(I
can understand this coming up to an RC but otherwise 
low-hanging-fruits
are often low priority nits that could wait a little bit longer). 
In
Horizon the low-hanging-fruits definitely get snatched up quickly 
and I

try to keep a list of typos or other low impact, trivial bugs that
would make good first tasks for people reaching out via OpenHatch.
OpenHatch doesn't spam, you get one email a week if one or more 
people
indicated they want to help. The initial effort is not 
time-consuming,

following OpenHatch's advice [4] you can refine a nice initial
contact email that helps you get people started and understand 
what
they are interested in quickly. I don't find the time commitment 
to be

too much so far, and it's incredibly gratifying to see someone
submitting their first patch after you answered a couple of 
questions
or helped resolve a hairy git issue. I'm happy to chat about it 
more,

if you're curious or have any questions.
In any case if you'd like to attract more contributors to your 
project,
and/or help newcomers get started in open-source, consider adding 
your

project to OpenHatch too!
Cheers,
Julie

+10
There’s been quite a bit of talk about this - but not necessarily 
on

the dev list. I think openhatch is great - mentorship programs in
general go a *long* way to help raise up and gain new people. Core
Python has had this issue for awhile, and many other large OSS
projects continue to suffer from it (“barrier to entry too high”).
Some random thoughts:
I’d like to see something like Solum’s Contributing page:
https://wiki.openstack.org/wiki/Solum/Contributing
Expanded a little and potentially be the recommended “intro to
contribution” guide -
https://wiki.openstack.org/wiki/How_To_Contribute is good, but a 
more
accessible version goes a long way. You want to show them how easy 
/

fast it is, not all of the options at once.
So, glancing over the Solum page, I don't see anything specific to 
Solum in there besides a few paths in examples.  It's basically a 
condensed version of https://wiki.openstack.org/wiki/GerritWorkflow 
sans a lot of the detail.  This might be a good thing to add as a 
QuickStart section on that wiki page (which is linked from the how 
to contribute page, although maybe not as prominently as it should 
be).  But, a lot of that detail is needed before a change is going 
to be accepted anyway.  I'm not sure giving a new contributor just 
the bare minimum is actually doing them any favors.  Without letting 
them know things like how to format a commit message and configure 
their ssh keys on Gerrit, they aren't going to be able to get a 
change accepted anyway and IMHO they're likely to just give up 
anyway (and possibly waste some reviewer time in the process).
The key point I'd like to emphasize is that we should not optimize 
for

the ease and comfort of the incumbent OpenStack developers and
reviewers. Instead, we should focus effort on welcoming new
contribution. I like to think about this through a long term lens. I
believe that long lived organizations thrive based size and diversity
of their membership. I'm not saying we disregard quality and
efficiency of our conduct, but we should place a higher value on
making OpenStack a community that people are delighted to join.


I'm not disagreeing, but there has to be a balance. 

Re: [openstack-dev] [Ironic] [TripleO] Goal setting // progress towards integration

2014-02-13 Thread Dan Smith
 I would also like to see CI (either third party or in the gate) for
 the nova driver before merging it. There's a chicken and egg problem
 here if its in the gate, but I'd like to see it at least proposed as a
 review.

Yeah, I think that the existing nova-baremetal driver is kinda frozen in
a pre-deprecation state right now, which gives it a special pass on the
CI requirement. To me, I think it makes sense to avoid ripping it out
since it's already on ice.

However, for the Ironic driver, I would definitely rather see real CI up
_and_ working before we merge it. I think that probably means it will be
a post-icehouse thing at this point, unless that effort is farther along
than I think.

At the Nova meetup this week, we had a serious discussion about ripping
out major drivers that might not make the deadline. I don't think it
makes sense to rip those out and merge another without meeting the
requirement.

--Dan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting minutes Feb 13

2014-02-13 Thread Sergey Lukjanov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:

Minutes:
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-02-13-18.00.html
Log:
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-02-13-18.00.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][neutron][nova][3rd party testing] Gerrit Jenkins plugin will not fulfill requirements of 3rd party testing

2014-02-13 Thread Sukhdev Kapur
Jay,

Just an FYI. We have modified the Gerrit plugin to accept/match a regex and
generate notifications for recheck no bug/recheck bug ###. It turned out to
be a very simple fix, and we (Arista Testing) are now triggering on recheck
comments as well.

regards..
-Sukhdev



On Thu, Feb 6, 2014 at 4:16 PM, Sukhdev Kapur sukh...@aristanetworks.com wrote:

 Hi Jay,

 Thanks for bringing this up. I have been trying to make the recheck work
 and have not had much success. Therefore, I agree that we should go with
 option a) for the short term until b) or c) becomes available.
 I would prefer b) because we have already invested a lot in our solution
 and it is fully operational.

 Thanks
 -Sukhdev




 On Tue, Feb 4, 2014 at 3:55 PM, Jay Pipes jaypi...@gmail.com wrote:

 Sorry for cross-posting to both mailing lists, but there's lots of folks
 working on setting up third-party testing platforms that are not members
 of the openstack-infra ML...

 tl;dr
 -

 The third party testing documentation [1] has requirements [2] that
 include the ability to trigger a recheck based on a gerrit comment.

 Unfortunately, the Gerrit Jenkins Trigger plugin [3] does not have the
 ability to trigger job runs based on a regex-filtered comment (only on
 the existence of any new comment to the code review).

 Therefore, we either should:

 a) Relax the requirement that the third party system trigger test
 re-runs when a comment including the word recheck appears in the
 Gerrit event stream

 b) Modify the Jenkins Gerrit plugin to support regex filtering on the
 comment text (in the same way that it currently supports regex filtering
 on the project name)

 or

 c) Add documentation to the third party testing pages that explains how
 to use Zuul as a replacement for the Jenkins Gerrit plugin.

 I propose we do a) for the short term, and I'll work on c) long term.
 However, I'm throwing this out there just in case there are some Java
 and Jenkins whizzes out there that could get b) done in a jiffy.

 details
 ---

 OK, so I've been putting together documentation on how to set up an
 external Jenkins platform that is linked [4] with the upstream
 OpenStack CI system.

 Recently, I wrote an article detailing how the upstream CI system
 worked, including a lot of the gory details from the
 openstack-infra/config project's files. [5]

 I've been working on a follow-up article that goes through how to set up
 a Jenkins system, and in writing that article, I created a source
 repository [6] that contains scripts, instructions and Puppet modules
 that set up a Jenkins system, the Jenkins Job Builder tool, and
 installs/configures the Jenkins Gerrit plugin [7].

 I planned to use the Jenkins Gerrit plugin as the mechanism that
 triggers Jenkins jobs on the external system based on gerrit events
 published by the OpenStack review.openstack.org Gerrit service. In
 addition to being mentioned in the third party documentation, Jenkins
 Job Builder has the ability to construct Jenkins jobs that are triggered
 by the Jenkins Gerrit plugin [8].

 Unfortunately, I've run into a bit of a snag.

 The third party testing documentation has requirements that include the
 ability to trigger a recheck based on a gerrit comment:

 <quote>
 Support recheck to request re-running a test.
  * Support the following syntaxes recheck no bug and recheck bug ###.
  * Recheck means recheck everything. A single recheck comment should
 re-trigger all testing systems.
 </quote>

 The documentation has a section on using the Gerrit Jenkins Trigger
 plugin [3] to accept notifications from the upstream OpenStack Gerrit
 instance.

 But unfortunately, the Jenkins Gerrit plugin does not support the
 ability to trigger a re-run of a job given a regex match of the word
 recheck. :(
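
For what it's worth, the comment matching itself is only a small regex; a
minimal sketch of the filter such a trigger would need (the exact pattern is
an assumption based on the required syntaxes above, not code from any
plugin):

import re

# Hypothetical matcher for the required syntaxes 'recheck no bug'
# and 'recheck bug ###'.
RECHECK_RE = re.compile(r'^\s*recheck (no bug|bug \d+)\s*$',
                        re.IGNORECASE | re.MULTILINE)


def is_recheck(comment):
    return bool(RECHECK_RE.search(comment))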

 So, we either need to a) change the requirements of third party testers,
 b) enhance the Jenkins Gerrit plugin with the missing functionality, or
 c) add documentation on how to set up Zuul as the triggering system
 instead of the Jenkins Gerrit plugin.

 I'm happy to work on c), but I think relaxing the restriction (a) is
 probably needed short-term.

 Best,
 -jay

 [1] http://ci.openstack.org/third_party.html
 [2] http://ci.openstack.org/third_party.html#requirements
 [3]

 http://ci.openstack.org/third_party.html#the-jenkins-gerrit-trigger-plugin-way
 [4] By linked I mean it both reads from the OpenStack Gerrit system
 and writes (adds comments) to it
 [5] http://www.joinfu.com/2014/01/understanding-the-openstack-ci-system/
 [6] http://github.com/jaypipes/os-ext-testing
 [7] https://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Trigger
 [8]

 https://github.com/openstack-infra/jenkins-job-builder/blob/master/jenkins_jobs/modules/triggers.py#L121




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][neutron][nova][3rd party testing] Gerrit Jenkins plugin will not fulfill requirements of 3rd party testing

2014-02-13 Thread Jay Pipes
On Thu, 2014-02-13 at 12:34 -0800, Sukhdev Kapur wrote:
 Jay, 
 
 Just an FYI. We have modified the Gerrit plugin to accept/match a regex
 and generate notifications for recheck no bug/recheck bug ###. It turned
 out to be a very simple fix, and we (Arista Testing) are now triggering on
 recheck comments as well.

Thanks for the update, Sukhdev! Is this updated Gerrit plugin somewhere
where other folks can use it?

I've got Zuul actually working pretty well in my os-ext-testing repo
now. Only problem remaining is with the Jenkins slave trigger (not
related to Gerrit...)

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][neutron][nova][3rd party testing] Gerrit Jenkins plugin will not fulfill requirements of 3rd party testing

2014-02-13 Thread Sergey Lukjanov
Nice! Are you planning to contribute it upstream?


On Fri, Feb 14, 2014 at 12:34 AM, Sukhdev Kapur
sukh...@aristanetworks.comwrote:

 Jay,

 Just an FYI. We have modified the Gerrit plugin to accept/match a regex to
 generate notifications for recheck no bug/recheck bug ###. It turned out to
 be a very simple fix, and we (Arista Testing) are now triggering on recheck
 comments as well.

 regards..
 -Sukhdev



 On Thu, Feb 6, 2014 at 4:16 PM, Sukhdev Kapur 
 sukh...@aristanetworks.com wrote:

 Hi Jay,

 Thanks for bringing this up. I have been trying to make the recheck work
 and have not had much success. Therefore, I agree that we should go with
 option a) for the short term until b) or c) becomes available.
 I would prefer b) because we have already invested a lot in our solution
 and it is fully operational.

 Thanks
 -Sukhdev




 On Tue, Feb 4, 2014 at 3:55 PM, Jay Pipes jaypi...@gmail.com wrote:

 Sorry for cross-posting to both mailing lists, but there's lots of folks
 working on setting up third-party testing platforms that are not members
 of the openstack-infra ML...

 tl;dr
 -

 The third party testing documentation [1] has requirements [2] that
 include the ability to trigger a recheck based on a gerrit comment.

 Unfortunately, the Gerrit Jenkins Trigger plugin [3] does not have the
 ability to trigger job runs based on a regex-filtered comment (only on
 the existence of any new comment to the code review).

 Therefore, we either should:

 a) Relax the requirement that the third party system trigger test
 re-runs when a comment including the word recheck appears in the
 Gerrit event stream

 b) Modify the Jenkins Gerrit plugin to support regex filtering on the
 comment text (in the same way that it currently supports regex filtering
 on the project name)

 or

 c) Add documentation to the third party testing pages that explains how
 to use Zuul as a replacement for the Jenkins Gerrit plugin.

 I propose we do a) for the short term, and I'll work on c) long term.
 However, I'm throwing this out there just in case there are some Java
 and Jenkins whizzes out there that could get b) done in a jiffy.

 details
 ---

 OK, so I've been putting together documentation on how to set up an
 external Jenkins platform that is linked [4] with the upstream
 OpenStack CI system.

 Recently, I wrote an article detailing how the upstream CI system
 worked, including a lot of the gory details from the
 openstack-infra/config project's files. [5]

 I've been working on a follow-up article that goes through how to set up
 a Jenkins system, and in writing that article, I created a source
 repository [6] that contains scripts, instructions and Puppet modules
 that set up a Jenkins system, the Jenkins Job Builder tool, and
 installs/configures the Jenkins Gerrit plugin [7].

 I planned to use the Jenkins Gerrit plugin as the mechanism that
 triggers Jenkins jobs on the external system based on gerrit events
 published by the OpenStack review.openstack.org Gerrit service. In
 addition to being mentioned in the third party documentation, Jenkins
 Job Builder has the ability to construct Jenkins jobs that are triggered
 by the Jenkins Gerrit plugin [8].

 Unfortunately, I've run into a bit of a snag.

 The third party testing documentation has requirements that include the
 ability to trigger a recheck based on a gerrit comment:

 <quote>
 Support recheck to request re-running a test.
  * Support the following syntaxes recheck no bug and recheck bug ###.
  * Recheck means recheck everything. A single recheck comment should
 re-trigger all testing systems.
 </quote>

 The documentation has a section on using the Gerrit Jenkins Trigger
 plugin [3] to accept notifications from the upstream OpenStack Gerrit
 instance.

 But unfortunately, the Jenkins Gerrit plugin does not support the
 ability to trigger a re-run of a job given a regex match of the word
 recheck. :(

 So, we either need to a) change the requirements of third party testers,
 b) enhance the Jenkins Gerrit plugin with the missing functionality, or
 c) add documentation on how to set up Zuul as the triggering system
 instead of the Jenkins Gerrit plugin.

 I'm happy to work on c), but I think relaxing the restriction (a) is
 probably needed short-term.

 Best,
 -jay

 [1] http://ci.openstack.org/third_party.html
 [2] http://ci.openstack.org/third_party.html#requirements
 [3]

 http://ci.openstack.org/third_party.html#the-jenkins-gerrit-trigger-plugin-way
 [4] By linked I mean it both reads from the OpenStack Gerrit system
 and writes (adds comments) to it
 [5] http://www.joinfu.com/2014/01/understanding-the-openstack-ci-system/
 [6] http://github.com/jaypipes/os-ext-testing
 [7] https://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Trigger
 [8]

 https://github.com/openstack-infra/jenkins-job-builder/blob/master/jenkins_jobs/modules/triggers.py#L121





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] Gamification and on-boarding ...

2014-02-13 Thread Sergey Lukjanov
+1, nice idea, it could be really fun.

Agreed with Thierry's note about automation.


On Thu, Feb 13, 2014 at 5:53 PM, Sean Dague s...@dague.net wrote:

 On 02/13/2014 05:37 AM, Thierry Carrez wrote:
  Sandy Walsh wrote:
  The informal OpenStack motto is automate everything, so perhaps we
 should consider some form of gamification [1] to help us? Can we offer
 badges, quests and challenges to new users to lead them on the way to being
 strong contributors?
 
  Fixed your first bug badge
  Updated the docs badge
  Got your blueprint approved badge
  Triaged a bug badge
  Reviewed a branch badge
  Contributed to 3 OpenStack projects badge
  Fixed a Cells bug badge
  Constructive in IRC badge
  Freed the gate badge
  Reverted branch from a core badge
  etc.
 
  I think that works if you only keep the ones you can automate.
  Constructive in IRC for example sounds a bit subjective to me, and you
  don't want to issue those badges one-by-one manually.
 
  Second thing, you don't want the game to start polluting your bug
  status, i.e. people randomly setting bugs to triaged to earn the
  Triaged a bug badge. So the badges we keep should be provably useful ;)
 
  A few other suggestions:
  Found a valid security issue (to encourage security reports)
  Fixed a bug submitted by someone else (to encourage attacking random
 bugs)
  Removed code (to encourage tech debt reduction)
  Backported a fix to a stable branch (to encourage backporting)
  Fixed a bug that was tagged nobody-wants-to-fix-this-one (to encourage
  people to attack critical / hard bugs)
 
  We might need protected tags to automate this: tags that only some
  people could set to bugs/tasks to designate gate-freeing or
  nobody-wants-to-fix-this-one bugs that will give you badges if you fix
  them.
 
  So overall it's a good idea, but it sounds a bit tricky to automate it
  properly to avoid bad side-effects.
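
To make the "only keep what you can automate" point concrete, a badge rule
is essentially a predicate over per-contributor stats that could be mined
automatically from Gerrit/Launchpad; a toy sketch (the stat field names are
invented for illustration, and this is not an existing service):

# Toy sketch: badges as predicates over automatically mined stats.
# The stat field names are assumptions, not real Gerrit/Launchpad data.
BADGE_RULES = {
    'fixed-your-first-bug': lambda s: s.get('merged_bug_fixes', 0) >= 1,
    'reviewed-a-branch': lambda s: s.get('reviews', 0) >= 1,
    'backported-a-fix': lambda s: s.get('stable_backports', 0) >= 1,
    'removed-code': lambda s: s.get('net_lines_changed', 0) < 0,
}


def earned_badges(stats):
    return sorted(name for name, rule in BADGE_RULES.items()
                  if rule(stats))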

 Gamification is a cool idea, if someone were to implement it, I'd be +1.

 Realistically, the biggest issue I see with on-boarding is mentoring
 time. Especially with folks completely new to our structure, there is a
 lot of confusing things going on. And OpenStack is a ton to absorb. I
 get pinged a lot on IRC, answer when I can, and sometimes just have to
 ignore things because there are only so many hours in the day.

 I think Anita has been doing a great job with the Neutron CI onboarding
 and new folks, and that's given me perspective on just how many
 dedicated mentors we'd need to bring new folks on. With 400 new people
 showing up each release, it's a lot of engagement time. It's also
 investment in our future, as some of these folks will become solid
 contributors and core reviewers.

 So it seems like the only way we'd make real progress here is to get a
 chunk of people to devote some dedicated time to mentoring in the next
 cycle. Gamification might be most useful, but honestly I expect a Start
 Here page with the consolidated list of low-hanging-fruit bugs, and a
 Review Here page with all reviews for low hanging fruit bugs (so they
 don't get lost by core review team) would be a great start.

 The delays on reviews for relatively trivial fixes I think is something
 that is probably more demotivating to new folks than the lack of badges.
 So some ability to keep on top of that I think would be really great.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] instructions for creating a new oslo library

2014-02-13 Thread Sergey Lukjanov
Doug, great work, I think it could be sometimes be a base for a detailed
guide about different type projects creation.


On Thu, Feb 13, 2014 at 1:29 AM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:




 On Wed, Feb 12, 2014 at 4:12 PM, Ben Nemec openst...@nemebean.com wrote:

  On 2014-02-12 13:28, Doug Hellmann wrote:

  I have been working on instructions for creating new Oslo libraries,
 either from scratch or by graduating code from the incubator. I would
 appreciate any feedback about whether I have all of the necessary details
 included in https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary

 Thanks,
 Doug

 First off: \o/

 This should really help cut down on the amount of code we need to sync
 from incubator.

 Given all the fun we've had with oslo.sphinx, maybe we should add a note
 that only runtime deps should use the oslo. namespace?


 Yes, definitely, I added that. Thanks!



 Other than that, I think I'd have to run through the process to have
 further comments.


 You'll have your chance to do that soon, I hope! :-)

 Doug


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Version Discovery Standardization

2014-02-13 Thread Sean Dague
On 02/13/2014 09:45 AM, Dean Troyer wrote:
 FWIW, an early proposal to address this, as well as capability
 discovery, still lives
 at https://etherpad.openstack.org/p/api-version-discovery-proposal.
  I've lost track of where this went, and even which design summit this
 is from, but I've been using it as a sanity check for the discovery bits
 in OSC.
 
 On Thu, Feb 13, 2014 at 6:50 AM, Jamie Lennox jamielen...@redhat.com wrote:
 
 6. GET '/' is unrestricted. GET '/vX' is often token restricted.
 
 
 Keystone allows access to /v2.0 and /v3 but most services return an
 HTTP Unauthorized. This is a real problem for discovery because we
 need to be able to evaluate the endpoints in the service catalog. I
 think we need to make these unauthorized.
 
 
 I agree, however from a client discovery process point-of-view, you do
 not necessarily have an endpoint until after you auth and get a service
 catalog anyway.  For example, in the specific case of OpenStackClient
 Help command output, the commands listed may depend on the desired API
 version.  To get the endpoints to query for version support still
 requires a service catalog so nothing really changes there.
 
 And this doesn't even touch on the SC endpoints that include things like
 tenant/project id...
  
 
 Please have a look over the wiki page and how it addresses the above
 and fits into the existing schemes and reply with any comments or
 problems that you see. Is this going to mess with any pre-existing
 clients?
 
 
 * id: Let's either make this a real semantic version so we can parse and
 use the major.minor.patch components (and dump the 'v') or make it an
 identifier that matches the URL path component.  Right now 
 
 * updated: I think it would be a friendly gesture to update this for
 unstable changes as the id is likely to not be updated mid-stream.
  During debugging I would want to be able to verify exactly which
 implementation I was talking to anyway.

So, I'd actually like to extend this a bit differently, and add a micro
version to the API as a normal part of our flows.
https://review.openstack.org/#/c/73090/ is an early sketch of this.

GET /

Content-Type: application/json
Content-Length: 327
Date: Thu, 13 Feb 2014 20:51:48 GMT

{
    "versions": [
        {
            "status": "CURRENT",
            "updated": "2011-01-21T11:33:21Z",
            "rev": "2.",
            "id": "v2.0",
            "links": [
                {
                    "href": "http://localhost:8774/v2/",
                    "rel": "self"
                }
            ]
        },
        {
            "status": "EXPERIMENTAL",
            "updated": "2013-07-23T11:33:21Z",
            "rev": "2.0900",
            "id": "v3.0",
            "links": [
                {
                    "href": "http://localhost:8774/v3/",
                    "rel": "self"
                }
            ]
        }
    ]
}

And on hitting something under the /v3/ tree:

Content-Type: application/json
X-Osapi-Version: 2.0900
Content-Length: 651
X-Compute-Request-Id: req-6a4ed4f0-07e4-401a-8315-8d114005c6ab
Date: Thu, 13 Feb 2014 20:51:48 GMT

{
    "version": {
        "status": "EXPERIMENTAL",
        "updated": "2013-07-23T11:33:21Z",
        "links": [
            {
                "href": "http://localhost:8774/v3/",
                "rel": "self"
            },
            {
                "href": "http://docs.openstack.org/api/openstack-compute/3/os-compute-devguide-3.pdf",
                "type": "application/pdf",
                "rel": "describedby"
            },
            {
                "href": "http://docs.openstack.org/api/openstack-compute/3/wadl/os-compute-3.wadl",
                "type": "application/vnd.sun.wadl+xml",
                "rel": "describedby"
            }
        ],
        "rev": "2.0900",
        "media-types": [
            {
                "base": "application/xml",
                "type": "application/vnd.openstack.compute+xml;version=3"
            },
            {
                "base": "application/json",
                "type": "application/vnd.openstack.compute+json;version=3"
            }
        ],
        "id": "v3.0"
    }
}


that would then let us return a pretty fine-grained global API version
that included the non-breaking, backwards-compatible changes. Nova is
going to version extensions this time around, but a global increment
would be much better for a consistent view of the world.
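
A client-side sketch of consuming such a document (this assumes GET on the
API root is unauthenticated and returns the JSON shape above; the
lexicographic max works for 'v2.0' vs 'v3.0' here, but a real client should
parse the ids properly):

import requests


def discover_versions(endpoint):
    # Assumes the root returns the {"versions": [...]} document above.
    return requests.get(endpoint).json()['versions']


def pick_version(versions):
    # Prefer CURRENT entries; fall back to anything advertised.
    current = [v for v in versions if v.get('status') == 'CURRENT']
    return max(current or versions, key=lambda v: v['id'])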

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gamification and on-boarding ...

2014-02-13 Thread Robert Collins
On 14 February 2014 02:53, Sean Dague s...@dague.net wrote:

 So it seems like the only way we'd make real progress here is to get a
 chunk of people to devote some dedicated time to mentoring in the next
 cycle. Gamification might be most useful, but honestly I expect a Start
 Here page with the consolidated list of low-hanging-fruit bugs, and a
 Review Here page with all reviews for low hanging fruit bugs (so they
 don't get lost by core review team) would be a great start.

 The delays on reviews for relatively trivial fixes I think is something
 that is probably more demotivating to new folks than the lack of badges.
 So some ability to keep on top of that I think would be really great.

+2
-Rob
-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] consistency vs packages in TripleO

2014-02-13 Thread Robert Collins
So progressing with the 'and folk that want to use packages can' arc,
we're running into some friction.

I've copied -operators in on this because it's very relevant IMO to operators :)

So far:
 - some packages use different usernames
 - some put things in different places (and all of them differ from the
bare metal ephemeral device layout, which requires /mnt/).
 - possibly more in future.

Now, obviously its a 'small matter of code' to deal with this, but the
impact on ops folk isn't so small. There are basically two routes that
I can see:

# A
 - we have a reference layout - install from OpenStack git / pypi
releases; this is what we will gate on, and can document.
 - and then each distro (both flavor of Linux and also possibly things
like Fuel that distribution OpenStack) is different - install on X,
get some delta vs reference.
 - we need multiple manuals describing how to operate and diagnose
issues in such a deployment, which is a matrix that overlays platform
differences the user selects like 'Fedora' and 'Xen'.

# B
 - we have one layout, with one set of install paths, usernames
 - package installs vs source installs make no difference - we coerce
the package into reference upstream shape as part of installing it.
 - documentation is then identical for all TripleO installs, except
the platform differences (as above - systemd on Fedora, upstart on
Ubuntu, Xen vs KVM)

B seems much more useful to our ops users - less subtly wrong docs, we
avoid bugs where tools we write upstream make bad assumptions,
experience operating a TripleO deployed OpenStack is more widely
applicable (applies to all such installs, not just those that happened
to use the same package source).

I see this much like the way Nova abstracts out trivial Hypervisor
differences to let you 'nova boot' anywhere, that we should be hiding
these incidental (vs fundamental capability) differences.
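
To illustrate what "coerce the package into reference upstream shape" could
mean in practice, a sketch of a post-install fixup (the username and paths
here are invented for the example; this is not existing TripleO code):

import os
import pwd
import shutil
import subprocess

REFERENCE_USER = 'nova'                          # assumed reference username
REFERENCE_STATE_DIR = '/mnt/state/var/lib/nova'  # assumed reference path


def coerce_layout(package_state_dir):
    # Create the reference username if the package used a different one.
    try:
        pwd.getpwnam(REFERENCE_USER)
    except KeyError:
        subprocess.check_call(['useradd', '--system', REFERENCE_USER])
    # Move state to the reference location and leave a symlink behind
    # so the package's own tooling still finds its files.
    if (package_state_dir != REFERENCE_STATE_DIR
            and os.path.isdir(package_state_dir)
            and not os.path.exists(REFERENCE_STATE_DIR)):
        shutil.move(package_state_dir, REFERENCE_STATE_DIR)
        os.symlink(REFERENCE_STATE_DIR, package_state_dir)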

What say ye all?

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gamification and on-boarding ...

2014-02-13 Thread Anne Gentle
On Thu, Feb 13, 2014 at 7:53 AM, Sean Dague s...@dague.net wrote:

 On 02/13/2014 05:37 AM, Thierry Carrez wrote:
  Sandy Walsh wrote:
  The informal OpenStack motto is automate everything, so perhaps we
 should consider some form of gamification [1] to help us? Can we offer
 badges, quests and challenges to new users to lead them on the way to being
 strong contributors?
 
  Fixed your first bug badge
  Updated the docs badge
  Got your blueprint approved badge
  Triaged a bug badge
  Reviewed a branch badge
  Contributed to 3 OpenStack projects badge
  Fixed a Cells bug badge
  Constructive in IRC badge
  Freed the gate badge
  Reverted branch from a core badge
  etc.
 
  I think that works if you only keep the ones you can automate.
  Constructive in IRC for example sounds a bit subjective to me, and you
  don't want to issue those badges one-by-one manually.
 
  Second thing, you don't want the game to start polluting your bug
  status, i.e. people randomly setting bugs to triaged to earn the
  Triaged a bug badge. So the badges we keep should be provably useful ;)
 
  A few other suggestions:
  Found a valid security issue (to encourage security reports)
  Fixed a bug submitted by someone else (to encourage attacking random
 bugs)
  Removed code (to encourage tech debt reduction)
  Backported a fix to a stable branch (to encourage backporting)
  Fixed a bug that was tagged nobody-wants-to-fix-this-one (to encourage
  people to attack critical / hard bugs)
 
  We might need protected tags to automate this: tags that only some
  people could set to bugs/tasks to designate gate-freeing or
  nobody-wants-to-fix-this-one bugs that will give you badges if you fix
  them.
 
  So overall it's a good idea, but it sounds a bit tricky to automate it
  properly to avoid bad side-effects.

 Gamification is a cool idea, if someone were to implement it, I'd be +1.

 Realistically, the biggest issue I see with on-boarding is mentoring
 time. Especially with folks completely new to our structure, there is a
 lot of confusing things going on. And OpenStack is a ton to absorb. I
 get pinged a lot on IRC, answer when I can, and sometimes just have to
 ignore things because there are only so many hours in the day.

 I think Anita has been doing a great job with the Neutron CI onboarding
 and new folks, and that's given me perspective on just how many
 dedicated mentors we'd need to bring new folks on. With 400 new people
 showing up each release, it's a lot of engagement time. It's also
 investment in our future, as some of these folks will become solid
 contributors and core reviewers.


Yep, it's not just docs, wiki pages, well-triaged bugs, badges, but it's
mostly people. We need mentors and to treat them like gold! (We do.)

Julie Pichon is a great mentor, mentors with the Outreach Program for
Women, and is using OpenHatch for contributor recruiting, onboarding,
and retention. [1]

I'd encourage finding ways like this rather than building a badge system.
Not that I'd stop you, but I just don't know if a badges.openstack.org is
the goal when we can repurpose another site.

And when what we really need is mentors.

Love that we're all noodling on this.
Anne

1 http://openhatch.org/projects/OpenStack%20dashboard%20(Horizon)


 So it seems like the only way we'd make real progress here is to get a
 chunk of people to devote some dedicated time to mentoring in the next
 cycle. Gamification might be most useful, but honestly I expect a Start
 Here page with the consolidated list of low-hanging-fruit bugs, and a
 Review Here page with all reviews for low hanging fruit bugs (so they
 don't get lost by core review team) would be a great start.

The delays on reviews for relatively trivial fixes I think is something
 that is probably more demotivating to new folks than the lack of badges.
 So some ability to keep on top of that I think would be really great.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] instructions for creating a new oslo library

2014-02-13 Thread Doug Hellmann
That's the idea. The wiki is easier to edit while we are uncovering all of
the steps, but after we have them worked out we can put this in the infra
docs somewhere and create a choose-your-own adventure walk-through guide to
deal with the branching for different use cases.


On Thu, Feb 13, 2014 at 3:47 PM, Sergey Lukjanov slukja...@mirantis.com wrote:

 Doug, great work, I think it could be sometimes be a base for a detailed
 guide about different type projects creation.


 On Thu, Feb 13, 2014 at 1:29 AM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:




 On Wed, Feb 12, 2014 at 4:12 PM, Ben Nemec openst...@nemebean.com wrote:

  On 2014-02-12 13:28, Doug Hellmann wrote:

  I have been working on instructions for creating new Oslo libraries,
 either from scratch or by graduating code from the incubator. I would
 appreciate any feedback about whether I have all of the necessary details
 included in https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary

 Thanks,
 Doug

 First off: \o/

 This should really help cut down on the amount of code we need to sync
 from incubator.

 Given all the fun we've had with oslo.sphinx, maybe we should add a note
 that only runtime deps should use the oslo. namespace?


 Yes, definitely, I added that. Thanks!



 Other than that, I think I'd have to run through the process to have
 further comments.


 You'll have your chance to do that soon, I hope! :-)

 Doug


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-13 Thread Clayton Coleman


- Original Message -
 From: Julien Vey  vey.jul...@gmail.com 
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org 
 Date: Thursday, February 13, 2014 7:18 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org 
 Subject: Re: [openstack-dev] [Solum] Question about Zuul's role in Solum
 
 
 
 
 Hi,
 
 I have some concerns about using Zuul in Solum
 
 
 
 
 I agree gating is a great feature, but it is not useful for every project
 and, as Adrian said, not understood by everyone.
 I think many Solum users, and PaaS users in general, have
 single-project/single-build/simple git workflows and do not care about gating.
 
 Agreed, this is my major concern. I also think the majority of users will be
 after quick app creation from a single git repository. For those people,
 having to jump through any additional hoops around gating will create a
 barrier that will cause them to evaluate whether this is the right solution
 for them.
 
 
 
 
 
 I see 2 drawbacks with Zuul :
 - Tenant Isolation : How do we allow access on zuul (and jenkins) for a
 specific tenant in isolation to the others tenants using Solum.
 
 Tenant isolation is a big problem for service providers, but maybe not for
 enterprises running their own Openstack/Solum.
 
 
 
 
 - Build customization : One of the biggest advantage of Jenkins is its
 ecosystem and the many build customization it offers. Using zuul will
 prohibit this.
 
 
 Agreed, if we make CI/CD pluggable and provide two reference implementations
 showing both use cases (which should be achievable with the M1 workflow):

 * `if it builds it's good`: by providing example code repos that use TravisCI
 we can show the external-tooling CI/CD workflow. [1]
 * full code gating functionality of Gerrit-Zuul-Solum: I see this as
 external to the core of Solum, with the user's source control being the
 middleware, but it could be something that Solum could assist with building
 (e.g. provide a Heat template). [2]
 
 [1] attached Solum-M1.png
 [2] attached Solum-Gerrit.png
 
 
 
 
 
 About Gerrit, I think it is also a little too much. Many users have their own
 reviewing system: pull requests with GitHub, Bitbucket or Stash, their own
 instance of Gerrit, or even a custom git workflow.
 Gerrit would be a great feature for future versions of Solum, but only as an
 optional one; we should not force people into it.
 
 Agreed. The M1 workflow should make it easy to provide examples of plugging
 in common external CI/CD tooling, or running your own Gerrit+Zuul.
 
 Using a gating system like Gerrit/Zuul could make for some very interesting
 advanced testing. For example, one of the tests run could hit the Solum
 API, deploy the application to it, and provide the URL to reviewers; then,
 when the code is merged, it can kill that test deployment. But this is just
 as doable as a pluggable integration as it is by tightly coupling Zuul to
 Solum.

I think it's worth noting that what Zuul can do (outside of gating or being 
dependent on Gerrit) is allow a reasonably extensible workflow solution 
for source -> outcome flows.  We should be careful to separate the value Zuul 
brings as part of a Solum/OpenStack deployment (not inventing an extensible 
workflow story for source) out from gating.

I agree with the statement that we should not tightly couple Zuul to Solum.  
The M1 workflow does not necessarily need to require Zuul if the right 
interaction points (which Zuul would require anyway) are exposed for a much 
simpler solution.  However, it would be a mistake to build a simple service now 
and then to continue to expand it until it looks a lot like Zuul without making 
an effort to improve Zuul's multi-tenancy, customizability, and extensibility 
stories.  There are certainly pieces of Zuul that don't apply to most Solum 
workflows - it's just that we have to resist the temptation to create a 
standalone solution for source -> outcome flows outside of Zuul.

Simple, practical, example solution for M1 without gating?  Good.
Building a zuul equivalent?  Bad.
Demonstrating a simple integration with Solum build flows that proves out Solum 
endpoints for receiving updates about new deployment units being available?  
Good.

 
 This also gives service providers a good way to productize Solum to different
 ( or learning ) audiences. Assuming a service provider creates a product
 based off Solum called 'Bananas' they could offer
 
 * 'Banana Lite', which will simply deploy an app (single instance free, pay
 for more) from a git-push/git-pull. The only gating is 'did the build
 succeed'.
 * 'Banana Pro' adds a basic CI/CD flow (Solum calls into a test framework as
 part of the build).
 * 'Banana Enterprise' provides full code review, git hosting, automated
 testing (via Gerrit + GitHub Enterprise), deploy to a test instance, deploy
 the app.
 
 
 
 
 
 Julien
 
 2014-02-13 5:47 GMT+01:00 Clark Boylan clark.boy...@gmail.com:
 
 
 
 On Wed, Feb 

Re: [openstack-dev] [Solum] Regarding language pack database schema

2014-02-13 Thread Clayton Coleman
I like option #2, simply because we should force ourselves to justify every 
attribute that is extracted as a queryable parameter, rather than making them 
queryable at the start.
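
For concreteness, a minimal sketch of what option #2 could look like
(SQLAlchemy; the table name, the queryable columns and the raw_json blob
below are all illustrative assumptions, not the schema under review):

from sqlalchemy import Column, String, Text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class LanguagePack(Base):
    # Option #2: a small, justified set of first-class queryable columns;
    # everything else stays in an opaque JSON blob.
    __tablename__ = 'language_pack'

    id = Column(String(36), primary_key=True)
    name = Column(String(100))
    type = Column(String(50))      # e.g. 'java'; searchable
    version = Column(String(20))   # e.g. '1.4'; searchable
    # Attributes kept only in the blob can't be searched without a later
    # schema change, as noted below.
    raw_json = Column(Text)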

- Original Message -
 Hi Arati,
 
 
 I would vote for Option #2 as a short-term solution. Probably later we can
 consider using a NoSQL DB, or MariaDB which has COLUMN_JSON, to store
 complex types.
 
 Thanks
 Georgy
 
 
 On Thu, Feb 13, 2014 at 8:12 AM, Arati Mahimane 
 arati.mahim...@rackspace.com  wrote:
 
 
 
 Hi All,
 
 I have been working on defining the Language pack database schema. Here is a
 link to my review which is still a WIP -
 https://review.openstack.org/#/c/71132/3 .
 There are a couple of different opinions on how we should be designing the
 schema.
 
 Language pack has several complex attributes which are listed here -
 https://etherpad.openstack.org/p/Solum-Language-pack-json-format
 We need to support search queries on language packs based on various
 criteria. One example could be 'find a language pack where type='java' and
 version > 1.4'.
 
 Following are the two options that are currently being discussed for the DB
 schema:
 
 Option 1: Having a separate table for each complex attribute, in order to
 achieve normalization. The current schema follows this approach.
 However, this design has certain drawbacks. It will result in a lot of
 complex DB queries and each new attribute will require a code change.
 Option 2: We could have a predefined subset of attributes on which we would
 support search queries.
 In this case, we would define columns (separate tables in case of complex
 attributes) only for this subset of attributes, and all other attributes
 would be part of a JSON blob.
 With this option, we will have to go through a schema change in case we
 decide to support search queries on other attributes at a later stage.
 
 I would like to know everyone's thoughts on these two approaches so that we
 can take a final decision and go ahead with one approach.
 Suggestions regarding any other approaches are welcome too!
 
 Thanks,
 Arati
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 --
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [TripleO] consistency vs packages in TripleO

2014-02-13 Thread John Dewey
On Thursday, February 13, 2014 at 1:27 PM, Robert Collins wrote:
 So progressing with the 'and folk that want to use packages can' arc,
 we're running into some friction.
 
 I've copied -operators in on this because it's very relevant IMO to operators 
 :)
 
 So far:
 - some packages use different usernames
 - some put things in different places (and all of them use different
 places from the bare metal ephemeral device layout, which requires
 /mnt/).
 - possibly more in future.
 
 Now, obviously it's a 'small matter of code' to deal with this, but the
 impact on ops folk isn't so small. There are basically two routes that
 I can see:
 
 # A
 - we have a reference layout - install from OpenStack git / pypi
 releases; this is what we will gate on, and can document.
 - and then each distro (both flavors of Linux and also possibly things
 like Fuel that distribute OpenStack) is different - install on X,
 get some delta vs the reference.
 - we need multiple manuals describing how to operate and diagnose
 issues in such a deployment, which is a matrix that overlays platform
 differences the user selects like 'Fedora' and 'Xen'.
 
 # B
 - we have one layout, with one set of install paths, usernames
 - package installs vs source installs make no difference - we coerce
 the package into reference upstream shape as part of installing it.
 - documentation is then identical for all TripleO installs, except
 the platform differences (as above - systemd on Fedora, upstart on
 Ubuntu, Xen vs KVM)
 
 B seems much more useful to our ops users - less subtly wrong docs, we
 avoid bugs where tools we write upstream make bad assumptions,
 experience operating a TripleO deployed OpenStack is more widely
 applicable (applies to all such installs, not just those that happened
 to use the same package source).
 
 I see this much like the way Nova abstracts out trivial Hypervisor
 differences to let you 'nova boot' anywhere, that we should be hiding
 these incidental (vs fundamental capability) differences.
 
 

I personally like B.  In the OpenStack Chef community, there has been quite a 
bit of excitement over the work that Craig Tracey has been doing with 
omnibus-openstack [1]. It is very similar to B; however, it builds a super 
package per distro, with all dependencies installed into a known location 
(e.g. /opt/openstack/).

Regardless of how B is ultimately implemented, I personally like the suggestion.

[1] https://github.com/craigtracey/omnibus-openstack

John 
 
 What say ye all?
 
 -Rob
 
 
 -- 
 Robert Collins rbtcoll...@hp.com (mailto:rbtcoll...@hp.com)
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org 
 (mailto:openstack-operat...@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [cinder][neutron][nova][3rd party testing] Gerrit Jenkins plugin will not fulfill requirements of 3rd party testing

2014-02-13 Thread Akihiro Motoki
Hi,

I wrote a blog post about how to set up Zuul manually.
http://ritchey98.blogspot.jp/2014/02/openstack-third-party-testing-how-to.html

It covers how to migrate from the Gerrit Trigger plugin to Zuul, and some
tips, including a way to define a vendor-specific recheck trigger, in
addition to the setup procedure.

Jay's puppet manifest is nice, but I hope the manual installation steps
are also helpful for setting up 3rd party testing.

Thanks,
Akihiro

On Fri, Feb 14, 2014 at 5:39 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Thu, 2014-02-13 at 12:34 -0800, Sukhdev Kapur wrote:
 Jay,

 Just an FYI. We have modified the Gerrit plugin to accept/match a regex and
 generate notifications for 'recheck no bug'/'recheck bug ###'. It turned out
 to be a very simple fix, and we (Arista Testing) are now triggering on
 recheck comments as well.

 Thanks for the update, Sukhdev! Is this updated Gerrit plugin somewhere
 where other folks can use it?

 I've got Zuul actually working pretty well in my os-ext-testing repo
 now. Only problem remaining is with the Jenkins slave trigger (not
 related to Gerrit...)

 Best,
 -jay



 ___
 OpenStack-Infra mailing list
 openstack-in...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Multiple services per floating IP

2014-02-13 Thread Stephen Balukoff
Hi Eugene,

Aah, OK. FWIW, splitting the VIP into an instance/floating-IP entity
separate from the listener (i.e. what carries most of the attributes of the
VIP in the current implementation) still allows us to ensure tenants don't
end up accidentally sharing an IP address. The instance could be associated
with the neutron network port, and the haproxy listeners (one process per
listener) could simply be made to listen on that port (i.e. in that network
namespace on the neutron node). There wouldn't be a need for two instances
to share a single neutron network port.

Has any thought been put into preventing tenants from accidentally sharing
an IP if we stick with the current model?

Stephen


On Thu, Feb 13, 2014 at 4:20 AM, Eugene Nikanorov
enikano...@mirantis.comwrote:

 So we have some constraints here because of the existing haproxy driver
 impl; the particular reason is that a VIP created by haproxy is not a
 floating IP, but an IP on the internal tenant network with a neutron port.
 So IP uniqueness is enforced at the port level and not at the VIP level. We
 need to allow VIPs to share the port; that is part of the
 multiple-vips-per-pool blueprint.

 Thanks,
 Eugene.



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] consistency vs packages in TripleO

2014-02-13 Thread James Slagle
On Thu, Feb 13, 2014 at 4:27 PM, Robert Collins
robe...@robertcollins.net wrote:
 So progressing with the 'and folk that want to use packages can' arc,
 we're running into some friction.

 I've copied -operators in on this because it's very relevant IMO to operators 
 :)

 So far:
  - some packages use different usernames
  - some put things in different places (and all of them use different
 places from the bare metal ephemeral device layout, which requires
 /mnt/).
  - possibly more in future.

 Now, obviously it's a 'small matter of code' to deal with this, but the
 impact on ops folk isn't so small. There are basically two routes that
 I can see:

 # A
  - we have a reference layout - install from OpenStack git / pypi
 releases; this is what we will gate on, and can document.
  - and then each distro (both flavors of Linux and also possibly things
 like Fuel that distribute OpenStack) is different - install on X,
 get some delta vs the reference.

So far I think the delta is: service names and user account names. You
mention install paths below, but I'll reply to that there.

For service names, I think we already have a solution with the
os-svc-* tools. We're going to need tools like those anyway just to
account for system service name differences. We might as well tell
people to use them for the OpenStack services too.

For user account names, we don't have a great solution for this yet.

  - we need multiple manuals describing how to operate and diagnose
 issues in such a deployment, which is a matrix that overlays platform
 differences the user selects like 'Fedora' and 'Xen'.

I think the delta is rather small, and I don't think that it's
necessarily TripleO's job to document it all. We have the reference
layout and that's what should be documented well. People who choose to
use packages know that they're doing so. In fact they are likely doing
so b/c they want the defined and consistent patterns they're used to.
They don't want every OpenStack service running in its own virtualenv
with dependencies duplicated across them. That's why they're using to
choose packages. They've deviated from the reference, that's Ok IMO.

 # B
  - we have one layout, with one set of install paths, usernames

I'm actually not clear what you mean by install paths. Do you mean the
execution path to get stuff installed? Or where stuff ends up on the
actual filesystem?

Assuming you mean the latter, e.g., moving files that were installed
by a package under /opt/stack so the package install is the same as
the source install, I think that's a bit counter-intuitive to one of
the reasons people might be using packages to begin with.

Like I mentioned, one of the reasons that people want to use packages
is for the consistent patterns they provide.  Moving python code out
from underneath site-packages so that it's all under /opt/stack even
for a packaged install makes the documentation worse IMO. We'd have to
document everything we've done to change the package, b/c we've undone
what people expect. That goes against the principle of least surprise,
b/c people who use packages, are expecting (and I suspect wanting)
stuff to end up where the package puts it.

  - package installs vs source installs make no difference - we coerce
 the package into reference upstream shape as part of installing it.

We could document the reference and it would apply to everything. But,
I don't think the problem is solved. We would need to add
documentation for stuff like:
- your package manager is now going to complain about this set of
things, which you are safe to ignore
- your package dependencies aren't going to be reported correctly
- probably more

  - documentation is then identical for all TripleO installs, except
 the platform differences (as above - systemd on Fedora, upstart on
 Ubuntu, Xen vs KVM)

If we didn't do B, I think the documentation is still mostly
identical. IMO, it's not up to TripleO to document how RPMs install
OpenStack or how debs install OpenStack, or how Fuel does it.

 B seems much more useful to our ops users - less subtly wrong docs, we
 avoid bugs where tools we write upstream make bad assumptions,
 experience operating a TripleO deployed OpenStack is more widely
 applicable (applies to all such installs, not just those that happened
 to use the same package source).

Personally, I think B is much less useful to operators. But, I'm not
actually an operator :-).

However, if I were, and I used TripleO with package based installs,
and TripleO moved everything around and undid much of what the package
was laying down, I would find that extremely frustrating and not
useful at all.

Keep in mind I'm not talking about doing configuration changes, which
I think are well within the scope of stuff TripleO should do. But most
(if not all) package managers allow and support configuration changes
to config files without complaining.

All that being said, assuming if we go with A, I think we could
likely come up with some more elegant 

Re: [openstack-dev] Version Discovery Standardization

2014-02-13 Thread Christopher Yeoh
On Thu, 13 Feb 2014 15:54:23 -0500
Sean Dague s...@dague.net wrote:

 On 02/13/2014 09:45 AM, Dean Troyer wrote:
  FWIW, an early proposal to address this, as well as capability
  discovery, still lives
  at https://etherpad.openstack.org/p/api-version-discovery-proposal.
   I've lost track of where this went, and even which design summit
  this is from, but I've been using it as a sanity check for the
  discovery bits in OSC.
  
  On Thu, Feb 13, 2014 at 6:50 AM, Jamie Lennox
  jamielen...@redhat.com mailto:jamielen...@redhat.com wrote:
  
  6. GET '/' is unrestricted. GET '/vX' is often token restricted.
  
  
  Keystone allows access to /v2.0 and /v3 but most services give a
  HTTP Unauthorized. This is a real problem for discovery because
  we need to be able to evaluate the endpoints in the service
  catalog. I think we need to make these unauthenticated.
  
  
  I agree, however from a client discovery process point-of-view, you
  do not necessarily have an endpoint until after you auth and get a
  service catalog anyway.  For example, in the specific case of
  OpenStackClient Help command output, the commands listed may depend
  on the desired API version.  To get the endpoints to query for
  version support still requires a service catalog so nothing really
  changes there.
  
  And this doesn't even touch on the SC endpoints that include things
  like tenant/project id...
   
  
  Please have a look over the wiki page and how it addresses the
  above and fits into the existing schemes and reply with any
  comments or problems that you see. Is this going to mess with any
  pre-existing clients?
  
  
  * id: Let's either make this a real semantic version so we can
  parse and use the major.minor.patch components (and dump the 'v')
  or make it an identifier that matches the URL path component.
  Right now 
  
  * updated: I think it would be a friendly gesture to update this for
  unstable changes as the id is likely to not be updated mid-stream.
   During debugging I would want to be able to verify exactly which
  implementation I was talking to anyway.
 
 So, I'd actually like to extend this a bit differently, and add a
 micro version to the API as a normal part of our flows.
 https://review.openstack.org/#/c/73090/ is an early sketch of this.
 
 GET /
 
 Content-Type: application/json
 Content-Length: 327
 Date: Thu, 13 Feb 2014 20:51:48 GMT
 
 {
     "versions": [
         {
             "status": "CURRENT",
             "updated": "2011-01-21T11:33:21Z",
             "rev": "2.0000",
             "id": "v2.0",
             "links": [
                 {
                     "href": "http://localhost:8774/v2/",
                     "rel": "self"
                 }
             ]
         },
         {
             "status": "EXPERIMENTAL",
             "updated": "2013-07-23T11:33:21Z",
             "rev": "2.0900",
             "id": "v3.0",
             "links": [
                 {
                     "href": "http://localhost:8774/v3/",
                     "rel": "self"
                 }
             ]
         }
     ]
 }
 
 And on hitting something under the /v3/ tree:
 
 Content-Type: application/json
 X-Osapi-Version: 2.0900

So is that a typo there and it should be 3.0900?

 Content-Length: 651
 X-Compute-Request-Id: req-6a4ed4f0-07e4-401a-8315-8d114005c6ab
 Date: Thu, 13 Feb 2014 20:51:48 GMT
 
 {
     "version": {
         "status": "EXPERIMENTAL",
         "updated": "2013-07-23T11:33:21Z",
         "links": [
             {
                 "href": "http://localhost:8774/v3/",
                 "rel": "self"
             },
             {
                 "href": "http://docs.openstack.org/api/openstack-compute/3/os-compute-devguide-3.pdf",
                 "type": "application/pdf",
                 "rel": "describedby"
             },
             {
                 "href": "http://docs.openstack.org/api/openstack-compute/3/wadl/os-compute-3.wadl",
                 "type": "application/vnd.sun.wadl+xml",
                 "rel": "describedby"
             }
         ],
         "rev": "2.0900",
         "media-types": [
             {
                 "base": "application/xml",
                 "type": "application/vnd.openstack.compute+xml;version=3"
             },
             {
                 "base": "application/json",
                 "type": "application/vnd.openstack.compute+json;version=3"
             }
         ],
         "id": "v3.0"
     }
 }
 
 
 that would then let us return a pretty fine grained global API version
 that included the non breaking backwards compatible changes. Nova is
 going to version extensions this time around, but a global increment
 would be much better for a consistent view of the world.
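
(For illustration, client-side discovery against a versions document like
the one quoted above might look roughly like this; a hedged sketch only,
assuming that JSON shape and the requests library, not any agreed-upon
client API:)

import requests


def pick_version(endpoint, want_major):
    # Pick the newest advertised version whose id matches the requested
    # major number, preferring CURRENT over EXPERIMENTAL (sketch only;
    # the string comparison on 'rev' assumes the zero-padded style above).
    doc = requests.get(endpoint).json()
    candidates = [v for v in doc['versions']
                  if v['id'].startswith('v%d' % want_major)]
    if not candidates:
        raise LookupError('no v%d endpoint advertised' % want_major)
    current = [v for v in candidates if v['status'] == 'CURRENT']
    chosen = max(current or candidates, key=lambda v: v.get('rev', ''))
    return next(link['href'] for link in chosen['links']
                if link['rel'] == 'self')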

So one question I have around a global version is what happens when we
have the following situation:

- Extension (not core) A is bumped to version 3, global version bumped
  to 3.01
- Extension B (not core) is bumped to version 6, global version bumped
  to 3.02

but the deployer for $REASONS (perhaps stability/testing/whatever)
really wants to deploy with version 2 of A but 

Re: [openstack-dev] [Neutron] Neutron py27 test 'FAIL: process-returncode'

2014-02-13 Thread 黎林果
I've encountered this FAIL in novaclient.

Binary content:
traceback (text/plain; charset=utf8)
Ran 708 tests in 12.268s
FAILED (id=0, failures=1)
error: testr failed (1)
ERROR: InvocationError:
'/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/bin/python
setup.p

Can anyone help me take a look? Thanks!

address: https://review.openstack.org/#/c/67074/3

2014-02-12 17:08 GMT+08:00 Gary Duan garyd...@gmail.com:
 Oleg,

 Thanks for the suggestion. I will give it a try.

 Gary


 On Wed, Feb 12, 2014 at 12:12 AM, Oleg Bondarev obonda...@mirantis.com
 wrote:

 Hi Gary,

 please see my comments on the review.

 Thanks,
 Oleg


 On Wed, Feb 12, 2014 at 5:52 AM, Gary Duan garyd...@gmail.com wrote:

 Hi,

  The patch I submitted for L3 service framework integration fails the
  Jenkins tests, py26 and py27. The console only gives the following error
  message,

 2014-02-12 00:45:01.710 | FAIL: process-returncode
 2014-02-12 00:45:01.711 | tags: worker-1

 and at the end,

 2014-02-12 00:45:01.916 | ERROR: InvocationError:
 '/home/jenkins/workspace/gate-neutron-python27/.tox/py27/bin/python -m
 neutron.openstack.common.lockutils python setup.py testr --slowest
 --testr-args='
 2014-02-12 00:45:01.917 | ___ summary
 
 2014-02-12 00:45:01.918 | ERROR:   py27: commands failed

 I wonder what might be the reason for the failure and how to debug this
 problem?

 The patch is at, https://review.openstack.org/#/c/59242/


 The console output is,
 http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/console.html



 Thanks,

 Gary


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-13 Thread Nachi Ueno
+1

On Wednesday, February 12, 2014, Mayur Patil ram.nath241...@gmail.com wrote:

 +1

 *--*
 *Cheers,*
 *Mayur*

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] dhcp-all-interfaces changes reverted

2014-02-13 Thread Robert Collins
Dan - your DHCP-all-interfaces changes broke on Ubuntu 'testenv'
environments - we've backed them out to give them time to be fixed
without it being a fire drill.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Interested in attracting new contributors?

2014-02-13 Thread Jeremy Stanley
On 2014-02-12 14:42:17 -0600 (-0600), Dolph Mathews wrote:
[...]
 There's a lot of such scenarios where new contributors can
 quickly find things to contribute, or at least provide incredibly
 valuable feedback to the project in the form of reviews!
[...]

I heartily second the suggestion. The biggest and best thing I did
as a new contributor was to start reviewing changes first thing. An
initial contributor, if they have any aptitude for software
development at all, will be able to tell a ton about our development
community by how it interacts through code review. The test-centric
methodology, style guidelines and general level of
acceptance/tolerance for various things become immediately apparent.
You also get to test your understanding of the source by watching
all the mistakes other reviewers find that you missed in your
reviewing. Refine and repeat.

Getting a couple of very simple changes in right away also helps you
pick up the workflow and toolset, but reviewing others changes is a
huge boon to both the project and the would-be contributors doing
the reviewing... much more so than correcting a handful of
typographical errors.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Version Discovery Standardization

2014-02-13 Thread Sean Dague
On 02/13/2014 08:28 PM, Christopher Yeoh wrote:
 On Thu, 13 Feb 2014 15:54:23 -0500
 Sean Dague s...@dague.net wrote:
 
 On 02/13/2014 09:45 AM, Dean Troyer wrote:
 FWIW, an early proposal to address this, as well as capability
 discovery, still lives
 at https://etherpad.openstack.org/p/api-version-discovery-proposal.
  I've lost track of where this went, and even which design summit
 this is from, but I've been using it as a sanity check for the
 discovery bits in OSC.

 On Thu, Feb 13, 2014 at 6:50 AM, Jamie Lennox
 jamielen...@redhat.com mailto:jamielen...@redhat.com wrote:

 6. GET '/' is unrestricted. GET '/vX' is often token restricted.


 Keystone allows access to /v2.0 and /v3 but most services give a
 HTTP Unauthorized. This is a real problem for discovery because
 we need to be able to evaluate the endpoints in the service
  catalog. I think we need to make these unauthenticated.


 I agree, however from a client discovery process point-of-view, you
 do not necessarily have an endpoint until after you auth and get a
 service catalog anyway.  For example, in the specific case of
 OpenStackClient Help command output, the commands listed may depend
 on the desired API version.  To get the endpoints to query for
 version support still requires a service catalog so nothing really
 changes there.

 And this doesn't even touch on the SC endpoints that include things
 like tenant/project id...
  

 Please have a look over the wiki page and how it addresses the
 above and fits into the existing schemes and reply with any
 comments or problems that you see. Is this going to mess with any
 pre-existing clients?


 * id: Let's either make this a real semantic version so we can
 parse and use the major.minor.patch components (and dump the 'v')
 or make it an identifier that matches the URL path component.
 Right now 

 * updated: I think it would be a friendly gesture to update this for
 unstable changes as the id is likely to not be updated mid-stream.
  During debugging I would want to be able to verify exactly which
 implementation I was talking to anyway.

 So, I'd actually like to extend this a bit differently, and add a
 micro version to the API as a normal part of our flows.
 https://review.openstack.org/#/c/73090/ is an early sketch of this.

 GET /

 Content-Type: application/json
 Content-Length: 327
 Date: Thu, 13 Feb 2014 20:51:48 GMT

 {
     "versions": [
         {
             "status": "CURRENT",
             "updated": "2011-01-21T11:33:21Z",
             "rev": "2.0000",
             "id": "v2.0",
             "links": [
                 {
                     "href": "http://localhost:8774/v2/",
                     "rel": "self"
                 }
             ]
         },
         {
             "status": "EXPERIMENTAL",
             "updated": "2013-07-23T11:33:21Z",
             "rev": "2.0900",
             "id": "v3.0",
             "links": [
                 {
                     "href": "http://localhost:8774/v3/",
                     "rel": "self"
                 }
             ]
         }
     ]
 }

 And on hitting something under the /v3/ tree:

 Content-Type: application/json
 X-Osapi-Version: 2.0900
 
 So is that a typo there and it should be 3.0900?

So actually I was thinking about this as a 2.9000 API, as the
pre-release. We can decide it's really 3.0000 instead.

 Content-Length: 651
 X-Compute-Request-Id: req-6a4ed4f0-07e4-401a-8315-8d114005c6ab
 Date: Thu, 13 Feb 2014 20:51:48 GMT

 {
     "version": {
         "status": "EXPERIMENTAL",
         "updated": "2013-07-23T11:33:21Z",
         "links": [
             {
                 "href": "http://localhost:8774/v3/",
                 "rel": "self"
             },
             {
                 "href": "http://docs.openstack.org/api/openstack-compute/3/os-compute-devguide-3.pdf",
                 "type": "application/pdf",
                 "rel": "describedby"
             },
             {
                 "href": "http://docs.openstack.org/api/openstack-compute/3/wadl/os-compute-3.wadl",
                 "type": "application/vnd.sun.wadl+xml",
                 "rel": "describedby"
             }
         ],
         "rev": "2.0900",
         "media-types": [
             {
                 "base": "application/xml",
                 "type": "application/vnd.openstack.compute+xml;version=3"
             },
             {
                 "base": "application/json",
                 "type": "application/vnd.openstack.compute+json;version=3"
             }
         ],
         "id": "v3.0"
     }
 }


 that would then let us return a pretty fine grained global API version
 that included the non breaking backwards compatible changes. Nova is
 going to version extensions this time around, but a global increment
 would be much better for a consistent view of the world.
 
 So one question I have around a global version is what happens when we
 have the following situation:
 
 - Extension (not core) A is bumped to version 3, global version bumped
   to 3.01
 - Extension B (not core) is bumped to version 6, global version bumped
   to 3.02
 
 

Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-13 Thread Walter A. Boring IV

On 02/13/2014 09:51 AM, Avishay Traeger wrote:

Walter A. Boring IV walter.bor...@hp.com wrote on 02/13/2014 06:59:38
PM:

What I would do differently for the Icehouse release is this:

If a driver doesn't pass the certification test by Icehouse RC1, then we
have a bug filed against the driver. I would also put a warning message in
the log for that driver saying that it doesn't pass the certification test.
I would not remove it from the codebase.

Also:
if a driver hasn't even run the certification test by RC1, then we mark the
driver as uncertified and deprecated in the code and throw an error at
driver init time. We can have an option in cinder.conf that says
ignore_uncertified_drivers=False. If an admin wants to ignore the error,
they set the flag to True, and we let the driver init at the next startup.
The admin then takes full responsibility for running uncertified code.

I think removing the drivers outright is premature for Icehouse,
since the certification process is a new thing.
For Juno, we remove any drivers that are still marked as uncertified and
haven't run the tests.

I think the purpose of the tests is to get vendors to actually run their
code through tempest and prove to the community that they are fixing their
code.  At the end of the day, it better serves the community and Cinder if
we have many working drivers.

My $0.02,
Walt


I like this.  Make that $0.04 now :)


I wrote a bit of code so we had something to discuss, if anyone thinks it's
a good enough compromise:
https://review.openstack.org/#/c/73464/
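
For readers skimming the thread, the general shape of such an init-time
guard might be as follows (an illustrative sketch only, not the code in
that review; the option name, the CERTIFIED attribute and the exception
used are all assumptions):

from oslo.config import cfg

CONF = cfg.CONF
CONF.register_opts([
    cfg.BoolOpt('ignore_uncertified_drivers', default=False,
                help='Allow uncertified volume drivers to initialize.'),
])


class VolumeDriver(object):
    CERTIFIED = False  # a driver flips this once it passes the cert test

    def do_setup(self, context):
        if not self.CERTIFIED and not CONF.ignore_uncertified_drivers:
            # A real patch would raise a proper cinder exception here.
            raise RuntimeError(
                'Driver %s is uncertified; set '
                'ignore_uncertified_drivers=True to run it anyway.'
                % self.__class__.__name__)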

Walt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Version Discovery Standardization

2014-02-13 Thread Christopher Yeoh
On Thu, 13 Feb 2014 21:10:01 -0500
Sean Dague s...@dague.net wrote:
 On 02/13/2014 08:28 PM, Christopher Yeoh wrote:
  On Thu, 13 Feb 2014 15:54:23 -0500
  Sean Dague s...@dague.net wrote:

  
  So one question I have around a global version is what happens when
  we have the following situation:
  
  - Extension (not core) A is bumped to version 3, global version
  bumped to 3.01
  - Extension B (not core) is bumped to version 6, global version
  bumped to 3.02
  
  but the deployer for $REASONS (perhaps stability/testing/whatever)
  really wants to deploy with version 2 of A but version 6 of B. 
  
  With versioning just on the extensions individually they're ok, but
  I don't think there's any real sane way to get a global micro
  version calculated for this scenario that makes sense to the end
  user.
 
 So there remains a question about extensions vs. global version. I
 think a big piece of this is anything which is a core extension,

So to reduce confusion I've been trying to introduce the nomenclature of
"everything is a plugin": some plugins are compulsory (e.g. core) and
others are optional (extensions).

 stops getting listed as an extension and instead is properly part of
 core, using the global version.
 
 How extensions impact global version is I think an open question. But
 Nova OS API is actually really weird if you think about it relative to
 other cloud APIs (ec2, gce, softlayer). We've defined it not as the
 Nova API, but as a small core compute API plus many dozens of optional
 features, where every deployer makes decisions on what comes and goes.
 
 I agree we need to think through a few things. But I think that if we
 get to v3, only to have to do a ton more stuff for v4, and take 2 more
 years to get there, we're in a world of hurt. The current model of API
 revisions as giant big bangs isn't good for any one. A way to make an
 API be able to grow over time, in a backwards compatible way, and some
 mechanism to deprecate and remove a feature over time would be much
 more advantageous to our consumers.
 

I agree we want to avoid another big bang version change for as long as we
can. Given that we have extensions (and I know that some people really
don't like that), I'd be a lot more comfortable if this minor global
version were only bumped when there are changes to the core plugins or a
plugin is added to the core (I don't think we can ever remove them from
core within a major version). There should be a high bar on making any
changes to core plugins (even though they are backwards compatible).

I'm also fine with core plugins not appearing in the /v3/extensions
list. It's a simple enough change, and I agree that it will reduce
confusion over interoperability between OpenStack clouds.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Devstack installation failed with CINDER installation

2014-02-13 Thread Jeremy Stanley
On 2014-02-13 10:56:28 -0600 (-0600), Ben Nemec wrote:
[...]
 configure pip to use the pypi.openstack.org mirror.
[...]

While this is sometimes a useful hack for working around
intermittent PyPI CDN growing pains on your personal development
workstation, or maybe for ferreting out whether your local tests are
getting different results because of varied package set between PyPI
and our mirror, I fear that some people reading this might assume
it's a stable public service and encode it into production
configuration.

The pypi.openstack.org mirror is just a single VM, while
pypi.python.org has CDN services fronting it for improved
reachability, reliability and scalability. In fact,
pypi.openstack.org resides on the same single-point-of-failure VM
which also provides access to build logs and lots of other data.
It's intended mostly as a place for our automated build systems to
look for packages so as not to hammer actual PyPI constantly and to
provide us an additional layer of control over what we test with. It
is *not* secure. Let me reiterate that point. It is for test jobs,
so the content is served via plain unencrypted HTTP *only* and is
therefore easily modified by a man-in-the-middle attack. It's also
not guaranteed to be around indefinitely, or to necessarily be
reachable outside the cloud provider networks where testing is
performed, or to carry all the packages you may need, or to have
enough bandwidth available to serve the entire user base, or to be
up and on line 100% of the time, or...

...you get the idea.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Interested in attracting new contributors?

2014-02-13 Thread Luis de Bethencourt
On 13 February 2014 21:09, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-02-12 14:42:17 -0600 (-0600), Dolph Mathews wrote:
 [...]
  There's a lot of such scenarios where new contributors can
  quickly find things to contribute, or at least provide incredibly
  valuable feedback to the project in the form of reviews!
 [...]

 I heartily second the suggestion. The biggest and best thing I did
 as a new contributor was to start reviewing changes first thing. An
 initial contributor, if they have any aptitude for software
 development at all, will be able to tell a ton about our development
 community by how it interacts through code review. The test-centric
 methodology, style guidelines and general level of
 acceptance/tolerance for various things become immediately apparent.
 You also get to test your understanding of the source by watching
 all the mistakes other reviewers find that you missed in your
 reviewing. Refine and repeat.

 Getting a couple of very simple changes in right away also helps you
 pick up the workflow and toolset, but reviewing others changes is a
 huge boon to both the project and the would-be contributors doing
 the reviewing... much more so than correcting a handful of
 typographical errors.
 --
 Jeremy Stanley



That is a very good idea, Jeremy.

I started learning and contributing to OpenStack yesterday. I have been
writing down all the things I do, read and discover, and I am planning to
blog about it and share it. I think it would be valuable to show how to
contribute to and learn the project from the point of view of a novice.

Cheers,
Luis
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] adding handler to neutron logger in neutron.openstack.common

2014-02-13 Thread Hemanth Ravi
Hi,

We are in the process of submitting a neutron third-party plugin and need
some advice on logging configuration to resolve the review comments for
this.

The plugin currently defines a log format and adds a log handler to forward
logs to a remote syslog server in this format. We would like to leave the
logs from the rest of neutron as configured by the log options in
neutron.conf.
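
In case a concrete picture helps, what we do today is roughly the following
(a simplified sketch using the stdlib logging module; the logger name, the
syslog address and the format string are our plugin's own choices, not
anything defined by neutron):

import logging
import logging.handlers

# Touch only the plugin's logger subtree; the rest of neutron keeps the
# handlers and format configured via neutron.conf.
plugin_log = logging.getLogger('neutron.plugins.vendorplugin')
handler = logging.handlers.SysLogHandler(
    address=('syslog.example.com', 514))
handler.setFormatter(logging.Formatter(
    '%(asctime)s %(name)s %(levelname)s %(message)s'))
plugin_log.addHandler(handler)
plugin_log.propagate = True  # records still reach neutron's own handlers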

Is it possible to change the format and handler only for the logs generated
by the plugin using the log module in neutron.openstack.common?

Appreciate any help to resolve this.

-hemanth
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?

2014-02-13 Thread Wangpan
Currently nova doesn't limit the disk IO bandwidth in the copy_image() method 
while creating a new instance, so the other instances on this host may be 
affected by this high-disk-IO operation, and some time-sensitive services 
(e.g. an RDS instance with a heartbeat) may be switched between master and 
slave.

So can we use `rsync --bwlimit=${bandwidth} src dst` instead of `cp src dst` 
for copy_image in create_image() of the libvirt driver? The remote image copy 
operation can also be limited by `rsync --bwlimit=${bandwidth}` or `scp -l 
${bandwidth}`. This ${bandwidth} parameter can be a new configuration option 
in nova.conf, allowing the cloud admin to set it; its default value of 0 means 
no limitation. The instances on this host will then not be affected while a 
new instance with a not-yet-cached image is being created.

Example code:
nova/virt/libvirt/utils.py:
diff --git a/nova/virt/libvirt/utils.py b/nova/virt/libvirt/utils.py
index e926d3d..5d7c935 100644
--- a/nova/virt/libvirt/utils.py
+++ b/nova/virt/libvirt/utils.py
@@ -473,7 +473,10 @@ def copy_image(src, dest, host=None):
         # sparse files.  I.E. holes will not be written to DEST,
         # rather recreated efficiently.  In addition, since
         # coreutils 8.11, holes can be read efficiently too.
-        execute('cp', src, dest)
+        if CONF.mbps_in_copy_image > 0:
+            execute('rsync', '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024), src, dest)
+        else:
+            execute('cp', src, dest)
     else:
         dest = "%s:%s" % (host, dest)
         # Try rsync first as that can compress and create sparse dest files.
@@ -484,11 +487,22 @@
             # Do a relatively light weight test first, so that we
             # can fall back to scp, without having run out of space
             # on the destination for example.
-            execute('rsync', '--sparse', '--compress', '--dry-run', src, dest)
+            if CONF.mbps_in_copy_image > 0:
+                execute('rsync', '--sparse', '--compress', '--dry-run',
+                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024), src, dest)
+            else:
+                execute('rsync', '--sparse', '--compress', '--dry-run', src, dest)
         except processutils.ProcessExecutionError:
-            execute('scp', src, dest)
+            if CONF.mbps_in_copy_image > 0:
+                execute('scp', '-l', '%s' % (CONF.mbps_in_copy_image * 1024 * 8), src, dest)
+            else:
+                execute('scp', src, dest)
         else:
-            execute('rsync', '--sparse', '--compress', src, dest)
+            if CONF.mbps_in_copy_image > 0:
+                execute('rsync', '--sparse', '--compress',
+                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024), src, dest)
+            else:
+                execute('rsync', '--sparse', '--compress', src, dest)
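
And the new option could be registered along these lines (a minimal sketch;
mbps_in_copy_image is just the name proposed above, not an existing nova
option):

from oslo.config import cfg

copy_image_opts = [
    cfg.IntOpt('mbps_in_copy_image',
               default=0,
               help='Max disk bandwidth (MB/s) used by copy_image(); '
                    '0 means no limit.'),
]

CONF = cfg.CONF
CONF.register_opts(copy_image_opts)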


2014-02-14



Wangpan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gamification and on-boarding ...

2014-02-13 Thread Mike Spreitzer
 From: Sean Dague s...@dague.net
...
 Realistically, the biggest issue I see with on-boarding is mentoring
 time. Especially with folks completely new to our structure, there is a
 lot of confusing things going on. And OpenStack is a ton to absorb. I
 get pinged a lot on IRC, answer when I can, and sometimes just have to
 ignore things because there are only so many hours in the day.

A great way to magnify your effort is to write things down where seekers 
can find them.  The documentation is pretty confusing for someone just 
starting out.  I am going through this myself.  I just made several 
updates to the wiki, adding things that newbies need to know (I hope I got 
them right, and trust someone will speak up if I did not).  I also posted 
a couple of doc bugs for non-wiki issues.

Answering questions on mailing lists and ask.openstack.org also leaves a 
helpful written trail.  I just posted a question on the openstack mailing 
list yesterday (I must be doing something stupid with DevStack), but nobody 
has answered it, and there are several related-looking questions on 
ask.openstack.org with no answers.

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Need more sample HOT templates for users

2014-02-13 Thread Qiming Teng
Hi,

  I have been recently trying to convince some co-workers and even some
  customers to try deploying and manipulating their applications using Heat.

  Here is some feedback I got from them, which will hopefully be noteworthy
  for the Heat team.

  - No documentation can be found on how each Resource is supposed to be
    used. This is partly solved by the newly added resource_schema API, but
    it seems not yet exposed by heatclient and thus the CLI.

    In addition to this, the resource schema itself may print only a simple
    one-sentence help message, which could be insufficient for users to gain
    a full understanding.

  - The current 'heat-templates' project provides quite a few samples in
    the CFN format, but not so many in HOT format.  For early users, this
    means they will either get more accustomed to CFN templates, or they
    need to write HOT templates from scratch.

    Another suggestion is also related to Resource usage. Maybe more small
    HOT templates, each focusing on teaching one or two resources, would be
    helpful. There could be some complex samples as showcases as well.

 Some thoughts on documenting the Resources:

  - The doc can be inlined in the source file, where a developer
    provides the manual of a resource when it is developed. People won't
    forget to update it if the implementation is changed. A Resource can
    provide a 'describe' or 'usage' or 'help' method to be inherited and
    implemented by all resource types (a rough sketch follows after this
    list).

One problem with this is that code mixed with long help text may be
annoying for some developers.  Another problem is about
internationalization.

  - Another option is to create a subdirectory in the doc directory,
dedicated to resource usage. In addition to the API references, we
also provide resource references (think of the AWS CFN online docs).
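
  As a strawman for the inlined-doc option above, something like the
  following could work (a sketch only; Heat's Resource base class does not
  actually provide a describe() method today, and the subclass shown is
  made up):

    class Resource(object):
        """Base class sketch: each resource type documents itself."""

        def describe(self):
            # Default: derive the usage text from the subclass docstring,
            # so the manual lives next to the implementation and is less
            # likely to go stale when the code changes.
            return self.__doc__ or 'No documentation provided.'


    class MyWaitCondition(Resource):
        """Pauses stack creation until a configured number of success
        signals arrive, or a timeout expires (made-up example).
        """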

  Does this make sense?

Regards,
  - Qiming

-
Qiming Teng, PhD.
Research Staff Member
IBM Research - China
e-mail: teng...@cn.ibm.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] dhcp-all-interfaces changes reverted

2014-02-13 Thread Matthew Mosesohn
Robert,

I have noticed that trying to DHCP on all interfaces at once on Ubuntu
12.04 results in the wrong interfaces getting particular reservations. It is
better to do one at a time (with all interfaces down first), with a pause in
between.
On Feb 14, 2014 6:03 AM, Robert Collins robe...@robertcollins.net wrote:

 Dan - your DHCP-all-interfaces changes broke on Ubuntu 'testenv'
 environments - we've backed them out to give them time to be fixed
  without it being a fire drill.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Need more sample HOT templates for users

2014-02-13 Thread Thomas Spatzier
Hi Qiming,

not sure if you have already seen it, but there is some documentation
available at the following locations. If you already know it, sorry for
the dup ;-)

Entry to Heat documentation:
http://docs.openstack.org/developer/heat/

Template Guide with pointers to more details like documentation of all
resources:
http://docs.openstack.org/developer/heat/template_guide/index.html

HOT template guide:
http://docs.openstack.org/developer/heat/template_guide/hot_guide.html

HOT template spec:
http://docs.openstack.org/developer/heat/template_guide/hot_spec.html

Regards,
Thomas

Qiming Teng teng...@linux.vnet.ibm.com wrote on 14/02/2014 06:55:56:

 From: Qiming Teng teng...@linux.vnet.ibm.com
 To: openstack-dev@lists.openstack.org
 Date: 14/02/2014 07:04
 Subject: [openstack-dev] [Heat] Need more sample HOT templates for users

 Hi,

   I have been recently trying to convince some co-workers and even some
  customers to try deploying and manipulating their applications using Heat.

   Here is some feedback I got from them, which will hopefully be noteworthy
   for the Heat team.

   - No documentation can be found on how each Resource is supposed to be
 used. This is partly solved by the newly added resource_schema API, but it
 seems not yet exposed by heatclient and thus the CLI.

 In addition to this, the resource schema itself may print only a simple
 one-sentence help message, which could be insufficient for users to gain
 a full understanding.

   - The current 'heat-templates' project provides quite a few samples in
 the CFN format, but not so many in HOT format.  For early users, this
 means they will either get more accustomed to CFN templates, or they need
 to write HOT templates from scratch.

 Another suggestion is also related to Resource usage. Maybe more small
 HOT templates, each focusing on teaching one or two resources, would be
 helpful. There could be some complex samples as showcases as well.

  Some thoughts on documenting the Resources:

   - The doc can be inlined in the source file, where a developer
 provides the manual of a resource when it is developed. People won't
 forget to update it if the implementation is changed. A Resource can
 provide a 'describe' or 'usage' or 'help' method to be inherited and
 implemented by all resource types.

 One problem with this is that code mixed with long help text may be
 annoying for some developers.  Another problem is about
 internationalization.

   - Another option is to create a subdirectory in the doc directory,
 dedicated to resource usage. In addition to the API references, we
 also provide resource references (think of the AWS CFN online docs).

   Does this make sense?

 Regards,
   - Qiming

 -
 Qiming Teng, PhD.
 Research Staff Member
 IBM Research - China
 e-mail: teng...@cn.ibm.com


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev