[openstack-dev] heat stack-create with two vm instances always got one failed

2014-07-16 Thread Yuling_C
Dell Customer Communication

Hi,
I'm using Heat to create a stack with two instances. One of them is always 
created successfully, but the other fails. If I split the template into two, 
each containing one instance, then it works. However, I thought a Heat 
template would allow multiple instances to be created?

Here I attach the heat template:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Sample Heat template that spins up multiple instances and a private network (JSON)",
  "Resources" : {
    "test_net" : {
      "Type" : "OS::Neutron::Net",
      "Properties" : {
        "name" : "test_net"
      }
    },
    "test_subnet" : {
      "Type" : "OS::Neutron::Subnet",
      "Properties" : {
        "name" : "test_subnet",
        "cidr" : "120.10.9.0/24",
        "enable_dhcp" : true,
        "gateway_ip" : "120.10.9.1",
        "network_id" : { "Ref" : "test_net" }
      }
    },
    "test_net_port" : {
      "Type" : "OS::Neutron::Port",
      "Properties" : {
        "admin_state_up" : true,
        "network_id" : { "Ref" : "test_net" }
      }
    },
    "instance1" : {
      "Type" : "OS::Nova::Server",
      "Properties" : {
        "name" : "instance1",
        "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
        "flavor" : "tvm-tt_lite",
        "networks" : [
          { "port" : { "Ref" : "test_net_port" } }
        ]
      }
    },
    "instance2" : {
      "Type" : "OS::Nova::Server",
      "Properties" : {
        "name" : "instance2",
        "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
        "flavor" : "tvm-tt_lite",
        "networks" : [
          { "port" : { "Ref" : "test_net_port" } }
        ]
      }
    }
  }
}
The error that I got from heat-engine.log is as follows:

2014-07-16 01:49:50.514 25101 DEBUG heat.engine.scheduler [-] Task 
resource_action complete step 
/usr/lib/python2.6/site-packages/heat/engine/scheduler.py:170
2014-07-16 01:49:50.515 25101 DEBUG heat.engine.scheduler [-] Task stack_task 
from Stack teststack sleeping _sleep 
/usr/lib/python2.6/site-packages/heat/engine/scheduler.py:108
2014-07-16 01:49:51.516 25101 DEBUG heat.engine.scheduler [-] Task stack_task 
from Stack teststack running step 
/usr/lib/python2.6/site-packages/heat/engine/scheduler.py:164
2014-07-16 01:49:51.516 25101 DEBUG heat.engine.scheduler [-] Task 
resource_action running step 
/usr/lib/python2.6/site-packages/heat/engine/scheduler.py:164
2014-07-16 01:49:51.960 25101 DEBUG urllib3.connectionpool [-] GET 
/v2/b64803d759e04b999e616b786b407661/servers/7cb9459c-29b3-4a23-a52c-17d85fce0559
 HTTP/1.1 200 1854 _make_request 
/usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
2014-07-16 01:49:51.963 25101 ERROR heat.engine.resource [-] CREATE : Server 
instance1
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource Traceback (most recent 
call last):
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource   File 
/usr/lib/python2.6/site-packages/heat/engine/resource.py, line 371, in 
_do_action
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource while not 
check(handle_data):
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource   File 
/usr/lib/python2.6/site-packages/heat/engine/resources/server.py, line 239, 
in check_create_complete
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource return 
self._check_active(server)
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource   File 
/usr/lib/python2.6/site-packages/heat/engine/resources/server.py, line 255, 
in _check_active
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource raise exc
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource Error: Creation of 
server instance1 failed.
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource
2014-07-16 01:49:51.996 25101 DEBUG heat.engine.scheduler [-] Task 
resource_action cancelled cancel 
/usr/lib/python2.6/site-packages/heat/engine/scheduler.py:187
2014-07-16 01:49:52.004 25101 DEBUG heat.engine.scheduler [-] Task stack_task 
from Stack teststack complete step 
/usr/lib/python2.6/site-packages/heat/engine/scheduler.py:170
2014-07-16 01:49:52.005 25101 WARNING heat.engine.service [-] Stack create 
failed, status FAILED
2014-07-16 01:50:29.218 25101 DEBUG heat.openstack.common.rpc.amqp [-] received 
{u'_context_roles': [u'Member', u'admin'], u'_msg_id': 
u'9aedf86fda304cfc857dc897d8393427', u'_context_password': 'SANITIZED', 
u'_context_auth_url': u'http://172.17.252.60:5000/v2.0', u'_unique_id': 
u'f02188b068de4a4aba0ec203ec3ad54a', u'_reply_q': 
u'reply_f841b6a2101d4af9a9af59889630ee77', u'_context_aws_creds': None, 
u'args': {}, u'_context_tenant': u'TVM', u'_context_trustor_user_id': None, 
u'_context_trust_id': None, u'_context_auth_token': 'SANITIZED', 
u'_context_is_admin': True, u'version': u'1.0', u'_context_tenant_id': 
u'b64803d759e04b999e616b786b407661', 

Re: [openstack-dev] [Fuel] Neutron ML2 Blueprints

2014-07-16 Thread Andrew Woodward
[2] appears to be made worse, if not caused, by neutron services
autostarting on Debian; no patch yet, we need to add a mechanism to the
HA layer to generate override files.
[3] appears to have stopped with this morning's master.
[4] deleting the cluster and restarting mostly removed this; I was
getting an issue with $::osnailyfacter::swift_partition/.. not existing
(/var/lib/glance), but that is fixed in rev 29.

[5] is still the critical issue blocking progress; I'm at a loss as to
why this is occurring. Changes to ordering have no effect. Next steps
probably involve pre-hacking keystone, neutron and nova-client to be
more verbose about their key usage. As a hack we could simply restart
neutron-server, but I'm not convinced the issue can't come back, since
we don't know how it started.



On Tue, Jul 15, 2014 at 6:34 AM, Sergey Vasilenko
svasile...@mirantis.com wrote:
 [1] fixed in https://review.openstack.org/#/c/107046/
 Thanks for reporting the bug.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Andrew
Mirantis
Ceph community



Re: [openstack-dev] [Neutron] [ML2] kindly request neutron-cores to review and approve patch Cisco DFA ML2 Mechanism Driver

2014-07-16 Thread Anita Kuno
On 07/16/2014 12:01 AM, Milton Xu (mxu) wrote:
 Hi,
 
 This patch was initially uploaded on Jun 27, 2014 and we have got a number of 
 reviews from the community. Many thanks to those who kindly reviewed and 
 provided feedback.
 
 Can the neutron cores please review/approve it so we can make progress here?  
 Really appreciate your attention and help here.
 I also include the cores who reviewed and approved the spec earlier.
 
 Code patch:
 https://review.openstack.org/#/c/103281/
 
 Approved Spec:
 https://review.openstack.org/#/c/89740/
 
 
 Thanks,
 Milton
 
 
 
 
 
Hi:

The mailing list is not the correct place to ask for a review.

The preferred methods are discussed here:
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

Thank you,
Anita.




Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-16 Thread Samuel Bercovici
OK.

Let me be more precise: extracting the information for viewing/validation 
purposes would be good.
Providing values that differ from what is in the x509 is what I am 
opposed to.

+1 for Carlos on the library and that it should be ubiquitously used.

I will wait for Vijay to speak for himself in this regard…

-Sam.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, July 15, 2014 8:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

+1 to German's and  Carlos' comments.

It's also worth pointing out that some UIs will definitely want to show SAN 
information and the like, so either having this available as part of the API, 
or as a standard library we write which then gets used by multiple drivers is 
going to be necessary.

If we're extracting the Subject Common Name in any place in the code then we 
also need to be extracting the Subject Alternative Names at the same place. 
From the perspective of the SNI standard, there's no difference in how these 
fields should be treated, and if we were to treat SANs differently then we're 
both breaking the standard and setting a bad precedent.

Stephen

On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza 
carlos.ga...@rackspace.commailto:carlos.ga...@rackspace.com wrote:

On Jul 15, 2014, at 10:55 AM, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com
 wrote:

 Hi,


 Obtaining the domain name from the x509 is probably more of a 
 driver/backend/device capability, it would make sense to have a library that 
 could be used by anyone wishing to do so in their driver code.
You can do whatever you want in *your* driver. The code to extract this 
information will be a part of the API and needs to be mentioned in the spec now. 
PyOpenSSL with PyASN1 are the most likely candidates.
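As a sketch of what that shared library code could look like with pyOpenSSL (the function name and shape here are illustrative, not taken from the spec):

```python
from OpenSSL import crypto

def get_certificate_hostnames(pem_data):
    """Return the SubjectCommonName plus any DNS-type
    SubjectAlternativeNames found in a PEM-encoded X509 certificate."""
    cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem_data)
    names = []
    common_name = cert.get_subject().CN
    if common_name:
        names.append(common_name)
    for i in range(cert.get_extension_count()):
        ext = cert.get_extension(i)
        if ext.get_short_name() == b'subjectAltName':
            # str(ext) renders the SAN extension as e.g.
            # "DNS:example.com, DNS:www.example.com"
            for entry in str(ext).split(','):
                kind, _, value = entry.strip().partition(':')
                if kind == 'DNS' and value not in names:
                    names.append(value)
    return names
```

Per Stephen's point above, CN and SANs are extracted in the same place, so no driver can treat them differently.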

Carlos D. Garza

 -Sam.



 From: Eichberger, German 
 [mailto:german.eichber...@hp.commailto:german.eichber...@hp.com]
 Sent: Tuesday, July 15, 2014 6:43 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

 Hi,

 My impression was that the frontend would extract the names and hand them to 
 the driver.  This has the following advantages:

  * We can be sure all drivers extract the same names
  * No duplicate code to maintain
  * If we ever allow the user to specify the names in the UI rather than in the 
  certificate, the driver doesn't need to change.

 I think I saw Adam say something similar in a comment to the code.

 Thanks,
 German

 From: Evgeny Fedoruk [mailto:evge...@radware.commailto:evge...@radware.com]
 Sent: Tuesday, July 15, 2014 7:24 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
 SubjectCommonName and/or SubjectAlternativeNames from X509

 Hi All,

 Since this issue came up from TLS capabilities RST doc review, I opened a ML 
 thread for it to make the decision.
 Currently, the document says:

 “
 For SNI functionality, tenant will supply list of TLS containers in specific
 Order.
 In case when specific back-end is not able to support SNI capabilities,
 its driver should throw an exception. The exception message should state
 that this specific back-end (provider) does not support SNI capability.
 The clear sign of a listener's requirement for SNI capability is
 a non-empty SNI container ids list.
 However, reference implementation must support SNI capability.

 Specific back-end code may retrieve SubjectCommonName and/or altSubjectNames
 from the certificate which will determine the hostname(s) the certificate
 is associated with.

 The order of SNI containers list may be used by specific back-end code,
 like Radware's, for specifying priorities among certificates.
 In case when two or more uploaded certificates are valid for the same DNS name
 and the tenant has specific requirements around which one wins this collision,
 certificate ordering provides a mechanism to define which cert wins in the
 event of a collision.
 Employing the order of certificates list is not a common requirement for
 all back-end implementations.
 “

 The question is about SCN and SAN extraction from X509.
 1. Extraction of SCN/SAN should be done during provisioning and not 
 during the TLS handshake
 2. Every back-end code/driver must(?) extract SCN and(?) SAN and use it 
 for certificate determination for the host

 Please give your feedback

 Thanks,
 Evg



Re: [openstack-dev] heat stack-create with two vm instances always got one failed

2014-07-16 Thread Thomas Spatzier
I think the problem could be that you need one network port for each
server, but you just have one OS::Neutron::Port resource defined.
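A fixed Resources fragment along those lines, with one port per server (the port names here are invented), could look like:

```json
"test_net_port1" : {
  "Type" : "OS::Neutron::Port",
  "Properties" : {
    "admin_state_up" : true,
    "network_id" : { "Ref" : "test_net" }
  }
},
"test_net_port2" : {
  "Type" : "OS::Neutron::Port",
  "Properties" : {
    "admin_state_up" : true,
    "network_id" : { "Ref" : "test_net" }
  }
},
"instance1" : {
  "Type" : "OS::Nova::Server",
  "Properties" : {
    "name" : "instance1",
    "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
    "flavor" : "tvm-tt_lite",
    "networks" : [ { "port" : { "Ref" : "test_net_port1" } } ]
  }
},
"instance2" : {
  "Type" : "OS::Nova::Server",
  "Properties" : {
    "name" : "instance2",
    "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
    "flavor" : "tvm-tt_lite",
    "networks" : [ { "port" : { "Ref" : "test_net_port2" } } ]
  }
}
```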

yulin...@dell.com wrote on 16/07/2014 08:17:00:
 From: yulin...@dell.com
 To: openstack-dev@lists.openstack.org
 Date: 16/07/2014 08:20
 Subject: [openstack-dev] heat stack-create with two vm instances
 always got one failed

 Dell Customer Communication

 Hi,
 I'm using heat to create a stack with two instances. I always got
 one of them successful, but the other would fail. If I split the
 template into two and each of them contains one instance then it
 worked. However, I thought Heat template would allow multiple
 instances being created?

  [template and heat-engine log snipped - identical to the original message 
  quoted above]

Re: [openstack-dev] [Ironic] [Horizon] [UX] Wireframes for Node Management - Juno

2014-07-16 Thread Jaromir Coufal

Hi Wan,

thanks for great notes. My response is inline:

On 2014/15/07 23:19, Wan-yen Hsu wrote:

The Register Nodes panel uses IPMI user and IPMI Password.
However, not all Ironic drivers use IPMI; for instance, some Ironic
drivers will use iLO or other BMC interfaces instead. I would
like to suggest changing IPMI to BMC or IPMI/BMC to accommodate
more Ironic drivers.  The Driver field will reflect what power
management interface (e.g., IPMI + PXE, or iLO + Virtual Media) is used,
so it can be used to correlate the user and password fields.


We are already prepared for multiple drivers. If you look at the Driver 
field, there is a dropdown menu from which you can choose a driver and 
based on the selection the additional information (like IP, user, passw) 
will be changed.



Also, a few folks and I are working on Ironic UEFI support and
we hope to land this feature in Juno (the spec is still in review but
the feature is on the Ironic Juno prioritized list). In order to add the
UEFI boot feature, a Supported Boot Modes field in the hardware info
is needed.  The possible values are BIOS Only, UEFI Only, and
BIOS+UEFI.  We will need to work with you to add this field to the
hardware info.


There is no problem accommodating this change in the UI once the 
back-end supports it. So if there is a desire to expose the feature in 
the UI once there is a working back-end solution, feel free to 
send a patch which adds it to the HW info - it's an easy addition and 
the UI is prepared for such expansions.




Thanks!

wanyen


Cheers
-- Jarda



Re: [openstack-dev] [Ironic] [Horizon] [UX] Wireframes for Node Management - Juno

2014-07-16 Thread Jaromir Coufal

On 2014/15/07 20:29, Gregory Haynes wrote:

Excerpts from Jaromir Coufal's message of 2014-07-15 07:15:12 +:

On 2014/10/07 22:19, Gregory Haynes wrote:

Excerpts from Jaromir Coufal's message of 2014-07-09 07:51:56 +:

Hey folks,

after few rounds of reviews and feedbacks, I am sending wireframes,
which are ready for implementation in Juno:

http://people.redhat.com/~jcoufal/openstack/juno/2014-07-09_nodes-ui_juno.pdf

Let me know in case of any questions.



Looks awesome!

I may be way off base here (not super familiar with Tuskar) but what
about bulk importing of nodes? This is basically the only way devtest
makes use of nodes nowadays, so it might be nice to allow people to use
the same data file in both places (nodes.json blob).

-Greg


Hi Greg,

thanks a lot for the feedback. We planned to provide a bulk import of
nodes as well, but first we need to provide the basic functionality. I hope
we also manage to add the import function in Juno, but that depends on how
the implementation progresses. The challenge here is that I am not
aware of any standardized data structure for the imported
file (do you have any suggestions here?).




We currently accept a nodes.json blob in the following format:

[{
  "pm_password": "foo",
  "mac": ["78:e7:d1:24:99:a5"],
  "pm_addr": "10.22.51.66",
  "pm_type": "pxe_ipmitool",
  "memory": 98304,
  "disk": 1600,
  "arch": "amd64",
  "cpu": 24,
  "pm_user": "Administrator"
},
...
]

So this might be a good starting point?
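If it is, a first cut of the import could be as simple as loading the file and validating the power-management fields before handing each entry on for registration (a sketch; the required-field list is my guess from the sample above):

```python
import json

# Fields needed to reach a node's power management, guessed from the
# devtest nodes.json sample; adjust to whatever the blueprint settles on.
REQUIRED_FIELDS = ("mac", "pm_addr", "pm_type", "pm_user", "pm_password")

def load_nodes(path):
    """Load a devtest-style nodes.json file and check that each entry
    carries the fields needed to register the node."""
    with open(path) as f:
        nodes = json.load(f)
    for i, node in enumerate(nodes):
        missing = [k for k in REQUIRED_FIELDS if k not in node]
        if missing:
            raise ValueError("node %d is missing fields: %s" % (i, missing))
    return nodes
```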

-Greg



Thanks Greg, that's a good starting point. We will need to create a 
blueprint in Horizon's launchpad for it - would you mind registering one 
with the above-mentioned format?


-- Jarda



Re: [openstack-dev] [Neutron] Gap 0 (database migrations) closed!

2014-07-16 Thread Anna Kamyshnikova
Hello everyone!

I would like to bring the next two points to everybody's attention:

1) As Henry mentioned, if you add a new migration you should make it
unconditional. From now on, conditional migrations should not be merged.

2) If you add new models, you should ensure that the module containing them
is imported in /neutron/db/migration/models/head.py.

The second point is important for the testing change which I hope will be
merged soon: https://review.openstack.org/76520.
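The reason point 2 matters can be illustrated with a plain-Python analogue of SQLAlchemy's declarative registration (a loose sketch, not SQLAlchemy's actual mechanics): defining a model registers its table on shared metadata as an import side effect, so a module that is never imported from head.py leaves its tables out of the metadata the tests compare against the migrated schema.

```python
# Plain-Python sketch: class creation registers the table on shared
# metadata, mimicking (loosely) SQLAlchemy's declarative base. A module
# that is never imported never runs its class bodies, so its tables are
# simply absent from the metadata.
class Metadata:
    def __init__(self):
        self.tables = {}

metadata = Metadata()

class ModelMeta(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        tablename = namespace.get('__tablename__')
        if tablename:
            metadata.tables[tablename] = cls

class Base(metaclass=ModelMeta):
    pass

# Analogous to a model defined in a module imported from head.py:
class Widget(Base):
    __tablename__ = 'widgets'

assert 'widgets' in metadata.tables
```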

Regards,
Ann



On Wed, Jul 16, 2014 at 5:54 AM, Kyle Mestery mest...@mestery.com wrote:

 On Tue, Jul 15, 2014 at 5:49 PM, Henry Gessau ges...@cisco.com wrote:
  I am happy to announce that the first (zero'th?) item in the Neutron Gap
  Coverage[1] has merged[2]. The Neutron database now contains all tables
 for
  all plugins, and database migrations are no longer conditional on the
  configuration.
 
  In the short term, Neutron developers who write migration scripts need
 to set
migration_for_plugins = ['*']
  but we will soon clean up the template for migration scripts so that
 this will
  be unnecessary.
 
  I would like to say special thanks to Ann Kamyshnikova and Jakub
 Libosvar for
  their great work on this solution. Also thanks to Salvatore Orlando and
 Mark
  McClain for mentoring this through to the finish.
 
  [1]
 
 https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage
  [2] https://review.openstack.org/96438
 
 This is great news! Thanks to everyone who worked on this particular
 gap. We're making progress on the other gaps identified in that plan,
 I'll send an email out once Juno-2 closes with where we're at.

 Thanks,
 Kyle





Re: [openstack-dev] [compute][tempest] Upgrading libvirt-lxc support status

2014-07-16 Thread Joe Gordon
On Tue, Jul 15, 2014 at 11:40 AM, Nels Nelson nels.nel...@rackspace.com
wrote:

 Thanks for your response, Joe.

 Am I understanding you correctly that the Hypervisor Support Status does
 not in fact hinge on any particular Tempest tests, but rather, simply on
 individual tests for the libvirt-lxc driver used for gating?


The hypervisor support status hinges on the existence of a third-party
testing system (ci.openstack.org/third_party.html) that tests libvirt-lxc.


 Also, one last question, am I using the incorrect [subheader][category]
 info in my subject?  I've had to bump this topic twice now, and you're the
 only person to reply.



You are using the category correctly, but you asked the questions in such
a way that they didn't make a lot of sense (with tempest being driver
agnostic and all). As this is a nova-only issue, you may have better
luck bringing this topic up in the nova IRC room (
https://wiki.openstack.org/wiki/IRC#OpenStack_IRC_channels_.28chat.freenode.net.29
)




 Thanks very much for your time.

 Best regards,
 -Nels Nelson


 From:  Joe Gordon joe.gord...@gmail.com
 On Tue, Jul 1, 2014 at 2:32 PM, Nels Nelson
 nels.nel...@rackspace.com wrote:
 
 Greetings list,-
 
 Over the next few weeks I will be working on developing additional Tempest
 gating unit and functional tests for the libvirt-lxc compute driver.
 
 
 
 Tempest is driver agnostic, just like the nova APIs strive to be. As a
 consumer of nova I shouldn't need to know what driver is being used.
 So there should not be any libvirt-lxc only tests in Tempest.
 
 
 
 I am trying to figure out exactly what is required in order to accomplish
 the goal of ensuring the continued inclusion (without deprecation) of the
 libvirt-lxc compute driver in OpenStack.  My understanding is that this
 requires the upgrading of the support status in the Hypervisor Support
 Matrix document by developing the necessary Tempest tests.  To that end, I
 am trying to determine what tests are necessary as precisely as possible.
 
 I have some questions:
 
 * Who maintains the Hypervisor Support Matrix document?
 
 
 https://wiki.openstack.org/wiki/HypervisorSupportMatrix
 https://wiki.openstack.org/wiki/HypervisorSupportMatrix
 
 * Who is in charge of the governance over the Support Status process?  Is
 there single person in charge of evaluating every driver?
 
 
 
 
 The nova team is responsible for this, with the PTL as the lead of that
 team.
 
 
 
 * Regarding that process, how is the information in the Hypervisor
 Support Matrix substantiated?  Is there further documentation in the wiki
 for this?  Is an evaluation task simply performed on the functionality for
 the given driver, and the results logged in the HSM?  Is this an automated
 process?  Who is responsible for that evaluation?
 
 
 
  I am actually not sure about this one, but I don't believe it is
  automated.
 
 
 
 * How many of the boxes in the HSM must be checked positively, in
 order to move the driver into a higher supported group?  (From group C to
 B, and from B to A.)
 
 * Or, must they simply all be marked with a check or minus,
 substantiated by a particular gating test which passes based on the
 expected support?
 
 * In other words, is it sufficient to provide enough automated testing
 to simply be able to indicate supported/not supported on the support
 matrix chart?  Else, is writing supporting documentation of an evaluation
 of the hypervisor sufficient to substantiate those marks in the support
 matrix?
 
 * Do unit tests that gate commits specifically refer to tests
 written to verify the functionality described by the annotation in the
 support matrix? Or are the annotations substantiated by functional
 testing that gate commits?
 
 
 
 In order to get a driver out of group C and into group B, a third party
 testing system should run tempest on all nova patches. Similar to what we
 have for Xen
 (
 https://review.openstack.org/#/q/reviewer:openstack%2540citrix.com+status
 :open,n,z).
 
 To move from Group B to group A, the driver must have first party testing
 that we gate on (we cannot land any patches that fail for that driver).
 
 
 
 Thank you for your time and attention.
 
 Best regards,
 -Nels Nelson
 Software Developer
 Rackspace Hosting
 
 
 
 
 
 




Re: [openstack-dev] [oslo] oslo.serialization repo review

2014-07-16 Thread Flavio Percoco
On 07/15/2014 07:42 PM, Ben Nemec wrote:
 And the link, since I forgot it before:
 https://github.com/cybertron/oslo.serialization
 

LGTM!

Thanks for working on this!

 On 07/14/2014 04:59 PM, Ben Nemec wrote:
 Hi oslophiles,

 I've (finally) started the graduation of oslo.serialization, and I'm up
 to the point of having a repo on github that passes the unit tests.

 I realize there is some more work to be done (e.g. replacing all of the
 openstack.common files with libs) but my plan is to do that once it's
 under Gerrit control so we can review the changes properly.

 Please take a look and leave feedback as appropriate.  Thanks!

 -Ben

 
 
 


-- 
@flaper87
Flavio Percoco



[openstack-dev] [cinder][nova] cinder querying nova-api

2014-07-16 Thread Abbass MAROUNI

Hello guys,

I'm in the process of writing a cinder filter and weigher. I need to 
know whether I can use something like 'nova-api' inside the filter/weigher 
to query the tags of a virtual machine running on a compute node.
I need to create the cinder volume on the same host as the VM (which was 
created beforehand).
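One possible shape for this, sketched below: rather than calling nova-api directly from inside the weigher, hand it a nova client and read the instance's extended host attribute; the weigher can then score backends whose host matches. The helper name and wiring are mine, and how an authenticated client is obtained inside cinder is left open.

```python
def host_of_instance(nova, instance_id):
    """Return the compute host an instance runs on.

    ``nova`` is any client object exposing ``servers.get()`` the way
    python-novaclient does (an assumption - adapt to however your
    filter/weigher obtains a client). With admin credentials, nova
    reports the host in the OS-EXT-SRV-ATTR:host extended attribute.
    """
    server = nova.servers.get(instance_id)
    return getattr(server, 'OS-EXT-SRV-ATTR:host', None)
```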


I really appreciate any insights or workarounds.

Best Regards,

Abbass,



Re: [openstack-dev] [marconi] Meeting time change

2014-07-16 Thread Flavio Percoco
On 07/15/2014 06:20 PM, Kurt Griffiths wrote:
 Hi folks, we’ve been talking about this in IRC, but I wanted to bring it
 to the ML to get broader feedback and make sure everyone is aware. We’d
 like to change our meeting time to better accommodate folks that live
 around the globe. Proposals:
 
 Tuesdays, 1900 UTC 
 Wednesdays, 2000 UTC
 Wednesdays, 2100 UTC
 
 I believe these time slots are free, based
 on: https://wiki.openstack.org/wiki/Meetings
 
 Please respond with ONE of the following:
 
 A. None of these times work for me
 B. An ordered list of the above times, by preference
 C. I am a robot

I don't like the idea of switching days :/

Since the reason we're using Wednesday is that we don't want the
meeting to overlap with the TC and project meetings, what if we change
both proposed meeting times so they stay on the same day (and
perhaps also channel) but at different times?

I think changing both the day and the time would be more confusing than
just changing the time.

From a quick look, #openstack-meeting-alt is free on Wednesdays on both
times: 15 UTC and 21 UTC. Does this sound like a good day/time/idea to
folks?

Cheers,
Flavio


-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] [oslo.messaging][infra] New package dependencies for oslo.messaging's AMQP 1.0 support.

2014-07-16 Thread Flavio Percoco
On 07/15/2014 07:16 PM, Doug Hellmann wrote:
 On Tue, Jul 15, 2014 at 1:03 PM, Ken Giusti kgiu...@gmail.com wrote:

 These packages may be obtained via EPEL for Centos/RHEL systems
 (qpid-proton-c-devel), and via the Qpid project's PPA [3]
 (libqpid-proton2-dev) for Debian/Ubuntu.  They are also available for
 Fedora via the default yum repos.  Otherwise, the source can be pulled
 directly from the Qpid project and built/installed manually [4].
 
 Do you know the timeline for having those added to the Ubuntu cloud
 archives? I think we try not to add PPAs in devstack, but I'm not sure
 if that's a hard policy.

IIUC, the package has been accepted in Debian - Ken, correct me if I'm
wrong. Here's the link to the Debian mentors page:

http://mentors.debian.net/package/qpid-proton

 

 I'd like to get the blueprint accepted, but I'll have to address these
 new dependencies first.  What is the best way to get these new
 packages into CI, devstack, etc?  And will developers be willing to
 install the proton development libraries, or can this be done
 automagically?
 
 To set up integration tests we'll need an option in devstack to set
 the messaging driver to this new one. That flag should also trigger
 setting up the dependencies needed. Before you spend time implementing
 that, though, we should clarify the policy on PPAs.

Agreed. FWIW, the devstack work is in progress, but it's being held
off while we clarify the policy on PPAs.

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-16 Thread Joe Gordon
On Tue, Jul 15, 2014 at 7:20 AM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:



 On Tuesday, July 15, 2014, Steven Hardy sha...@redhat.com wrote:

 On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
  On 07/14/2014 11:47 AM, Steven Hardy wrote:
  Hi all,
  
  I'm probably missing something, but can anyone please tell me when
 devstack
  will be moving to keystone v3, and in particular when API auth_token
 will
  be configured such that auth_version is v3.0 by default?
  
  Some months ago, I posted this patch, which switched auth_version to
 v3.0
  for Heat:
  
  https://review.openstack.org/#/c/80341/
  
  That patch was nack'd because there was apparently some version
 discovery
  code coming which would handle it, but AFAICS I still have to manually
  configure auth_version to v3.0 in the heat.conf for our API to work
  properly with requests from domains other than the default.
  
  The same issue is observed if you try to use non-default-domains via
  python-heatclient using this soon-to-be-merged patch:
  
  https://review.openstack.org/#/c/92728/
  
  Can anyone enlighten me here, are we making a global devstack move to
 the
  non-deprecated v3 keystone API, or do I need to revive this devstack
 patch?
  
  The issue for Heat is we support notifications from stack domain
 users,
  who are created in a heat-specific domain, thus won't work if the
  auth_token middleware is configured to use the v2 keystone API.
  
  Thanks for any information :)
  
  Steve
  There are reviews out there in client land now that should work.  I was
  testing discover just now and it seems to be doing the right thing.  If
 the
  AUTH_URL is chopped of the V2.0 or V3 the client should be able to
 handle
  everything from there on forward.

 Perhaps I should restate my problem, as I think perhaps we still have
 crossed wires:

 - Certain configurations of Heat *only* work with v3 tokens, because we
   create users in a non-default domain
 - Current devstack still configures versioned endpoints, with v2.0
 keystone
 - Heat breaks in some circumstances on current devstack because of this.
 - Adding auth_version='v3.0' to the auth_token section of heat.conf fixes
   the problem.

 So, back in March, client changes were promised to fix this problem, and
 now, in July, they still have not - do I revive my patch, or are fixes for
 this really imminent this time?

 Basically I need the auth_token middleware to accept a v3 token for a user
 in a non-default domain, e.g validate it *always* with the v3 API not
 v2.0,
 even if the endpoint is still configured versioned to v2.0.

 Sorry to labour the point, but it's frustrating to see this still broken
 so long after I proposed a fix and it was rejected.
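
For reference, the manual workaround described above is a one-line addition
to heat.conf (section name assumed to be the standard auth_token middleware
section; this is a sketch, not the exact devstack-generated config):

```ini
[keystone_authtoken]
# Force the auth_token middleware to validate tokens against the v3
# identity API, even when the configured endpoint is versioned v2.0.
auth_version = v3.0
```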


 We just did a test converting over the default to v3 (and falling back to
 v2 as needed, yes fallback will still be needed) yesterday (Dolph posted a
 couple of test patches and they seemed to succeed - yay!!) It looks like it
 will just work. Now there is a big caveate, this default will only change
 in the keystone middleware project, and it needs to have a patch or three
 get through gate converting projects over to use it before we accept the
 code.

 Nova has approved the patch to switch over, it is just fighting with Gate.
 Other patches are proposed for other projects and are in various states of
 approval.
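
The "default to v3, fall back to v2" behaviour Morgan describes can be
sketched roughly as below (purely illustrative — this is not
keystonemiddleware's actual discovery code):

```python
def pick_auth_version(available_versions):
    """Prefer the v3 identity API, falling back to v2.0 when v3 is not
    advertised by the server. Illustrative sketch only."""
    for version in ("v3.0", "v2.0"):
        if version in available_versions:
            return version
    raise ValueError("no supported identity API version advertised")

# Prefers v3 when both are offered, falls back to v2.0 otherwise.
assert pick_auth_version(["v2.0", "v3.0"]) == "v3.0"
assert pick_auth_version(["v2.0"]) == "v2.0"
```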


I assume you mean switch over to the keystone middleware project [0], not
switch over to keystone v3. Based on [1], my understanding is that no changes
to nova are needed to use the v2-compatible parts of the v3 API. But are
changes needed to support domains, or is that not a problem because the auth
middleware uses UUIDs for user_id and project_id, so nova doesn't need to
have any concept of domains? Are any nova changes needed to support the v3
API?


Switching the default over to v3 in the middleware doesn't test nova with v3
user tokens; tempest's nova tests don't generate v3 user tokens (although I
hear there is an experimental job to do this). So what is being tested is
that moving the middleware to v3 while accepting v2 API user tokens works.
But what happens if someone tries to use a non-default domain, or other
v3-only features? Switching the middleware over to v3 without actually
testing any v3 user-facing features sounds like a big testing gap.

I see the keystone middleware patch has landed [3]



[0] https://review.openstack.org/#/c/102342/
[1] http://docs.openstack.org/developer/keystone/http-api.htm
[2]
http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/models.py#n200
[3] https://review.openstack.org/#/c/106819




 So, in short. This is happening and soon. There are some things that need
 to get through gate and then we will do the release of keystonemiddleware
 that should address your problem here. At least my reading of the issue and
 the fixes that are pending indicates as much. (Please let me know if I am
 misreading anything here).

 Cheers,
 Morgan

 

[openstack-dev] [oslo.vmware] Updates

2014-07-16 Thread Gary Kotton
Hi,
I just thought it would be nice to give the community a little update about the 
current situation:

  1.  Version is 0.4 
(https://github.com/openstack/requirements/blob/master/global-requirements.txt#L58)
 *   This is used by glance and ceilometer
 *   There is a patch in review for Nova to integrate with this - 
https://review.openstack.org/#/c/70175/.
  2.  Current version in development will have the following highlights:
 *   Better support for suds faults
 *   Support for VC extensions - this enables for example Nova to mark a VM 
as being owned by OpenStack
 *   Retry mechanism when a 'TaskInProgress' exception is thrown

Thanks
Gary



Re: [openstack-dev] [TripleO] Proposal to add Jon Paul Sullivan and Alexis Lee to core review team

2014-07-16 Thread Alexis Lee
Robert Collins said on Wed, Jul 16, 2014 at 09:13:52AM +1200:
 Alexis, Jon - core status means a commitment to three reviews a work
 day (on average), keeping track of changing policies and our various
 specs and initiatives, and obviously being excellent to us all :).

Hello,

Thank you all for this great opportunity, especially Clint for taking
the time to do the metareview as Rob mentioned. I'd like very much to
join the team and hope I can be of service. Looking forward to meeting
you all face to face soon (probably Paris), first round's on me.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.



Re: [openstack-dev] [TripleO] Proposal to add Jon Paul Sullivan and Alexis Lee to core review team

2014-07-16 Thread Sullivan, Jon Paul
Hi Rob,

Being added as a core reviewer would be great; thank you all for the votes of 
confidence, and I'll do my best to keep TripleO making great progress.

Thanks, 
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud

Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park, Galway.
Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John Rogerson's 
Quay, Dublin 2. 
Registered Number: 361933
 

 -Original Message-
 From: Robert Collins [mailto:robe...@robertcollins.net]
 Sent: 15 July 2014 22:14
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Lee, Alexis; Sullivan, Jon Paul
 Subject: Re: [openstack-dev] [TripleO] Proposal to add Jon Paul Sullivan
 and Alexis Lee to core review team
 
 Clint, thanks heaps for making the time to do a meta-review. With the
 clear support of the other cores, I'm really happy to be able to invite
 Alexis and JP to core status.
 
 Alexis, Jon - core status means a commitment to three reviews a work day
 (on average), keeping track of changing policies and our various specs
 and initiatives, and obviously being excellent to us all :).
 
 You don't have to take up the commitment if you don't want to - not
 everyone has the time to keep up to date with everything going on etc.
 
 Let me know your decision and I'll add you to the team :).
 
 -Ro
 
 
 
 On 10 July 2014 03:52, Clint Byrum cl...@fewbar.com wrote:
  Hello!
 
  I've been looking at the statistics, and doing a bit of review of the
  reviewers, and I think we have an opportunity to expand the core
  reviewer team in TripleO. We absolutely need the help, and I think
  these two individuals are well positioned to do that.
 
  I would like to draw your attention to this page:
 
  http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
 
  Specifically these two lines:
 
  +-------------------+----------------------------------------+----------------+
  |  Reviewer         | Reviews   -2  -1  +1  +2  +A    +/- %  | Disagreements* |
  +-------------------+----------------------------------------+----------------+
  |  jonpaul-sullivan |     188    0  43 145   0   0    77.1%  |   28 ( 14.9%)  |
  |   lxsli           |     186    0  23 163   0   0    87.6%  |   27 ( 14.5%)  |
  +-------------------+----------------------------------------+----------------+
 
  Note that they are right at the level we expect, 3 per work day. And
  I've looked through their reviews and code contributions: it is clear
  that they understand what we're trying to do in TripleO, and how it
  all works. I am a little dismayed at the slightly high disagreement
  rate, but looking through the disagreements, most of them were jp and
  lxsli being more demanding of submitters, so I am less dismayed.
 
  So, I propose that we add jonpaul-sullivan and lxsli to the TripleO
  core reviewer team.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud


[openstack-dev] [Neutron] Missing logs in Midokura CI Bot Inbox x

2014-07-16 Thread Tomoe Sugihara
Hi there,

Apologies: most of the links to the Midokura CI bot logs on gerrit are dead
now. That is because today I accidentally deleted all the logs, instead of
only those over a month old. Logs for jobs run after the deletion are saved
just fine.
We'll be more careful about handling the logs.

Best,
Tomoe


Re: [openstack-dev] [TripleO] Proposal to add Jon Paul Sullivan and Alexis Lee to core review team

2014-07-16 Thread mar...@redhat.com
On 14/07/14 19:11, Ben Nemec wrote:
 +1.  In my experience they've both demonstrated that they know what
 they're doing.
 
 I think the bikeshedding/grammar nits on specs is kind of a separate
 issue that will need to be worked out in general.  It's still very early
 on in this new *-specs repo world, and I think everyone's still trying
 to figure out where to draw the line on how much grammar/spelling
 nit-picking is appropriate.

+1 from me too for both.

I agree with the gist of Tomas's comments, but I really agree with Ben's
comments above ... trying to convey 'rules' about what constitutes
bikeshedding is basically impossible given that there will be varying
opinions.

In any case, if it really is just, e.g., a rephrase or a
small/inconsequential commit-message nit/typo then, as echoed by others
here, you can just make a suggestion. A +1 (and not a -1, or even a +2
for example) should be sufficient to make that suggestion. Then it's up
to others to vote either way, and you haven't held up progress with a
-1. Just my 2c,

thanks, marios

 
 -Ben
 
 On 07/09/2014 10:52 AM, Clint Byrum wrote:
 Hello!

 I've been looking at the statistics, and doing a bit of review of the
 reviewers, and I think we have an opportunity to expand the core reviewer
 team in TripleO. We absolutely need the help, and I think these two
 individuals are well positioned to do that.

 I would like to draw your attention to this page:

 http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

 Specifically these two lines:

 +-------------------+----------------------------------------+----------------+
 |  Reviewer         | Reviews   -2  -1  +1  +2  +A    +/- %  | Disagreements* |
 +-------------------+----------------------------------------+----------------+
 |  jonpaul-sullivan |     188    0  43 145   0   0    77.1%  |   28 ( 14.9%)  |
 |   lxsli           |     186    0  23 163   0   0    87.6%  |   27 ( 14.5%)  |
 +-------------------+----------------------------------------+----------------+

 Note that they are right at the level we expect, 3 per work day. And
 I've looked through their reviews and code contributions: it is clear
 that they understand what we're trying to do in TripleO, and how it all
 works. I am a little dismayed at the slightly high disagreement rate,
 but looking through the disagreements, most of them were jp and lxsli
 being more demanding of submitters, so I am less dismayed.

 So, I propose that we add jonpaul-sullivan and lxsli to the TripleO core
 reviewer team.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




Re: [openstack-dev] heat stack-create with two vm instances always got one failed

2014-07-16 Thread Qiming Teng
It seems that you are sharing one port between two instances, which
is not a legal configuration: a Neutron port can only be attached to one
server at a time.
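
A minimal sketch of the fix — give each server its own port (resource names
mirror the quoted template below; the image/flavor values are the reporter's):

```json
"test_net_port1" : {
    "Type" : "OS::Neutron::Port",
    "Properties" : {
        "admin_state_up" : true,
        "network_id" : { "Ref" : "test_net" }
    }
},
"test_net_port2" : {
    "Type" : "OS::Neutron::Port",
    "Properties" : {
        "admin_state_up" : true,
        "network_id" : { "Ref" : "test_net" }
    }
},
"instance1" : {
    "Type" : "OS::Nova::Server",
    "Properties" : {
        "name" : "instance1",
        "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
        "flavor" : "tvm-tt_lite",
        "networks" : [ { "port" : { "Ref" : "test_net_port1" } } ]
    }
},
"instance2" : {
    "Type" : "OS::Nova::Server",
    "Properties" : {
        "name" : "instance2",
        "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
        "flavor" : "tvm-tt_lite",
        "networks" : [ { "port" : { "Ref" : "test_net_port2" } } ]
    }
}
```

Each server now references its own port resource; the rest of the template
stays unchanged.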

On Wed, Jul 16, 2014 at 01:17:00AM -0500, yulin...@dell.com wrote:
 Dell Customer Communication
 
 Hi,
 I'm using heat to create a stack with two instances. I always got one of them 
 successful, but the other would fail. If I split the template into two and 
 each of them contains one instance then it worked. However, I thought Heat 
 template would allow multiple instances being created?
 
 Here I attach the heat template:
 {
     "AWSTemplateFormatVersion" : "2010-09-09",
     "Description" : "Sample Heat template that spins up multiple instances and a private network (JSON)",
     "Resources" : {
         "test_net" : {
             "Type" : "OS::Neutron::Net",
             "Properties" : {
                 "name" : "test_net"
             }
         },
         "test_subnet" : {
             "Type" : "OS::Neutron::Subnet",
             "Properties" : {
                 "name" : "test_subnet",
                 "cidr" : "120.10.9.0/24",
                 "enable_dhcp" : true,
                 "gateway_ip" : "120.10.9.1",
                 "network_id" : { "Ref" : "test_net" }
             }
         },
         "test_net_port" : {
             "Type" : "OS::Neutron::Port",
             "Properties" : {
                 "admin_state_up" : true,
                 "network_id" : { "Ref" : "test_net" }
             }
         },
         "instance1" : {
             "Type" : "OS::Nova::Server",
             "Properties" : {
                 "name" : "instance1",
                 "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
                 "flavor" : "tvm-tt_lite",
                 "networks" : [
                     { "port" : { "Ref" : "test_net_port" } }
                 ]
             }
         },
         "instance2" : {
             "Type" : "OS::Nova::Server",
             "Properties" : {
                 "name" : "instance2",
                 "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
                 "flavor" : "tvm-tt_lite",
                 "networks" : [
                     { "port" : { "Ref" : "test_net_port" } }
                 ]
             }
         }
     }
 }
 The error that I got from heat-engine.log is as follows:
 
 2014-07-16 01:49:50.514 25101 DEBUG heat.engine.scheduler [-] Task 
 resource_action complete step 
 /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:170
 2014-07-16 01:49:50.515 25101 DEBUG heat.engine.scheduler [-] Task stack_task 
 from Stack teststack sleeping _sleep 
 /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:108
 2014-07-16 01:49:51.516 25101 DEBUG heat.engine.scheduler [-] Task stack_task 
 from Stack teststack running step 
 /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:164
 2014-07-16 01:49:51.516 25101 DEBUG heat.engine.scheduler [-] Task 
 resource_action running step 
 /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:164
 2014-07-16 01:49:51.960 25101 DEBUG urllib3.connectionpool [-] GET 
 /v2/b64803d759e04b999e616b786b407661/servers/7cb9459c-29b3-4a23-a52c-17d85fce0559
  HTTP/1.1 200 1854 _make_request 
 /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
 2014-07-16 01:49:51.963 25101 ERROR heat.engine.resource [-] CREATE : Server 
 instance1
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource Traceback (most 
 recent call last):
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource   File 
 /usr/lib/python2.6/site-packages/heat/engine/resource.py, line 371, in 
 _do_action
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource while not 
 check(handle_data):
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource   File 
 /usr/lib/python2.6/site-packages/heat/engine/resources/server.py, line 239, 
 in check_create_complete
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource return 
 self._check_active(server)
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource   File 
 /usr/lib/python2.6/site-packages/heat/engine/resources/server.py, line 255, 
 in _check_active
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource raise exc
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource Error: Creation of 
 server instance1 failed.
 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource
 2014-07-16 01:49:51.996 25101 DEBUG heat.engine.scheduler [-] Task 
 resource_action cancelled cancel 
 /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:187
 2014-07-16 01:49:52.004 25101 DEBUG heat.engine.scheduler [-] Task stack_task 
 from Stack teststack complete step 
 /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:170
 2014-07-16 01:49:52.005 25101 WARNING heat.engine.service [-] Stack create 
 failed, status FAILED
 2014-07-16 01:50:29.218 25101 DEBUG heat.openstack.common.rpc.amqp [-] 
 received {u'_context_roles': [u'Member', u'admin'], u'_msg_id': 
 u'9aedf86fda304cfc857dc897d8393427', u'_context_password': 'SANITIZED', 
 u'_context_auth_url': u'http://172.17.252.60:5000/v2.0', u'_unique_id': 
 u'f02188b068de4a4aba0ec203ec3ad54a', u'_reply_q': 
 u'reply_f841b6a2101d4af9a9af59889630ee77', 

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-16 Thread Evgeny Fedoruk
Thanks for your feedback and comments, guys.
This is a proposal for the modified SNI-management part of the next RST patch:

“
For SNI functionality, the tenant will supply a list of TLS containers in a
specific order.
In cases where a specific back-end is not able to support SNI capabilities,
its driver should throw an exception. The exception message should state
that this specific back-end (provider) does not support SNI capability.
The clear sign of a listener's requirement for SNI capability is
a non-empty list of SNI container ids.
The reference implementation, however, must support SNI capability.

A new separate module will be developed in Neutron LBaaS for interactions
with Barbican TLS containers.
The module will have an API for:

1.  Ensuring Barbican TLS container existence (used by LBaaS front-end API)

2.  Validating Barbican TLS container (used by LBaaS front-end API)

3.  Extracting SubjectCommonName and SubjectAlternativeNames from 
certificates’ X509 (used by LBaaS front-end API)

4.  Extracting certificate’s data from Barbican TLS container (used by 
provider/driver code)
The module will use pyOpenSSL and PyASN1 packages.
Only this new common module should be used by Neutron LBaaS code for
Barbican container interactions.

Front-end LBaaS API (plugin) code will use the newly developed module for
validating Barbican TLS containers and extracting SubjectCommonName and
SubjectAlternativeNames from certificates’ X509.
When SNI settings are forwarded to the driver, this SubjectCommonName and
SubjectAlternativeNames info will be attached along with each SNI container id.
The driver, in its turn, can use this info for its specific SNI implementation.
Any specific driver implementation that needs to extract host-name info from
certificates should do so only via the above-mentioned common module.

The order of the SNI containers list may be used by specific back-end code,
like Radware's, for specifying priorities among certificates.
In cases where two or more uploaded certificates are valid for the same DNS
name and the tenant has specific requirements around which one wins this
collision, certificate ordering provides a mechanism to define which cert
wins in the event of a collision.
Employing the order of the certificates list is not a common requirement for
all back-end implementations.
“
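
As a rough illustration of item 3 above — extracting the host names with
pyOpenSSL (the function names here are illustrative, not the proposed
module's actual API):

```python
def parse_san(san_text):
    """Return the DNS names from a subjectAltName extension rendered as
    text by pyOpenSSL, e.g. "DNS:example.com, IP Address:10.0.0.1"."""
    return [part.split(':', 1)[1]
            for part in san_text.split(', ')
            if part.startswith('DNS:')]

def get_host_names(pem_data):
    """Return (SubjectCommonName, [SubjectAlternativeNames]) from a PEM
    certificate. Illustrative sketch only."""
    from OpenSSL import crypto  # pyOpenSSL, as named in the proposal
    cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem_data)
    common_name = cert.get_subject().commonName
    alt_names = []
    for i in range(cert.get_extension_count()):
        ext = cert.get_extension(i)
        if ext.get_short_name() == b'subjectAltName':
            alt_names = parse_san(str(ext))
    return common_name, alt_names
```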
Other parts of the RST document will be modified according to this approach.

Please post your thoughts.
If this is acceptable by all, I will push a new patch.
Thank you,
Evg


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, July 15, 2014 8:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

+1 to German's and  Carlos' comments.

It's also worth pointing out that some UIs will definitely want to show SAN 
information and the like, so either having this available as part of the API, 
or as a standard library we write which then gets used by multiple drivers is 
going to be necessary.

If we're extracting the Subject Common Name in any place in the code then we 
also need to be extracting the Subject Alternative Names at the same place. 
From the perspective of the SNI standard, there's no difference in how these 
fields should be treated, and if we were to treat SANs differently then we're 
both breaking the standard and setting a bad precedent.

Stephen

On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza 
carlos.ga...@rackspace.commailto:carlos.ga...@rackspace.com wrote:

On Jul 15, 2014, at 10:55 AM, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com
 wrote:

 Hi,


 Obtaining the domain name from the x509 is probably more of a 
 driver/backend/device capability, it would make sense to have a library that 
 could be used by anyone wishing to do so in their driver code.
You can do whatever you want in *your* driver. The code to extract this 
information will be a part of the API and needs to be mentioned in the spec now. 
PyOpenSSL with PyASN1 are the most likely candidates.

Carlos D. Garza

 -Sam.



 From: Eichberger, German 
 [mailto:german.eichber...@hp.commailto:german.eichber...@hp.com]
 Sent: Tuesday, July 15, 2014 6:43 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

 Hi,

 My impression was that the frontend would extract the names and hand them to 
 the driver.  This has the following advantages:

 · We can be sure all drivers can extract the same names
 · No duplicate code to maintain
 · If we ever allow the user to specify the names on UI rather in the 
 certificate the driver doesn’t need to change.

 I think I saw Adam say something similar in a comment to the code.

 Thanks,
 German

 From: Evgeny Fedoruk [mailto:evge...@radware.commailto:evge...@radware.com]
 

Re: [openstack-dev] [oslo] oslo.serialization repo review

2014-07-16 Thread Davanum Srinivas
Ben,

LGTM as well. I was finally able to look :)

-- dims

On Wed, Jul 16, 2014 at 4:35 AM, Flavio Percoco fla...@redhat.com wrote:
 On 07/15/2014 07:42 PM, Ben Nemec wrote:
 And the link, since I forgot it before:
 https://github.com/cybertron/oslo.serialization


 LGTM!

 Thanks for working on this!

 On 07/14/2014 04:59 PM, Ben Nemec wrote:
 Hi oslophiles,

 I've (finally) started the graduation of oslo.serialization, and I'm up
 to the point of having a repo on github that passes the unit tests.

 I realize there is some more work to be done (e.g. replacing all of the
 openstack.common files with libs) but my plan is to do that once it's
 under Gerrit control so we can review the changes properly.

 Please take a look and leave feedback as appropriate.  Thanks!

 -Ben



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com



Re: [openstack-dev] [oslo.vmware] Updates

2014-07-16 Thread Davanum Srinivas
Very cool Gary.

On Wed, Jul 16, 2014 at 5:31 AM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 I just thought it would be nice to give the community a little update about
 the current situation:

 Version is 0.4
 (https://github.com/openstack/requirements/blob/master/global-requirements.txt#L58)

 This is used by glance and ceilometer
 There is a patch in review for Nova to integrate with this -
 https://review.openstack.org/#/c/70175/.

 Current version in development will have the following highlights:

 Better support for suds faults
 Support for VC extensions – this enables for example Nova to mark a VM as
 being owned by OpenStack
 Retry mechanism when a ‘TaskInProgress’ exception is thrown

 Thanks
 Gary


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com



[openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 0.0.0.0:5000

2014-07-16 Thread Joe Jiang
Hi all, 


While setting up my development environment with devstack on CentOS 6.5
(devstack source fetched via github.com, stable/icehouse branch checked out),
I hit the error below [1].
I'm not sure whether this mailing list is the right place to ask,
because I have searched all over the web and still could not resolve it.
Anyway, I need your help, and it is highly appreciated.


Thanks.
Joe.


2014-07-16 11:08:53.282 | + sudo sed '/^Listen/s/^.*$/Listen 0.0.0.0:80/' -i 
/etc/httpd/conf/httpd.conf
2014-07-16 11:08:53.295 | + sudo rm -f '/var/log/httpd/horizon_*'
2014-07-16 11:08:53.310 | + sudo sh -c 'sed -e 
2014-07-16 11:08:53.310 | s,%USER%,stack,g;
2014-07-16 11:08:53.310 | s,%GROUP%,stack,g;
2014-07-16 11:08:53.310 | s,%HORIZON_DIR%,/opt/stack/horizon,g;
2014-07-16 11:08:53.310 | s,%APACHE_NAME%,httpd,g;
2014-07-16 11:08:53.310 | s,%DEST%,/opt/stack,g;
2014-07-16 11:08:53.310 | s,%HORIZON_REQUIRE%,,g;
2014-07-16 11:08:53.310 |  /home/devstack/files/apache-horizon.template 
/etc/httpd/conf.d/horizon.conf'
2014-07-16 11:08:53.321 | + start_horizon
2014-07-16 11:08:53.321 | + restart_apache_server
2014-07-16 11:08:53.321 | + restart_service httpd
2014-07-16 11:08:53.321 | + is_ubuntu
2014-07-16 11:08:53.321 | + [[ -z rpm ]]
2014-07-16 11:08:53.322 | + '[' rpm = deb ']'
2014-07-16 11:08:53.322 | + sudo /sbin/service httpd restart
2014-07-16 11:08:53.361 | Stopping httpd:  [FAILED]
2014-07-16 11:08:53.532 | Starting httpd: httpd: Could not reliably determine 
the server's fully qualified domain name, using 127.0.0.1 for ServerName
2014-07-16 11:08:53.533 | (98)Address already in use: make_sock: could not bind 
to address [::]:5000
2014-07-16 11:08:53.533 | (98)Address already in use: make_sock: could not bind 
to address 0.0.0.0:5000
2014-07-16 11:08:53.533 | no listening sockets available, shutting down
2014-07-16 11:08:53.533 | Unable to open logs
2014-07-16 11:08:53.547 |  [FAILED]
2014-07-16 11:08:53.549 | + exit_trap
2014-07-16 11:08:53.549 | + local r=1
2014-07-16 11:08:53.549 | ++ jobs -p
2014-07-16 11:08:53.550 | + jobs=
2014-07-16 11:08:53.550 | + [[ -n '' ]]
2014-07-16 11:08:53.550 | + exit 1
[stack@stack devstack]$
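
The key line is the EADDRINUSE failure on port 5000: something (my guess —
often a stale keystone process, or Apache and eventlet keystone both
configured to serve 5000) is already bound to that port. A quick way to
confirm, sketched in Python (a tool like `lsof -i :5000` works just as well):

```python
import socket

def port_in_use(port, host="0.0.0.0"):
    """Return True if something is already listening on host:port.

    Tries a bind with SO_REUSEADDR set, so a socket lingering in
    TIME_WAIT doesn't produce a false positive; an active listener
    still makes the bind fail with EADDRINUSE.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        s.bind((host, port))
    except OSError:
        return True
    finally:
        s.close()
    return False
```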


Re: [openstack-dev] [qa] Getting rolling with javelin2

2014-07-16 Thread Chris Dent

On Mon, 14 Jul 2014, Sean Dague wrote:


Javelin2 lives in tempest, currently the following additional fixes are
needed for it to pass the server  image creation in grenade -
https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:javelin_img_fix,n,z


Thanks for the pointers. That stuff looks good (and is now merged)
and I'm testing my changes against the new shiny.


Those were posted for review last Friday, need eyes on them. This is
still basically the minimum viable code there, and additional unit tests
should be added. Assistance there appreciated.


I have to admit I'm struggling to get my head around _how_ to unit
test something that is itself a test. Is the idea to mock the clients?
I'm not sure how much value that will have (compared to just running
the thing).
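
If "mock the clients" is indeed the idea, it would look roughly like this —
the creator function and its client API below are purely illustrative, not
javelin2's actual code:

```python
from unittest import mock

def create_server(client, name, image, flavor):
    """Toy stand-in for a javelin-style resource creator: create a
    server through the supplied client and return its id."""
    server = client.servers.create(name=name, image=image, flavor=flavor)
    return server.id

# Unit test: no real nova involved — the client is a Mock, so we only
# verify that the creator calls the client correctly.
client = mock.Mock()
client.servers.create.return_value.id = "fake-id"
assert create_server(client, "s1", "cirros", "m1.tiny") == "fake-id"
client.servers.create.assert_called_once_with(
    name="s1", image="cirros", flavor="m1.tiny")
```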


There is a grenade patch that will consume that once landed -
https://review.openstack.org/#/c/97317/ - local testing gets us to an
unrelated ceilometer bug. However landing the 2 tempest patches first
should be done.


If you'd like me to look into that ceilometer bug, please let me
know what it is.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] Resources to fix MS Outlook (was Re: [Nova] [Gantt] Scheduler split status (updated))

2014-07-16 Thread Alexis Lee
Dugger, Donald D said on Tue, Jul 15, 2014 at 09:15:06PM +:
 I `really` dislike paging through 10 screens of an email to discover
 the single comment buried somewhere near the end.

https://wiki.openstack.org/wiki/MailingListEtiquette#Trimming

That's not inline style's fault, that's pure laziness (on the part of
the author). Good inline style is to trim the quoted text to just the
relevant parts, preferably so new content is at least as long as the
quote.

 With top posting the new content is always right there at the top,
 which is all I need for threads I'm familiar with and, if I lack
 context, I can just go to the bottom and scan up, finding the info
 that I need.

That may be acceptable once, but after the fourth or fifth time it
becomes aggravating. The policy needs to work for longer discussions
(like Jay/Paul's) as well as short ones.

 Yes, top posting requires a little discipline, ... but that's a small
 price to pay.

So does inline posting :)

Inline style is explicitly stated in the etiquette, so rather than
reopening this can of worms, it'd save time just to follow the policy.

https://wiki.openstack.org/wiki/MailingListEtiquette#Replies


My apologies if this came across as overly harsh, I just don't see the
need to retread old ground.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.



Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-16 Thread Joe Gordon
On Wed, Jul 16, 2014 at 11:24 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Tue, Jul 15, 2014 at 7:20 AM, Morgan Fainberg 
 morgan.fainb...@gmail.com wrote:



 On Tuesday, July 15, 2014, Steven Hardy sha...@redhat.com wrote:

 On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
  On 07/14/2014 11:47 AM, Steven Hardy wrote:
  Hi all,
  
  I'm probably missing something, but can anyone please tell me when
 devstack
  will be moving to keystone v3, and in particular when API auth_token
 will
  be configured such that auth_version is v3.0 by default?
  
  Some months ago, I posted this patch, which switched auth_version to
 v3.0
  for Heat:
  
  https://review.openstack.org/#/c/80341/
  
  That patch was nack'd because there was apparently some version
 discovery
  code coming which would handle it, but AFAICS I still have to manually
  configure auth_version to v3.0 in the heat.conf for our API to work
  properly with requests from domains other than the default.
  
  The same issue is observed if you try to use non-default-domains via
  python-heatclient using this soon-to-be-merged patch:
  
  https://review.openstack.org/#/c/92728/
  
  Can anyone enlighten me here, are we making a global devstack move to
 the
  non-deprecated v3 keystone API, or do I need to revive this devstack
 patch?
  
  The issue for Heat is we support notifications from stack domain
 users,
  who are created in a heat-specific domain, thus won't work if the
  auth_token middleware is configured to use the v2 keystone API.
  
  Thanks for any information :)
  
  Steve
  There are reviews out there in client land now that should work.  I was
  testing discover just now and it seems to be doing the right thing.
  If the
  AUTH_URL is chopped off the V2.0 or V3, the client should be able to
 handle
  everything from there on forward.

 Perhaps I should restate my problem, as I think perhaps we still have
 crossed wires:

 - Certain configurations of Heat *only* work with v3 tokens, because we
   create users in a non-default domain
 - Current devstack still configures versioned endpoints, with v2.0
 keystone
 - Heat breaks in some circumstances on current devstack because of this.
 - Adding auth_version='v3.0' to the auth_token section of heat.conf fixes
   the problem.

 So, back in March, client changes were promised to fix this problem, and
 now, in July, they still have not - do I revive my patch, or are fixes
 for
 this really imminent this time?

 Basically I need the auth_token middleware to accept a v3 token for a
 user
 in a non-default domain, e.g validate it *always* with the v3 API not
 v2.0,
 even if the endpoint is still configured versioned to v2.0.

 Sorry to labour the point, but it's frustrating to see this still broken
 so long after I proposed a fix and it was rejected.


 We just did a test converting over the default to v3 (and falling back to
 v2 as needed, yes fallback will still be needed) yesterday (Dolph posted a
 couple of test patches and they seemed to succeed - yay!!) It looks like it
  will just work. Now there is a big caveat, this default will only change
 in the keystone middleware project, and it needs to have a patch or three
 get through gate converting projects over to use it before we accept the
 code.

 Nova has approved the patch to switch over, it is just fighting with
 Gate. Other patches are proposed for other projects and are in various
 states of approval.


 I assume you mean switch over to keystone middleware project [0], not
 switch over to keystone v3. Based on [1] my understanding is no changes to
 nova are needed to use the v2 compatible parts of the v3 API, But are
 changes needed to support domains or is this not a problem because the auth
 middleware uses uuids for user_id and project_id, so nova doesn't need to
 have any concept of domains? Are any nova changes needed to support the v3
 API?


  Switching over the default to v3 in the middleware doesn't test nova + v3
  user tokens, since tempest nova tests don't generate v3 user tokens (although I
  hear there is an experimental job to do this).  So you are testing that
  moving the middleware to v3 while accepting v2 API user tokens works. But
  what happens if someone tries to use a non-default domain? Or other
  v3-only features? Switching over to v3 for the middleware without
  actually testing any v3 user-facing features sounds like a big testing gap.

 I see the keystone middleware patch has landed [3]


I found the nova spec on this :https://review.openstack.org/#/c/103617/





 [0] https://review.openstack.org/#/c/102342/
 [1] http://docs.openstack.org/developer/keystone/http-api.htm
 [2]
 http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/models.py#n200
 [3] https://review.openstack.org/#/c/106819




 So, in short: this is happening, and soon. There are some things that need
 to get through gate and then we will do the release of keystonemiddleware
 that should address your problem here. At 

[openstack-dev] [specs] how to continue spec discussion

2014-07-16 Thread Tim Bell
As we approach Juno-3, a number of specs have been correctly marked as 
abandoned since they are not expected to be ready in time for the release.

Is there a mechanism to keep these specs open for discussion even though there 
is no expectation that they will be ready for Juno and 'defer' them to 'K' ?

It seems a pity to archive the comments and reviewer lists along with losing a 
place to continue the discussions even if we are not expecting to see code in 
Juno.

Tim




Re: [openstack-dev] [specs] how to continue spec discussion

2014-07-16 Thread Daniel P. Berrange
On Wed, Jul 16, 2014 at 11:57:33AM +, Tim Bell wrote:
 As we approach Juno-3, a number of specs have been correctly marked
 as abandoned since they are not expected to be ready in time for the
 release.
 
 Is there a mechanism to keep these specs open for discussion even
 though there is no expectation that they will be ready for Juno
 and 'defer' them to 'K' ?
 
 It seems a pity to archive the comments and reviewer lists along
 with losing a place to continue the discussions even if we are not
 expecting to see code in Juno.

Agreed, that is sub-optimal to say the least.

The spec documents themselves are in a release specific directory
though. Any which are to be postponed to Kxxx would need to move
into a specs/k directory instead of specs/juno, but we don't
know what the k directory needs to be called yet :-(  Assuming
we determine the directory name, then IMHO any spec could just be
restored by placing the spec into the new directory for K.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [cinder][nova] cinder querying nova-api

2014-07-16 Thread Duncan Thomas
So I see a couple of issues here:

1) reliability - need to decide what the scheduler does if the nova
api isn't responding - hanging and ignoring future scheduling requests
is not a good option... a timeout and putting the volume into error
might be fine.

2) Nova doesn't expose hostnames as identifiers unless I'm mistaken, it
exposes some abstract host_id. Need to figure out the mapping between
those and cinder backends.

With those two caveats in mind, I don't see why not, nor indeed any
other way of solving the problem unless / until the
grand-unified-scheduler-of-everything happens.


Starting a cinder spec on the subject might be the best place to
collect people's thoughts?
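The two caveats can be sketched together. This is a hypothetical illustration only, not cinder's actual filter API: `make_same_host_filter`, `get_instance_host` and the `same_host_instance` hint are invented names, and a real implementation would subclass cinder's BaseHostFilter and look the instance up via python-novaclient with a timeout.

```python
# Hypothetical sketch of the same-host volume filter discussed above.
# `get_instance_host` stands in for a novaclient lookup (with a timeout);
# mapping nova's host to a cinder backend is assumed to be a direct
# hostname match here, which is exactly caveat (2) above.

def make_same_host_filter(get_instance_host):
    """Return a host_passes-style predicate for the cinder scheduler."""
    def host_passes(backend_host, filter_properties):
        hints = filter_properties.get('scheduler_hints') or {}
        instance_id = hints.get('same_host_instance')
        if instance_id is None:
            return True  # no affinity requested: any backend passes
        try:
            vm_host = get_instance_host(instance_id)
        except Exception:
            # Caveat (1): nova API down or timed out -- fail this host
            # instead of hanging the scheduler.
            return False
        return vm_host == backend_host
    return host_passes
```

If every backend fails this way the volume ends up in error, which matches the timeout-and-error behaviour suggested in (1).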

On 16 July 2014 09:38, Abbass MAROUNI abbass.maro...@virtualscale.fr wrote:
 Hello guys,

 I'm in the process of writing a cinder filter and weigher, I need to know
 whether I can use something like 'nova-api' inside filter/weigher to query
 the tags of a virtual machine running on a compute-node.
 I need to create the cinder volume on the same host as the VM (which was
 created beforehand).

 I really appreciate any insights or workarounds.

 Best Regards,

 Abbass,




-- 
Duncan Thomas



Re: [openstack-dev] [Neutron] Flavor framework proposal

2014-07-16 Thread Eugene Nikanorov
Some comments inline:


 Agreed-- I think we need to more fully flesh out how extension list / tags
 should work here before we implement it. But this doesn't prevent us from
 rolling forward with a version 1 of flavors so that we can start to use
 some of the benefits of having flavors (like the ability to use multiple
 service profiles with a single driver/provider, or multiple service
 profiles for a single kind of service).

Agree here.



 Yes, I think there are many benefits we can get out of the flavor
 framework without having to have an extensions list / tags at this
 revision. But I'm curious: Did we ever define what we were actually trying
 to solve with flavors?  Maybe that's the reason the discussion on this has
 been all of the place: People are probably making assumptions about the
 problem we're trying to solve and we need to get on the same page about
 this.


Yes, we did!
 The original problem has several aspects:
1) providing users with some information about what service implementation
they get (capabilities)
2) providing users with ability to specify (choose, actually) some
implementation details that don't relate to a logical configuration
(capacity, insertion mode, HA mode, resiliency, security standards, etc)
3) providing operators a way to set up different modes of one driver
4) providing operators a way to seamlessly change the drivers backing existing
logical configurations (now it's not so easy to do because the logical config
is tightly coupled with the provider/driver)

The proposal we're discussing right now mostly covers points (2), (3) and
(4), which is already a good thing.
So for now I'd propose to put 'information about the service implementation' in
the description to cover (1).

I'm currently implementing the proposal (API and DB parts, no integration
with services yet)


Thanks,
Eugene.


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-16 Thread Duncan Thomas
On 16 July 2014 03:57, Jay S. Bryant jsbry...@electronicjungle.net wrote:
 John,

 So you have said a few times that the specs are a learning process.
 What do you feel with have learned thus far using specs?

I'm not John, but I'm going to answer as if you'd addressed the question more widely:
- Specs can definitely help flesh out ideas and are much better than
blueprints as a way of tracking concerns, questions, etc

- We as a community are rather shy about making decisions as
individuals, even low risk ones like 'Does this seem to require a
spec' - if there doesn't seem to be value in a spec, don't do one
unless somebody asks for one

- Not all questions can be answered at spec time, sometimes you need
to go bash out some code to see what works, then circle again

- Careful planning reduces velocity. No significant evidence either
way as to whether it improves quality, but my gut feeling is that it
does. We need to figure out what tradeoffs on either scale we're happy
to make, and perhaps that answer is different based on the area of
code being touched and the date (e.g. a change that doesn't affect
external APIs in J-1 might need less careful planning than a change in
J-3. API changes or additions need more discussion and eyes on than
non-API changes)

- Specs are terrible for tracking work items, but no worse than blueprints

- Multiple people might choose to work on the same blueprint in
parallel - this is going to happen, isn't necessarily rude and the
correct solution to competing patches is entirely subjective



Re: [openstack-dev] [specs] how to continue spec discussion

2014-07-16 Thread Sylvain Bauza
Le 16/07/2014 14:09, Daniel P. Berrange a écrit :
 On Wed, Jul 16, 2014 at 11:57:33AM +, Tim Bell wrote:
 As we approach Juno-3, a number of specs have been correctly marked
 as abandoned since they are not expected to be ready in time for the
 release.

 Is there a mechanism to keep these specs open for discussion even
 though there is no expectation that they will be ready for Juno
 and 'defer' them to 'K' ?

 It seems a pity to archive the comments and reviewer lists along
 with losing a place to continue the discussions even if we are not
 expecting to see code in Juno.
 Agreed, that is sub-optimal to say the least.

 The spec documents themselves are in a release specific directory
 though. Any which are to be postponed to Kxxx would need to move
 into a specs/k directory instead of specs/juno, but we don't
 know what the k directory needs to be called yet :-(  Assuming
 we determine the directory name, then IMHO any spec could just be
 restored by placing the spec into the new directory for K.

 Regards,
 Daniel


I'm just thinking about the opportunity of creating a 'next' directory
in addition to the juno and Kxxx dirs, so that the logic would be the
same as for Launchpad.
That would also mean that a spec could be merged with its target set to
'next', which is not nonsense but just means the idea is validated
without a release date having been determined yet.

Thoughts ?

-Sylvain



Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

2014-07-16 Thread Czesnowicz, Przemyslaw
Hi,

We were looking at this solution in the beginning, but it won't work with
OpenDaylight.
With OpenDaylight there is no agent running on the node, so this info would have
to be provided by OpenDaylight.

Thanks
Przemek
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Sunday, July 13, 2014 8:31 AM
To: Czesnowicz, Przemyslaw; OpenStack Development Mailing List (not for usage 
questions)
Cc: Mooney, Sean K
Subject: RE: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

Hi,
For an agent-based way to notify the server of node-specific info, you can leverage
the periodic state report that the neutron agent sends to the neutron server.
As an option, the ML2 Mechanism Driver can check that agent report and,
depending on the datapath_type, update vif_details.
This can be done similarly to bridge_mappings:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_openvswitch.py#43
BR,
Irena
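A minimal sketch of what such a mechanism-driver check could look like, loosely modelled on the bridge_mappings lookup linked above. The 'use_dpdk' key and the 'netdev' datapath value are assumptions taken from this thread, not an agreed API, and `vif_details_for_agent` is an invented helper name rather than real ML2 code.

```python
# Illustrative only: read the datapath_type the L2 agent put into its
# periodic state report ('configurations' is the same dict that carries
# bridge_mappings) and flag DPDK usage in binding:vif_details for nova.

def vif_details_for_agent(agent, base_details=None):
    """Build the vif_details dict for a port bound via this agent."""
    details = dict(base_details or {})
    datapath = agent.get('configurations', {}).get('datapath_type')
    if datapath == 'netdev':  # assumed marker for the userspace datapath
        details['use_dpdk'] = True
    return details
```

Nova's VIF driver would then check vif_details for 'use_dpdk' and plug a userspace vhost port instead of a kernel OVS port.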


From: Czesnowicz, Przemyslaw [mailto:przemyslaw.czesnow...@intel.com]
Sent: Thursday, July 10, 2014 6:20 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Cc: Mooney, Sean K
Subject: RE: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

Hi,

Thanks for Your answers.

Yep, using binding:vif_details makes more sense. We would like to reuse
VIF_TYPE_OVS and modify nova to use userspace vhost when the 'use_dpdk'
flag is present.
What we are missing is how to inform the ml2 plugin/mechanism drivers when to
put that 'use_dpdk' flag into vif_details.

On the node ovs_neutron_agent could look up datapath_type in ovsdb, but how can 
we provide that info to the plugin?
Currently there is no mechanism to get node specific info into the ml2 plugin 
(or at least we don’t see any).

Any ideas on how this could be implemented?

Regards
Przemek
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Thursday, July 10, 2014 8:08 AM
To: OpenStack Development Mailing List (not for usage questions); Czesnowicz, 
Przemyslaw
Cc: Mooney, Sean K
Subject: RE: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

Hi,
For passing information from neutron to the nova VIF driver, you should use
the binding:vif_details dictionary. You may not require a new VIF_TYPE, but can
leverage the existing VIF_TYPE_OVS and add 'use_dpdk' to the vif_details
dictionary. This will require some rework of the existing libvirt vif_driver
handling of VIF_TYPE_OVS.

binding:profile is considered an input dictionary that is used to pass
information required for port binding on the server side. You may use
binding:profile to pass in a dpdk ovs request, so it will be taken into
consideration during port binding by the ML2 plugin.

I am not sure about a new vnic_type, since it will require the port owner to
pass in the requested type. Is that your intention? Should the port owner be
aware of dpdk ovs usage?
There is also a VM scheduling consideration: if a certain vnic_type is
requested, the VM should be scheduled on a node that can satisfy the request.

Regards,
Irena


From: loy wolfe [mailto:loywo...@gmail.com]
Sent: Thursday, July 10, 2014 6:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Mooney, Sean K
Subject: Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

I think both a new vnic_type and a new vif_type should be added. vnic now has
three types: normal, direct, macvtap; so we need a new type, uservhost.

As for vif_type, we now have VIF_TYPE_OVS, VIF_TYPE_QBH/QBG and VIF_HW_VEB, so we
need a new VIF_TYPE_USEROVS.

I don't think it's a good idea to directly reuse the ovs agent, for we have to
consider use cases where ovs and userovs co-exist. Now it's a little painful to
fork and write a new agent, but it will be easier when the ML2 agent BP is merged
in the future. (https://etherpad.openstack.org/p/modular-l2-agent-outline)

On Wed, Jul 9, 2014 at 11:08 PM, Czesnowicz, Przemyslaw 
przemyslaw.czesnow...@intel.commailto:przemyslaw.czesnow...@intel.com wrote:
Hi

We (Intel Openstack team) would like to add support for dpdk based userspace 
openvswitch using mech_openvswitch and mech_odl from ML2 plugin.
The dpdk-enabled ovs comes in two flavours: one is netdev, incorporated into
vanilla ovs; the other is a fork of ovs with a dpdk datapath
(https://github.com/01org/dpdk-ovs).
Both flavours use the userspace vhost mechanism to connect the VMs to the switch.

Our initial approach was to extend ovs vif bindings in nova and add a config 
parameter to specify when userspace vhost should be used.
Spec : https://review.openstack.org/95805
Code: https://review.openstack.org/100256

Nova devs rejected this approach saying that Neutron should pass all necessary 
information to nova to select vif bindings.

Currently we are looking for a way to pass information from Neutron to Nova 
that dpdk enabled ovs is being used while still being able to use 
mech_openvswitch and ovs_neutron_agent or mech_odl.

We thought of two possible solutions:

1.  Use 

[openstack-dev] [QA] No Meeting this Week

2014-07-16 Thread Matthew Treinish
Hi Everyone,

Just wanted to send a reminder to the list that we're not having a meeting this
week. Since most people are here in Darmstadt this week at the mid-cycle meet-up
there isn't a reason to have the regular weekly meeting. So I'm cancelling this
week's meeting. We will have our next meeting next week, on July 24th at
2200 UTC.

-Matt Treinish




Re: [openstack-dev] REST API access to configuration options

2014-07-16 Thread Chmouel Boudjnah
On Tue, Jul 15, 2014 at 9:54 AM, Henry Nash hen...@linux.vnet.ibm.com
wrote:

 Do people think this is a good idea?  Useful in other projects?  Concerned
 about the risks?



FWIW, we have had this in Swift for a while and we actually use it for
testing different cloud capabilities.

I personally find it useful for client behavioural features.
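One way to contain the risk Henry asks about is to expose only an explicit whitelist of options. A hypothetical sketch follows — this is not Swift's actual /info implementation, and the option names and `exposed_config` helper are invented for illustration.

```python
# Illustration of whitelisting which config options a REST endpoint may
# publish; anything not explicitly listed (credentials, keys) stays hidden.

SAFE_OPTIONS = frozenset({'max_file_size', 'account_autocreate',
                          'allow_versions'})

def exposed_config(conf, safe=SAFE_OPTIONS):
    """Return only the options deemed safe to publish to clients."""
    return {k: v for k, v in conf.items() if k in safe}
```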

Chmouel


[openstack-dev] [Neutron][LBaaS] Radware LBaaS v2 driver design doc

2014-07-16 Thread Avishay Balderman
Hi
Please review: https://review.openstack.org/#/c/105669/
Thanks
Avishay


Re: [openstack-dev] [specs] how to continue spec discussion

2014-07-16 Thread Thierry Carrez
Daniel P. Berrange wrote:
 On Wed, Jul 16, 2014 at 11:57:33AM +, Tim Bell wrote:
 It seems a pity to archive the comments and reviewer lists along
 with losing a place to continue the discussions even if we are not
 expecting to see code in Juno.
 
 Agreed, that is sub-optimal to say the least.
 
 The spec documents themselves are in a release specific directory
 though. Any which are to be postponed to Kxxx would need to move
 into a specs/k directory instead of specs/juno, but we don't
 know what the k directory needs to be called yet :-(

The poll ends in 18 hours, so that should no longer be a blocker :)

I think we don't really want to abandon those specs and lose
comments and history... but we want to shelve them in a place where they
do not interrupt core developers' workflow as they concentrate on Juno
work. It will be difficult to efficiently ignore them if they are filed
in a next or a kxxx directory, as they would still clutter /most/ Gerrit
views.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

2014-07-16 Thread Czesnowicz, Przemyslaw
I don't think this is a use case that could be supported right now.
There will be multiple issues with running two ovs instances on the node, e.g.
how to manage two sets of userspace utilities, two ovsdb servers etc.
Also there would be some limitations from how the ml2 plugin does port binding
(different segmentation ids would have to be used for the two ovs instances).

This could be done if ovs were able to run two datapaths at the same time
(the kernel datapath and the dpdk-enabled userspace datapath).
I would like to concentrate on the simpler use case where some nodes are
optimized for high-perf net i/o.

Thanks 
Przemek

-Original Message-
From: Mathieu Rohon [mailto:mathieu.ro...@gmail.com] 
Sent: Friday, July 11, 2014 10:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

A simple usecase could be to have a compute node able start VM with optimized 
net I/O or standard net I/O, depending on the network flavor ordered for this 
VM.

On Fri, Jul 11, 2014 at 11:16 AM, Czesnowicz, Przemyslaw 
przemyslaw.czesnow...@intel.com wrote:


 Can you explain whats the use case for  running both ovs and userspace 
 ovs on the same host?



 Thanks

 Przemek

 From: loy wolfe [mailto:loywo...@gmail.com]
 Sent: Friday, July 11, 2014 3:17 AM


 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 
 plugin



 +1



 It's totally different between ovs and userspace ovs.

 also, there is strong need to keep ovs even we have a userspace ovs in 
 the same host





 --


 Intel Shannon Limited
 Registered in Ireland
 Registered Office: Collinstown Industrial Park, Leixlip, County 
 Kildare Registered Number: 308263 Business address: Dromore House, 
 East Park, Shannon, Co. Clare

 This e-mail and any attachments may contain confidential material for 
 the sole use of the intended recipient(s). Any review or distribution 
 by others is strictly prohibited. If you are not the intended 
 recipient, please contact the sender and delete all copies.









[openstack-dev] [infra] recheck no bug and comment

2014-07-16 Thread Alexis Lee
Hello,

What do you think about allowing some text after the words recheck no
bug? E.g. to include a snippet from the log showing that the failure has
been at least briefly investigated before attempting a recheck:

  recheck no bug

  Compute node failed to spawn:

2014-07-15 12:18:09.936 | 3f1e7f32-812e-48c8-a83c-2615c4451fa6 |
  overcloud-NovaCompute0-zahdxwar7zlh | ERROR  | - | NOSTATE | |


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.



Re: [openstack-dev] [oslo.messaging][infra] New package dependencies for oslo.messaging's AMQP 1.0 support.

2014-07-16 Thread Ken Giusti
On 07/15/2014 10:58:50 +0200, Flavio Percoco wrote:
On 07/15/2014 07:16 PM, Doug Hellmann wrote:
 On Tue, Jul 15, 2014 at 1:03 PM, Ken Giusti kgiu...@gmail.com wrote:

 These packages may be obtained via EPEL for Centos/RHEL systems
 (qpid-proton-c-devel), and via the Qpid project's PPA [3]
 (libqpid-proton2-dev) for Debian/Ubuntu.  They are also available for
 Fedora via the default yum repos.  Otherwise, the source can be pulled
 directly from the Qpid project and built/installed manually [4].

 Do you know the timeline for having those added to the Ubuntu cloud
 archives? I think we try not to add PPAs in devstack, but I'm not sure
 if that's a hard policy.

IIUC, the package has been accepted in Debian - Ken, correct me if I'm
wrong. Here's the link to the Debian mentors page:

http://mentors.debian.net/package/qpid-proton


No, it hasn't been accepted yet - it is still pending approval by the
sponsor.  That's one of the reasons the Qpid project has set up its
own PPA.



 I'd like to get the blueprint accepted, but I'll have to address these
 new dependencies first.  What is the best way to get these new
 packages into CI, devstack, etc?  And will developers be willing to
 install the proton development libraries, or can this be done
 automagically?

 To set up integration tests we'll need an option in devstack to set
 the messaging driver to this new one. That flag should also trigger
 setting up the dependencies needed. Before you spend time implementing
 that, though, we should clarify the policy on PPAs.

Agreed. FWIW, the work on devstack is in the works but it's being held
off while we clarify the policy on PPAs.



Re: [openstack-dev] [specs] how to continue spec discussion

2014-07-16 Thread John Garbutt
On 16 July 2014 14:07, Thierry Carrez thie...@openstack.org wrote:
 Daniel P. Berrange wrote:
 On Wed, Jul 16, 2014 at 11:57:33AM +, Tim Bell wrote:
 It seems a pity to archive the comments and reviewer lists along
 with losing a place to continue the discussions even if we are not
 expecting to see code in Juno.

Agreed we should keep those comments.

 Agreed, that is sub-optimal to say the least.

 The spec documents themselves are in a release specific directory
 though. Any which are to be postponed to Kxxx would need to move
 into a specs/k directory instead of specs/juno, but we don't
 know what the k directory needs to be called yet :-(

 The poll ends in 18 hours, so that should no longer be a blocker :)

Aww, there goes our lame excuse for punting on making a decision on this.

 I think we don't really want to abandon those specs and lose
 comments and history... but we want to shelve them in a place where they
 do not interrupt core developers workflow as they concentrate on Juno
 work. It will be difficult to efficiently ignore them if they are filed
 in a next or a kxxx directory, as they would still clutter /most/ Gerrit
 views.

+1

My intention was that once the specific project is open for K specs,
people will restore their original patch set, and move the spec to the
K directory, thus keeping all the history.

For Nova, the open reviews, with a -2, are ones that are on the
potential exception list, and so still might need some reviews. If
they gain an exception, the -2 will be removed. The list of possible
exceptions is currently included in bottom of this etherpad:
https://etherpad.openstack.org/p/nova-juno-spec-priorities

At some point we will open nova-specs for K, right now we are closed
for all spec submissions. We already have more blueprints approved
than we will be able to merge during the rest of Juno.

The idea is that everyone can now focus more on fixing bugs, reviewing
bug fixes, and reviewing remaining higher priority features, rather
than reviewing designs for K features. It is sinking a lot of
reviewers' time into nova-specs, and it feels best to divert
attention.

We could leave the reviews open in gerrit, but we are trying hard to
set expectations around the likelihood of being reviewed and/or
accepted. In the past people have got very frustrated and
complained about not finding out what is happening (or not) with
what they have up for review.

This is all very new, so we are mostly making this up as we go along,
based on what we do with code submissions. Ideas on a better approach
that still meet most of the above goals, would be awesome.

Thanks,
John



[openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-16 Thread Morgan Fainberg


On Wednesday, July 16, 2014, Joe Gordon joe.gord...@gmail.com wrote:



On Tue, Jul 15, 2014 at 7:20 AM, Morgan Fainberg morgan.fainb...@gmail.com 
wrote:


On Tuesday, July 15, 2014, Steven Hardy sha...@redhat.com wrote:
On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
 On 07/14/2014 11:47 AM, Steven Hardy wrote:
 Hi all,
 
 I'm probably missing something, but can anyone please tell me when devstack
 will be moving to keystone v3, and in particular when API auth_token will
 be configured such that auth_version is v3.0 by default?
 
 Some months ago, I posted this patch, which switched auth_version to v3.0
 for Heat:
 
 https://review.openstack.org/#/c/80341/
 
 That patch was nack'd because there was apparently some version discovery
 code coming which would handle it, but AFAICS I still have to manually
 configure auth_version to v3.0 in the heat.conf for our API to work
 properly with requests from domains other than the default.
 
 The same issue is observed if you try to use non-default-domains via
 python-heatclient using this soon-to-be-merged patch:
 
 https://review.openstack.org/#/c/92728/
 
 Can anyone enlighten me here, are we making a global devstack move to the
 non-deprecated v3 keystone API, or do I need to revive this devstack patch?
 
 The issue for Heat is we support notifications from stack domain users,
 who are created in a heat-specific domain, thus won't work if the
 auth_token middleware is configured to use the v2 keystone API.
 
 Thanks for any information :)
 
 Steve
 There are reviews out there in client land now that should work.  I was
 testing discover just now and it seems to be doing the right thing.  If the
 AUTH_URL is chopped off the V2.0 or V3 the client should be able to handle
 everything from there on forward.
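The "chopped off" trimming Adam describes amounts to stripping any version suffix from AUTH_URL before discovery runs. As a tiny illustration (the helper name is made up; the real logic lives in the keystone client's discovery code):

```python
import re

def unversioned_auth_url(auth_url):
    """Strip a trailing /v2.0 or /v3 from AUTH_URL so version discovery
    can pick the best available API. Illustration only, not the client's
    actual implementation."""
    return re.sub(r'/v(2\.0|3)/?$', '', auth_url.strip())

# Both versioned forms reduce to the same unversioned endpoint:
# unversioned_auth_url('http://keystone:5000/v2.0')
# unversioned_auth_url('http://keystone:5000/v3/')
```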

Perhaps I should restate my problem, as I think perhaps we still have
crossed wires:

- Certain configurations of Heat *only* work with v3 tokens, because we
  create users in a non-default domain
- Current devstack still configures versioned endpoints, with v2.0 keystone
- Heat breaks in some circumstances on current devstack because of this.
- Adding auth_version='v3.0' to the auth_token section of heat.conf fixes
  the problem.

So, back in March, client changes were promised to fix this problem, and
now, in July, they still have not - do I revive my patch, or are fixes for
this really imminent this time?

Basically I need the auth_token middleware to accept a v3 token for a user
in a non-default domain, e.g validate it *always* with the v3 API not v2.0,
even if the endpoint is still configured versioned to v2.0.

Sorry to labour the point, but it's frustrating to see this still broken
so long after I proposed a fix and it was rejected.


We just did a test converting over the default to v3 (and falling back to v2 as 
needed, yes, fallback will still be needed) yesterday (Dolph posted a couple of 
test patches and they seemed to succeed - yay!!). It looks like it will just 
work. Now there is a big caveat: this default will only change in the keystone 
middleware project, and it needs to have a patch or three get through the gate 
converting projects over to use it before we accept the code.

Nova has approved the patch to switch over; it is just fighting with the gate. 
Other patches are proposed for other projects and are in various states of 
approval.

I assume you mean switch over to the keystone middleware project [0], not switch 
over to keystone v3. Based on [1], my understanding is that no changes to nova 
are needed to use the v2-compatible parts of the v3 API. But are changes needed 
to support domains, or is this not a problem because the auth middleware uses 
UUIDs for user_id and project_id, so nova doesn't need to have any concept of 
domains? Are any nova changes needed to support the v3 API?


 
This change simply makes it so the middleware will prefer v3 over v2 if both 
are available for validating UUID tokens and fetching certs. It still falls 
back to v2 as needed. It is transparent to all services (it was blocking on 
Nova and some uniform catalog related issues a while back, but Jamie Lennox 
resolved those, see below for more details).

It does not mean Nova (or anyone else) is magically using features it 
wasn't already using. It just means Heat isn't needing to do a bunch of 
conditional stuff to get the v3 information out of the middleware. This change 
is only used in the case that v2 and v3 are both available when auth_token 
middleware looks at the auth_url (limited discovery). It is still possible to 
force v2 by setting the 'identity_uri' to the v2.0-specific root (no discovery 
performed).
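The preference logic Morgan describes can be sketched as follows; the function name and signature are invented for the example, and the real behaviour lives in keystonemiddleware's auth_token discovery code:

```python
def pick_identity_version(available, force_v2=False):
    """Choose the keystone API version auth_token validates tokens with.

    Sketch of the described behaviour: prefer v3 when discovery reports
    both versions, fall back to v2.0 otherwise; a version-pinned
    identity_uri (force_v2) skips discovery and pins v2.0.
    """
    if force_v2:  # identity_uri points at a /v2.0 root, no discovery
        return 'v2.0'
    if 'v3' in available:
        return 'v3'
    if 'v2.0' in available:
        return 'v2.0'
    raise ValueError('no supported identity API version found')
```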

Switching over the default to v3 in the middleware doesn't test nova + v3 user 
tokens; tempest nova tests don't generate v3 user tokens (although I hear there 
is an experimental job to do this). So you are testing that moving the 
middleware to v3 while accepting v2 API user tokens works. But what happens if 
someone tries to use a the 

Re: [openstack-dev] [Neutron] Gap 0 (database migrations) closed!

2014-07-16 Thread Jakub Libosvar
On 07/16/2014 04:29 PM, Paddu Krishnan (padkrish) wrote:
 Hello,
 A follow-up development question related to this:
 
 As a part of https://review.openstack.org/#/c/105563/, which was
 introducing a new table in Neutron DB, I was trying to send for review a
 new file in neutron/db/migration/alembic_migrations/versions/
 https://review.openstack.org/#/c/105563/4/neutron/db/migration/alembic_migrations/versions/1be5bdeb1d9a_ml2_network_overlay_type_driver.py
  which
 got generated through script neutron-db-manage. This also
 updated  neutron/db/migration/alembic_migrations/versions/
 https://review.openstack.org/#/c/105563/4/neutron/db/migration/alembic_migrations/versions/1be5bdeb1d9a_ml2_network_overlay_type_driver.py HEAD.
 I was trying to send this file for review as well.
 
 git review failed and I saw merge errors
 in neutron/db/migration/alembic_migrations/versions/
 https://review.openstack.org/#/c/105563/4/neutron/db/migration/alembic_migrations/versions/1be5bdeb1d9a_ml2_network_overlay_type_driver.py HEAD.
  
 
 Without the HEAD file modified, Jenkins was failing. I am working to fix this
 and saw this e-mail. 
 
 I had to go through all the links in this thread in detail. But,
 meanwhile, the two points mentioned below look related to the
 patch/issues I am facing. 
 So, if I add a new table, I don't need to run the neutron-db-manage
 script to generate the file and modify the HEAD anymore? Does (2) below
 need to be done manually?
Hi Paddu,

the process is the same (create migration script, update HEAD file), but
all migrations should have

migration_for_plugins = ['*']


Because you created a new DB model in a new module, you also need to add

from neutron.plugins.ml2.drivers import type_network_overlay

to neutron/db/migration/models/head.py module.
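Putting Jakub's instructions together, a migration script of this era would look roughly like the skeleton below. The revision id is taken from the patch under review; the down_revision and table definition are placeholders, and the third-party imports are kept inside the functions so the sketch stays importable on its own.

```python
"""ml2_network_overlay_type_driver

Illustrative skeleton of what neutron-db-manage generates; the
down_revision and columns are made up for the example.
"""

# revision identifiers, used by Alembic.
revision = '1be5bdeb1d9a'       # from the patch under review
down_revision = 'abc123456789'  # hypothetical parent revision

# since the gap closure, every migration must run for all plugins
migration_for_plugins = ['*']


def upgrade(active_plugins=None, options=None):
    # imports inside the function keep this sketch importable without
    # alembic/sqlalchemy installed
    from alembic import op
    import sqlalchemy as sa

    op.create_table(
        'ml2_network_overlay',  # hypothetical table name
        sa.Column('network_id', sa.String(36), primary_key=True),
    )


def downgrade(active_plugins=None, options=None):
    from alembic import op
    op.drop_table('ml2_network_overlay')
```

Remember to also import the new model's module in neutron/db/migration/models/head.py, as described above.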

I hope it helps.

Kuba

 
 Thanks,
 Paddu



 
 From: Anna Kamyshnikova akamyshnik...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Wednesday, July 16, 2014 1:14 AM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Gap 0 (database migrations) closed!
 
 Hello everyone!
 
 I would like to bring the next two points to everybody's attention:
 
 1) As Henry mentioned, if you add a new migration you should make it
 unconditional. Conditional migrations should not be merged from now on.
 
 2) If you add some new models, you should ensure that the module containing
 them is imported in /neutron/db/migration/models/head.py.
 
 The second point is important for the testing which I hope will be merged
 soon: https://review.openstack.org/76520.
 
 Regards,
 Ann
 
 
 
 On Wed, Jul 16, 2014 at 5:54 AM, Kyle Mestery mest...@mestery.com wrote:
 
 On Tue, Jul 15, 2014 at 5:49 PM, Henry Gessau ges...@cisco.com wrote:
  I am happy to announce that the first (zero'th?) item in the Neutron Gap
  Coverage[1] has merged[2]. The Neutron database now contains all tables 
 for
  all plugins, and database migrations are no longer conditional on the
  configuration.
 
  In the short term, Neutron developers who write migration scripts need 
 to set
migration_for_plugins = ['*']
  but we will soon clean up the template for migration scripts so that 
 this will
  be unnecessary.
 
  I would like to say special thanks to Ann Kamyshnikova and Jakub 
 Libosvar for
  their great work on this solution. Also thanks to Salvatore Orlando and 
 Mark
  McClain for mentoring this through to the finish.
 
  [1]
  
 https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage
  [2] https://review.openstack.org/96438
 
 This is great news! Thanks to everyone who worked on this particular
 gap. We're making progress on the other gaps identified in that plan,
 I'll send an email out once Juno-2 closes with where we're at.
 
 Thanks,
 Kyle
 
 
 
 
 
 
 



Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 0.0.0.0:5000

2014-07-16 Thread Brian Haley
On 07/16/2014 07:34 AM, Joe Jiang wrote:
 Hi all, 
 
 When I set up my development environment using devstack on CentOS 6.5
 (fetching the devstack source via github.com and checking out the
 stable/icehouse branch), I hit the failure shown in the error log fragment
 below[1].
 I'm not sure whether it is OK to ask my question on this mailing list, but I
 have searched the web and still have not resolved it. Any help is highly
 appreciated.

I tripped over a similar issue with Horizon yesterday and found this bug:

https://bugs.launchpad.net/devstack/+bug/1340660

The error I saw was with port 80, so I was able to disable Horizon to get around
it, and I didn't see anything obvious in the apache error logs to explain it.

-Brian


 2014-07-16 11:08:53.282 | + sudo sed '/^Listen/s/^.*$/Listen 0.0.0.0:80/' -i
 /etc/httpd/conf/httpd.conf
 2014-07-16 11:08:53.295 | + sudo rm -f '/var/log/httpd/horizon_*'
 2014-07-16 11:08:53.310 | + sudo sh -c 'sed -e 
 2014-07-16 11:08:53.310 | s,%USER%,stack,g;
 2014-07-16 11:08:53.310 | s,%GROUP%,stack,g;
 2014-07-16 11:08:53.310 | s,%HORIZON_DIR%,/opt/stack/horizon,g;
 2014-07-16 11:08:53.310 | s,%APACHE_NAME%,httpd,g;
 2014-07-16 11:08:53.310 | s,%DEST%,/opt/stack,g;
 2014-07-16 11:08:53.310 | s,%HORIZON_REQUIRE%,,g;
 2014-07-16 11:08:53.310 |  /home/devstack/files/apache-horizon.template
/etc/httpd/conf.d/horizon.conf'
 2014-07-16 11:08:53.321 | + start_horizon
 2014-07-16 11:08:53.321 | + restart_apache_server
 2014-07-16 11:08:53.321 | + restart_service httpd
 2014-07-16 11:08:53.321 | + is_ubuntu
 2014-07-16 11:08:53.321 | + [[ -z rpm ]]
 2014-07-16 11:08:53.322 | + '[' rpm = deb ']'
 2014-07-16 11:08:53.322 | + sudo /sbin/service httpd restart
 2014-07-16 11:08:53.361 | Stopping httpd:  [FAILED]
 2014-07-16 11:08:53.532 | Starting httpd: httpd: Could not reliably determine
 the server's fully qualified domain name, using 127.0.0.1 for ServerName
 2014-07-16 11:08:53.533 | (98)Address already in use: make_sock: could not 
 bind
 to address [::]:5000
 2014-07-16 11:08:53.533 | (98)Address already in use: make_sock: could not 
 bind
 to address 0.0.0.0:5000
 2014-07-16 11:08:53.533 | no listening sockets available, shutting down
 2014-07-16 11:08:53.533 | Unable to open logs
 2014-07-16 11:08:53.547 |  [FAILED]
 2014-07-16 11:08:53.549 | + exit_trap
 2014-07-16 11:08:53.549 | + local r=1
 2014-07-16 11:08:53.549 | ++ jobs -p
 2014-07-16 11:08:53.550 | + jobs=
 2014-07-16 11:08:53.550 | + [[ -n '' ]]
 2014-07-16 11:08:53.550 | + exit 1
 [stack@stack devstack]$
 
 
 
 
 




Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-16 Thread Morgan Fainberg
I apologize for the very mixed up/missed quoting in that response, looks like 
my client ate a bunch of the quotes when writing up the email. 

—
Morgan Fainberg


--
From: Morgan Fainberg morgan.fainb...@gmail.com
Reply: Morgan Fainberg morgan.fainb...@gmail.com
Date: July 16, 2014 at 07:34:57
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  [openstack-dev] [devstack][keystone] Devstack, auth_token and 
keystone v3

  
  

[openstack-dev] [Neutron] l2pop problems

2014-07-16 Thread Zang MingJie
Hi, all:

While resolving the ovs-restart rebuild of br-tun flows[1], we have found
several l2pop problems:

1. L2pop depends on agent_boot_time to decide whether to send all
port information or not, but agent_boot_time is unreliable: for
example, if the service receives the port-up message before the agent
status report, the agent will never receive the ports on other agents.

2. If openvswitch is restarted, all flows are lost, including all
l2pop flows, and the agent is unable to fetch or recreate them.

To resolve the problems, I'm suggesting some changes:

1. Because the agent_boot_time is unreliable, the service can't decide
whether to send the flooding entry or not. But the agent can build up the
flooding entries from the unicast entries; this has already been
implemented[2]

2. Create an RPC from agent to service which fetches all fdb entries; the
agent calls the RPC in `provision_local_vlan`, before setting up any
port.[3]

After these changes, the l2pop service part becomes simpler and more
robust, with mainly two functions: first, return all fdb entries at once when
requested; second, broadcast a single fdb entry when a port goes up/down.
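The first suggested change can be sketched as follows. The flat entry tuple and function name are made up for the example (the real l2pop payload is a nested dict keyed by network and agent), but the idea matches [2]: every unicast entry names the tunnel endpoint hosting a port, so the agent can rebuild each network's flooding entry itself instead of relying on the server's agent_boot_time heuristic.

```python
from collections import defaultdict

def build_flooding_entries(unicast_entries):
    """Derive per-network flooding (broadcast) targets from unicast fdb
    entries. Each entry is a (network_id, agent_ip, mac, ip) tuple; the
    flooding entry for a network is simply the set of all remote tunnel
    endpoints that appear in its unicast entries."""
    flood = defaultdict(set)
    for network_id, agent_ip, _mac, _ip in unicast_entries:
        flood[network_id].add(agent_ip)
    return dict(flood)
```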

[1] https://bugs.launchpad.net/neutron/+bug/1332450
[2] https://review.openstack.org/#/c/101581/
[3] https://review.openstack.org/#/c/107409/



[openstack-dev] No DVR Meeting today

2014-07-16 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Folks,
DVR IRC Meeting for Today is Cancelled.
We will meet next week.
Thanks

Swaminathan Vasudevan
Systems Software Engineer (TC)


HP Networking
Hewlett-Packard
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: 916.785.1815
email: swaminathan.vasude...@hp.com




Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Daniel P. Berrange
On Wed, Jul 16, 2014 at 04:15:40PM +0200, Sean Dague wrote:
 Recently the main gate updated from Ubuntu 12.04 to 14.04, and in doing
 so we started executing the livesnapshot code in the nova libvirt
 driver. Which fails about 20% of the time in the gate, as we're bringing
 computes up and down while doing a snapshot. Dan Berrange did a bunch of
 debug on that and thinks it might be a qemu bug. We disabled these code
 paths, so live snapshot has now been ripped out.
 
 In January we also triggered a libvirt bug, and had to carry a private
 build of libvirt for 6 weeks in order to let people merge code in OpenStack.
 
 We never were able to switch to libvirt 1.1.1 in the gate using the
 Ubuntu Cloud Archive during Icehouse development, because it has a
 different set of failures that would have prevented people from merging
 code.
 
 Based on these experiences, libvirt version differences seem to be as
 substantial as major hypervisor differences.

I think that is a pretty dubious conclusion to draw from just a
couple of bugs. The reason they really caused pain is that the
CI test system was based on an old version for too long. If it
were tracking the current upstream version of libvirt/KVM we'd have
seen the problem much sooner and been able to resolve it during
review of the change introducing the feature, as we do with any
other bugs we encounter in software, such as the breakage we see
with stuff off pypi.

 There is a proposal here -
 https://review.openstack.org/#/c/103923/ to hold newer versions of
 libvirt to the same standard we hold xen, vmware, hyperv, docker,
 ironic, etc.

That is a rather misleading statement you're making there. Libvirt is
in fact held to *higher* standards than xen/vmware/hyperv because it
is actually gating all commits. The 3rd party CI systems can be
broken for days or weeks and we still happily accept code for those
virt drivers.

AFAIK there has never been any statement that every feature added
to xen/vmware/hyperv must be tested by the 3rd party CI system.
All of the CI systems, for whatever driver, are currently testing
some arbitrary subset of the overall features of that driver, and
by no means every new feature being approved in review has coverage.

 I'm somewhat concerned that the -2 pile on in this review is a double
 standard of libvirt features, and features exploiting really new
 upstream features. I feel like a lot of the language being used here
 about the burden of doing this testing is exactly the same as was
 presented by the docker team before their driver was removed, which was
 ignored by the Nova team at the time. It was the concern by the freebsd
 team, which was also ignored and they were told to go land libvirt
 patches instead.

As above the only double standard is that libvirt tests are all gating
and 3rd party tests are non-gating. 

 If we want to reduce the standards for libvirt we should reconsider
 what's being asked of 3rd party CI teams, and things like the docker
 driver, as well as the A, B, C driver classification. Because clearly
 libvirt 1.2.5+ isn't actually class A supported.

AFAIK the requirement for 3rd party CI is merely that it has to exist,
running some arbitrary version of the hypervisor in question. We've
not said that 3rd party CI has to be covering every version or every
feature, as is trying to be pushed on libvirt here.

The Class A, Class B, Class C classifications were always only
ever going to be a crude approximation. Unless you define them
wrt the explicit version of every single deb/pypi package installed
in the gate system (which I don't believe anyone has ever suggested)
there is always a risk that a different version of some package has a
bug that Nova tickles.

IMHO the classification we do for drivers provides an indication as
to the quality of the *Nova* code. IOW class A indicates that we've
thoroughly tested the Nova code and believe it to be free of bugs for
the features we've tested. If there is a bug in a 3rd party package
that doesn't imply that the Nova code is any less well tested or
more buggy. Replace libvirt with mysql in your example above. A new
version of mysql with a bug does not imply that Nova is suddenly not
class A tested.

IMHO it is up to the downstream vendors to run testing to ensure that
what they give to their customers, still achieves the quality level
indicated by the tests upstream has performed on the Nova code.

 Anyway, discussion welcomed. My primary concern right now isn't actually
 where we set the bar, but that we set the same bar for everyone.

As above, aside from the question of gating vs non-gating, the bar is
already set at the same level for everyone. There has to be a CI system
somewhere testing some arbitrary version of the software. Everyone meets
that requirement.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- 

Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 0.0.0.0:5000

2014-07-16 Thread Rich Megginson

On 07/16/2014 08:43 AM, Brian Haley wrote:

On 07/16/2014 07:34 AM, Joe Jiang wrote:

Hi all,

When I set up my development environment using devstack on CentOS 6.5
(fetching the devstack source via github.com and checking out the
stable/icehouse branch), I hit the failure shown in the error log fragment
below[1].
I'm not sure whether it is OK to ask my question on this mailing list, but I
have searched the web and still have not resolved it. Any help is highly
appreciated.

I tripped over a similar issue with Horizon yesterday and found this bug:

https://bugs.launchpad.net/devstack/+bug/1340660

The error I saw was with port 80, so I was able to disable Horizon to get around
it, and I didn't see anything obvious in the apache error logs to explain it.

-Brian


Another problem with port 5000 in Fedora, and probably more recent 
versions of RHEL, is the selinux policy:


# sudo semanage port -l|grep 5000
...
commplex_main_port_t   tcp  5000
commplex_main_port_t   udp  5000

There is some service called commplex that has already claimed port 
5000 for its use, at least as far as selinux goes.






2014-07-16 11:08:53.282 | + sudo sed '/^Listen/s/^.*$/Listen 0.0.0.0:80/' -i
/etc/httpd/conf/httpd.conf
2014-07-16 11:08:53.295 | + sudo rm -f '/var/log/httpd/horizon_*'
2014-07-16 11:08:53.310 | + sudo sh -c 'sed -e 
2014-07-16 11:08:53.310 | s,%USER%,stack,g;
2014-07-16 11:08:53.310 | s,%GROUP%,stack,g;
2014-07-16 11:08:53.310 | s,%HORIZON_DIR%,/opt/stack/horizon,g;
2014-07-16 11:08:53.310 | s,%APACHE_NAME%,httpd,g;
2014-07-16 11:08:53.310 | s,%DEST%,/opt/stack,g;
2014-07-16 11:08:53.310 | s,%HORIZON_REQUIRE%,,g;
2014-07-16 11:08:53.310 |  /home/devstack/files/apache-horizon.template

/etc/httpd/conf.d/horizon.conf'

2014-07-16 11:08:53.321 | + start_horizon
2014-07-16 11:08:53.321 | + restart_apache_server
2014-07-16 11:08:53.321 | + restart_service httpd
2014-07-16 11:08:53.321 | + is_ubuntu
2014-07-16 11:08:53.321 | + [[ -z rpm ]]
2014-07-16 11:08:53.322 | + '[' rpm = deb ']'
2014-07-16 11:08:53.322 | + sudo /sbin/service httpd restart
2014-07-16 11:08:53.361 | Stopping httpd:  [FAILED]
2014-07-16 11:08:53.532 | Starting httpd: httpd: Could not reliably determine
the server's fully qualified domain name, using 127.0.0.1 for ServerName
2014-07-16 11:08:53.533 | (98)Address already in use: make_sock: could not bind
to address [::]:5000
2014-07-16 11:08:53.533 | (98)Address already in use: make_sock: could not bind
to address 0.0.0.0:5000
2014-07-16 11:08:53.533 | no listening sockets available, shutting down
2014-07-16 11:08:53.533 | Unable to open logs
2014-07-16 11:08:53.547 |  [FAILED]
2014-07-16 11:08:53.549 | + exit_trap
2014-07-16 11:08:53.549 | + local r=1
2014-07-16 11:08:53.549 | ++ jobs -p
2014-07-16 11:08:53.550 | + jobs=
2014-07-16 11:08:53.550 | + [[ -n '' ]]
2014-07-16 11:08:53.550 | + exit 1
[stack@stack devstack]$












[openstack-dev] [qa] Test Ceilometer polling in tempest

2014-07-16 Thread Ildikó Váncsa
Hi Folks,

We've faced some problems while running the Ceilometer integration tests on 
the gate. The main issue is that we cannot test the polling mechanism: if we 
use a small polling interval, like 1 min, it puts high pressure on the Nova 
API, while if we use a longer interval, like 10 mins, we cannot execute the 
tests successfully, because they would run too long.

The idea to solve this issue is to reconfigure Ceilometer when the polling 
is tested. This would mean changing the polling interval from the default 10 
mins to 1 min at the beginning of the test, restarting the service, and, when 
the test is finished, changing the polling interval back to 10 mins, which 
will require one more service restart. The downside of this idea is that it 
needs service restarts today. It is on the list of plans to support dynamic 
re-configuration of Ceilometer, which would mean the ability to change the 
polling interval without restarting the service.
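The reconfigure-restart-restore dance described above could be wrapped in a small test fixture. This is a sketch under assumptions: the '[polling]'/'interval' conf option and the restart_service() callable stand in for however the interval is really stored (e.g. pipeline.yaml) and however the deployment restarts Ceilometer.

```python
import configparser
from contextlib import contextmanager

@contextmanager
def short_polling_interval(conf_path, restart_service, fast=60, slow=600):
    """Drop the polling interval to 1 min around a test, then restore
    10 min. Two restarts are needed today, as described above."""
    conf = configparser.ConfigParser()
    conf.read(conf_path)
    if not conf.has_section('polling'):
        conf.add_section('polling')
    conf.set('polling', 'interval', str(fast))
    with open(conf_path, 'w') as f:
        conf.write(f)
    restart_service()  # restart #1: pick up the 1 min interval
    try:
        yield
    finally:
        conf.set('polling', 'interval', str(slow))
        with open(conf_path, 'w') as f:
            conf.write(f)
        restart_service()  # restart #2: back to the 10 min default
```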

I know that this idea isn't ideal from the PoV that the system configuration is 
changed while the tests are running, but this is an expected scenario even in a 
production environment. We would change a parameter that a user can change at 
any time, in the same way a user would. Later on, when we can reconfigure 
the polling interval without restarting the service, this approach will be even 
simpler.

This idea would make it possible to test the polling mechanism of Ceilometer 
without any radical change to the ordering of test cases or anything else 
that would be strange in integration tests. We couldn't find any better way to 
solve the issue of the load on the APIs caused by polling.

What's your opinion about this scenario? Do you think it could be a viable 
solution to the above described problem?

Thanks and Best Regards,
Ildiko


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-16 Thread Morgan Fainberg
Reposted now with a lot fewer bad quote issues. Thanks for being patient with 
the re-send!

--
From: Joe Gordon joe.gord...@gmail.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: July 16, 2014 at 02:27:42
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and 
keystone v3

 On Tue, Jul 15, 2014 at 7:20 AM, Morgan Fainberg  
 wrote:
  
 
 
  On Tuesday, July 15, 2014, Steven Hardy wrote:
 
  On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
   On 07/14/2014 11:47 AM, Steven Hardy wrote:
   Hi all,
   
   I'm probably missing something, but can anyone please tell me when
  devstack
   will be moving to keystone v3, and in particular when API auth_token
  will
   be configured such that auth_version is v3.0 by default?
   
   Some months ago, I posted this patch, which switched auth_version to
  v3.0
   for Heat:
   
   https://review.openstack.org/#/c/80341/
   
   That patch was nack'd because there was apparently some version
  discovery
   code coming which would handle it, but AFAICS I still have to manually
   configure auth_version to v3.0 in the heat.conf for our API to work
   properly with requests from domains other than the default.
   
   The same issue is observed if you try to use non-default-domains via
   python-heatclient using this soon-to-be-merged patch:
   
   https://review.openstack.org/#/c/92728/
   
   Can anyone enlighten me here, are we making a global devstack move to
  the
   non-deprecated v3 keystone API, or do I need to revive this devstack
  patch?
   
   The issue for Heat is we support notifications from stack domain
  users,
   who are created in a heat-specific domain, thus won't work if the
   auth_token middleware is configured to use the v2 keystone API.
   
   Thanks for any information :)
   
   Steve
   There are reviews out there in client land now that should work. I was
   testing discover just now and it seems to be doing the right thing. If
  the
  AUTH_URL is chopped off the V2.0 or V3 the client should be able to
  handle
   everything from there on forward.
 
  Perhaps I should restate my problem, as I think perhaps we still have
  crossed wires:
 
  - Certain configurations of Heat *only* work with v3 tokens, because we
  create users in a non-default domain
  - Current devstack still configures versioned endpoints, with v2.0
  keystone
  - Heat breaks in some circumstances on current devstack because of this.
  - Adding auth_version='v3.0' to the auth_token section of heat.conf fixes
  the problem.
 
  So, back in March, client changes were promised to fix this problem, and
  now, in July, they still have not - do I revive my patch, or are fixes for
  this really imminent this time?
 
  Basically I need the auth_token middleware to accept a v3 token for a user
  in a non-default domain, e.g. validate it *always* with the v3 API, not
  v2.0, even if the endpoint is still configured versioned to v2.0.
 
  Sorry to labour the point, but it's frustrating to see this still broken
  so long after I proposed a fix and it was rejected.
 
 
  We just did a test converting over the default to v3 (and falling back to
  v2 as needed, yes fallback will still be needed) yesterday (Dolph posted a
  couple of test patches and they seemed to succeed - yay!!) It looks like it
  will just work. Now there is a big caveat: this default will only change
  in the keystone middleware project, and it needs to have a patch or three
  get through gate converting projects over to use it before we accept the
  code.
 
  Nova has approved the patch to switch over, it is just fighting with Gate.
  Other patches are proposed for other projects and are in various states of
  approval.
 
  
 I assume you mean switch over to keystone middleware project [0], not

Correct, switch to middleware (a requirement before we landed this patch in 
middleware). I was unclear in that statement. Sorry, I didn’t mean to make anyone 
jumpy that something was approved in Nova that shouldn’t have been or that did 
massive re-workings internal to Nova.

 switch over to keystone v3. Based on [1] my understanding is no changes to
 nova are needed to use the v2-compatible parts of the v3 API. But are
 changes needed to support domains or is this not a problem because the auth
 middleware uses uuids for user_id and project_id, so nova doesn't need to
 have any concept of domains? Are any nova changes needed to support the v3
 API?
 

This change simply makes it so the middleware will prefer v3 over v2 if both 
are available 
for validating UUID tokens and fetching certs. It still falls back to v2 as 
needed. It 
is transparent to all services (it was blocking on Nova and some uniform 
catalog related 
issues a while back, but Jamie Lennox resolved those, see below for 
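
A toy sketch of the selection behaviour described above (NOT the actual
keystonemiddleware code - the function name and version strings are
illustrative): prefer the v3 identity API when the server advertises it,
and fall back to v2.0 otherwise.

```python
def pick_identity_version(advertised_versions):
    """advertised_versions: version strings from keystone version discovery."""
    versions = set(advertised_versions)
    if "v3.0" in versions:
        return "v3.0"   # prefer v3 for validating tokens / fetching certs
    if "v2.0" in versions:
        return "v2.0"   # transparent fallback for older deployments
    raise ValueError("no supported identity API version advertised")
```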

Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Mark McLoughlin
On Wed, 2014-07-16 at 16:15 +0200, Sean Dague wrote:
..
 Based on these experiences, libvirt version differences seem to be as
 substantial as major hypervisor differences. There is a proposal here -
 https://review.openstack.org/#/c/103923/ to hold newer versions of
 libvirt to the same standard we hold xen, vmware, hyperv, docker,
 ironic, etc.

That's a bit of a mis-characterization - in terms of functional test
coverage, the libvirt driver is the bar that all the other drivers
struggle to meet.

And I doubt any of us pay too close attention to the feature coverage
that the 3rd party CI test jobs have.

 I'm somewhat concerned that the -2 pile on in this review is a double
 standard of libvirt features, and features exploiting really new
 upstream features. I feel like a lot of the language being used here
 about the burden of doing this testing is exactly the same as was
 presented by the docker team before their driver was removed, which was
 ignored by the Nova team at the time.

Personally, I wasn't very comfortable with the docker driver move. It
certainly gave an outward impression that we're an unfriendly community.
The mitigating factor was that a lot of friendly, collaborative,
coaching work went on in the background for months. Expectations were
communicated well in advance.

Kicking the docker driver out of the tree has resulted in an uptick in
the amount of work happening on it, but I suspect most people involved
have a bad taste in their mouths. I guess there's incentives at play
which mean they'll continue plugging away at it, but those incentives
aren't always at play.

 It was the concern by the freebsd
 team, which was also ignored and they were told to go land libvirt
 patches instead.
 
 I'm ok with us as a project changing our mind and deciding that the test
 bar needs to be taken down a notch or two because it's too burdensome to
 contributors and vendors, but if we are doing that, we need to do it for
 everyone. A lot of other organizations have put a ton of time and energy
 into this, and are carrying a maintenance cost of running these systems
 to get results back in a timely basis.

I don't agree that we need to apply the same rules equally to everyone.

At least part of the reasoning behind the emphasis on 3rd party CI
testing was that projects (Neutron in particular) were being overwhelmed
by contributions to drivers from developers who never contributed in any
way to the core. The corollary of that is that contributors who do
contribute to the core should be given a bit more leeway in return.

There's a natural building of trust and element of human relationships
here. As a reviewer, you learn to trust contributors with a good track
record and perhaps prioritize contributions from them.

 As we seem deadlocked in the review, I think the mailing list is
 probably a better place for this.
 
 If we want to reduce the standards for libvirt we should reconsider
 what's being asked of 3rd party CI teams, and things like the docker
 driver, as well as the A, B, C driver classification. Because clearly
 libvirt 1.2.5+ isn't actually class A supported.

No, there are features or code paths of the libvirt 1.2.5+ driver that
aren't as well tested as the class A designation implies. And we have
a proposal to make sure these aren't used by default:

  https://review.openstack.org/107119

i.e. to stray off the class A path, an operator has to opt into it by
changing a configuration option that explains they will be enabling code
paths which aren't yet tested upstream.

These features have value to some people now; they don't risk regressing
the class A driver and there's a clear path to them being elevated to
class A in time. We should value these contributions and nurture these
contributors.

Appending some of my comments from the review below. The tl;dr is that I
think we're losing sight of the importance of welcoming and nurturing
contributors, and valuing whatever contributions they can make. That
terrifies me. 

Mark.

---

Compared to other open source projects, we have done an awesome job in
OpenStack of having good functional test coverage. Arguably, given the
complexity of the system, we couldn't have got this far without it. I
can take zero credit for any of it.

However, not everything is tested now, nor are the tests we have
foolproof. When you consider the number of configuration options we
have, the supported distros, the ranges of library versions we claim to
support, etc., etc., I don't think we can ever get to an "everything is
tested" point.

In the absence of that, I think we should aim to be more clear what *is*
tested. The config option I suggest does that, which is a big part of
its merit IMHO.

We've had some success with the "be nasty enough to driver contributors
and they'll do what we want" approach so far, but IMHO that was an
exceptional approach for an exceptional situation - drivers that were
completely broken, and driver developers who didn't contribute to the
core 

Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Clark Boylan
On Wed, Jul 16, 2014 at 7:50 AM, Daniel P. Berrange berra...@redhat.com wrote:
 On Wed, Jul 16, 2014 at 04:15:40PM +0200, Sean Dague wrote:
 Recently the main gate updated from Ubuntu 12.04 to 14.04, and in doing
 so we started executing the livesnapshot code in the nova libvirt
 driver. Which fails about 20% of the time in the gate, as we're bringing
  computes up and down while doing a snapshot. Dan Berrange did a bunch of
 debug on that and thinks it might be a qemu bug. We disabled these code
 paths, so live snapshot has now been ripped out.

 In January we also triggered a libvirt bug, and had to carry a private
 build of libvirt for 6 weeks in order to let people merge code in OpenStack.

 We never were able to switch to libvirt 1.1.1 in the gate using the
 Ubuntu Cloud Archive during Icehouse development, because it has a
 different set of failures that would have prevented people from merging
 code.

 Based on these experiences, libvirt version differences seem to be as
 substantial as major hypervisor differences.

 I think that is a pretty dubious conclusion to draw from just a
 couple of bugs. The reason they really caused pain is that
 the CI test system was based on an old version for too long. If it
 were tracking the current upstream version of libvirt/KVM we'd have
 seen the problem much sooner & been able to resolve it during
 review of the change introducing the feature, as we do with any
 other bugs we encounter in software such as the breakage we see
 with my stuff off pypi.

How do you suggest we do this effectively with libvirt? In the past we
have tried to use newer versions of libvirt and they completely broke.
And the time to fix that was non-trivial. For most of our pypi
stuff we attempt to fix upstream and if that does not happen quickly
we pin (arguably we don't do this well either, see the sqlalchemy=0.7
issues of the past).

I am worried that we would just regress to the current process, because
we have tried something similar previously and were forced back to it.

 There is a proposal here -
 https://review.openstack.org/#/c/103923/ to hold newer versions of
 libvirt to the same standard we hold xen, vmware, hyperv, docker,
 ironic, etc.

 That is a rather misleading statement you're making there. Libvirt is
 in fact held to *higher* standards than xen/vmware/hyperv because it
 is actually gating all commits. The 3rd party CI systems can be
 broken for days, weeks and we still happily accept code for those
 virt. drivers.

 AFAIK there has never been any statement that every feature added
 to xen/vmware/hyperv must be tested by the 3rd party CI system.
 All of the CI systems, for whatever driver, are currently testing
 some arbitrary subset of the overall features of that driver, and
 by no means does every new feature being approved in review have coverage.

 I'm somewhat concerned that the -2 pile on in this review is a double
 standard of libvirt features, and features exploiting really new
 upstream features. I feel like a lot of the language being used here
 about the burden of doing this testing is exactly the same as was
 presented by the docker team before their driver was removed, which was
 ignored by the Nova team at the time. It was the concern by the freebsd
 team, which was also ignored and they were told to go land libvirt
 patches instead.

 As above the only double standard is that libvirt tests are all gating
 and 3rd party tests are non-gating.

 If we want to reduce the standards for libvirt we should reconsider
 what's being asked of 3rd party CI teams, and things like the docker
 driver, as well as the A, B, C driver classification. Because clearly
 libvirt 1.2.5+ isn't actually class A supported.

 AFAIK the requirement for 3rd party CI is merely that it has to exist,
 running some arbitrary version of the hypervisor in question. We've
 not said that 3rd party CI has to be covering every version or every
 feature, as is trying to be pushed on libvirt here.

 The Class A, Class B, Class C classifications were always only
 ever going to be a crude approximation. Unless you define them to be
 wrt the explicit version of every single deb/pypi package installed
 in the gate system (which I don't believe anyone has ever suggested)
 there is always risk that a different version of some package has a
 bug that Nova tickles.

 IMHO the classification we do for drivers provides an indication as
 to the quality of the *Nova* code. IOW class A indicates that we've
 thoroughly tested the Nova code and believe it to be free of bugs for
 the features we've tested. If there is a bug in a 3rd party package
 that doesn't imply that the Nova code is any less well tested or
 more buggy. Replace libvirt with mysql in your example above. A new
 version of mysql with a bug does not imply that Nova is suddenly not
 class A tested.

 IMHO it is up to the downstream vendors to run testing to ensure that
 what 

Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 0.0.0.0:5000

2014-07-16 Thread Morgan Fainberg

--
From: Rich Megginson rmegg...@redhat.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: July 16, 2014 at 08:08:00
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [devstack][keystone] (98)Address already in use: 
make_sock: could not bind to address [::]:5000  0.0.0.0:5000

SNIP

 Another problem with port 5000 in Fedora, and probably more recent
 versions of RHEL, is the selinux policy:
  
 # sudo semanage port -l|grep 5000
 ...
 commplex_main_port_t tcp 5000
 commplex_main_port_t udp 5000
  
 There is some service called commplex that has already claimed port
 5000 for its use, at least as far as selinux goes.
 

Wouldn’t this also affect the eventlet-based Keystone using port 5000? This is 
not an Apache-specific issue, is it?

—Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] REST API access to configuration options

2014-07-16 Thread Oleg Gelbukh
On Tue, Jul 15, 2014 at 1:08 PM, Mark McLoughlin mar...@redhat.com wrote:

 Also, this is going to tell you how the API service you connected to was
 configured. Where there are multiple API servers, what about the others?
 How do operators verify all of the API servers behind a load balancer
 with this?

 And in the case of something like Nova, what about the many other nodes
 behind the API server?


A query for configuration could be a part of /hypervisors API extension. It
doesn't solve multiple API servers issue though.

--
Best regards,
Oleg Gelbukh


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Daniel P. Berrange
On Wed, Jul 16, 2014 at 08:12:47AM -0700, Clark Boylan wrote:
 On Wed, Jul 16, 2014 at 7:50 AM, Daniel P. Berrange berra...@redhat.com 
 wrote:
  On Wed, Jul 16, 2014 at 04:15:40PM +0200, Sean Dague wrote:
  Recently the main gate updated from Ubuntu 12.04 to 14.04, and in doing
  so we started executing the livesnapshot code in the nova libvirt
  driver. Which fails about 20% of the time in the gate, as we're bringing
   computes up and down while doing a snapshot. Dan Berrange did a bunch of
  debug on that and thinks it might be a qemu bug. We disabled these code
  paths, so live snapshot has now been ripped out.
 
  In January we also triggered a libvirt bug, and had to carry a private
  build of libvirt for 6 weeks in order to let people merge code in 
  OpenStack.
 
  We never were able to switch to libvirt 1.1.1 in the gate using the
  Ubuntu Cloud Archive during Icehouse development, because it has a
  different set of failures that would have prevented people from merging
  code.
 
  Based on these experiences, libvirt version differences seem to be as
  substantial as major hypervisor differences.
 
  I think that is a pretty dubious conclusion to draw from just a
  couple of bugs. The reason they really caused pain is that
  the CI test system was based on an old version for too long. If it
  were tracking the current upstream version of libvirt/KVM we'd have
  seen the problem much sooner & been able to resolve it during
  review of the change introducing the feature, as we do with any
  other bugs we encounter in software such as the breakage we see
  with my stuff off pypi.
 
 How do you suggest we do this effectively with libvirt? In the past we
 have tried to use newer versions of libvirt and they completely broke.
 And the time to fix that was non-trivial. For most of our pypi
 stuff we attempt to fix upstream and if that does not happen quickly
 we pin (arguably we don't do this well either, see the sqlalchemy=0.7
 issues of the past).

The real big problem we had was the firewall deadlock problem. When
I was made aware of that problem I worked on fixing that in upstream
libvirt immediately. IIRC we had a solution in a week or two which
was added to a libvirt stable release update. Much of the further
delay was in waiting for the fixes to make their way into the
Ubuntu repositories. If the gate were ignoring Ubuntu repos and
pulling latest upstream libvirt, then we could have just pinned
to an older libvirt until the fix was pushed out to a stable
libvirt release. The libvirt community release process is flexible
enough to push out priority bug fix releases in a matter of days,
or less,  if needed. So temporarily pinning isn't the end of the
world in that respect.

 I am worried that we would just regress to the current process, because
 we have tried something similar previously and were forced back to it.

IMHO the longer we wait between updating the gate to new versions
the bigger the problems we create for ourselves. E.g. we were switching
from 0.9.8 (released Dec 2011) to 1.1.1 (released Jun 2013), so we
were exposed to over 1.5 years' worth of code churn in a single
event. The fact that we only hit a couple of bugs in that, is actually
remarkable given the amount of feature development that had gone into
libvirt in that time. If we had been tracking each intervening libvirt
release I expect the majority of updates would have had no ill effect
on us at all. For the couple of releases where there was a problem we
would not be forced to rollback to a version years older again, we'd
just drop back to the previous release at most 1 month older.

Ultimately, thanks to us identifying & fixing those previously seen
bugs, we did just switch from 0.9.8 to 1.2.2, which is a 2.5-year
jump, and the only problem we've hit is the live snapshot problem
which appears to be a QEMU bug.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Dan Smith
 Based on these experiences, libvirt version differences seem to be as
 substantial as major hypervisor differences.
 
 I think that is a pretty dubious conclusion to draw from just a
 couple of bugs. The reason they really caused pain is that because
 the CI test system was based on old version for too long.

I think the conclusion being made is that libvirt versions two years
apart are effectively like different major versions of a hypervisor. I
don't think that's wrong.

 That is a rather misleading statement you're making there. Libvirt is
 in fact held to *higher* standards than xen/vmware/hyperv because it
 is actually gating all commits. The 3rd party CI systems can be
 broken for days, weeks and we still happily accept code for those
 virt. drivers.

Right, and we've talked about raising that bar as well, by tracking
their status more closely, automatically -2'ing patches that touch the
subdirectory but don't get a passing vote from the associated CI system,
etc.
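
A hypothetical sketch of the auto "-2" rule described above (the subtree
paths, CI system names, and function are illustrative assumptions, not real
infra code): a patch touching a driver subtree gets a -2 unless that
driver's third-party CI has voted +1.

```python
# Map driver subtrees to the third-party CI system responsible for them.
DRIVER_CI = {
    "nova/virt/vmwareapi/": "VMware CI",
    "nova/virt/hyperv/": "Hyper-V CI",
}

def review_vote(changed_files, ci_votes):
    """Return -2 if a touched driver subtree lacks a passing CI vote, else 0."""
    for path in changed_files:
        for subtree, ci_name in DRIVER_CI.items():
            if path.startswith(subtree) and ci_votes.get(ci_name) != "+1":
                return -2
    return 0
```

For example, a patch touching nova/virt/hyperv/ with no "Hyper-V CI" vote
would be scored -2, while a patch touching only core code would pass.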

You're definitely right that libvirt is held to a higher bar in terms of
it being required to pass tests before we can even mechanically land a
patch. However, there is a lot of function in the driver that we don't
test right now because of the version we're tied to in the gate nodes.
It's actually *easier* for a 3rd party system like vmware to roll their
environment and enable tests of newer features, so I don't think that
this requirement would cause existing 3rd party CI systems any trouble.

 AFAIK there has never been any statement that every feature added
 to xen/vmware/hyperv must be tested by the 3rd party CI system.

On almost every spec that doesn't already call it out, a reviewer asks
"how are you going to test this beyond just unit tests?" I think the
assumption and feeling among most reviewers is that they are concerned
about approving new features, especially ones that depend on new things
(be it storage drivers, hypervisor versions, etc.), without testing.

 AFAIK the requirement for 3rd party CI is merely that it has to exist,
 running some arbitrary version of the hypervisor in question. We've
 not said that 3rd party CI has to be covering every version or every
 feature, as is trying to be pushed on libvirt here.

The requirement in the past has been that it has to exist. At the last
summit, we had a discussion about how to raise the bar on what we
currently have. We made a lot of progress getting those systems
established (only because we had a requirement, by the way) in the last
cycle. Going forward, we need to have new levels of expectations in
terms of coverage and reliability of those things, IMHO.

 As above, aside from the question of gating vs non-gating, the bar is
 already set at the same level of everyone. There has to be a CI system
 somewhere testing some arbitrary version of the software. Everyone meets
 that requirement.

Wording our current requirement as you have here makes it sound like an
arbitrary ticky mark, which saddens and kind of offends me. What we
currently have was a step in the right direction. It was a lot of work,
but it's by no means arbitrary nor sufficient, IMHO.

--Dan





Re: [openstack-dev] [Neutron] Missing logs in Midokura CI Bot

2014-07-16 Thread Kyle Mestery
On Wed, Jul 16, 2014 at 4:48 AM, Tomoe Sugihara to...@midokura.com wrote:
 Hi there,

 Just to apologize and inform you that most of the links to the logs of Midokura
 CI bot on gerrit are dead now. That is because I accidentally deleted all
 the logs today (instead of only the logs over a month old). Logs for the jobs after
 the deletion are saved just fine.
 We'll be more careful about handling the logs.

Thanks for the update here Tomoe, it's appreciated!

Kyle

 Best,
 Tomoe



Re: [openstack-dev] [Neutron] Flavor framework proposal

2014-07-16 Thread Sumit Naiksatam
To the earlier question on whether we had defined what we wanted to
solve with the flavors framework, a high level requirement was
captured in the following approved spec for advanced services:
https://review.openstack.org/#/c/92200

On Wed, Jul 16, 2014 at 5:18 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:
 Some comments inline:


 Agreed-- I think we need to more fully flesh out how extension list / tags
 should work here before we implement it. But this doesn't prevent us from
 rolling forward with a version 1 of flavors so that we can start to use
 some of the benefits of having flavors (like the ability to use multiple
 service profiles with a single driver/provider, or multiple service profiles
 for a single kind of service).

 Agree here.



 Yes, I think there are many benefits we can get out of the flavor
 framework without having to have an extensions list / tags at this revision.
 But I'm curious: Did we ever define what we were actually trying to solve
 with flavors?  Maybe that's the reason the discussion on this has been all
 over the place: People are probably making assumptions about the problem we're
 trying to solve and we need to get on the same page about this.


 Yes, we did!
  The original problem has several aspects:
 1) providing users with some information about what service implementation
 they get (capabilities)
 2) providing users with ability to specify (choose, actually) some
 implementation details that don't relate to a logical configuration
 (capacity, insertion mode, HA mode, resiliency, security standards, etc)
 3) providing operators a way to setup different modes of one driver
 4) providing operators a way to seamlessly change the drivers backing existing
 logical configurations (now it's not so easy to do because logical config is
 tightly coupled with provider/driver)

 The proposal we're discussing right now is mostly covering points (2), (3) and
 (4) which is already a good thing.
 So for now I'd propose to put 'information about service implementation' in
 the description to cover (1).

 I'm currently implementing the proposal (API and DB parts, no integration
 with services yet)


 Thanks,
 Eugene.



Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Eric Windisch
On Wed, Jul 16, 2014 at 10:15 AM, Sean Dague s...@dague.net wrote:

 Recently the main gate updated from Ubuntu 12.04 to 14.04, and in doing
 so we started executing the livesnapshot code in the nova libvirt
 driver. Which fails about 20% of the time in the gate, as we're bringing
  computes up and down while doing a snapshot. Dan Berrange did a bunch of
 debug on that and thinks it might be a qemu bug. We disabled these code
 paths, so live snapshot has now been ripped out.

 In January we also triggered a libvirt bug, and had to carry a private
 build of libvirt for 6 weeks in order to let people merge code in
 OpenStack.

 We never were able to switch to libvirt 1.1.1 in the gate using the
 Ubuntu Cloud Archive during Icehouse development, because it has a
 different set of failures that would have prevented people from merging
 code.

 Based on these experiences, libvirt version differences seem to be as
 substantial as major hypervisor differences. There is a proposal here -
 https://review.openstack.org/#/c/103923/ to hold newer versions of
 libvirt to the same standard we hold xen, vmware, hyperv, docker,
 ironic, etc.

 I'm somewhat concerned that the -2 pile on in this review is a double
 standard of libvirt features, and features exploiting really new
 upstream features. I feel like a lot of the language being used here
 about the burden of doing this testing is exactly the same as was
 presented by the docker team before their driver was removed, which was
 ignored by the Nova team at the time. It was the concern by the freebsd
 team, which was also ignored and they were told to go land libvirt
 patches instead.


For running our own CI, the burden was largely a matter of resource and
time constraints for individual contributors and/or startups to set up and
maintain 3rd-party CI, especially in light of a parallel requirement to
pass the CI itself. I received community responses that equated to: "if you
were serious, you'd dedicate several full-time developers and/or
infrastructure engineers to OpenStack development, plus several
thousand a month in infrastructure itself." For Docker, these were simply
not options. Back in January, putting 2-3 engineers fulltime toward
OpenStack would have been a contribution of 10-20% of our engineering
force. OpenStack is not more important to us than Docker itself.

This thread highlights more deeply the problems for the FreeBSD folks.
First, I still disagree with the recommendation that they contribute to
libvirt. It's a classic example of creating two or more problems from one.
Once they have support in libvirt, how long before their code is in a
version of libvirt acceptable to Nova? When they hit edge-cases or bugs,
requiring changes in libvirt, how long before those fixes are accepted by
Nova?

I concur with thoughts in the Gerrit review which suggest there should be a
non-voting gate for testing against the latest libvirt.

I think the ideal situation would be to functionally test against multiple
versions of libvirt. We'd have at least two versions: trunk,
latest-stable. We might want trunk, trunk-snapshot-XYZ, latest-stable,
version-in-ubuntu, version-in-rhel, or any number of back-versions
included in the gate. The version-in-rhel and version-in-ubuntu might be
good candidates for 3rd-party CI.


Regards,
Eric Windisch


Re: [openstack-dev] [qa] Test Ceilometer polling in tempest

2014-07-16 Thread Dina Belova
Ildiko, thanks for starting this discussion.

Really, that is quite a painful problem for the Ceilometer and QA teams. As far as
I know, there is currently a tendency to make the integration
Tempest tests quicker and less resource-consuming - that's quite logical
IMHO. Polling, as a way of collecting information from different services
and projects, is quite expensive in terms of the load on the Nova API, etc. -
that's why I completely understand the QA team's wish to get rid of it.
Still, polling does a lot of work inside Ceilometer, and that's why
integration testing for this feature is really important for me as a
Ceilometer contributor - without testing the pollsters we have no way to check
that they work.

That's why I'll be really glad if Ildiko's (or any other) solution
that allows polling testing in the gate is found and accepted.

The solution described above requires a change in how the environment is
prepared for integration testing - and we really need the QA crew's help here.
AFAIR, deprecating polling (in favour of notifications only) was suggested in
some of the IRC discussions, but that's not a solution we can just adopt
right now - we need a way to verify that Ceilometer works right now in order
to continue improving it.

So any suggestions and comments are welcome here :)

Thanks!
Dina


On Wed, Jul 16, 2014 at 7:06 PM, Ildikó Váncsa ildiko.van...@ericsson.com
wrote:

  Hi Folks,



 We’ve faced some problems while running Ceilometer integration tests
 on the gate. The main issue is that we cannot test the polling mechanism,
 as if we use a small polling interval, like 1 min, then it puts a high
 pressure on Nova API. If we use a longer interval, like 10 mins, then we
 will not be able to execute any tests successfully, because it would run
 too long.



 The idea to solve this issue is to reconfigure Ceilometer when the
 polling is tested. This would mean changing the polling interval from the
 default 10 mins to 1 min at the beginning of the test, restarting the service,
 and, when the test is finished, changing the polling interval back
 to 10 mins, which will require one more service restart. The downside of
 this idea is, that it needs service restart today. It is on the list of
 plans to support dynamic re-configuration of Ceilometer, which would mean
 the ability to change the polling interval without restarting the service.



 I know that this idea isn’t ideal from the PoV that the system
 configuration is changed while the tests are running, but this is an expected
 scenario even in a production environment. We would change a parameter that
 can be changed by a user any time in a way as users do it too. Later on,
 when we can reconfigure the polling interval without restarting the
 service, this approach will be even simpler.
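
A hedged sketch of the setup step described above: rewrite the polling
interval in a ceilometer.conf-style file before the polling test and restore
it afterwards. The section and option names here are illustrative
assumptions, and a real run would also restart the Ceilometer services
around each change.

```python
import configparser
import io

def set_polling_interval(conf_text, seconds):
    """Return conf_text with the [polling] interval option set to seconds."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    if not cp.has_section("polling"):
        cp.add_section("polling")
    cp.set("polling", "interval", str(seconds))
    out = io.StringIO()
    cp.write(out)
    return out.getvalue()
```

The test harness would call this with 60 before the polling test and with
600 afterwards, restarting the services each time.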



 This idea would make it possible to test the polling mechanism of
 Ceilometer without any radical change in the ordering of test cases or any
 other things that would be strange in integration tests. We couldn’t find
 any better way to solve the issue of the load on the APIs caused by polling.



 What’s your opinion about this scenario? Do you think it could be a viable
 solution to the above described problem?



 Thanks and Best Regards,

 Ildiko

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] [Containers][Nova] Containers Team Mid-Cycle Meetup to join Nova Meetup

2014-07-16 Thread Adrian Otto
Additional Update:

Two important additions:

1) No Formal Thursday Meetings.

We are eliminating our plans to meet formally on the 31st. You are still 
welcome to meet informally. We want to keep these discussions as productive as 
possible, and want to avoid attendee burnout. My deepest apologies to those who 
have made travel plans around this. See me if there are financial 
considerations to resolve.

2) Containers Team Registration

To better manage attendance expectations, register for the event that you will 
attend as a primary. For those attending primarily for Containers, register 
here:

https://www.eventbrite.com/e/openstack-containers-team-juno-mid-cycle-developer-meetup-tickets-12304951441

If you are registering for Nova, use this link:

https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-11878128803

If you are already registered for the Nova Meetup, but will be attending in the 
Containers Team Meetup as the primary, you can return your tickets for Nova as 
long as you have a Containers Team Meetup ticket. That will allow for a more 
accurate count, and make sure that all the Nova devs who need to attend can.

Logistics details:

https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint

Event Etherpad:

https://etherpad.openstack.org/p/juno-containers-sprint

Thanks,

Adrian


On Jul 11, 2014, at 3:31 PM, Adrian Otto 
adrian.o...@rackspace.com wrote:

CORRECTION: This event happens July 28-31. Sorry for any confusion! Corrected 
Announcement:

Containers Team,

We have decided to hold our Mid-Cycle meetup along with the Nova Meetup in 
Beaverton, Oregon on July 28-31. The Nova Meetup is scheduled for July 28-30.

https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-11878128803

Those of us interested in Containers topic will use one of the breakout rooms 
generously offered by Intel. We will also stay on Thursday to focus on 
implementation plans and to engage with those members of the Nova Team who will 
be otherwise occupied on July 28-30, and will have a chance to focus entirely 
on Containers on the 31st.

Please take a moment now to register using the link above, and I look forward 
to seeing you there.

Thanks,

Adrian Otto




Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-16 Thread Vijay Venkatachalam
Apologies for the delayed response.

I am OK with displaying the certificate's contents as part of the API; that 
should do no harm.

I think the discussion has to be split into 2 topics.


1.   Certificate conflict resolution. Meaning what is expected when 2 or 
more certificates become eligible during SSL negotiation

2.   SAN support

I will send out 2 separate mails on this.


From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Tuesday, July 15, 2014 11:52 PM
To: OpenStack Development Mailing List (not for usage questions); Vijay 
Venkatachalam
Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

OK.

Let me be more precise, extracting the information for view sake / validation 
would be good.
Providing values that are different than what is in the x509 is what I am 
opposed to.

+1 for Carlos on the library and that it should be ubiquitously used.

I will wait for Vijay to speak for himself in this regard…

-Sam.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, July 15, 2014 8:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

+1 to German's and Carlos' comments.

It's also worth pointing out that some UIs will definitely want to show SAN 
information and the like, so either having this available as part of the API, 
or as a standard library we write which then gets used by multiple drivers is 
going to be necessary.

If we're extracting the Subject Common Name in any place in the code then we 
also need to be extracting the Subject Alternative Names at the same place. 
From the perspective of the SNI standard, there's no difference in how these 
fields should be treated, and if we were to treat SANs differently then we're 
both breaking the standard and setting a bad precedent.

Stephen

On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza carlos.ga...@rackspace.com wrote:

On Jul 15, 2014, at 10:55 AM, Samuel Bercovici samu...@radware.com wrote:

 Hi,


 Obtaining the domain name from the x509 is probably more of a 
 driver/backend/device capability, it would make sense to have a library that 
 could be used by anyone wishing to do so in their driver code.
You can do whatever you want in *your* driver. The code to extract this 
information will be a part of the API and needs to be mentioned in the spec now. 
PyOpenSSL with PyASN1 are the most likely candidates.

Carlos D. Garza

 -Sam.



 From: Eichberger, German [mailto:german.eichber...@hp.com]
 Sent: Tuesday, July 15, 2014 6:43 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

 Hi,

 My impression was that the frontend would extract the names and hand them to 
 the driver.  This has the following advantages:

 · We can be sure all drivers can extract the same names
 · No duplicate code to maintain
 · If we ever allow the user to specify the names in the UI rather than in the 
 certificate, the driver doesn’t need to change.

 I think I saw Adam say something similar in a comment to the code.

 Thanks,
 German

 From: Evgeny Fedoruk [mailto:evge...@radware.com]
 Sent: Tuesday, July 15, 2014 7:24 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
 SubjectCommonName and/or SubjectAlternativeNames from X509

 Hi All,

 Since this issue came up from TLS capabilities RST doc review, I opened a ML 
 thread for it to make the decision.
 Currently, the document says:

 “
 For SNI functionality, tenant will supply list of TLS containers in specific
 Order.
 In case when specific back-end is not able to support SNI capabilities,
 its driver should throw an exception. The exception message should state
 that this specific back-end (provider) does not support SNI capability.
 The clear sign of listener's requirement for SNI capability is
 a non-empty SNI container ids list.
 However, reference implementation must support SNI capability.

 Specific back-end code may retrieve SubjectCommonName and/or altSubjectNames
 from the certificate which will determine the hostname(s) the certificate
 is associated with.

 The order of SNI containers list may be used by specific back-end code,
 like Radware's, for specifying priorities among certificates.
 In case when two or more uploaded certificates are valid for the same DNS name
 and the tenant has specific requirements around which one wins this collision,
 certificate ordering provides a mechanism to define which cert wins in the
 event of a collision.
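The extraction step the document describes is straightforward with a standard library. A minimal sketch using Python's `cryptography` package (the thread names PyOpenSSL with PyASN1 as the likely candidates, so the library choice here is an illustrative assumption), building a throwaway self-signed certificate just to exercise the helper:

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def extract_names(cert):
    """Return (SubjectCommonName, [SubjectAlternativeNames]) from an x509 cert."""
    cn = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
    try:
        ext = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        sans = ext.value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        sans = []
    return cn, sans


# Build a throwaway self-signed cert with one CN and two SANs.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u'www.example.com')])
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=1))
    .add_extension(
        x509.SubjectAlternativeName(
            [x509.DNSName(u'example.com'), x509.DNSName(u'*.finance.example.com')]),
        critical=False)
    .sign(key, hashes.SHA256())
)

cn, sans = extract_names(cert)
print(cn, sans)
```

Per Stephen's point, the CN and the SANs are pulled at the same place, so a shared library doing this would treat both uniformly.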

Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Daniel P. Berrange
On Wed, Jul 16, 2014 at 08:29:26AM -0700, Dan Smith wrote:
  Based on these experiences, libvirt version differences seem to be as
  substantial as major hypervisor differences.
  
  I think that is a pretty dubious conclusion to draw from just a
  couple of bugs. The reason they really caused pain is that because
  the CI test system was based on old version for too long.
 
 I think the conclusion being made is that libvirt versions two years
 apart are effectively like different major versions of a hypervisor. I
 don't think that's wrong.
 
  That is rather misleading statement you're making there. Libvirt is
  in fact held to *higher* standards than xen/vmware/hypver because it
  is actually gating all commits. The 3rd party CI systems can be
  broken for days, weeks and we still happily accept code for those
  virt. drivers.
 
 Right, and we've talked about raising that bar as well, by tracking
 their status more closely, automatically -2'ing patches that touch the
 subdirectory but don't get a passing vote from the associated CI system,
 etc.
 
 You're definitely right that libvirt is held to a higher bar in terms of
 it being required to pass tests before we can even mechanically land a
 patch. However, there is a lot of function in the driver that we don't
 test right now because of the version we're tied to in the gate nodes.
 It's actually *easier* for a 3rd party system like vmware to roll their
 environment and enable tests of newer features, so I don't think that
 this requirement would cause existing 3rd party CI systems any trouble.
 
  AFAIK there has never been any statement that every feature added
  to xen/vmware/hyperv must be tested by the 3rd party CI system.
 
 On almost every spec that doesn't already call it out, a reviewer asks
 "how are you going to test this beyond just unit tests?" I think the
 assumption and feeling among most reviewers is that new features,
 especially that depend on new things (be it storage drivers, hypervisor
 versions, etc) are concerned about approving without testing.

Expecting new functionality to have testing coverage in the common
case is entirely reasonable. What I disagree with is the proposal
to say it is mandatory, when the current CI system is not able to
test it for any given reason. In some cases it might be reasonable
to expect the contributor to setup 3rd party CI, but we absolutely
cannot make that a fixed rule or we'll kill contributions from
people who are not backed by vendors in a position to spend the
significant resource it takes to set up and maintain CI. IMHO the
burden is on the maintainer of the CI to ensure it is able to
follow the needs of the contributors. ie if the feature needs a
newer libvirt version in order to test with, the CI maintainer(s)
should deal with that. We should not turn away the contributor
for a problem that is outside their control.

  AFAIK the requirement for 3rd party CI is merely that it has to exist,
  running some arbitrary version of the hypervisor in question. We've
  not said that 3rd party CI has to be covering every version or every
  feature, as is trying to be pushed on libvirt here.
 
 The requirement in the past has been that it has to exist. At the last
 summit, we had a discussion about how to raise the bar on what we
 currently have. We made a lot of progress getting those systems
 established (only because we had a requirement, by the way) in the last
 cycle. Going forward, we need to have new levels of expectations in
 terms of coverage and reliability of those things, IMHO.

IMHO we need to maintain a balance between ensuring code quality
and being welcoming and accepting to new contributors. 

New features have a certain value $NNN to the project  our users.
The lack of CI testing does not automatically imply that the value
of that work is erased to $0 or negative $MMM. Of course the lack
of CI will create uncertainty in how valuable it is, and potentially
imply costs for us if we have to deal with resolving bugs later.
We must be careful not to overly obsess on the problems of work
that might have bugs, to the detriment of all the many submissions
that work well.

We need to take a pragmatic view of this tradeoff based on the risk
implied by the new feature. If the new work is impacting existing
functional codepaths then this clearly exposes existing users to
risk of regressions, so if that codepath is not tested this is
something to be very wary of. If the new work is adding new code
paths that existing deployments wouldn't exercise unless they 
explicitly opt in to the feature, the risk is significantly lower.
The existence of unit tests will also serve to limit the risk in
many, but not all, situations. If something is not CI tested then
I'd also expect it to get greater attention during review, with
the reviewers actually testing it functionally themselves as well
as code inspection. Finally we should also have some good faith in
our contributors that they are not in fact just submitting 

Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Kashyap Chamarthy
On Wed, Jul 16, 2014 at 04:15:40PM +0200, Sean Dague wrote:

[. . .]

 Anyway, discussion welcomed. My primary concern right now isn't actually
 where we set the bar, but that we set the same bar for everyone.

As someone who tries to test Nova w/ upstream libvirt/QEMU, couple of
points why I disagree with your above comments:


  - From time to time I find myself frustrated due to older versions of
libvirt on CI infra systems: I try to investigate a bug, and 2 hours
into debugging it turns out that the CI system is using a very old
libvirt - alas, it's not in my control. Consequence: the bug
needlessly got bumped up in priority for investigation, while
it's already solved in an existing upstream release, just waiting to
be picked up by CI infra.

  - Also, as a frequent tester of libvirt upstream, and a participant
in debugging the recent Nova snapshots issue mentioned here, the
comment[1] (by Daniel Berrange) debunks the illusion that "the
required version of libvirt should have been released for at least
30 days" very convincingly, in crystal clear language.

  - FWIW, I feel the libvirt version cap[2] is a fine idea to alleviate
this.

[1] https://review.openstack.org/#/c/103923/ (Comment:Jul 14 9:24 PM)
  -
  The kind of new features we're depending on in Nova (looking at specs
  proposed for Juno) are not the kind of features that users in any distro
  are liable to test themselves, outside of the context of Nova (or
  perhaps oVirt) applications. eg Users in a distro aren't likely to
  seriously test the NUMA/Hugepages stuff in libvirt until it is part of
  Nova and that Nova release is in their distro, which creates a
  chicken+egg problem wrt your proposal. In addition I have not seen any
  evidence of significant libvirt testing by the distro maintainers
  themselves either, except for the enterprise distros and we if we wait
  for enterprise distros to pick up a new libvirt we'd be talking 1 year+
  of delay. Finally if just having it in a distro is your benchmark,
  then this is satisfied by Fedora rawhide inclusion, but there's
  basically no user testing of that. So if you instead set the
  benchmark to be a released distro, then saying this is a 1 month
  delay is rather misleading, because distros only release once every
  6 months, so you'd really be talking about a 7 month delay on using
  new features. For all these reasons, tieing Nova acceptance to
  distro inclusion of libvirt is a fundamentally flawed idea that does
  not achieve what it purports to achieve and is detrimental to Nova.
  
  I think the key problem here is that our testing is inadequate and we
  need to address that aspect of it rather than crippling our development
  process.
  -

 [2] https://review.openstack.org/#/c/107119/ -- libvirt: add version
 cap tied to gate CI testing

-- 
/kashyap



Re: [openstack-dev] [infra] recheck no bug and comment

2014-07-16 Thread Derek Higgins
On 16/07/14 14:48, Steve Martinelli wrote:
 What are the benefits of doing this over looking at the existing
 rechecks, and if not there opening a bug and rechecking the new bug?

I agree we should be using a bug number (or open one when needed), the
example in the original email should have included a bug number but now
that the topic has come up

I think this would serve as a good way to provide a little explanation
as to why somebody has not provided a bug number e.g.

recheck no bug
   zuul was restarted

Derek

 
 
 Regards,
 
 *Steve Martinelli*
 Software Developer - Openstack
 Keystone Core Member
 
 *Phone:* 1-905-413-2851
 *E-mail:* steve...@ca.ibm.com
 8200 Warden Ave
 Markham, ON L6G 1C7
 Canada
 
 
 
 
 
 
 From: Alexis Lee alex...@hp.com
 To: OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 Date: 07/16/2014 09:19 AM
 Subject: [openstack-dev] [infra] recheck no bug and comment
 
 
 
 
 Hello,
 
 What do you think about allowing some text after the words recheck no
 bug? EG to include a snippet from the log showing the failure has been
 at least briefly investigated before attempting a recheck. EG:
 
  recheck no bug
 
  Compute node failed to spawn:
 
2014-07-15 12:18:09.936 | 3f1e7f32-812e-48c8-a83c-2615c4451fa6 |
  overcloud-NovaCompute0-zahdxwar7zlh | ERROR  | - | NOSTATE | |
 
 
 Alexis
 -- 
 Nova Engineer, HP Cloud.  AKA lealexis, lxsli.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




[openstack-dev] [Neutron][LBaaS] TLS capability - Certificate conflict resolution

2014-07-16 Thread Vijay Venkatachalam

Do you know if the SSL/SNI IETF spec gives details about conflict resolution? I am 
assuming not.

Because of this ambiguity each backend employs its own mechanism to resolve 
conflicts.

There are 3 choices now:
1.   The LBaaS extension does not allow conflicting certificates to be 
bound, enforced through validation
2.   Allow each backend's conflict resolution mechanism to get into the spec
3.   Do not specify anything in the spec; no mechanism is introduced and the 
driver deals with it.

Both HAProxy and Radware use configuration as the mechanism to resolve conflicts: 
Radware uses ordering while HAProxy uses externally specified DNS names.
The NetScaler implementation uses a best-possible-match algorithm.

For ex, let’s say 3 certs are bound to the same endpoint with the following SNs:
www.finance.abc.com
*.finance.abc.com
*.*.abc.com

If the host request is payroll.finance.abc.com we shall use *.finance.abc.com.
If it is payroll.engg.abc.com we shall use *.*.abc.com.

NetScaler won’t allow 2 certs to have the same SN.
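The best-possible-match behaviour described above can be sketched as follows. This is an illustrative reimplementation of the matching policy, not NetScaler's actual code; it assumes a `*` matches exactly one DNS label:

```python
def best_match(hostname, cert_names):
    """Pick the most specific certificate name matching a hostname,
    where fewer wildcards means a more specific (preferred) match."""
    host = hostname.split('.')

    def matches(pattern):
        labels = pattern.split('.')
        # Same label count, and each label is either an exact match or '*'.
        return len(labels) == len(host) and all(
            p == '*' or p == h for p, h in zip(labels, host))

    candidates = [n for n in cert_names if matches(n)]
    return min(candidates, key=lambda n: n.count('*')) if candidates else None


certs = ['www.finance.abc.com', '*.finance.abc.com', '*.*.abc.com']
print(best_match('payroll.finance.abc.com', certs))  # *.finance.abc.com
print(best_match('payroll.engg.abc.com', certs))     # *.*.abc.com
```

Under this policy, two certs with the same SN would be ambiguous (equal specificity), which is presumably why NetScaler rejects that configuration.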

From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Tuesday, July 15, 2014 11:52 PM
To: OpenStack Development Mailing List (not for usage questions); Vijay 
Venkatachalam
Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

OK.

Let me be more precise, extracting the information for view sake / validation 
would be good.
Providing values that are different than what is in the x509 is what I am 
opposed to.

+1 for Carlos on the library and that it should be ubiquitously used.

I will wait for Vijay to speak for himself in this regard…

-Sam.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, July 15, 2014 8:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

+1 to German's and Carlos' comments.

It's also worth pointing out that some UIs will definitely want to show SAN 
information and the like, so either having this available as part of the API, 
or as a standard library we write which then gets used by multiple drivers is 
going to be necessary.

If we're extracting the Subject Common Name in any place in the code then we 
also need to be extracting the Subject Alternative Names at the same place. 
From the perspective of the SNI standard, there's no difference in how these 
fields should be treated, and if we were to treat SANs differently then we're 
both breaking the standard and setting a bad precedent.

Stephen

On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza carlos.ga...@rackspace.com wrote:

On Jul 15, 2014, at 10:55 AM, Samuel Bercovici samu...@radware.com wrote:

 Hi,


 Obtaining the domain name from the x509 is probably more of a 
 driver/backend/device capability, it would make sense to have a library that 
 could be used by anyone wishing to do so in their driver code.
You can do whatever you want in *your* driver. The code to extract this 
information will be a part of the API and needs to be mentioned in the spec now. 
PyOpenSSL with PyASN1 are the most likely candidates.

Carlos D. Garza

 -Sam.



 From: Eichberger, German [mailto:german.eichber...@hp.com]
 Sent: Tuesday, July 15, 2014 6:43 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

 Hi,

 My impression was that the frontend would extract the names and hand them to 
 the driver.  This has the following advantages:

 · We can be sure all drivers can extract the same names
 · No duplicate code to maintain
 · If we ever allow the user to specify the names in the UI rather than in the 
 certificate, the driver doesn’t need to change.

 I think I saw Adam say something similar in a comment to the code.

 Thanks,
 German

 From: Evgeny Fedoruk [mailto:evge...@radware.com]
 Sent: Tuesday, July 15, 2014 7:24 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
 SubjectCommonName and/or SubjectAlternativeNames from X509

 Hi All,

 Since this issue came up from TLS capabilities RST doc review, I opened a ML 
 thread for it to make the decision.
 Currently, the document says:

 “
 For SNI functionality, tenant will supply list of TLS containers in specific
 Order.
 In case when specific back-end is not able to support SNI capabilities,
 its driver should throw an exception. The exception message should state
 that this specific back-end (provider) does not support SNI capability.
 The clear sign of listener's requirement for SNI capability is
 a 

Re: [openstack-dev] [all] Treating notifications as a contract

2014-07-16 Thread Sandy Walsh
On 7/11/2014 6:08 AM, Chris Dent wrote:
 On Fri, 11 Jul 2014, Lucas Alvares Gomes wrote:

 The data format that Ironic will send was part of the spec proposed
 and could have been reviewed. I think there's still time to change it
 tho, if you have a better format talk to Haomeng which is the guys
 responsible for that work in Ironic and see if he can change it (We
 can put up a following patch to fix the spec with the new format as
 well) . But we need to do this ASAP because we want to get it landed
 in Ironic soon.
 It was only after doing the work that I realized how it might be an
 example for the sake of this discussion. As the architecture of
 Ceilometer currently exists there still needs to be some measure of
 custom code, even if the notifications are as I described them.

 However, if we want to take this opportunity to move some of the
 smarts from Ceilometer into the Ironic code then the paste that I created
 might be a guide to make it possible:

 http://paste.openstack.org/show/86071/

 However on that however, if there's some chance that a large change could
 happen, it might be better to wait, I don't know.


Just to give a sense of what we're dealing with, a while back I wrote a
little script to dump the schema of all events StackTach collected from
Nova.  The value fields are replaced with types (or ? if it was a
class object).

http://paste.openstack.org/show/54140/






[openstack-dev] [Neutron][CI] DB migration error

2014-07-16 Thread trinath.soman...@freescale.com
Hi-

With the neutron Update to my CI, I get the following error while configuring 
Neutron in devstack.

2014-07-16 16:12:06.349 | INFO  [alembic.autogenerate.compare] Detected server 
default on column 'poolmonitorassociations.status'
2014-07-16 16:12:06.411 | INFO  
[neutron.db.migration.alembic_migrations.heal_script] Detected added foreign 
key for column 'id' on table u'ml2_brocadeports'
2014-07-16 16:12:14.853 | Traceback (most recent call last):
2014-07-16 16:12:14.853 |   File /usr/local/bin/neutron-db-manage, line 10, 
in module
2014-07-16 16:12:14.853 | sys.exit(main())
2014-07-16 16:12:14.854 |   File 
/opt/stack/new/neutron/neutron/db/migration/cli.py, line 171, in main
2014-07-16 16:12:14.854 | CONF.command.func(config, CONF.command.name)
2014-07-16 16:12:14.854 |   File 
/opt/stack/new/neutron/neutron/db/migration/cli.py, line 85, in 
do_upgrade_downgrade
2014-07-16 16:12:14.854 | do_alembic_command(config, cmd, revision, 
sql=CONF.command.sql)
2014-07-16 16:12:14.854 |   File 
/opt/stack/new/neutron/neutron/db/migration/cli.py, line 63, in 
do_alembic_command
2014-07-16 16:12:14.854 | getattr(alembic_command, cmd)(config, *args, 
**kwargs)
2014-07-16 16:12:14.854 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/command.py, line 124, in 
upgrade
2014-07-16 16:12:14.854 | script.run_env()
2014-07-16 16:12:14.854 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/script.py, line 199, in run_env
2014-07-16 16:12:14.854 | util.load_python_file(self.dir, 'env.py')
2014-07-16 16:12:14.854 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/util.py, line 205, in 
load_python_file
2014-07-16 16:12:14.854 | module = load_module_py(module_id, path)
2014-07-16 16:12:14.854 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/compat.py, line 58, in 
load_module_py
2014-07-16 16:12:14.854 | mod = imp.load_source(module_id, path, fp)
2014-07-16 16:12:14.854 |   File 
/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py, line 
106, in module
2014-07-16 16:12:14.854 | run_migrations_online()
2014-07-16 16:12:14.855 |   File 
/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py, line 
90, in run_migrations_online
2014-07-16 16:12:14.855 | options=build_options())
2014-07-16 16:12:14.855 |   File string, line 7, in run_migrations
2014-07-16 16:12:14.855 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/environment.py, line 681, in 
run_migrations
2014-07-16 16:12:14.855 | self.get_context().run_migrations(**kw)
2014-07-16 16:12:14.855 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/migration.py, line 225, in 
run_migrations
2014-07-16 16:12:14.855 | change(**kw)
2014-07-16 16:12:14.856 |   File 
/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py,
 line 32, in upgrade
2014-07-16 16:12:14.856 | heal_script.heal()
2014-07-16 16:12:14.856 |   File 
/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 78, in heal
2014-07-16 16:12:14.856 | execute_alembic_command(el)
2014-07-16 16:12:14.856 |   File 
/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 93, in execute_alembic_command
2014-07-16 16:12:14.856 | parse_modify_command(command)
2014-07-16 16:12:14.856 |   File 
/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 126, in parse_modify_command
2014-07-16 16:12:14.856 | op.alter_column(table, column, **kwargs)
2014-07-16 16:12:14.856 |   File string, line 7, in alter_column
2014-07-16 16:12:14.856 |   File string, line 1, in lambda
2014-07-16 16:12:14.856 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/util.py, line 322, in go
2014-07-16 16:12:14.857 | return fn(*arg, **kw)
2014-07-16 16:12:14.857 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/operations.py, line 300, in 
alter_column
2014-07-16 16:12:14.857 | existing_autoincrement=existing_autoincrement
2014-07-16 16:12:14.857 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/ddl/mysql.py, line 42, in 
alter_column
2014-07-16 16:12:14.857 | else existing_autoincrement
2014-07-16 16:12:14.857 |   File 
/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py, line 76, in _exec
2014-07-16 16:12:14.857 | conn.execute(construct, *multiparams, **params)
2014-07-16 16:12:14.857 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 727, 
in execute
2014-07-16 16:12:14.857 | return meth(self, multiparams, params)
2014-07-16 16:12:14.858 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/ddl.py, line 67, in 
_execute_on_connection
2014-07-16 16:12:14.858 | return connection._execute_ddl(self, multiparams, 
params)
2014-07-16 16:12:14.858 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 775, 
in _execute_ddl
2014-07-16 16:12:14.858 | compiled = 

Re: [openstack-dev] [Neutron][CI] DB migration error

2014-07-16 Thread Kevin Benton
This bug is also affecting Ryu and the Big Switch CI.
There is a patch to bump the version requirement for alembic linked in the
bug report that should fix it. If we can't get that merged we may have to
revert the healing patch.

https://bugs.launchpad.net/bugs/1342507
On Jul 16, 2014 9:27 AM, trinath.soman...@freescale.com wrote:

  Hi-



 With the neutron Update to my CI, I get the following error while
 configuring Neutron in devstack.



 2014-07-16 16:12:06.349 | INFO  [alembic.autogenerate.compare] Detected
 server default on column 'poolmonitorassociations.status'

 2014-07-16 16:12:06.411 | INFO
 [neutron.db.migration.alembic_migrations.heal_script] Detected added
 foreign key for column 'id' on table u'ml2_brocadeports'

 2014-07-16 16:12:14.853 | Traceback (most recent call last):

 2014-07-16 16:12:14.853 |   File /usr/local/bin/neutron-db-manage, line
 10, in module

 2014-07-16 16:12:14.853 | sys.exit(main())

 2014-07-16 16:12:14.854 |   File
 /opt/stack/new/neutron/neutron/db/migration/cli.py, line 171, in main

 2014-07-16 16:12:14.854 | CONF.command.func(config, CONF.command.name)

 2014-07-16 16:12:14.854 |   File
 /opt/stack/new/neutron/neutron/db/migration/cli.py, line 85, in
 do_upgrade_downgrade

 2014-07-16 16:12:14.854 | do_alembic_command(config, cmd, revision,
 sql=CONF.command.sql)

 2014-07-16 16:12:14.854 |   File
 /opt/stack/new/neutron/neutron/db/migration/cli.py, line 63, in
 do_alembic_command

 2014-07-16 16:12:14.854 | getattr(alembic_command, cmd)(config, *args,
 **kwargs)

 2014-07-16 16:12:14.854 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/command.py, line 124, in
 upgrade

 2014-07-16 16:12:14.854 | script.run_env()

 2014-07-16 16:12:14.854 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/script.py, line 199, in
 run_env

 2014-07-16 16:12:14.854 | util.load_python_file(self.dir, 'env.py')

 2014-07-16 16:12:14.854 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/util.py, line 205, in
 load_python_file

 2014-07-16 16:12:14.854 | module = load_module_py(module_id, path)

 2014-07-16 16:12:14.854 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/compat.py, line 58, in
 load_module_py

 2014-07-16 16:12:14.854 | mod = imp.load_source(module_id, path, fp)

 2014-07-16 16:12:14.854 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py,
 line 106, in module

 2014-07-16 16:12:14.854 | run_migrations_online()

 2014-07-16 16:12:14.855 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py,
 line 90, in run_migrations_online

 2014-07-16 16:12:14.855 | options=build_options())

 2014-07-16 16:12:14.855 |   File string, line 7, in run_migrations

 2014-07-16 16:12:14.855 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/environment.py, line 681,
 in run_migrations

 2014-07-16 16:12:14.855 | self.get_context().run_migrations(**kw)

 2014-07-16 16:12:14.855 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/migration.py, line 225, in
 run_migrations

 2014-07-16 16:12:14.855 | change(**kw)

 2014-07-16 16:12:14.856 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py,
 line 32, in upgrade

 2014-07-16 16:12:14.856 | heal_script.heal()

 2014-07-16 16:12:14.856 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 78, in heal

 2014-07-16 16:12:14.856 | execute_alembic_command(el)

 2014-07-16 16:12:14.856 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 93, in execute_alembic_command

 2014-07-16 16:12:14.856 | parse_modify_command(command)

 2014-07-16 16:12:14.856 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 126, in parse_modify_command

 2014-07-16 16:12:14.856 | op.alter_column(table, column, **kwargs)

 2014-07-16 16:12:14.856 |   File string, line 7, in alter_column

 2014-07-16 16:12:14.856 |   File string, line 1, in lambda

 2014-07-16 16:12:14.856 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/util.py, line 322, in go

 2014-07-16 16:12:14.857 | return fn(*arg, **kw)

 2014-07-16 16:12:14.857 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/operations.py, line 300,
 in alter_column

 2014-07-16 16:12:14.857 | existing_autoincrement=existing_autoincrement

 2014-07-16 16:12:14.857 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/ddl/mysql.py, line 42, in
 alter_column

 2014-07-16 16:12:14.857 | else existing_autoincrement

 2014-07-16 16:12:14.857 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py, line 76, in
 _exec

 2014-07-16 16:12:14.857 | conn.execute(construct, *multiparams,
 **params)

 2014-07-16 16:12:14.857 |   File
 /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line
 727, in execute

 

Re: [openstack-dev] [marconi] Meeting time change

2014-07-16 Thread Malini Kamalambal

On 7/16/14 4:43 AM, Flavio Percoco fla...@redhat.com wrote:

On 07/15/2014 06:20 PM, Kurt Griffiths wrote:
 Hi folks, we've been talking about this in IRC, but I wanted to bring it
 to the ML to get broader feedback and make sure everyone is aware. We'd
 like to change our meeting time to better accommodate folks that live
 around the globe. Proposals:
 
 Tuesdays, 1900 UTC
 Wednesdays, 2000 UTC
 Wednesdays, 2100 UTC
 
 I believe these time slots are free, based
 on: https://wiki.openstack.org/wiki/Meetings
 
 Please respond with ONE of the following:
 
 A. None of these times work for me
 B. An ordered list of the above times, by preference
 C. I am a robot

I don't like the idea of switching days :/

Since the reason we're using Wednesday is because we don't want the
meeting to overlap with the TC and projects meeting, what if we change
the day of both meeting times in order to keep them on the same day (and
perhaps also channel) but on different times?

I think changing day and time will be more confusing than just changing
the time.

If we can find an agreeable time on a non-Tuesday, I'll take ownership of
pinging & getting you to #openstack-meeting-alt ;)

From a quick look, #openstack-meeting-alt is free on Wednesdays at both
times: 1500 UTC and 2100 UTC. Does this sound like a good day/time/idea to
folks?

1500 UTC might still be too early for our NZ folks - I thought we wanted
to have the meeting at/after 1900 UTC.
That being said, I will be able to attend only part of the meeting any
time after 1900 UTC - unless it is @ Thursday 1900 UTC
Sorry for making this a puzzle :(




Cheers,
Flavio


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Cinder coverage

2014-07-16 Thread Dan Prince
Hi TripleO!

It would appear that we have no coverage in devtest which ensures that
Cinder consistently works in the overcloud. As such the TripleO Cinder
elements are often broken (as of today I can't fully use lio or tgt w/
upstream TripleO elements).

How do people feel about swapping out our single 'nova boot' command for
one that boots from a volume? Something like this:

 https://review.openstack.org/#/c/107437

There is a bit of tradeoff here in that the conversion will take a bit
of time (qemu-img has to run). Also our boot code path won't be exactly
the same as booting from an image.
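
Done by hand, the swap looks roughly like this. All names and IDs below are placeholders, and the linked review may wire it up differently:

```shell
# Create a bootable volume from the image, then boot from the volume
# instead of directly from the image. IDs, names, and sizes are
# illustrative only.
cinder create --image-id $IMAGE_ID --display-name test-boot-vol 2
nova boot --flavor $FLAVOR \
    --block-device-mapping vda=$VOLUME_ID:::0 \
    test-server
```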

Long term we want to run Tempest but due to resource constraints we
can't do that today. Until then this sort of deep systems test (running
a command that exercises more code) might serve us well and give us the
Cinder coverage we need.

Thoughts?

I would also like to split the test configurations so that we use
cinder-lio for some (cinder-tgt is our existing default in devtest).

Dan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Johannes Erdfelt
On Wed, Jul 16, 2014, Mark McLoughlin mar...@redhat.com wrote:
 No, there are features or code paths of the libvirt 1.2.5+ driver that
 aren't as well tested as the class A designation implies. And we have
 a proposal to make sure these aren't used by default:
 
   https://review.openstack.org/107119
 
 i.e. to stray off the class A path, an operator has to opt into it by
 changing a configuration option that explains they will be enabling code
 paths which aren't yet tested upstream.

So that means the libvirt driver will be a mix of tested and untested
features, but only the tested code paths will be enabled by default?

The gate not only tests code as it gets merged, it tests to make sure it
doesn't get broken in the future by other changes.

What happens when it comes time to bump the default version_cap in the
future? It looks like there could potentially be a scramble to fix code
that has been merged but doesn't work now that it's being tested. Which
potentially further slows down development since now unrelated code
needs to be fixed.

This sounds like we're actively weakening the gate we currently have.

 However, not everything is tested now, nor is the tests we have
 foolproof. When you consider the number of configuration options we
 have, the supported distros, the ranges of library versions we claim to
 support, etc., etc. I don't think we can ever get to an everything is
 tested point.
 
 In the absence of that, I think we should aim to be more clear what *is*
 tested. The config option I suggest does that, which is a big part of
 its merit IMHO.

I like the sound of this especially since it's not clear right now at
all.
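
For context, the proposal in review 107119 amounts to a driver config option along these lines. The option name and default are the review's proposal as I read it, not merged behavior, so treat this as a sketch:

```ini
# nova.conf (sketch; the option may be renamed before merging)
[libvirt]
# Highest libvirt version whose features Nova will use. Code paths that
# need a newer libvirt stay disabled unless the operator raises the cap.
version_cap = 1.2.2
```
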

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Roman Bogorodskiy
  Eric Windisch wrote:

 This thread highlights more deeply the problems for the FreeBSD folks.
 First, I still disagree with the recommendation that they contribute to
 libvirt. It's a classic example of creating two or more problems from one.
 Once they have support in libvirt, how long before their code is in a
 version of libvirt acceptable to Nova? When they hit edge-cases or bugs,
 requiring changes in libvirt, how long before those fixes are accepted by
 Nova?

Could you please elaborate on why you disagree with the approach of
contributing patches to libvirt, and what alternative approach you propose?

Also, could you please elaborate on what is 'version of libvirt
acceptable to Nova'? Cannot we just say that e.g. Nova requires libvirt
X.Y to be deployed on FreeBSD?

Anyway, speaking about FreeBSD support, I assume we are actually talking
about Bhyve support. I think it'd be good to break up the task and
implement FreeBSD support for libvirt/Qemu first.

The Qemu driver of libvirt has worked fine with FreeBSD for quite some time
already, and adding support for that in Nova will let us do all the
groundwork before we move on to libvirt/bhyve support.

I'm planning to start with adding networking support. Unfortunately, it
seems I got late with the spec for Juno though:

https://review.openstack.org/#/c/95328/

Roman Bogorodskiy


pgpkeNEjFWmYC.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 0.0.0.0:5000

2014-07-16 Thread Rich Megginson

On 07/16/2014 09:10 AM, Morgan Fainberg wrote:

--
From: Rich Megginson rmegg...@redhat.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: July 16, 2014 at 08:08:00
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [devstack][keystone] (98)Address already in use: 
make_sock: could not bind to address [::]:5000  0.0.0.0:5000

SNIP


Another problem with port 5000 in Fedora, and probably more recent
versions of RHEL, is the selinux policy:
  
# sudo semanage port -l|grep 5000

...
commplex_main_port_t tcp 5000
commplex_main_port_t udp 5000
  
There is some service called commplex that has already claimed port

5000 for its use, at least as far as selinux goes.
  

Wouldn’t this also affect the eventlet-based Keystone using port 5000?


Yes, it should.


This is not an Apache-specific issue, is it?


No, afaict.
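
(For anyone hitting the SELinux side of this: a commonly used workaround is to relabel port 5000 so httpd may bind it. This is a sketch and assumes the targeted policy, where `http_port_t` is the appropriate type for an Apache-hosted Keystone:)

```shell
# Let httpd (running Keystone under mod_wsgi) bind port 5000.
# -m modifies the existing port definition instead of adding a new one.
sudo semanage port -m -t http_port_t -p tcp 5000
# Verify the new label:
sudo semanage port -l | grep -w 5000
```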



—Morgan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Vishvananda Ishaya

On Jul 16, 2014, at 8:28 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Wed, Jul 16, 2014 at 08:12:47AM -0700, Clark Boylan wrote:
 
 I am worried that we would just regress to the current process because
 we have tried something similar to this previously and were forced to
 regress to the current process.
 
 IMHO the longer we wait between updating the gate to new versions
 the bigger the problems we create for ourselves. eg we were switching
 from 0.9.8 released Dec 2011, to  1.1.1 released Jun 2013, so we
 were exposed to over 1 + 1/2 years worth of code churn in a single
 event. The fact that we only hit a couple of bugs in that, is actually
 remarkable given the amount of feature development that had gone into
 libvirt in that time. If we had been tracking each intervening libvirt
 release I expect the majority of updates would have had no ill effect
 on us at all. For the couple of releases where there was a problem we
 would not be forced to rollback to a version years older again, we'd
 just drop back to the previous release at most 1 month older.

This is a really good point. As someone who has to deal with packaging
issues constantly, it is odd to me that libvirt is one of the few places
where we depend on upstream packaging. We constantly pull in new python
dependencies from pypi that are not packaged in ubuntu. If we had to
wait for packaging before merging the whole system would grind to a halt.

I think we should be updating our libvirt version more frequently by
installing from source or our own PPA instead of waiting for the Ubuntu
team to package it.

Vish


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SubjectAlternativeNames (SAN)

2014-07-16 Thread Vijay Venkatachalam

I think it is best not to mention SAN in the OpenStack TLS spec. It is
expected that the backend should implement according to the SSL/SNI IETF
spec. Let's leave the implementation/validation part to the driver. For
example, NetScaler does not support SAN, and the NetScaler driver could
either throw an error if certs with SAN are used or ignore it.

Does anyone see a requirement for spelling this out in more detail?


Thanks,
Vijay V.


From: Vijay Venkatachalam
Sent: Wednesday, July 16, 2014 8:54 AM
To: 'Samuel Bercovici'; 'OpenStack Development Mailing List (not for usage 
questions)'
Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

Apologies for the delayed response.

I am OK with displaying the certificate's contents as part of the API; that
should do no harm.

I think the discussion has to be split into 2 topics.


1.   Certificate conflict resolution. Meaning what is expected when 2 or 
more certificates become eligible during SSL negotiation

2.   SAN support

I will send out 2 separate mails on this.


From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Tuesday, July 15, 2014 11:52 PM
To: OpenStack Development Mailing List (not for usage questions); Vijay 
Venkatachalam
Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

OK.

Let me be more precise: extracting the information for viewing/validation
purposes would be good.
Providing values that are different than what is in the x509 is what I am 
opposed to.

+1 for Carlos on the library and that it should be ubiquitously used.

I will wait for Vijay to speak for himself in this regard…

-Sam.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, July 15, 2014 8:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

+1 to German's and Carlos' comments.

It's also worth pointing out that some UIs will definitely want to show SAN 
information and the like, so either having this available as part of the API, 
or as a standard library we write which then gets used by multiple drivers is 
going to be necessary.

If we're extracting the Subject Common Name in any place in the code then we 
also need to be extracting the Subject Alternative Names at the same place. 
From the perspective of the SNI standard, there's no difference in how these 
fields should be treated, and if we were to treat SANs differently then we're 
both breaking the standard and setting a bad precedent.
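
To make the "treat CN and SANs identically" point concrete, here is a minimal, purely illustrative sketch of SNI certificate selection over names already extracted from the x509. The function and field names are invented for this example (they are not from the spec), and real matching should follow RFC 6125:

```python
# Sketch only: pick which certificate to serve for an SNI hostname.
# Each cert is represented by the names already pulled from its x509
# (Subject CN plus every DNS SubjectAltName); CN and SANs are treated
# identically, as the SNI standard expects.

def _name_matches(pattern, hostname):
    """RFC 6125-style comparison; a wildcard is allowed only as the
    whole left-most label (e.g. *.example.com)."""
    pattern, hostname = pattern.lower(), hostname.lower()
    if pattern == hostname:
        return True
    if not pattern.startswith("*."):
        return False
    # "*.example.com" matches "foo.example.com", not "a.b.example.com"
    return (hostname.endswith(pattern[1:])
            and hostname.count(".") == pattern.count("."))

def select_certificate(certs, sni_hostname):
    """certs: list of {'id': ..., 'names': [CN and SANs]} in the
    tenant-supplied order; the first matching container wins."""
    for cert in certs:
        if any(_name_matches(n, sni_hostname) for n in cert["names"]):
            return cert
    return None  # caller falls back to the default certificate
```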

Stephen

On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza 
carlos.ga...@rackspace.commailto:carlos.ga...@rackspace.com wrote:

On Jul 15, 2014, at 10:55 AM, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com
 wrote:

 Hi,


 Obtaining the domain name from the x509 is probably more of a 
 driver/backend/device capability, it would make sense to have a library that 
 could be used by anyone wishing to do so in their driver code.
You can do what ever you want in *your* driver. The code to extract this 
information will be apart of the API and needs to be mentioned in the spec now. 
PyOpenSSL with PyASN1 are the most likely candidates.

Carlos D. Garza

 -Sam.



 From: Eichberger, German 
 [mailto:german.eichber...@hp.commailto:german.eichber...@hp.com]
 Sent: Tuesday, July 15, 2014 6:43 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

 Hi,

 My impression was that the frontend would extract the names and hand them to 
 the driver.  This has the following advantages:

 · We can be sure all drivers can extract the same names
 · No duplicate code to maintain
 · If we ever allow the user to specify the names in the UI rather than in
 the certificate, the driver doesn't need to change.

 I think I saw Adam say something similar in a comment to the code.

 Thanks,
 German

 From: Evgeny Fedoruk [mailto:evge...@radware.commailto:evge...@radware.com]
 Sent: Tuesday, July 15, 2014 7:24 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
 SubjectCommonName and/or SubjectAlternativeNames from X509

 Hi All,

 Since this issue came up from TLS capabilities RST doc review, I opened a ML 
 thread for it to make the decision.
 Currently, the document says:

 “
 For SNI functionality, tenant will supply list of TLS containers in specific
 Order.
 In case when specific back-end is not able to support SNI capabilities,
 its driver should throw an exception. The exception message should state
 that this specific back-end (provider) does not support SNI capability.
 

Re: [openstack-dev] [Ironic] [Horizon] [UX] Wireframes for

2014-07-16 Thread Wan-yen Hsu
Hi, Jarda


We are already prepared for multiple drivers. If you look at the Driver

field, there is a dropdown menu from which you can choose a driver and

based on the selection the additional information (like IP, user, passw)

will be changed.



 So, if iLO + Virtual Media is chosen in the dropdown menu, the Horizon node
management panel will display iLO user and iLO password instead of
IPMI user and IPMI password?  This is great!





 Also, myself and a few folks are working on Ironic UEFI support and

 we hope to land this feature in Juno (Spec is still in review state but

 the feature is on the Ironic Juno Prioritized list). In order to add

 UEFI boot feature, a Supported Boot Modes field in the hardware info

 is needed.  The possible values are BIOS Only, UEFI Only, and

 BIOS+UEFI.   We will need to work with you to add this field onto

 hardware info.



There is no problem to accommodate this change in the UI once the

back-end supports it. So if there is a desire to expose the feature in

the UI, when there is already working back-end solution, feel free to

send a patch which adds that to the HW info - it's an easy addition and

the UI is prepared for such types of expansions.



 ok.  Thanks!







wanyen



Hi Wan,



thanks for great notes. My response is inline:



On 2014/15/07 23:19, Wan-yen Hsu wrote:

 The Register Nodes panel uses IPMI user and IPMI Password.

 However, not all Ironic drivers use IPMI, for instance, some Ironic

 drivers will use iLO or other BMC interfaces instead of IPMI.  I would

 like to suggest changing IPMI to BMC or IPMI/BMC to acomodate

 more Ironic drivers.  The Driver field will reflect what power

 management interface (e.g., IPMI + PXE, or iLO + Virtual Media) is used

 so it can be used to correlate the user and password fields.



We are already prepared for multiple drivers. If you look at the Driver

field, there is a dropdown menu from which you can choose a driver and

based on the selection the additional information (like IP, user, passw)

will be changed.



 Also, myself and a few folks are working on Ironic UEFI support and

 we hope to land this feature in Juno (Spec is still in review state but

 the feature is on the Ironic Juno Prioritized list). In order to add

 UEFI boot feature, a Supported Boot Modes field in the hardware info

 is needed.  The possible values are BIOS Only, UEFI Only, and

 BIOS+UEFI.   We will need to work with you to add this field onto

 hardware info.



There is no problem to accommodate this change in the UI once the

back-end supports it. So if there is a desire to expose the feature in

the UI, when there is already working back-end solution, feel free to

send a patch which adds that to the HW info - it's an easy addition and

the UI is prepared for such types of expansions.





 Thanks!



 wanyen



Cheers

-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-16 Thread Mike Spreitzer
Clint Byrum cl...@fewbar.com wrote on 07/02/2014 01:54:49 PM:

 Excerpts from Qiming Teng's message of 2014-07-02 00:02:14 -0700:
  Just some random thoughts below ...
  
  On Tue, Jul 01, 2014 at 03:47:03PM -0400, Mike Spreitzer wrote:
   In AWS, an autoscaling group includes health maintenance functionality ---
   both an ability to detect basic forms of failures and an ability to react
   properly to failures detected by itself or by a load balancer.  What is
   the thinking about how to get this functionality in OpenStack? Since 

  
  We are prototyping a solution to this problem at IBM Research - China
  lab.  The idea is to leverage oslo.messaging and ceilometer events for
  instance (possibly other resource such as port, securitygroup ...)
  failure detection and handling.
  
 
 Hm.. perhaps you should be contributing some reviews here as you may
 have some real insight:
 
 https://review.openstack.org/#/c/100012/
 
 This sounds a lot like what we're working on for continuous convergence.

I noticed that health checking in AWS goes beyond convergence.  In AWS an 
ELB can be configured with a URL to ping, for application-level health 
checking.  And an ASG can simply be *told* the health of a member by a 
user's own external health system.  I think we should have analogous 
functionality in OpenStack.  Does that make sense to you?  If so, do you 
have any opinion on the right way to integrate, so that we do not have 
three completely independent health maintenance systems?
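
For what it's worth, the threshold behavior behind AWS-style checks is easy to sketch. The class below is illustrative only (names and defaults are mine, not from Heat or AWS): a member flips state only after N consecutive contradicting results, and an external system can simply overwrite the state, which is the analogue of an ASG being *told* a member's health:

```python
# Sketch of healthy/unhealthy threshold logic: one failed ping does not
# trigger replacement; the state changes only after a streak of
# contradicting results. Thresholds are illustrative.

class MemberHealth(object):
    def __init__(self, healthy_threshold=2, unhealthy_threshold=3):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.state = "HEALTHY"
        self._streak = 0  # consecutive results contradicting current state

    def record(self, check_passed):
        # A passing check contradicts UNHEALTHY; a failing one
        # contradicts HEALTHY.
        contradicts = (check_passed == (self.state == "UNHEALTHY"))
        if not contradicts:
            self._streak = 0
            return self.state
        self._streak += 1
        needed = (self.healthy_threshold if self.state == "UNHEALTHY"
                  else self.unhealthy_threshold)
        if self._streak >= needed:
            self.state = ("HEALTHY" if self.state == "UNHEALTHY"
                          else "UNHEALTHY")
            self._streak = 0
        return self.state

    def set_health(self, state):
        # Analogue of the user's external health system simply *telling*
        # the group the member's health.
        self.state = state
        self._streak = 0
```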

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Monty Taylor
On 07/16/2014 07:27 PM, Vishvananda Ishaya wrote:
 
 On Jul 16, 2014, at 8:28 AM, Daniel P. Berrange berra...@redhat.com wrote:
 
 On Wed, Jul 16, 2014 at 08:12:47AM -0700, Clark Boylan wrote:

 I am worried that we would just regress to the current process because
 we have tried something similar to this previously and were forced to
 regress to the current process.

 IMHO the longer we wait between updating the gate to new versions
 the bigger the problems we create for ourselves. eg we were switching
 from 0.9.8 released Dec 2011, to  1.1.1 released Jun 2013, so we
 were exposed to over 1 + 1/2 years worth of code churn in a single
 event. The fact that we only hit a couple of bugs in that, is actually
 remarkable given the amount of feature development that had gone into
 libvirt in that time. If we had been tracking each intervening libvirt
 release I expect the majority of updates would have had no ill effect
 on us at all. For the couple of releases where there was a problem we
 would not be forced to rollback to a version years older again, we'd
 just drop back to the previous release at most 1 month older.
 
 This is a really good point. As someone who has to deal with packaging
 issues constantly, it is odd to me that libvirt is one of the few places
 where we depend on upstream packaging. We constantly pull in new python
 dependencies from pypi that are not packaged in ubuntu. If we had to
 wait for packaging before merging the whole system would grind to a halt.
 
 I think we should be updating our libvirt version more frequently by
 installing from source or our own PPA instead of waiting for the Ubuntu
 team to package it.

Shrinking in terror from what I'm about to say ... but I actually agree
with this. There are SEVERAL logistical issues we'd need to sort, not
the least of which involve the actual mechanics of us doing that and
properly gating, etc. But I think that, like the python depends where we
tell distros what version we _need_ rather than using what version they
have, libvirt, qemu, ovs and maybe one or two other things are areas in
which we may want or need to have a strongish opinion.

I'll bring this up in the room tomorrow at the Infra/QA meetup, and will
probably be flayed alive for it - but maybe I can put forward a
straw-man proposal on how this might work.

Monty



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Eric Windisch
On Wed, Jul 16, 2014 at 12:55 PM, Roman Bogorodskiy 
rbogorods...@mirantis.com wrote:

   Eric Windisch wrote:

  This thread highlights more deeply the problems for the FreeBSD folks.
  First, I still disagree with the recommendation that they contribute to
  libvirt. It's a classic example of creating two or more problems from
 one.
  Once they have support in libvirt, how long before their code is in a
  version of libvirt acceptable to Nova? When they hit edge-cases or bugs,
  requiring changes in libvirt, how long before those fixes are accepted by
  Nova?

 Could you please elaborate on why you disagree with the approach of
 contributing patches to libvirt, and what alternative approach you propose?


I don't necessarily disagree with contributing patches to libvirt. I
believe that the current system makes it difficult to perform quick,
iterative development. I wish to see this thread attempt to solve that
problem and reduce the barrier to getting stuff done.


 Also, could you please elaborate on what is 'version of libvirt
 acceptable to Nova'? Cannot we just say that e.g. Nova requires libvirt
 X.Y to be deployed on FreeBSD?


This is precisely my point, that we need to support different versions of
libvirt and to test those versions. If we're going to support different
versions of libvirt on FreeBSD, Ubuntu, and Red Hat - those should be
tested, possibly as third-party options.

The primary testing path for libvirt upstream should be with the latest
stable release with a non-voting test against trunk. There might be value
in testing against a development snapshot as well, where we know there are
features we want in an unreleased version of libvirt but where we cannot
trust trunk to be stable enough for gate.


 Anyway, speaking about FreeBSD support, I assume we are actually talking
 about Bhyve support. I think it'd be good to break up the task and
 implement FreeBSD support for libvirt/Qemu first


I believe Sean was referring to Bhyve support; this is how I interpreted
it.


-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] os-net-config

2014-07-16 Thread Dan Prince
Hi TripleO!

I wanted to get the word out on progress with a new os-net-config tool
for TripleO. The spec (not yet approved) lives here:

https://review.openstack.org/#/c/97859/

We've also got a working implementation here:

https://github.com/dprince/os-net-config

You can see WIP example of how it wires in here (more work to do on this
to fully support parity):

https://review.openstack.org/#/c/104054/1/elements/network-utils/bin/ensure-bridge,cm

The end goal is that we will be able to more flexibly control our host
level network settings in TripleO. Once it is fully integrated
os-net-config would provide a mechanism to drive more flexible
configurations (multiple bridges, bonding, etc.) via Heat metadata.

We are already in dire need of this sort of thing today because we can't
successfully deploy our CI overclouds without making manual changes to
our images (this is because we need 2 bridges and our heat templates
only support 1).
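
To make that concrete, the kind of input os-net-config consumes looks roughly like the sketch below. The field names are taken from the WIP implementation and may change as the spec evolves; interface names are placeholders:

```yaml
# Illustrative os-net-config input: two bridges, as needed for the CI
# overclouds mentioned above.
network_config:
  - type: ovs_bridge
    name: br-ctlplane
    use_dhcp: true
    members:
      - type: interface
        name: em1
  - type: ovs_bridge
    name: br-ex
    members:
      - type: interface
        name: em2
```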

Dan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Chris Friesen

On 07/16/2014 11:59 AM, Monty Taylor wrote:

On 07/16/2014 07:27 PM, Vishvananda Ishaya wrote:



This is a really good point. As someone who has to deal with packaging
issues constantly, it is odd to me that libvirt is one of the few places
where we depend on upstream packaging. We constantly pull in new python
dependencies from pypi that are not packaged in ubuntu. If we had to
wait for packaging before merging the whole system would grind to a halt.

I think we should be updating our libvirt version more frequently by
installing from source or our own PPA instead of waiting for the Ubuntu
team to package it.


Shrinking in terror from what I'm about to say ... but I actually agree
with this, There are SEVERAL logistical issues we'd need to sort, not
the least of which involve the actual mechanics of us doing that and
properly gating,etc. But I think that, like the python depends where we
tell distros what version we _need_ rather than using what version they
have, libvirt, qemu, ovs and maybe one or two other things are areas in
which we may want or need to have a strongish opinion.

I'll bring this up in the room tomorrow at the Infra/QA meetup, and will
probably be flayed alive for it - but maybe I can put forward a
straw-man proposal on how this might work.


How would this work...would you have them uninstall the distro-provided 
libvirt/qemu and replace them with newer ones?  (In which case what 
happens if the version desired by OpenStack has bugs in features that 
OpenStack doesn't use, but that some other software that the user wants 
to run does use?)


Or would you have OpenStack versions of them installed in parallel in an 
alternate location?


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - Location for common third-party libs?

2014-07-16 Thread Kevin Benton
I have filed a bug in Red Hat[1], however I'm not sure if it's in the right
place.

Ihar, can you verify that it's correct or move it to the appropriate
location?

1. https://bugzilla.redhat.com/show_bug.cgi?id=1120332


On Wed, Jul 9, 2014 at 3:29 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 Reviving the old thread.

 On 17/06/14 11:23, Kevin Benton wrote:
  Hi Ihar,
 
  What is the reason to breakup neutron into so many packages? A
  quick disk usage stat shows the plugins directory is currently
  3.4M. Is that considered to be too much space for a package, or was
  it for another reason?

 I think the reasoning was that we don't want to pollute systems with
 unneeded files, and it seems to be easily achievable by splitting
 files into separate packages. It turned out it's not that easy now
 that we have dependencies between ML2 mechanisms and separate plugins.

 So I would be in favor of merging plugin packages back into
 python-neutron package. AFAIK there is still no bug for that in Red
 Hat Bugzilla, so please report one.

 
  Thanks, Kevin Benton
 
 
  On Mon, Jun 16, 2014 at 3:37 PM, Ihar Hrachyshka
  ihrac...@redhat.com wrote:
 
  On 17/06/14 00:10, Anita Kuno wrote:
  On 06/16/2014 06:02 PM, Kevin Benton wrote:
  Hello,
 
  In the Big Switch ML2 driver, we rely on quite a bit of
  code from the Big Switch plugin. This works fine for
  distributions that include the entire neutron code base.
  However, some break apart the neutron code base into
  separate packages. For example, in CentOS I can't use the
  Big Switch ML2 driver with just ML2 installed because the
  Big Switch plugin directory is gone.
 
  Is there somewhere where we can put common third party code
  that will be safe from removal during packaging?
 
 
  Hi,
 
  I'm a neutron packager for redhat based distros.
 
  AFAIK the main reason is to avoid installing lots of plugins to
  systems that are not going to use them. No one really spent too
  much time going file by file and determining internal
  interdependencies.
 
  In your case, I would move those Brocade specific ML2 files to
  Brocade plugin package. I would suggest reporting the bug in Red
  Hat Bugzilla. I think this won't get the highest priority, but once
  packagers have spare cycles, this can be fixed.
 
  Cheers, /Ihar
 
 
 
 
 
 
 
 
 





-- 
Kevin Benton


Re: [openstack-dev] [Neutron] minimal device driver for VPN

2014-07-16 Thread Nachi Ueno
QQ: do you have __init__.py in the directory?
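A missing `__init__.py` is indeed a common cause of this kind of `DeviceDriverImportError`, because the driver is loaded from a dotted path at runtime. A minimal stdlib sketch of that loading pattern (not the actual Neutron code, which goes through its importutils helper and wraps failures in `DeviceDriverImportError`):

```python
# Simplified sketch of dotted-path driver loading, as used for device
# drivers configured by name. Assumption: this mirrors, not reproduces,
# the Neutron implementation.
import importlib

def import_class(dotted_path):
    module_name, class_name = dotted_path.rsplit('.', 1)
    # import_module raises ImportError if any package on the path lacks
    # an __init__.py, even when the target .py file itself exists.
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```

So if, say, the `junos_vpnaas` or `device_drivers` directory lacked an `__init__.py`, the import above would fail even though `fake_device_driver.py` and the class exist.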


2014-07-16 11:43 GMT-07:00 Julio Carlos Barrera Juez 
juliocarlos.barr...@i2cat.net:

 I have been fighting with this for months. I want to develop a VPN Neutron
 plugin, but it is almost impossible to figure out how to achieve it. This is a
 thread I opened months ago in which Paul Michali helped me a lot:
 http://lists.openstack.org/pipermail/openstack-dev/2014-February/028389.html

 I want to know the minimum requirements to develop a device driver and a
 service driver for a VPN Neutron plugin. I tried adding an empty device
 driver and I got this error:

 DeviceDriverImportError: Can not load driver
 :neutron.services.vpn.junos_vpnaas.device_drivers.fake_device_driver.FakeDeviceDriver

 Both the Python file and the class exist, but the implementation is empty. What
 is the problem? What do I need to include in this file/class to avoid this
 error?

 Thank you.

  http://dana.i2cat.net   http://www.i2cat.net/en
 Julio C. Barrera Juez  [image: View my profile on LinkedIn]
 http://es.linkedin.com/in/jcbarrera/en
 Office phone: (+34) 93 357 99 27 (ext. 527)
 Office mobile phone: (+34) 625 66 77 26
 Distributed Applications and Networks Area (DANA)
 i2CAT Foundation, Barcelona





Re: [openstack-dev] [Neutron] minimal device driver for VPN

2014-07-16 Thread Paul Michali (pcm)
Do you have a repo with the code that is visible to the public?

What does the /etc/neutron/vpn_agent.ini look like?

Can you put the log output of the actual error messages seen?

Regards,

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Jul 16, 2014, at 2:43 PM, Julio Carlos Barrera Juez 
juliocarlos.barr...@i2cat.net wrote:

 I have been fighting with this for months. I want to develop a VPN Neutron plugin, 
 but it is almost impossible to figure out how to achieve it. This is a thread I 
 opened months ago in which Paul Michali helped me a lot: 
 http://lists.openstack.org/pipermail/openstack-dev/2014-February/028389.html
 
 I want to know the minimum requirements to develop a device driver and a 
 service driver for a VPN Neutron plugin. I tried adding an empty device 
 driver and I got this error:
 
 DeviceDriverImportError: Can not load driver 
 :neutron.services.vpn.junos_vpnaas.device_drivers.fake_device_driver.FakeDeviceDriver
 
 Both the Python file and the class exist, but the implementation is empty. What is 
 the problem? What do I need to include in this file/class to avoid this error?
 
 Thank you.
 
   
 Julio C. Barrera Juez  
 Office phone: (+34) 93 357 99 27 (ext. 527)
 Office mobile phone: (+34) 625 66 77 26
 Distributed Applications and Networks Area (DANA)
 i2CAT Foundation, Barcelona





Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-16 Thread Carlos Garza

On Jul 16, 2014, at 10:55 AM, Vijay Venkatachalam 
vijay.venkatacha...@citrix.com
 wrote:

 Apologies for the delayed response.

 I am OK with displaying the certificates contents as part of the API, that 
 should not harm.
  
 I think the discussion has to be split into 2 topics.
  
 1.   Certificate conflict resolution. Meaning what is expected when 2 or 
 more certificates become eligible during SSL negotiation
 2.   SAN support
  

OK, cool, that makes more sense. #2 seems to be met by Evgeny's proposal. I'll 
let you folks decide conflict resolution issue #1.


 I will send out 2 separate mails on this.
  
  
 From: Samuel Bercovici [mailto:samu...@radware.com] 
 Sent: Tuesday, July 15, 2014 11:52 PM
 To: OpenStack Development Mailing List (not for usage questions); Vijay 
 Venkatachalam
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
 OK.
  
 Let me be more precise, extracting the information for view sake / validation 
 would be good.
 Providing values that are different than what is in the x509 is what I am 
 opposed to.
  
 +1 for Carlos on the library and that it should be ubiquitously used.
  
 I will wait for Vijay to speak for himself in this regard…
  
 -Sam.
  
  
 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
 Sent: Tuesday, July 15, 2014 8:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
 +1 to German's and  Carlos' comments.
  
 It's also worth pointing out that some UIs will definitely want to show SAN 
 information and the like, so either having this available as part of the API, 
 or as a standard library we write which then gets used by multiple drivers is 
 going to be necessary.
  
 If we're extracting the Subject Common Name in any place in the code then we 
 also need to be extracting the Subject Alternative Names at the same place. 
 From the perspective of the SNI standard, there's no difference in how these 
 fields should be treated, and if we were to treat SANs differently then we're 
 both breaking the standard and setting a bad precedent.
  
 Stephen
  
 
 On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza carlos.ga...@rackspace.com 
 wrote:
 
 On Jul 15, 2014, at 10:55 AM, Samuel Bercovici samu...@radware.com
  wrote:
 
  Hi,
 
 
  Obtaining the domain name from the x509 is probably more of a 
  driver/backend/device capability, it would make sense to have a library 
  that could be used by anyone wishing to do so in their driver code.
 
You can do whatever you want in *your* driver. The code to extract this 
information will be a part of the API and needs to be mentioned in the spec 
now. PyOpenSSL with PyASN1 are the most likely candidates.
 
 Carlos D. Garza
 
  -Sam.
 
 
 
  From: Eichberger, German [mailto:german.eichber...@hp.com]
  Sent: Tuesday, July 15, 2014 6:43 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
  Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
 
  Hi,
 
  My impression was that the frontend would extract the names and hand them 
  to the driver.  This has the following advantages:
 
  · We can be sure all drivers can extract the same names
  · No duplicate code to maintain
  · If we ever allow the user to specify the names in the UI rather than in 
  the certificate, the driver doesn't need to change.
 
  I think I saw Adam say something similar in a comment to the code.
 
  Thanks,
  German
 
  From: Evgeny Fedoruk [mailto:evge...@radware.com]
  Sent: Tuesday, July 15, 2014 7:24 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
  SubjectCommonName and/or SubjectAlternativeNames from X509
 
  Hi All,
 
  Since this issue came up from TLS capabilities RST doc review, I opened a 
  ML thread for it to make the decision.
  Currently, the document says:
 
  “
  For SNI functionality, tenant will supply list of TLS containers in specific
  Order.
  In case when specific back-end is not able to support SNI capabilities,
  its driver should throw an exception. The exception message should state
  that this specific back-end (provider) does not support SNI capability.
  The clear sign of listener's requirement for SNI capability is
  a none empty SNI container ids list.
  However, reference implementation must support SNI capability.
 
  Specific back-end code may retrieve SubjectCommonName and/or altSubjectNames
  from the certificate which will determine the hostname(s) the certificate
  is associated with.
 
  The order of SNI containers list may be used by specific back-end code,
  like Radware's, for specifying priorities 
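For illustration only (not part of the spec text above), extracting the Subject CN and the DNS subjectAltNames is straightforward once a certificate has been decoded. The sketch below works on the dict form that Python's stdlib `ssl` module produces from a peer certificate; a PyOpenSSL/PyASN1 helper operating on raw PEM data would expose the same two values:

```python
def extract_names(peercert):
    """Return (subject_common_name, [dns_alt_names]) from a decoded cert.

    `peercert` has the dict shape returned by ssl.SSLSocket.getpeercert().
    Per the SNI discussion above, SANs must be treated the same way as
    the Subject CN, so both are returned together.
    """
    common_name = None
    for rdn in peercert.get('subject', ()):
        for key, value in rdn:
            if key == 'commonName':
                common_name = value
    alt_names = [value for (kind, value)
                 in peercert.get('subjectAltName', ())
                 if kind == 'DNS']
    return common_name, alt_names
```

A backend that (like NetScaler, per the later thread) ignores SANs would simply discard the second element; treating the pair uniformly is what keeps drivers consistent.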

[openstack-dev] [sahara] team meeting July 17 1800 UTC

2014-07-16 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20140717T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [oslo] oslo.serialization repo review

2014-07-16 Thread Ben Nemec

On 07/15/2014 03:52 PM, Ihar Hrachyshka wrote:
 On 15/07/14 20:36, Joshua Harlow wrote:
 LGTM.
 
 I'd be interesting in the future to see if we can transparently
 use some other serialization format (besides json)...
 
 That's my only compliant is that jsonutils is still named
 jsonutils instead of 'serializer' or something else but I
 understand the reasoning why...
 
 Now that jsonutils module contains all basic 'json' functions 
 (dump[s], load[s]), can we rename it to 'json' to mimic the
 standard 'json' library? I think jsonutils is now easy to use as an
 enhanced drop-in replacement for standard 'json' module, and I even
 envisioned a hacking rule that would suggest to use jsonutils
 instead of json. So appropriate naming would be helpful to push
 that use case.

We discussed this a bit on the oslo.utils spec, but we don't want to
shadow builtin names so we're leaving the utils suffix on the modules
that have it.  I would think the same applies here.

If someone wants to use this as a dropin they can still do from
oslo.serialization import jsonutils as json
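The enhancement jsonutils adds over the stdlib is, in essence, conversion of otherwise-unserializable types before encoding, behind the same dumps/loads surface. A stdlib-only sketch of that "enhanced drop-in" pattern (illustrative, not the actual oslo code):

```python
# Sketch of a jsonutils-style module: same surface as the stdlib json
# module, plus handling of types the stdlib encoder rejects (datetimes).
import datetime
import json as _json

def to_primitive(value):
    """Recursively convert values the stdlib encoder can't handle."""
    if isinstance(value, datetime.datetime):
        return value.isoformat()
    if isinstance(value, (list, tuple)):
        return [to_primitive(v) for v in value]
    if isinstance(value, dict):
        return {k: to_primitive(v) for k, v in value.items()}
    return value

def dumps(obj, **kwargs):
    return _json.dumps(to_primitive(obj), **kwargs)

# loads needs no enhancement, so it is re-exported unchanged.
loads = _json.loads
```

With such a module imported as `json`, existing `json.dumps(...)` call sites keep working unchanged while gaining the extra type support, which is exactly what makes the drop-in aliasing above attractive.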

 
 /Ihar
 
 
 -Josh
 
 On Jul 15, 2014, at 10:42 AM, Ben Nemec openst...@nemebean.com 
 wrote:
 
 And the link, since I forgot it before: 
 https://github.com/cybertron/oslo.serialization
 
 On 07/14/2014 04:59 PM, Ben Nemec wrote:
 Hi oslophiles,
 
 I've (finally) started the graduation of oslo.serialization, 
 and I'm up to the point of having a repo on github that
 passes the unit tests.
 
 I realize there is some more work to be done (e.g. replacing 
 all of the openstack.common files with libs) but my plan is
 to do that once it's under Gerrit control so we can review
 the changes properly.
 
 Please take a look and leave feedback as appropriate.
 Thanks!
 
 -Ben
 
 
 

 
 
 

 
 
 




Re: [openstack-dev] [Neutron][CI] DB migration error

2014-07-16 Thread Kyle Mestery
I've poked some folks on the infra channel about this now, as we need
this merged soon.

On Wed, Jul 16, 2014 at 11:30 AM, Kevin Benton blak...@gmail.com wrote:
 This bug is also affecting Ryu and the Big Switch CI.
 There is a patch to bump the version requirement for alembic linked in the
 bug report that should fix it. If we can't get that merged we may have to
 revert the healing patch.

 https://bugs.launchpad.net/bugs/1342507
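(For illustration, such a bump is a one-line change in the project's requirements file; the version floor shown below is only a placeholder, not the value from the linked review:)

```
alembic>=0.6.4  # placeholder floor; use the version from the review linked in the bug
```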

 On Jul 16, 2014 9:27 AM, trinath.soman...@freescale.com
 trinath.soman...@freescale.com wrote:

 Hi-



 With the neutron Update to my CI, I get the following error while
 configuring Neutron in devstack.



 2014-07-16 16:12:06.349 | INFO  [alembic.autogenerate.compare] Detected
 server default on column 'poolmonitorassociations.status'

 2014-07-16 16:12:06.411 | INFO
 [neutron.db.migration.alembic_migrations.heal_script] Detected added foreign
 key for column 'id' on table u'ml2_brocadeports'

 2014-07-16 16:12:14.853 | Traceback (most recent call last):

 2014-07-16 16:12:14.853 |   File /usr/local/bin/neutron-db-manage, line
 10, in module

 2014-07-16 16:12:14.853 | sys.exit(main())

 2014-07-16 16:12:14.854 |   File
 /opt/stack/new/neutron/neutron/db/migration/cli.py, line 171, in main

 2014-07-16 16:12:14.854 | CONF.command.func(config, CONF.command.name)

 2014-07-16 16:12:14.854 |   File
 /opt/stack/new/neutron/neutron/db/migration/cli.py, line 85, in
 do_upgrade_downgrade

 2014-07-16 16:12:14.854 | do_alembic_command(config, cmd, revision,
 sql=CONF.command.sql)

 2014-07-16 16:12:14.854 |   File
 /opt/stack/new/neutron/neutron/db/migration/cli.py, line 63, in
 do_alembic_command

 2014-07-16 16:12:14.854 | getattr(alembic_command, cmd)(config, *args,
 **kwargs)

 2014-07-16 16:12:14.854 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/command.py, line 124, in
 upgrade

 2014-07-16 16:12:14.854 | script.run_env()

 2014-07-16 16:12:14.854 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/script.py, line 199, in
 run_env

 2014-07-16 16:12:14.854 | util.load_python_file(self.dir, 'env.py')

 2014-07-16 16:12:14.854 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/util.py, line 205, in
 load_python_file

 2014-07-16 16:12:14.854 | module = load_module_py(module_id, path)

 2014-07-16 16:12:14.854 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/compat.py, line 58, in
 load_module_py

 2014-07-16 16:12:14.854 | mod = imp.load_source(module_id, path, fp)

 2014-07-16 16:12:14.854 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py,
 line 106, in module

 2014-07-16 16:12:14.854 | run_migrations_online()

 2014-07-16 16:12:14.855 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py,
 line 90, in run_migrations_online

 2014-07-16 16:12:14.855 | options=build_options())

 2014-07-16 16:12:14.855 |   File string, line 7, in run_migrations

 2014-07-16 16:12:14.855 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/environment.py, line 681,
 in run_migrations

 2014-07-16 16:12:14.855 | self.get_context().run_migrations(**kw)

 2014-07-16 16:12:14.855 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/migration.py, line 225, in
 run_migrations

 2014-07-16 16:12:14.855 | change(**kw)

 2014-07-16 16:12:14.856 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py,
 line 32, in upgrade

 2014-07-16 16:12:14.856 | heal_script.heal()

 2014-07-16 16:12:14.856 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 78, in heal

 2014-07-16 16:12:14.856 | execute_alembic_command(el)

 2014-07-16 16:12:14.856 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 93, in execute_alembic_command

 2014-07-16 16:12:14.856 | parse_modify_command(command)

 2014-07-16 16:12:14.856 |   File
 /opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py,
 line 126, in parse_modify_command

 2014-07-16 16:12:14.856 | op.alter_column(table, column, **kwargs)

 2014-07-16 16:12:14.856 |   File string, line 7, in alter_column

 2014-07-16 16:12:14.856 |   File string, line 1, in lambda

 2014-07-16 16:12:14.856 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/util.py, line 322, in go

 2014-07-16 16:12:14.857 | return fn(*arg, **kw)

 2014-07-16 16:12:14.857 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/operations.py, line 300, in
 alter_column

 2014-07-16 16:12:14.857 |
 existing_autoincrement=existing_autoincrement

 2014-07-16 16:12:14.857 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/ddl/mysql.py, line 42, in
 alter_column

 2014-07-16 16:12:14.857 | else existing_autoincrement

 2014-07-16 16:12:14.857 |   File
 /usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py, line 76, in
 _exec

 2014-07-16 16:12:14.857 | 

Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-07-16 Thread Ihar Hrachyshka

On 16/07/14 01:50, Vishvananda Ishaya wrote:
 
 On Jul 15, 2014, at 3:30 PM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:
 
  On 14/07/14 22:48, Vishvananda Ishaya wrote:
 
 On Jul 13, 2014, at 9:29 AM, Ihar Hrachyshka
 ihrac...@redhat.com wrote:
 
  On 12/07/14 03:17, Mike Bayer wrote:
 
 On 7/11/14, 7:26 PM, Carl Baldwin wrote:
 
 
 On Jul 11, 2014 5:32 PM, Vishvananda Ishaya 
 vishvana...@gmail.com
 mailto:vishvana...@gmail.com wrote:
 
 I have tried using pymysql in place of mysqldb and in
 real world
 concurrency
 tests against cinder and nova it performs slower. I
 was inspired by
 the mention
 of mysql-connector so I just tried that option
 instead.
 Mysql-connector seems
 to be slightly slower as well, which leads me to
 believe that the
 blocking inside of
 
  Do you have some numbers?  "Seems to be slightly slower" 
 doesn't
 really stand up as an argument against the numbers that
 have been posted in this thread.
 
 Numbers are highly dependent on a number of other factors, but
 I was seeing 100 concurrent list commands against cinder going
 from an average of 400 ms to an average of around 600 ms with
  both mysql-connector and pymysql.
 
 I've made my tests on neutron only, so there is possibility that 
 cinder works somehow differently.
 
 But, those numbers don't tell a lot in terms of considering the 
 switch. Do you have numbers for mysqldb case?
 
 Sorry if my commentary above was unclear. The  400ms is mysqldb. 
 The 600ms average was the same for both the other options.
 
 
 It is also worth mentioning that my test of 100 concurrent
 creates from the same project in cinder leads to average
 response times over 3 seconds. Note that creates return before
 the request is sent to the node for processing, so this is just
 the api creating the db record and sticking a message on the
 queue. A huge part of the slowdown is in quota reservation
 processing which does a row lock on the project id.
 
 Again, are those 3 seconds better or worse than what we have for
 mysqldb?
 
 The 3 seconds is from mysqldb. I don't have average response times
 for mysql-connector due to the timeouts I mention below.
 
 
 Before we are sure that an eventlet friendly backend "gets rid
 of all deadlocks", I will mention that trying this test
 against connector leads to some requests timing out at our load
 balancer (5 minute timeout), so we may actually be introducing
 deadlocks where the retry_on_deadlock operator is used.
 
 Deadlocks != timeouts. I attempt to fix eventlet-triggered db 
 deadlocks, not all possible deadlocks that you may envision, or
 timeouts.
 
 That may be true, but if switching the default is trading one
 problem for another it isn't necessarily the right fix. The timeout
 means that one or more greenthreads are never actually generating a
 response. I suspect an endless retry_on_deadlock between a couple
 of competing greenthreads which we don't hit with mysqldb, but it
 could be any number of things.
 
 
 
 Consider the above anecdotal for the moment, since I can't
 verify for sure that switching the sql driver didn't introduce
 some other race or unrelated problem.
 
 Let me just caution that we can't recommend replacing our
 mysql backend without real performance and load testing.
 
 I agree. Not saying that the tests are somehow complete, but here
 is what I was into last two days.
 
 There is a nice openstack project called Rally that is designed
 to allow easy benchmarks for openstack projects. They have four
 scenarios for neutron implemented: for networks, ports, routers,
 and subnets. Each scenario combines create and list commands.
 
 I've run each test with the following runner settings: times =
 100, concurrency = 10, meaning each scenario is run 100 times in
 parallel, and there were not more than 10 parallel scenarios
 running. Then I've repeated the same for times = 100, concurrency
 = 20 (also set max_pool_size to 20 to allow sqlalchemy utilize
 that level of parallelism), and times = 1000, concurrency = 100
 (same note on sqlalchemy parallelism).
 
 You can find detailed html files with nice graphs here [1].
 Brief description of results is below:
 
 1. create_and_list_networks scenario: for 10 parallel workers 
 performance boost is -12.5% from original time, for 20 workers
 -6.3%, for 100 workers there is a slight reduction of average
 time spent for scenario +9.4% (this is the only scenario that
 showed slight reduction in performance, I'll try to rerun the
 test tomorrow to see whether it was some discrepancy when I
 executed it that influenced the result).
 
 2. create_and_list_ports scenario: for 10 parallel workers boost
 is -25.8%, for 20 workers it's -9.4%, and for 100 workers it's
 -12.6%.
 
 3. create_and_list_routers scenario: for 10 parallel workers
 boost is -46.6% (almost half of original time), for 20 workers
 it's -51.7% (more than a half), for 100 workers it's -41.5%.
 
 4. 
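For context on what the switch under test involves: in oslo.db-based services the client library is selected by the SQLAlchemy URL scheme in the service configuration, so a benchmark run like the above only needs a one-line change. The option names below are the usual neutron.conf ones; host and credentials are placeholders:

```ini
[database]
# default C client (MySQL-python / mysqldb):
# connection = mysql://neutron:secret@127.0.0.1/neutron
# pure-Python, eventlet-friendly client under test:
connection = mysql+pymysql://neutron:secret@127.0.0.1/neutron
# raised to let SQLAlchemy match the benchmark's concurrency level
max_pool_size = 20
```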

[openstack-dev] What's Up Doc? July 16 2014

2014-07-16 Thread Anne Gentle
Hi all,
First of all, I'm sorry we had to skip last week's doc team meeting and
that I didn't send this note out last week -- had to take care of my son's
health. As Pa from Little House on the Prairie would say, All's well that
ends well.

Thanks to the APAC team for holding the docs team meeting this week.
Minutes and logs:
http://eavesdrop.openstack.org/meetings/docteam/2014/docteam.2014-07-16-03.06.html
http://eavesdrop.openstack.org/meetings/docteam/2014/docteam.2014-07-16-03.06.log.html

__In review and merged this past week__
The CLI Reference has been updated for the release of:
 python-novaclient 2.18.1
 python-keystoneclient  0.9
 python-cinderclient 1.0.9 https://review.openstack.org/106553
 python-glanceclient 0.13.1

We're now labeling the release name in a running sidebar of text on older
releases and the current release for all docs pages that correlate with an
integrated release.

The neutron.conf advanced configuration info has been updated to use the
alias openvswitch rather than, for example,
neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2.

__High priority doc work__

I'm as eager as you are to get my hands on the results of the Architecture
and Design Guide! We're working on the best output and should have it
available soon.

__Ongoing doc work__

To clarify the request for docs-specs, while we have some wiki page
specifications for Launchpad blueprints, I was hoping to try out the
docs-specs repo for the networking guide, the HOT template user guide
chapter, and for app developer deliverables. We are trying out the
docs-specs repo rather than wiki pages. So far Andreas has proposed one for
a common glossary, and Gauvain is working on another for the HOT template
user guide chapter. Phil and Matt are working on the networking guide, so
that leaves Tom and me working on developer deliverables. Training guides,
do you have any blueprints you want reviewed? Let's get them proposed.

Or, if we think we should stick to wiki pages for specs, that's okay too.

__New incoming doc requests__

The Trove team gets a gold star for outlining their doc gaps here:
https://etherpad.openstack.org/p/trove-doc-items. Their goal is to get
those items at least in draft by 7/24.

Mostly the interest is in the HOT templates doc and the upcoming Networking
doc swarm and spec.

__Doc tools updates__

I want to be clear that there's no Foundation support for any purchased
licenses of a proprietary toolchain. Our entire docs toolchain is open.
Some of us choose to use Oxygen for authoring, and Oxygen XML, the company,
chooses to support open source projects by providing free licenses for a
longer trial than their 30 day trial. So as far as I know, something like
Prince for output wouldn't be supported.

The clouddocs-maven-plugin has a 2.1.2 release (release notes:
https://github.com/stackforge/clouddocs-maven-plugin/blob/master/RELEASE_NOTES.rst#clouddocs-maven-plugin-212-july-15-2014)
which enables hyphenation. To update to 2.1.2, update the version
indicated for the plugin in the pom.xml and try out hyphenation!
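To illustrate, the update is a version bump in the plugin stanza of each book's pom.xml (the groupId shown is the one commonly used for this plugin; keep whatever your existing pom.xml already declares):

```xml
<plugin>
  <groupId>com.rackspace.cloud.api</groupId>
  <artifactId>clouddocs-maven-plugin</artifactId>
  <version>2.1.2</version>
</plugin>
```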

__Other doc news__

I plan to attend the Ops Meetup in San Antonio Aug. 25-26th. More details
at https://etherpad.openstack.org/p/SAT-ops-meetup. Please let me know your
Ops Docs Needs prior to or at that event.

I absolutely love this blog post by Mark McLoughlin at
http://blogs.gnome.org/markmc/2014/06/06/an-ideal-openstack-developer/  -
an excellent example of satire and how we should all watch each other for
burnout. :) Best paragraph:
And then there’s docs, always the poor forgotten child of any open source
project. Yet OpenStack has some relatively awesome docs and a great team
developing them. They can never hope to cope with the workload themselves,
though, so they need you to pitch in and help perfect those docs in your
area of expertise.
Great job docs team, for working so hard on docs.


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-16 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2014-07-16 10:50:42 -0700:
 Clint Byrum cl...@fewbar.com wrote on 07/02/2014 01:54:49 PM:
 
  Excerpts from Qiming Teng's message of 2014-07-02 00:02:14 -0700:
   Just some random thoughts below ...
   
   On Tue, Jul 01, 2014 at 03:47:03PM -0400, Mike Spreitzer wrote:
    In AWS, an autoscaling group includes health maintenance functionality ---
    both an ability to detect basic forms of failures and an ability to react
    properly to failures detected by itself or by a load balancer.  What is
    the thinking about how to get this functionality in OpenStack?  Since
   
   We are prototyping a solution to this problem at IBM Research - China
   lab.  The idea is to leverage oslo.messaging and ceilometer events for
   instance (possibly other resource such as port, securitygroup ...)
   failure detection and handling.
   
  
  Hm.. perhaps you should be contributing some reviews here as you may
  have some real insight:
  
  https://review.openstack.org/#/c/100012/
  
  This sounds a lot like what we're working on for continuous convergence.
 
 I noticed that health checking in AWS goes beyond convergence.  In AWS an 
 ELB can be configured with a URL to ping, for application-level health 
 checking.  And an ASG can simply be *told* the health of a member by a 
 user's own external health system.  I think we should have analogous 
 functionality in OpenStack.  Does that make sense to you?  If so, do you 
 have any opinion on the right way to integrate, so that we do not have 
 three completely independent health maintenance systems?

The check URL is already a part of Neutron LBaaS IIRC. What may not be
a part is notifications for when all members are reporting down (which
might be something to trigger scale-up).

If we don't have push checks in our auto scaling implementation then we
don't have a proper auto scaling implementation.
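As a rough sketch of what a check-URL (pull) monitor does, as distinct from the push-style member reporting discussed above: each member is polled periodically and considered up only while its check URL answers with a healthy status. This is generic code, not Neutron's implementation:

```python
# Generic check-URL probe: one poll of one member's health endpoint.
import urllib.request

def check_member(url, timeout=3):
    """Return True if the member's check URL answers with a 2xx/3xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        # Connection refused, timeout, DNS failure: member is down.
        return False
```

A monitor loop would run this every `delay` seconds per member and mark a member down after `max_retries` consecutive failures; a notification fired when *all* members of a group report down is the kind of event that could trigger the scale-up mentioned above.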



Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SubjectAlternativeNames (SAN)

2014-07-16 Thread Carlos Garza

On Jul 16, 2014, at 12:30 PM, Vijay Venkatachalam 
vijay.venkatacha...@citrix.com
 wrote:

We will have the code that parses the X509 in the API scope of the code. The 
validation I'm referring to is making sure the key matches the cert used, and 
that we mandate that at a minimum the backend driver support RSA. Since the 
X509 validation is happening at the API layer, this same module will also 
handle the extraction of the SANs. I am proposing that the methods that can 
extract the SAN and SCN from the X509 be present in the API portion of the 
code and that drivers can call these methods if they need to. In fact, I'm 
already working to get these extraction methods contributed to the PyOpenSSL 
project so that they will be available at a more fundamental layer than our 
neutron/LBaaS code. At the very least I want the spec to declare that SAN and 
SCN parsing must be made available from the API layer. If PyOpenSSL has the 
methods available at that time then we can simply write wrappers for them in 
the API or write higher-level methods in the API module. Bottom line I 

 I am partially open to the idea of letting the driver handle the behavior 
of the cert parsing, although I defer this to the rest of the folks, as I get 
the feeling that having different implementations exhibit different behavior 
may sound scary. 

  
 I think it is best not to mention SAN in the OpenStack 
 TLS spec. It is expected that the backend should implement according to the 
 SSL/SNI IETF spec.
 Let’s leave the implementation/validation part to the driver. For example, 
 NetScaler does not support SAN, and the NetScaler driver could either throw an 
 error if certs with SAN are used or ignore it.

How does NetScaler decide which cert to associate with the SNI 
handshake?

  
 Does anyone see a requirement for detailing?
  
  
 Thanks,
 Vijay V.
  
  
 From: Vijay Venkatachalam 
 Sent: Wednesday, July 16, 2014 8:54 AM
 To: 'Samuel Bercovici'; 'OpenStack Development Mailing List (not for usage 
 questions)'
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
 Apologies for the delayed response.

 I am OK with displaying the certificates contents as part of the API, that 
 should not harm.
  
 I think the discussion has to be split into 2 topics.
  
 1.   Certificate conflict resolution. Meaning what is expected when 2 or 
 more certificates become eligible during SSL negotiation
 2.   SAN support
  
 I will send out 2 separate mails on this.
  
  
 From: Samuel Bercovici [mailto:samu...@radware.com] 
 Sent: Tuesday, July 15, 2014 11:52 PM
 To: OpenStack Development Mailing List (not for usage questions); Vijay 
 Venkatachalam
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
 OK.
  
 Let me be more precise, extracting the information for view sake / validation 
 would be good.
 Providing values that are different than what is in the x509 is what I am 
 opposed to.
  
 +1 for Carlos on the library and that it should be ubiquitously used.
  
 I will wait for Vijay to speak for himself in this regard…
  
 -Sam.
  
  
 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
 Sent: Tuesday, July 15, 2014 8:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
  
 +1 to German's and  Carlos' comments.
  
 It's also worth pointing out that some UIs will definitely want to show SAN 
 information and the like, so either having this available as part of the API, 
 or as a standard library we write which then gets used by multiple drivers is 
 going to be necessary.
  
 If we're extracting the Subject Common Name in any place in the code then we 
 also need to be extracting the Subject Alternative Names at the same place. 
 From the perspective of the SNI standard, there's no difference in how these 
 fields should be treated, and if we were to treat SANs differently then we're 
 both breaking the standard and setting a bad precedent.
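To make the uniform treatment concrete, here is a minimal, hypothetical sketch of SNI certificate selection in which the Subject CN and the SAN DNS entries feed one candidate pool. The data shapes and matching policy are invented for illustration, not any driver's actual logic:

```python
# Hypothetical sketch: pick a certificate for an SNI hostname, treating
# the Subject CN and the SubjectAltName DNS entries identically.
from fnmatch import fnmatchcase

def candidate_names(cert):
    """All DNS names a cert claims: CN plus SAN DNS entries, one pool."""
    names = [cert["cn"]] if cert.get("cn") else []
    names += cert.get("san", [])
    return names

def select_cert(certs, sni_hostname):
    """Return the first matching cert, exact matches beating wildcards.

    Note: fnmatch's '*' also matches dots, so this wildcard handling is
    looser than RFC 6125; it is a simplification for illustration.
    """
    host = sni_hostname.lower()
    wildcard_hit = None
    for cert in certs:
        for name in candidate_names(cert):
            name = name.lower()
            if name == host:
                return cert                    # exact CN or SAN match
            if name.startswith("*.") and fnmatchcase(host, name):
                wildcard_hit = wildcard_hit or cert
    return wildcard_hit                        # None -> fall back to default cert

certs = [
    {"cn": "example.org", "san": ["www.example.org"]},
    {"cn": "fallback.net", "san": ["*.example.org"]},
]
print(select_cert(certs, "www.example.org")["cn"])   # exact SAN match -> example.org
print(select_cert(certs, "api.example.org")["cn"])   # wildcard SAN match -> fallback.net
```

Because CN and SANs land in the same candidate pool, a driver using this shape cannot accidentally privilege one over the other.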
  
 Stephen
  
 
 On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza carlos.ga...@rackspace.com 
 wrote:
 
 On Jul 15, 2014, at 10:55 AM, Samuel Bercovici samu...@radware.com
  wrote:
 
  Hi,
 
 
  Obtaining the domain name from the x509 is probably more of a 
  driver/backend/device capability, it would make sense to have a library 
  that could be used by anyone wishing to do so in their driver code.
 
 You can do whatever you want in *your* driver. The code to extract this 
 information will be a part of the API and needs to be mentioned in the spec 
 now. PyOpenSSL with pyasn1 are the most likely candidates.
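Whichever library ends up hosting the helpers, the extraction itself is small. Here is a hedged sketch that operates on the dict shape Python's stdlib `ssl.SSLSocket.getpeercert()` returns rather than parsing DER directly; the field names follow that stdlib format, and nothing below is the proposed PyOpenSSL API:

```python
# Sketch of CN/SAN extraction at the API layer. Real code would parse the
# PEM with PyOpenSSL/pyasn1; for illustration this works on the dict shape
# that Python's stdlib ssl.SSLSocket.getpeercert() returns.

def extract_cn(cert):
    """Return the SubjectCommonName, or None if absent."""
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return None

def extract_sans(cert):
    """Return all SubjectAltName DNS entries, in certificate order."""
    return [value for kind, value in cert.get("subjectAltName", ())
            if kind == "DNS"]

cert = {
    "subject": ((("countryName", "US"),), (("commonName", "example.com"),)),
    "subjectAltName": (("DNS", "example.com"), ("DNS", "www.example.com")),
}
print(extract_cn(cert))    # example.com
print(extract_sans(cert))  # ['example.com', 'www.example.com']
```

Exposing exactly this pair of functions from the API layer is what keeps the parsing out of every individual driver.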
 
 Carlos D. Garza
 
  -Sam.
 
 
 
  From: 

Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-16 Thread Doug Wiegley


On 7/16/14, 2:43 PM, Clint Byrum cl...@fewbar.com wrote:

Excerpts from Mike Spreitzer's message of 2014-07-16 10:50:42 -0700:
 Clint Byrum cl...@fewbar.com wrote on 07/02/2014 01:54:49 PM:
 
  Excerpts from Qiming Teng's message of 2014-07-02 00:02:14 -0700:
   Just some random thoughts below ...
   
   On Tue, Jul 01, 2014 at 03:47:03PM -0400, Mike Spreitzer wrote:
In AWS, an autoscaling group includes health maintenance functionality ---
both an ability to detect basic forms of failures and an ability to react
properly to failures detected by itself or by a load balancer. What is
the thinking about how to get this functionality in OpenStack? Since
 
   
   We are prototyping a solution to this problem at IBM Research - China
   lab.  The idea is to leverage oslo.messaging and ceilometer events for
   instance (possibly other resources such as port, securitygroup ...)
   failure detection and handling.
   
  
  Hm.. perhaps you should be contributing some reviews here as you may
  have some real insight:
  
  https://review.openstack.org/#/c/100012/
  
  This sounds a lot like what we're working on for continuous convergence.
 
 I noticed that health checking in AWS goes beyond convergence.  In AWS an
 ELB can be configured with a URL to ping, for application-level health
 checking.  And an ASG can simply be *told* the health of a member by a
 user's own external health system.  I think we should have analogous
 functionality in OpenStack.  Does that make sense to you?  If so, do you
 have any opinion on the right way to integrate, so that we do not have
 three completely independent health maintenance systems?

The check url is already a part of Neutron LBaaS IIRC. What may not be
a part is notifications for when all members are reporting down (which
might be something to trigger scale-up).

You do recall correctly, and there are currently no mechanisms for
notifying anything outside of the load balancer backend when the health
monitor/member state changes.

There is also currently no way for an external system to inject health
information about an LB or its members.

Both would be interesting additions.
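A sketch of what the injection half could look like, loosely modeled on AWS's SetInstanceHealth. The class and method names below are invented for illustration and are not Heat's or Neutron LBaaS's actual code:

```python
# Hypothetical sketch of "push" health: an external system tells the
# scaling group a member's health instead of the group polling anything.

class ScalingGroup:
    def __init__(self, members):
        # Every member starts healthy until told otherwise.
        self.health = {m: True for m in members}

    def set_member_health(self, member, healthy):
        """Entry point an external health system (or an LBaaS
        all-members-down notification) would call."""
        if member not in self.health:
            raise KeyError(member)
        self.health[member] = healthy

    def unhealthy_members(self):
        return [m for m, ok in self.health.items() if not ok]

    def needs_action(self):
        # A real implementation would replace or rebuild members;
        # here we only report that action is required.
        return bool(self.unhealthy_members())

group = ScalingGroup(["instance1", "instance2"])
group.set_member_health("instance1", False)
print(group.unhealthy_members())   # ['instance1']
print(group.needs_action())        # True
```

The point of the sketch is the entry point: with `set_member_health` exposed, check-URL polling, LBaaS notifications, and a user's own monitoring all feed the same state rather than becoming three independent health systems.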

doug



If we don't have push checks in our auto scaling implementation then we
don't have a proper auto scaling implementation.



Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SubjectAlternativeNames (SAN)

2014-07-16 Thread Carlos Garza

On Jul 16, 2014, at 3:49 PM, Carlos Garza carlos.ga...@rackspace.com wrote:

 
 On Jul 16, 2014, at 12:30 PM, Vijay Venkatachalam 
 vijay.venkatacha...@citrix.com
 wrote:
 
We will have the code that parses the X509 in the API scope of the 
 code. The validation I'm referring to is making sure the key matches the cert 
 used, and that we mandate that at a minimum the backend driver support RSA. 
 Since the X509 validation is happening at the API layer, this same 
 module will also handle the extraction of the SANs. I am proposing that the 
 methods that can extract the SAN and SCN from the x509 be present in the API 
 portion of the code, and that drivers can call these methods if they need to. 
 In fact I'm already working to get these extraction methods contributed to the 
 PyOpenSSL project so that they will already be available at a more fundamental 
 layer than our neutron/LBaaS code. At the very least I want the spec to 
 declare that SAN and SCN parsing must be made available from the API layer. 
 If PyOpenSSL has the methods available at that time then we can simply 
 write wrappers for them in the API or write higher-level methods 
 in the API module.  

I meant to say: bottom line, I want the parsing code exposed in the API and 
not duplicated in everyone else's driver.

 I am partially open to the idea of letting the driver handle the 
 behavior of the cert parsing, although I defer this to the rest of the folks, 
 as I get the feeling that having different implementations exhibit different 
 behavior may sound scary. 
 
 
I think it is best not to mention SAN in the OpenStack 
 TLS spec. It is expected that the backend should implement according to the 
 SSL/SNI IETF spec.
 Let’s leave the implementation/validation part to the driver. For example, 
 NetScaler does not support SAN, and the NetScaler driver could either throw 
 an error if certs with SAN are used or ignore it.
 
How does NetScaler decide which cert to associate 
 with the SNI handshake?
 
 
 Does anyone see a requirement for detailing?
 
 
 Thanks,
 Vijay V.
 
 
 From: Vijay Venkatachalam 
 Sent: Wednesday, July 16, 2014 8:54 AM
 To: 'Samuel Bercovici'; 'OpenStack Development Mailing List (not for usage 
 questions)'
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
 
 Apologies for the delayed response.
 
 I am OK with displaying the certificates contents as part of the API, that 
 should not harm.
 
 I think the discussion has to be split into 2 topics.
 
 1.   Certificate conflict resolution. Meaning what is expected when 2 or 
 more certificates become eligible during SSL negotiation
 2.   SAN support
 
 I will send out 2 separate mails on this.
 
 
 From: Samuel Bercovici [mailto:samu...@radware.com] 
 Sent: Tuesday, July 15, 2014 11:52 PM
 To: OpenStack Development Mailing List (not for usage questions); Vijay 
 Venkatachalam
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
 
 OK.
 
 Let me be more precise, extracting the information for view sake / 
 validation would be good.
 Providing values that are different than what is in the x509 is what I am 
 opposed to.
 
 +1 for Carlos on the library and that it should be ubiquitously used.
 
 I will wait for Vijay to speak for himself in this regard…
 
 -Sam.
 
 
 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
 Sent: Tuesday, July 15, 2014 8:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
 Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
 
 +1 to German's and  Carlos' comments.
 
 It's also worth pointing out that some UIs will definitely want to show SAN 
 information and the like, so either having this available as part of the 
 API, or as a standard library we write which then gets used by multiple 
 drivers is going to be necessary.
 
 If we're extracting the Subject Common Name in any place in the code then we 
 also need to be extracting the Subject Alternative Names at the same place. 
 From the perspective of the SNI standard, there's no difference in how these 
 fields should be treated, and if we were to treat SANs differently then 
 we're both breaking the standard and setting a bad precedent.
 
 Stephen
 
 
 On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza carlos.ga...@rackspace.com 
 wrote:
 
 On Jul 15, 2014, at 10:55 AM, Samuel Bercovici samu...@radware.com
 wrote:
 
 Hi,
 
 
 Obtaining the domain name from the x509 is probably more of a 
 driver/backend/device capability, it would make sense to have a library 
 that could be used by anyone wishing to do so in their driver code.
 
You can do what ever you want in *your* driver. The code to extract this 
 information will be 

Re: [openstack-dev] [TripleO] os-net-config

2014-07-16 Thread Robert Collins
On 17 July 2014 05:58, Dan Prince dpri...@redhat.com wrote:
 Hi TripleO!

 I wanted to get the word out on progress with a new os-net-config tool
 for TripleO. The spec (not yet approved) lives here:

 https://review.openstack.org/#/c/97859/

 We've also got a working implementation here:

 https://github.com/dprince/os-net-config

 You can see WIP example of how it wires in here (more work to do on this
 to fully support parity):

 https://review.openstack.org/#/c/104054/1/elements/network-utils/bin/ensure-bridge,cm

 The end goal is that we will be able to more flexibly control our host
 level network settings in TripleO. Once it is fully integrated
 os-net-config would provide a mechanism to drive more flexible
 configurations (multiple bridges, bonding, etc.) via Heat metadata.

 We are already in dire need of this sort of thing today because we can't
 successfully deploy our CI overclouds without making manual changes to
 our images (this is because we need 2 bridges and our heat templates
 only support 1).
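The metadata-driven configuration Dan describes can be sketched roughly as follows; the metadata schema here is invented for illustration (the real one is defined in the os-net-config spec linked above):

```python
# Hedged sketch: flatten Heat-delivered metadata describing bridges and
# their member interfaces into per-device config a backend could write.
# The input schema is invented for illustration, not os-net-config's.

def render_network_config(metadata):
    """Return a list of (device, settings) pairs, bridges first."""
    configs = []
    for bridge in metadata.get("bridges", []):
        configs.append((bridge["name"],
                        {"type": "ovs_bridge",
                         "use_dhcp": bridge.get("use_dhcp", False)}))
        for member in bridge.get("members", []):
            configs.append((member,
                            {"type": "interface",
                             "bridge": bridge["name"]}))
    return configs

# Two bridges, as needed for the CI overcloud case described above.
metadata = {
    "bridges": [
        {"name": "br-ex", "use_dhcp": True, "members": ["eth0"]},
        {"name": "br-ctlplane", "members": ["eth1"]},
    ],
}
for device, settings in render_network_config(metadata):
    print(device, settings)
```

The win over the old ensure-bridge approach is that adding a second bridge or a bond becomes a metadata change rather than an image change.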

I'm really glad this is coming along. One small thing to note - we
don't need two bridges for CI overclouds - the rearranging of things
I've done over the last couple of weeks means we no longer *break* the
built-in bridge, and so we can use br-ex for everything.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [Nova] [RBD] Copy-on-write cloning for RBD-backed disks

2014-07-16 Thread Dmitry Borodaenko
I've got a bit of good news and bad news about the state of landing
the rbd-ephemeral-clone patch series for Nova in Juno.

The good news is that the first patch in the series
(https://review.openstack.org/91722 fixing a data loss inducing bug
with live migrations of instances with RBD backed ephemeral drives)
was merged yesterday.

The bad news is that after 2 months of sitting in the review queue and
only getting its first +1 from a core reviewer on the spec approval
freeze day, the spec for the blueprint rbd-clone-image-handler
(https://review.openstack.org/91486) wasn't approved in time. Because
of that, today the blueprint was rejected along with the rest of the
commits in the series, even though the code itself was reviewed and
approved a number of times.

Our last chance to avoid putting this work on hold for yet another
OpenStack release cycle is to petition for a spec freeze exception in
the next Nova team meeting:
https://wiki.openstack.org/wiki/Meetings/Nova

If you're using Ceph RBD as backend for ephemeral disks in Nova and
are interested in this patch series, please speak up. Since the biggest
concern raised about this spec so far has been lack of CI coverage,
please let us know if you're already using this patch series with
Juno, Icehouse, or Havana.

I've put together an etherpad with a summary of where things are with
this patch series and how we got here:
https://etherpad.openstack.org/p/nova-ephemeral-rbd-clone-status

Previous thread about this patch series on ceph-users ML:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-March/028097.html

-- 
Dmitry Borodaenko



Re: [openstack-dev] [nova] request to tag novaclient 2.18.0

2014-07-16 Thread Steve Baker
On 12/07/14 09:25, Joe Gordon wrote:



 On Fri, Jul 11, 2014 at 4:42 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-07-11 11:21:19 +0200 (+0200), Matthias Runge wrote:
  this broke horizon stable and master; heat stable is affected as
  well.
 [...]

 I guess this is a plea for applying something like the oslotest
 framework to client libraries so they get backward-compat jobs run
 against unit tests of all dependent/consuming software... branchless
 tempest already alleviates some of this, but not the case of changes
 in a library which will break unit/functional tests of another
 project.


 We actually do have some tests for backwards compatibility, and they
 all passed. Presumably because both heat and horizon have poor
 integration tests.

 We ran 

   * check-tempest-dsvm-full-havana SUCCESS in 40m 47s (non-voting)
 http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-full-havana/8e09faa
   * check-tempest-dsvm-neutron-havana SUCCESS in 36m 17s (non-voting)
 http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-neutron-havana/b4ad019
   * check-tempest-dsvm-full-icehouse SUCCESS in 53m 05s
 http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-full-icehouse/c0c62e5
   * check-tempest-dsvm-neutron-icehouse SUCCESS in 57m 28s
 http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-neutron-icehouse/a54aedb


 on the offending patches (https://review.openstack.org/#/c/94166/)
  

 Infra patch that added these tests:
 https://review.openstack.org/#/c/80698/


Heat-proper would have continued working fine with novaclient 2.18.0.
The regression was with raising novaclient exceptions, which is only
required in our unit tests. I saw this break coming and switched to
raising via from_response
https://review.openstack.org/#/c/97977/22/heat/tests/v1_1/fakes.py

Unit tests tend to deal with more internals of client libraries just for
mocking purposes, and there have been multiple breaks in unit tests for
heat and horizon when client libraries make internal changes.
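The from_response pattern Steve mentions can be sketched with stand-ins. The factory and classes below are simplified illustrations, not novaclient's actual code, but they show why fakes that go through the library's public factory survive internal signature changes:

```python
# Sketch: unit-test fakes should build client exceptions through the
# library's public factory (novaclient's from_response) rather than
# calling exception constructors directly. Everything here is a
# minimal stand-in for illustration.

class ClientException(Exception):
    def __init__(self, code, message):
        super().__init__("%s (HTTP %d)" % (message, code))
        self.code = code

class NotFound(ClientException):
    pass

_CODE_MAP = {404: NotFound}

def from_response(response, body):
    """Public factory: the one entry point fakes rely on. Constructor
    signatures behind it can change without breaking consumers."""
    cls = _CODE_MAP.get(response.status_code, ClientException)
    return cls(response.status_code, body.get("message", "error"))

class FakeResponse:
    def __init__(self, status_code):
        self.status_code = status_code

# What a test fake would raise instead of NotFound(...) directly:
exc = from_response(FakeResponse(404), {"message": "server not found"})
print(type(exc).__name__)  # NotFound
print(exc.code)            # 404
```

If the fakes in heat's or horizon's tests had only ever touched the factory, the 2.18.0 constructor changes would have been invisible to them.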

This could be avoided if the client gate jobs run the unit tests for the
projects which consume them.