Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-15 Thread Samuel Bercovici
OK.

Let me be more precise: extracting the information for viewing/validation purposes
would be good.
Providing values that are different from what is in the X509 is what I am
opposed to.

+1 for Carlos on the library and that it should be ubiquitously used.

I will wait for Vijay to speak for himself in this regard…

-Sam.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, July 15, 2014 8:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

+1 to German's and  Carlos' comments.

It's also worth pointing out that some UIs will definitely want to show SAN 
information and the like, so either having this available as part of the API, 
or as a standard library we write which then gets used by multiple drivers is 
going to be necessary.

If we're extracting the Subject Common Name in any place in the code then we 
also need to be extracting the Subject Alternative Names at the same place. 
From the perspective of the SNI standard, there's no difference in how these 
fields should be treated, and if we were to treat SANs differently then we're 
both breaking the standard and setting a bad precedent.

Stephen

On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza <carlos.ga...@rackspace.com> wrote:

On Jul 15, 2014, at 10:55 AM, Samuel Bercovici <samu...@radware.com> wrote:

> Hi,
>
>
> Obtaining the domain name from the x509 is probably more of a 
> driver/backend/device capability; it would make sense to have a library that 
> could be used by anyone wishing to do so in their driver code.
You can do whatever you want in *your* driver. The code to extract this 
information will be a part of the API and needs to be mentioned in the spec now. 
PyOpenSSL with PyASN1 are the most likely candidates.
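
For illustration, a minimal pyOpenSSL sketch of the kind of extraction being
discussed (the function name and return shape are only an example, not what the
spec mandates):

    # Illustrative sketch only: pull the Subject CN and DNS SANs out of a PEM cert.
    from OpenSSL import crypto

    def get_cert_hostnames(pem_data):
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem_data)
        common_name = cert.get_subject().commonName
        alt_names = []
        for i in range(cert.get_extension_count()):
            ext = cert.get_extension(i)
            if ext.get_short_name() == b'subjectAltName':
                # str(ext) renders e.g. "DNS:example.org, DNS:www.example.org"
                alt_names = [e.split(':', 1)[1] for e in str(ext).split(', ')
                             if e.startswith('DNS:')]
        return common_name, alt_names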

Carlos D. Garza
>
> -Sam.
>
>
>
> From: Eichberger, German 
> [mailto:german.eichber...@hp.com]
> Sent: Tuesday, July 15, 2014 6:43 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>
> Hi,
>
> My impression was that the frontend would extract the names and hand them to 
> the driver.  This has the following advantages:
>
> · We can be sure all drivers can extract the same names
> · No duplicate code to maintain
> · If we ever allow the user to specify the names in the UI rather than in the 
> certificate, the driver doesn’t need to change.
>
> I think I saw Adam say something similar in a comment to the code.
>
> Thanks,
> German
>
> From: Evgeny Fedoruk [mailto:evge...@radware.com]
> Sent: Tuesday, July 15, 2014 7:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
> SubjectCommonName and/or SubjectAlternativeNames from X509
>
> Hi All,
>
> Since this issue came up from TLS capabilities RST doc review, I opened a ML 
> thread for it to make the decision.
> Currently, the document says:
>
> “
> For SNI functionality, the tenant will supply a list of TLS containers in a
> specific order.
> In case a specific back-end is not able to support SNI capabilities,
> its driver should throw an exception. The exception message should state
> that this specific back-end (provider) does not support SNI capability.
> The clear sign of a listener's requirement for SNI capability is
> a non-empty SNI container IDs list.
> However, reference implementation must support SNI capability.
>
> Specific back-end code may retrieve SubjectCommonName and/or altSubjectNames
> from the certificate which will determine the hostname(s) the certificate
> is associated with.
>
> The order of SNI containers list may be used by specific back-end code,
> like Radware's, for specifying priorities among certificates.
> In case when two or more uploaded certificates are valid for the same DNS name
> and the tenant has specific requirements around which one wins this collision,
> certificate ordering provides a mechanism to define which cert wins in the
> event of a collision.
> Employing the order of certificates list is not a common requirement for
> all back-end implementations.
> “
>
> The question is about SCN and SAN extraction from X509.
> 1.   Extraction of SCN/SAN should be done while provisioning and not 
> during the TLS handshake
> 2.   Every back-end code/driver must(?) extract SCN and(?) SAN and use it 
> for determining the certificate for a host
>
> Please give your feedback
>
> Thanks,
> Evg
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___

Re: [openstack-dev] [Neutron] [ML2] kindly request neutron-cores to review and approve patch "Cisco DFA ML2 Mechanism Driver"

2014-07-15 Thread Anita Kuno
On 07/16/2014 12:01 AM, Milton Xu (mxu) wrote:
> Hi,
> 
> This patch was initially uploaded on Jun 27, 2014 and we have got a number of 
> reviews from the community. A lot of thanks to these who kindly reviewed and 
> provided feedback.
> 
> Can the neutron cores please review/approve it so we can make progress here?  
> Really appreciate your attention and help here.
> I also include the cores who reviewed and approved the spec earlier.
> 
> Code patch:
> https://review.openstack.org/#/c/103281/
> 
> Approved Spec:
> https://review.openstack.org/#/c/89740/
> 
> 
> Thanks,
> Milton
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Hi:

The mailing list is not the correct place to ask for a review.

The preferred methods are discussed here:
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

Thank you,
Anita.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Neutron ML2 Blueprints

2014-07-15 Thread Andrew Woodward
[2] appears to be made worse, if not caused, by neutron services
autostarting on Debian; no patch yet, we need to add a mechanism to the HA
layer to generate override files.
[3] appears to have stopped with this morning's master.
[4] deleting the cluster and restarting mostly removed this; I was
getting an issue with $::osnailyfacter::swift_partition/.. not existing
(/var/lib/glance), but it is fixed in rev 29.

[5] is still the critical issue blocking progress; I'm at a complete loss
as to why this is occurring. Changes to ordering have no effect. Next
steps probably involve pre-hacking keystone, neutron and
nova-client to be more verbose about their key usage. As a hack we
could simply restart neutron-server, but I'm not convinced the issue
can't come back since we don't know how it started.



On Tue, Jul 15, 2014 at 6:34 AM, Sergey Vasilenko
 wrote:
> [1] fixed in https://review.openstack.org/#/c/107046/
> Thanks for reporting the bug.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Andrew
Mirantis
Ceph community

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] heat stack-create with two vm instances always got one failed

2014-07-15 Thread Yuling_C

Hi,
I'm using heat to create a stack with two instances. I always got one of them 
created successfully, but the other would fail. If I split the template into two, 
each containing one instance, then it worked. However, I thought a Heat template 
would allow multiple instances to be created?

Here I attach the heat template:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Sample Heat template that spins up multiple instances and a private network (JSON)",
  "Resources" : {
    "test_net" : {
      "Type" : "OS::Neutron::Net",
      "Properties" : {
        "name" : "test_net"
      }
    },
    "test_subnet" : {
      "Type" : "OS::Neutron::Subnet",
      "Properties" : {
        "name" : "test_subnet",
        "cidr" : "120.10.9.0/24",
        "enable_dhcp" : "True",
        "gateway_ip" : "120.10.9.1",
        "network_id" : { "Ref" : "test_net" }
      }
    },
    "test_net_port" : {
      "Type" : "OS::Neutron::Port",
      "Properties" : {
        "admin_state_up" : "True",
        "network_id" : { "Ref" : "test_net" }
      }
    },
    "instance1" : {
      "Type" : "OS::Nova::Server",
      "Properties" : {
        "name" : "instance1",
        "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
        "flavor": "tvm-tt_lite",
        "networks" : [
          {"port" : { "Ref" : "test_net_port" }}
        ]
      }
    },
    "instance2" : {
      "Type" : "OS::Nova::Server",
      "Properties" : {
        "name" : "instance2",
        "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
        "flavor": "tvm-tt_lite",
        "networks" : [
          {"port" : { "Ref" : "test_net_port" }}
        ]
      }
    }
  }
}
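
(For reference, the stack is created with something like the following, assuming
the template above is saved as test_stack.json:)

    heat stack-create teststack -f test_stack.json
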
The error that I got from heat-engine.log is as follows:

2014-07-16 01:49:50.514 25101 DEBUG heat.engine.scheduler [-] Task 
resource_action complete step 
/usr/lib/python2.6/site-packages/heat/engine/scheduler.py:170
2014-07-16 01:49:50.515 25101 DEBUG heat.engine.scheduler [-] Task stack_task 
from Stack "teststack" sleeping _sleep 
/usr/lib/python2.6/site-packages/heat/engine/scheduler.py:108
2014-07-16 01:49:51.516 25101 DEBUG heat.engine.scheduler [-] Task stack_task 
from Stack "teststack" running step 
/usr/lib/python2.6/site-packages/heat/engine/scheduler.py:164
2014-07-16 01:49:51.516 25101 DEBUG heat.engine.scheduler [-] Task 
resource_action running step 
/usr/lib/python2.6/site-packages/heat/engine/scheduler.py:164
2014-07-16 01:49:51.960 25101 DEBUG urllib3.connectionpool [-] "GET 
/v2/b64803d759e04b999e616b786b407661/servers/7cb9459c-29b3-4a23-a52c-17d85fce0559
 HTTP/1.1" 200 1854 _make_request 
/usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
2014-07-16 01:49:51.963 25101 ERROR heat.engine.resource [-] CREATE : Server 
"instance1"
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource Traceback (most recent 
call last):
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/resource.py", line 371, in 
_do_action
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource while not 
check(handle_data):
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/resources/server.py", line 239, 
in check_create_complete
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource return 
self._check_active(server)
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/resources/server.py", line 255, 
in _check_active
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource raise exc
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource Error: Creation of 
server instance1 failed.
2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource
2014-07-16 01:49:51.996 25101 DEBUG heat.engine.scheduler [-] Task 
resource_action cancelled cancel 
/usr/lib/python2.6/site-packages/heat/engine/scheduler.py:187
2014-07-16 01:49:52.004 25101 DEBUG heat.engine.scheduler [-] Task stack_task 
from Stack "teststack" complete step 
/usr/lib/python2.6/site-packages/heat/engine/scheduler.py:170
2014-07-16 01:49:52.005 25101 WARNING heat.engine.service [-] Stack create 
failed, status FAILED
2014-07-16 01:50:29.218 25101 DEBUG heat.openstack.common.rpc.amqp [-] received 
{u'_context_roles': [u'Member', u'admin'], u'_msg_id': 
u'9aedf86fda304cfc857dc897d8393427', u'_context_password': '', 
u'_context_auth_url': u'http://172.17.252.60:5000/v2.0', u'_unique_id': 
u'f02188b068de4a4aba0ec203ec3ad54a', u'_reply_q': 
u'reply_f841b6a2101d4af9a9af59889630ee77', u'_context_aws_creds': None, 
u'args': {}, u'_context_tenant': u'TVM', u'_context_trustor_user_id': None, 
u'_context_trust_id': None, u'_context_auth_token':

Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-15 Thread Dolph Mathews
On Tuesday, July 15, 2014, Jay S. Bryant 
wrote:

> John,
>
> So you have said a few times that the specs are a learning process.
> What do you feel we have learned thus far using specs?


- we are inexperienced at doing design work
- we are inexperienced at considering the full impact of a change
- agreeing on the problem statement is more important than
agreeing on a solution
- it's easy to confuse design work with implementation work, and we need to
get better at separating the two
- specs more easily allow cross-project collaboration and
non-dev stakeholders to provide early feedback when compared to blueprints
or implementation changes
- specs don't make it easy to compare two solutions to the same problem,
but I hope they will
- blueprints are even more cumbersome than ever
- specs are terrible at tracking work items and assignees


> I think you
> had the hope that it would help organize targeting blueprints and not
> missing things for a release.  Do you feel that is working?
>
> I would like to hear more about how you feel this is working.
>
> Personally, initially, having specs made sense given that they are a
> better way to spark discussion around a design change.  They were
> working well early in the release for that purpose.
>
> Now, however, they have become a point of confusion.  There is the
> question of "when is just a BP sufficient" versus when is neither
> required.
>
> I think this is part of the learning process.  Just worth having
> discussion so we know for the next round what is needed when.
>
> Jay
>
> On Sun, 2014-07-13 at 16:31 -0600, John Griffith wrote:
> >
> >
> >
> >
> > On Sun, Jul 13, 2014 at 8:01 AM, Dolph Mathews
> > > wrote:
> >
> > On Wed, Jul 9, 2014 at 1:54 PM, Jay Bryant
> > > wrote:
> > I had been under the impression that all BPs were going
> > to require a spec.  I, however, was made aware in
> > today's cinder meeting that we are only requiring
> > specs for changes that change the user's interaction
> > with the system or are a large change that touches the
> > broader cinder code base.
> >
> > That's generally where we use blueprints in Keystone, anyway.
> > If the change has no impact on end users, deployers or other
> > projects, then it doesn't need a spec/blueprint. For example,
> > a refactor only affects developers of Keystone, so I'd argue
> > that blueprints are unnecessary.
> >
> >
> >
> > The premise of a "large change that touches the broader ...
> > code base" requiring a blueprint is flawed though - we don't
> > want to review large changes anyway ;)
> >
> >
> > Just have to say I really like this last point... also, even though
> > I'm the one who made that statement, the problem with it is that it's
> > rather subjective.
> >
> >
> > My position is and continues to be that specs are a learning process,
> > we're hammering it out and these conversations on the ML are helpful.
> > This seems to make sense to me.  The user's commit
> > message and unit tests should show the thought behind the
> > change's impact.
> >
> > Jay
> >
> > On Jul 9, 2014 7:57 AM, "Dugger, Donald D"
> > > wrote:
> > Much as I dislike the overhead and the extra
> > latency involved (now you need to have a
> > review cycle for the spec plus the review
> > cycle for the patch itself) I agreed with the
> > `small features require small specs’.  The
> > problem is that even a small change can have a
> > big impact.  Forcing people to create a spec
> > even for small features means that it’s very
> > clear that the implications of the feature
> > have been thought about and addressed.
> >
> >
> >
> > Note that there is a similar issue with bugs.
> >  I would expect that a patch to fix a bug
> > would have to have a corresponding bug report.
> > Just accepting patches with no known
> > justification seems like the wrong way to go.
> >
> >
> >
> > --
> >
> > Don Dugger
> >
> > "Censeo Toto nos in Kansa esse decisse." - D.
> > Gale
> >
> > Ph: 303/443-3786
> >
> >
> >
> > From: Dolph Mathews
> > [mailto:dolph.math...@gmail.com ]
> > Sent: Tuesday, July 1, 2014 11:02 AM
> > To: OpenStack Development Mailing List (not
> > for usage ques

Re: [openstack-dev] [Nova] Registration for the mid cycle meetup is now closed

2014-07-15 Thread Michael Still
The containers meetup is in a different room with different space
constraints, so containers focussed people should do whatever Adrian
is doing for registration.

Michael

On Wed, Jul 16, 2014 at 12:53 PM, Eric Windisch  wrote:
>
>
>
> On Tue, Jul 15, 2014 at 6:42 PM, Rick Harris 
> wrote:
>>
>> Hey Michael,
>>
>> Would love to attend and give an update on where we are with Libvirt+LXC
>> containers.
>>
>> We have a bunch of patches proposed and more coming down the pike, so would
>> love to get some feedback on where we are and where we should go with this.
>>
>> I just found out I was cleared to attend yesterday, so that's why I'm late
>> in getting registered. Any way I could squeeze in?
>
>
> While I am registered (and registered early), I'm not sure all of the
> containers-oriented folks were originally planning to come prior to last
> week's addition of the containers breakout room.
>
> I suspect other containers-oriented folks might yet want to register. If so,
> I think now would be the time to speak up.
>
> --
> Regards,
> Eric Windisch
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] top gate bug is libvirt snapshot

2014-07-15 Thread Alex Xu
A question about swap volume: its implementation is very similar 
to live snapshot.
Both are implemented via blockRebase, but swap volume doesn't check any 
libvirt or QEMU version.
Should we add a version check for swap_volume now? That would mean swap_volume 
will be disabled as well.
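
(A rough sketch of the kind of gate this would add -- the constants and values
here are assumptions for illustration, not actual Nova code:)

    # Illustrative sketch only, not actual Nova code: gate swap_volume on minimum
    # libvirt/QEMU versions, the same way live_snapshot is gated.
    MIN_LIBVIRT_SWAP_VOLUME_VERSION = (1, 2, 0)   # assumed value, not from the tree
    MIN_QEMU_SWAP_VOLUME_VERSION = (1, 3, 0)      # assumed value, not from the tree

    def can_swap_volume(libvirt_version, qemu_version):
        """Both arguments are (major, minor, micro) tuples."""
        return (libvirt_version >= MIN_LIBVIRT_SWAP_VOLUME_VERSION and
                qemu_version >= MIN_QEMU_SWAP_VOLUME_VERSION)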


On 2014-06-26 19:00, Sean Dague wrote:

While the Trusty transition was mostly uneventful, it has exposed a
particular issue in libvirt, which is generating ~ 25% failure rate now
on most tempest jobs.

As can be seen here -
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L294-L297


... the libvirt live_snapshot code is something that our test pipeline
has never tested before, because it wasn't a new enough libvirt for us
to take that path.

Right now it's exploding, a lot -
https://bugs.launchpad.net/nova/+bug/1334398

Snapshotting gets used in Tempest to create images for testing, so image
setup tests are doing a decent number of snapshots. If I had to take a
completely *wild guess*, it's that libvirt can't do 2 live_snapshots at
the same time. It's probably something that most people haven't hit. The
wild guess is based on other libvirt issues we've hit that other people
haven't, and they are basically always a parallel ops triggered problem.

My 'stop the bleeding' suggested fix is this -
https://review.openstack.org/#/c/102643/ which just effectively disables
this code path for now. Then we can get some libvirt experts engaged to
help figure out the right long term fix.

I think there are a couple:

1) see if newer libvirt fixes this (1.2.5 just came out), and if so
mandate at some known working version. This would actually take a bunch
of work to be able to test a non packaged libvirt in our pipeline. We'd
need volunteers for that.

2) lock snapshot operations in nova-compute, so that we can only do 1 at
a time (a rough sketch of such serialization follows after option 3).
Hopefully it's just 2 snapshot operations that are the issue, not
any other libvirt op during a snapshot, so serializing snapshot ops in
n-compute could put the kid gloves on libvirt and make it not break
here. This also needs some volunteers as we're going to be playing a
game of progressive serialization until we get to a point where it looks
like the failures go away.

3) Roll back to precise. I put this idea here for completeness, but I
think it's a terrible choice. This is one isolated, previously untested
(by us), code path. We can't stay on libvirt 0.9.6 forever, so actually
need to fix this for real (be it in nova's use of libvirt, or libvirt
itself).
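
(For option 2, a minimal sketch of what per-process serialization could look
like -- the lock name and wrapper are illustrative, not proposed Nova code:)

    # Illustrative only -- not actual Nova code.  Serialize snapshot operations
    # within one nova-compute process so libvirt never runs two at once.
    from oslo_concurrency import lockutils  # nova.openstack.common.lockutils in 2014-era trees

    @lockutils.synchronized('libvirt-snapshot')
    def serialized_snapshot(driver, context, instance, image_id, update_task_state):
        # driver.snapshot() is the normal virt driver entry point; the named lock
        # means only one snapshot runs at a time in this process.
        return driver.snapshot(context, instance, image_id, update_task_state)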

There might be other options as well, ideas welcomed.

But for right now, we should stop the bleeding, so that nova/libvirt
isn't blocking everyone else from merging code.

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-15 Thread Jay S. Bryant
John,

So you have said a few times that the specs are a learning process.
What do you feel we have learned thus far using specs?  I think you
had the hope that it would help organize targeting blueprints and not
missing things for a release.  Do you feel that is working?

I would like to hear more about how you feel this is working.

Personally, initially, having specs made sense given that they are a
better way to spark discussion around a design change.  They were
working well early in the release for that purpose.

Now, however, they have become a point of confusion.  There is the
question of "when is just a BP sufficient" versus when is neither
required.

I think this is part of the learning process.  Just worth having
discussion so we know for the next round what is needed when.

Jay

On Sun, 2014-07-13 at 16:31 -0600, John Griffith wrote:
> 
> 
> 
> 
> On Sun, Jul 13, 2014 at 8:01 AM, Dolph Mathews
>  wrote:
> 
> On Wed, Jul 9, 2014 at 1:54 PM, Jay Bryant
>  wrote:
> I had been under the impression that all BPs were going
> to require a spec.  I, however, was made aware in
> today's cinder meeting that we are only requiring
> specs for changes that change the user's interaction
> with the system or are a large change that touches the
> broader cinder code base.
> 
> That's generally where we use blueprints in Keystone, anyway.
> If the change has no impact on end users, deployers or other
> projects, then it doesn't need a spec/blueprint. For example,
> a refactor only affects developers of Keystone, so I'd argue
> that blueprints are unnecessary.
> 
> 
> 
> The premise of a "large change that touches the broader ...
> code base" requiring a blueprint is flawed though - we don't
> want to review large changes anyway ;)
> 
> 
> Just have to say I really like this last point... also, even though
> I'm the one who made that statement, the problem with it is that it's
> rather subjective.
> 
> 
> My position is and continues to be that specs are a learning process,
> we're hammering it out and these conversations on the ML are helpful.
> This seems to make sense to me.  The user's commit
> message and unit tests should show the thought behind the
> change's impact.
> 
> Jay
> 
> On Jul 9, 2014 7:57 AM, "Dugger, Donald D"
>  wrote:
> Much as I dislike the overhead and the extra
> latency involved (now you need to have a
> review cycle for the spec plus the review
> cycle for the patch itself) I agreed with the
> 'small features require small specs'.  The
> problem is that even a small change can have a
> big impact.  Forcing people to create a spec
> even for small features means that it’s very
> clear that the implications of the feature
> have been thought about and addressed.
> 
>  
> 
> Note that there is a similar issue with bugs.
>  I would expect that a patch to fix a bug
> would have to have a corresponding bug report.
> Just accepting patches with no known
> justification seems like the wrong way to go.
> 
>  
> 
> --
> 
> Don Dugger
> 
> "Censeo Toto nos in Kansa esse decisse." - D.
> Gale
> 
> Ph: 303/443-3786
> 
>  
> 
> From: Dolph Mathews
> [mailto:dolph.math...@gmail.com] 
> Sent: Tuesday, July 1, 2014 11:02 AM
> To: OpenStack Development Mailing List (not
> for usage questions)
> Subject: Re: [openstack-dev] [all][specs]
> Please stop doing specs for any changes in
> projects
> 
>  
> 
> The argument has been made in the past that
> small features will require correspondingly
> small specs. If there's a counter-argument to
> this example (a 

Re: [openstack-dev] [Nova] Registration for the mid cycle meetup is now closed

2014-07-15 Thread Eric Windisch
On Tue, Jul 15, 2014 at 6:42 PM, Rick Harris 
wrote:

> Hey Michael,
>
> Would love to attend and give an update on where we are with Libvirt+LXC
> containers.
>
> We have a bunch of patches proposed and more coming down the pike, so would
> love to get some feedback on where we are and where we should go with this.
>
> I just found out I was cleared to attend yesterday, so that's why I'm late
> in getting registered. Any way I could squeeze in?
>

While I am registered (and registered early), I'm not sure all of the
containers-oriented folks were originally planning to come prior to last
week's addition of the containers breakout room.

I suspect other containers-oriented folks might yet want to register. If
so, I think now would be the time to speak up.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Gap 0 (database migrations) closed!

2014-07-15 Thread Kyle Mestery
On Tue, Jul 15, 2014 at 5:49 PM, Henry Gessau  wrote:
> I am happy to announce that the first (zero'th?) item in the Neutron Gap
> Coverage[1] has merged[2]. The Neutron database now contains all tables for
> all plugins, and database migrations are no longer conditional on the
> configuration.
>
> In the short term, Neutron developers who write migration scripts need to set
>   migration_for_plugins = ['*']
> but we will soon clean up the template for migration scripts so that this will
> be unnecessary.
>
> I would like to say special thanks to Ann Kamyshnikova and Jakub Libosvar for
> their great work on this solution. Also thanks to Salvatore Orlando and Mark
> McClain for mentoring this through to the finish.
>
> [1]
> https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage
> [2] https://review.openstack.org/96438
>
This is great news! Thanks to everyone who worked on this particular
gap. We're making progress on the other gaps identified in that plan,
I'll send an email out once Juno-2 closes with where we're at.

Thanks,
Kyle

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Registration for the mid cycle meetup is now closed

2014-07-15 Thread Matt Odden
I was actually waiting on my hotel confirmation before registering at 
the Eventbrite site, and didn't expect there to be a limit on seats. I 
am still planning on being there, and would still appreciate a seat.


Thanks,
Matt Odden

On 7/15/2014 3:56 PM, Michael Still wrote:

Hi.

We've now hit our room capacity, so I have closed registrations for
the nova mid cycle meetup. Please reply to this thread if that's a
significant problem for someone.

Thanks,
Michael




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][infra] issue with a requirement update

2014-07-15 Thread Arnaud Legendre
Greetings,

I am facing an issue and looking for guidance/best practice. So, here is the 
problem:

- glance [stable/icehouse] contains a requirement to oslo.vmware >= 0.2 [1] and 
consequently requirements/global-requirements [stable/icehouse] also contains 
oslo.vmware >= 0.2.[2] So far nothing wrong.

- a requirement/global-requirement to the retrying library has been added in 
master [3].

- now, if we add the retrying requirement to oslo.vmware in master, grenade 
fails [4] because stable/icehouse will pick the latest version of oslo.vmware 
(containing retrying) but using requirements/global-requirements 
[stable/icehouse] which doesn’t contain retrying.

So, I can see two options:
1. Pin the oslo.vmware version in stable/icehouse, to something like oslo.vmware 
>= 0.2,<0.4 (see the sketch below). This means two patches: one in 
requirements/global-requirements and one in glance.
I am not sure if it is OK to have requirements/global-requirements and glance 
carrying different version intervals for some time: global-requirements would 
contain oslo.vmware >= 0.2,<0.4 but glance would contain oslo.vmware >= 0.2. 
Do the Glance requirements and global-requirements need to contain the exact same 
version interval for a given library at all times? Or is the fact that >= 0.2,<0.4 
includes >= 0.2 enough? In that case, this seems the way to go.

2. add the retrying requirement to global-requirements [stable/icehouse], but 
this would mean that for any new library added to oslo.vmware (not being in the 
requirements of a previous release), we would have the same problem.
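
(For option 1, the pin would amount to something like the following in both the 
stable/icehouse global-requirements.txt and glance requirements.txt -- the 0.4 
upper bound is only an example:)

    # illustrative diff against the stable/icehouse requirements files
    -oslo.vmware>=0.2
    +oslo.vmware>=0.2,<0.4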


[1] 
https://github.com/openstack/glance/blob/stable/icehouse/requirements.txt#L30
[2] 
https://github.com/openstack/requirements/blob/stable/icehouse/global-requirements.txt#L52
[3] 
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L182
[4] https://review.openstack.org/#/c/106488/


Thank you,
Arnaud









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] REST API access to configuration options

2014-07-15 Thread joehuang
Hello, 

My personal view is that we should not expose all configuration options through a 
RESTful API, but the configuration options whose change would otherwise require 
restarting the controller nodes should be configurable dynamically through a 
RESTful API.

There are lots of configuration changes that only take effect after the controller 
nodes restart. That is hard on O&M if there are too many such options, because for 
any change it has to be considered whether the change will break cloud service 
continuity or not.

Best Regards
Chaoyi Huang ( Joe Huang )

-----Original Message-----
From: Mark McLoughlin [mailto:mar...@redhat.com] 
Sent: 15 July 2014 17:08
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] REST API access to configuration options

On Tue, 2014-07-15 at 08:54 +0100, Henry Nash wrote:
> HI
> 
> As the number of configuration options increases and OpenStack 
> installations become more complex, the chances of incorrect 
> configuration increases. There is no better way of enabling cloud 
> providers to be able to check the configuration state of an OpenStack 
> service than providing a direct REST API that allows the current 
> running values to be inspected. Having an API to provide this 
> information becomes increasingly important for dev/ops style 
> operation.
> 
> As part of Keystone we are considering adding such an ability (see:
> https://review.openstack.org/#/c/106558/).  However, since this is the 
> sort of thing that might be relevant to and/or affect other projects, 
> I wanted to get views from the wider dev audience.
> 
> Any such change obviously has to take security in mind - and as the 
> spec says, just like when we log config options, any options marked as 
> secret will be obfuscated.  In addition, the API will be protected by 
> the normal policy mechanism and is likely in most installations to be 
> left as "admin required".  And of course, since it is an extension, if 
> a particular installation does not want to use it, they don't need to 
> load it.
> 
> Do people think this is a good idea?  Useful in other projects?
> Concerned about the risks?
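
For reference, the secret-option obfuscation mentioned above is roughly what
oslo.config already does when logging option values. A minimal sketch, with
illustrative option names (not Keystone's real ones):

    import logging

    from oslo.config import cfg   # "oslo_config" in later releases

    opts = [
        cfg.StrOpt('admin_endpoint', default='http://localhost:35357/'),
        cfg.StrOpt('admin_token', secret=True),   # marked secret -> masked on output
    ]
    CONF = cfg.CONF
    CONF.register_opts(opts)
    CONF(args=[])

    # log_opt_values() prints every registered option; secret ones appear as "****".
    CONF.log_opt_values(logging.getLogger(__name__), logging.INFO)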

I would have thought operators would be comfortable gleaning this information 
from the log files?

Also, this is going to tell you how the API service you connected to was 
configured. Where there are multiple API servers, what about the others?
How do operators verify all of the API servers behind a load balancer with this?

And in the case of something like Nova, what about the many other nodes behind 
the API server?

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-07-15 Thread Vishvananda Ishaya

On Jul 15, 2014, at 3:30 PM, Ihar Hrachyshka  wrote:

> Signed PGP part
> On 14/07/14 22:48, Vishvananda Ishaya wrote:
> >
> > On Jul 13, 2014, at 9:29 AM, Ihar Hrachyshka 
> > wrote:
> >
> >> Signed PGP part On 12/07/14 03:17, Mike Bayer wrote:
> >>>
> >>> On 7/11/14, 7:26 PM, Carl Baldwin wrote:
> 
> 
>  On Jul 11, 2014 5:32 PM, "Vishvananda Ishaya" wrote:
> >
> > I have tried using pymysql in place of mysqldb and in real
> > world
> >>> concurrency
> > tests against cinder and nova it performs slower. I was
> > inspired by
> >>> the mention
> > of mysql-connector so I just tried that option instead.
> >>> Mysql-connector seems
> > to be slightly slower as well, which leads me to believe
> > that the
> >>> blocking inside of
> 
>  Do you have some numbers?  "Seems to be slightly slower"
>  doesn't
> >>> really stand up as an argument against the numbers that have
> >>> been posted in this thread.
> >
> > Numbers are highly dependent on a number of other factors, but I
> > was seeing 100 concurrent list commands against cinder going from
> > an average of 400 ms to an average of around 600 ms with both
> > msql-connector and pymsql.
> 
> I've made my tests on neutron only, so there is possibility that
> cinder works somehow differently.
> 
> But, those numbers don't tell a lot in terms of considering the
> switch. Do you have numbers for mysqldb case?

Sorry if my commentary above was unclear. The 400ms average is with mysqldb;
the 600ms average was the same for both of the other options.
> 
> >
> > It is also worth mentioning that my test of 100 concurrent creates
> > from the same project in cinder leads to average response times
> > over 3 seconds. Note that creates return before the request is sent
> > to the node for processing, so this is just the api creating the db
> > record and sticking a message on the queue. A huge part of the
> > slowdown is in quota reservation processing which does a row lock
> > on the project id.
> 
> Again, are those 3 seconds better or worse than what we have for mysqldb?

The 3 seconds is from mysqldb. I don’t have average response times for
mysql-connector due to the timeouts I mention below.
> 
> >
> > Before we are sure that an eventlet friendly backend “gets rid of
> > all deadlocks”, I will mention that trying this test against
> > connector leads to some requests timing out at our load balancer (5
> > minute timeout), so we may actually be introducing deadlocks where
> > the retry_on_deadlock operator is used.
> 
> Deadlocks != timeouts. I attempt to fix eventlet-triggered db
> deadlocks, not all possible deadlocks that you may envision, or timeouts.

That may be true, but if switching the default is trading one problem
for another it isn’t necessarily the right fix. The timeout means that
one or more greenthreads are never actually generating a response. I suspect
an endless retry_on_deadlock between a couple of competing greenthreads
which we don’t hit with mysqldb, but it could be any number of things.

> 
> >
> > Consider the above anecdotal for the moment, since I can’t verify
> > for sure that switching the sql driver didn’t introduce some other
> > race or unrelated problem.
> >
> > Let me just caution that we can’t recommend replacing our mysql
> > backend without real performance and load testing.
> 
> I agree. Not saying that the tests are somehow complete, but here is
> what I was into last two days.
> 
> There is a nice openstack project called Rally that is designed to
> allow easy benchmarks for openstack projects. They have four scenarios
> for neutron implemented: for networks, ports, routers, and subnets.
> Each scenario combines create and list commands.
> 
> I've run each test with the following runner settings: times = 100,
> concurrency = 10, meaning each scenario is run 100 times in parallel,
> and there were not more than 10 parallel scenarios running. Then I've
> repeated the same for times = 100, concurrency = 20 (also set
> max_pool_size to 20 to allow sqlalchemy utilize that level of
> parallelism), and times = 1000, concurrency = 100 (same note on
> sqlalchemy parallelism).
> 
> You can find detailed html files with nice graphs here [1]. Brief
> description of results is below:
> 
> 1. create_and_list_networks scenario: for 10 parallel workers
> performance boost is -12.5% from original time, for 20 workers -6.3%,
> for 100 workers there is a slight reduction of average time spent for
> scenario +9.4% (this is the only scenario that showed slight reduction
> in performance, I'll try to rerun the test tomorrow to see whether it
> was some discrepancy when I executed it that influenced the result).
> 
> 2. create_and_list_ports scenario: for 10 parallel workers boost is
> -25.8%, for 20 workers it's -9.4%, and for 100 workers it's -12.6%.
> 
> 3. create_and_list_routers scenario: for 10 parallel workers boost is
> -46.6% (almost half of original time),

[openstack-dev] [Neutron] Gap 0 (database migrations) closed!

2014-07-15 Thread Henry Gessau
I am happy to announce that the first (zero'th?) item in the Neutron Gap
Coverage[1] has merged[2]. The Neutron database now contains all tables for
all plugins, and database migrations are no longer conditional on the
configuration.

In the short term, Neutron developers who write migration scripts need to set
  migration_for_plugins = ['*']
but we will soon clean up the template for migration scripts so that this will
be unnecessary.
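
(Concretely, the head of a migration script then looks roughly like this -- a
sketch only; the exact helper names may differ slightly from the tree:)

    # Sketch of a Neutron alembic migration (the schema change is purely illustrative).
    migration_for_plugins = ['*']   # run for all plugins, per the new convention

    from alembic import op
    import sqlalchemy as sa

    from neutron.db import migration


    def upgrade(active_plugins=None, options=None):
        if not migration.should_run(active_plugins, migration_for_plugins):
            return
        op.add_column('example_table',
                      sa.Column('example_column', sa.String(36), nullable=True))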

I would like to say special thanks to Ann Kamyshnikova and Jakub Libosvar for
their great work on this solution. Also thanks to Salvatore Orlando and Mark
McClain for mentoring this through to the finish.

[1]
https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage
[2] https://review.openstack.org/96438

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Registration for the mid cycle meetup is now closed

2014-07-15 Thread Rick Harris
Hey Michael,

Would love to attend and give an update on where we are with Libvirt+LXC
containers.

We have a bunch of patches proposed and more coming down the pike, so would
love to get some feedback on where we are and where we should go with this.

I just found out I was cleared to attend yesterday, so that's why I'm late
in getting registered. Any way I could squeeze in?

Thanks!

-Rick


On Tue, Jul 15, 2014 at 4:35 PM, chuck.short 
wrote:

> Hi
>
> I haven't registered yet unfortunately can you squeeze in one more person?
>
> Chuck
>
>
>
>  Original message 
> From: Michael Still
> Date:07-15-2014 4:56 PM (GMT-05:00)
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [Nova] Registration for the mid cycle meetup is
> now closed
>
> Hi.
>
> We've now hit our room capacity, so I have closed registrations for
> the nova mid cycle meetup. Please reply to this thread if that's a
> significant problem for someone.
>
> Thanks,
> Michael
>
> --
> Rackspace Australia
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-07-15 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 14/07/14 22:48, Vishvananda Ishaya wrote:
> 
> On Jul 13, 2014, at 9:29 AM, Ihar Hrachyshka 
> wrote:
> 
>> Signed PGP part On 12/07/14 03:17, Mike Bayer wrote:
>>> 
>>> On 7/11/14, 7:26 PM, Carl Baldwin wrote:
 
 
 On Jul 11, 2014 5:32 PM, "Vishvananda Ishaya" wrote:
> 
> I have tried using pymysql in place of mysqldb and in real 
> world
>>> concurrency
> tests against cinder and nova it performs slower. I was 
> inspired by
>>> the mention
> of mysql-connector so I just tried that option instead.
>>> Mysql-connector seems
> to be slightly slower as well, which leads me to believe
> that the
>>> blocking inside of
 
 Do you have some numbers?  "Seems to be slightly slower"
 doesn't
>>> really stand up as an argument against the numbers that have
>>> been posted in this thread.
> 
> Numbers are highly dependent on a number of other factors, but I
> was seeing 100 concurrent list commands against cinder going from
> an average of 400 ms to an average of around 600 ms with both
> msql-connector and pymsql.

I've made my tests on neutron only, so there is possibility that
cinder works somehow differently.

But, those numbers don't tell a lot in terms of considering the
switch. Do you have numbers for mysqldb case?

> 
> It is also worth mentioning that my test of 100 concurrent creates
> from the same project in cinder leads to average response times
> over 3 seconds. Note that creates return before the request is sent
> to the node for processing, so this is just the api creating the db
> record and sticking a message on the queue. A huge part of the
> slowdown is in quota reservation processing which does a row lock
> on the project id.

Again, are those 3 seconds better or worse than what we have for mysqldb?

> 
> Before we are sure that an eventlet friendly backend “gets rid of
> all deadlocks”, I will mention that trying this test against
> connector leads to some requests timing out at our load balancer (5
> minute timeout), so we may actually be introducing deadlocks where
> the retry_on_deadlock operator is used.

Deadlocks != timeouts. I attempt to fix eventlet-triggered db
deadlocks, not all possible deadlocks that you may envision, or timeouts.

> 
> Consider the above anecdotal for the moment, since I can’t verify
> for sure that switching the sql driver didn’t introduce some other
> race or unrelated problem.
> 
> Let me just caution that we can’t recommend replacing our mysql
> backend without real performance and load testing.

I agree. Not saying that the tests are somehow complete, but here is
what I was into last two days.

There is a nice openstack project called Rally that is designed to
allow easy benchmarks for openstack projects. They have four scenarios
for neutron implemented: for networks, ports, routers, and subnets.
Each scenario combines create and list commands.

I've run each test with the following runner settings: times = 100,
concurrency = 10, meaning each scenario is run 100 times in parallel,
and there were not more than 10 parallel scenarios running. Then I've
repeated the same for times = 100, concurrency = 20 (also set
max_pool_size to 20 to allow sqlalchemy utilize that level of
parallelism), and times = 1000, concurrency = 100 (same note on
sqlalchemy parallelism).
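
(Roughly, each run used a Rally task configuration along these lines -- a sketch
only; the exact args and context settings varied between runs:)

    {
        "NeutronNetworks.create_and_list_networks": [
            {
                "runner": {
                    "type": "constant",
                    "times": 100,
                    "concurrency": 10
                },
                "context": {
                    "users": {"tenants": 1, "users_per_tenant": 1},
                    "quotas": {"neutron": {"network": -1}}
                }
            }
        ]
    }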

You can find detailed html files with nice graphs here [1]. Brief
description of results is below:

1. create_and_list_networks scenario: for 10 parallel workers
performance boost is -12.5% from original time, for 20 workers -6.3%,
for 100 workers there is a slight reduction of average time spent for
scenario +9.4% (this is the only scenario that showed slight reduction
in performance, I'll try to rerun the test tomorrow to see whether it
was some discrepancy when I executed it that influenced the result).

2. create_and_list_ports scenario: for 10 parallel workers boost is
-25.8%, for 20 workers it's -9.4%, and for 100 workers it's -12.6%.

3. create_and_list_routers scenario: for 10 parallel workers boost is
-46.6% (almost half of original time), for 20 workers it's -51.7%
(more than a half), for 100 workers it's -41.5%.

4. create_and_list_subnets scenario: for 10 parallel workers boost is
-26.4%, for 20 workers it's -51.1% (more than half reduction in time
spent for average scenario), and for 100 workers it's -31.7%.

I've tried to check how it scales till 200 parallel workers, but was
hit by local file opened limits and mysql max_connection settings. I
will retry my tests with limits raised tomorrow to see how it handles
that huge load.

Tomorrow I will also try to test new library with multiple API workers.

Other than that, what are your suggestions on what to check/test?

FYI: [1] contains the following directories:

mysqlconnector/
mysqldb/

Each of them contains the following directories:
10-10/ - 10 parallel workers, max_pool_size = 10 (defau

Re: [openstack-dev] [Nova] Registration for the mid cycle meetup is now closed

2014-07-15 Thread Yathiraj Udupi (yudupi)
Hi Michael, 

I was planning to attend the meet up, and was going to register today.
Based on today's Nova/Gantt discussions in the gantt subgroup meeting, it
would be really helpful to meet in person with the folks involved with
the Gantt efforts.

Can you please make an exception and allow one more registration for me?

Thanks,
Yathi. 



On 7/15/14, 1:56 PM, "Michael Still"  wrote:

>Hi.
>
>We've now hit our room capacity, so I have closed registrations for
>the nova mid cycle meetup. Please reply to this thread if that's a
>significant problem for someone.
>
>Thanks,
>Michael
>
>-- 
>Rackspace Australia
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor framework proposal

2014-07-15 Thread Eichberger, German
Hi Stephen,

+1

Admittedly, since Stephen and I come from an operator-centric world, we sometimes 
have trouble grasping other use cases, so I am wondering if you can provide 
one that would help us understand the need for grouping multiple different 
devices (LB, VPN, FW) under a single flavor.

Thanks,
German


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, July 15, 2014 3:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Flavor framework proposal

Hi Salvatore and Eugene,

Responses inline:

On Tue, Jul 15, 2014 at 12:59 PM, Salvatore Orlando <sorla...@nicira.com> wrote:
I think I've provided some examples in the review.

I was hoping for specific examples. The discussion I've seen so far has been 
vague enough that it's difficult to see what people mean. It's also possible 
you gave specific examples but these were buried in comments on previous 
revisions (one of my biggest gripes with the way Gerrit works. :P ) Could you 
please give a specific example of what you mean, as well as how it simplifies 
the user experience?

However, the point is mostly to simplify usage from a user perspective - 
allowing consumers of the neutron API to use the same flavour object for 
multiple services.

Actually, I would argue that having a single flavor valid for several different 
services complicates the user experience (and vastly complicates the operator 
experience). This is because:

* Flavors are how Operators will provide different service levels, or different 
feature sets for similar kinds of service. Users interested in paying for those 
services are likely to be more confused if a single flavor lists features for 
several different kinds of service.
* Billing becomes more incomprehensible when the same flavor is used for 
multiple kinds of service. Users and Operators should not expect to pay the 
same rate for a "Gold" FWaaS instance and "Gold" VPNaaS instance, so why 
complicate things by putting them under the same flavor?
* Because of the above concerns, it's likely that Operators will only deploy 
service profiles in a flavor for a single type of service anyway. But from the 
user's perspective, it's not apparent when looking at the list of flavors, 
which are valid for which kinds of service. What if a user tries to deploy a 
LBaaS service using a flavor that only has FWaaS service profiles associated 
with it? Presumably, the system must respond with an error indicating that no 
valid service profiles could be found for that service in that flavor. But this 
isn't very helpful to the user and is likely to lead to increased support load 
for the Operator who will need to explain this.
* A single-service flavor is going to be inherently easier to understand than a 
multi-service flavor.
* Single-service flavors do not preclude the ability for vendors to have 
multi-purpose appliances serve multiple roles in an OpenStack cloud.

There are other considerations which could be made, but since they're dependent 
on features which do not yet exist (NFV, service insertion, chaining and 
steering) I think there is no point in arguing over it.

Agreed. Though, I don't think single-service flavors paint us into a corner 
here at all. Again, things get complicated enough when it comes to service 
insertion, chaining, steering, etc. that what we'll really need at that point 
is actual orchestration. Flavors alone will not solve these problems, and 
orchestration can work with many single-service flavors to provide the illusion 
of multi-service flavors.

In conclusion I think the idea makes sense, and is a minor twist in the current 
design which should not either make the feature too complex neither prevent any 
other use case for which the flavours are being conceived. For the very same 
reason however, it is worth noting that this is surely not an aspect which will 
cause me or somebody else to put a veto on this work item.

I don't think this is a minor twist in the current design, actually:
* We'll have to deal with cases like the above where no valid service profiles 
can be found for a given kind of flavor (which we can avoid entirely if a 
flavor can have service profiles valid for only one kind of service).
* When and if tags/capabilities/extensions get introduced, we would need to 
provide an additional capabilities list on the service profiles in order to be 
able to select which service profiles provide the capabilities requested.
* The above point makes things much more complicated when it comes to 
scheduling algorithms for choosing which service profile to use when multiple 
can meet the need for a given service. What does 'weight' mean if all but two 
low-weight service profiles get eliminated as not suitable?

Another aspect to consider is how the flavours will work when the advanced 
service type they refer to is not consumable through the neutron API, which 
would be the case with an independent load bala

Re: [openstack-dev] [Neutron] Flavor framework proposal

2014-07-15 Thread Eichberger, German
Hi Eugene,

I understand the argument for preferring tags over extensions to turn features 
on and off, since that is more fine-grained. Now you are bringing up the policy 
framework to actually control which features are available. So let’s look 
at this example:

As an operator I want to offer a load balancer without TLS – so based on my 
understanding of flavors I would

· Create a flavor which does not have a TLS extension/tags

· Add some description on my homepage “Flavor Bronze - the reliable TLS 
less load balancer”

· Use profiles to link that flavor to some haproxy and also some 
hardware load balancer aka F6

o   Set parameters in my profile to disable TLS for the F6 LB

Now, the user asks for a "Bronze" load balancer and I give him an F6. So he 
can go ahead and enable TLS via the TLS API (since flavors don’t control API 
restrictions) and go on his merry way – unless I also use some to-be-developed 
policy extension to restrict access to certain API features.

I am just wondering if this is what we are trying to build – and then why would 
we need tags and extensions if the heavy lifting is done with the policy 
framework…

German

From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Tuesday, July 15, 2014 2:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Flavor framework proposal

Hi Stephen,

So, as was discussed, the existing proposal has some aspects which are better 
postponed, like the extension list on the flavor (instead of tags).
In particular, that idea has several drawbacks:
 - it makes the public API inflexible
 - turning features on/off is not what flavors should be doing; it's a task for 
the policy framework and not flavors
 - flavor-based REST call dispatching is quite a complex solution giving no 
benefits to service plugins
While this is not explicitly written in the proposal, that's what is implied there.
I think that one is the major blocker of the proposal right now; it deserves 
further discussion and is not essential to the problem flavors are supposed to 
solve.
Other than that, I personally don't have much disagreements on the proposal.

The question about the service type on the flavor is minor IMO. We can allow it to 
be NULL, which would mean a multiservice flavor.
However, multiservice flavors may put some minor requirements on the driver API 
(that's mainly because of how the flavor plugin interacts with service plugins).

Thanks,
Eugene.

On Tue, Jul 15, 2014 at 11:21 PM, Stephen Balukoff <sbaluk...@bluebox.net> wrote:
Hi folks!

I've noticed progress on the flavor framework discussion slowing down over the 
last week. We would really like to see this happen for Juno because it's 
critical for many of the features we'd also like to get into Juno for LBaaS. I 
understand there are other Neutron extensions which will need it too.

The proposal under discussion is here:

https://review.openstack.org/#/c/102723/

One of the things I've seen come up frequently in the comments is the idea that 
a single flavor would apply to more than one service type (service type being 
'LBaaS', 'FWaaS', 'VPNaaS', etc.). I've commented extensively on this, and my 
opinion is that this doesn't make a whole lot of sense.  However, there are 
people who have a different view on this, so I would love to hear from them:

Could you describe a specific usage scenario where this makes sense? What are 
the characteristics of a flavor that applies to more than one service type?

Let's see if we can come to some conclusions on this so that we can get flavors 
into Juno, please!

Thanks,
Stephen

--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor framework proposal

2014-07-15 Thread Stephen Balukoff
Hi Salvatore and Eugene,

Responses inline:

On Tue, Jul 15, 2014 at 12:59 PM, Salvatore Orlando 
wrote:

> I think I've provided some examples in the review.
>

I was hoping for specific examples. The discussion I've seen so far has
been vague enough that it's difficult to see what people mean. It's also
possible you gave specific examples but these were buried in comments on
previous revisions (one of my biggest gripes with the way Gerrit works. :P
) Could you please give a specific example of what you mean, as well as how
it simplifies the user experience?

However, the point is mostly to simplify usage from a user perspective -
> allowing consumers of the neutron API to use the same flavour object for
> multiple services.
>

Actually, I would argue that having a single flavor valid for several
different services complicates the user experience (and vastly complicates
the operator experience). This is because:

* Flavors are how Operators will provide different service levels, or
different feature sets for similar kinds of service. Users interested in
paying for those services are likely to be more confused if a single flavor
lists features for several different kinds of service.
* Billing becomes more incomprehensible when the same flavor is used for
multiple kinds of service. Users and Operators should not expect to pay the
same rate for a "Gold" FWaaS instance and "Gold" VPNaaS instance, so why
complicate things by putting them under the same flavor?
* Because of the above concerns, it's likely that Operators will only
deploy service profiles in a flavor for a single type of service anyway.
But from the user's perspective, it's not apparent when looking at the list
of flavors, which are valid for which kinds of service. What if a user
tries to deploy a LBaaS service using a flavor that only has FWaaS service
profiles associated with it? Presumably, the system must respond with an
error indicating that no valid service profiles could be found for that
service in that flavor. But this isn't very helpful to the user and is
likely to lead to increased support load for the Operator who will need to
explain this.
* A single-service flavor is going to be inherently easier to understand
than a multi-service flavor.
* Single-service flavors do not preclude the ability for vendors to have
multi-purpose appliances serve multiple roles in an OpenStack cloud.


> There are other considerations which could be made, but since they're
> dependent on features which do not yet exist (NFV, service insertion,
> chaining and steering) I think there is no point in arguing over it.
>

Agreed. Though, I don't think single-service flavors paint us into a corner
here at all. Again, things get complicated enough when it comes to service
insertion, chaining, steering, etc. that what we'll really need at that
point is actual orchestration. Flavors alone will not solve these problems,
and orchestration can work with many single-service flavors to provide the
illusion of multi-service flavors.


> In conclusion I think the idea makes sense, and is a minor twist in the
> current design which should neither make the feature too complex nor
> prevent any other use case for which the flavours are being conceived. For
> the very same reason, however, it is worth noting that this is surely not an
> aspect which will cause me or somebody else to put a veto on this work item.
>

I don't think this is a minor twist in the current design, actually:
* We'll have to deal with cases like the above where no valid service
profiles can be found for a given kind of flavor (which we can avoid
entirely if a flavor can have service profiles valid for only one kind of
service).
* When and if tags/capabilities/extensions get introduced, we would need to
provide an additional capabilities list on the service profiles in order to
be able to select which service profiles provide the capabilities requested.
* The above point makes things much more complicated when it comes to
scheduling algorithms for choosing which service profile to use when
multiple can meet the need for a given service. What does 'weight' mean if
all but two low-weight service profiles get eliminated as not suitable?

Another aspect to consider is how the flavours will work when the advanced
> service type they refer to is not consumable through the neutron API, which
> would be the case with an independent load balancing API endpoint. But this
> is probably another story.
>

As far as I'm aware, flavors will only ever apply to advanced services
consumable through the Neutron API. If this weren't the case, what's the
point of having a flavor describing the service at all? If you're talking
about Octavia here--  well, our plan is to have Octavia essentially be
another load balancer vendor, interfaced through a driver in the Neutron
LBaaS extension. (This is also why so many developers interested in seeing
Octavia come to light are spending all their time right now improving
Neutro

[openstack-dev] [Neutron] [ML2] kindly request neutron-cores to review and approve patch "Cisco DFA ML2 Mechanism Driver"

2014-07-15 Thread Milton Xu (mxu)
Hi,

This patch was initially uploaded on Jun 27, 2014, and we have received a number
of reviews from the community. Many thanks to those who kindly reviewed and
provided feedback.

Can the neutron cores please review/approve it so we can make progress here?  
Really appreciate your attention and help here.
I also include the cores who reviewed and approved the spec earlier.

Code patch:
https://review.openstack.org/#/c/103281/

Approved Spec:
https://review.openstack.org/#/c/89740/


Thanks,
Milton

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Registration for the mid cycle meetup is now closed

2014-07-15 Thread chuck.short
Hi

I haven't registered yet, unfortunately. Can you squeeze in one more person?

Chuck



---- Original message ----
From: Michael Still
Date: 07-15-2014 4:56 PM (GMT-05:00)
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Nova] Registration for the mid cycle meetup is now closed
Hi.

We've now hit our room capacity, so I have closed registrations for
the nova mid cycle meetup. Please reply to this thread if that's a
significant problem for someone.

Thanks,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Resources to fix MS Outlook (was Re: [Nova] [Gantt] Scheduler split status (updated))

2014-07-15 Thread Dugger, Donald D
Well, I'll probably get ostracized for this, but I actually prefer top posting 
(I like RPN calculators also, I'm just warped that way).  I `really` dislike 
paging through 10 screens of an email to discover the single comment buried 
somewhere near the end.  With top posting the new content is always right there 
at the top, which is all I need for threads I'm familiar with and, if I lack 
context, I can just go to the bottom and scan up, finding the info that I need.

Yes, top posting requires a little discipline: you need to copy/paste a little
context up to the top if it's not obvious what portion of the email you're 
replying to, but that's a small price to pay.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Stefano Maffulli [mailto:stef...@openstack.org] 
Sent: Tuesday, July 15, 2014 1:03 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Resources to fix MS Outlook (was Re: [Nova] [Gantt] 
Scheduler split status (updated))

On 07/15/2014 09:02 AM, Dugger, Donald D wrote:
> Unfortunately, much as I agree with your sentiment (Death to MS
> Outlook) my IT overlords have pretty much forced me into using it.  

It's been a while since I had to deal with really awful software, but since
this is a common problem, shall we collect some tips, tricks, and resources to
help the fellows in need?

I remember back in the day, there were some fairly easy fixes to overcome 
Outlook's lack of respect for RFCs. Something like:

http://home.in.tum.de/~jain/software/outlook-quotefix/
http://www.lemis.com/grog/email/fixing-outlook.php

Do these still work? What else can we do to help make mailing list messages
easier to read?

Gmail is also pretty badly giving up on respecting RFCs, giving way too many 
incentives for people to top-post and not quote properly.

--
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Horizon] [UX] Wireframes for Node Management - Juno

2014-07-15 Thread Wan-yen Hsu
Hi Jaromir,


> Excerpts from Jaromir Coufal's message of 2014-07-09 07:51:56 +:
>> Hey folks,
>>
>> after few rounds of reviews and feedbacks, I am sending wireframes,
>> which are ready for implementation in Juno:
>>
>> http://people.redhat.com/~jcoufal/openstack/juno/2014-07-09_nodes-ui_juno.pdf
>>
>> Let me know in case of any questions.

   This looks great!  I have a couple comments-

The "Register Nodes" panel uses "IPMI user" and "IPMI Password".  However,
not all Ironic drivers use IPMI, for instance, some Ironic drivers will use
iLO or other BMC interfaces instead of IPMI.  I would like to suggest
changing "IPMI" to "BMC" or ""IPMI/BMC" to acomodate more Ironic drivers.
The "Driver" field will reflect what power management interface (e.g., IPMI
+ PXE, or iLO + Virtual Media) is used so it can be used to correlate the
user and password fields.



   Also, myself and a few folks are working on Ironic UEFI support and we
hope to land this feature in Juno (Spec is still in review state but the
feature is on the Ironic Juno Prioritized list).   In order to add UEFI
boot feature, a "Supported Boot Modes" field in the hardware info is
needed.  The possible values are "BIOS Only", "UEFI Only", and
"BIOS+UEFI".   We will need to work with you to add this field onto
hardware info.



Thanks!



wanyen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][sriov] patch that depends on multiple existing patches under review

2014-07-15 Thread Robert Li (baoli)
Thanks Russell for the quick response. I'll give it a try rearranging the
dependencies.

--Robert

On 7/15/14, 3:26 PM, "Russell Bryant"  wrote:

>On 07/15/2014 03:12 PM, Robert Li (baoli) wrote:
>> Hi,
>> 
>> I was working on the last patch that I'd planned to submit for SR-IOV.
>> It turned out this patch would depend on multiple existing patches. "git
>> review -d" seems to be supporting one dependency only. Do let me know
>> how we can create a patch that depends on multiple existing patches
>> under review. Otherwise, I would have to collapse all of them and submit
>> a single patch instead.
>
>Ideally this whole set of patches would be coordinated into a single
>dependency chain.  If A and B are currently independent, but your new
>patch (C) depends on both of them, I would rebase so that your patch
>depends on B, and B depends on A.  Each patch can only have one parent.
>
>-- 
>Russell Bryant
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposal to add Jon Paul Sullivan and Alexis Lee to core review team

2014-07-15 Thread Robert Collins
Clint, thanks heaps for making the time to do a meta-review. With the
clear support of the other cores, I'm really happy to be able to
invite Alexis and JP to core status.

Alexis, Jon - core status means a commitment to three reviews a work
day (on average), keeping track of changing policies and our various
specs and initiatives, and obviously being excellent to us all :).

You don't have to take up the commitment if you don't want to - not
everyone has the time to keep up to date with everything going on etc.

Let me know your decision and I'll add you to the team :).

-Rob



On 10 July 2014 03:52, Clint Byrum  wrote:
> Hello!
>
> I've been looking at the statistics, and doing a bit of review of the
> reviewers, and I think we have an opportunity to expand the core reviewer
> team in TripleO. We absolutely need the help, and I think these two
> individuals are well positioned to do that.
>
> I would like to draw your attention to this page:
>
> http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
>
> Specifically these two lines:
>
> +-------------------+-------------------------------------------+----------------+
> |      Reviewer     | Reviews   -2   -1   +1   +2   +A    +/- % | Disagreements* |
> +-------------------+-------------------------------------------+----------------+
> |  jonpaul-sullivan |     188    0   43  145    0    0    77.1% |   28 ( 14.9%)  |
> |       lxsli       |     186    0   23  163    0    0    87.6% |   27 ( 14.5%)  |
>
> Note that they are right at the level we expect, 3 per work day. And
> I've looked through their reviews and code contributions: it is clear
> that they understand what we're trying to do in TripleO, and how it all
> works. I am a little dismayed at the slightly high disagreement rate,
> but looking through the disagreements, most of them were jp and lxsli
> being more demanding of submitters, so I am less dismayed.
>
> So, I propose that we add jonpaul-sullivan and lxsli to the TripleO core
> reviewer team.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] statsd client opening a new socket everytime a stats is updated

2014-07-15 Thread John Dickinson
We've been chatting in IRC, but for the mailing list archives: yes! We'd love
to see patches to improve this.

--John


On Jul 15, 2014, at 1:22 PM, Tatiana Al-Chueyr Martins 
 wrote:

> Hello!
> 
> I'm new to both Swift and OpenStack, I hope you can help me.
> 
> Considering statsd is enabled, each time something is logged, a new socket is 
> being opened.
> 
> At least, this is what I understood from the implementation and usage of 
> StatsdClient at:
> - swift/common/utils.py
> - swift/common/middleware/proxy_logging.py
> 
> If this analysis is correct: is there any special reason for this behavior 
> (open a new socket each request)?
> 
> We could significantly improve performance reusing the same socket.
> 
> Would you be interested in a patch in this regard?
> 
> Best,
> -- 
> Tatiana Al-Chueyr
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor framework proposal

2014-07-15 Thread Eugene Nikanorov
Hi Stephen,

So, as was discussed, the existing proposal has some aspects which are better
postponed, like the extension list on the flavor (instead of tags).
In particular, that idea has several drawbacks:
 - it makes the public API inflexible
 - turning features on/off is not what flavors should be doing; it's a task
for the policy framework, not flavors
 - flavor-based REST call dispatching is quite a complex solution that gives
no benefits to service plugins
While this is not explicitly written in the proposal, that's what is implied
there.
I think that one is a major blocker of the proposal right now; it deserves
further discussion and is not essential to the problem flavors are supposed
to solve.
Other than that, I personally don't have many disagreements with the proposal.

The question about the service type on the flavor is minor IMO. We can allow
it to be NULL, which would mean a multiservice flavor.
However, multiservice flavors may put some minor requirements on the driver
API (that's mainly because of how the flavor plugin interacts with the
service plugins).
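
To make the distinction concrete, here is a purely illustrative sketch (plain
Python dicts; the field names are hypothetical and not taken from the proposal
under review):

    # Illustrative only -- hypothetical field names, not the schema from
    # the flavor framework review.
    single_service_flavor = {
        'name': 'gold-lb',
        'service_type': 'LOADBALANCER',   # valid for exactly one service type
        'service_profiles': ['haproxy-ha', 'vendor-x-lb'],
    }

    multi_service_flavor = {
        'name': 'gold',
        'service_type': None,             # NULL => may back several services
        'service_profiles': ['vendor-x-appliance'],
    }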

Thanks,
Eugene.


On Tue, Jul 15, 2014 at 11:21 PM, Stephen Balukoff 
wrote:

> Hi folks!
>
> I've noticed progress on the flavor framework discussion slowing down over
> the last week. We would really like to see this happen for Juno because
> it's critical for many of the features we'd also like to get into Juno for
> LBaaS. I understand there are other Neutron extensions which will need it
> too.
>
> The proposal under discussion is here:
>
> https://review.openstack.org/#/c/102723/
>
> One of the things I've seen come up frequently in the comments is the idea
> that a single flavor would apply to more than one service type (service
> type being 'LBaaS', 'FWaaS', 'VPNaaS', etc.). I've commented extensively on
> this, and my opinion is that this doesn't make a whole lot of sense.
>  However, there are people who have a different view on this, so I would
> love to hear from them:
>
> Could you describe a specific usage scenario where this makes sense? What
> are the characteristics of a flavor that applies to more than one service
> type?
>
> Let's see if we can come to some conclusions on this so that we can get
> flavors into Juno, please!
>
> Thanks,
> Stephen
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Registration for the mid cycle meetup is now closed

2014-07-15 Thread Michael Still
Hi.

We've now hit our room capacity, so I have closed registrations for
the nova mid cycle meetup. Please reply to this thread if that's a
significant problem for someone.

Thanks,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Blazar] subscription mechanism BP

2014-07-15 Thread Lakmal Silva
Hi,

I have posted a blueprint for a subscription mechanism for Blazar.

https://blueprints.launchpad.net/blazar/+spec/subscription-mechanism

Appreciate your feedback on the BP.


Regards,
Lakmal
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.serialization repo review

2014-07-15 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 15/07/14 20:36, Joshua Harlow wrote:
> LGTM.
> 
> I'd be interested in the future to see if we can transparently use
> some other serialization format (besides json)...
> 
> My only complaint is that jsonutils is still named jsonutils
> instead of 'serializer' or something else, but I understand the
> reasoning why...

Now that the jsonutils module contains all the basic 'json' functions
(dump[s], load[s]), can we rename it to 'json' to mimic the standard
'json' library? I think jsonutils is now easy to use as an enhanced
drop-in replacement for the standard 'json' module, and I have even
envisioned a hacking rule that would suggest using jsonutils instead of
json. So appropriate naming would help push that use case.
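
For example, assuming the graduated library ends up exposing the module under
the oslo.serialization namespace (the exact import path is an assumption until
the library is released), the drop-in usage would look something like:

    # Sketch only; the import path is an assumption, but the functions
    # mirror the standard 'json' module.
    from oslo.serialization import jsonutils

    data = {'name': 'demo', 'count': 3}
    blob = jsonutils.dumps(data)          # same signature as json.dumps()
    assert jsonutils.loads(blob) == data  # ...and as json.loads()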

/Ihar

> 
> -Josh
> 
> On Jul 15, 2014, at 10:42 AM, Ben Nemec 
> wrote:
> 
>> And the link, since I forgot it before: 
>> https://github.com/cybertron/oslo.serialization
>> 
>> On 07/14/2014 04:59 PM, Ben Nemec wrote:
>>> Hi oslophiles,
>>> 
>>> I've (finally) started the graduation of oslo.serialization,
>>> and I'm up to the point of having a repo on github that passes
>>> the unit tests.
>>> 
>>> I realize there is some more work to be done (e.g. replacing
>>> all of the openstack.common files with libs) but my plan is to
>>> do that once it's under Gerrit control so we can review the
>>> changes properly.
>>> 
>>> Please take a look and leave feedback as appropriate.  Thanks!
>>> 
>>> -Ben
>>> 
>> 
>> 
>> ___ OpenStack-dev
>> mailing list OpenStack-dev@lists.openstack.org 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>> 
> 
> ___ OpenStack-dev
> mailing list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBCgAGBQJTxZShAAoJEC5aWaUY1u57oLoH/Aw0x10t3HeotJUfKPz12k1U
ca9Grr0IYFfR48bRldmdomm8gT8vMSsB3Js4EaRwORSokgIumF/9h/cPKq6c49Pt
OWQV/MVgSxdaobz189Ai6mbAukjtyNcBrhJ1sYFyQH8lDgM06PYHXdMThcemSXIp
/8BTVyUDVm5Lq7cRe5LB+tVg5L/4iYoLLVl6hcvVSOr0Quey+wjbroEG/Cg5Biwz
em3vJlO22eao7hDsuNh8foeUWKGRUirbK6TZH/VfgJuB0fp0v0gz7GFoiNsvyqaT
9d5P3P17nccokeC+nntzrAM+RwJdio8GGPXzgCyLr4JDCAJlFCoziJ1lghTXXOk=
=GE52
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] request to tag novaclient 2.18.0

2014-07-15 Thread Michael Still
Ok, I just released 2.18.1 to address this issue.
https://launchpad.net/python-novaclient/+milestone/2.18.1

Cheers,
Michael

On Sat, Jul 12, 2014 at 7:18 AM, Michael Still  wrote:
> I can do another release once https://review.openstack.org/#/c/106447/ merges.
>
> Michael
>
> On Sat, Jul 12, 2014 at 3:51 AM, Russell Bryant  wrote:
>> On 07/11/2014 01:27 PM, Russell Bryant wrote:
>>> On 07/11/2014 05:29 AM, Thierry Carrez wrote:
 Matthias Runge wrote:
> On 11/07/14 02:04, Michael Still wrote:
>> Sorry for the delay here. This email got lost in my inbox while I was
>> travelling.
>>
>> This release is now tagged. Additionally, I have created a milestone
>> for this release in launchpad, which is the keystone process for
>> client releases. This means that users of launchpad can now see what
>> release a given bug was fixed in, and improves our general launchpad
>> bug hygiene. However, because we haven't done this before, this first
>> release is a bit bigger than it should be.
>>
>> I'm having some pain marking the milestone as released in launchpad,
>> but I am arguing with launchpad about that now.
>>
>> Michael
>>
> Cough,
>
> this broke horizon stable and master; heat stable is affected as well.
>
> For Horizon, I filed bug https://bugs.launchpad.net/horizon/+bug/1340596

 The same bug (https://bugs.launchpad.net/bugs/1340596) will be used to
 track Heat tasks as well.

>>>
>>> Thanks for pointing this out.  These non-backwards compatible changes
>>> should not have been merged, IMO.  They really should have waited until
>>> a v2.0, or at least been done in a backwards compatible way.  I'll look into
>>> what reverts are needed.
>>>
>>
>> I posted a couple of reverts that I think will resolve these problems:
>>
>> https://review.openstack.org/#/c/106446/
>> https://review.openstack.org/#/c/106447/
>>
>> --
>> Russell Bryant
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Rackspace Australia



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-15 Thread Morgan Fainberg

>  
> > Thanks for the info - any chance you can provide links to the relevant
> > reviews here? If so I'll be happy to pull them and locally test to ensure
> > our issues will be addressed :)
> >
> Sure!
>  
> https://review.openstack.org/#/c/106819/ is the change for the 
> keystonemiddleware  
> package (where the change will actually land), and 
> https://review.openstack.org/#/c/106833/  
> is the change to keystoneclient to show that the change will succeed (this 
> will not merge  
> to keystoneclient, if you want the v3-preferred by default behavior, the 
> project must  
> use keystonemiddleware).
>  
> Cheers,
> Morgan
>  

And just to be clear, the reason for the keystoneclient “test” change is 
because projects have not all converted over to keystonemiddleware yet (this is 
in process). We don’t want projects to be split between keystoneclient and the 
middleware going forward, but we cannot remove the client version for 
compatibility reasons (previous releases of OpenStack, etc). The version in the 
client is “Frozen” and will only receive security updates (based on the 
specification to split the middleware into its own package).

—Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-15 Thread Morgan Fainberg

> Thanks for the info - any chance you can provide links to the relevant
> reviews here? If so I'll be happy to pull them and locally test to ensure
> our issues will be addressed :)
>  
Sure!

https://review.openstack.org/#/c/106819/ is the change for the 
keystonemiddleware package (where the change will actually land), and 
https://review.openstack.org/#/c/106833/ is the change to keystoneclient to 
show that the change will succeed (this will not merge to keystoneclient, if 
you want the v3-preferred by default behavior, the project must use 
keystonemiddleware).

Cheers,
Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] statsd client opening a new socket everytime a stats is updated

2014-07-15 Thread Tatiana Al-Chueyr Martins
Hello!

I'm new to both Swift and OpenStack, I hope you can help me.

Considering statsd is enabled, each time something is logged, a new socket
is being opened.

At least, this is what I understood from the implementation and usage of
StatsdClient at:
- swift/common/utils.py
- swift/common/middleware/proxy_logging.py

If this analysis is correct: is there any special reason for this behavior
(open a new socket each request)?

We could significantly improve performance reusing the same socket.
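
A minimal sketch of the kind of reuse being suggested (a generic UDP StatsD
emitter, not Swift's actual StatsdClient):

    # Sketch only -- not Swift code. The idea is to create the UDP socket
    # once and reuse it for every sample, instead of opening one per call.
    import socket

    class ReusableStatsdClient(object):
        def __init__(self, host, port):
            self._target = (host, port)
            self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

        def increment(self, metric, value=1):
            payload = '%s:%d|c' % (metric, value)
            self._sock.sendto(payload.encode('utf-8'), self._target)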

Would you be interested in a patch in this regard?

Best,
-- 
Tatiana Al-Chueyr
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-15 Thread Steven Hardy
On Tue, Jul 15, 2014 at 07:20:47AM -0700, Morgan Fainberg wrote:

>We just did a test converting over the default to v3 (and falling back to
>v2 as needed, yes fallback will still be needed) yesterday (Dolph posted a
>couple of test patches and they seemed to succeed - yay!!) It looks like
>it will just work. Now there is a big caveat: this default will only
>change in the keystone middleware project, and it needs to have a patch or
>three get through gate converting projects over to use it before we accept
>the code.
>Nova has approved the patch to switch over, it is just fighting with Gate.
>Other patches are proposed for other projects and are in various states of
>approval.
>So, in short. This is happening and soon. There are some things that need
>to get through gate and then we will do the release of keystonemiddleware
>that should address your problem here. At least my reading of the issue
>and the fixes that are pending indicates as much. (Please let me know if I
>am misreading anything here).

Thanks for the info - any chance you can provide links to the relevant
reviews here?  If so I'll be happy to pull them and locally test to ensure
our issues will be addressed :)

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Meeting time change

2014-07-15 Thread Dave Thomas
B

Wednesdays, 2100 UTC
Wednesdays, 2000 UTC
Tuesdays, 1900 UTC



On Tue, Jul 15, 2014 at 12:12 PM, Sriram Madapusi Vasudevan <
sriram.madapusiva...@rackspace.com> wrote:

>  I’ll go with B.
>
>  Cheers,
> Sriram Madapusi Vasudevan
>
>
>
>  On Jul 15, 2014, at 2:45 PM, Victoria Martínez de la Cruz <
> victo...@vmartinezdelacruz.com> wrote:
>
>   2014-07-15 13:20 GMT-03:00 Kurt Griffiths 
> :
>
>>  Hi folks, we’ve been talking about this in IRC, but I wanted to bring
>> it to the ML to get broader feedback and make sure everyone is aware. We’d
>> like to change our meeting time to better accommodate folks that live
>> around the globe. Proposals:
>>
>>  Tuesdays, 1900 UTC
>> Wednesdays, 2000 UTC
>> Wednesdays, 2100 UTC
>>
>>  I believe these time slots are free, based on:
>> https://wiki.openstack.org/wiki/Meetings
>>
>>  Please respond with ONE of the following:
>>
>>  A. None of these times work for me
>> B. An ordered list of the above times, by preference
>> C. I am a robot
>>
>>  Cheers,
>> Kurt
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>  I'm OK with any one of those! Thanks for looking into this.
>
>  Cheers,
>
>  Victoria
>  ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] python-barbicanclient 2.2.1 released

2014-07-15 Thread Douglas Mendizabal
The Barbican development team would like to announce the release of
python-barbicanclient version 2.2.1

python-barbicanclient is a client library for the Barbican Key Management
Service.  It provides a Python API (barbicanclient module) and a
command-line tool (barbican).

This release can be installed from PyPI:

https://pypi.python.org/pypi/python-barbicanclient

Thanks,
Doug


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor framework proposal

2014-07-15 Thread Salvatore Orlando
I think I've provided some examples in the review.

However, the point is mostly to simplify usage from a user perspective -
allowing consumers of the neutron API to use the same flavour object for
multiple services.
There are other considerations which could be made, but since they're
dependent on features which do not yet exist (NFV, service insertion,
chaining and steering) I think there is no point in arguing over it.

In conclusion I think the idea makes sense, and is a minor twist in the
current design which should neither make the feature too complex nor
prevent any other use case for which the flavours are being conceived. For
the very same reason, however, it is worth noting that this is surely not an
aspect which will cause me or somebody else to put a veto on this work item.

Another aspect to consider is how the flavours will work when the advanced
service type they refer to is not consumable through the neutron API, which
would be the case with an independent load balancing API endpoint. But this
is probably another story.

Salvatore



On 15 July 2014 21:21, Stephen Balukoff  wrote:

> Hi folks!
>
> I've noticed progress on the flavor framework discussion slowing down over
> the last week. We would really like to see this happen for Juno because
> it's critical for many of the features we'd also like to get into Juno for
> LBaaS. I understand there are other Neutron extensions which will need it
> too.
>
> The proposal under discussion is here:
>
> https://review.openstack.org/#/c/102723/
>
> One of the things I've seen come up frequently in the comments is the idea
> that a single flavor would apply to more than one service type (service
> type being 'LBaaS', 'FWaaS', 'VPNaaS', etc.). I've commented extensively on
> this, and my opinion is that this doesn't make a whole lot of sense.
>  However, there are people who have a different view on this, so I would
> love to hear from them:
>
> Could you describe a specific usage scenario where this makes sense? What
> are the characteristics of a flavor that applies to more than one service
> type?
>
> Let's see if we can come to some conclusions on this so that we can get
> flavors into Juno, please!
>
> Thanks,
> Stephen
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposal to add Jon Paul Sullivan and Alexis Lee to core review team

2014-07-15 Thread Derek Higgins
On 09/07/14 16:52, Clint Byrum wrote:
> Hello!
> 
> I've been looking at the statistics, and doing a bit of review of the
> reviewers, and I think we have an opportunity to expand the core reviewer
> team in TripleO. We absolutely need the help, and I think these two
> individuals are well positioned to do that.
> 
> I would like to draw your attention to this page:
> 
> http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
> 
> Specifically these two lines:
> 
> +-------------------+-------------------------------------------+----------------+
> |      Reviewer     | Reviews   -2   -1   +1   +2   +A    +/- % | Disagreements* |
> +-------------------+-------------------------------------------+----------------+
> |  jonpaul-sullivan |     188    0   43  145    0    0    77.1% |   28 ( 14.9%)  |
> |       lxsli       |     186    0   23  163    0    0    87.6% |   27 ( 14.5%)  |
> 
> Note that they are right at the level we expect, 3 per work day. And
> I've looked through their reviews and code contributions: it is clear
> that they understand what we're trying to do in TripleO, and how it all
> works. I am a little dismayed at the slightly high disagreement rate,
> but looking through the disagreements, most of them were jp and lxsli
> being more demanding of submitters, so I am less dismayed.
> 
> So, I propose that we add jonpaul-sullivan and lxsli to the TripleO core
> reviewer team.

+1 to both


> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Group-based Policy code sprint

2014-07-15 Thread Sumit Naiksatam
Hi All,

The Group Policy team is planning to meet on July 24th to focus on
making progress with the pending items for Juno, and also to
facilitate the vendor drivers. The specific agenda will be posted on
the Group Policy wiki:
https://wiki.openstack.org/wiki/Neutron/GroupPolicy

Prasad Vellanki from One Convergence has graciously offered to host
this for those planning to attend in person in the bay area:
Address:
2290 N First Street
Suite # 304
San Jose, CA 95131

Time: 9.30 AM

For those not being able to attend in person, we will post remote
attendance details on the above Group Policy wiki.

Thanks for your participation.

~Sumit.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][sriov] patch that depends on multiple existing patches under review

2014-07-15 Thread Russell Bryant
On 07/15/2014 03:12 PM, Robert Li (baoli) wrote:
> Hi,
> 
> I was working on the last patch that I’d planned to submit for SR-IOV.
> It turned out this patch would depend on multiple existing patches. “git
> review -d” seems to be supporting one dependency only. Do let me know
> how we can create a patch that depends on multiple existing patches
> under review. Otherwise, I would have to collapse all of them and submit
> a single patch instead. 

Ideally this whole set of patches would be coordinated into a single
dependency chain.  If A and B are currently independent, but your new
patch (C) depends on both of them, I would rebase so that your patch
depends on B, and B depends on A.  Each patch can only have one parent.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Flavor framework proposal

2014-07-15 Thread Stephen Balukoff
Hi folks!

I've noticed progress on the flavor framework discussion slowing down over
the last week. We would really like to see this happen for Juno because
it's critical for many of the features we'd also like to get into Juno for
LBaaS. I understand there are other Neutron extensions which will need it
too.

The proposal under discussion is here:

https://review.openstack.org/#/c/102723/

One of the things I've seen come up frequently in the comments is the idea
that a single flavor would apply to more than one service type (service
type being 'LBaaS', 'FWaaS', 'VPNaaS', etc.). I've commented extensively on
this, and my opinion is that this doesn't make a whole lot of sense.
 However, there are people who have a different view on this, so I would
love to hear from them:

Could you describe a specific usage scenario where this makes sense? What
are the characteristics of a flavor that applies to more than one service
type?

Let's see if we can come to some conclusions on this so that we can get
flavors into Juno, please!

Thanks,
Stephen

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Meeting time change

2014-07-15 Thread Sriram Madapusi Vasudevan
I’ll go with B.

Cheers,
Sriram Madapusi Vasudevan



On Jul 15, 2014, at 2:45 PM, Victoria Martínez de la Cruz 
mailto:victo...@vmartinezdelacruz.com>> wrote:

2014-07-15 13:20 GMT-03:00 Kurt Griffiths 
mailto:kurt.griffi...@rackspace.com>>:
Hi folks, we’ve been talking about this in IRC, but I wanted to bring it to the 
ML to get broader feedback and make sure everyone is aware. We’d like to change 
our meeting time to better accommodate folks that live around the globe. 
Proposals:

Tuesdays, 1900 UTC
Wednesdays, 2000 UTC
Wednesdays, 2100 UTC

I believe these time slots are free, based on: 
https://wiki.openstack.org/wiki/Meetings

Please respond with ONE of the following:

A. None of these times work for me
B. An ordered list of the above times, by preference
C. I am a robot

Cheers,
Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I'm OK with any one of those! Thanks for looking into this.

Cheers,

Victoria
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][sriov] patch that depends on multiple existing patches under review

2014-07-15 Thread Robert Li (baoli)
Hi,

I was working on the last patch that I’d planned to submit for SR-IOV. It 
turned out this patch would depend on multiple existing patches. “git review 
-d” seems to be supporting one dependency only. Do let me know how we can
create a patch that depends on multiple existing patches under review. 
Otherwise, I would have to collapse all of them and submit a single patch 
instead.


thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Resources to fix MS Outlook (was Re: [Nova] [Gantt] Scheduler split status (updated))

2014-07-15 Thread Stefano Maffulli
On 07/15/2014 09:02 AM, Dugger, Donald D wrote:
> Unfortunately, much as I agree with your sentiment (Death to MS
> Outlook) my IT overlords have pretty much forced me into using it.  

It's been a while since I had to deal with really awful software but
since this is a common problem, shall we collect some tips, tricks, and
resources to help the fellows in need?

I remember back in the day, there were some fairly easy fixes to
overcome Outlook's lack of respect for RFCs. Something like:

http://home.in.tum.de/~jain/software/outlook-quotefix/
http://www.lemis.com/grog/email/fixing-outlook.php

Do these still work? What else can we do to help make mailing list
messages easier to read?

Gmail is also pretty badly giving up on respecting RFCs, giving way too
many incentives for people to top-post and not quote properly.

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Reminder of Juno-2 and Juno-3 important dates

2014-07-15 Thread John Garbutt
Hi all,

Some friendly reminders from your Blueprint Czar...


Juno-2 is just over a week away:
https://launchpad.net/nova/+milestone/juno-2

I have moved most blueprints that don't have all their code ready to
be reviewed to Juno-3. Reviewers, if something is really not going to
make it for Juno-2, feel free to suggest that on the blueprint, and
punt it towards Juno-3, or just drop me a note on IRC or whatever.

For the next week, if at all possible, please try to help out
reviewing Juno-2 blueprints. Great non-core reviews are just as
valuable as core review comments. It's also how we get to know you and
grow the core team.


Juno-3...

FeatureProposalFreeze is currently 21st August:
https://wiki.openstack.org/wiki/Juno_Release_Schedule

Please have all code up ready for review as soon as possible, but no
later than 21st August.

It's looking like a very busy Juno-3. History has taught us that we
are unlikely to merge your code if it's up for review just before the
deadline. So to avoid disappointment, please submit your code ASAP,
and remember to mark your blueprint as "Needs Code Review".

Bugs are likely to get more priority the closer we get towards the end
of Juno-3. As such, if on 31st July there is no code up for review,
and no sign of activity on a blueprint, we are likely to start
suggesting you resubmit the blueprint for K.


nova-specs freeze in operation...

We have passed the nova-specs freeze for Juno. I am working through
what missed the deadline, looking for any issues, and tracking that
progress here:
https://etherpad.openstack.org/p/nova-juno-spec-priorities

If you want an exception, please shout up at the next nova-meeting and
argue your case. If that doesn't work out, we can resort of emails, in
the usual way. Please assume the answer will be no.


What about K?

We have not yet opened specs for the K release. Sorry.

Frankly I don't think the current process is quite working for Nova.
We will have to make some changes for K. For example, we need to make
life easier for smaller blueprints. We need to do better at following
through from the summit and commenting on priorities, etc., etc.

With all this in mind, the current idea is to build some consensus on
what we want for K at the mid-cycle meetup. Suggestions welcome,
but please start a new email thread, this is just informational.


Any questions or suggestions, please do shout.


Thanks,
johnthetubaguy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [compute][tempest] Upgrading libvirt-lxc support status

2014-07-15 Thread Nels Nelson
Thanks for your response, Joe.

Am I understanding you correctly that the Hypervisor Support Status does
not in fact hinge on any particular Tempest tests, but rather, simply on
individual tests for the libvirt-lxc driver used for gating?

Also, one last question, am I using the incorrect [subheader][category]
info in my subject?  I've had to bump this topic twice now, and you're the
only person to reply.

Thanks very much for your time.

Best regards,
-Nels Nelson


From:  Joe Gordon 
>On Tue, Jul 1, 2014 at 2:32 PM, Nels Nelson
> wrote:
>
>Greetings list,-
>
>Over the next few weeks I will be working on developing additional Tempest
>gating unit and functional tests for the libvirt-lxc compute driver.
>
>
>
>Tempest is driver agnostic, just like the nova APIs strive to be. As a
>consumer of nova I shouldn't need to know what driver is being used.
>So there should not be any libvirt-lxc only tests in Tempest.
> 
>
>
>I am trying to figure out exactly what is required in order to accomplish
>the goal of ensuring the continued inclusion (without deprecation) of the
>libvirt-lxc compute driver in OpenStack.  My understanding is that this
>requires the upgrading of the support status in the Hypervisor Support
>Matrix document by developing the necessary Tempest tests.  To that end, I
>am trying to determine what tests are necessary as precisely as possible.
>
>I have some questions:
>
>* Who maintains the Hypervisor Support Matrix document?
>
>  
>https://wiki.openstack.org/wiki/HypervisorSupportMatrix
>
>
>* Who is in charge of the governance over the Support Status process?  Is
>there single person in charge of evaluating every driver?
>
>
>
>
>The nova team is responsible for this, with the PTL as the lead of that
>team.
> 
>
>
>* Regarding that process, how is the information in the Hypervisor
>Support Matrix substantiated?  Is there further documentation in the wiki
>for this?  Is an evaluation task simply performed on the functionality for
>the given driver, and the results logged in the HSM?  Is this an automated
>process?  Who is responsible for that evaluation?
>
>
>
>I am actually not sure about this one, but I don't believe it is
>automated though.
> 
>
>
>* How many of the boxes in the HSM must be checked positively, in
>order to move the driver into a higher supported group?  (From group C to
>B, and from B to A.)
>
>* Or, must they simply all be marked with a check or minus,
>substantiated by a particular gating test which passes based on the
>expected support?
>
>* In other words, is it sufficient to provide enough automated testing
>to simply be able to indicate supported/not supported on the support
>matrix chart?  Else, is writing supporting documentation of an evaluation
>of the hypervisor sufficient to substantiate those marks in the support
>matrix?
>
>* Do "unit tests that gate commits" specifically refer to tests
>written to verify the functionality described by the annotation in the
>support matrix? Or are the annotations substantiated by "functional
>testing that gate commits"?
>
>
>
>In order to get a driver out of group C and into group B, a third party
>testing system should run tempest on all nova patches. Similar to what we
>have for Xen 
>(https://review.openstack.org/#/q/reviewer:openstack%2540citrix.com+status
>:open,n,z).
>
>To move from Group B to group A, the driver must have first party testing
>that we gate on (we cannot land any patches that fail for that driver).
> 
>
>
>Thank you for your time and attention.
>
>Best regards,
>-Nels Nelson
>Software Developer
>Rackspace Hosting
>
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Meeting time change

2014-07-15 Thread Victoria Martínez de la Cruz
2014-07-15 13:20 GMT-03:00 Kurt Griffiths :

>  Hi folks, we’ve been talking about this in IRC, but I wanted to bring it
> to the ML to get broader feedback and make sure everyone is aware. We’d
> like to change our meeting time to better accommodate folks that live
> around the globe. Proposals:
>
>  Tuesdays, 1900 UTC
> Wednesdays, 2000 UTC
> Wednesdays, 2100 UTC
>
>  I believe these time slots are free, based on:
> https://wiki.openstack.org/wiki/Meetings
>
>  Please respond with ONE of the following:
>
>  A. None of these times work for me
> B. An ordered list of the above times, by preference
> C. I am a robot
>
>  Cheers,
> Kurt
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
I'm OK with any one of those! Thanks for looking into this.

Cheers,

Victoria
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.serialization repo review

2014-07-15 Thread Joshua Harlow
LGTM.

I'd be interested in the future to see if we can transparently use some other
serialization format (besides json)...

My only complaint is that jsonutils is still named jsonutils instead of
'serializer' or something else, but I understand the reasoning why...
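
One hypothetical way that could eventually look (a sketch, not anything that
exists in the repo today) is a thin front end that keeps the json-compatible
entry points but dispatches to a pluggable backend:

    # Hypothetical sketch -- not part of oslo.serialization today.
    import json

    try:
        import msgpack  # optional alternative backend
    except ImportError:
        msgpack = None

    _BACKENDS = {'json': (json.dumps, json.loads)}
    if msgpack is not None:
        _BACKENDS['msgpack'] = (msgpack.packb, msgpack.unpackb)

    def dumps(obj, backend='json'):
        serialize, _ = _BACKENDS[backend]
        return serialize(obj)

    def loads(data, backend='json'):
        _, deserialize = _BACKENDS[backend]
        return deserialize(data)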

-Josh

On Jul 15, 2014, at 10:42 AM, Ben Nemec  wrote:

> And the link, since I forgot it before:
> https://github.com/cybertron/oslo.serialization
> 
> On 07/14/2014 04:59 PM, Ben Nemec wrote:
>> Hi oslophiles,
>> 
>> I've (finally) started the graduation of oslo.serialization, and I'm up
>> to the point of having a repo on github that passes the unit tests.
>> 
>> I realize there is some more work to be done (e.g. replacing all of the
>> openstack.common files with libs) but my plan is to do that once it's
>> under Gerrit control so we can review the changes properly.
>> 
>> Please take a look and leave feedback as appropriate.  Thanks!
>> 
>> -Ben
>> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Extended get_attr support for ResourceGroup

2014-07-15 Thread Tomas Sedovic
On 15/07/14 20:01, Zane Bitter wrote:
> On 14/07/14 12:21, Tomas Sedovic wrote:
>> On 12/07/14 06:41, Zane Bitter wrote:
>>> On 11/07/14 09:37, Tomas Sedovic wrote:
> 
[snip]
> 
>>>
 Alternatively, we could extend the ResourceGroup's get_attr behaviour:

   {get_attr: [controller_group, resource.0.networks.ctlplane.0]}

 but the former is a bit cleaner and more generic.
>>>
>>> I wrote a patch that implements this (and also handles (3) above in a
>>> similar manner), but in the end I decided that this:
>>>
>>>{get_attr: [controller_group, resource.0, networks, ctlplane, 0]}
>>>
>>> would be better than either that or the current syntax (which was
>>> obviously obscure enough that you didn't discover it). My only
>>> reservation was that it might make things a little weird when we have an
>>> autoscaling API to get attributes from compared with the dotted syntax
>>> that you suggest, but I soon got over it ;)
>>
>> So now that I understand how this works, I'm not against keeping things
>> the way we are. There is a consistency there, we just need to document
>> it and perhaps show some examples.
> 
> It kind of fell out of the work I was doing on the patches above anyway.
> It would be harder to _not_ implement this (and the existing way still
> works too).

Okay, fine by me :-)

> 
>>>
 ---
[snip]
>>
>>>
>>> There is one aspect of this that probably doesn't work yet: originally
>>> outputs and attributes were only allowed to be strings. We changed that
>>> for attributes, but probably not yet for outputs (even though outputs of
>>> provider templates become attributes of the facade resource). But that
>>> should be easy to fix. (And if your data can be returned as a string, it
>>> should already work.)
>>
>> Unless I misunderstood what you're saying, it seems to be working now:
>>
>> controller.yaml:
>>
>>  outputs:
>>hosts_entry:
>>  description: An IP address and a hostname of the server
>>  value:
>>ip: {get_attr: [controller_server, networks, private, 0]}
>>name: {get_attr: [controller_server, name]}
>>
>> environment.yaml:
>>
>>  resource_registry:
>>OS::TripleO::Controller: controller.yaml
>>
>> test-resource-group.yaml:
>>
>>  resources:
>>servers:
>>  type: OS::Heat::ResourceGroup
>>  properties:
>>count: 3
>>resource_def:
>>  type: OS::TripleO::Controller
>>  properties:
>>key_name: {get_param: key_name}
>>image: {get_param: image_id}
>>
>>  outputs:
>>hosts:
>>  description: "/etc/hosts entries for each server"
>>  value: {get_attr: [servers, hosts_entry]}
>>
>> Heat stack-show test-resource-group:
>>
>> {
>>   "output_value": [
>> "{u'ip': u'10.0.0.4', u'name':
>> u'rg-7heh-0-tweejsvubaht-controller_server-mscy33sbtirn'}",
>> "{u'ip': u'10.0.0.3', u'name':
>> u'rg-7heh-1-o4szl7lry27d-controller_server-sxpkalgi27ii'}",
>> "{u'ip': u'10.0.0.2', u'name':
>> u'rg-7heh-2-l2y6rqxml2fi-controller_server-u4jcjacjdrea'}"
>>   ],
>>   "description": "/etc/hosts entries for each server",
>>   "output_key": "hosts"
>> },
> 
> It looks like the dicts are being converted to strings by Python, so
> there probably is a small bug here to be fixed. (At the very least, if
> we're converting to strings we should do so using json.dumps(), not
> repr().)

Ooops, you're right. In my excitement, I completely missed that!
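
(A quick illustration of the difference, with a shortened value for
readability:)

    import json
    entry = {u'ip': u'10.0.0.4'}
    print repr(entry)        # {u'ip': u'10.0.0.4'}
    print json.dumps(entry)  # {"ip": "10.0.0.4"}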

> 
> [snip]
> 

 So this boils down to 4 features proposals:

 1. Support extended attributes in ResourceGroup's members
>>>
>>> Sorted.
>>
>> Yep
>>
>>>
 2. Allow a way to use a Resource ID (e.g. what you get by {get_attr:
 [ResourceGroup, refs]} or {get_attr: [ResourceGroup, resource.0]}) with
 existing intrinsic functions (get_resource, get_attr)
>>>
>>> No dice, but (1) solves the problem anyway.
>>
>> Agreed
>>
>>>
 3. A `map` intrinsic function that turns a list of items to another
 list
 by doing operations on each item
>>>
>>> There may be a better solution available to us already, so IMO
>>> investigate that first. If that turns out not to be the case then we'll
>>> need to reach a consensus on whether map is something we want.
>>
>> You're right. I no longer think map (or anything like it) is necessary.
> 
> That's the kind of thing I love to hear :D
> 
 4. A `concat_list` intrinsic function that joins multiple lists into
 one.
>>>
>>> Low priority.
>>
>> Yeah.
>>
>>>
 I think the first two are not controversial. What about the other two?
 I've shown you some examples where we would find a good use in the
 TripleO templates. The lack of `map` actually blocks us from going
 all-Heat.
>>>
>>> Hopefully that's not actually the case.
>>>
 The alternative would be to say that this sort of stuff to be done
 inside the instance by os-apply-config et al.

Re: [openstack-dev] [Ironic] [Horizon] [UX] Wireframes for Node Management - Juno

2014-07-15 Thread Gregory Haynes
Excerpts from Jaromir Coufal's message of 2014-07-15 07:15:12 +:
> On 2014/10/07 22:19, Gregory Haynes wrote:
> > Excerpts from Jaromir Coufal's message of 2014-07-09 07:51:56 +:
> >> Hey folks,
> >>
> >> after few rounds of reviews and feedbacks, I am sending wireframes,
> >> which are ready for implementation in Juno:
> >>
> >> http://people.redhat.com/~jcoufal/openstack/juno/2014-07-09_nodes-ui_juno.pdf
> >>
> >> Let me know in case of any questions.
> >>
> >
> > Looks awesome!
> >
> > I may be way off base here (not super familiar with Tuskar) but what
> > about bulk importing of nodes? This is basically the only way devtest
> > makes use of nodes nowadays, so it might be nice to allow people to use
> > the same data file in both places (nodes.json blob).
> >
> > -Greg
> 
> Hi Greg,
> 
> thanks a lot for the feedback. We planned to provide a bulk import of 
> nodes as well. First we need to provide the basic functionality. I hope 
> we also manage to add import function in Juno but it depends on how the 
> progress of implementation goes. The challenge here is that I am not 
> aware of any standardized way for the data structure of the imported 
> file (do you have any suggestions here?).
> 
> 

We currently accept a nodes.json blob in the following format:

[{
"pm_password": "foo",
"mac": ["78:e7:d1:24:99:a5"],
"pm_addr": "10.22.51.66",
"pm_type": "pxe_ipmitool",
"memory": 98304,
"disk": 1600,
"arch": "amd64",
"cpu": 24,
"pm_user": "Administrator"
},
...
]

So this might be a good starting point?

-Greg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Extended get_attr support for ResourceGroup

2014-07-15 Thread Zane Bitter

On 14/07/14 12:21, Tomas Sedovic wrote:

On 12/07/14 06:41, Zane Bitter wrote:

On 11/07/14 09:37, Tomas Sedovic wrote:


[snip]


3. List of IP addresses of all controllers:

https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/overcloud-source.yaml#L405


We cannot do this, because resource group doesn't support extended
attributes.

Would need something like:

  {get_attr: [controller_group, networks, ctlplane, 0]}

(ctlplane is the network controller_group servers are on)


I was going to give an explanation of how we could implement this, but
then I realised a patch was going to be easier:

https://review.openstack.org/#/c/106541/
https://review.openstack.org/#/c/106542/


Thanks, that looks great.




4. IP address of the first node in the resource group:

https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/swift-deploy.yaml#L29


Can't do: extended attributes are not supported for the n-th node for
the group either.


I believe this is possible today using:

   {get_attr: [controller_group, resource.0.networks, ctlplane, 0]}


Yeah, I've missed this. I have actually checked the ResourceGroup's
GetAtt method but didn't realise the connection with the GetAtt function
so I hadn't tried it before.



[snip]




Alternatively, we could extend the ResourceGroup's get_attr behaviour:

  {get_attr: [controller_group, resource.0.networks.ctlplane.0]}

but the former is a bit cleaner and more generic.


I wrote a patch that implements this (and also handles (3) above in a
similar manner), but in the end I decided that this:

   {get_attr: [controller_group, resource.0, networks, ctlplane, 0]}

would be better than either that or the current syntax (which was
obviously obscure enough that you didn't discover it). My only
reservation was that it might make things a little weird when we have an
autoscaling API to get attributes from compared with the dotted syntax
that you suggest, but I soon got over it ;)


So now that I understand how this works, I'm not against keeping things
the way we are. There is a consistency there, we just need to document
it and perhaps show some examples.


It kind of fell out of the work I was doing on the patches above anyway. 
It would be harder to _not_ implement this (and the existing way still 
works too).





---


That was the easy stuff, where we can get by with the current
functionality (plus a few fixes).

What follows are examples that really need new intrinsic functions (or
seriously complicating the ResourceGroup attribute code and syntax).


5. Building a list of {ip: ..., name: ...} dictionaries to configure
haproxy:

https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/overcloud-source.yaml#L478


This really calls for a mapping/for-each kind of functionality. Trying
to invent a ResourceGroup syntax for this would be perverse.

Here's what it could look like under Clint's `map` proposal:

  map:
  - ip: {get_attr: [{get_resource: "$1"}, networks, ctlplane, 0]}
    name: {get_attr: [{get_resource: "$1"}, name]}
  - {get_attr: [compute_group, refs]}


This has always been the tricky one :D

IMHO the real problem here is that we're trying to collate the data at
the point where it is consumed, not the point where it is produced. It's
not like we were just given a big blob of data and now have to somehow
extract the useful parts; we produced it ourselves by combining data
from the scaled units. If we didn't get the right data, we have only
ourselves to blame ;)

So if the provider template that defines e.g. a compute node contains
the section:

   outputs:
     host_entry:
       value:
         ip: {get_attr: [compute_server, networks, ctlplane, 0]}
         name: {get_attr: [compute_server, name]}

Then in your main template all you need to do is:

   {get_attr: [compute_group, host_entry]}


Oh this is absolutely wonderful. I've had all the pieces in my head but
I didn't make the connection.

You're completely right about using a provider template here anyway --
that's what I planned to do, but I didn't fully appreciate the
connection between outputs and attributes (even though I knew about it).



to get the list of {ip: ..., name: ...} dicts. (As a bonus, this is
about as straightforward to read as I can imagine it ever getting.)

Note that you *will* want to be using a provider template as the scaled
unit _anyway_, because each compute_server will have associated software
deployments and quite possibly a bunch of other resources that need to
be in the scaled unit.


Yep, exactly.



There is one aspect of this that probably doesn't work yet: originally
outputs and attributes were only allowed to be strings. We changed that
for attributes, but probably not yet for outputs (even though outputs of
provider templates become attributes of the facade resource). But that
should be easy to fix. (And if your data can 

Re: [openstack-dev] [oslo] oslo.serialization repo review

2014-07-15 Thread Doug Hellmann
LGTM. I did leave one comment [1], but it can wait until after the
repo is imported.

Doug

[1] 
https://github.com/cybertron/oslo.serialization/commit/af8fafcf34762898e9e19199690a1d636f5fe748


On Tue, Jul 15, 2014 at 1:42 PM, Ben Nemec  wrote:
> And the link, since I forgot it before:
> https://github.com/cybertron/oslo.serialization
>
> On 07/14/2014 04:59 PM, Ben Nemec wrote:
>> Hi oslophiles,
>>
>> I've (finally) started the graduation of oslo.serialization, and I'm up
>> to the point of having a repo on github that passes the unit tests.
>>
>> I realize there is some more work to be done (e.g. replacing all of the
>> openstack.common files with libs) but my plan is to do that once it's
>> under Gerrit control so we can review the changes properly.
>>
>> Please take a look and leave feedback as appropriate.  Thanks!
>>
>> -Ben
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.serialization repo review

2014-07-15 Thread Ben Nemec
And the link, since I forgot it before:
https://github.com/cybertron/oslo.serialization

On 07/14/2014 04:59 PM, Ben Nemec wrote:
> Hi oslophiles,
> 
> I've (finally) started the graduation of oslo.serialization, and I'm up
> to the point of having a repo on github that passes the unit tests.
> 
> I realize there is some more work to be done (e.g. replacing all of the
> openstack.common files with libs) but my plan is to do that once it's
> under Gerrit control so we can review the changes properly.
> 
> Please take a look and leave feedback as appropriate.  Thanks!
> 
> -Ben
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-15 Thread Stephen Balukoff
+1 to German's and  Carlos' comments.

It's also worth pointing out that some UIs will definitely want to show SAN
information and the like, so either having this available as part of the
API, or as a standard library we write which then gets used by multiple
drivers is going to be necessary.

If we're extracting the Subject Common Name in any place in the code then
we also need to be extracting the Subject Alternative Names at the same
place. From the perspective of the SNI standard, there's no difference in
how these fields should be treated, and if we were to treat SANs
differently then we're both breaking the standard and setting a bad
precedent.

Stephen


On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza 
wrote:

>
> On Jul 15, 2014, at 10:55 AM, Samuel Bercovici 
>  wrote:
>
> > Hi,
> >
> >
> > Obtaining the domain name from the x509 is probably more of a
> driver/backend/device capability, it would make sense to have a library
> that could be used by anyone wishing to do so in their driver code.
>
> You can do what ever you want in *your* driver. The code to extract
> this information will be apart of the API and needs to be mentioned in the
> spec now. PyOpenSSL with PyASN1 are the most likely candidates.
>
> Carlos D. Garza
> >
> > -Sam.
> >
> >
> >
> > From: Eichberger, German [mailto:german.eichber...@hp.com]
> > Sent: Tuesday, July 15, 2014 6:43 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI -
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
> >
> > Hi,
> >
> > My impression was that the frontend would extract the names and hand
> them to the driver.  This has the following advantages:
> >
> > · We can be sure all drivers can extract the same names
> > · No duplicate code to maintain
> > · If we ever allow the user to specify the names on UI rather in
> the certificate the driver doesn’t need to change.
> >
> > I think I saw Adam say something similar in a comment to the code.
> >
> > Thanks,
> > German
> >
> > From: Evgeny Fedoruk [mailto:evge...@radware.com]
> > Sent: Tuesday, July 15, 2014 7:24 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI -
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
> >
> > Hi All,
> >
> > Since this issue came up from TLS capabilities RST doc review, I opened
> a ML thread for it to make the decision.
> > Currently, the document says:
> >
> > “
> > For SNI functionality, tenant will supply list of TLS containers in
> specific
> > Order.
> > In case when specific back-end is not able to support SNI capabilities,
> > its driver should throw an exception. The exception message should state
> > that this specific back-end (provider) does not support SNI capability.
> > The clear sign of listener's requirement for SNI capability is
> > a none empty SNI container ids list.
> > However, reference implementation must support SNI capability.
> >
> > Specific back-end code may retrieve SubjectCommonName and/or
> altSubjectNames
> > from the certificate which will determine the hostname(s) the certificate
> > is associated with.
> >
> > The order of SNI containers list may be used by specific back-end code,
> > like Radware's, for specifying priorities among certificates.
> > In case when two or more uploaded certificates are valid for the same
> DNS name
> > and the tenant has specific requirements around which one wins this
> collision,
> > certificate ordering provides a mechanism to define which cert wins in
> the
> > event of a collision.
> > Employing the order of certificates list is not a common requirement for
> > all back-end implementations.
> > “
> >
> > The question is about SCN and SAN extraction from X509.
> > 1.   Extraction of SCN/ SAN should be done while provisioning and
> not during TLS handshake
> > 2.   Every back-end code/driver must(?) extract SCN and(?) SAN and
> use it for certificate determination for host
> >
> > Please give your feedback
> >
> > Thanks,
> > Evg
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][infra] New package dependencies for oslo.messaging's AMQP 1.0 support.

2014-07-15 Thread Doug Hellmann
On Tue, Jul 15, 2014 at 1:03 PM, Ken Giusti  wrote:
> Hi,
>
> The AMQP 1.0 blueprint proposed for oslo.messaging Juno [0] introduces
> dependencies on a few packages that provide AMQP functionality.
>
> These packages are:
>
> * pyngus - a client API
> * python-qpid-proton - the python bindings for the Proton AMQP library
> * qpid-proton: the AMQP 1.0 library.
>
> pyngus is a pure-python module available at pypi [1].
>
> python-qpid-proton is also available at pypi [2], but it contains a C
> extension.  This C extension requires that the qpid-proton development
> libraries are installed in order to build the extension when
> installing python-qpid-proton.
>
> So this means that oslo.messaging developers, as well as the CI
> systems, etc, will need to have the qpid-proton development packages
> installed.
>
> These packages may be obtained via EPEL for Centos/RHEL systems
> (qpid-proton-c-devel), and via the Qpid project's PPA [3]
> (libqpid-proton2-dev) for Debian/Ubuntu.  They are also available for
> Fedora via the default yum repos.  Otherwise, the source can be pulled
> directly from the Qpid project and built/installed manually [4].

Do you know the timeline for having those added to the Ubuntu cloud
archives? I think we try not to add PPAs in devstack, but I'm not sure
if that's a hard policy.

>
> I'd like to get the blueprint accepted, but I'll have to address these
> new dependencies first.  What is the best way to get these new
> packages into CI, devstack, etc?  And will developers be willing to
> install the proton development libraries, or can this be done
> automagically?

To set up integration tests we'll need an option in devstack to set
the messaging driver to this new one. That flag should also trigger
setting up the dependencies needed. Before you spend time implementing
that, though, we should clarify the policy on PPAs.

Doug

>
> thanks for your help,
>
>
> [0] 
> https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation
> [1] https://pypi.python.org/pypi/pyngus
> [2] https://pypi.python.org/pypi/python-qpid-proton/0.7-0
> [3] https://launchpad.net/~qpid
> [4] http://qpid.apache.org/download.html
>
> --
> Ken Giusti  (kgiu...@gmail.com)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][infra] New package dependencies for oslo.messaging's AMQP 1.0 support.

2014-07-15 Thread Ken Giusti
Hi,

The AMQP 1.0 blueprint proposed for oslo.messaging Juno [0] introduces
dependencies on a few packages that provide AMQP functionality.

These packages are:

* pyngus - a client API
* python-qpid-proton - the python bindings for the Proton AMQP library
* qpid-proton: the AMQP 1.0 library.

pyngus is a pure-python module available at pypi [1].

python-qpid-proton is also available at pypi [2], but it contains a C
extension.  This C extension requires that the qpid-proton development
libraries are installed in order to build the extension when
installing python-qpid-proton.

So this means that oslo.messaging developers, as well as the CI
systems, etc, will need to have the qpid-proton development packages
installed.

These packages may be obtained via EPEL for Centos/RHEL systems
(qpid-proton-c-devel), and via the Qpid project's PPA [3]
(libqpid-proton2-dev) for Debian/Ubuntu.  They are also available for
Fedora via the default yum repos.  Otherwise, the source can be pulled
directly from the Qpid project and built/installed manually [4].

I'd like to get the blueprint accepted, but I'll have to address these
new dependencies first.  What is the best way to get these new
packages into CI, devstack, etc?  And will developers be willing to
install the proton development libraries, or can this be done
automagically?

thanks for your help,


[0] 
https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation
[1] https://pypi.python.org/pypi/pyngus
[2] https://pypi.python.org/pypi/python-qpid-proton/0.7-0
[3] https://launchpad.net/~qpid
[4] http://qpid.apache.org/download.html

-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-15 Thread Carlos Garza

On Jul 15, 2014, at 10:55 AM, Samuel Bercovici 
 wrote:

> Hi,
>  
> 
> Obtaining the domain name from the x509 is probably more of a 
> driver/backend/device capability, it would make sense to have a library that 
> could be used by anyone wishing to do so in their driver code.

You can do whatever you want in *your* driver. The code to extract this 
information will be a part of the API and needs to be mentioned in the spec now. 
PyOpenSSL with PyASN1 are the most likely candidates.

Carlos D. Garza
>  
> -Sam.
>  
>  
>  
> From: Eichberger, German [mailto:german.eichber...@hp.com] 
> Sent: Tuesday, July 15, 2014 6:43 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>  
> Hi,
>  
> My impression was that the frontend would extract the names and hand them to 
> the driver.  This has the following advantages:
>  
> · We can be sure all drivers can extract the same names
> · No duplicate code to maintain
> · If we ever allow the user to specify the names on UI rather in the 
> certificate the driver doesn’t need to change.
>  
> I think I saw Adam say something similar in a comment to the code.
>  
> Thanks,
> German
>  
> From: Evgeny Fedoruk [mailto:evge...@radware.com] 
> Sent: Tuesday, July 15, 2014 7:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
> SubjectCommonName and/or SubjectAlternativeNames from X509
>  
> Hi All,
>  
> Since this issue came up from TLS capabilities RST doc review, I opened a ML 
> thread for it to make the decision.
> Currently, the document says:
>  
> “
> For SNI functionality, tenant will supply list of TLS containers in specific
> Order.
> In case when specific back-end is not able to support SNI capabilities,
> its driver should throw an exception. The exception message should state
> that this specific back-end (provider) does not support SNI capability.
> The clear sign of listener's requirement for SNI capability is
> a none empty SNI container ids list.
> However, reference implementation must support SNI capability.
>  
> Specific back-end code may retrieve SubjectCommonName and/or altSubjectNames
> from the certificate which will determine the hostname(s) the certificate
> is associated with.
>  
> The order of SNI containers list may be used by specific back-end code,
> like Radware's, for specifying priorities among certificates.
> In case when two or more uploaded certificates are valid for the same DNS name
> and the tenant has specific requirements around which one wins this collision,
> certificate ordering provides a mechanism to define which cert wins in the
> event of a collision.
> Employing the order of certificates list is not a common requirement for
> all back-end implementations.
> “
>  
> The question is about SCN and SAN extraction from X509.
> 1.   Extraction of SCN/ SAN should be done while provisioning and not 
> during TLS handshake
> 2.   Every back-end code/driver must(?) extract SCN and(?) SAN and use it 
> for certificate determination for host
>  
> Please give your feedback
>  
> Thanks,
> Evg
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [OSTF] OSTF stops working after password is changed

2014-07-15 Thread Vitaly Kramskikh
We had a short discussion and decided to implement this feature for 5.1 in
this way:

   1. Do not store credentials at all, even in the browser
   2. Do not implement specific handling of auth errors
   3. Make the form hidden by default; it can be shown by clicking a button
   4. There will be a short description

It will look like this:

http://i.imgur.com/0Uwx0M5.png

http://i.imgur.com/VF1skHw.png

I think we'll change the button text to "Provide Credentials" and the
description to "If you changed the credentials after deployment, you need
to provide new ones to run the checks. The credentials won't be stored
anywhere.". Your suggestions are welcome.


2014-07-12 2:54 GMT+04:00 David Easter :

> I think showing this only upon failure is good – if the user is also given
> the option to store the credentials in the browser.  That way, you only have
> to re-enter the credentials once if you want convenience, or do it every
> time if you want improved security.
>
> One downside would be that if you don’t cache the credentials, you’ll have
> to “fail” the auth every time to be given the chance to re-enter the
> credentials.  It may not be obvious that clicking “run tests” will then let
> you enter new credentials.   I was thinking that having a button you can
> press to enter the credentials would make it more obvious, but wouldn’t
> reduce the number of clicks… I.e. either run tests and fail or click “Enter
> credentials” and enter new ones.  The “Enter credential” option would
> obviously be a little faster…
>
> - David J. Easter
>   Director of Product Management,   Mirantis, Inc.
>
> From: Mike Scherbakov 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Friday, July 11, 2014 at 2:36 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Fuel] [OSTF] OSTF stops working after
> password is changed
>
> I'm wondering if we can show all these windows ONLY if there is authz
> failure with existing credentials from Nailgun.
> So the flow would be: user clicks on "Run tests" button, healthcheck tries
> to access OpenStack and fails. It shows up text fields to enter
> tenant/user/pass with the message similar to "Default administrative
> credentials to OpenStack were changed since the deployment time. Please
> provide current credentials so HealthCheck can access OpenStack and run
> verification tests."
>
> I think it should be more obvious this way...
>
> Anyway, it must be a choice for the user whether to store creds in a
> browser.
>
>
> On Fri, Jul 11, 2014 at 8:50 PM, Vitaly Kramskikh  > wrote:
>
>> Hi,
>>
>> In the current implementation we store provided credentials in browser
>> local storage. What's your opinion on that? Maybe we shouldn't store new
>> credentials at all even in browser? So users have to enter them manually
>> every time they want to run OSTF.
>>
>>
>> 2014-06-25 13:47 GMT+04:00 Dmitriy Shulyak :
>>
>> It is possible to change everything so username, password and tenant
>>> fields
>>>
>>> Also this way we will be able to run tests not only as admin user
>>>
>>>
>>> On Wed, Jun 25, 2014 at 12:29 PM, Vitaly Kramskikh <
>>> vkramsk...@mirantis.com> wrote:
>>>
 Dmitry,

 Fields or field? Do we need to provide password only or other
 credentials are needed?


 2014-06-25 13:02 GMT+04:00 Dmitriy Shulyak :

 Looks like we will stick to #2 option, as most reliable one.
>
> - we have no way to know that openrc is changed, even if some scripts
> relies on it - ostf should not fail with auth error
> - we can create ostf user in post-deployment stage, but i heard that
> some ceilometer tests relied on admin user, also
>   operator may not want to create additional user, for some reasons
>
> So, everybody is ok with additional fields on HealthCheck tab?
>
>
>
>
> On Fri, Jun 20, 2014 at 8:17 PM, Andrew Woodward 
> wrote:
>
>> The openrc file has to be up to date for some of the HA scripts to
>> work, we could just source that.
>>
>> On Fri, Jun 20, 2014 at 12:12 AM, Sergii Golovatiuk
>>  wrote:
>> > +1 for #2.
>> >
>> > ~Sergii
>> >
>> >
>> > On Fri, Jun 20, 2014 at 1:21 AM, Andrey Danin 
>> wrote:
>> >>
>> >> +1 to Mike. Let the user provide actual credentials and use them
>> in place.
>> >>
>> >>
>> >> On Fri, Jun 20, 2014 at 2:01 AM, Mike Scherbakov
>> >>  wrote:
>> >>>
>> >>> I'm in favor of #2. I think users might not want to have their
>> password
>> >>> stored in Fuel Master node.
>> >>> And if so, then it actually means we should not save it when user
>> >>> provides it on HealthCheck tab.
>> >>>
>> >>>
>> >>> On Thu, Jun 19, 2014 at 8:05 PM, Vitaly Kramskikh
>> >>>  wrote:
>> 
>>  Hi folks,
>> 
>> 

Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

2014-07-15 Thread Chris Friesen

On 07/14/2014 12:10 PM, Jay Pipes wrote:

On 07/14/2014 10:16 AM, Sylvain Bauza wrote:



From an operator perspective, people waited so long for having a
scheduler doing "scheduling" and not only "resource placement".


Could you elaborate a bit here? What operators are begging for the
scheduler to do more than resource placement? And if they are begging
for this, what use cases are they trying to address?


I'm curious about this as well: what more than resource placement 
*should* the scheduler handle?


On the other hand, I *do* see a usecase for a more holistic scheduler 
that can take into account a whole group of resources at once (multiple 
instances, volumes, networks, etc. with various constraints on them, 
combined with things like server groups and host aggregates).


In a simple scenario, suppose a compute node fails.  I want to evacuate 
all the instances that had been on that compute node, but ideally I'd 
like the scheduler to look at all the instances simultaneously to try to 
place them...if it does them one at a time it might make a decision that 
doesn't use resources in an optimal way, possibly even resulting in the 
failure of one or more evacuations even though there is technically 
sufficient room in the cluster for all instances.


Alternately, suppose I have a group of instances that really want to be 
placed on the same physical network segment.  (For low latency, maybe.)


Lastly, I think that any information that fed into the original 
scheduler decision should be preserved for use by subsequent scheduling 
operations (migration, evacuation, etc.)  This includes stuff like 
image/flavor metadata, boot-time scheduler hints, etc.


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Meeting time change

2014-07-15 Thread Kurt Griffiths
Hi folks, we’ve been talking about this in IRC, but I wanted to bring it to the 
ML to get broader feedback and make sure everyone is aware. We’d like to change 
our meeting time to better accommodate folks that live around the globe. 
Proposals:

Tuesdays, 1900 UTC
Wednesdays, 2000 UTC
Wednesdays, 2100 UTC

I believe these time slots are free, based on: 
https://wiki.openstack.org/wiki/Meetings

Please respond with ONE of the following:

A. None of these times work for me
B. An ordered list of the above times, by preference
C. I am a robot

Cheers,
Kurt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

2014-07-15 Thread Dugger, Donald D
Unfortunately, much as I agree with your sentiment (Death to MS Outlook) my IT 
overlords have pretty much forced me into using it.  I still top post but try 
and use some copied context (typically by adding an `in re:' to be explicit) so 
you know what part of the long email I'm referring to.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Tuesday, July 15, 2014 9:05 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

Hi Paul, thanks for your reply. Comments inline.

BTW, is there any way to reply inline instead of top-posting? On these longer 
emails, it gets hard sometimes to follow your reply to specific things I 
mentioned (vs. what John G mentioned).

Death to MS Outlook.

On 07/14/2014 04:40 PM, Murray, Paul (HP Cloud) wrote:
> On extensible resource tracking
>
> Jay, I am surprised to hear you say no one has explained to you why 
> there is an extensible resource tracking blueprint. It's simple, there 
> was a succession of blueprints wanting to add data about this and that 
> to the resource tracker and the scheduler and the database tables used 
> to communicate. These included capabilities, all the stuff in the 
> stats, rxtx_factor, the equivalent for cpu (which only works on one 
> hypervisor I think), pci_stats and more were coming including,
>
> 
>https://blueprints.launchpad.net/nova/+spec/network-bandwidth-entitleme
>nt https://blueprints.launchpad.net/nova/+spec/cpu-entitlement
>
> So, in short, your claim that there are no operators asking for 
> additional stuff is simply not true.

A few things about the above blueprints

1) Neither above blueprint is approved.

2) Neither above blueprint nor the extensible resource tracker blueprint 
contains a single *use case*. The blueprints are full of statements like "We 
want to extend this model to add a measure of XXX" and "We propose a unified 
API to support YYY", however none of them actually contains a real use case. A 
use case is in the form of "As a XXX user, I want to be able to YYY so that my 
ZZZ can do AAA." Blueprints without use cases are not necessarily things to be 
disregarded, but when the blueprint proposes a significant change in 
behaviour/design or a new feature, without specifying one or more use cases 
that are satisfied by the proposed spec, the blueprint is suspicious, in my 
mind.

3) The majority of the feature requests in the CPUEntitlement are enabled with 
the use of existing host aggregates and their cpu_allocation_ratios and Dan 
Berrange's work on adding NUMA topology aspects to the compute node and 
flavours.

4) In my previous emails, I was pretty specific that I had never met a single 
operator or admin that was "sitting there tinkering with weight multipliers" 
trying to control the placement of VMs in their cloud. When I talk about the 
*needless complexity* in the current scheduler design, I am talking 
specifically about the weigher multipliers. I can guarantee you that there 
isn't a single person out there sitting behind the scenes going "Oooh, let me 
change my ram weigher multiplier from 1.0 to .675 and see what happens". It's 
just not something that is done -- that is way too low of a level for the 
Compute scheduler to be thinking at. The Nova scheduler *is not a process or 
thread scheduler*. Folks who think that the Nova scheduler should emulate the 
Linux kernel scheduling policies and strategies are thinking on *completely* 
the wrong level, IMO. We should be focusing on making the scheduler *simpler*, 
with admin users *easily* able to figure out how to control placement decisions 
for their host aggregates and, more importantly, allow *tenant-by-tenant sorting 
policies* [1] so that scheduling decisions for different classes of tenants can 
be controlled distinctly.

> Around about the Icehouse summit (I think) it was suggested that we 
> should stop the obvious trend and add a way to make resource tracking 
> extensible, similar to metrics, which had just been added as an 
> extensible way of collecting on going usage data (because that was 
> also wanted).

OK, I do understand that. I actually think it would have been more appropriate 
to define real models for these new resource types instead of making it a 
free-for-all with too much ability to support out-of-tree custom, non-standard 
resource types, but I understand the idea behind it.

> The json blob you refer to was down to the bad experience of the 
> compute_node_stats table implemented for stats - which had a 
> particular performance hit because it required an expensive join.
> This was dealt with by removing the table and adding a string field to 
> contain the data as a json blob. A pure performance optimization.

Interesting. This is good to know (and would have been good to note on the ERT 
blueprint).

The problem I have with this is that we ar

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-15 Thread Samuel Bercovici
Hi,

I think that the discussion has asked that obtaining information out of the 
x509 via the SAN field not be defined as mandatory.

For example, Radware's backend extracts this information from the x509 in the 
(virtual) device itself, so specifying DNS values different from what exists in 
the x509 is not relevant.
I think the NetScaler case is similar, with the exception (if I understand 
correctly) that it does not extract the values from the SAN field. Also in this 
case, if the front end provides the domain name outside the x509, it will not 
matter.

Obtaining the domain name from the x509 is probably more of a 
driver/backend/device capability; it would make sense to have a library that 
could be used by anyone wishing to do so in their driver code.

-Sam.



From: Eichberger, German [mailto:german.eichber...@hp.com]
Sent: Tuesday, July 15, 2014 6:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

Hi,

My impression was that the frontend would extract the names and hand them to 
the driver.  This has the following advantages:


* We can be sure all drivers can extract the same names

* No duplicate code to maintain

* If we ever allow the user to specify the names on UI rather in the 
certificate the driver doesn't need to change.

I think I saw Adam say something similar in a comment to the code.

Thanks,
German

From: Evgeny Fedoruk [mailto:evge...@radware.com]
Sent: Tuesday, July 15, 2014 7:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

Hi All,

Since this issue came up from TLS capabilities RST doc review, I opened a ML 
thread for it to make the decision.
Currently, the document says:

"
For SNI functionality, tenant will supply list of TLS containers in specific
Order.
In case when specific back-end is not able to support SNI capabilities,
its driver should throw an exception. The exception message should state
that this specific back-end (provider) does not support SNI capability.
The clear sign of listener's requirement for SNI capability is
a none empty SNI container ids list.
However, reference implementation must support SNI capability.

Specific back-end code may retrieve SubjectCommonName and/or altSubjectNames
from the certificate which will determine the hostname(s) the certificate
is associated with.

The order of SNI containers list may be used by specific back-end code,
like Radware's, for specifying priorities among certificates.
In case when two or more uploaded certificates are valid for the same DNS name
and the tenant has specific requirements around which one wins this collision,
certificate ordering provides a mechanism to define which cert wins in the
event of a collision.
Employing the order of certificates list is not a common requirement for
all back-end implementations.
"

The question is about SCN and SAN extraction from X509.

1.   Extraction of SCN/ SAN should be done while provisioning and not 
during TLS handshake

2.   Every back-end code/driver must(?) extract SCN and(?) SAN and use it 
for certificate determination for host

Please give your feedback

Thanks,
Evg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-15 Thread Carlos Garza

On Jul 15, 2014, at 9:24 AM, Evgeny Fedoruk  wrote:

> The question is about SCN and SAN extraction from X509.
> 1.   Extraction of SCN/ SAN should be done while provisioning and not 
> during TLS handshake
   Yes, that makes the most sense. If some strange backend really wants to 
repeatedly extract this during the TLS handshake, I guess they are free to do 
so, although it's pretty brain-damaged since the extracted fields will always 
be the same.

> 2.   Every back-end code/driver must(?) extract SCN and(?) SAN and use it 
> for certificate determination for host

    No need for this to be in driver code. It was my understanding that the 
X509 was going to be pulled apart in the API code via pyOpenSSL (which is what 
I'm working on now). Since we would be validating the key and x509 at the API 
layer already, it made more sense to extract the SubjectAltName and SubjectCN 
here as well. If you want to do it in the driver as well, at least use the same 
code that's already in the API layer.
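
For what it's worth, a minimal sketch of that extraction using pyOpenSSL could 
look something like the following (illustrative only, not the actual API code; 
the helper name is made up and error handling is omitted):

    from OpenSSL import crypto

    def get_host_names(pem_data):
        # Load the PEM-encoded certificate and pull out the Subject CN plus
        # any DNS entries from the subjectAltName extension.
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem_data)
        common_name = cert.get_subject().CN
        alt_names = []
        for i in range(cert.get_extension_count()):
            ext = cert.get_extension(i)
            if ext.get_short_name() == b'subjectAltName':
                # str(ext) renders as "DNS:example.com, DNS:www.example.com, ..."
                alt_names = [part.split(':', 1)[1]
                             for part in str(ext).split(', ')
                             if part.startswith('DNS:')]
        return common_name, alt_names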


>  
> Please give your feedback
>  
> Thanks,
> Evg
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Gantt][Scheduler-split] Why we need a Smart Placement Engine as a Service! (was: Scheduler split status (updated))

2014-07-15 Thread Debojyoti Dutta
https://etherpad.openstack.org/p/SchedulerUseCases

[08:43:35]  #action all update the use case etherpad
at https://etherpad.openstack.org/p/SchedulerUseCases

Please update your use cases here ..

debo

On Mon, Jul 14, 2014 at 7:25 PM, Yathiraj Udupi (yudupi)
 wrote:
> Hi all,
>
> Adding to the interesting discussion thread regarding the scheduler split
> and its importance, I would like to pitch in a couple of thoughts in favor
> of Gantt.  It was in the Icehouse summit in HKG in one of the scheduler
> design sessions, I along with a few others (cc’d) pitched a session on Smart
> Resource Placement
> (https://etherpad.openstack.org/p/NovaIcehouse-Smart-Resource-Placement),
> where we pitched for a  Smart Placement Decision Engine  as a Service ,
> addressing cross-service scheduling as one of the use cases.  We pitched the
> idea as to how a stand-alone service can act as a  smart resource placement
> engine, (see figure:
> https://docs.google.com/drawings/d/1BgK1q7gl5nkKWy3zLkP1t_SNmjl6nh66S0jHdP0-zbY/edit?pli=1)
> that can use state data from all the services, and make a unified placement
> decision.   We even have proposed a separate blueprint
> (https://blueprints.launchpad.net/nova/+spec/solver-scheduler with working
> code now here: https://github.com/CiscoSystems/nova-solver-scheduler) called
> Smart Scheduler (Solver Scheduler), which has the goals of being able to do
> smart resource placement taking into account complex constraints
> incorporating compute(nova), storage(cinder), and network constraints.   The
> existing Filter Scheduler or the projects like Smart (Solver) Scheduler (for
> covering the complex constraints scenarios) could easily fulfill the
> decision making aspects of the placement engine.
>
> I believe the Gantt project is the right direction in terms of separating
> out the placement decision concern, and creating a separate scheduler as a
> service, so that it can freely talk to any of the other services, or use a
> unified global state repository and make the unified decision.  Projects
> like Smart(Solver) Scheduler can easily fit into the Gantt Project as
> pluggable drivers to add the additional smarts required.
>
> To make our Smart Scheduler as a service, we currently have prototyped this
> Scheduler as a service providing a RESTful interface to the smart scheduler,
> that is detached from Nova (loosely connected):
> For example, a RESTful request like this (where I am requesting 2 VMs, with
> a requirement of 1 GB disk, and another request for 1 Vm of flavor
> ‘m1.tiny’, but also has a special requirement that it should be close to the
> volume with uuid: “ef6348300bc511e4bc4cc03fd564d1bc" (Compute-Volume
> affinity constraint)) :
>
>
> curl -i -H "Content-Type: application/json" -X POST -d
> '{"instance_requests": [{"num_instances": 2, "request_properties":
> {"instance_type": {"root_gb": 1}}}, {"num_instances": 1,
> "request_properties": {"flavor": "m1.tiny”, “volume_affinity":
> "ef6348300bc511e4bc4cc03fd564d1bc"}}]}'
> http:///smart-scheduler-as-a-service/v1.0/placement
>
>
> provides a placement decision something like this:
>
> {
>
>   "result": [
>
> [
>
>   {
>
> "host": {
>
>   "host": "Host1",
>
>   "nodename": "Node1"
>
> },
>
> "instance_uuid": "VM_ID_0_0"
>
>   },
>
>   {
>
> "host": {
>
>   "host": "Host2",
>
>   "nodename": "Node2"
>
> },
>
> "instance_uuid": "VM_ID_0_1"
>
>   }
>
> ],
>
> [
>
>   {
>
> "host": {
>
>   "host": "Host1",
>
>   "nodename": "Node1"
>
> },
>
> "instance_uuid": "VM_ID_1_0"
>
>   }
>
> ]
>
>   ]
>
> }
>
>
> This placement result can be used by Nova to proceed and complete the
> scheduling.
>
>
> This is where I see the potential for Gantt, which will be a stand alone
> placement decision engine, and can easily accommodate different pluggable
> engines (such as Smart Scheduler
> (https://blueprints.launchpad.net/nova/+spec/solver-scheduler))  to do smart
> placement decisions.
>
>
> Pointers:
> Smart Resource Placement overview:
> https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit?pli=1
> Figure:
> https://docs.google.com/drawings/d/1BgK1q7gl5nkKWy3zLkP1t_SNmjl6nh66S0jHdP0-zbY/edit?pli=1
> Nova Design Session Etherpad:
> https://etherpad.openstack.org/p/NovaIcehouse-Smart-Resource-Placement
> https://etherpad.openstack.org/p/IceHouse-Nova-Scheduler-Sessions
> Smart Scheduler Blueprint:
> https://blueprints.launchpad.net/nova/+spec/solver-scheduler
> Working code: https://github.com/CiscoSystems/nova-solver-scheduler
>
>
> Thanks,
>
> Yathi.
>
>
>
>
>
>
> On 7/14/14, 1:40 PM, "Murray, Paul (HP Cloud)"  wrote:
>
> Hi All,
>
>
>
> I’m sorry I am so late to this lively discussion – it looks a good one! Jay
> has been driving the debate a bit so most of this is in response to his
> comments. But please, anyone should chip in.
>
>
>
> On e

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-15 Thread Eichberger, German
Hi,

My impression was that the frontend would extract the names and hand them to 
the driver.  This has the following advantages:


* We can be sure all drivers can extract the same names

* No duplicate code to maintain

* If we ever allow the user to specify the names in the UI rather than in the 
certificate, the driver doesn't need to change.

I think I saw Adam say something similar in a comment to the code.

Thanks,
German

From: Evgeny Fedoruk [mailto:evge...@radware.com]
Sent: Tuesday, July 15, 2014 7:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

Hi All,

Since this issue came up from TLS capabilities RST doc review, I opened a ML 
thread for it to make the decision.
Currently, the document says:

"
For SNI functionality, tenant will supply list of TLS containers in specific
Order.
In case when specific back-end is not able to support SNI capabilities,
its driver should throw an exception. The exception message should state
that this specific back-end (provider) does not support SNI capability.
The clear sign of listener's requirement for SNI capability is
a none empty SNI container ids list.
However, reference implementation must support SNI capability.

Specific back-end code may retrieve SubjectCommonName and/or altSubjectNames
from the certificate which will determine the hostname(s) the certificate
is associated with.

The order of SNI containers list may be used by specific back-end code,
like Radware's, for specifying priorities among certificates.
In case when two or more uploaded certificates are valid for the same DNS name
and the tenant has specific requirements around which one wins this collision,
certificate ordering provides a mechanism to define which cert wins in the
event of a collision.
Employing the order of certificates list is not a common requirement for
all back-end implementations.
"

The question is about SCN and SAN extraction from X509.

1.   Extraction of SCN/ SAN should be done while provisioning and not 
during TLS handshake

2.   Every back-end code/driver must(?) extract SCN and(?) SAN and use it 
for certificate determination for host

Please give your feedback

Thanks,
Evg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

2014-07-15 Thread Debojyoti Dutta
https://etherpad.openstack.org/p/SchedulerUseCases

[08:43:35]  #action all update the use case etherpad
at https://etherpad.openstack.org/p/SchedulerUseCases

Please update your use cases here ..

thx
debo

On Tue, Jul 15, 2014 at 2:50 AM, Sylvain Bauza  wrote:
> Le 14/07/2014 20:10, Jay Pipes a écrit :
>> On 07/14/2014 10:16 AM, Sylvain Bauza wrote:
>>> Le 12/07/2014 06:07, Jay Pipes a écrit :
 On 07/11/2014 07:14 AM, John Garbutt wrote:
> On 10 July 2014 16:59, Sylvain Bauza  wrote:
>> Le 10/07/2014 15:47, Russell Bryant a écrit :
>>> On 07/10/2014 05:06 AM, Sylvain Bauza wrote:
 Hi all,

 === tl;dr: Now that we agree on waiting for the split
 prereqs to be done, we debate on if ResourceTracker should
 be part of the scheduler code and consequently Scheduler
 should expose ResourceTracker APIs so that Nova wouldn't
 own compute nodes resources. I'm proposing to first come
 with RT as Nova resource in Juno and move ResourceTracker
 in Scheduler for K, so we at least merge some patches by
 Juno. ===

 Some debates occured recently about the scheduler split, so
 I think it's important to loop back with you all to see
 where we are and what are the discussions. Again, feel free
 to express your opinions, they are welcome.
>>> Where did this resource tracker discussion come up?  Do you
>>> have any references that I can read to catch up on it?  I
>>> would like to see more detail on the proposal for what should
>>> stay in Nova vs. be moved.  What is the interface between
>>> Nova and the scheduler here?
>>
>> Oh, missed the most important question you asked. So, about
>> the interface in between scheduler and Nova, the original
>> agreed proposal is in the spec
>> https://review.openstack.org/82133 (approved) where the
>> Scheduler exposes : - select_destinations() : for querying the
>> scheduler to provide candidates - update_resource_stats() : for
>> updating the scheduler internal state (ie. HostState)
>>
>> Here, update_resource_stats() is called by the
>> ResourceTracker, see the implementations (in review)
>> https://review.openstack.org/82778 and
>> https://review.openstack.org/104556.
>>
>> The alternative that has just been raised this week is to
>> provide a new interface where ComputeNode claims for resources
>> and frees these resources, so that all the resources are fully
>> owned by the Scheduler. An initial PoC has been raised here
>> https://review.openstack.org/103598 but I tried to see what
>> would be a ResourceTracker proxified by a Scheduler client here
>> : https://review.openstack.org/105747. As the spec hasn't been
>> written, the names of the interfaces are not properly defined
>> but I made a proposal as : - select_destinations() : same as
>> above - usage_claim() : claim a resource amount -
>> usage_update() : update a resource amount - usage_drop(): frees
>> the resource amount
>>
>> Again, this is a dummy proposal, a spec has to written if we
>> consider moving the RT.
>
> While I am not against moving the resource tracker, I feel we
> could move this to Gantt after the core scheduling has been
> moved.

 Big -1 from me on this, John.

 Frankly, I see no urgency whatsoever -- and actually very little
 benefit -- to moving the scheduler out of Nova. The Gantt project I
 think is getting ahead of itself by focusing on a split instead of
 focusing on cleaning up the interfaces between nova-conductor,
 nova-scheduler, and nova-compute.

>>>
>>> -1 on saying there is no urgency. Don't you see the NFV group saying
>>> each meeting what is the status of the scheduler split ?
>>
>> Frankly, I don't think a lot of the NFV use cases are well-defined.
>>
>> Even more frankly, I don't see any benefit to a split-out scheduler to
>> a single NFV use case.
>
> I don't know if you can, but if you're interested in giving feedback to
> the NFV team, we do run weekly meeting on #openstack-meeting-alt every
> Wednesday 2pm UTC.
>
> You can find a list of all the associated blueprints here
> https://wiki.openstack.org/wiki/Teams/NFV#Active_Blueprints whose list
> is processed hourly by a backend script so it generates a Gerrit
> dashboard accessible here : http://nfv.russellbryant.net
>
> By saying that, you can find
> https://blueprints.launchpad.net/nova/+spec/solver-scheduler as a
> possible use-case for NFV.
> As Paul and Yathi said, there is a need for a global resource placement
> engine able to cope with both network and compute resources if we need
> to provide NFV use-cases, that appears to me quite clearly and that's
> why I joined the NFV team.
>
>>
>>> Don't you see each Summit the lots of talks (and people attending
>>> them) talking about how OpenStack should look 

Re: [openstack-dev] [gantt] scheduler group meeting agenda

2014-07-15 Thread Debojyoti Dutta
https://etherpad.openstack.org/p/SchedulerUseCases

[08:43:35]  #action all update the use case etherpad
at https://etherpad.openstack.org/p/SchedulerUseCases

Please update your use cases here ..

2014-07-14 20:57 GMT-07:00 Dugger, Donald D :
> 1) Forklift (Tasks & status)
> 2) Opens
>
> --
> Don Dugger
> "Censeo Toto nos in Kansa esse decisse." - D. Gale
> Ph: 303/443-3786
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-Debo~

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

2014-07-15 Thread Jay Pipes

Hi Paul, thanks for your reply. Comments inline.

BTW, is there any way to reply inline instead of top-posting? On these
longer emails, it gets hard sometimes to follow your reply to specific
things I mentioned (vs. what John G mentioned).

Death to MS Outlook.

On 07/14/2014 04:40 PM, Murray, Paul (HP Cloud) wrote:

On extensible resource tracking

Jay, I am surprised to hear you say no one has explained to you why
there is an extensible resource tracking blueprint. It’s simple,
there was a succession of blueprints wanting to add data about this
and that to the resource tracker and the scheduler and the database
tables used to communicate. These included capabilities, all the
stuff in the stats, rxtx_factor, the equivalent for cpu (which only
works on one hypervisor I think), pci_stats and more were coming
including,

https://blueprints.launchpad.net/nova/+spec/network-bandwidth-entitlement
https://blueprints.launchpad.net/nova/+spec/cpu-entitlement

So, in short, your claim that there are no operators asking for
additional stuff is simply not true.


A few things about the above blueprints

1) Neither above blueprint is approved.

2) Neither above blueprint nor the extensible resource tracker blueprint
contains a single *use case*. The blueprints are full of statements like
"We want to extend this model to add a measure of XXX" and "We propose a
unified API to support YYY", however none of them actually contains a
real use case. A use case is in the form of "As a XXX user, I want to be
able to YYY so that my ZZZ can do AAA." Blueprints without use cases are
not necessarily things to be disregarded, but when the blueprint
proposes a significant change in behaviour/design or a new feature,
without specifying one or more use cases that are satisfied by the
proposed spec, the blueprint is suspicious, in my mind.

3) The majority of the feature requests in the CPUEntitlement are 
enabled with the use of existing host aggregates and their
cpu_allocation_ratios and Dan Berrange's work on adding NUMA topology
aspects to the compute node and flavours.

4) In my previous emails, I was pretty specific that I had never met a
single operator or admin that was "sitting there tinkering with weight
multipliers" trying to control the placement of VMs in their cloud. When
I talk about the *needless complexity* in the current scheduler design,
I am talking specifically about the weigher multipliers. I can guarantee
you that there isn't a single person out there sitting behind the scenes
going "Oooh, let me change my ram weigher multiplier from 1.0 to .675
and see what happens". It's just not something that is done -- that is
way too low of a level for the Compute scheduler to be thinking at. The
Nova scheduler *is not a process or thread scheduler*. Folks who think
that the Nova scheduler should emulate the Linux kernel scheduling
policies and strategies are thinking on *completely* the wrong level,
IMO. We should be focusing on making the scheduler *simpler*, with admin
users *easily* able to figure out how to control placement decisions for
their host aggregates and, more importantly, allow *tenant-by-tenant
sorting policies* [1] so that scheduling decisions for different classes
of tenants can be controlled distinctly.


Around about the Icehouse summit (I think) it was suggested that we
should stop the obvious trend and add a way to make resource
tracking extensible, similar to metrics, which had just been added as
an extensible way of collecting on going usage data (because that
was also wanted).


OK, I do understand that. I actually think it would have been more
appropriate to define real models for these new resource types instead
of making it a free-for-all with too much ability to support out-of-tree
custom, non-standard resource types, but I understand the idea behind it.


The json blob you refer to was down to the bad experience of the
compute_node_stats table implemented for stats – which had a
particular performance hit because it required an expensive join.
This was dealt with by removing the table and adding a string field
to contain the data as a json blob. A pure performance optimization.


Interesting. This is good to know (and would have been good to note on
the ERT blueprint).

The problem I have with this is that we are muddying the code and the DB
schema unnecessarily because we don't want to optimize our DB read code
to not pull giant BLOB columns when we don't need or want to. Instead,
we take the easy route and shove everything into a JSON BLOB field.


Clearly there is no need to store things in this way and with Nova
objects being introduced there is a means to provide strict type
checking on the data even if it is stored as json blobs in the
database.


The big problem I have with the ERT implementation is that it does not
model the *resource*. Instead, it provides a plugin interface that is
designed to take a BLOB of data and pass back a BLOB of data, and
doesn't actually model the resource

Re: [openstack-dev] REST API access to configuration options

2014-07-15 Thread Doug Hellmann
On Tue, Jul 15, 2014 at 10:25 AM, Mark McLoughlin  wrote:
> On Tue, 2014-07-15 at 13:00 +0100, Henry Nash wrote:
>> Mark,
>>
>>
>> Thanks for your comments (as well as remarks on the WIP code-review).
>>
>>
>> So clearly gathering and analysing log files is an alternative
>> approach, perhaps not as immediate as an API call.  In general, I
>> believe that the more capability we provide via easy-to-consume APIs
>> (with appropriate permissions) the more effective (and innovative)
>> ways of management of OpenStack we will achieve (easier to build
>> automated management systems).
>
> I'm skeptical - like Joe says, this is a general problem and management
> tooling will have generic ways of tackling this without using a REST
> API.
>
>>   In terms of multi API servers, obviously each server would respond
>> to the API with the values it has set, so operators could check any or
>> all of the serversand this actually becomes more important as
>> people distribute config files around to the various servers (since
>> more chance of something getting out of sync).
>
> The fact that it only deals with API servers, and that you need to
> bypass the load balancer in order to iterate over all API servers, makes
> this of very limited use IMHO.

I have to agree. Those configuration management tools push settings
out to the cluster. I don't see a lot of value in having them query an
API to see what settings are already in place.

FWIW, we had a very similar discussion in ceilometer early on, because
we thought it might be tricky to configure a distributed set of
collector daemons exactly right. In the end we decided to rely on the
existing configuration tools to push out the settings rather than
trying to build that into the daemons. The nodes need enough
configuration that the service can come online to get the rest of its
configuration, and at that point the work of passing out the config is
done and it might as well include all of the settings. Providing an
API to check the config is similar -- the API service needs enough of
its settings to know how to run and where its config file is located
to provide an API for asking questions about what is in that file.

Doug

>
> Thanks,
> Mark.
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] "Neutron Ryu" status

2014-07-15 Thread YAMAMOTO Takashi
> On Tue, Jul 15, 2014 at 9:12 AM, YAMAMOTO Takashi
>  wrote:
>> if you are wondering why ofagent CI ("Neutron Ryu") reported a failure
>> (non-voting) for your review recently, you can probably safely ignore it.
>> sorry for the inconvenience.
>>
>> the CI has been fixed recently.
>> unfortunately ofagent on master is broken (a consequence of the broken CI)
>> and the CI started detecting the breakage correctly.  the breakage
>> will be fixed if the following changes are merged.
>> https://review.openstack.org/#/c/103764/
>> https://review.openstack.org/#/c/106701/
>>
>> YAMAMOTO Takashi
>>
> Thanks for the update on this YAMAMOTO. I'll work to try and get those
> two bug fixes merged by having other cores review them ASAP so we can
> get ofagent working again.

thank you!

YAMAMOTO Takashi

> 
> Kyle
> 
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] "Neutron Ryu" status

2014-07-15 Thread Kyle Mestery
On Tue, Jul 15, 2014 at 9:12 AM, YAMAMOTO Takashi
 wrote:
> if you are wondering why ofagent CI ("Neutron Ryu") reported a failure
> (non-voting) for your review recently, you can probably safely ignore it.
> sorry for the inconvenience.
>
> the CI has been fixed recently.
> unfortunately ofagent on master is broken (a consequence of the broken CI)
> and the CI started detecting the breakage correctly.  the breakage
> will be fixed if the following changes are merged.
> https://review.openstack.org/#/c/103764/
> https://review.openstack.org/#/c/106701/
>
> YAMAMOTO Takashi
>
Thanks for the update on this YAMAMOTO. I'll work to try and get those
two bug fixes merged by having other cores review them ASAP so we can
get ofagent working again.

Kyle

>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] REST API access to configuration options

2014-07-15 Thread Mark McLoughlin
On Tue, 2014-07-15 at 13:00 +0100, Henry Nash wrote:
> Mark,
> 
> 
> Thanks for your comments (as well as remarks on the WIP code-review).
> 
> 
> So clearly gathering and analysing log files is an alternative
> approach, perhaps not as immediate as an API call.  In general, I
> believe that the more capability we provide via easy-to-consume APIs
> (with appropriate permissions) the more effective (and innovative)
> ways of management of OpenStack we will achieve (easier to build
> automated management systems).

I'm skeptical - like Joe says, this is a general problem and management
tooling will have generic ways of tackling this without using a REST
API.

>   In terms of multi API servers, obviously each server would respond
> to the API with the values it has set, so operators could check any or
> all of the servers... and this actually becomes more important as
> people distribute config files around to the various servers (since
> more chance of something getting out of sync).

The fact that it only deals with API servers, and that you need to
bypass the load balancer in order to iterate over all API servers, makes
this of very limited use IMHO.

Thanks,
Mark.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-15 Thread Evgeny Fedoruk
Hi All,

Since this issue came up from TLS capabilities RST doc review, I opened a ML 
thread for it to make the decision.
Currently, the document says:

"
For SNI functionality, tenant will supply list of TLS containers in specific
Order.
In case when specific back-end is not able to support SNI capabilities,
its driver should throw an exception. The exception message should state
that this specific back-end (provider) does not support SNI capability.
The clear sign of a listener's requirement for SNI capability is
a non-empty SNI container IDs list.
However, reference implementation must support SNI capability.

Specific back-end code may retrieve SubjectCommonName and/or altSubjectNames
from the certificate which will determine the hostname(s) the certificate
is associated with.

The order of SNI containers list may be used by specific back-end code,
like Radware's, for specifying priorities among certificates.
In case when two or more uploaded certificates are valid for the same DNS name
and the tenant has specific requirements around which one wins this collision,
certificate ordering provides a mechanism to define which cert wins in the
event of a collision.
Employing the order of certificates list is not a common requirement for
all back-end implementations.
"

The question is about SCN and SAN extraction from X509.

1.   Extraction of SCN/SAN should be done at provisioning time and not
during the TLS handshake

2.   Every back-end code/driver must(?) extract the SCN and(?) SAN and use them
to determine which certificate serves which hostname
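
For illustration, a shared helper for this could look roughly like the
following sketch; it assumes pyOpenSSL, and the string-based parsing of the
subjectAltName extension is deliberately simplistic:

from OpenSSL import crypto


def extract_cert_hostnames(pem_data):
    """Return (subject CN, [subjectAltName DNS entries]) from a PEM cert."""
    cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem_data)
    common_name = cert.get_subject().CN
    alt_names = []
    for index in range(cert.get_extension_count()):
        ext = cert.get_extension(index)
        if ext.get_short_name() == 'subjectAltName':
            # str(ext) looks like "DNS:example.com, DNS:www.example.com"
            alt_names = [entry.split(':', 1)[1].strip()
                         for entry in str(ext).split(',')
                         if entry.strip().startswith('DNS:')]
    return common_name, alt_names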

Please give your feedback

Thanks,
Evg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-15 Thread Morgan Fainberg
On Tuesday, July 15, 2014, Steven Hardy  wrote:

> On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
> > On 07/14/2014 11:47 AM, Steven Hardy wrote:
> > >Hi all,
> > >
> > >I'm probably missing something, but can anyone please tell me when
> devstack
> > >will be moving to keystone v3, and in particular when API auth_token
> will
> > >be configured such that auth_version is v3.0 by default?
> > >
> > >Some months ago, I posted this patch, which switched auth_version to
> v3.0
> > >for Heat:
> > >
> > >https://review.openstack.org/#/c/80341/
> > >
> > >That patch was nack'd because there was apparently some version
> discovery
> > >code coming which would handle it, but AFAICS I still have to manually
> > >configure auth_version to v3.0 in the heat.conf for our API to work
> > >properly with requests from domains other than the default.
> > >
> > >The same issue is observed if you try to use non-default-domains via
> > >python-heatclient using this soon-to-be-merged patch:
> > >
> > >https://review.openstack.org/#/c/92728/
> > >
> > >Can anyone enlighten me here, are we making a global devstack move to
> the
> > >non-deprecated v3 keystone API, or do I need to revive this devstack
> patch?
> > >
> > >The issue for Heat is we support notifications from "stack domain
> users",
> > >who are created in a heat-specific domain, thus won't work if the
> > >auth_token middleware is configured to use the v2 keystone API.
> > >
> > >Thanks for any information :)
> > >
> > >Steve
> > There are reviews out there in client land now that should work.  I was
> > testing discover just now and it seems to be doing the right thing.  If the
> > V2.0 or V3 suffix is chopped off the AUTH_URL, the client should be able to
> > handle everything from there on forward.
>
> Perhaps I should restate my problem, as I think perhaps we still have
> crossed wires:
>
> - Certain configurations of Heat *only* work with v3 tokens, because we
>   create users in a non-default domain
> - Current devstack still configures versioned endpoints, with v2.0 keystone
> - Heat breaks in some circumstances on current devstack because of this.
> - Adding auth_version='v3.0' to the auth_token section of heat.conf fixes
>   the problem.
>
> So, back in March, client changes were promised to fix this problem, and
> now, in July, they still have not - do I revive my patch, or are fixes for
> this really imminent this time?
>
> Basically I need the auth_token middleware to accept a v3 token for a user
> in a non-default domain, e.g validate it *always* with the v3 API not v2.0,
> even if the endpoint is still configured versioned to v2.0.
>
> Sorry to labour the point, but it's frustrating to see this still broken
> so long after I proposed a fix and it was rejected.
>
>
We just did a test converting over the default to v3 (and falling back to
v2 as needed, yes fallback will still be needed) yesterday (Dolph posted a
couple of test patches and they seemed to succeed - yay!!) It looks like it
will just work. Now there is a big caveat: this default will only change
in the keystone middleware project, and it needs to have a patch or three
get through gate converting projects over to use it before we accept the
code.

Nova has approved the patch to switch over, it is just fighting with Gate.
Other patches are proposed for other projects and are in various states of
approval.

So, in short. This is happening and soon. There are some things that need
to get through gate and then we will do the release of keystonemiddleware
that should address your problem here. At least my reading of the issue and
the fixes that are pending indicates as much. (Please let me know if I am
misreading anything here).

Cheers,
Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-15 Thread Erlon Cruz
> Leave the option about when to submit a design vs. when to submit code to
the
> contributor.

That is what is being done so far, right? It is hard to know (mainly if you are
a first-time contributor) when a change is big or not. I can see lots
of patches being submitted without the spec getting rejected and being
asked for a spec. This line between large-needs-spec/small-doesn't-need cannot
be subjective.

Why doesn't Launchpad have a better discussion capability? I mean, users
should be able to post comments on the whiteboard of the blueprint/bug. Then
a quick discussion there could be used to decide if a spec would be needed.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] python-glanceclient with requests is spamming the logs

2014-07-15 Thread Matt Riedemann
I've been looking at bug 1341777 since yesterday originally because of 
g-api logs and this warning:


"HttpConnectionPool is full, discarding connection: 127.0.0.1"

But that's been around awhile and it sounds like an issue with 
python-swiftclient since it started using python-requests (see bug 1295812).


I also noticed that the warning started spiking in the n-cpu and
c-vol logs on 7/11 and traced that back to this change in
python-glanceclient to start using requests:


https://review.openstack.org/#/c/78269/

This is nasty because it's generating around 166K warnings since 7/11 in 
those logs:


http://goo.gl/p0urYm

It's a big change in glanceclient so I wouldn't want to propose a revert 
for this, but hopefully the glance team can sort this out quickly since 
it's going to impact our elastic search cluster.
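
For what it's worth, the warning itself just means urllib3's per-host pool
overflowed. The generic requests-level knob looks roughly like the sketch
below; whether glanceclient exposes its Session for this is an open question:

import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# Raise the per-host pool size so overflow connections are kept instead of
# being discarded with the "connection pool is full" warning.
adapter = HTTPAdapter(pool_connections=10, pool_maxsize=25)
session.mount('http://', adapter)
session.mount('https://', adapter)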


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] REST API access to configuration options

2014-07-15 Thread Joe Gordon
On Tue, Jul 15, 2014 at 6:57 AM, Henry Nash 
wrote:

> Joe,
>
> I'd imagine an API like this would be pretty useful for some of these
> config tools - so I'd imagine they might well be consumers of this API.
>

This may solve the OpenStack case, but something like this wouldn't solve
the general issue of configuration management (config options for mysql,
rabbit, apache, load balancers etc.)


>
> Henry
>
> On 15 Jul 2014, at 13:10, Joe Gordon  wrote:
>
>
>
>
> On Tue, Jul 15, 2014 at 5:00 AM, Henry Nash 
> wrote:
>
>> Mark,
>>
>> Thanks for your comments (as well as remarks on the WIP code-review).
>>
>> So clearly gathering and analysing log files is an alternative approach,
>> perhaps not as immediate as an API call.  In general, I believe that the
>> more capability we provide via easy-to-consume APIs (with appropriate
>> permissions) the more effective (and innovative) ways of management of
>> OpenStack we will achieve (easier to build automated management systems).
>>  In terms of multi API servers, obviously each server would respond to the
>> API with the values it has set, so operators could check any or all of the
>> servers... and this actually becomes more important as people distribute
>> config files around to the various servers (since more chance of something
>> getting out of sync).
>>
>
> Where do you see configuration management tools like chef, puppet, and the
> os-*-config tools (http://git.openstack.org/cgit) fit in to this?
>
>
>>
>> Henry
>> On 15 Jul 2014, at 10:08, Mark McLoughlin  wrote:
>>
>> On Tue, 2014-07-15 at 08:54 +0100, Henry Nash wrote:
>>
>> HI
>>
>> As the number of configuration options increases and OpenStack
>> installations become more complex, the chances of incorrect
>> configuration increases. There is no better way of enabling cloud
>> providers to be able to check the configuration state of an OpenStack
>> service than providing a direct REST API that allows the current
>> running values to be inspected. Having an API to provide this
>> information becomes increasingly important for dev/ops style
>> operation.
>>
>> As part of Keystone we are considering adding such an ability (see:
>> https://review.openstack.org/#/c/106558/).  However, since this is the
>> sort of thing that might be relevant to and/or affect other projects,
>> I wanted to get views from the wider dev audience.
>>
>> Any such change obviously has to take security in mind - and as the
>> spec says, just like when we log config options, any options marked as
>> secret will be obfuscated.  In addition, the API will be protected by
>> the normal policy mechanism and is likely in most installations to be
>> left as "admin required".  And of course, since it is an extension, if
>> a particular installation does not want to use it, they don't need to
>> load it.
>>
>> Do people think this is a good idea?  Useful in other projects?
>> Concerned about the risks?
>>
>>
>> I would have thought operators would be comfortable gleaning this
>> information from the log files?
>>
>> Also, this is going to tell you how the API service you connected to was
>> configured. Where there are multiple API servers, what about the others?
>> How do operators verify all of the API servers behind a load balancer
>> with this?
>>
>> And in the case of something like Nova, what about the many other nodes
>> behind the API server?
>>
>> Mark.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][specs] Please stop doing specs for any changes in projects

2014-07-15 Thread Duncan Thomas
On 15 July 2014 15:01, Erlon Cruz  wrote:
>> Leave the option about when to submit a design vs. when to submit code to
>> the contributor.
>
> That is what is being done so far, right? It is hard to know (mainly if you are a
> first-time contributor) when a change is big or not. I can see lots of patches
> being submitted without the spec getting rejected and being asked for a spec.
> This line between large-needs-spec/small-doesn't-need cannot be subjective.

Of course it is subjective, like good code style and a bunch of other
things we review for. If it's hard to know, guess; it doesn't hurt
either way - an unnecessary spec that is really simple should get
approved quickly enough, and if you need a spec but haven't done one
then somebody will ask for one soon enough.

> Why doesn't Launchpad have a better discussion capability? I mean, users
> should be able to post comments on the whiteboard of the blueprint/bug. Then
> a quick discussion there could be used to decide if a spec would be needed.

Launchpad has proven terrible for this, which is a strong driver for
the spec process in the first place.

-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] "Neutron Ryu" status

2014-07-15 Thread YAMAMOTO Takashi
if you are wondering why ofagent CI ("Neutron Ryu") reported a failure
(non-voting) for your review recently, you can probably safely ignore it.
sorry for the inconvenience.

the CI has been fixed recently.
unfortunately ofagent on master is broken (a consequence of the broken CI)
and the CI started detecting the breakage correctly.  the breakage
will be fixed if the following changes are merged.
https://review.openstack.org/#/c/103764/
https://review.openstack.org/#/c/106701/

YAMAMOTO Takashi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance] what to do about tons of http connection pool is full warnings in g-api log?

2014-07-15 Thread Matt Riedemann



On 7/14/2014 5:28 PM, Matt Riedemann wrote:



On 7/14/2014 5:18 PM, Ben Nemec wrote:

On 07/14/2014 04:21 PM, Matt Riedemann wrote:



On 7/14/2014 4:09 PM, Matt Riedemann wrote:

I opened bug 1341777 [1] against glance but it looks like it's due to
the default log level for requests.packages.urllib3.connectionpool in
oslo's log module.

The problem is this warning shows up nearly 420K times in 7 days in
Tempest runs:

WARNING urllib3.connectionpool [-] HttpConnectionPool is full,
discarding connection: 127.0.0.1

So either glance is doing something wrong, or that's logging too
high of
a level (I think it should be debug in this case).  I'm not really sure
how to scope this down though, or figure out what is so damn chatty in
glance-api that is causing this.  It doesn't seem to be causing test
failures, but the rate at which this is logged in glance-api is
surprising.

[1] https://bugs.launchpad.net/glance/+bug/1341777



I found this older thread [1] which led to this in oslo [2], but I'm not
really sure how to use it to make the connectionpool logging quieter in
glance; any guidance there?  It looks like in Joe's change to nova for
oslo.messaging he just changed the value directly in the log module in
nova, something I thought was forbidden.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030763.html

[2] https://review.openstack.org/#/c/94001/
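
(The bare-bones way to quiet just that logger is something like the snippet
below; where it should actually be wired up - glance itself, or oslo's
default_log_levels list - is the open question:)

import logging

# Raise the threshold for this one logger so its WARNING-level
# "connection pool is full" messages are suppressed.
logging.getLogger('requests.packages.urllib3.connectionpool').setLevel(
    logging.ERROR)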



There was a change recently in incubator to address something related,
but since it's setting to WARN I don't think it would get rid of this
message:
https://github.com/openstack/oslo-incubator/commit/3310d8d2d3643da2fc249fdcad8f5000866c4389


It looks like Joe's change was a cherry-pick of the incubator change to
add oslo.messaging, so discouraged but not forbidden (and apparently
during feature freeze, which is understandable).

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah, it sounds like either a problem in glance (because it doesn't allow
configuring the max pool size, so it defaults to 1), or an issue in
python-swiftclient that is being tracked in a different bug:

https://bugs.launchpad.net/python-swiftclient/+bug/1295812



It looks like the issue for the g-api logs was bug 1295812 in
python-swiftclient, around the time it moved to using python-requests.


I noticed last night that the n-cpu/c-vol logs started spiking with the
urllib3 connectionpool warning on 7/11, which is when python-glanceclient
started using requests, so I've changed bug 1341777 to a
python-glanceclient bug.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-15 Thread Steven Hardy
On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
> On 07/14/2014 11:47 AM, Steven Hardy wrote:
> >Hi all,
> >
> >I'm probably missing something, but can anyone please tell me when devstack
> >will be moving to keystone v3, and in particular when API auth_token will
> >be configured such that auth_version is v3.0 by default?
> >
> >Some months ago, I posted this patch, which switched auth_version to v3.0
> >for Heat:
> >
> >https://review.openstack.org/#/c/80341/
> >
> >That patch was nack'd because there was apparently some version discovery
> >code coming which would handle it, but AFAICS I still have to manually
> >configure auth_version to v3.0 in the heat.conf for our API to work
> >properly with requests from domains other than the default.
> >
> >The same issue is observed if you try to use non-default-domains via
> >python-heatclient using this soon-to-be-merged patch:
> >
> >https://review.openstack.org/#/c/92728/
> >
> >Can anyone enlighten me here, are we making a global devstack move to the
> >non-deprecated v3 keystone API, or do I need to revive this devstack patch?
> >
> >The issue for Heat is we support notifications from "stack domain users",
> >who are created in a heat-specific domain, thus won't work if the
> >auth_token middleware is configured to use the v2 keystone API.
> >
> >Thanks for any information :)
> >
> >Steve
> There are reviews out there in client land now that should work.  I was
> testing discover just now and it seems to be doing the right thing.  If the
> V2.0 or V3 suffix is chopped off the AUTH_URL, the client should be able to
> handle everything from there on forward.

Perhaps I should restate my problem, as I think perhaps we still have
crossed wires:

- Certain configurations of Heat *only* work with v3 tokens, because we
  create users in a non-default domain
- Current devstack still configures versioned endpoints, with v2.0 keystone
- Heat breaks in some circumstances on current devstack because of this.
- Adding auth_version='v3.0' to the auth_token section of heat.conf fixes
  the problem.
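
(Concretely, the workaround amounts to this in heat.conf, assuming the usual
[keystone_authtoken] section name:)

[keystone_authtoken]
auth_version = v3.0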

So, back in March, client changes were promised to fix this problem, and
now, in July, they still have not - do I revive my patch, or are fixes for
this really imminent this time?

Basically I need the auth_token middleware to accept a v3 token for a user
in a non-default domain, e.g validate it *always* with the v3 API not v2.0,
even if the endpoint is still configured versioned to v2.0.

Sorry to labour the point, but it's frustrating to see this still broken
so long after I proposed a fix and it was rejected.

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] REST API access to configuration options

2014-07-15 Thread Henry Nash
Joe,

I'd imagine an API like this would be pretty useful for some of these config 
tools - so I'd imagine they might well be consumers of this API.

Henry
On 15 Jul 2014, at 13:10, Joe Gordon  wrote:

> 
> 
> 
> On Tue, Jul 15, 2014 at 5:00 AM, Henry Nash  wrote:
> Mark,
> 
> Thanks for your comments (as well as remarks on the WIP code-review).
> 
> So clearly gathering and analysing log files is an alternative approach, 
> perhaps not as immediate as an API call.  In general, I believe that the more 
> capability we provide via easy-to-consume APIs (with appropriate permissions) 
> the more effective (and innovative) ways of management of OpenStack we will 
> achieve (easier to build automated management systems).  In terms of multi 
> API servers, obviously each server would respond to the API with the values 
> it has set, so operators could check any or all of the servers... and this
> actually becomes more important as people distribute config files around to 
> the various servers (since more chance of something getting out of sync).
> 
> Where do you see configuration management tools like chef, puppet, and the 
> os-*-config tools (http://git.openstack.org/cgit) fit in to this?
>  
> 
> Henry
> On 15 Jul 2014, at 10:08, Mark McLoughlin  wrote:
> 
>> On Tue, 2014-07-15 at 08:54 +0100, Henry Nash wrote:
>>> HI
>>> 
>>> As the number of configuration options increases and OpenStack
>>> installations become more complex, the chances of incorrect
>>> configuration increases. There is no better way of enabling cloud
>>> providers to be able to check the configuration state of an OpenStack
>>> service than providing a direct REST API that allows the current
>>> running values to be inspected. Having an API to provide this
>>> information becomes increasingly important for dev/ops style
>>> operation.
>>> 
>>> As part of Keystone we are considering adding such an ability (see:
>>> https://review.openstack.org/#/c/106558/).  However, since this is the
>>> sort of thing that might be relevant to and/or affect other projects,
>>> I wanted to get views from the wider dev audience.  
>>> 
>>> Any such change obviously has to take security in mind - and as the
>>> spec says, just like when we log config options, any options marked as
>>> secret will be obfuscated.  In addition, the API will be protected by
>>> the normal policy mechanism and is likely in most installations to be
>>> left as "admin required".  And of course, since it is an extension, if
>>> a particular installation does not want to use it, they don't need to
>>> load it.
>>> 
>>> Do people think this is a good idea?  Useful in other projects?
>>> Concerned about the risks?
>> 
>> I would have thought operators would be comfortable gleaning this
>> information from the log files?
>> 
>> Also, this is going to tell you how the API service you connected to was
>> configured. Where there are multiple API servers, what about the others?
>> How do operators verify all of the API servers behind a load balancer
>> with this?
>> 
>> And in the case of something like Nova, what about the many other nodes
>> behind the API server?
>> 
>> Mark.
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Neutron ML2 Blueprints

2014-07-15 Thread Sergey Vasilenko
[1] fixed in https://review.openstack.org/#/c/107046/
Thanks for reporting the bug.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Treating notifications as a contract

2014-07-15 Thread Sandy Walsh
On 7/15/2014 3:51 AM, Mark McLoughlin wrote:
> On Fri, 2014-07-11 at 10:04 +0100, Chris Dent wrote:
>> On Fri, 11 Jul 2014, Lucas Alvares Gomes wrote:
>>
>>> The data format that Ironic will send was part of the spec proposed
>>> and could have been reviewed. I think there's still time to change it
>>> tho, if you have a better format talk to Haomeng, who is the guy
>>> responsible for that work in Ironic, and see if he can change it (We
>>> can put up a following patch to fix the spec with the new format as
>>> well) . But we need to do this ASAP because we want to get it landed
>>> in Ironic soon.
>> It was only after doing the work that I realized how it might be an
>> example for the sake of this discussion. As the architecture of
>> Ceilometer currently exists there still needs to be some measure of
>> custom code, even if the notifications are as I described them.
>>
>> However, if we want to take this opportunity to move some of the
>> smarts from Ceilometer into the Ironic code then the paste that I created
>> might be a guide to make it possible:
>>
>> http://paste.openstack.org/show/86071/
> So you're proposing that all payloads should contain something like:
>
> 'events': [
> # one or more dicts with something like
> {
> # some kind of identifier for the type of event
> 'class': 'hardware.ipmi.temperature',
> 'type': '#thing that indicates threshold, discrete, cumulative',
> 'id': 'DIMM GH VR Temp (0x3b)',
> 'value': '26',
> 'unit': 'C',
> 'extra': {
> ...
> }
>  }
>
> i.e. a class, type, id, value, unit and a space to put additional metadata.

This looks like a particular schema for one event-type (let's say
"foo.sample").  It's hard to extrapolate this one schema to a generic
set of common metadata applicable to all events. Really the only common
stuff we can agree on is the stuff already there: tenant, user, server,
message_id, request_id, timestamp, event_type, etc.

Side note on using notifications for sample data:

1. you should generate a proper notification when the rules of a sample
change (limits, alarms, sources, etc) ... but no actual measurements. 
This would be something like an "ironic.alarm-rule-change" notification
or something
2. you should generate a minimal event for the actual samples "CPU-xxx:
70%" that relates to the previous rule-changing notification. And do
this on a queue something like "foo.sample".

This way, we can keep important notifications in a priority queue and
handle them accordingly (since they hold important data), but let the
samples get routed across less-reliable transports (like UDP) via the
RoutingNotifier.

Also, send the samples one at a time and either a) let them drop on the
floor (UDP) or b) let the aggregator roll them up into something smaller
(sliding window, etc). Making these large notifications contain a list
of samples means we have to store state somewhere on the server until
transmission time, which is something we wouldn't want to rely on.
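
To make that concrete, here is a rough sketch of the two payload shapes being
suggested (every key and value is illustrative, not a proposed schema):

# 1. rule-change notification: important, stays on the reliable queue
rule_change = {
    'event_type': 'ironic.alarm-rule-change',
    'payload': {'meter': 'hardware.ipmi.temperature',
                'threshold': '80', 'unit': 'C'},
}

# 2. minimal sample: cheap, sent one at a time, fine to drop or aggregate
sample = {
    'event_type': 'foo.sample',
    'payload': {'meter': 'hardware.ipmi.temperature',
                'value': '26', 'unit': 'C'},
}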



> On the subject of "notifications as a contract", calling the additional
> metadata field 'extra' suggests to me that there are no stability
> promises being made about those fields. Was that intentional?
>
>> However on that however, if there's some chance that a large change could
>> happen, it might be better to wait, I don't know.
> Unlikely that a larger change will be made in Juno - take small window
> of opportunity to rationalize Ironic's payload IMHO.
>
> Mark.
___ OpenStack-dev mailing
list OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Juno-2 Review Priority

2014-07-15 Thread Kyle Mestery
On Tue, Jul 15, 2014 at 5:00 AM, Salvatore Orlando  wrote:
> Kyle,
>
> It is probably my fault that I did not notice the review assignment page
> beforehand.
> Thankfully, I'm already engaged in reviewing the db 'healing' work. On the
> other hand, I've barely followed Oleg's progress on the migration work.
> I'm ok to assist Maru there, even if I'm surely less suitable than core devs
> like Aaron and Gary, who have a long history of nova contributions. It is
> also worth noting that, since this is nova work, a single neutron core dev
> might be enough (John and Dan are already following it from the nova side).
>
That sounds fine to me Salvatore. This was communicated in an email to
the list [1] and also at the Neutron meeting, but I didn't receive a
lot of feedback, so I wondered how many people really were aware. I've
spoken with Maru, and the Nova coverage is good there.

> I also would like to point out that I'm already reviewing several DVR
> patches, as well as the new flavour framework. I think I can add myself as
> core reviewer for these two work items, if you don't mind - also because DVR
> might use three or four core reviewers considering that there are several
> rather large patches to review.
>
This is also great, please feel free to do that.

The main intent of this experiment with assigning core reviewers was
to see how it would work for important community items. I envisioned
reviewers working with submitters to try and quickly turn some of
these patches around. We'll evaluate how this goes after Juno-2 and
see if it makes sense to continue in Juno-3.

Thanks,
Kyle


[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/039529.html

> Salvatore
>
>
> On 15 July 2014 04:21, Kyle Mestery  wrote:
>>
>> As we're getting down to the wire with Juno-2, I'd like the core team
>> to really focus on the BPs which are currently slated for Juno-2 while
>> reviewing [1]. I'm in the process of shuffling a few of these into
>> Juno-3 now (ones which don't have code posted, for example), but there
>> are a lot which have code ready for review. If you're looking for
>> things to review, please focus on approved BPs so we can merge these
>> before next week.
>>
>> In addition, for the big ticket community items, I've assigned
>> reviewers, and that list is available here [2] in the Neutron Juno
>> Project Plan wiki. Code submitters, please work with the reviewers
>> assigned if your BP is on that list.
>>
>> If you have questions, please reach out to me on IRC or reply on this
>> thread.
>>
>> Thanks!
>> Kyle
>>
>> [1] https://launchpad.net/neutron/+milestone/juno-2
>> [2]
>> https://wiki.openstack.org/wiki/NeutronJunoProjectPlan#Juno-2_BP_Assignments
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][qa] proposal for moving forward on cells/tempest testing

2014-07-15 Thread Matt Riedemann



On 7/15/2014 12:36 AM, Sean Dague wrote:

On 07/14/2014 07:44 PM, Matt Riedemann wrote:

Today we only gate on exercises in devstack for cells testing coverage
in the gate-devstack-dsvm-cells job.

The cells tempest non-voting job was moving to the experimental queue
here [1] since it doesn't work with a lot of the compute API tests.

I think we all agreed to tar and feather comstud if he didn't get
Tempest "working" (read: passing) with cells enabled in Juno.

The first part of this is just figuring out where we sit with what's
failing in Tempest (in the check-tempest-dsvm-cells-full job).

I'd like to propose that we do the following to get the ball rolling:

1. Add an option to tempest.conf under the compute-feature-enabled
section to toggle cells and then use that option to skip tests that we
know will fail in cells, e.g. security group tests.


I don't think we should do that. Part of creating the feature matrix in
devstack gate included the follow-on idea of doing extension selection
based on branch or feature.

I'm happy if that gets finished and tests are then skipped for
known-not-working extensions, but just landing a ton of tempest ifdefs that will
all be removed is feeling very gorpy. Especially as we're now at Juno 2,
which was supposed to be the checkpoint for this being "on track for
completion" and... people are just talking about starting.


2. Open bugs for all of the tests we're skipping so we can track closing
those down, assuming they aren't already reported. [2]

3. Once the known failures are being skipped, we can move
check-tempest-dsvm-cells-full out of the experimental queue.  I'm not
proposing that it'd be voting right away, I think we have to see it burn
in for awhile first.

With at least this plan we should be able to move forward on identifying
issues and getting some idea for how much of Tempest doesn't work with
cells and the effort involved in making it work.

Thoughts? If there aren't any objections, I said I'd work on the qa-spec
and can start doing the grunt-work of opening bugs and skipping tests.

[1] https://review.openstack.org/#/c/87982/
[2] https://bugs.launchpad.net/nova/+bugs?field.tag=cells+



All the rest is fine, I just think we should work on the proper way to
skip things.

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



OK, I don't know anything about the extensions in devstack-gate or how
the skips would work then, so I'll have to bug some people in IRC unless
there is an easy example that can be pointed out here.
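
For what it's worth, the config-option approach I had in mind would look
roughly like the sketch below; the compute_feature_enabled.cells option is the
part that does not exist yet:

import testtools

from tempest import config
from tempest import test

CONF = config.CONF


class SecurityGroupsTestJSON(test.BaseTestCase):

    @testtools.skipIf(CONF.compute_feature_enabled.cells,
                      "Security groups are not supported with cells")
    def test_security_group_create_list_delete(self):
        pass  # real test body omitted in this sketch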


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] devtest error in netwrok

2014-07-15 Thread Peeyush Gupta
Hi all,

I have been trying to set up TripleO on an Ubuntu 12.04 virtual machine.
I am following this guide:
http://docs.openstack.org/developer/tripleo-incubator/devtest.html

Now, when I run the devtest_testenv.sh command, I get the following error:

error: internal error Network is already in use by interface eth0

Any idea how to resolve this?
 
Thanks,
~Peeyush Gupta
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Devstack (icehouse) - XCP - enable live migration

2014-07-15 Thread Bob Ball
Hi Afef,

There was a regression in Icehouse that broke XenAPI aggregates.  This has been
fixed in Juno; however, we would recommend you use live migrate with block
migration (using XCP 1.6 or XenServer 6.2 – which is free as well now; see
http://lists.openstack.org/pipermail/openstack-dev/2013-September/014556.html).
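
(For reference, assuming the guest is otherwise migratable, that is driven
with something like: nova live-migration --block-migrate <instance-id>)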

Thanks,

Bob

From: Afef Mdhaffar [mailto:afef.mdhaf...@gmail.com]
Sent: 14 July 2014 22:58
To: openst...@lists.launchpad.net; openst...@lists.openstack.org; OpenStack 
Development Mailing List
Subject: [openstack-dev] Devstack (icehouse) - XCP - enable live migration

Hi all,

I have installed the latest release of OpenStack (Icehouse) via devstack.
I use XCP and would like to activate the "live migration" functionality.
Therefore, I tried to set up the pool by creating a "host aggregate".
After adding the slave compute node, nova-compute no longer starts
and shows the following error. Could you please help me fix this issue?
2014-07-14 21:45:13.933 CRITICAL nova [req-c7965812-76cb-4479-8947-edd70644cd3d 
None None] AttributeError: 'Aggregate' object has no attribute 'metadetails'

2014-07-14 21:45:13.933 TRACE nova Traceback (most recent call last):
2014-07-14 21:45:13.933 TRACE nova   File "/usr/local/bin/nova-compute", line 
10, in 
2014-07-14 21:45:13.933 TRACE nova sys.exit(main())
2014-07-14 21:45:13.933 TRACE nova   File 
"/opt/stack/nova/nova/cmd/compute.py", line 72, in main
2014-07-14 21:45:13.933 TRACE nova db_allowed=CONF.conductor.use_local)
2014-07-14 21:45:13.933 TRACE nova   File "/opt/stack/nova/nova/service.py", 
line 273, in create
2014-07-14 21:45:13.933 TRACE nova db_allowed=db_allowed)
2014-07-14 21:45:13.933 TRACE nova   File "/opt/stack/nova/nova/service.py", 
line 147, in __init__
2014-07-14 21:45:13.933 TRACE nova self.manager = 
manager_class(host=self.host, *args, **kwargs)
2014-07-14 21:45:13.933 TRACE nova   File 
"/opt/stack/nova/nova/compute/manager.py", line 597, in __init__
2014-07-14 21:45:13.933 TRACE nova self.driver = 
driver.load_compute_driver(self.virtapi, compute_driver)
2014-07-14 21:45:13.933 TRACE nova   File 
"/opt/stack/nova/nova/virt/driver.py", line 1299, in load_compute_driver
2014-07-14 21:45:13.933 TRACE nova virtapi)
2014-07-14 21:45:13.933 TRACE nova   File 
"/opt/stack/nova/nova/openstack/common/importutils.py", line 50, in 
import_object_ns
2014-07-14 21:45:13.933 TRACE nova return import_class(import_value)(*args, 
**kwargs)
2014-07-14 21:45:13.933 TRACE nova   File 
"/opt/stack/nova/nova/virt/xenapi/driver.py", line 156, in __init__
2014-07-14 21:45:13.933 TRACE nova self._session = 
session.XenAPISession(url, username, password)
2014-07-14 21:45:13.933 TRACE nova   File 
"/opt/stack/nova/nova/virt/xenapi/client/session.py", line 87, in __init__
2014-07-14 21:45:13.933 TRACE nova self.host_uuid = self._get_host_uuid()
2014-07-14 21:45:13.933 TRACE nova   File 
"/opt/stack/nova/nova/virt/xenapi/client/session.py", line 140, in 
_get_host_uuid
2014-07-14 21:45:13.933 TRACE nova return aggr.metadetails[CONF.host]
2014-07-14 21:45:13.933 TRACE nova AttributeError: 'Aggregate' object has no 
attribute 'metadetails'
2014-07-14 21:45:13.933 TRACE nova

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][sriov] today's IRC meeting

2014-07-15 Thread Robert Li (baoli)
Hi,

I need to pick up my son at 9:00. It’s a short trip, so I will be late by about
15 minutes.

Status-wise, if everything goes well, the patches should be up in a couple of
days. One of the challenges is that, because we divided the patches up, some unit
tests fail due to missing modules, and that took time to fix. Another challenge is
that the code repos for upstreaming and for functional testing are separate, with
the latter combining all the changes plus fake test code (due to the missing
Neutron parts). Keeping them in sync is not fun. Rebasing is another issue too.

I checked the logs for the past two weeks. Yongli indicated that VFs on the Intel
card can be brought up and down individually from the host. I’d like to hear more
details about it.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] REST API access to configuration options

2014-07-15 Thread Joe Gordon
On Tue, Jul 15, 2014 at 5:00 AM, Henry Nash 
wrote:

> Mark,
>
> Thanks for your comments (as well as remarks on the WIP code-review).
>
> So clearly gathering and analysing log files is an alternative approach,
> perhaps not as immediate as an API call.  In general, I believe that the
> more capability we provide via easy-to-consume APIs (with appropriate
> permissions) the more effective (and innovative) ways of management of
> OpenStack we will achieve (easier to build automated management systems).
>  In terms of multi API servers, obviously each server would respond to the
> API with the values it has set, so operators could check any or all of the
> servers... and this actually becomes more important as people distribute
> config files around to the various servers (since more chance of something
> getting out of sync).
>

Where do you see configuration management tools like chef, puppet, and the
os-*-config tools (http://git.openstack.org/cgit) fit in to this?


>
> Henry
> On 15 Jul 2014, at 10:08, Mark McLoughlin  wrote:
>
> On Tue, 2014-07-15 at 08:54 +0100, Henry Nash wrote:
>
> HI
>
> As the number of configuration options increases and OpenStack
> installations become more complex, the chances of incorrect
> configuration increases. There is no better way of enabling cloud
> providers to be able to check the configuration state of an OpenStack
> service than providing a direct REST API that allows the current
> running values to be inspected. Having an API to provide this
> information becomes increasingly important for dev/ops style
> operation.
>
> As part of Keystone we are considering adding such an ability (see:
> https://review.openstack.org/#/c/106558/).  However, since this is the
> sort of thing that might be relevant to and/or affect other projects,
> I wanted to get views from the wider dev audience.
>
> Any such change obviously has to take security in mind - and as the
> spec says, just like when we log config options, any options marked as
> secret will be obfuscated.  In addition, the API will be protected by
> the normal policy mechanism and is likely in most installations to be
> left as "admin required".  And of course, since it is an extension, if
> a particular installation does not want to use it, they don't need to
> load it.
>
> Do people think this is a good idea?  Useful in other projects?
> Concerned about the risks?
>
>
> I would have thought operators would be comfortable gleaning this
> information from the log files?
>
> Also, this is going to tell you how the API service you connected to was
> configured. Where there are multiple API servers, what about the others?
> How do operators verify all of the API servers behind a load balancer
> with this?
>
> And in the case of something like Nova, what about the many other nodes
> behind the API server?
>
> Mark.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] REST API access to configuration options

2014-07-15 Thread Henry Nash
Mark,

Thanks for your comments (as well as remarks on the WIP code-review).

So clearly gathering and analysing log files is an alternative approach, 
perhaps not as immediate as an API call.  In general, I believe that the more 
capability we provide via easy-to-consume APIs (with appropriate permissions) 
the more effective (and innovative) ways of management of OpenStack we will 
achieve (easier to build automated management systems).  In terms of multi API 
servers, obviously each server would respond to the API with the values it has 
set, so operators could check any or all of the servers... and this actually
becomes more important as people distribute config files around to the various 
servers (since more chance of something getting out of sync).

Henry
On 15 Jul 2014, at 10:08, Mark McLoughlin  wrote:

> On Tue, 2014-07-15 at 08:54 +0100, Henry Nash wrote:
>> HI
>> 
>> As the number of configuration options increases and OpenStack
>> installations become more complex, the chances of incorrect
>> configuration increases. There is no better way of enabling cloud
>> providers to be able to check the configuration state of an OpenStack
>> service than providing a direct REST API that allows the current
>> running values to be inspected. Having an API to provide this
>> information becomes increasingly important for dev/ops style
>> operation.
>> 
>> As part of Keystone we are considering adding such an ability (see:
>> https://review.openstack.org/#/c/106558/).  However, since this is the
>> sort of thing that might be relevant to and/or affect other projects,
>> I wanted to get views from the wider dev audience.  
>> 
>> Any such change obviously has to take security in mind - and as the
>> spec says, just like when we log config options, any options marked as
>> secret will be obfuscated.  In addition, the API will be protected by
>> the normal policy mechanism and is likely in most installations to be
>> left as "admin required".  And of course, since it is an extension, if
>> a particular installation does not want to use it, they don't need to
>> load it.
>> 
>> Do people think this is a good idea?  Useful in other projects?
>> Concerned about the risks?
> 
> I would have thought operators would be comfortable gleaning this
> information from the log files?
> 
> Also, this is going to tell you how the API service you connected to was
> configured. Where there are multiple API servers, what about the others?
> How do operators verify all of the API servers behind a load balancer
> with this?
> 
> And in the case of something like Nova, what about the many other nodes
> behind the API server?
> 
> Mark.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Can not create cinder volume

2014-07-15 Thread Duncan Thomas
On 15 July 2014 11:59, Johnson Cheng  wrote:
> May I ask you another question: how does cinder choose the volume node on
> which to create a volume if I have multiple volume nodes?
>
> For example,
>
> when the controller node and the compute node are both alive, the volume is
> created on the compute node.
>
> When I shut down the compute node, the volume is created on the controller
> node.
>
> How can I create a volume on the compute node when both the controller node
> and the compute node are alive?

The short answer is:

- If you want to manually control which backend is used per volume,
then you need to set up volume types
- If you want volumes to be automatically distributed between the two
backends, look at the weighers (the capacity weigher will pick whichever
backend has the greatest free capacity, for example)

The admin guide should be able to explain how to set either of these
two up. I suspect you want the latter.
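
As a rough sketch of the volume-type route (backend names here are purely
illustrative, and the exact config section depends on whether multi-backend is
enabled), each cinder-volume node advertises a backend name in its cinder.conf:

volume_backend_name = LVM_COMPUTE     # on the compute node
volume_backend_name = LVM_CONTROLLER  # on the controller node

and a volume type is then tied to one of them and used at create time:

cinder type-create compute-lvm
cinder type-key compute-lvm set volume_backend_name=LVM_COMPUTE
cinder create --volume-type compute-lvm 1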

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Can not create cinder volume

2014-07-15 Thread Johnson Cheng
Dear Duncan,

Thanks for your reply.
My rootwrap.conf is correct, but I found there was a garbage file in the
/etc/cinder/rootwrap.d/ folder.
When I removed it, everything worked.

May I ask you another question: how does cinder choose the volume node on which
to create a volume if I have multiple volume nodes?
For example,
when the controller node and the compute node are both alive, the volume is
created on the compute node.
When I shut down the compute node, the volume is created on the controller node.
How can I create a volume on the compute node when both the controller node and
the compute node are alive?


Regards,
Johnson

From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
Sent: Tuesday, July 15, 2014 3:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Can not create cinder volume

On 15 July 2014 08:24, Johnson Cheng <johnson.ch...@qsantechnology.com> wrote:

I have two questions here,

1.  It still has “Error encountered during initialization of driver: 
LVMISCSIDriver” error message in cinder-volume.log, how to fix this issue?

2.  From cinder-scheduler.log, it shows “volume service is down or 
disabled. (host: Compute)”. How can I modify cinder.conf to create cinder 
volume at controller node?


The good news is that you only have one problem. The driver not being 
initialized means that that cinder-volume service never signs on with the 
scheduler, so is never eligible to create volumes.

The problem you need to fix is this one:
2014-07-15 15:01:59.147 13901 TRACE cinder.volume.manager Stderr: 'Traceback 
(most recent call last):\n  File "/usr/bin/cinder-rootwrap", line 10, in 
\nsys.exit(main())\n  File 
"/usr/lib/python2.7/dist-packages/oslo/rootwrap/cmd.py", line 107, in main\n
filters = wrapper.load_filters(config.filters_path)\n  File 
"/usr/lib/python2.7/dist-packages/oslo/rootwrap/wrapper.py", line 119, in 
load_filters\nfor (name, value) in filterconfig.items("Filters"):\n  File 
"/usr/lib/python2.7/ConfigParser.py", line 347, in items\nraise 
NoSectionError(section)\nConfigParser.NoSectionError: No section: \'Filters\'\n'


It looks like the config file /etc/cinder/rootwrap.conf has become corrupted on 
one node - can you compare this file between the working and not working node 
and see if there are any differences?
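
(For reference, every file under /etc/cinder/rootwrap.d/ is expected to be an
INI file that starts with a [Filters] section, along the lines of:

[Filters]
lvs: CommandFilter, lvs, root
vgs: CommandFilter, vgs, root

A file in that directory without such a section will trigger exactly that
NoSectionError.)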
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Neutron ML2 Blueprints

2014-07-15 Thread Vladimir Kuklin
Andrew,

[2] may be due to agents failing to start. Incorrect agent configuration
will lead to the agents being unable to start and to pacemaker timeouts. [3] I do
not see failures in the rabbit service, as I see that it successfully transitioned
from the stopped to the running state. [4] The Swift error shows that you have an
incorrect rings configuration.


On Tue, Jul 15, 2014 at 9:23 AM, Andrew Woodward  wrote:

> Friday, resolved the haproxy / vip issue. It was due to losing
> net.ipv4.ip_forward [1] and then we regressed back to not working at
> all in HA or Simple. There was also an issue with the neutron keystone
> call being cached oddly causing a trace. Removing the ability to cache
> the object appears to have stopped the error, but the changes leading
> into learning about this issue may have steered us into this not
> working again.
>
> Today, fought on the regression
>
> HA deployment testing is completely blocked by [2], [3]. Usually this
> occurs after puppet re-runs the controller after an error. This
> frequently occurs due to swift [4].
>
> Simple deployment testing is plagued by neutron notifiers auth failures
> [5].
>
> There is some minor annoyance with [6] that doesn't have to be fixed,
> but is annoying. There is another variant where other, non-recoverable
> (usually malformed-syntax errors) are also caught in the retry when
> they should be outright aborted on. This is likely due to the regex in
> neutron.rb being overly grabby.
>
> [1] https://bugs.launchpad.net/fuel/+bug/1340968
> [2]
> https://gist.github.com/xarses/219be742ab04faeb7f53#file-pacemaker-corosync-error
> [3]
> https://gist.github.com/xarses/219be742ab04faeb7f53#file-rabbit-service-failure
> [4] https://gist.github.com/xarses/219be742ab04faeb7f53#file-swift-error
> [5]
> https://gist.github.com/xarses/219be742ab04faeb7f53#file-neutron-vif-plug-error
> [6]
> https://gist.github.com/xarses/219be742ab04faeb7f53#file-neutron_network-changes-shared-on-existing-network
>
> On Fri, Jul 11, 2014 at 12:08 AM, Andrew Woodward 
> wrote:
> > Retested today
> > ubuntu single nova vlan - works
> > centos single nova dhcp - works
> > ubuntu single neutron gre - works
> > centos single neutron vlan - works
> > centos ha(1) neutron vlan - fail haproxy issue
> > ubuntu ha(1) neutron gre - fail haproxy issue.
> >
> > haproxy / vip issue:
> >
> > For whatever reason, which I haven't been able to track down yet, the ip
> > netns namespaces' (public and management) ns_IPaddr2 vips cannot ping or
> > otherwise communicate with nodes remote to whoever owns the respective
> > vip.
> > Once this issue is resolved, I believe that CI should pass given that the
> > build appears 100% functional except that computes cant connect to the
> vip
> > properly.
> >
> >
> > On Thu, Jul 10, 2014 at 1:05 AM, Mike Scherbakov <
> mscherba...@mirantis.com>
> > wrote:
> >>
> >> We had a call between Andrew (@xarses), Vladimir Kuklin (@aglarendil)
> >> and myself (@mihgen) today to finally sort out Neutron ML2 integration
> >> in Fuel.
> >> We didn't have @xenolog on the call, but hopefully he is more or less
> >> fine with all of the below; we kindly request him to confirm :)
> >> We discussed the following topics, with agreement from all participants:
> >>
> >> Risks of merging https://review.openstack.org/#/c/103280 (@xarses,
> >> upstream puppet module, referred to below as "280") vs
> >> https://review.openstack.org/#/c/103947 (@xenolog, extending the existing
> >> puppet module with ML2 support, referred to below as "947")
> >>
> >> We all agree that 280 is strategically the way to go. It was so by
> >> design, and 947 was done only as a risk mitigation plan in case 280 is
> >> not ready in time
> >> Both 280 and 947 were manually verified in combinations of
> >> ubuntu/centos/vlan/gre/ha; 280 still needs to be verified with nova-network
> >> 947 was ready a week ago and is considered the more stable solution
> >> 280 has a much higher risk of introducing regressions, as it is
> >> basically a re-architecture of the Neutron puppet module in Fuel
> >> 947 has backward compatibility support with legacy OVS, while 280
> >> doesn't have it at the moment
> >>
> >> Mellanox & VMWare NSX dependency on ML2 implementation rebase time
> >>
> >> Rebase itself should not be hard
> >> It has to be tested and may take up to next WE to do all
> >> rebases/testing/fixing
> >> As both Mellanox & NSX Neutron pieces are isolated, it can be an
> >> exception for merging by next TH
> >>
> >> Discussed sanitize_network_config [1]
> >>
> >> @aglarendil points out that we need to verify the input params which
> >> puppet receives in advance, rather than waiting hours for a deploy
> >> @mihgen's point of view is that we need to consider each Fuel component
> >> as a module, and verify its output for given input params. So there is
> >> no point in verifying input in puppet if it is already verified as the
> >> output of Nailgun.
> >> @xarses says that we need to verify configuration files created in the
> >> system after module exe

Re: [openstack-dev] [neutron] Juno-2 Review Priority

2014-07-15 Thread Salvatore Orlando
Kyle,

It is probably my fault that I did not notice the review assignment page
beforehand.
Thankfully, I'm already engaged in reviewing the db 'healing' work. On the
other hand, I've barely followed Oleg's progress on the migration work.
I'm OK with assisting Maru there, even if I'm surely less suitable than core
devs like Aaron and Gary, who have a long history of nova contributions.
It is also worth noting that, since this is nova work, a single neutron core
dev might be enough (John and Dan are already following it from the nova
side).

I would also like to point out that I'm already reviewing several DVR
patches, as well as the new flavour framework. I think I can add myself as
a core reviewer for these two work items, if you don't mind - also because
DVR could use three or four core reviewers, considering that there are
several rather large patches to review.

Salvatore


On 15 July 2014 04:21, Kyle Mestery  wrote:

> As we're getting down to the wire with Juno-2, I'd like the core team
> to really focus on the BPs which are currently slated for Juno-2 while
> reviewing [1]. I'm in the process of shuffling a few of these into
> Juno-3 now (ones which don't have code posted, for example), but there
> are a lot which have code ready for review. If you're looking for
> things to review, please focus on approved BPs so we can merge these
> before next week.
>
> In addition, for the big ticket community items, I've assigned
> reviewers, and that list is available here [2] in the Neutron Juno
> Project Plan wiki. Code submitters, please work with the reviewers
> assigned if your BP is on that list.
>
> If you have questions, please reach out to me on IRC or reply on this
> thread.
>
> Thanks!
> Kyle
>
> [1] https://launchpad.net/neutron/+milestone/juno-2
> [2]
> https://wiki.openstack.org/wiki/NeutronJunoProjectPlan#Juno-2_BP_Assignments
>


Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

2014-07-15 Thread Sylvain Bauza
On 14/07/2014 20:10, Jay Pipes wrote:
> On 07/14/2014 10:16 AM, Sylvain Bauza wrote:
>> On 12/07/2014 06:07, Jay Pipes wrote:
>>> On 07/11/2014 07:14 AM, John Garbutt wrote:
 On 10 July 2014 16:59, Sylvain Bauza  wrote:
> On 10/07/2014 15:47, Russell Bryant wrote:
>> On 07/10/2014 05:06 AM, Sylvain Bauza wrote:
>>> Hi all,
>>>
>>> === tl;dr: Now that we agree on waiting for the split
>>> prereqs to be done, we are debating whether ResourceTracker should
>>> be part of the scheduler code and, consequently, whether the Scheduler
>>> should expose ResourceTracker APIs so that Nova wouldn't
>>> own compute node resources. I'm proposing to first keep the
>>> RT as a Nova resource in Juno and move ResourceTracker
>>> into the Scheduler for K, so we at least merge some patches by
>>> Juno. ===
>>>
>>> Some debates occurred recently about the scheduler split, so
>>> I think it's important to loop back with you all to see
>>> where we are and what the discussions are. Again, feel free
>>> to express your opinions; they are welcome.
>> Where did this resource tracker discussion come up?  Do you
>> have any references that I can read to catch up on it?  I
>> would like to see more detail on the proposal for what should
>> stay in Nova vs. be moved.  What is the interface between
>> Nova and the scheduler here?
>
> Oh, I missed the most important question you asked. So, about
> the interface between the scheduler and Nova, the originally
> agreed proposal is in the spec
> https://review.openstack.org/82133 (approved), where the
> Scheduler exposes:
> - select_destinations(): for querying the scheduler to provide candidates
> - update_resource_stats(): for updating the scheduler's internal state
>   (i.e. HostState)
>
> Here, update_resource_stats() is called by the
> ResourceTracker; see the implementations (in review)
> https://review.openstack.org/82778 and
> https://review.openstack.org/104556.
>
> The alternative that has just been raised this week is to
> provide a new interface where the ComputeNode claims resources
> and frees them, so that all the resources are fully
> owned by the Scheduler. An initial PoC has been raised here:
> https://review.openstack.org/103598, but I tried to see what
> a ResourceTracker proxied by a Scheduler client would look like here:
> https://review.openstack.org/105747. As the spec hasn't been
> written, the names of the interfaces are not properly defined,
> but I made a proposal as follows:
> - select_destinations(): same as above
> - usage_claim(): claim a resource amount
> - usage_update(): update a resource amount
> - usage_drop(): free the resource amount
>
> Again, this is a dummy proposal; a spec has to be written if we
> consider moving the RT.
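
To make the two candidate interfaces above concrete, here is a minimal sketch
of what they might look like as Python abstract classes. The method names come
from the proposals above; the class names, arguments, and docstrings are
illustrative assumptions only (as noted, no spec has been written yet).

    import abc


    class SchedulerAPI(object):
        """Sketch of the approved-spec interface (review 82133)."""
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def select_destinations(self, context, request_spec, filter_properties):
            """Ask the scheduler for candidate hosts for a request."""

        @abc.abstractmethod
        def update_resource_stats(self, context, host_name, stats):
            """Push compute-node usage into the scheduler's HostState."""


    class ResourceOwningSchedulerAPI(SchedulerAPI):
        """Sketch of the alternative where the scheduler owns resources."""

        @abc.abstractmethod
        def usage_claim(self, context, host_name, requested_resources):
            """Claim a resource amount against a host."""

        @abc.abstractmethod
        def usage_update(self, context, host_name, usage):
            """Update a previously claimed resource amount."""

        @abc.abstractmethod
        def usage_drop(self, context, host_name, claim):
            """Free a previously claimed resource amount."""

Under the first interface the ResourceTracker stays in Nova and only reports
usage; under the second, claims and frees go through the scheduler, which is
what moving the RT would imply.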

 While I am not against moving the resource tracker, I feel we
 could move this to Gantt after the core scheduling has been
 moved.
>>>
>>> Big -1 from me on this, John.
>>>
>>> Frankly, I see no urgency whatsoever -- and actually very little
>>> benefit -- to moving the scheduler out of Nova. The Gantt project I
>>> think is getting ahead of itself by focusing on a split instead of
>>> focusing on cleaning up the interfaces between nova-conductor,
>>> nova-scheduler, and nova-compute.
>>>
>>
>> -1 on saying there is no urgency. Don't you see the NFV group asking at
>> each meeting what the status of the scheduler split is?
>
> Frankly, I don't think a lot of the NFV use cases are well-defined.
>
> Even more frankly, I don't see any benefit from a split-out scheduler for
> a single NFV use case.

I don't know if you can, but if you're interested in giving feedback to
the NFV team, we run a weekly meeting on #openstack-meeting-alt every
Wednesday at 2pm UTC.

You can find a list of all the associated blueprints here:
https://wiki.openstack.org/wiki/Teams/NFV#Active_Blueprints. The list
is processed hourly by a backend script that generates a Gerrit
dashboard, accessible here: http://nfv.russellbryant.net

That said, you can find
https://blueprints.launchpad.net/nova/+spec/solver-scheduler as a
possible use case for NFV.
As Paul and Yathi said, there is a need for a global resource placement
engine able to cope with both network and compute resources if we are to
support NFV use cases; that appears quite clear to me, and that's
why I joined the NFV team.

>
>> Don't you see, at each Summit, the many talks (and the people attending
>> them) discussing how OpenStack should look at Pets vs. Cattle and
>> saying that the scheduler should be out of Nova?
>
> There have been no concrete benefits discussed for having the scheduler
> outside of Nova.
>
> I don't really care how many people say that the scheduler should be
> out of Nova unless those same people come to the table with concrete
> reasons why. Just saying something is a benefit does not make it a

[openstack-dev] PyCon AU OpenStack Miniconf – Schedule now available!

2014-07-15 Thread Joshua Hesketh
Hello all,

The miniconf organisers are pleased to announce their first draft of a
schedule for the PyCon Australia OpenStack miniconf:

http://sites.rcbops.com/openstack_miniconf/2014/07/openstack-miniconf-programme-for-pycon-au/

The OpenStack miniconf is a one-day conference held on Friday the 1st of
August 2014 in Brisbane, before PyCon Australia. The day is dedicated to
talks and discussions related to the OpenStack project, and it is the next
OpenStack miniconf in the Asia-Pacific region.

Cheers,
Josh



