[openstack-dev] [Neutron][Spec Freeze Exception] midonet gw-mode extension support

2014-07-23 Thread Ryu Ishimoto
Hi All,

I was terribly sad indeed that our spec proposal to add the gw-mode
extension[1] to the midonet plugin was not accepted last Sunday, and I am
sending out this email to see if the core reviewers could accept it as an
SFE.

It was originally rejected because the upstream plugin was not working
properly, but all of the plugin patches to address the issues have already
been submitted[2], and our third-party CI has been set up and running for a
while.  At this point, there is not much we can do to get this feature
into Juno other than to seek an SFE.

It's a very minor change to the plugin code, but it makes a tremendous
difference for us in terms of product features.  I understand that the core
reviewers are asked to do a somewhat unreasonable amount of spec reviews in a
short time, but I would be extremely grateful if you could reconsider and
possibly accept this spec.

Thanks!
Ryu

[1] https://review.openstack.org/#/c/94785/
[2]
https://review.openstack.org/#/q/status:open+project:openstack/neutron+message:Midonet,n,z


[openstack-dev] [Keystone] More granular role management

2014-07-23 Thread Fei Long Wang
Greetings,

I'm trying to figure out if Keystone can support more granular role
management, or if there is any plan to do that in the future. Currently,
AWS supports adding a role and assigning capabilities at 3
different levels/perspectives: service, function, and resource[1]. Keystone
supports the service level for now, but I didn't find
function/resource-level support in the current code or blueprints. Am I
missing anything? Any comment is appreciated. Cheers.

[1] awspolicygen.s3.amazonaws.com/policygen.html
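
For illustration, a minimal AWS-style policy statement showing the three
levels (the account ID, region, and instance ID below are placeholders):
ec2 is the service, the StartInstances action is the function, and the
instance ARN is the resource.

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:StartInstances",
        "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0abc123"
      }]
    }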

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 




Re: [openstack-dev] [glance] Use Launcher/ProcessLauncher in glance

2014-07-23 Thread Tailor, Rajesh
Hi Jay,
Thank you for your response.
I will soon submit patch for the same.

Thanks,
Rajesh Tailor

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Tuesday, July 22, 2014 8:07 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] Use Launcher/ProcessLauncher in glance

On 07/17/2014 03:07 AM, Tailor, Rajesh wrote:
 Hi all,

 Why is glance not using Launcher/ProcessLauncher (oslo-incubator) for
 its wsgi service, as is done in other openstack projects, i.e.
 nova, cinder, keystone, etc.?

Glance uses the same WSGI service launch code as the other OpenStack project 
from which that code was copied: Swift.

 As of now, when the SIGHUP signal is sent to the glance-api parent
 process, it calls the callback handler and then throws OSError.

 The OSError is thrown because the os.wait system call was interrupted
 by the SIGHUP callback handler.

 As a result, the parent process closes the server socket.

 All the child processes also get terminated without completing
 existing api requests, because the server socket is already closed and
 the service doesn't restart.

 Ideally, when the SIGHUP signal is received by the glance-api process, it
 should process all the pending requests and then restart the
 glance-api service.

 If the (oslo-incubator) Launcher/ProcessLauncher is used in glance, it
 will handle the service restart on the SIGHUP signal properly.

 Can anyone please let me know what will be the positive/negative 
 impact of using Launcher/ProcessLauncher (oslo-incubator) in glance?

Sounds like you've identified at least one good reason to move to 
oslo-incubator's Launcher/ProcessLauncher. Feel free to propose patches which 
introduce that change to Glance. :)
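
For reference, a minimal standalone sketch (hypothetical, not glance or oslo
code) of the failure mode described above: on Python 2, a SIGHUP handler
interrupts the parent's blocking os.wait(), which then raises OSError with
errno EINTR instead of restarting the service.

    import os
    import signal
    import time

    def on_hup(signum, frame):
        print('SIGHUP handled')

    signal.signal(signal.SIGHUP, on_hup)

    pid = os.fork()
    if pid == 0:
        time.sleep(60)      # child: stand-in for a wsgi worker
        os._exit(0)
    try:
        os.wait()           # parent blocks here, like glance-api
    except OSError as err:
        # 'kill -HUP <parent pid>' lands here with errno EINTR
        print('os.wait() interrupted: %s' % err)
        os.kill(pid, signal.SIGTERM)
        os.waitpid(pid, 0)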

 Thank You,

 Rajesh Tailor
 __
 Disclaimer:This email and any attachments are sent in strictest 
 confidence for the sole use of the addressee and may contain legally 
 privileged, confidential, and proprietary data. If you are not the 
 intended recipient, please advise the sender by replying promptly to 
 this email and then delete and destroy this email and any attachments 
 without any further use, copying or forwarding

Please advise your corporate IT department that the above disclaimer on your 
emails is annoying, is entirely disregarded by 99.999% of the real world, has 
no legal standing or enforcement, and may be a source of problems with people's 
mailing list posts being sent into spam boxes.

All the best,
-jay





Re: [openstack-dev] [Nova] [Spec Freeze Exception] [Gantt] Scheduler Isolate DB spec

2014-07-23 Thread Sylvain Bauza
On 23/07/2014 01:11, Michael Still wrote:
 This spec freeze exception only has one core signed up. Are there any
 other cores interested in working with Sylvain on this one?

 Michael

Looking at
https://etherpad.openstack.org/p/nova-juno-spec-priorities, I can see that
ndipanov has volunteered to sponsor this blueprint.

-Sylvain

 On Mon, Jul 21, 2014 at 7:59 PM, John Garbutt j...@johngarbutt.com wrote:
 On 18 July 2014 09:10, Sylvain Bauza sba...@redhat.com wrote:
 Hi team,

 I would like to draw your attention to https://review.openstack.org/89893
 This spec aims to isolate access within the filters to scheduler-only
 bits. It is a prerequisite for a possible split of the scheduler
 into a separate project named Gantt, as it's necessary to remove direct
 access to other Nova objects (like aggregates and instances).

 This spec is one of the oldest specs so far, but its approval has been
 delayed because there were other concerns to discuss first about how we
 split the scheduler. Now that these concerns have been addressed, it is
 time to go back to that blueprint and iterate on it.

 I understand the exception is for a window of 7 days. In my opinion,
 this objective is achievable, as all the pieces are now there for
 reaching a consensus.

 The change by itself is only a refactoring of the existing code, with no
 impact on APIs nor on the DB schema, so IMHO this blueprint is a good
 opportunity to stay on track with the objective of a split by the
 beginning of Kilo.

 Cores, I'll let you judge the urgency; I'm available on IRC or by
 email to answer questions.
 Regardless of Gantt, tidying up the data dependencies here makes sense.

 I feel we need to consider how the above works with upgrades.

 I am happy to sponsor this blueprint. Although I worry we might not
 get agreement in time.

 Thanks,
 John







Re: [openstack-dev] [TripleO] Spec Minimum Review Proposal

2014-07-23 Thread Lucas Alvares Gomes
On Tue, Jul 22, 2014 at 9:18 PM, Jay Dobies jason.dob...@redhat.com wrote:
 At the meetup today, the topic of our spec process came up. The general
 sentiment is that the process is still young and the hiccups are expected,
 but we do need to get better about making sure we're staying on top of them.

 As a first step, it was proposed to add 1 spec review a week to the existing
 3 reviews per day requirement for cores.

 Additionally, we're going to start to capture and review the metrics on spec
 patches specifically during the weekly meeting. That should help bring to
 light how long reviews are sitting in the queue without being touched.

 What are everyone's feelings on adding a 1 spec review per week requirement
 for cores?

You mean cores in Ironic and not the cores in the ironic-spec-core
group[1], right?

Either way, 1 review per week seems very reasonable, the minimum
you can expect from a core, so +1 for the idea.

[1] https://review.openstack.org/#/admin/groups/352,members



Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-23 Thread Alex Xu
Maybe we can implement this goal another way, by adding a new API 
'confirm_before_migration' that's similar to 'confirm_resize'. This 
would also resolve Chris Friesen's concern.


On 2014-07-23 00:13, Jay Pipes wrote:

On 07/21/2014 11:16 PM, Jay Lau wrote:

Hi Jay,

There are indeed some customers in China who want this feature: before
they perform certain operations, they want to check the action plan, such as
where the VM will be migrated or created, and they want to use an
interactive mode to make sure there are no errors.


This isn't something that normal tenants should have access to, IMO. 
The scheduler is not like a database optimizer that should give you a 
query plan for a SQL statement. The information the scheduler is 
acting on (compute node usage records, aggregate records, deployment 
configuration, etc.) is absolutely NOT something that should be 
exposed to end-users.


I would certainly support a specification that intended to add 
detailed log message output from the scheduler that recorded how it 
made its decisions, so that an operator could evaluate the data and 
decision, but I'm not in favour of exposing this information via a 
tenant-facing API.
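
As a rough illustration of that operator-facing alternative (hypothetical
names, not actual nova code), each filter pass could record what it
eliminated so the placement decision can be reconstructed from the logs:

    import logging

    LOG = logging.getLogger(__name__)

    def log_filter_pass(instance_uuid, filter_name, hosts_before, hosts_after):
        removed = sorted(set(hosts_before) - set(hosts_after))
        LOG.debug('scheduling %s: filter %s kept %d of %d hosts (removed: %s)',
                  instance_uuid, filter_name, len(hosts_after),
                  len(hosts_before), removed)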


Best,
-jay


2014-07-22 10:23 GMT+08:00 Jay Pipes jaypi...@gmail.com:

On 07/21/2014 07:45 PM, Jay Lau wrote:

There is one requirement that some customers want: to get the possible
host list when creating/rebuilding/migrating/evacuating a VM, so as to
create a resource plan for those operations. But currently
select_destination is not a REST API; is it possible that we promote it
to be a REST API?


Which customers want to get the possible host list?

/me imagines someone asking Amazon for a REST API that returned all
the possible servers that might be picked for placement... and what
answer Amazon might give to the request.

If by "customer", you are referring to something like IBM Smart
Cloud Orchestrator, then I don't really see the point of supporting
something like this. Such a customer would only need to create a
resource plan for those operations if it was wholly supplanting
large pieces of OpenStack infrastructure, including parts of Nova
and much of Heat.

Best,
-jay







--
Thanks,

Jay













Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-23 Thread Jay Lau
Thanks Alex and Jay Pipes.

@Alex, I want a common interface for all VM operations to get the target host
list; it seems that just adding a new 'confirm_before_migration' API is not
enough to handle this? ;-)

@Jay Pipes, I will try to see if we can expose this in K or L via Gantt.

Thanks.


2014-07-23 17:14 GMT+08:00 Alex Xu x...@linux.vnet.ibm.com:

 Maybe we can implement this goal another way, by adding a new API
 'confirm_before_migration' that's similar to 'confirm_resize'. This would
 also resolve Chris Friesen's concern.


 On 2014-07-23 00:13, Jay Pipes wrote:

 On 07/21/2014 11:16 PM, Jay Lau wrote:

 Hi Jay,

 There are indeed some customers in China who want this feature: before
 they perform certain operations, they want to check the action plan, such as
 where the VM will be migrated or created, and they want to use an
 interactive mode to make sure there are no errors.


 This isn't something that normal tenants should have access to, IMO. The
 scheduler is not like a database optimizer that should give you a query
 plan for a SQL statement. The information the scheduler is acting on
 (compute node usage records, aggregate records, deployment configuration,
 etc.) is absolutely NOT something that should be exposed to end-users.

 I would certainly support a specification that intended to add detailed
 log message output from the scheduler that recorded how it made its
 decisions, so that an operator could evaluate the data and decision, but
 I'm not in favour of exposing this information via a tenant-facing API.

 Best,
 -jay

 2014-07-22 10:23 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On 07/21/2014 07:45 PM, Jay Lau wrote:

 There is one requirement that some customers want: to get the possible
 host list when creating/rebuilding/migrating/evacuating a VM, so as to
 create a resource plan for those operations. But currently
 select_destination is not a REST API; is it possible that we promote it
 to be a REST API?


 Which customers want to get the possible host list?

 /me imagines someone asking Amazon for a REST API that returned all
 the possible servers that might be picked for placement... and what
 answer Amazon might give to the request.

 If by "customer", you are referring to something like IBM Smart
 Cloud Orchestrator, then I don't really see the point of supporting
 something like this. Such a customer would only need to create a
 resource plan for those operations if it was wholly supplanting
 large pieces of OpenStack infrastructure, including parts of Nova
 and much of Heat.

 Best,
 -jay






 --
 Thanks,

 Jay













-- 
Thanks,

Jay


[openstack-dev] [TripleO] Strategy for recovering crashed nodes in the Overcloud?

2014-07-23 Thread Howley, Tom
(Resending to properly start new thread.)



Hi,



I'm running an HA overcloud configuration and, as far as I'm aware, there is 
currently no mechanism in place for restarting failed nodes in the cluster. 
Originally, I had been wondering if we would use a corosync/pacemaker cluster 
across the control plane with STONITH resources configured for each node (a 
STONITH plugin for Ironic could be written). This might be fine if a 
corosync/pacemaker stack is already being used for HA of some components, but 
it seems overkill otherwise. The undercloud heat could be in a good position to 
restart the overcloud nodes -- is that the plan, or are there other options 
being considered?



Thanks,

Tom


Re: [openstack-dev] [nova] Manage multiple clusters using a single nova service

2014-07-23 Thread Vaddi, Kiran Kumar
Answers to some of your concerns

 Why can't ESXi hosts run the nova-compute service? Is it like the
 XenServer driver that has a pitifully old version of Python (2.4) that
 constrains the code that is possible to run on it? If so, then I don't
 really think the poor constraints of the hypervisor dom0 should mean
 that Nova should change its design principles to accommodate. The
 XenServer driver uses custom agents to get around this issue, IIRC. Why
 can't the VCenter driver?

ESXi hosts are generally operated in a lock-down mode where installation of 
agents is not allowed.
All communication and tasks on the ESXi hosts must be done using vCenter.

 The fact that each connection to vCenter uses 140MB of memory is
 completely ridiculous. You can thank crappy SOAP for that, I believe.

Yes, and the problem becomes bigger if we create multiple services.

 I just do not support the idea that Nova needs to
 change its fundamental design in order to support the *design* of other
 host management platforms.

The current implementation doesn't make nova change its design; the scheduling 
decisions are still done by nova.
It's only the deployment that has changed. Agreed that there are no separate 
topic-exchange queues for each cluster.

Thanks
Kiran

 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: Tuesday, July 22, 2014 9:30 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Manage multiple clusters using a single
 nova service
 
 On 07/14/2014 04:34 AM, Vaddi, Kiran Kumar wrote:
  Hi,
 
  In the Juno summit, it was discussed that the existing approach of
  managing multiple VMware Clusters using a single nova compute service
  is not preferred and the approach of one nova compute service
  representing one cluster should be looked into.
 
 Even this is outside what I consider to be best practice for Nova,
 frankly. The model of scale-out inside Nova is to have a nova-compute
 worker responsible for only the distinct set of compute resources that
 are provided by a single bare metal node.
 
 Unfortunately, with the introduction of the bare-metal driver in Nova,
 as well as the clustered hypervisors like VCenter and Hyper-V, this
 architectural design point was shot in the head, and now it is only
 possible to scale the nova-compute <-> hypervisor communication layer
 using a scale-up model instead of a scale-out model. This is a big deal,
 and unfortunately, not enough discussion has been had around this, IMO.
 
 The proposed blueprint(s) around this and the code patches I've seen are
 moving Nova in the opposite direction it needs to go, IMHO.
 
  We would like to retain the existing approach (till we have resolved
   the issues) for the following reasons:
 
 1. Even though a single service is managing all the clusters,
 logically it is still one compute per cluster. To the scheduler each
 cluster is represented as an individual compute. Even in the driver
 each cluster is represented separately.
 
 How is this so? In Kanagaraj Manickam's proposed blueprint about this
 [1], the proposed implementation would fork one process for each
 hypervisor or cluster. However, the problem with this is that the
 scheduler uses the single service record for the nova-compute worker to
 determine whether or not the node is available to place resources on.
 The servicegroup API would need to be refactored (rewritten, really) to
 change its definition of a service to instead of being a single daemon,
 now being a single process running within that daemon. Since the daemon
 only responds to a single RPC target endpoint and rpc.call direct and
 topic exchanges, all of that code would then need to be rewritten, or
 code would need to be added to nova.manager to dispatch events sent to
 the nova-compute's single RPC topic-exchange to one of the specific
 processes that is responsible for a particular cluster.
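
 As a toy sketch of the deployment model under discussion (hypothetical,
 not the proposed patch), the parent daemon would fork one worker per
 cluster, and each worker would then need its own service record and RPC
 topic:

    import multiprocessing

    def run_compute_worker(cluster):
        # each child would run a nova-compute manager scoped to one
        # cluster, registering itself as a distinct node
        print('compute worker for %s started' % cluster)

    if __name__ == '__main__':
        workers = [multiprocessing.Process(target=run_compute_worker,
                                           args=(c,))
                   for c in ('cluster-a', 'cluster-b')]
        for w in workers:
            w.start()
        for w in workers:
            w.join()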
 
 In short, a huge chunk of code would need to be refactored in order to
 make Nova's worldview amenable to the design choices of certain
 clustered hypervisors. That, IMHO, is not something to be taken lightly,
 and not something we should even consider without a REALLY good reason.
 And the use case of "OpenStack is a platform and it's good to provide
 flexibility in it to accommodate different needs" is not a really good
 reason, IMO.
 
 2. Since ESXi, unlike KVM, does not allow the nova-compute service to
 run on the hypervisor, the service has to be run externally on a
 different server. It's easier from an administration perspective to
 manage a single service than multiple services.
 
 Why can't ESXi hosts run the nova-compute service? Is it like the
 XenServer driver that has a pitifully old version of Python (2.4) that
 constrains the code that is possible to run on it? If so, then I don't
 really think the poor constraints of the hypervisor dom0 should mean
 that Nova should change its design principles to accommodate. The
 XenServer driver uses custom agents to get around this issue, IIRC. Why
 can't the VCenter driver?

[openstack-dev] Support for Django 1.7 in OpenStack

2014-07-23 Thread Thomas Goirand
Hi,

The Debian maintainer of python-django would like to upgrade to version
1.7. He has asked, in multiple bug reports, that we check for Django 1.7
compatibility. I have the following python modules and bug reports:

https://bugs.debian.org/755613 python-django-appconf
https://bugs.debian.org/755622 python-django-compressor
https://bugs.debian.org/755628 python-django-pyscss
https://bugs.debian.org/755641 python-django-discover-runner
https://bugs.debian.org/755646 python-django-openstack-auth
https://bugs.debian.org/755651 horizon
https://bugs.debian.org/755654 tuskar-ui
https://bugs.debian.org/755656 python-django-bootstrap-form

First, does anyone know if Django 1.7 is an issue for any of the above
packages? If there are indeed issues, is there currently any plan
to fix them?

Ideally, I would like all of the above packages to be able to run with
Django 1.7 in Icehouse. I don't expect upstream OpenStack to actually do
the backporting to Icehouse, but if it gets into trunk, I'd be happy to do
the backporting work.

If it's considered a bad idea to fix Django 1.7 compatibility in
stable/icehouse, then I don't mind keeping the patches Debian-specific.
However, I will *not* have the time to either investigate the
issue or do the work (I'm busy enough packaging all of OpenStack
and its dependencies in Debian). Also, I don't think I have enough
expertise myself to work on this: just testing wouldn't be enough,
IMO; someone with extensive Django experience needs to work on this.

So, I NEED HELP HERE! :)
Thoughts, comments, or whatever else is welcome! [1]

Cheers,

Thomas Goirand (zigo)

[1] but discussing how upgrading to Django 1.7 would be difficult isn't:
that's counterproductive, not moving forward, and not solving any problem.



Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Evgeny Fedoruk
Hi,

I'm working on TLS integration with the loadbalancer v2 extension and db.
Based on Brandon's patches https://review.openstack.org/#/c/105609 , 
https://review.openstack.org/#/c/105331/ , and 
https://review.openstack.org/#/c/105610/ ,
I will abandon the previous 2 patches for TLS, which are 
https://review.openstack.org/#/c/74031/ and 
https://review.openstack.org/#/c/102837/ .
I plan to submit my change later today. It will include lbaas extension v2 
modifications, lbaas db v2 modifications, an alembic migration for schema 
changes, and new unit tests for lbaas db v2.

Thanks,
Evg

-Original Message-
From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
Sent: Wednesday, July 23, 2014 3:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

Since it looks like the TLS blueprint was approved, I'm sure we're all 
eager to start coding, so how should we divide up work on the source code? I 
have a pull request open against pyopenssl 
(https://github.com/pyca/pyopenssl/pull/143) and a few one-liners in 
pyca/cryptography to expose the needed low-level calls, which I'm hoping will 
be added pretty soon so that PR 143's tests can pass. In case they aren't, we 
will fall back to using pyasn1_modules, as it also has a means to fetch what 
we want at a lower level. 
I'm just hoping that we can split the work up so that we can collaborate 
on this without over-serializing the work, where people become dependent on 
waiting for someone else to complete their work, or worse, one person ends up 
doing all the work.


Carlos D. Garza



Re: [openstack-dev] [TripleO] Spec Minimum Review Proposal

2014-07-23 Thread Lucas Alvares Gomes
Oh sorry... I thought it was about Ironic, not TripleO (morning issues).

Anyway, it could be something we could adopt in Ironic as well :)

On Wed, Jul 23, 2014 at 9:40 AM, Lucas Alvares Gomes
lucasago...@gmail.com wrote:
 On Tue, Jul 22, 2014 at 9:18 PM, Jay Dobies jason.dob...@redhat.com wrote:
 At the meetup today, the topic of our spec process came up. The general
 sentiment is that the process is still young and the hiccups are expected,
 but we do need to get better about making sure we're staying on top of them.

 As a first step, it was proposed to add 1 spec review a week to the existing
 3 reviews per day requirement for cores.

 Additionally, we're going to start to capture and review the metrics on spec
 patches specifically during the weekly meeting. That should help bring to
 light how long reviews are sitting in the queue without being touched.

 What are everyone's feelings on adding a 1 spec review per week requirement
 for cores?

 You mean cores in Ironic and not the cores in the ironic-spec-core
 group[1], right?

 Either way, 1 review per week seems to be very reasonable, the minimum
 you can expect from a core, so +1 for the idea.

 [1] https://review.openstack.org/#/admin/groups/352,members



[openstack-dev] [ceilometer] overuse of 'except Exception'

2014-07-23 Thread Chris Dent


I was having a bit of a browse through the ceilometer code and
noticed there are a fair few instances (sixty-some) of
`except Exception` scattered about.

While not as evil as a bare except, my Python elders always pointed
out that doing `except Exception` is a bit like using a sledgehammer
where something more akin to a gavel is what's wanted. The error
condition is obliterated but there's no judgement on what happened
and no apparent effort by the developer to effectively handle
discrete cases.

A common idiom appears as:

    except Exception:
        LOG.exception(_('something failed'))
        return
        # or continue

There's no information here about what failed or why.

That's bad enough, but much worse, this will catch all sorts of
exceptions, even ones that are completely unexpected and ought to
cause a more drastic (and thus immediately informative) failure
than 'something failed'.
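
For comparison, a hedged sketch of the narrower style being suggested
(publish_samples and the exception choices here are hypothetical, not actual
ceilometer code): handle only the discrete failures you expect, say what
failed and why, and let everything else propagate loudly.

    import logging

    LOG = logging.getLogger(__name__)

    def publish(publisher, samples, retry_queue):
        try:
            publisher.publish_samples(samples)
        except (IOError, OSError) as err:
            # expected, recoverable: record the cause and keep the samples
            LOG.warning('publishing %d samples failed: %s', len(samples), err)
            retry_queue.append(samples)
        # anything unexpected (e.g. a TypeError from a bug) now fails loudly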

So, my question: Is this something we who dig around in the ceilometer
code ought to care about and make an effort to clean up? If so, I'm
happy to get started.

Thanks.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] Support for Django 1.7 in OpenStack

2014-07-23 Thread Felipe Reyes
Hi Thomas,

On Wed, Jul 23, 2014 at 06:56:51PM +0800, Thomas Goirand wrote:
 First, does anyone know if Django 1.7 is an issue with any of the above
 packages? If there are effectively issues, is there currently any plan
 to fix it?
 
 Ideally, I would like all of the above packages to be able to run with
 Django 1.7 in Icehouse. I don't expect upstream OpenStack to actually do
 the backporting to Icehouse, but if it gets in trunk, I'd be happy to do
 the work for backporting.
 
 If it's considered a bad idea to fix Django 1.7 compatibility in
 stable/icehouse, then I don't mind keeping the patches as Debian
 specific. However, I will *not* have the time to either investigate the
 issue, or do the work (I'm busy enough with packaging all of OpenStack
 and its dependencies in Debian). Also, I don't think I have enough
 expertise myself to work on this: just testing wouldn't be enough,
 IMO, someone with an extensive Django experience needs to work on this.
 
 So, I NEED HELP HERE! :)
 Thoughts, comments, or whatever else is welcome! [1]

According to bug #1266676[0], some work has already been done.

I ran the test suite using Django 1.7rc1 and this is the result:

  Ran 999 tests in 43.882s
  
  FAILED (SKIP=7, errors=90, failures=1)

I'm happy to help patch horizon to run with Django 1.7.

Best Regards,

[0] https://bugs.launchpad.net/horizon/+bug/1266676

Felipe Reyes fre...@tty.cl




[openstack-dev] [OpenStack-dev][neutron] A question about cisco network_profiles.xxxx uri

2014-07-23 Thread Yangxurong
Hi folks,

I'm planning to fix bug 1330095[1], which aims to solve the invalid URI suffix 
shown below, but I hit a problem with a cisco n1kv plugin test case[2].

[1] https://bugs.launchpad.net/neutron/+bug/1330095
When submitting a REST request as follows:

POST http://localhost:9696/v2.0/routers.@@@xxx
body:
{
router:{
  name: ddd
}
}

the request finishes without error.

Generally, the string following "." in the request path is matched as the 
format, which specifies the format of the request body, like xml or json. I 
think we need to check the validity of the suffix and filter out invalid 
formats like @@@xxx.
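
A hedged sketch of the kind of check being proposed (illustrative, not the
actual patch under review): recognize only the known body formats as a path
suffix and reject anything else, which would let the API return HTTP 400 for
paths like routers.@@@xxx.

    VALID_FORMATS = ('json', 'xml')

    def split_format(path):
        """Return (resource_path, fmt); fmt is None when no suffix given."""
        if '.' in path:
            base, suffix = path.rsplit('.', 1)
            if suffix in VALID_FORMATS:
                return base, suffix
            raise ValueError('invalid format suffix: %s' % suffix)
        return path, None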

I hit one test case failure in the cisco n1kv plugin when I submitted my patch: 
https://review.openstack.org/108683

[2] The issue in the network_profiles.xxx URI test case:
/network_profiles.{'network_profile': {'segment_range': '1-10010', 
'segment_type': 'overlay', 'name': 'netp1', 'tenant_id': 'some_tenant', 
'sub_type': 'enhanced', 'multicast_ip_range': '224.1.1.1-224.1.1.10'}}

So the content in the dictionary was matched as the format. This test case 
expects to catch an HTTP 400 exception. I am not sure whether this test case is 
testing an invalid request path, or whether the cisco n1kv plugin supports such 
a path and it's some mistake in the dictionary that causes the exception.

Any good ideas or suggestions about this issue?

Regards,
XuRong Yang


[openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-23 Thread Salvatore Orlando
I'm sure it is not news to anyone that we have already approved too many
specifications for Juno-3. The PTL has indeed made clear that Low priority
blueprints are considered best effort.

However, this still leaves us with 23 medium-to-high priority specifications
to merge in Juno-3. This is already quite close to what the core team can
handle, considering history from previous releases and the fact that there
are 3 very big items in the list (new LB APIs, distributed router, and
group policies).

I've already counted at least 7 requests for spec freeze exceptions on the
mailing list, and it is likely more will come. In order to limit
oversubscription, I would suggest excluding freeze exception requests for
items which do not:
- target stability and scalability for the Neutron FOSS framework
- have community interest. By that I do not mean they must necessarily
target the FOSS bits, but they must have support and interest from a number
of teams of neutron contributors.

I don't want to be evil to contributors, but I think it is better to be
clear now rather than arriving at the end of Juno-3 and having to tell
contributors that unfortunately we were not able to give their patches
enough review cycles.

Salvatore


[openstack-dev] [Trove] Guest prepare call polling mechanism issue

2014-07-23 Thread Denis Makogon
Hello, Stackers.


I'd like to discuss the guestagent prepare call polling mechanism issue (see
[1]).

Let me first describe why this is actually an issue and why it should be
fixed. Those of you who are familiar with Trove know that Trove can
provision instances through the Nova API and the Heat API (see [2] and [3]).



What's the difference between these two ways (in general)? The answer is
simple:

- The Heat-based provisioning method has a polling mechanism that verifies
that stack provisioning completed in a successful state (see [4]), which
means that all stack resources are in ACTIVE state.

- The Nova-based provisioning method doesn't do any polling, which is wrong:
the instance can't fail fast, because the trove-taskmanager service doesn't
verify that the launched server has reached ACTIVE state. That's issue #1:
the compute instance state is unknown, whereas with Heat the delivered
resources are already verified to be ACTIVE.

Once method [2] or [3] finishes, the taskmanager prepares data for the
guest (see [5]) and then tries to send the prepare call to the guest (see
[6]). Here comes issue #2: the polling mechanism makes at least 100 API
calls to Nova to determine the compute instance status.

The taskmanager also makes almost the same number of calls to the Trove
backend to discover the guest status, which is totally normal.

So, here comes the question: why call Nova 99 more times for the same value
if the value returned the first time was completely acceptable?



There's only one way to fix it. Since heat-based provisioning delivers an
instance with a status validation procedure, the same thing should be done
for nova-based provisioning: we should extract compute instance status
polling from the guest prepare polling mechanism and integrate it into [2],
leaving only guest status discovery in the guest prepare polling mechanism.




Benefits? The proposed fix will provide the ability to fail fast for
corrupted instances, and it will reduce the number of redundant Nova API
calls made while attempting to discover the guest status.
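
A hedged sketch of the proposed split (names are illustrative, not the actual
trove-taskmanager code): first poll Nova until the server is ACTIVE, failing
fast on ERROR, so that the prepare-call polling afterwards only has to ask
the guest about its own status.

    import time

    def wait_for_server_active(nova_client, server_id, timeout=600,
                               interval=5):
        deadline = time.time() + timeout
        while time.time() < deadline:
            status = nova_client.servers.get(server_id).status
            if status == 'ACTIVE':
                return              # hand off to guest-status polling
            if status == 'ERROR':
                raise RuntimeError('server %s failed to build' % server_id)
            time.sleep(interval)    # poll Nova only while still building
        raise RuntimeError('server %s did not become ACTIVE' % server_id)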


Proposed fix for this issue - [7].

[1] - https://launchpad.net/bugs/1325512

[2] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L198-L215

[3] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L190-L197

[4] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L420-L429

[5] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L217-L256

[6] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L254-L266

[7] - https://review.openstack.org/#/c/97194/


Thoughts?

Best regards,

Denis Makogon


[openstack-dev] [Trove] Neutron integration test job

2014-07-23 Thread Denis Makogon
Hello, Stackers.


For those of you who are interested in Trove, just letting you know that
Trove can now work with Neutron (hooray!!) instead of nova-network; see [1]
and [2]. It's a huge step forward on the road to advanced OpenStack
integration.

But let's admit it's not the end; we should deal with:

   1. Adding a Neutron-based configuration for DevStack to let folks try it
   (see [3]).

   2. Implementing/providing a new type of testing job that will run all
   Trove tests with Neutron enabled on a regular basis, to verify that all
   our networking preparations for the instance are fine.


The last item is the most interesting, and I'd like to discuss it with all
of you, folks. I've written an initial job template taking into account the
specific configuration required by DevStack and trove-integration (see [4]),
and I'd like to receive any feedback as soon as possible.



[1] - Trove.
https://github.com/openstack/trove/commit/c68fef2b7a61f297b9fe7764dd430eefd4d4a767

[2] - Trove integration.
https://github.com/openstack/trove-integration/commit/9f42f5c9b1a0d8844b3e527bcf2eb9474485d23a

[3] - DevStack patchset. https://review.openstack.org/108966

[4] - POC. https://gist.github.com/denismakogon/76d9bd3181781097c39b


Best regards,

Denis Makogon


Re: [openstack-dev] [Fuel] Soft code freeze is planned for July, 24th

2014-07-23 Thread Mike Scherbakov
Andrew,
thanks for pointing this out. Engineering in Europe has code review as
priority #1, after fixing critical issues which block us from further
testing.

Overall, I think it should be simple. If a developer didn't push the crowd to
review a patch linked to a Low/Medium bug, and it didn't get merged by SCF,
then it should be moved to the next milestone. SCF by definition means
that code has to be in master for Low/Medium bugs, not on review.

Considering the fact that we have so many patches on review, I can propose
the following:

   1. Actively ping each other to get those reviewed and merged.
   2. We can make an exception for those which have at least one +1 from
   some developer but were not addressed by a core developer. In this case
   we can allow the core developer to review and merge such patches by the
   end of the week.

What we are trying to achieve is to limit the code flow into master,
avoiding possible regressions which could be introduced by Low/Medium bug
fixes.

Would it work?

Thanks,


On Tue, Jul 22, 2014 at 9:03 PM, Andrew Woodward xar...@gmail.com wrote:

 Mike,

 I don't think we should SCF until the review queue is addressed; there are
 far too many outstanding reviews at present. I'm not saying the queue has to
 be flushed and revised (although we should allow time for this given the
 size of the outstanding queue), but all patches should be reviewed and
 either merged or minused (addressed). They should not be penalized because
 they are not high priority and no one has gotten around to reviewing them.

 My thought is: prior to SCF, the low and medium priority reviews must be
 addressed, and the submitter should have one additional day to revise the
 patch prior to their code being barred from the release. We could address
 this by having a review deadline the day prior to SCF, or watch the
 excepted patches intently for revision the day after SCF.



 On Tue, Jul 22, 2014 at 8:08 AM, Mike Scherbakov mscherba...@mirantis.com
  wrote:

 Hi Fuelers,
 Looks like we are more or less good to call a Soft Code Freeze [1] on
 Thursday.

 Then a hard code freeze [2] will follow. It is planned to have no more than
 2 weeks between SCF and HCF [3]. When the hard code freeze is called, we
 create the stable/5.1 branch at the same time to accept only critical bug
 fixes, and the release will be produced out of this branch. At the same time
 master will be re-opened to accept new features and all types of bug fixes.

 [1] https://wiki.openstack.org/wiki/Fuel/Soft_Code_Freeze
 [2] https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
  [3] https://wiki.openstack.org/wiki/Fuel/5.1_Release_Schedule

 Let me know if anything blocks us from doing SCF on 24th.

 Thanks,
 --
 Mike Scherbakov
 #mihgen






 --
 Andrew
 Mirantis
 Ceph community





-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [Neutron][QA] Enabling full neutron Job

2014-07-23 Thread Salvatore Orlando
Here I am again bothering you with the state of the full job for Neutron.

The patch fixing an issue in nova's server external events extension
merged yesterday [1].
We do not yet have enough data points to make a reliable assessment, but
out of 37 runs since the patch merged, we had only 5 failures, which puts
the failure rate at about 13%.

This is ugly compared with the current failure rate of the smoketest (3%).
However, I think it is good enough to start making the full job voting, at
least for neutron patches.
Once we're able to bring the failure rate down to around 5%, we can then
enable the job everywhere.

As much as I hate asymmetric gating, I think this is a good compromise to
avoid developers working on other projects being badly affected by the
higher failure rate in the neutron full job.
I will therefore resume work on [2] and remove the WIP status as soon as I
can confirm a failure rate below 15% with more data points.

Salvatore

[1] https://review.openstack.org/#/c/103865/
[2] https://review.openstack.org/#/c/88289/


On 10 July 2014 11:49, Salvatore Orlando sorla...@nicira.com wrote:




 On 10 July 2014 11:27, Ihar Hrachyshka ihrac...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 On 10/07/14 11:07, Salvatore Orlando wrote:
  The patch for bug 1329564 [1] merged about 11 hours ago. From [2]
  it seems there has been an improvement on the failure rate, which
  seem to have dropped to 25% from over 40%. Still, since the patch
  merged there have been 11 failures already in the full job out of
  42 jobs executed in total. Of these 11 failures: - 3 were due to
  problems in the patches being tested - 1 had the same root cause as
  bug 1329564. Indeed the related job started before the patch merged
  but finished after. So this failure doesn't count. - 1 was for an
  issue introduced about a week ago which actually causing a lot of
  failures in the full job [3]. Fix should be easy for it; however
  given the nature of the test we might even skip it while it's
  fixed. - 3 were for bug 1333654 [4]; for this bug discussion is
  going on on gerrit regarding the most suitable approach. - 3 were
  for lock wait timeout errors. Several people in the community are
  already working on them. I hope this will raise the profile of this
  issue (maybe some might think it's just a corner case as it rarely
  causes failures in smoke jobs, whereas the truth is that error
  occurs but it does not cause job failure because the jobs isn't
  parallel).

 Can you give directions on where to find those lock timeout failures?
 I'd like to check logs to see whether they have the same nature as
 most other failures (e.g. improper yield under transaction).


 This logstash query will give you all occurrences of lock wait timeout
 issues: message:"(OperationalError) (1205, 'Lock wait timeout exceeded; try
 restarting transaction')" AND tags:"screen-q-svc.txt"

 The fact that in most cases the build succeeds anyway is misleading,
 because in many cases these errors occur in RPC handling between agents and
 servers, and are therefore not detected by tempest. The neutron full job,
 which is parallel, increases their occurrence because of parallelism; and
 since API requests also occur concurrently, it yields a higher tempest
 build failure rate.

 However, as I argued in the past, the lock wait timeout error should
 always be treated as an error condition.
 Eugene already classified the lock wait timeout failures and filed bugs
 for them a few weeks ago.


 
  Summarizing, I think time is not yet ripe to enable the full job;
  once bug 1333654 is fixed, we should go for it. AFAIK there is no
  way for working around it in gate tests other than disabling
  nova/neutron event reporting, which I guess we don't want to do.
 
  Salvatore
 
  [1] https://review.openstack.org/#/c/105239 [2]
 
 http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9zdGF0dXM6RkFJTFVSRSBBTkQgbWVzc2FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIgQU5EIGJ1aWxkX25hbWU6XCJjaGVjay10ZW1wZXN0LWRzdm0tbmV1dHJvbi1mdWxsXCIgQU5EIGJ1aWxkX2JyYW5jaDpcIm1hc3RlclwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsiZnJvbSI6IjIwMTQtMDctMTBUMDA6MjQ6NTcrMDA6MDAiLCJ0byI6IjIwMTQtMDctMTBUMDg6MjQ6NTMrMDA6MDAiLCJ1c2VyX2ludGVydmFsIjoiMCJ9LCJzdGFtcCI6MTQwNDk4MjU2MjM2OCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==
 
 
 [3]
 
 http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiSFRUUEJhZFJlcXVlc3Q6IFVucmVjb2duaXplZCBhdHRyaWJ1dGUocykgJ21lbWJlciwgdmlwLCBwb29sLCBoZWFsdGhfbW9uaXRvcidcIiBBTkQgdGFnczpcInNjcmVlbi1xLXN2Yy50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiY3VzdG9tIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7ImZyb20iOiIyMDE0LTA3LTAxVDA4OjU5OjAxKzAwOjAwIiwidG8iOiIyMDE0LTA3LTEwVDA4OjU5OjAxKzAwOjAwIiwidXNlcl9pbnRlcnZhbCI6IjAifSwic3RhbXAiOjE0MDQ5ODI3OTc3ODAsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=
 
 
 [4] https://bugs.launchpad.net/nova/+bug/1333654
 
 
  On 2 July 2014 17:57, Salvatore Orlando 

Re: [openstack-dev] [OpenStack] [Barbican] Cinder and Barbican

2014-07-23 Thread Coffman, Joel M.
We are currently working to support Barbican for Cinder volume encryption. Some 
links to our work are as follows:
Blueprint: 
https://blueprints.launchpad.net/cinder/+spec/encryption-with-barbican
Specification: https://review.openstack.org/#/c/106437/ (needs approval from 
another Cinder core)
Implementation: https://review.openstack.org/#/c/106437/

Cheers,
Joel


From: Giuseppe Galeota [mailto:giuseppegale...@gmail.com]
Sent: Tuesday, July 22, 2014 11:39 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [OpenStack] [Barbican] Cinder and Barbican

Dear all,
is Cinder capable today to use Barbican for encryption? If yes, can you link to 
me some useful doc?

Thank you,
Giuseppe


Re: [openstack-dev] [nova] how scheduler handle messages?

2014-07-23 Thread fdsafdsafd
Thanks. It really helps. Thanks a lot.



At 2014-07-23 02:45:40, Vishvananda Ishaya vishvana...@gmail.com wrote:
Workers can consume more than one message at a time due to 
eventlet/greenthreads. The conf option rpc_thread_pool_size determines how many 
messages can theoretically be handled at once. Greenthread switching can happen 
any time a monkeypatched call is made.
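
For illustration, a minimal eventlet sketch (not nova code) of what Vish
describes: each message is handled on its own greenthread, and any
monkeypatched blocking call yields control so other messages can proceed.

    import eventlet
    eventlet.monkey_patch()

    pool = eventlet.GreenPool(size=64)   # cf. rpc_thread_pool_size

    def handle(msg):
        eventlet.sleep(1)                # stands in for a blocking I/O call
        print('handled message %s' % msg)

    for i in range(5):
        pool.spawn_n(handle, i)
    pool.waitall()                       # all five finish in ~1s, not ~5s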


Vish


On Jul 21, 2014, at 3:36 AM, fdsafdsafd jaze...@163.com wrote:


Hello,
   Recently, I used rally to test boot-and-delete. I thought that one 
nova-scheduler would handle the messages sent to it one by one, but the log 
output shows otherwise. So can someone explain how nova-scheduler handles 
messages? I read the code in nova.service and found that one service will 
create a fanout consumer, and that all fanout messages are consumed in one 
thread. So I wonder: how does the nova-scheduler handle messages if there are 
many messages casted to call the scheduler's run_instance?
Thanks a lot.






Re: [openstack-dev] [rally][nova] resize

2014-07-23 Thread fdsafdsafd


At 2014-07-23 00:09:09, Lingxian Kong anlin.k...@gmail.com wrote:
Maybe you are using local storage as your vm system volume backend;
according to the 'resize' implementation, 'rsync' and 'scp' will be
executed during the resize process, which will be the bottleneck.
No, I use NFS. I found that the resize will convert the qcow2 disk to raw and 
then convert it back to qcow2, and I do not know why. Why don't we resize the 
qcow2 directly?
I tested Havana, and in 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py
it also does this.
The comment at line 5221 in the code from the above link says:

 "If we have a non partitioned image that we can extend
 then ensure we're in 'raw' format so we can extend file system."

But our colleague's tests show that we can resize the qcow2 even if we have a 
non-partitioned image. He can resize an image that was just resized.
So, I really do not know why.


 
2014-07-19 13:07 GMT+08:00 fdsafdsafd jaze...@163.com:
 Did someone test the concurrency of nova's resize? i found it has poor
 concurrency, i do not know why. I found most the failed request is rpc
 timeout.
 I write the resize test for nova is boot-resize-confirm-delete.










-- 
Regards!
---
Lingxian Kong



Re: [openstack-dev] [Neutron][QA] Enabling full neutron Job

2014-07-23 Thread Matthew Treinish
On Wed, Jul 23, 2014 at 02:40:02PM +0200, Salvatore Orlando wrote:
 Here I am again bothering you with the state of the full job for Neutron.
 
 The patch for fixing an issue in nova's server external events extension
 merged yesterday [1]
 We do not yet have enough data points to make a reliable assessment, but out
 of 37 runs since the patch merged, we had only 5 failures, which puts
 the failure rate at about 13%.
 
 This is ugly compared with the current failure rate of the smoketest (3%).
 However, I think it is good enough to start making the full job voting at
 least for neutron patches.
 Once we'll be able to bring down failure rate to anything around 5%, we can
 then enable the job everywhere.

I think that sounds like a good plan. I'm also curious how the failure rates
compare to the other non-neutron jobs; that might be a useful comparison
for deciding when to flip the switch everywhere.

 
 As much as I hate asymmetric gating, I think this is a good compromise for
 avoiding developers working on other projects are badly affected by the
 higher failure rate in the neutron full job.

So we discussed this during the project meeting a couple of weeks ago [3] and
there was general agreement that doing it asymmetrically at first would be
better. Everyone should be wary of the potential harms of doing it
asymmetrically, and I think priority will be given to fixing issues that block
the neutron gate should they arise.

 I will therefore resume work on [2] and remove the WIP status as soon as I
 can confirm a failure rate below 15% with more data points.
 

Thanks for keeping on top of this Salvatore. It'll be good to finally be at
least partially gating with a parallel job.

-Matt Treinish

 
 [1] https://review.openstack.org/#/c/103865/
 [2] https://review.openstack.org/#/c/88289/
[3] 
http://eavesdrop.openstack.org/meetings/project/2014/project.2014-07-08-21.03.log.html#l-28

 
 
 On 10 July 2014 11:49, Salvatore Orlando sorla...@nicira.com wrote:
 
 
 
 
  On 10 July 2014 11:27, Ihar Hrachyshka ihrac...@redhat.com wrote:
 
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA512
 
  On 10/07/14 11:07, Salvatore Orlando wrote:
   The patch for bug 1329564 [1] merged about 11 hours ago. From [2]
   it seems there has been an improvement on the failure rate, which
   seem to have dropped to 25% from over 40%. Still, since the patch
   merged there have been 11 failures already in the full job out of
   42 jobs executed in total. Of these 11 failures: - 3 were due to
   problems in the patches being tested - 1 had the same root cause as
   bug 1329564. Indeed the related job started before the patch merged
   but finished after. So this failure doesn't count. - 1 was for an
   issue introduced about a week ago which actually causing a lot of
   failures in the full job [3]. Fix should be easy for it; however
   given the nature of the test we might even skip it while it's
   fixed. - 3 were for bug 1333654 [4]; for this bug discussion is
   going on on gerrit regarding the most suitable approach. - 3 were
   for lock wait timeout errors. Several people in the community are
   already working on them. I hope this will raise the profile of this
   issue (maybe some might think it's just a corner case as it rarely
   causes failures in smoke jobs, whereas the truth is that error
   occurs but it does not cause job failure because the jobs isn't
   parallel).
 
  Can you give directions on where to find those lock timeout failures?
  I'd like to check logs to see whether they have the same nature as
  most other failures (e.g. improper yield under transaction).
 
 
  This logstash query will give you all occurences of lock wait timeout
  issues: message:(OperationalError) (1205, 'Lock wait timeout exceeded; try
  restarting transaction') AND tags:screen-q-svc.txt
 
  The fact that in most cases the build succeeds anyway is misleading,
  because in many cases these errors occur in RPC handling between agents and
  servers, and therefore are not detected by tempest. The neutron full job,
  which is parallel, increases their occurrence because of parallelism - and
  since API request too occur concurrently it also yields a higher tempest
  build failure rate.
 
  However, as I argued in the past the lock wait timeout error should
  always be treated as an error condition.
  Eugene has already classified lock wait timeout failures and filed bugs
  for them a few weeks ago.
 
 
  
   Summarizing, I think time is not yet ripe to enable the full job;
   once bug 1333654 is fixed, we should go for it. AFAIK there is no
   way for working around it in gate tests other than disabling
   nova/neutron event reporting, which I guess we don't want to do.
  
   Salvatore
  
   [1] https://review.openstack.org/#/c/105239 [2]
  
  

Re: [openstack-dev] [TripleO] Spec Minimum Review Proposal

2014-07-23 Thread Alexis Lee
On Tue, Jul 22, 2014 at 9:18 PM, Jay Dobies jason.dob...@redhat.com wrote:
 What are everyone's feelings on adding a 1 spec review per week requirement
 for cores?

Averaged over the standard 90-day period, I presume?

+1 here.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.



Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-23 Thread Kyle Mestery
On Wed, Jul 23, 2014 at 7:28 AM, Salvatore Orlando sorla...@nicira.com wrote:
 I'm sure it is not news to anyone that we have already approved too many
 specifications for Juno-3. The PTL has indeed made clear that Low priority
 blueprints are considered best effort.

 However, this already leaves us with 23 medium to high specifications to
 merge in Juno-3. This is already quite close to what the core team can
 handle, considering history from previous releases and the fact that there
 are 3 very big items in the list (new LB APIs, distributed router, and group
 policies).

 I've already counted at least 7 requests for spec freeze exceptions on the
 mailing list, and it is likely more will come. In order to limit
 oversubscription, I would suggest excluding freeze exception requests for
 items which do not:
 - target stability and scalability for the Neutron FOSS framework
 - have community interest. By that I do not mean they must necessarily
 target the FOSS bits, but they must have support and interest from a number
 of teams of neutron contributors.

 I don't want to be evil to contributors, but I think it is better to be
 clear now rather than arriving at the end of Juno-3 and having to tell
 contributors that unfortunately we were not able to give their patches
 enough review cycles.

Thanks for sending this out, Salvatore. We are way oversubscribed, and
at this point I'm in agreement on not granting any new exceptions
which do not fall under the above guidelines. Given how much is
already packed in there, this makes the most sense.

Thanks,
Kyle

 Salvatore




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Spec Minimum Review Proposal

2014-07-23 Thread Ben Nemec
For everyone's reference, the tripleo-specs stats can be found here:
http://www.nemebean.com/reviewstats/tripleo-specs-30.txt

Note that looking at the stats, over 30 days 1 review per week is only
4, which most of our cores are already doing anyway.  I'm not sure
codifying a requirement to do at least that is going to help much.  To
move the needle I'm thinking we would need at least 3 - most of our
cores aren't meeting that today so it would actually require everyone to
do more reviews.  Spec reviews are difficult and tend to take a
significant amount of time, so that would be a considerable increase in
time commitments for cores.  I'm not sure how I feel about that,
although I'm probably biased because I'm not at 3 per week right now.
:-)

-Ben

On 2014-07-22 15:18, Jay Dobies wrote:
 At the meetup today, the topic of our spec process came up. The
 general sentiment is that the process is still young and the hiccups
 are expected, but we do need to get better about making sure we're
 staying on top of them.
 
 As a first step, it was proposed to add 1 spec review a week to the
 existing 3 reviews per day requirement for cores.
 
 Additionally, we're going to start to capture and review the metrics
 on spec patches specifically during the weekly meeting. That should
 help bring to light how long reviews are sitting in the queue without
 being touched.
 
 What are everyone's feelings on adding a 1 spec review per week
 requirement for cores?
 
 Not surprisingly, I'm +1 for it  :)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel-dev] Upgrades for Murano in MOS

2014-07-23 Thread Serg Melikyan
During this cycle we introduced migrations based on the Alembic
https://bitbucket.org/zzzeek/alembic framework that are incompatible with
the previous set of migrations based on sqlalchemy-migrate
https://github.com/stackforge/sqlalchemy-migrate. These changes are going
to be included in MOS with the release targeting the Juno release of
OpenStack.

The new migration framework makes it impossible to seamlessly migrate from
the previous version of Murano to the next one - all data stored in the
database is going to be lost. Murano (as part of MOS) can't be upgraded from
any previous version of MOS to MOS 6.0.

I suggest including this feature (migrations based on Alembic) in MOS as
soon as possible, to be precise in MOS 5.1. This will allow upgrades for
Murano from MOS 5.1 to all the next versions of MOS, including 6.0. Upgrade
from 5.0.1 to 5.1 for Murano without losing all data will be impossible.
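
(For readers unfamiliar with the framework: a minimal Alembic revision
script looks roughly like the sketch below. The table and revision ids are
illustrative, not Murano's actual migrations.)

"""add environment table (illustrative)

Revision ID: 001
Revises: None
"""
from alembic import op
import sqlalchemy as sa

revision = '001'
down_revision = None


def upgrade():
    # Each revision describes one schema step; Alembic records the current
    # revision in the database, which is why a history started with
    # sqlalchemy-migrate cannot be continued seamlessly.
    op.create_table(
        'environment',
        sa.Column('id', sa.String(36), primary_key=True),
        sa.Column('name', sa.String(255), nullable=False))


def downgrade():
    op.drop_table('environment')
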
-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] More granular role management

2014-07-23 Thread Dolph Mathews
On Wed, Jul 23, 2014 at 1:03 AM, Fei Long Wang feil...@catalyst.net.nz
wrote:

 Greetings,

 I'm trying to figure out if Keystone can support more granular role
 management or if there is any plan to do that in the future. Currently,
 AWS can support adding a role and assigning the capability from 3
 different level/perspective: service, function and resource[1]. Keystone
 can support the service level for now, but I didn't find the
 function/resource level support from current code/blueprint. Am I
 missing anything? Any comment is appreciated. Cheers.


Absolutely, but Keystone does not own the definition of the role (its
capabilities), which is distributed throughout the other services. So while
you can create a role in Keystone and assign it to users however you'd
like, you also have to give that role capabilities by defining policy rules
in the other services. For example, in nova's policy.json:

  https://github.com/openstack/nova/blob/master/etc/nova/policy.json
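
For instance, a deployer could give a custom role capabilities with rules
along these lines (an illustrative sketch, not nova's shipped defaults;
"operator" is a made-up role name):

{
    "compute:start": "role:operator or rule:admin_or_owner",
    "compute:stop": "role:operator or rule:admin_or_owner",
    "compute:delete": "rule:admin_or_owner"
}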



 [1] awspolicygen.s3.amazonaws.com/policygen.html

 --
 Cheers  Best regards,
 Fei Long Wang (王飞龙)
 --
 Senior Cloud Software Engineer
 Tel: +64-48032246
 Email: flw...@catalyst.net.nz
 Catalyst IT Limited
 Level 6, Catalyst House, 150 Willis Street, Wellington
 --


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] Upgrades for Murano in MOS

2014-07-23 Thread Mike Scherbakov
Hi Serg,
what needs to be done in order to include Alembic-related stuff into 5.1?
The thing is that we are just a day before Soft Code Freeze.

If this is a trivial operation, such as adding a new package and updating a
configuration file, then we could consider including it.

Thanks,


On Wed, Jul 23, 2014 at 5:36 PM, Serg Melikyan smelik...@mirantis.com
wrote:

 During this cycle we introduced migrations based on the Alembic
 https://bitbucket.org/zzzeek/alembic framework that are incompatible
 with the previous set of migrations based on sqlalchemy-migrate
 https://github.com/stackforge/sqlalchemy-migrate. These changes are
 going to be included in MOS with the release targeting the Juno release of
 OpenStack.

 The new migration framework makes it impossible to seamlessly migrate from
 the previous version of Murano to the next one - all data stored in the
 database is going to be lost. Murano (as part of MOS) can't be upgraded
 from any previous version of MOS to MOS 6.0.

 I suggest including this feature (migrations based on Alembic) in MOS as
 soon as possible, to be precise in MOS 5.1. This will allow upgrades for
 Murano from MOS 5.1 to all the next versions of MOS, including
 6.0. Upgrade from 5.0.1 to 5.1 for Murano without losing all data will be
 impossible.
 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] running heat horizon unit tests on client changes (was [nova] request to tag novaclient 2.18.0)

2014-07-23 Thread Steve Baker
On 18/07/14 08:35, Matt Riedemann wrote:


 On 7/17/2014 5:48 PM, Steve Baker wrote:
 On 18/07/14 00:44, Joe Gordon wrote:



 On Wed, Jul 16, 2014 at 11:28 PM, Steve Baker sba...@redhat.com
 mailto:sba...@redhat.com wrote:

 On 12/07/14 09:25, Joe Gordon wrote:



 On Fri, Jul 11, 2014 at 4:42 AM, Jeremy Stanley
 fu...@yuggoth.org mailto:fu...@yuggoth.org wrote:

 On 2014-07-11 11:21:19 +0200 (+0200), Matthias Runge wrote:
  this broke horizon stable and master; heat stable is
 affected as
  well.
 [...]

 I guess this is a plea for applying something like the
 oslotest
 framework to client libraries so they get backward-compat
 jobs run
 against unit tests of all dependent/consuming software...
 branchless
 tempest already alleviates some of this, but not the case of
 changes
 in a library which will break unit/functional tests of another
 project.


 We actually do have some tests for backwards compatibility, and
 they all passed. Presumably because both heat and horizon have
 poor integration tests.

 We ran

   * check-tempest-dsvm-full-havana

 http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-full-havana/8e09faa
 SUCCESS in 40m 47s (non-voting)
   * check-tempest-dsvm-neutron-havana

 http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-neutron-havana/b4ad019
 SUCCESS in 36m 17s (non-voting)
   * check-tempest-dsvm-full-icehouse

 http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-full-icehouse/c0c62e5
 SUCCESS in 53m 05s
   * check-tempest-dsvm-neutron-icehouse

 http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-neutron-icehouse/a54aedb
 SUCCESS in 57m 28s


 on the offending patches (https://review.openstack.org/#/c/94166/)

 Infra patch that added these tests:
 https://review.openstack.org/#/c/80698/


 Heat-proper would have continued working fine with novaclient
 2.18.0. The regression was with raising novaclient exceptions,
 which is only required in our unit tests. I saw this break coming
 and switched to raising via from_response
 https://review.openstack.org/#/c/97977/22/heat/tests/v1_1/fakes.py

 Unit tests tend to deal with more internals of client libraries
 just for mocking purposes, and there have been multiple breaks in
 unit tests for heat and horizon when client libraries make
 internal changes.

 This could be avoided if the client gate jobs run the unit tests
 for the projects which consume them.

 That may work but isn't this exactly what integration testing is for?
 If you mean tempest then no, this is different.

 Client projects have done a good job of keeping their public library
 APIs stable. An exception type is public API, but the constructor for
 raising that type arguably is more of a gray area since only the client
 library should be raising its own exceptions.

 However heat and horizon unit tests need to raise client exceptions to
 test their own error condition handling, so exception constructors could
 be considered public API, but only for unit test mocking in other
 projects.

 This problem couldn't have been caught in an integration test because
 nothing outside the unit tests directly raises a client exception.

 There have been other breakages where internal client library changes
 have broken the mocking in our unit tests (I recall a neutronclient
 internal refactor).

 In many cases the cause may be inappropriate mocking in the unit tests,
 but that is cold comfort when the gates break when a client library is
 released.

 Maybe we can just start with adding heat and horizon to the check jobs
 of the clients they consume, but the following should also be
 considered:
 grep python-.*client */requirements.txt

 This could give client libraries more confidence that internal changes
 don't break anything, and allows them to fix mocking in other projects
 before their changes land.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 I don't think we should have to change the gate jobs just so that
 other projects can test against the internals of their dependent
 clients, that sounds like a flawed unit test design to me.

 Looking at
 https://review.openstack.org/#/c/97977/22/heat/tests/v1_1/fakes.py for
 example, why is a fake_exception needed to mock out novaclient's
 NotFound exception?  A better way to do this is that whatever is
 expecting to raise the NotFound should use mock with a side_effect to
 raise novaclient.exceptions.NotFound, then mock handles the spec being
 set on the mock and you don't have to worry about the internal
 construction of the exception class in your unit tests.
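
 A rough sketch of that approach (the client and resource attribute names
 here are hypothetical):

 import mock
 from novaclient import exceptions as nova_exceptions

 def test_check_resize_server_gone(self):
     # side_effect raises the real exception type, so the test no longer
     # depends on how novaclient constructs its exceptions internally.
     self.fc.servers.get = mock.Mock(
         side_effect=nova_exceptions.NotFound('404'))
     self.assertRaises(nova_exceptions.NotFound,
                       self.resource.check_resize_complete)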

fakes.py is ancient and 

Re: [openstack-dev] [Fuel-dev] Upgrades for Murano in MOS

2014-07-23 Thread Serg Melikyan
Hi, Mike,

I can't be specific about implementation details due to my lack of expertise
in Fuel, but to properly handle the update of Murano from a previous version
to MOS 5.1 we need to:

   1. show a warning to the user about deleting all resources managed by
   Murano (all VMs, networks, etc., created as part of application
   deployments);
   2. remove them;
   3. delete the database;
   4. install Murano as usual.

I worry that the first step may be quite hard to implement; here we need
expertise from the Fuel team.



On Wed, Jul 23, 2014 at 5:53 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Hi Serg,
 what needs to be done in order to include Alembic-related stuff into 5.1?
 The thing is that we are just a day before Soft Code Freeze.

 If this is a trivial operation, such as adding a new package and updating
 a configuration file, then we could consider including it.

 Thanks,


 On Wed, Jul 23, 2014 at 5:36 PM, Serg Melikyan smelik...@mirantis.com
 wrote:

 During this cycle we introduced migrations based on the Alembic
 https://bitbucket.org/zzzeek/alembic framework that are incompatible
 with the previous set of migrations based on sqlalchemy-migrate
 https://github.com/stackforge/sqlalchemy-migrate. These changes are
 going to be included in MOS with the release targeting the Juno release of
 OpenStack.

 The new migration framework makes it impossible to seamlessly migrate from
 the previous version of Murano to the next one - all data stored in the
 database is going to be lost. Murano (as part of MOS) can't be upgraded
 from any previous version of MOS to MOS 6.0.

 I suggest including this feature (migrations based on Alembic) in MOS as
 soon as possible, to be precise in MOS 5.1. This will allow upgrades for
 Murano from MOS 5.1 to all the next versions of MOS, including
 6.0. Upgrade from 5.0.1 to 5.1 for Murano without losing all data will be
 impossible.
 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] HTTPS client breaks nova

2014-07-23 Thread Rob Crittenden
It looks like the switch to requests in python-glanceclient
(https://review.openstack.org/#/c/78269/) has broken nova when SSL is
enabled.

I think it is related to the custom object that the glanceclient uses.
If another connection gets pushed into the pool then things fail because
the object isn't a glanceclient VerifiedHTTPSConnection object.

The error seen is:

2014-07-22 16:20:57.571 ERROR nova.api.openstack
req-e9a94169-9af4-45e8-ab95-1ccd3f8caf04 admin admin Caught error:
VerifiedHTTPSConnection instance has no attribute 'insecure'

What I see is that nova works until glance is invoked.

These all work:

$ nova flavor-list
$ glance image-list
$ nova net-list

Now make it go boom:

$ nova image-list
ERROR (Unauthorized): Unauthorized (HTTP 401) (Request-ID:
req-ee964e9a-c2a9-4be9-bd52-3f42c805cf2c)

Now that a bad object is now in the pool nothing in nova works:

$ nova list
ERROR (Unauthorized): Unauthorized (HTTP 401) (Request-ID:
req-f670db83-c830-4e75-b29f-44f61ae161a1)

A restart of nova gets things back to normal.
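
For reference, the apparent culprit (a simplified sketch of glanceclient's
common/https.py, reconstructed from the workaround diff later in this
thread) is a process-global mutation of the requests pool registry:

from requests import adapters
from requests.packages.urllib3 import poolmanager

class HTTPSAdapter(adapters.HTTPAdapter):
    def __init__(self, *args, **kwargs):
        # Side effect: rebinds the pool class for *every* https connection
        # in the process, so other clients sharing requests' pools end up
        # mixing their connection objects with glanceclient's.
        # (HTTPSConnectionPool is glanceclient's subclass, defined
        # elsewhere in the same module.)
        poolmanager.pool_classes_by_scheme["https"] = HTTPSConnectionPool
        super(HTTPSAdapter, self).__init__(*args, **kwargs)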

I'm working on enabling SSL everywhere
(https://bugs.launchpad.net/devstack/+bug/1328226) either directly or
using TLS proxies (stud).
I'd like to eventually get SSL testing done as a gate job which will
help catch issues like this in advance.

rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Neutron ML2 Blueprints

2014-07-23 Thread Vladimir Kuklin
Andrew

AFAIK, extended tests on full HA envs failed due to errors in the deployment
of secondary controllers. There is a new patchset on review, but I am not
sure that this code is passing the extended tests. If it does, then we can
consider merging your code if it works with the NSX and Mellanox code. I am
deeply concerned about this, and my opinion is that we should not do it,
because we could introduce an enormous regression right after Soft Code
Freeze and put our release under very high risk.


Mike, Andrew, what do you think?


On Fri, Jul 18, 2014 at 10:53 AM, Andrew Woodward xar...@gmail.com wrote:

 All issues should be resolved, and CI is passing. Please start testing.


 On Thu, Jul 17, 2014 at 4:30 AM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Andrew, we have extended system tests passing with our current pacemaker
 corosync code. Either it is your environment or some bug we cannot
 reproduce. Also, it may be related to puppet ordering issues, e.g. trying to
 start some services before some others. As [2] is the only issue you are
 pointing at now, let's create a bug and track it in Launchpad.


 On Thu, Jul 17, 2014 at 11:20 AM, Andrew Woodward xar...@gmail.com
 wrote:

 [2] still has no positive progress; simply making puppet stop the
 services isn't all that useful, so we will need to move towards always
 using override files
 [3] is closed as it hasn't occurred in two days
 [4] may be closed as it's not occurring in CI or in my testing anymore

 [5] is closed, was due to [7]

 [7] https://bugs.launchpad.net/puppet-neutron/+bug/1343009

 CI is passing CentOS now, and only failing ubuntu in OSTF. This
 appears to be due to services not being properly managed in
 corosync/pacemaker

 On Tue, Jul 15, 2014 at 11:24 PM, Andrew Woodward xar...@gmail.com
 wrote:
  [2] appears to be made worse, if not caused, by neutron services
  autostarting on debian; no patch yet, we need to add a mechanism to the
  ha layer to generate override files.
  [3] appears to have stopped with this morning's master
  [4] deleting the cluster and restarting mostly removed this; I was
  getting an issue with $::osnailyfacter::swift_partition/.. not existing
  (/var/lib/glance), but it is fixed in rev 29
 
  [5] is still the critical issue blocking progress; I'm at a complete loss
  as to why this is occurring. Changes to ordering have no effect. Next
  steps probably involve pre-hacking keystone and neutron and
  nova-client to be more verbose about their key usage. As a hack we
  could simply restart neutron-server, but I'm not convinced the issue
  can't come back, since we don't know how it started.
 
 
 
  On Tue, Jul 15, 2014 at 6:34 AM, Sergey Vasilenko
  svasile...@mirantis.com wrote:
  [1] fixed in https://review.openstack.org/#/c/107046/
  Thanks for report a bug.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Andrew
  Mirantis
  Ceph community



 --
 Andrew
 Mirantis
 Ceph community

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Andrew
 Mirantis
 Ceph community

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Spec Minimum Review Proposal

2014-07-23 Thread Macdonald-Wallace, Matthew
So given the increased complexity of a spec, why not make it 2 specs per week?

Matt

 -Original Message-
 From: Ben Nemec [mailto:openst...@nemebean.com]
 Sent: 23 July 2014 14:21
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [TripleO] Spec Minimum Review Proposal
 
 For everyone's reference, the tripleo-specs stats can be found here:
 http://www.nemebean.com/reviewstats/tripleo-specs-30.txt
 
 Note that looking at the stats, over 30 days 1 review per week is only 4, 
 which
 most of our cores are already doing anyway.  I'm not sure codifying a
 requirement to do at least that is going to help much.  To move the needle I'm
 thinking we would need at least 3 - most of our cores aren't meeting that 
 today
 so it would actually require everyone to do more reviews.  Spec reviews are
 difficult and tend to take a significant amount of time, so that would be a
 considerable increase in time commitments for cores.  I'm not sure how I feel
 about that, although I'm probably biased because I'm not at 3 per week right
 now.
 :-)
 
 -Ben
 
 On 2014-07-22 15:18, Jay Dobies wrote:
  At the meetup today, the topic of our spec process came up. The
  general sentiment is that the process is still young and the hiccups
  are expected, but we do need to get better about making sure we're
  staying on top of them.
 
  As a first step, it was proposed to add 1 spec review a week to the
  existing 3 reviews per day requirement for cores.
 
  Additionally, we're going to start to capture and review the metrics
  on spec patches specifically during the weekly meeting. That should
  help bring to light how long reviews are sitting in the queue without
  being touched.
 
  What are everyone's feelings on adding a 1 spec review per week
  requirement for cores?
 
  Not surprisingly, I'm +1 for it  :)
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] Upgrades for Murano in MOS

2014-07-23 Thread Mike Scherbakov
Serg,
as of 5.1, we do not have the ability to upgrade OpenStack.

Your case falls under upgrade capabilities. We plan to start working on
OpenStack upgrades in 6.0. As of 5.1, we will have the ability to patch
environments via maintenance releases, i.e. to apply patches to your
Icehouse code from stable/icehouse. Please do not propose such a large
change regarding DB migrations to the stable branch, as Fuel would then be
unable to even patch old envs.

Considering the fact that we can't upgrade Murano Icehouse to Juno, let's
focus on preparing the Juno code in a way that supports further upgrades
from Juno to K* and following releases.


On Wed, Jul 23, 2014 at 6:08 PM, Serg Melikyan smelik...@mirantis.com
wrote:

 Hi, Mike,

 I can't be specific about implementation details due to my lack of
 expertise in Fuel, but to properly handle the update of Murano from a
 previous version to MOS 5.1 we need to:

1. show a warning to the user about deleting all resources managed by
Murano (all VMs, networks, etc., created as part of application
deployments);
2. remove them;
3. delete the database;
4. install Murano as usual.

  I worry that the first step may be quite hard to implement; here we need
 expertise from the Fuel team.



 On Wed, Jul 23, 2014 at 5:53 PM, Mike Scherbakov mscherba...@mirantis.com
  wrote:

 Hi Serg,
 what needs to be done in order to include Alembic-related stuff into 5.1?
 The thing is that we are just a day before Soft Code Freeze.

 If this is a trivial operation, such as adding a new package and updating
 a configuration file, then we could consider including it.

 Thanks,


 On Wed, Jul 23, 2014 at 5:36 PM, Serg Melikyan smelik...@mirantis.com
 wrote:

 During this cycle we introduced migrations based on the Alembic
 https://bitbucket.org/zzzeek/alembic framework that are incompatible
 with the previous set of migrations based on sqlalchemy-migrate
 https://github.com/stackforge/sqlalchemy-migrate. These changes are
 going to be included in MOS with the release targeting the Juno release of
 OpenStack.

 The new migration framework makes it impossible to seamlessly migrate from
 the previous version of Murano to the next one - all data stored in the
 database is going to be lost. Murano (as part of MOS) can't be upgraded
 from any previous version of MOS to MOS 6.0.

 I suggest including this feature (migrations based on Alembic) in MOS
 as soon as possible, to be precise in MOS 5.1. This will allow
 upgrades for Murano from MOS 5.1 to all the next versions of MOS, including
 6.0. Upgrade from 5.0.1 to 5.1 for Murano without losing all data will be
 impossible.
 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Support for Django 1.7 in OpenStack

2014-07-23 Thread Lyle, David
Django 1.7 drops support for python 2.6 [1], so until OpenStack drops
support for 2.6, which is slated for Kilo, Horizon is unfortunately capped
at Django versions below 1.7.

David

[1] 
https://docs.djangoproject.com/en/dev/releases/1.7/#python-compatibility



On 7/23/14, 4:56 AM, Thomas Goirand z...@debian.org wrote:

Hi,

The Debian maintainer of python-django would like to upgrade to version
1.7. He asked, in multiple bug reports, to check for Django 1.7
compatibility. I have the following python modules and bug reports:

https://bugs.debian.org/755613 python-django-appconf
https://bugs.debian.org/755622 python-django-compressor
https://bugs.debian.org/755628 python-django-pyscss
https://bugs.debian.org/755641 python-django-discover-runner
https://bugs.debian.org/755646 python-django-openstack-auth
https://bugs.debian.org/755651 horizon
https://bugs.debian.org/755654 tuskar-ui
https://bugs.debian.org/755656 python-django-bootstrap-form

First, does anyone know if Django 1.7 is an issue with any of the above
packages? If there are indeed issues, is there currently any plan
to fix them?

Ideally, I would like all of the above packages to be able to run with
Django 1.7 in Icehouse. I don't expect upstream OpenStack to actually do
the backporting to Icehouse, but if it gets in trunk, I'd be happy to do
the work for backporting.

If it's considered a bad idea to fix Django 1.7 compatibility in
stable/icehouse, then I don't mind keeping the patches as Debian
specific. However, I will *not* have the time to either investigate the
issue, or do the work (I'm busy enough with packaging all of OpenStack
and its dependencies in Debian). Also, I don't think I have enough
expertise myself to work on this: just testing wouldn't be enough,
IMO, someone with an extensive Django experience needs to work on this.

So, I NEED HELP HERE! :)
Thoughts, comments, or whatever else is welcome! [1]

Cheers,

Thomas Goirand (zigo)

[1] but discussing how upgrading to Django 1.7 would be difficult isn't:
that's counterproductive, not going forward, and not solving any problem.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Spec freeze exception] Support Stateful and Stateless DHCPv6 by dnsmasq

2014-07-23 Thread Martinx - ジェームズ
Just a note... This is huge!! Great news!! Nevertheless, if Juno comes only
with SLAAC, I'll be very, very happy! ;-)

Nice job guys!


On 23 July 2014 01:06, Xu Han Peng pengxu...@gmail.com wrote:

  I would like to request one Juno Spec freeze exception for Support
 Stateful and Stateless DHCPv6 by dnsmasq BP.

 This BP is an important part of IPv6 support in Juno. Router advertisement
 support by RADVD has been merged, and this BP plans to configure
 OpenStack's dnsmasq to work with router advertisements from either an
 external router or an OpenStack-managed RADVD to get the IPv6 network
 fully functional. This BP also allows dnsmasq to work independently for a
 DHCPv6 subnet. Without this BP, only SLAAC mode is enabled, without any
 stateful/stateless DHCPv6 support.

 The spec is under review:
 https://review.openstack.org/#/c/102411/

 Code change for this BP is submitted as well for a while:
 https://review.openstack.org/#/c/106299/

 Thanks,
 Xu Han

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] running heat horizon unit tests on client changes (was [nova] request to tag novaclient 2.18.0)

2014-07-23 Thread Lyle, David


On 7/23/14, 7:51 AM, Steve Baker sba...@redhat.com wrote:

On 18/07/14 08:35, Matt Riedemann wrote:


 On 7/17/2014 5:48 PM, Steve Baker wrote:
 On 18/07/14 00:44, Joe Gordon wrote:



 On Wed, Jul 16, 2014 at 11:28 PM, Steve Baker sba...@redhat.com
 mailto:sba...@redhat.com wrote:

 On 12/07/14 09:25, Joe Gordon wrote:



 On Fri, Jul 11, 2014 at 4:42 AM, Jeremy Stanley
 fu...@yuggoth.org mailto:fu...@yuggoth.org wrote:

 On 2014-07-11 11:21:19 +0200 (+0200), Matthias Runge wrote:
  this broke horizon stable and master; heat stable is
 affected as
  well.
 [...]

 I guess this is a plea for applying something like the
 oslotest
 framework to client libraries so they get backward-compat
 jobs run
 against unit tests of all dependant/consuming software...
 branchless
 tempest already alleviates some of this, but not the case of
 changes
 in a library which will break unit/functional tests of
another
 project.


 We actually do have some tests for backwards compatibility, and
 they all passed. Presumably because both heat and horizon have
 poor integration tests.

 We ran

   * check-tempest-dsvm-full-havana

 
http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-full-havana/8e09faa
 SUCCESS in 40m 47s (non-voting)
   * check-tempest-dsvm-neutron-havana

 
http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-neutron-havana/b4ad019
 SUCCESS in 36m 17s (non-voting)
   * check-tempest-dsvm-full-icehouse

 
http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-full-icehouse/c0c62e5
 SUCCESS in 53m 05s
   * check-tempest-dsvm-neutron-icehouse

 
http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-neutron-icehouse/a54aedb
 SUCCESS in 57m 28s


 on the offending patches
(https://review.openstack.org/#/c/94166/)

 Infra patch that added these tests:
 https://review.openstack.org/#/c/80698/


 Heat-proper would have continued working fine with novaclient
 2.18.0. The regression was with raising novaclient exceptions,
 which is only required in our unit tests. I saw this break coming
 and switched to raising via from_response
 https://review.openstack.org/#/c/97977/22/heat/tests/v1_1/fakes.py

 Unit tests tend to deal with more internals of client libraries
 just for mocking purposes, and there have been multiple breaks in
 unit tests for heat and horizon when client libraries make
 internal changes.

 This could be avoided if the client gate jobs run the unit tests
 for the projects which consume them.

 That may work but isn't this exactly what integration testing is for?
 If you mean tempest then no, this is different.

 Client projects have done a good job of keeping their public library
 APIs stable. An exception type is public API, but the constructor for
 raising that type arguably is more of a gray area since only the client
 library should be raising its own exceptions.

 However heat and horizon unit tests need to raise client exceptions to
 test their own error condition handling, so exception constructors
could
 be considered public API, but only for unit test mocking in other
 projects.

 This problem couldn't have been caught in an integration test because
 nothing outside the unit tests directly raises a client exception.

 There have been other breakages where internal client library changes
 have broken the mocking in our unit tests (I recall a neutronclient
 internal refactor).

 In many cases the cause may be inappropriate mocking in the unit tests,
 but that is cold comfort when the gates break when a client library is
 released.

 Maybe we can just start with adding heat and horizon to the check jobs
 of the clients they consume, but the following should also be
 considered:
 grep python-.*client */requirements.txt

 This could give client libraries more confidence that internal changes
 don't break anything, and allows them to fix mocking in other projects
 before their changes land.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 I don't think we should have to change the gate jobs just so that
 other projects can test against the internals of their dependent
 clients, that sounds like a flawed unit test design to me.

 Looking at
 https://review.openstack.org/#/c/97977/22/heat/tests/v1_1/fakes.py for
 example, why is a fake_exception needed to mock out novaclient's
 NotFound exception?  A better way to do this is that whatever is
 expecting to raise the NotFound should use mock with a side_effect to
 raise novaclient.exceptions.NotFound, then mock handles the spec being
 set on the mock and you don't have to worry about the internal
 construction 

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Carlos Garza
Do you have any idea as to how we can split up the work?

On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk evge...@radware.com
 wrote:

 Hi,
 
 I'm working on TLS integration with the loadbalancer v2 extension and db.
 Based on Brandon's patches https://review.openstack.org/#/c/105609 ,
 https://review.openstack.org/#/c/105331/ , and
 https://review.openstack.org/#/c/105610/
 I will abandon the previous 2 patches for TLS, which are
 https://review.openstack.org/#/c/74031/ and
 https://review.openstack.org/#/c/102837/
 I am planning to submit my change later today. It will include lbaas
 extension v2 modifications, lbaas db v2 modifications, an alembic migration
 for schema changes, and new unit tests for lbaas db v2.
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
 Sent: Wednesday, July 23, 2014 3:54 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
 
   Since it looks like the TLS blueprint was approved I'm sure we're all
 eager to start coding, so how should we divide up work on the source code? I
 have pull requests in pyopenssl https://github.com/pyca/pyopenssl/pull/143
 and a few one-liners in pyca/cryptography to expose the needed low-level
 functionality that I'm hoping will be added pretty soon so that PR 143's
 tests can pass. In case it doesn't, we will fall back to using
 pyasn1_modules, as it already has a means to fetch what we want at a lower
 level. I'm just hoping that we can split the work up so that we can
 collaborate together on this without over-serializing the work, where
 people become dependent on waiting for someone else to complete their work
 or, worse, one person ends up doing all the work.
 
   
  Carlos D. Garza ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Neutron] Request voting for Tail-f CI account

2014-07-23 Thread Luke Gorrie
On 22 July 2014 11:06, Luke Gorrie l...@tail-f.com wrote:

 End of Part One.


Let's skip Part Two. That is just more frustration.

Let's talk about Part Three in which we all do awesome CI hacking in Juno
together :-).

Here is what I want to achieve in Juno:

NFV CI: My colleagues and I are developing the open source Neutron
networking for Deutsche Telekom's TeraStream project, and we want to bring
up a CI that tests this configuration. That will exercise new and
exciting-for-NFV features of QEMU, Libvirt, Nova, and Neutron. This should
serve several purposes: making TeraStream a success story for OpenStack and
Neutron, making the whole design easy to replicate for other users (it's
already open source), and providing test coverage for more OpenStack
features. (More info:
http://blog.ipspace.net/2013/11/deutsche-telekom-terastream-designed.html)

People: I want to onboard great new open source hackers into the OpenStack
community and get them contributing to CI. I am right now bringing new
people up to speed on OpenStack development and working with them on
bringing up our NFV CI this month.

shellci: I want to make shellci a practical alternative for CI operators
whose style is more screen+bash+awk than jenkins+zuul+nodepool. The
development is already done, and it works great in my own tests, so now we
plan to battle test it on the NFV CI. (link:
https://github.com/SnabbCo/shellci)

Tail-f NCS: I want to keep this feature well maintained and compliant with
all the rules. I am the person who wrote this driver originally, I have
been the responsible person for 90% of its lifetime, I am the person who
setup the current CI, and I am the one responsible for smooth operation of
that CI. I am reviewing its results with my morning coffee and have been
doing so for the past 6 weeks. I would like to have it start voting and I
believe that it and I are ready for that. I am responsive to email, I am
usually on IRC (lukego), and in case of emergency you can SMS/call my
mobile on +41 79 244 32 17.

So... Let's be friends again? (and do ever cooler stuff in Kilo?)

Cheers!
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-sdk-php] Weekly Meeting Cancelations

2014-07-23 Thread Matthew Farina
The PHP SDK Meetings for 7/23 and 7/30 are canceled. The next meeting will
be 8/6.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [nova] neutron / nova-network parity meeting minutes

2014-07-23 Thread Kyle Mestery
For those interested in the progress of this particular task, meeting
minutes are available at the below:

http://eavesdrop.openstack.org/meetings/neutron_nova_network_parity/2014/

Thanks to all who attended!

Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Image tagging

2014-07-23 Thread Serg Melikyan
I would also suggest looking at the Graffiti
https://wiki.openstack.org/wiki/Graffiti project. I think Graffiti is
designed to solve problems like ours with images, however I don't know
how well it fits us. They are working very hard to make the project's
functionality available as part https://review.openstack.org/98554 of
Glance.

If it really can solve our problem, we can design a solution that exposes
functionality compatible in capabilities with Graffiti and have a limited
short-term implementation that can eventually be replaced by Glance
[with the *Metadata Definitions Catalog* feature].


On Wed, Jul 23, 2014 at 1:52 AM, Stan Lagun sla...@mirantis.com wrote:

 How do you like this alternate design: users can choose any image they want
 (say, any Linux) but the JSON in the image tag has enough information on
 what applications are installed on that image. And not just whether they
 are installed, but the exact state in which installation was frozen (say,
 binaries are deployed but config files still need to be modified). The
 deployment workflow can pick that state up from the image tag and continue
 right from the place it was stopped last time. So if the user has chosen an
 image with MySQL preinstalled, the workflow will just post-configure it,
 while if the user chose a clean Linux image it will do the whole deployment
 from scratch. Thus it becomes only a matter of optimization, and the user
 will still be able to share an instance between several applications (a
 good example is a Firewall app) or deploy his app even if there is no image
 with it built in.

 Those are only my thoughts and this needs a proper design. For now I agree
 that we need to improve tagging to support your use case. But this needs to
 be done in a way that both users and machines can work with. The UI at
 least needs to distinguish between Linux and Windows, while for users
 free-form tagging may be appropriate. Both can be stored in a single JSON
 tag.

 So let's create a blueprint/etherpad for this and both think about an exact
 format that can be implemented right now.

 Sincerely yours,
 Stan Lagun
 Principal Software Engineer @ Mirantis

  sla...@mirantis.com


 On Tue, Jul 22, 2014 at 10:08 PM, McLellan, Steven steve.mclel...@hp.com
 wrote:

  Thanks for the response.



 Primarily I’m thinking about a situation where I have an image that has a
 specific piece of software installed (let’s say MySQL for the sake of
 argument). My application (which configures mysql) requires a glance image
 that has MySQL pre-installed, and doesn’t particularly care what OS (though
 again for the sake of argument assume it’s linux of some kind, so that
 configuration files are expected to be in the same place regardless of OS).



 Currently we have a list of three hardcoded values in the UI, and none of
 them apply properly. I’m suggesting instead of that list, we allow
 free-form text; if you’re tagging glance images, you are expected to know
 what applications will be looking for. This still leaves a problem in that
 I can upload a package but I don’t necessarily have the ability to mark any
 images as valid for it, but I think that can be a later evolution; for now,
 I’m focusing on the situation where an admin is both uploading glance
 images and murano packages.



 As a slight side note, we do have the ability to filter image sizes based
 on glance properties (RAM, cpus), but this is in the UI code, not enforced
 at the contractual level. I agree reengineering some of this to be at the
 contract level is a good goal, but it seems like that would involve major
 reengineering of the dashboard to make it much dumber and go through the
 murano API for everything (which ultimately is probably a good thing).



 *From:* Stan Lagun [mailto:sla...@mirantis.com]
 *Sent:* Sunday, July 20, 2014 5:42 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Murano] Image tagging



 Hi!



  I think it would be useful to share the original vision on tagging that
  we had back in the 0.4 era when it was introduced.

  Tagging was supposed to be JSON image metadata with an extendable schema.
  Workflows should be able to both utilize that metadata and impose some
  constraints on it. That feature was never really designed, so I cannot tell
  exactly how this JSON should work or what it should look like. As far as I
  see it, it can contain:



 1. Operating system information. For example os: { "family": "Linux",
 "name": "Ubuntu", "version": "12.04", "arch": "x86_x64" } (this may also be
 encoded as a single string)

 Workflows (MuranoPL contracts) need to be able to express
 requirements based on those attributes. For example



 image:

   Contract($.class(Image).check($.family = 'Linux' and $.arch = 'x86'))



 In the UI only those images that match such a contract should be displayed.



 2. Human-readable image title, e.g. "Ubuntu Linux 12.04 x86"



 3. Information about built-in software for image-based deployment. Not
 sure exactly what information is needed. Maybe even a portion of the Object
 Model so 

Re: [openstack-dev] [nova] Manage multiple clusters using a single nova service

2014-07-23 Thread Dan Smith
 I just do not support the idea that Nova needs to change its
 fundamental design in order to support the *design* of other host
 management platforms.
 
 The current implementation doesn't make nova change its design, the 
 scheduling decisions are still done by nova.

Nova's design is not just making the scheduling decisions but also
includes the deployment model, which is intended to be a single compute
service tied to a single hypervisor. I think that's important for scale
and failure isolation at least.

 It's only the deployment that has been changed. Agreed that there are
 no separate topic-exchange queues for each cluster.

I'm definitely with Jay here: I want to get away from hiding larger
systems behind a single compute host/service.

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Evgeny Fedoruk
Hi Carlos,

As I understand, you are working on a common module for Barbican
interactions. I will commit my code later today and would appreciate it if
you and anybody else who is interested would review this change.
There is one specific spot for the common Barbican interactions module API
integration.
After the IRC meeting tomorrow, we can discuss the work items and decide who
is interested/available to do them.
Does that make sense?

Thanks,
Evg

-Original Message-
From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
Sent: Wednesday, July 23, 2014 6:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

Do you have any idea as to how we can split up the work?

On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk evge...@radware.com
 wrote:

 Hi,
 
 I'm working on TLS integration with the loadbalancer v2 extension and db.
 Based on Brandon's patches https://review.openstack.org/#/c/105609 ,
 https://review.openstack.org/#/c/105331/ , and
 https://review.openstack.org/#/c/105610/
 I will abandon the previous 2 patches for TLS, which are
 https://review.openstack.org/#/c/74031/ and
 https://review.openstack.org/#/c/102837/
 I am planning to submit my change later today. It will include lbaas
 extension v2 modifications, lbaas db v2 modifications, an alembic migration
 for schema changes, and new unit tests for lbaas db v2.
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
 Sent: Wednesday, July 23, 2014 3:54 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
 
   Since it looks like the TLS blueprint was approved I'm sure we're all
 eager to start coding, so how should we divide up work on the source code? I
 have pull requests in pyopenssl https://github.com/pyca/pyopenssl/pull/143
 and a few one-liners in pyca/cryptography to expose the needed low-level
 functionality that I'm hoping will be added pretty soon so that PR 143's
 tests can pass. In case it doesn't, we will fall back to using
 pyasn1_modules, as it already has a means to fetch what we want at a lower
 level. I'm just hoping that we can split the work up so that we can
 collaborate together on this without over-serializing the work, where
 people become dependent on waiting for someone else to complete their work
 or, worse, one person ends up doing all the work.
 
   
  Carlos D. Garza ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Use FQDN in Ring files instead of ip

2014-07-23 Thread John Dickinson
Using hostnames instead of IPs is, as mentioned above, something under 
consideration in that patch.

However, note that until now, we've intentionally kept it as just IP addresses 
since using hostnames adds a lot of operational complexity and burden. I 
realize that hostnames may be preferred in some cases, but this places a very 
large strain on DNS systems. So basically, it's a question of do we add the 
feature, knowing that most people who use it will in fact be making their lives 
more difficult, or do we keep it out, knowing that we won't be serving those 
who actually require the feature.

--John



On Jul 23, 2014, at 2:29 AM, Matsuda, Kenichiro 
matsuda_keni...@jp.fujitsu.com wrote:

 Hi,
 
 Thank you for the info.
 I was able to understand that hostname support is under development.
 
 Best Regards,
 Kenichiro Matsuda.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] HTTPS client breaks nova

2014-07-23 Thread Rob Crittenden
Rob Crittenden wrote:
 It looks like the switch to requests in python-glanceclient
 (https://review.openstack.org/#/c/78269/) has broken nova when SSL is
 enabled.
 
 I think it is related to the custom object that the glanceclient uses.
 If another connection gets pushed into the pool then things fail because
 the object isn't a glanceclient VerifiedHTTPSConnection object.
 
 The error seen is:
 
 2014-07-22 16:20:57.571 ERROR nova.api.openstack
 req-e9a94169-9af4-45e8-ab95-1ccd3f8caf04 admin admin Caught error:
 VerifiedHTTPSConnection instance has no attribute 'insecure'
 
 What I see is that nova works until glance is invoked.
 
 These all work:
 
 $ nova flavor-list
 $ glance image-list
 $ nova net-list
 
 Now make it go boom:
 
 $ nova image-list
 ERROR (Unauthorized): Unauthorized (HTTP 401) (Request-ID:
 req-ee964e9a-c2a9-4be9-bd52-3f42c805cf2c)
 
 Now that a bad object is now in the pool nothing in nova works:
 
 $ nova list
 ERROR (Unauthorized): Unauthorized (HTTP 401) (Request-ID:
 req-f670db83-c830-4e75-b29f-44f61ae161a1)
 
 A restart of nova gets things back to normal.
 
 I'm working on enabling SSL everywhere
 (https://bugs.launchpad.net/devstack/+bug/1328226) either directly or
 using TLS proxies (stud).
 I'd like to eventually get SSL testing done as a gate job which will
 help catch issues like this in advance.
 
 rob

FYI, my temporary workaround is to change the queue name (scheme) so the
glance clients are handled separately:

diff --git a/glanceclient/common/https.py b/glanceclient/common/https.py
index 6416c19..72ed929 100644
--- a/glanceclient/common/https.py
+++ b/glanceclient/common/https.py
@@ -72,7 +72,7 @@ class HTTPSAdapter(adapters.HTTPAdapter):
     def __init__(self, *args, **kwargs):
         # NOTE(flaper87): This line forces poolmanager to use
         # glanceclient HTTPSConnection
-        poolmanager.pool_classes_by_scheme["https"] = HTTPSConnectionPool
+        poolmanager.pool_classes_by_scheme["glance_https"] = HTTPSConnectionPool
         super(HTTPSAdapter, self).__init__(*args, **kwargs)

     def cert_verify(self, conn, url, verify, cert):
@@ -92,7 +92,7 @@ class HTTPSConnectionPool(connectionpool.HTTPSConnectionPool):
     be used just when the user sets --no-ssl-compression.
     """

-    scheme = 'https'
+    scheme = 'glance_https'

     def _new_conn(self):
         self.num_connections += 1

This at least lets me continue working.

rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Meeting time change

2014-07-23 Thread Kurt Griffiths
OK, I just checked and 1400 and 1500 are already taken, unless we want to
move our meetings to #openstack-meeting-3. If we want to stick with
#openstack-meeting-alt, it will have to be 1300 UTC.

On 7/22/14, 5:28 PM, Flavio Percoco fla...@redhat.com wrote:

On 07/22/2014 06:08 PM, Kurt Griffiths wrote:
 FYI, we chatted about this in #openstack-marconi today and decided to
try
 2100 UTC for tomorrow. If we would like to alternate at an earlier time
 every other week, is 1900 UTC good, or shall we do something more like
 1400 UTC?


We can keep the same time we're using, if possible. That is, 15UTC. If
that slot is taken, then 14UTC sounds good.

Cheers,
Flavio

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV] Meeting Summary - 2014-07-23-14.00

2014-07-23 Thread Steve Gordon
Hi all,

Please find the summaries and full logs for today's NFV sub team meeting at 
these locations:

Summary (HTML): 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-07-23-14.00.html
Full Log (HTML): 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-07-23-14.00.log.html
Summary (TXT): 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-07-23-14.00.log.txt
Full Log (TXT): 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-07-23-14.00.txt

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] putting [tag] in LP bug titles instead of using LP tags

2014-07-23 Thread Mike Scherbakov
I'm not against creating bugs initially with such a title to make visual
search easier.
However I think that re-titling existing bugs is not needed, as it leads to
spam.

Mike Scherbakov
#mihgen
On Jul 23, 2014 4:24 AM, Dmitry Borodaenko dborodae...@mirantis.com
wrote:

 +1

 To provide some more context, we discussed this in the team meeting last
 week:

 http://eavesdrop.openstack.org/meetings/fuel/2014/fuel.2014-07-17-16.00.log.html#l-107

 and agreed to stop doing it until further discussion, or at all.


 On Tue, Jul 22, 2014 at 4:36 PM, Andrew Woodward xar...@gmail.com wrote:
  There has been an increased occurrence of using [tag] in the title instead
  of adding the tag to the tags section of the LP bugs for Fuel.
 
  As we discussed in the Fuel meeting last Thursday, we should stop doing
  this, as it causes several issues:
  * It spams e-mail.
  * It breaks threading that your mail client may perform as it changes the
  subject.
  * They aren't searchable as easily as tags
  * They are going to look even more ugly when more tags are added or
 removed
  from the bug.
 
  --
  Andrew
  Mirantis
  Ceph community
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Dmitry Borodaenko

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Itzik Brown

Hi,

I see that the option to specify vif_driver in nova.conf for libvirt is
deprecated for the Juno release.
What is the way to use an external VIF driver (i.e. one that is out of
tree)?
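
(For reference, the setting being deprecated looks roughly like this in
nova.conf; the class shown is the in-tree default, and an out-of-tree
driver would substitute its own import path:)

[libvirt]
vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver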


Itzik


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Guest prepare call polling mechanism issue

2014-07-23 Thread Tim Simpson
To summarize, this is a conversation about the following LaunchPad bug: 
https://launchpad.net/bugs/1325512
and Gerrit review: https://review.openstack.org/#/c/97194/6

You are saying that the function _service_is_active, in addition to polling the 
datastore service status, also polls the status of the Nova resource. At first I 
thought this wasn't the case; however, looking at your pull request I was 
surprised to see that line 320 
(https://review.openstack.org/#/c/97194/6/trove/taskmanager/models.py) polls 
Nova using the get method (which I wish were called refresh, as to me get 
sounds like a lazy-loader or something, despite making a full GET request each 
time).
So moving this polling out of there into the two respective create_server 
methods, as you have done, is not only going to be useful for Heat and avoid the 
issue of calling Nova 99 times that you describe, but it will actually help 
operations teams to see more clearly that the issue was with a server that 
didn't provision. We actually had an issue in Staging the other day that took 
us forever to figure out: the server wasn't provisioning, but before anything 
checked that it was ACTIVE, the DNS code detected that the server had no IP 
address (never mind that it was in a FAILED state), so the logs surfaced this as 
a DNS error. This change should help us avoid such issues.

Thanks,

Tim



From: Denis Makogon [dmako...@mirantis.com]
Sent: Wednesday, July 23, 2014 7:30 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Trove] Guest prepare call polling mechanism issue


Hello, Stackers.


I’d like to discuss the guestagent prepare call polling mechanism issue (see [1]).


Let me first describe why this is actually an issue and why it should be fixed. 
Those of you who are familiar with Trove know that Trove can provision 
instances through the Nova API and the Heat API (see [2] and [3]).



What’s the difference between these two ways (in general)? The answer is 
simple:

- The Heat-based provisioning method has a polling mechanism that verifies that 
stack provisioning completed successfully (see [4]), which means that all 
stack resources are in ACTIVE state.

- The Nova-based provisioning method doesn’t do any polling (which is wrong): an 
instance can’t fail as fast as possible because the Trove-taskmanager service 
doesn’t verify that the launched server has reached ACTIVE state. That’s issue 
#1 - the compute instance state is unknown, whereas with Heat all the 
resources delivered by the stack are already in ACTIVE state.


Once method [2] or [3] finishes, the taskmanager prepares data for the guest 
(see [5]) and then sends the prepare call to the guest (see [6]). Here comes 
issue #2 - the polling mechanism makes at least 100 API calls to Nova to 
determine the compute instance status.

The taskmanager also makes almost the same number of calls to the Trove backend 
to discover the guest status, which is totally normal.


So, here comes the question: why call Nova 99 more times for the same value if 
the value returned the first time was completely acceptable?



There’s only one way to fix it. Since Heat-based provisioning delivers the 
instance with a status validation procedure, the same thing should be done for 
Nova-based provisioning: we should extract the compute instance status polling 
from the guest prepare polling mechanism and integrate it into [2], leaving 
only guest status discovery in the guest prepare polling mechanism.
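
To illustrate, here is a minimal sketch of the proposed split (the helper
names, signatures and model lookups below are illustrative assumptions, not
the actual code from [7]):

    from trove.common.utils import poll_until

    def create_server_and_wait(nova_client, server_args, timeout=600):
        # Nova-based provisioning: poll the compute server itself so a
        # broken server fails fast, mirroring what Heat already does
        # for stacks.
        server = nova_client.servers.create(**server_args)

        def server_is_active():
            status = nova_client.servers.get(server.id).status
            if status == 'ERROR':
                raise Exception('Compute instance failed to provision')
            return status == 'ACTIVE'

        poll_until(server_is_active, sleep_time=3, time_out=timeout)
        return server

    def wait_for_guest_prepare(instance_id, timeout=600):
        # Guest prepare polling: only ask the Trove backend for the
        # guestagent status; no redundant Nova calls in this loop.
        # InstanceServiceStatus/ServiceStatuses stand in for Trove's
        # guest status models.
        def guest_is_active():
            status = InstanceServiceStatus.find_by(
                instance_id=instance_id).get_status()
            return status == ServiceStatuses.RUNNING

        poll_until(guest_is_active, sleep_time=3, time_out=timeout)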





Benefits? The proposed fix will give the ability to fail fast for corrupted 
instances, and it will reduce the number of redundant Nova API calls made while 
attempting to discover the guest status.



Proposed fix for this issue - [7].


[1] - https://launchpad.net/bugs/1325512

[2] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L198-L215

[3] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L190-L197

[4] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L420-L429

[5] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L217-L256

[6] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L254-L266

[7] - https://review.openstack.org/#/c/97194/



Thoughts?


Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] threading in nova (greenthreads, OS threads, etc.)

2014-07-23 Thread Chris Friesen


Hi all,

I was wondering if someone could point me to a doc describing the 
threading model for nova.


I know that we use greenthreads to map multiple threads of execution 
onto a single native OS thread.  And the python GIL results in 
limitations as well.


According to the description at 
https://bugs.launchpad.net/tripleo/+bug/1203906; for nova-api we 
potentially fork off multiple instances because it's database-heavy and 
we don't want to serialize on the database.


If that's the case, why do we only run one instance of nova-conductor on 
a single OS thread?


And looking at nova-compute on a compute node with no instances running 
I see 22 OS threads.  Where do these come from?  Are these related to 
libvirt?  Or are they forked the way that nova-api is?


Any pointers would be appreciated.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 07:38:05PM +0300, Itzik Brown wrote:
 Hi,
 
 I see that the option to specify vif_driver in nova.conf for libvirt is
 deprecated for Juno release.

Hmm, that is not right. There's no intention to remove the vif_driver
parameter itself. We were supposed to merely deprecate the various
legacy VIF driver implementations in Nova, not remove the ability
to use 3rd party ones.

 What is the way to use an external VIF driver (i.e. that is out of the
 tree)?

Continue using the 'vif_driver' config parameter.
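
For example, a nova.conf entry along these lines (the section placement and
the driver class below are illustrative; check the docs for your release):

    [libvirt]
    # Point the libvirt driver at an out-of-tree VIF driver class.
    # 'mypackage.vif.MyCustomVifDriver' is a made-up example, not a
    # real driver shipped with Nova.
    vif_driver = mypackage.vif.MyCustomVifDriver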

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] threading in nova (greenthreads, OS threads, etc.)

2014-07-23 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 10:41:06AM -0600, Chris Friesen wrote:
 
 Hi all,
 
 I was wondering if someone could point me to a doc describing the threading
 model for nova.
 
 I know that we use greenthreads to map multiple threads of execution onto a
 single native OS thread.  And the python GIL results in limitations as well.
 
 According to the description at
 https://bugs.launchpad.net/tripleo/+bug/1203906; for nova-api we
 potentially fork off multiple instances because it's database-heavy and we
 don't want to serialize on the database.
 
 If that's the case, why do we only run one instance of nova-conductor on a
 single OS thread?
 
 And looking at nova-compute on a compute node with no instances running I
 see 22 OS threads.  Where do these come from?  Are these related to libvirt?
 Or are they forked the way that nova-api is?

Since native C API calls block greenthreads, nova has a native thread pool
that is used for each libvirt API call. A similar thing is done for the
libguestfs API calls and optionally you can do it in the database driver
too. Basically any python module involving native C calls should be a
candidate for a native thread pool.
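
For illustration, the usual eventlet pattern for this looks roughly like the
following (a generic sketch, not Nova's exact code):

    from eventlet import tpool

    def lookup_domain(conn, instance_name):
        # conn.lookupByName is a blocking libvirt C call; run directly it
        # would stall every greenthread sharing this OS thread. Handing it
        # to tpool.execute runs it in a native worker thread while the
        # eventlet hub keeps scheduling other greenthreads.
        return tpool.execute(conn.lookupByName, instance_name)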

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-23 Thread Ben Nemec
 

I left a comment on one of the commits, but in general here are my
thoughts: 

1) I would prefer not to do things like switch to oslo.i18n outside of
Gerrit. I realize we don't have a specific existing policy for this, but
doing that significant work outside of Gerrit is not desirable IMHO. It
needs to happen either before graduation or after import into Gerrit. 

2) I definitely don't want to be accepting enable [hacking check]
changes outside Gerrit. The github graduation step is _just_ to get the
code in shape so it can be imported with the tests passing. It's
perfectly acceptable to me to just ignore any hacking checks during this
step and fix them in Gerrit where, again, the changes can be reviewed. 

At a glance I don't see any problems with the changes that have been
made, but I haven't looked that closely and I think it brings up some
topics for clarification in the graduation process. 

Thanks. 

-Ben 

On 2014-07-22 08:44, gordon chung wrote: 

 hi, 
 
 following the oslo graduation protocol, could the oslo team review the 
 oslo.middleware library[1] i've created and see if there are any issues. 
 
 [1] https://github.com/chungg/oslo.middleware [2] 
 
 cheers,
 gord
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [1]

 

Links:
--
[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] https://github.com/chungg/oslo.middleware
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FW: [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Evgeny Fedoruk
My code is here:
https://review.openstack.org/#/c/109035/1



-Original Message-
From: Evgeny Fedoruk 
Sent: Wednesday, July 23, 2014 6:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

Hi Carlos,

As I understand, you are working on a common module for Barbican interactions.
I will commit my code later today and would appreciate it if you and anybody 
else who is interested would review this change.
There is one specific spot for the common Barbican interactions module API 
integration.
After the IRC meeting tomorrow, we can discuss the work items and decide who is 
interested/available to do them.
Does it make sense?

Thanks,
Evg

-Original Message-
From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
Sent: Wednesday, July 23, 2014 6:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

Do you have any idea as to how we can split up the work?

On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk evge...@radware.com
 wrote:

 Hi,
 
 I'm working on TLS integration with loadbalancer v2 extension and db.
 Basing on Brandon's  patches https://review.openstack.org/#/c/105609 , 
 https://review.openstack.org/#/c/105331/  , 
 https://review.openstack.org/#/c/105610/
 I will abandon previous 2 patches for TLS which are 
 https://review.openstack.org/#/c/74031/ and 
 https://review.openstack.org/#/c/102837/ 
 Managing to submit my change later today. It will include lbaas extension v2 
 modification, lbaas db v2 modifications, alembic migration for schema changes 
 and new tests in unit testing for lbaas db v2.
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
 Sent: Wednesday, July 23, 2014 3:54 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
 
   Since it looks like the TLS blueprint was approved, I'm sure we're all 
 eager to start coding, so how should we divide up work on the source code? I 
 have pull requests in pyopenssl (https://github.com/pyca/pyopenssl/pull/143) 
 and a few one-liners in pyca/cryptography to expose the needed low-level calls, 
 which I'm hoping will be added pretty soon so that PR 143's tests can pass. In 
 case they aren't, we will fall back to using the pyasn1_modules package, as it 
 already has a means to fetch what we want at a lower level. 
 I'm just hoping that we can split the work up so that we can collaborate 
 together on this without over-serializing the work, where people become 
 dependent on waiting for someone else to complete their work or, worse, one 
 person ends up doing all the work.
 
   
  Carlos D. Garza ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Dan Smith
 Hmm, that is not right. There's no intention to remove the vif_driver
 parameter itself. We were supposed to merely deprecate the various
 legacy VIF driver implementations in Nova, not remove the ability
 to use 3rd party ones.

I'm pretty sure it was deprecated specifically for that reason. Once we
stopped having the need to provide that as a way to control which
implementation was used, we (IIRC) marked it as deprecated with the
intention of removing it. We've been on a path to remove as many of the
"provide your own class here" plugin points as possible in recent cycles.

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 09:53:55AM -0700, Dan Smith wrote:
  Hmm, that is not right. There's no intention to remove the vif_driver
  parameter itself. We were supposed to merely deprecate the various
  legacy VIF driver implementations in Nova, not remove the ability
  to use 3rd party ones.
 
 I'm pretty sure it was deprecated specifically for that reason. Once we
 stopped having the need to provide that as a way to control which
 implementation was used, we (IIRC) marked it as deprecated with the
 intention of removing it. We've been on a path to remove as many of the
 provide your own class here plugin points as possible in recent cycles.

I don't see an issue with allowing people to configure 3rd party impl
for the VIF driver, provided we don't claim that the VIF driver API
contract is stable, same way we don't claim virt driver API is stable.
It lets users have a solution to enable custom NIC functionality while
waiting for Nova to officially support it. If we did remove it, then
users could still subclass the main libvirt driver class and make
it use their custom VIF driver, so they'd get to the same place just
with an extra inconvenient hoop to jump through. So is it worth removing
vif_driver ?

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Doug Wiegley
Do we want any driver interface changes for this?  At one level, with the
current interface, conforming drivers could just reference
listener.sni_containers, with no changes.  But, do we want something in
place so that the API can return an unsupported error for non-TLS v2
drivers?  Or must all v2 drivers support TLS?

doug



On 7/23/14, 10:54 AM, Evgeny Fedoruk evge...@radware.com wrote:

My code is here:
https://review.openstack.org/#/c/109035/1



-Original Message-
From: Evgeny Fedoruk
Sent: Wednesday, July 23, 2014 6:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - work
division

Hi Carlos,

As I understand you are working on common module for Barbican
interactions.
I will commit my code later today and I will appreciate if you and
anybody else  who is interested will review this change.
There is one specific spot for the common Barbican interactions module
API integration.
After the IRC meeting tomorrow, we can discuss the work items and decide
who is interested/available to do them.
Does it make sense?

Thanks,
Evg

-Original Message-
From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
Sent: Wednesday, July 23, 2014 6:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work
division

Do you have any idea as to how we can split up the work?

On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk evge...@radware.com
 wrote:

 Hi,
 
 I'm working on TLS integration with loadbalancer v2 extension and db.
 Basing on Brandon's  patches https://review.openstack.org/#/c/105609 ,
https://review.openstack.org/#/c/105331/  ,
https://review.openstack.org/#/c/105610/
 I will abandon previous 2 patches for TLS which are
https://review.openstack.org/#/c/74031/ and
https://review.openstack.org/#/c/102837/
 Managing to submit my change later today. It will include lbaas
extension v2 modification, lbaas db v2 modifications, alembic migration
for schema changes and new tests in unit testing for lbaas db v2.
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
 Sent: Wednesday, July 23, 2014 3:54 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
 
  Since it looks like the TLS blueprint was approved I''m sure were all
eager to start coded so how should we divide up work on the source code.
I have Pull requests in pyopenssl
https://github.com/pyca/pyopenssl/pull/143;. and a few one liners in
pica/cryptography to expose the needed low-level that I'm hoping will be
added pretty soon to that PR 143 test's can pass. Incase it doesn't we
will fall back to using the pyasn1_modules as it already also has a
means to fetch what we want at a lower level.
 I'm just hoping that we can split the work up so that we can
collaborate together on this with out over serializing the work were
people become dependent on waiting for some one else to complete their
work or worse one person ending up doing all the work.
 
 
   Carlos D. Garza ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday July 24th at 22:00 UTC

2014-07-23 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, July 24th at 22:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 22:00 UTC is in other timezones tomorrow's
meeting will be at:

18:00 EDT
07:00 JST
07:30 ACST
0:00 CEST
17:00 CDT
15:00 PDT

-Matt Treinish


pgpl6Qhb3DAC_.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] [Glance] Image tagging

2014-07-23 Thread Tripp, Travis S
Thank you Serg,

Yes, what you are discussing in this thread is actually directly related to 
many of the original reasons we worked on the Graffiti concept POC and then 
revised into the metadata definitions catalog we are working on for Glance.
Basically, you can define objects and properties that you care about in the 
definitions catalog and then use the UI to apply metadata to things like 
images. The UI of course is pulling from a REST API, so this isn’t limited to 
UI use only.  The catalog ensures consistency of applying the metadata so that 
the metadata is useable for users as well as tool automation.  We’ve got 
multiple sets of code in progress which I’ve highlighted below and we have a 
session at the Glance mini-summit this week to talk about it further.

The below are work in progress, but you probably would be able to fetch the 
horizon ones to get an idea of where things currently are.

Glance Metadata Definitions Catalog: https://review.openstack.org/#/c/105904/
Python Glance Client support: https://review.openstack.org/#/c/105231/
Horizon Metadata Tagging Widget: https://review.openstack.org/#/c/104956/
Horizon Admin UI: https://review.openstack.org/#/c/104063/

For Juno, we’ve scaled back some of our original Graffiti concepts (which 
included inheritance, elastic search, etc) to help get things landed in this 
iteration, but then we want to build out from there and would love to work with 
you to help this meet your needs.

Thanks,
Travis

From: Serg Melikyan [mailto:smelik...@mirantis.com]
Sent: Wednesday, July 23, 2014 9:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Murano] Image tagging
Importance: High

I would also suggest looking at the Graffiti project 
(https://wiki.openstack.org/wiki/Graffiti). I think Graffiti is designed to 
solve problems similar to ours with images, however I don't know how well it 
fits for us. They are working very hard to make the project functionality 
available as part of Glance (https://review.openstack.org/98554).

If it really can solve our problem, we can design a solution that exposes 
functionality compatible in capabilities with Graffiti and have a limited 
short-term implementation that can eventually be replaced by Glance [with the 
Metadata Definitions Catalog feature].

On Wed, Jul 23, 2014 at 1:52 AM, Stan Lagun 
sla...@mirantis.commailto:sla...@mirantis.com wrote:
How do you like this alternate design: users can choose any image they want (say 
any Linux), but the JSON in the image tag has enough information on what 
applications are installed on that image - and not just whether they are 
installed, but the exact state in which the installation was frozen (say, 
binaries are deployed but config files still need to be modified). The 
deployment workflow can pick that state up from the image tag and continue 
right from the place where it was stopped last time. So if the user has chosen 
an image with MySQL preinstalled, the workflow will just post-configure it, 
while if the user has chosen a clean Linux image it will do the whole 
deployment from scratch. Thus it becomes only a matter of optimization, and the 
user will still be able to share an instance between several applications (a 
good example is a Firewall app) or deploy their app even if there is no image 
where it was pre-built.

Those are only my thoughts and this needs a proper design. For now I agree that 
we need to improve tagging to support your use case. But this needs to be done 
in a way that both users and machines can work with. The UI at least needs to 
distinguish between Linux and Windows, while for users free-form tagging may 
be appropriate. Both can be stored in a single JSON tag.
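
For instance, a single JSON tag could look something like this (purely
illustrative):

    {
        "os": "linux",
        "applications": [
            {"name": "mysql", "state": "binaries-deployed"}
        ],
        "user_tags": ["database"]
    }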

So let's create a blueprint/etherpad for this and both think on the exact format 
that can be implemented right now.

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

On Tue, Jul 22, 2014 at 10:08 PM, McLellan, Steven 
steve.mclel...@hp.commailto:steve.mclel...@hp.com wrote:
Thanks for the response.

Primarily I’m thinking about a situation where I have an image that has a 
specific piece of software installed (let’s say MySQL for the sake of 
argument). My application (which configures mysql) requires a glance image that 
has MySQL pre-installed, and doesn’t particularly care what OS (though again 
for the sake of argument assume it’s linux of some kind, so that configuration 
files are expected to be in the same place regardless of OS).

Currently we have a list of three hardcoded values in the UI, and none of them 
apply properly. I’m suggesting instead of that list, we allow free-form text; 
if you’re tagging glance images, you are expected to know what applications 
will be looking for. This still leaves a problem in that I can upload a package 
but I don’t necessarily have the ability to mark any images as valid for it, 
but I think that can be a later evolution; for now, I’m focusing on the 
situation where an admin is both uploading glance images and murano packages.

As a slight side note, we do have 

[openstack-dev] [TripleO][Tuskar] REST API spec for Juno questions

2014-07-23 Thread Petr Blaho
Hi all,

I am working on API endpoints for Tuskar according to
https://github.com/openstack/tripleo-specs/blob/master/specs/juno/tripleo-juno-tuskar-rest-api.rst
and I found some inconsistencies.

In the following lines I will present what I think are mistakes or things I do 
not understand well. Please correct me if I am wrong; then I am OK to write a 
patch for that spec.

1) UUID vs. id.
I can see usage of UUIDs in urls
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L107)
and UUID is referenced in condition for 404 HTTP status
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L125).
On the other hand we have id in returned json for plan
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L148).
The same applies for roles and its UUIDs or ids.
The problem I am pointing at is not in the format of the value but in its name.
I am convinced that these should be consistent and we should use UUIDs.

2) Request Data when adding role to plan.
According to
https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L376
there should be name and version of the role but json example has only
id value
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L382-L384).
I understand that the JSON code is just an example, but I was confused
by the differences between the words describing the data and the example.
I can see from json representation of roles list
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L508-L527)
that role can be identified both by UUID/id and combination of
name+version.
From the spec for DELETE role from plan 
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L405)
I can tell that we will probably be using the name+version identifier to
know which role to add to the plan, so the example mentioned above is just
missing the name and version attributes.
Am I correct about this?
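
For example, I would expect the request body in that example to look
something like this (values are illustrative):

    {
        "name": "compute",
        "version": 1
    }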

3) /v2/clouds in href for plan
This is probably a remnant from previous versions of the spec. We have
/v2/clouds where we probably should have /v2/plans
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L182).

4) Links to roles from plan json
We have a link for each role in plan that points to url like
/v2/roles/:role_uuid
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L158).
But we do not have an API endpoint returning a single role.
We should either remove these links to a single role or add a GET
/v2/roles/:role_uuid endpoint and add this kind of link to the list of
roles too.

I proposed solutions to points 1, 2 and 3 in
https://review.openstack.org/#/c/109040/.

Thanks for reading this.
I am looking for your input.
-- 
Petr Blaho, pbl...@redhat.com
Software Engineer

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Dan Smith
 I don't see an issue with allowing people to configure 3rd party impl
 for the VIF driver, provided we don't claim that the VIF driver API
 contract is stable, same way we don't claim virt driver API is stable.
 It lets users have a solution to enable custom NIC functionality while
 waiting for Nova to officially support it. If we did remove it, then
 users could still subclass the main libvirt driver class and make
 it use their custom VIF driver, so they'd get to the same place just
 with an extra inconvenient hoop to jump through. So is it worth removing
 vif_driver ?

In my opinion, we should (continue to) remove any of those plug points
that we don't want to actually support as plugin interfaces. The virt
driver plug point at least serves to allow us to develop and test
drivers outside of the tree (ironic and docker, for example) before
merging. The vif_driver (and others) imply that it's a plugin interface,
when we have no intention of making it one, and I think we should nuke them.

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Neutron integration test job

2014-07-23 Thread Kyle Mestery
On Wed, Jul 23, 2014 at 7:28 AM, Denis Makogon dmako...@mirantis.com wrote:
 Hello, Stackers.



 For those of you who are interested in Trove, just letting you know that Trove
 can now work with Neutron (hooray!!) instead of Nova-network; see [1] and [2].
 It’s a huge step forward on the road to advanced OpenStack integration.

 But let’s admit it’s not the end; we should deal with:

 Add Neutron-based configuration for DevStack to let folks try it (see [3]).

I have some comments on this patch which I've posted in the review.

 Implementing/providing a new type of testing job that will run all Trove tests
 with Neutron enabled on a regular basis, to verify that all our networking
 preparations for instances are fine.


 The last thing is the most interesting. And I’d like to discuss it with all
 of you, folks.
 So, I’ve written an initial job template taking into account the specific
 configuration required by DevStack and Trove-integration (see [4]), and I’d
 like to receive as much feedback as possible, as soon as possible.

This is great! I'd like to see this work land as well, thanks for
taking this on. I'll add this to my backlog of items to review and
provide some feedback as well.

Thanks,
Kyle



 [1] - Trove.
 https://github.com/openstack/trove/commit/c68fef2b7a61f297b9fe7764dd430eefd4d4a767

 [2] - Trove integration.
 https://github.com/openstack/trove-integration/commit/9f42f5c9b1a0d8844b3e527bcf2eb9474485d23a

 [3] - DevStack patchset. https://review.openstack.org/108966

 [4] - POC. https://gist.github.com/denismakogon/76d9bd3181781097c39b



 Best regards,

 Denis Makogon



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-23 Thread gordon chung
 I left a comment on one of the commits, but in general here are my thoughts:
 1) I would prefer not to do things like switch to oslo.i18n outside of 
 Gerrit.  I realize we don't have a specific existing policy for this, but 
 doing that significant 
 work outside of Gerrit is not desirable IMHO.  It needs to happen either 
 before graduation or after import into Gerrit.
 2) I definitely don't want to be accepting enable [hacking check] changes 
 outside Gerrit.  The github graduation step is _just_ to get the code in 
 shape so it 
 can be imported with the tests passing.  It's perfectly acceptable to me to 
 just ignore any hacking checks during this step and fix them in Gerrit where, 
 again, 
 the changes can be reviewed.
 At a glance I don't see any problems with the changes that have been made, 
 but I haven't looked that closely and I think it brings up some topics for 
 clarification in the graduation process.


i'm ok to revert if there are concerns. i just vaguely remember a reference in 
another oslo lib about waiting for i18n graduation but tbh i didn't actually 
check back to see what the conclusion was.

cheers,
gord  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 10:09:37AM -0700, Dan Smith wrote:
  I don't see an issue with allowing people to configure 3rd party impl
  for the VIF driver, provided we don't claim that the VIF driver API
  contract is stable, same way we don't claim virt driver API is stable.
  It lets users have a solution to enable custom NIC functionality while
  waiting for Nova to officially support it. If we did remove it, then
  users could still subclass the main libvirt driver class and make
  it use their custom VIF driver, so they'd get to the same place just
  with an extra inconvenient hoop to jump through. So is it worth removing
  vif_driver ?
 
 In my opinion, we should (continue to) remove any of those plug points
 that we don't want to actually support as plugin interfaces. The virt
 driver plug point at least serves to allow us to develop and test
 drivers outside of the tree (ironic and docker, for example) before
 merging. The vif_driver (and others) imply that it's a plugin interface,
 when we have no intention of making it one, and I think we should nuke them.

If we're going to do that, then we should be consistent. eg there is a
volume_drivers parameter that serves the same purpose as vif_driver

What is our story for people who are developing new network or storage
drivers for Neutron / Cinder and wish to test Nova ? Removing vif_driver
and volume_drivers config parameters would mean that they would have to
directly modify the existing Nova libvirt vif.py/volume.py codefiles.

This isn't necessarily bad because they'll have to do this anyway if
they want to actually submit it to Nova.

It is, however, notably different from what they can do today where they
can drop in a impl for their new Neutron/Cinder driver without having to
modify any existing Nova code directly. This could be a pain if they wish
to provide the custom driver to users/customers of the previous stable
Nova release while waiting for official support in next Nova release. It
sounds like you're explicitly saying we don't want to support that use
case though.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Support for Django 1.7 in OpenStack

2014-07-23 Thread Thomas Goirand
On 07/23/2014 10:46 PM, Lyle, David wrote:
 Django 1.7 drops support for python 2.6 [1], so until OpenStack drops
 support for 2.6 which is slated for Kilo, Horizon is unfortunately capped
 at < 1.7.
 
 David
 
 [1] 
 https://docs.djangoproject.com/en/dev/releases/1.7/#python-compatibility

Having the gate put a cap on Django < 1.7 doesn't mean that nobody
can write patches to support it. Just that it's going to be more
difficult to test.

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Mentor program?

2014-07-23 Thread Joshua Harlow
Hi all,

I was reading over an IMHO insightful hacker news thread last night:

https://news.ycombinator.com/item?id=8068547

Labeled/titled: 'I made a patch for Mozilla, and you can do it too'

It made me wonder what kind of mentoring support we as a community are offering 
to newbies (a random google search for 'openstack mentoring' shows mentors for 
GSoC, mentors for interns, outreach for women... but no mention of mentors as a 
way for everyone to get involved)?

Looking at the comments in that hacker news thread and the article itself, it 
seems like mentoring is stressed over and over as the way to get involved.

Have there been ongoing efforts to establish such a program? (I know there is 
training work that has been worked on, but that's not exactly the same.)

Thoughts, comments...?

-Josh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Dan Smith
 If we're going to do that, then we should be consistent. eg there is
 a volume_drivers parameter that serves the same purpose as
 vif_driver

There are lots of them. We've had a bit of a background task running to
remove them when possible/convenient and try to avoid adding new ones.
I'm not opposed to aggressively removing them for sure, but it wouldn't
be super high on my priority list. However, I definitely don't want to
slide backwards when we have one already marked for removal :)

 What is our story for people who are developing new network or
 storage drivers for Neutron / Cinder and wish to test Nova ? Removing
 vif_driver and volume_drivers config parameters would mean that they
 would have to directly modify the existing Nova libvirt
 vif.py/volume.py codefiles.
 
 This isn't neccessarily bad because they'll have to do this anyway
 if they want to actually submit it to Nova.

I don't think there's any reason not to do that in nova itself, is
there? Virt drivers are large, so maybe making an exception for that
plug point makes sense purely for our own test efforts. However, for
something smaller like you mention, I don't see why we need to keep
them, especially given what it advertises (IMHO) to people.

 This could be a pain if they wish to provide the custom driver to
 users/customers of the previous stable Nova release while waiting for
 official support in next Nova release. It sounds like you're
 explicitly saying we don't want to support that use case though.

I can't really speak for "we", but certainly _I_ don't want to support
that model. I think it leads to people thinking they can develop drivers
for things like this out of tree permanently, which I'd really like to
avoid.

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Spec freeze exception] Controlled shutdown of GuestOS

2014-07-23 Thread Day, Phil
Hi Folks,

I'd like to propose the following as an exception to the spec freeze, on the 
basis that it addresses a potential data corruption issue in the Guest.

https://review.openstack.org/#/c/89650

We were pretty close to getting acceptance on this before, apart from a debate 
over whether one additional config value could be allowed to be set via image 
metadata - so I've given in for now on wanting that feature from a deployer 
perspective, and said that we'll hard code it as requested.

Initial parts of the implementation are here:
https://review.openstack.org/#/c/68942/
https://review.openstack.org/#/c/99916/


Phil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mentor program?

2014-07-23 Thread Cindy Pallares

On 07/23/2014 01:02 PM, Anne Gentle wrote:
 On Wed, Jul 23, 2014 at 12:29 PM, Joshua Harlow harlo...@outlook.com
 wrote:

 Hi all,

 I was reading over a IMHO insightful hacker news thread last night:

 https://news.ycombinator.com/item?id=8068547

 Labeled/titled: 'I made a patch for Mozilla, and you can do it too'

 It made me wonder what kind of mentoring support are we as a community
 offering to newbies (a random google search for 'openstack mentoring' shows
 mentors for GSoC, mentors for interns, outreach for women... but no mention
 of mentors as a way for everyone to get involved)?

 Looking at the comments in that hacker news thread, the article itself it
 seems like mentoring is stressed over and over as the way to get involved.

 Has there been ongoing efforts to establish such a program (I know there
 is training work that has been worked on, but that's not exactly the same).

 Thoughts, comments...?

 I'll let Stefano answer further, but yes, we've discussed a centralized
 mentoring program for a year or so. I'm not sure we have enough mentors
 available, there are certainly plenty of people seeking and needing
 mentoring. So he can elaborate more on our current thinking of how we'd
 overcome the imbalance and get more centralized coordination in this area.

 Thanks,
 Anne

Mozilla also has a mentored bugs system, which provides a mentor who
commits to helping a newbie get a single bug fixed. It would be nice to
have that in OpenStack. It would also be a great way for people to get
their feet wet in mentoring, or for those who don't want to commit
themselves too much.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][Spec Freeze Exception]Support dpdkvhost in ovs vif bindings

2014-07-23 Thread Mooney, Sean K
Hi
The third iteration of the specs is now available for review at the links below:

https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost

https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost
Thanks for the feedback given so far.
Hopefully the current iteration addresses the issues raised.

Regards
Sean.


From: Czesnowicz, Przemyslaw
Sent: Friday, July 18, 2014 1:03 PM
To: openstack-dev@lists.openstack.org
Cc: Mooney, Sean K; Hoban, Adrian
Subject: [openstack-dev][nova][Spec Freeze Exception]Support dpdkvhost in ovs 
vif bindings

Hi Nova Cores,

I would like to ask for spec approval deadline exception for:
https://review.openstack.org/#/c/95805/2

This feature allows using DPDK-enabled Open vSwitch with OpenStack.
This is an important feature for NFV workloads that require high performance 
network I/O.

If the spec is approved, implementation should be straight forward and should 
not disrupt any other work happening in Nova.


Thanks,
Przemek


--
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mentor program?

2014-07-23 Thread Anne Gentle
On Wed, Jul 23, 2014 at 12:29 PM, Joshua Harlow harlo...@outlook.com
wrote:

 Hi all,

 I was reading over a IMHO insightful hacker news thread last night:

 https://news.ycombinator.com/item?id=8068547

 Labeled/titled: 'I made a patch for Mozilla, and you can do it too'

 It made me wonder what kind of mentoring support are we as a community
 offering to newbies (a random google search for 'openstack mentoring' shows
 mentors for GSoC, mentors for interns, outreach for women... but no mention
 of mentors as a way for everyone to get involved)?

 Looking at the comments in that hacker news thread, the article itself it
 seems like mentoring is stressed over and over as the way to get involved.

 Has there been ongoing efforts to establish such a program (I know there
 is training work that has been worked on, but that's not exactly the same).

 Thoughts, comments...?


I'll let Stefano answer further, but yes, we've discussed a centralized
mentoring program for a year or so. I'm not sure we have enough mentors
available, there are certainly plenty of people seeking and needing
mentoring. So he can elaborate more on our current thinking of how we'd
overcome the imbalance and get more centralized coordination in this area.

Thanks,
Anne



 -Josh
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Brandon Logan
@Evgeny: Did you intend on adding another patchset in the reviews I've
been working on? If so I don't really see any changes, so if there are
some changes you needed in there let me know.

@Doug: I think if the drivers see the TERMINATED_HTTPS protocol then
they can throw an exception.  I don't think a driver interface change is
needed.
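
As a rough illustration, a non-TLS v2 driver could do something like the
following (the constant, method and exception used here are made up for the
example, not an agreed interface):

    PROTOCOL_TERMINATED_HTTPS = 'TERMINATED_HTTPS'

    class NoTlsDriver(object):
        def create_listener(self, context, listener):
            if listener.protocol == PROTOCOL_TERMINATED_HTTPS:
                # Reject the new protocol without any change to the
                # driver interface itself.
                raise NotImplementedError(
                    'TLS termination is not supported by this driver')
            # ... normal listener provisioning continues here ...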

Thanks,
Brandon


On Wed, 2014-07-23 at 17:02 +, Doug Wiegley wrote:
 Do we want any driver interface changes for this?  At one level, with the
 current interface, conforming drivers could just reference
 listener.sni_containers, with no changes.  But, do we want something in
 place so that the API can return an unsupported error for non-TLS v2
 drivers?  Or must all v2 drivers support TLS?
 
 doug
 
 
 
 On 7/23/14, 10:54 AM, Evgeny Fedoruk evge...@radware.com wrote:
 
 My code is here:
 https://review.openstack.org/#/c/109035/1
 
 
 
 -Original Message-
 From: Evgeny Fedoruk
 Sent: Wednesday, July 23, 2014 6:54 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - work
 division
 
 Hi Carlos,
 
 As I understand you are working on common module for Barbican
 interactions.
 I will commit my code later today and I will appreciate if you and
 anybody else  who is interested will review this change.
 There is one specific spot for the common Barbican interactions module
 API integration.
 After the IRC meeting tomorrow, we can discuss the work items and decide
 who is interested/available to do them.
 Does it make sense?
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
 Sent: Wednesday, July 23, 2014 6:15 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work
 division
 
 Do you have any idea as to how we can split up the work?
 
 On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk evge...@radware.com
  wrote:
 
  Hi,
  
  I'm working on TLS integration with loadbalancer v2 extension and db.
  Basing on Brandon's  patches https://review.openstack.org/#/c/105609 ,
 https://review.openstack.org/#/c/105331/  ,
 https://review.openstack.org/#/c/105610/
  I will abandon previous 2 patches for TLS which are
 https://review.openstack.org/#/c/74031/ and
 https://review.openstack.org/#/c/102837/
  Managing to submit my change later today. It will include lbaas
 extension v2 modification, lbaas db v2 modifications, alembic migration
 for schema changes and new tests in unit testing for lbaas db v2.
  
  Thanks,
  Evg
  
  -Original Message-
  From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
  Sent: Wednesday, July 23, 2014 3:54 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
  
 Since it looks like the TLS blueprint was approved I''m sure were all
 eager to start coded so how should we divide up work on the source code.
 I have Pull requests in pyopenssl
 https://github.com/pyca/pyopenssl/pull/143;. and a few one liners in
 pica/cryptography to expose the needed low-level that I'm hoping will be
 added pretty soon to that PR 143 test's can pass. Incase it doesn't we
 will fall back to using the pyasn1_modules as it already also has a
 means to fetch what we want at a lower level.
  I'm just hoping that we can split the work up so that we can
 collaborate together on this with out over serializing the work were
 people become dependent on waiting for some one else to complete their
 work or worse one person ending up doing all the work.
  
  
Carlos D. Garza ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Spec freeze exception] ml2-use-dpdkvhost

2014-07-23 Thread Mooney, Sean K
Hi kyle

Thanks for your provisional support.
I would agree that unless the nova spec is also granted an exception, both specs 
should be moved to Kilo.
To Kilo.

I have now uploaded the most recent version of the specs.
They are available to review here:
https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost
https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost

regards
sean


-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com] 
Sent: Tuesday, July 22, 2014 2:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [Spec freeze exception] ml2-use-dpdkvhost

On Mon, Jul 21, 2014 at 10:04 AM, Mooney, Sean K sean.k.moo...@intel.com 
wrote:
 Hi

 I would like to propose
 https://review.openstack.org/#/c/107797/1/specs/juno/ml2-use-dpdkvhost
 .rst
 for a spec freeze exception.



 https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost



 This blueprint adds support for the Intel(R) DPDK Userspace vHost

 port binding to the Open Vswitch and Open Daylight ML2 Mechanism Drivers.

In general, I'd be ok with approving an exception for this BP.
However, please see below.



 This blueprint enables nova changes tracked by the following spec:

 https://review.openstack.org/#/c/95805/1/specs/juno/libvirt-ovs-use-us
 vhost.rst

This BP appears to also require an exception from the Nova team. I think these 
both require exceptions for this work to have a shot at landing in Juno. Given 
this, I'm actually leaning to move this to Kilo. But if you can get a Nova 
freeze exception, I'd consider the same for the Neutron BP.

Thanks,
Kyle



 regards

 sean

 --
 Intel Shannon Limited
 Registered in Ireland
 Registered Office: Collinstown Industrial Park, Leixlip, County 
 Kildare Registered Number: 308263 Business address: Dromore House, 
 East Park, Shannon, Co. Clare

 This e-mail and any attachments may contain confidential material for 
 the sole use of the intended recipient(s). Any review or distribution 
 by others is strictly prohibited. If you are not the intended 
 recipient, please contact the sender and delete all copies.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Spec freeze exception] Controlled shutdown of GuestOS

2014-07-23 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 06:08:52PM +, Day, Phil wrote:
 Hi Folks,
 
 I'd like to propose the following as an exception to the spec freeze, on the 
 basis that it addresses a potential data corruption issues in the Guest.
 
 https://review.openstack.org/#/c/89650
 
 We were pretty close to getting acceptance on this before, apart from a 
 debate over whether one additional config value could be allowed to be set 
 via image metadata - so I've given in for now on wanting that feature from a 
 deployer perspective, and said that we'll hard code it as requested.
 
 Initial parts of the implementation are here:
 https://review.openstack.org/#/c/68942/
 https://review.openstack.org/#/c/99916/

Per my comments already, I think this is important for Juno and will
sponsor it.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Doug Wiegley
@Doug: I think if the drivers see the TERMINATED_HTTPS protocol then
they can throw an exception.  I don't think a driver interface change is
needed.

They'd have to know to throw it, which could be problematic. But a
completely new protocol will probably result in some kind of exception, so
it's probably sufficient.

doug



On 7/23/14, 12:08 PM, Brandon Logan brandon.lo...@rackspace.com wrote:

@Evgeny: Did you intend on adding another patchset in the reviews I've
been working on? If so I don't really see any changes, so if they're are
some changes you needed in there let me know.

@Doug: I think if the drivers see the TERMINATED_HTTPS protocol then
they can throw an exception.  I don't think a driver interface change is
needed.

Thanks,
Brandon


On Wed, 2014-07-23 at 17:02 +, Doug Wiegley wrote:
 Do we want any driver interface changes for this?  At one level, with
the
 current interface, conforming drivers could just reference
 listener.sni_containers, with no changes.  But, do we want something in
 place so that the API can return an unsupported error for non-TLS v2
 drivers?  Or must all v2 drivers support TLS?
 
 doug
 
 
 
 On 7/23/14, 10:54 AM, Evgeny Fedoruk evge...@radware.com wrote:
 
 My code is here:
 https://review.openstack.org/#/c/109035/1
 
 
 
 -Original Message-
 From: Evgeny Fedoruk
 Sent: Wednesday, July 23, 2014 6:54 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - work
 division
 
 Hi Carlos,
 
 As I understand you are working on common module for Barbican
 interactions.
 I will commit my code later today and I will appreciate if you and
 anybody else  who is interested will review this change.
 There is one specific spot for the common Barbican interactions module
 API integration.
 After the IRC meeting tomorrow, we can discuss the work items and
decide
 who is interested/available to do them.
 Does it make sense?
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
 Sent: Wednesday, July 23, 2014 6:15 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work
 division
 
 Do you have any idea as to how we can split up the work?
 
 On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk evge...@radware.com
  wrote:
 
  Hi,
  
  I'm working on TLS integration with loadbalancer v2 extension and db.
   Based on Brandon's patches https://review.openstack.org/#/c/105609
,
 https://review.openstack.org/#/c/105331/  ,
 https://review.openstack.org/#/c/105610/
  I will abandon previous 2 patches for TLS which are
 https://review.openstack.org/#/c/74031/ and
 https://review.openstack.org/#/c/102837/
   I am aiming to submit my change later today. It will include lbaas
 extension v2 modification, lbaas db v2 modifications, alembic
migration
 for schema changes and new tests in unit testing for lbaas db v2.
  
  Thanks,
  Evg
  
  -Original Message-
  From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
  Sent: Wednesday, July 23, 2014 3:54 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work
division
  
  Since it looks like the TLS blueprint was approved, I'm sure we're all
 eager to start coding, so how should we divide up work on the source code?
 I have pull requests in pyopenssl
 https://github.com/pyca/pyopenssl/pull/143 and a few one-liners in
 pyca/cryptography to expose the needed low-level calls, which I'm hoping
 will be added pretty soon so that PR 143's tests can pass. In case it
 doesn't, we will fall back to using the pyasn1_modules package, as it
 already has a means to fetch what we want at a lower level.
  I'm just hoping that we can split the work up so that we can
 collaborate together on this without over-serializing the work, where
 people become dependent on waiting for someone else to complete their
 work or, worse, one person ends up doing all the work.
  
   
  Carlos D. Garza
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 

Re: [openstack-dev] Mentor program?

2014-07-23 Thread Tim Freund

On 07/23/2014 02:16 PM, Cindy Pallares wrote:


On 07/23/2014 01:02 PM, Anne Gentle wrote:

On Wed, Jul 23, 2014 at 12:29 PM, Joshua Harlow harlo...@outlook.com
wrote:


Hi all,

I was reading over an (IMHO) insightful Hacker News thread last night:

https://news.ycombinator.com/item?id=8068547

Labeled/titled: 'I made a patch for Mozilla, and you can do it too'

It made me wonder what kind of mentoring support are we as a community
offering to newbies (a random google search for 'openstack mentoring' shows
mentors for GSoC, mentors for interns, outreach for women... but no mention
of mentors as a way for everyone to get involved)?

Looking at the comments in that hacker news thread, the article itself it
seems like mentoring is stressed over and over as the way to get involved.

Have there been ongoing efforts to establish such a program? (I know there
is training work that has been done, but that's not exactly the same.)

Thoughts, comments...?


I'll let Stefano answer further, but yes, we've discussed a centralized
mentoring program for a year or so. I'm not sure we have enough mentors
available, there are certainly plenty of people seeking and needing
mentoring. So he can elaborate more on our current thinking of how we'd
overcome the imbalance and get more centralized coordination in this area.

Thanks,
Anne


Mozilla also has a mentored-bugs system, which provides a mentor who
commits to helping a newbie get a single bug fixed. It would be nice to
have that in OpenStack. It would also be a great way for people to get
their feet wet in mentoring, or for those who don't want to commit
themselves too much.



I was a student in the OpenStack Upstream Training that took place 
before the Atlanta Summit.  The training was great, but the weekly 
mentoring afterward really made the experience worthwhile.  Students 
selected bugs before the class, learned about the contribution process 
during the class, and then met weekly with a mentor until their 
contribution was merged.


Thanks,

Tim


--
Tim Freund
913-207-0983 | @timfreund
http://tim.freunds.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] overuse of 'except Exception'

2014-07-23 Thread Doug Hellmann

On Jul 23, 2014, at 7:13 AM, Chris Dent chd...@redhat.com wrote:

 
 I was having a bit of a browse through the ceilometer code and
 noticed there are a fair few instances (sixty-some) of
 `except Exception` scattered about.
 
 While not as evil as a bare except, my Python elders always pointed
 out that doing `except Exception` is a bit like using a sledgehammer
 where something more akin to a gavel is what's wanted. The error
 condition is obliterated but there's no judgement on what happened
 and no apparent effort by the developer to effectively handle
 discrete cases.
 
 A common idiom appears as:
 
except Exception:
LOG.exception(_('something failed'))
return
# or continue
 
 There's no information here about what failed or why.

LOG.exception() logs the full traceback, with the argument as a bit of context.

 
 That's bad enough, but much worse, this will catch all sorts of
 exceptions, even ones that are completely unexpected and ought to
 cause a more drastic (and thus immediately informative) failure
 than 'something failed’.

In most cases, we chose to handle errors this way to keep the service running 
even in the face of “bad” data, since we are trying to collect an audit stream 
and we don’t want to miss good data if we encounter bad data.
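
A middle ground is possible, too. A sketch (not ceilometer's actual
publisher code) that keeps the pipeline alive while still separating
expected bad-data cases from genuinely unexpected failures:

    import logging

    LOG = logging.getLogger(__name__)

    def publish(sample):
        """Placeholder for the real publisher call."""

    def process_samples(samples):
        for sample in samples:
            try:
                publish(sample)
            except (ValueError, KeyError) as err:
                # Expected "bad data": note it and keep consuming.
                LOG.warning('dropping malformed sample %s: %s',
                            sample, err)
            except Exception:
                # Unexpected: keep the service running, but record the
                # full traceback so the failure stays diagnosable.
                LOG.exception('unexpected failure publishing sample %s',
                              sample)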

 
 So, my question: Is this something we who dig around in the ceilometer
 code ought to care about and make an effort to clean up? If so, I'm
 happy to get started.

If you would like to propose some changes for cases where more detailed 
exception handling is appropriate, we could discuss them on a case-by-case 
basis. I don’t think anyone used this exception handling style lightly, and I 
wouldn’t want to change it without due consideration.

Doug

 
 Thanks.
 
 -- 
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Closing registration for the Mid-cycle meetup

2014-07-23 Thread Devananda van der Veen
Hi all,

We have had a few last-minute registrations for the mid-cycle, and are now
up to 20 attendees. I am going to close registration at this point and look
forward to seeing you all on Monday (or Sunday, if you're getting pizza
with me)!

Cheers,
-Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Spec freeze exception] ml2-use-dpdkvhost

2014-07-23 Thread Ian Wells
Speaking as someone who was reviewing both specs, I would personally
recommend you grant both exceptions.  The code changes are very limited in
scope - particularly the Nova one - which makes the code review simple, and
they're highly unlikely to affect anyone who isn't actually using DPDK OVS
(subject to the Neutron tests for its presence being solid), which makes
them low risk.  For even lower risk, we could have a config option to
enable the test for a CUSE-based binding (and yes, I know earlier in the
review everyone was against config items, but specifically what we didn't
want was *two* config items, one in Nova and one in Neutron, that only
worked if they were in agreement; one solely in Neutron would, I think, be
acceptable).

All this subject to Sean getting all the CRs out of his spec, and maybe we
could add a spec test for that, because it's a right pain to have specs
full of CRs if you're trying to diff them online...
-- 
Ian.



On 23 July 2014 11:10, Mooney, Sean K sean.k.moo...@intel.com wrote:

 Hi Kyle

 Thanks for your provisional support.
 I would agree that unless the nova spec is also granted an exception, both
 specs should be moved
 to Kilo.

 I have now uploaded the most recent version of the specs.
 They are available to review here:
 https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost
 https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost

 regards
 sean


 -Original Message-
 From: Kyle Mestery [mailto:mest...@mestery.com]
 Sent: Tuesday, July 22, 2014 2:47 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] [Spec freeze exception]
 ml2-use-dpdkvhost

 On Mon, Jul 21, 2014 at 10:04 AM, Mooney, Sean K sean.k.moo...@intel.com
 wrote:
  Hi
 
  I would like to propose
  https://review.openstack.org/#/c/107797/1/specs/juno/ml2-use-dpdkvhost.rst
  for a spec freeze exception.
 
 
 
  https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost
 
 
 
  This blueprint adds support for the Intel(R) DPDK Userspace vHost
 
  port binding to the Open vSwitch and OpenDaylight ML2 mechanism drivers.
 
 In general, I'd be ok with approving an exception for this BP.
 However, please see below.

 
 
  This blueprint enables nova changes tracked by the following spec:
 
  https://review.openstack.org/#/c/95805/1/specs/juno/libvirt-ovs-use-usvhost.rst
 
 This BP appears to also require an exception from the Nova team. I think
 these both require exceptions for this work to have a shot at landing in
 Juno. Given this, I'm actually leaning to move this to Kilo. But if you can
 get a Nova freeze exception, I'd consider the same for the Neutron BP.

 Thanks,
 Kyle

 
 
  regards
 
  sean
 
  --
  Intel Shannon Limited
  Registered in Ireland
  Registered Office: Collinstown Industrial Park, Leixlip, County
  Kildare Registered Number: 308263 Business address: Dromore House,
  East Park, Shannon, Co. Clare
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 --
 Intel Shannon Limited
 Registered in Ireland
 Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
 Registered Number: 308263
 Business address: Dromore House, East Park, Shannon, Co. Clare




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 10:52:54AM -0700, Dan Smith wrote:
  If we're going to do that, then we should be consistent. eg there is
  a volume_drivers parameter that serves the same purpose as
  vif_driver
 
 There are lots of them. We've had a bit of a background task running to
 remove them when possible/convenient and try to avoid adding new ones.
 I'm not opposed to aggressively removing them for sure, but it wouldn't
 be super high on my priority list. However, I definitely don't want to
 slide backwards when we have one already marked for removal :)
 
  What is our story for people who are developing new network or
  storage drivers for Neutron / Cinder and wish to test Nova ? Removing
  vif_driver and volume_drivers config parameters would mean that they
  would have to directly modify the existing Nova libvirt
  vif.py/volume.py codefiles.
  
  This isn't necessarily bad because they'll have to do this anyway
  if they want to actually submit it to Nova.
 
 I don't think there's any reason not to do that in nova itself, is
 there? Virt drivers are large, so maybe making an exception for that
 plug point makes sense purely for our own test efforts. However, for
 something smaller like you mention, I don't see why we need to keep
 them, especially given what it advertises (IMHO) to people.

The main reason for the plugin points I see is for vendors wishing to
ship custom out of tree extensions to their own customers/users without
sending them upstream, or before they've been released upstream. I don't
have much idea if this is a common thing vendors do though, as opposed
to just patching nova and giving their downstream consumers the entire
nova codebase instead of just a single extension file.

  This could be a pain if they wish to provide the custom driver to
  users/customers of the previous stable Nova release while waiting for
  official support in next Nova release. It sounds like you're
  explicitly saying we don't want to support that use case though.
 
 I can't really speak for we, but certainly _I_ don't want to support
 that model. I think it leads to people thinking they can develop drivers
 for things like this out of tree permanently, which I'd really like to
 avoid.

FWIW, I do actually agree with not exposing plugin points to things
that are not stable APIs and if they didn't already exist, I'd not
approve adding them. I'd actually go further and say not even the
virt driver API should be a plugin point, since we arbitrarily change
it during development any time we need to. The latter is not a serious
or practical view right now though given our out of tree Docker/Ironic
drivers. I'm just concerned that we've had these various extension
points exposed for a long time and we've not clearly articulated
that they are liable to be killed off (besides marking vif_driver
as deprecated)

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Ian Wells
On 23 July 2014 10:52, Dan Smith d...@danplanet.com wrote:

  What is our story for people who are developing new network or
  storage drivers for Neutron / Cinder and wish to test Nova ? Removing
  vif_driver and volume_drivers config parameters would mean that they
  would have to directly modify the existing Nova libvirt
  vif.py/volume.py codefiles.
 
  This isn't necessarily bad because they'll have to do this anyway
  if they want to actually submit it to Nova.

 I don't think there's any reason not to do that in nova itself, is
 there? Virt drivers are large, so maybe making an exception for that
 plug point makes sense purely for our own test efforts. However, for
 something smaller like you mention, I don't see why we need to keep
 them, especially given what it advertises (IMHO) to people.


We should encourage new developers to use a new binding_type, rather than
continue with vif_driver substitution.  Replacing the generic VIF driver
basically loses all the nice binding_type support implemented there, when
what we actually want to do is say 'here is another VIF type, and here is a
binding type value you will see when you should be using it'.  An argument,
I think, for coming up with a mechanism in K that allows that to happen
with a little bit of config that isn't as manky as complete vif_driver
substitution and one that doesn't require nova and neutron config to be
precisely in lockstep (which was always the problem with vif_driver and why
the generic VIF driver was developed originally).  With that in mind I
would, absolutely, agree with deprecating the vif_driver setting.
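
For the avoidance of doubt, the dispatch style described above looks
roughly like this (a sketch; the real generic driver's method names and
supported types differ):

    class GenericVIFDriver(object):
        """Sketch: dispatch on binding type instead of swapping drivers."""

        def plug(self, instance, vif):
            # Neutron's port binding tells us which plug strategy to use.
            handler = getattr(self, 'plug_%s' % vif['type'], None)
            if handler is None:
                raise RuntimeError('unsupported vif_type: %s' % vif['type'])
            handler(instance, vif)

        def plug_ovs(self, instance, vif):
            print('plugging %s into Open vSwitch' % vif['id'])

        def plug_bridge(self, instance, vif):
            print('plugging %s into a Linux bridge' % vif['id'])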

I can't really speak for we, but certainly _I_ don't want to support
 that model. I think it leads to people thinking they can develop drivers
 for things like this out of tree permanently, which I'd really like to
 avoid.


I sympathise that we shouldn't expose any more interfaces to abuse than we
have to - not least because those interfaces then become frozen and hard to
change - but I think you need a stronger argument here.  It is useful to
have out of tree drivers for this stuff while people experiment, and
perhaps also in production systems.  It's clear by the variety of drivers
and their increasing number that we are still experimenting with VIF
plugging possibilities.  There's quite a lot of ways that you can attach a
VIF to a VM, and just because we happen to support a handful doesn't mean
to say we've provided every option.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Dan Smith
 FWIW, I do actually agree with not exposing plugin points to things
 that are not stable APIs and if they didn't already exist, I'd not
 approve adding them. I'd actually go further and say not even the
 virt driver API should be a plugin point, since we arbitrarily change
 it during development any time we need to. The latter is not a serious
 or practical view right now though given our out of tree Docker/Ironic
 drivers. I'm just concerned that we've had these various extension
 points exposed for a long time and we've not clearly articulated
 that they are liable to be killed off (besides marking vif_driver
 as deprecated)

Yep, I think we agree. I think that as a project we've identified
exposing plug points that aren't stable (or intended to be replaceable)
as a bad thing, and thus we should be iterating on removing them.
Especially if we're generous with our deprecate-before-remove rules,
then I think that we're not likely to bite anyone suddenly with
something they're shipping while working it upstream in parallel. I
*really* thought we had called this one out on the ReleaseNotes, but
apparently that didn't happen (probably because we decided to throw in
those helper classes to avoid breaking configs). Going forward, marking
it deprecated in the code for a cycle, noting it on the release notes,
and then removing it the next cycle seems like plenty of warning.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-23 Thread Mike Spreitzer
Doug Wiegley do...@a10networks.com wrote on 07/16/2014 04:58:52 PM:

 You do recall correctly, and there are currently no mechanisms for
 notifying anything outside of the load balancer backend when the health
 monitor/member state changes.

But there *is* a mechanism for some outside thing to query the load 
balancer for the health of a pool member, right?  I am thinking 
specifically of 
http://docs.openstack.org/api/openstack-network/2.0/content/GET_showMember__v2.0_pools__pool_id__members__member_id__lbaas_ext_ops_member.html
 
--- whose response includes a status field for the member.  Is there 
documentation for what values can appear in that field, and what each 
value means?

Supposing we can leverage the pool member status, there remains an issue: 
establishing a link between an OS::Neutron::PoolMember and the 
corresponding scaling group member.  We could conceivably expand the 
scaling group code so that if the member type is a stack then the contents 
of the stack are searched (perhaps recursively) for resources of type 
OS::Neutron::PoolMember, but that is a tad too automatic for my taste.  It 
could pick up irrelevant PoolMembers.  And such a level of implicit 
behavior is outside our normal style of doing things.

We could follow the AWS style, by adding an optional property to the 
scaling group resource types --- where the value of that property can be 
the UUID of an OS::Neutron::LoadBalancer or an OS::Neutron::Pool.  But 
that still does not link up an individual scaling group member with its 
corresponding PoolMember.

Remember that if we are doing this at all, each scaling group member must 
be a stack.  I think the simplest way to solve this would be to define a 
way that such a stack can put in its outputs the ID of the corresponding 
PoolMember.  I would be willing to settle for simply saying that if such a 
stack has an output of type string and name __OS_pool_member then the 
value of that output is taken to be the ID of the corresponding 
PoolMember.  Some people do not like reserved names; if that must be 
avoided then we can expand the schema language with a way to identify 
which stack output carries the PoolMember ID.  Another alternative would 
be to add an optional scaling group property to carry the name of the 
stack output in question.

 There is also currently no way for an external system to inject health
 information about an LB or its members.

I do not know that the injection has to be to the LB; in AWS the injection 
is to the scaling group.  That would be acceptable to me too.

Thoughts?

Thanks,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-23 Thread Doug Wiegley
 But there *is* a mechanism for some outside thing to query the load balancer 
 for the health of a pool member, right?  I am thinking specifically of 
 http://docs.openstack.org/api/openstack-network/2.0/content/GET_showMember__v2.0_pools__pool_id__members__member_id__lbaas_ext_ops_member.html
  --- whose response includes a status field for the member.  Is there 
 documentation for what values can appear in that field, and what each value 
 means?

The state of the world today: ‘status’ in the neutron database is 
configuration/provisioning status, not operational status.  Neutron-wide thing. 
 We were discussing adding operational status fields (or a neutron REST call to 
get the info from the backend) last month, but it’s something that isn’t 
planned for a serious conversation until Kilo, at present.

The current possible lbaas values (from neutron/plugins/common/constants.py):

# Service operation status constants
ACTIVE = "ACTIVE"
DOWN = "DOWN"
PENDING_CREATE = "PENDING_CREATE"
PENDING_UPDATE = "PENDING_UPDATE"
PENDING_DELETE = "PENDING_DELETE"
INACTIVE = "INACTIVE"
ERROR = "ERROR"

… It does look like you can make a stats() call for some backends and get 
limited operational information, but it will not be uniform, nor universally 
supported.
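
Concretely, the best an external poller can do today is read that 
provisioning status back, e.g. via python-neutronclient (a sketch):

    from neutronclient.v2_0 import client as neutron_client

    def member_status(neutron, member_id):
        # Reflects configuration/provisioning state, *not* operational
        # health -- see the caveat above.
        member = neutron.show_member(member_id)['member']
        return member.get('status')

    # Usage (credentials elided):
    # neutron = neutron_client.Client(
    #     username='admin', password='secret', tenant_name='admin',
    #     auth_url='http://keystone:5000/v2.0')
    # print(member_status(neutron, 'pool-member-uuid'))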

Thanks,
doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-23 Thread Ben Nemec
 

On 2014-07-23 13:25, gordon chung wrote: 

 I left a comment on one of the commits, but in general here are my thoughts:
 1) I would prefer not to do things like switch to oslo.i18n outside of 
 Gerrit. I realize we don't have a specific existing policy for this, but 
 doing that significant 
 work outside of Gerrit is not desirable IMHO. It needs to happen either 
 before graduation or after import into Gerrit.
 2) I definitely don't want to be accepting enable [hacking check] changes 
 outside Gerrit. The github graduation step is _just_ to get the code in 
 shape so it 
 can be imported with the tests passing. It's perfectly acceptable to me to 
 just ignore any hacking checks during this step and fix them in Gerrit 
 where, again, 
 the changes can be reviewed.
 At a glance I don't see any problems with the changes that have been made, 
 but I haven't looked that closely and I think it brings up some topics for 
 clarification in the graduation process.
 
 i'm ok to revert if there are concerns. i just vaguely remember a reference 
 in another oslo lib about waiting for i18n graduation but tbh i didn't 
 actually check back to see what the conclusion was. 
 
 cheers,
 gord

I have no specific concerns, but I don't want to set a precedent where
we make a bunch of changes on Github and then import that code. The work
on Github should be limited to the minimum necessary to get the unit
tests passing (basically if it's not listed in
https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary#Manual_Fixes
then it should happen in Gerrit). Once that happens the project can be
imported and any further changes made under our standard review process.
Either that or changes can be made in incubator before graduation and
reviewed then. 

So I guess I'm a soft -1 on this for right now, but I'll defer to the
other Oslo cores because I don't really have time to take a more
detailed look at the repo and I don't want to be a blocker when I may
not be around to discuss it. 

-Ben 
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-23 Thread Mike Spreitzer
Doug Wiegley do...@a10networks.com wrote on 07/23/2014 03:43:02 PM:

 From: Doug Wiegley do...@a10networks.com
 ...
 The state of the world today: ‘status’ in the neutron database is 
 configuration/provisioning status, not operational status.  Neutron-
 wide thing.  We were discussing adding operational status fields (or
 a neutron REST call to get the info from the backend) last month, 
 but it’s something that isn’t planned for a serious conversation 
 until Kilo, at present.

Thanks for the prompt response.  Let me just grasp at one last straw: is 
there any chance that Neutron will soon define and implement Ceilometer 
metrics that reveal PoolMember health?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Russell Bryant
On 07/23/2014 03:04 PM, Dan Smith wrote:
 FWIW, I do actually agree with not exposing plugin points to
 things that are not stable APIs and if they didn't already exist,
 I'd not approve adding them. I'd actually go further and say not
 even the virt driver API should be a plugin point, since we
 arbitrarily change it during development any time we need to. The
 latter is not a serious or practical view right now though given
 our out of tree Docker/Ironic drivers. I'm just concerned that
 we've had these various extension points exposed for a long time
 and we've not clearly articulated that they are liable to be
 killed off (besides marking vif_driver as deprecated)
 
 Yep, I think we agree. I think that as a project we've identified 
 exposing plug points that aren't stable (or intended to be
 replaceable) as a bad thing, and thus we should be iterating on
 removing them. Especially if we're generous with our
 deprecate-before-remove rules, then I think that we're not likely
 to bite anyone suddenly with something they're shipping while
 working it upstream in parallel. I *really* thought we had called
 this one out on the ReleaseNotes, but apparently that didn't happen
 (probably because we decide to throw in those helper classes to
 avoid breaking configs). Going forward, marking it deprecated in
 the code for a cycle, noting it on the release notes, and then
 removing it the next cycle seems like plenty of warning.

+1 on this stance.  I'd like to remove all plug points that we don't
intend to be considered stable APIs with a reasonable deprecation cycle.

I personally don't consider any API in Nova except the v2 REST API to
be a stable API.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-23 Thread Doug Wiegley
Great question, and to my knowledge, not at present.  There is an ongoing 
discussion about a common usage framework for ceilometer, for all the various 
*aaS things, but status is not included (yet!).  I think that spec is in gerrit.

Thanks,
Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Looking for Coraid cinder contact

2014-07-23 Thread Duncan Thomas
Hi

I'm looking for a maintainer email address for the Cinder Coraid
driver. http://stackalytics.com/report/driverlog?project_id=openstack%2Fcinder
just lists it as Alyseo team with no contact details.


Thanks

-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly IRC Agenda

2014-07-23 Thread Jorge Miramontes
Hey LBaaS folks,

This is your friendly reminder to provide any agenda items for tomorrow's weekly 
IRC meeting. The agenda currently has two items:

  *   Review Updates
  *   TLS work division

Cheers,
--Jorge

P.S. Please don't forget to update the weekly standup == 
https://etherpad.openstack.org/p/neutron-lbaas-weekly-standup
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-23 Thread Davanum Srinivas
I agree with Ben. ( I don't want to set a precedent where we make a
bunch of changes on Github and then import that code )

-- dims

On Wed, Jul 23, 2014 at 3:49 PM, Ben Nemec openst...@nemebean.com wrote:
 On 2014-07-23 13:25, gordon chung wrote:

 I left a comment on one of the commits, but in general here are my
 thoughts:
 1) I would prefer not to do things like switch to oslo.i18n outside of
 Gerrit.  I realize we don't have a specific existing policy for this, but
 doing that significant
 work outside of Gerrit is not desirable IMHO.  It needs to happen either
 before graduation or after import into Gerrit.
 2) I definitely don't want to be accepting enable [hacking check]
 changes outside Gerrit.  The github graduation step is _just_ to get the
 code in shape so it
 can be imported with the tests passing.  It's perfectly acceptable to me
 to just ignore any hacking checks during this step and fix them in Gerrit
 where, again,
 the changes can be reviewed.
 At a glance I don't see any problems with the changes that have been made,
 but I haven't looked that closely and I think it brings up some topics for
 clarification in the graduation process.


 i'm ok to revert if there are concerns. i just vaguely remember a reference
 in another oslo lib about waiting for i18n graduation but tbh i didn't
 actually check back to see what conclusion was.


 cheers,
 gord

 I have no specific concerns, but I don't want to set a precedent where we
 make a bunch of changes on Github and then import that code.  The work on
 Github should be limited to the minimum necessary to get the unit tests
 passing (basically if it's not listed in
 https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary#Manual_Fixes then
 it should happen in Gerrit).  Once that happens the project can be imported
 and any further changes made under our standard review process.  Either that
 or changes can be made in incubator before graduation and reviewed then.

 So I guess I'm a soft -1 on this for right now, but I'll defer to the other
 Oslo cores because I don't really have time to take a more detailed look at
 the repo and I don't want to be a blocker when I may not be around to
 discuss it.

 -Ben



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-23 Thread Doug Hellmann

On Jul 23, 2014, at 3:49 PM, Ben Nemec openst...@nemebean.com wrote:

 On 2014-07-23 13:25, gordon chung wrote:
 
  I left a comment on one of the commits, but in general here are my 
  thoughts:
  1) I would prefer not to do things like switch to oslo.i18n outside of 
  Gerrit.  I realize we don't have a specific existing policy for this, but 
  doing that significant 
  work outside of Gerrit is not desirable IMHO.  It needs to happen either 
  before graduation or after import into Gerrit.
  2) I definitely don't want to be accepting enable [hacking check] 
  changes outside Gerrit.  The github graduation step is _just_ to get the 
  code in shape so it 
  can be imported with the tests passing.  It's perfectly acceptable to me 
  to just ignore any hacking checks during this step and fix them in Gerrit 
  where, again, 
  the changes can be reviewed.
  At a glance I don't see any problems with the changes that have been made, 
  but I haven't looked that closely and I think it brings up some topics for 
  clarification in the graduation process.
 
 
 i'm ok to revert if there are concerns. i just vaguely remember a reference 
 in another oslo lib about waiting for i18n graduation but tbh i didn't 
 actually check back to see what conclusion was.
 
  
 cheers,
 gord
 I have no specific concerns, but I don't want to set a precedent where we 
 make a bunch of changes on Github and then import that code.  The work on 
 Github should be limited to the minimum necessary to get the unit tests 
 passing (basically if it's not listed in 
 https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary#Manual_Fixes then it 
 should happen in Gerrit).  Once that happens the project can be imported and 
 any further changes made under our standard review process.  Either that or 
 changes can be made in incubator before graduation and reviewed then.
 
 So I guess I'm a soft -1 on this for right now, but I'll defer to the other 
 Oslo cores because I don't really have time to take a more detailed look at 
 the repo and I don't want to be a blocker when I may not be around to discuss 
 it.
 
 

I agree with Ben on minimizing the amount of work that happens outside of the 
review process. I would have liked some discussion of the “remove stray tests” 
commit, for example.

Gordon, could you prepare a version of the repository that stops with the 
export and whatever changes are needed to make the test jobs for the new 
library run? If removing some of those tests is part of making the suite run, 
we can talk about that on the list here, but if you can make the job run 
without that commit we should review it in gerrit after the repository is 
imported.

Doug

 -Ben
 
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Jay Pipes

On 07/23/2014 03:04 PM, Dan Smith wrote:

FWIW, I do actually agree with not exposing plugin points to things
that are not stable APIs and if they didn't already exist, I'd not
approve adding them. I'd actually go further and say not even the
virt driver API should be a plugin point, since we arbitrarily change
it during development any time we need to. The latter is not a serious
or practical view right now though given our out of tree Docker/Ironic
drivers. I'm just concerned that we've had these various extension
points exposed for a long time and we've not clearly articulated
that they are liable to be killed off (besides marking vif_driver
as deprecated)


Yep, I think we agree. I think that as a project we've identified
exposing plug points that aren't stable (or intended to be replaceable)
as a bad thing, and thus we should be iterating on removing them.
Especially if we're generous with our deprecate-before-remove rules,
then I think that we're not likely to bite anyone suddenly with
something they're shipping while working it upstream in parallel. I
*really* thought we had called this one out on the ReleaseNotes, but
apparently that didn't happen (probably because we decide to throw in
those helper classes to avoid breaking configs). Going forward, marking
it deprecated in the code for a cycle, noting it on the release notes,
and then removing it the next cycle seems like plenty of warning.


The following are plugin points that I feel should be scrapped (sorry, 
I mean deprecated over a release cycle), as they really are not things 
that anyone actually provides extensions for and, IMO, they just add 
needless code abstraction, noise and indirection:


All of these are pointless:

* metadata_manager=nova.api.manager.MetadataManager
* compute_manager=nova.compute.manager.ComputeManager
* console_manager=nova.console.manager.ConsoleProxyManager
* consoleauth_manager=nova.consoleauth.manager.ConsoleAuthManager
* cert_manager=nova.cert.manager.CertManager
* scheduler_manager=nova.scheduler.manager.SchedulerManager
* db_driver=nova.db (pretty sure that ship has long since sailed)
* network_api_class=nova.network.api.API
* volume_api_class=nova.volume.cinder.API
* manager=nova.cells.manager.CellsManager
* manager=nova.conductor.manager.ConductorManager

Then there are the funnies:

This should not be a manager class at all, but rather a selector that 
switches the behaviour of the underlying network implementation -- i.e. 
it should not be swapped out by custom code but instead just have a 
switch option to indicate the type of network model in use:


* network_manager=nova.network.manager.VlanManager

Same goes for this one, which should just be selected based on the 
network model:


* l3_lib=nova.network.l3.LinuxNetL3

These ones should similarly be selected based on the binding_type, not 
provided as a plugin point (as Ian Wells alluded to):


* vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
* vif_driver=nova.virt.xenapi.vif.XenAPIBridgeDriver

These config options should be renamed to use driver, not manager:

* floating_ip_dns_manager=nova.network.noop_dns_driver.NoopDNSDriver
* instance_dns_manager=nova.network.noop_dns_driver.NoopDNSDriver
* scheduler_host_manager=nova.scheduler.host_manager.HostManager
* power_manager=nova.virt.baremetal.ipmi.IPMI

This config option should be renamed to use driver, not api_class:

* api_class=nova.keymgr.conf_key_mgr.ConfKeyManager

This one should be renamed to use driver, not handler:

* image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore

This one... who knows? There are no other schedulers for the cells 
module other than this one, and it doesn't follow the same manager -> 
driver pattern as most of Nova, so should it be called scheduler_driver 
or just scrapped?:


* scheduler=nova.cells.scheduler.CellsScheduler

This one isn't properly set up as a driver-based system but actually 
implements an API, which you'd have to then subclass identically and 
there would be zero point in doing that since you would need to return 
the same data as is set in the Stats class' methods:


* compute_stats_class=nova.compute.stats.Stats

I think it's pretty clear there's lots of room for consistency and 
improvements.
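
For readers not steeped in this code, the pattern being critiqued is 
just "import whatever class path the config names" -- roughly this (a 
sketch, using the oslo-incubator importutils copy nova carried at the 
time; option name and default mirror the vif_driver example):

    from oslo.config import cfg

    from nova.openstack.common import importutils

    vif_opts = [
        cfg.StrOpt('vif_driver',
                   default='nova.virt.libvirt.vif.LibvirtGenericVIFDriver',
                   help='DEPRECATED: the libvirt VIF driver to load'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(vif_opts)

    def load_vif_driver(*args, **kwargs):
        # Any importable class satisfies this -- which is exactly why
        # it cannot be treated as a stable, supportable plugin API.
        return importutils.import_object(CONF.vif_driver, *args, **kwargs)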


All the best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [devstack][oslo.messaging] Adding a new RPC backend for testing AMQP 1.0

2014-07-23 Thread Ken Giusti
Hi,

I'd like some help with $SUBJECT.  I've got a WIP patch up for review:

https://review.openstack.org/#/c/109118/

My goal is to have an RPC backend that I can use to test the new AMQP
1.0 oslo.messaging driver against.  I suspect this new backend would
initially only be used by tests specifically written against the
driver, but I'm hoping for wider adoption as the driver stabilizes and
AMQP 1.0 adoption increases.

As I said, this is only a WIP and doesn't completely work yet (though
it shouldn't break support for the existing backends).  I'm just
looking for some early feedback on whether or not this is the correct
approach.
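
For anyone who wants to poke at the driver in the meantime, selecting
it from Python is just a matter of the transport URL scheme (a sketch;
the broker address is a placeholder, and 'amqp://' is the scheme the
new driver registers, if I've read its entry point right):

    from oslo.config import cfg
    from oslo import messaging

    transport = messaging.get_transport(
        cfg.CONF, url='amqp://guest:guest@127.0.0.1:5672/')

    target = messaging.Target(topic='test_topic')
    client = messaging.RPCClient(transport, target)
    # client.call({}, 'echo', arg='hello')  # needs a server listening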

thanks!

-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Feasibility of adding global restrictions at trust creation time

2014-07-23 Thread Russell Bryant
On 07/22/2014 11:00 PM, Nathan Kinder wrote:
 
 
 On 07/22/2014 06:55 PM, Steven Hardy wrote:
 On Tue, Jul 22, 2014 at 05:20:44PM -0700, Nathan Kinder wrote:
 Hi,

 I've had a few discussions recently related to Keystone trusts with
 regards to imposing restrictions on trusts at a deployment level.
 Currently, the creator of a trust is able to specify the following
 restrictions on the trust at creation time:

   - an expiration time for the trust
   - the number of times that the trust can be used to issue trust tokens

 If an expiration time (expires_at) is not specified by the creator of
 the trust, then it never expires.  Similarly, if the number of uses
 (remaining_uses) is not specified by the creator of the trust, it has an
 unlimited number of uses.  The important thing to note is that the
 restrictions are entirely in the control of the trust creator.

 There may be cases where a particular deployment wants to specify global
 maximum values for these restrictions to prevent a trust from being
 granted indefinitely.  For example, Keystone configuration could specify
 that a trust can't be created that has more than 100 remaining uses or is valid
 for more than 6 months.  This would certainly cause problems for some
 deployments that may be relying on indefinite trusts, but it is also a
 nice security control for deployments that don't want to allow something
 so open-ended.

 I'm wondering about the feasibility of this sort of change, particularly
 from an API compatibility perspective.  An attempt to create a trust
 without an expires_at value should still be considered as an attempt to
 create a trust that never expires, but Keystone could return a '403
 Forbidden' response if this request violates the maximum specified in
 configuration (this would be similar for remaining_uses).  The semantics
 of the API remain the same, but the response has the potential to be
 rejected for new reasons.  Is this considered as an API change, or would
 this be considered to be OK to implement in the v3 API?  The existing
 API docs [1][2] don't really go to this level of detail with regards to
 when exactly a 403 will be returned for trust creation, though I know of
 specific cases where this response is returned for the create-trust request.

 FWIW if you start enforcing either of these restrictions by default, you
 will break heat, and every other delegation-to-a-service use case I'm aware
 of, where you simply don't have any idea how long the lifetime of the thing
 created by the service (e.g heat stack, Solum application definition,
 Mistral workflow or whatever) will be.

 So while I can understand the desire to make this configurable for some
 environments, please leave the defaults as the current behavior and be
 aware that adding these kind of restrictions won't work for many existing
 trusts use-cases.
 
 I fully agree.  In no way should the default behavior change.
 

 Maybe the solution would be some sort of policy defined exception to these
 limits?  E.g when delegating to a user in the service project, they do not
 apply?
 
 Role-based limits seem to be a natural progression of the idea, though I
 didn't want to throw that out there from the get-go.

I was concerned about this idea from an API compatibility perspective,
but I think the way you have laid it out here makes sense.  Like both
you and Steven said, the behavior of the API when the parameter is not
specified should *not* change.  However, allowing deployment-specific
policy that would reject the request seems fine.
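
As a concrete illustration of the creator-side knobs under discussion,
a sketch using python-keystoneclient's v3 trusts API (endpoint, token,
and IDs are placeholders; a deployment-level cap, if one existed, would
simply turn an out-of-bounds request into a 403):

    import datetime

    from keystoneclient.v3 import client

    ks = client.Client(token='ADMIN_TOKEN',
                       endpoint='http://keystone:35357/v3')

    # Both restrictions are optional today; omitting them yields a
    # trust that never expires and has unlimited uses.
    trust = ks.trusts.create(
        trustor_user='trustor-user-uuid',
        trustee_user='trustee-user-uuid',
        project='project-uuid',
        role_names=['Member'],
        impersonation=True,
        expires_at=(datetime.datetime.utcnow()
                    + datetime.timedelta(days=30)),
        remaining_uses=10)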

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

