[openstack-dev] [Keystone] Token invalidation in deleting role assignments

2014-06-24 Thread Takashi Natsume
Hi all,

When a role assignment is deleted, not only the tokens related to the deleted
role assignment but also all other tokens that the (same) user holds are
invalidated in stable/icehouse (2014.1.1).

For example,
A) Role assignment between a domain and a user via OS-INHERIT (*1)
1. Assign a role (for example, 'Member') to 'user' on 'Domain1' via OS-INHERIT.
2. Assign the same role ('Member') to 'user' on 'Domain2' via OS-INHERIT.
3. Get a token scoped to 'user' and 'Project1' (in 'Domain1').
4. Get a token scoped to 'user' and 'Project2' (in 'Domain2').
5. Create resources (for example, cinder volumes) in 'Project1' with the token
obtained in step 3.
   They can be created.
6. Create resources in 'Project2' with the token obtained in step 4.
   They can be created.
7. Delete the role assignment between 'Domain1' and 'user' (that was added in
step 1).

(After the validated-token cache expires in cinder, etc.)
8. Create resources in 'Project1' with the token obtained in step 3.
   They cannot be created: "401 Unauthorized."
9. Create resources in 'Project2' with the token obtained in step 4.
   They cannot be created: "401 Unauthorized."

In step 9, my expectation is that it should still be possible to create
resources with the token obtained in step 4.

*1:
v3/OS-INHERIT/domains/{domain_id}/users/{user_id}/roles/{role_id}/inherited_to_projects

B) Role assignment between a project and a user
1. Assign a role (for example, 'Member') to 'user' on 'Project1'.
2. Assign the same role ('Member') to 'user' on 'Project2'.
3. Get a token scoped to 'user' and 'Project1'.
4. Get a token scoped to 'user' and 'Project2'.
5. Create resources (for example, cinder volumes) in 'Project1' with the token
obtained in step 3.
   They can be created.
6. Create resources in 'Project2' with the token obtained in step 4.
   They can be created.
7. Delete the role assignment between 'Project1' and 'user' (that was added in
step 1).

(After the validated-token cache expires in cinder, etc.)
8. Create resources in 'Project1' with the token obtained in step 3.
   They cannot be created: "401 Unauthorized."
9. Create resources in 'Project2' with the token obtained in step 4.
   They cannot be created: "401 Unauthorized."

In step 9, my expectation is that it should still be possible to create
resources with the token obtained in step 4.


Are these bugs?
Or is there a reason they were implemented this way?

Regards,
Takashi Natsume
NTT Software Innovation Center
Tel: +81-422-59-4399
E-mail: natsume.taka...@lab.ntt.co.jp




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ServiceVM] servicevm IRC meeting reminder (June 28 Tuesday 5:00(AM)UTC-)

2014-06-24 Thread Isaku Yamahata
As an action item, I've moved the API spec to a Google Doc
until the stackforge repo is created.

Here is the link
https://docs.google.com/document/d/10v818QsHWw5lSpiCMfh908PAvVzkw7_ZUL0cgDiH3Vk/edit?usp=sharing

thanks,

On Mon, Jun 23, 2014 at 11:25:03PM +0900,
Isaku Yamahata  wrote:

> Hi. This is a reminder mail for the servicevm IRC meeting
> June 28, 2014 Tuesdays 5:00(AM)UTC-
> #openstack-meeting on freenode
> https://wiki.openstack.org/wiki/Meetings/ServiceVM
> 
> 
> agenda: (feel free to add your items)
> * announcements
> * action items from the last week
> * new repo in github and API discussion
>   I had hoped to use stackforge, but the repo-creation process seems
>   too slow.
>   So I'd like to start the actual discussion on github until the stackforge
>   repo is created.
> * API discussion for consolidation
>   consolidate multiple existing implementations
> * NFV meeting follow up
> * blueprint follow up
> * open discussion
> * add your items
> -- 
> Isaku Yamahata 

-- 
Isaku Yamahata 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Support for plugins in fuel client

2014-06-24 Thread Dmitriy Shulyak
As I mentioned, cliff uses a similar approach, extending the app by means of
entry points, and is written by the same author.
So I think stevedore will be used in cliff, or maybe already is in newer
versions.
But apart from stevedore-like dynamic extensions, cliff provides modular
layers for a CLI app; it is kind of a framework for writing
CLI applications.
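As a rough illustration (the 'fuelclient' entry-point namespace and the command
itself are made up for the example, not an existing API), a plugin package
could contribute a new command to a cliff-based client like this:

# my_fuel_plugin/commands.py -- hypothetical plugin package
from cliff.command import Command


class NodeList(Command):
    """Illustrative command contributed by a separate package."""

    def get_parser(self, prog_name):
        parser = super(NodeList, self).get_parser(prog_name)
        parser.add_argument('--env', help='filter nodes by environment id')
        return parser

    def take_action(self, parsed_args):
        self.app.stdout.write('listing nodes for env %s\n' % parsed_args.env)

# The plugin package's setup.py would then only need the entry point:
#
#     entry_points={
#         'fuelclient': [
#             'node-list = my_fuel_plugin.commands:NodeList',
#         ],
#     },
#
# and the main app's CommandManager, pointed at the 'fuelclient' namespace,
# would pick the command up once the package is installed.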


On Tue, Jun 24, 2014 at 11:15 PM, Andrey Danin  wrote:

> Why not use stevedore?
>
>
> On Wed, Jun 18, 2014 at 1:42 PM, Igor Kalnitsky 
> wrote:
>
>> Hi guys,
>>
>> Actually, I'm not a fan of cliff, but I think it's a good solution to use
>> it in our fuel client.
>>
>> Here are some pros:
>>
>> * pluggable design: we can encapsulate an entire command's logic in a
>> separate plugin file
>> * built-in output formatters: we don't need to implement various formatters
>> to present received data
>> * interactive mode: cliff makes it possible to provide a shell mode, just
>> like psql does
>>
>> Well, I vote to use cliff inside the fuel client. Yeah, I know, we need to
>> rewrite a lot of code, but we
>> can do it step-by-step.
>>
>> - Igor
>>
>>
>>
>>
>> On Wed, Jun 18, 2014 at 9:14 AM, Dmitriy Shulyak 
>> wrote:
>>
>>> Hi folks,
>>>
>>> I am wondering what our story/vision is for plugins in the fuel client [1].
>>>
>>> We could benefit from using cliff [2] as the framework for the fuel CLI;
>>> apart from common code
>>> for building CLI applications on top of argparse, it provides a nice
>>> feature that allows actions to be
>>> added dynamically by means of entry points (stevedore-like).
>>>
>>> So we would be able to add new actions to the fuel client simply by
>>> installing separate packages with the correct entry points.
>>>
>>> AFAIK stevedore is not used there yet, but I think it will be, since cliff
>>> has the same author and maintainer.
>>>
>>> Do we need this? Maybe there are other options?
>>>
>>> Thanks
>>>
>>> [1] https://github.com/stackforge/fuel-web/tree/master/fuelclient
>>> [2]  https://github.com/openstack/cliff
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Andrey Danin
> ada...@mirantis.com
> skype: gcon.monolake
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Solum][Mistral] New class of requirements for Stackforge projects

2014-06-24 Thread Clark Boylan
On Tue, Jun 24, 2014 at 9:54 PM, Adrian Otto  wrote:
> Hello,
>
> Solum has run into a constraint with the current scheme for requirements 
> management within the OpenStack CI system. We have a proposal for dealing 
> with this constraint that involves making a contribution to openstack-infra. 
> This message explains the constraint, and our proposal for addressing it.
>
> == Background ==
>
> OpenStack uses a list of global requirements in the requirements repo[1], and 
> each project has it’s own requirements.txt and test-requirements.txt files. 
> The requirements are satisfied by gate jobs using pip configured to use the 
> pypi.openstack.org mirror, which is periodically updated with new content 
> from pypi.python.org. One motivation for doing this is that pypi.python.org 
> may not be as fast or as reliable as a local mirror. The gate/check jobs for 
> the projects use the OpenStack internal pypi mirror to ensure stability.
>
> The OpenStack CI system will sync up the requirements across all the official 
> projects and will create reviews in the participating projects for any 
> mis-matches. Solum is one of these projects, and enjoys this feature.
>
> Another motivation is so that users of OpenStack will have one single set of 
> python package requirements/dependencies to install and run the individual 
> OpenStack components.
>
> == Problem ==
>
> Stackforge projects listed in openstack/requirements/projects.txt that decide 
> to depend on each other (for example, Solum wanting to list mistralclient as 
> a requirement) are unable to, because they are not yet integrated, and are 
> not listed in openstack/requirements/global-requirements.txt yet. This means 
> that in order to depend on each other, a project must withdraw from 
> projects.txt and begin using pip with pypi.python.org to satisfy all of 
> their requirements. I strongly dislike this option.
>
> Mistral is still evolving rapidly, and we don’t think it makes sense for them 
> to pursue integration right now. The upstream distributions who include 
> packages to support OpenStack will also prefer not to deal with a requirement 
> that will be cutting a new version every week or two in order to satisfy 
> evolving needs as Solum and other consumers of Mistral help refine how it 
> works.
>
> == Proposal ==
>
> We want the best of both worlds. We want the freedom to innovate and use new 
> software for a limited selection of stackforge projects, and still use the 
> OpenStack pypi server to satisfy my regular requirements. We want the speed 
> and reliability of using our local mirror, and users of Solum to use a 
> matching set of requirements for all the things that we use, and integrated 
> projects use. We want to continue getting the reviews that bring us up to 
> date with new requirements versions.
>
> We propose that we submit an enhancement to the gate/check job setup that 
> will:
>
> 1) Begin (as it does today) by satisfying global-requirements.txt and my 
> local project’s requirements.txt and test-requirements.txt using the local 
> OpenStack pypi mirror.
> 2) After all requirements are satisfied, check the name of my project. If it 
> begins with ‘stackforge/‘ then look for a stackforge-requirements.txt file. 
> If one exists, reconfigure pip to switch to use pypi.python.org, and satisfy 
> the requirements listed in the file. We will list mistralclient there, and 
> get the latest tagged/released version of that.
>
I am reasonably sure that if you remove yourself from the
openstack/requirements project list this is basically how it will
work. Pip is configured to use the OpenStack mirror and fall back on
pypi.python.org for packages not available on the OpenStack mirror
[2]. So I don't think there is any work to do here with additional
requirements files. It should just work. Adding a new requirements
file will just make things more confusing for packagers and consumers
of your software.
>
> == Call To Action ==
>
> What do you think of this approach to satisfy a balance of interests? 
> Everything remains the same for OpenStack projects, and Stackforge projects 
> get a new feature that allows them to require software that has not yet been 
> integrated. Are there even better options that we should consider?
>
> Thanks,
>
> Adrian Otto
>
>
> References:
> [1] https://review.openstack.org/openstack/requirements

For what it is worth, the Infra team has also been looking at
potentially using something like bandersnatch to mirror all of pypi,
which is now a possibility because OpenStack doesn't depend on
packages that are hosted externally to pypi. We would then do
requirements enforcement via checks rather than explicit use of a
restricted mirror. There are some things to sort out like platform
dependent wheels (I am not sure that any OpenStack project directly
consumes these but I have found them to be quite handy) and the
potential need for more enforcement to keep this working, but I think
this is a possibility.

Clark

[2]

[openstack-dev] [Infra][Solum][Mistral] New class of requirements for Stackforge projects

2014-06-24 Thread Adrian Otto
Hello,

Solum has run into a constraint with the current scheme for requirements 
management within the OpenStack CI system. We have a proposal for dealing with 
this constraint that involves making a contribution to openstack-infra. This 
message explains the constraint, and our proposal for addressing it.

== Background ==

OpenStack uses a list of global requirements in the requirements repo[1], and 
each project has it’s own requirements.txt and test-requirements.txt files. The 
requirements are satisfied by gate jobs using pip configured to use the 
pypi.openstack.org mirror, which is periodically updated with new content from 
pypi.python.org. One motivation for doing this is that pypi.python.org may not 
be as fast or as reliable as a local mirror. The gate/check jobs for the 
projects use the OpenStack internal pypi mirror to ensure stability.

The OpenStack CI system will sync up the requirements across all the official 
projects and will create reviews in the participating projects for any 
mis-matches. Solum is one of these projects, and enjoys this feature.

Another motivation is so that users of OpenStack will have one single set of 
python package requirements/dependencies to install and run the individual 
OpenStack components.

== Problem ==

Stackforge projects listed in openstack/requirements/projects.txt that decide 
to depend on each other (for example, Solum wanting to list mistralclient as a 
requirement) are unable to, because they are not yet integrated, and are not 
listed in openstack/requirements/global-requirements.txt yet. This means that 
in order to depend on each other, a project must withdraw from projects.txt and 
begin using pip with pypi.python.org to satisfy all of their requirements. I 
strongly dislike this option.

Mistral is still evolving rapidly, and we don’t think it makes sense for them 
to pursue integration right now. The upstream distributions who include 
packages to support OpenStack will also prefer not to deal with a requirement 
that will be cutting a new version every week or two in order to satisfy 
evolving needs as Solum and other consumers of Mistral help refine how it works.

== Proposal ==

We want the best of both worlds. We want the freedom to innovate and use new 
software for a limited selection of stackforge projects, and still use the 
OpenStack pypi server to satisfy my regular requirements. We want the speed and 
reliability of using our local mirror, and users of Solum to use a matching set 
of requirements for all the things that we use, and integrated projects use. We 
want to continue getting the reviews that bring us up to date with new 
requirements versions.

We propose that we submit an enhancement to the gate/check job setup that will:

1) Begin (as it does today) by satisfying global-requirements.txt and my local 
project’s requirements.txt and test-requirements.txt using the local OpenStack 
pypi mirror.
2) After all requirements are satisfied, check the name of my project. If it 
begins with ‘stackforge/‘ then look for a stackforge-requirements.txt file. If 
one exists, reconfigure pip to switch to use pypi.python.org, and satisfy the 
requirements listed in the file. We will list mistralclient there, and get the 
latest tagged/released version of that.
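To make step 2 concrete, here is a rough sketch (not actual infra code) of the
logic we have in mind for the job:

import os
import subprocess


def install_stackforge_extras(project_name, workspace):
    # Only stackforge projects get the extra pass.
    if not project_name.startswith('stackforge/'):
        return
    extra = os.path.join(workspace, 'stackforge-requirements.txt')
    if os.path.exists(extra):
        # Fall back to the upstream index for these requirements only.
        subprocess.check_call(['pip', 'install',
                               '--index-url', 'https://pypi.python.org/simple',
                               '-r', extra])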

== Call To Action ==

What do you think of this approach to satisfy a balance of interests? 
Everything remains the same for OpenStack projects, and Stackforge projects get 
a new feature that allows them to require software that has not yet been 
integrated. Are there even better options that we should consider?

Thanks,

Adrian Otto


References:
[1] https://review.openstack.org/openstack/requirements
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [ironic]nova scheduler and ironic

2014-06-24 Thread Joe Gordon
On Jun 24, 2014 7:02 PM, "Jander lu"  wrote:
>
> Hi guys, I have two confusing issues from reading the source code.
>
> 1) Can the ironic driver and the KVM driver both exist in the same cloud? For
example, with 8 compute nodes, can I configure 4 of them with compute_driver =
libvirt and the remaining 4 nodes with
compute_driver=nova.virt.ironic.IronicDriver ?
>
> 2) If that works, how does the nova scheduler choose the right node in
this case when I want to boot a VM or a physical node ?

You can use host aggregates to make certain flavors bare metal and others
KVM.
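For instance, with python-novaclient the wiring could look roughly like this
(a sketch only, assuming the AggregateInstanceExtraSpecsFilter scheduler filter
is enabled; credentials, host names and flavor sizes are placeholders):

from novaclient import client

nova = client.Client('2', 'admin', 'secret', 'admin',
                     'http://controller:5000/v2.0')  # placeholder credentials

# Group the ironic-backed hosts into an aggregate and tag them.
bm_agg = nova.aggregates.create('baremetal-hosts', None)
for host in ('ironic-compute-1', 'ironic-compute-2'):
    nova.aggregates.add_host(bm_agg, host)
nova.aggregates.set_metadata(bm_agg, {'baremetal': 'true'})

# A flavor carrying the matching extra spec will only land on those hosts.
bm_flavor = nova.flavors.create('bm.large', ram=65536, vcpus=16, disk=500)
bm_flavor.set_keys({'baremetal': 'true'})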

>
>
> thx all.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Eichberger, German
Hi,

I second Stephen's suggestion of a status matrix. I have heard of 
(provisional) status, operational status, admin status -- I am curious what 
states exist and how an object would transition between them. 

Libra uses only one status field and it transitions as follows:

BUILDING -> ACTIVE|ERROR
ACTIVE -> DEGRADED|ERROR|DELETED
DEGRADED -> ACTIVE|ERROR|DELETED
ERROR -> DELETED
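In other words (purely illustrative, not Libra code), the transition table is
simply:

ALLOWED_TRANSITIONS = {
    'BUILDING': {'ACTIVE', 'ERROR'},
    'ACTIVE':   {'DEGRADED', 'ERROR', 'DELETED'},
    'DEGRADED': {'ACTIVE', 'ERROR', 'DELETED'},
    'ERROR':    {'DELETED'},
}


def can_transition(current, new):
    return new in ALLOWED_TRANSITIONS.get(current, set())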

That said, I don't think admin status is that important for me as an operator 
since my users usually delete LBs and re-create them. But I am curious how 
other operators feel.

Thanks,
German

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Tuesday, June 24, 2014 8:46 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

Alright, y'all have convinced me for now.  How the status is shown on shared 
entities is still to be determined.  However, we don't have any shared 
entities (unless we really want health monitors shareable right now) at this 
point, so the status won't get complicated for this first iteration. 

Thanks,
Brandon

On Wed, 2014-06-25 at 01:10 +, Doug Wiegley wrote:
> Hi Stephen,
> 
> 
> > Ultimately, as we will have several objects which have many-to-many
> relationships with other objects, the 'status' of an object that is 
> shared between what will ultimately be two separate physical entities 
> on the back-end should be represented by a dictionary, and any 
> 'reduction' of this on behalf of the user should happen within the UI.
> It does make things more complex to deal with in certain kinds of 
> failure scenarios, but we don't help ourselves at all by trying to 
> hide, say, when a member of a pool referenced by one listener is 'UP'
> and the same member of the same pool referenced by a different 
> listener is 'DOWN'.  :/
> 
> 
> For M:N, that’s actually an additional status field that rightly 
> belongs as another column in the join table, if at all (allow me to 
> queue up all of my normal M:N objections here in this case, I’m just 
> talking normal db representation.)  The bare object itself still has 
> status of its own.
> 
> 
> Doug
> 
> 
> 
> 
> 
> 
> From: Stephen Balukoff 
> Reply-To: "OpenStack Development Mailing List (not for usage 
> questions)" 
> Date: Tuesday, June 24, 2014 at 6:02 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Which entities need 
> status
> 
> 
> 
> Ultimately, as we will have several objects which have many-to-many 
> relationships with other objects, the 'status' of an object that is 
> shared between what will ultimately be two separate physical entities 
> on the back-end should be represented by a dictionary, and any 
> 'reduction' of this on behalf of the user should happen within the UI.
> It does make things more complex to deal with in certain kinds of 
> failure scenarios, but we don't help ourselves at all by trying to 
> hide, say, when a member of a pool referenced by one listener is 'UP'
> and the same member of the same pool referenced by a different 
> listener is 'DOWN'.  :/
> 
> 
> Granted, our version 1 implementation of these objects is going to be 
> simplified, but it doesn't hurt to think about where we're headed with 
> this API and object model.
> 
> 
> I think it would be worthwhile for someone to produce a status matrix 
> showing which kinds of status are available for each object type, and 
> what the possible values of those statuses are, and what they mean.
> Given the question of what 'status' means is very complicated indeed, 
> I think this is the only way we're going to actually make forward 
> progress in this discussion.
> 
> 
> Stephen
> 
> 
> 
> 
> On Tue, Jun 24, 2014 at 4:02 PM, Dustin Lundquist 
>  wrote:
> I think there is significant value in having status on the
> listener object even in the case of HAProxy. While HAProxy can
> support multiple listeners in a single process, there is no
> reason it needs to be deployed that way. Additionally in the
> case of updating a configuration with an additional listener,
> the other listeners and the load balancer object are not in an
> unavailable or down state before the configuration is applied,
> only the new listener object is down or building. In the case
> of the HAProxy namespace driver, one could map the namespace
> creation and HAProxy process to the load balancer object
> status, but each listener can have its own status based on the
> availability of members in its pools. 
> 
> 
> For the initial version of our new object model we be
> pragmatic and minimize complexity and change, we can preform a
> reduction across all listeners to generate an overall load
> balancer status.
> 
> 
> 
> 
> -Dustin
>  

[openstack-dev] [Neutron] validation of input against column-size specified in schema

2014-06-24 Thread Manish Godara
Hi,

Is there any way in current neutron codebase that can be used to validate
the length of a string attribute against the max column size specified in
the schema for that attribute.

E.g. , in models_v2.py

class Network(model_base.BASEV2, HasId, HasTenant):
    """Represents a v2 neutron network."""

    name = sa.Column(sa.String(255))
    ...


If I want to validate 'name' before storing it in the db, how can I
get the maximum allowable length given this definition?  I don't see any such
validation being done in neutron for fields, so I am wondering how to do it.
Maybe it's there and I missed it.
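One way that seems to work (a small sketch; not an existing neutron helper as
far as I know) is to read the length straight off the model's column, so the
limit stays in sync with the schema definition above:

from neutron.db import models_v2

# sa.String(255) exposes its size via the column's type.
max_len = models_v2.Network.__table__.columns['name'].type.length  # 255


def validate_name(name):
    if name is not None and len(name) > max_len:
        raise ValueError("name exceeds %d characters" % max_len)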


Regards,
manish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Bug squashing day on Tu, 24th of June

2014-06-24 Thread Dmitry Borodaenko
Updated numbers from the end of the day:


startdelta from 2014-06-17mid-daydelta from startenddelta from startdelta
from 2014-06-17New17517011-6-1Incomplete25-621-422-3-9Critical/High for 5.1

140
135-528Critical/High for 5.1, Confirmed Triaged92
87-584-8
Medium/Low/Undefined for 5.1, Confirmed/Triaged238
230-8228-10
In progress67
736747
Customer-found27
25-225-2
Confirmed/Triaged/In progress for 5.1

392
386-618Total open for 5.143928428-11419-208
As you can see, we've made good progress today (unlike last week, we've
reduced all numbers except bugs in progress; it looks like we need to be more
focused and do better at pushing patches through code review). However,
compared with the numbers at the end of the bug squashing day last week, we're
still in the red. We're getting better at triaging bugs, but we're still
finding more High/Critical bugs than we're able to fix. I feel that some
bug priority inflation is taking place (the total number of bugs has grown by
8, the number of High/Critical bugs by 28), so we need to be more strict about
applying the bug priority guidelines.

Thank you all for participating, let's do even better next week!
-DmitryB



On Tue, Jun 24, 2014 at 12:41 PM, Dmitry Borodaenko <
dborodae...@mirantis.com> wrote:

> Mid-day numbers update:
>
>
> start delta from 2014-06-17mid-day delta from startend delta from startdelta
> from 2014-06-17 New175 17017 05Incomplete 25-621 -421-4 -10Critical/High
> for 5.1
>
> 140
> 140
> 33 Critical/High for 5.1, Confirmed Triaged92
> 87-587 -5
> Medium/Low/Undefined for 5.1, Confirmed/Triaged238
> 230-8 230-8
> In progress 67
> 73 6736
> Customer-found27
> 25-225 -2
> Confirmed/Triaged/In progress for 5.1
>
> 392
> 392
> 24Total open for 5.1439 28428-11 428-1117
> Spreadsheet:
>
> https://docs.google.com/a/mirantis.com/spreadsheets/d/10mUeRwOplnmoe_RFkrUSeVEw-__ZMU2nq23BOY-gzYs/edit#gid=1683970476
>
> --
> Dmitry Borodaenko
>



-- 
Dmitry Borodaenko
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Brandon Logan
Alright, y'all have convinced me for now.  How the status is shown on
shared entities is still to be determined.  However, we don't have
any shared entities (unless we really want health monitors shareable
right now) at this point, so the status won't get complicated for this
first iteration. 

Thanks,
Brandon

On Wed, 2014-06-25 at 01:10 +, Doug Wiegley wrote:
> Hi Stephen,
> 
> 
> > Ultimately, as we will have several objects which have many-to-many
> relationships with other objects, the 'status' of an object that is
> shared between what will ultimately be two separate physical entities
> on the back-end should be represented by a dictionary, and any
> 'reduction' of this on behalf of the user should happen within the UI.
> It does make things more complex to deal with in certain kinds of
> failure scenarios, but we don't help ourselves at all by trying to
> hide, say, when a member of a pool referenced by one listener is 'UP'
> and the same member of the same pool referenced by a different
> listener is 'DOWN'.  :/
> 
> 
> For M:N, that’s actually an additional status field that rightly
> belongs as another column in the join table, if at all (allow me to
> queue up all of my normal M:N objections here in this case, I’m just
> talking normal db representation.)  The bare object itself still has
> status of its own.
> 
> 
> Doug
> 
> 
> 
> 
> 
> 
> From: Stephen Balukoff 
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> Date: Tuesday, June 24, 2014 at 6:02 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Which entities need
> status
> 
> 
> 
> Ultimately, as we will have several objects which have many-to-many
> relationships with other objects, the 'status' of an object that is
> shared between what will ultimately be two separate physical entities
> on the back-end should be represented by a dictionary, and any
> 'reduction' of this on behalf of the user should happen within the UI.
> It does make things more complex to deal with in certain kinds of
> failure scenarios, but we don't help ourselves at all by trying to
> hide, say, when a member of a pool referenced by one listener is 'UP'
> and the same member of the same pool referenced by a different
> listener is 'DOWN'.  :/ 
> 
> 
> Granted, our version 1 implementation of these objects is going to be
> simplified, but it doesn't hurt to think about where we're headed with
> this API and object model.
> 
> 
> I think it would be worthwhile for someone to produce a status matrix
> showing which kinds of status are available for each object type, and
> what the possible values of those statuses are, and what they mean.
> Given the question of what 'status' means is very complicated indeed,
> I think this is the only way we're going to actually make forward
> progress in this discussion.
> 
> 
> Stephen
> 
> 
> 
> 
> On Tue, Jun 24, 2014 at 4:02 PM, Dustin Lundquist
>  wrote:
> I think there is significant value in having status on the
> listener object even in the case of HAProxy. While HAProxy can
> support multiple listeners in a single process, there is no
> reason it needs to be deployed that way. Additionally in the
> case of updating a configuration with an additional listener,
> the other listeners and the load balancer object are not in an
> unavailable or down state before the configuration is applied,
> only the new listener object is down or building. In the case
> of the HAProxy namespace driver, one could map the namespace
> creation and HAProxy process to the load balancer object
> status, but each listener can have its own status based on the
> availability of members in its pools. 
> 
> 
> For the initial version of our new object model we be
> pragmatic and minimize complexity and change, we can preform a
> reduction across all listeners to generate an overall load
> balancer status.
> 
> 
> 
> 
> -Dustin
> 
> 
> On Tue, Jun 24, 2014 at 3:15 PM, Vijay B 
> wrote:
> Hi Brandon, Eugene, Doug, 
> 
> 
> During the hackathon, I remember that we had briefly
> discussed how listeners would manifest themselves on
> the LB VM/device, and it turned out that for some
> backends like HAProxy it simply meant creating a
> frontend entry in the cfg file whereas on other
> solutions it could mean spawning a process/equivalent.
> So we must have status fields to track the state of
> any such entities that are actually created. In the
> listener case, an ACTIVE state would mean that the
> appropriate backend processes have bee

[openstack-dev] [Neutron ML2] Potential DB lock when developing new mechanism driver

2014-06-24 Thread Li Ma
Hi all,

I'm developing a new mechanism driver. I'd like to access ml2-related tables in 
create_port_precommit and create_port_postcommit. However, I find it hard to do 
that because the two functions are both inside an existing database transaction 
defined in the create_port function of ml2/plugin.py.

The related code is as follows:

def create_port(self, context, port):
    ...
    session = context.session
    with session.begin(subtransactions=True):
        ...
        self.mechanism_manager.create_port_precommit(mech_context)
    try:
        self.mechanism_manager.create_port_postcommit(mech_context)
        ...
    ...
    return result

As a result, I need to deal carefully with nested database transactions to 
prevent a db lock when I develop my own mechanism driver. Right now, I'm trying 
to understand the idea behind this design. Is it possible to refactor it so that 
precommit and postcommit are outside the db transaction? I think that would be 
ideal for those who develop mechanism drivers and do not know the execution 
context of the whole ML2 plugin well.
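For what it's worth, one way to work with the existing transaction rather than
against it looks roughly like this (a sketch only: MyPortExtension is a
hypothetical model, and _plugin_context is a private attribute of the
PortContext):

from neutron.plugins.ml2 import driver_api as api


class MyMechanismDriver(api.MechanismDriver):

    def initialize(self):
        pass

    def create_port_precommit(self, context):
        # Reuse the session that create_port() already opened so this write
        # joins the outer transaction instead of starting a nested one.
        session = context._plugin_context.session
        # MyPortExtension: hypothetical model defined elsewhere in the driver.
        session.add(MyPortExtension(port_id=context.current['id']))

    def create_port_postcommit(self, context):
        # Keep this for side effects (e.g. calls to an external backend);
        # DB work that must be atomic with the port creation belongs in
        # precommit.
        pass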

Thanks,
Li Ma

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] nova needs a new release of neutronclient for OverQuotaClient exception

2014-06-24 Thread Angus Lees
On Tue, 24 Jun 2014 02:46:33 PM Kyle Mestery wrote:
> On Mon, Jun 23, 2014 at 11:08 AM, Kyle Mestery
> 
>  wrote:
> > On Mon, Jun 23, 2014 at 8:54 AM, Matt Riedemann
> > 
> >  wrote:
> >> There are at least two changes [1][2] proposed to Nova that use the new
> >> OverQuotaClient exception in python-neutronclient, but the unit test jobs
> >> no longer test against trunk-level code of the client packages so they
> >> fail. So I'm here to lobby for a new release of python-neutronclient if
> >> possible so we can keep these fixes moving.  Are there any issues with
> >> that?> 
> > Thanks for bringing this up Matt. I've put this on the agenda for the
> > Neutron meeting today, I'll reply on this thread with what comes out
> > of that discussion.
> > 
> > Kyle
> 
> As discussed in the meeting, we're going to work on making a new
> release of the client Matt. Ping me in channel later this week, we're
> working the details out on that release at the moment.

FYI, it would also make sense to include this neutronclient fix:
 https://review.openstack.org/#/c/98318/
(assuming it gets sufficient reviews+submitted)

> Thanks,
> Kyle
> 
> > [1]
> > https://wiki.openstack.org/wiki/Network/Meetings#Team_Discussion_Topics
> > 
> >> [1] https://review.openstack.org/#/c/62581/
> >> [2] https://review.openstack.org/#/c/101462/
> >> --
> >> 
> >> Thanks,
> >> 
> >> Matt Riedemann
> >> 
> >> 
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
 - Gus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Announcing Keystone Middleware Project

2014-06-24 Thread Morgan Fainberg
I expect that we will be releasing 1.0.0 shortly (or at the very least an alpha 
so we can move forward) to make sure we have time to get the new package in use 
during Juno. As soon as we have something released (which should be very soon), 
I’ll make sure we give a heads up to all the packagers.

Cheers,
Morgan
—
Morgan Fainberg


From: Tom Fifield t...@openstack.org
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: June 24, 2014 at 20:23:42
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Keystone] Announcing Keystone Middleware Project 
 

On 25/06/14 07:24, Morgan Fainberg wrote:  
> The Keystone team would like to announce the official split of  
> python-keystoneclient and the Keystone middleware code.  
> Over time the middleware (auth_token, s3_token, ec2_token) has developed  
> into a fairly expansive code base and  
> includes dependencies that are not necessarily appropriate for the  
> python-keystoneclient library and CLI tools. Combined  
> with the desire to be able to release updates of the middleware code  
> without requiring an update of the CLI and  
> python-keystoneclient library itself, we have opted to split the  
> packaging of the middleware.  

Seems sane :) If you haven't already, please consider giving a heads up  
to the debian/redhat/suse/ubuntu packagers so they're prepped as early  
as possible.  


Regards,  


Tom  

___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Announcing Keystone Middleware Project

2014-06-24 Thread Tom Fifield

On 25/06/14 07:24, Morgan Fainberg wrote:

The Keystone team would like to announce the official split of
python-keystoneclient and the Keystone middleware code.
Over time the middleware (auth_token, s3_token, ec2_token) has developed
into a fairly expansive code base and
includes dependencies that are not necessarily appropriate for the
python-keystoneclient library and CLI tools. Combined
with the desire to be able to release updates of the middleware code
without requiring an update of the CLI and
  python-keystoneclient library itself, we have opted to split the
packaging of the middleware.


Seems sane :) If you haven't already, please consider giving a heads up 
to the debian/redhat/suse/ubuntu packagers so they're prepped as early 
as possible.



Regards,


Tom

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Ahmed RAHAL

Hi,

Le 2014-06-24 20:12, Joe Gordon a écrit :


Finally, assuming the customer had access to this 'unknown' state
information, what would he be able to do with it ? Usually he has no
lever to 'evacuate' or 'recover' the VM. All he could do is spawn
another instance to replace the lost one. But only if the VM really
is currently unavailable, information he must get from other sources.


If I was a user, and my instance went to an 'UNKNOWN' state, I would
check if it's still operating, and if not, delete it and start another
instance.


If I was a user and polled nova list/show on a regular basis just in 
case the management pane indicates a failure, I should have no 
expectation whatsoever. If service availability is my concern, I should 
monitor the service, nothing else. From there, once the service has 
failed, I can imagine checking if VM management is telling me something. 
However, if my service is down and I no longer have access to the VM ... 
simple case: destroy and respawn.


My point is that we should not make the nova state an expected source of 
truth regarding service availability in the VM, as there is no way to 
tell such a thing. If my VM is being DDOSed, nova would still say 
everything is fine, while my service is really down. In that situation, 
console access would help me determine if the VM management is wrong by 
stating everything is ok or if there is another root cause.
Similarly, should nova show a state change if load in the VM is through 
the roof and the service is not responsive ? or if OOM is killing all my 
processes because of a memory shortage ?


As stated before, providing such a state information is misleading 
because there are cases where node unavailability is not service 
disruptive, thus it would indicate a false positive while the opposite 
(everything is ok) is not at all indicative of a healthy status of the 
service.


Maybe I am overlooking a use case here where you absolutely need the user 
of the service to know about a potential problem with their hosting platform.


Ahmed.

--
=
Ahmed Rahal  / iWeb Technologies
Spécialiste de l'Architecture TI
/ IT Architecture Specialist
=

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Meeting Tuesday June 24th at 19:00 UTC

2014-06-24 Thread Elizabeth K. Joseph
On Mon, Jun 23, 2014 at 9:39 AM, Elizabeth K. Joseph
 wrote:
> Hi everyone,
>
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting on Tuesday June 24th, at 19:00 UTC in #openstack-meeting

Meeting minutes and log available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-06-24-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-06-24-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-06-24-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Tempest testing for master is now running on Trusty

2014-06-24 Thread Kyle Mestery
On Tue, Jun 24, 2014 at 8:49 PM, Clark Boylan  wrote:
> Hello everyone,
>
> The Infra team switched all Tempest testing (really all jenkins jobs
> with names containing 'dsvm') for the master branch to Trusty slaves
> today. This means that integration testing has access to new libvirt,
> new mongodb, and new kernels among other things. Python27 unittesting
> for the master branch will be moved to Trusty too over the next few
> days.
>
> Note that all Tempest and python27 testing for Havana and Icehouse will 
> continue
> on Precise.
>
> Please let us know if you see oddness due to these changes, but initial
> indications are that the switch has been smooth so far.
>
> Thanks,
> Clark on behalf of the Infra team
>
This is great, thanks Clark and Infra team! This also means we have
access to a newer version of Open vSwitch, which is also nice.

Thanks,
Kyle

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] [ironic]nova scheduler and ironic

2014-06-24 Thread Jander lu
Hi guys, I have two confusing issues from reading the source code.

1) Can the ironic driver and the KVM driver both exist in the same cloud? For
example, with 8 compute nodes, can I configure 4 of them with compute_driver =
libvirt and the remaining 4 nodes with
compute_driver=nova.virt.ironic.IronicDriver ?

2) If that works, how does the nova scheduler choose the right node in
this case when I want to boot a VM or a physical node ?


thx all.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Steve Kowalik
On 25/06/14 07:26, Mark McLoughlin wrote:
> There's two sides to this coin - concern about alienating
> non-english-as-a-first-language speakers who feel undervalued because
> their language is nitpicked to death and concern about alienating
> english-as-a-first-language speakers who struggle to understand unclear
> or incorrect language.
> 
> Obviously there's a balance to be struck there and different people will
> judge that differently, but I'm personally far more concerned about the
> former rather than the latter case.
> 
> I expect many beyond the english-as-a-first-language world are pretty
> used to dealing with imperfect language but aren't so delighted with
> being constantly reminded that their use of language is imperfect.

Just to throw my two cents into the ring, when I comment about language
use in a review, I will almost always include suggested wording in full.
If I can't come to a decision about whether my wording is better, then I
don't comment on it.

Cheers,
-- 
Steve
"I'm a doctor, not a doorstop!"
 - EMH, USS Enterprise

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All] Tempest testing for master is now running on Trusty

2014-06-24 Thread Clark Boylan
Hello everyone,

The Infra team switched all Tempest testing (really all jenkins jobs
with names containing 'dsvm') for the master branch to Trusty slaves
today. This means that integration testing has access to new libvirt,
new mongodb, and new kernels among other things. Python27 unittesting
for the master branch will be moved to Trusty too over the next few
days.

Note that all Tempest and python27 testing for Havana and Icehouse will continue
on Precise.

Please let us know if you see oddness due to these changes, but initial
indications are that the switch has been smooth so far.

Thanks,
Clark on behalf of the Infra team

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] default security group rules in neutron

2014-06-24 Thread Lingxian Kong
hi Mathieu:

Glad to see there are a lot of people seeking solutions to
address this issue (it's great that you have already done some
work), and I can't wait to see this feature land upstream as
soon as possible, so that we can make it benefit everyone.

2014-06-24 4:54 GMT+08:00 Mathieu Gagné :
> On 2014-06-22 10:23 PM, Lingxian Kong wrote:
>>
>>
>> So, for the functionality parity between nova-network and neutron and
>> for our use case, I registered a blueprint[2] about default security
>> group rules in Neutron days ago and related neutron spec[3], and I
>> want it to be involved in Juno, so we can upgrade our deployment that
>> time for this feature. I'm ready for the code implementation[3].
>>
>> But I still want to see what's the community's thought about including
>> this feature in neutron, any of your feedback and comments are
>> appreciated!
>>
>
> +1
>
> That's awesome news! Glad to hear someone is working on it.
>
> I already implemented (for our own cloud) a similar feature which allows an
> operator to override the set of default security group rules using a yaml
> config file. So yea... you can't edit it through the API, I'm not that fancy
> =)
>
> I'm unfortunately guilty of not proposing it upstream or publishing it
> somewhere. I'll see if I can publish it somewhere this week. Though limited
> in feature, hopefully it will be useful to someone else too.
>
> --
> Mathieu
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards!
---
Lingxian Kong

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Doug Wiegley
Hi Stephen,

> Ultimately, as we will have several objects which have many-to-many 
> relationships with other objects, the 'status' of an object that is shared 
> between what will ultimately be two separate physical entities on the 
> back-end should be represented by a dictionary, and any 'reduction' of this 
> on behalf of the user should happen within the UI. It does make things more 
> complex to deal with in certain kinds of failure scenarios, but we don't help 
> ourselves at all by trying to hide, say, when a member of a pool referenced 
> by one listener is 'UP' and the same member of the same pool referenced by a 
> different listener is 'DOWN'.  :/

For M:N, that’s actually an additional status field that rightly belongs as 
another column in the join table, if at all (allow me to queue up all of my 
normal M:N objections here in this case, I’m just talking normal db 
representation.)  The bare object itself still has status of its own.

Doug



From: Stephen Balukoff <sbaluk...@bluebox.net>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, June 24, 2014 at 6:02 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

Ultimately, as we will have several objects which have many-to-many 
relationships with other objects, the 'status' of an object that is shared 
between what will ultimately be two separate physical entities on the back-end 
should be represented by a dictionary, and any 'reduction' of this on behalf of 
the user should happen within the UI. It does make things more complex to deal 
with in certain kinds of failure scenarios, but we don't help ourselves at all 
by trying to hide, say, when a member of a pool referenced by one listener is 
'UP' and the same member of the same pool referenced by a different listener is 
'DOWN'.  :/

Granted, our version 1 implementation of these objects is going to be 
simplified, but it doesn't hurt to think about where we're headed with this API 
and object model.

I think it would be worthwhile for someone to produce a status matrix showing 
which kinds of status are available for each object type, and what the possible 
values of those statuses are, and what they mean. Given the question of what 
'status' means is very complicated indeed, I think this is the only way we're 
going to actually make forward progress in this discussion.

Stephen



On Tue, Jun 24, 2014 at 4:02 PM, Dustin Lundquist 
<dus...@null-ptr.net> wrote:
I think there is significant value in having status on the listener object even 
in the case of HAProxy. While HAProxy can support multiple listeners in a 
single process, there is no reason it needs to be deployed that way. 
Additionally in the case of updating a configuration with an additional 
listener, the other listeners and the load balancer object are not in an 
unavailable or down state before the configuration is applied, only the new 
listener object is down or building. In the case of the HAProxy namespace 
driver, one could map the namespace creation and HAProxy process to the load 
balancer object status, but each listener can have its own status based on the 
availability of members in its pools.

For the initial version of our new object model we should be pragmatic and minimize 
complexity and change; we can perform a reduction across all listeners to 
generate an overall load balancer status.


-Dustin


On Tue, Jun 24, 2014 at 3:15 PM, Vijay B 
<os.v...@gmail.com> wrote:
Hi Brandon, Eugene, Doug,

During the hackathon, I remember that we had briefly discussed how listeners 
would manifest themselves on the LB VM/device, and it turned out that for some 
backends like HAProxy it simply meant creating a frontend entry in the cfg file 
whereas on other solutions it could mean spawning a process/equivalent. So we 
must have status fields to track the state of any such entities that are 
actually created. In the listener case, an ACTIVE state would mean that the 
appropriate backend processes have been created or that the required config 
file entries have been made.

I like the idea of having relational objects and setting the status on them, 
and in our case we can use the status fields (pool/healthmonitor/listener) in 
each table to denote the state of the relationship (configuration/association 
on backend) to another object like LoadBalancer. So I think the status fields 
should stay.

In this scenario, some entities' status could be updated in lbaas proper, and 
some in the driver implementation. I don't have a strict preference as to which 
among lbaas proper or the driver layer announces the status since we discussed 
on the IRC that we'd have helper functions in the driver to do these updates.


Regards,
Vijay


On Tue, Jun 24, 2014 at 12:16 PM, Brandon Logan 
mailto:brandon.lo...@r

Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Stephen Balukoff
Ultimately, as we will have several objects which have many-to-many
relationships with other objects, the 'status' of an object that is shared
between what will ultimately be two separate physical entities on the
back-end should be represented by a dictionary, and any 'reduction' of this
on behalf of the user should happen within the UI. It does make things more
complex to deal with in certain kinds of failure scenarios, but we don't
help ourselves at all by trying to hide, say, when a member of a pool
referenced by one listener is 'UP' and the same member of the same pool
referenced by a different listener is 'DOWN'.  :/
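To illustrate what that could look like in a response (the field names here are
just an example, not a proposed schema), a shared member's operational status
might be reported per parent object and only reduced in the UI:

member = {
    'id': 'member-uuid',
    'address': '10.0.0.5',
    'operating_status': {
        'listener-1-uuid': 'UP',    # as seen through the first listener
        'listener-2-uuid': 'DOWN',  # as seen through a second listener
    },
}

# A UI-side reduction, if one is wanted at all:
statuses = set(member['operating_status'].values())
summary = statuses.pop() if len(statuses) == 1 else 'DEGRADED'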

Granted, our version 1 implementation of these objects is going to be
simplified, but it doesn't hurt to think about where we're headed with this
API and object model.

I think it would be worthwhile for someone to produce a status matrix
showing which kinds of status are available for each object type, and what
the possible values of those statuses are, and what they mean. Given the
question of what 'status' means is very complicated indeed, I think this is
the only way we're going to actually make forward progress in this
discussion.

Stephen



On Tue, Jun 24, 2014 at 4:02 PM, Dustin Lundquist 
wrote:

> I think there is significant value in having status on the listener object
> even in the case of HAProxy. While HAProxy can support multiple listeners
> in a single process, there is no reason it needs to be deployed that way.
> Additionally in the case of updating a configuration with an additional
> listener, the other listeners and the load balancer object are not in an
> unavailable or down state before the configuration is applied, only the new
> listener object is down or building. In the case of the HAProxy namespace
> driver, one could map the namespace creation and HAProxy process to the
> load balancer object status, but each listener can have its own status
> based on the availability of members in its pools.
>
> For the initial version of our new object model we should be pragmatic and
> minimize complexity and change; we can perform a reduction across all
> listeners to generate an overall load balancer status.
>
>
> -Dustin
>
>
> On Tue, Jun 24, 2014 at 3:15 PM, Vijay B  wrote:
>
>> Hi Brandon, Eugene, Doug,
>>
>> During the hackathon, I remember that we had briefly discussed how
>> listeners would manifest themselves on the LB VM/device, and it turned out
>> that for some backends like HAProxy it simply meant creating a frontend
>> entry in the cfg file whereas on other solutions it could mean spawning a
>> process/equivalent. So we must have status fields to track the state of any
>> such entities that are actually created. In the listener case, an ACTIVE
>> state would mean that the appropriate backend processes have been created
>> or that the required config file entries have been made.
>>
>> I like the idea of having relational objects and setting the status on
>> them, and in our case we can use the status fields
>> (pool/healthmonitor/listener) in each table to denote the state of the
>> relationship (configuration/association on backend) to another object like
>> LoadBalancer. So I think the status fields should stay.
>>
>> In this scenario, some entities' status could be updated in lbaas proper,
>> and some in the driver implementation. I don't have a strict preference as
>> to which among lbaas proper or the driver layer announces the status since
>> we discussed on the IRC that we'd have helper functions in the driver to do
>> these updates.
>>
>>
>> Regards,
>> Vijay
>>
>>
>> On Tue, Jun 24, 2014 at 12:16 PM, Brandon Logan <
>> brandon.lo...@rackspace.com> wrote:
>>
>>> On Tue, 2014-06-24 at 18:53 +, Doug Wiegley wrote:
>>> > Hi Brandon,
>>> >
>>> > I think just one status is overloading too much onto the LB object
>>> (which
>>> > is perhaps something that a UI should do for a user, but not something
>>> an
>>> > API should be doing.)
>>>
>>> That is a good point and perhaps its another discussion to just have
>>> some way to show the status an entity has for each load balancer, which
>>> is what mark suggested for the member status at the meet-up.
>>>
>>> >
>>> > > 1) If an entity exists without a link to a load balancer it is purely
>>> > > just a database entry, so it would always be ACTIVE, but not really
>>> > > active in a technical sense.
>>> >
>>> > Depends on the driver.  I don¹t think this is a decision for lbaas
>>> proper.
>>>
>>> Driver is linked to the flavor or provider.  Flavor or provider will/is
>>> linked to load balancer.  We won't be able get a driver to send anything
>>> to if there isn't a load balancer.  Without a driver it is a decision
>>> for lbaas proper.  I'd be fine with setting the status of these
>>> "orphaned" entities to just ACTIVE but I'm just worried about the status
>>> management in the future.
>>>
>>> >
>>> >
>>> > > 2) If some of these entities become shareable then how does the
>>> status
>>> > > reflect that the entit

Re: [openstack-dev] [Neutron] default security group rules in neutron

2014-06-24 Thread Aaron Rosen
Hi Lingxian,

I've definitely experienced this problem first hand when new tenants are
allowed access to our openstack cloud. I understand that nova has an
extension to do this, but I'm curious whether the desired security group
rules could be set as part of the tenant onboarding script. I'm not opposed to
adding this, but if that's an acceptable solution it might be the easiest
thing to do, as neutron already supports this :)
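For example, an onboarding script could pre-populate the new tenant's default
group with something like this (a sketch with python-neutronclient; credentials
and rule values are placeholders):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

new_tenant_id = 'TENANT-UUID'  # the tenant being onboarded (placeholder)

# Find the tenant's 'default' security group and allow inbound SSH.
groups = neutron.list_security_groups(tenant_id=new_tenant_id,
                                      name='default')['security_groups']
default_sg = groups[0]
neutron.create_security_group_rule({'security_group_rule': {
    'security_group_id': default_sg['id'],
    'direction': 'ingress',
    'protocol': 'tcp',
    'port_range_min': 22,
    'port_range_max': 22,
    'remote_ip_prefix': '0.0.0.0/0',
}})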

Best,

Aaron


On Mon, Jun 23, 2014 at 1:54 PM, Mathieu Gagné  wrote:

> On 2014-06-22 10:23 PM, Lingxian Kong wrote:
>
>>
>> So, for the functionality parity between nova-network and neutron and
>> for our use case, I registered a blueprint[2] about default security
>> group rules in Neutron days ago and related neutron spec[3], and I
>> want it to be involved in Juno, so we can upgrade our deployment that
>> time for this feature. I'm ready for the code implementation[3].
>>
>> But I still want to see what's the community's thought about including
>> this feature in neutron, any of your feedback and comments are
>> appreciated!
>>
>>
> +1
>
> That's awesome news! Glad to hear someone is working on it.
>
> I already implemented (for our own cloud) a similar feature which allows
> an operator to override the set of default security group rules using a
> yaml config file. So yea... you can't edit it through the API, I'm not that
> fancy =)
>
> I'm unfortunately guilty of not proposing it upstream or publishing it
> somewhere. I'll see if I can publish it somewhere this week. Though limited
> in feature, hopefully it will be useful to someone else too.
>
> --
> Mathieu
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][pci] A couple of questions

2014-06-24 Thread yongli he

Hi, Robert, Irenab

Are your patches properly setting up the topic, like
pci-passthrough-sriov?
All SR-IOV patches need this tag, I think, to help people find this set of
patches to review.


https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/pci-passthrough-sriov,n,z
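
If it helps, one way to set the topic when pushing a change (assuming
git-review is being used) is roughly:

    git review -t bp/pci-passthrough-sriov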

Yongli He

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Chris Behrens
I don't think we should be flipping states for instances on a potentially 
downed compute. We definitely should not set an instance to ERROR. I think a 
time associated with the last power state check might be nice and be good 
enough.
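
Something like the following in "nova show" output would probably be enough,
purely as an illustration (the field name is hypothetical, not an existing
attribute):

    | power_state_checked_at | 2014-06-24T21:03:11Z |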

- Chris

> On Jun 24, 2014, at 5:17 PM, Joe Gordon  wrote:
> 
> 
> 
> 
>> On Tue, Jun 24, 2014 at 5:12 PM, Joe Gordon  wrote:
>> 
>> 
>> 
>>> On Tue, Jun 24, 2014 at 4:16 PM, Ahmed RAHAL  wrote:
>>> Le 2014-06-24 17:38, Joe Gordon a écrit :
 
 On Jun 24, 2014 2:31 PM, "Russell Bryant" >>> > wrote:
>>> 
  > There be dragons here.  Just because Nova doesn't see the node reporting
  > in, doesn't mean the VMs aren't actually still running.  I think this
  > needs to be left to logic outside of Nova.
  >
  > For example, if your deployment monitoring really does think the host is
  > down, you want to make sure it's *completely* dead before taking further
  > action such as evacuating the host.  You certainly don't want to risk
  > having the VM running on two different hosts.  This is just a business I
  > don't think Nova should be getting in to.
 
 I agree nova shouldn't take any actions. But I don't think leaving an
 instance as 'active' is right either.  I was thinking move instance to
 error state (maybe an unknown state would be more accurate) and let the
 user deal with it, versus just letting the user deal with everything.
 Since nova knows something *may* be wrong shouldn't we convey that to
 the user (I'm not 100% sure we should myself).
>>> 
>>> I saw compute nodes going down, from a management perspective (say, 
>>> nova-compute disappeared), but VMs were just fine. Reporting on the state 
>>> may be misleading. The 'unknown' state would fit, but nothing lets us 
>>> presume the VMs are non-functional or impacted.
>> 
>> nothing lets us presume the opposite as well. We don't know if the instance 
>> is still up.
>>  
>>> 
>>> As far as an operator is concerned, a compute node not responding is a 
>>> reason enough to check the situation.
>>> 
>>> To go further about other comments related to customer feedback, there are 
>>> many reasons a customer may think his VM is down, so showing him a 'useful 
>>> information' in some cases will only trigger more anxiety.
>>> Besides people will start hammering the API to check 'state' instead of 
>>> using proper monitoring.
>>> But, state is already reported if the customer shuts down a VM, so ...
>>> 
>>> Currently, compute nodes state reporting is done by the nova-compute 
>>> process himself, reporting back with a time stamp to the database (through 
>>> conductor if I recall well). It's more like a watchdog than a reporting 
>>> system.
>>> For VMs (assuming we find it useful) the same kind of process could occur: 
>>> nova-compute reporting back all states with time stamps for all VMs he 
>>> hosts. This shall then be optional, as I already sense scaling/performance 
>>> issues here (ceilometer anyone ?).
>>> 
>>> Finally, assuming the customer had access to this 'unknown' state 
>>> information, what would he be able to do with it ? Usually he has no lever 
>>> to 'evacuate' or 'recover' the VM. All he could do is spawn another 
>>> instance to replace the lost one. But only if the VM really is currently 
>>> unavailable, an information he must get from other sources.
>> 
>> If I was a user, and my instance went to an 'UNKNOWN' state, I would check 
>> if it's still operating, and if not delete it and start another instance.
> 
> The alternative is how things work today: if a nova-compute goes down we 
> don't change any instance states, and the user is responsible for making sure 
> their instance is still operating even if the instance is set to ACTIVE.
>  
>>  
>>> 
>>> So, I see how the state reporting could be a useful information, but am not 
>>> sure that nova Status is the right place for it.
>>> 
>>> Ahmed.
>>> 
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - compare_type values

2014-06-24 Thread Stephen Balukoff
Making sure all drivers support the features offered in Neutron LBaaS means
we are stuck going with the 'least common denominator' in all cases. While
this ensures all vendors implement the same things in functionally the
same way, it is also probably a big reason the Neutron LBaaS project has
been so incredibly slow in seeing new features added over the last two
years.

In the gerrit review that Dustin linked, it sounds like the people
contributing to the discussion are in favor of allowing drivers to reject
some configurations as unsupported through use of exceptions (details on
how that will work is being hashed out now if you want to participate in
that discussion).  Let's assume, therefore, that with the LBaaS v2 API and
Object model we're also going to get this ability-- which of course also
means that drivers do not have to support every feature exposed by the API.

(And again, as Dustin pointed out, a Linux LVS-based driver definitely
wouldn't be able to support any L7 features at all, yet it's still a very
useful driver for many deployments.)

Finally, I do not believe that the LBaaS project should be "held back"
because one vendor's implementation doesn't work well with a couple
features exposed in the API. As Dustin said, let the API expose a rich
feature set and allow drivers to reject certain configurations when they
don't support them.
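
To make the idea concrete, a driver-side rejection could look roughly like
the sketch below (the exception and class names are illustrative only, not
the actual Neutron LBaaS code):

    # Illustrative sketch only; the real driver interface differs.
    class UnsupportedFeature(Exception):
        pass

    class LvsDriverSketch(object):
        """Hypothetical LVS-style driver with no L7 or TLS termination."""

        def create_listener(self, listener):
            if listener.get('l7_policies'):
                raise UnsupportedFeature(
                    "L7 rules are not supported by this driver")
            if listener.get('protocol') == 'TERMINATED_HTTPS':
                raise UnsupportedFeature(
                    "TLS termination is not supported by this driver")
            # Otherwise fall through to the normal provisioning path.
            return self._provision(listener)

        def _provision(self, listener):
            # Placeholder for the real backend call.
            return listener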

Stephen



On Tue, Jun 24, 2014 at 9:09 AM, Dustin Lundquist 
wrote:

> I brought this up on https://review.openstack.org/#/c/101084/.
>
>
> -Dustin
>
>
> On Tue, Jun 24, 2014 at 7:57 AM, Avishay Balderman 
> wrote:
>
>>  Hi Dustin
>>
>> I agree with the concept you described but as far as I understand it is
>> not currently supported in Neutron.
>>
>> So a driver should be fully compatible with the interface it implements.
>>
>>
>>
>> Avishay
>>
>>
>>
>> *From:* Dustin Lundquist [mailto:dus...@null-ptr.net]
>> *Sent:* Tuesday, June 24, 2014 5:41 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7
>> Rule - compare_type values
>>
>>
>>
>> I think the API should provide a richly featured interface, and
>> individual drivers should indicate if they support the provided
>> configuration. For example, there is a spec for a Linux LVS LBaaS driver;
>> this driver would not support TLS termination or any layer 7 features, but
>> would still be valuable for some deployments. The user experience of such a
>> solution could be improved if the driver propagated up a message
>> specifically identifying the unsupported feature.
>>
>>
>>
>>
>>
>> -Dustin
>>
>>
>>
>> On Tue, Jun 24, 2014 at 4:28 AM, Avishay Balderman 
>> wrote:
>>
>> Hi
>>
>> One of the L7 Rule attributes is ‘compare_type’.
>>
>> This field is the match operator that the rule should activate against
>> the value found in the request.
>>
>> Below is a list of the possible values:
>>
>> - Regexp
>>
>> - StartsWith
>>
>> - EndsWith
>>
>> - Contains
>>
>> - EqualTo (*)
>>
>> - GreaterThan (*)
>>
>> - LessThan (*)
>>
>>
>>
>> The last 3 operators (*) in the list are used in numerical matches.
>>
>> The Radware load balancing backend does not support those operators “out of
>> the box” and a significant development effort would be needed in order to
>> support them.
>>
>> We are afraid of missing the Juno timeframe if we have to focus on
>> supporting the numerical operators.
>>
>> Therefore we ask to support the non-numerical operators for Juno and add
>> support for the numerical operators post-Juno.
>>
>>
>>
>> See
>> https://review.openstack.org/#/c/99709/4/specs/juno/lbaas-l7-rules.rst
>>
>>
>>
>> Thanks
>>
>> Avishay
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] neutron config not working

2014-06-24 Thread Mark Kirkwood

On 25/06/14 10:59, Rob Crittenden wrote:

Before I get punted onto the operators list, I post this here because
this is the default config and I'd expect the defaults to just work.

Running devstack inside a VM with a single NIC configured and this in
localrc:

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
Q_USE_DEBUG_COMMAND=True

Results in a successful install but no DHCP address assigned to hosts I
launch and other oddities like no CIDR in nova net-list output.

Is this still the default way to set things up for single node? It is
according to https://wiki.openstack.org/wiki/NeutronDevstack




That does look ok: I have an essentially equivalent local.conf:

...
ENABLED_SERVICES+=,-n-net
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,tempest

I don't have 'neutron' specifically enabled... not sure if/why that 
might make any difference tho. However instance launching and ip address 
assignment seem to work ok.


However I *have* seen the issue of instances not getting IP addresses in 
single host setups, and it is often due to use of virtio with bridges 
(which is the default, I think). Try:


nova.conf:
...
libvirt_use_virtio_for_bridges=False
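
Or, to have devstack write that for you, a post-config section in local.conf
should do the same thing (roughly, assuming a reasonably recent devstack):

[[post-config|$NOVA_CONF]]
[DEFAULT]
libvirt_use_virtio_for_bridges = False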


Regards

Mark



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Small questions re executor

2014-06-24 Thread Dmitri Zimine
Got some questions while fixing a bug https://review.openstack.org/#/c/102391/: 

* We must convey the action ERROR details back to the engine, and to the end 
user. Log is not sufficient. How exactly? Via context? Via extra parameters to 
convey_execution_results? Need a field in the model.
https://github.com/stackforge/mistral/blob/master/mistral/engine/drivers/default/executor.py#L46-L59


* What is the reason to update status on task failure in handle_task_error via 
direct DB access, not via convey_task_results? 
https://github.com/stackforge/mistral/blob/master/mistral/engine/drivers/default/executor.py#L61
 Bypassing convey_task_results can cause grief, from missing TRACE statements to 
more serious stuff… And it looks like we are failing the whole execution there? 
Just because one action had failed? Please clarify the intent here. Note: 
details may all go away while doing Refine Engine <-> Executor protocol 
blueprint 
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-executor-protocol,
 we just need to clarify the intent
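
For the first question, a purely hypothetical shape of what I mean (names and
signatures are illustrative, not the current Mistral code):

    # Hypothetical sketch only; real engine/executor signatures differ.
    def convey_task_results(task_id, state, result, error_details=None):
        """Report an action's outcome, including failure details if any."""
        task = {'id': task_id, 'state': state, 'result': result}
        if error_details is not None:
            # Would need a new field on the task model, so the error
            # reaches the end user and not just the log.
            task['error_details'] = error_details
        return task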

DZ> ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Joe Gordon
On Tue, Jun 24, 2014 at 5:12 PM, Joe Gordon  wrote:

>
>
>
> On Tue, Jun 24, 2014 at 4:16 PM, Ahmed RAHAL  wrote:
>
>> Le 2014-06-24 17:38, Joe Gordon a écrit :
>>
>>>
>>> On Jun 24, 2014 2:31 PM, "Russell Bryant" >> > wrote:
>>>
>>
>>   > There be dragons here.  Just because Nova doesn't see the node
>>> reporting
>>>  > in, doesn't mean the VMs aren't actually still running.  I think this
>>>  > needs to be left to logic outside of Nova.
>>>  >
>>>  > For example, if your deployment monitoring really does think the host
>>> is
>>>  > down, you want to make sure it's *completely* dead before taking
>>> further
>>>  > action such as evacuating the host.  You certainly don't want to risk
>>>  > having the VM running on two different hosts.  This is just a
>>> business I
>>>  > don't think Nova should be getting in to.
>>>
>>> I agree nova shouldn't take any actions. But I don't think leaving an
>>> instance as 'active' is right either.  I was thinking move instance to
>>> error state (maybe an unknown state would be more accurate) and let the
>>> user deal with it, versus just letting the user deal with everything.
>>> Since nova knows something *may* be wrong shouldn't we convey that to
>>> the user (I'm not 100% sure we should myself).
>>>
>>
>> I saw compute nodes going down, from a management perspective (say,
>> nova-compute disappeared), but VMs were just fine. Reporting on the state
>> may be misleading. The 'unknown' state would fit, but nothing lets us
>> presume the VMs are non-functional or impacted.
>>
>
> nothing lets us presume the opposite as well. We don't know if the
> instance is still up.
>
>
>>
>> As far as an operator is concerned, a compute node not responding is a
>> reason enough to check the situation.
>>
>> To go further about other comments related to customer feedback, there
>> are many reasons a customer may think his VM is down, so showing him a
>> 'useful information' in some cases will only trigger more anxiety.
>> Besides people will start hammering the API to check 'state' instead of
>> using proper monitoring.
>> But, state is already reported if the customer shuts down a VM, so ...
>>
>> Currently, compute nodes state reporting is done by the nova-compute
>> process himself, reporting back with a time stamp to the database (through
>> conductor if I recall well). It's more like a watchdog than a reporting
>> system.
>> For VMs (assuming we find it useful) the same kind of process could
>> occur: nova-compute reporting back all states with time stamps for all VMs
>> he hosts. This shall then be optional, as I already sense
>> scaling/performance issues here (ceilometer anyone ?).
>>
>> Finally, assuming the customer had access to this 'unknown' state
>> information, what would he be able to do with it ? Usually he has no lever
>> to 'evacuate' or 'recover' the VM. All he could do is spawn another
>> instance to replace the lost one. But only if the VM really is currently
>> unavailable, an information he must get from other sources.
>>
>
> If I was a user, and my instance went to an 'UNKNOWN' state, I would check
> if it's still operating, and if not delete it and start another instance.
>

The alternative is how things work today: if a nova-compute goes down we
don't change any instance states, and the user is responsible for making
sure their instance is still operating even if the instance is set to
ACTIVE.


>
>
>>
>> So, I see how the state reporting could be a useful information, but am
>> not sure that nova Status is the right place for it.
>>
>> Ahmed.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for listener be set through separate API/model?

2014-06-24 Thread Stephen Balukoff
Evgeny--

Two minor nits:

* Your spec lists the new SNI related settings 'sni_list' (and it contains
more than just IDs, so calling it 'sni_container_ids_list' is misleading).
Please be precise in the terms you use, and don't switch them mid
discussion. :)
* Also, I personally really hate long table names when they're unnecessary.
"vipsniassociations" isn't mentioned in your spec anywhere, and frankly, is
a lot worse than "sni_list." I personally prefer "SNIPolicies", but I'm
also OK with a short name like "sni_list".

Otherwise I agree with you on all points.

Stephen





On Tue, Jun 24, 2014 at 3:26 AM, Evgeny Fedoruk  wrote:

>  Vijay, there is no intention for a new TLS settings API.
>
> Creation of a listener with TLS offloading will be one-step.
>
>
>
> When tenant creates listener with TERMINATED-HTTPS protocol he must supply
> default_tls_container_id for offloading.
>
> Not supplying default TLS container id for offloading for TERMINATED-HTTPS
> listener will raise an error.
>
> SNI list may or may not be supplied by the tenant. Default value for SNI
> certificates list is an empty list.
>
>
>
> So listener resource will have another two attributes:
> default_tls_container_id and sni_container_ids_list. These are relevant for
> TERMINATED-HTTPS protocol listeners only. In other cases their default values
> are ‘None’ and an empty list.
>
> In schema, Default_tls_container_id will be added to listener object as
> another column.
>
> Sni_container_ids_list will be managed by a new table “vipsniassociations”
> which has listener_id, container_id, and position (for ordering) columns
>
>
>
> Does it make sense?
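>
> As an illustration only (field names follow the description above and may
> differ in the final spec), a one-step create could then look like:
>
>     POST /v2.0/lbaas/listeners
>     {
>         "listener": {
>             "protocol": "TERMINATED_HTTPS",
>             "protocol_port": 443,
>             "default_tls_container_id": "<container-uuid>",
>             "sni_container_ids_list": ["<container-uuid-1>", "<container-uuid-2>"]
>         }
>     }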
>
>
>
> Thanks,
>
> Evg
>
>
>
>
>
>
>
>
>
> *From:* Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
> *Sent:* Tuesday, June 24, 2014 12:31 PM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for
> listener be set through separate API/model?
>
>
>
>
>
> To clarify, the request is for a new TLS settings API with
> “default_tls_container_id” & “sni_list”.
>
>
>
> If there is a new API, then we would have an object model reflecting this
> as a separate entity.
>
>
>
> The tenant would do the following
>
>
>
> 1.   Create a listener with TERMINATED_HTTPS
>
> 2.   Set the TLS settings for the listener using
> /v2.0/listener//tlssettings  (if at all we are having some
> default values this can be reflected here)
>
>
>
> The only good thing is the separation of the TLS settings out of the
> listener API.
>
>
>
> But, I can see 2 downsides
>
> 1.   The loadbalancer creation is a 2 step procedure
>
> 2.   We cannot enforce certificate attachment as part of the create
> of listener.
>
>
>
> If the new API itself has “-1”s then I am perfectly OK with the current
> object model with default_tls_container_id in listener table.
>
>
>
> Thanks,
>
> Vijay V.
>
>
>
>
>
> *From:* Evgeny Fedoruk [mailto:evge...@radware.com ]
> *Sent:* Tuesday, June 24, 2014 2:19 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for
> listener be set through separate API/model?
>
>
>
> Vipsniassociations table: Line 147 in last patch of the document
>
>
>
> *From:* Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com
> ]
> *Sent:* Tuesday, June 24, 2014 10:17 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for
> listener be set through separate API/model?
>
>
>
>
>
> >>SNI list is managed by separate entity
>
> What is this entity?
>
>
>
> *From:* Evgeny Fedoruk [mailto:evge...@radware.com ]
> *Sent:* Tuesday, June 24, 2014 12:25 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for
> listener be set through separate API/model?
>
>
>
> +1 for option 1. SNI list is managed by separate entity, default TLS
> container is part of a listener object. It will have None value when
> listener does not offload TLS.
>
> Managing another entity for 1:0-1 relationship just for future use seems
> not right to me. Breaking TLS settings apart from listener can be done when
> needed, if needed.
>
>
>
> Thanks,
>
> Evg
>
>
>
>
>
> *From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net
> ]
> *Sent:* Tuesday, June 24, 2014 4:26 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for
> listener be set through separate API/model?
>
>
>
> Ok, so we've got opinions on both sides of the argument here. I'm actually
> pretty ambivalent about it. Do others have strong opinions on this?
>
>
>
> On Mon, Jun 23, 2014 at 6:03 PM, Doug Wiegley 
> wrote:
>
> Put me down for being in favor of option 1.
>
>
>
> A single attribute in a 1:1 relationship?  Putting that in a new table
> sounds

Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Joe Gordon
On Tue, Jun 24, 2014 at 4:16 PM, Ahmed RAHAL  wrote:

> Le 2014-06-24 17:38, Joe Gordon a écrit :
>
>>
>> On Jun 24, 2014 2:31 PM, "Russell Bryant" > > wrote:
>>
>
>   > There be dragons here.  Just because Nova doesn't see the node
>> reporting
>>  > in, doesn't mean the VMs aren't actually still running.  I think this
>>  > needs to be left to logic outside of Nova.
>>  >
>>  > For example, if your deployment monitoring really does think the host
>> is
>>  > down, you want to make sure it's *completely* dead before taking
>> further
>>  > action such as evacuating the host.  You certainly don't want to risk
>>  > having the VM running on two different hosts.  This is just a business
>> I
>>  > don't think Nova should be getting in to.
>>
>> I agree nova shouldn't take any actions. But I don't think leaving an
>> instance as 'active' is right either.  I was thinking move instance to
>> error state (maybe an unknown state would be more accurate) and let the
>> user deal with it, versus just letting the user deal with everything.
>> Since nova knows something *may* be wrong shouldn't we convey that to
>> the user (I'm not 100% sure we should myself).
>>
>
> I saw compute nodes going down, from a management perspective (say,
> nova-compute disappeared), but VMs were just fine. Reporting on the state
> may be misleading. The 'unknown' state would fit, but nothing lets us
> presume the VMs are non-functional or impacted.
>

nothing lets us presume the opposite as well. We don't know if the instance
is still up.


>
> As far as an operator is concerned, a compute node not responding is a
> reason enough to check the situation.
>
> To go further about other comments related to customer feedback, there are
> many reasons a customer may think his VM is down, so showing him a 'useful
> information' in some cases will only trigger more anxiety.
> Besides people will start hammering the API to check 'state' instead of
> using proper monitoring.
> But, state is already reported if the customer shuts down a VM, so ...
>
> Currently, compute nodes state reporting is done by the nova-compute
> process himself, reporting back with a time stamp to the database (through
> conductor if I recall well). It's more like a watchdog than a reporting
> system.
> For VMs (assuming we find it useful) the same kind of process could occur:
> nova-compute reporting back all states with time stamps for all VMs he
> hosts. This shall then be optional, as I already sense scaling/performance
> issues here (ceilometer anyone ?).
>
> Finally, assuming the customer had access to this 'unknown' state
> information, what would he be able to do with it ? Usually he has no lever
> to 'evacuate' or 'recover' the VM. All he could do is spawn another
> instance to replace the lost one. But only if the VM really is currently
> unavailable, an information he must get from other sources.
>

If I was a user, and my instance went to an 'UNKNOWN' state, I would check
if its still operating, and if not delete it and start another instance.


>
> So, I see how the state reporting could be a useful information, but am
> not sure that nova Status is the right place for it.
>
> Ahmed.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] environment deletion

2014-06-24 Thread McLellan, Steven
Is there any reason the system Environment class doesn't implement destroy? 
Without it, the pieces of the heat stack not owned by other resources get left 
lying around. It looks like it was once implemented as part of deploy, but that 
no longer seems to execute.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Timeline for the rest of the Juno release

2014-06-24 Thread Michael Still
Your comments are fair. I think perhaps at this point we should defer
discussion of the further away deadlines until the mid cycle meetup --
that will give us a chance to whiteboard the flow for that period of
the release.

Or do you really want to lock this down now?

Michael

On Wed, Jun 25, 2014 at 12:53 AM, Day, Phil  wrote:
>> -Original Message-
>> From: Russell Bryant [mailto:rbry...@redhat.com]
>> Sent: 24 June 2014 13:08
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [Nova] Timeline for the rest of the Juno release
>>
>> On 06/24/2014 07:35 AM, Michael Still wrote:
>> > Phil -- I really want people to focus their efforts on fixing bugs in
>> > that period was the main thing. The theory was if we encouraged people
>> > to work on specs for the next release, then they'd be distracted from
>> > fixing the bugs we need fixed in J.
>> >
>> > Cheers,
>> > Michael
>> >
>> > On Tue, Jun 24, 2014 at 9:08 PM, Day, Phil  wrote:
>> >> Hi Michael,
>> >>
>> >> Not sure I understand the need for a gap between "Juno Spec approval
>> freeze" (Jul 10th) and "K opens for spec proposals" (Sep 4th).I can
>> understand that K specs won't get approved in that period, and may not get
>> much feedback from the cores - but I don't see the harm in letting specs be
>> submitted to the K directory for early review / feedback during that period ?
>>
>> I agree with both of you.  Priorities need to be finishing up J, but I don't 
>> see
>> any reason not to let people post K specs whenever.
>> Expectations just need to be set appropriately that it may be a while before
>> they get reviewed/approved.
>>
> Exactly - I think it's reasonable to set the expectation that the focus of 
> those that can produce/review code will be elsewhere - but that shouldn't 
> stop some small effort going into knocking the rough corners off the specs at 
> the same time
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Announcing Keystone Middleware Project

2014-06-24 Thread Morgan Fainberg
The Keystone team would like to announce the official split of 
python-keystoneclient and the Keystone middleware code.
Over time the middleware (auth_token, s3_token, ec2_token) has developed into a 
fairly expansive code base and
includes dependencies that are not necessarily appropriate for the 
python-keystoneclient library and CLI tools. Combined
with the desire to be able to release updates of the middleware code without 
requiring an update of the CLI and
 python-keystoneclient library itself, we have opted to split the packaging of 
the middleware.

Launchpad Project (bug/bp tracker): https://launchpad.net/keystonemiddleware
Repository: git://git.openstack.org/openstack/keystonemiddleware
Repository (Browsable): 
https://git.openstack.org/cgit/openstack/keystonemiddleware
PyPI location: https://pypi.python.org/pypi/keystonemiddleware
Open Reviews in Gerrit: 
https://review.openstack.org/#/q/status:open+project:openstack/keystonemiddleware,n,z

Detailed information on the approved specification for this split: 
https://review.openstack.org/#/c/95987/

Middleware code that has been included in the new repository:
    * auth_token middleware
    * ec2_token middleware
    * s3_token middleware
    * memcache_crypt (utility code)

Impact for deployers:
    * New keystonemiddleware package will need to be installed (once released)
    * Paste pipelines will need to be updated to reference the 
keystonemiddleware package instead of keystoneclient
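
For example, an api-paste.ini authtoken filter would change roughly as
follows (shown for illustration):

    [filter:authtoken]
    # old: paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory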

Impact for Projects and Infra:
    * Keystonemiddleware is in process of being added to devstack and 
devstack-gate
    * Global requirements update (once the 1.0.0 release occurs) will be 
updated to include keystonemiddleware
    * Updates to the example paste pipelines to reference the new 
keystonemiddleware package will be proposed

Impact for packagers (once released):
    * Keystonemiddleware will need to be packaged and made available via your 
distribution's repositories (apt, yum, etc)

For the time being, we will be maintaining the current state of the middleware 
in the python-keystoneclient library. This
will allow for a transition period and ensure that production deployments 
relying on the current location of the
middleware will continue to work. However, the code located in the 
keystoneclient.middleware module will 
only receive security related fixes going forward. All new code development 
should be proposed to the new
keystonemiddleware repository. 

We are targeting a 1.0.0 (stable) release of the new keystonemiddleware in the 
near term. The Keystone team will work with
the OpenStack projects that consume Keystone middlewares to convert over to the 
new keystonemiddleware package.

Feel free to join us in #openstack-keystone (on Freenode) to discuss the 
middleware, these changes, or any other
OpenStack Identity related topics.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Ahmed RAHAL

Le 2014-06-24 17:38, Joe Gordon a écrit :


On Jun 24, 2014 2:31 PM, "Russell Bryant" mailto:rbry...@redhat.com>> wrote:



 > There be dragons here.  Just because Nova doesn't see the node reporting
 > in, doesn't mean the VMs aren't actually still running.  I think this
 > needs to be left to logic outside of Nova.
 >
 > For example, if your deployment monitoring really does think the host is
 > down, you want to make sure it's *completely* dead before taking further
 > action such as evacuating the host.  You certainly don't want to risk
 > having the VM running on two different hosts.  This is just a business I
 > don't think Nova should be getting in to.

I agree nova shouldn't take any actions. But I don't think leaving an
instance as 'active' is right either.  I was thinking move instance to
error state (maybe an unknown state would be more accurate) and let the
user deal with it, versus just letting the user deal with everything.
Since nova knows something *may* be wrong shouldn't we convey that to
the user (I'm not 100% sure we should myself).


I saw compute nodes going down, from a management perspective (say, 
nova-compute disappeared), but VMs were just fine. Reporting on the 
state may be misleading. The 'unknown' state would fit, but nothing lets 
us presume the VMs are non-functional or impacted.


As far as an operator is concerned, a compute node not responding is a 
reason enough to check the situation.


To go further about other comments related to customer feedback, there 
are many reasons a customer may think his VM is down, so showing him a 
'useful information' in some cases will only trigger more anxiety.
Besides people will start hammering the API to check 'state' instead of 
using proper monitoring.

But, state is already reported if the customer shuts down a VM, so ...

Currently, compute nodes state reporting is done by the nova-compute 
process himself, reporting back with a time stamp to the database 
(through conductor if I recall well). It's more like a watchdog than a 
reporting system.
For VMs (assuming we find it useful) the same kind of process could 
occur: nova-compute reporting back all states with time stamps for all 
VMs he hosts. This shall then be optional, as I already sense 
scaling/performance issues here (ceilometer anyone ?).


Finally, assuming the customer had access to this 'unknown' state 
information, what would he be able to do with it ? Usually he has no 
lever to 'evacuate' or 'recover' the VM. All he could do is spawn 
another instance to replace the lost one. But only if the VM really is 
currently unavailable, an information he must get from other sources.


So, I see how the state reporting could be a useful information, but am 
not sure that nova Status is the right place for it.


Ahmed.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Dustin Lundquist
I think there is significant value in having status on the listener object
even in the case of HAProxy. While HAProxy can support multiple listeners
in a single process, there is no reason it needs to be deployed that way.
Additionally in the case of updating a configuration with an additional
listener, the other listeners and the load balancer object are not in an
unavailable or down state before the configuration is applied; only the new
listener object is down or building. In the case of the HAProxy namespace
driver, one could map the namespace creation and HAProxy process to the
load balancer object status, but each listener can have its own status
based on the availability of members in its pools.

For the initial version of our new object model we should be pragmatic and
minimize complexity and change; we can perform a reduction across all
listeners to generate an overall load balancer status.
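
A minimal sketch of that reduction (names here are illustrative, not actual
LBaaS code):

    # Illustrative only: derive the load balancer status from its listeners.
    PRECEDENCE = ['ERROR', 'PENDING_CREATE', 'PENDING_UPDATE',
                  'PENDING_DELETE', 'DOWN', 'ACTIVE']

    def reduce_lb_status(listener_statuses):
        """Overall status is the 'worst' status among the listeners."""
        if not listener_statuses:
            return 'ACTIVE'
        for status in PRECEDENCE:
            if status in listener_statuses:
                return status
        return 'UNKNOWN'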


-Dustin


On Tue, Jun 24, 2014 at 3:15 PM, Vijay B  wrote:

> Hi Brandon, Eugene, Doug,
>
> During the hackathon, I remember that we had briefly discussed how
> listeners would manifest themselves on the LB VM/device, and it turned out
> that for some backends like HAProxy it simply meant creating a frontend
> entry in the cfg file whereas on other solutions it could mean spawning a
> process/equivalent. So we must have status fields to track the state of any
> such entities that are actually created. In the listener case, an ACTIVE
> state would mean that the appropriate backend processes have been created
> or that the required config file entries have been made.
>
> I like the idea of having relational objects and setting the status on
> them, and in our case we can use the status fields
> (pool/healthmonitor/listener) in each table to denote the state of the
> relationship (configuration/association on backend) to another object like
> LoadBalancer. So I think the status fields should stay.
>
> In this scenario, some entities' status could be updated in lbaas proper,
> and some in the driver implementation. I don't have a strict preference as
> to which among lbaas proper or the driver layer announces the status since
> we discussed on the IRC that we'd have helper functions in the driver to do
> these updates.
>
>
> Regards,
> Vijay
>
>
> On Tue, Jun 24, 2014 at 12:16 PM, Brandon Logan <
> brandon.lo...@rackspace.com> wrote:
>
>> On Tue, 2014-06-24 at 18:53 +, Doug Wiegley wrote:
>> > Hi Brandon,
>> >
>> > I think just one status is overloading too much onto the LB object
>> (which
>> > is perhaps something that a UI should do for a user, but not something
>> an
>> > API should be doing.)
>>
>> That is a good point and perhaps its another discussion to just have
>> some way to show the status an entity has for each load balancer, which
>> is what mark suggested for the member status at the meet-up.
>>
>> >
>> > > 1) If an entity exists without a link to a load balancer it is purely
>> > > just a database entry, so it would always be ACTIVE, but not really
>> > > active in a technical sense.
>> >
>> > Depends on the driver.  I don't think this is a decision for lbaas
>> proper.
>>
>> Driver is linked to the flavor or provider.  Flavor or provider will/is
>> linked to load balancer.  We won't be able to get a driver to send anything
>> to if there isn't a load balancer.  Without a driver it is a decision
>> for lbaas proper.  I'd be fine with setting the status of these
>> "orphaned" entities to just ACTIVE but I'm just worried about the status
>> management in the future.
>>
>> >
>> >
>> > > 2) If some of these entities become shareable then how does the status
>> > > reflect that the entity failed to create on one load balancer but was
>> > > successfully created on another.  That logic could get overly complex.
>> >
>> > That's a status on the join link, not the object, and I could argue
>> > multiple ways in which that should be one way or another based on the
>> > backend, which to me, again implies driver question (backend could queue
>> > for later, or error immediately, or let things run degraded, or…)
>>
>> Yeah that is definitely an argument.  I'm just trying to keep in mind
>> the complexities that could arise from decisions made now.  Perhaps it
>> is the wrong way to look at it to some, but I don't think thinking about
>> the future is a bad thing and should never be done.
>>
>> >
>> > Thanks,
>> > Doug
>> >
>> >
>> >
>> >
>> > On 6/24/14, 11:23 AM, "Brandon Logan" 
>> wrote:
>> >
>> > >I think we missed this discussion at the meet-up but I'd like to bring
>> > >it up here.  To me having a status on all entities doesn't make much
>> > >sense, and justing having a status on a load balancer (which would be a
>> > >provisioning status) and a status on a member (which would be an
>> > >operational status) are what makes sense because:
>> > >
>> > >1) If an entity exists without a link to a load balancer it is purely
>> > >just a database entry, so it would always be ACTIVE, but not really
>> > >active in a technical sense.

[openstack-dev] [DevStack] neutron config not working

2014-06-24 Thread Rob Crittenden
Before I get punted onto the operators list, I post this here because
this is the default config and I'd expect the defaults to just work.

Running devstack inside a VM with a single NIC configured and this in
localrc:

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
Q_USE_DEBUG_COMMAND=True

Results in a successful install but no DHCP address assigned to hosts I
launch and other oddities like no CIDR in nova net-list output.

Is this still the default way to set things up for single node? It is
according to https://wiki.openstack.org/wiki/NeutronDevstack

thanks

rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Rick Jones

On 06/24/2014 02:53 PM, Steve Gordon wrote:

- Original Message -

From: "Rick Jones" 
To: "OpenStack Development Mailing List (not for usage questions)" 


On 06/24/2014 02:38 PM, Joe Gordon wrote:

I agree nova shouldn't take any actions. But I don't think leaving an
instance as 'active' is right either.  I was thinking move instance to
error state (maybe an unknown state would be more accurate) and let the
user deal with it, versus just letting the user deal with everything.
Since nova knows something *may* be wrong shouldn't we convey that to
the user (I'm not 100% sure we should myself).


I suspect the user's first action will be to call Support asking "Hey,
why is my perfectly usable instance showing-up in the ERROR|UNKNOWN state?"

rick jones


The existing alternative would be having the user call to ask why
their non-responsive instance is showing as RUNNING, so you are kind
of damned if you do, damned if you don't.


There will be a call for a non-responsive instance regardless what it 
shows.  However, responsive instance not showing ERROR or UNKNOWN will 
not generate a call.  So, all in all I think you will get fewer calls if 
you don't mark the "not known to be non-responsive" instance as ERROR or 
UNKNOWN.


rick


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Vijay B
Hi Brandon, Eugene, Doug,

During the hackathon, I remember that we had briefly discussed how
listeners would manifest themselves on the LB VM/device, and it turned out
that for some backends like HAProxy it simply meant creating a frontend
entry in the cfg file whereas on other solutions it could mean spawning a
process/equivalent. So we must have status fields to track the state of any
such entities that are actually created. In the listener case, an ACTIVE
state would mean that the appropriate backend processes have been created
or that the required config file entries have been made.

I like the idea of having relational objects and setting the status on
them, and in our case we can use the status fields
(pool/healthmonitor/listener) in each table to denote the state of the
relationship (configuration/association on backend) to another object like
LoadBalancer. So I think the status fields should stay.

In this scenario, some entities' status could be updated in lbaas proper,
and some in the driver implementation. I don't have a strict preference as
to which among lbaas proper or the driver layer announces the status since
we discussed on the IRC that we'd have helper functions in the driver to do
these updates.


Regards,
Vijay


On Tue, Jun 24, 2014 at 12:16 PM, Brandon Logan  wrote:

> On Tue, 2014-06-24 at 18:53 +, Doug Wiegley wrote:
> > Hi Brandon,
> >
> > I think just one status is overloading too much onto the LB object (which
> > is perhaps something that a UI should do for a user, but not something an
> > API should be doing.)
>
> That is a good point and perhaps its another discussion to just have
> some way to show the status an entity has for each load balancer, which
> is what mark suggested for the member status at the meet-up.
>
> >
> > > 1) If an entity exists without a link to a load balancer it is purely
> > > just a database entry, so it would always be ACTIVE, but not really
> > > active in a technical sense.
> >
> > Depends on the driver.  I don't think this is a decision for lbaas
> proper.
>
> Driver is linked to the flavor or provider.  Flavor or provider will/is
> linked to load balancer.  We won't be able get a driver to send anything
> to if there isn't a load balancer.  Without a driver it is a decision
> for lbaas proper.  I'd be fine with setting the status of these
> "orphaned" entities to just ACTIVE but I'm just worried about the status
> management in the future.
>
> >
> >
> > > 2) If some of these entities become shareable then how does the status
> > > reflect that the entity failed to create on one load balancer but was
> > > successfully created on another.  That logic could get overly complex.
> >
> > That's a status on the join link, not the object, and I could argue
> > multiple ways in which that should be one way or another based on the
> > backend, which to me, again implies driver question (backend could queue
> > for later, or error immediately, or let things run degraded, or…)
>
> Yeah that is definitely an argument.  I'm just trying to keep in mind
> the complexities that could arise from decisions made now.  Perhaps it
> is the wrong way to look at it to some, but I don't think thinking about
> the future is a bad thing and should never be done.
>
> >
> > Thanks,
> > Doug
> >
> >
> >
> >
> > On 6/24/14, 11:23 AM, "Brandon Logan" 
> wrote:
> >
> > >I think we missed this discussion at the meet-up but I'd like to bring
> > >it up here.  To me having a status on all entities doesn't make much
> > >sense, and justing having a status on a load balancer (which would be a
> > >provisioning status) and a status on a member (which would be an
> > >operational status) are what makes sense because:
> > >
> > >1) If an entity exists without a link to a load balancer it is purely
> > >just a database entry, so it would always be ACTIVE, but not really
> > >active in a technical sense.
> > >
> > >2) If some of these entities become shareable then how does the status
> > >reflect that the entity failed to create on one load balancer but was
> > >successfully created on another.  That logic could get overly complex.
> > >
> > >I think the best thing to do is to have the load balancer status reflect
> > >the provisioning status of all of its children.  So if a health monitor
> > >is updated then the load balancer that health monitor is linked to would
> > >have its status changed to PENDING_UPDATE.  Conversely, if a load
> > >balancer or any entities linked to it are changed and the load
> > >balancer's status is in a non-ACTIVE state then that update should not
> > >be allowed.
> > >
> > >Thoughts?
> > >
> > >Thanks,
> > >Brandon
> > >
> > >
> > >___
> > >OpenStack-dev mailing list
> > >OpenStack-dev@lists.openstack.org
> > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [cinder][glance] Update volume-image-metadata proposal

2014-06-24 Thread Brian Rosmaita
Hi Facundo,

Can you attend the Glance meeting this week at 20:00 UTC on Thursday in 
#openstack-meeting-alt ?

I may be misunderstanding what's at stake, but it looks like:
- Glance holds the image metadata (some user-modifiable, some not)
- Cinder copies the image metadata to use as volume metadata (none is 
user-modifiable)
- You want to implement user-modifiable metadata in Cinder, but you don't know 
which items should be mutable and which not.
- You propose to add glance API calls to allow you to figure out property 
protections on a per-property basis.

It looks like the only roles for Glance here are (1) as the original source of 
the image metadata, and then (2) as the source of truth for what image 
properties can be modified on the volume metadata.  For (1), you've already got 
an API call.  For (2), why not use the glance property protection configuration 
file directly?  It's going to be deployed somehow to your glance nodes; you can 
deploy it to your cinder nodes at the same time.  Or you can just use it as the 
basis of a Cinder property protection config file, because I wonder whether in 
the general case, you'll always want volume properties protected exactly the 
same as image properties.  If not, the new API call strategy will force you to 
deal with differences in the code, whereas the config file strategy would move 
dealing with differences to setting up the config file.  So I'm not convinced 
that a new API call is the way to go here.
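
If memory serves, the roles-based form of that file looks something like the
following (property name and roles are purely illustrative):

    [^x_billing_.*]
    create = admin
    read = admin,_member_
    update = admin
    delete = admin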

But there may be some nuances I'm missing, so it might be easier to discuss at 
the Glance meeting.  The agenda looks pretty light for Thursday if you want to 
add this topic:
https://etherpad.openstack.org/p/glance-team-meeting-agenda

cheers,
brian


From: Maldonado, Facundo N [facundo.n.maldon...@intel.com]
Sent: Tuesday, June 24, 2014 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [cinder][glance] Update volume-image-metadata proposal

Hi folks,

            I started working on this blueprint [1], but the work to be done 
is not limited to the cinder python client.
            Volume-image-metadata is immutable in Cinder, and Glance has 
RBAC image properties but doesn’t provide any way to find out in advance 
which properties are protected [2].

I want to share this proposal and get feedback from you.

https://docs.google.com/document/d/1XYEqGOa30viOyZf8AiwkrCiMWGTfBKjgmeYBptaCHlM/


Thanks,
Facundo

[1] 
https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata
[2] 
http://openstack.10931.n7.nabble.com/Cinder-Confusion-about-the-respective-use-cases-for-volume-s-admin-metadata-metadata-and-glance-imaga-td39849.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Kevin L. Mitchell
On Tue, 2014-06-24 at 22:26 +0100, Mark McLoughlin wrote:
> There's two sides to this coin - concern about alienating
> non-english-as-a-first-language speakers who feel undervalued because
> their language is nitpicked to death and concern about alienating
> english-as-a-first-language speakers who struggle to understand unclear
> or incorrect language.

Actually, I think there's a third case which is the one people seem to
be worried about here: non-English-as-a-first-language speakers who are
trying to read English written by other non-English-as-a-first-language
speakers.

> Obviously there's a balance to be struck there and different people will
> judge that differently, but I'm personally far more concerned about the
> former rather than the latter case.

So, my personal experience is that, as long as you express your
corrections kindly, most non-English-as-a-first-language speakers are
receptive and appreciative of corrections.  They should be
*corrections*, though: "Actually, I think you meant '…'; that would be
clearer."  Further, unless there are egregious problems, I always try to
express my language suggestions as "femtonits", meaning that I don't
down-vote a patch for those issues (unless perhaps there are a *lot* of
them).  The only time I don't suggest corrections is when I really can't
understand what was meant, in which case I try to ask questions to help
clarify the meaning…

> Absolutely, and I try and be clear about that with e.g. "not a -1" or
> "if you're rebasing anyway, perhaps fix this".
> 
> Maybe a convention for such comments would be a good thing? We often do
> 'nitpick' or 'femtonit', but they are often still things people are
> -1ing on.

Perhaps we should "formalize" the terminology, maybe by documenting that
"femtonit" should mean this in something like the review checklist?  We
could pair that with a glossary of terms that could be referred to by
the "your first patch" bot and mentioned in the Gerrit workflow page.
That way, reviewers are using a consistent terminology—"femtonit" is
hardly standard English, after all; it's a specialty term we've invented
—and developers have guidance on what it means and what they should do
in response to it.
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Steve Gordon
- Original Message -
> From: "Rick Jones" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> On 06/24/2014 02:38 PM, Joe Gordon wrote:
> > I agree nova shouldn't take any actions. But I don't think leaving an
> > instance as 'active' is right either.  I was thinking move instance to
> > error state (maybe an unknown state would be more accurate) and let the
> > user deal with it, versus just letting the user deal with everything.
> > Since nova knows something *may* be wrong shouldn't we convey that to
> > the user (I'm not 100% sure we should myself).
> 
> I suspect the user's first action will be to call Support asking "Hey,
> why is my perfectly usable instance showing-up in the ERROR|UNKNOWN state?"
> 
> rick jones

The existing alternative would be having the user call to ask why their 
non-responsive instance is showing as RUNNING, so you are kind of damned if you 
do, damned if you don't.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Joe Gordon
On Jun 24, 2014 2:47 PM, "Rick Jones"  wrote:
>
> On 06/24/2014 02:38 PM, Joe Gordon wrote:
>>
>> I agree nova shouldn't take any actions. But I don't think leaving an
>> instance as 'active' is right either.  I was thinking move instance to
>> error state (maybe an unknown state would be more accurate) and let the
>> user deal with it, versus just letting the user deal with everything.
>> Since nova knows something *may* be wrong shouldn't we convey that to
>> the user (I'm not 100% sure we should myself).
>
>
> I suspect the user's first action will be to call Support asking "Hey,
why is my perfectly usable instance showing-up in the ERROR|UNKNOWN state?"

True, but the alternative is: why is this dead instance listed as ACTIVE,
and I am being billed for it too? I think this is a lose-lose.

>
> rick jones
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Rick Jones

On 06/24/2014 02:38 PM, Joe Gordon wrote:

I agree nova shouldn't take any actions. But I don't think leaving an
instance as 'active' is right either.  I was thinking move instance to
error state (maybe an unknown state would be more accurate) and let the
user deal with it, versus just letting the user deal with everything.
Since nova knows something *may* be wrong shouldn't we convey that to
the user (I'm not 100% sure we should myself).


I suspect the user's first action will be to call Support asking "Hey, 
why is my perfectly usable instance showing-up in the ERROR|UNKNOWN state?"


rick jones

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Joe Gordon
On Jun 24, 2014 2:31 PM, "Russell Bryant"  wrote:
>
> On 06/24/2014 04:42 PM, Joe Gordon wrote:
> >
> > On Jun 18, 2014 3:03 PM, "Chris Friesen"  > > wrote:
> >>
> >> The output of "nova list" and "nova show" reflects the current status
> > in the database, not the actual state on the compute node.
> >>
> >> If the instances in question are on a compute node that is currently
> > "down", then the information is stale and possibly incorrect.  Would
> > there be any benefit in adding some sort of indication of this in the
> > "nova list" output?  Or do we expect the end-user to check "nova
> > service-list" (or other health-monitoring mechanisms) to see if the
> > compute node is "up" before relying on the output of "nova list"?
> >
> > Great question.  In general I don't think a regular user should ever
> > need to run any health monitoring command. I think the larger question
> > here is how do we handle instances associated with a nova-compute
> > that is currently being reported as down.  If nova-compute is down we
> > have no way of knowing the actual state of the instances. Perhaps we
> > should move those instances to an error state and let the user respond
> > accordingly (delete instance etc.). And if the nova-compute service
> > returns, we correct the state.
>
> There be dragons here.  Just because Nova doesn't see the node reporting
> in, doesn't mean the VMs aren't actually still running.  I think this
> needs to be left to logic outside of Nova.
>
> For example, if your deployment monitoring really does think the host is
> down, you want to make sure it's *completely* dead before taking further
> action such as evacuating the host.  You certainly don't want to risk
> having the VM running on two different hosts.  This is just a business I
> don't think Nova should be getting in to.

I agree nova shouldn't take any actions. But I don't think leaving an
instance as 'active' is right either.  I was thinking of moving the instance
to an error state (maybe an unknown state would be more accurate) and letting
the user deal with it, versus just letting the user deal with everything.
Since nova knows something *may* be wrong, shouldn't we convey that to the
user? (I'm not 100% sure we should myself.)

>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Mark McLoughlin
On Tue, 2014-06-24 at 13:56 -0700, Clint Byrum wrote:
> Excerpts from Mark McLoughlin's message of 2014-06-24 12:49:52 -0700:
> > On Tue, 2014-06-24 at 09:51 -0700, Clint Byrum wrote:
> > > Excerpts from Monty Taylor's message of 2014-06-24 06:48:06 -0700:
> > > > On 06/22/2014 02:49 PM, Duncan Thomas wrote:
> > > > > On 22 June 2014 14:41, Amrith Kumar  wrote:
> > > > >> In addition to making changes to the hacking rules, why don't we 
> > > > >> mandate also
> > > > >> that perceived problems in the commit message shall not be an 
> > > > >> acceptable
> > > > >> reason to -1 a change.
> > > > > 
> > > > > -1.
> > > > > 
> > > > > There are some /really/ bad commit messages out there, and some of us
> > > > > try to use the commit messages to usefully sort through the changes
> > > > > (i.e. I often -1 in cinder a change only affects one driver and that
> > > > > isn't clear from the summary).
> > > > > 
> > > > > If the perceived problem is grammatical, I'm a bit more on board with
> > > > > it not a reason to rev a patch, but core reviewers can +2/A over the
> > > > > top of a -1 anyway...
> > > > 
> > > > 100% agree. Spelling and grammar are rude to review on - especially
> > > > since we have (and want) a LOT of non-native English speakers. It's not
> > > > our job to teach people better grammar. Heck - we have people from
> > > > different English backgrounds with differing disagreements on what good
> > > > grammar _IS_
> > > > 
> > > 
> > > We shouldn't quibble over _anything_ grammatical in a commit message. If
> > > there is a disagreement about it, the comments should be ignored. There
> > > are definitely a few grammar rules that are loose and those should be
> > > largely ignored.
> > > 
> > > However, we should correct grammar when there is a clear solution, as
> > > those same people who do not speak English as their first language are
> > > likely to be confused by poor grammar.
> > > 
> > > We're not doing it to teach grammar. We're doing it to ensure readability.
> > 
> > The importance of clear English varies with context, but commit messages
> > are a place where we should try hard to just let it go, particularly
> > with those who do not speak English as their first language.
> > 
> > Commit messages stick around forever and it's important that they are
> > useful, but they will be read by a small number of people who are going
> > to be in a position to spend a small amount of time getting over
> > whatever dissonance is caused by a typo or imperfect grammar.
> >
> 
> The times that one is reading git messages are often the most stressful
> such as when a regression has occurred in production.
> 
> Given that, I believe it is entirely worth it to me that the commit
> messages on my patches are accurate and understandable. I embrace all
> feedback which leads to them being more clear. I will of course stand
> back from grammar correcting and not block patches if there are many
> who disagree.
> 
> > I think specs are pretty similar and don't warrant much additional
> > grammar nitpicking. Sure, they're longer pieces of text and slightly
> > more people will rely on them for information, but they're not intended
> > to be complete documentation.
> >
> 
> Disagree. I will only state this one more time as I think everyone knows
> how I feel: if we are going to grow beyond the english-as-a-first-language
> world we simply cannot assume that those reading specs will be native
> speakers. Good spelling and grammar helps us grow. Bad spelling and
> grammar holds us back.

There are two sides to this coin - concern about alienating
non-english-as-a-first-language speakers who feel undervalued because
their language is nitpicked to death, and concern about alienating
english-as-a-first-language speakers who struggle to understand unclear
or incorrect language.

Obviously there's a balance to be struck there and different people will
judge that differently, but I'm personally far more concerned about the
former rather than the latter case.

I expect many beyond the english-as-a-first-language world are pretty
used to dealing with imperfect language but aren't so delighted with
being constantly reminded that their use of language is imperfect.

> > Where grammar is so poor that readers would be easily misled in
> > important ways, then sure that should be fixed. But there comes a point
> > when we're no longer working to avoid confusion and instead just being
> > pendants. Taking issue[1] with this:
> > 
> >   "whatever scaling mechanism Heat and we end up going with."
> > 
> > because it has a "dangling preposition" is an example of going way
> > beyond the point of productive pedantry IMHO :-)
> 
> I actually agree that it would not at all be a reason to block a patch.
> However, there is some ambiguity in that sentence that may not be clear
> to a native speaker. It is not 100% clear if we are going with Heat,
> or with the scaling mechanism. That is the only reason for the dangling
> preposition debate. However, there is a debate, and thus I would _never_
> block a patch based on this rule. It was feedback.. just as sometimes
> there is feedback in commit messages that isn't taken and doesn't lead
> to a -1.

Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Russell Bryant
On 06/24/2014 04:42 PM, Joe Gordon wrote:
> 
> On Jun 18, 2014 3:03 PM, "Chris Friesen"  > wrote:
>>
>> The output of "nova list" and "nova show" reflects the current status
> in the database, not the actual state on the compute node.
>>
>> If the instances in question are on a compute node that is currently
> "down", then the information is stale and possibly incorrect.  Would
> there be any benefit in adding some sort of indication of this in the
> "nova list" output?  Or do we expect the end-user to check "nova
> service-list" (or other health-monitoring mechanisms) to see if the
> compute node is "up" before relying on the output of "nova list"?
> 
> Great question.  In general I don't think a regular user should never
> need to run any health monitoring command. I think the larger question
> here is what how do we handle instances associated with a nova-compute
> that is currently being reported as down.  If nova-compute is down we
> have no way of knowing the actual state of the instances. Perhaps we
> should move those instances to an error state and let the user respond
> accordingly (delete instance etc.). And if the Nova-compute service
> returns we correct the state.

There be dragons here.  Just because Nova doesn't see the node reporting
in, doesn't mean the VMs aren't actually still running.  I think this
needs to be left to logic outside of Nova.

For example, if your deployment monitoring really does think the host is
down, you want to make sure it's *completely* dead before taking further
action such as evacuating the host.  You certainly don't want to risk
having the VM running on two different hosts.  This is just a business I
don't think Nova should be getting in to.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Upgrade of Hadoop components inside released version

2014-06-24 Thread Andrew Lazarev
Hi Team,

I want to raise the topic of upgrading components within a Hadoop version
that is already supported by a released Sahara plugin. The question comes up
because of several change requests ([1] and [2]). The topic was discussed in
Atlanta ([3]), but we didn't come to a decision.

All of us agreed that existing clusters must continue to work after an
OpenStack upgrade. So if a user creates a cluster with Icehouse Sahara and
then upgrades OpenStack, everything should continue working as before. The
trickiest operation is scaling, and it dictates a list of restrictions on the
new version of a component:

1. the hadoop version - plugin version pair supported by the plugin must not
change
2. if the component upgrade requires DIB changes, the plugin must work with
both versions of the image - the old one and the new one
3. a cluster with mixed nodes (created by old code and by new code) should
still be operational

Given that, we should choose a policy for component upgrades. Here are several
options:

1. Prohibit component upgrades in released versions of a plugin. Change the
plugin version even if the hadoop version didn't change. This solves all the
listed problems, but it is a little frustrating for users. They would need to
recreate all of their clusters and migrate data just as if it were a hadoop
upgrade. They should also consider a hadoop upgrade at the same time so the
migration happens only once.

2. Disable some operations on clusters created by the previous version. If
users don't have the option to scale a cluster, there will be no problems
with mixed nodes. For this option Sahara needs to know whether the cluster
was created by this version or not.

3. Require the change author to perform all kinds of tests and prove that a
mixed cluster works as well as a non-mixed one. In that case we need a list
of tests that is enough to cover all corner cases.

Ideas are welcome.

[1] https://review.openstack.org/#/c/98260/
[2] https://review.openstack.org/#/c/87723/
[3] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward

Thanks,
Andrew.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Barebones CA

2014-06-24 Thread Clark, Robert Graham
Yeah pretty much.

That's something I'd be interested to work on, if work isn't ongoing
already.

-Rob
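
To make the idea concrete, here is a rough sketch (purely illustrative, not a
proposed plugin implementation) of how little is needed to self-sign a CA and
sign CSRs with pyOpenSSL alone:

from OpenSSL import crypto

def make_test_ca(cn="Barbican test CA"):
    # Generate a throwaway CA key and a self-signed CA certificate.
    key = crypto.PKey()
    key.generate_key(crypto.TYPE_RSA, 2048)
    ca = crypto.X509()
    ca.get_subject().CN = cn
    ca.set_serial_number(1)
    ca.gmtime_adj_notBefore(0)
    ca.gmtime_adj_notAfter(10 * 365 * 24 * 3600)
    ca.set_issuer(ca.get_subject())
    ca.set_pubkey(key)
    ca.sign(key, "sha256")
    return ca, key

def sign_csr(ca_cert, ca_key, csr_pem, serial):
    # Sign a PEM-encoded CSR with the test CA and return the PEM certificate.
    csr = crypto.load_certificate_request(crypto.FILETYPE_PEM, csr_pem)
    cert = crypto.X509()
    cert.set_serial_number(serial)
    cert.gmtime_adj_notBefore(0)
    cert.gmtime_adj_notAfter(365 * 24 * 3600)
    cert.set_issuer(ca_cert.get_subject())
    cert.set_subject(csr.get_subject())
    cert.set_pubkey(csr.get_pubkey())
    cert.sign(ca_key, "sha256")
    return crypto.dump_certificate(crypto.FILETYPE_PEM, cert)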





On 24/06/2014 18:57, "John Wood"  wrote:

>Hello Robert,
>
>I would actually hope we have a self-contained certificate plugin
>implementation that runs 'out of the box' to enable certificate
>generation orders to be evaluated and demo-ed on local boxes.
>
>Is this what you were thinking though?
>
>Thanks,
>John
>
>
>
>
>From: Clark, Robert Graham [robert.cl...@hp.com]
>Sent: Tuesday, June 24, 2014 10:36 AM
>To: OpenStack List
>Subject: [openstack-dev] [Barbican] Barebones CA
>
>Hi all,
>
>I'm sure this has been discussed somewhere and I've just missed it.
>
>Is there any value in creating a basic 'CA' and plugin to satisfy
>tests/integration in Barbican? I'm thinking something that probably
>performs OpenSSL certificate operations itself, ugly but perhaps useful
>for some things?
>
>-Rob
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Clint Byrum
Excerpts from Mark McLoughlin's message of 2014-06-24 12:49:52 -0700:
> On Tue, 2014-06-24 at 09:51 -0700, Clint Byrum wrote:
> > Excerpts from Monty Taylor's message of 2014-06-24 06:48:06 -0700:
> > > On 06/22/2014 02:49 PM, Duncan Thomas wrote:
> > > > On 22 June 2014 14:41, Amrith Kumar  wrote:
> > > >> In addition to making changes to the hacking rules, why don't we 
> > > >> mandate also
> > > >> that perceived problems in the commit message shall not be an 
> > > >> acceptable
> > > >> reason to -1 a change.
> > > > 
> > > > -1.
> > > > 
> > > > There are some /really/ bad commit messages out there, and some of us
> > > > try to use the commit messages to usefully sort through the changes
> > > > (i.e. I often -1 in cinder a change only affects one driver and that
> > > > isn't clear from the summary).
> > > > 
> > > > If the perceived problem is grammatical, I'm a bit more on board with
> > > > it not a reason to rev a patch, but core reviewers can +2/A over the
> > > > top of a -1 anyway...
> > > 
> > > 100% agree. Spelling and grammar are rude to review on - especially
> > > since we have (and want) a LOT of non-native English speakers. It's not
> > > our job to teach people better grammar. Heck - we have people from
> > > different English backgrounds with differing disagreements on what good
> > > grammar _IS_
> > > 
> > 
> > We shouldn't quibble over _anything_ grammatical in a commit message. If
> > there is a disagreement about it, the comments should be ignored. There
> > are definitely a few grammar rules that are loose and those should be
> > largely ignored.
> > 
> > However, we should correct grammar when there is a clear solution, as
> > those same people who do not speak English as their first language are
> > likely to be confused by poor grammar.
> > 
> > We're not doing it to teach grammar. We're doing it to ensure readability.
> 
> The importance of clear English varies with context, but commit messages
> are a place where we should try hard to just let it go, particularly
> with those who do not speak English as their first language.
> 
> Commit messages stick around forever and it's important that they are
> useful, but they will be read by a small number of people who are going
> to be in a position to spend a small amount of time getting over
> whatever dissonance is caused by a typo or imperfect grammar.
>

The times that one is reading git messages are often the most stressful
such as when a regression has occurred in production.

Given that, I believe it is entirely worth it to me that the commit
messages on my patches are accurate and understandable. I embrace all
feedback which leads to them being more clear. I will of course stand
back from grammar correcting and not block patches if there are many
who disagree.

> I think specs are pretty similar and don't warrant much additional
> grammar nitpicking. Sure, they're longer pieces of text and slightly
> more people will rely on them for information, but they're not intended
> to be complete documentation.
>

Disagree. I will only state this one more time as I think everyone knows
how I feel: if we are going to grow beyond the english-as-a-first-language
world we simply cannot assume that those reading specs will be native
speakers. Good spelling and grammar helps us grow. Bad spelling and
grammar holds us back.

> Where grammar is so poor that readers would be easily misled in
> important ways, then sure that should be fixed. But there comes a point
> when we're no longer working to avoid confusion and instead just being
> pendants. Taking issue[1] with this:
> 
>   "whatever scaling mechanism Heat and we end up going with."
> 
> because it has a "dangling preposition" is an example of going way
> beyond the point of productive pedantry IMHO :-)

I actually agree that it would not at all be a reason to block a patch.
However, there is some ambiguity in that sentence that may not be clear
to a native speaker. It is not 100% clear if we are going with Heat,
or with the scaling mechanism. That is the only reason for the dangling
preposition debate. However, there is a debate, and thus I would _never_
block a patch based on this rule. It was feedback.. just as sometimes
there is feedback in commit messages that isn't taken and doesn't lead
to a -1.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Joe Gordon
On Jun 18, 2014 3:03 PM, "Chris Friesen" 
wrote:
>
> The output of "nova list" and "nova show" reflects the current status in
the database, not the actual state on the compute node.
>
> If the instances in question are on a compute node that is currently
"down", then the information is stale and possibly incorrect.  Would there
be any benefit in adding some sort of indication of this in the "nova list"
output?  Or do we expect the end-user to check "nova service-list" (or
other health-monitoring mechanisms) to see if the compute node is "up"
before relying on the output of "nova list"?

Great question.  In general I don't think a regular user should ever need
to run any health monitoring command. I think the larger question here is
how we handle instances associated with a nova-compute that is currently
being reported as down.  If nova-compute is down we have no way of knowing
the actual state of the instances. Perhaps we should move those instances
to an error state and let the user respond accordingly (delete the instance,
etc.), and if the nova-compute service returns we correct the state.
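
For what it's worth, the "check nova service-list first" cross-check from the
original question can already be scripted today without any Nova changes. A
hedged sketch (credentials and endpoint are placeholders; the host attribute
is admin-only):

from novaclient import client as nova_client

# Placeholders - substitute real credentials and endpoint.
nova = nova_client.Client("2", "admin", "password", "admin",
                          "http://keystone.example.com:5000/v2.0")

down_hosts = set(svc.host for svc in nova.services.list(binary="nova-compute")
                 if svc.state == "down")

for server in nova.servers.list():
    host = getattr(server, "OS-EXT-SRV-ATTR:host", None)
    if host in down_hosts:
        # The DB status may be stale: the compute node isn't reporting in.
        print("%s: status '%s' may be stale (host %s is down)"
              % (server.name, server.status, host))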

>
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] community consensus and removing rules

2014-06-24 Thread Ben Nemec
On 06/24/2014 04:49 AM, Mark McLoughlin wrote:
> On Mon, 2014-06-23 at 19:55 -0700, Joe Gordon wrote:
> 
>>   * Add a new directory, contrib, for local rules that multiple
>> projects use but are not generally considered acceptable to be
>> enabled by default. This way we can reduce the amount of cut
>> and pasted code (thank you to Ben Nemec for this idea).
> 
> All sounds good to me, apart from a pet peeve on 'contrib' directories.
> 
> What does 'contrib' mean? 'contributed'? What exactly *isn't*
> contributed? Often it has connotations of 'contributed by outsiders'.
> 
> It also often has connotations of 'bucket for crap', 'unmaintained and
> untested', YMMV, etc. etc.
> 
> Often the name is just chosen out of laziness - "I can't think of a good
> name for this, and projects often have a contrib directory with random
> stuff in it, so that works".

That's pretty much what happened here.  Contrib was just a throwaway
name I picked for convenience, but I have no particular attachment to
it. :-)

> 
> Let's be precise - these are optional rules, right? How about calling
> the directory 'optional'?

+1

> 
> Say no to contrib directories! :-P
> 
> Thanks,
> Mark.
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Set compute_node:hypervisor_nodename as unique and not null

2014-06-24 Thread Joe Gordon
On Jun 18, 2014 11:40 AM, "Manickam, Kanagaraj" 
wrote:
>
> Hi,
>
>
>
> This mail is regarding the required model change in nova. Please fine
more details below:
>
>
>
> As we knew, Nova db has the table “compute_nodes” for modelling the
hypervisors and its using the “hypervisor_hostname” field to represent the
hypervisor name.
>
> This value is having significant value in os-hypervisor extension api
which is using this field to uniquely identify the hypervisor.
>
>
>
> Consider the case where a given environment has more than one
hypervisor (KVM, ESX, Xen, etc.) with the same hostname; os-hypervisor, and
thereby the Horizon Hypervisor panel and the nova hypervisor-servers command,
will fail.
>
> There is a defect (https://bugs.launchpad.net/nova/+bug/1329261)  already
filed on VMware VC driver to address this issue to make sure that, a unique
value is generated for the VC driver’s hypervisor.  But its good to fix at
the model level as well by  making “hypervisor_hostname” field as unique
always. And a bug https://bugs.launchpad.net/nova/+bug/1329299 is filed for
the same.
>
>
>
> Before fixing this bug, I would like to get the opinion from the
community. Could you please help here !

++ to making hypervisor_hostname always unique,  being that we already make
this assumption all over the place.
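
For reference, the schema side of such a change is small; a hedged sketch in
sqlalchemy-migrate style (not the actual Nova migration - existing duplicate
rows and soft-deleted rows would have to be dealt with first):

from migrate import UniqueConstraint
from sqlalchemy import MetaData, Table

def upgrade(migrate_engine):
    # Add a unique constraint on compute_nodes.hypervisor_hostname.
    meta = MetaData(bind=migrate_engine)
    compute_nodes = Table("compute_nodes", meta, autoload=True)
    UniqueConstraint("hypervisor_hostname",
                     table=compute_nodes,
                     name="uniq_compute_nodes0hypervisor_hostname").create()

def downgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    compute_nodes = Table("compute_nodes", meta, autoload=True)
    UniqueConstraint("hypervisor_hostname",
                     table=compute_nodes,
                     name="uniq_compute_nodes0hypervisor_hostname").drop()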

>
>
>
> Regards
>
> Kanagaraj M
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FUEL] OpenStack patching and FUEL upgrade follow-up meeting minutes

2014-06-24 Thread Andrey Danin
I think Vladimir means that we need to improve our scheduling of CI jobs
over the available CI resources. As far as I know, we currently have
dedicated server groups for separate tests, and we cannot use the free
resources of other server groups when the load is unbalanced.


On Thu, Jun 5, 2014 at 6:52 PM, Jesse Pretorius 
wrote:

> On 5 June 2014 16:27, Vladimir Kuklin  wrote:
>
>> 1. We need strict EOS and EOL rules to decide how many maintenance
>> releases there will be for each series or our QA team and infrastructure
>> will not ever be available to digest it.
>>
>
> Agreed. Would it not be prudent to keep with the OpenStack support
> standard - support latest version and the -1 version?
>
>
>>  3. We need to clearly specify the restrictions which patching and
>> upgrade process we support:
>> a. New environments can only be deployed with the latest version of
>> OpenStack and FUEL Library supported
>> b. Old environments can only be updated within the only minor release
>> (e.g. 5.0.1->5.0.2 is allowed, 5.0.1->5.1 is not)
>>
>
> Assuming that the major upgrades will be handled in
> https://blueprints.launchpad.net/fuel/+spec/upgrade-major-openstack-environment
> then I agree. If not, then we have a sticking point here. I would agree
> that this is a good start, but in the medium to long term it is important
> to be able to upgrade from perhaps the latest minor version of the platform
> to the next available major version.
>
>
>>  4. We have some devops tasks we need to finish to feel more comfortable
>> in the future to make testing of patching much easier:
>> a. we need to finish devops bare metal and distributed enviroments setup
>> to make CI and testing process easier
>> b. we need to implement elastic-recheck like feature to analyze our CI
>> results in order to allow developers to retrigger checks in case of
>> floating bugs
>> c. we need to start using more sophisticated scheduler
>>
>
> I find the scheduler statement a curiosity. Can you elaborate?
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler]

2014-06-24 Thread Joe Gordon
On Jun 24, 2014 9:00 AM, "Abbass MAROUNI" 
wrote:
>
> Hi,
>
> I was wondering if there's a way to set a tag (key/value) of a Virtual
Machine from within a scheduler filter ?

The scheduler today is just for placement. And since we are in the process
of trying to split it out, I don't think we want to make the scheduler do
something like this (at least for now).

>
> I want to be able to tag a machine with a specific key/value after
passing my custom filter

What is your use case? Perhaps we have another way of solving it today.

>
> Thanks,
>
> --
> --
> Abbass MAROUNI
> VirtualScale
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Support for plugins in fuel client

2014-06-24 Thread Andrey Danin
Why not to use stevedore?
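
cliff and stevedore both build on the same setuptools entry-point mechanism;
cliff just adds the command and output-formatting framework on top. A rough
sketch of what an out-of-tree command could look like if fuelclient went the
cliff route (the package, entry-point namespace and command names below are
made up for illustration):

# setup.py of a hypothetical plugin package
from setuptools import setup

setup(
    name="fuelclient-foo",
    version="0.1",
    py_modules=["fuelclient_foo"],
    entry_points={
        # "fuelclient.cli" is an assumed namespace that the client's
        # CommandManager would scan; cliff maps the underscore to a space,
        # so this would register a "fuel foo list" command.
        "fuelclient.cli": [
            "foo_list = fuelclient_foo:FooList",
        ],
    },
)

# fuelclient_foo.py
from cliff.lister import Lister

class FooList(Lister):
    """List hypothetical 'foo' objects."""

    def take_action(self, parsed_args):
        # Columns and rows would normally come from the Fuel API.
        return (("id", "name"), [(1, "example")])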


On Wed, Jun 18, 2014 at 1:42 PM, Igor Kalnitsky 
wrote:

> Hi guys,
>
> Actually, I'm not a fun of cliff, but I think it's a good solution to use
> it in our fuel client.
>
> Here some pros:
>
> * pluggable design: we can encapsulate entire command logic in separate
> plugin file
> * builtin output formatters: we no need to implement various formatters to
> represent received data
> * interactive mode: cliff makes possible to provide a shell mode, just
> like psql do
>
> Well, I vote to use cliff inside fuel client. Yeah, I know, we need to
> rewrite a lot of code, but we
> can do it step-by-step.
>
> - Igor
>
>
>
>
> On Wed, Jun 18, 2014 at 9:14 AM, Dmitriy Shulyak 
> wrote:
>
>> Hi folks,
>>
>> I am wondering what our story/vision for plugins in fuel client [1]?
>>
>> We can benefit from using cliff [2] as framework for fuel cli, apart from
>> common code
>> for building cli applications on top of argparse, it provides nice
>> feature that allows to
>> dynamicly add actions by means of entry points (stevedore-like).
>>
>> So we will be able to add new actions for fuel client simply by
>> installing separate packages with correct entry points.
>>
>> Afaik stevedore is not used there, but i think it will be - cause of same
>> author and maintainer.
>>
>> Do we need this? Maybe there is other options?
>>
>> Thanks
>>
>> [1] https://github.com/stackforge/fuel-web/tree/master/fuelclient
>> [2]  https://github.com/openstack/cliff
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV] Specific example NFV use case - ETSI #5, virtual IMS

2014-06-24 Thread Calum Loudon
Hello all

Following on from my contribution last week of a specific NFV use case
(a Session Border Controller) here's another one, this time for an IMS
core (part of ETSI NFV use case #5).

As we touched on at last week's meeting, this is not making claims for
what every example of a virtual IMS core would need, just as last week's
wasn't describing what every SBC would need.  In particular, my IMS core
example is for an application that was designed to be cloud-native from
day one, so the apparent lack of OpenStack gaps is not surprising: other
IMS cores may need more.  However, I think overall these two examples
are reasonably representative of the classes of data plane vs. control
plane apps.

Use case example


Project Clearwater, http://www.projectclearwater.org/.  An open source
implementation of an IMS core designed to run in the cloud and be
massively scalable.  It provides SIP-based call control for voice and 
video as well as SIP-based messaging apps.  As an IMS core it provides
P/I/S-CSCF function together with a BGCF and an HSS cache, and includes
a WebRTC gateway providing interworking between WebRTC & SIP clients.
 

Characteristics relevant to NFV/OpenStack
-

Mainly a compute application: modest demands on storage and networking.

Fully HA, with no SPOFs and service continuity over software and hardware
failures; must be able to offer SLAs.

Elastically scalable by adding/removing instances under the control of the
NFV orchestrator.

Requirements and mapping to blueprints
--

Compute application:
-   OpenStack already provides everything needed; in particular, there are
no requirements for an accelerated data plane, nor for core pinning
nor NUMA

HA:
-   implemented as a series of N+k compute pools; meeting a given SLA
requires being able to limit the impact of a single host failure 
-   we believe there is a scheduler gap here; affinity/anti-affinity
can be expressed pair-wise between VMs, but this needs a concept
equivalent to "group anti-affinity" i.e. allowing the NFV orchestrator
to assign each VM in a pool to one of X buckets, and requesting
OpenStack to ensure no single host failure can affect more than one
bucket (there are other approaches which achieve the same end e.g.
defining a group where the scheduler ensures every pair of VMs 
within that group are not instantiated on the same host)
-   if anyone is aware of any blueprints that would address this please
insert them here

Elastic scaling:
-   similarly readily achievable using existing features - no gap.

regards

Calum


Calum Loudon 
Director, Architecture
+44 (0)208 366 1177
 
METASWITCH NETWORKS 
THE BRAINS OF THE NEW GLOBAL NETWORK
www.metaswitch.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Mark McLoughlin
On Tue, 2014-06-24 at 09:51 -0700, Clint Byrum wrote:
> Excerpts from Monty Taylor's message of 2014-06-24 06:48:06 -0700:
> > On 06/22/2014 02:49 PM, Duncan Thomas wrote:
> > > On 22 June 2014 14:41, Amrith Kumar  wrote:
> > >> In addition to making changes to the hacking rules, why don't we mandate 
> > >> also
> > >> that perceived problems in the commit message shall not be an acceptable
> > >> reason to -1 a change.
> > > 
> > > -1.
> > > 
> > > There are some /really/ bad commit messages out there, and some of us
> > > try to use the commit messages to usefully sort through the changes
> > > (i.e. I often -1 in cinder a change only affects one driver and that
> > > isn't clear from the summary).
> > > 
> > > If the perceived problem is grammatical, I'm a bit more on board with
> > > it not a reason to rev a patch, but core reviewers can +2/A over the
> > > top of a -1 anyway...
> > 
> > 100% agree. Spelling and grammar are rude to review on - especially
> > since we have (and want) a LOT of non-native English speakers. It's not
> > our job to teach people better grammar. Heck - we have people from
> > different English backgrounds with differing disagreements on what good
> > grammar _IS_
> > 
> 
> We shouldn't quibble over _anything_ grammatical in a commit message. If
> there is a disagreement about it, the comments should be ignored. There
> are definitely a few grammar rules that are loose and those should be
> largely ignored.
> 
> However, we should correct grammar when there is a clear solution, as
> those same people who do not speak English as their first language are
> likely to be confused by poor grammar.
> 
> We're not doing it to teach grammar. We're doing it to ensure readability.

The importance of clear English varies with context, but commit messages
are a place where we should try hard to just let it go, particularly
with those who do not speak English as their first language.

Commit messages stick around forever and it's important that they are
useful, but they will be read by a small number of people who are going
to be in a position to spend a small amount of time getting over
whatever dissonance is caused by a typo or imperfect grammar.

I think specs are pretty similar and don't warrant much additional
grammar nitpicking. Sure, they're longer pieces of text and slightly
more people will rely on them for information, but they're not intended
to be complete documentation.

Where grammar is so poor that readers would be easily misled in
important ways, then sure that should be fixed. But there comes a point
when we're no longer working to avoid confusion and instead just being
pedants. Taking issue[1] with this:

  "whatever scaling mechanism Heat and we end up going with."

because it has a "dangling preposition" is an example of going way
beyond the point of productive pedantry IMHO :-)

Mark.

[1] - https://review.openstack.org/#/c/97939/5/specs/juno/remove-mergepy.rst


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] nova needs a new release of neutronclient for OverQuotaClient exception

2014-06-24 Thread Kyle Mestery
On Mon, Jun 23, 2014 at 11:08 AM, Kyle Mestery
 wrote:
> On Mon, Jun 23, 2014 at 8:54 AM, Matt Riedemann
>  wrote:
>> There are at least two changes [1][2] proposed to Nova that use the new
>> OverQuotaClient exception in python-neutronclient, but the unit test jobs no
>> longer test against trunk-level code of the client packages so they fail.
>> So I'm here to lobby for a new release of python-neutronclient if possible
>> so we can keep these fixes moving.  Are there any issues with that?
>>
> Thanks for bringing this up Matt. I've put this on the agenda for the
> Neutron meeting today, I'll reply on this thread with what comes out
> of that discussion.
>
> Kyle
>
As discussed in the meeting, we're going to work on making a new
release of the client Matt. Ping me in channel later this week, we're
working the details out on that release at the moment.

Thanks,
Kyle
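
For context, the Nova-side handling that the two reviews need is roughly of
this shape (a hedged sketch; the placeholder exception below stands in for
Nova's own quota-exceeded error):

from neutronclient.common import exceptions as neutron_exc

class PortLimitExceeded(Exception):
    """Placeholder for Nova's own over-quota exception."""

def create_port(neutron, body):
    try:
        return neutron.create_port(body)
    except neutron_exc.OverQuotaClient:
        # Translate Neutron's 409 over-quota response into a quota error
        # instead of letting it surface as a generic failure.
        raise PortLimitExceeded()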

> [1] https://wiki.openstack.org/wiki/Network/Meetings#Team_Discussion_Topics
>
>> [1] https://review.openstack.org/#/c/62581/
>> [2] https://review.openstack.org/#/c/101462/
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Jain, Vivek
+1 to what Eugene just iterated:

  *   Different types of statuses
  *   Not every status on every object
  *   Status should be API call

Thanks,
Vivek

From: Eugene Nikanorov <enikano...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Tuesday, June 24, 2014 at 12:10 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

Hi lbaas folks,

IMO a status is really an important part of the API.
In some old email threads Sam has proposed the solution for lbaas objects: we 
need to have several attributes that independently represent different types of 
statuses:
- admin_state_up
- operational status
- provisioning state

Not every status need to be on every object.
Pure-DB objects (like pool) should not have provisioning state and operational 
status, instead, an association object should have them. I think that resolves 
both questions (1) and (2).
If some object is shareable, then we'll have association object anyway, and 
that's where provisioning status and operationl status can reside. For sure 
it's not very simple, but this is the right way to do it.

Also I'd like to emphasize that statuses are really an API thing, not a driver 
thing, so they must be used similarly across all drivers.

Thanks,
Eugene.


On Tue, Jun 24, 2014 at 10:53 PM, Doug Wiegley <do...@a10networks.com> wrote:
Hi Brandon,

I think just one status is overloading too much onto the LB object (which
is perhaps something that a UI should do for a user, but not something an
API should be doing.)

> 1) If an entity exists without a link to a load balancer it is purely
> just a database entry, so it would always be ACTIVE, but not really
> active in a technical sense.

Depends on the driver.  I don't think this is a decision for lbaas proper.


> 2) If some of these entities become shareable then how does the status
> reflect that the entity failed to create on one load balancer but was
> successfully created on another.  That logic could get overly complex.

That's a status on the join link, not the object, and I could argue
multiple ways in which that should be one way or another based on the
backend, which to me, again implies driver question (backend could queue
for later, or error immediately, or let things run degraded, or ...)

Thanks,
Doug




On 6/24/14, 11:23 AM, "Brandon Logan" <brandon.lo...@rackspace.com> wrote:

>I think we missed this discussion at the meet-up but I'd like to bring
>it up here.  To me having a status on all entities doesn't make much
>sense, and justing having a status on a load balancer (which would be a
>provisioning status) and a status on a member (which would be an
>operational status) are what makes sense because:
>
>1) If an entity exists without a link to a load balancer it is purely
>just a database entry, so it would always be ACTIVE, but not really
>active in a technical sense.
>
>2) If some of these entities become shareable then how does the status
>reflect that the entity failed to create on one load balancer but was
>successfully created on another.  That logic could get overly complex.
>
>I think the best thing to do is to have the load balancer status reflect
>the provisioning status of all of its children.  So if a health monitor
>is updated then the load balancer that health monitor is linked to would
>have its status changed to PENDING_UPDATE.  Conversely, if a load
>balancer or any entities linked to it are changed and the load
>balancer's status is in a non-ACTIVE state then that update should not
>be allowed.
>
>Thoughts?
>
>Thanks,
>Brandon
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Bug squashing day on Tu, 24th of June

2014-06-24 Thread Dmitry Borodaenko
Mid-day numbers update:


startdelta from 2014-06-17mid-daydelta from startenddelta from startdelta
from 2014-06-17New1751701705Incomplete25-621-421-4-10Critical/High for 5.1

140
140
33Critical/High for 5.1, Confirmed Triaged92
87-587-5
Medium/Low/Undefined for 5.1, Confirmed/Triaged238
230-8230-8
In progress67
736736
Customer-found27
25-225-2
Confirmed/Triaged/In progress for 5.1

392
392
24Total open for 5.143928428-11428-1117
Spreadsheet:
https://docs.google.com/a/mirantis.com/spreadsheets/d/10mUeRwOplnmoe_RFkrUSeVEw-__ZMU2nq23BOY-gzYs/edit#gid=1683970476

-- 
Dmitry Borodaenko
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cinder pools implementation

2014-06-24 Thread Singh, Navneet
Hi,
  As per our discussions in the last meeting, I have made an etherpad that
details the different pool implementations and, at the end, a comparison
between the approaches. Please go through it and be ready with any questions
or opinions for tomorrow's meeting. Here is the link to the etherpad:

https://etherpad.openstack.org/p/cinder-pool-impl-comparison

Best Regards
Navneet Singh
NetApp

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Brandon Logan
Eugene,
Thanks for the feedback.  I have a feeling thats where we will end up
going anyway so perhaps status on all entities for now is the proper way
to build into that.  I just want my objections to be heard.

Thanks,
Brandon 

On Tue, 2014-06-24 at 23:10 +0400, Eugene Nikanorov wrote:
> Hi lbaas folks,
> 
> 
> IMO a status is really an important part of the API.
> In some old email threads Sam has proposed the solution for lbaas
> objects: we need to have several attributes that independently
> represent different types of statuses:
> - admin_state_up
> - operational status
> - provisioning state
> 
> 
> Not every status need to be on every object. 
> Pure-DB objects (like pool) should not have provisioning state and
> operational status, instead, an association object should have them. I
> think that resolves both questions (1) and (2).
> If some object is shareable, then we'll have association object
> anyway, and that's where provisioning status and operationl status can
> reside. For sure it's not very simple, but this is the right way to do
> it.
> 
> 
> Also I'd like to emphasize that statuses are really an API thing, not
> a driver thing, so they must be used similarly across all drivers.
> 
> 
> Thanks,
> Eugene.
> 
> 
> On Tue, Jun 24, 2014 at 10:53 PM, Doug Wiegley 
> wrote:
> Hi Brandon,
> 
> I think just one status is overloading too much onto the LB
> object (which
> is perhaps something that a UI should do for a user, but not
> something an
> API should be doing.)
> 
> > 1) If an entity exists without a link to a load balancer it
> is purely
> > just a database entry, so it would always be ACTIVE, but not
> really
> > active in a technical sense.
> 
> 
> Depends on the driver.  I don't think this is a decision for
> lbaas proper.
> 
> 
> > 2) If some of these entities become shareable then how does
> the status
> > reflect that the entity failed to create on one load
> balancer but was
> > successfully created on another.  That logic could get
> overly complex.
> 
> 
> That's a status on the join link, not the object, and I could
> argue
> multiple ways in which that should be one way or another based
> on the
> backend, which to me, again implies driver question (backend
> could queue
> for later, or error immediately, or let things run degraded,
> or ...)
> 
> Thanks,
> Doug
> 
> 
> 
> 
> On 6/24/14, 11:23 AM, "Brandon Logan"
>  wrote:
> 
> >I think we missed this discussion at the meet-up but I'd like
> to bring
> >it up here.  To me having a status on all entities doesn't
> make much
> >sense, and justing having a status on a load balancer (which
> would be a
> >provisioning status) and a status on a member (which would be
> an
> >operational status) are what makes sense because:
> >
> >1) If an entity exists without a link to a load balancer it
> is purely
> >just a database entry, so it would always be ACTIVE, but not
> really
> >active in a technical sense.
> >
> >2) If some of these entities become shareable then how does
> the status
> >reflect that the entity failed to create on one load balancer
> but was
> >successfully created on another.  That logic could get overly
> complex.
> >
> >I think the best thing to do is to have the load balancer
> status reflect
> >the provisioning status of all of its children.  So if a
> health monitor
> >is updated then the load balancer that health monitor is
> linked to would
> >have its status changed to PENDING_UPDATE.  Conversely, if a
> load
> >balancer or any entities linked to it are changed and the
> load
> >balancer's status is in a non-ACTIVE state then that update
> should not
> >be allowed.
> >
> >Thoughts?
> >
> >Thanks,
> >Brandon
> >
> >
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Eichberger, German
Hi Doug & Brandon,

1) +1 Doug -- I like the status "Building" but that's a personal preference. 
It's entirely up to the driver (but it should be reasonable) and we should pick 
the states up front (as we already do with constants)

2) We actually touched upon that with the distinction between status and 
operational status -- that should take care of that.

German

-Original Message-
From: Doug Wiegley [mailto:do...@a10networks.com] 
Sent: Tuesday, June 24, 2014 11:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

Hi Brandon,

I think just one status is overloading too much onto the LB object (which is 
perhaps something that a UI should do for a user, but not something an API 
should be doing.)

> 1) If an entity exists without a link to a load balancer it is purely 
> just a database entry, so it would always be ACTIVE, but not really 
> active in a technical sense.

Depends on the driver.  I don't think this is a decision for lbaas proper.


> 2) If some of these entities become shareable then how does the status 
> reflect that the entity failed to create on one load balancer but was 
> successfully created on another.  That logic could get overly complex.

That's a status on the join link, not the object, and I could argue multiple 
ways in which that should be one way or another based on the backend, which to 
me, again implies driver question (backend could queue for later, or error 
immediately, or let things run degraded, or ...)

Thanks,
Doug




On 6/24/14, 11:23 AM, "Brandon Logan"  wrote:

>I think we missed this discussion at the meet-up but I'd like to bring 
>it up here.  To me having a status on all entities doesn't make much 
>sense, and justing having a status on a load balancer (which would be a 
>provisioning status) and a status on a member (which would be an 
>operational status) are what makes sense because:
>
>1) If an entity exists without a link to a load balancer it is purely 
>just a database entry, so it would always be ACTIVE, but not really 
>active in a technical sense.
>
>2) If some of these entities become shareable then how does the status 
>reflect that the entity failed to create on one load balancer but was 
>successfully created on another.  That logic could get overly complex.
>
>I think the best thing to do is to have the load balancer status 
>reflect the provisioning status of all of its children.  So if a health 
>monitor is updated then the load balancer that health monitor is linked 
>to would have its status changed to PENDING_UPDATE.  Conversely, if a 
>load balancer or any entities linked to it are changed and the load 
>balancer's status is in a non-ACTIVE state then that update should not 
>be allowed.
>
>Thoughts?
>
>Thanks,
>Brandon
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Brandon Logan
On Tue, 2014-06-24 at 18:53 +, Doug Wiegley wrote:
> Hi Brandon,
> 
> I think just one status is overloading too much onto the LB object (which
> is perhaps something that a UI should do for a user, but not something an
> API should be doing.)

That is a good point and perhaps its another discussion to just have
some way to show the status an entity has for each load balancer, which
is what mark suggested for the member status at the meet-up.

> 
> > 1) If an entity exists without a link to a load balancer it is purely
> > just a database entry, so it would always be ACTIVE, but not really
> > active in a technical sense.
> 
> Depends on the driver.  I don't think this is a decision for lbaas proper.

Driver is linked to the flavor or provider.  Flavor or provider will/is
linked to load balancer.  We won't be able get a driver to send anything
to if there isn't a load balancer.  Without a driver it is a decision
for lbaas proper.  I'd be fine with setting the status of these
"orphaned" entities to just ACTIVE but I'm just worried about the status
management in the future.

> 
> 
> > 2) If some of these entities become shareable then how does the status
> > reflect that the entity failed to create on one load balancer but was
> > successfully created on another.  That logic could get overly complex.
> 
> That's a status on the join link, not the object, and I could argue
> multiple ways in which that should be one way or another based on the
> backend, which to me, again implies driver question (backend could queue
> for later, or error immediately, or let things run degraded, or ...)

Yeah that is definitely an argument.  I'm just trying to keep in mind
the complexities that could arise from decisions made now.  Perhaps it
is the wrong way to look at it to some, but I don't think thinking about
the future is a bad thing and should never be done.

> 
> Thanks,
> Doug
> 
> 
> 
> 
> On 6/24/14, 11:23 AM, "Brandon Logan"  wrote:
> 
> >I think we missed this discussion at the meet-up but I'd like to bring
> >it up here.  To me having a status on all entities doesn't make much
> >sense, and justing having a status on a load balancer (which would be a
> >provisioning status) and a status on a member (which would be an
> >operational status) are what makes sense because:
> >
> >1) If an entity exists without a link to a load balancer it is purely
> >just a database entry, so it would always be ACTIVE, but not really
> >active in a technical sense.
> >
> >2) If some of these entities become shareable then how does the status
> >reflect that the entity failed to create on one load balancer but was
> >successfully created on another.  That logic could get overly complex.
> >
> >I think the best thing to do is to have the load balancer status reflect
> >the provisioning status of all of its children.  So if a health monitor
> >is updated then the load balancer that health monitor is linked to would
> >have its status changed to PENDING_UPDATE.  Conversely, if a load
> >balancer or any entities linked to it are changed and the load
> >balancer's status is in a non-ACTIVE state then that update should not
> >be allowed.
> >
> >Thoughts?
> >
> >Thanks,
> >Brandon
> >
> >
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Eugene Nikanorov
Hi lbaas folks,

IMO a status is really an important part of the API.
In some old email threads Sam has proposed the solution for lbaas objects:
we need to have several attributes that independently represent different
types of statuses:
- admin_state_up
- operational status
- provisioning state

Not every status needs to be on every object.
Pure-DB objects (like pool) should not have a provisioning state or an
operational status; instead, an association object should have them. I
think that resolves both questions (1) and (2).
If some object is shareable, then we'll have an association object anyway,
and that's where the provisioning status and operational status can reside.
For sure it's not very simple, but this is the right way to do it.
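
To make that concrete, here is a minimal sketch (not the actual Neutron/LBaaS
models) of what keeping the statuses on the association object could look like:

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Pool(Base):
    """Shareable, pure-DB object: only admin_state_up lives here."""
    __tablename__ = "lbaas_pools"
    id = sa.Column(sa.String(36), primary_key=True)
    name = sa.Column(sa.String(255))
    admin_state_up = sa.Column(sa.Boolean, nullable=False, default=True)

class LoadBalancerPoolBinding(Base):
    """Association object: one row per (load balancer, pool) pair,
    carrying the statuses that only make sense per deployment."""
    __tablename__ = "lbaas_loadbalancer_pool_bindings"
    loadbalancer_id = sa.Column(sa.String(36), primary_key=True)
    pool_id = sa.Column(sa.String(36), sa.ForeignKey("lbaas_pools.id"),
                        primary_key=True)
    provisioning_status = sa.Column(sa.String(16))  # e.g. PENDING_UPDATE
    operating_status = sa.Column(sa.String(16))     # e.g. ONLINE, DEGRADED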

Also I'd like to emphasize that statuses are really an API thing, not a
driver thing, so they must be used similarly across all drivers.

Thanks,
Eugene.


On Tue, Jun 24, 2014 at 10:53 PM, Doug Wiegley 
wrote:

> Hi Brandon,
>
> I think just one status is overloading too much onto the LB object (which
> is perhaps something that a UI should do for a user, but not something an
> API should be doing.)
>
> > 1) If an entity exists without a link to a load balancer it is purely
> > just a database entry, so it would always be ACTIVE, but not really
> > active in a technical sense.
>
> Depends on the driver.  I don't think this is a decision for lbaas proper.
>
>
> > 2) If some of these entities become shareable then how does the status
> > reflect that the entity failed to create on one load balancer but was
> > successfully created on another.  That logic could get overly complex.
>
> That's a status on the join link, not the object, and I could argue
> multiple ways in which that should be one way or another based on the
> backend, which to me, again implies driver question (backend could queue
> for later, or error immediately, or let things run degraded, or ...)
>
> Thanks,
> Doug
>
>
>
>
> On 6/24/14, 11:23 AM, "Brandon Logan"  wrote:
>
> >I think we missed this discussion at the meet-up but I'd like to bring
> >it up here.  To me having a status on all entities doesn't make much
> >sense, and justing having a status on a load balancer (which would be a
> >provisioning status) and a status on a member (which would be an
> >operational status) are what makes sense because:
> >
> >1) If an entity exists without a link to a load balancer it is purely
> >just a database entry, so it would always be ACTIVE, but not really
> >active in a technical sense.
> >
> >2) If some of these entities become shareable then how does the status
> >reflect that the entity failed to create on one load balancer but was
> >successfully created on another.  That logic could get overly complex.
> >
> >I think the best thing to do is to have the load balancer status reflect
> >the provisioning status of all of its children.  So if a health monitor
> >is updated then the load balancer that health monitor is linked to would
> >have its status changed to PENDING_UPDATE.  Conversely, if a load
> >balancer or any entities linked to it are changed and the load
> >balancer's status is in a non-ACTIVE state then that update should not
> >be allowed.
> >
> >Thoughts?
> >
> >Thanks,
> >Brandon
> >
> >
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Doug Wiegley
Hi Brandon,

I think just one status is overloading too much onto the LB object (which
is perhaps something that a UI should do for a user, but not something an
API should be doing.)

> 1) If an entity exists without a link to a load balancer it is purely
> just a database entry, so it would always be ACTIVE, but not really
> active in a technical sense.

Depends on the driver.  I don't think this is a decision for lbaas proper.


> 2) If some of these entities become shareable then how does the status
> reflect that the entity failed to create on one load balancer but was
> successfully created on another.  That logic could get overly complex.

That's a status on the join link, not the object, and I could argue
multiple ways in which that should be one way or another based on the
backend, which to me, again implies driver question (backend could queue
for later, or error immediately, or let things run degraded, or ...)

Thanks,
Doug




On 6/24/14, 11:23 AM, "Brandon Logan"  wrote:

>I think we missed this discussion at the meet-up but I'd like to bring
>it up here.  To me having a status on all entities doesn't make much
>sense, and justing having a status on a load balancer (which would be a
>provisioning status) and a status on a member (which would be an
>operational status) are what makes sense because:
>
>1) If an entity exists without a link to a load balancer it is purely
>just a database entry, so it would always be ACTIVE, but not really
>active in a technical sense.
>
>2) If some of these entities become shareable then how does the status
>reflect that the entity failed to create on one load balancer but was
>successfully created on another.  That logic could get overly complex.
>
>I think the best thing to do is to have the load balancer status reflect
>the provisioning status of all of its children.  So if a health monitor
>is updated then the load balancer that health monitor is linked to would
>have its status changed to PENDING_UPDATE.  Conversely, if a load
>balancer or any entities linked to it are changed and the load
>balancer's status is in a non-ACTIVE state then that update should not
>be allowed.
>
>Thoughts?
>
>Thanks,
>Brandon
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][glance] Update volume-image-metadata proposal

2014-06-24 Thread Maldonado, Facundo N
Hi folks,

I started working on this blueprint [1] but the work to be done 
is not limited to the cinder python client.
Volume-image-metadata is immutable in Cinder, and Glance has 
RBAC-protected image properties but doesn't provide any way to find out 
in advance which properties are protected [2].

I want to share this proposal and get feedback from you.

https://docs.google.com/document/d/1XYEqGOa30viOyZf8AiwkrCiMWGTfBKjgmeYBptaCHlM/


Thanks,
Facundo

[1] 
https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata
[2] 
http://openstack.10931.n7.nabble.com/Cinder-Confusion-about-the-respective-use-cases-for-volume-s-admin-metadata-metadata-and-glance-imaga-td39849.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Providing a potentially more open interface to statsd statistics

2014-06-24 Thread Seger, Mark (Cloud Services)
I've lamented for a while that while swift/statsd provide a wealth of 
information, it's in a somewhat difficult-to-use format.  Specifically you have 
to connect to a socket and listen for messages.  Furthermore if you're 
listening, nobody else can.  I do realize there is a mechanism to send the data 
to graphite, but what if I'm not a graphite user OR want to look at the data at 
a finer granularity than is being sent to graphite?

What I've put together and would love to get some feedback on is a tool I'm 
calling 'statsdtee', specifically because you can configure statsd to send to 
the port it wants to listen on (configurable of course) and statsdtee will then 
process it locally AND tee it out another socket, making it possible to forward 
the data on to graphite and still allow local processing.

Local processing consists of calculating rolling counters and writing them to a 
file that looks much like most /proc entries, such as this:

$cat /tmp/statsdtee
V1.0 1403633349.159516
accaudt 0 0 0
accreap 0 0 0 0 0 0 0 0 0
accrepl 0 0 2100 0 0 0 1391 682 0 2100
accsrvr 1 0 0 0 0 2072 0
conaudt 0 0 0
conrepl 0 0 2892 0 0 0 1997 1107 0 2892
consrvr 2700 0 0 1 1 992 0
consync 541036 0 11 0 0
conupdt 0 17 17889
objaudt 0 0
objexpr 0 0
objrepl 0 0 0 0
objsrvr 117190 16325 0 43068 9 996 5 0 6904
objupdt 0 0 0 1704 0

In this format we're looking at data for account, container and object 
services.  There is a similar one for proxy.  The reason for the names on each 
line is what to report on is configurable in a conf file down to the 
granularity of a single line, thereby making it possible to report less 
information, though I'm not sure if one would really do that or not.

To make this mechanism really simple and avoid using internal timers, I'm 
simply looking at the time of each record and, every time the value of the 
second changes, writing out the current counters.  I could change it to every 
10th of a second but am thinking that really isn't necessary.  I could also 
drive it off a timer interrupt, but again I'm not sure that would really buy 
you anything.

My peeve with /proc is you never know what each field means and so there is a 
second format in which headers are included and they look like this:

$ cat /tmp/statsdtee
V1.0 140369.410722
#   errs pass fail
accaudt 0 0 0
#   errs cfail cdel cremain cposs_remain ofail odel oremain oposs_remain
accreap 0 0 0 0 0 0 0 0 0
#   diff diff_cap nochg hasmat rsync rem_merge attmpt fail remov succ
accrepl 0 0 2100 0 0 0 1391 682 0 2100
#   put get post del head repl errs
accsrvr 1 0 0 0 0 2069 0
#   errs pass fail
conaudt 0 0 0
#   diff diff_cap nochg hasmat rsync rem_merge attmpt fail remov succ
conrepl 0 0 2793 0 0 0 1934 1083 0 2793
#   put get post del head repl errs
consrvr 2700 0 0 1 1 976 0
#   skip fail sync del put
consync 536193 0 11 0 0
#   succ fail no_chg
conupdt 0 17 17889
#   quar errs
objaudt 0 0
#   obj errs
objexpr 0 0
#   part_del part_upd suff_hashes suff_sync
objrepl 0 0 0 0
#   put get post del head repl errs quar async_pend
objsrvr 117190 16325 0 43068 9 996 5 0 6904
#   errs quar succ fail unlk
objupdt 0 0 0 1704 0

The important thing to remember about rolling counters is that as many people as 
wish can read them simultaneously and be assured nobody is stepping on each 
other, since they never get zeroed!  You simply read a sample, wait a while and 
read another.  The result is the change in the counters over that interval, and 
anyone can use any interval they choose.
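
As a concrete illustration, here is a minimal sketch in Python of how a consumer
might use the file -- the field layout is assumed from the sample output above,
and the path and interval are just examples:

import time

def read_counters(path='/tmp/statsdtee'):
    counters = {}
    with open(path) as f:
        next(f)                        # skip the "V1.0 <timestamp>" header line
        for line in f:
            if not line.strip() or line.startswith('#'):
                continue               # skip blanks and the optional header lines
            name, rest = line.split(None, 1)
            counters[name] = [int(v) for v in rest.split()]
    return counters

before = read_counters()
time.sleep(10)                         # any interval the reader likes
after = read_counters()
deltas = {k: [a - b for a, b in zip(after[k], before[k])] for k in after}
print(deltas['objsrvr'])               # object server activity over that interval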

So how useful do people think this is?  Personally I think it's very useful...

The next step is how to calculate the numbers I'm reporting.  While statsd 
reports a lot of timing information, none of that really fits this model as all 
I want are counts.  So when I see a GET timing record, I count it as 1 GET.  
Seems to work so far. Is this a legitimate thing to be doing?  Feels right, and 
from the preliminary testing I've been doing it seems pretty accurate.

One thing I've found missing is more detailed error information.  For example I 
can tell how many errors there were but I can't tell how many of each type 
there were.  Is this something that can easily be added?  I've found in our 
environment that when there's an increase in the number of errors 
on a particular server, knowing the type can be quite useful.

While I'm not currently counting everything, such as device specific data which 
would significantly increase the volume of output, I think I have covered quite 
a lot in my model.

Comments?

-mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Juno Mid-cycle Meetup

2014-06-24 Thread Douglas Mendizabal
Hi Everyone,

Just a reminder that the Barbican mid-cycle meetup is just under two weeks
away.   I just wanted to send out a link to the etherpad we’re using to do
some pre-planning of things that need to be covered during the meetup

https://etherpad.openstack.org/p/barbican-juno-meetup

Also, please be sure to RSVP if you’re planning on coming, so that we can
plan accordingly.

RSVP [ 
https://docs.google.com/forms/d/1iao7mEN6HV3CRCRuCPhxOaF4_tJ-Kqq4_Lli1quft58
/viewform?usp=send_form ]

Thanks,
Doug Mendizábal
IRC: redrobot

From:  Douglas Mendizabal 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)"

Date:  Monday, June 16, 2014 at 9:29 PM
To:  "OpenStack Development Mailing List (not for usage questions)"

Subject:  [openstack-dev] [barbican] Juno Mid-cycle Meetup

Hi Everyone,

Just wanted to send a reminder that the Barbican Juno meetup is coming up in
a few weeks.  We’ll be meeting at the new Geekdom location in San Antonio,
TX on  July 7-9 (Monday-Wednesday).  This meetup will overlap with the
Keystone Juno Hackathon being held July 9-11 at the same location.

RSVP [ 
https://docs.google.com/forms/d/1iao7mEN6HV3CRCRuCPhxOaF4_tJ-Kqq4_Lli1quft58
/viewform?usp=send_form ]

LOCATION

Geekdom
110 E Houston St, 7th Floor
San Antonio TX, 78205
( https://goo.gl/maps/skMaI )

DATES

Mon, July 7 – Barbican
Tue, July 8 – Barbican
Wed, July 9 – Barbican/Keystone
Thu, July 10 – Keystone
Fri, July 11 – Keystone

For more information check out the wiki page. [
https://wiki.openstack.org/wiki/Barbican/JunoMeetup ]

Thanks,

Douglas Mendizábal
IRC: redrobot




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Which entities need status

2014-06-24 Thread Brandon Logan
I think we missed this discussion at the meet-up but I'd like to bring
it up here.  To me having a status on all entities doesn't make much
sense, and just having a status on a load balancer (which would be a
provisioning status) and a status on a member (which would be an
operational status) are what makes sense because:

1) If an entity exists without a link to a load balancer it is purely
just a database entry, so it would always be ACTIVE, but not really
active in a technical sense.

2) If some of these entities become shareable then how does the status
reflect that the entity failed to create on one load balancer but was
successfully created on another.  That logic could get overly complex.

I think the best thing to do is to have the load balancer status reflect
the provisioning status of all of its children.  So if a health monitor
is updated then the load balancer that health monitor is linked to would
have its status changed to PENDING_UPDATE.  Conversely, if a load
balancer or any entities linked to it are changed and the load
balancer's status is in a non-ACTIVE state then that update should not
be allowed.
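
To make the rule concrete, here's a rough sketch in Python of what I mean
(names and exception are hypothetical, not an actual implementation):

class StateConflict(Exception):
    pass

def update_child(load_balancer, child, new_attrs):
    # Reject changes while the load balancer (or anything under it) is busy.
    if load_balancer.provisioning_status != 'ACTIVE':
        raise StateConflict('load balancer %s is %s; update not allowed'
                            % (load_balancer.id, load_balancer.provisioning_status))
    # Otherwise the whole tree goes PENDING_UPDATE until the backend is done.
    load_balancer.provisioning_status = 'PENDING_UPDATE'
    child.update(new_attrs)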

Thoughts?

Thanks,
Brandon


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Trouble with Devstack

2014-06-24 Thread Trevor Vardeman
Fawad,

Thanks Fawad, that seems to have fixed my issue at this point.  It amused me, 
since pip is supposed to replace easy_install, but I won't nitpick if it fixes 
it, ha ha.

-Trevor

From: Fawad Khaliq [fa...@plumgrid.com]
Sent: Tuesday, June 24, 2014 12:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Trouble with Devstack

Hi Trevor,

I ran into the same issue. I worked around quickly by doing the following:

  *   After stack.sh uninstalls pip, and fails with the 
"pkg_resources.DistributionNotFound: pip==1.4.1" error, install pip from 
easy_install
 *   # easy_install pip
  *   And re-run stack.sh

Haven't done the investigation yet but this may help you move past this issue 
for now.

Thanks,
Fawad Khaliq



On Tue, Jun 24, 2014 at 10:32 AM, Trevor Vardeman 
mailto:trevor.varde...@rackspace.com>> wrote:
I'm running Ubuntu 14.04, and rather suddenly I'm unable to run "./stack.sh" 
successfully.  Brandon, who is also running Ubuntu 14.04, is seeing no issues 
here.  However, all the same, I'm at a loss as to understand what the problem 
is.  At the bottom of my text is the terminal output from running "./stack.sh"

It should be noted, I don't use a python virtual environment.  My reasoning is 
simple: I have a specific partition set up to use devstack, and only devstack.  
I don't think it's necessary to use a VE mostly because I would find it weird to 
handle dependencies in an isolated environment rather than the host environment 
I've already dedicated to the project in the first place.  Not sure any of you 
will agree with me, and I'd only really entertain the idea of said VE if it's 
the only solution to my problem.  I've installed "python-pip" as the latest 
version, 1.5.6.  When running "./stack.sh" it will uninstall the latest version 
and try using pip 1.4.1, to no avail, and where it would try to install 1.4.1 
escapes me, according to the following output.  If I manually install 1.4.1 and 
add files to the appropriate location for its use according to "./stack.sh" it 
still uninstalls the installed packages, and then fails, under what appeared to 
me to be the same output and failure as the following.  If anyone can help me 
sort this out, I'd be very appreciative.  Please feel free to message me on IRC 
(handle TrevorV) if you have a suggestion or are confused about anything I've 
done/tried.


Using mysql database backend
2014-06-24 17:16:32.095 | + echo_summary 'Installing package prerequisites'
2014-06-24 17:16:32.095 | + [[ -t 3 ]]
2014-06-24 17:16:32.095 | + [[ True != \T\r\u\e ]]
2014-06-24 17:16:32.095 | + echo -e Installing package prerequisites
2014-06-24 17:16:32.095 | + source 
/home/stack/workspace/devstack/tools/install_prereqs.sh
2014-06-24 17:16:32.095 | ++ [[ -n '' ]]
2014-06-24 17:16:32.095 | ++ [[ -z /home/stack/workspace/devstack ]]
2014-06-24 17:16:32.095 | ++ 
PREREQ_RERUN_MARKER=/home/stack/workspace/devstack/.prereqs
2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_HOURS=2
2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_SECONDS=7200
2014-06-24 17:16:32.096 | +++ date +%s
2014-06-24 17:16:32.096 | ++ NOW=1403630192
2014-06-24 17:16:32.096 | +++ head -1 /home/stack/workspace/devstack/.prereqs
2014-06-24 17:16:32.096 | ++ LAST_RUN=1403628907
2014-06-24 17:16:32.096 | ++ DELTA=1285
2014-06-24 17:16:32.096 | ++ [[ 1285 -lt 7200 ]]
2014-06-24 17:16:32.096 | ++ [[ -z '' ]]
2014-06-24 17:16:32.096 | ++ echo 'Re-run time has not expired (5915 seconds 
remaining) '
2014-06-24 17:16:32.096 | Re-run time has not expired (5915 seconds remaining)
2014-06-24 17:16:32.096 | ++ echo 'and FORCE_PREREQ not set; exiting...'
2014-06-24 17:16:32.096 | and FORCE_PREREQ not set; exiting...
2014-06-24 17:16:32.096 | ++ return 0
2014-06-24 17:16:32.096 | + [[ False != \T\r\u\e ]]
2014-06-24 17:16:32.096 | + /home/stack/workspace/devstack/tools/install_pip.sh
2014-06-24 17:16:32.096 | +++ dirname 
/home/stack/workspace/devstack/tools/install_pip.sh
2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools
2014-06-24 17:16:32.096 | ++ pwd
2014-06-24 17:16:32.096 | + TOOLS_DIR=/home/stack/workspace/devstack/tools
2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools/..
2014-06-24 17:16:32.096 | ++ pwd
2014-06-24 17:16:32.096 | + TOP_DIR=/home/stack/workspace/devstack
2014-06-24 17:16:32.096 | + cd /home/stack/workspace/devstack
2014-06-24 17:16:32.096 | + source /home/stack/workspace/devstack/functions
2014-06-24 17:16:32.096 |  dirname /home/stack/workspace/devstack/functions
2014-06-24 17:16:32.096 | +++ cd /home/stack/workspace/devstack
2014-06-24 17:16:32.096 | +++ pwd
2014-06-24 17:16:32.096 | ++ FUNC_DIR=/home/stack/workspace/devstack
2014-06-24 17:16:32.096 | ++ source 
/home/stack/workspace/devstack/functions-common
2014-06-24 17:16:32.105 | + FILES=/home/stack/workspace/devstack/files
2014-06-24 17:16:32.105 | + PIP_GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py
2014-06-

Re: [openstack-dev] [Barbican] Barebones CA

2014-06-24 Thread John Wood
Hello Robert,

I would actually hope we have a self-contained certificate plugin 
implementation that runs 'out of the box' to enable certificate generation 
orders to be evaluated and demo-ed on local boxes. 

Is this what you were thinking though?

Thanks,
John




From: Clark, Robert Graham [robert.cl...@hp.com]
Sent: Tuesday, June 24, 2014 10:36 AM
To: OpenStack List
Subject: [openstack-dev] [Barbican] Barebones CA

Hi all,

I’m sure this has been discussed somewhere and I’ve just missed it.

Is there any value in creating a basic ‘CA’ and plugin to satisfy 
tests/integration in Barbican? I’m thinking something that probably performs 
OpenSSL certificate operations itself, ugly but perhaps useful for some things?

-Rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Trouble with Devstack

2014-06-24 Thread Fawad Khaliq
Hi Trevor,

I ran into the same issue. I worked around quickly by doing the following:

   - After stack.sh uninstalls pip, and fails with the
"pkg_resources.DistributionNotFound:
   pip==1.4.1" error, install pip from easy_install
  - # easy_install pip
   - And re-run stack.sh

Haven't done the investigation yet but this may help you move past this
issue for now.

Thanks,
Fawad Khaliq



On Tue, Jun 24, 2014 at 10:32 AM, Trevor Vardeman <
trevor.varde...@rackspace.com> wrote:

>  I'm running Ubuntu 14.04, and rather suddenly I'm unable to run
> "./stack.sh" successfully.  Brandon, who is also running Ubuntu 14.04, is
> seeing no issues here.  However, all the same, I'm at a loss as to
> understand what the problem is.  At the bottom of my text is the terminal
> output from running "./stack.sh"
>
>  It should be noted, I don't use a python virtual environment.  My
> reasoning is simple: I have a specific partition set up to use devstack,
> and only devstack.  I don't think it's necessary to use a VE mostly because
> I would find it weird to handle dependencies in an isolated environment
> rather than the host environment I've already dedicated to the project in
> the first place.  Not sure any of you will agree with me, and I'd only
> really entertain the idea of said VE if it's the only solution to my
> problem.  I've installed "python-pip" as the latest version, 1.5.6.  When
> running "./stack.sh" it will uninstall the latest version and try using pip
> 1.4.1, to no avail, and where it would try to install 1.4.1 escapes me,
> according to the following output.  If I manually install 1.4.1 and add
> files to the appropriate location for its use according to "./stack.sh" it
> still uninstalls the installed packages, and then fails, under what
> appeared to me to be the same output and failure as the following.  If
> anyone can help me sort this out, I'd be very appreciative.  Please feel
> free to message me on IRC (handle TrevorV) if you have a suggestion or are
> confused about anything I've done/tried.
>
>  
>  Using mysql database backend
> 2014-06-24 17:16:32.095 | + echo_summary 'Installing package prerequisites'
> 2014-06-24 17:16:32.095 | + [[ -t 3 ]]
> 2014-06-24 17:16:32.095 | + [[ True != \T\r\u\e ]]
> 2014-06-24 17:16:32.095 | + echo -e Installing package prerequisites
> 2014-06-24 17:16:32.095 | + source
> /home/stack/workspace/devstack/tools/install_prereqs.sh
> 2014-06-24 17:16:32.095 | ++ [[ -n '' ]]
> 2014-06-24 17:16:32.095 | ++ [[ -z /home/stack/workspace/devstack ]]
> 2014-06-24 17:16:32.095 | ++
> PREREQ_RERUN_MARKER=/home/stack/workspace/devstack/.prereqs
> 2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_HOURS=2
> 2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_SECONDS=7200
> 2014-06-24 17:16:32.096 | +++ date +%s
> 2014-06-24 17:16:32.096 | ++ NOW=1403630192
> 2014-06-24 17:16:32.096 | +++ head -1
> /home/stack/workspace/devstack/.prereqs
> 2014-06-24 17:16:32.096 | ++ LAST_RUN=1403628907
>  2014-06-24 17:16:32.096 | ++ DELTA=1285
> 2014-06-24 17:16:32.096 | ++ [[ 1285 -lt 7200 ]]
> 2014-06-24 17:16:32.096 | ++ [[ -z '' ]]
> 2014-06-24 17:16:32.096 | ++ echo 'Re-run time has not expired (5915
> seconds remaining) '
> 2014-06-24 17:16:32.096 | Re-run time has not expired (5915 seconds
> remaining)
> 2014-06-24 17:16:32.096 | ++ echo 'and FORCE_PREREQ not set; exiting...'
> 2014-06-24 17:16:32.096 | and FORCE_PREREQ not set; exiting...
> 2014-06-24 17:16:32.096 | ++ return 0
> 2014-06-24 17:16:32.096 | + [[ False != \T\r\u\e ]]
> 2014-06-24 17:16:32.096 | +
> /home/stack/workspace/devstack/tools/install_pip.sh
> 2014-06-24 17:16:32.096 | +++ dirname
> /home/stack/workspace/devstack/tools/install_pip.sh
> 2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools
> 2014-06-24 17:16:32.096 | ++ pwd
> 2014-06-24 17:16:32.096 | + TOOLS_DIR=/home/stack/workspace/devstack/tools
> 2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools/..
> 2014-06-24 17:16:32.096 | ++ pwd
> 2014-06-24 17:16:32.096 | + TOP_DIR=/home/stack/workspace/devstack
> 2014-06-24 17:16:32.096 | + cd /home/stack/workspace/devstack
> 2014-06-24 17:16:32.096 | + source /home/stack/workspace/devstack/functions
> 2014-06-24 17:16:32.096 |  dirname
> /home/stack/workspace/devstack/functions
> 2014-06-24 17:16:32.096 | +++ cd /home/stack/workspace/devstack
> 2014-06-24 17:16:32.096 | +++ pwd
> 2014-06-24 17:16:32.096 | ++ FUNC_DIR=/home/stack/workspace/devstack
> 2014-06-24 17:16:32.096 | ++ source
> /home/stack/workspace/devstack/functions-common
> 2014-06-24 17:16:32.105 | + FILES=/home/stack/workspace/devstack/files
> 2014-06-24 17:16:32.105 | + PIP_GET_PIP_URL=
> https://bootstrap.pypa.io/get-pip.py
> 2014-06-24 17:16:32.106 | ++ basename https://bootstrap.pypa.io/get-pip.py
> 2014-06-24 17:16:32.107 | +
> LOCAL_PIP=/home/stack/workspace/devstack/files/get-pip.py
> 2014-06-24 17:16:32.107 | + GetDistro
> 2014-06-24 17:16:32.107 | + GetOSVersion
> 2014-06-24 17:16:32.108 | ++ which sw_vers
> 2014-06-24 17:1

[openstack-dev] [Neutron][LBaaS] Trouble with Devstack

2014-06-24 Thread Trevor Vardeman
I'm running Ubuntu 14.04, and rather suddenly I'm unable to run "./stack.sh" 
successfully.  Brandon, who is also running Ubuntu 14.04, is seeing no issues 
here.  However, all the same, I'm at a loss as to understand what the problem 
is.  At the bottom of my text is the terminal output from running "./stack.sh"

It should be noted, I don't use a python virtual environment.  My reasoning is 
simple: I have a specific partition set up to use devstack, and only devstack.  
I don't think it's necessary to use a VE mostly because I would find it weird to 
handle dependencies in an isolated environment rather than the host environment 
I've already dedicated to the project in the first place.  Not sure any of you 
will agree with me, and I'd only really entertain the idea of said VE if it's 
the only solution to my problem.  I've installed "python-pip" as the latest 
version, 1.5.6.  When running "./stack.sh" it will uninstall the latest version 
and try using pip 1.4.1, to no avail, and where it would try to install 1.4.1 
escapes me, according to the following output.  If I manually install 1.4.1 and 
add files to the appropriate location for its use according to "./stack.sh" it 
still uninstalls the installed packages, and then fails, under what appeared to 
me to be the same output and failure as the following.  If anyone can help me 
sort this out, I'd be very appreciative.  Please feel free to message me on IRC 
(handle TrevorV) if you have a suggestion or are confused about anything I've 
done/tried.


Using mysql database backend
2014-06-24 17:16:32.095 | + echo_summary 'Installing package prerequisites'
2014-06-24 17:16:32.095 | + [[ -t 3 ]]
2014-06-24 17:16:32.095 | + [[ True != \T\r\u\e ]]
2014-06-24 17:16:32.095 | + echo -e Installing package prerequisites
2014-06-24 17:16:32.095 | + source 
/home/stack/workspace/devstack/tools/install_prereqs.sh
2014-06-24 17:16:32.095 | ++ [[ -n '' ]]
2014-06-24 17:16:32.095 | ++ [[ -z /home/stack/workspace/devstack ]]
2014-06-24 17:16:32.095 | ++ 
PREREQ_RERUN_MARKER=/home/stack/workspace/devstack/.prereqs
2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_HOURS=2
2014-06-24 17:16:32.095 | ++ PREREQ_RERUN_SECONDS=7200
2014-06-24 17:16:32.096 | +++ date +%s
2014-06-24 17:16:32.096 | ++ NOW=1403630192
2014-06-24 17:16:32.096 | +++ head -1 /home/stack/workspace/devstack/.prereqs
2014-06-24 17:16:32.096 | ++ LAST_RUN=1403628907
2014-06-24 17:16:32.096 | ++ DELTA=1285
2014-06-24 17:16:32.096 | ++ [[ 1285 -lt 7200 ]]
2014-06-24 17:16:32.096 | ++ [[ -z '' ]]
2014-06-24 17:16:32.096 | ++ echo 'Re-run time has not expired (5915 seconds 
remaining) '
2014-06-24 17:16:32.096 | Re-run time has not expired (5915 seconds remaining)
2014-06-24 17:16:32.096 | ++ echo 'and FORCE_PREREQ not set; exiting...'
2014-06-24 17:16:32.096 | and FORCE_PREREQ not set; exiting...
2014-06-24 17:16:32.096 | ++ return 0
2014-06-24 17:16:32.096 | + [[ False != \T\r\u\e ]]
2014-06-24 17:16:32.096 | + /home/stack/workspace/devstack/tools/install_pip.sh
2014-06-24 17:16:32.096 | +++ dirname 
/home/stack/workspace/devstack/tools/install_pip.sh
2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools
2014-06-24 17:16:32.096 | ++ pwd
2014-06-24 17:16:32.096 | + TOOLS_DIR=/home/stack/workspace/devstack/tools
2014-06-24 17:16:32.096 | ++ cd /home/stack/workspace/devstack/tools/..
2014-06-24 17:16:32.096 | ++ pwd
2014-06-24 17:16:32.096 | + TOP_DIR=/home/stack/workspace/devstack
2014-06-24 17:16:32.096 | + cd /home/stack/workspace/devstack
2014-06-24 17:16:32.096 | + source /home/stack/workspace/devstack/functions
2014-06-24 17:16:32.096 |  dirname /home/stack/workspace/devstack/functions
2014-06-24 17:16:32.096 | +++ cd /home/stack/workspace/devstack
2014-06-24 17:16:32.096 | +++ pwd
2014-06-24 17:16:32.096 | ++ FUNC_DIR=/home/stack/workspace/devstack
2014-06-24 17:16:32.096 | ++ source 
/home/stack/workspace/devstack/functions-common
2014-06-24 17:16:32.105 | + FILES=/home/stack/workspace/devstack/files
2014-06-24 17:16:32.105 | + PIP_GET_PIP_URL=https://bootstrap.pypa.io/get-pip.py
2014-06-24 17:16:32.106 | ++ basename https://bootstrap.pypa.io/get-pip.py
2014-06-24 17:16:32.107 | + 
LOCAL_PIP=/home/stack/workspace/devstack/files/get-pip.py
2014-06-24 17:16:32.107 | + GetDistro
2014-06-24 17:16:32.107 | + GetOSVersion
2014-06-24 17:16:32.108 | ++ which sw_vers
2014-06-24 17:16:32.111 | + [[ -x '' ]]
2014-06-24 17:16:32.111 | ++ which lsb_release
2014-06-24 17:16:32.114 | + [[ -x /usr/bin/lsb_release ]]
2014-06-24 17:16:32.115 | ++ lsb_release -i -s
2014-06-24 17:16:32.160 | + os_VENDOR=Ubuntu
2014-06-24 17:16:32.161 | ++ lsb_release -r -s
2014-06-24 17:16:32.209 | + os_RELEASE=14.04
2014-06-24 17:16:32.209 | + os_UPDATE=
2014-06-24 17:16:32.209 | + os_PACKAGE=rpm
2014-06-24 17:16:32.209 | + [[ Debian,Ubuntu,LinuxMint =~ Ubuntu ]]
2014-06-24 17:16:32.209 | + os_PACKAGE=deb
2014-06-24 17:16:32.210 | ++ lsb_release -c -s
2014-06-24 17:16:32.262 | + os_CODENAME=trusty
2014-06-24 17:16:32.262 | + exp

Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-24 Thread Andrey Danin
+1 to @Aleksandr


On Tue, Jun 24, 2014 at 8:32 PM, Aleksandr Didenko 
wrote:

> Yes, of course, snapshot for all nodes at once (like currently) should
> also be available.
>
>
> On Tue, Jun 24, 2014 at 7:27 PM, Igor Kalnitsky 
> wrote:
>
>> Hello,
>>
>> @Aleks, it's a good idea to make snapshot per environment, but I think
>> we can keep functionality to make snapshot for all nodes at once too.
>>
>> - Igor
>>
>>
>> On Tue, Jun 24, 2014 at 6:38 PM, Aleksandr Didenko > > wrote:
>>
>>> Yeah, I thought about diagnostic snapshot too. Maybe it would be better
>>> to implement per-environment diagnostic snapshots? I.e. add diagnostic
>>> snapshot generate/download buttons/links in the environment actions tab.
>>> Such snapshot would contain info/logs about Fuel master node and nodes
>>> assigned to the environment only.
>>>
>>>
>>> On Tue, Jun 24, 2014 at 6:27 PM, Igor Kalnitsky >> > wrote:
>>>
 Hi guys,

 What about our diagnostic snapshot?

 I mean we're going to make snapshot of entire /var/log and obviously
 these old logs will be included in the snapshot. Should we skip them or
 is such a situation ok?

 - Igor




 On Tue, Jun 24, 2014 at 5:57 PM, Aleksandr Didenko <
 adide...@mirantis.com> wrote:

> Hi,
>
> If user runs some experiments with creating/deleting clusters, then
> taking care of old logs is under user's responsibility, I suppose. Fuel
> configures log rotation with compression for remote logs, so old logs will
> be gzipped and will not take much space.
>
> In case of additional boolean parameter, the default value should be
> "0-don't touch old logs".
>
> --
> Regards,
> Alex
>
>
> On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Guys,
>>
>> What do you think of removing node logs on master node right after
>> removing node from cluster?
>>
>> The issue is that when a user runs experiments he creates and deletes clusters,
>> and old unused directories remain and take disk space. On the other hand,
>> it is not so hard to imagine a situation where the user would like to be 
>> able
>> to take a look at old logs.
>>
>> My suggestion here is to add a boolean parameter into settings which
>> will manage this piece of logic (1-remove old logs, 0-don't touch old 
>> logs).
>>
>> Thanks for your opinions.
>>
>> Vladimir Kozhukalov
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Quick Survey: Horizon Mid-Cycle Meetup

2014-06-24 Thread Tzu-Mainn Chen
> On 6/20/14, 6:24 AM, "Radomir Dopieralski"  wrote:
> 
> >On 20/06/14 13:56, Jaromir Coufal wrote:
> >> On 2014/19/06 09:58, Matthias Runge wrote:
> >>> On Wed, Jun 18, 2014 at 10:55:59AM +0200, Jaromir Coufal wrote:
>  My quick questions are:
>  * Who would be interested (and able) to get to the meeting?
>  * What topics do we want to discuss?
> 
>  https://etherpad.openstack.org/p/horizon-juno-meetup
> 
> >>> Thanks for bringing this up!
> >>>
> >>> Do we really have items to discuss, where it needs a meeting in person?
> >>>
> >>> Matthias
> >> 
> >> I am not sure TBH, that's why I added also the Topic section to figure
> >> out if there is something what needs to be discussed. Though I don't see
> >> much interest yet.
> >
> >Apart from the split, I also work on configuration files rework, which
> >could benefit from discussion, but i think it's better done here or on
> >the wiki/etherpad, as that leaves tangible traces. I will post a
> >detailed e-mail in a few days. Other than that, I don't see a compelling
> >reason to organize it.
> >
> >--
> >Radomir Dopieralski
> >
> 
> I don't think the split warrants a mid-cycle meetup. A topic that would
> benefit from several people being in the room is client side architecture,
> but I'm not entirely sure we're ready to work through that yet, and the
> dates are a little aggressive.  If we have interest in that, we could look
> to a slightly later date.
> 
> David

This was talked about a bit in today's Horizon weekly IRC meeting, and the
outcome was that it might make sense to see if people have the interest or
the time to attend such a meetup.  In order to gauge interest, here's an
etherpad where interested parties can put down their names next to dates
when they'd be available to attend.

https://etherpad.openstack.org/p/juno-horizon-meetup

Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] dib-utils Release Question

2014-06-24 Thread Ben Nemec
Whoops, sorry, I didn't realize a tox.ini was required to release the
project.  We do actually need to push this to pypi because it's going to
be a dep of diskimage-builder, so although there is currently no Python
code in the project it needs to be pip installable anyway (similar to
tripleo-image-elements).

I've pushed https://review.openstack.org/#/c/102291 to add a basic
tox.ini that should get the build command below working.
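
For reference, roughly what such a tox.ini needs is just enough for
"tox -evenv python setup.py sdist" to run -- something along these lines
(this is a sketch of the idea, not the exact contents of the review):

[tox]
minversion = 1.6
skipsdist = True
envlist = venv

[testenv]
install_command = pip install -U {opts} {packages}

[testenv:venv]
commands = {posargs}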

Thanks.

-Ben

On 06/24/2014 08:04 AM, Jay Dobies wrote:
> Ahh, ok. I had just assumed it was a Python library, but I admittedly 
> didn't look too closely at it. Thanks :)
> 
> On 06/23/2014 09:32 PM, Steve Kowalik wrote:
>> On 24/06/14 06:31, Jay Dobies wrote:
>>> I finished the releases for all of our existing projects and after
>>> poking around tarballs.openstack.org and pypi, it looks like they built
>>> successfully. Yay me \o/
>>>
>>> However, it doesn't look like dib-utils build worked. I don't see it
>>> listed on tarballs.openstack.org. It was the first release for that
>>> project, but I didn't take any extra steps (I just followed the
>>> instructions on the releases wiki and set it to version 0.0.1).
>>>
>>> I saw the build for it appear in zuul but I'm not sure how to go back
>>> and view the results of a build once it disappears off the main page.
>>>
>>> Can someone with experience releasing a new project offer me any insight?
>>
>> \o/
>>
>> I've been dealing with releases of new projects from the os-cloud-config
>> side recently, so let's see.
>>
>> dib-utils has a post job of dib-utils-branch-tarball, so the job does
>> exist, as you pointed out, but it doesn't hurt to double check.
>>
>> The object the tag points to is commit
>> 45b7cf44bc939ef08afc6b1cb1d855e0a85710ad, so logs can be found at
>> http://logs.openstack.org/45/45b7cf44bc939ef08afc6b1cb1d855e0a85710ad
>>
>> And from the log a few levels deep at the above URL, we see:
>>
>> 2014-06-16 07:17:13.122 | + tox -evenv python setup.py sdist
>> 2014-06-16 07:17:13.199 | ERROR: toxini file 'tox.ini' not found
>> 2014-06-16 07:17:13.503 | Build step 'Execute shell' marked build as failure
>>
>> Since it's not a Python project, no tarball or pypi upload.
>>
>> Cheers,
>>
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC] Glance Functional API and Cross-project API Consistency

2014-06-24 Thread Jay Pipes

On 06/11/2014 02:34 AM, Mark Washenberger wrote:

I think the tasks stuff is something different, though. A task is a
(potentially) long-running operation. So it would be possible for an
action to result in the creation of a task. As the proposal stands
today, the actions we've been looking at are an alternative to the
document-oriented PATCH HTTP verb. There was nearly unanimous consensus
that we found "POST /resources/actions/verb {inputs to verb}" to be a
more expressive and intuitive way of accomplishing some workflows than
trying to use JSON-PATCH documents.


Why do tasks necessarily mean the operation is long-running? As 
mentioned before to Brian R, just have the deactivation action be a 
task. There's no need to create a new faux-resource called 'action', IMO...
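
For concreteness, the two shapes being compared would look roughly like this
(paths and payloads are illustrative only, not settled API):

    POST /v2/images/{image_id}/actions/deactivate
        (functional-API style: the verb lives in the URL, body may be empty)

    POST /v2/tasks
        {"type": "deactivate", "input": {"image_id": "..."}}
        (task style: a task resource is created and its status can be polled)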


Best,
-jay


On Tue, Jun 10, 2014 at 4:15 PM, Jay Pipes mailto:jaypi...@gmail.com>> wrote:

On Wed, Jun 4, 2014 at 11:54 AM, Sean Dague mailto:s...@dague.net>> wrote:

On 05/30/2014 02:22 PM, Hemanth Makkapati wrote:
 > Hello All,
 > I'm writing to notify you of the approach the Glance
community has
 > decided to take for doing functional API.  Also, I'm writing
to solicit
 > your feedback on this approach in the light of cross-project API
 > consistency.
 >
 > At the Atlanta Summit, the Glance team has discussed introducing
 > functional API in Glance so as to be able to expose
operations/actions
 > that do not naturally fit into the CRUD-style. A few
approaches are
 > proposed and discussed here
 >

.
 > We have all converged on the approach to include 'action' and
action
 > type in the URL. For instance, 'POST
 > /images/{image_id}/actions/{action_type}'.
 >
 > However, this is different from the way Nova does actions.
Nova includes
 > action type in the payload. For instance, 'POST
 > /servers/{server_id}/action {"type": "", ...}'.
At this
 > point, we hit a cross-project API consistency issue mentioned
here
 >


 > (under the heading 'How to act on resource - cloud perform on
 > resources'). Though we are differing from the way Nova does
actions and
 > hence another source of cross-project API inconsistency , we
have a few
 > reasons to believe that Glance's way is helpful in certain ways.
 >
 > The reasons are as following:
 > 1. Discoverability of operations.  It'll be easier to expose
permitted
 > actions through schemas a json home document living at
 > /images/{image_id}/actions/.
 > 2. More conducive for rate-limiting. It'll be easier to
rate-limit
 > actions in different ways if the action type is available in
the URL.
 > 3. Makes more sense for functional actions that don't require
a request
 > body (e.g., image deactivation).
 >
 > At this point we are curious to see if the API conventions group
 > believes this is a valid and reasonable approach.
 > Any feedback is much appreciated. Thank you!

Honestly, I like POST /images/{image_id}/actions/{action_type} much
better than ACTION being embedded in the body (the way nova
currently
does it), for the simple reason of reading request logs:


I agree that not including the action type in the POST body is much
nicer and easier to read in logs, etc.

That said, I prefer to have resources actually be things that the
software creates. An action isn't created. It is performed.

I would prefer to replace the term "action(s)" with the term
"task(s)", as is proposed for Nova [1].

Then, I'd be happy as a pig in, well, you know.

Best,
-jay

[1] https://review.openstack.org/#/c/86938/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC] Glance Functional API and Cross-project API Consistency

2014-06-24 Thread Jay Pipes

On 06/11/2014 12:25 AM, Brian Rosmaita wrote:

 > That said, I prefer to have resources actually be things that the
software creates. An action
 > isn't created. It is performed.
 >
 > I would prefer to replace the term "action(s)" with the term
"task(s)", as is proposed for Nova [1].

Glance already uses "tasks" in the v2 URL for creating resources that
represent long-running asynchronous processes (import, export, cloning),
so that terminology is already taken.  (The documentation is lagging a
bit behind the code on tasks, though.)

The aim here is to introduce something that doesn't fit cleanly into a
CRUD-centric view.  For example, a cloud provider may need to put an
imported (or otherwise problematic) image into "deactivated" status
while the image is being investigated for some malicious stuff built
into it [2].


So just have a task that does deactivation... there's no need to create 
this new "action" resource IMO.



 Image status, however, is never set directly by users or
admins; image status transitions are handled entirely by Glance.  (You
don't delete an image by updating its status to be 'deleted'; rather,
its status becomes 'deleted' as a result of a DELETE request).  Using an
Images v2 PATCH call in this context would be misleading because we're
not modifying the image's status--we can't.  We're asking Glance to take
an action with respect to an image, the result of which will be that the
image will be deactivated (or reactivated), and Glance will modify the
image's status accordingly.

cheers,
brian

[1] https://review.openstack.org/#/c/86938/
[2] https://wiki.openstack.org/wiki/Glance-deactivate-image

*From:* Jay Pipes [jaypi...@gmail.com]
*Sent:* Tuesday, June 10, 2014 7:15 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [Glance][TC] Glance Functional API and
Cross-project API Consistency

On Wed, Jun 4, 2014 at 11:54 AM, Sean Dague mailto:s...@dague.net>> wrote:

On 05/30/2014 02:22 PM, Hemanth Makkapati wrote:
 > Hello All,
 > I'm writing to notify you of the approach the Glance community has
 > decided to take for doing functional API.  Also, I'm writing to
solicit
 > your feedback on this approach in the light of cross-project API
 > consistency.
 >
 > At the Atlanta Summit, the Glance team has discussed introducing
 > functional API in Glance so as to be able to expose
operations/actions
 > that do not naturally fit into the CRUD-style. A few approaches are
 > proposed and discussed here
 >

.
 > We have all converged on the approach to include 'action' and action
 > type in the URL. For instance, 'POST
 > /images/{image_id}/actions/{action_type}'.
 >
 > However, this is different from the way Nova does actions. Nova
includes
 > action type in the payload. For instance, 'POST
 > /servers/{server_id}/action {"type": "", ...}'. At this
 > point, we hit a cross-project API consistency issue mentioned here
 >


 > (under the heading 'How to act on resource - cloud perform on
 > resources'). Though we are differing from the way Nova does
actions and
 > hence another source of cross-project API inconsistency , we have
a few
 > reasons to believe that Glance's way is helpful in certain ways.
 >
 > The reasons are as following:
 > 1. Discoverability of operations.  It'll be easier to expose
permitted
 > actions through schemas a json home document living at
 > /images/{image_id}/actions/.
 > 2. More conducive for rate-limiting. It'll be easier to rate-limit
 > actions in different ways if the action type is available in the URL.
 > 3. Makes more sense for functional actions that don't require a
request
 > body (e.g., image deactivation).
 >
 > At this point we are curious to see if the API conventions group
 > believes this is a valid and reasonable approach.
 > Any feedback is much appreciated. Thank you!

Honestly, I like POST /images/{image_id}/actions/{action_type} much
better than ACTION being embedded in the body (the way nova currently
does it), for the simple reason of reading request logs:


I agree that not including the action type in the POST body is much
nicer and easier to read in logs, etc.

That said, I prefer to have resources actually be things that the
software creates. An action isn't created. It is performed.

I would prefer to replace the term "action(s)" with the term "task(s)",
as is proposed for Nova [1].

Then, I'd be happy as a pig in, well, you know.

Best,
-jay

[1] https://review.openstack.org/#/c/86938/


___
OpenStack-dev mailing list
Op

Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Clint Byrum
Excerpts from Monty Taylor's message of 2014-06-24 06:48:06 -0700:
> On 06/22/2014 02:49 PM, Duncan Thomas wrote:
> > On 22 June 2014 14:41, Amrith Kumar  wrote:
> >> In addition to making changes to the hacking rules, why don't we mandate 
> >> also
> >> that perceived problems in the commit message shall not be an acceptable
> >> reason to -1 a change.
> > 
> > -1.
> > 
> > There are some /really/ bad commit messages out there, and some of us
> > try to use the commit messages to usefully sort through the changes
> > (i.e. I often -1 in cinder a change only affects one driver and that
> > isn't clear from the summary).
> > 
> > If the perceived problem is grammatical, I'm a bit more on board with
> > it not a reason to rev a patch, but core reviewers can +2/A over the
> > top of a -1 anyway...
> 
> 100% agree. Spelling and grammar are rude to review on - especially
> since we have (and want) a LOT of non-native English speakers. It's not
> our job to teach people better grammar. Heck - we have people from
> different English backgrounds with differing disagreements on what good
> grammar _IS_
> 

We shouldn't quibble over _anything_ grammatical in a commit message. If
there is a disagreement about it, the comments should be ignored. There
are definitely a few grammar rules that are loose and those should be
largely ignored.

However, we should correct grammar when there is a clear solution, as
those same people who do not speak English as their first language are
likely to be confused by poor grammar.

We're not doing it to teach grammar. We're doing it to ensure readability.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-24 Thread Aleksandr Didenko
Yes, of course, snapshot for all nodes at once (like currently) should also
be available.


On Tue, Jun 24, 2014 at 7:27 PM, Igor Kalnitsky 
wrote:

> Hello,
>
> @Aleks, it's a good idea to make snapshot per environment, but I think
> we can keep functionality to make snapshot for all nodes at once too.
>
> - Igor
>
>
> On Tue, Jun 24, 2014 at 6:38 PM, Aleksandr Didenko 
> wrote:
>
>> Yeah, I thought about diagnostic snapshot too. Maybe it would be better
>> to implement per-environment diagnostic snapshots? I.e. add diagnostic
>> snapshot generate/download buttons/links in the environment actions tab.
>> Such snapshot would contain info/logs about Fuel master node and nodes
>> assigned to the environment only.
>>
>>
>> On Tue, Jun 24, 2014 at 6:27 PM, Igor Kalnitsky 
>> wrote:
>>
>>> Hi guys,
>>>
>>> What about our diagnostic snapshot?
>>>
>>> I mean we're going to make snapshot of entire /var/log and obviously
>>> these old logs will be included in the snapshot. Should we skip them or
>>> is such a situation ok?
>>>
>>> - Igor
>>>
>>>
>>>
>>>
>>> On Tue, Jun 24, 2014 at 5:57 PM, Aleksandr Didenko <
>>> adide...@mirantis.com> wrote:
>>>
 Hi,

 If user runs some experiments with creating/deleting clusters, then
 taking care of old logs is under user's responsibility, I suppose. Fuel
 configures log rotation with compression for remote logs, so old logs will
 be gzipped and will not take much space.

 In case of additional boolean parameter, the default value should be
 "0-don't touch old logs".

 --
 Regards,
 Alex


 On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> Guys,
>
> What do you think of removing node logs on master node right after
> removing node from cluster?
>
> The issue is that when a user runs experiments he creates and deletes clusters,
> and old unused directories remain and take disk space. On the other hand,
> it is not so hard to imagine a situation where the user would like to be able
> to take a look at old logs.
>
> My suggestion here is to add a boolean parameter into settings which
> will manage this piece of logic (1-remove old logs, 0-don't touch old 
> logs).
>
> Thanks for your opinions.
>
> Vladimir Kozhukalov
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-24 Thread Igor Kalnitsky
Hello,

@Aleks, it's a good idea to make snapshot per environment, but I think
we can keep functionality to make snapshot for all nodes at once too.

- Igor


On Tue, Jun 24, 2014 at 6:38 PM, Aleksandr Didenko 
wrote:

> Yeah, I thought about diagnostic snapshot too. Maybe it would be better to
> implement per-environment diagnostic snapshots? I.e. add diagnostic
> snapshot generate/download buttons/links in the environment actions tab.
> Such snapshot would contain info/logs about Fuel master node and nodes
> assigned to the environment only.
>
>
> On Tue, Jun 24, 2014 at 6:27 PM, Igor Kalnitsky 
> wrote:
>
>> Hi guys,
>>
>> What about our diagnostic snapshot?
>>
>> I mean we're going to make snapshot of entire /var/log and obviously
>> these old logs will be included in the snapshot. Should we skip them or
>> is such a situation ok?
>>
>> - Igor
>>
>>
>>
>>
>> On Tue, Jun 24, 2014 at 5:57 PM, Aleksandr Didenko > > wrote:
>>
>>> Hi,
>>>
>>> If user runs some experiments with creating/deleting clusters, then
>>> taking care of old logs is under user's responsibility, I suppose. Fuel
>>> configures log rotation with compression for remote logs, so old logs will
>>> be gzipped and will not take much space.
>>>
>>> In case of additional boolean parameter, the default value should be
>>> "0-don't touch old logs".
>>>
>>> --
>>> Regards,
>>> Alex
>>>
>>>
>>> On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Guys,

 What do you think of removing node logs on master node right after
 removing node from cluster?

 The issue is that when a user runs experiments he creates and deletes clusters,
 and old unused directories remain and take disk space. On the other hand,
 it is not so hard to imagine a situation where the user would like to be able
 to take a look at old logs.

 My suggestion here is to add a boolean parameter into settings which
 will manage this piece of logic (1-remove old logs, 0-don't touch old 
 logs).

 Thanks for your opinions.

 Vladimir Kozhukalov

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Periodic Security Checks

2014-06-24 Thread Joe Gordon
On Sat, Jun 21, 2014 at 11:33 AM, Alexandr Naumchev 
wrote:

> Hello!
> We have blueprints here:
>
> https://blueprints.launchpad.net/horizon/+spec/periodic-security-checks
>
> and here:
>
> https://blueprints.launchpad.net/nova/+spec/periodic-security-checks/
>
> And we already have some code. Is it necessary to approve the blueprint
> before contributing the code? In any case, could someone review the
> aforementioned blueprints?
> Thanks!
>
>

Hi, nova has moved away from using 100% launchpad to approve blueprints.
Our current workflow is documented here
https://wiki.openstack.org/wiki/Blueprints#Spec_.2B_Blueprints_lifecycle


> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Stackalytics 0.6 released!

2014-06-24 Thread Herman Narkaytis
Hi Stackers,
  More than a year ago Mirantis announced Stackalytics as a public resource
for the OpenStack community. Initially it was an internal tool for our
performance tracking, but later the resource became the de facto standard for
measuring contribution statistics. We started with several POCs on
different technologies, targeted at showing decent performance on 1
million records. At that time there were only 50k commit records and we
estimated it would take 3 years to reach 1M. I'm glad to admit that we were wrong.
Now Stackalytics handles not only commits, but reviews, blueprints, bugs,
emails, registrations on openstack.org, etc. We've reached the 1M bound and
are still able to generate reports in seconds!

  Today we'd like to announce the 0.6 release with a bunch of new features:

   - Implemented module classification based on programs.yaml with
   retrospective integrated/incubated attribution
   - Added support for co-authored commits
   - Added metrics on filed and resolved bugs
   - Added drill-down report on OpenStack foundation members

  I want to say thank you to our development team and especially to Ilya
Shakhat, Pavel Kholkin and Yury Taraday.

  Please feel free to provide your feedback. It's highly appreciated.

--
Herman Narkaytis
DoO Ru, PhD
Tel.: +7 (8452) 674-555, +7 (8452) 431-555
Tel.: +7 (495) 640-4904
Tel.: +7 (812) 640-5904
Tel.: +38(057)728-4215
Tel.: +1 (408) 715-7897
ext 2002
http://www.mirantis.com

This email (including any attachments) is confidential. If you are not the
intended recipient you must not copy, use, disclose, distribute or rely on
the information contained in it. If you have received this email in error,
please notify the sender immediately by reply email and delete the email
from your system. Confidentiality and legal privilege attached to this
communication are not waived or lost by reason of mistaken delivery to you.
Mirantis does not guarantee (that this email or the attachment's) are
unaffected by computer virus, corruption or other defects. Mirantis may
monitor incoming and outgoing emails for compliance with its Email Policy.
Please note that our servers may not be located in your country.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] About storing volume format info for filesystem-based drivers

2014-06-24 Thread John Griffith
On Tue, Jun 24, 2014 at 9:56 AM, Duncan Thomas 
wrote:

> On 24 June 2014 16:42, Avishay Traeger  wrote:
> > One more reason why block storage management doesn't really work on file
> > systems.  I'm OK with storing the format, but that just means you fail
> > migration/backup operations with different formats, right?
>
> Actually I think storing the format *fixes* those cases, since the
> driver knows what the source format is to get a raw stream of bytes
> out. It was in trying to fix backup that this problem was found.
>

​Yes, but I was also trying to point out this shouldn't be done in the
driver... but at this point maybe IRC is a better forum to discuss the
impl?​

>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - compare_type values

2014-06-24 Thread Dustin Lundquist
I brought this up on https://review.openstack.org/#/c/101084/.


-Dustin


On Tue, Jun 24, 2014 at 7:57 AM, Avishay Balderman 
wrote:

>  Hi Dustin
>
> I agree with the concept you described but as far as I understand it is
> not currently supported in Neutron.
>
> So a driver should be fully compatible with the interface it implements.
>
>
>
> Avishay
>
>
>
> *From:* Dustin Lundquist [mailto:dus...@null-ptr.net]
> *Sent:* Tuesday, June 24, 2014 5:41 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7
> Rule - compare_type values
>
>
>
> I think the API should provide a richly featured interface, and
> individual drivers should indicate if they support the provided
> configuration. For example, there is a spec for a Linux LVS LBaaS driver;
> this driver would not support TLS termination or any layer 7 features, but
> would still be valuable for some deployments. The user experience of such a
> solution could be improved if the driver propagated up a message
> specifically identifying the unsupported feature.
>
>
>
>
>
> -Dustin
>
>
>
> On Tue, Jun 24, 2014 at 4:28 AM, Avishay Balderman 
> wrote:
>
> Hi
>
> One of L7 Rule attributes is ‘compare_type’.
>
> This field is the match operator that the rule should activate against the
> value found in the request.
>
> Below is list of the possible values:
>
> - Regexp
>
> - StartsWith
>
> - EndsWith
>
> - Contains
>
> - EqualTo (*)
>
> - GreaterThan (*)
>
> - LessThan (*)
>
>
>
> The last 3 operators (*) in the list are used in numerical matches.
>
> Radware's load balancing backend does not support those operators “out of
> the box” and a significant development effort would be needed in order to
> support them.
>
> We are afraid we will miss the Juno timeframe if we have to focus on
> supporting the numerical operators.
>
> Therefore we ask to support the non-numerical operators for Juno and add
> support for the numerical operators post-Juno.
>
>
>
> See https://review.openstack.org/#/c/99709/4/specs/juno/lbaas-l7-rules.rst
>
>
>
> Thanks
>
> Avishay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
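To make the approach above concrete, a driver could declare the compare_type
values it supports and raise an error that names the unsupported feature, so
the message can be propagated back to the user instead of a generic backend
failure. The sketch below is illustrative only; the class and exception names
are hypothetical and are not part of the current Neutron LBaaS interface.

# Hypothetical sketch: a driver declaring the L7 compare_type values it
# supports and rejecting the rest with a feature-specific error.
SUPPORTED_COMPARE_TYPES = frozenset(
    ['Regexp', 'StartsWith', 'EndsWith', 'Contains'])


class UnsupportedL7CompareType(Exception):
    def __init__(self, compare_type):
        super(UnsupportedL7CompareType, self).__init__(
            "L7 rule compare_type '%s' is not supported by this driver; "
            "supported values: %s"
            % (compare_type, ', '.join(sorted(SUPPORTED_COMPARE_TYPES))))


class ExampleL7Driver(object):
    def create_l7_rule(self, rule):
        # Validate before touching the backend so the user gets a clear,
        # feature-specific error instead of a generic provisioning failure.
        if rule['compare_type'] not in SUPPORTED_COMPARE_TYPES:
            raise UnsupportedL7CompareType(rule['compare_type'])
        self._apply_rule_on_backend(rule)

    def _apply_rule_on_backend(self, rule):
        pass  # backend-specific provisioning would go here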


[openstack-dev] [Nova][Scheduler]

2014-06-24 Thread Abbass MAROUNI

Hi,

I was wondering if there's a way to set a tag (key/value) on a Virtual
Machine from within a scheduler filter?

I want to be able to tag a machine with a specific key/value after it
passes my custom filter.


Thanks,

--
--
Abbass MAROUNI
VirtualScale


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
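For context, a scheduler filter only answers pass/fail for each candidate
host; it is not normally a place to persist anything on the instance. A
minimal sketch of a custom filter is below, assuming the Icehouse/Juno-era
filter interface: it records a key/value in the in-memory filter_properties
dict it is handed, and the comments mark where a separate Nova API/DB call
would be needed to tag the instance persistently. The filter name and the
tag key are made up for illustration.

# Rough sketch of a custom Nova scheduler filter. The key/value written
# here lives only in filter_properties for the duration of the scheduling
# request; persisting a real tag (e.g. instance metadata) would require a
# separate call to the Nova API/DB outside the filter.
from nova.scheduler import filters


class TaggingFilter(filters.BaseHostFilter):
    """Pass hosts matching a custom condition and note which host passed."""

    def host_passes(self, host_state, filter_properties):
        if not self._my_custom_condition(host_state, filter_properties):
            return False
        # In-memory note only; 'my_filter_tags' is a hypothetical key.
        tags = filter_properties.setdefault('my_filter_tags', {})
        tags['passed_host'] = host_state.host
        return True

    def _my_custom_condition(self, host_state, filter_properties):
        return True  # placeholder for the real check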


Re: [openstack-dev] [Cinder] About storing volume format info for filesystem-based drivers

2014-06-24 Thread John Griffith
On Tue, Jun 24, 2014 at 9:42 AM, Avishay Traeger 
wrote:

> One more reason why block storage management doesn't really work on file
> systems.  I'm OK with storing the format, but that just means you fail
> migration/backup operations with different formats, right?
>

+1... so nice that somebody else said it for me this time!!

We need to make sure this is completely abstracted from the end user, and
that the manager can make the right decisions (i.e. implement a way to work
from one format to the other).

>
>
> On Mon, Jun 23, 2014 at 6:07 PM, Trump.Zhang 
> wrote:
>
>> Hi, all:
>>
>> Currently, there are several filesystem-based drivers in Cinder, such
>> as nfs, glusterfs, etc. Multiple volume formats other than "raw" can
>> potentially be supported in these drivers, such as qcow2, raw, sparse, etc.
>>
>> However, Cinder does not store the actual format of a volume and
>> assumes all volumes are in "raw" format. This has, or will have, several
>> problems:
>>
>> 1. For volume migration, the generic migration implementation in
>> Cinder uses the "dd" command to copy the "src" volume to the "dest" volume.
>> If the src volume is in "qcow2" format, the instance will not get the right
>> data from the volume after the dest volume is attached, because the info
>> returned from Cinder states that the volume's format is "raw" rather than
>> "qcow2".
>> 2. For volume backup, the backup driver also assumes that src
>> volumes are in "raw" format; other formats are not supported.
>>
>> Indeed, the glusterfs driver uses the "qemu-img info" command to detect
>> the format of a volume. However, as the comment from Duncan in [1] says,
>> this auto-detection method has many possible error / exploit vectors: if
>> the beginning of a "raw" volume happens to look like a "qcow2" header, the
>> auto-detection method will wrongly identify the volume as "qcow2".
>>
>> I propose that the "format" info be added to the "admin_metadata" of
>> volumes and enforced on all operations, such as create, copy, migrate
>> and retype. The "format" will only be set / updated for filesystem-based
>> drivers; other drivers will not contain this metadata and will have a
>> default "raw" format.
>>
>> Any advice?
>>
>> [1] https://review.openstack.org/#/c/100529/
>>
>> --
>> ---
>> Best Regards
>>
>> Trump.Zhang
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
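As a rough illustration of the manager-level abstraction John describes
above, a helper could compare the stored formats of the source and
destination volumes and choose between a plain byte copy and a format
conversion. This is only a sketch and assumes the format ends up in the
volume's admin_metadata as proposed in this thread; none of these helper
names exist in Cinder today.

# Sketch only: decide how to move data between two volumes based on a
# stored format, assuming the format is kept in admin_metadata with a
# default of 'raw' for drivers that never set it.
def _volume_format(volume):
    return volume.get('admin_metadata', {}).get('format', 'raw')


def copy_volume_data(src_volume, src_path, dest_volume, dest_path):
    src_fmt = _volume_format(src_volume)
    dest_fmt = _volume_format(dest_volume)
    if src_fmt == 'raw' and dest_fmt == 'raw':
        _dd_copy(src_path, dest_path)  # plain byte-for-byte copy is safe
    else:
        # Format-aware copy, e.g. qemu-img convert; see the sketch after
        # Duncan's message below.
        _convert(src_path, src_fmt, dest_path, dest_fmt)


def _dd_copy(src_path, dest_path):
    pass  # the existing generic block copy


def _convert(src_path, src_fmt, dest_path, dest_fmt):
    pass  # format-aware copy between the two stored formats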


Re: [openstack-dev] [Cinder] About storing volume format info for filesystem-based drivers

2014-06-24 Thread Duncan Thomas
On 24 June 2014 16:42, Avishay Traeger  wrote:
> One more reason why block storage management doesn't really work on file
> systems.  I'm OK with storing the format, but that just means you fail
> migration/backup operations with different formats, right?

Actually I think storing the format *fixes* those cases, since the
driver knows what the source format is to get a raw stream of bytes
out. It was in trying to fix backup that this problem was found.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
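To make the "raw stream of bytes" point concrete: once the source format is
known from stored metadata rather than probed, producing a raw copy for
backup or migration is a single qemu-img call. A minimal sketch follows,
assuming qemu-img is available on the node; this is not the existing Cinder
backup code path.

# Minimal sketch: copy a volume file of a known, trusted format to a raw
# destination, e.g. before a generic dd-style migration or a backup.
import subprocess


def convert_to_raw(src_path, src_format, dest_path):
    # Passing -f explicitly is the point: the stored format is trusted and
    # qemu-img is never asked to probe the image.
    subprocess.check_call([
        'qemu-img', 'convert',
        '-f', src_format,
        '-O', 'raw',
        src_path, dest_path])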


Re: [openstack-dev] [TripleO] [Heat] Reminder: Mid-cycle Meetup - Attendance Confirmation

2014-06-24 Thread Jordan OMara

On 24/06/14 10:55 -0400, Jordan OMara wrote:

On 20/06/14 16:26 -0400, Charles Crouch wrote:

Any more takers for the tripleo mid-cycle meetup in Raleigh? If so, please
sign up on the etherpad below.

The hotel group room rate will be finalized on Monday Jul 23rd (US
time); after that, you will be on your own to find accommodation.


Thanks
Charles



Just an update that I've got us a block of rooms reserved at the
nearest, cheapest hotel (the Marriott in downtown Raleigh, about 200
yards from the Red Hat office) - I'll have details on how to actually
book at this rate in just a few minutes.


Please use the following link to reserve at the Marriott (it's copied
on the etherpad):

http://tinyurl.com/redhat-marriott

We have a 24-room block reserved at that rate from SUN-FRI
--
Jordan O'Mara 
Red Hat Engineering, Raleigh 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] About storing volume format info for filesystem-based drivers

2014-06-24 Thread Avishay Traeger
One more reason why block storage management doesn't really work on file
systems.  I'm OK with storing the format, but that just means you fail
migration/backup operations with different formats, right?


On Mon, Jun 23, 2014 at 6:07 PM, Trump.Zhang 
wrote:

> Hi, all:
>
> Currently, there are several filesystem-based drivers in Cinder, such
> as nfs, glusterfs, etc. Multiple volume formats other than "raw" can
> potentially be supported in these drivers, such as qcow2, raw, sparse, etc.
>
> However, Cinder does not store the actual format of a volume and
> assumes all volumes are in "raw" format. This has, or will have, several
> problems:
>
> 1. For volume migration, the generic migration implementation in
> Cinder uses the "dd" command to copy the "src" volume to the "dest" volume.
> If the src volume is in "qcow2" format, the instance will not get the right
> data from the volume after the dest volume is attached, because the info
> returned from Cinder states that the volume's format is "raw" rather than
> "qcow2".
> 2. For volume backup, the backup driver also assumes that src volumes
> are in "raw" format; other formats are not supported.
>
> Indeed, the glusterfs driver uses the "qemu-img info" command to detect the
> format of a volume. However, as the comment from Duncan in [1] says, this
> auto-detection method has many possible error / exploit vectors: if
> the beginning of a "raw" volume happens to look like a "qcow2" header, the
> auto-detection method will wrongly identify the volume as "qcow2".
>
> I propose that the "format" info be added to the "admin_metadata" of
> volumes and enforced on all operations, such as create, copy, migrate
> and retype. The "format" will only be set / updated for filesystem-based
> drivers; other drivers will not contain this metadata and will have a
> default "raw" format.
>
> Any advice?
>
> [1] https://review.openstack.org/#/c/100529/
>
> --
> ---
> Best Regards
>
> Trump.Zhang
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
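A rough sketch of what the admin_metadata proposal amounts to at the driver
level: record the real on-disk format when the volume is created and read it
back, defaulting to "raw", whenever a format-sensitive operation runs. The
database call shown is an assumption about the Cinder admin-metadata API, and
the driver and helper names are hypothetical.

# Illustrative sketch of storing the volume format in admin_metadata for a
# filesystem-based driver. volume_admin_metadata_update is assumed to take
# (context, volume_id, metadata, delete_existing); check the real API.
class ExampleFsDriver(object):
    def __init__(self, db):
        self.db = db

    def create_volume(self, context, volume):
        fmt = 'qcow2' if self._prefer_qcow2(volume) else 'raw'
        self._create_volume_file(volume, fmt)
        # Record the real on-disk format so later operations never probe it.
        self.db.volume_admin_metadata_update(
            context, volume['id'], {'format': fmt}, False)

    @staticmethod
    def get_format(volume):
        # Volumes without the key are treated as raw.
        return volume.get('admin_metadata', {}).get('format', 'raw')

    def _prefer_qcow2(self, volume):
        return True  # e.g. when thin provisioning is wanted

    def _create_volume_file(self, volume, fmt):
        pass  # e.g. qemu-img create -f <fmt> <path> <size>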


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread David Shrewsbury
On Tue, Jun 24, 2014 at 11:28 AM, Jay Pipes  wrote:

>
> """
> This is a summary.
>
> And this is a description
> """
>
> will result in a failure of H404, due to the "This is a summary." not
> being on the first line, like this:
>
> """This is a summary.
>
> And this is a description
> """
>
> It is that silliness that I deplore, not the summary line followed by a
> newline issue.



Yes!! That is definitely the most silly silliness, and I deplore it.


--
David Shrewsbury (Shrews)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-24 Thread Andrey Danin
What about having Astute gzip old logs and place them in a special
directory managed under logrotate.d, so that logrotate removes untouched
logs after one month? A rough sketch of this follows at the end of this
message.


On Tue, Jun 24, 2014 at 6:57 PM, Aleksandr Didenko 
wrote:

> Hi,
>
> If a user runs experiments creating/deleting clusters, then taking
> care of old logs is the user's responsibility, I suppose. Fuel configures
> log rotation with compression for remote logs, so old logs will be gzipped
> and will not take much space.
>
> In the case of an additional boolean parameter, the default value should be
> "0 - don't touch old logs".
>
> --
> Regards,
> Alex
>
>
> On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Guys,
>>
>> What do you think of removing node logs on the master node right after
>> removing a node from a cluster?
>>
>> The issue is that when a user experiments, he creates and deletes clusters,
>> and old unused directories remain and take up disk space. On the other
>> hand, it is not so hard to imagine a situation where a user would like to
>> be able to take a look at old logs.
>>
>> My suggestion here is to add a boolean parameter to the settings which will
>> manage this piece of logic (1 - remove old logs, 0 - don't touch old logs).
>>
>> Thanks for your opinions.
>>
>> Vladimir Kozhukalov
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
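A rough sketch of the Astute-side half of this idea: when a node is removed,
compress its log directory into an archive location that logrotate.d (or a
simple cron job) can purge after a month. The paths and the retention
mechanism are assumptions, not the current Fuel layout.

# Sketch only: archive a removed node's logs instead of deleting them.
# /var/log/remote/<node> and /var/log/old-nodes are assumed paths; purging
# archives older than ~30 days is left to logrotate.d or cron.
import os
import shutil
import tarfile


def archive_node_logs(node_name,
                      log_root='/var/log/remote',
                      archive_root='/var/log/old-nodes'):
    src = os.path.join(log_root, node_name)
    if not os.path.isdir(src):
        return
    if not os.path.isdir(archive_root):
        os.makedirs(archive_root)
    archive_path = os.path.join(archive_root, '%s.tar.gz' % node_name)
    with tarfile.open(archive_path, 'w:gz') as tar:
        tar.add(src, arcname=node_name)
    shutil.rmtree(src)  # free the space held by the uncompressed logs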


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-24 Thread Jay Pipes



On 06/24/2014 07:32 AM, Daniel P. Berrange wrote:

On Tue, Jun 24, 2014 at 10:55:41AM +, Day, Phil wrote:

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com]
Sent: 23 June 2014 10:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction
as part of resize ?

On 18 June 2014 21:57, Jay Pipes  wrote:

On 06/17/2014 05:42 PM, Daniel P. Berrange wrote:


On Tue, Jun 17, 2014 at 04:32:36PM +0100, Pádraig Brady wrote:


On 06/13/2014 02:22 PM, Day, Phil wrote:


I guess the question I’m really asking here is:  “Since we know
resize down won’t work in all cases, and the failure if it does
occur will be hard for the user to detect, should we just block it
at the API layer and be consistent across all hypervisors?”



+1

There is an existing libvirt blueprint:

https://blueprints.launchpad.net/nova/+spec/libvirt-resize-disk-down
which I've never been in favor of:
https://bugs.launchpad.net/nova/+bug/1270238/comments/1



All of the functionality around resizing VMs to match a different
flavour seems to be a recipe for unleashing a torrent of unfixable
bugs, whether resizing disks, adding CPUs, RAM or any other aspect.



+1

I'm of the opinion that we should plan to rip resize functionality out
of (the next major version of) the Compute API and have a *single*,
*consistent* API for migrating resources. No more "API extension X for
migrating this kind of thing, and API extension Y for this kind of
thing, and API extension Z for migrating /live/ this type of thing."

There should be One "move" API to Rule Them All, IMHO.


+1 for one "move" API; the two evolved independently, in different
drivers, and it's time to unify them!

That plan got stuck behind the refactoring of live-migrate and migrate to the
conductor, to help unify the code paths. But it kinda got stalled (I must
rebase those patches...).

Just to be clear, I am against removing resize down from v2 without a
deprecation cycle. But I am pro starting that deprecation cycle.

John


I'm not sure Daniel and Jay are arguing for the same thing here, John:
I *think* Daniel is saying "drop resize altogether" and Jay is saying
"unify it with migration" - so I'm a tad confused about which of those
you're agreeing with.


Yes, I'm personally for removing resize completely since, IMHO, no matter
how many bugs we fix it is always going to be a mess. That said I realize
that people probably find resize-up useful, so I won't push hard to kill
it - we should just recognize that it is always going to be a mess which
does not result in the same setup you'd get if you booted fresh with the
new settings.


I am of the opinion that the different API extensions, and the fact that
they have evolved separately, have created a giant mess for users. We
should consolidate the API into a single "move" API that can take an
optional new set of resources (via a newly specified flavor), automatically
"live move" the instance if possible, and fall back to a cold move if it
isn't, with no confusing options or additional/variant API calls needed by
the user.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-24 Thread Igor Kalnitsky
Hi guys,

What about our diagnostic snapshot?

I mean, we're going to make a snapshot of the entire /var/log, and
obviously these old logs will be included in the snapshot. Should we skip
them, or is that situation OK?

- Igor




On Tue, Jun 24, 2014 at 5:57 PM, Aleksandr Didenko 
wrote:

> Hi,
>
> If a user runs experiments creating/deleting clusters, then taking
> care of old logs is the user's responsibility, I suppose. Fuel configures
> log rotation with compression for remote logs, so old logs will be gzipped
> and will not take much space.
>
> In the case of an additional boolean parameter, the default value should be
> "0 - don't touch old logs".
>
> --
> Regards,
> Alex
>
>
> On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Guys,
>>
>> What do you think of removing node logs on the master node right after
>> removing a node from a cluster?
>>
>> The issue is that when a user experiments, he creates and deletes clusters,
>> and old unused directories remain and take up disk space. On the other
>> hand, it is not so hard to imagine a situation where a user would like to
>> be able to take a look at old logs.
>>
>> My suggestion here is to add a boolean parameter to the settings which will
>> manage this piece of logic (1 - remove old logs, 0 - don't touch old logs).
>>
>> Thanks for your opinions.
>>
>> Vladimir Kozhukalov
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Barbican] Barebones CA

2014-06-24 Thread Clark, Robert Graham
Hi all,

I’m sure this has been discussed somewhere and I’ve just missed it.

Is there any value in creating a basic ‘CA’ and plugin to satisfy 
tests/integration in Barbican? I’m thinking something that probably performs 
OpenSSL certificate operations itself, ugly but perhaps useful for some things?

-Rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
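For what it's worth, the "ugly but useful" core of such a plugin can be a
handful of openssl invocations. Below is a minimal sketch of a toy CA that
creates a self-signed CA certificate and then signs incoming CSRs by shelling
out to the openssl CLI; it is purely illustrative and is not wired into
Barbican's plugin interface.

# Minimal, deliberately ugly sketch of a toy CA driven via the openssl CLI.
# Not a Barbican plugin, just the certificate operations such a plugin
# could wrap. File paths are illustrative.
import subprocess


def make_ca(ca_key='ca.key', ca_cert='ca.crt'):
    # Self-signed CA certificate, valid for one year.
    subprocess.check_call([
        'openssl', 'req', '-x509', '-newkey', 'rsa:2048', '-nodes',
        '-keyout', ca_key, '-out', ca_cert,
        '-days', '365', '-subj', '/CN=Barebones Test CA'])


def sign_csr(csr_path, out_cert, ca_key='ca.key', ca_cert='ca.crt'):
    # Sign an incoming CSR with the toy CA, valid for 90 days.
    subprocess.check_call([
        'openssl', 'x509', '-req', '-in', csr_path,
        '-CA', ca_cert, '-CAkey', ca_key, '-CAcreateserial',
        '-out', out_cert, '-days', '90'])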


Re: [openstack-dev] [Fuel] Removing old node logs

2014-06-24 Thread Aleksandr Didenko
Yeah, I thought about the diagnostic snapshot too. Maybe it would be better
to implement per-environment diagnostic snapshots, i.e. add diagnostic
snapshot generate/download buttons/links in the environment actions tab.
Such a snapshot would contain info/logs about the Fuel master node and only
the nodes assigned to the environment.


On Tue, Jun 24, 2014 at 6:27 PM, Igor Kalnitsky 
wrote:

> Hi guys,
>
> What about our diagnostic snapshot?
>
> I mean, we're going to make a snapshot of the entire /var/log, and
> obviously these old logs will be included in the snapshot. Should we skip
> them, or is that situation OK?
>
> - Igor
>
>
>
>
> On Tue, Jun 24, 2014 at 5:57 PM, Aleksandr Didenko 
> wrote:
>
>> Hi,
>>
>> If a user runs experiments creating/deleting clusters, then taking care
>> of old logs is the user's responsibility, I suppose. Fuel configures log
>> rotation with compression for remote logs, so old logs will be gzipped
>> and will not take much space.
>>
>> In the case of an additional boolean parameter, the default value should be
>> "0 - don't touch old logs".
>>
>> --
>> Regards,
>> Alex
>>
>>
>> On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Guys,
>>>
>>> What do you think of removing node logs on the master node right after
>>> removing a node from a cluster?
>>>
>>> The issue is that when a user experiments, he creates and deletes
>>> clusters, and old unused directories remain and take up disk space. On
>>> the other hand, it is not so hard to imagine a situation where a user
>>> would like to be able to take a look at old logs.
>>>
>>> My suggestion here is to add a boolean parameter to the settings which
>>> will manage this piece of logic (1 - remove old logs, 0 - don't touch old
>>> logs).
>>>
>>> Thanks for your opinions.
>>>
>>> Vladimir Kozhukalov
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-24 Thread Jay Pipes

On 06/24/2014 10:34 AM, Robert Collins wrote:

On 23 June 2014 07:04, Jay Pipes  wrote:


I would also love to get rid of H404, otherwise known as the dumb rule that
says if you have a multiline docstring, that there must be a summary line,
then a blank line, then a detailed description. It makes things like this
illegal, which, IMHO, is stupid:

def do_something(self, thing):
 """We do something with the supplied thing, so that something else
 is also done with this other thing. Make sure the other thing is
 of type something.
 """
 pass

Likewise, I'd love to be able to have a newline start the docstring, like
so:

def do_something(self, thing):
 """
 We do something with the supplied thing, so that something else
 is also done with this other thing. Make sure the other thing is
 of type something.
 """
 pass

But there's a rule that prevents that as well...

To be clear, I don't think all hacking rules are silly. To the contrary,
there are many that are reasonable and useful. However, I'd prefer to focus
on things that make the code more readable, not less readable, and rules
that enforce Pythonic idioms, not some random hacker's idea of good style.


So
"""
Lorem ipsum

Foo bar baz
"""

is a valid PEP-257 docstring, though a bit suspect on context. In fact
*all* leading whitespace is stripped -

"""foo"""

and

"""
foo
"""

are equivalent for docstrings - even though they aren't equivalent for
the mk1 human eyeball reading them.

So in both cases I would have expected you to be bitten by the
first-line rule, which exists for API extractors (such as
help(module)) so that they have a useful, meaningful summary they can
pull out. I think it aids immensely in docstring readability - and it's
certainly the convention throughout the rest of the Python universe, so
IMO it comes as part and parcel when you ask for Python.


"""
This is a summary.

And this is a description
"""

will result in a failure of H404, due to the "This is a summary." not 
being on the first line, like this:


"""This is a summary.

And this is a description
"""

It is that silliness that I deplore, not the summary line followed by a 
newline issue.


-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

