Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new features in-tree

2014-08-14 Thread Sridar Kandaswamy (skandasw)
Hi Aaron:


There is a certain fear of another cascading chain of emails, so it is with
hesitation that I send this one out. :-)


1) I could not agree with you more on the issue with the logs and the pain of
debugging issues here. Yes, for sure, bugs do keep popping up, but often
(speaking for FWaaS), given the L3 agent interactions, there are a multitude
of reasons for a failure. An L3 agent crash or a router issue also manifests
itself as an FWaaS issue - I think your first paste is along those lines
(though perhaps I am wrong without much context here).


The L3 agent - service coexistence is far from ideal, but this experience has
led to two proposals: a vendor proposal [1] that actually tries to address
such agent limitations, and collaboration on another community proposal [2] to
make the L3 agent better suited to hosting services. Hopefully [2] will
get picked up in K and should help provide the necessary infrastructure to
clean up the reference implementation.


2) Regarding your point on the FWaaS API - the intent of the feature by design
was to keep the service abstraction separate from how it is inserted into the
network, to keep this vendor/technology neutral. The first priority post-Havana
was to address service insertion to get away from the all-routers model
[3], but it did not get the blessings needed. Now, with a redrafted proposal for
Juno [4], an effort is being made to address this for the second time
in the two releases post-H.


In general, I would request that before we decide to go ahead and start
moving things out into an incubator area, more discussion is had. We don’t
want to end up in a situation in K-3 where we find out that this model does
not quite work for whatever reason. I also wonder about the hoops to get out
of incubation. As vendors who try to align with the community and upstream
our work, we don’t want to end up waiting more cycles - there is quite a bit
of frustration felt here too.


Let's also think about the impacts on feature velocity; somehow that keeps
popping into my head every time I buy a book from a certain online retailer. :-)


[1] https://review.openstack.org/#/c/90729/

[2] https://review.openstack.org/#/c/91532/

[3] https://review.openstack.org/#/c/62599/

[4] https://review.openstack.org/#/c/93128/


Thanks


Sridar


From: Aaron Rosen mailto:aaronoro...@gmail.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, August 13, 2014 at 3:56 PM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new 
features in-tree

Hi,

I've been thinking a good bit about the right way to move forward with
this and, in general, the right way new services should be added. Yesterday I was
working on a bug that was causing some problems in the openstack infra. We
tracked down the issue, then I uploaded a patch for it. A little while later
Jenkins voted back a -1, so I started looking through the logs to see what
the source of the failure was (which was actually unrelated to my patch). The
random failure was in the fwaas/vpn/l3-agent code, which all outputs to the same log
file - one that contains many traces for every run, even successful ones. In one skim
of this log file I was able to spot 4 bugs [1], which shows that these new
"experimental" services we've added to neutron have underlying problems
(even though they've been in the tree for 2+ releases now). This puts a huge
strain on the whole openstack development community, as they are always running
'recheck no bug' due to neutron failures.

If you look at the fwaas work that was done: it merged over two releases ago
and still does not have a complete API, as there is no concept of where
enforcement should be done. Today, enforcement is done across all of a tenant's
routers, making it more or less useless imho, and we're just carrying it along in
the tree (and it's causing us problems)!

I think Mark's idea of neutron-incubator[2] is a great step forward to 
improving neutron.

We can easily move these things out of the neutron source tree and plug
them in here:
https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L52
https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L72
(GASP: we have shipped our own APIs here to customers before we were able
to upstream them).
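Roughly, those plug points look like this in neutron.conf (the module paths
below are hypothetical, purely to illustrate the mechanism):

```ini
# neutron.conf - sketch of plugging an out-of-tree component in, assuming a
# hypothetical package "neutron_fwaas_ext" that ships the plugin class and
# its API extension modules.
[DEFAULT]
# service_plugins takes importable class paths (or entry-point aliases)
service_plugins = neutron_fwaas_ext.plugin.FirewallPlugin

# api_extensions_path points at a directory containing the extension modules
api_extensions_path = /opt/stack/neutron_fwaas_ext/extensions
```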

This allows us to decouple these experimental things from the neutron core and
lets us release these components on their own, making things more
modular, maintainable, and stable (I think these things might even be better off
long term living out of the tree). Most importantly, though, it doesn't put a
burden on everyone else.


Best,

Aaron


[1]
http://paste.openstack.org/show/94664/
http://paste.openstack.org/show/94670/
http://paste.openstack.org/show/94663/
http://paste.openstack.org/show/94662/

[2] - https://etherpad.openstack.o

[openstack-dev] [heat] heat docker multi host scheduling support

2014-08-14 Thread Malawade, Abhijeet
Hi all,

I am trying to use heat to create docker containers. I have configured the
heat-docker plugin, and I am able to create a stack using heat successfully.

To start a container on a different host, we need to provide 'docker_endpoint' in
the template. For this, we have to put the address of the host where the container
will run into the template.
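For illustration, the endpoint ends up hard-coded in the template roughly like
this (the resource type comes from the heat-docker plugin; the endpoint address
is a placeholder):

```yaml
# Sketch of a HOT template pinning a container to one docker host via
# docker_endpoint. The address 10.0.0.5 is a placeholder, not a real host.
heat_template_version: 2013-05-23

resources:
  my_container:
    type: "DockerInc::Docker::Container"
    properties:
      docker_endpoint: "tcp://10.0.0.5:2375"
      image: "ubuntu"
      cmd: ["sleep", "3600"]
```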

Is there any way to schedule a docker container on available hosts using the
heat-docker plugin without giving 'docker_endpoint' in the template file?
Does the heat-docker plugin support managing a cluster of docker hosts with
scheduling logic?

Please let me know your suggestions on the same.

Thanks,
Abhijeet

OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?

2014-08-14 Thread Nikola Đipanov
On 08/13/2014 06:05 PM, Sylvain Bauza wrote:
> 
> On 13/08/2014 12:21, Sylvain Bauza wrote:
>>
>> On 12/08/2014 22:06, Sylvain Bauza wrote:
>>>
>>> On 12/08/2014 18:54, Nikola Đipanov wrote:
 On 08/12/2014 04:49 PM, Sylvain Bauza wrote:
> (sorry for reposting, missed 2 links...)
>
> Hi Nikola,
>
> On 12/08/2014 12:21, Nikola Đipanov wrote:
>> Hey Nova-istas,
>>
>> While I was hacking on [1] I was considering how to approach the fact
>> that we now need to track one more thing (NUMA node utilization)
>> in our
>> resources. I went with - "I'll add it to compute nodes table"
>> thinking
>> it's a fundamental enough property of a compute host that it
>> deserves to
>> be there, although I was considering  Extensible Resource Tracker
>> at one
>> point (ERT from now on - see [2]) but looking at the code - it did
>> not
>> seem to provide anything I desperately needed, so I went with
>> keeping it
>> simple.
>>
>> So fast-forward a few days, and I caught myself solving a problem
>> that I
>> kept thinking ERT should have solved - but apparently hasn't, and I
>> think it is fundamentally a broken design without it - so I'd really
>> like to see it re-visited.
>>
>> The problem can be described by the following lemma (if you take
>> 'lemma'
>> to mean 'a sentence I came up with just now' :)):
>>
>> """
>> Due to the way scheduling works in Nova (roughly: pick a host
>> based on
>> stale(ish) data, rely on claims to trigger a re-schedule), _same
>> exact_
>> information that scheduling service used when making a placement
>> decision, needs to be available to the compute service when
>> testing the
>> placement.
>> """
>>
>> This is not the case right now, and the ERT does not propose any
>> way to
>> solve it - (see how I hacked around needing to be able to get
>> extra_specs when making claims in [3], without hammering the DB). The
>> result will be that any resource that we add and needs user supplied
>> info for scheduling an instance against it, will need a buggy
>> re-implementation of gathering all the bits from the request that
>> scheduler sees, to be able to work properly.
> Well, ERT does provide a plugin mechanism for testing resources at the
> claim level. This is the plugin responsibility to implement a test()
> method [2.1] which will be called when test_claim() [2.2]
>
> So, provided this method is implemented, a local host check can be
> done
> based on the host's view of resources.
>
>
 Yes - the problem is there is no clear API to get all the needed
 bits to do so - especially the user-supplied ones from image and flavors.
 On top of that, in the current implementation we only pass a hand-wavy
 'usage' blob in. This makes anyone wanting to use this in conjunction
 with some of the user-supplied bits roll their own
 'extract_data_from_instance_metadata_flavor_image' or similar, which is
 horrible and also likely bad for performance.
>>>
>>> I see your concern where there is no interface for user-facing
>>> resources like flavor or image metadata.
>>> I also think indeed that the big 'usage' blob is not a good choice
>>> for long-term vision.
>>>
>>> That said, I don't think we should, as we say in French, throw the baby
>>> out with the bathwater... i.e. the problem is with the RT, not the
>>> ERT (apart from the mention of third-party API that you noted - I'll get to it later below)
>> This is obviously a bigger concern when we want to allow users to
>> pass
>> data (through image or flavor) that can affect scheduling, but
>> still a
>> huge concern IMHO.
> And here is where I agree with you: at the moment, ResourceTracker (and
> consequently the Extensible RT) only provides the view of the resources
> that the host knows about (see my point above), and possibly some other
> resources are missing.
> So, whatever your choice of going with or without ERT, your patch [3]
> still deserves it if we want to avoid looking up the DB each time a
> claim goes.
>
>
>> As I see that there are already BPs proposing to use this IMHO broken
>> ERT ([4] for example), which will surely add to the proliferation of
>> code that hacks around these design shortcomings in what is already a
>> messy, but also crucial (for perf as well as features) bit of Nova
>> code.
> Two distinct implementations of that spec (i.e. instances and flavors)
> have been proposed [2.3] [2.4], so reviews are welcome. If you look at the
> test() method, it is a no-op for both plugins. I'm open to comments
> because I have the stated problem: how can we define a limit on just a
> counter of instances and flavors?
>
 Will look at these - but none of them seem to hit the issue I am
 complaining about

Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-08-14 Thread Christopher Yeoh
On Wed, 13 Aug 2014 18:52:05 -0400
Jay Pipes  wrote:

> On 08/13/2014 06:35 PM, Russell Bryant wrote:
> > On 08/13/2014 06:23 PM, Mark McLoughlin wrote:
> >> On Wed, 2014-08-13 at 12:05 -0700, James E. Blair wrote:
> >>> cor...@inaugust.com (James E. Blair) writes:
> >>>
>  Sean Dague  writes:
> 
> > This has all gone far enough that someone actually wrote a
> > Grease Monkey script to purge all the 3rd Party CI content out
> > of Jenkins UI. People are writing mail filters to dump all the
> > notifications. Dan Berange filters all them out of his gerrit
> > query tools.
> 
>  I should also mention that there is a pending change to do
>  something similar via site-local Javascript in our Gerrit:
> 
> https://review.openstack.org/#/c/95743/
> 
>  I don't think it's an ideal long-term solution, but if it works,
>  we may have some immediate relief without all having to install
>  greasemonkey scripts.
> >>>
> >>> You may have noticed that this has merged, along with a further
> >>> change that shows the latest results in a table format.  (You may
> >>> need to force-reload in your browser to see the change.)
> >>
> >> Beautiful! Thank you so much to everyone involved.
> >
> > +1!  Love this.
> 
> Indeed. Amazeballs.
>

Agreed! This is a really nice improvement

Chris



Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new features in-tree

2014-08-14 Thread Wuhongning
FWaaS can't seamlessly work with DVR yet. A BP [1] has been submitted, but it
can only handle N-S traffic, leaving E-W traffic untouched. If we implement the
E-W firewall in DVR, the iptables rules might be applied on a per-port basis, so
there is some overlap with security groups (can we imagine a packet running into
the iptables hooks twice between the VM and the wire, in both the ingress and
egress directions?).

Maybe the overall service plugins (including service extensions in ML2) need
some cleaning up; it seems that Neutron is just built from separate single
blocks.

[1]  
http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/juno/neutron-dvr-fwaas.rst


From: Sridar Kandaswamy (skandasw) [skand...@cisco.com]
Sent: Thursday, August 14, 2014 3:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new 
features in-tree



Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?

2014-08-14 Thread Nikola Đipanov
On 08/13/2014 07:40 PM, Sylvain Bauza wrote:
> 
> On 13/08/2014 18:40, Brian Elliott wrote:
>> On Aug 12, 2014, at 5:21 AM, Nikola Đipanov  wrote:
>>> The problem can be described by the following lemma (if you take 'lemma'
>>> to mean 'a sentence I came up with just now' :)):
>>>
>>> """
>>> Due to the way scheduling works in Nova (roughly: pick a host based on
>>> stale(ish) data, rely on claims to trigger a re-schedule), _same exact_
>>> information that scheduling service used when making a placement
>>> decision, needs to be available to the compute service when testing the
>>> placement.
>>> """
>> Correct
>>
>>> This is not the case right now, and the ERT does not propose any way to
>>> solve it - (see how I hacked around needing to be able to get
>>> extra_specs when making claims in [3], without hammering the DB). The
>>> result will be that any resource that we add and needs user supplied
>>> info for scheduling an instance against it, will need a buggy
>>> re-implementation of gathering all the bits from the request that
>>> scheduler sees, to be able to work properly.
>> Agreed, ERT does not attempt to solve this problem of ensuring RT has
>> an identical set of information for testing claims.  I don’t think it
>> was intended to.
>>
>> ERT does solve the issue of bloat in the RT with adding
>> just-one-more-thing to test usage-wise.  It gives a nice hook for
>> inserting your claim logic for your specific use case.
> 
> I think Nikola and I agreed on the fact that ERT is not responsible for
> this design. That said I can talk on behalf of Nikola...
> 

Right - the hooks, however, hook into a piece of code that has not been
designed with this kind of extensibility in mind (to put it politely),
and exposing these hooks so that people can add functionality that is
broken by design is just asking for technical debt to accumulate more
quickly.

> 
>>> This is obviously a bigger concern when we want to allow users to pass
>>> data (through image or flavor) that can affect scheduling, but still a
>>> huge concern IMHO.
>> I think passing additional data through to compute just wasn’t a
>> problem that ERT aimed to solve.  (Paul Murray?)  That being said,
>> coordinating the passing of any extra data required to test a claim
>> that is *not* sourced from the host itself would be a very nice
>> addition.  You are working around it with some caching in your flavor
>> db lookup use case, although one could of course cook up a cleaner
>> patch to pass such data through on the “build this” request to the
>> compute.

The problem is - it would not only be a nice addition - it is
_necessary_ in order to be able to write code that is race-free, as we all
agreed previously when I stated my lemma above :). We can try to
disagree on that, but if we end up agreeing - I find it hard to imagine
someone would defend keeping the ERT. The result would be that we will
still have things like my caching hack and things like [1] popping up
_in addition_ to ERT extensions that don't need user data, and those
that do but don't know it and end up introducing races. All of this is
just bad.

[1] https://review.openstack.org/#/c/77800/
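To make the gap concrete, here is a minimal sketch of what an ERT-style
plugin's claim-side test() hook amounts to (class and method names are
illustrative, not Nova's actual API): test() only sees the host-side 'usage'
blob, so any user-supplied request data (flavor extra_specs, image
properties) has to be fetched by other means.

```python
# Sketch of an ERT-style resource plugin. Illustrative only: the point is
# that test() receives just the host's usage blob, while the 'requested'
# amount must come from the same request data the scheduler saw, which the
# plugin API does not hand in.

class ExampleResourcePlugin(object):
    """Tracks a simple countable resource on one host."""

    def __init__(self, total):
        self.total = total

    def test(self, usage, requested):
        # Claim-test convention: return None on success, or a reason
        # string describing why the claim fails.
        free = self.total - usage.get('example_resource', 0)
        if requested > free:
            return "requested %d but only %d free" % (requested, free)
        return None


plugin = ExampleResourcePlugin(total=4)
usage = {'example_resource': 3}
print(plugin.test(usage, requested=2))  # claim fails: only 1 unit free
print(plugin.test(usage, requested=1))  # claim succeeds -> None
```

The race being discussed lives in how 'requested' is obtained: if the compute
side recomputes it from a DB lookup instead of the exact request the scheduler
used, the two sides can disagree.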

> 
> Indeed, and that's why I think the problem can be resolved thanks to 2
> different things:
> 1. Filters need to look at what ERT is giving them; that's what
> isolate-scheduler-db is trying to do (see my patches [2.3 and 2.4] in
> the previous emails)
> 2. Some extra user request data needs to be checked in the test() method of
> ERT plugins (where claims are done), so I provided a WIP patch for
> discussion: https://review.openstack.org/#/c/113936/
> 
> 

Several shortcomings were discussed on the review, so I won't repeat them
here - but I agree, it's a nice start.

>>> As I see that there are already BPs proposing to use this IMHO broken
>>> ERT ([4] for example), which will surely add to the proliferation of
>>> code that hacks around these design shortcomings in what is already a
>>> messy, but also crucial (for perf as well as features) bit of Nova code.
>>>
>>> I propose to revert [2] ASAP since it is still fresh, and see how we can
>>> come up with a cleaner design.
>>>
>> I think the ERT is forward-progress here, but am willing to review
>> patches/specs on improvements/replacements.

Even though I disagree with several design decisions in addition to the
problem we are discussing here (and feel mildly guilty for not bringing
them up sooner), I would be happy to help with a base-line of things
that need to be fixed, and no more, before we can add it back.

I can see us keeping it, but not allowing any new resource extensions in
before refactoring, though I am not sure I see the real win in that. I am
of course open to hearing other proposals that acknowledge the brokenness.

N.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-14 Thread Mathieu Rohon
Hi,

I would like to add that it would be harder for the community to help
maintain drivers: work such as [1] wouldn't have occurred with an
out-of-tree ODL driver.

[1] https://review.openstack.org/#/c/96459/
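For context, the registration mechanism being discussed is a stevedore entry
point declared in the driver package's own setup.cfg, roughly like this
(package and class names are illustrative, not a real driver):

```ini
# setup.cfg of a hypothetical out-of-tree driver package "example_mech".
# Neutron's ML2 plugin discovers mechanism drivers through the
# neutron.ml2.mechanism_drivers entry-point namespace via stevedore.
[entry_points]
neutron.ml2.mechanism_drivers =
    example = example_mech.driver:ExampleMechanismDriver
```

With the package installed, setting 'mechanism_drivers = example' in
ml2_conf.ini would load it - subject, as Bob notes below, to the driver owner
keeping up with ML2 driver API changes.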

On Wed, Aug 13, 2014 at 1:09 PM, Robert Kukura  wrote:
> One thing to keep in mind is that the ML2 driver API does sometimes change,
> requiring updates to drivers. Drivers that are in-tree get updated along
> with the driver API change. Drivers that are out-of-tree must be updated by
> the owner.
>
> -Bob
>
>
> On 8/13/14, 6:59 AM, ZZelle wrote:
>
> Hi,
>
>
> The important thing to understand is how to integrate with neutron through
> stevedore/entrypoints:
>
> https://github.com/dave-tucker/odl-neutron-drivers/blob/master/setup.cfg#L32-L34
>
>
> Cedric
>
>
> On Wed, Aug 13, 2014 at 12:17 PM, Dave Tucker  wrote:
>>
>> I've been working on this for OpenDaylight
>> https://github.com/dave-tucker/odl-neutron-drivers
>>
>> This seems to work for me (tested Devstack w/ML2) but YMMV.
>>
>> -- Dave
>



Re: [openstack-dev] [all] The future of the integrated release

2014-08-14 Thread Thierry Carrez
Russell Bryant wrote:
> I think perhaps some middle ground makes sense.
> 
> 1) Start doing a better job of generating a priority list, and
> identifying the highest priority items based on group will.
> 
> 2) Expect that reviewers use the priority list to influence their
> general review time.

2b) Discuss those reviews at the weekly team meeting to give them extra
exposure

> 3) Don't actually block other things, should small groups self-organize
> and decide it's important enough to them, even if not to the group as a
> whole.
> 
> That sort of approach still sounds like an improvement over what we have
> today, which is a lack of good priority communication to direct general
> review time.

+1

-- 
Thierry Carrez (ttx)



[openstack-dev] [nova][rally] nova-network, unable to associate floating ip

2014-08-14 Thread Li Tianqing
Hello,
Recently we used rally to test our cloud. We ran the boot_runcommand_delete
task. The configuration is this:
{
    "VMTasks.boot_runcommand_delete": [
        {
            "args": {
                "flavor": {
                    "name": "m1.small"
                },
                "image": {
                    "name": "ubuntu-12-04-raw-rally-test"
                },
                "script": "/home/rally/doc/samples/ec_script/ubuntu_ls_test.sh",
                "interpreter": "bash",
                "username": "root",
                "floating_network": "LTQ",
                "use_floatingip": true,
                "availability_zone": "dell420"
            },
            "runner": {
                "type": "constant",
                "times": 1000,
                "concurrency": 40,
                "timeout": 6000
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                },
                "quotas": {
                    "nova": {
                        "instances": -1,
                        "cores": -1,
                        "ram": -1,
                        "fixed_ips": -1,
                        "floating_ips": -1
                    }
                }
            }
        }
    ]
}
There were almost 39 exceptions for being unable to associate floating IPs.
The output of rally is:
Traceback (most recent call last):
  File "/opt/rally/local/lib/python2.7/site-packages/rally/benchmark/runners/base.py", line 62, in _run_scenario_once
    method_name)(**kwargs) or {}
  File "/opt/rally/local/lib/python2.7/site-packages/rally/benchmark/scenarios/vm/vmtasks.py", line 80, in boot_runcommand_delete
    self._associate_floating_ip(server, floating_ip)
  File "/opt/rally/local/lib/python2.7/site-packages/rally/benchmark/scenarios/utils.py", line 139, in func_atomic_actions
    f = func(self, *args, **kwargs)
  File "/opt/rally/local/lib/python2.7/site-packages/rally/benchmark/scenarios/nova/utils.py", line 402, in _associate_floating_ip
    server.add_floating_ip(address, fixed_address=fixed_address)
  File "/opt/rally/local/lib/python2.7/site-packages/novaclient/v1_1/servers.py", line 123, in add_floating_ip
    self.manager.add_floating_ip(self, address, fixed_address)
  File "/opt/rally/local/lib/python2.7/site-packages/novaclient/v1_1/servers.py", line 631, in add_floating_ip
    self._action('addFloatingIp', server, {'address': address})
  File "/opt/rally/local/lib/python2.7/site-packages/novaclient/v1_1/servers.py", line 1190, in _action
    return self.api.client.post(url, body=body)
  File "/opt/rally/local/lib/python2.7/site-packages/novaclient/client.py", line 485, in post
    return self._cs_request(url, 'POST', **kwargs)
  File "/opt/rally/local/lib/python2.7/site-packages/novaclient/client.py", line 459, in _cs_request
    **kwargs)
  File "/opt/rally/local/lib/python2.7/site-packages/novaclient/client.py", line 441, in _time_request
    resp, body = self.request(url, method, **kwargs)
  File "/opt/rally/local/lib/python2.7/site-packages/novaclient/client.py", line 435, in request
    raise exceptions.from_response(resp, body, url, method)
BadRequest: Error. Unable to associate floating ip (HTTP 400) (Request-ID: req-7b85f1cb-664d-4c98-b140-66f8b8d2ebe4)


I do not know why associating the floating IP failed. In /var/log/sys.log
I found this:
Aug 14 16:13:26 compute-node10 dnsmasq[21117]: failed to load names from 
/var/lib/nova/networks/nova-br100.hosts: No such file or directory
Aug 14 16:13:26 compute-node10 dnsmasq-dhcp[21117]: read 
/var/lib/nova/networks/nova-br100.conf
Aug 14 16:13:26 compute-node10 ceph-mon: 2014-08-14 16:13:26.952565 
7f7b2d65a700  1 mon.compute-node10@5(peon).paxos(paxos active c 
4947462..4947989) is_readable now=2014-08-14 16:13:26.952569 
lease_expire=2014-08-14 16:13:31.935450 has v0 lc 4947989
Aug 14 16:13:27 compute-node10 ceph-mon: 2014-08-14 16:13:27.425576 
7f7b2d65a700  1 mon.compute-node10@5(peon).paxos(paxos active c 
4947462..4947990) is_readable now=2014-08-14 16:13:27.425578 
lease_expire=2014-08-14 16:13:32.408369 has v0 lc 4947990
Aug 14 16:13:28 compute-node10 ceph-mon: 2014-08-14 16:13:28.368894 
7f7b2d65a700  1 mon.compute-node10@5(peon).paxos(paxos active c 
4947462..4947991) is_readable now=2014-08-14 16:13:28.368895 
lease_expire=2014-08-14 16:13:33.350551 has v0 lc 4947991
Aug 14 16:13:28 compute-node10 ceph-mon: 2014-08-14 16:13:28.504170 
7f7b2d65a700  1 mon.compute-node10@5(peon).paxos(paxos active c 
4947462..4947992) is_readable now=2014-08-14 16:13:28.504171 
lease_expire=2014-08-14 16:13:33.486123 has v0 lc 4947992
Aug 14 16:13:29 compute-node10 ceph-mon: 2014-08-14 16:13:29.593776 
7f7b2d65a700  1 mon.compute-node10@5(peon).paxos(paxos active c 
4947462..4947993) is_readable now=2014-08-14 16:13:29.593777 
lease_expire=2014

Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Daniel P. Berrange
On Thu, Aug 14, 2014 at 09:24:36AM +1000, Michael Still wrote:
> On Thu, Aug 14, 2014 at 3:09 AM, Dan Smith  wrote:
> >> I'm not questioning the value of f2f - I'm questioning the idea of
> >> doing f2f meetings sooo many times a year. OpenStack is very much
> >> the outlier here among open source projects - the vast majority of
> >> projects get along very well with much less f2f time and a far
> >> smaller % of their contributors attend those f2f meetings that do
> >> happen. So I really do question what is missing from OpenStack's
> >> community interaction that makes us believe that having 4 f2f
> >> meetings a year is critical to our success.
> >
> > How many is too many? So far, I have found the midcycles to be extremely
> > productive -- productive in a way that we don't see at the summits, and
> > I think other attendees agree. Obviously if budgets start limiting them,
> > then we'll have to deal with it, but I don't want to stop meeting
> > preemptively.
> 
> I agree they're very productive. Let's pick on the nova v3 API case as
> an example... We had failed as a community to reach a consensus using
> our existing discussion mechanisms (hundreds of emails, at least three
> specs, phone calls between the various parties, et cetera), yet at the
> summit and then a midcycle meetup we managed to nail down an agreement
> on a very contentious and complicated topic.

We thought we had agreement on the v3 API after the Atlanta f2f summit and
after Hong Kong f2f too. So I wouldn't necessarily say that we
needed another f2f meeting to resolve it, but rather that this is
a very complex topic that takes a long time to resolve no matter
how we discuss it, and the discussions just happened to reach
a natural conclusion this time around. But let's see if this agreement
actually sticks this time.

> I can see the argument that travel cost is an issue, but I think its
> also not a very strong argument. We have companies spending millions
> of dollars on OpenStack -- surely spending a relatively small amount
> on travel to keep the development team as efficient as possible isn't
> a big deal? I wouldn't be at all surprised if the financial costs of
> the v3 API debate (staff time mainly) were much higher than the travel
> costs of those involved in the summit and midcycle discussions which
> sorted it out.

I think the travel cost really is a big issue. Due to the number of
people who had to travel to the many mid-cycle meetups, a good number
of people I work with no longer have the ability to go to the Paris
design summit. This is going to make it harder for them to feel
properly engaged with our community. I can only see this situation
getting worse over time if greater emphasis is placed on attending the
mid-cycle meetups.

> Travelling to places to talk to people isn't a great solution, but it
> is the most effective one we've found so far. We should continue to
> experiment with other options, but until we find something that works
> as well as meetups, I think we need to keep having them.
> 
> > IMHO, the reasons to cut back would be:
> >
> > - People leaving with a "well, that was useless..." feeling
> > - Not enough people able to travel to make it worthwhile
> >
> > So far, neither of those have been outcomes of the midcycles we've had,
> > so I think we're doing okay.
> >
> > The design summits are structured differently, where we see a lot more
> > diverse attendance because of the colocation with the user summit. It
> > doesn't lend itself well to long and in-depth discussions about specific
> > things, but it's very useful for what it gives us in the way of
> > exposure. We could try to have less of that at the summit and more
> > midcycle-ish time, but I think it's unlikely to achieve the same level
> > of usefulness in that environment.
> >
> > Specifically, the lack of colocation with too many other projects has
> > been a benefit. This time, Mark and Maru were there from Neutron. Last
> > time, Mark from Neutron and the other Mark from Glance were there. If
> > they were having meetups in other rooms (like at summit) they wouldn't
> > have been there exposed to discussions that didn't seem like they'd have
> > a component for their participation, but did after all (re: nova and
> > glance and who should own flavors).
> 
> I agree. The ability to focus on the issues that were blocking nova
> was very important. That's hard to do at a design summit when there is
> so much happening at the same time.

Maybe we should change the way we structure the design summit to
improve that. If there are critical issues blocking nova, it feels
better to discuss and resolve as much as possible at the start of
the dev cycle rather than in the middle of it, because otherwise
we are causing ourselves pain during milestones 1 and 2.

> >> As I explain in the rest of my email below I'm not advocating
> >> getting rid of mid-cycle events entirely. I'm suggesting that
> >> we can attai

Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-08-14 Thread Daniel P. Berrange
On Wed, Aug 13, 2014 at 12:05:27PM -0700, James E. Blair wrote:
> cor...@inaugust.com (James E. Blair) writes:
> 
> > Sean Dague  writes:
> >
> >> This has all gone far enough that someone actually wrote a Grease Monkey
> >> script to purge all the 3rd Party CI content out of Jenkins UI. People
> >> are writing mail filters to dump all the notifications. Dan Berrange
> >> filters them all out of his gerrit query tools.
> >
> > I should also mention that there is a pending change to do something
> > similar via site-local Javascript in our Gerrit:
> >
> >   https://review.openstack.org/#/c/95743/
> >
> > I don't think it's an ideal long-term solution, but if it works, we may
> > have some immediate relief without all having to install greasemonkey
> > scripts.
> 
> You may have noticed that this has merged, along with a further change
> that shows the latest results in a table format.  (You may need to
> force-reload in your browser to see the change.)

> Thanks again to Radoslav Gerganov for writing the original change.

These are both a great step forward in usability of the Gerrit web UI.
Thanks for the effort everyone put into these changes.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-14 Thread Eoghan Glynn

> >> Letting the industry field-test a project and feed their experience
> >> back into the community is a slow process, but that is the best
> >> measure of a project's success. I seem to recall this being an
> >> implicit expectation a few years ago, but haven't seen it discussed in
> >> a while.
> > 
> > I think I recall us discussing a "must have feedback that it's
> > successfully deployed" requirement in the last cycle, but we recognized
> > that deployers often wait until a project is integrated.
> 
> In the early discussions about incubation, we respected the need to
> officially recognize a project as part of OpenStack just to create the
> uptick in adoption necessary to mature projects. Similarly, integration is a
> recognition of the maturity of a project, but I think we have graduated
> several projects long before they actually reached that level of maturity.
> Actually running a project at scale for a period of time is the only way to
> know it is mature enough to run it in production at scale.
> 
> I'm just going to toss this out there. What if we set the graduation bar to
> "is in production in at least two sizeable clouds" (note that I'm not saying
> "public clouds"). Trove is the only project that has, to my knowledge, met
> that bar prior to graduation, and it's the only project that graduated since
> Havana that I can, off hand, point at as clearly successful. Heat and
> Ceilometer both graduated prior to being in production; a few cycles later,
> they're still having adoption problems and looking at large architectural
> changes. I think the added cost to OpenStack when we integrate immature or
> unstable projects is significant enough at this point to justify a more
> defensive posture.
> 
> FWIW, Ironic currently doesn't meet that bar either - it's in production in
> only one public cloud. I'm not aware of large private installations yet,
> though I suspect there are some large private deployments being spun up
> right now, planning to hit production with the Juno release.

We have some hard data from the user survey presented at the Juno summit,
with respectively 26 & 53 production deployments of Heat and Ceilometer
reported.

There's no cross-referencing of deployment size with services in production
in those data presented, though it may be possible to mine that out of the
raw survey responses.

Cheers,
Eoghan



Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-14 Thread Mark McLoughlin
On Tue, 2014-08-12 at 15:56 +0100, Mark McLoughlin wrote:
> Hey
> 
> (Terrible name for a policy, I know)
> 
> From the version_cap saga here:
> 
>   https://review.openstack.org/110754
> 
> I think we need a better understanding of how to approach situations
> like this.
> 
> Here's my attempt at documenting what I think we're expecting the
> procedure to be:
> 
>   https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy
> 
> If it sounds reasonably sane, I can propose its addition to the
> "Development policies" doc.

(In the spirit of "we really need to step back and laugh at ourselves
sometimes" ... )

Two years ago, we were worried about patches getting merged in less than
2 hours and had a discussion about imposing a minimum review time. How
times have changed! Is it even possible to land a patch in less than two
hours now? :)

Looking back over the thread, this part stopped me in my tracks:

  https://lists.launchpad.net/openstack/msg08625.html

On Tue, Mar 13, 2012, Mark McLoughlin  wrote:

> Sometimes there can be a few folks working through an issue together and
> the patch gets pushed and approved so quickly that no-one else gets a
> chance to review.

Everyone has an opportunity to review even after a patch gets merged.

JE

It's not quite perfect, but if you squint you could conclude that
Johannes and I have both completely reversed our opinions in the
intervening two years :)

The lesson I take from that is to not get too caught up in the current
moment. We're growing and evolving rapidly. If we assume everyone is
acting in good faith, and allow each other to debate earnestly without
feelings getting hurt ... we should be able to work through anything.

Now, back on topic - digging through that thread, it doesn't seem we
settled on the idea of "we can just revert it later if someone has an
objection" back then. Does anyone recall when that idea first came
up?

Thanks,
Mark.




Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-14 Thread Daniel P. Berrange
On Thu, Aug 14, 2014 at 10:06:13AM +0100, Mark McLoughlin wrote:
> On Tue, 2014-08-12 at 15:56 +0100, Mark McLoughlin wrote:
> > Hey
> > 
> > (Terrible name for a policy, I know)
> > 
> > From the version_cap saga here:
> > 
> >   https://review.openstack.org/110754
> > 
> > I think we need a better understanding of how to approach situations
> > like this.
> > 
> > Here's my attempt at documenting what I think we're expecting the
> > procedure to be:
> > 
> >   https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy
> > 
> > If it sounds reasonably sane, I can propose its addition to the
> > "Development policies" doc.
> 
> (In the spirit of "we really need to step back and laugh at ourselves
> sometimes" ... )
> 
> Two years ago, we were worried about patches getting merged in less than
> 2 hours and had a discussion about imposing a minimum review time. How
> times have changed! Is it even possible to land a patch in less than two
> hours now? :)
> 
> Looking back over the thread, this part stopped me in my tracks:
> 
>   https://lists.launchpad.net/openstack/msg08625.html
> 
> On Tue, Mar 13, 2012, Mark McLoughlin  wrote:
> 
> > Sometimes there can be a few folks working through an issue together and
> > the patch gets pushed and approved so quickly that no-one else gets a
> > chance to review.
> 
> Everyone has an opportunity to review even after a patch gets merged.
> 
> JE
> 
> It's not quite perfect, but if you squint you could conclude that
> Johannes and I have both completely reversed our opinions in the
> intervening two years :)
> 
> The lesson I take from that is to not get too caught up in the current
> moment. We're growing and evolving rapidly. If we assume everyone is
> acting in good faith, and allow each other to debate earnestly without
> feelings getting hurt ... we should be able to work through anything.
> 
> Now, back on topic - digging through that thread, it doesn't seem we
> settled on the idea of "we can just revert it later if someone has an
> objection" in this thread. Does anyone recall when that idea first came
> up?

Probably lost in time - I've seen it said several times on the Nova IRC
channel over the years when we made a strategic decision to merge
something quickly.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [TripleO] Error: Node Not Found

2014-08-14 Thread Peeyush Gupta
Hi all,

I have successfully done a TripleO setup using RDO instack in
a virtual machine environment. Now I have deleted the overcloud and
I am trying to add a physical node to the setup, but I get an error
saying that the setup has failed.

Here are the nova logs:

2014-08-14 04:55:02.449 2208 WARNING nova.compute.manager [-] Bandwidth usage not supported by hypervisor.
2014-08-14 04:55:37.606 2208 ERROR nova.virt.baremetal.virtual_power_driver [-] Node "overcloud-notcompute0-fl3hqbooytsv" with MAC address [u''] not found.
2014-08-14 04:55:37.607 2208 ERROR nova.compute.manager [-] [instance: 4be32cfe-0674-4668-aba4-6ca75b836528] Periodic sync_power_state task had an error while processing an instance.
2014-08-14 04:55:37.607 2208 TRACE nova.compute.manager [instance: 4be32cfe-0674-4668-aba4-6ca75b836528] Traceback (most recent call last):
2014-08-14 04:55:37.607 2208 TRACE nova.compute.manager [instance: 4be32cfe-0674-4668-aba4-6ca75b836528]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5206, in _sync_power_states
2014-08-14 04:55:37.607 2208 TRACE nova.compute.manager [instance: 4be32cfe-0674-4668-aba4-6ca75b836528]     vm_instance = self.driver.get_info(db_instance)
2014-08-14 04:55:37.607 2208 TRACE nova.compute.manager [instance: 4be32cfe-0674-4668-aba4-6ca75b836528]   File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/driver.py", line 447, in get_info
2014-08-14 04:55:37.607 2208 TRACE nova.compute.manager [instance: 4be32cfe-0674-4668-aba4-6ca75b836528]     ps = pm.is_power_on()
2014-08-14 04:55:37.607 2208 TRACE nova.compute.manager [instance: 4be32cfe-0674-4668-aba4-6ca75b836528]   File "/usr/lib/python2.7/site-packages/nova/virt/baremetal/virtual_power_driver.py", line 200, in is_power_on
2014-08-14 04:55:37.607 2208 TRACE nova.compute.manager [instance: 4be32cfe-0674-4668-aba4-6ca75b836528]     raise exception.NodeNotFound(node_id=self._node_name)
2014-08-14 04:55:37.607 2208 TRACE nova.compute.manager [instance: 4be32cfe-0674-4668-aba4-6ca75b836528] NodeNotFound: Node overcloud-notcompute0-fl3hqbooytsv could not be found.
2014-08-14 04:55:37.607 2208 TRACE nova.compute.manager [instance: 4be32cfe-0674-4668-aba4-6ca75b836528]

I understand from the log that the setup is not able to find the node.
But I can ping the machine from my undercloud, so why can't it be found
at deployment time?
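(A note for readers puzzling over the same error: ping reachability is
irrelevant here, because the baremetal virtual power driver resolves the
node by name in nova's own enrollment records before it ever touches the
network. A rough sketch of that check with hypothetical data - not the
actual driver code:)

```python
# Minimal model of the lookup that raises NodeNotFound: the driver only
# consults its own inventory of enrolled nodes, never the network.
class NodeNotFound(Exception):
    pass

# Hypothetical enrollment records (what "nova baremetal-node-list" knows about).
INVENTORY = {"overcloud-controller0": {"power_on": True}}

def is_power_on(node_name):
    # Raises if the node was never enrolled, however pingable it is.
    if node_name not in INVENTORY:
        raise NodeNotFound(node_name)
    return INVENTORY[node_name]["power_on"]

print(is_power_on("overcloud-controller0"))   # -> True
# is_power_on("overcloud-notcompute0-...")    # -> raises NodeNotFound
```

So the thing to verify is whether the node (with the right MAC address) is
actually registered in the baremetal inventory, not whether it answers ping.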

Regards,
~Peeyush Gupta


Re: [openstack-dev] [Neutron] Is network ordering of vNICs guaranteed?

2014-08-14 Thread Aaron Rosen
Most guests generate this file on first boot. I haven't looked into how
newer versions of Ubuntu handle this, though. I noticed that in 14.04 this
file doesn't exist anymore, but I figure it's handled somewhere else.

Aaron
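(For readers who haven't met the file in question: 70-persistent-net.rules
pins an interface name to a MAC address, which is what makes NIC ordering
stable across reboots. A typical generated entry looks roughly like the
following - MAC and name are illustrative:)

```
# /etc/udev/rules.d/70-persistent-net.rules (generated on first boot)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="fa:16:3e:12:34:56", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```

Without such a file, the kernel's probe order decides which NIC becomes
eth0, so vNIC ordering can change after an attach/detach followed by a
reboot - which is the scenario Jian describes below.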


On Wed, Aug 13, 2014 at 9:03 PM, Jian Wen  wrote:

> Ordering of vNICs is not 100% guaranteed for cloud images that are not
> shipped with /etc/udev/rules.d/70-persistent-net.rules,
> e.g. attaching a port and detaching another port, then rebooting the
> instance.
>
>
> 2014-08-12 15:52 GMT+08:00 Aaron Rosen :
>
> This bug was true in grizzly and older (and was reintroduced in icehouse
>> for a few days but was fixed before the nova icehouse shipped).
>>
>> Aaron
>>
>>
>> On Mon, Aug 11, 2014 at 7:10 AM, CARVER, PAUL  wrote:
>>
>>> Armando M. [mailto:arma...@gmail.com] wrote:
>>>
>>>
>>>
>>> >>On 9 August 2014 10:16, Jay Pipes  wrote:
>>>
>>> >>Paul, does this friend of a friend have a reproduceable test
>>>
>>> >>script for this?
>>>
>>>
>>>
>>> >We would also need to know the OpenStack release where this issue
>>> manifest
>>>
>>> >itself. A number of bugs have been raised in the past around this type
>>> of
>>>
>>> >issue, and the last fix I recall is this one:
>>>
>>> >
>>>
>>> >https://bugs.launchpad.net/nova/+bug/1300325
>>>
>>> >
>>>
>>> >It's possible that this might have regressed, though.
>>>
>>>
>>>
>>> The reason I called it "friend of a friend" is because I think the info
>>>
>>> has filtered through a series of people and is not firsthand observation.
>>>
>>> I'll ask them to track back to who actually observed the behavior, how
>>>
>>> long ago, and with what version.
>>>
>>>
>>>
>>> It could be a regression, or it could just be old info that people have
>>>
>>> continued to assume is true without realizing it was considered a bug
>>>
>>> all along and has been fixed.
>>>
>>>
>>>
>>> Thanks! The moment I first heard it my first reaction was that it was
>>>
>>> almost certainly a bug and had probably already been fixed.
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Best,
>
> Jian
>
>
>


Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems

2014-08-14 Thread David Pineau
Well, I could make it test every patch set, but then I would have to look
into how to limit it to one build at a time and queue the other builds
(I haven't tried that yet).

Anyway, I just got my new CI machine, so I'll be able to finalize the
CI system relatively soon.

2014-08-14 1:01 GMT+02:00 Jeremy Stanley :
> On 2014-08-13 16:30:23 + (+), Asselin, Ramy wrote:
>> I remember infra team objected to the nightly builds. They wanted
>> reports on every patch set in order to report to gerrit.
> [...]
>
> I can't imagine, nor do I recall, objecting to such an idea. The
> question is merely where you expect to publish the results, and how
> you deal with the fact that you're testing changes which have
> already merged rather than getting in front of the reviewers on
> proposed changes. I don't personally have any specific desire for
> third-party CI systems to report on changes in Gerrit, but
> individual projects supporting your drivers/features/whatever might.
> --
> Jeremy Stanley
>



-- 
David Pineau,
Developer R&D at Scality



Re: [openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-14 Thread Ihar Hrachyshka

On 14/08/14 02:43, Angus Lees wrote:
> On Wed, 13 Aug 2014 11:11:51 AM Kevin Benton wrote:
>> Is the pylint static analysis that caught that error prone to
>> false positives? If not, I agree that it would be really nice if
>> that were made part of the tox check so these don't have to be
>> fixed after the fact.
> 
> At the moment pylint on neutron is *very* noisy, and I've been
> looking through the reported issues by hand to get a feel for
> what's involved.  Enabling pylint is a separate discussion that I'd
> like to have - in some other thread.
> 

Just start with a non-voting check. Once all the issues are fixed, we
can enable it as voting.

/Ihar



Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-14 Thread Nikola Đipanov
On 08/14/2014 11:11 AM, Daniel P. Berrange wrote:
> On Thu, Aug 14, 2014 at 10:06:13AM +0100, Mark McLoughlin wrote:
>> On Tue, 2014-08-12 at 15:56 +0100, Mark McLoughlin wrote:
>>> Hey
>>>
>>> (Terrible name for a policy, I know)
>>>
>>> From the version_cap saga here:
>>>
>>>   https://review.openstack.org/110754
>>>
>>> I think we need a better understanding of how to approach situations
>>> like this.
>>>
>>> Here's my attempt at documenting what I think we're expecting the
>>> procedure to be:
>>>
>>>   https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy
>>>
>>> If it sounds reasonably sane, I can propose its addition to the
>>> "Development policies" doc.
>>
>> (In the spirit of "we really need to step back and laugh at ourselves
>> sometimes" ... )
>>
>> Two years ago, we were worried about patches getting merged in less than
>> 2 hours and had a discussion about imposing a minimum review time. How
>> times have changed! Is it even possible to land a patch in less than two
>> hours now? :)
>>
>> Looking back over the thread, this part stopped me in my tracks:
>>
>>   https://lists.launchpad.net/openstack/msg08625.html
>>
>> On Tue, Mar 13, 2012, Mark McLoughlin  wrote:
>>
>> > Sometimes there can be a few folks working through an issue together 
>> and
>> > the patch gets pushed and approved so quickly that no-one else gets a
>> > chance to review.
>>
>> Everyone has an opportunity to review even after a patch gets merged.
>>
>> JE
>>
>> It's not quite perfect, but if you squint you could conclude that
>> Johannes and I have both completely reversed our opinions in the
>> intervening two years :)
>>
>> The lesson I take from that is to not get too caught up in the current
>> moment. We're growing and evolving rapidly. If we assume everyone is
>> acting in good faith, and allow each other to debate earnestly without
>> feelings getting hurt ... we should be able to work through anything.
>>
>> Now, back on topic - digging through that thread, it doesn't seem we
>> settled on the idea of "we can just revert it later if someone has an
>> objection" in this thread. Does anyone recall when that idea first came
>> up?
> 
> Probably lost in time - I've seen it said several times on Nova IRC
> channel over the year(s) when we made a strategic decision to merge
> something quickly.
> 

Another vote for "revert early revert often" here.

Having said that - as chance would have it, I am in the middle of
proposing a revert on a different thread [1]. I did not propose a revert
patch before starting the discussion since it's a feature that has
been cooking for some time now, the people involved seem to be on holidays,
and I simply never managed to look closely enough until it merged; by
chance again, I was working on something that could potentially use it,
which brought its defects to my attention.

I think even in this situation, when we have a long-standing feature, it
should be fair game for reverting *, as long as the revert is backed by
legitimate technical arguments. There should be no hard feelings when the
technical arguments are sound enough. We can and should talk about the
details later, and make sure we have seen all the angles, but do revert early.

Even with the best of intentions, I would not have looked at it in time -
Nova is just too big at this point, at least for me.

So I propose a small addition to the "revert early, revert often"
catchphrase - "have solid technical arguments."**

N.

* v3 API is on a whole different order of magnitude and should not be
used in this discussion IMHO, hopefully something like that does not
come up often enough to warrant a rule.

** Not to imply that I do in [1], but solid technical arguments can
never include quoting obscure policies or "pulling rank", among other
things.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-August/042709.html



Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-14 Thread loy wolfe
On Thu, Aug 14, 2014 at 4:22 PM, Mathieu Rohon 
wrote:

> Hi,
>
> I would like to add that it would be harder for the community to help
> maintain drivers.
> Such work [1] wouldn't have occurred with an out-of-tree ODL driver.
>

+1.
It's better to move all MDs for non-built-in backends out of tree;
maintaining these drivers shouldn't be the responsibility of the community.
Not only MDs, but plugins and agents should also follow this rule.
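(As a concrete note on the mechanics ZZelle describes further down: an
out-of-tree MD only needs to register an entry point under the ML2 driver
namespace, and neutron then loads it by name through stevedore. A sketch
with hypothetical names - `my_md` and `my_pkg` are illustrative, not real
packages:)

```python
# An out-of-tree package declares, in its setup.cfg:
#
#   [entry_points]
#   neutron.ml2.mechanism_drivers =
#       my_md = my_pkg.driver:MyMechanismDriver
#
# and neutron's ML2 plugin loads it roughly like:
#
#   from stevedore import driver
#   mgr = driver.DriverManager(
#       namespace="neutron.ml2.mechanism_drivers",
#       name="my_md", invoke_on_load=True)
#
# The effect is plain name -> class resolution, which we can model as:
REGISTRY = {"my_md": "my_pkg.driver:MyMechanismDriver"}

def load(name):
    # Mirrors what stevedore does: unknown names fail at load time.
    if name not in REGISTRY:
        raise KeyError("mechanism driver %s not registered" % name)
    return REGISTRY[name]

print(load("my_md"))  # -> my_pkg.driver:MyMechanismDriver
```

Nothing in this wiring requires the driver to live in the neutron tree,
which is what makes the out-of-tree model workable.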


>
> [1] https://review.openstack.org/#/c/96459/
>
> On Wed, Aug 13, 2014 at 1:09 PM, Robert Kukura 
> wrote:
> > One thing to keep in mind is that the ML2 driver API does sometimes
> change,
> > requiring updates to drivers. Drivers that are in-tree get updated along
> > with the driver API change. Drivers that are out-of-tree must be updated
> by
> > the owner.
> >
> > -Bob
> >
> >
> > On 8/13/14, 6:59 AM, ZZelle wrote:
> >
> > Hi,
> >
> >
> > The important thing to understand is how to integrate with neutron
> through
> > stevedore/entrypoints:
> >
> >
> https://github.com/dave-tucker/odl-neutron-drivers/blob/master/setup.cfg#L32-L34
> >
> >
> > Cedric
> >
> >
> > On Wed, Aug 13, 2014 at 12:17 PM, Dave Tucker 
> wrote:
> >>
> >> I've been working on this for OpenDaylight
> >> https://github.com/dave-tucker/odl-neutron-drivers
> >>
> >> This seems to work for me (tested Devstack w/ML2) but YMMV.
> >>
> >> -- Dave
> >>
> >
> >
> >
> >
> >
>
>


Re: [openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-14 Thread Andreas Jaeger
On 08/14/2014 11:37 AM, Ihar Hrachyshka wrote:
> On 14/08/14 02:43, Angus Lees wrote:
>> On Wed, 13 Aug 2014 11:11:51 AM Kevin Benton wrote:
>>> Is the pylint static analysis that caught that error prone to
>>> false positives? If not, I agree that it would be really nice if
>>> that were made part of the tox check so these don't have to be
>>> fixed after the fact.
> 
>> At the moment pylint on neutron is *very* noisy, and I've been
>> looking through the reported issues by hand to get a feel for
>> what's involved.  Enabling pylint is a separate discussion that I'd
>> like to have - in some other thread.
> 
> 
> Just start with non-voting check. Once all the issues are fixed, we
> can enable it as voting.

Or blacklist all current failures, make it voting - and then fix one issue
after the other, removing entries from the blacklist with each patch.

That way no regressions creep in...
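The baseline idea can be sketched in a few lines (a hypothetical helper,
not an actual neutron tox target): each pylint message is reduced to a
hashable key and compared against a checked-in list, so the job fails
only when new messages appear:

```python
def regressions(current_msgs, baseline):
    """Return only the messages that are not in the recorded baseline."""
    known = {(m["path"], m["symbol"]) for m in baseline}
    return [m for m in current_msgs
            if (m["path"], m["symbol"]) not in known]

# Checked-in blacklist of known failures.
baseline = [{"path": "neutron/foo.py", "symbol": "unused-import"}]

# What pylint reports on the current patch set.
current = [
    {"path": "neutron/foo.py", "symbol": "unused-import"},       # blacklisted
    {"path": "neutron/bar.py", "symbol": "undefined-variable"},  # new -> fail
]

print(len(regressions(current, baseline)))  # -> 1
```

As issues get fixed, their entries are dropped from the baseline until it
is empty and plain voting pylint takes over.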

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



[openstack-dev] [Congress] Help in understanding API Flow

2014-08-14 Thread Madhu Mohan
Hi,

Can someone help me understand how an API is exposed to clients from the
Congress server?

I see that a cage service ('api-policy') is created in congress_server.py,
and I believe this is implemented in policy_model. I tried to send a
JSON request from my client to the server.

I tried sending "list_members", "get_items", "PUT" and "POST" as methods,
and all of these give me a "NotImplemented" error response.

Any help in this direction?

I also want to add new APIs, so understanding the API flow is crucial.

Thanks,
Madhu Mohan


Re: [openstack-dev] [OpenStack-Infra] [infra] Third Party CI naming and contact (action required)

2014-08-14 Thread Franck Yelles
Hi James,

I have added the Nuage CI system to the list; I also took the liberty
of reordering the list alphabetically.

Franck


On Wed, Aug 13, 2014 at 11:23 AM, James E. Blair  wrote:
> Hi,
>
> We've updated the registration requirements for third-party CI systems
> here:
>
>   http://ci.openstack.org/third_party.html
>
> We now have 86 third-party CI systems registered and have undertaken an
> effort to make things more user-friendly for the developers who interact
> with them.  There are two important changes to be aware of:
>
> 1) We now generally name third-party systems in a descriptive manner
> including the company and product they are testing.  We have renamed
> currently-operating CI systems to match these standards to the best of
> our abilities.  Some of them ended up with particularly bad names (like
> "Unknown Function...").  If your system is one of these, please join us
> in #openstack-infra on Freenode to establish a more descriptive name.
>
> 2) We have established a standard wiki page template to supply a
> description of the system, what is tested, and contact information for
> each system.  See https://wiki.openstack.org/wiki/ThirdPartySystems for
> an index of such pages and instructions for creating them.  Each
> third-party CI system will have its own page in the wiki and it must
> include a link to that page in every comment that it leaves in Gerrit.
>
> If you operate a third-party CI system, please ensure that you register
> a wiki page and update your system to link to it in every new Gerrit
> comment by the end of August.  Beginning in September, we will disable
> systems that have not been updated.
>
> Thanks,
>
> Jim
>
> ___
> OpenStack-Infra mailing list
> openstack-in...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra



Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-14 Thread Dina Belova
Hisashi Osanai, np :)
You're welcome :)


On Thu, Aug 14, 2014 at 5:13 AM, Osanai, Hisashi <
osanai.hisa...@jp.fujitsu.com> wrote:

>
> On Wed, Aug 13, 2014 at 2:35 PM, Julien Danjou  wrote:
> >> Means the py33 needs to execute on stable/icehouse. Here I
> misunderstand something...
> > Not it does not, that line in tox.ini is not use by the gate.
>
> >>> this is a problem in the infrastructure config.
> >> Means execfile function calls on python33 in happybase is a problem. If
> my understanding
> >> is correct, I agree with you and I think this is the direct cause of
> this problem.
> >>
> >> Your idea to solve this is creating a patch for the direct cause, right?
> > My idea to solve this is to create a patch on
> > http://git.openstack.org/cgit/openstack-infra/config/
> > to exclude py33 on the stable/icehouse branch of Ceilometer in the gate.
>
> Sorry to take your time again with the explanation above, and thanks for it.
> I'm happy to have a clear understanding of your thinking.
>
> On Wednesday, August 13, 2014 7:54 PM, Dina Belova wrote:
> > Here it is: https://review.openstack.org/#/c/113842/
> Thank you for providing the fix. I was surprised by the speed of it; it's really
> fast...
>
> Thanks again!
> Hisashi Osanai
>



-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
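[Editorial note: the root cause discussed above is happybase's setup.py calling execfile(), a builtin that was removed in Python 3. A minimal sketch of the usual replacement pattern follows; the file name and contents here are illustrative stand-ins, not happybase's actual layout.]

```python
import os
import tempfile

def load_namespace(path):
    """Python 3 compatible stand-in for execfile(path): compile and
    exec the file's source, returning the resulting namespace."""
    ns = {}
    with open(path) as f:
        exec(compile(f.read(), path, "exec"), ns)
    return ns

# Demo with a throwaway file standing in for e.g. a _version.py
# that a setup.py might want to read without importing the package.
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write("__version__ = '0.8'\n")
ns = load_namespace(path)
os.unlink(path)
print(ns["__version__"])  # 0.8
```

Projects that guard this pattern behind a try/except for execfile keep working on Python 2 while becoming importable under py33 gates.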


Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-14 Thread Salvatore Orlando
I think there will soon be a discussion regarding what the appropriate
location for plugins and drivers should be.
My personal feeling is that Neutron has simply reached the tipping point
where the high number of drivers and plugins is causing unnecessary load
for the core team and frustration for the community.

Thus I would totally support Luke's initiative of maintaining an
out-of-tree ML2 driver. On the other hand, a plugin/driver "diaspora" might
also have negative consequences, such as the frequent breakages
Bob was mentioning, or confusion for users who might end up
fetching drivers from disparate sources.

As mentioned during the last Neutron IRC meeting this is another "process"
aspect which will be discussed soon, with the aim of defining a plan to:
- drastically reduce the number of plugins and drivers that must be
maintained in the main source tree
- enhance the control of plugin/driver maintainers over their own code
- preserve the ability to do CI checks on gerrit as we do today
- raise the CI bar (maybe finally set the smoketest as a minimum
requirement?)

Regards,
Salvatore



On 14 August 2014 11:47, loy wolfe  wrote:

>
>
>
> On Thu, Aug 14, 2014 at 4:22 PM, Mathieu Rohon 
> wrote:
>
>> Hi,
>>
>> I would like to add that it would be harder for the community to help
>> maintaining drivers.
>> such a work [1] wouldn't have occured with an out of tree ODL driver.
>>
>
> +1.
> It's better to move all MDs for non-built-in backends out of tree;
> maintaining these drivers shouldn't be the responsibility of the community. Not
> only MDs, but also plugins and agents should all obey this rule.
>
>
>>
>> [1] https://review.openstack.org/#/c/96459/
>>
>> On Wed, Aug 13, 2014 at 1:09 PM, Robert Kukura 
>> wrote:
>> > One thing to keep in mind is that the ML2 driver API does sometimes
>> change,
>> > requiring updates to drivers. Drivers that are in-tree get updated along
>> > with the driver API change. Drivers that are out-of-tree must be
>> updated by
>> > the owner.
>> >
>> > -Bob
>> >
>> >
>> > On 8/13/14, 6:59 AM, ZZelle wrote:
>> >
>> > Hi,
>> >
>> >
>> > The important thing to understand is how to integrate with neutron
>> through
>> > stevedore/entrypoints:
>> >
>> >
>> https://github.com/dave-tucker/odl-neutron-drivers/blob/master/setup.cfg#L32-L34
>> >
>> >
>> > Cedric
>> >
>> >
>> > On Wed, Aug 13, 2014 at 12:17 PM, Dave Tucker 
>> wrote:
>> >>
>> >> I've been working on this for OpenDaylight
>> >> https://github.com/dave-tucker/odl-neutron-drivers
>> >>
>> >> This seems to work for me (tested Devstack w/ML2) but YMMV.
>> >>
>> >> -- Dave
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
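[Editorial note: the stevedore/entry-point integration Cedric links to boils down to standard Python entry points. A minimal stdlib-only sketch of the lookup side follows; the group and driver names mirror the linked setup.cfg and are illustrative, not a guarantee of what any given Neutron release expects.]

```python
from importlib import metadata

def find_driver(group, name):
    """Locate a plugin registered under an entry-point group -- the
    same mechanism stevedore's DriverManager wraps. Returns the
    loaded object, or None if nothing is registered under that name."""
    eps = metadata.entry_points()
    try:  # Python 3.10+ selectable interface
        group_eps = eps.select(group=group)
    except AttributeError:  # older dict-style return value
        group_eps = eps.get(group, [])
    for ep in group_eps:
        if ep.name == name:
            return ep.load()
    return None

# An out-of-tree package only needs to declare itself in setup.cfg:
#   [entry_points]
#   neutron.ml2.mechanism_drivers =
#       opendaylight = odl_driver.mech:ODLMechanismDriver
driver_cls = find_driver("neutron.ml2.mechanism_drivers", "opendaylight")
print(driver_cls)  # None unless such a package is installed
```

This is why an out-of-tree driver needs no changes in the Neutron tree itself: installing the package registers the entry point, and configuration selects it by name.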


Re: [openstack-dev] [nova] Retrospective veto revert policy

2014-08-14 Thread Mark McLoughlin
On Tue, 2014-08-12 at 15:56 +0100, Mark McLoughlin wrote:
> Hey
> 
> (Terrible name for a policy, I know)
> 
> From the version_cap saga here:
> 
>   https://review.openstack.org/110754
> 
> I think we need a better understanding of how to approach situations
> like this.
> 
> Here's my attempt at documenting what I think we're expecting the
> procedure to be:
> 
>   https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy
> 
> If it sounds reasonably sane, I can propose its addition to the
> "Development policies" doc.

Proposed here: https://review.openstack.org/114188

Thanks,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra] Third Party CI naming and contact (action required)

2014-08-14 Thread trinath.soman...@freescale.com
Hi Franck -

Thanks for the update. I too have that re-order in mind. :)

Here after CI owners may add their CI names in appropriate alphabetical order.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

-Original Message-
From: Franck Yelles [mailto:franck...@gmail.com] 
Sent: Thursday, August 14, 2014 3:33 PM
To: James E. Blair
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-in...@lists.openstack.org
Subject: Re: [OpenStack-Infra] [infra] Third Party CI naming and contact 
(action required)

Hi James,

I have added the Nuage CI system to the list; I also took the liberty to 
reorder alphabetically the list

Franck


On Wed, Aug 13, 2014 at 11:23 AM, James E. Blair  wrote:
> Hi,
>
> We've updated the registration requirements for third-party CI systems
> here:
>
>   http://ci.openstack.org/third_party.html
>
> We now have 86 third-party CI systems registered and have undertaken 
> an effort to make things more user-friendly for the developers who 
> interact with them.  There are two important changes to be aware of:
>
> 1) We now generally name third-party systems in a descriptive manner 
> including the company and product they are testing.  We have renamed 
> currently-operating CI systems to match these standards to the best of 
> our abilities.  Some of them ended up with particularly bad names 
> (like "Unknown Function...").  If your system is one of these, please 
> join us in #openstack-infra on Freenode to establish a more descriptive name.
>
> 2) We have established a standard wiki page template to supply a 
> description of the system, what is tested, and contact information for 
> each system.  See https://wiki.openstack.org/wiki/ThirdPartySystems 
> for an index of such pages and instructions for creating them.  Each 
> third-party CI system will have its own page in the wiki and it must 
> include a link to that page in every comment that it leaves in Gerrit.
>
> If you operate a third-party CI system, please ensure that you 
> register a wiki page and update your system to link to it in every new 
> Gerrit comment by the end of August.  Beginning in September, we will 
> disable systems that have not been updated.
>
> Thanks,
>
> Jim
>
> ___
> OpenStack-Infra mailing list
> openstack-in...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
openstack-in...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Annoucing CloudKitty : an OpenSource Rating-as-a-Service project for OpenStack

2014-08-14 Thread Fei Long Wang
Glad to see there is another group interested in rating/billing. I'm 
wondering if I should announce our rating service to join the fun :-) You 
know, I'm kidding. The more groups that join the rating service discussion, 
the better. But IMHO we just need one rating service for OpenStack. So 
let's collaborate to work out a good rating service to benefit OpenStack 
and its consumers. But before doing more duplicate work, it would be great 
if we can start some discussion. Cheers.


On 14/08/14 01:40, Christophe Sauthier wrote:
We are very pleased at Objectif Libre to introduce CloudKitty, an 
effort to provide a fully OpenSource Rating-as-a-Service component in 
OpenStack.


Following a first POC presented during the last summit in Atlanta to 
some Ceilometer devs (thanks again Julien Danjou for your great 
support !), we continued our effort to create a real service for 
rating. Today we are happy to share it with you all.



So what do we propose in CloudKitty?
 - a service for collecting metrics (using Ceilometer API)
 - a modular rating architecture to enable/disable modules and create 
your own rules on-the-fly, allowing you to use the rating patterns you 
like
 - an API to interact with the whole environment from core components 
to every rating module
 - a Horizon integration to allow configuration of the rating modules 
and display of pricing information in "real time" during instance 
creation
 - a CLI client to access this information and easily configure 
everything


Technically we are using all the elements that are used in the various 
OpenStack projects like oslo, stevedore, pecan...
CloudKitty is highly modular and allows integration / development of 
third party collection and rating modules and output formats.


A roadmap is available on the project wiki page (the link is at the 
end of this email), but we are clearly hoping to have some feedback 
and ideas on how to improve the project and reach a tighter 
integration with OpenStack.


The project source code is available at 
http://github.com/stackforge/cloudkitty
More stuff will be available on stackforge as soon as the reviews get 
validated like python-cloudkittyclient and cloudkitty-dashboard, so 
stay tuned.


The project's wiki page (https://wiki.openstack.org/wiki/CloudKitty) 
provides more information, and you can reach us via irc on freenode: 
#cloudkitty. Developer's documentation is on its way to readthedocs too.


We plan to present CloudKitty in detail during the Paris Summit, but 
we would love to hear from you sooner...


Cheers,

 Christophe and Objectif Libre


Christophe Sauthier   Mail : 
christophe.sauth...@objectif-libre.com

CEO & Fondateur   Mob : +33 (0) 6 16 98 63 96
Objectif LibreURL : 
www.objectif-libre.com

Infrastructure et Formations LinuxTwitter : @objectiflibre

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
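[Editorial note: the modular rating architecture the announcement describes lends itself to a simple plugin shape. A minimal sketch of what a pluggable rating module can look like follows; class and method names here are illustrative, not CloudKitty's actual API.]

```python
# Sketch of a pluggable rating module of the sort CloudKitty describes;
# concrete modules would be discovered via stevedore entry points.
class RatingModule:
    """Base interface: turn collected usage into a priced quote."""
    def quote(self, usage):
        raise NotImplementedError

class FlatRate(RatingModule):
    """Simplest possible policy: one price for every metered unit."""
    def __init__(self, price_per_unit):
        self.price_per_unit = price_per_unit

    def quote(self, usage):
        return {resource: qty * self.price_per_unit
                for resource, qty in usage.items()}

# Usage as it might come back from a Ceilometer-style collector.
usage = {"instance_hours": 100, "gb_hours": 500}
print(FlatRate(2).quote(usage))  # {'instance_hours': 200, 'gb_hours': 1000}
```

Keeping the module interface this small is what makes the enable/disable and on-the-fly rule creation described above practical: each policy is just another implementation of quote().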


[openstack-dev] [vmware] Canonical list of os types

2014-08-14 Thread Matthew Booth
I've just spent the best part of a day tracking down why instance
creation was failing on a particular setup. The error message from
CreateVM_Task was: 'A specified parameter was not correct'.

After discounting a great many possibilities, I finally discovered that
the problem was guestId, which was being set to 'CirrosGuest'.
Unusually, the vSphere API docs don't contain a list of valid values for
that field. Given the unhelpfulness of the error message, it might be
worthwhile validating that field (which we get from glance) and
displaying an appropriate warning.

Does anybody have a canonical list of valid values?

Thanks,

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
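[Editorial note: the validation Matt suggests could be as simple as checking the glance-supplied value against VMware's VirtualMachineGuestOsIdentifier enum and warning before falling back to a generic type. A sketch follows; the identifier subset below is illustrative, not the full enum.]

```python
# Small subset of VMware's VirtualMachineGuestOsIdentifier values;
# the authoritative list lives in the vSphere API reference.
KNOWN_GUEST_IDS = {
    "otherGuest", "otherGuest64", "otherLinuxGuest",
    "ubuntu64Guest", "centos64Guest", "windows8Server64Guest",
}

def check_guest_id(guest_id, fallback="otherGuest"):
    """Warn and fall back rather than let CreateVM_Task fail with the
    opaque 'A specified parameter was not correct' error."""
    if guest_id in KNOWN_GUEST_IDS:
        return guest_id
    print("WARNING: unrecognized guestId %r, falling back to %r"
          % (guest_id, fallback))
    return fallback

print(check_guest_id("CirrosGuest"))    # otherGuest (with a warning)
print(check_guest_id("ubuntu64Guest"))  # ubuntu64Guest
```

With a check like this, the 'CirrosGuest' value from the image metadata would have produced a clear warning instead of a day of debugging.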


Re: [openstack-dev] [Devstack] [Cinder] Registering 3rd party driver as default

2014-08-14 Thread Amit Das
With further debugging, I find that none of the configuration options
present in /etc/cinder/cinder.conf are getting applied.

Regards,
Amit
*CloudByte Inc.* 


On Thu, Aug 14, 2014 at 11:40 AM, Amit Das  wrote:

> Hi folks,
>
> I have been trying to run devstack with my cinder driver as the default
> volume_driver but with no luck.
>
>  Devstack seems to register the lvm driver as the default always.
>
> I have tried below approaches:
>
>1. directly modifying the /etc/cinder/cinder.conf file
>2. creating a driver file @ ./devstack/lib/cinder_plugins/
>1. ref - https://review.openstack.org/#/c/68726/
>
>
> This is my localrc details:
> http://paste.openstack.org/show/94822/
>
> I run ./unstack.sh & then FORCE=yes ./stack.sh
>
> This is the cinder.conf that is generated after running above stack.sh. I
> comment out the [lvmdriver-1] section manually *(not sure if this section
> needs to be commented)*
>
> http://paste.openstack.org/show/94841/
>
> These are portions of c-sch & c-vol logs after restarting them in their
> respective screens.
>
> http://paste.openstack.org/show/94842/
>
> Regards,
> Amit
> *CloudByte Inc.* 
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
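[Editorial note: for readers hitting the same problem — since the multi-backend work, cinder selects drivers via enabled_backends plus a matching per-backend section, and devstack regenerates /etc/cinder/cinder.conf on every stack.sh run, so hand edits there are lost. A sketch of the cinder.conf shape involved; the section name and driver path are illustrative placeholders for a third-party driver, not real module paths.]

```ini
[DEFAULT]
# Names listed here select which backend sections below are loaded;
# an enabled lvmdriver-1 entry here is why the LVM driver keeps winning.
enabled_backends = mybackend

[mybackend]
# Illustrative third-party driver path, not a real module.
volume_driver = cinder.volume.drivers.vendor.VendorISCSIDriver
volume_backend_name = mybackend
```

Commenting out the [lvmdriver-1] section body alone is not enough if lvmdriver-1 is still listed in enabled_backends.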


Re: [openstack-dev] [vmware] Canonical list of os types

2014-08-14 Thread Steve Gordon
- Original Message -
> From: "Matthew Booth" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> I've just spent the best part of a day tracking down why instance
> creation was failing on a particular setup. The error message from
> CreateVM_Task was: 'A specified parameter was not correct'.
> 
> After discounting a great many possibilities, I finally discovered that
> the problem was guestId, which was being set to 'CirrosGuest'.
> Unusually, the vSphere API docs don't contain a list of valid values for
> that field. Given the unhelpfulness of the error message, it might be
> worthwhile validating that field (which we get from glance) and
> displaying an appropriate warning.
> 
> Does anybody have a canonical list of valid values?
> 
> Thanks,
> 
> Matt

I found a page [1] linked from the Grizzly edition of the compute guide [2] 
which has since been superseded. The content that would appear to have replaced 
it in more recent versions of the documentation suite [3] does not appear to 
contain such a link though. If a link to a more formal list is available it 
would be great to get this in the documentation.

Thanks,

Steve

[1] http://www.thinkvirt.com/?q=node/181
[2] 
http://docs.openstack.org/grizzly/openstack-compute/admin/content/image-metadata.html
[3] http://docs.openstack.org/icehouse/config-reference/content/vmware.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][cisco] Cisco Nexus requires patched ncclient

2014-08-14 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

FYI: I've uploaded a review for openstack/requirements to add the
upstream module into the list of potential dependencies [1]. Once it's
merged, I'm going to introduce this new requirement for Neutron.

[1]: https://review.openstack.org/114213

/Ihar

On 12/08/14 16:27, Ihar Hrachyshka wrote:
> Hey all,
> 
> as per [1], Cisco Nexus ML2 plugin requires a patched version of 
> ncclient from github. I wonder:
> 
> - whether this information is still current; - why don't we depend
> on ncclient thru our requirements.txt file.
> 
> [1]: https://wiki.openstack.org/wiki/Neutron/ML2/MechCiscoNexus
> 
> Cheers, /Ihar
> 
> ___ OpenStack-dev
> mailing list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJT7KO6AAoJEC5aWaUY1u57ft0IAJzPC2o8fz9mwlC47tXGeHpJ
uP5HFIMaJaPXIwDnuz2uz8dJQZzBvTo6WrN+SRPb6PL3aGbIW4y2BA4n6NM276Xx
PwL8cnBdjQ9INwxn3g9jBceynbm2Yxx3I//2AIT1iu1Io/qjHppkUePxgH33PVMn
jw/n00mnCVJgxpHNXuFoe7Mn8UsduhB7xNCnW90t4rc9cfClGhW1T6/Pw2PWc07p
3Kw4OmmTS7q6r89iAmkBgbgdI2WBjdR902gwGxnuwmf7TJLo5Nd+jnvbe4bJ7V1d
DHzcFPNffKS2Fjbbxay0GjN+7/dAKHVRqkGQZvGzzEL8ZvqZGzyFojZ9BlnrM4Q=
=BZEc
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Russell Bryant
On 08/13/2014 07:27 PM, Michael Still wrote:
> On Thu, Aug 14, 2014 at 3:44 AM, Russell Bryant  wrote:
>> On 08/13/2014 01:09 PM, Dan Smith wrote:
>>> Expecting cores to be at these sorts of things seems pretty reasonable
>>> to me, given the usefulness (and gravity) of the discussions we've been
>>> having so far. Companies with more cores will have to send more or make
>>> some hard decisions, but I don't want to cut back on the meetings until
>>> their value becomes unjustified.
>>
>> I disagree.  IMO, *expecting* people to travel, potentially across the
>> globe, 4 times a year is an unreasonable expectation, and quite
>> uncharacteristic of open source projects.  If we can't figure out a way
>> to have the most important conversations in a way that is inclusive of
>> everyone, we're failing with our processes.
> 
> I am a bit confused by this stance to be honest. You yourself said
> when you were Icehouse PTL that you wanted cores to come to the
> summit. What changed?

Yes, I would love for core team members to come to the design summit
that's twice a year.  I still don't *expect* it for them to remain a
member of the team, and I certainly don't expect it 4 times a year.
It's a matter of frequency and requirement.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Russell Bryant
On 08/13/2014 11:31 PM, Michael Still wrote:
> On Thu, Aug 14, 2014 at 1:24 PM, Jay Pipes  wrote:
> 
>> Just wanted to quickly weigh in with my thoughts on this important topic. I
>> very much valued the face-to-face interaction that came from the mid-cycle
>> meetup in Beaverton (it was the only one I've ever been to).
>>
>> That said, I do not believe it should be a requirement that cores make it to
>> the face-to-face meetings in-person. A number of folks have brought up very
>> valid concerns about personal/family time, travel costs and burnout.
> 
> I'm not proposing they be a requirement. I am proposing that they be
> strongly encouraged.

I'm not sure that's much different in reality.

>> I believe that the issue raised about furthering the divide between core and
>> non-core folks is actually the biggest reason I don't support a mandate to
>> have cores at the face-to-face meetings, and I think we should make our best
>> efforts to support quality virtual meetings that can be done on a more
>> frequent basis than the face-to-face meetings that would be optional.
> 
> I am all for online meetings, but we don't have a practical way to do
> them at the moment apart from IRC. Until someone has a concrete
> proposal that's been shown to work, I feel it's a straw man argument.

Yes, IRC is one option which we already use on a regular basis.  We can
also switch to voice communication for higher bandwidth when needed.  We
even have a conferencing server set up in OpenStack's infrastructure:

https://wiki.openstack.org/wiki/Infrastructure/Conferencing

In theory it even supports basic video conferencing, though I haven't
tested it on this server yet.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Russell Bryant
On 08/13/2014 07:27 PM, Michael Still wrote:
> The etherpad for the meetup has extensive notes. Any summary I write
> will basically be those notes in prose. What are you looking for in a
> summary that isn't in the etherpad? There also wasn't a summary of the
> Icehouse midcycle produced that I can find. Whilst I am happy to do
> one for Juno, it's a requirement that I hadn't planned for, and is
> therefore taking me some time to retrofit.
> 
> I think we should chalk the request for summaries up to experience and
> talk through how to better provide such things at future meetups.

The summary from the Icehouse meetup is here:

http://lists.openstack.org/pipermail/openstack-dev/2014-February/027370.html

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-14 Thread Jay Lau
I see a few mentions of OpenStack services themselves being containerized
in Docker. Is this a serious trend in the community?

http://allthingsopen.com/2014/02/12/why-containers-for-openstack-services/

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Annoucing CloudKitty : an OpenSource Rating-as-a-Service project for OpenStack

2014-08-14 Thread Sandy Walsh
Sounds very interesting. We're currently collecting detailed (and verified) 
usage information in StackTach and are keen to see what CloudKitty is able to 
offer. My one wish is that you keep the components as small pip 
redistributables with low coupling to promote reuse with other projects. Many 
tiny repos and clear API's (internal and external) are good for adoption and 
contribution. 

All the best!
-Sandy


From: Christophe Sauthier [christophe.sauth...@objectif-libre.com]
Sent: Wednesday, August 13, 2014 10:40 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Annoucing CloudKitty : an OpenSource 
Rating-as-a-Service project for OpenStack

We are very pleased at Objectif Libre to introduce CloudKitty, an effort
to provide a fully OpenSource Rating-as-a-Service component in
OpenStack.

Following a first POC presented during the last summit in Atlanta to
some Ceilometer devs (thanks again Julien Danjou for your great support
!), we continued our effort to create a real service for rating. Today
we are happy to share it with you all.


So what do we propose in CloudKitty?
  - a service for collecting metrics (using Ceilometer API)
  - a modular rating architecture to enable/disable modules and create
your own rules on-the-fly, allowing you to use the rating patterns you
like
  - an API to interact with the whole environment from core components
to every rating module
  - a Horizon integration to allow configuration of the rating modules
and display of pricing information in "real time" during instance
creation
  - a CLI client to access this information and easily configure
everything

Technically we are using all the elements that are used in the various
OpenStack projects like oslo, stevedore, pecan...
CloudKitty is highly modular and allows integration / development of
third party collection and rating modules and output formats.

A roadmap is available on the project wiki page (the link is at the end
of this email), but we are clearly hoping to have some feedback and
ideas on how to improve the project and reach a tighter integration with
OpenStack.

The project source code is available at
http://github.com/stackforge/cloudkitty
More stuff will be available on stackforge as soon as the reviews get
validated like python-cloudkittyclient and cloudkitty-dashboard, so stay
tuned.

The project's wiki page (https://wiki.openstack.org/wiki/CloudKitty)
provides more information, and you can reach us via irc on freenode:
#cloudkitty. Developer's documentation is on its way to readthedocs
too.

We plan to present CloudKitty in detail during the Paris Summit, but we
would love to hear from you sooner...

Cheers,

  Christophe and Objectif Libre


Christophe Sauthier   Mail :
christophe.sauth...@objectif-libre.com
CEO & Fondateur   Mob : +33 (0) 6 16 98 63
96
Objectif LibreURL :
www.objectif-libre.com
Infrastructure et Formations LinuxTwitter : @objectiflibre

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-14 Thread Kyle Mestery
I also feel like the drivers/plugins are currently BEYOND a tipping
point, and are in fact dragging down velocity of the core project in
many ways. I'm working on a proposal for Kilo where we move all
drivers/plugins out of the main Neutron tree and into a separate git
repository under the networking program. We have way too many drivers,
requiring way too many review cycles, for this to be a sustainable
model going forward. Since the main reason plugin/driver authors want
their code upstream is to be a part of the simultaneous release, and
thus be packaged by distributions, having a separate repository for
these will satisfy this requirement. I'm still working through the
details around reviews of this repository, etc.

Also, I feel as if the level of passion on the mailing list has died
down a bit, so I thought I'd send something out to try and liven
things up a bit. It's been somewhat non-emotional here for a day or
so. :)

Thanks,
Kyle

On Thu, Aug 14, 2014 at 5:09 AM, Salvatore Orlando  wrote:
> I think there will soon be a discussion regarding what the appropriate
> location for plugin and drivers should be.
> My personal feeling is that Neutron has simply reached the tipping point
> where the high number of drivers and plugins is causing unnecessary load for
> the core team and frustration for the community.
>
> There I would totally support Luke's initiative about maintaining an
> out-of-tree ML2 driver. On the other hand, a plugin/driver "diaspora" might
> also have negative consequences such as frequent breakages such as those Bob
> was mentioning or confusion for users which might need to end up fetching
> drivers from disparate sources.
>
> As mentioned during the last Neutron IRC meeting this is another "process"
> aspect which will be discussed soon, with the aim of defining a plan for:
> - drastically reduce the number of plugins and drivers which must be
> maintained in the main source tree
> - enhance control of plugin/driver maintainers over their own code
> - preserve the ability of doing CI checks on gerrit as we do today
> - raise the CI bar (maybe finally set the smoketest as a minimum
> requirement?)
>
> Regards,
> Salvatore
>
>
>
> On 14 August 2014 11:47, loy wolfe  wrote:
>>
>>
>>
>>
>> On Thu, Aug 14, 2014 at 4:22 PM, Mathieu Rohon 
>> wrote:
>>>
>>> Hi,
>>>
>>> I would like to add that it would be harder for the community to help
>>> maintaining drivers.
>>> such a work [1] wouldn't have occured with an out of tree ODL driver.
>>
>>
>> +1.
>> It's better to move all MD for none built-in backend out of tree,
>> maintaining these drivers shouldn't be the responsibility of community. Not
>> only MD, but also plugin, agent should all obey this rule
>>
>>>
>>>
>>> [1] https://review.openstack.org/#/c/96459/
>>>
>>> On Wed, Aug 13, 2014 at 1:09 PM, Robert Kukura 
>>> wrote:
>>> > One thing to keep in mind is that the ML2 driver API does sometimes
>>> > change,
>>> > requiring updates to drivers. Drivers that are in-tree get updated
>>> > along
>>> > with the driver API change. Drivers that are out-of-tree must be
>>> > updated by
>>> > the owner.
>>> >
>>> > -Bob
>>> >
>>> >
>>> > On 8/13/14, 6:59 AM, ZZelle wrote:
>>> >
>>> > Hi,
>>> >
>>> >
>>> > The important thing to understand is how to integrate with neutron
>>> > through
>>> > stevedore/entrypoints:
>>> >
>>> >
>>> > https://github.com/dave-tucker/odl-neutron-drivers/blob/master/setup.cfg#L32-L34
>>> >
>>> >
>>> > Cedric
>>> >
>>> >
>>> > On Wed, Aug 13, 2014 at 12:17 PM, Dave Tucker 
>>> > wrote:
>>> >>
>>> >> I've been working on this for OpenDaylight
>>> >> https://github.com/dave-tucker/odl-neutron-drivers
>>> >>
>>> >> This seems to work for me (tested Devstack w/ML2) but YMMV.
>>> >>
>>> >> -- Dave
>>> >>
>>> >> ___
>>> >> OpenStack-dev mailing list
>>> >> OpenStack-dev@lists.openstack.org
>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>> >
>>> >
>>> >
>>> > ___
>>> > OpenStack-dev mailing list
>>> > OpenStack-dev@lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>> >
>>> >
>>> > ___
>>> > OpenStack-dev mailing list
>>> > OpenStack-dev@lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Daniel P. Berrange
On Thu, Aug 14, 2014 at 08:31:48AM -0400, Russell Bryant wrote:
> On 08/13/2014 11:31 PM, Michael Still wrote:
> > On Thu, Aug 14, 2014 at 1:24 PM, Jay Pipes  wrote:
> > 
> >> Just wanted to quickly weigh in with my thoughts on this important topic. I
> >> very much valued the face-to-face interaction that came from the mid-cycle
> >> meetup in Beaverton (it was the only one I've ever been to).
> >>
> >> That said, I do not believe it should be a requirement that cores make it 
> >> to
> >> the face-to-face meetings in-person. A number of folks have brought up very
> >> valid concerns about personal/family time, travel costs and burnout.
> > 
> > I'm not proposing they be a requirement. I am proposing that they be
> > strongly encouraged.
> 
> I'm not sure that's much different in reality.
> 
> >> I believe that the issue raised about furthering the divide between core 
> >> and
> >> non-core folks is actually the biggest reason I don't support a mandate to
> >> have cores at the face-to-face meetings, and I think we should make our 
> >> best
> >> efforts to support quality virtual meetings that can be done on a more
> >> frequent basis than the face-to-face meetings that would be optional.
> > 
> > I am all for online meetings, but we don't have a practical way to do
> > them at the moment apart from IRC. Until someone has a concrete
> > proposal that's been shown to work, I feel its a straw man argument.
> 
> Yes, IRC is one option which we already use on a regular basis.  We can
> also switch to voice communication for higher bandwidth when needed.  We
> even have a conferencing server set up in OpenStack's infrastructure:
> 
> https://wiki.openstack.org/wiki/Infrastructure/Conferencing
> 
> In theory it even supports basic video conferencing, though I haven't
> tested it on this server yet.

Depending on the usage needs, I think Google hangouts is a quite useful
technology. For many-to-many sessions its limit of 10 participants can be
an issue, but for a few-to-many broadcast it could be practical. What I
find particularly appealing is the way it can live stream the session
over youtube which allows for unlimited number of viewers, as well as
being available offline for later catchup.

It could be useful in cases where one (or a handful) of people want to
present an idea / topic visually with slides / screencasts / etc and
then let the broader interactive discussion take place on IRC / mailing
list afterwards. I could see this being something that might let people
present proposals for new Nova features without having to wait (+try for
a limited) design summit slot in the 6 month cycle. One of the issues
I feel with the design summit is that it was the first time hearing about
many of the ideas, so there was not enough time to disgest the proposal
and so some of the thorny things to discuss only come to mind afterwards.
If we had to a approach for promoting features & knowledge in this way the
design summit could have more time to focus on areas of debate where the
f2f prescence is most valuable.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-14 Thread Devananda van der Veen
On Aug 14, 2014 2:04 AM, "Eoghan Glynn"  wrote:
>
>
> > >> Letting the industry field-test a project and feed their experience
> > >> back into the community is a slow process, but that is the best
> > >> measure of a project's success. I seem to recall this being an
> > >> implicit expectation a few years ago, but haven't seen it discussed
> > >> in a while.
> > >
> > > I think I recall us discussing a "must have feedback that it's
> > > successfully deployed" requirement in the last cycle, but we recognized
> > > that deployers often wait until a project is integrated.
> >
> > In the early discussions about incubation, we respected the need to
> > officially recognize a project as part of OpenStack just to create the
> > uptick in adoption necessary to mature projects. Similarly, integration
> > is a recognition of the maturity of a project, but I think we have
> > graduated several projects long before they actually reached that level
> > of maturity. Actually running a project at scale for a period of time is
> > the only way to know it is mature enough to run it in production at
> > scale.
> >
> > I'm just going to toss this out there. What if we set the graduation bar
> > to "is in production in at least two sizeable clouds" (note that I'm not
> > saying "public clouds"). Trove is the only project that has, to my
> > knowledge, met that bar prior to graduation, and it's the only project
> > that graduated since Havana that I can, off hand, point at as clearly
> > successful. Heat and Ceilometer both graduated prior to being in
> > production; a few cycles later, they're still having adoption problems
> > and looking at large architectural changes. I think the added cost to
> > OpenStack when we integrate immature or unstable projects is significant
> > enough at this point to justify a more defensive posture.
> >
> > FWIW, Ironic currently doesn't meet that bar either - it's in production
> > in only one public cloud. I'm not aware of large private installations
> > yet, though I suspect there are some large private deployments being
> > spun up right now, planning to hit production with the Juno release.
>
> We have some hard data from the user survey presented at the Juno summit,
> with respectively 26 & 53 production deployments of Heat and Ceilometer
> reported.
>
> There's no cross-referencing of deployment size with services in
> production in those data presented, though it may be possible to mine
> that out of the raw survey responses.

Indeed, and while that would be useful information, I was referring to the
deployment of those services at scale prior to graduation, not post
graduation.

Best,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][DevStack] How to increase developer usage of Neutron

2014-08-14 Thread Mike Spreitzer
I'll bet I am not the only developer who is not highly competent with 
bridges and tunnels, Open vSwitch, Neutron configuration, and how DevStack 
transmutes all those.  My bet is that you would have more developers using 
Neutron if there were an easy-to-find and easy-to-follow recipe for 
creating a developer install of OpenStack with Neutron, one that covers a 
pretty basic and easy case.  Let's say a developer gets a recent image of 
Ubuntu 14.04 from Canonical and creates an instance in some undercloud, 
and that instance has just one NIC, at 10.9.8.7/16.  If there were a recipe 
for such a developer to follow from that point on, it would be great.
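For what it's worth, a minimal recipe along those lines might look like the
sketch below. This is only an illustrative localrc, not an authoritative
one: the addresses and passwords are placeholders, and the exact service
names should be checked against the DevStack/Neutron wiki.

```ini
# localrc sketch for a single-node DevStack with Neutron.
# Placeholder values; set HOST_IP to the instance's NIC address.
HOST_IP=10.9.8.7
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD

# Replace nova-network with the Neutron services
disable_service n-net
enable_service q-svc q-agt q-dhcp q-l3 q-meta

# Range used for floating IPs on the single node
FLOATING_RANGE=172.24.4.0/24
```

With that written to devstack/localrc, running ./stack.sh should bring up
an all-in-one Neutron-based cloud on the instance.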

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] [Cinder] Registering 3rd party driver as default

2014-08-14 Thread Kerr, Andrew
You either need to comment out the enabled_backends line, or you'll want
to put something similar to this in your cinder.conf file:

[DEFAULT]
...
enabled_backends = cloudbyte
...
[cloudbyte]
volume_driver =
cinder.volume.drivers.cloudbyte.cloudbyte.ElasticenterISCSIDriver
SAN_IP=20.10.22.245
CB_APIKEY=masQwghrmPOVIqbjyyWKQdg4z4bP2sNZ13fRQyUMwm453PUiYB-xyRSMBDoZeMj6R
0-XU9DCscxMbe3AhleDyQ
CB_ACCOUNT_NAME=acc1
TSM_NAME=openstacktsm



If you have enabled_backends set, Cinder will only use those backend
sections and ignore all driver-related settings in the DEFAULT section.

You also probably want to comment (or remove) the default_volume_type
line, unless you plan to create that volume type after the services come
up.
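The selection behaviour described above can be sketched in a few lines.
This is a toy illustration using Python's configparser, not Cinder's
actual oslo.config-based loader; the section and driver names are copied
from the example config above.

```python
# Toy model of Cinder's multi-backend selection: when enabled_backends
# is set, only the named sections are consulted and the DEFAULT driver
# setting is ignored; otherwise the DEFAULT driver applies.
import configparser

SAMPLE_CONF = """
[DEFAULT]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
enabled_backends = cloudbyte

[cloudbyte]
volume_driver = cinder.volume.drivers.cloudbyte.cloudbyte.ElasticenterISCSIDriver
san_ip = 20.10.22.245
"""

def resolve_backends(conf_text):
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    enabled = cp["DEFAULT"].get("enabled_backends")
    if not enabled:
        # Legacy single-backend mode: the DEFAULT driver is used.
        return {"DEFAULT": cp["DEFAULT"].get("volume_driver")}
    # Multi-backend mode: one driver per named backend section.
    return {name.strip(): cp[name.strip()].get("volume_driver")
            for name in enabled.split(",")}

print(resolve_backends(SAMPLE_CONF))
# -> {'cloudbyte': 'cinder.volume.drivers.cloudbyte.cloudbyte.ElasticenterISCSIDriver'}
```

The same shape explains the symptom in the original report: with
enabled_backends pointing at a section that still configures the LVM
driver, the DEFAULT settings are silently ignored.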

Andrew Kerr
OpenStack QA
Cloud Solutions Group
NetApp


From:  Amit Das 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)"

Date:  Thursday, August 14, 2014 at 5:37 AM
To:  OpenStack Development Mailing List 
Subject:  Re: [openstack-dev] [Devstack] [Cinder] Registering 3rd
party   driver as default


With further debugging, I find that none of the configuration options
present in /etc/cinder/cinder.conf are being applied.


Regards,
Amit
CloudByte Inc. 




On Thu, Aug 14, 2014 at 11:40 AM, Amit Das
 wrote:

Hi folks,

I have been trying to run devstack with my cinder driver as the default
volume_driver but with no luck.

Devstack seems to register the lvm driver as the default always.

I have tried the approaches below:

1. directly modifying the /etc/cinder/cinder.conf file
2. creating a driver file @ ./devstack/lib/cinder_plugins/
   (ref - https://review.openstack.org/#/c/68726/)






This is my localrc details:
http://paste.openstack.org/show/94822/


I run ./unstack.sh & then FORCE=yes ./stack.sh

This is the cinder.conf that is generated after running above stack.sh. I
comment out the [lvmdriver-1] section manually
(not sure if this section needs to be commented)

http://paste.openstack.org/show/94841/


These are portions of c-sch & c-vol logs after restarting them in their
respective screens.

http://paste.openstack.org/show/94842/



Regards,
Amit
CloudByte Inc. 








___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Russell Bryant
On 08/14/2014 09:14 AM, Daniel P. Berrange wrote:
> On Thu, Aug 14, 2014 at 08:31:48AM -0400, Russell Bryant wrote:
>> On 08/13/2014 11:31 PM, Michael Still wrote:
>>> On Thu, Aug 14, 2014 at 1:24 PM, Jay Pipes  wrote:
>>>
 Just wanted to quickly weigh in with my thoughts on this important topic. I
 very much valued the face-to-face interaction that came from the mid-cycle
 meetup in Beaverton (it was the only one I've ever been to).

 That said, I do not believe it should be a requirement that cores make it 
 to
 the face-to-face meetings in-person. A number of folks have brought up very
 valid concerns about personal/family time, travel costs and burnout.
>>>
>>> I'm not proposing they be a requirement. I am proposing that they be
>>> strongly encouraged.
>>
>> I'm not sure that's much different in reality.
>>
 I believe that the issue raised about furthering the divide between core 
 and
 non-core folks is actually the biggest reason I don't support a mandate to
 have cores at the face-to-face meetings, and I think we should make our 
 best
 efforts to support quality virtual meetings that can be done on a more
 frequent basis than the face-to-face meetings that would be optional.
>>>
>>> I am all for online meetings, but we don't have a practical way to do
>>> them at the moment apart from IRC. Until someone has a concrete
>>> proposal that's been shown to work, I feel it's a straw man argument.
>>
>> Yes, IRC is one option which we already use on a regular basis.  We can
>> also switch to voice communication for higher bandwidth when needed.  We
>> even have a conferencing server set up in OpenStack's infrastructure:
>>
>> https://wiki.openstack.org/wiki/Infrastructure/Conferencing
>>
>> In theory it even supports basic video conferencing, though I haven't
>> tested it on this server yet.
> 
> Depending on the usage needs, I think Google Hangouts is a quite useful
> technology. For a many-to-many session, its limit of 10 participants can
> be an issue, but for a few-to-many broadcast it could be practical. What I
> find particularly appealing is the way it can live stream the session
> over YouTube, which allows for an unlimited number of viewers, as well as
> being available offline for later catch-up.
> 
> It could be useful in cases where one (or a handful) of people want to
> present an idea / topic visually with slides / screencasts / etc. and
> then let the broader interactive discussion take place on IRC / the
> mailing list afterwards. I could see this being something that might let
> people present proposals for new Nova features without having to wait
> for (and try for a limited) design summit slot in the 6-month cycle. One
> of the issues I have with the design summit is that it was often the
> first time I heard about many of the ideas, so there was not enough time
> to digest a proposal, and some of the thorny things to discuss only came
> to mind afterwards. If we had an approach for promoting features &
> knowledge in this way, the design summit could have more time to focus
> on areas of debate where the f2f presence is most valuable.

Yeah, I really like Google Hangouts.  I suspect more than 10 people may
want the option to speak.  We could have an IRC channel going as well
for questions, so that might mitigate it.

Another issue is that some folks are just fundamentally opposed to using
Google ... but I think it's worth it for how good the service is.  :-)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][DevStack] How to increase developer usage of Neutron

2014-08-14 Thread CARVER, PAUL
Mike Spreitzer [mailto:mspre...@us.ibm.com] wrote:

>I'll bet I am not the only developer who is not highly competent with
>bridges and tunnels, Open vSwitch, Neutron configuration, and how DevStack
>transmutes all those.  My bet is that you would have more developers using
>Neutron if there were an easy-to-find and easy-to-follow recipe to use, to
>create a developer install of OpenStack with Neutron.  One that's a pretty
>basic and easy case.  Let's say a developer gets a recent image of Ubuntu
>14.04 from Canonical, and creates an instance in some undercloud, and that
>instance has just one NIC, at 10.9.8.7/16.  If there were a recipe for
>such a developer to follow from that point on, it would be great.

https://wiki.openstack.org/wiki/NeutronDevstack worked for me.

However, I'm pretty sure it's only a single-node, all-in-one setup. At
least, I created only one VM to run it on, and I don't think DevStack has
created multiple nested VMs inside the one I created to run DevStack. I
haven't gotten around to figuring out how to set up a full multi-node
DevStack environment with separate compute nodes and network nodes and
GRE/VXLAN tunnels.

There are multi-node instructions on that wiki page but I haven't tried
following them. If someone has a Vagrantfile that creates a full multi-
node Neutron DevStack complete with GRE/VXLAN tunnels, it would be great
if they could add it to that wiki page.
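As a starting point for anyone attempting it, a second-node localrc might
look roughly like the sketch below. This is hypothetical and untested;
the variable names follow the single-node conventions, all addresses are
placeholders, and the wiki's multi-node section should be treated as the
authority.

```ini
# Hypothetical compute-node localrc joining an existing controller.
HOST_IP=10.9.8.8           # this node
SERVICE_HOST=10.9.8.7      # the all-in-one controller node
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST

# Run only compute plus the L2 agent here; tenant traffic rides tunnels.
ENABLED_SERVICES=n-cpu,q-agt
ENABLE_TENANT_TUNNELS=True
```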


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] requirements.txt: explicit vs. implicit

2014-08-14 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi all,

some plugins depend on modules that are not mentioned in
requirements.txt. Among them, Cisco Nexus (ncclient), Brocade
(ncclient), Embrane (heleosapi)... Some other plugins put their
dependencies in requirements.txt though (like Arista depending on
jsonrpclib).

There are pros and cons in both cases. The obvious issue with not
putting those requirements in the file is that packagers are left
uninformed that those implicit requirements exist, meaning plugins
are shipped to users with broken dependencies. It also means we ship
code that depends on unknown modules grabbed from random places on the
internet instead of relying on what's available on PyPI, which is a
bit scary.

With my packager hat on, I would like to suggest making those
dependencies explicit by filling in requirements.txt. This will make
packaging a bit easier. Of course, runtime dependencies being set
correctly does not mean plugins are working and tested, but at least we
give them a chance to be tested and used.

But maybe there are valid concerns against doing so. In that case, I
would be glad to know how packagers are expected to track those
implicit dependencies.

I would like to ask the community to decide what's the right way to
handle these cases.
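As a toy illustration of the problem packagers face, a helper like the
one below can surface imports that requirements.txt never mentions. This
is a hypothetical sketch: it naively equates import names with
requirement names, which real tooling cannot do on PyPI, and it
hand-waves stdlib filtering by listing stdlib modules as "declared".

```python
# Find top-level modules imported by plugin source code that are not
# declared as requirements.
import ast

def undeclared_imports(source, declared):
    tree = ast.parse(source)
    tops = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            tops.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            tops.add(node.module.split(".")[0])
    return sorted(tops - set(declared))

PLUGIN_SRC = """
import os
import ncclient            # implicit dependency, not in requirements.txt
from neutron.db import api
"""

# 'os' is stdlib and 'neutron' is the project itself, so treat both as
# declared; 'ncclient' is the dependency packagers never hear about.
print(undeclared_imports(PLUGIN_SRC, declared={"os", "neutron"}))
# -> ['ncclient']
```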

Cheers,
/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJT7Lu2AAoJEC5aWaUY1u57tDkIAOrx1TWVjke8xxsJtz+tizmg
rDgoQyugU8bmaWUFzKi3yVLDFmkOH5iX9RFqj6pgXngydd+cO0Z8CB825uT7kimi
tTwTk2o1Ty4lIG38nwi/U8pn+nmzVApjOqtJmBmtZKBtoY7hRUs+QVTz5V5M1AmA
MQm0eYZXMQ531k4UTdaFxtZ2xPvnCEsFTWi0vosZLPvccVw33vUnQ0SnewQAgb4w
NZ7m302454S2INegqVYlZqQMQXxy6v/BAigyoLXBj8Pl3FsrNU0j3SMtzqSm71ty
GCz0qdWckUdgsDFnLyyNXjUV/G9xZ03pYZ5ID2WiVQl5MYbmkAHlJJkjCYIrv3c=
=tLTZ
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-14 Thread Russell Bryant
On 08/14/2014 09:21 AM, Devananda van der Veen wrote:
> 
> On Aug 14, 2014 2:04 AM, "Eoghan Glynn"  > wrote:
>>
>>
>> > >> Letting the industry field-test a project and feed their experience
>> > >> back into the community is a slow process, but that is the best
>> > >> measure of a project's success. I seem to recall this being an
>> > >> implicit expectation a few years ago, but haven't seen it discussed
>> > >> in a while.
>> > >
>> > > I think I recall us discussing a "must have feedback that it's
>> > > successfully deployed" requirement in the last cycle, but we
>> > > recognized that deployers often wait until a project is integrated.
>> >
>> > In the early discussions about incubation, we respected the need to
>> > officially recognize a project as part of OpenStack just to create the
>> > uptick in adoption necessary to mature projects. Similarly, integration
>> > is a recognition of the maturity of a project, but I think we have
>> > graduated several projects long before they actually reached that
>> > level of maturity. Actually running a project at scale for a period of
>> > time is the only way to know it is mature enough to run it in
>> > production at scale.
>> >
>> > I'm just going to toss this out there. What if we set the graduation
>> > bar to "is in production in at least two sizeable clouds" (note that
>> > I'm not saying "public clouds"). Trove is the only project that has,
>> > to my knowledge, met that bar prior to graduation, and it's the only
>> > project that graduated since Havana that I can, off hand, point at as
>> > clearly successful. Heat and Ceilometer both graduated prior to being
>> > in production; a few cycles later, they're still having adoption
>> > problems and looking at large architectural changes. I think the added
>> > cost to OpenStack when we integrate immature or unstable projects is
>> > significant enough at this point to justify a more defensive posture.
>> >
>> > FWIW, Ironic currently doesn't meet that bar either - it's in
>> > production in only one public cloud. I'm not aware of large private
>> > installations yet, though I suspect there are some large private
>> > deployments being spun up right now, planning to hit production with
>> > the Juno release.
>>
>> We have some hard data from the user survey presented at the Juno summit,
>> with respectively 26 & 53 production deployments of Heat and Ceilometer
>> reported.
>>
>> There's no cross-referencing of deployment size with services in
>> production in those data presented, though it may be possible to mine
>> that out of the raw survey responses.
> 
> Indeed, and while that would be useful information, I was referring to
> the deployment of those services at scale prior to graduation, not post
> graduation.

We have a tough messaging problem here though.  I suspect many users
wait until graduation to consider a real deployment.  "Incubated" is
viewed as immature / WIP / etc.  That won't change quickly, even if we
want it to.

I think our intentions are already to not graduate something that isn't
ready for production.  That doesn't mean we haven't made mistakes, but
we're trying to learn and improve.  We developed a set of *written*
guidelines to stick to, and have been holding all projects up to them.
Teams like Ceilometer have been very receptive to the process, have
developed plans to fill gaps, and have been working hard on the issues.

A hard rule for production deployments seems like a heavy rule.  I'd
rather just say that we should be confident that it's a production ready
component, and known deployments are one such piece of input that would
provide that confidence.  It could also just be extraordinary testing
that shows both scale and quality.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] heat docker multi host scheduling support

2014-08-14 Thread Zane Bitter

On 14/08/14 03:21, Malawade, Abhijeet wrote:

Hi all,

I am trying to use heat to create docker containers. I have configured the 
heat-docker plugin.
I am also able to create a stack using heat successfully.

To start a container on a different host we need to provide 'docker_endpoint' 
in the template. For this we have to provide, in the template, the address of 
the host where the container will run.

Is there any way to schedule docker containers on available hosts using the 
heat-docker plugin without giving 'docker_endpoint' in the template file?


Yes, there is a Nova driver on Stackforge that uses Docker as a 
back-end. Hopefully one day this will turn into a Nova-like container 
service, but for now the way to get scheduling for Docker containers is 
through Nova with this driver.



Does the heat-docker plugin support managing a docker host cluster with 
scheduling logic?


No.


Please let me know your suggestions on the same.

Thanks,
Abhijeet



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] [Cinder] Registering 3rd party driver as default

2014-08-14 Thread Amit Das
Thanks a lot.

This worked out like a charm.

Regards,
Amit
*CloudByte Inc.* 


On Thu, Aug 14, 2014 at 7:02 PM, Kerr, Andrew 
wrote:

> You either need to comment out the enabled_backends line, or you'll want
> to put something similar to this in your cinder.conf file:
>
> [DEFAULT]
> ...
> enabled_backends = cloudbyte
> ...
> [cloudbyte]
> volume_driver =
> cinder.volume.drivers.cloudbyte.cloudbyte.ElasticenterISCSIDriver
> SAN_IP=20.10.22.245
> CB_APIKEY=masQwghrmPOVIqbjyyWKQdg4z4bP2sNZ13fRQyUMwm453PUiYB-xyRSMBDoZeMj6R
> 0-XU9DCscxMbe3AhleDyQ
> CB_ACCOUNT_NAME=acc1
> TSM_NAME=openstacktsm
>
>
>
> If you have enabled_backends set, Cinder will only use those backend
> sections and ignore all driver-related settings in the DEFAULT section.
>
> You also probably want to comment (or remove) the default_volume_type
> line, unless you plan to create that volume type after the services come
> up.
>
> Andrew Kerr
> OpenStack QA
> Cloud Solutions Group
> NetApp
>
>
> From:  Amit Das 
> Reply-To:  "OpenStack Development Mailing List (not for usage questions)"
> 
> Date:  Thursday, August 14, 2014 at 5:37 AM
> To:  OpenStack Development Mailing List
> Subject:  Re: [openstack-dev] [Devstack] [Cinder] Registering 3rd
> party   driver as default
>
>
> With further debugging, I find that none of the configuration options
> present in /etc/cinder/cinder.conf are being applied.
>
>
> Regards,
> Amit
> CloudByte Inc. 
>
>
>
>
> On Thu, Aug 14, 2014 at 11:40 AM, Amit Das
>  wrote:
>
> Hi folks,
>
> I have been trying to run devstack with my cinder driver as the default
> volume_driver but with no luck.
>
> Devstack seems to register the lvm driver as the default always.
>
> I have tried the approaches below:
>
> 1. directly modifying the /etc/cinder/cinder.conf file
> 2. creating a driver file @ ./devstack/lib/cinder_plugins/
>    (ref - https://review.openstack.org/#/c/68726/)
>
>
>
>
>
>
> This is my localrc details:
> http://paste.openstack.org/show/94822/
>
>
> I run ./unstack.sh & then FORCE=yes ./stack.sh
>
> This is the cinder.conf that is generated after running above stack.sh. I
> comment out the [lvmdriver-1] section manually
> (not sure if this section needs to be commented)
>
> http://paste.openstack.org/show/94841/
>
>
> These are portions of c-sch & c-vol logs after restarting them in their
> respective screens.
>
> http://paste.openstack.org/show/94842/
>
>
>
> Regards,
> Amit
> CloudByte Inc. 
>
>
>
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread CARVER, PAUL
Daniel P. Berrange [mailto:berra...@redhat.com] wrote:

>Depending on the usage needs, I think Google Hangouts is a quite useful
>technology. For a many-to-many session, its limit of 10 participants can
>be an issue, but for a few-to-many broadcast it could be practical. What I
>find particularly appealing is the way it can live stream the session
>over YouTube, which allows for an unlimited number of viewers, as well as
>being available offline for later catch-up.

I can't actually offer AT&T resources without getting some level of
management approval first, but just for the sake of discussion here's
some info about the telepresence system we use.

-=-=-=-=-=-=-=-=-=-
ATS B2B Telepresence conferences can be conducted with an external company's
Telepresence room(s), which subscribe to the AT&T Telepresence Solution,
or a limited number of other Telepresence service provider's networks.

Currently, the number of Telepresence rooms that can participate in a B2B
conference is limited to a combined total of 20 rooms (19 of which can be
AT&T rooms, depending on the number of remote endpoints included).
-=-=-=-=-=-=-=-=-=-

We currently have B2B interconnect with over 100 companies and AT&T has
telepresence rooms in many of our locations around the US and around
the world. If other large OpenStack companies also have telepresence
rooms that we could interconnect with I think it might be possible
to get management agreement to hold a couple OpenStack meetups per
year.

Most of our rooms are best suited for 6 people, but I know of at least
one 18 person telepresence room near me.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Cinder] Cinder Core nomination

2014-08-14 Thread Duncan Thomas
+1

On 14 August 2014 00:55, Boring, Walter  wrote:
> Hey guys,
>I wanted to pose a nomination for Cinder core.
>
> Xing Yang.
> She has been active in the cinder community for many releases and has worked 
> on several drivers as well as other features for cinder itself.   She has 
> been doing an awesome job doing reviews and helping folks out in the 
> #openstack-cinder irc channel for a long time.   I think she would be a good 
> addition to the core team.
>
>
> Walt
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Russell Bryant
On 08/14/2014 10:04 AM, CARVER, PAUL wrote:
> Daniel P. Berrange [mailto:berra...@redhat.com] wrote:
> 
>> Depending on the usage needs, I think Google Hangouts is a quite useful
>> technology. For a many-to-many session, its limit of 10 participants can
>> be an issue, but for a few-to-many broadcast it could be practical. What
>> I find particularly appealing is the way it can live stream the session
>> over YouTube, which allows for an unlimited number of viewers, as well
>> as being available offline for later catch-up.
> 
> I can't actually offer AT&T resources without getting some level of
> management approval first, but just for the sake of discussion here's
> some info about the telepresence system we use.
> 
> -=-=-=-=-=-=-=-=-=-
> ATS B2B Telepresence conferences can be conducted with an external company's
> Telepresence room(s), which subscribe to the AT&T Telepresence Solution,
> or a limited number of other Telepresence service provider's networks.
> 
> Currently, the number of Telepresence rooms that can participate in a B2B
> conference is limited to a combined total of 20 rooms (19 of which can be
> AT&T rooms, depending on the number of remote endpoints included).
> -=-=-=-=-=-=-=-=-=-
> 
> We currently have B2B interconnect with over 100 companies and AT&T has
> telepresence rooms in many of our locations around the US and around
> the world. If other large OpenStack companies also have telepresence
> rooms that we could interconnect with I think it might be possible
> to get management agreement to hold a couple OpenStack meetups per
> year.
> 
> Most of our rooms are best suited for 6 people, but I know of at least
> one 18 person telepresence room near me.

An ideal solution would allow attendees to join as individuals from
anywhere.  A lot of contributors work from home.  Is that sort of thing
compatible with your system?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new features in-tree

2014-08-14 Thread Sridar Kandaswamy (skandasw)
Hi Wuhongning:

Yes, you are correct – this is phase 1, to at least get basic perimeter 
firewall support working with DVR before looking for an optimal way to 
address E-W traffic.

Thanks

Sridar

From: Wuhongning <wuhongn...@huawei.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Thursday, August 14, 2014 at 1:05 AM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new 
features in-tree

FWaaS can't seamlessly work with DVR yet. A BP [1] has been submitted, but it 
can only handle N-S traffic, leaving E-W untouched. If we implement the E-W 
firewall in DVR, the iptables rules might be applied on a per-port basis, so 
there is some overlap with security groups (can we imagine a packet hitting 
the iptables hooks twice between the VM and the wire, for both ingress and 
egress directions?).

Maybe the overall service plugins (including service extensions in ML2) need 
some cleaning up. It seems that Neutron is just built from separate single 
blocks.

[1]  
http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/juno/neutron-dvr-fwaas.rst


From: Sridar Kandaswamy (skandasw) 
[skand...@cisco.com]
Sent: Thursday, August 14, 2014 3:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new 
features in-tree


Hi Aaron:


There is a certain fear of another cascading chain of emails, so it is with 
hesitation that I send this email out. :-)


1) I could not agree with you more on the issue with the logs and the pain of 
debugging issues here. Yes, for sure bugs do keep popping up, but oftentimes 
(speaking for FWaaS) - given the L3 agent interactions - there are a multitude 
of reasons for a failure. An L3 agent crash or a router issue also manifests 
itself as an FWaaS issue - I think your first paste is along those lines 
(perhaps I could be wrong without much context here).


The L3 agent - service coexistence is far from ideal - but this experience has 
led to two proposals - a vendor proposal [1] that actually tries to address 
such agent limitations, and collaboration on another community proposal [2] to 
enable the L3 agent to be more suited to hosting services. Hopefully [2] will 
get picked up in K and should help provide the necessary infrastructure to 
clean up the reference implementation.


2) Regarding your point on the FWaaS API - the intent of the feature by design 
was to keep the service abstraction separate from how it is inserted in the 
network, to keep this vendor/technology neutral. The first priority post-Havana 
was to address service insertion to get away from the all-routers model [3], 
but that did not get the blessings needed. Now, with a redrafted proposal for 
Juno [4], an effort is being made to address this for the second time in the 
two releases since Havana.


In general, I would make a request that before we decide to go ahead and start 
moving things out into an incubator area, more discussion is needed. We don't 
want to end up in a situation in K-3 where we find out that this model does 
not quite work for whatever reason. Also, I wonder about the hoops to get out 
of incubation. As vendors who try to align with the community and upstream 
our work, we don't want to end up waiting more cycles - there is quite a bit 
of frustration felt here too.


Let's also think about the impact on feature velocity; somehow that keeps 
popping into my head every time I buy a book from a certain online retailer. :-)


[1] https://review.openstack.org/#/c/90729/

[2] https://review.openstack.org/#/c/91532/

[3] https://review.openstack.org/#/c/62599/

[4] https://review.openstack.org/#/c/93128/


Thanks


Sridar


From: Aaron Rosen <aaronoro...@gmail.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Wednesday, August 13, 2014 at 3:56 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new 
features in-tree

Hi,

I've been thinking a good bit about the right way to move forward with this 
and, in general, the right way new services should be added. Yesterday I was 
working on a bug that was causing some problems in the OpenStack infra. We 
tracked down the issue, then I uploaded a patch for it. A little while later 
Jenkins voted back a -1, so I started looking through the logs to see what 
the source of the failure was (which was actually unrelated to my patch). 
There was a random failure in the fwaas/vpn/l3-agent code, which all outputs 
to the same log file, one that contains many traces for every run, even 
successful ones. In one skim of this log file I was able to spot 4 [1] bugs, 
which shows these new "experimental" services that we've added to Neutron 
have underlying problems 
(even though they've been in the tree for 2 releases+ n

Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Matt Riedemann



On 8/14/2014 3:47 AM, Daniel P. Berrange wrote:

On Thu, Aug 14, 2014 at 09:24:36AM +1000, Michael Still wrote:

On Thu, Aug 14, 2014 at 3:09 AM, Dan Smith  wrote:

I'm not questioning the value of f2f - I'm questioning the idea of
doing f2f meetings sooo many times a year. OpenStack is very much
the outlier here among open source projects - the vast majority of
projects get along very well with much less f2f time and a far
smaller % of their contributors attend those f2f meetings that do
happen. So I really do question what is missing from OpenStack's
community interaction that makes us believe that having 4 f2f
meetings a year is critical to our success.


How many is too many? So far, I have found the midcycles to be extremely
productive -- productive in a way that we don't see at the summits, and
I think other attendees agree. Obviously if budgets start limiting them,
then we'll have to deal with it, but I don't want to stop meeting
preemptively.


I agree they're very productive. Let's pick on the nova v3 API case as
an example... We had failed as a community to reach a consensus using
our existing discussion mechanisms (hundreds of emails, at least three
specs, phone calls between the various parties, et cetera), yet at the
summit and then a midcycle meetup we managed to nail down an agreement
on a very contentious and complicated topic.


We thought we had agreement on the v3 API after the Atlanta f2f summit and
after Hong Kong f2f too. So I wouldn't necessarily say that we
needed another f2f meeting to resolve that, but rather that this is
a very complex topic that takes a long time to resolve no matter
how we discuss it, and the discussions had just happened to reach
a natural conclusion this time around. But let's see if this agreement
actually sticks this time


I can see the argument that travel cost is an issue, but I think it's
also not a very strong argument. We have companies spending millions
of dollars on OpenStack -- surely spending a relatively small amount
on travel to keep the development team as efficient as possible isn't
a big deal? I wouldn't be at all surprised if the financial costs of
the v3 API debate (staff time mainly) were much higher than the travel
costs of those involved in the summit and midcycle discussions which
sorted it out.


I think the travel cost really is a big issue. Due to the number of
people who had to travel to the many mid-cycle meetups, a good number
of people I work with no longer have the ability to go to the Paris
design summit. This is going to make it harder for them to feel a
proper engaged part of our community. I can only see this situation
get worse over time if greater emphasis is placed on attending the
mid-cycle meetups.


Travelling to places to talk to people isn't a great solution, but it
is the most effective one we've found so far. We should continue to
experiment with other options, but until we find something that works
as well as meetups, I think we need to keep having them.


IMHO, the reasons to cut back would be:

- People leaving with a "well, that was useless..." feeling
- Not enough people able to travel to make it worthwhile

So far, neither of those have been outcomes of the midcycles we've had,
so I think we're doing okay.

The design summits are structured differently, where we see a lot more
diverse attendance because of the colocation with the user summit. It
doesn't lend itself well to long and in-depth discussions about specific
things, but it's very useful for what it gives us in the way of
exposure. We could try to have less of that at the summit and more
midcycle-ish time, but I think it's unlikely to achieve the same level
of usefulness in that environment.

Specifically, the lack of colocation with too many other projects has
been a benefit. This time, Mark and Maru were there from Neutron. Last
time, Mark from Neutron and the other Mark from Glance were there. If
they were having meetups in other rooms (like at summit) they wouldn't
have been there exposed to discussions that didn't seem like they'd have
a component for their participation, but did after all (re: nova and
glance and who should own flavors).


I agree. The ability to focus on the issues that were blocking nova
was very important. That's hard to do at a design summit when there is
so much happening at the same time.


Maybe we should change the way we structure the design summit to
improve that. If there are critical issues blocking nova, it feels
like it is better to be able to discuss and resolve as much as possible
at the start of the dev cycle rather than in the middle of the dev
cycle because I feel that means we are causing ourselves pain during
milestone 1/2.


Just speaking from experience, I attended the Icehouse meetup before my 
first summit (Juno in ATL) and the design summit sessions for Juno were 
a big disappointment after the meetup sessions, basically because of the 
time constraints. The meetups are nice since there is time to really 

Re: [openstack-dev] use of compute node as a storage

2014-08-14 Thread Jyoti Ranjan
Run cinder-volume on the compute node that you want to convert to a storage
node, and configure that node's LVM driver as the storage backend for cinder-volume.
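For illustration, a minimal sketch of that setup (the device name /dev/sdb, the
conventional volume group name cinder-volumes, and the LVM iSCSI driver path are
assumptions for the example, not details from this thread -- adjust for your
deployment and release):

```shell
# On the compute node that should double as a storage node.

# 1. Create the LVM volume group that cinder-volume will carve volumes from.
#    /dev/sdb is an assumed spare disk; use whatever device you have free.
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

# 2. In /etc/cinder/cinder.conf, point the service at the LVM iSCSI driver:
#
#    [DEFAULT]
#    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
#    volume_group = cinder-volumes

# 3. Install and (re)start the cinder-volume service on this node.
service cinder-volume restart
```

Once the service is up, volumes scheduled to this backend are created as
logical volumes in cinder-volumes and exported to instances over iSCSI.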


On Fri, Aug 8, 2014 at 1:54 PM, shailendra acharya <
acharyashailend...@gmail.com> wrote:

> I made 4 VMs: 1 controller, 1 network, and 2 compute, and I want 1 compute
> node to also run as storage, so please help: how can I do that?
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Sandy Walsh
On 8/14/2014 11:28 AM, Russell Bryant wrote:
> On 08/14/2014 10:04 AM, CARVER, PAUL wrote:
>> Daniel P. Berrange [mailto:berra...@redhat.com] wrote:
>>
>>> Depending on the usage needs, I think Google hangouts is a quite useful
>>> technology. For many-to-many session its limit of 10 participants can be
>>> an issue, but for a few-to-many broadcast it could be practical. What I
>>> find particularly appealing is the way it can live stream the session
>>> over youtube which allows for unlimited number of viewers, as well as
>>> being available offline for later catchup.
>> I can't actually offer AT&T resources without getting some level of
>> management approval first, but just for the sake of discussion here's
>> some info about the telepresence system we use.
>>
>> -=-=-=-=-=-=-=-=-=-
>> ATS B2B Telepresence conferences can be conducted with an external company's
>> Telepresence room(s), which subscribe to the AT&T Telepresence Solution,
>> or a limited number of other Telepresence service provider's networks.
>>
>> Currently, the number of Telepresence rooms that can participate in a B2B
>> conference is limited to a combined total of 20 rooms (19 of which can be
>> AT&T rooms, depending on the number of remote endpoints included).
>> -=-=-=-=-=-=-=-=-=-
>>
>> We currently have B2B interconnect with over 100 companies and AT&T has
>> telepresence rooms in many of our locations around the US and around
>> the world. If other large OpenStack companies also have telepresence
>> rooms that we could interconnect with I think it might be possible
>> to get management agreement to hold a couple OpenStack meetups per
>> year.
>>
>> Most of our rooms are best suited for 6 people, but I know of at least
>> one 18 person telepresence room near me.
> An ideal solution would allow attendees to join as individuals from
> anywhere.  A lot of contributors work from home.  Is that sort of thing
> compatible with your system?
>
http://bluejeans.com/ was a good experience.

What about Google Hangout OnAir for the PTL and core, while others are
view-only with chat/irc questions?

http://www.google.com/+/learnmore/hangouts/onair.html





Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Anne Gentle
On Thu, Aug 14, 2014 at 10:13 AM, Sandy Walsh 
wrote:

> On 8/14/2014 11:28 AM, Russell Bryant wrote:
> > On 08/14/2014 10:04 AM, CARVER, PAUL wrote:
> >> Daniel P. Berrange [mailto:berra...@redhat.com] wrote:
> >>
> >>> Depending on the usage needs, I think Google hangouts is a quite useful
> >>> technology. For many-to-many session its limit of 10 participants can
> be
> >>> an issue, but for a few-to-many broadcast it could be practical. What I
> >>> find particularly appealing is the way it can live stream the session
> >>> over youtube which allows for unlimited number of viewers, as well as
> >>> being available offline for later catchup.
> >> I can't actually offer AT&T resources without getting some level of
> >> management approval first, but just for the sake of discussion here's
> >> some info about the telepresence system we use.
> >>
> >> -=-=-=-=-=-=-=-=-=-
> >> ATS B2B Telepresence conferences can be conducted with an external
> company's
> >> Telepresence room(s), which subscribe to the AT&T Telepresence Solution,
> >> or a limited number of other Telepresence service provider's networks.
> >>
> >> Currently, the number of Telepresence rooms that can participate in a
> B2B
> >> conference is limited to a combined total of 20 rooms (19 of which can
> be
> >> AT&T rooms, depending on the number of remote endpoints included).
> >> -=-=-=-=-=-=-=-=-=-
> >>
> >> We currently have B2B interconnect with over 100 companies and AT&T has
> >> telepresence rooms in many of our locations around the US and around
> >> the world. If other large OpenStack companies also have telepresence
> >> rooms that we could interconnect with I think it might be possible
> >> to get management agreement to hold a couple OpenStack meetups per
> >> year.
> >>
> >> Most of our rooms are best suited for 6 people, but I know of at least
> >> one 18 person telepresence room near me.
> > An ideal solution would allow attendees to join as individuals from
> > anywhere.  A lot of contributors work from home.  Is that sort of thing
> > compatible with your system?
> >
> http://bluejeans.com/ was a good experience.
>
> What about Google Hangout OnAir for the PTL and core, while others are
> view-only with chat/irc questions?
>
> http://www.google.com/+/learnmore/hangouts/onair.html
>
>
We've done Google Hangout OnAir for the docs team with varied success due
to the oddities with calendaring and having to know everyone's preferred
Google identity. It's nice for high-fidelity conversations without
requiring travel. It also lets people "tune in" then or watch a recording
later.
Anne






[openstack-dev] [sahara] team meeting Aug 14 1800 UTC

2014-08-14 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140814T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [sahara] new client release (0.7.1)

2014-08-14 Thread Sergey Lukjanov
There are no additional proposals, so I'll push the tag to the client
later today. The target commit is
https://github.com/openstack/python-saharaclient/commit/1625d6851efbb3545b5915b4a87ef4948e405534

On Mon, Aug 11, 2014 at 5:37 PM, Sergey Lukjanov  wrote:
> Hey sahara folks,
>
> the latest sahara client release was at Mar 29 (right after the fully
> completed renaming process) and so we have a lot of unreleased fixes
> and improvements.
>
> You can check diff [0] and launchpad's release page [1].
>
> I'm going to release 0.7.1 this week, so, please propose changes to be
> included into the new client release in the next few days.
>
> Thanks.
>
> [0] https://github.com/openstack/python-saharaclient/compare/0.7.0...HEAD
> [1] https://launchpad.net/python-saharaclient/+milestone/0.7.1
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



[openstack-dev] [neutron] [third-party] What tests are required to be run

2014-08-14 Thread Kyle Mestery
Folks, I'm not sure if all CI accounts are running sufficient tests.
Per the requirements wiki page here [1], everyone needs to be running
more than just the Tempest API tests, which is all I still see most
Neutron third-party CI setups running. I'd like to ask everyone who operates a
third-party CI account for Neutron to please look at the link below
and make sure you are running appropriate tests. If you have
questions, the weekly third-party meeting [2] is a great place to ask
questions.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
[2] https://wiki.openstack.org/wiki/Meetings/ThirdParty



Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Kyle Mestery
On Thu, Aug 14, 2014 at 10:13 AM, Sandy Walsh  wrote:
> On 8/14/2014 11:28 AM, Russell Bryant wrote:
>> On 08/14/2014 10:04 AM, CARVER, PAUL wrote:
>>> Daniel P. Berrange [mailto:berra...@redhat.com] wrote:
>>>
 Depending on the usage needs, I think Google hangouts is a quite useful
 technology. For many-to-many session its limit of 10 participants can be
 an issue, but for a few-to-many broadcast it could be practical. What I
 find particularly appealing is the way it can live stream the session
 over youtube which allows for unlimited number of viewers, as well as
 being available offline for later catchup.
>>> I can't actually offer AT&T resources without getting some level of
>>> management approval first, but just for the sake of discussion here's
>>> some info about the telepresence system we use.
>>>
>>> -=-=-=-=-=-=-=-=-=-
>>> ATS B2B Telepresence conferences can be conducted with an external company's
>>> Telepresence room(s), which subscribe to the AT&T Telepresence Solution,
>>> or a limited number of other Telepresence service provider's networks.
>>>
>>> Currently, the number of Telepresence rooms that can participate in a B2B
>>> conference is limited to a combined total of 20 rooms (19 of which can be
>>> AT&T rooms, depending on the number of remote endpoints included).
>>> -=-=-=-=-=-=-=-=-=-
>>>
>>> We currently have B2B interconnect with over 100 companies and AT&T has
>>> telepresence rooms in many of our locations around the US and around
>>> the world. If other large OpenStack companies also have telepresence
>>> rooms that we could interconnect with I think it might be possible
>>> to get management agreement to hold a couple OpenStack meetups per
>>> year.
>>>
>>> Most of our rooms are best suited for 6 people, but I know of at least
>>> one 18 person telepresence room near me.
>> An ideal solution would allow attendees to join as individuals from
>> anywhere.  A lot of contributors work from home.  Is that sort of thing
>> compatible with your system?
>>
> http://bluejeans.com/ was a good experience.
>
> What about Google Hangout OnAir for the PTL and core, while others are
> view-only with chat/irc questions?
>
This is a terrible idea, as it perpetuates the "core vs. non-core"
argument. We need equal participation for cores and non-cores alike.

> http://www.google.com/+/learnmore/hangouts/onair.html
>
>
>


Re: [openstack-dev] [OpenStack-Dev][Cinder] Cinder Core nomination

2014-08-14 Thread Martin, Kurt Frederick (ESSN Storage MSDU)
+1. Xing is very active in the community, provides valuable reviews, and is
currently developing Cinder core features.
~Kurt

-Original Message-
From: Boring, Walter 
Sent: Wednesday, August 13, 2014 11:56 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [OpenStack-Dev][Cinder] Cinder Core nomination

Hey guys,
   I wanted to pose a nomination for Cinder core.

Xing Yang.
She has been active in the Cinder community for many releases and has worked on
several drivers as well as other features for Cinder itself. She has been
doing an awesome job doing reviews and helping folks out in the
#openstack-cinder IRC channel for a long time. I think she would be a good
addition to the core team.


Walt


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread CARVER, PAUL
Russell Bryant [mailto:rbry...@redhat.com] wrote:

>An ideal solution would allow attendees to join as individuals from
>anywhere.  A lot of contributors work from home.  Is that sort of thing
>compatible with your system?

In principle, yes, but that loses the immersive telepresence aspect
which is the next best thing to an in-person meetup (which is where
this thread started.)

AT&T Employees live and breathe on AT&T Connect which is our
teleconferencing (not telepresence) service. It supports webcam
video as well as desktop sharing, but I'm on the verge of making
a sales pitch here which was NOT my intent.

I'm on AT&T Connect meetings 5+ times a day but I'm biased so I
won't offer any opinion on how it compares to WebEx, GotoMeeting,
and other services. None of them are really equivalent to the
purpose built telepresence rooms.

My point was that there may well be a telepresence room within
reasonable driving distance for a large number of OpenStack
contributors if we were able to get a number of the large
OpenStack participant companies to open their doors to an
occasional meet-up. Instead of asking participants around
the globe to converge on a single physical location for
a meet-up, perhaps they could converge on the closest of
20 different locations that are linked via telepresence.



Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Daniel P. Berrange
On Thu, Aug 14, 2014 at 10:30:59AM -0500, Kyle Mestery wrote:
> On Thu, Aug 14, 2014 at 10:13 AM, Sandy Walsh  
> wrote:
> > On 8/14/2014 11:28 AM, Russell Bryant wrote:
> >> On 08/14/2014 10:04 AM, CARVER, PAUL wrote:
> >>> Daniel P. Berrange [mailto:berra...@redhat.com] wrote:
> >>>
>  Depending on the usage needs, I think Google hangouts is a quite useful
>  technology. For many-to-many session its limit of 10 participants can be
>  an issue, but for a few-to-many broadcast it could be practical. What I
>  find particularly appealing is the way it can live stream the session
>  over youtube which allows for unlimited number of viewers, as well as
>  being available offline for later catchup.
> >>> I can't actually offer AT&T resources without getting some level of
> >>> management approval first, but just for the sake of discussion here's
> >>> some info about the telepresence system we use.
> >>>
> >>> -=-=-=-=-=-=-=-=-=-
> >>> ATS B2B Telepresence conferences can be conducted with an external 
> >>> company's
> >>> Telepresence room(s), which subscribe to the AT&T Telepresence Solution,
> >>> or a limited number of other Telepresence service provider's networks.
> >>>
> >>> Currently, the number of Telepresence rooms that can participate in a B2B
> >>> conference is limited to a combined total of 20 rooms (19 of which can be
> >>> AT&T rooms, depending on the number of remote endpoints included).
> >>> -=-=-=-=-=-=-=-=-=-
> >>>
> >>> We currently have B2B interconnect with over 100 companies and AT&T has
> >>> telepresence rooms in many of our locations around the US and around
> >>> the world. If other large OpenStack companies also have telepresence
> >>> rooms that we could interconnect with I think it might be possible
> >>> to get management agreement to hold a couple OpenStack meetups per
> >>> year.
> >>>
> >>> Most of our rooms are best suited for 6 people, but I know of at least
> >>> one 18 person telepresence room near me.
> >> An ideal solution would allow attendees to join as individuals from
> >> anywhere.  A lot of contributors work from home.  Is that sort of thing
> >> compatible with your system?
> >>
> > http://bluejeans.com/ was a good experience.
> >
> > What about Google Hangout OnAir for the PTL and core, while others are
> > view-only with chat/irc questions?
> >
> This is a terrible idea, as it perpetuates the "core vs. non-core"
> argument. We need equal participation for cores and non-cores alike.

Yeah I think we'd have to be careful about doing such a distinct
split in participation between groups of people. I think the idea
of some people full-access, others view-only will only be viable
for those discussions where it was focused as an information
dissemination event. eg 1 or a handful of people presenting some
kind of proposal / problem, and then have the interactive
discussion part in a more broadly inclusive arena like IRC or
email. Or if there were discussions which are inherently core-only
work already, so that there wasn't an expectation of more
general non-core attendance.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Kyle Mestery
On Thu, Aug 14, 2014 at 10:31 AM, CARVER, PAUL  wrote:
> Russell Bryant [mailto:rbry...@redhat.com] wrote:
>
>>An ideal solution would allow attendees to join as individuals from
>>anywhere.  A lot of contributors work from home.  Is that sort of thing
>>compatible with your system?
>
> In principle, yes, but that loses the immersive telepresence aspect
> which is the next best thing to an in-person meetup (which is where
> this thread started.)
>
> AT&T Employees live and breathe on AT&T Connect which is our
> teleconferencing (not telepresence) service. It supports webcam
> video as well as desktop sharing, but I'm on the verge of making
> a sales pitch here which was NOT my intent.
>
> I'm on AT&T Connect meetings 5+ times a day but I'm biased so I
> won't offer any opinion on how it compares to WebEx, GotoMeeting,
> and other services. None of them are really equivalent to the
> purpose built telepresence rooms.
>
> My point was that there may well be a telepresence room within
> reasonable driving distance for a large number of OpenStack
> contributors if we were able to get a number of the large
> OpenStack participant companies to open their doors to an
> occasional meet-up. Instead of asking participants around
> the globe to converge on a single physical location for
> a meet-up, perhaps they could converge on the closest of
> 20 different locations that are linked via telepresence.
>
If we could make this work, I'd be up for it. It's a great balance to
the travel and face-to-face argument here.



Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread David Kranz

On 08/14/2014 10:54 AM, Matt Riedemann wrote:



On 8/14/2014 3:47 AM, Daniel P. Berrange wrote:

On Thu, Aug 14, 2014 at 09:24:36AM +1000, Michael Still wrote:

On Thu, Aug 14, 2014 at 3:09 AM, Dan Smith  wrote:

I'm not questioning the value of f2f - I'm questioning the idea of
doing f2f meetings sooo many times a year. OpenStack is very much
the outlier here among open source projects - the vast majority of
projects get along very well with much less f2f time and a far
smaller % of their contributors attend those f2f meetings that do
happen. So I really do question what is missing from OpenStack's
community interaction that makes us believe that having 4 f2f
meetings a year is critical to our success.


How many is too many? So far, I have found the midcycles to be 
extremely
productive -- productive in a way that we don't see at the summits, 
and
I think other attendees agree. Obviously if budgets start limiting 
them,

then we'll have to deal with it, but I don't want to stop meeting
preemptively.


I agree they're very productive. Let's pick on the nova v3 API case as
an example... We had failed as a community to reach a consensus using
our existing discussion mechanisms (hundreds of emails, at least three
specs, phone calls between the various parties, et cetera), yet at the
summit and then a midcycle meetup we managed to nail down an agreement
on a very contentious and complicated topic.


We thought we had agreement on v3 API after Atlanta f2f summit and
after Hong Kong f2f too. So I wouldn't neccessarily say that we
needed another f2f meeting to resolve that, but rather than this is
a very complex topic that takes a long time to resolve no matter
how we discuss it and the discussions had just happened to reach
a natural conclusion this time around. But lets see if this agreement
actually sticks this time


I can see the argument that travel cost is an issue, but I think its
also not a very strong argument. We have companies spending millions
of dollars on OpenStack -- surely spending a relatively small amount
on travel to keep the development team as efficient as possible isn't
a big deal? I wouldn't be at all surprised if the financial costs of
the v3 API debate (staff time mainly) were much higher than the travel
costs of those involved in the summit and midcycle discussions which
sorted it out.


I think the travel cost really is a big issue. Due to the number of
people who had to travel to the many mid-cycle meetups, a good number
of people I work with no longer have the ability to go to the Paris
design summit. This is going to make it harder for them to feel a
proper engaged part of our community. I can only see this situation
get worse over time if greater emphasis is placed on attending the
mid-cycle meetups.


Travelling to places to talk to people isn't a great solution, but it
is the most effective one we've found so far. We should continue to
experiment with other options, but until we find something that works
as well as meetups, I think we need to keep having them.


IMHO, the reasons to cut back would be:

- People leaving with a "well, that was useless..." feeling
- Not enough people able to travel to make it worthwhile

So far, neither of those have been outcomes of the midcycles we've 
had,

so I think we're doing okay.

The design summits are structured differently, where we see a lot more
diverse attendance because of the colocation with the user summit. It
doesn't lend itself well to long and in-depth discussions about 
specific

things, but it's very useful for what it gives us in the way of
exposure. We could try to have less of that at the summit and more
midcycle-ish time, but I think it's unlikely to achieve the same level
of usefulness in that environment.

Specifically, the lack of colocation with too many other projects has
been a benefit. This time, Mark and Maru where there from Neutron. 
Last

time, Mark from Neutron and the other Mark from Glance were there. If
they were having meetups in other rooms (like at summit) they wouldn't
have been there exposed to discussions that didn't seem like they'd 
have

a component for their participation, but did after all (re: nova and
glance and who should own flavors).


I agree. The ability to focus on the issues that were blocking nova
was very important. That's hard to do at a design summit when there is
so much happening at the same time.


Maybe we should change the way we structure the design summit to
improve that. If there are critical issues blocking nova, it feels
like it is better to be able to discuss and resolve as much as possible
at the start of the dev cycle rather than in the middle of the dev
cycle because I feel that means we are causing ourselves pain during
milestone 1/2.


Just speaking from experience, I attended the Icehouse meetup before 
my first summit (Juno in ATL) and the design summit sessions for Juno 
were a big disappointment after the meetup sessions, basically because 
of the time constraints.

Re: [openstack-dev] [infra] Gerrit downtime on August 16 for project renames

2014-08-14 Thread Sergey Lukjanov
Adding the Kite move from stackforge to the openstack org:

stackforge/kite -> openstack/kite
stackforge/python-kiteclient -> openstack/python-kiteclient

On Thu, Aug 14, 2014 at 1:25 AM, Sergey Lukjanov  wrote:
> Hi,
>
> On Saturday, August 16 at 16:00 UTC Gerrit will be unavailable for
> about 20 minutes while we rename some projects. Existing reviews,
> project watches, etc, should all be carried over.
>
> The current list of projects that we will rename is:
>
> openstack/marconi -> openstack/zaqar
> openstack/python-marconiclient -> openstack/python-zaqarclient
> openstack/marconi-specs -> openstack/zaqar-specs
> openstack/openstack-security-notes ->
> openstack-attic/openstack-security-notes
>
> Though that list is subject to change.
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [Ironic] [Infra] [QA] Devstack and Testing for ironic-python-agent``

2014-08-14 Thread Jim Rollenhagen
Hi friends,

The two devstack patches mentioned below have had their latest patchsets up for a 
week now, and still only have one +2 on them. The other patches (and the 
ultimate goal of getting CI running on this driver) are blocked on these patches.

Could a devstack core please review these soon? Thanks!

// jim

On August 7, 2014 at 3:51:21 PM, Jay Faulkner (j...@jvf.cc) wrote:

Hi all,



At the recent Ironic mid-cycle meetup, we got the first version of the 
ironic-python-agent (IPA) driver merged. There are a few reviews we need merged 
(and their dependencies) across a few other projects in order to begin testing 
it automatically. We would like to eventually gate IPA and Ironic with tempest 
testing similar to what the pxe driver does today.



For IPA to work in devstack (openstack-dev/devstack repo):

 - https://review.openstack.org/#/c/112095 Adds swift temp URL support to 
Devstack

 - https://review.openstack.org/#/c/108457 Adds IPA support to Devstack

 

Docs on running IPA in devstack (openstack/ironic repo):

 - https://review.openstack.org/#/c/112136/

 

For IPA to work in the devstack-gate environment (openstack-infra/config & 
openstack-infra/devstack-gate repos):

 - https://review.openstack.org/#/c/112143 Add IPA support to devstack-gate

 - https://review.openstack.org/#/c/112134 Consolidate and rename Ironic jobs

 - https://review.openstack.org/#/c/112693 Add check job for IPA + tempest 



Once these are all merged, we'll have IPA testing via a nonvoting check job, 
using the IPA-CoreOS deploy ramdisk, in both the ironic and ironic-python-agent 
projects. This will be promoted to voting once proven stable.



However, this is only one of many possible IPA deploy ramdisk images. We're 
currently publishing a CoreOS ramdisk, but we also have an effort to create a 
ramdisk with diskimage-builder (https://review.openstack.org/#/c/110487/) , as 
well as plans for an ISO image (for use with things like iLo). As we gain 
additional images, we'd like to run those images through the same suite of 
tests prior to publishing them, so that images which would break IPA's gate 
wouldn't get published. The final state testing matrix should look something 
like this, with check and gate jobs in each project covering the variations 
unique to that project, and one representative test in each consuming project's 
test pipeline.



IPA:

 - tempest runs against Ironic+agent_ssh with CoreOS ramdisk

 - tempest runs against Ironic+agent_ssh with DIB ramdisk

 - (other IPA tests)

 

IPA would then, as a post job, generate and publish the images, as we currently 
do with IPA-CoreOS ( 
http://tarballs.openstack.org/ironic-python-agent/coreos/ipa-coreos.tar.gz ). 
Because IPA would gate on tempest tests against each image, we'd avoid ever 
publishing a bad deploy ramdisk.



Ironic:

 - tempest runs against Ironic+agent_ssh with most suitable ramdisk (due to 
significantly decreased ram requirements, this will likely be an image created 
by DIB once it exists)

 - tempest runs against Ironic+pxe_ssh

 - (whatever else Ironic runs)

 

Nova and other integrated projects will continue to run a single job, using 
Ironic with its default deploy driver (currently pxe_ssh).

 

 

Using this testing matrix, we'll ensure that there is coverage of each 
cross-project dependency, without bloating each project's test matrix 
unnecessarily. If, for instance, a change in Nova passes the Ironic pxe_ssh job 
and lands, but then breaks the agent_ssh job and thus blocks Ironic's gate, 
this would indicate a layering violation between Ironic and its deploy drivers 
(from Nova's perspective, nothing should change between those drivers). 
Similarly, if IPA tests failed against the CoreOS image (due to Ironic OR Nova 
change), but the DIB image passed in both Ironic and Nova tests, then it's 
almost certainly an *IPA* bug.
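The cross-project reasoning in that last paragraph can be sketched as a small decision function (the job names here are illustrative, not the real CI job names):

```python
# Illustrative sketch of the failure-diagnosis logic described above.
# Job names are made up for the example; the real jobs are named
# differently in the CI system.

def diagnose(results):
    """Map {job_name: passed} onto a likely culprit."""
    pxe = results.get("pxe_ssh", True)
    coreos = results.get("agent_ssh_coreos", True)
    dib = results.get("agent_ssh_dib", True)
    if not pxe:
        return "regression visible to the default driver: look at Ironic/Nova"
    if not coreos and dib:
        return "only the CoreOS ramdisk broke: almost certainly an IPA bug"
    if not coreos and not dib:
        # pxe_ssh passed but both agent jobs failed: the change leaked
        # driver-specific assumptions, i.e. a layering violation.
        return "layering violation between Ironic and its deploy drivers"
    return "all green"

print(diagnose({"pxe_ssh": True,
                "agent_ssh_coreos": False,
                "agent_ssh_dib": True}))
```

The point is that each distinct failure pattern in the matrix localizes the bug to one layer without every project running every job.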



Thanks so much for your time, and to the OpenStack Ironic community for being 
welcoming to us as we have worked towards this alternate deploy driver; we look 
forward to improving it even further as Kilo opens.



--

Jay Faulkner



Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Russell Bryant
On 08/14/2014 11:40 AM, David Kranz wrote:
> On 08/14/2014 10:54 AM, Matt Riedemann wrote:
>>
>>
>> On 8/14/2014 3:47 AM, Daniel P. Berrange wrote:
>>> On Thu, Aug 14, 2014 at 09:24:36AM +1000, Michael Still wrote:
 On Thu, Aug 14, 2014 at 3:09 AM, Dan Smith  wrote:
>> I'm not questioning the value of f2f - I'm questioning the idea of
>> doing f2f meetings sooo many times a year. OpenStack is very much
>> the outlier here among open source projects - the vast majority of
>> projects get along very well with much less f2f time and a far
>> smaller % of their contributors attend those f2f meetings that do
>> happen. So I really do question what is missing from OpenStack's
>> community interaction that makes us believe that having 4 f2f
>> meetings a year is critical to our success.
>
> How many is too many? So far, I have found the midcycles to be
> extremely
> productive -- productive in a way that we don't see at the summits,
> and
> I think other attendees agree. Obviously if budgets start limiting
> them,
> then we'll have to deal with it, but I don't want to stop meeting
> preemptively.

 I agree they're very productive. Let's pick on the nova v3 API case as
 an example... We had failed as a community to reach a consensus using
 our existing discussion mechanisms (hundreds of emails, at least three
 specs, phone calls between the various parties, et cetera), yet at the
 summit and then a midcycle meetup we managed to nail down an agreement
 on a very contentious and complicated topic.
>>>
>>> We thought we had agreement on the v3 API after the Atlanta f2f summit and
>>> after Hong Kong f2f too. So I wouldn't necessarily say that we
>>> needed another f2f meeting to resolve it, but rather that this is
>>> a very complex topic that takes a long time to resolve no matter
>>> how we discuss it, and the discussions had just happened to reach
>>> a natural conclusion this time around. But let's see if this agreement
>>> actually sticks this time.
>>>
 I can see the argument that travel cost is an issue, but I think its
 also not a very strong argument. We have companies spending millions
 of dollars on OpenStack -- surely spending a relatively small amount
 on travel to keep the development team as efficient as possible isn't
 a big deal? I wouldn't be at all surprised if the financial costs of
 the v3 API debate (staff time mainly) were much higher than the travel
 costs of those involved in the summit and midcycle discussions which
 sorted it out.
>>>
>>> I think the travel cost really is a big issue. Due to the number of
>>> people who had to travel to the many mid-cycle meetups, a good number
>>> of people I work with no longer have the ability to go to the Paris
>>> design summit. This is going to make it harder for them to feel a
>>> proper engaged part of our community. I can only see this situation
>>> get worse over time if greater emphasis is placed on attending the
>>> mid-cycle meetups.
>>>
 Travelling to places to talk to people isn't a great solution, but it
 is the most effective one we've found so far. We should continue to
 experiment with other options, but until we find something that works
 as well as meetups, I think we need to keep having them.

> IMHO, the reasons to cut back would be:
>
> - People leaving with a "well, that was useless..." feeling
> - Not enough people able to travel to make it worthwhile
>
> So far, neither of those have been outcomes of the midcycles we've
> had,
> so I think we're doing okay.
>
> The design summits are structured differently, where we see a lot more
> diverse attendance because of the colocation with the user summit. It
> doesn't lend itself well to long and in-depth discussions about
> specific
> things, but it's very useful for what it gives us in the way of
> exposure. We could try to have less of that at the summit and more
> midcycle-ish time, but I think it's unlikely to achieve the same level
> of usefulness in that environment.
>
> Specifically, the lack of colocation with too many other projects has
> been a benefit. This time, Mark and Maru were there from Neutron.
> Last
> time, Mark from Neutron and the other Mark from Glance were there. If
> they were having meetups in other rooms (like at summit) they wouldn't
> have been there exposed to discussions that didn't seem like they'd
> have
> a component for their participation, but did after all (re: nova and
> glance and who should own flavors).

 I agree. The ability to focus on the issues that were blocking nova
 was very important. That's hard to do at a design summit when there is
 so much happening at the same time.
>>>
>>> Maybe we should change the way we structure the design summit to
>>> improve that. If there a

Re: [openstack-dev] [all] The future of the integrated release

2014-08-14 Thread Vishvananda Ishaya

On Aug 13, 2014, at 5:07 AM, Daniel P. Berrange  wrote:

> On Wed, Aug 13, 2014 at 12:55:48PM +0100, Steven Hardy wrote:
>> On Wed, Aug 13, 2014 at 11:42:52AM +0100, Daniel P. Berrange wrote:
>>> On Thu, Aug 07, 2014 at 03:56:04AM -0700, Jay Pipes wrote:
>>> By ignoring stable branches, leaving it up to a
>>> small team to handle, I think we are giving the wrong message about what
>>> our priorities as a team are. I can't help thinking this filters
>>> through to impact the way people think about their work on master.
>> 
>> Who is ignoring stable branches?  This sounds like a project specific
>> failing to me, as all experienced core reviewers should consider offering
>> their services to help with stable-maint activity.
>> 
>> I don't personally see any reason why the *entire* project core team has to
>> do this, but a subset of them should feel compelled to participate in the
>> stable-maint process, if they have sufficient time, interest and historical
>> context, it's not "some other team" IMO.
> 
> I think that stable branch review should be a key responsibility for anyone
> on the core team, not solely those few who volunteer for stable team. As
> the number of projects in openstack grows I think the idea of having a
> single stable team with rights to approve across any project is ultimately
> flawed because it doesn't scale efficiently and they don't have the same
> level of domain knowledge as the respective project teams.

This side-thread is a bit off topic for the main discussion, but as a
stable-maint with not a lot of time, I would love more help from the core
teams here. That said, help is not just about approving reviews. There are
three main steps in the process:
 1. Bugs get marked for backport
   I try to stay on top of this in nova by following the feed of merged patches
   and marking them icehouse-backport-potential[1] when they seem like they are
   appropriate but I’m sure I miss some.
 2. Patches get backported
   This is sometimes a very time-consuming process, especially late in the
   cycle or for patches that are being backported 2 releases.
 3. Patches get reviewed and merged
   The criteria for a stable backport are pretty straightforward and I think
   any core reviewer is capable of understanding and applying that criteria

While we have fallen behind in number 3. at times, we are much more often WAY
behind on 2. I also suspect that a whole bunch of patches get missed in some
of the other projects where someone isn’t specifically trying to mark them all
as they come in.
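Step 2, the backporting itself, is the mechanical part; assuming the usual git-review workflow, it amounts to roughly the following (the topic naming is an illustration, not project policy):

```python
# Sketch of the backport workflow from step 2, expressed as the git
# commands one would run.  Branch/topic naming here is illustrative.

def backport_plan(commit, series="icehouse", remote="origin"):
    """Return the git commands to cherry-pick `commit` onto stable/<series>."""
    branch = "stable/%s" % series
    topic = "backport-%s" % commit[:8]
    return [
        "git fetch %s %s" % (remote, branch),
        "git checkout -b %s %s/%s" % (topic, remote, branch),
        # -x appends "(cherry picked from commit ...)" to the message,
        # which stable reviewers expect to see
        "git cherry-pick -x %s" % commit,
        "git review %s" % branch,
    ]

for cmd in backport_plan("0123456789abcdef"):
    print(cmd)
```

The time sink Vish describes is when the cherry-pick conflicts and has to be hand-resolved, especially two releases back.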

Vish

[1] https://bugs.launchpad.net/nova/+bugs?field.tag=icehouse-backport-potential





Re: [openstack-dev] [OpenStack-Dev][Cinder] Cinder Core nomination

2014-08-14 Thread Mike Perez
On 06:55 Thu 14 Aug , Boring, Walter wrote:
> Hey guys,
>I wanted to pose a nomination for Cinder core.
> 
> Xing Yang.
> She has been active in the cinder community for many releases and has worked 
> on several drivers as well as other features for cinder itself.   She has 
> been doing an awesome job doing reviews and helping folks out in the 
> #openstack-cinder irc channel for a long time.   I think she would be a good 
> addition to the core team.

+1

If you take a look at the last 3 months [1][2] she has been very active and
providing great feedback in reviews. She has also been setting great
expectations for other drivers with the recent third-party CI work. Thanks
Xing!

[1] - http://russellbryant.net/openstack-stats/cinder-reviewers-90.txt
[2] - http://stackalytics.com/?user_id=xing-yang

-- 
Mike Perez



Re: [openstack-dev] [neutron] requirements.txt: explicit vs. implicit

2014-08-14 Thread Ben Nemec

On 08/14/2014 08:37 AM, Ihar Hrachyshka wrote:
> Hi all,
> 
> some plugins depend on modules that are not mentioned in 
> requirements.txt. Among them, Cisco Nexus (ncclient), Brocade 
> (ncclient), Embrane (heleosapi)... Some other plugins put their 
> dependencies in requirements.txt though (like Arista depending on 
> jsonrpclib).
> 
> There are pros and cons in both cases. The obvious issue with not 
> putting those requirements in the file is that packagers are left 
> uninformed about those implicit requirements existing, meaning
> plugins are shipped to users with broken dependencies. It also
> means we ship code that depends on unknown modules grabbed from
> random places in the internet instead of relying on what's
> available on pypi, which is a bit scary.
> 
> With my packager hat on, I would like to suggest to make those 
> dependencies explicit by filling in requirements.txt. This will
> make packaging a bit easier. Of course, runtime dependencies being
> set correctly do not mean plugins are working and tested, but at
> least we give them chance to be tested and used.
> 
> But, maybe there are valid concerns against doing so. In that case,
> I would be glad to know how packagers are expected to track those 
> implicit dependencies.
> 
> I would like to ask community to decide what's the right way to
> handle those cases.

So I raised a similar issue about six months ago and completely failed
to follow up on the direction everyone seemed to be onboard with:
http://lists.openstack.org/pipermail/openstack-dev/2014-February/026976.html

I did add support to pbr for using nested requirements files, and I
had posted a PoC for oslo.messaging to allow requirements files for
different backends, but some of our CI jobs don't know how to handle
that and I never got around to addressing the limitation.

- From the packaging perspective, I think you could do a requirements
file that basically rolls up requirements.d/*.txt minus test.txt and
get all the runtime dependencies that the project knows about,
assuming we finished the implementation for this and started using it
in projects.
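The roll-up itself is only a few lines; a minimal sketch, assuming the hypothetical requirements.d/ layout discussed in the earlier thread:

```python
# Minimal sketch of rolling up requirements.d/*.txt minus test.txt into
# one runtime-dependency list.  The directory layout is the proposed
# (hypothetical) one from the thread, not an existing convention.

import glob
import os

def runtime_requirements(reqs_dir="requirements.d"):
    reqs = []
    for path in sorted(glob.glob(os.path.join(reqs_dir, "*.txt"))):
        if os.path.basename(path) == "test.txt":
            continue  # test-only dependencies are not runtime dependencies
        with open(path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()  # drop comments
                if line:
                    reqs.append(line)
    return reqs
```

A packager could then emit the result as a single flat requirements file for the distribution tooling.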

I don't really anticipate having time to pursue this in the near
future, so if you wanted to pick up the ball and run with it that
would be great! :-)

- -Ben

> 
> Cheers, /Ihar
> 
> 




Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?

2014-08-14 Thread Pendergrass, Eric
Sure, Doug.  We want the ability to selectively apply policies to certain
Ceilometer
API methods based on user/tenant roles.

For example, we want to restrict the ability to execute Alarm deletes to
admins and user/tenants who have a special role, say "domainadmin".

The policy file might look like this:
{
"context_is_admin":  [["role:admin"]],
"admin_and_matching_project_domain_id":  [["role:domainadmin"]],
"admin_or_cloud_admin": [["rule:context_is_admin"],
["rule:admin_and_matching_project_domain_id"]],
"telemetry:delete_alarms":  [["rule:admin_or_cloud_admin"]]
}
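To make the rule composition concrete, here is a toy evaluator for that snippet. This is deliberately not the real policy engine, only its list-of-lists semantics: the outer list is an OR of clauses, each inner list an AND of atoms, and "rule:" atoms recurse.

```python
# Toy evaluator for the policy snippet above -- *not* the real policy
# engine, just the composition semantics (outer list = OR, inner list
# = AND, "rule:" atoms recurse into other named rules).

POLICY = {
    "context_is_admin": [["role:admin"]],
    "admin_and_matching_project_domain_id": [["role:domainadmin"]],
    "admin_or_cloud_admin": [["rule:context_is_admin"],
                             ["rule:admin_and_matching_project_domain_id"]],
    "telemetry:delete_alarms": [["rule:admin_or_cloud_admin"]],
}

def check(rule, creds, policy=POLICY):
    def match(atom):
        kind, _, value = atom.partition(":")
        if kind == "role":
            return value in creds.get("roles", [])
        if kind == "rule":
            return check(value, creds, policy)
        return False
    return any(all(match(a) for a in clause) for clause in policy[rule])

print(check("telemetry:delete_alarms", {"roles": ["domainadmin"]}))  # True
print(check("telemetry:delete_alarms", {"roles": ["member"]}))       # False
```

So a caller with either the admin role or the domainadmin role passes the delete_alarms check, which is exactly the "in between" scope described below.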

The current acl.py and _query_to_kwargs access control setup either sets
project_id scope to None (do everything) or to the project_id in the request
header 'X-Project-Id'.  This allows for admin or project scope, but nothing
in
between.

We tried hooks.  Unfortunately we can't seem to turn the API controllers
into
HookControllers just by adding HookController to the Controller class
definition.  It causes infinite recursion on API startup.  For example, this
doesn't work because ceilometer-api will not start with it:
class MetersController(rest.RestController, HookController):

If there was a way to use hooks with the v2 API controllers, that might work
really well.

So we are left using the @secure decorator and deriving the method name from
the request environ PATH_INFO and REQUEST_METHOD values.  This is how we
determine the wrapped method within the class (REQUEST_METHOD + PATH_INFO =
"telemetry:delete_alarms" with some munging).  We need the method name in
order to
selectively apply access control to certain methods.

Deriving the method this way isn't ideal but it's the only thing we've
gotten working 
between hooks, @secure, and regular decorators.
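For illustration, the munging in question amounts to something like the following — the verb table and the exact mapping are assumptions, not Ceilometer's actual code:

```python
# Illustrative version of deriving a policy action such as
# "telemetry:delete_alarms" from REQUEST_METHOD + PATH_INFO.  The verb
# table and resource extraction are assumptions, not the real code.

def action_from_environ(environ, prefix="telemetry"):
    verb = {"GET": "get", "DELETE": "delete",
            "POST": "create", "PUT": "update"}[environ["REQUEST_METHOD"]]
    # keep only the resource name: "/v2/alarms" -> "alarms"
    resource = environ["PATH_INFO"].rstrip("/").rsplit("/", 1)[-1]
    return "%s:%s_%s" % (prefix, verb, resource)

print(action_from_environ({"REQUEST_METHOD": "DELETE",
                           "PATH_INFO": "/v2/alarms"}))  # telemetry:delete_alarms
```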

I submitted a WIP BP here: https://review.openstack.org/#/c/112137/3.  It is
slightly out of date but should give you a better idea of our goals.

Thanks

> Eric,
>
> If you can give us some more information about your end goal, independent
of the implementation, maybe we can propose an alternate technique to
achieve the same thing.
>
> Doug
>
> On Aug 12, 2014, at 6:21 PM, Ryan Petrello 
wrote:
>
> > Yep, you're right, this doesn't seem to work.  The issue is that
> > security is enforced at routing time (while the controller is still
> > actually being discovered).  In order to do this sort of thing with
> > the `check_permissions`, we'd probably need to add a feature to pecan.
> >
> > On 08/12/14 06:38 PM, Pendergrass, Eric wrote:
> >> Sure, here's the decorated method from v2.py:
> >>
> >>class MetersController(rest.RestController):
> >>"""Works on meters."""
> >>
> >>@pecan.expose()
> >>def _lookup(self, meter_name, *remainder):
> >>return MeterController(meter_name), remainder
> >>
> >>@wsme_pecan.wsexpose([Meter], [Query])
> >>@secure(RBACController.check_permissions)
> >>def get_all(self, q=None):
> >>
> >> and here's the decorator called by the secure tag:
> >>
> >>class RBACController(object):
> >>global _ENFORCER
> >>if not _ENFORCER:
> >>_ENFORCER = policy.Enforcer()
> >>
> >>
> >>@classmethod
> >>def check_permissions(cls):
> >># do some stuff
> >>
> >> In check_permissions I'd like to know the class and method with the
@secure tag that caused check_permissions to be invoked.  In this case, that
would be MetersController.get_all.
> >>
> >> Thanks
> >>
> >>
> >>> Can you share some code?  What do you mean by, "is there a way for the
decorator code to know it was called by MetersController.get_all"
> >>>
> >>> On 08/12/14 04:46 PM, Pendergrass, Eric wrote:
>  Thanks Ryan, but for some reason the controller attribute is None:
> 
 (Pdb) from pecan.core import state
 (Pdb) state.__dict__
 {'hooks': [<...>, <...>, <...>, <...>],
  'app': <...>,
  'request': <Request at 0x3ed7390 GET http://localhost:8777/v2/meters>,
  'controller': None,
  'response': <...>}
> 
> > -Original Message-
> > From: Ryan Petrello [mailto:ryan.petre...@dreamhost.com]
> > Sent: Tuesday, August 12, 2014 10:34 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Ceilometer] Way to get wrapped
method's name/class using Pecan secure decorators?
> >
> > This should give you what you need:
> >
> > from pecan.core import state
> > state.controller
> >
> > On 08/12/14 04:08 PM, Pendergrass, Eric wrote:
> >> Hi, I'm trying to use the built in secure decorator in Pecan for
access control, and I'ld like to get the name of the method that is wrapped
from within the decorator.
> >>
> >> For instance, if I'm wrapping MetersController.get_all with an
@secure decorator, is there a way for the decorator code to know it was
called by MetersController.get_all?
> >>
> >> I don't see any global objects that provide this information.  I
can g

[openstack-dev] Using gerrymander, a client API and command line tool for gerrit

2014-08-14 Thread Daniel P. Berrange
Over the past couple of months I've worked on creating a python library
and command line tool for dealing with gerrit. I've blogged about this
a number of times on http://planet.openstack.org, but I figure there are
quite a few people who won't be reading the blogs there or easily miss
some posts hence this mail to alert people to its existance.

Gerrymander was born out of my frustration with the inefficiency of
handling reviews through the gerrit web UI. In particular (until
today - yay !) the review comments were splattered with noise from
CI bots, and it was hard to generate some kinds of reports for lists
of reviews you might want. The gerrit upgrade has improved the latter
too, with better query language support, but it is still somewhat limited
in the way it can present the results and I prefer the CLI reports over
web UI anyway.

You might be familiar with tools like gerrit_view, reviewtodo and
reviewstats written by Josh Harlow and / or Russell Bryant in the
past. As a starting point, gerrymander aimed to offer all the
functionality present in those tools. More importantly though was
the idea to write it as a set of a python classes rather than just
put everything in a command line tool. This makes it easy for people
to customize & extend its functionality to pull out new interesting
types of data.
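To give a flavor of what such a custom report looks like underneath gerrymander's helpers: gerrit's ssh query API (the layer those classes wrap) returns one JSON document per line plus a trailing stats record. Host and port below follow the usual gerrit convention; only the parsing runs here.

```python
# Sketch of a custom report against gerrit's raw ssh query API, the
# layer that gerrymander's python classes wrap.  Host/port follow
# gerrit convention; only the parsing is exercised here.

import json

def query_command(query, host="review.openstack.org", port=29418):
    """Build the ssh argv for `gerrit query --format=JSON <query>`."""
    return ["ssh", "-p", str(port), host,
            "gerrit", "query", "--format=JSON", query]

def parse_query_output(output):
    """Each line is a JSON doc; the last one is a stats record."""
    changes = []
    for line in output.splitlines():
        doc = json.loads(line)
        if doc.get("type") != "stats":
            changes.append(doc)
    return changes

sample = ('{"id": "Iabc123", "subject": "libvirt: fix cleanup"}\n'
          '{"type": "stats", "rowCount": 1}')
print([c["subject"] for c in parse_query_output(sample)])  # ['libvirt: fix cleanup']
```

From there, filtering bot comments or grouping by reviewer is ordinary python over the parsed dicts, which is the extensibility point the mail describes.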

Personally, using gerrymander to query gerrit for reviews and being
able to quickly get a list of comments filtered for bots has had a
big positive impact on my efficiency as a core reviewer. I've been
able to review more code in less time and focus my attention better.
So if you are the kind of person who is more comfortable with having
command line tools instead of pointy-clicky web UIs, I'd encourage
you to try it out for yourself. Hopefully it will be of as much use
to other reviewers as it has been for me.

You can get it from pypi  (pip install gerrymander) and for a quick
start guide, check out the README file

   https://github.com/berrange/gerrymander/blob/master/README

Or my blog posts

   https://www.berrange.com/tags/gerrymander/

It is intended to work with any Gerrit instance, not just openstack,
so you'll need a config file that tailors it for openstack world,
like our team membership, bot accounts, etc. I've got a demo config
file here that should point you in the right direction at least:

   https://wiki.openstack.org/wiki/GerrymanderConfig

Feel free to update that wiki to add more project teams I've not
covered already, since there's many openstack projects I've not
familiar enough with.

NB, I've tested it to work on Linux systems. It would probably work
on OS-X since that resembles UNIX well enough, but Windows people
might be out of luck since it relies on forking stuff like ssh
and less with pipes connecting them up.


In terms of how I personally use it (assuming the above config file)...

At least once a day I look at all open changes in Nova which touch any
libvirt source files

  $ gerrymander -g nova --branch master libvirt

and will go through and review anything that I've not seen before.

In addition I'll process any open changes where I have commented on a
previous version of a patch, but not commented in the current version
which is a report generated by a special command:

  $ gerrymander -g nova todo-mine

If I have more time I'll look at any patches which have been submitted
where *no one* has done any review, touching anything in the nova source
tree

  $ gerrymander -g nova todo-noones

or any patches where I've not reviewed the current version

  $ gerrymander -g nova todo-others

That's pretty much all I need on a day-to-day basis to identify stuff
needing my review attention. For actual review I'll use the gerrit
web UI, or git fetch the patch locally depending on complexity of the
change in question. Also if I want to see comments fully expanded,
without bots, for a specific change number 104264 I would do

  $ gerrymander comments 104264

One day I'd love to be able to write some reports which pull priority
data on blueprints from nova-specs or launchpad, and correlate with 
reviews so important changes needing attention can be highlighted...

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [Cinder] 3'rd party CI systems: Not Whitelisted Volume type with name * could not be found.

2014-08-14 Thread Asselin, Ramy
Hi,

Does anyone know how to configure cinder ci tests to not have these errors?

12:32:11 *** Not Whitelisted *** 2014-08-14 12:23:56.179 18021 ERROR 
cinder.volume.volume_types [req-c9ec92ab-132b-4167-a467-15bd213659e8 
bd1f54a867ce47acb4097cd94149efa0 d15dcf4cb3c7495389799ec849e9036d - - -] 
Default volume type is not found, please check default_volume_type config: 
Volume type with name dot could not be found.

17:43:28 *** Not Whitelisted *** 2014-08-11 17:31:20.343 2097 ERROR 
cinder.volume.volume_types [req-01e539ad-5357-4a0f-8d58-464b970114f9 
f72cd499be584d9d9585bc26ca71c603 d845ad2d7e6a47dfb84bdf0754f60384 - - -] 
Default volume type is not found, please check default_volume_type config: 
Volume type with name cat_1 could not be found.

http://15.126.198.151/60/111460/18/check/dsvm-tempest-hp-lefthand/3ee1598/console.html
http://publiclogs.emc.com/vnx_ostack/EMC_VNX_FC/212/console.html

Thanks,
Ramy



Re: [openstack-dev] [vmware] Canonical list of os types

2014-08-14 Thread Anita Kuno
On 08/14/2014 05:32 AM, Matthew Booth wrote:
> I've just spent the best part of a day tracking down why instance
> creation was failing on a particular setup. The error message from
> CreateVM_Task was: 'A specified parameter was not correct'.
> 
> After discounting a great many possibilities, I finally discovered that
> the problem was guestId, which was being set to 'CirrosGuest'.
> Unusually, the vSphere API docs don't contain a list of valid values for
> that field. Given the unhelpfulness of the error message, it might be
> worthwhile validating that field (which we get from glance) and
> displaying an appropriate warning.
> 
> Does anybody have a canonical list of valid values?
> 
> Thanks,
> 
> Matt
> 
os == operating system
openstack == openstack

Thanks,
Anita.



Re: [openstack-dev] Using gerrymander, a client API and command line tool for gerrit

2014-08-14 Thread Alessandro Pilotti
This simple but effective tool went straight into my personal list of daily
most used utilities since Daniel suggested it to me some weeks ago.
I’m using it mostly on OS X and it works smoothly. Dan sorted out very quickly a
couple of minor bugs that came up during initial testing.

It proved to be particularly useful while tracking the status of all the Hyper-V
pending patches that we had on the Havana and Icehouse backports.

Since then I use it constantly to track my co-workers patches and reviews and
keep an eye on the CI failures since, frankly, beyond a given volume emails are
simply too noisy, no matter how many filters you set.

All in all a simple productivity saver, it’d be great to have similar features
directly in Gerrit.

So, thanks again! :)

Alessandro


On 14 Aug 2014, at 19:54, Daniel P. Berrange  wrote:

> Over the past couple of months I've worked on creating a python library
> and command line tool for dealing with gerrit. I've blogged about this
> a number of times on http://planet.openstack.org, but I figure there are
> quite a few people who won't be reading the blogs there or easily miss
> some posts hence this mail to alert people to its existance.
> 
> Gerrymander was born out of my frustration with the inefficiency of
> handling reviews through the gerrit web UI. In particular (until
> today - yay !) the review comments were splattered with noise from
> CI bots, and it was hard to generate some kinds of reports for lists
> of reviews you might want. The gerrit upgrade has improved the latter
> to with better query language support, but it is still somewhat limited
> in the way it can present the results and I prefer the CLI reports over
> web UI anyway.
> 
> You might be familiar with tools like gerrit_view, reviewtodo and
> reviewstats written by Josh Harlow and / or Russell Bryant in the
> past. As a starting point, gerrymander aimed to offer all the
> functionality present in those tools. More importantly though was
> the idea to write it as a set of a python classes rather than just
> put everything in a command line tool. This makes it easy for people
> to customize & extend its functionality to pull our new interesting
> types of data.
> 
> Personally, using gerrymander to query gerrit for reviews and being
> able to quickly get a list of comments filtered for bots has had a
> big positive impact on my efficiency as a core reviewer. I've been
> able to review more code in less time and focus my attention better.
> So if you are the kind of person who is more comfortable with having
> command line tools instead of pointy-clicky web UIs, I'd encourage
> you to try it out for yourself. Hopefully it will be of as much use
> to other reviewers as it has been for me.
> 
> You can get it from pypi  (pip install gerrymander) and for a quick
> start guide, check out the README file
> 
>   https://github.com/berrange/gerrymander/blob/master/README
> 
> Or my blog posts
> 
>   https://www.berrange.com/tags/gerrymander/
> 
> It is intended to work with any Gerrit instance, not just openstack,
> so you'll need a config file that tailors it for openstack world,
> like our team membership, bot accounts, etc. I've got a demo config
> file here that should point you in the right direction at least:
> 
>   https://wiki.openstack.org/wiki/GerrymanderConfig
> 
> Feel free to update that wiki to add more project teams I've not
> covered already, since there's many openstack projects I've not
> familiar enough with.
> 
> NB, I've tested it to work on Linux systems. It would probably work
> on OS-X since that resembles UNIX well enough, but Windows people
> might be out of luck since it relies on forking stuff like ssh
> and less with pipes connecting them up.
> 
> 
> In terms of how I personally use it (assuming the above config file)...
> 
> At least once a day I look at all open changes in Nova which touch any
> libvirt source files
> 
>  $ gerrymander -g nova --branch master libvirt
> 
> and will go through and review anything that I've not seen before.
> 
> In addition I'll process any open changes where I have commented on a
> previous version of a patch, but not commented in the current version
> which is a report generated by a special command:
> 
>  $ gerrymander -g nova todo-mine
> 
> If I have more time I'll look at any patches which have been submitted
> where *no one* has done any review, touching anything in the nova source
> tree
> 
>  $ gerrymander -g nova todo-noones
> 
> or any patches where I've not reviewed the current version
> 
>  $ gerrymander -g nova todo-others
> 
> That's pretty much all I need on a day-to-day basis to identify stuff
> needing my review attention. For actual review I'll use the gerrit
> web UI, or git fetch the patch locally depending on complexity of the
> change in question. Also if I want to see comments fully expanded,
> without bots, for a specific change number 104264 I would do
> 
>  $ gerrymander comments 104264
> 
> One day I'd love to be able to write 

Re: [openstack-dev] Is there a way to let nova schedule plugin fetch glance image metadata

2014-08-14 Thread Dugger, Donald D
Indeed, no REST API yet, but that's one of the things we will be considering 
for Gantt.  I wonder if this is yet another argument for Gantt and, if you can 
wait until the Kilo cycle, maybe we can provide an appropriate service for Heat.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Wednesday, August 13, 2014 10:12 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Is there a way to let nova schedule plugin fetch 
glance image metadata

On 08/13/2014 11:06 PM, zhiwei wrote:
> Hi Jay.
>
> The case is: When heat creates a stack, it will first call our
> scheduler (passing image_id), and our scheduler will get image metadata
> by image_id.
>
> Our scheduler will build a placement policy from the image metadata,
> then start booting the VM.

How exactly is Heat calling your scheduler? The Nova scheduler does not have a 
public REST API, so I'm unsure how you are calling it.

More details needed, thanks!
  :)

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [oslo] Issues with POSIX semaphores and other locks in lockutils

2014-08-14 Thread Yuriy Taraday
Hello.

It started (for me) with a bug [1] that described one problem with POSIX
semaphores. I started implementing a fix for that [2], and it turned out
that there are other issues with lockutils as well.
The discussion got spread over the bug [1] and Gerrit [2], and so was
limited to those who get notifications on them.
I'd like to move this discussion to the mailing list and recap all the
thoughts in an etherpad [3] so that everyone is on the same page.
I've written up my view of the possible options on the etherpad [3], so
please take a look and share your thoughts.
I personally prefer doing options B and AB2 (SysV semaphores plus a fix for
them), while we can fix file locks with C3 as well.
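For readers without context, the simplest of the lock flavors being compared is an fcntl-based file lock. The sketch below is not oslo's actual lockutils code, just the underlying interprocess-locking pattern whose failure modes are being enumerated; flock locks are released automatically when the holding process dies, which is one of the trade-offs weighed against POSIX and SysV semaphores:

```python
import fcntl
import os

class InterProcessFileLock:
    """Minimal fcntl.flock()-based lock -- a sketch, not oslo's lockutils.

    The kernel drops the lock if the holder dies, so there is no stale-lock
    cleanup problem, but the lock file itself is never removed.
    """

    def __init__(self, path):
        self.path = path
        self.fd = None

    def __enter__(self):
        self.fd = os.open(self.path, os.O_CREAT | os.O_RDWR)
        fcntl.flock(self.fd, fcntl.LOCK_EX)  # block until we own the lock
        return self

    def __exit__(self, *exc):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)

with InterProcessFileLock("/tmp/lockutils-demo.lock"):
    pass  # critical section
```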

Looking forward to your opinions.

[1] https://launchpad.net/bugs/1327946
[2] https://review.openstack.org/108954
[3] https://etherpad.openstack.org/p/lockutils-issues

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Dugger, Donald D
My experience with mics in conference rooms, no matter how good they are, is
not good.  You are always straining to hear the soft-spoken person who is
drowned out by the background noise.  It's a strain and I don't want to even
think about doing it for a full 8 hrs.  It's better than nothing, so I agree
we need to provide some sort of remote conferencing at meetups, but nothing
compares with actually being there.

I'm not a core so I'm speaking from the outside, but I think requiring
attendance at the summits (absence requires a note from your parent ☺) while
strongly encouraging attendance at mid-cycle meetups seems reasonable.

PS:  Regarding companies spending millions on OpenStack, so travel to f2f
events should be a drop in the bucket: it doesn't always work that way with
the finance department.  Money comes out of different buckets, and the
travel bucket `always` dries up first.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: Wednesday, August 13, 2014 10:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][core] Expectations of core reviewers



On Wed, Aug 13, 2014 at 8:31 PM, Michael Still <mi...@stillhq.com> wrote:
On Thu, Aug 14, 2014 at 1:24 PM, Jay Pipes <jaypi...@gmail.com> wrote:

> Just wanted to quickly weigh in with my thoughts on this important topic. I
> very much valued the face-to-face interaction that came from the mid-cycle
> meetup in Beaverton (it was the only one I've ever been to).
>
> That said, I do not believe it should be a requirement that cores make it to
> the face-to-face meetings in-person. A number of folks have brought up very
> valid concerns about personal/family time, travel costs and burnout.
I'm not proposing they be a requirement. I am proposing that they be
strongly encouraged.

> I believe that the issue raised about furthering the divide between core and
> non-core folks is actually the biggest reason I don't support a mandate to
> have cores at the face-to-face meetings, and I think we should make our best
> efforts to support quality virtual meetings that can be done on a more
> frequent basis than the face-to-face meetings that would be optional.
I am all for online meetings, but we don't have a practical way to do
them at the moment apart from IRC. Until someone has a concrete
proposal that's been shown to work, I feel it's a straw man argument.

What about making it easier for remote people to participate at the mid-cycle 
meetups? Set up some microphones and a Google hangout?  At least that way 
attending the mid-cycle is not all or nothing.

We did something like this last cycle (IIRC we didn't have enough mics) and it 
worked pretty well.


Michael

--
Rackspace Australia




Re: [openstack-dev] [OpenStack-Infra] [infra] Third Party CI naming and contact (action required)

2014-08-14 Thread Lucas Eznarriaga
Hi,

Midokura CI should be up to date now too.
We have a problem with zuul-merger when the host id of review.openstack.org
changes. In this case, the new RSA host key has to be added manually;
otherwise the build will fail with:

Merge Failed.

This change was unable to be automatically merged with the current state of
the repository. Please rebase your change and upload a new patchset

So if the review.openstack.org host key changes again soon, builds will
start failing again.
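The usual manual fix is `ssh-keygen -R review.openstack.org` (then reconnecting, or running `ssh-keyscan`, so the new key is learned). The pruning step amounts to something like the following sketch; note it ignores hashed hostnames and `[host]:port` entries, which a real fix would also have to handle:

```python
def drop_host_key(known_hosts_text, host):
    """Remove known_hosts lines for `host` so a new key can be accepted.

    Equivalent in spirit to `ssh-keygen -R host`, minus hashed-hostname
    and [host]:port handling.
    """
    kept = []
    for line in known_hosts_text.splitlines():
        fields = line.split(None, 1)
        # First field is a comma-separated list of host patterns.
        hosts = fields[0].split(",") if fields else []
        if host not in hosts:
            kept.append(line)
    return "\n".join(kept)

sample = ("review.openstack.org,23.253.232.87 ssh-rsa AAAA...old\n"
          "github.com ssh-rsa AAAA...keep")
print(drop_host_key(sample, "review.openstack.org"))
# github.com ssh-rsa AAAA...keep
```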
That said, taking into account that August is the usual vacation month in
Europe and probably in some other parts as well, the end of August sounds
like a tight deadline. Also, I have been attending the third-party meetings
regularly, except the last one (I have checked the logs), and I feel an
announcement of these new requirements should have been made there.

Have a great summer!

Lucas




On Wed, Aug 13, 2014 at 8:23 PM, James E. Blair  wrote:

> Hi,
>
> We've updated the registration requirements for third-party CI systems
> here:
>
>   http://ci.openstack.org/third_party.html
>
> We now have 86 third-party CI systems registered and have undertaken an
> effort to make things more user-friendly for the developers who interact
> with them.  There are two important changes to be aware of:
>
> 1) We now generally name third-party systems in a descriptive manner
> including the company and product they are testing.  We have renamed
> currently-operating CI systems to match these standards to the best of
> our abilities.  Some of them ended up with particularly bad names (like
> "Unknown Function...").  If your system is one of these, please join us
> in #openstack-infra on Freenode to establish a more descriptive name.
>
> 2) We have established a standard wiki page template to supply a
> description of the system, what is tested, and contact information for
> each system.  See https://wiki.openstack.org/wiki/ThirdPartySystems for
> an index of such pages and instructions for creating them.  Each
> third-party CI system will have its own page in the wiki and it must
> include a link to that page in every comment that it leaves in Gerrit.
>
> If you operate a third-party CI system, please ensure that you register
> a wiki page and update your system to link to it in every new Gerrit
> comment by the end of August.  Beginning in September, we will disable
> systems that have not been updated.
>
> Thanks,
>
> Jim
>
> ___
> OpenStack-Infra mailing list
> openstack-in...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>


Re: [openstack-dev] [OpenStack-Dev][Cinder] Cinder Core nomination

2014-08-14 Thread John Griffith
On Thu, Aug 14, 2014 at 10:29 AM, Mike Perez  wrote:

> On 06:55 Thu 14 Aug , Boring, Walter wrote:
> > Hey guys,
> >I wanted to pose a nomination for Cinder core.
> >
> > Xing Yang.
> > She has been active in the cinder community for many releases and has
> worked on several drivers as well as other features for cinder itself.
> She has been doing an awesome job doing reviews and helping folks out in
> the #openstack-cinder irc channel for a long time.   I think she would be a
> good addition to the core team.
>
> +1
>
> If you take a look at the last 3 months [1][2] she has been very active and
> providing great feedback in reviews. She has also been setting great
> expectations for other drivers with the recent third-party CI work. Thanks
> Xing!
>
> [1] - http://russellbryant.net/openstack-stats/cinder-reviewers-90.txt
> [2] - http://stackalytics.com/?user_id=xing-yang
>
> --
> Mike Perez
>
>

+1 for sure!!


Re: [openstack-dev] [OpenStack-Dev][Cinder] Cinder Core nomination

2014-08-14 Thread Jay S. Bryant
Great nomination Walt.  Thank you!

+1

Thank you Xing for your contributions!

Jay

On Thu, 2014-08-14 at 06:55 +, Boring, Walter wrote:
> Hey guys,
>I wanted to pose a nomination for Cinder core.
> 
> Xing Yang.
> She has been active in the cinder community for many releases and has worked 
> on several drivers as well as other features for cinder itself.   She has 
> been doing an awesome job doing reviews and helping folks out in the 
> #openstack-cinder irc channel for a long time.   I think she would be a good 
> addition to the core team.
> 
> 
> Walt





Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Joe Gordon
On Thu, Aug 14, 2014 at 1:47 AM, Daniel P. Berrange 
wrote:

> On Thu, Aug 14, 2014 at 09:24:36AM +1000, Michael Still wrote:
> > On Thu, Aug 14, 2014 at 3:09 AM, Dan Smith  wrote:
> > >> I'm not questioning the value of f2f - I'm questioning the idea of
> > >> doing f2f meetings sooo many times a year. OpenStack is very much
> > >> the outlier here among open source projects - the vast majority of
> > >> projects get along very well with much less f2f time and a far
> > >> smaller % of their contributors attend those f2f meetings that do
> > >> happen. So I really do question what is missing from OpenStack's
> > >> community interaction that makes us believe that having 4 f2f
> > >> meetings a year is critical to our success.
> > >
> > > How many is too many? So far, I have found the midcycles to be
> extremely
> > > productive -- productive in a way that we don't see at the summits, and
> > > I think other attendees agree. Obviously if budgets start limiting
> them,
> > > then we'll have to deal with it, but I don't want to stop meeting
> > > preemptively.
> >
> > I agree they're very productive. Let's pick on the nova v3 API case as
> > an example... We had failed as a community to reach a consensus using
> > our existing discussion mechanisms (hundreds of emails, at least three
> > specs, phone calls between the various parties, et cetera), yet at the
> > summit and then a midcycle meetup we managed to nail down an agreement
> > on a very contentious and complicated topic.
>
> We thought we had agreement on v3 API after Atlanta f2f summit and
> after Hong Kong f2f too. So I wouldn't necessarily say that we
> needed another f2f meeting to resolve that, but rather that this is
> a very complex topic that takes a long time to resolve no matter
> how we discuss it, and the discussions just happened to reach
> a natural conclusion this time around. But let's see if this agreement
> actually sticks this time
>
> > I can see the argument that travel cost is an issue, but I think its
> > also not a very strong argument. We have companies spending millions
> > of dollars on OpenStack -- surely spending a relatively small amount
> > on travel to keep the development team as efficient as possible isn't
> > a big deal? I wouldn't be at all surprised if the financial costs of
> > the v3 API debate (staff time mainly) were much higher than the travel
> > costs of those involved in the summit and midcycle discussions which
> > sorted it out.
>
> I think the travel cost really is a big issue. Due to the number of
> people who had to travel to the many mid-cycle meetups, a good number
> of people I work with no longer have the ability to go to the Paris
> design summit. This is going to make it harder for them to feel a
> proper engaged part of our community. I can only see this situation
> get worse over time if greater emphasis is placed on attending the
> mid-cycle meetups.
>
> > Travelling to places to talk to people isn't a great solution, but it
> > is the most effective one we've found so far. We should continue to
> > experiment with other options, but until we find something that works
> > as well as meetups, I think we need to keep having them.
> >
> > > IMHO, the reasons to cut back would be:
> > >
> > > - People leaving with a "well, that was useless..." feeling
> > > - Not enough people able to travel to make it worthwhile
> > >
> > > So far, neither of those have been outcomes of the midcycles we've had,
> > > so I think we're doing okay.
> > >
> > > The design summits are structured differently, where we see a lot more
> > > diverse attendance because of the colocation with the user summit. It
> > > doesn't lend itself well to long and in-depth discussions about
> specific
> > > things, but it's very useful for what it gives us in the way of
> > > exposure. We could try to have less of that at the summit and more
> > > midcycle-ish time, but I think it's unlikely to achieve the same level
> > > of usefulness in that environment.
> > >
> > > Specifically, the lack of colocation with too many other projects has
> > > been a benefit. This time, Mark and Maru where there from Neutron. Last
> > > time, Mark from Neutron and the other Mark from Glance were there. If
> > > they were having meetups in other rooms (like at summit) they wouldn't
> > > have been there exposed to discussions that didn't seem like they'd
> have
> > > a component for their participation, but did after all (re: nova and
> > > glance and who should own flavors).
> >
> > I agree. The ability to focus on the issues that were blocking nova
> > was very important. That's hard to do at a design summit when there is
> > so much happening at the same time.
>
> Maybe we should change the way we structure the design summit to
> improve that. If there are critical issues blocking nova, it feels
> like it is better to be able to discuss and resolve as much as possible
> at the start of the dev cycle rather than in the middle of the dev
> cy

Re: [openstack-dev] [Neutron][DevStack] How to increase developer usage of Neutron

2014-08-14 Thread Mike Spreitzer
"CARVER, PAUL"  wrote on 08/14/2014 09:35:17 AM:

> Mike Spreitzer [mailto:mspre...@us.ibm.com] wrote:
> 
> >I'll bet I am not the only developer who is not highly competent with
> >bridges and tunnels, Open vSwitch, Neutron configuration, and how DevStack
> >transmutes all those.  My bet is that you would have more developers using
> >Neutron if there were an easy-to-find and easy-to-follow recipe to use, to
> >create a developer install of OpenStack with Neutron.  One that's a pretty
> >basic and easy case.  Let's say a developer gets a recent image of Ubuntu
> >14.04 from Canonical, and creates an instance in some undercloud, and that
> >instance has just one NIC, at 10.9.8.7/16.  If there were a recipe for
> >such a developer to follow from that point on, it would be great.
> 
> https://wiki.openstack.org/wiki/NeutronDevstack worked for me.
> 
> However, I'm pretty sure it's only a single-node all-in-one setup. At least,
> I created only one VM to run it on and I don't think DevStack has created
> multiple nested VMs inside of the one I create to run DevStack. I haven't
> gotten around to figuring out how to set up a full multi-node DevStack
> setup with separate compute nodes and network nodes and GRE/VXLAN tunnels.
> 
> There are multi-node instructions on that wiki page but I haven't tried
> following them. If someone has a Vagrant file that creates a full multi-
> node Neutron devstack complete with GRE/VXLAN tunnels it would be great
> if they could add it to that wiki page.

A working concrete recipe for a single-node install would be great.

https://wiki.openstack.org/wiki/NeutronDevstack is far from a concrete
recipe, leaving many blanks to be filled in by the reader.  My problem is
that as a non-expert in the relevant networking arcana, Neutron
implementation, and DevStack configuration options, it is not entirely
obvious how to fill in the blanks.  For starters,
http://docs.openstack.org/admin-guide-cloud/content/network-connectivity.html
speaks of four networks and, appropriately for a general page like that,
does not relate them to NICs.  But at the start of the day, I need to know
how many NICs to put on my host VM (the one in which I will run DevStack to
install OpenStack), how to configure them in the host VM's operating
system, and how to tell DevStack whatever details it needs and cannot
figure out on its own (I am not even clear on what that set is).  I need to
know how to derive the fixed and floating IP address ranges from the
networking context of my host VM.  A recipe that requires more than one NIC
on my host VM can be problematic in some situations, which is why I
suggested starting with a recipe for a host with a single NIC.
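For what it's worth, the single-node recipe being asked for amounts roughly to a DevStack localrc along these lines. This is a sketch from memory of DevStack's Neutron service toggles of that era, and every address below is a placeholder that has to be derived from the host VM's actual NIC and surrounding network — which is exactly the blank-filling the paragraphs above complain about:

```shell
# localrc sketch -- all values illustrative, adjust to your host VM
HOST_IP=10.9.8.7                 # address of the host VM's single NIC
disable_service n-net            # turn off nova-network
enable_service q-svc q-agt q-dhcp q-l3 q-meta   # enable Neutron services
FIXED_RANGE=10.11.12.0/24        # private tenant network (any unused range)
FLOATING_RANGE=10.9.100.0/24     # must be reachable from the NIC's network
ADMIN_PASSWORD=devstack
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
```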

I am not using Xen in my undercloud.  I suspect many developers are not 
using Xen.
I did not even know it was possible to install OpenStack inside a Xen VM;
does that still work?

I was hoping for a working concrete recipe that does not depend on the
undercloud, rather something that works in a vanilla context that can
easily be established with whatever undercloud a given developer is using.

Thanks,
Mike


Re: [openstack-dev] Is there a way to let nova schedule plugin fetch glance image metadata

2014-08-14 Thread Sylvain Bauza
Mmm... I can understand that you perhaps need an external scheduler for
your own purposes, but I think you can't expect your plugin to be merged
upstream, for two reasons:
- during Icehouse, there was an effort to stop the scheduler proxying to
the compute nodes
- calls to the scheduler need to go through Nova-api, and no endpoints
exist there for just scheduling

That said, once Gantt will be lifted, discussing about possible endpoints
sounds reasonable to me.

-Sylvain

On 14 August 2014 at 05:07, "zhiwei" wrote:
>
> Hi Jay.
>
> The case is: When heat creates a stack, it will first call our
> scheduler (passing image_id), and our scheduler will get image metadata by
> image_id.
>
> Our scheduler will build a placement policy from the image metadata, then
> start booting the VM.
>
>
> Thanks.
>
>
> On Thu, Aug 14, 2014 at 10:28 AM, Jay Pipes  wrote:
>>
>> On 08/13/2014 10:22 PM, zhiwei wrote:
>>>
>>> Thanks Jay.
>>>
>>> The scheduler plugin is not a scheduler filter.
>>>
>>> We implemented a scheduler instead of using nova native scheduler.
>>
>>
>> OK. Any reason why you did this? Without any details on what your
>> scheduler does, it's tough to give advice on how to solve your problems.
>>
>>
>>> One of our scheduler components needs to fetch image metadata by
>>> image_id (at this time, there is no instance yet).
>>
>>
>> Why? Again, the request_spec contains all the information you need about
>> the image...
>>
>> Best,
>> -jay
>>
>>> On Thu, Aug 14, 2014 at 9:29 AM, Jay Pipes wrote:
>>>
>>> On 08/13/2014 08:31 PM, zhiwei wrote:
>>>
>>> Hi all,
>>>
>>> We wrote a nova scheduler plugin that needs to fetch image metadata by
>>> image_id, but encountered one thing: we did not have the glance
>>> context.
>>>
>>> Our solution is to configure the OpenStack admin user and password in
>>> nova.conf; as you know, this is not good.
>>>
>>> So, I want to ask if there are any other ways to do this?
>>>
>>>
>>> You should not have to do a separate fetch of image metadata in a
>>> scheduler filter (which is what I believe you meant by "plugin"
>>> above?).
>>>
>>> The filter object's host_passes() method has a filter_properties
>>> parameter that contains the request_spec, that in turn contains the
>>> image, which in turn contains the image "metadata". You can access
>>> it like so:
>>>
>>>   def host_passes(self, host_state, filter_properties):
>>>       request_spec = filter_properties['request_spec']
>>>
>>>       image_info = request_spec['image']
>>>       # Certain image attributes are accessed via top-level keys, like
>>>       # size, disk_format, container_format and checksum
>>>       image_size = image_info['size']
>>>       # Other attributes can be accessed in the "properties" collection
>>>       # of key/value pairs
>>>       image_props = image_info.get('properties', {})
>>>       for key, value in image_props.items():
>>>           # do something...
>>>           pass
>>>
>>> Best,
>>> -jay
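Jay's host_passes() pattern can be exercised standalone. Below is a toy sketch — a plain class rather than nova's real filter base class, with made-up host data — of how the image information flows from the request_spec into a placement decision:

```python
class ImageSizeFilter:
    """Toy scheduler filter: the host must have room for the image."""

    def host_passes(self, host_state, filter_properties):
        request_spec = filter_properties['request_spec']
        image_info = request_spec['image']
        image_props = image_info.get('properties', {})
        # Reject hosts whose hypervisor doesn't match the image's hint.
        if image_props.get('hypervisor_type', host_state['hv']) != host_state['hv']:
            return False
        # Top-level image keys like 'size' are available directly.
        return host_state['free_disk_gb'] * 1024 ** 3 >= image_info['size']

host = {'hv': 'kvm', 'free_disk_gb': 40}
props = {'request_spec': {'image': {'size': 2 * 1024 ** 3,
                                    'properties': {'hypervisor_type': 'kvm'}}}}
print(ImageSizeFilter().host_passes(host, props))  # True
```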
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>
>
>
>


[openstack-dev] [nova] Review priorities as we approach juno-3

2014-08-14 Thread Michael Still
Hi.

We're rapidly approaching j-3, so I want to remind people of the
current reviews that are high priority. The definition of high
priority I am using here is blueprints that are marked high priority
in launchpad that have outstanding code for review -- I am sure there
are other reviews that are important as well, but I want us to try to
land more blueprints than we have so far. These are listed in the
order they appear in launchpad.

== Compute Manager uses Objects (Juno Work) ==

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/compute-manager-objects-juno,n,z

This is ongoing work, but if you're after some quick code review
points they're very easy to review and help push the project forward
in an important manner.

== Move Virt Drivers to use Objects (Juno Work) ==

I couldn't actually find any code out for review for this one apart
from https://review.openstack.org/#/c/94477/, is there more out there?

== Add a virt driver for Ironic ==

This one is in progress, but we need to keep going at it or we won't
get it merged in time.

* https://review.openstack.org/#/c/111223/ was approved, but a rebase
ate it. Should be quick to re-approve.
* https://review.openstack.org/#/c/111423/
* https://review.openstack.org/#/c/111425/
* ...there are more reviews in this series, but I'd be super happy to
see even a few reviewed

== Create Scheduler Python Library ==

* https://review.openstack.org/#/c/82778/
* https://review.openstack.org/#/c/104556/

(There are a few abandoned patches in this series, I think those two
are the active ones but please correct me if I am wrong).

== VMware: spawn refactor ==

* https://review.openstack.org/#/c/104145/
* https://review.openstack.org/#/c/104147/ (Dan Smith's -2 on this one
seems procedural to me)
* https://review.openstack.org/#/c/105738/
* ...another chain with many more patches to review

Thanks,
Michael

-- 
Rackspace Australia



[openstack-dev] [sahara] team meeting minutes Aug 14

2014-08-14 Thread Sergey Lukjanov
Thanks everyone who have joined Sahara meeting.

Here are the logs from the meeting:

http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-08-14-18.00.html
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-08-14-18.00.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [OpenStack-Infra] [infra] Third Party CI naming and contact (action required)

2014-08-14 Thread Jeremy Stanley
On 2014-08-14 20:50:21 +0200 (+0200), Lucas Eznarriaga wrote:
[...]
> We have a problem with zuul-merger when the host id of
> review.openstack.org changes. In this case, the new rsa key has to
> be added manually
[...]

If so, you were broken for a while... the last (and effectively only in
recent history) time we replaced the Gerrit SSH API RSA host key on
review.openstack.org was in April as a safety precaution in the wake
of the Heartbleed Bug announcement. We definitely haven't touched it
in the four months since then.
-- 
Jeremy Stanley



Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?

2014-08-14 Thread Sylvain Bauza
Hi mikal,

On 14 August 2014 at 01:49, "Michael Still" wrote:
>
> So, there's been a lot of email in the last few days and I feel I am
> not keeping up.
>
> Sylvain, can you summarise for me what the plan is here? Can we roll
> forward or do we need to revert?

Well, as we agreed with Nikola, the problem is not with ERT but RT, as the
request data needs to be passed when claiming a resource.

I'm proposing to keep ERT and only consider plugins that do not need
request_spec when claiming, but there is no agreement on this yet.

Unfortunately, I'm on PTO till Tuesday, and Paul Murray this week as well.
So I propose to delay the discussion by these days as that's not impacted
by FPF.

In the meantime, I created a patch for discussing a workaround [1] for Juno
until we correctly figure out how to fix that issue, as it deserves a spec.

> Time is running out for Juno.
>

Indeed, I'm mostly concerned by the spec exception example that Nikola
mentioned [2] (isolate-scheduler-db), as it still needs a second +2 while
FPF is only 1 week away...
I'm planning to deliver an alternative implementation without ERT wrt this
discussion.

-Sylvain

[1] https://review.openstack.org/#/c/113936/

[2] https://review.openstack.org/#/c/89893

> Thanks,
> Michael
>
> On Thu, Aug 14, 2014 at 3:40 AM, Sylvain Bauza  wrote:
> >
> > Le 13/08/2014 18:40, Brian Elliott a écrit :
> >
> >> On Aug 12, 2014, at 5:21 AM, Nikola Đipanov 
wrote:
> >>
> >>> Hey Nova-istas,
> >>>
> >>> While I was hacking on [1] I was considering how to approach the fact
> >>> that we now need to track one more thing (NUMA node utilization) in our
> >>> resources. I went with - "I'll add it to compute nodes table" thinking
> >>> it's a fundamental enough property of a compute host that it deserves to
> >>> be there, although I was considering Extensible Resource Tracker at one
> >>> point (ERT from now on - see [2]) but looking at the code - it did not
> >>> seem to provide anything I desperately needed, so I went with keeping it
> >>> simple.
> >>>
> >>> So fast-forward a few days, and I caught myself solving a problem that I
> >>> kept thinking ERT should have solved - but apparently hasn't, and I
> >>> think it is fundamentally a broken design without it - so I'd really
> >>> like to see it re-visited.
> >>>
> >>> The problem can be described by the following lemma (if you take 'lemma'
> >>> to mean 'a sentence I came up with just now' :)):
> >>>
> >>> """
> >>> Due to the way scheduling works in Nova (roughly: pick a host based on
> >>> stale(ish) data, rely on claims to trigger a re-schedule), _same exact_
> >>> information that scheduling service used when making a placement
> >>> decision, needs to be available to the compute service when testing the
> >>> placement.
> >>> """
> >>
> >> Correct
> >>
> >>> This is not the case right now, and the ERT does not propose any way to
> >>> solve it - (see how I hacked around needing to be able to get
> >>> extra_specs when making claims in [3], without hammering the DB). The
> >>> result will be that any resource that we add and needs user supplied
> >>> info for scheduling an instance against it, will need a buggy
> >>> re-implementation of gathering all the bits from the request that
> >>> scheduler sees, to be able to work properly.
> >>
> >> Agreed, ERT does not attempt to solve this problem of ensuring RT has an
> >> identical set of information for testing claims.  I don't think it was
> >> intended to.
> >>
> >> ERT does solve the issue of bloat in the RT with adding
> >> just-one-more-thing to test usage-wise.  It gives a nice hook for inserting
> >> your claim logic for your specific use case.
> >
> >
> > I think Nikola and I agreed on the fact that ERT is not responsible for
> > this design. That said I can talk on behalf of Nikola...
> >
> >
> >
> >>> This is obviously a bigger concern when we want to allow users to pass
> >>> data (through image or flavor) that can affect scheduling, but still a
> >>> huge concern IMHO.
> >>
> >> I think passing additional data through to compute just wasn't a problem
> >> that ERT aimed to solve.  (Paul Murray?)  That being said, coordinating the
> >> passing of any extra data required to test a claim that is *not* sourced
> >> from the host itself would be a very nice addition.  You are working around
> >> it with some caching in your flavor db lookup use case, although one could
> >> of course cook up a cleaner patch to pass such data through on the "build
> >> this" request to the compute.
> >
> >
> > Indeed, and that's why I think the problem can be resolved thanks to two
> > different things:
> > 1. Filters need to look at what ERT is giving them; that's what
> > isolate-scheduler-db is trying to do (see my patches [2.3 and 2.4] in
> > the previous emails).
> > 2. Some extra user request needs to be checked in the test() method of
> > ERT plugins (where claims are done), so I provided a WIP patch for
> > discussing it: https://review.openstack.org/#/c/113936/
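To make the lemma above concrete, here is a deliberately tiny, self-contained sketch (illustrative only — not Nova code, and the host names and RAM numbers are invented) of the pick-then-claim pattern the thread is discussing:

```python
# Illustrative sketch (not Nova code) of the scheduling pattern described
# above: the scheduler picks a host from a stale(ish) snapshot of host
# state, and the compute host then re-tests the claim against live state,
# raising (to trigger a reschedule) when the placement no longer fits.
hosts = {'node1': {'free_ram_mb': 4096}, 'node2': {'free_ram_mb': 1024}}

def schedule(req_ram_mb):
    # Scheduler view: a possibly stale snapshot of per-host resources.
    for host, state in hosts.items():
        if state['free_ram_mb'] >= req_ram_mb:
            return host
    raise RuntimeError('no valid host')

def claim(host, req_ram_mb):
    # Compute view: live state.  Note that it needs the *same* request
    # information the scheduler used -- which is exactly the gap being
    # discussed in this thread.
    if hosts[host]['free_ram_mb'] < req_ram_mb:
        raise RuntimeError('claim failed, reschedule')
    hosts[host]['free_ram_mb'] -= req_ram_mb

picked = schedule(2048)
claim(picked, 2048)
print(picked, hosts[picked]['free_ram_mb'])  # node1 2048
```

The lemma is then simply the observation that claim() must be able to see everything schedule() saw about the request, not just the host's own state.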
> >
> >
> >

Re: [openstack-dev] [nova] Review priorities as we approach juno-3

2014-08-14 Thread Dan Smith
> == Move Virt Drivers to use Objects (Juno Work) ==
> 
> I couldn't actually find any code out for review for this one apart
> from https://review.openstack.org/#/c/94477/, is there more out there?

This was an umbrella one to cover a bunch of virt driver objects work
done early in the cycle. Much of that is done, I haven't gone looking
for anything to see if there are any obvious things to include under
this anymore, but I'll try to do that.

> == Add a virt driver for Ironic ==
> 
> This one is in progress, but we need to keep going at it or we won't
> get it merged in time.
> 
> * https://review.openstack.org/#/c/111223/ was approved, but a rebase
> ate it. Should be quick to re-approve.
> * https://review.openstack.org/#/c/111423/
> * https://review.openstack.org/#/c/111425/
> * ...there are more reviews in this series, but I'd be super happy to
> see even a few reviewed

I've been reviewing this pretty heavily and I think that it's just taking
a while to make changes given the roundabout way they're getting done
first in Ironic. I'm pretty confident that this one will be okay.

> == VMware: spawn refactor ==
> 
> * https://review.openstack.org/#/c/104145/
> * https://review.openstack.org/#/c/104147/ (Dan Smith's -2 on this one
> seems procedural to me)

Yep, we're just trying to get MineSweeper votes on them before letting
them in. We already had one refactor go in without a minesweeper run
that ... broke minesweeper :)

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] python-saharaclient 0.7.1 released

2014-08-14 Thread Sergey Lukjanov
Hi folks,

python-saharaclient 0.7.1 released.

The previous release, 0.7.0, came out about half a year ago.

The 0.7.1 release contains a bunch of bug fixes and improvements, and
updates the requirements to the Juno ones.

Tarball: 
http://tarballs.openstack.org/python-saharaclient/python-saharaclient-0.7.1.tar.gz
PYPI: https://pypi.python.org/pypi/python-saharaclient/0.7.1
Launchpad: https://launchpad.net/python-saharaclient/+milestone/0.7.1

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [neutron] [third-party] What tests are required to be run

2014-08-14 Thread trinath.soman...@freescale.com
Hi-

I'm hitting bugs for (basic/advanced)_server_ops testing. 

https://bugs.launchpad.net/nova/+bug/1232303

Kindly help me with a fix to this.

Thanking you.




--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com] 
Sent: Thursday, August 14, 2014 9:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] [third-party] What tests are required to be 
run

Folks, I'm not sure if all CI accounts are running sufficient tests.
Per the requirements wiki page here [1], everyone needs to be running more than 
just Tempest API tests, which I still see most neutron third-party CI setups 
doing. I'd like to ask everyone who operates a third-party CI account for 
Neutron to please look at the link below and make sure you are running 
appropriate tests. If you have questions, the weekly third-party meeting [2] is 
a great place to ask questions.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
[2] https://wiki.openstack.org/wiki/Meetings/ThirdParty



Re: [openstack-dev] [all] The future of the integrated release

2014-08-14 Thread Joe Gordon
On Wed, Aug 13, 2014 at 12:24 PM, Doug Hellmann 
wrote:

>
> On Aug 13, 2014, at 3:05 PM, Eoghan Glynn  wrote:
>
> >
> >>> At the end of the day, that's probably going to mean saying No to more
> >>> things. Every time I turn around everyone wants the TC to say No to
> >>> things, just not to their particular thing. :) Which is human nature.
> >>> But I think if we don't start saying No to more things we're going to
> >>> end up with a pile of mud that no one is happy with.
> >>
> >> That we're being so abstract about all of this is frustrating. I get
> >> that no-one wants to start a flamewar, but can someone be concrete about
> >> what they feel we should say 'no' to but are likely to say 'yes' to?
> >>
> >>
> >> I'll bite, but please note this is a strawman.
> >>
> >> No:
> >> * Accepting any more projects into incubation until we are comfortable
> with
> >> the state of things again
> >> * Marconi
> >> * Ceilometer
> >
> > Well -1 to that, obviously, from me.
> >
> > Ceilometer is on track to fully execute on the gap analysis coverage
> > plan agreed with the TC at the outset of this cycle, and has an active
> > plan in progress to address architectural debt.
>
> Yes, there seems to be an attitude among several people in the community
> that the Ceilometer team denies that there are issues and refuses to work
> on them. Neither of those things is the case from our perspective.
>

Totally agree.


>
> Can you be more specific about the shortcomings you see in the project
> that aren’t being addressed?
>


Once again, this is just a strawman.

I'm just not sure OpenStack has 'blessed' the best solution out there.

https://wiki.openstack.org/wiki/Ceilometer/Graduation#Why_we_think_we.27re_ready

"

   - Successfully passed the challenge of being adopted by 3 related
     projects which have agreed to join or use ceilometer:
       - Synaps
       - Healthnmon
       - StackTach
"


StackTach still seems to be under active development
(http://git.openstack.org/cgit/stackforge/stacktach/log/), is used by
Rackspace in production, and from everything I hear is more mature than
Ceilometer.


>
> >
> >> Divert all cross-project efforts from the following projects so we can
> >> focus our cross-project resources. Once we are in a better place we can
> >> expand our cross-project resources to cover these again. This doesn't
> >> mean removing anything.
> >> * Sahara
> >> * Trove
> >> * Tripleo
> >
> > You write as if cross-project efforts are both of fixed size and
> > amenable to centralized command & control.
> >
> > Neither of which is actually the case, IMO.
> >
> > Additional cross-project resources can be ponied up by the large
> > contributor companies, and existing cross-project resources are not
> > necessarily divertable on command.
>

Sure additional cross-project resources can and need to be ponied up, but I
am doubtful that will be enough.


>
> What “cross-project efforts” are we talking about? The liaison program in
> Oslo has been a qualified success so far. Would it make sense to extend
> that to other programs and say that each project needs at least one
> designated QA, Infra, Doc, etc. contact?
>
> Doug
>
> >
> >> Yes:
> >> * All integrated projects that are not listed above
> >
> > And what of the other pending graduation request?
> >
> > Cheers,
> > Eoghan
> >


Re: [openstack-dev] [nova] Review priorities as we approach juno-3

2014-08-14 Thread Michael Still
On Fri, Aug 15, 2014 at 6:37 AM, Dan Smith  wrote:
>> == Move Virt Drivers to use Objects (Juno Work) ==
>>
>> I couldn't actually find any code out for review for this one apart
>> from https://review.openstack.org/#/c/94477/, is there more out there?
>
> This was an umbrella one to cover a bunch of virt driver objects work
> done early in the cycle. Much of that is done, I haven't gone looking
> for anything to see if there are any obvious things to include under
> this anymore, but I'll try to do that.

Thanks, I'd appreciate that. If it's all done, we should mark it implemented.

>> == Add a virt driver for Ironic ==
>>
>> This one is in progress, but we need to keep going at it or we won't
>> get it merged in time.
>>
>> * https://review.openstack.org/#/c/111223/ was approved, but a rebase
>> ate it. Should be quick to re-approve.
>> * https://review.openstack.org/#/c/111423/
>> * https://review.openstack.org/#/c/111425/
>> * ...there are more reviews in this series, but I'd be super happy to
>> see even a few reviewed
>
> I've been reviewing this pretty heavily and I think that it's just taking
> a while to make changes given the roundabout way they're getting done
> first in Ironic. I'm pretty confident that this one will be okay.

Yep, I appreciate your focus on this one -- as I am sure the ironic
people do too. If another core was available to pair up with you on
these we might be able to get them to land faster. I was doing that
for a while, but I haven't had time in the last week or so.

>> == VMware: spawn refactor ==
>>
>> * https://review.openstack.org/#/c/104145/
>> * https://review.openstack.org/#/c/104147/ (Dan Smith's -2 on this one
>> seems procedural to me)
>
> Yep, we're just trying to get MineSweeper votes on them before letting
> them in. We already had one refactor go in without a minesweeper run
> that ... broke minesweeper :)

Is there a way for the minesweeper people to manually kick off runs
against specific reviews? Doing that might unblock this faster than
rechecking and hoping.

Michael

-- 
Rackspace Australia



Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-14 Thread Sylvain Bauza
On 14 Aug 2014 at 21:56, "Joe Gordon"  wrote:
>
>
>
>
> On Thu, Aug 14, 2014 at 1:47 AM, Daniel P. Berrange 
wrote:
>>
>> On Thu, Aug 14, 2014 at 09:24:36AM +1000, Michael Still wrote:
>> > On Thu, Aug 14, 2014 at 3:09 AM, Dan Smith  wrote:
>> > >> I'm not questioning the value of f2f - I'm questioning the idea of
>> > >> doing f2f meetings sooo many times a year. OpenStack is very much
>> > >> the outlier here among open source projects - the vast majority of
>> > >> projects get along very well with much less f2f time and a far
>> > >> smaller % of their contributors attend those f2f meetings that do
>> > >> happen. So I really do question what is missing from OpenStack's
>> > >> community interaction that makes us believe that having 4 f2f
>> > >> meetings a year is critical to our success.
>> > >
>> > > How many is too many? So far, I have found the midcycles to be
>> > > extremely productive -- productive in a way that we don't see at the
>> > > summits, and I think other attendees agree. Obviously if budgets
>> > > start limiting them, then we'll have to deal with it, but I don't
>> > > want to stop meeting preemptively.
>> >
>> > I agree they're very productive. Let's pick on the nova v3 API case as
>> > an example... We had failed as a community to reach a consensus using
>> > our existing discussion mechanisms (hundreds of emails, at least three
>> > specs, phone calls between the various parties, et cetera), yet at the
>> > summit and then a midcycle meetup we managed to nail down an agreement
>> > on a very contentious and complicated topic.
>>
>> We thought we had agreement on the v3 API after the Atlanta f2f summit
>> and after the Hong Kong f2f too. So I wouldn't necessarily say that we
>> needed another f2f meeting to resolve that, but rather that this is
>> a very complex topic that takes a long time to resolve no matter
>> how we discuss it, and the discussions had just happened to reach
>> a natural conclusion this time around. But let's see if this agreement
>> actually sticks this time.
>>
>> > I can see the argument that travel cost is an issue, but I think it's
>> > also not a very strong argument. We have companies spending millions
>> > of dollars on OpenStack -- surely spending a relatively small amount
>> > on travel to keep the development team as efficient as possible isn't
>> > a big deal? I wouldn't be at all surprised if the financial costs of
>> > the v3 API debate (staff time mainly) were much higher than the travel
>> > costs of those involved in the summit and midcycle discussions which
>> > sorted it out.
>>
>> I think the travel cost really is a big issue. Due to the number of
>> people who had to travel to the many mid-cycle meetups, a good number
>> of people I work with no longer have the ability to go to the Paris
>> design summit. This is going to make it harder for them to feel a
>> proper engaged part of our community. I can only see this situation
>> get worse over time if greater emphasis is placed on attending the
>> mid-cycle meetups.
>>
>> > Travelling to places to talk to people isn't a great solution, but it
>> > is the most effective one we've found so far. We should continue to
>> > experiment with other options, but until we find something that works
>> > as well as meetups, I think we need to keep having them.
>> >
>> > > IMHO, the reasons to cut back would be:
>> > >
>> > > - People leaving with a "well, that was useless..." feeling
>> > > - Not enough people able to travel to make it worthwhile
>> > >
>> > > So far, neither of those have been outcomes of the midcycles we've
>> > > had, so I think we're doing okay.
>> > >
>> > > The design summits are structured differently, where we see a lot
>> > > more diverse attendance because of the colocation with the user
>> > > summit. It doesn't lend itself well to long and in-depth discussions
>> > > about specific things, but it's very useful for what it gives us in
>> > > the way of exposure. We could try to have less of that at the summit
>> > > and more midcycle-ish time, but I think it's unlikely to achieve the
>> > > same level of usefulness in that environment.
>> > >
>> > > Specifically, the lack of colocation with too many other projects
>> > > has been a benefit. This time, Mark and Maru were there from
>> > > Neutron. Last time, Mark from Neutron and the other Mark from Glance
>> > > were there. If they were having meetups in other rooms (like at
>> > > summit) they wouldn't have been there, exposed to discussions that
>> > > didn't seem like they'd have a component for their participation,
>> > > but did after all (re: nova and glance and who should own flavors).
>> >
>> > I agree. The ability to focus on the issues that were blocking nova
>> > was very important. That's hard to do at a design summit when there is
>> > so much happening at the same time.
>>
>> Maybe we should change the way we structure the design summit to
>> improve that. If there are critical issues blocking nova, it feels
>> like it i

Re: [openstack-dev] [Cinder] 3'rd party CI systems: Not Whitelisted Volume type with name * could not be found.

2014-08-14 Thread Martin, Kurt Frederick (ESSN Storage MSDU)
cinder.conf needs to have a default_volume_type entry set under the [DEFAULT]
section, pointing at a volume type that actually exists, for example
default_volume_type=bronze. This allows a volume to be created when no volume
type is selected (the default 'None' type). This feature has been available in
cinder for some time but was only recently enabled in devstack.
~Kurt
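For illustration only (the type name "bronze" and the admin workflow are assumptions, not something mandated by cinder), the relevant fragment looks like:

```ini
# cinder.conf -- illustrative fragment; 'bronze' is an example type name
[DEFAULT]
default_volume_type = bronze
```

The named type must also exist before it can be used as the default, e.g. created by an admin with `cinder type-create bronze` (or from a devstack local.sh); otherwise creates without an explicit type fail exactly as in the logs quoted in this thread.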

-Original Message-
From: Asselin, Ramy 
Sent: Thursday, August 14, 2014 10:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Cinder] 3'rd party CI systems: Not Whitelisted Volume 
type with name * could not be found.

Hi,

Does anyone know how to configure cinder ci tests to not have these errors?

12:32:11 *** Not Whitelisted *** 2014-08-14 12:23:56.179 18021 ERROR 
cinder.volume.volume_types [req-c9ec92ab-132b-4167-a467-15bd213659e8 
bd1f54a867ce47acb4097cd94149efa0 d15dcf4cb3c7495389799ec849e9036d - - -] 
Default volume type is not found, please check default_volume_type config: 
Volume type with name dot could not be found.

17:43:28 *** Not Whitelisted *** 2014-08-11 17:31:20.343 2097 ERROR 
cinder.volume.volume_types [req-01e539ad-5357-4a0f-8d58-464b970114f9 
f72cd499be584d9d9585bc26ca71c603 d845ad2d7e6a47dfb84bdf0754f60384 - - -] 
Default volume type is not found, please check default_volume_type config: 
Volume type with name cat_1 could not be found.

http://15.126.198.151/60/111460/18/check/dsvm-tempest-hp-lefthand/3ee1598/console.html
http://publiclogs.emc.com/vnx_ostack/EMC_VNX_FC/212/console.html

Thanks,
Ramy



Re: [openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-14 Thread Maru Newby

On Aug 13, 2014, at 11:11 AM, Kevin Benton  wrote:

> Is the pylint static analysis that caught that error prone to false
> positives? If not, I agree that it would be really nice if that were made
> part of the tox check so these don't have to be fixed after the fact.
> 
> To me that particular patch seems like one that should be accompanied with
> a unit test because it's a failure case with cleanup code that isn't being
> touched by the unit tests.

+1

As a general rule I would like to see test addition included with any fix 
targeting a bug that was merged due to a lack of coverage.


m.


> On Aug 13, 2014 8:34 AM, "Armando M."  wrote:
> 
>> I am gonna add more color to this story by posting my replies on review
>> [1]:
>> 
>> Hi Angus,
>> 
>> You touched on a number of points. Let me try to give you an answer to all
>> of them.
>> 
 (I'll create a bug report too. I still haven't worked out which class
>> of changes need an accompanying bug report and which don't.)
>> 
>> The long story can be read below:
>> 
>> https://wiki.openstack.org/wiki/BugFilingRecommendations
>> 
>> https://wiki.openstack.org/wiki/GitCommitMessages
>> 
>> IMO, there's a grey area for some of the issues you found, but when I am
>> faced with a bug, I tend to ask myself: would a bug report be useful to
>> someone else? The author of the code? A consumer of the code? Not everyone
>> follows the code review system all the time, whereas Launchpad is pretty
>> much the tool everyone uses to stay abreast of the OpenStack release
>> cycle. Obviously if you're fixing a grammar nit, or filing a cosmetic
>> change that has no functional impact, then I warrant the lack of a bug
>> report, but in this case you're fixing a genuine error: let's say we want
>> to backport this to icehouse, how else would we make the release manager
>> aware of that? He/she is looking at Launchpad.
>> 
 I can add a unittest for this particular code path, but it would only
>> check this particular short segment of code, would need to be maintained as
>> the code changes, and wouldn't catch another occurrence somewhere else.
>> This seems an unsatisfying return on the additional work :(
>> 
>> You're looking at this from the wrong perspective. This is not about
>> ensuring that other code paths are valid, but that this code path stays
>> valid over time, ensuring that the code path is exercised and that no
>> other regression of any kind creeps in. The reason why this error was
>> introduced in the first place is because the code wasn't tested when it
>> should have been. Without automation, this mechanical effort of fixing
>> errors found by static analysis is kind of ineffective, which leads me
>> to the last point.
>> 
 I actually found this via static analysis using pylint - and my
>> question is: should I create some sort of pylint unittest that tries to
>> catch this class of problem across the entire codebase? [...]
>> 
>> I value what you're doing; however, I would see even more value if we
>> prevented these types of errors from occurring in the first place via
>> automation. You run pylint today, but what about tomorrow, or a week from
>> now? Are you going to be filing pylint fixes forever? We might be better
>> off automating the check and catching these types of errors before they
>> land in the tree. This means that the work you are doing is two-pronged:
>> a) automate the detection of some failures by hooking this into tox.ini
>> via HACKING/pep8 or an equivalent mechanism, and b) file all the fixes
>> required for these validation tests to pass; c) everyone is happy, or at
>> least they should be.
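[As a rough illustration of (a) — the environment name and invocation below are assumptions for the sake of example, not existing Neutron tooling — a tox environment running pylint in errors-only mode might look like this:]

```ini
# tox.ini -- hypothetical gate hook that fails only on genuine pylint
# errors (--errors-only / -E), avoiding style-level noise
[testenv:pylint]
deps = pylint
commands = pylint --errors-only neutron
```

Running such an environment on every proposed change would catch this class of error before it lands in the tree, rather than after the fact.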
>> 
>> I'd welcome exploring a better strategy to ensure a better quality of the
>> code base; without some degree of automation, nothing will stop these
>> conversations from happening again.
>> 
>> Cheers,
>> 
>> Armando
>> 
>> [1] https://review.openstack.org/#/c/113777/
>> 
>> 
>> On 13 August 2014 03:02, Ihar Hrachyshka  wrote:
>> 
>>> 
>>> On 13/08/14 09:28, Angus Lees wrote:
 I'm doing various small cleanup changes as I explore the neutron
 codebase. Some of these cleanups are to fix actual bugs discovered
 in the code.  Almost all of them are tiny and "obviously correct".
 
 A recurring reviewer comment is that the change should have had an
 accompanying bug report and that they would rather that change was
 not submitted without one (or at least, they've -1'ed my change).
 
 I often didn't discover these issues by encountering an actual
 production issue so I'm unsure what to include in the bug report
 other than basically a copy of the change description.  I also
 haven't worked out the pattern yet of which changes should have a
 bug and which don't need one.
 
 There's a section describing blueprints in NeutronDevelopment but
 nothing on bugs.  It would be great if someone who understands the
 nuances here could add some words on when

Re: [openstack-dev] Which program for Rally

2014-08-14 Thread Matthew Treinish
On Wed, Aug 13, 2014 at 03:48:59PM -0600, Duncan Thomas wrote:
> On 13 August 2014 13:57, Matthew Treinish  wrote:
> > On Tue, Aug 12, 2014 at 01:45:17AM +0400, Boris Pavlovic wrote:
> >> Keystone, Glance, Cinder, Neutron and Heat are running rally performance
> >> jobs, that can be used for performance testing, benchmarking, regression
> >> testing (already now). These jobs supports in-tree plugins for all
> >> components (scenarios, load generators, benchmark context) and they can use
> >> Rally fully without interaction with Rally team at all. More about these
> >> jobs:
> >> https://docs.google.com/a/mirantis.com/document/d/1s93IBuyx24dM3SmPcboBp7N47RQedT8u4AJPgOHp9-A/
> >> So I really don't see anything like this in tempest (even in observed
> >> future)
> 
> > So this is actually the communication problem I mentioned before. Singling 
> > out
> > individual projects and getting them to add a rally job is not "cross 
> > project"
> > communication. (this is part of what I meant by "push using Rally") There 
> > was no
> > larger discussion on the ML or a topic in the project meeting about adding 
> > these
> > jobs. There was no discussion about the value vs risk of adding new jobs to 
> > the
> > gate. Also, this is why less than half of the integrated projects have these
> > jobs. Having asymmetry like this between gating workloads on projects helps 
> > no
> > one.
> 
> So the advantage of the approach, rather than having a massive
> cross-project discussion, is that interested projects (I've been very
> interested from a cinder core PoV) act as a test bed for other
> projects. 'Cross project' discussions don't come to other teams; they
> rely on people to find them, whereas Boris came to us, said I've got
> this thing you might like, try it out, tell me what you want. He took
> feedback, iterated fast and investigated bugs. It has been a genuine
> pleasure to work with him, and I feel we made progress faster than we
> would have done if it was trying to please everybody.

I'm not arguing whether Boris was great to work with or not. Or whether there
isn't value in talking directly to the dev team when setting up a new job. That
is definitely the fastest path to getting a new job up and running. But, for
something like adding a new class of dsvm job which runs on every patch, that
affects everyone, not just the project where the jobs are being added. A larger
discussion is really necessary to weigh whether such a job should be added. It
really only needs to happen once, just before the first one is added on an
integrated project.

> 
> > That being said the reason I think osprofiler has been more accepted and 
> > it's
> > adoption into oslo is not nearly as contentious is because it's an 
> > independent
> > library that has value outside of itself. You don't need to pull in a 
> > monolithic
> > stack to use it. Which is a design point more conducive with the rest of
> > OpenStack.
> 
> Sorry, are you suggesting tempest isn't a giant monolithic thing?
> Because I was able to comprehend the rally code very quickly; that
> isn't even slightly true of tempest. Having one simple tool that does
> one thing well is exactly what rally has tried to do - tempest seems
> to want to be five different things at once (CI, installation tests,
> trademark, performance, stress testing, ...)

This is actually a common misconception about the purpose and role of Tempest.
Tempest is strictly concerned with being the integration test suite for
OpenStack, which just includes the actual tests and some methods of running
them. This is attempted in a manner independent of the environment in which
tempest is run or run against (for example, devstack vs. a public cloud). Yes,
tempest is a large project with a lot of tests, which just adds to its
complexity, but its scope is quite targeted. It's just that it grows at the
same rate the scope of OpenStack grows, because tempest has coverage for all
the projects.

The methods of running the tests do include the stress test framework, but
that is mostly just a way of leveraging the large quantity of tests we
currently have in-tree to generate load. [1] (Yeah, we need to write better
user docs around this and a lot of other things.) It just lets you define
which tests to use and how to loop and distribute them over workers. [2]

The trademark, CI, upgrade testing, and installation testing are just examples
of applications where tempest is being used. (some of which are the domain of
other QA or Infra program projects, some are not) If you look in the tempest
tree you'll see very little specifically about any of those applications.
They're all mostly accomplished by building tooling around tempest. For example:
refstack->trademark, devstack-gate->ci, grenade->upgrade, etc. Tempest is just a
building block that can be used to make all of those things. As all of these
different use cases are basically tempest's primary consumer we do have to
take them into acco

Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?

2014-08-14 Thread Doug Hellmann

On Aug 14, 2014, at 12:38 PM, Pendergrass, Eric  wrote:

> Sure, Doug.  We want the ability to selectively apply policies to certain
> Ceilometer API methods based on user/tenant roles.
> 
> For example, we want to restrict the ability to execute Alarm deletes to
> admins and user/tenants who have a special role, say "domainadmin".
> 
> The policy file might look like this:
> {
>"context_is_admin":  [["role:admin"]],
>"admin_and_matching_project_domain_id":  [["role:domainadmin"]],
>"admin_or_cloud_admin": [["rule:context_is_admin"],
> ["rule:admin_and_matching_project_domain_id"]],
>"telemetry:delete_alarms":  [["rule:admin_or_cloud_admin"]]
> }
> 
> The current acl.py and _query_to_kwargs access control setup either sets
> project_id scope to None (do everything) or to the project_id in the request
> header 'X-Project-Id'.  This allows for admin or project scope, but nothing
> in between.
> 
> We tried hooks.  Unfortunately we can't seem to turn the API controllers
> into HookControllers just by adding HookController to the Controller class
> definition.  It causes infinite recursion on API startup.  For example,
> this doesn't work because ceilometer-api will not start with it:
>    class MetersController(rest.RestController, HookController):
> 
> If there was a way to use hooks with the v2 API controllers, that might work
> really well.

OK, that sounds like it might be a bug in pecan or the way the classes were set 
up. Can you post sample code somewhere?

> 
> So we are left using the @secure decorator and deriving the method name from
> the request environ PATH_INFO and REQUEST_METHOD values.  This is how we
> determine the wrapped method within the class (REQUEST_METHOD + PATH_INFO =
> "telemetry:delete_alarms" with some munging).  We need the method name in
> order to selectively apply access control to certain methods.
> 
> Deriving the method this way isn't ideal, but it's the only thing we've
> gotten working between hooks, @secure, and regular decorators.

Why do you need to look up the method name, though? Each method is decorated 
separately, and the decorator takes as argument the name of a method to invoke 
to do the security check. Could you write separate check methods for whatever 
rules you have (“must be admin” or “must be member of tenant” or whatever) and 
then apply them to the appropriate controller methods?
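For illustration, that suggestion could look roughly like the following plain-Python sketch. Pecan is deliberately left out so the snippet stands alone; the `require` decorator, controller, and role names are all invented for the example — with pecan you would pass the same check callables to `@secure(...)` instead of using a hand-rolled decorator:

```python
# Plain-Python sketch of per-rule check callables: one small check per
# policy rule, applied explicitly to each controller method, so the check
# never needs to discover which method invoked it.
import functools

def require(check):
    """Apply a single access-check callable to a controller method."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, roles, *args, **kwargs):
            if not check(roles):
                raise PermissionError('access denied')
            return func(self, roles, *args, **kwargs)
        return wrapper
    return decorator

def is_admin(roles):
    return 'admin' in roles

def admin_or_domain_admin(roles):
    # mirrors the "admin_or_cloud_admin" rule in the policy file above
    return 'admin' in roles or 'domainadmin' in roles

class AlarmsController(object):
    @require(admin_or_domain_admin)
    def delete(self, roles, alarm_id):
        return 'deleted %s' % alarm_id

ctl = AlarmsController()
print(ctl.delete(['domainadmin'], 'a1'))  # deleted a1
```

Because each method carries exactly one check, no PATH_INFO/REQUEST_METHOD munging is needed to work out which rule applies.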

Also, as Ryan pointed out earlier in the thread, the security rules are applied 
while the controller method is being resolved (during the process of “routing” 
the request). If you have nested controllers, applying security rules higher up 
the object tree (toward the root controller) may impose RBAC requirements on 
controllers further down. Maybe that’s what you want, but it’s something to be 
aware of in case it’s not.

Doug

> 
> I submitted a WIP BP here: https://review.openstack.org/#/c/112137/3.  It is
> slightly out of date but should give you a better idea of our goals.
> 
> Thanks
> 
>> Eric,
>> 
>> If you can give us some more information about your end goal, independent
> of the implementation, maybe we can propose an alternate technique to
> achieve the same thing.
>> 
>> Doug
>> 
>> On Aug 12, 2014, at 6:21 PM, Ryan Petrello 
> wrote:
>> 
>>> Yep, you're right, this doesn't seem to work.  The issue is that
>>> security is enforced at routing time (while the controller is still
>>> actually being discovered).  In order to do this sort of thing with
>>> the `check_permissions`, we'd probably need to add a feature to pecan.
>>> 
>>> On 08/12/14 06:38 PM, Pendergrass, Eric wrote:
 Sure, here's the decorated method from v2.py:
 
     class MetersController(rest.RestController):
         """Works on meters."""
 
         @pecan.expose()
         def _lookup(self, meter_name, *remainder):
             return MeterController(meter_name), remainder
 
         @wsme_pecan.wsexpose([Meter], [Query])
         @secure(RBACController.check_permissions)
         def get_all(self, q=None):
             ...
 
 and here's the decorator called by the secure tag:
 
     class RBACController(object):
         global _ENFORCER
         if not _ENFORCER:
             _ENFORCER = policy.Enforcer()
 
         @classmethod
         def check_permissions(cls):
             # do some stuff
 
 In check_permissions I'd like to know the class and method with the
 @secure tag that caused check_permissions to be invoked. In this case,
 that would be MetersController.get_all.
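
One possible workaround (a sketch of mine, not code from this thread): bind the class/method name into the check at decoration time, so `check_permissions` receives it directly instead of reconstructing it from REQUEST_METHOD + PATH_INFO. The `named_check` helper below is hypothetical, and the real check would consult `policy.Enforcer()` rather than just returning the target:

```python
import functools


class RBACController:
    @classmethod
    def check_permissions(cls, target):
        # 'target' arrives as e.g. 'MetersController.get_all'; real code
        # would look it up with policy.Enforcer() and return True/False.
        return target


def named_check(target):
    """Return a zero-argument callable suitable for @secure(...),
    with the guarded method's name already bound in."""
    return functools.partial(RBACController.check_permissions, target)


# Usage on the controller would then be, per method:
#     @secure(named_check('MetersController.get_all'))
check = named_check('MetersController.get_all')
```

The cost is repeating the name at each decoration site, but the check no longer has to munge the WSGI environ to discover what it is protecting.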
 
 Thanks
 
 
> Can you share some code?  What do you mean by, "is there a way for the
> decorator code to know it was called by MetersController.get_all"
> 
> On 08/12/14 04:46 PM, Pendergrass, Eric wrote:
>> Thanks Ryan, but for some reason the controller attribute is None:
>> 
>> (Pdb) from pecan.core import state
>> (Pdb) state.__dict__
>> {'hooks': [<...>, <...>, <...>, <...>], 'app':

Re: [openstack-dev] [Cinder] 3'rd party CI systems: Not Whitelisted Volume type with name * could not be found.

2014-08-14 Thread Asselin, Ramy
Both configurations have that set as you described. [1][2] 
Who actually creates that volume type? Is that supposed to be added manually to 
local.sh, or is this a bug in devstack?

[1] 
http://publiclogs.emc.com/vnx_ostack/EMC_VNX_FC/212/logs/etc/cinder/cinder.conf.txt.gz
[DEFAULT]
default_volume_type = cat_1

[2] 
http://15.126.198.151/60/111460/18/check/dsvm-tempest-hp-lefthand/3ee1598/logs/etc/cinder/cinder.conf.txt.gz
[DEFAULT]
default_volume_type = dot
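
If the type is indeed meant to be created by hand, a post-stack step along these lines might work (hypothetical; assumes admin credentials are sourced and the python-cinderclient CLI is available, with the type name matching the config above):

```shell
# Create the volume type that cinder.conf's default_volume_type points at.
# Run once after devstack finishes, with admin credentials loaded.
cinder type-create cat_1

# cinder.conf then stays as already configured:
# [DEFAULT]
# default_volume_type = cat_1
```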

-Original Message-
From: Martin, Kurt Frederick (ESSN Storage MSDU) 
Sent: Thursday, August 14, 2014 2:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] 3'rd party CI systems: Not Whitelisted 
Volume type with name * could not be found.

cinder.conf needs a default_volume_type entry set under the [DEFAULT] group, 
pointing at a volume type that actually exists, for example 
default_volume_type=bronze. This allows a volume to be created when no volume 
type is selected (instead of the default 'None' type). The feature has been 
available in Cinder for some time but was only recently enabled in devstack.
~Kurt

-Original Message-
From: Asselin, Ramy 
Sent: Thursday, August 14, 2014 10:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Cinder] 3'rd party CI systems: Not Whitelisted Volume 
type with name * could not be found.

Hi,

Does anyone know how to configure cinder ci tests to not have these errors?

12:32:11 *** Not Whitelisted *** 2014-08-14 12:23:56.179 18021 ERROR 
cinder.volume.volume_types [req-c9ec92ab-132b-4167-a467-15bd213659e8 
bd1f54a867ce47acb4097cd94149efa0 d15dcf4cb3c7495389799ec849e9036d - - -] 
Default volume type is not found, please check default_volume_type config: 
Volume type with name dot could not be found.

17:43:28 *** Not Whitelisted *** 2014-08-11 17:31:20.343 2097 ERROR 
cinder.volume.volume_types [req-01e539ad-5357-4a0f-8d58-464b970114f9 
f72cd499be584d9d9585bc26ca71c603 d845ad2d7e6a47dfb84bdf0754f60384 - - -] 
Default volume type is not found, please check default_volume_type config: 
Volume type with name cat_1 could not be found.

http://15.126.198.151/60/111460/18/check/dsvm-tempest-hp-lefthand/3ee1598/console.html
http://publiclogs.emc.com/vnx_ostack/EMC_VNX_FC/212/console.html

Thanks,
Ramy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


