Re: [openstack-dev] Supporting Javascript clients calling OpenStack APIs

2014-10-16 Thread Martin Geisler
Diego Parrilla Santamaría  writes:

> Maybe it's a bit too late, but in mid-2012 the FIWARE team developed a
> Horizon clone 100% in JavaScript. https://github.com/ging/horizon-js

Nice! I poked around a little in the repo and found no mention of Swift?

> [...] Portal has evolved and now it's more like a 'chroot' for safe
> execution of javascript apps. And we use it for our own OpenStack
> extensions (accounting, rating, billing, monitoring, advanced network
> management...).

Sounds cool!

> So if you want to discuss how to proceed in the future with a 100%
> JavaScript user interface, maybe we can talk about it in Paris.

I'm unfortunately not coming to Paris, but we can discuss this over
email. The ZeroVM mailing list would be fine if it's too off-topic for
this list.

In the ZeroVM project, we're mostly (only really) interested in an
interface for Swift since we can embed the ZeroVM sandbox in Swift.

This gives you a Swift cluster where you can execute map-reduce jobs
(from untrusted clients, thanks to the ZeroVM sandbox) directly on the
data. We like to say that you can send the code to the data instead of
doing it the other way around.

So that is why I'm focusing on Swift now; support for starting these
ZeroVM jobs from the browser will come later.

-- 
Martin Geisler

http://google.com/+MartinGeisler


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Jastrzebski, Michal


> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: Thursday, October 16, 2014 5:04 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] Automatic evacuate
> 
> On 10/15/2014 05:07 PM, Florian Haas wrote:
> > On Wed, Oct 15, 2014 at 10:03 PM, Russell Bryant 
> wrote:
> >>> Am I making sense?
> >>
> >> Yep, the downside is just that you need to provide a new set of
> >> flavors for "ha" vs "non-ha".  A benefit though is that it's a way to
> >> support it today without *any* changes to OpenStack.
> >
> > Users are already very used to defining new flavors. Nova itself
> > wouldn't even need to define those; if the vendor's deployment tools
> > defined them it would be just fine.
> 
> Yes, I know Nova wouldn't need to define it.  I was saying I didn't like that 
> it
> was required at all.
> 
> >> This seems like the kind of thing we should also figure out how to
> >> offer on a per-guest basis without needing a new set of flavors.
> >> That's why I also listed the server tagging functionality as another 
> >> possible
> solution.
> >
> > This still doesn't do away with the requirement to reliably detect
> > node failure, and to fence misbehaving nodes. Detecting that a node
> > has failed, and fencing it if unsure, is a prerequisite for any
> > recovery action. So you need Corosync/Pacemaker anyway.
> 
> Obviously, yes.  My post covered all of that directly ... the tagging bit was 
> just
> additional input into the recovery operation.
> 
> > Note also that when using an approach where you have physically
> > clustered nodes, but you are also running non-HA VMs on those, then
> > the user must understand that the following applies:
> >
> > (1) If your guest is marked HA, then it will automatically recover on
> > node failure, but
> > (2) if your guest is *not* marked HA, then it will go down with the
> > node not only if it fails, but also if it is fenced.
> >
> > So a non-HA guest on an HA node group actually has a slightly
> > *greater* chance of going down than a non-HA guest on a non-HA host.
> > (And let's not get into "don't use fencing then"; we all know why
> > that's a bad idea.)
> >
> > Which is why I think it makes sense to just distinguish between
> > HA-capable and non-HA-capable hosts, and have the user decide whether
> > they want HA or non-HA guests simply by assigning them to the
> > appropriate host aggregates.
> 
> Very good point.  I hadn't considered that.
> 
> --
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

In my opinion, flavor defining is a bit hacky. Sure, it will provide the
functionality fairly quickly, but it will also strip us of the flexibility
Heat would give. Healing can be done in several ways: simple destroy ->
create (the basic convergence workflow so far), evacuate with or without
shared storage, even rebuild the VM, and probably a few more once we put
more thought into it.

I'd rather use Nova for the low-level tasks and maybe low-level monitoring
(IMHO Nova should do that using servicegroup). But I'd use something more
configurable, like Heat, for the actual task triggering. That would give us
a framework rather than a mechanism. Later we might want to apply HA to
network or volume resources as well; the mechanism would already be in
place, and only the monitoring hook and the healing action would need to be
implemented.
We can use scheduler hints to place resources on HA-compatible hosts
(whichever healing action we'd like to use); this will be a bit more
complicated, but it will also give us more flexibility, as sketched below.
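
As a purely illustrative sketch (the hint key and the filter honoring it
are hypothetical, not existing Nova features), booting with such a hint
could look like this:

  nova boot --image cirros --flavor m1.small \
      --hint ha_policy=evacuate my-ha-instance

where a matching scheduler filter would restrict placement to HA-compatible
hosts and record the desired healing action.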

I agree that we all should meet in Paris and discuss this so we can join
forces. This is one of the bigger gaps to be filled, IMHO.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Sylvain Bauza


On 16/10/2014 05:04, Russell Bryant wrote:

On 10/15/2014 05:07 PM, Florian Haas wrote:

On Wed, Oct 15, 2014 at 10:03 PM, Russell Bryant  wrote:

Am I making sense?

Yep, the downside is just that you need to provide a new set of flavors
for "ha" vs "non-ha".  A benefit though is that it's a way to support it
today without *any* changes to OpenStack.

Users are already very used to defining new flavors. Nova itself
wouldn't even need to define those; if the vendor's deployment tools
defined them it would be just fine.

Yes, I know Nova wouldn't need to define it.  I was saying I didn't like
that it was required at all.


This seems like the kind of thing we should also figure out how to offer
on a per-guest basis without needing a new set of flavors.  That's why I
also listed the server tagging functionality as another possible solution.

This still doesn't do away with the requirement to reliably detect
node failure, and to fence misbehaving nodes. Detecting that a node
has failed, and fencing it if unsure, is a prerequisite for any
recovery action. So you need Corosync/Pacemaker anyway.

Obviously, yes.  My post covered all of that directly ... the tagging
bit was just additional input into the recovery operation.


Note also that when using an approach where you have physically
clustered nodes, but you are also running non-HA VMs on those, then
the user must understand that the following applies:

(1) If your guest is marked HA, then it will automatically recover on
node failure, but
(2) if your guest is *not* marked HA, then it will go down with the
node not only if it fails, but also if it is fenced.

So a non-HA guest on an HA node group actually has a slightly
*greater* chance of going down than a non-HA guest on a non-HA host.
(And let's not get into "don't use fencing then"; we all know why
that's a bad idea.)

Which is why I think it makes sense to just distinguish between
HA-capable and non-HA-capable hosts, and have the user decide whether
they want HA or non-HA guests simply by assigning them to the
appropriate host aggregates.

Very good point.  I hadn't considered that.



There are various possibilities for handling that use case, and tagging
VMs on a case-by-case basis sounds really good to me.
What is missing IMHO is a smart filter able to manage how VMs asking for HA
are spread across compute nodes. We can actually do this today thanks to
Instance Groups and affinity filters, but in a certain sense it would be
cool if a user could just boot an instance and ask for a policy (given by
flavor metadata or whatever) without needing any knowledge of the
underlying infrastructure.
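
For reference, the host-aggregate variant of this idea can already be
expressed with existing pieces; a rough sketch, where the aggregate, host
and flavor names are examples only:

  nova aggregate-create ha-hosts
  nova aggregate-set-metadata ha-hosts ha=true
  nova aggregate-add-host ha-hosts compute-01
  nova flavor-key m1.small.ha set aggregate_instance_extra_specs:ha=true

With the AggregateInstanceExtraSpecsFilter enabled, instances booted with
that flavor would only land on hosts in the ha-hosts aggregate.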


-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Florian Haas
 (5) Let monitoring and orchestration services deal with these use
 cases and
 have Nova simply provide the primitive API calls that it already does
 (i.e.
 host evacuate).
>>>
>>> That would arguably lead to an incredible amount of wheel reinvention
>>> for node failure detection, service failure detection, etc. etc.
>>
>> How so? (5) would use existing wheels for monitoring and orchestration
>> instead of writing all new code paths inside Nova to do the same thing.
>
> Right, there may be some confusion here ... I thought you were both
> agreeing that the use of an external toolset was a good approach for the
> problem, but Florian's last message makes that not so clear ...

While one of us (Jay or me) speaking for the other and saying we agree
is a distributed consensus problem that dwarfs the complexity of
Paxos, *I* for my part do think that an "external" toolset (i.e. one
that lives outside the Nova codebase) is the better approach versus
duplicating the functionality of said toolset in Nova.

I just believe that the toolset that should be used here is
Corosync/Pacemaker and not Ceilometer/Heat. And I believe the former
approach leads to *much* fewer necessary code changes *in* Nova than
the latter.

Cheers,
Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Florian Haas
On Thu, Oct 16, 2014 at 5:04 AM, Russell Bryant  wrote:
> On 10/15/2014 05:07 PM, Florian Haas wrote:
>> On Wed, Oct 15, 2014 at 10:03 PM, Russell Bryant  wrote:
 Am I making sense?
>>>
>>> Yep, the downside is just that you need to provide a new set of flavors
>>> for "ha" vs "non-ha".  A benefit though is that it's a way to support it
>>> today without *any* changes to OpenStack.
>>
>> Users are already very used to defining new flavors. Nova itself
>> wouldn't even need to define those; if the vendor's deployment tools
>> defined them it would be just fine.
>
> Yes, I know Nova wouldn't need to define it.  I was saying I didn't like
> that it was required at all.

Fair enough, but do consider that, for example, Trove already
routinely defines flavors of its own.

So I don't think that's quite as painful (to users) as you think.

>>> This seems like the kind of thing we should also figure out how to offer
>>> on a per-guest basis without needing a new set of flavors.  That's why I
>>> also listed the server tagging functionality as another possible solution.
>>
>> This still doesn't do away with the requirement to reliably detect
>> node failure, and to fence misbehaving nodes. Detecting that a node
>> has failed, and fencing it if unsure, is a prerequisite for any
>> recovery action. So you need Corosync/Pacemaker anyway.
>
> Obviously, yes.  My post covered all of that directly ... the tagging
> bit was just additional input into the recovery operation.

This is essentially why I am saying using the Pacemaker stack is the
smarter approach than hacking something into Ceilometer and Heat. You
already need Pacemaker for service availability (and all major vendors
have adopted it for that purpose), so a highly available cloud that
does *not* use Pacemaker at all won't be a vendor supported option for
some time. So people will already be running Pacemaker — then why not
use it for what it's good at?

(Yes, I am aware of things like etcd and fleet. I think that's headed
in the right direction, but hasn't nearly achieved the degree of
maturity that Pacemaker has. All of HA is about performing correctly
in weird corner cases, and you're only able to do that if you've run
into them and got your nose bloody.)

And just so my position is clear, what Pacemaker is good at is node
and service monitoring, recovery, and fencing. It's *not* particularly
good at usability. Which is why it makes perfect sense to not have
your Pacemaker configurations managed directly by a human, but have an
automated deployment facility do it. Which the vendors are already
doing.

>> Note also that when using an approach where you have physically
>> clustered nodes, but you are also running non-HA VMs on those, then
>> the user must understand that the following applies:
>>
>> (1) If your guest is marked HA, then it will automatically recover on
>> node failure, but
>> (2) if your guest is *not* marked HA, then it will go down with the
>> node not only if it fails, but also if it is fenced.
>>
>> So a non-HA guest on an HA node group actually has a slightly
>> *greater* chance of going down than a non-HA guest on a non-HA host.
>> (And let's not get into "don't use fencing then"; we all know why
>> that's a bad idea.)
>>
>> Which is why I think it makes sense to just distinguish between
>> HA-capable and non-HA-capable hosts, and have the user decide whether
>> they want HA or non-HA guests simply by assigning them to the
>> appropriate host aggregates.
>
> Very good point.  I hadn't considered that.

Yay, I've contributed something useful to this discussion then. :)

Cheers,
Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Florian Haas
On Thu, Oct 16, 2014 at 9:25 AM, Jastrzebski, Michal
 wrote:
> In my opinion, flavor defining is a bit hacky. Sure, it will provide the
> functionality fairly quickly, but it will also strip us of the flexibility
> Heat would give. Healing can be done in several ways: simple destroy ->
> create (the basic convergence workflow so far), evacuate with or without
> shared storage, even rebuild the VM, and probably a few more once we put
> more thought into it.

But then you'd also need to monitor the availability of *individual*
guests, and down the rabbit hole you go.

So suppose you're monitoring a guest with a simple ping. And it stops
responding to that ping.

(1) Has it died?
(2) Is it just too busy to respond to the ping?
(3) Has its guest network stack died?
(4) Has its host vif died?
(5) Has the L2 agent on the compute host died?
(6) Has its host network stack died?
(7) Has the compute host died?

Suppose further it's using shared storage (running off an RBD volume
or using an iSCSI volume, or whatever). Now you have almost as many
recovery options as possible causes for the failure, and some of those
recovery options will potentially destroy your guest's data.

No matter how you twist and turn the problem, you need strongly
consistent distributed VM state plus fencing. In other words, you need
a full blown HA stack.

> I'd rather use Nova for the low-level tasks and maybe low-level monitoring
> (IMHO Nova should do that using servicegroup). But I'd use something more
> configurable, like Heat, for the actual task triggering. That would give us
> a framework rather than a mechanism. Later we might want to apply HA to
> network or volume resources as well; the mechanism would already be in
> place, and only the monitoring hook and the healing action would need to be
> implemented.
>
> We can use scheduler hints to place resources on HA-compatible hosts
> (whichever healing action we'd like to use); this will be a bit more
> complicated, but it will also give us more flexibility.

I apologize in advance for my bluntness, but this all sounds to me
like you're vastly underrating the problem of reliable guest state
detection and recovery. :)

> I agree that we all should meet in Paris and discuss this so we can join
> forces. This is one of the bigger gaps to be filled, IMHO.

Pretty much every user I've worked with in the last 2 years agrees.
Granted, my view may be skewed as HA is typically what customers
approach us for in the first place, but yes, this definitely needs a
globally understood and supported solution.

Cheers,
Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] My notes and experiences about OSv on OpenStack

2014-10-16 Thread Zhipeng Huang
Hey Glauber that would be great! See you in Paris then :)

On Thu, Oct 16, 2014 at 4:32 PM, Glauber Costa  wrote:

> Hello guys
>
> Just to let you know, I won't be in the Summit because I am too busy due
> to the fact I am relocating to a foreign country.
>
> However, that country happens to be France, and that city happens to be
> Paris. I am arriving Nov 4th around 3 pm, and would be happy to meet you
> (and other people interested in OSv) either that night, or even better, the
> following day or night.
>
> Cheers
>
> On Thu, Oct 16, 2014 at 10:56 AM, Zhipeng Huang 
> wrote:
>
>> Great, let's have some f2f discussion time in Paris. Have you added what
>> you mentioned to the Nova Kilo summit topics?
>>
>> On Thu, Oct 16, 2014 at 2:54 PM, Gareth  wrote:
>>
>>> Pushing it into OpenStack is not a hard job, because for OpenStack
>>> developers an OSv image is a normal qcow2 image. What you need to do is
>>> enable some QEMU flags in the Nova libvirt driver.
>>>
>>> yep, I will be there :)
>>>
>>> On Thu, Oct 16, 2014 at 2:10 PM, Zhipeng Huang 
>>> wrote:
>>>
 Ahhh, that's bad. Do you think there is a need for pushing OSv
 integration in OpenStack in Kilo cycle? Would you come to Paris Summit?

 On Thu, Oct 16, 2014 at 1:24 PM, Gareth 
 wrote:

> yes :)
>
> I planned that if that topic were picked, I could make it a formal
> project at Intel. But that failed...
>
> On Thu, Oct 16, 2014 at 12:53 PM, Zhipeng Huang  > wrote:
>
>> Hi, I'm also interested in it. You submitted a talk about it to Paris
>> Summit right?
>>
>> On Thu, Oct 16, 2014 at 10:34 AM, Gareth 
>> wrote:
>>
>>>
>>> Here is an introduction to OSv for all OpenStack developers. The OSv team
>>> is focusing on the performance of a KVM-based guest OS for cloud
>>> applications. I'm interested in it because of their hard work on
>>> optimizing all the details. I have also worked on deploying OSv in an
>>> OpenStack environment. However, since my work is only a personal interest
>>> pursued in my spare time, my progress is pretty slow. So I want to share
>>> my experience and hope other engineers could join in:
>>>
>>> # OSv highlights in my mind
>>>
>>> 1. Super fast boot time means nearly zero-downtime services, an
>>> alternative to dynamic flavor changing, and faster deployment of
>>> instances on a KVM-based PaaS platform.
>>> 2. Great work on performance. Cloud engineers could borrow from their
>>> experience optimizing the guest OS.
>>> 3. Better JVM performance. There is a lot of overhead and redundancy
>>> across the host OS, guest OS and JVM; fixing that could help Java
>>> applications perform closer to bare metal.
>>>
>>> # Enabling OSv on OpenStack
>>>
>>> Actually there should not be any big problems. The steps are to build an
>>> OSv qcow2 image first and then boot it via Nova. You may face some
>>> problems because an OSv image needs several newer QEMU features, such as
>>> virtio-rng-io/vhost, and the enable-kvm flag is necessary.
>>>
>>> Fortunately, I didn't meet any problems with networking or Neutron
>>> (I had actually thought that networking in OpenStack might hold me up for
>>> a long time). OSv needs a tap device and Neutron does a good job of
>>> providing one, so I could access the OSv service without trouble.
>>>
>>> # OSv based demo
>>>
>>> The work I finished is only a memcached cluster, and the result is
>>> clear: the memory throughput of an OSv-based instance is 3 times that of
>>> a traditional virtual machine, and about 90% of host OS performance[0][1].
>>> Since their work on memcached is quite mature, consider OSv if you need
>>> to build memcached instances.
>>>
>>> Another valuable demo cluster would be Hadoop. When talking about Hadoop
>>> on OpenStack, the question asked most frequently is about performance on
>>> virtual machines. It is known that a newer QEMU version helps with disk
>>> I/O performance[2]. But how much does the JVM/guest OS overlap cost? I
>>> would love to find out, but I don't have that much time.
>>>
>>> Above all, the purpose of this thread is to raise an interesting topic
>>> on cloud performance and to encourage more and more efficient clusters
>>> based on OpenStack (in production use). I don't have much time for OSv
>>> because it is just a personal interest, but I can attest that OSv is a
>>> valuable approach and topic.
>>>
>>> [0] http://paste.openstack.org/show/121382/
>>> [1]
>>> https://github.com/cloudius-systems/osv/wiki/OSv-Case-Study:-Memcached
>>> [2]
>>> https://www.openstack.org/summit/openstack-summit-atlanta-2014/session-videos/presentation/performance-of-hadoop-on-openstack
>>>
>>> --
>>> Gareth
>>>
>>> *Cloud Computing, OpenStack, Distributed Storage, Fitness,
>

Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Thomas Herve

> >> This still doesn't do away with the requirement to reliably detect
> >> node failure, and to fence misbehaving nodes. Detecting that a node
> >> has failed, and fencing it if unsure, is a prerequisite for any
> >> recovery action. So you need Corosync/Pacemaker anyway.
> >
> > Obviously, yes.  My post covered all of that directly ... the tagging
> > bit was just additional input into the recovery operation.
> 
> This is essentially why I am saying using the Pacemaker stack is the
> smarter approach than hacking something into Ceilometer and Heat. You
> already need Pacemaker for service availability (and all major vendors
> have adopted it for that purpose), so a highly available cloud that
> does *not* use Pacemaker at all won't be a vendor supported option for
> some time. So people will already be running Pacemaker — then why not
> use it for what it's good at?

I may be missing something, but Pacemaker will only provide monitoring of your 
compute node, right? I think the advantage you would get by using something 
like Heat is having an instance agent and provide monitoring of your client 
service, instead of just knowing the status of your hypervisor. Hosts can fail, 
but there is another array of failures that you can't handle with the global 
deployment monitoring.

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Florian Haas
On Thu, Oct 16, 2014 at 11:01 AM, Thomas Herve
 wrote:
>
>> >> This still doesn't do away with the requirement to reliably detect
>> >> node failure, and to fence misbehaving nodes. Detecting that a node
>> >> has failed, and fencing it if unsure, is a prerequisite for any
>> >> recovery action. So you need Corosync/Pacemaker anyway.
>> >
>> > Obviously, yes.  My post covered all of that directly ... the tagging
>> > bit was just additional input into the recovery operation.
>>
>> This is essentially why I am saying using the Pacemaker stack is the
>> smarter approach than hacking something into Ceilometer and Heat. You
>> already need Pacemaker for service availability (and all major vendors
>> have adopted it for that purpose), so a highly available cloud that
>> does *not* use Pacemaker at all won't be a vendor supported option for
>> some time. So people will already be running Pacemaker — then why not
>> use it for what it's good at?
>
> I may be missing something, but Pacemaker will only provide monitoring of 
> your compute node, right? I think the advantage you would get by using 
> something like Heat is having an instance agent and provide monitoring of 
> your client service, instead of just knowing the status of your hypervisor. 
> Hosts can fail, but there is another array of failures that you can't handle 
> with the global deployment monitoring.

You *are* missing something, indeed. :) Pacemaker would be a perfectly
fine tool for also monitoring the status of your guests on the hosts.
So arguably, nova-compute could in fact hook in with pcsd
(https://github.com/feist/pcs/tree/master/pcs -- all in Python) down
the road to inject VM monitoring into the Pacemaker configuration.
This would, of course, need to be specific to the hypervisor so it
would be a job for the nova driver, rather than being implemented at
the nova-compute level.

But my hunch is that that sort of thing would be for the L release;
for Kilo the low-hanging fruit would be to defend against host failure
(meaning, compute node failure, unrecoverable nova-compute service
failure, etc.).

Cheers,
Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ml2] How ML2 reflects on the topology?

2014-10-16 Thread Mathieu Rohon
Hi,

If you use the VLAN type driver, the ToR switches should be configured in
trunk mode to allow the VLANs specified in the [ml2_type_vlan] section of
ml2_conf.ini.
The VLAN ID range is defined in that section. Any tenant network will use
an ID from this range, and it is totally independent of the tenant ID.
Some mechanism drivers can automatically configure the ToR switch with the
correct VLAN ID on the trunk port connected to the compute nodes.
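
As a minimal sketch (the physical network label and ID range are examples
only), the relevant part of ml2_conf.ini would look something like this:

  [ml2]
  type_drivers = vlan
  tenant_network_types = vlan
  mechanism_drivers = openvswitch

  [ml2_type_vlan]
  network_vlan_ranges = physnet1:1000:2999

A tenant network then gets a VLAN ID allocated from 1000-2999, regardless
of the tenant ID.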

When you create a port, traffic from that port will use the VLAN tag of
the network which owns the port.

Hope this helps.

On Wed, Oct 15, 2014 at 7:18 PM, Ettore zugliani
 wrote:
> Hi, I've got a few questions that have been left unanswered on Ask.Openstack
> and on the IRC channel.
>
> How may the topology be affected by the ML2 API calls? In other words,
> how would a "Create Network" call affect the actual topology? How is it
> controlled?
>
> An example: Once we receive a "Create Network" ML2 API call we don't know
> how exactly it reflects on ANY switch configuration. Supposing that we
> received a create_network with tenant_id = tid and we are using the VLAN
> TypeDriver, should we create a VLAN on the switch with vid = tid?
>
> On a create_port API call, should we add a specific port -manually- to
> this VLAN? Another thing that comes to mind: is there a default port, or
> do we get the correct port from the Neutron context?
>
> Thank you
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API recommendation

2014-10-16 Thread Salvatore Orlando
In an analysis we recently did for managing lifecycle of neutron resources,
it also emerged that a task (or operation) API is a very useful resource.
Indeed several neutron resources introduced the (in)famous PENDING_XXX
operational statuses to note the fact that an operation is in progress and
its status is changing.

This could have been easily avoided if a facility for querying active tasks
through the API was available.

From an API guideline viewpoint, I understand that
https://review.openstack.org/#/c/86938/ proposes the introduction of a
rather simple endpoint to query active tasks and filter them by resource
uuid or state, for example.
While this is hardly questionable, I wonder if it might be worth
"typifying" the task, ie: adding a resource_type attribute, and/or allowing
to retrieve active tasks as a child resource of an object, eg.: GET
/servers//tasks?state=running or if just for running tasks GET
/servers//active_tasks
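
Purely as an illustration of the shape such a typed task collection could
take (the field names and values here are hypothetical, not something the
proposed guideline defines):

  GET /servers/<server_id>/tasks?state=running

  {
      "tasks": [
          {
              "id": "7d2a9c3e-0f1b-4b6a-9d6e-2c8f5a1b4e77",
              "resource_type": "server",
              "resource_uuid": "<server_id>",
              "request_id": "req-a1b2c3d4",
              "action": "create",
              "state": "running",
              "started_at": "2014-10-16T10:00:00Z"
          }
      ]
  }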

The proposed approach for the multiple server create case also makes sense
to me. Other than "bulk" operations there are indeed cases where a single
API operation needs to perform multiple tasks. For instance, in Neutron,
creating a port implies L2 wiring, setting up DHCP info, and securing it on
the compute node by enforcing anti-spoof rules and security groups. This
means there will be 3/4 active tasks. For this reason I wonder if it might
be the case of differentiating between the concept of "operation" and
"tasks" where the former is the activity explicitly initiated by the API
consumer, and the latter are the activities which need to complete to
fulfil it. This is where we might leverage the already proposed request_id
attribute of the task data structure.

Finally, a note on persistency. How long should a completed task, successful
or not, be stored for? Do we want to store them until the resource they
operated on is deleted?
I don't think it's a great idea to store them indefinitely in the DB. Tying
their lifespan to resources is probably a decent idea, but time-based
cleanup policies might also be considered (e.g.: destroy a task record 24
hours after its completion)

Salvatore


On 16 October 2014 08:38, Christopher Yeoh  wrote:

> On Thu, Oct 16, 2014 at 7:19 AM, Kevin L. Mitchell <
> kevin.mitch...@rackspace.com> wrote:
>
>> On Wed, 2014-10-15 at 12:39 -0400, Andrew Laski wrote:
>> > On 10/15/2014 11:49 AM, Kevin L. Mitchell wrote:
>> > > Now that we have an API working group forming, I'd like to kick off
>> some
>> > > discussion over one point I'd really like to see our APIs using (and
>> > > I'll probably drop it in to the repo once that gets fully set up): the
>> > > difference between synchronous and asynchronous operations.  Using
>> nova
>> > > as an example—right now, if you kick off a long-running operation,
>> such
>> > > as a server create or a reboot, you watch the resource itself to
>> > > determine the status of the operation.  What I'd like to propose is
>> that
>> > > future APIs use a separate "operation" resource to track status
>> > > information on the particular operation.  For instance, if we were to
>> > > rebuild the nova API with this idea in mind, booting a new server
>> would
>> > > give you a server handle and an operation handle; querying the server
>> > > resource would give you summary information about the state of the
>> > > server (running, not running) and pending operations, while querying
>> the
>> > > operation would give you detailed information about the status of the
>> > > operation.  As another example, issuing a reboot would give you the
>> > > operation handle; you'd see the operation in a queue on the server
>> > > resource, but the actual state of the operation itself would be listed
>> > > on that operation.  As a side effect, this would allow us (not
>> require,
>> > > though) to queue up operations on a resource, and allow us to cancel
>> an
>> > > operation that has not yet been started.
>> > >
>> > > Thoughts?
>> >
>> > Something like https://review.openstack.org/#/c/86938/ ?
>> >
>> > I know that Jay has proposed a similar thing before as well.  I would
>> > love to get some feedback from others on this as it's something I'm
>> > going to propose for Nova in Kilo.
>>
>> Yep, something very much like that :)  But the idea behind my proposal
>> is to make that a codified API guideline, rather than just an addition
>> to Nova.
>>
>
> Perhaps the best way to make this move faster is for developers not from
> Nova who are interested to help develop the tasks API spec Andrew pointed
> to. It's been on the Nova to-do list for a few cycles now and has had
> quite a bit of discussion both at mid-cycles and summit meetings.
>
> Once we have a nova spec approved we can extract the project common parts
> out into the API guidelines.
>
> I think we really want microversions up so we can make backwards
> incompatible API
> changes when we implement the API side of tasks, but that is something
> members
> of the API WG are ho

Re: [openstack-dev] [api] API recommendation

2014-10-16 Thread Alex Xu
2014-10-16 17:57 GMT+08:00 Salvatore Orlando :

> In an analysis we recently did for managing lifecycle of neutron
> resources, it also emerged that a task (or operation) API is a very useful
> resource.
> Indeed several neutron resources introduced the (in)famous PENDING_XXX
> operational statuses to note the fact that an operation is in progress and
> its status is changing.
>
> This could have been easily avoided if a facility for querying active
> tasks through the API was available.
>
> From an API guideline viewpoint, I understand that
> https://review.openstack.org/#/c/86938/ proposes the introduction of a
> rather simple endpoint to query active tasks and filter them by resource
> uuid or state, for example.
> While this is hardly questionable, I wonder if it might be worth
> "typifying" the task, ie: adding a resource_type attribute, and/or allowing
> to retrieve active tasks as a child resource of an object, eg.: GET
> /servers//tasks?state=running or if just for running tasks GET
> /servers//active_tasks
>
> The proposed approach for the multiple server create case also makes sense
> to me. Other than "bulk" operations there are indeed cases where a single
> API operation needs to perform multiple tasks. For instance, in Neutron,
> creating a port implies L2 wiring, setting up DHCP info, and securing it on
> the compute node by enforcing anti-spoof rules and security groups. This
> means there will be 3/4 active tasks. For this reason I wonder if it might
> be the case of differentiating between the concept of "operation" and
> "tasks" where the former is the activity explicitly initiated by the API
> consumer, and the latter are the activities which need to complete to
> fulfil it. This is where we might leverage the already proposed request_id
> attribute of the task data structure.
>

This sounds like a sub-task. The proposal from Andrew includes the sub-task
concept; it just isn't implemented in the first step.


>
> Finally, a note on persistency. How long should a completed task, successful
> or not, be stored for? Do we want to store them until the resource they
> operated on is deleted?
> I don't think it's a great idea to store them indefinitely in the DB.
> Tying their lifespan to resources is probably a decent idea, but time-based
> cleanup policies might also be considered (e.g.: destroy a task record 24
> hours after its completion)
>
>
This is a good point! Tasks can be removed after they finish, except for
failed ones. And maybe we can implement a plugin mechanism to support
different persistence backends?



> Salvatore
>
>
> On 16 October 2014 08:38, Christopher Yeoh  wrote:
>
>> On Thu, Oct 16, 2014 at 7:19 AM, Kevin L. Mitchell <
>> kevin.mitch...@rackspace.com> wrote:
>>
>>> On Wed, 2014-10-15 at 12:39 -0400, Andrew Laski wrote:
>>> > On 10/15/2014 11:49 AM, Kevin L. Mitchell wrote:
>>> > > Now that we have an API working group forming, I'd like to kick off
>>> some
>>> > > discussion over one point I'd really like to see our APIs using (and
>>> > > I'll probably drop it in to the repo once that gets fully set up):
>>> the
>>> > > difference between synchronous and asynchronous operations.  Using
>>> nova
>>> > > as an example—right now, if you kick off a long-running operation,
>>> such
>>> > > as a server create or a reboot, you watch the resource itself to
>>> > > determine the status of the operation.  What I'd like to propose is
>>> that
>>> > > future APIs use a separate "operation" resource to track status
>>> > > information on the particular operation.  For instance, if we were to
>>> > > rebuild the nova API with this idea in mind, booting a new server
>>> would
>>> > > give you a server handle and an operation handle; querying the server
>>> > > resource would give you summary information about the state of the
>>> > > server (running, not running) and pending operations, while querying
>>> the
>>> > > operation would give you detailed information about the status of the
>>> > > operation.  As another example, issuing a reboot would give you the
>>> > > operation handle; you'd see the operation in a queue on the server
>>> > > resource, but the actual state of the operation itself would be
>>> listed
>>> > > on that operation.  As a side effect, this would allow us (not
>>> require,
>>> > > though) to queue up operations on a resource, and allow us to cancel
>>> an
>>> > > operation that has not yet been started.
>>> > >
>>> > > Thoughts?
>>> >
>>> > Something like https://review.openstack.org/#/c/86938/ ?
>>> >
>>> > I know that Jay has proposed a similar thing before as well.  I would
>>> > love to get some feedback from others on this as it's something I'm
>>> > going to propose for Nova in Kilo.
>>>
>>> Yep, something very much like that :)  But the idea behind my proposal
>>> is to make that a codified API guideline, rather than just an addition
>>> to Nova.
>>>
>>
>> Perhaps the best way to make this move faster is for developers not from
>> Nova
>> who are interested to help develop the t

Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Russell Bryant
On 10/16/2014 04:29 AM, Florian Haas wrote:
> (5) Let monitoring and orchestration services deal with these use
> cases and
> have Nova simply provide the primitive API calls that it already does
> (i.e.
> host evacuate).

 That would arguably lead to an incredible amount of wheel reinvention
 for node failure detection, service failure detection, etc. etc.
>>>
>>> How so? (5) would use existing wheels for monitoring and orchestration
>>> instead of writing all new code paths inside Nova to do the same thing.
>>
>> Right, there may be some confusion here ... I thought you were both
>> agreeing that the use of an external toolset was a good approach for the
>> problem, but Florian's last message makes that not so clear ...
> 
> While one of us (Jay or me) speaking for the other and saying we agree
> is a distributed consensus problem that dwarfs the complexity of
> Paxos, *I* for my part do think that an "external" toolset (i.e. one
> that lives outside the Nova codebase) is the better approach versus
> duplicating the functionality of said toolset in Nova.
> 
> I just believe that the toolset that should be used here is
> Corosync/Pacemaker and not Ceilometer/Heat. And I believe the former
> approach leads to *much* fewer necessary code changes *in* Nova than
> the latter.

Have you tried pacemaker_remote yet?  It seems like a better choice for
this particular case, as opposed to using corosync, due to the potential
number of compute nodes.
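
For reference, with pacemaker_remote a compute node can be attached to an
existing cluster as a remote node rather than a full corosync member; a
minimal sketch, where the node name is just an example:

  pcs resource create compute-01 ocf:pacemaker:remote \
      server=compute-01.example.com reconnect_interval=60

The cluster can then monitor (and fence) the compute node without running
into corosync membership scaling limits.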

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Russell Bryant
On 10/16/2014 05:01 AM, Thomas Herve wrote:
> 
 This still doesn't do away with the requirement to reliably detect
 node failure, and to fence misbehaving nodes. Detecting that a node
 has failed, and fencing it if unsure, is a prerequisite for any
 recovery action. So you need Corosync/Pacemaker anyway.
>>>
>>> Obviously, yes.  My post covered all of that directly ... the tagging
>>> bit was just additional input into the recovery operation.
>>
>> This is essentially why I am saying using the Pacemaker stack is the
>> smarter approach than hacking something into Ceilometer and Heat. You
>> already need Pacemaker for service availability (and all major vendors
>> have adopted it for that purpose), so a highly available cloud that
>> does *not* use Pacemaker at all won't be a vendor supported option for
>> some time. So people will already be running Pacemaker — then why not
>> use it for what it's good at?
> 
> I may be missing something, but Pacemaker will only provide
> monitoring of your compute node, right? I think the advantage you
> would get by using something like Heat is having an instance agent
> and provide monitoring of your client service, instead of just
> knowing the status of your hypervisor. Hosts can fail, but there is
> another array of failures that you can't handle with the global
> deployment monitoring.

I think that's an important problem, too.

The thread was started talking about evacuate, which is used in the case
of a host failure.  I wrote up a more detailed proposal of using an
external tool (Pacemaker) to handle automatic evacuation of failed hosts.

For a guest OS failure, we have some basic watchdog support.  From my
blog post:

"It’s worth noting that the libvirt/KVM driver in OpenStack does contain
one feature related to guest operating system failure.  The
libvirt-watchdog blueprint was implemented in the Icehouse release of
Nova.  This feature allows you to set the hw_watchdog_action property on
either the image or flavor.  Valid values include poweroff, reset,
pause, and none.  When this is enabled, libvirt will enable the i6300esb
watchdog device for the guest and will perform the requested action if
the watchdog is triggered.  This may be a helpful component of your
strategy for recovery from guest failures."
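
For example, the watchdog action can be requested per image or per flavor
along these lines (the image ID and flavor name are placeholders):

  glance image-update --property hw_watchdog_action=reset <image-id>
  nova flavor-key m1.small set hw:watchdog_action=poweroff

Guests booted from that image or flavor then get the i6300esb device and
the chosen action applied when the watchdog is triggered.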

HA in the case of application failures can be handled in several ways,
depending on the application.  It's really a separate problem space,
though, IMO.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Proposal: A launchpad bug description template

2014-10-16 Thread Markus Zoeller
TL;DR: A proposal for a template for launchpad bug entries which asks 
   for the minimal needed data to work on a bug.


Hi,

As a new guy in OpenStack I find myself often in the situation that I'm
not able to work on a bug because I don't understand the problem itself.
If I don't understand the problem I'm not able to locate the source of 
the issue and to provide an appropriate patch.

As the new guy I don't want to flood the bug entry with questions about
what it should do, which might be obvious to other members. So, with regard
to Igawa's comment in the mail thread "[openstack-dev] [qa] Tempest Bug
triage" (http://openstack.markmail.org/thread/jcwgdcjxylqw2vyk), I'd
like to propose a template for bug entries in Launchpad (see below).

Even if I'm not able to solve a bug due to the lack of knowledge, 
reading the bug description written according to this template could 
help me build up knowledge in this area.

Maybe this or a similar template can be preloaded into the description
panel of launchpad when creating a bug entry.

What are your thoughts about this?

Regards,
Markus Zoeller
IRC: markus_z


-- template start -

Problem description:
==
# Summarize the issue in as few words as possible and as many words as
# necessary. A reader should have the chance to decide if this is in his
# expertise without reading the following sections.

Steps to reproduce:
==
# Explain where you start and under which conditions.
# List every input you gave and every action you made.

# E.g.:
# Prereqs: Installed devstack on x86 machine with icehouse release
# 1) In Horizon, launch an instance with name "test" and flavor "m1.tiny"
#    from image "cirros-0.3.1-x86_64-uec"
# 2) Launch 2 other instances with different names but with the same
#    flavor and image.

Expected behavior:
==
# Describe what you thought should happen. If applicable, provide
# sources (e.g. wiki pages, manuals or similar) for why you expected this.
# E.g.:
# Each instance should be launched successfully.

Observed behavior:
==
# Describe the observed behavior and how it differs from the expected
# one. List the error messages you get. Link to debug data.
# E.g.:
# The third instance was not launched. It went to an error state. The
# error message was "host not found".

Reproducibility:
==
# How often will the "steps to reproduce" result in the described 
# observed behavior?
# This could give a hint if the bug is related to race conditions
# Try to give a good estimate. 
# E.g. "4/5" (read "four times out of five tries")
# 5/5

Environment:
==
# Which version/branch did you use?
# Details of the machine?

Additional data:
==
# Links to http://paste.openstack.org/ with content of log files.
# Links to possible related bugs.
# Things which might be helpful to debug this but don't fit into the
# other sections.

-- template end  -


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API recommendation

2014-10-16 Thread Dean Troyer
On Thu, Oct 16, 2014 at 4:57 AM, Salvatore Orlando 
wrote:

> From an API guideline viewpoint, I understand that
> https://review.openstack.org/#/c/86938/ proposes the introduction of a
> rather simple endpoint to query active tasks and filter them by resource
> uuid or state, for example.
>

That review/blueprint contains one thing that I want to address in more
detail below along with Sal's comment on persistence...


> While this is hardly questionable, I wonder if it might be worth
> "typifying" the task, ie: adding a resource_type attribute, and/or allowing
> to retrieve active tasks as a child resource of an object, eg.: GET
> /servers//tasks?state=running or if just for running tasks GET
> /servers//active_tasks
>

I'd prefer the filter approach, but more importantly, it should be the
_same_ structure as listing resources themselves.

To note: here is another API design detail, specifying resource types in
the URL path:

/server//foo

vs

//foo

or what we have today, for example, in compute:

//foo

The proposed approach for the multiple server create case also makes sense
> to me. Other than "bulk" operations there are indeed cases where a single
> API operation needs to perform multiple tasks. For instance, in Neutron,
> creating a port implies L2 wiring, setting up DHCP info, and securing it on
> the compute node by enforcing anti-spoof rules and security groups. This
> means there will be 3/4 active tasks. For this reason I wonder if it might
> be the case of differentiating between the concept of "operation" and
> "tasks" where the former is the activity explicitly initiated by the API
> consumer, and the latter are the activities which need to complete to
> fulfil it. This is where we might leverage the already proposed request_id
> attribute of the task data structure.
>

I like the ability to track the fan-out, especially if I can get the state
of the entire set of tasks in a single round-trip.  This also makes it
easier to handle backout of failed requests without having to maintain a
lot of client-side state, or make a lot of round-trips.

Finally, a note on persistency. How long a completed task, successfully or
> not should be stored for? Do we want to store them until the resource they
> operated on is deleted?
> I don't think it's a great idea to store them indefinitely in the DB.
> Tying their lifespan to resources is probably a decent idea, but time-based
> cleanup policies might also be considered (e.g.: destroy a task record 24
> hours after its completion)
>

I can envision an operator/user wanting to be able to pull a log of an
operation/task for not only cloud debugging (x failed to build, when/why?)
but also app-level debugging (concrete use case not ready at deadline).
This would require a minimum of life-of-resource + some-amount-of-time.
The time might also be variable, failed operations might actually need to
stick around longer.

Even as an operator with access to backend logging, pulling these state
transitions out should not be hard, and should be available to the resource
owner (project).

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Florian Haas
On Thu, Oct 16, 2014 at 1:59 PM, Russell Bryant  wrote:
> On 10/16/2014 04:29 AM, Florian Haas wrote:
>> (5) Let monitoring and orchestration services deal with these use
>> cases and
>> have Nova simply provide the primitive API calls that it already does
>> (i.e.
>> host evacuate).
>
> That would arguably lead to an incredible amount of wheel reinvention
> for node failure detection, service failure detection, etc. etc.

 How so? (5) would use existing wheels for monitoring and orchestration
 instead of writing all new code paths inside Nova to do the same thing.
>>>
>>> Right, there may be some confusion here ... I thought you were both
>>> agreeing that the use of an external toolset was a good approach for the
>>> problem, but Florian's last message makes that not so clear ...
>>
>> While one of us (Jay or me) speaking for the other and saying we agree
>> is a distributed consensus problem that dwarfs the complexity of
>> Paxos, *I* for my part do think that an "external" toolset (i.e. one
>> that lives outside the Nova codebase) is the better approach versus
>> duplicating the functionality of said toolset in Nova.
>>
>> I just believe that the toolset that should be used here is
>> Corosync/Pacemaker and not Ceilometer/Heat. And I believe the former
>> approach leads to *much* fewer necessary code changes *in* Nova than
>> the latter.
>
> Have you tried pacemaker_remote yet?  It seems like a better choice for
> this particular case, as opposed to using corosync, due to the potential
> number of compute nodes.

I'll assume that you are *not* referring to running Corosync/Pacemaker
on the compute nodes plus pacemaker_remote in the VMs, because doing
so would blow up the separation between the cloud operator and tenant
space.

Running compute nodes as baremetal extensions of a different
Corosync/Pacemaker cluster (presumably the one that manages the other
Nova services)  would potentially be an option, although vendors would
need to buy into this. Ubuntu, for example, currently only ships
pacemaker-remote in universe.

*If* you're running pacemaker_remote on the compute node, though, that
then also opens up the possibility for a compute driver to just dump
the libvirt definition into a VirtualDomain Pacemaker resource,
meaning with a small callout added to Nova, you could also get the
virtual machine monitoring functionality. Bonus: this could eventually
be extended to allow live migration of guests to other compute nodes
in the same cluster, in case you want to shut down a compute node for
maintenance without interrupting your HA guests.
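
As a rough sketch of what such a Pacemaker-managed guest could look like
(the instance name, XML path and options are examples only):

  pcs resource create instance-00000042 VirtualDomain \
      config=/etc/libvirt/qemu/instance-00000042.xml \
      hypervisor=qemu:///system migration_transport=ssh \
      meta allow-migrate=true

The allow-migrate meta attribute is what would make the live-migration case
above possible.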

Cheers,
Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-16 Thread Ed Leafe

On 10/15/2014 03:25 AM, Thierry Carrez wrote:
>>> What I can suggest is, when looking at the menu (it is mandatory to post
>>> it outside the restaurant), to check for the word 'Végétarien'.

> It's also always risky to ask for a vegetarian plate in a French
> traditional restaurant. At best you get a bowl of rice. At worse you get
> Canton rice with bacon and shrimps.

Maybe you can provide a translation for "Does this dish have any meat in
it?". That seems to work best for me, since 'vegetarian' means so many
different things.


-- 
Ed Leafe

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] scheduling Kilo summit topics

2014-10-16 Thread Eoghan Glynn

Folks,

Just a quick reminder that we'll be considering the summit design
sessions proposals[1] at the weekly meeting today at 1500UTC.

We'll start the process of collaborative scheduling of topics with
each session proposer giving a short pitch for inclusion.

We've got 15 proposals already, contending for 6 slots, so let's try
to keep the pitches brief, as opposed to foreshadowing the entire
summit discussion :)

Thanks,
Eoghan 

[1] http://bit.ly/kilo-ceilometer-summit-topics 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-16 Thread Sylvain Bauza


On 16/10/2014 15:18, Ed Leafe wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 10/15/2014 03:25 AM, Thierry Carrez wrote:

What I can suggest is when looking at the menu (this is mandatory to put

it outside of the restaurant) and check for the word 'Végétarien'.

It's also always risky to ask for a vegetarian plate in a French
traditional restaurant. At best you get a bowl of rice. At worse you get
Canton rice with bacon and shrimps.

Maybe you can provide a translation for "Does this dish have any meat in
it?". That seems to work best for me, since 'vegetarian' means so many
different things.


Sure thing "Est-ce que ce plat contient de la viande ?" (ε-s kə sə pla 
kɔ̃tjε̃ də la vjɑ̃d )


https://translate.google.fr/?hl=en&tab=wT#auto/en/est-ce%20que%20ce%20plat%20contient%20de%20la%20viande%20%3F

As the wiki says, don't hesitate to see us at the E47 booth for any 
translation problem, we will be glad to help you.


-Sylvain







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Proposal: A launchpad bug description template

2014-10-16 Thread Masayuki Igawa
Hi Markus,

Thank you for bringing this up.

On Thu, Oct 16, 2014 at 9:13 PM, Markus Zoeller  wrote:
> TL;DR: A proposal for a template for launchpad bug entries which asks
>for the minimal needed data to work on a bug.
>
>
> Hi,
>
> As a new guy in OpenStack I find myself often in the situation that I'm
> not able to work on a bug because I don't understand the problem itself.
> If I don't understand the problem I'm not able to locate the source of
> the issue and to provide an appropriate patch.
>
> As the new guy I don't want to flood the bug entry with questions about
> what it should do, which might be obvious to other members. So, with regard
> to Igawa's comment in the mail thread "[openstack-dev] [qa] Tempest Bug
> triage" (http://openstack.markmail.org/thread/jcwgdcjxylqw2vyk), I'd
> like to propose a template for bug entries in Launchpad (see below).
>
> Even if I'm not able to solve a bug due to the lack of knowledge,
> reading the bug description written according to this template could
> help me build up knowledge in this area.
>
> Maybe this or a similar template can be preloaded into the description
> panel of launchpad when creating a bug entry.
>
> What are your thoughts about this?
>
> Regards,
> Markus Zoeller
> IRC: markus_z
>
>
> -- template start -
>
> Problem description:
> ==
> # Summarize the issue in as few words as possible and as many words as
> # necessary. A reader should have the chance to decide if this is in his
> # expertise without reading the following sections.
>
> Steps to reproduce:
> ==
> # Explain where you start and under which conditions.
> # List every input you gave and every action you made.
>
> # E.g.:
> # Prereqs: Installed devstack on x86 machine with icehouse release
> # 1) In Horizon, launch an instance with name "test" and flavor "m1.tiny"
> #    from image "cirros-0.3.1-x86_64-uec"
> # 2) Launch 2 other instances with different names but with the same
> #    flavor and image.
>
> Expected behavior:
> ==
> # Describe what you thought what should happen. If applicable provide
> # sources (e.g. wiki pages, manuals or similar) why to expected this.
> # E.g.:
> # Each instance should be launched successfully.
>
> Observed behavior:
> ==
> # Describe the observed behavior and how it differs from the expected
> # one. List the error messages you get. Link to debug data.
> # E.g.:
> # The third instance was not launched. It went to an error state. The
> # error message was "host not found".
>
> Reproducibility:
> ==
> # How often will the "steps to reproduce" result in the described
> # observed behavior?
> # This could give a hint if the bug is related to race conditions
> # Try to give a good estimate.
> # E.g. "4/5" (read "four times out of five tries")
> # 5/5
>
> Environment:
> ==
> # Which version/branch did you use?
> # Details of the machine?
>
> Additional data:
> ==
> # Links to http://paste.openstack.org/ with content of log files.
> # Links to possible related bugs.
> # Things which might be helpful to debug this but doesn't fit into the
> # other sections.
>
> -- template end  -

+1!

I think many bug descriptions don't have enough information for
debugging now, so we should gather as much information like this as
possible.
However, one thing I'm concerned about is that this template is a rather
heavy process for submitting an easy bug. We might want several templates
depending on the situation.

Thanks,
-- 
Masayuki Igawa

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] canceled weekly meeting October, 16

2014-10-16 Thread Sergey Lukjanov
Hi sahara folks,

I'm cancelling weekly irc meeting this week due to the release.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [QA] How to attach multiple NICs to an instance VM?

2014-10-16 Thread Mariusz Gronczewski
> 
> Is this the correct way to attach multiple NICs to an instance?
> 
> Thanks,
> Danny
> 

Hi,

run a DHCP client on the second instance; it should get the added IP.

if you want to add a port to an existing instance, then create the port in
neutron:

neutron port-create --fixed-ip ip_address=1.2.3.4 vm_network_name

then get its ID and attach it to the instance:

nova interface-attach --port-id 6eb03b5e-48fa-4c2b-8e3b-c56137086896 
instance_name

it *can* be done directly during neutron port-create... but that doesn't
work in icehouse :/ (the port is added to neutron but stays permanently down
in nova), so it has to be done in 2 steps
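
If you want to script it, something along these lines should work (the IP,
network and instance names are placeholders, and the awk is just one way to
pull the UUID out of the table output):

PORT_ID=$(neutron port-create --fixed-ip ip_address=1.2.3.4 vm_network_name \
          | awk '/ id /{print $4}')
nova interface-attach --port-id "$PORT_ID" instance_name
# inside the guest, bring the new NIC up (interface name may differ):
sudo dhclient eth1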

-- 
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczew...@efigence.com



signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Steve Gordon


- Original Message -
> From: "Florian Haas" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> On Thu, Oct 16, 2014 at 1:59 PM, Russell Bryant  wrote:
> > On 10/16/2014 04:29 AM, Florian Haas wrote:
> >> (5) Let monitoring and orchestration services deal with these use
> >> cases and
> >> have Nova simply provide the primitive API calls that it already does
> >> (i.e.
> >> host evacuate).
> >
> > That would arguably lead to an incredible amount of wheel reinvention
> > for node failure detection, service failure detection, etc. etc.
> 
>  How so? (5) would use existing wheels for monitoring and orchestration
>  instead of writing all new code paths inside Nova to do the same thing.
> >>>
> >>> Right, there may be some confusion here ... I thought you were both
> >>> agreeing that the use of an external toolset was a good approach for the
> >>> problem, but Florian's last message makes that not so clear ...
> >>
> >> While one of us (Jay or me) speaking for the other and saying we agree
> >> is a distributed consensus problem that dwarfs the complexity of
> >> Paxos, *I* for my part do think that an "external" toolset (i.e. one
> >> that lives outside the Nova codebase) is the better approach versus
> >> duplicating the functionality of said toolset in Nova.
> >>
> >> I just believe that the toolset that should be used here is
> >> Corosync/Pacemaker and not Ceilometer/Heat. And I believe the former
> >> approach leads to *much* fewer necessary code changes *in* Nova than
> >> the latter.
> >
> > Have you tried pacemaker_remote yet?  It seems like a better choice for
> > this particular case, as opposed to using corosync, due to the potential
> > number of compute nodes.
> 
> I'll assume that you are *not* referring to running Corosync/Pacemaker
> on the compute nodes plus pacemaker_remote in the VMs, because doing
> so would blow up the separation between the cloud operator and tenant
> space.
> 
> Running compute nodes as baremetal extensions of a different
> Corosync/Pacemaker cluster (presumably the one that manages the other
> Nova services)  would potentially be an option, although vendors would
> need to buy into this. Ubuntu, for example, currently only ships
> pacemaker-remote in universe.

This is something we'd be doing *to* OpenStack rather than *in* the OpenStack 
projects (at least those that deliver code); in fact, that's a large part of the 
appeal. As such I don't know that there necessarily has to be one true solution 
to rule them all; a distribution could deviate as needed, but we would have 
some - ideally very small - number of "known good" configurations which achieve 
the stated goal and are well documented.

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack] [Barbican] Cinder and Barbican

2014-10-16 Thread Giuseppe Galeota
Dear all,
is Cinder capable today to use Barbican for encryption? If yes, can
you attach some useful doc?

Thank you,
Giuseppe
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Florian Haas
On Thu, Oct 16, 2014 at 4:31 PM, Steve Gordon  wrote:
>> Running compute nodes as baremetal extensions of a different
>> Corosync/Pacemaker cluster (presumably the one that manages the other
>> Nova services)  would potentially be an option, although vendors would
>> need to buy into this. Ubuntu, for example, currently only ships
>> pacemaker-remote in universe.
>
> This is something we'd be doing *too* OpenStack rather than *in* the 
> OpenStack projects (at least those that deliver code), in fact that's a large 
> part of the appeal. As such I don't know that there necessarily has to be one 
> true solution to rule them all, a distribution could deviate as needed, but 
> we would have some - ideally very small - number of "known good" 
> configurations which achieve the stated goal and are well documented.

Correct. In the infrastructure/service HA field, we already have that,
as vendors (with very few exceptions) have settled on
Corosync/Pacemaker for service availability, HAproxy for load
balancing, and MySQL/Galera for database replication, for example. It
would be great if we could see this kind of convergent evolution for
guest HA as well.

Cheers,
Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Russell Bryant
On 10/16/2014 09:00 AM, Florian Haas wrote:
> On Thu, Oct 16, 2014 at 1:59 PM, Russell Bryant  wrote:
>> On 10/16/2014 04:29 AM, Florian Haas wrote:
>>> (5) Let monitoring and orchestration services deal with these use
>>> cases and
>>> have Nova simply provide the primitive API calls that it already does
>>> (i.e.
>>> host evacuate).
>>
>> That would arguably lead to an incredible amount of wheel reinvention
>> for node failure detection, service failure detection, etc. etc.
>
> How so? (5) would use existing wheels for monitoring and orchestration
> instead of writing all new code paths inside Nova to do the same thing.

 Right, there may be some confusion here ... I thought you were both
 agreeing that the use of an external toolset was a good approach for the
 problem, but Florian's last message makes that not so clear ...
>>>
>>> While one of us (Jay or me) speaking for the other and saying we agree
>>> is a distributed consensus problem that dwarfs the complexity of
>>> Paxos, *I* for my part do think that an "external" toolset (i.e. one
>>> that lives outside the Nova codebase) is the better approach versus
>>> duplicating the functionality of said toolset in Nova.
>>>
>>> I just believe that the toolset that should be used here is
>>> Corosync/Pacemaker and not Ceilometer/Heat. And I believe the former
>>> approach leads to *much* fewer necessary code changes *in* Nova than
>>> the latter.
>>
>> Have you tried pacemaker_remote yet?  It seems like a better choice for
>> this particular case, as opposed to using corosync, due to the potential
>> number of compute nodes.
> 
> I'll assume that you are *not* referring to running Corosync/Pacemaker
> on the compute nodes plus pacemaker_remote in the VMs, because doing
> so would blow up the separation between the cloud operator and tenant
> space.

Correct.

> Running compute nodes as baremetal extensions of a different
> Corosync/Pacemaker cluster (presumably the one that manages the other
> Nova services)  would potentially be an option, although vendors would
> need to buy into this. Ubuntu, for example, currently only ships
> pacemaker-remote in universe.

Yes, this is what I had in mind.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][DVR] Openstack Juno: how to configure dvr in Network-Node and Compute-Node?

2014-10-16 Thread Michael Smith
Have you seen the “how-to” wiki page?
https://wiki.openstack.org/wiki/Neutron/DVR/HowTo
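
Off the top of my head, the main knobs involved are roughly the following
(only a sketch; please double-check the exact option names and files against
the wiki):

# neutron.conf on the server:
router_distributed = True
# ML2 config:
mechanism_drivers = openvswitch,l2population
# l3_agent.ini (DVR also needs an L3 agent on every compute node):
agent_mode = dvr_snat   # network node
agent_mode = dvr        # compute nodes
# OVS agent config, [agent] section:
enable_distributed_routing = True
l2_population = True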
Yours,
Michael Smith
Hewlett-Packard Company
HP Networking R&D
8000 Foothills Blvd. M/S 5557
Roseville, CA 95747
PC Phone: 916 540-1884
Ph: 916 785-0918
Fax: 916 785-1199

From: zhang xiaobin [mailto:14050...@cnsuning.com]
Sent: Wednesday, October 15, 2014 9:50 PM
To: openstack-dev
Subject: [openstack-dev] [Neutron][DVR] Openstack Juno: how to configure dvr in 
Network-Node and Compute-Node?

Could anyone help on this?


In OpenStack Juno, Neutron has a new feature called Distributed Virtual Routing (DVR),

but how do we configure it on the network node and the compute nodes?
Openstack.org just says "router_distributed = True",
which is far from enough.

Could anyone point to some detailed instructions on how to configure it?

When we were adding a port, the router reported errors like:
"AttributeError: 'Ml2Plugin' object has no attribute 'update_dvr_port_binding'"

What's more, is this DVR per project, or per compute node?
Thanks in advance!


zhang xiaobin



This e-mail may contain confidential, copyright and/or privileged information. 
If you are not the addressee or authorized to receive this, please inform us of 
the erroneous delivery by return e-mail, and you should delete it from your 
system and may not use, copy, disclose or take any action based on this e-mail 
or any information herein. Any opinions expressed by sender hereof do not 
necessarily represent those of SUNING COMMERCE GROUP CO., LTD.,SUNING COMMERCE 
GROUP CO., LTD.,does not guarantee that this email is secure or free from 
viruses. No liability is accepted for any errors or omissions in the contents 
of this email, which arise as a result of email transmission. Unless expressly 
stated,this email is not intended to form a binding contract.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-16 Thread Julien Danjou
On Thu, Oct 16 2014, Sylvain Bauza wrote:

> Sure thing "Est-ce que ce plat contient de la viande ?" (ε-s kə sə pla 
> kɔ̃tjε̃ də
> la vjɑ̃d )

Be careful because a lot of places would answer "no" to that while
it might be possible that the dish has things like fish in it… Many
people, even in restaurants, don't get what vegetarian or vegan actually
means.

This is really getting off-topic for this list though. :) I've added a
note in the wiki on that.

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] BGPVPN implementation discussions

2014-10-16 Thread Mathieu Rohon
Hi all,

as discussed during today's l3-meeting, we keep working on the BGPVPN
service plugin implementation [1].
MPLS encapsulation is now supported in OVS [2], so we would like to
submit a design to leverage OVS capabilities. A first design proposal,
based on the l3 agent, can be found here:

https://docs.google.com/drawings/d/1NN4tDgnZlBRr8ZUf5-6zzUcnDOUkWSnSiPm8LuuAkoQ/edit

This solution is based on bagpipe [3] and its capacity to manipulate
OVS based on advertised and learned routes.

[1]https://blueprints.launchpad.net/neutron/+spec/neutron-bgp-vpn
[2]https://raw.githubusercontent.com/openvswitch/ovs/master/FAQ
[3]https://github.com/Orange-OpenSource/bagpipe-bgp
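
For anyone who hasn't looked at the OVS side yet, the MPLS support boils down
to flows along these lines (purely illustrative and untested here; the bridge,
ports and label values are made up, see the OVS FAQ [2] for the real syntax):

ovs-ofctl -O OpenFlow13 add-flow br-mpls \
  "in_port=1,ip actions=push_mpls:0x8847,set_field:16001->mpls_label,output:2"
ovs-ofctl -O OpenFlow13 add-flow br-mpls \
  "in_port=2,dl_type=0x8847,mpls_label=16001 actions=pop_mpls:0x0800,output:1"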


Thanks

Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-16 Thread Anita Kuno
On 10/16/2014 12:49 PM, Julien Danjou wrote:
> On Thu, Oct 16 2014, Sylvain Bauza wrote:
> 
>> Sure thing "Est-ce que ce plat contient de la viande ?" (ε-s kə
>> sə pla kɔ̃tjε̃ də la vjɑ̃d )
> 
> Be careful because that a lot of places would answer "no" to that
> while it might be possible that the dish has things like fish in
> it… Many people, even in restaurants, don't get what vegetarian or
> vegan actually means.
> 
> This is really getting off-topic for this list though. :)
Heh, yeah didn't expect to take over the thread on this, sorry about
that. Perhaps we should form a group, vegetarians in Paris? We can
shuffle through the streets shotgunning "Est-ce que ce plat contient
de la viande ?" at passers-by and hoping someone will point us in the
right direction. :D

> I've added a note in the wiki on that.
Thanks Julien,
Anita.
> 
> 
> 
> ___ OpenStack-dev
> mailing list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Adam Lawson
Be forewarned; here's my two cents before I've had my morning coffee.

It would seem to me that if we are seeking some level of resiliency
against host failures (if a host fails, evacuate the instances that were
hosted on it to a host that isn't broken), host HA is a good approach.
The ultimate goal of course is instance HA, but the task of monitoring
individual instances and determining what constitutes "down" seems like a
much more complex task than detecting when a compute node is down. I know
that requiring the presence of agents probably needs some more
brain-cycles, since we can't expect additional bytes consuming memory on
each individual VM.

Additionally, I'm not really hung up on the 'how' as we all realize there
are several ways to skin that cat, so long as that 'how' is leveraged via
tools over which we have control and direct influence. Reason being, we
may not want to leverage features as important as this on tools that
change outside our control and subsequently shift the foundation of the
feature we implemented, which was based on how the product USED to work.
Basically, if Pacemaker does what we need then cool, but it seems that
implementing a feature should be built upon a bedrock of programs over
which we have a direct influence. This is why Nagios may be able to do it
but it's a hack at best. I'm not saying Nagios isn't good or the hack
doesn't work, but in the context of an Openstack solution, we can't
require a single external tool for a feature like host or VM HA. Are we
suggesting that we tell people who want HA - "go use Nagios"? Call me a
purist but if we're going to implement a feature, it should be our
community implementing it because we have some of the best minds on
staff. ; )



*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Thu, Oct 16, 2014 at 7:53 AM, Russell Bryant  wrote:

> On 10/16/2014 09:00 AM, Florian Haas wrote:
> > On Thu, Oct 16, 2014 at 1:59 PM, Russell Bryant 
> wrote:
> >> On 10/16/2014 04:29 AM, Florian Haas wrote:
> >>> (5) Let monitoring and orchestration services deal with these use
> >>> cases and
> >>> have Nova simply provide the primitive API calls that it already
> does
> >>> (i.e.
> >>> host evacuate).
> >>
> >> That would arguably lead to an incredible amount of wheel
> reinvention
> >> for node failure detection, service failure detection, etc. etc.
> >
> > How so? (5) would use existing wheels for monitoring and
> orchestration
> > instead of writing all new code paths inside Nova to do the same
> thing.
> 
>  Right, there may be some confusion here ... I thought you were both
>  agreeing that the use of an external toolset was a good approach for
> the
>  problem, but Florian's last message makes that not so clear ...
> >>>
> >>> While one of us (Jay or me) speaking for the other and saying we agree
> >>> is a distributed consensus problem that dwarfs the complexity of
> >>> Paxos, *I* for my part do think that an "external" toolset (i.e. one
> >>> that lives outside the Nova codebase) is the better approach versus
> >>> duplicating the functionality of said toolset in Nova.
> >>>
> >>> I just believe that the toolset that should be used here is
> >>> Corosync/Pacemaker and not Ceilometer/Heat. And I believe the former
> >>> approach leads to *much* fewer necessary code changes *in* Nova than
> >>> the latter.
> >>
> >> Have you tried pacemaker_remote yet?  It seems like a better choice for
> >> this particular case, as opposed to using corosync, due to the potential
> >> number of compute nodes.
> >
> > I'll assume that you are *not* referring to running Corosync/Pacemaker
> > on the compute nodes plus pacemaker_remote in the VMs, because doing
> > so would blow up the separation between the cloud operator and tenant
> > space.
>
> Correct.
>
> > Running compute nodes as baremetal extensions of a different
> > Corosync/Pacemaker cluster (presumably the one that manages the other
> > Nova services)  would potentially be an option, although vendors would
> > need to buy into this. Ubuntu, for example, currently only ships
> > pacemaker-remote in universe.
>
> Yes, this is what I had in mind.
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Proposal: A launchpad bug description template

2014-10-16 Thread Markus Zoeller
Masayuki Igawa  wrote on 10/16/2014 03:41:24 PM:

> From: Masayuki Igawa 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 10/16/2014 03:44 PM
> Subject: Re: [openstack-dev] [QA] Proposal: A launchpad bug 
> description template
> 
> Hi Markus,
> 
> Thank you for bringing up this.
> 
> On Thu, Oct 16, 2014 at 9:13 PM, Markus Zoeller  
wrote:
> > TL;DR: A proposal for a template for launchpad bug entries which asks
> >for the minimal needed data to work on a bug.
> >
> >
> > Hi,
> >
> > As a new guy in OpenStack I find myself often in the situation that 
I'm
> > not able to work on a bug because I don't understand the problem 
itself.
> > If I don't understand the problem I'm not able to locate the source of
> > the issue and to provide an appropriate patch.
> >
> > As the new guy I don't want to flood the bug entry with questions what
> > it should do which might be obvious to other members. And so, wrt
> > Igawas comment in the mail thread "[openstack-dev] [qa] Tempest Bug
> > triage" (http://openstack.markmail.org/thread/jcwgdcjxylqw2vyk) I'd
> > like to propose a template for bug entries in launchpad (see below).
> >
> > Even if I'm not able to solve a bug due to the lack of knowledge,
> > reading the bug description written according to this template could
> > help me build up knowledge in this area.
> >
> > Maybe this or a similar template can be preloaded into the description
> > panel of launchpad when creating a bug entry.
> >
> > What are your thoughts about this?
> >
> > Regards,
> > Markus Zoeller
> > IRC: markus_z
> >
> >
> > -- template start 
-
> >
> > Problem description:
> > ==
> > # Summarize the issue in as few words as possible and as much words as
> > # necessary. A reader should have the chance to decide if this is in 
his
> > # expertise without reading the following sections.
> >
> > Steps to reproduce:
> > ==
> > # Explain where you start and under which conditions.
> > # List every input you gave and every action you made.
> >
> > # E.g.:
> > # Prereqs: Installed devstack on x86 machine with icehouse release
> > # 1) In Horizon, launch an instance with name "test" an flavor 
"m1.tiny"
> > #from image "cirros-0.3.1-x86_64-uec"
> > # 2) Launch 2 other instances with different names but with the same
> > #flavor and image.
> >
> > Expected behavior:
> > ==
> > # Describe what you thought what should happen. If applicable provide
> > # sources (e.g. wiki pages, manuals or similar) why to expected this.
> > # E.g.:
> > # Each instance should be launched successfully.
> >
> > Observed behavior:
> > ==
> > # Describe the observed behavior and how it differs from the expected
> > # one. List the error messages you get. Link to debug data.
> > # E.g.:
> > # The third instance was not launched. It went to an error state. The
> > # error message was "host not found".
> >
> > Reproducibility:
> > ==
> > # How often will the "steps to reproduce" result in the described
> > # observed behavior?
> > # This could give a hint if the bug is related to race conditions
> > # Try to give a good estimate.
> > # E.g. "4/5" (read "four times out of five tries")
> > # 5/5
> >
> > Environment:
> > ==
> > # Which version/branch did you use?
> > # Details of the machine?
> >
> > Additional data:
> > ==
> > # Links to http://paste.openstack.org/ with content of log files.
> > # Links to possible related bugs.
> > # Things which might be helpful to debug this but doesn't fit into the
> > # other sections.
> >
> > -- template end  -
> 
> +1!
> 
> I think many of bug descriptions don't have enough information for
> debugging now.  So we should have enough information like this as
> possible.
> However, one thing I'm concerned about this template is a little heavy
> process for submitting an easy bug. We might have some templates for
> depending on the situation.
> 
> Thanks,
> -- 
> Masayuki Igawa
> 

I'm not quite sure it would be good to omit a proper bug description
even if the author of the bug considers it an "easy" bug. If it is
an easy one, it could be tagged with the "low-hanging-fruit" tag and
left for a new contributor. Sure, it is easier for the *author* to make
a quick description in a bug entry, but I assume that this time saving
has to be paid for in the later steps of the lifecycle of the bug.
A bug seems easy to its author because he already has the knowledge to
debug it. Another person will not profit from a short description
and will not increase his knowledge to help with other (more complicated)
bugs. If a section is irrelevant for a given bug (e.g. Environment), then
a short "N/A" could save time for both author and reader.

Regards,
Markus Zoeller
IRC: markus_z


___

[openstack-dev] [Ironic][Ceilometer] Proposed Change to Sensor meter naming in Ceilometer

2014-10-16 Thread Jim Mankovich

All,

I would like to get some feedback on a proposal to change the current
sensor naming implemented in Ironic and Ceilometer.


I would like to provide vendor-specific sensors within the current
structure for IPMI sensors in Ironic and Ceilometer, but I have found
that the current implementation of sensor meters in Ironic and
Ceilometer is IPMI-specific (from a meter naming perspective). As it
currently stands, this is not suitable for supporting sensor information
from a provider other than IPMI. Also, the current Resource ID naming
makes it difficult for a consumer of sensors to quickly find all the
sensors for a given Ironic Node ID, so I would like to propose changing
the Resource ID naming as well.


Currently, sensors sent by Ironic to Ceilometer get named by Ceilometer
as "hardware.ipmi.SensorType", and the Resource ID is the Ironic
Node ID with a suffix containing the Sensor ID.  For details
pertaining to the issue with the Resource ID naming, see
https://bugs.launchpad.net/ironic/+bug/1377157, "ipmi sensor naming in
ceilometer is not consumer friendly".


Here is an example of what meters look like for sensors in ceilometer 
with the current implementation:

| Name                       | Type  | Unit | Resource ID
| hardware.ipmi.current      | gauge | W    | edafe6f4-5996-4df8-bc84-7d92439e15c0-power_meter_(0x16)
| hardware.ipmi.temperature  | gauge | C    | edafe6f4-5996-4df8-bc84-7d92439e15c0-16-system_board_(0x15)


What I would like to propose is dropping the ipmi string from the name
altogether and appending the Sensor ID to the name instead of to the
Resource ID. So, transforming the above to the new naming would result
in the following:

| Name                                      | Type  | Unit | Resource ID
| hardware.current.power_meter_(0x16)       | gauge | W    | edafe6f4-5996-4df8-bc84-7d92439e15c0
| hardware.temperature.system_board_(0x15)  | gauge | C    | edafe6f4-5996-4df8-bc84-7d92439e15c0


This structure would provide the ability for a consumer to do a
ceilometer resource list using the Ironic Node ID as the Resource ID to
get all the sensors in a given platform. The consumer would then
iterate over each of the sensors to get the samples it wanted. In
order to retain the information as to who provided the sensors, I would
like to propose that a standard "sensor_provider" field be added to the
resource_metadata of every sensor, where the "sensor_provider" field
would have a string value indicating the driver that provided the sensor
information. This is where the string "ipmi", or a vendor-specific
string, would be specified.
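
To make the proposed transformation concrete, it would be roughly this
(pseudo-code sketch only, not the actual ironic/ceilometer code):

def proposed_meter_name(sensor_type, sensor_id):
    # e.g. ("Temperature", "System Board (0x15)")
    #   -> "hardware.temperature.system_board_(0x15)"
    return "hardware.%s.%s" % (sensor_type.lower(),
                               sensor_id.lower().replace(' ', '_'))

def proposed_resource_id(ironic_node_id):
    # the Resource ID becomes just the Ironic Node ID, no sensor suffix
    return ironic_node_id

# resource_metadata would additionally carry e.g. {'sensor_provider': 'ipmi'}
# (or a vendor-specific string) so the origin of the sample isn't lost.

A consumer could then do something like
ceilometer meter-list -q resource_id=<ironic-node-uuid>
and iterate over the returned meters.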


I understand that this proposed change is not backward compatible with 
the existing naming, but I don't really see a good solution that would 
retain backward compatibility.


Any/All Feedback will be appreciated,
Jim

--
--- Jim Mankovich | jm...@hp.com (US Mountain Time) ---


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [Heat] [Mistral] [Horizon] Merlin project PoC update: shift from HOT builder to Mistral Workbook builder

2014-10-16 Thread Timur Sufiev
Drago,

great news indeed! In the meantime I've started integrating the Merlin PoC with
the Mistral Workbook builder into Horizon. If you follow the
instructions at https://github.com/stackforge/merlin/blob/master/README.md
you'll get a working Project->Orchestration->Mistral panel with the Workbook
Builder. It has the same functionality as before, though it currently
looks inconsistent with the overall Horizon UI - I'll fix that in a day or
two, adding along the way some stub code to display more than one Mistral
workbook.

I propose we join our efforts in integrating different builders into
Horizon; feel free to ask me about various aspects of getting the HOT builder
to work with Horizon! There is one bug in Horizon that can prevent a panel
added via the pluggable settings mechanism from appearing in the proper group,
https://bugs.launchpad.net/horizon/+bug/1378558 - the proposed fix already
works.

On Tue, Oct 14, 2014 at 12:51 AM, Drago Rosson 
wrote:

>  The HOT Builder code is available now at
> https://github.com/rackerlabs/hotbuilder although at the moment it is
> non-functional because it has not been ported over to Horizon.
>
>  Drago
>
>   From: Angus Salkeld 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, September 30, 2014 at 2:42 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [UX] [Heat] [Mistral] Merlin project PoC
> update: shift from HOT builder to Mistral Workbook builder
>
>On Fri, Sep 26, 2014 at 7:04 AM, Steve Baker  wrote:
>
>> On 26/09/14 05:36, Timur Sufiev wrote:
>>
>>> Hello, folks!
>>>
>>> Following Drago Rosson's introduction of Barricade.js and our discussion
>>> in ML about possibility of using it in Merlin [1], I've decided to change
>>> the plans for PoC: now the goal for Merlin's PoC is to implement Mistral
>>> Workbook builder on top of Barricade.js. The reasons for that are:
>>>
>>> * To better understand Barricade.js potential as data abstraction layer
>>> in Merlin, I need to learn much more about its possibilities and
>>> limitations than simple examining/reviewing of its source code allows. The
>>> best way to do this is by building upon it.
>>> * It's becoming too crowded in the HOT builder's sandbox - doing the
>>> same work as Drago currently does [2] seems like a waste of resources to me
>>> (especially in case he'll opensource his HOT builder someday just as he did
>>> with Barricade.js).
>>>
>>
>> Drago, it would be to everyone's benefit if your HOT builder efforts were
>> developed on a public git repository, no matter how functional it is
>> currently.
>>
>> Is there any chance you can publish what you're working on to
>> https://github.com/dragorosson or rackerlabs for a start?
>>
>
>  Drago any news of this? This would prevent a lot of duplication of work
> and later merging of code. The sooner this is done the better.
>
>  -Angus
>
>
>>
>>  * Why Mistral and not Murano or Solum? Because Mistral's YAML templates
>>> have simpler structure than Murano's ones do and is better defined at that
>>> moment than the ones in Solum.
>>>
>>> There already some commits in https://github.com/stackforge/merlin and
>>> since client-side app doesn't talk to the Mistral's server yet, it is
>>> pretty easy to run it (just follow the instructions in README.md) and then
>>> see it in browser at http://localhost:8080. UI is yet not great, as the
>>> current focus is data abstraction layer exploration, i.e. how to exploit
>>> Barricade.js capabilities to reflect all relations between Mistral's
>>> entities. I hope to finish the minimal set of features in a few weeks - and
>>> will certainly announce it in the ML.
>>>
>>> [1] http://lists.openstack.org/pipermail/openstack-dev/2014-
>>> September/044591.html
>>> [2] http://lists.openstack.org/pipermail/openstack-dev/2014-
>>> August/044186.html
>>>
>>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Timur Sufiev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Jay Pipes

On 10/16/2014 04:29 AM, Florian Haas wrote:

(5) Let monitoring and orchestration services deal with these use
cases and
have Nova simply provide the primitive API calls that it already does
(i.e.
host evacuate).


That would arguably lead to an incredible amount of wheel reinvention
for node failure detection, service failure detection, etc. etc.


How so? (5) would use existing wheels for monitoring and orchestration
instead of writing all new code paths inside Nova to do the same thing.


Right, there may be some confusion here ... I thought you were both
agreeing that the use of an external toolset was a good approach for the
problem, but Florian's last message makes that not so clear ...


While one of us (Jay or me) speaking for the other and saying we agree
is a distributed consensus problem that dwarfs the complexity of
Paxos


You've always had a way with words, Florian :)

>, *I* for my part do think that an "external" toolset (i.e. one

that lives outside the Nova codebase) is the better approach versus
duplicating the functionality of said toolset in Nova.

I just believe that the toolset that should be used here is
Corosync/Pacemaker and not Ceilometer/Heat. And I believe the former
approach leads to *much* fewer necessary code changes *in* Nova than
the latter.


I agree with you that Corosync/Pacemaker is the tool of choice for 
monitoring/heartbeat functionality, and is my choice for 
compute-node-level HA monitoring. For guest-level HA monitoring, I would 
say use Heat/Ceilometer. For container-level HA monitoring, it looks 
like fleet or something like Kubernetes would be a good option.


I'm curious to see how the combination of compute-node-level HA and 
container-level HA tools will work together in some of the proposed 
deployment architectures (bare metal + docker containers w/ OpenStack 
and infrastructure services run in a Kubernetes pod or CoreOS fleet).


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-16 Thread Jay Faulkner

On Oct 16, 2014, at 10:02 AM, Anita Kuno <ante...@anteaya.info> wrote:

Heh, yeah didn't expect to take over the thread on this, sorry about
that. Perhaps we should form a group, vegetarians in Paris?

I know myself and at least one other Summit attendee who has similarly annoying 
dietary restrictions — no gluten. Which from what I understand is very 
difficult in France.

If there’s anyone else who has similar restrictions would like to team up so we 
can all find places to eat without getting sick, feel free to email me off list 
or ping me on IRC (JayF).

-
Jay Faulkner
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Russell Bryant
On 10/16/2014 01:03 PM, Adam Lawson wrote:
> Be forewarned; here's my two cents before I've had my morning coffee. 
> 
> It would seem to me that if we were seeking some level of resiliency
> against host failures (if a host fails, evacuate the instances that were
> hosted on it to a host that isn't broken), it would seem that host HA is
> a good approach. The ultimate goal of course is instance HA but the task
> of monitoring individual instances and determining what constitutes
> "down" seems like a much more complex task than detecting when a compute
> node is down. I know that requiring the presence of agents should
> probably need some more brain-cycles since we can't expect additional
> bytes consuming memory on each individual VM.
> 
> Additionally, I'm not really hung up on the 'how' as we all realize
> there several ways to skin that cat, so long as that 'how' is leveraged
> via tools over which we have control and direct influence. Reason being,
> we may not want to leverage features as important as this on tools that
> change outside our control and subsequently shifts the foundation of the
> feature we implemented that was based on how the product USED to work.
> Basically if Pacemaker does what we need then cool but it seems that
> implementing a feature should be built upon a bedrock of programs over
> which we have a direct influence. This is why Nagios may be able to do
> it but it's a hack at best. I'm not saying Nagios isn't good or ythe
> hack doesn't work but in the context of an Openstack solution, we can't
> require a single external tool for a feature like host or VM HA. Are we
> suggesting that we tell people who want HA - "go use Nagios"? Call me a
> purist but if we're going to implement a feature, it should be our
> community implementing it because we have some of the best minds on
> staff. ; )

I think you just gave a great example of "NIH".  :-)

I was saying "use Pacemaker", not "use Nagios".  I'm not aware of
fencing integration with Nagios, but it's feasible.  The key point I've
been making is "this is very achievable today as a function of the
infrastructure supporting an OpenStack deployment".  I'd also like to
work on some more detailed examples of doing so.

FWIW, there are existing very good relationships between OpenStack
community members and the Pacemaker team.  I'm really not concerned
about that at all.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Florian Haas
On Thu, Oct 16, 2014 at 7:03 PM, Adam Lawson  wrote:
>
> Be forewarned; here's my two cents before I've had my morning coffee.
>
> It would seem to me that if we were seeking some level of resiliency against 
> host failures (if a host fails, evacuate the instances that were hosted on it 
> to a host that isn't broken), it would seem that host HA is a good approach. 
> The ultimate goal of course is instance HA but the task of monitoring 
> individual instances and determining what constitutes "down" seems like a 
> much more complex task than detecting when a compute node is down. I know 
> that requiring the presence of agents should probably need some more 
> brain-cycles since we can't expect additional bytes consuming memory on each 
> individual VM.

What Russell is suggesting, though, is actually a very feasible
approach for compute node HA today and per-instance HA tomorrow.

> Additionally, I'm not really hung up on the 'how' as we all realize there 
> several ways to skin that cat, so long as that 'how' is leveraged via tools 
> over which we have control and direct influence. Reason being, we may not 
> want to leverage features as important as this on tools that change outside 
> our control and subsequently shifts the foundation of the feature we 
> implemented that was based on how the product USED to work. Basically if 
> Pacemaker does what we need then cool but it seems that implementing a 
> feature should be built upon a bedrock of programs over which we have a 
> direct influence.

That almost sounds a bit like "let's always build a better wheel,
because control". I'm not sure if that's indeed the intention, but if
it is then that seems like a bad idea to me.

> This is why Nagios may be able to do it but it's a hack at best. I'm not 
> saying Nagios isn't good or ythe hack doesn't work but in the context of an 
> Openstack solution, we can't require a single external tool for a feature 
> like host or VM HA. Are we suggesting that we tell people who want HA - "go 
> use Nagios"? Call me a purist but if we're going to implement a feature, it 
> should be our community implementing it because we have some of the best 
> minds on staff. ; )

Anyone who thinks that having a monitoring solution to page people and
then waking up a human to restart the service constitutes HA needs to
be doused in a bucket of ice water. :)

Cheers,
Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-16 Thread Gary Kotton
You can always try:
http://www.timeout.com/paris/en/food-and-drink/vegetarian-restaurants-in-paris


On 10/16/14, 8:02 PM, "Anita Kuno"  wrote:

>On 10/16/2014 12:49 PM, Julien Danjou wrote:
>> On Thu, Oct 16 2014, Sylvain Bauza wrote:
>> 
>>> Sure thing "Est-ce que ce plat contient de la viande ?" (ε-s kə
>>> sə pla kɔ̃tjε̃ də la vjɑ̃d )
>> 
>> Be careful because that a lot of places would answer "no" to that
>> while it might be possible that the dish has things like fish in
>> it… Many people, even in restaurants, don't get what vegetarian or
>> vegan actually means.
>> 
>> This is really getting off-topic for this list though. :)
>Heh, yeah didn't expect to take over the thread on this, sorry about
>that. Perhaps we should form a group, vegetarians in Paris? We can
>shuffle through the streets shotgunning "Est-ce que ce plat contient
>de la viande ?" at passers-by and hoping someone will point us in the
>right direction. :D
>
>> I've added a note in the wiki on that.
>Thanks Julien,
>Anita.
>> 
>> 
>> 
>> ___ OpenStack-dev
>> mailing list OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Ceilometer] Proposed Change to Sensor meter naming in Ceilometer

2014-10-16 Thread Julien Danjou
On Thu, Oct 16 2014, Jim Mankovich wrote:

> This structure would provide the ability for a consumer to do a ceilometer
> resource list using the Ironic Node ID as the Resource ID to get all the 
> sensors
> in a given platform.   The consumer would then then iterate over each of the
> sensors to get the samples it wanted.   In order to retain the information as 
> to
> who provide the sensors, I would like to propose that a standard
> "sensor_provider" field be added to the resource_metadata for every sensor 
> where
> the "sensor_provider" field would have a string value indicating the driver 
> that
> provided the sensor information. This is where the string "ipmi", or a
> vendor specific string would be specified.
>
> I understand that this proposed change is not backward compatible with the
> existing naming, but I don't really see a good solution that would retain
> backward compatibility.

I think it's a good idea to drop that suffix in the resource id, as long
as we're sure a node can't have several IPMI sources. :)

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] external AuthN Identity Backend

2014-10-16 Thread Dave Walker
Hi,

Currently we have two ways of doing Identity Auth backends: sql and
ldap.

The SQL backend is the default and is for situations where Keystone is
the canonical Identity provider with username / password being
directly compared to the Keystone database.

LDAP is the current option if Keystone isn't the canonical Identity
provider and passes the username and password to an LDAP server for
comparison and retrieves the groups.

For a few releases we have supported External auth (or Kerberos),
where we authenticate the user at the edge and trust the REMOTE_USER
is valid.  In these situations Keystone doesn't require the Username
or Password to be valid.

Particularly in Kerberos situations, no password is used to
successfully authenticate at the edge.  This works well, but LDAP
cannot be used as no password is passed through.  The other option is
SQL, but that then requires a user to be created in Keystone first.

We do not seem to cover the situation where Identity is provided by an
external mechanism.  The only system currently available is Federation
via SAML, which isn't always the best fit.

Therefore, I'd like to suggest the introduction of a third backend.
This would be the external identity provider.  This would seem to be
pretty simple, as the current checks would simply return success (as
we trust auth at the edge), and not store user_id in the database, but
generate it at runtime.
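
To illustrate, something like this (hypothetical sketch only, not the real
keystone driver interface; names are made up for the example):

class ExternalIdentity(object):
    """Trust AuthN done at the edge; synthesize the user at runtime."""

    def get_user_by_name(self, remote_user, domain_id):
        # No SQL/LDAP lookup: the web server (Kerberos, Basic, client
        # certs, ...) has already authenticated REMOTE_USER.
        return {'id': self._user_id(remote_user),   # derived, never stored
                'name': remote_user,
                'domain_id': domain_id,
                'enabled': True}

    def _user_id(self, remote_user):
        import hashlib
        return hashlib.sha256(remote_user.encode('utf-8')).hexdigest()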

The issue I have is that this doesn't cover Group membership.

So, am I a:
 - Barking totally up the wrong tree
 - Add support to the current LDAP plugin to support external auth
(but still use LDAP for groups)
 - Write a standalone external plugin, but then do what for Groups?  I
would be reasonably happy to just have 1:1 mapping of users to groups.

Does this make sense?

Thanks

--
Kind Regards,
Daviey Walker

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] external AuthN Identity Backend

2014-10-16 Thread Steve Martinelli
I think it depends on what you mean by 'external identity provider' - I
assume it's capable of speaking some sort of federation protocol. So I'd
rather explore adding more support for additional federation
protocols/standards, rather than making our own third backend.

Steve

Dave Walker  wrote on 10/16/2014 02:15:07 PM:

> From: Dave Walker 
> To: OpenStack Development Mailing List ,
> Date: 10/16/2014 02:20 PM
> Subject: [openstack-dev] [Keystone] external AuthN Identity Backend
> 
> Hi,
> 
> Currently we have two ways of doing Identity Auth backends, these are
> sql and ldap.
> 
> The SQL backend is the default and is for situations where Keyston is
> the canonical Identity provider with username / password being
> directly compared to the Keystone database.
> 
> LDAP is the current option if Keystone isn't the canonical Identity
> provider and passes the username and password to an LDAP server for
> comparison and retrieves the groups.
> 
> For a few releases we have supported External auth (or Kerberos),
> where we authenticate the user at the edge and trust the REMOTE_USER
> is valid.  In these situations Keystone doesn't require the Username
> or Password to be valid.
> 
> Particularly in Kerberos situations, no password is used to
> successfully authenticate at the edge.  This works well, but LDAP
> cannot be used as no password is passed through.  The other option is
> SQL, but that then requires a user to be created in Keystone first.
> 
> We do not seem to cover the situation where Identity is provided by an
> external mechanism.  The only system currently available is Federation
> via SAML, which isn't always the best fit.
> 
> Therefore, I'd like to suggest the introduction of a third backend.
> This would be the external identity provider.  This would seem to be
> pretty simple, as the current checks would simply return success (as
> we trust auth at the edge), and not store user_id in the database, but
> generate it at runtime.
> 
> The issue I have, is that this doesn't cover Group membership.
> 
> So, am I a:
>  - Barking totally up the wrong tree
>  - Add support to the current LDAP plugin to support external auth
> (but still use LDAP for groups)
>  - Write a standalone external plugin, but then do what for Groups?  I
> would be reasonably happy to just have 1:1 mapping of users to groups.
> 
> Does this make sense?
> 
> Thanks
> 
> --
> Kind Regards,
> Daviey Walker
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Florian Haas
On Thu, Oct 16, 2014 at 7:48 PM, Jay Pipes  wrote:
>> While one of us (Jay or me) speaking for the other and saying we agree
>> is a distributed consensus problem that dwarfs the complexity of
>> Paxos
>
>
> You've always had a way with words, Florian :)

I knew you'd like that one. :)

>>, *I* for my part do think that an "external" toolset (i.e. one
>>
>> that lives outside the Nova codebase) is the better approach versus
>> duplicating the functionality of said toolset in Nova.
>>
>> I just believe that the toolset that should be used here is
>> Corosync/Pacemaker and not Ceilometer/Heat. And I believe the former
>> approach leads to *much* fewer necessary code changes *in* Nova than
>> the latter.
>
>
> I agree with you that Corosync/Pacemaker is the tool of choice for
> monitoring/heartbeat functionality, and is my choice for compute-node-level
> HA monitoring. For guest-level HA monitoring, I would say use
> Heat/Ceilometer. For container-level HA monitoring, it looks like fleet or
> something like Kubernetes would be a good option.

Here's why I think that's a bad idea: none of these support the
concept of being subordinate to another cluster.

Again, suppose a VM stops responding. Then
Heat/Ceilometer/Kubernetes/fleet would need to know whether the node
hosting the VM is down or not. Only if the node is up or recovered
(which Pacemaker would be responsible for) would the VM HA facility be
able to kick in. Effectively you have two views of the cluster
membership, and that sort of thing always gets messy. In the HA space
we're always facing the same issues when a replication facility
(Galera, GlusterFS, DRBD, whatever) has a different view of the
cluster membership than the cluster manager itself — which *always*
happens for a few seconds on any failover, recovery, or fencing event.

Russell's suggestion, by having remote Pacemaker instances on the
compute nodes tie in with a Pacemaker cluster on the control nodes,
does away with that discrepancy.
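
Roughly, on the control plane cluster that would look something like the
following (names and fencing parameters are placeholders; check the syntax
against the pcs/pacemaker_remote docs):

# manage a compute node as a Pacemaker Remote node; pacemaker_remote runs
# on compute-1 and the connection itself is a monitored resource
pcs resource create compute-1 ocf:pacemaker:remote \
    server=compute-1.example.com reconnect_interval=60
# plus a fencing device per compute node, so a dead host can be fenced
# before its instances are evacuated
pcs stonith create fence-compute-1 fence_ipmilan \
    pcmk_host_list=compute-1 ipaddr=10.0.0.11 login=admin passwd=secret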

> I'm curious to see how the combination of compute-node-level HA and
> container-level HA tools will work together in some of the proposed
> deployment architectures (bare metal + docker containers w/ OpenStack and
> infrastructure services run in a Kubernetes pod or CoreOS fleet).

I have absolutely nothing against an OpenStack cluster using
*exclusively* Kubernetes or fleet for HA management, once those have
reached sufficient maturity. But just about every significant
OpenStack distro out there has settled on Corosync/Pacemaker for the
time being. Let's not shove another cluster manager down their throats
for little to no real benefit.

Cheers,
Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Barbican Juno Release

2014-10-16 Thread Douglas Mendizabal
Hi All,

The Barbican team is proud to announce the final release of the Barbican Key
Management Service for Juno:

https://launchpad.net/barbican/juno/2014.2

This release includes 9 Blueprints and 47 bug fixes.  Check the link above
for the full details.  Many thanks to all the contributors who made this
release possible!

-Doug


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C




smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Adam Lawson
Okay the coffee kicked in.

I can see how my comment could be interpreted that way so let's take a step
backward so I can explain my perspective here.

Amazon was the first to implement a commercial-grade cloud IaaS; Openstack
was developed as an alternative. If we avoided wheel re-invention as a
rule, Openstack would never have been written. That's how I see it.
Automatic fail-over is already done by VMware. If we were looking to avoid
re-invention as our guide to implementing new features, we'd set up a
product referral partnership with VMware, tell our users that HA requires
VMware, dust off our hands and say job well done. No one here is saying
that though, but that's the mindset I think I'm hearing. I champion the
in-house approach not as an effort to develop something that doesn't exist
elsewhere or for the sake of control but because we don't want to be tied
to a single external product for a core feature of Openstack.

When ProductA+ProductB = XYZ, it creates a one-way dependency that I
historically try to avoid. Because if ProductA = Openstack, ProductB is no
longer optional.

Personally, I'm actually speaking more to our approach to how
we scope features for Openstack rather than whether we use Pacemaker,
Nagios, Nova, Heat or something else.

Question: is host HA not achievable using the programs we have in place now
(with modification of course)? If not, I'm still a champion of seeing it done
within our four walls.

Just my 10c or so. ; )



*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Thu, Oct 16, 2014 at 10:53 AM, Florian Haas  wrote:

> On Thu, Oct 16, 2014 at 7:03 PM, Adam Lawson  wrote:
> >
> > Be forewarned; here's my two cents before I've had my morning coffee.
> >
> > It would seem to me that if we were seeking some level of resiliency
> against host failures (if a host fails, evacuate the instances that were
> hosted on it to a host that isn't broken), it would seem that host HA is a
> good approach. The ultimate goal of course is instance HA but the task of
> monitoring individual instances and determining what constitutes "down"
> seems like a much more complex task than detecting when a compute node is
> down. I know that requiring the presence of agents should probably need
> some more brain-cycles since we can't expect additional bytes consuming
> memory on each individual VM.
>
> What Russell is suggesting, though, is actually a very feasible
> approach for compute node HA today and per-instance HA tomorrow.
>
> > Additionally, I'm not really hung up on the 'how' as we all realize
> there several ways to skin that cat, so long as that 'how' is leveraged via
> tools over which we have control and direct influence. Reason being, we may
> not want to leverage features as important as this on tools that change
> outside our control and subsequently shifts the foundation of the feature
> we implemented that was based on how the product USED to work. Basically if
> Pacemaker does what we need then cool but it seems that implementing a
> feature should be built upon a bedrock of programs over which we have a
> direct influence.
>
> That almost sounds a bit like "let's always build a better wheel,
> because control". I'm not sure if that's indeed the intention, but if
> it is then that seems like a bad idea to me.
>
> > This is why Nagios may be able to do it but it's a hack at best. I'm not
> saying Nagios isn't good or ythe hack doesn't work but in the context of an
> Openstack solution, we can't require a single external tool for a feature
> like host or VM HA. Are we suggesting that we tell people who want HA - "go
> use Nagios"? Call me a purist but if we're going to implement a feature, it
> should be our community implementing it because we have some of the best
> minds on staff. ; )
>
> Anyone who thinks that having a monitoring solution to page people and
> then waking up a human to restart the service constitutes HA needs to
> be doused in a bucket of ice water. :)
>
> Cheers,
> Florian
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] external AuthN Identity Backend

2014-10-16 Thread Dave Walker
Hi Steve,

Thanks for your response.  I am talking generally about the external
auth support.  One use case is Kerberos, but for the sake of argument
this could quite easily be Apache Basic auth.  The point is, we have
current support for entrusting AuthN outside of Keystone.

What I was trying to outline is that it seems that the current design
of external auth is that keystone is not in the auth pipeline as we
trust auth at the edge.  However, we then do additional auth within
keystone.

With external auth and SQL, we drop the user provided username and
password on the floor and use what was provided in REMOTE_USER (set by
the webserver).

Therefore the check as it currently stands in SQL is basically 'is
this username in the database'.  The LDAP plugin does Authentication
via username and password, which is clearly not sufficient for
external auth.  The LDAP plugin could be made to check in a similar
manner to SQL 'is this a valid user' - but this would seem to be a
duplicate check, as we already did this at the edge.
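
For illustration only, a rough sketch of that 'trust auth at the edge'
idea in plain Python - this is not the real Keystone driver interface,
and the function name and return shape are made up:

  import hashlib

  def external_check(environ):
      # Trust that the webserver (e.g. mod_auth_kerb or Basic auth)
      # already did AuthN and exposed the result via REMOTE_USER.
      username = environ.get('REMOTE_USER')
      if not username:
          raise Exception('external auth did not set REMOTE_USER')
      # No password check and nothing stored in the database: the
      # user_id is simply derived from the username at runtime.
      user_id = hashlib.sha256(username.encode('utf-8')).hexdigest()
      return {'id': user_id, 'name': username}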

If the webserver granted access to keystone, the user has already been
checked to see if they are a valid user.  However, your response seems
to suggest that current external auth should be formally deprecated?

--
Kind Regards,
Daviey Walker

On 16 October 2014 19:28, Steve Martinelli  wrote:
> I think it depends on what you mean by 'external identity provider' - I
> assume it's capable of speaking some sort of federation protocol. So I'd
> rather explore adding more support for additional federation
> protocols/standards, rather than making our own third backend.
>
> Steve
>
> Dave Walker  wrote on 10/16/2014 02:15:07 PM:
>
>> From: Dave Walker 
>> To: OpenStack Development Mailing List
>> ,
>> Date: 10/16/2014 02:20 PM
>> Subject: [openstack-dev] [Keystone] external AuthN Identity Backend
>>
>> Hi,
>>
>> Currently we have two ways of doing Identity Auth backends, these are
>> sql and ldap.
>>
>> The SQL backend is the default and is for situations where Keystone is
>> the canonical Identity provider with username / password being
>> directly compared to the Keystone database.
>>
>> LDAP is the current option if Keystone isn't the canonical Identity
>> provider and passes the username and password to an LDAP server for
>> comparison and retrieves the groups.
>>
>> For a few releases we have supported External auth (or Kerberos),
>> where we authenticate the user at the edge and trust the REMOTE_USER
>> is valid.  In these situations Keystone doesn't require the Username
>> or Password to be valid.
>>
>> Particularly in Kerberos situations, no password is used to
>> successfully authenticate at the edge.  This works well, but LDAP
>> cannot be used as no password is passed through.  The other option is
>> SQL, but that then requires a user to be created in Keystone first.
>>
>> We do not seem to cover the situation where Identity is provided by an
>> external mechanism.  The only system currently available is Federation
>> via SAML, which isn't always the best fit.
>>
>> Therefore, I'd like to suggest the introduction of a third backend.
>> This would be the external identity provider.  This would seem to be
>> pretty simple, as the current checks would simply return success (as
>> we trust auth at the edge), and not store user_id in the database, but
>> generate it at runtime.
>>
>> The issue I have, is that this doesn't cover Group membership.
>>
>> So, am I a:
>>  - Barking totally up the wrong tree
>>  - Add support to the current LDAP plugin to support external auth
>> (but still use LDAP for groups)
>>  - Write a standalone external plugin, but then do what for Groups?  I
>> would be reasonably happy to just have 1:1 mapping of users to groups.
>>
>> Does this make sense?
>>
>> Thanks
>>
>> --
>> Kind Regards,
>> Daviey Walker
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] external AuthN Identity Backend

2014-10-16 Thread David Chadwick
Dave

when federation is used, the user's group is stored in a mapping rule.
So we do have a mechanism for storing group memberships without using
LDAP or creating an entry for the user in the SQL backend. (The only
time this is kinda not true is if we have a specific rule for each
federated user, so that then each mapping rule is equivalent to an entry
for each user). But usually we might expect many users to use the same
mapping rule.
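
Purely as an illustration (double-check the OS-FEDERATION docs for the
exact schema - the attribute names and group id below are invented), a
single mapping rule shared by many users might look roughly like:

  shared_rule = {
      'rules': [{
          # what arrives from the edge (e.g. REMOTE_USER for Kerberos)
          'remote': [{'type': 'REMOTE_USER'}],
          # what it maps to locally: a user name plus one group whose
          # role assignments apply to everyone matching this rule
          'local': [
              {'user': {'name': '{0}'}},
              {'group': {'id': 'SOME_GROUP_ID'}},
          ],
      }]
  }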

Mapping rules should be usable for Kerberos logins. I dont know if the
current code does have this ability or not, but if it doesn't, then it
should be re-engineered to. (it was always in my design that all remote
logins should have a mapping capability)

regards

David

On 16/10/2014 19:15, Dave Walker wrote:
> Hi,
> 
> Currently we have two ways of doing Identity Auth backends, these are
> sql and ldap.
> 
> The SQL backend is the default and is for situations where Keystone is
> the canonical Identity provider with username / password being
> directly compared to the Keystone database.
> 
> LDAP is the current option if Keystone isn't the canonical Identity
> provider and passes the username and password to an LDAP server for
> comparison and retrieves the groups.
> 
> For a few releases we have supported External auth (or Kerberos),
> where we authenticate the user at the edge and trust the REMOTE_USER
> is valid.  In these situations Keystone doesn't require the Username
> or Password to be valid.
> 
> Particularly in Kerberos situations, no password is used to
> successfully authenticate at the edge.  This works well, but LDAP
> cannot be used as no password is passed through.  The other option is
> SQL, but that then requires a user to be created in Keystone first.
> 
> We do not seem to cover the situation where Identity is provided by an
> external mechanism.  The only system currently available is Federation
> via SAML, which isn't always the best fit.
> 
> Therefore, I'd like to suggest the introduction of a third backend.
> This would be the external identity provider.  This would seem to be
> pretty simple, as the current checks would simply return success (as
> we trust auth at the edge), and not store user_id in the database, but
> generate it at runtime.
> 
> The issue I have, is that this doesn't cover Group membership.
> 
> So, am I a:
>  - Barking totally up the wrong tree
>  - Add support to the current LDAP plugin to support external auth
> (but still use LDAP for groups)
>  - Write a standalone external plugin, but then do what for Groups?  I
> would be reasonably happy to just have 1:1 mapping of users to groups.
> 
> Does this make sense?
> 
> Thanks
> 
> --
> Kind Regards,
> Daviey Walker
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] external AuthN Identity Backend

2014-10-16 Thread David Stanek
On Thu, Oct 16, 2014 at 2:54 PM, Dave Walker  wrote:

> Hi Steve,
>
> Thanks for your response.  I am talking generally about the external
> auth support.  One use case is Kerberos, but for the sake of argument
> this could quite easily be Apache Basic auth.  The point is, we have
> current support for entrusting AuthN outside of Keystone.
>
> What I was trying to outline is that it seems that the current design
> of external auth is that keystone is not in the auth pipeline as we
> trust auth at the edge.  However, we then do additional auth within
> keystone.
>
> With external auth and SQL, we drop the user provided username and
> password on the floor and use what was provided in REMOTE_USER (set by
> the webserver).
>
> Therefore the check as it currently stands in SQL is basically 'is
> this username in the database'.  The LDAP plugin does Authentication
> via username and password, which is clearly not sufficient for
> external auth.  The LDAP plugin could be made to check in a similar
> manner to SQL 'is this a valid user' - but this would seem to be a
> duplicate check, as we already did this at the edge.
>
> If the webserver granted access to keystone, the user has already been
> checked to see if they are a valid user.  However, your response seems
> to suggest that current external auth should be formally deprecated?


I may be missing something, but can you use the external auth method with
the LDAP backend?

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] external AuthN Identity Backend

2014-10-16 Thread Dave Walker
On 16 October 2014 20:07, David Stanek  wrote:

> I may be missing something, but can you use the external auth method with
> the LDAP backend?
>

No, as the purpose of the LDAP backend is to validate that the user/pass
combination is valid.  With the external auth plugin, these are not
provided to keystone (and may not even exist).  If they did exist, we
would be doing auth at the edge and at the backend - which seems
needlessly expensive.

--
Kind Regards,
Daviey Walker

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] external AuthN Identity Backend

2014-10-16 Thread Dave Walker
Hi,

I think I considered the Federated plugin as a mismatch as it dealt
with 'remote' auth rather than 'external' auth.  I thought it was for
purely handling SSO / SAML2, and not being subordinate to auth with
the webserver.

I'll dig into the federation plugin more, thanks.

--
Kind Regards,
Dave Walker

On 16 October 2014 19:58, David Chadwick  wrote:
> Dave
>
> when federation is used, the user's group is stored in a mapping rule.
> So we do have a mechanism for storing group memberships without using
> LDAP or creating an entry for the user in the SQL backend. (The only
> time this is kinda not true is if we have a specific rule for each
> federated user, so that then each mapping rule is equivalent to an entry
> for each user). But usually we might expect many users to use the same
> mapping rule.
>
> Mapping rules should be usable for Kerberos logins. I dont know if the
> current code does have this ability or not, but if it doesn't, then it
> should be re-engineered to. (it was always in my design that all remote
> logins should have a mapping capability)
>
> regards
>
> David
>
> On 16/10/2014 19:15, Dave Walker wrote:
>> Hi,
>>
>> Currently we have two ways of doing Identity Auth backends, these are
>> sql and ldap.
>>
>> The SQL backend is the default and is for situations where Keystone is
>> the canonical Identity provider with username / password being
>> directly compared to the Keystone database.
>>
>> LDAP is the current option if Keystone isn't the canonical Identity
>> provider and passes the username and password to an LDAP server for
>> comparison and retrieves the groups.
>>
>> For a few releases we have supported External auth (or Kerberos),
>> where we authenticate the user at the edge and trust the REMOTE_USER
>> is valid.  In these situations Keystone doesn't require the Username
>> or Password to be valid.
>>
>> Particularly in Kerberos situations, no password is used to
>> successfully authenticate at the edge.  This works well, but LDAP
>> cannot be used as no password is passed through.  The other option is
>> SQL, but that then requires a user to be created in Keystone first.
>>
>> We do not seem to cover the situation where Identity is provided by an
>> external mechanism.  The only system currently available is Federation
>> via SAML, which isn't always the best fit.
>>
>> Therefore, I'd like to suggest the introduction of a third backend.
>> This would be the external identity provider.  This would seem to be
>> pretty simple, as the current checks would simply return success (as
>> we trust auth at the edge), and not store user_id in the database, but
>> generate it at runtime.
>>
>> The issue I have, is that this doesn't cover Group membership.
>>
>> So, am I a:
>>  - Barking totally up the wrong tree
>>  - Add support to the current LDAP plugin to support external auth
>> (but still use LDAP for groups)
>>  - Write a standalone external plugin, but then do what for Groups?  I
>> would be reasonably happy to just have 1:1 mapping of users to groups.
>>
>> Does this make sense?
>>
>> Thanks
>>
>> --
>> Kind Regards,
>> Daviey Walker
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Russell Bryant
On 10/16/2014 02:40 PM, Adam Lawson wrote:
> Question: is host HA not achievable using the programs we have in place
> now (with modification of course)? If not, I'm still a champion to see
> it done within our four walls.

Yes, it is achievable (without modification, even).

That was the primary point of:

  http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/

I think there's work to do to build up a reference configuration, test
it out, and document it.  I believe all the required software exists and
is already in use in many OpenStack deployments for other reasons.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack] [Barbican] [Cinder] Cinder and Barbican

2014-10-16 Thread Douglas Mendizabal
Hi Giuseppe,

Someone from the Cinder team can correct me if I’m wrong, but I don’t think
that Cinder has done any integration with Barbican yet.

-Douglas


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C

From:  Giuseppe Galeota 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)"

Date:  Thursday, October 16, 2014 at 9:54 AM
To:  OpenStack Development Mailing List 
Subject:  [openstack-dev]  [OpenStack] [Barbican] Cinder and Barbican

Dear all,
is Cinder capable today to use Barbican for encryption? If yes, can you
attach some useful doc?

Thank you,
Giuseppe




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-16 Thread Florian Haas
On Thu, Oct 16, 2014 at 9:40 PM, Russell Bryant  wrote:
> On 10/16/2014 02:40 PM, Adam Lawson wrote:
>> Question: is host HA not achievable using the programs we have in place
>> now (with modification of course)? If not, I'm still a champion to see
>> it done within our four walls.
>
> Yes, it is achievable (without modification, even).
>
> That was the primary point of:
>
>   http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/
>
> I think there's work to do to build up a reference configuration, test
> it out, and document it.  I believe all the required software exists and
> is already in use in many OpenStack deployments for other reasons.

+1.

Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Summit scheduling - using our time together wisely.

2014-10-16 Thread Clint Byrum
The format has changed slightly this summit, to help encourage a more
collaborative design experience, rather than rapid fire mass-inclusion
summit sessions. So we have two 40-minute long slots, and one whole day
of contributor meetup.[1]

Our etherpad topics page has received quite a few additions now [2], and
so I'd like to hear thoughts on what things we want to talk about in the
meetup versus the sessions.

A few things I think we should stipulate:

* The scheduled sessions will be heavily attended by the community at
  large. This often includes those who are just curious, or those who
  want to make sure that their voice is heard. These sessions should be
  reserved for those topics which have the most external influence or
  are the most dependent on other projects.

* The meetup will be at the end of the week, so by the time it is over we
  can't then go to any other meetups and ask for things / participate
  in those design activities. This reinforces that scheduled session
  time should be spent on topics that are externally focused, so that
  we can take the results of those discussions into any of the sessions
  that follow.

* The Ops Summit is Wednesday/Thursday [3], which overlaps with these
  sessions. I am keenly interested in gathering more contribution from
  those already operating and deploying OpenStack. It can go both ways,
  but I think it might make sense to have more ops-centric topics
  discussed on Friday, when those participants might not be fully
  wrapped up in the ops sessions.

If we can all agree on those points, given the current topics, I think
our scheduled sessions should target at least (but not limited to):

* Cinder + CEPH
* Layer 3 segmentation

I think those might fit into 40 minutes, as long as we hash some things
out here on the mailing list first. Cinder + CEPH is really just a
check-in to make sure we're on track to providing it. Layer 3, I've had
discussions with Ironic and Neutron people and I think we have a plan,
but I wanted to present it in the open and discuss the short term goals
to see if it satisfies what users may want for the Kilo time frame.

So, I would encourage you all to look at the etherpad, and expand on
topics or add more, and then reply to this thread with ideas for how
best to use our precious time together.

[1] http://kilodesignsummit.sched.org/overview/type/tripleo
[2] https://etherpad.openstack.org/p/kilo-tripleo-summit-topics
[3] http://kilodesignsummit.sched.org/overview/type/ops+summit

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] projects still using obsolete oslo modules

2014-10-16 Thread Andrey Kurilin
Thank you for this script!

Status of novaclient: work on porting to oslo.i18n is finished.
Also, the latest code from the incubator has been synced.

On Mon, Oct 13, 2014 at 4:20 PM, Doug Hellmann 
wrote:

> I’ve put together a little script to generate a report of the projects
> using modules that used to be in the oslo-incubator but that have moved to
> libraries [1]. These modules have been deleted, and now only exist in the
> stable/juno branch of the incubator. We do not anticipate back-porting
> fixes except for serious security concerns, so it is important to update
> all projects to use the libraries where the modules now live.
>
> Liaisons, please look through the list below and file bugs against your
> project for any changes needed to move to the new libraries and start
> working on the updates. We need to prioritize this work for early in Kilo
> to ensure that your projects do not fall further out of step. K-1 is the
> ideal target, with K-2 as an absolute latest date. I anticipate having
> several more libraries by the time the K-2 milestone arrives.
>
> Most of the porting work involves adding dependencies and updating import
> statements, but check the documentation for each library for any special
> guidance. Also, because the incubator is updated to use our released
> libraries, you may end up having to port to several libraries *and* sync a
> copy of any remaining incubator dependencies that have not graduated all in
> a single patch in order to have a working copy. I suggest giving your
> review teams a heads-up about what to expect to avoid -2 for the scope of
> the patch.
>
> Doug
>
>
> [1] https://review.openstack.org/#/c/127039/
>
>
> openstack-dev/heat-cfnclient: exception
> openstack-dev/heat-cfnclient: gettextutils
> openstack-dev/heat-cfnclient: importutils
> openstack-dev/heat-cfnclient: jsonutils
> openstack-dev/heat-cfnclient: timeutils
>
> openstack/ceilometer: gettextutils
> openstack/ceilometer: log_handler
>
> openstack/python-troveclient: strutils
>
> openstack/melange: exception
> openstack/melange: extensions
> openstack/melange: utils
> openstack/melange: wsgi
> openstack/melange: setup
>
> openstack/tuskar: config.generator
> openstack/tuskar: db
> openstack/tuskar: db.sqlalchemy
> openstack/tuskar: excutils
> openstack/tuskar: gettextutils
> openstack/tuskar: importutils
> openstack/tuskar: jsonutils
> openstack/tuskar: strutils
> openstack/tuskar: timeutils
>
> openstack/sahara-dashboard: importutils
>
> openstack/barbican: gettextutils
> openstack/barbican: jsonutils
> openstack/barbican: timeutils
> openstack/barbican: importutils
>
> openstack/kite: db
> openstack/kite: db.sqlalchemy
> openstack/kite: jsonutils
> openstack/kite: timeutils
>
> openstack/python-ironicclient: gettextutils
> openstack/python-ironicclient: importutils
> openstack/python-ironicclient: strutils
>
> openstack/python-melangeclient: setup
>
> openstack/neutron: excutils
> openstack/neutron: gettextutils
> openstack/neutron: importutils
> openstack/neutron: jsonutils
> openstack/neutron: middleware.base
> openstack/neutron: middleware.catch_errors
> openstack/neutron: middleware.correlation_id
> openstack/neutron: middleware.debug
> openstack/neutron: middleware.request_id
> openstack/neutron: middleware.sizelimit
> openstack/neutron: network_utils
> openstack/neutron: strutils
> openstack/neutron: timeutils
>
> openstack/tempest: importlib
>
> openstack/manila: excutils
> openstack/manila: gettextutils
> openstack/manila: importutils
> openstack/manila: jsonutils
> openstack/manila: network_utils
> openstack/manila: strutils
> openstack/manila: timeutils
>
> openstack/keystone: gettextutils
>
> openstack/python-glanceclient: importutils
> openstack/python-glanceclient: network_utils
> openstack/python-glanceclient: strutils
>
> openstack/python-keystoneclient: jsonutils
> openstack/python-keystoneclient: strutils
> openstack/python-keystoneclient: timeutils
>
> openstack/zaqar: config.generator
> openstack/zaqar: excutils
> openstack/zaqar: gettextutils
> openstack/zaqar: importutils
> openstack/zaqar: jsonutils
> openstack/zaqar: setup
> openstack/zaqar: strutils
> openstack/zaqar: timeutils
> openstack/zaqar: version
>
> openstack/python-novaclient: gettextutils
>
> openstack/ironic: config.generator
> openstack/ironic: gettextutils
>
> openstack/cinder: config.generator
> openstack/cinder: excutils
> openstack/cinder: gettextutils
> openstack/cinder: importutils
> openstack/cinder: jsonutils
> openstack/cinder: log_handler
> openstack/cinder: network_utils
> openstack/cinder: strutils
> openstack/cinder: timeutils
> openstack/cinder: units
>
> openstack/python-manilaclient: gettextutils
> openstack/python-manilaclient: importutils
> openstack/python-manilaclient: jsonutils
> openstack/python-manilaclient: strutils
> openstack/python-manilaclient: timeutils
>
> openstack/trove: exception
> openstack/trove: excutils
> openstack/trove: gettextutils
> openstack/trove: importutils
> openstack/tr

Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-10-16 Thread Matt Riedemann



On 9/25/2014 11:09 AM, Day, Phil wrote:

Hi Jay,

So just to be clear, are you saying that we should generate 2
notification messages on Rabbit for every DB update?   That feels
like overkill to me.   If I follow that logic then the current
state transition notifications should also be changed to "Starting to
update task state / finished updating task state" - which seems just
daft and confuses logging with notifications.
Sandy's answer where start/end are used if there is a significant
amount of work between the two and/or the transaction spans multiple
hosts makes a lot more sense to me.   Bracketing a single DB call
with two notification messages rather than just a single one on
success to show that something changed would seem to me to be much
more in keeping with the concept of notifying on key events.


I can see your point, Phil. But what about when the set of DB calls takes a
not-insignificant amount of time? Would the event be considered significant
then? If so, sending only the "I completed creating this thing" notification
message might mask the fact that the total amount of time spent creating
the thing was significant.


Sure, I think there's a judgment call to be made on a case by case basis on 
this.   In general though I'd say it's tasks that do more than just update the
database that need to provide this kind of timing data.   Simple object 
creation / db table inserts don't really feel like they need to be individually 
timed by pairs of messages - if there is value in providing the creation time 
that could just be part of the payload of the single message, rather than 
doubling up on messages.



That's why I think it's safer to always wrap tasks -- a series of actions that
*do* one or more things -- with start/end/abort context managers that send
the appropriate notification messages.
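
(A minimal sketch of such a wrapper, purely for illustration - the
notifier object and event names here are hypothetical, not an existing
Nova API:)

  from contextlib import contextmanager

  @contextmanager
  def notify_task(notifier, event, payload):
      notifier.info(event + '.start', payload)
      try:
          yield
      except Exception:
          notifier.error(event + '.abort', payload)
          raise
      notifier.info(event + '.end', payload)

  # Only the long-running, multi-step task gets the wrapper, e.g.:
  # with notify_task(notifier, 'reboot_server', payload):
  #     ... do the work, emitting state-change notifications as needed ...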

Some notifications are for events that aren't tasks, and I don't think those
need to follow start/end/abort semantics. Your example of an instance state
change is not a task, and therefore would not need a start/end/abort
notification manager. However, the user action of say, "Reboot this server"
*would* have a start/end/abort wrapper for the "REBOOT_SERVER" event.
In between the start and end notifications for this REBOOT_SERVER event,
there may indeed be multiple SERVER_STATE_CHANGED notification
messages sent, but those would not have start/end/abort wrappers around
them.

Make a bit more sense?
-jay


Sure - it sounds like we're agreed in principle then that not all operations 
need start/end/abort messages, only those that are a series of operations.

So in that context the server group operations to me still look like they fall 
into the first groups.

Phil



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Just for closure, we went with the single notification in the server 
groups create/update/delete calls:


https://review.openstack.org/#/c/107954/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] allow-mac-to-be-updated

2014-10-16 Thread Chuck Carlino

On 10/14/2014 05:19 AM, Gary Kotton wrote:

Hi,
I am really in favor of this. The implementation looks great! Nova can
surely benefit from this and we can make Neutron allocations at the API
level and save a ton of complexity at the compute level.
Kudos!
Thanks
Gary

On 10/13/14, 11:31 PM, "Chuck Carlino"  wrote:


Hi,

Is anyone working on this blueprint[1]?  I have an implementation [2]
and would like to write up a spec.


Spec is now available for review.

https://review.openstack.org/129085



Thanks,
Chuck

[1] https://blueprints.launchpad.net/neutron/+spec/allow-mac-to-be-updated
[2] https://review.openstack.org/#/c/112129/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Virtual Mini-Summit

2014-10-16 Thread Nikhil Komawar
We've planned an online + IRC mini-summit for discussing Glance topics prior to 
the Paris summit. Please find the agenda, as well as the topics currently 
included, in this etherpad [0]. If you have a topic in mind just for this 
virtual summit, please feel free to add it to the same etherpad. I'm planning 
to finalize the topics early next week.

If you're interested in attending, there is a doodle poll [1] which asks for 
your preferred date along with your timezone in UTC (please see the etherpad). 
Most likely we will just end up having the sessions spread over 2 days. However, 
if you cannot make it both days, please let me know your preferred topics in the 
etherpad [0].

If you have any questions or concerns, please feel free to reach out.

[0] https://etherpad.openstack.org/p/kilo-glance-virtual-mini-summit
[1] http://doodle.com/e42y2xvrhycqbmu5

Thanks,
-Nikhil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack] [Barbican] [Cinder] Cinder and Barbican

2014-10-16 Thread John Wood
Hello Giuseppe,

There have been blueprints related to this, such as here: 
https://blueprints.launchpad.net/cinder/+spec/encryption-with-barbican

I don't think all the necessary pieces for this feature have landed as of yet 
though...cc-ing Joel and Nate in case they wanted to weigh in here.

Thanks,
John



From: Douglas Mendizabal [douglas.mendiza...@rackspace.com]
Sent: Thursday, October 16, 2014 3:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack] [Barbican] [Cinder] Cinder and Barbican

Hi Giuseppe,

Someone from the Cinder team can correct me if I’m wrong, but I don’t think 
that Cinder has done any integration with Barbican yet.

-Douglas


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5 0CC9 AD14 1F30 2D58 923C

From: Giuseppe Galeota 
mailto:giuseppegale...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, October 16, 2014 at 9:54 AM
To: OpenStack Development Mailing List 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [OpenStack] [Barbican] Cinder and Barbican


Dear all,
is Cinder capable today to use Barbican for encryption? If yes, can you attach 
some useful doc?

Thank you,
Giuseppe
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Maintenance mode in OpenStack during patching/upgrades

2014-10-16 Thread Christopher Aedo
On Tue, Sep 9, 2014 at 2:19 PM, Mike Scherbakov
 wrote:
>> On Tue, Sep 9, 2014 at 6:02 PM, Clint Byrum  wrote:
> The idea is not simply deny or hang requests from clients, but provide them
> "we are in maintenance mode, retry in X seconds"
>
>> You probably would want 'nova host-servers-migrate <host>'
> yeah for migrations - but as far as I understand, it doesn't help with
> disabling this host in the scheduler - there can be a chance that some
> workloads will be scheduled to the host.

Regarding putting a compute host in maintenance mode using "nova
host-update --maintenance enable", it looks like the blueprint and
associated commits were abandoned a year and a half ago:
https://blueprints.launchpad.net/nova/+spec/host-maintenance

It seems that "nova service-disable  nova-compute" effectively
prevents the scheduler from trying to send new work there.  Is this
the best approach to use right now if you want to pull a compute host
out of an environment before migrating VMs off?

I agree with Tim and Mike that having something respond "down for
maintenance" rather than ignore or hang would be really valuable.  But
it also looks like that hasn't gotten much traction in the past -
anyone feel like they'd be in support of reviving the notion of
"maintenance mode"?

-Christopher

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance][QA] python-glanceclient untestable in Python 3.4

2014-10-16 Thread Jeremy Stanley
As part of an effort to deprecate our specialized testing platform
for Python 3.3, many of us have been working to confirm projects
which currently gate on 3.3 can also pass their same test sets under
Python 3.4 (which comes by default in Ubuntu Trusty). For the vast
majority of projects, the differences between 3.3 and 3.4 are
immaterial and no effort is required. For some, minor adjustments
are needed...

For python-glanceclient, we have 22 failing tests in a tox -e py34
run. I spent the better part of today digging into them, and they
basically all stem from the fact that PEP 456 switches the unordered
data hash algorithm from FNV to SipHash in 3.4. The unit tests in
python-glanceclient frequently rely on trying to match
multi-parameter URL queries and JSON built from unordered data types
against predetermined string representations. Put simply, this just
doesn't work if you can't guarantee their ordering.
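
(To make the failure mode concrete - this is only a sketch of one way
to write order-proof assertions, not a claim about how the glanceclient
tests should be restructured:)

  import json
  from urllib.parse import parse_qs, urlencode, urlparse

  params = {'limit': 20, 'sort_key': 'name'}
  url = '/v1/images?' + urlencode(params)
  # Comparing url against a fixed string assumes a dict ordering that
  # PEP 456 no longer guarantees; comparing parsed structures does not:
  assert parse_qs(urlparse(url).query) == {'limit': ['20'],
                                           'sort_key': ['name']}
  # Likewise for JSON bodies: compare loaded objects, or sort the keys:
  assert json.loads(json.dumps(params, sort_keys=True)) == params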

I'm left with a dilemma--I don't really have time to fix all of
these (I started to go through and turn the fixture keys into format
strings embedding dicts filtered through urlencode() for example,
but it created as many new failures as it fixed), however I'd hate
to drop Py3K testing for software which currently has it no matter
how fragile. This is mainly a call for help to anyone with some
background and/or interest in python-glanceclient's unit tests to
get them working under Python 3.4, so that we can eliminate the
burden of maintaining special 3.3 test infrastructure.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Maintenance mode in OpenStack during patching/upgrades

2014-10-16 Thread Matt Riedemann



On 10/16/2014 7:26 PM, Christopher Aedo wrote:

On Tue, Sep 9, 2014 at 2:19 PM, Mike Scherbakov
 wrote:

On Tue, Sep 9, 2014 at 6:02 PM, Clint Byrum  wrote:

The idea is not simply deny or hang requests from clients, but provide them
"we are in maintenance mode, retry in X seconds"


You probably would want 'nova host-servers-migrate <host>'

yeah for migrations - but as far as I understand, it doesn't help with
disabling this host in the scheduler - there can be a chance that some
workloads will be scheduled to the host.


Regarding putting a compute host in maintenance mode using "nova
host-update --maintenance enable", it looks like the blueprint and
associated commits were abandoned a year and a half ago:
https://blueprints.launchpad.net/nova/+spec/host-maintenance

It seems that "nova service-disable  nova-compute" effectively
prevents the scheduler from trying to send new work there.  Is this
the best approach to use right now if you want to pull a compute host
out of an environment before migrating VMs off?

I agree with Tim and Mike that having something respond "down for
maintenance" rather than ignore or hang would be really valuable.  But
it also looks like that hasn't gotten much traction in the past -
anyone feel like they'd be in support of reviving the notion of
"maintenance mode"?

-Christopher

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



host-maintenance-mode is definitely a thing in nova compute via the 
os-hosts API extension and the --maintenance parameter, the compute 
manager code is here [1].  The thing is the only in-tree virt driver 
that implements it is xenapi, and I believe when you put the host in 
maintenance mode it's supposed to automatically evacuate the instances 
to some other host, but you can't target the other host or tell the 
driver, from the API, which instances you want to evacuate, e.g. all, 
none, running only, etc.


[1] 
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2014.2#n3990


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-16 Thread Angus Lees
On Wed, 15 Oct 2014 08:19:03 PM Clint Byrum wrote:
> > > I think it would be a good idea for containers' filesystem contents to
> > > be a whole distro. What's at question in this thread is what should be
> > > running. If we can just chroot into the container's FS and run
> > > apt-get/yum
> > > install our tools, and then nsenter and attach to the running process,
> > > then huzzah: I think we have best of both worlds.
> >
> > 
> >
> > Erm, yes that's exactly what you can do with containers (docker, lxc, and 
> > presumably any other use of containers with a private/ephemeral
> > filesystem).>
> > 
> 
> The point I was trying to make is that this case was not being addressed
> by the "don't run init in a container" crowd. I am in that crowd, and
> thus, wanted to make the point: this is how I think it will work when
> I do finally get around to trying containers for a real workload.

So you don't need init in a container in order to do the above (chroot into a 
container and run apt-get/yum and attach gdb/whatever to a running process).
(Oh wait, perhaps you're already agreeing with that?  I'm confused, so I'm 
going to explain how in case others are curious anyway.)


You just need to find the pid of a process in the container (perhaps using 
docker inspect to go from container name -> pid) and then:
 nsenter -t $pid -m -u -i -n -p -w
will give you a shell (by default) running in the already existing container.  
From there you can install whatever additional packages you want and poke at 
the existing processes from their own (inside the container) pov.  A handy 
script (posted to os-dev previously) is:

 #!/bin/sh
 # name or id
 container=$1; shift 
 pid=$(docker inspect --format '{{ .State.Pid }}' $container)
 if [ $pid -eq 0 ]; then
echo "No pid found for $container -> dead?" >&2
exit 1
 fi
 exec nsenter -t $pid -m -u -i -n -p -w "$@"

If the docker container is destroyed/recreated at any point then your 
modifications to the filesystem (installing additional packages) are lost, 
which 
is probably what you wanted.


An interesting variant of this approach is CoreOS's "toolbox".  CoreOS is a 
very minimal OS that is designed to be just enough to run docker containers.  
Consequently it doesn't have many debugging tools available (no tcpdump, eg).  
But it has a "toolbox" command that is basically a simple shell script that 
pulls down and runs a full-featured generic distro (redhat by default iirc) 
and runs that in "privileged" mode.
Inside that generic distro container you get access to the real network 
devices and processes so you can install additional tools and tcpdump, etc as 
you wish.  When you exit, it gets cleaned up.  Works quite nicely.

-- 
 - Gus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] on Dockerfile patterns

2014-10-16 Thread Lars Kellogg-Stedman
On Fri, Oct 17, 2014 at 12:44:50PM +1100, Angus Lees wrote:
> You just need to find the pid of a process in the container (perhaps using 
> docker inspect to go from container name -> pid) and then:
>  nsenter -t $pid -m -u -i -n -p -w

Note also that the 1.3 release of Docker ("any day now") will sport a
shiny new "docker exec" command that will provide you with the ability
to run commands inside the container via the docker client without
having to involve nsenter (or nsinit).

It looks like:

docker exec <container> ps -fe

Or:

docker exec -it <container> bash

-- 
Lars Kellogg-Stedman  | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] external AuthN Identity Backend

2014-10-16 Thread Nathan Kinder


On 10/16/2014 12:30 PM, Dave Walker wrote:
> Hi,
> 
> I think I considered the Federated plugin as a mismatch as it dealt
> with 'remote' auth rather than 'external' auth.  I thought it was for
> purely handling SSO / SAML2, and not being subordinate to auth with
> the webserver.
> 
> I'll dig into the federation plugin more, thanks.

There are some plans to be able to frontend the user/group lookup in
httpd like you can do with external auth.  This is similar to the
federation approach.  I've written up some details here:

  https://blog-nkinder.rhcloud.com/?p=130

There is still work to do to tie in the mapping code from the
OS-FEDERATION extension, but I think the approach shows a lot of
promise.  Choose your auth module (mod_auth_kerb, mod_ssl, etc.) to set
REMOTE_USER, then use mod_lookup_identity to supply the user/group info
from LDAP.  This is an approach that should get some discussion time at
the Summit.

Thanks,
-NGK

> 
> --
> Kind Regards,
> Dave Walker
> 
> On 16 October 2014 19:58, David Chadwick  wrote:
>> Dave
>>
>> when federation is used, the user's group is stored in a mapping rule.
>> So we do have a mechanism for storing group memberships without using
>> LDAP or creating an entry for the user in the SQL backend. (The only
>> time this is kinda not true is if we have a specific rule for each
>> federated user, so that then each mapping rule is equivalent to an entry
>> for each user). But usually we might expect many users to use the same
>> mapping rule.
>>
>> Mapping rules should be usable for Kerberos logins. I dont know if the
>> current code does have this ability or not, but if it doesn't, then it
>> should be re-engineered to. (it was always in my design that all remote
>> logins should have a mapping capability)
>>
>> regards
>>
>> David
>>
>> On 16/10/2014 19:15, Dave Walker wrote:
>>> Hi,
>>>
>>> Currently we have two ways of doing Identity Auth backends, these are
>>> sql and ldap.
>>>
>>> The SQL backend is the default and is for situations where Keystone is
>>> the canonical Identity provider with username / password being
>>> directly compared to the Keystone database.
>>>
>>> LDAP is the current option if Keystone isn't the canonical Identity
>>> provider and passes the username and password to an LDAP server for
>>> comparison and retrieves the groups.
>>>
>>> For a few releases we have supported External auth (or Kerberos),
>>> where we authenticate the user at the edge and trust the REMOTE_USER
>>> is valid.  In these situations Keystone doesn't require the Username
>>> or Password to be valid.
>>>
>>> Particularly in Kerberos situations, no password is used to
>>> successfully authenticate at the edge.  This works well, but LDAP
>>> cannot be used as no password is passed through.  The other option is
>>> SQL, but that then requires a user to be created in Keystone first.
>>>
>>> We do not seem to cover the situation where Identity is provided by an
>>> external mechanism.  The only system currently available is Federation
>>> via SAML, which isn't always the best fit.
>>>
>>> Therefore, I'd like to suggest the introduction of a third backend.
>>> This would be the external identity provider.  This would seem to be
>>> pretty simple, as the current checks would simply return success (as
>>> we trust auth at the edge), and not store user_id in the database, but
>>> generate it at runtime.
>>>
>>> The issue I have, is that this doesn't cover Group membership.
>>>
>>> So, am I a:
>>>  - Barking totally up the wrong tree
>>>  - Add support to the current LDAP plugin to support external auth
>>> (but still use LDAP for groups)
>>>  - Write a standalone external plugin, but then do what for Groups?  I
>>> would be reasonably happy to just have 1:1 mapping of users to groups.
>>>
>>> Does this make sense?
>>>
>>> Thanks
>>>
>>> --
>>> Kind Regards,
>>> Daviey Walker
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Angus Salkeld
Hi all

I came across some tools [1] & [2] that we could use to make sure we don't
increase our code complexity.

Has anyone had any experience with these or other tools?

radon is the underlying reporting tool and xenon is a "monitor" - meaning
it will fail if a threshold is reached.

To save you the time:
radon cc -nd heat
heat/engine/stack.py
M 809:4 Stack.delete - E
M 701:4 Stack.update_task - D
heat/engine/resources/server.py
M 738:4 Server.handle_update - D
M 891:4 Server.validate - D
heat/openstack/common/jsonutils.py
F 71:0 to_primitive - D
heat/openstack/common/config/generator.py
F 252:0 _print_opt - D
heat/tests/v1_1/fakes.py
M 240:4 FakeHTTPClient.post_servers_1234_action - F

It ranks the complexity from A (best) upwards, the command above (-nd) says
only show D or worse.
If you look at these methods they are getting out of hand and are  becoming
difficult to understand.
I like the idea of having a threshold that says we are not going to just
keep adding to the complexity
of these methods.

This can be enforced with:
xenon --max-absolute E heat
ERROR:xenon:block "heat/tests/v1_1/fakes.py:240 post_servers_1234_action"
has a rank of F

[1] https://pypi.python.org/pypi/radon
[2] https://pypi.python.org/pypi/xenon

If people are open to this, I'd like to add these to the test-requirements
and trial this in Heat
(as part of the pep8 tox target).

Regards
Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Joe Gordon
On Thu, Oct 16, 2014 at 8:11 PM, Angus Salkeld 
wrote:

> Hi all
>
> I came across some tools [1] & [2] that we could use to make sure we don't
> increase our code complexity.
>
> Has anyone had any experience with these or other tools?
>


Flake8 (and thus hacking) has built in McCabe Complexity checking.

flake8 --select=C --max-complexity 10

https://github.com/flintwork/mccabe
http://flake8.readthedocs.org/en/latest/warnings.html

Example on heat: http://paste.openstack.org/show/121561
Example in nova (max complexity of 20):
http://paste.openstack.org/show/121562
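
(If we go this route, wiring it into the existing pep8 tox target
should just be a flake8 config entry - the threshold below is only a
placeholder, not a proposal:)

  # tox.ini or setup.cfg
  [flake8]
  max-complexity = 25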


>
> radon is the underlying reporting tool and xenon is a "monitor" - meaning
> it will fail if a threshold is reached.
>
> To save you the time:
> radon cc -nd heat
> heat/engine/stack.py
> M 809:4 Stack.delete - E
> M 701:4 Stack.update_task - D
> heat/engine/resources/server.py
> M 738:4 Server.handle_update - D
> M 891:4 Server.validate - D
> heat/openstack/common/jsonutils.py
> F 71:0 to_primitive - D
> heat/openstack/common/config/generator.py
> F 252:0 _print_opt - D
> heat/tests/v1_1/fakes.py
> M 240:4 FakeHTTPClient.post_servers_1234_action - F
>
> It ranks the complexity from A (best) upwards, the command above (-nd)
> says only show D or worse.
> If you look at these methods they are getting out of hand and are
> becoming difficult to understand.
> I like the idea of having a threshold that says we are not going to just
> keep adding to the complexity
> of these methods.
>
> This can be enforced with:
> xenon --max-absolute E heat
> ERROR:xenon:block "heat/tests/v1_1/fakes.py:240 post_servers_1234_action"
> has a rank of F
>
> [1] https://pypi.python.org/pypi/radon
> [2] https://pypi.python.org/pypi/xenon
>
> If people are open to this, I'd like to add these to the test-requirements
> and trial this in Heat
> (as part of the pep8 tox target).
>

I think the idea of gating on complexity is a great idea and would like to
see nova adopt this as well. But why not just use flake8's built in stuff?


>
> Regards
> Angus
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Dolph Mathews
I ran this tool on Keystone and liked the results - the two methods that
ranked D should certainly be refactored. It also matched a few methods in
openstack/common.

But flake8 supports complexity checking already, using mccabe. Just enable
complexity checking with:

  $ flake8 --max-complexity 20

It seems to function about the same as radon + xenon, where a D in that
tool is about a --max-complexity of 22. And flake8 obeys our existing tox
directives (such as ignoring openstack/common).

+1 for enabling complexity checking though.

On Thu, Oct 16, 2014 at 10:11 PM, Angus Salkeld 
wrote:

> Hi all
>
> I came across some tools [1] & [2] that we could use to make sure we don't
> increase our code complexity.
>
> Has anyone had any experience with these or other tools?
>
> radon is the underlying reporting tool and xenon is a "monitor" - meaning
> it will fail if a threshold is reached.
>
> To save you the time:
> radon cc -nd heat
> heat/engine/stack.py
> M 809:4 Stack.delete - E
> M 701:4 Stack.update_task - D
> heat/engine/resources/server.py
> M 738:4 Server.handle_update - D
> M 891:4 Server.validate - D
> heat/openstack/common/jsonutils.py
> F 71:0 to_primitive - D
> heat/openstack/common/config/generator.py
> F 252:0 _print_opt - D
> heat/tests/v1_1/fakes.py
> M 240:4 FakeHTTPClient.post_servers_1234_action - F
>
> It ranks the complexity from A (best) upwards, the command above (-nd)
> says only show D or worse.
> If you look at these methods they are getting out of hand and are
> becoming difficult to understand.
> I like the idea of having a threshold that says we are not going to just
> keep adding to the complexity
> of these methods.
>
> This can be enforced with:
> xenon --max-absolute E heat
> ERROR:xenon:block "heat/tests/v1_1/fakes.py:240 post_servers_1234_action"
> has a rank of F
>
> [1] https://pypi.python.org/pypi/radon
> [2] https://pypi.python.org/pypi/xenon
>
> If people are open to this, I'd like to add these to the test-requirements
> and trial this in Heat
> (as part of the pep8 tox target).
>
> Regards
> Angus
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Morgan Fainberg
I agree we should use the flake8 built-in if at all possible. I think complexity 
checking will definitely help us in the long run in keeping code maintainable.

+1 from me. 


—
Morgan Fainberg


On October 16, 2014 at 20:45:35, Joe Gordon (joe.gord...@gmail.com) wrote:
> On Thu, Oct 16, 2014 at 8:11 PM, Angus Salkeld  
> wrote:
>  
> > Hi all
> >
> > I came across some tools [1] & [2] that we could use to make sure we don't
> > increase our code complexity.
> >
> > Has anyone had any experience with these or other tools?
> >
>  
>  
> Flake8 (and thus hacking) has built in McCabe Complexity checking.
>  
> flake8 --select=C --max-complexity 10
>  
> https://github.com/flintwork/mccabe
> http://flake8.readthedocs.org/en/latest/warnings.html
>  
> Example on heat: http://paste.openstack.org/show/121561
> Example in nova (max complexity of 20):
> http://paste.openstack.org/show/121562
>  
>  
> >
> > radon is the underlying reporting tool and xenon is a "monitor" - meaning
> > it will fail if a threshold is reached.
> >
> > To save you the time:
> > radon cc -nd heat
> > heat/engine/stack.py
> > M 809:4 Stack.delete - E
> > M 701:4 Stack.update_task - D
> > heat/engine/resources/server.py
> > M 738:4 Server.handle_update - D
> > M 891:4 Server.validate - D
> > heat/openstack/common/jsonutils.py
> > F 71:0 to_primitive - D
> > heat/openstack/common/config/generator.py
> > F 252:0 _print_opt - D
> > heat/tests/v1_1/fakes.py
> > M 240:4 FakeHTTPClient.post_servers_1234_action - F
> >
> > It ranks the complexity from A (best) upwards, the command above (-nd)
> > says only show D or worse.
> > If you look at these methods they are getting out of hand and are
> > becoming difficult to understand.
> > I like the idea of having a threshold that says we are not going to just
> > keep adding to the complexity
> > of these methods.
> >
> > This can be enforced with:
> > xenon --max-absolute E heat
> > ERROR:xenon:block "heat/tests/v1_1/fakes.py:240 post_servers_1234_action"  
> > has a rank of F
> >
> > [1] https://pypi.python.org/pypi/radon
> > [2] https://pypi.python.org/pypi/xenon
> >
> > If people are open to this, I'd like to add these to the test-requirements
> > and trial this in Heat
> > (as part of the pep8 tox target).
> >
>  
> I think the idea of gating on complexity is a great idea and would like to
> see nova adopt this as well. But why not just use flake8's built in stuff?
>  
>  
> >
> > Regards
> > Angus
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][QA] python-glanceclient untestable in Python 3.4

2014-10-16 Thread Fei Long Wang
Hi Jeremy,

Thanks for the heads up. Is there a bug opened to track this? If not,
I'm going to open one and dig into it. Cheers.

On 17/10/14 14:17, Jeremy Stanley wrote:
> As part of an effort to deprecate our specialized testing platform
> for Python 3.3, many of us have been working to confirm projects
> which currently gate on 3.3 can also pass their same test sets under
> Python 3.4 (which comes by default in Ubuntu Trusty). For the vast
> majority of projects, the differences between 3.3 and 3.4 are
> immaterial and no effort is required. For some, minor adjustments
> are needed...
>
> For python-glanceclient, we have 22 failing tests in a tox -e py34
> run. I spent the better part of today digging into them, and they
> basically all stem from the fact that PEP 456 switches the unordered
> data hash algorithm from FNV to SipHash in 3.4. The unit tests in
> python-glanceclient frequently rely on trying to match
> multi-parameter URL queries and JSON built from unordered data types
> against predetermined string representations. Put simply, this just
> doesn't work if you can't guarantee their ordering.
>
> I'm left with a dilemma--I don't really have time to fix all of
> these (I started to go through and turn the fixture keys into format
> strings embedding dicts filtered through urlencode() for example,
> but it created as many new failures as it fixed), however I'd hate
> to drop Py3K testing for software which currently has it no matter
> how fragile. This is mainly a call for help to anyone with some
> background and/or interest in python-glanceclient's unit tests to
> get them working under Python 3.4, so that we can eliminate the
> burden of maintaining special 3.3 test infrastructure.

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Michael Still
I think nova wins. We have:

./nova/virt/libvirt/driver.py:3736:1: C901
'LibvirtDriver._get_guest_config' is too complex (67)

Michael

On Fri, Oct 17, 2014 at 2:45 PM, Dolph Mathews  wrote:
> I ran this tool on Keystone and liked the results - the two methods that
> ranked D should certainly be refactored. It also matched a few methods in
> openstack/common.
>
> But flake8 supports complexity checking already, using mccabe. Just enable
> complexity checking with:
>
>   $ flake8 --max-complexity 20
>
> It seems to function about the same as radon + xenon, where a D in that tool
> is about a --max-complexity of 22. And flake8 obeys our existing tox
> directives (such as ignoring openstack/common).
>
> +1 for enabling complexity checking though.
>
> On Thu, Oct 16, 2014 at 10:11 PM, Angus Salkeld 
> wrote:
>>
>> Hi all
>>
>> I came across some tools [1] & [2] that we could use to make sure we don't
>> increase our code complexity.
>>
>> Has anyone had any experience with these or other tools?
>>
>> radon is the underlying reporting tool and xenon is a "monitor" - meaning
>> it will fail if a threshold is reached.
>>
>> To save you the time:
>> radon cc -nd heat
>> heat/engine/stack.py
>> M 809:4 Stack.delete - E
>> M 701:4 Stack.update_task - D
>> heat/engine/resources/server.py
>> M 738:4 Server.handle_update - D
>> M 891:4 Server.validate - D
>> heat/openstack/common/jsonutils.py
>> F 71:0 to_primitive - D
>> heat/openstack/common/config/generator.py
>> F 252:0 _print_opt - D
>> heat/tests/v1_1/fakes.py
>> M 240:4 FakeHTTPClient.post_servers_1234_action - F
>>
>> It ranks the complexity from A (best) upwards, the command above (-nd)
>> says only show D or worse.
>> If you look at these methods they are getting out of hand and are
>> becoming difficult to understand.
>> I like the idea of having a threshold that says we are not going to just
>> keep adding to the complexity
>> of these methods.
>>
>> This can be enforced with:
>> xenon --max-absolute E heat
>> ERROR:xenon:block "heat/tests/v1_1/fakes.py:240 post_servers_1234_action"
>> has a rank of F
>>
>> [1] https://pypi.python.org/pypi/radon
>> [2] https://pypi.python.org/pypi/xenon
>>
>> If people are open to this, I'd like to add these to the test-requirements
>> and trial this in Heat
>> (as part of the pep8 tox target).
>>
>> Regards
>> Angus
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Angus Salkeld
On Fri, Oct 17, 2014 at 1:40 PM, Joe Gordon  wrote:

>
>
> On Thu, Oct 16, 2014 at 8:11 PM, Angus Salkeld 
> wrote:
>
>> Hi all
>>
>> I came across some tools [1] & [2] that we could use to make sure we
>> don't increase our code complexity.
>>
>> Has anyone had any experience with these or other tools?
>>
>
>
> Flake8 (and thus hacking) has built in McCabe Complexity checking.
>
> flake8 --select=C --max-complexity 10
>
> https://github.com/flintwork/mccabe
> http://flake8.readthedocs.org/en/latest/warnings.html
>
> Example on heat: http://paste.openstack.org/show/121561
> Example in nova (max complexity of 20):
> http://paste.openstack.org/show/121562
>
>

Cool, no need to add to the requirements :-) Thanks, I didn't think to look
at flake8.


>
>> radon is the underlying reporting tool and xenon is a "monitor" - meaning
>> it will fail if a threshold is reached.
>>
>> To save you the time:
>> radon cc -nd heat
>> heat/engine/stack.py
>> M 809:4 Stack.delete - E
>> M 701:4 Stack.update_task - D
>> heat/engine/resources/server.py
>> M 738:4 Server.handle_update - D
>> M 891:4 Server.validate - D
>> heat/openstack/common/jsonutils.py
>> F 71:0 to_primitive - D
>> heat/openstack/common/config/generator.py
>> F 252:0 _print_opt - D
>> heat/tests/v1_1/fakes.py
>> M 240:4 FakeHTTPClient.post_servers_1234_action - F
>>
>> It ranks the complexity from A (best) upwards, the command above (-nd)
>> says only show D or worse.
>> If you look at these methods they are getting out of hand and are
>> becoming difficult to understand.
>> I like the idea of having a threshold that says we are not going to just
>> keep adding to the complexity
>> of these methods.
>>
>> This can be enforced with:
>> xenon --max-absolute E heat
>> ERROR:xenon:block "heat/tests/v1_1/fakes.py:240 post_servers_1234_action"
>> has a rank of F
>>
>> [1] https://pypi.python.org/pypi/radon
>> [2] https://pypi.python.org/pypi/xenon
>>
>> If people are open to this, I'd like to add these to the
>> test-requirements and trial this in Heat
>> (as part of the pep8 tox target).
>>
>
> I think the idea of gating on complexity is a great idea and would like to
> see nova adopt this as well. But why not just use flake8's built in stuff?
>
>

Totally, it's the end result I want; I am not worried about which tool
we use.

-Angus


>
>> Regards
>> Angus
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Joe Gordon
On Thu, Oct 16, 2014 at 8:53 PM, Morgan Fainberg 
wrote:

> I agree we should use the flake8 built-in if at all possible. I think complexity
> checking will definitely help us in the long run in keeping code maintainable.
>

Well this is scary:

./nova/virt/libvirt/driver.py:3736:1: C901
'LibvirtDriver._get_guest_config' is too complex (67)

http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py#n373

to

http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py#n4113




First step in fixing this, put a cap on it:  
https://review.openstack.org/129125
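
For anyone wiring the same cap into their own project, it normally just goes
in the [flake8] section of tox.ini (or setup.cfg) that the existing pep8 tox
target already reads. A sketch only, not necessarily what the review above
does:

[flake8]
# Illustrative starting point: set the cap roughly at today's worst offender
# (67 in the nova example above) so existing code still passes and only
# further growth fails, then ratchet it down as functions get refactored.
max-complexity = 67
exclude = .venv,.git,.tox,*openstack/common*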



>
> +1 from me.
>
>
> —
> Morgan Fainberg
>
>
> On October 16, 2014 at 20:45:35, Joe Gordon (joe.gord...@gmail.com) wrote:
> > On Thu, Oct 16, 2014 at 8:11 PM, Angus Salkeld
> > wrote:
> >
> > > Hi all
> > >
> > > I came across some tools [1] & [2] that we could use to make sure we
> don't
> > > increase our code complexity.
> > >
> > > Has anyone had any experience with these or other tools?
> > >
> >
> >
> > Flake8 (and thus hacking) has built in McCabe Complexity checking.
> >
> > flake8 --select=C --max-complexity 10
> >
> > https://github.com/flintwork/mccabe
> > http://flake8.readthedocs.org/en/latest/warnings.html
> >
> > Example on heat: http://paste.openstack.org/show/121561
> > Example in nova (max complexity of 20):
> > http://paste.openstack.org/show/121562
> >
> >
> > >
> > > radon is the underlying reporting tool and xenon is a "monitor" -
> meaning
> > > it will fail if a threshold is reached.
> > >
> > > To save you the time:
> > > radon cc -nd heat
> > > heat/engine/stack.py
> > > M 809:4 Stack.delete - E
> > > M 701:4 Stack.update_task - D
> > > heat/engine/resources/server.py
> > > M 738:4 Server.handle_update - D
> > > M 891:4 Server.validate - D
> > > heat/openstack/common/jsonutils.py
> > > F 71:0 to_primitive - D
> > > heat/openstack/common/config/generator.py
> > > F 252:0 _print_opt - D
> > > heat/tests/v1_1/fakes.py
> > > M 240:4 FakeHTTPClient.post_servers_1234_action - F
> > >
> > > It ranks the complexity from A (best) upwards, the command above (-nd)
> > > says only show D or worse.
> > > If you look at these methods they are getting out of hand and are
> > > becoming difficult to understand.
> > > I like the idea of having a threshold that says we are not going to
> just
> > > keep adding to the complexity
> > > of these methods.
> > >
> > > This can be enforced with:
> > > xenon --max-absolute E heat
> > > ERROR:xenon:block "heat/tests/v1_1/fakes.py:240
> post_servers_1234_action"
> > > has a rank of F
> > >
> > > [1] https://pypi.python.org/pypi/radon
> > > [2] https://pypi.python.org/pypi/xenon
> > >
> > > If people are open to this, I'd like to add these to the
> test-requirements
> > > and trial this in Heat
> > > (as part of the pep8 tox target).
> > >
> >
> > I think the idea of gating on complexity is a great idea and would like
> to
> > see nova adopt this as well. But why not just use flake8's built in
> stuff?
> >
> >
> > >
> > > Regards
> > > Angus
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread John Griffith
On Thu, Oct 16, 2014 at 10:09 PM, Joe Gordon  wrote:

>
> On Thu, Oct 16, 2014 at 8:53 PM, Morgan Fainberg <
> morgan.fainb...@gmail.com> wrote:
>
>> I agree we should use the flake8 built-in if at all possible. I think complexity
>> checking will definitely help us in the long run in keeping code maintainable.
>>
>
> Well this is scary:
>
> ./nova/virt/libvirt/driver.py:3736:1: C901 'LibvirtDriver._get_guest_config' 
> is too complex (67)
>
> http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py#n373
>
> to
>
> http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py#n4113
>
>
>
>
> First step in fixing this, put a cap on it:  
> https://review.openstack.org/129125
>
>
>
>>
>> +1 from me.
>>
>>
>> —
>> Morgan Fainberg
>>
>>
>> On October 16, 2014 at 20:45:35, Joe Gordon (joe.gord...@gmail.com)
>> wrote:
>> > On Thu, Oct 16, 2014 at 8:11 PM, Angus Salkeld
>> > wrote:
>> >
>> > > Hi all
>> > >
>> > > I came across some tools [1] & [2] that we could use to make sure we
>> don't
>> > > increase our code complexity.
>> > >
>> > > Has anyone had any experience with these or other tools?
>> > >
>> >
>> >
>> > Flake8 (and thus hacking) has built in McCabe Complexity checking.
>> >
>> > flake8 --select=C --max-complexity 10
>> >
>> > https://github.com/flintwork/mccabe
>> > http://flake8.readthedocs.org/en/latest/warnings.html
>> >
>> > Example on heat: http://paste.openstack.org/show/121561
>> > Example in nova (max complexity of 20):
>> > http://paste.openstack.org/show/121562
>> >
>> >
>> > >
>> > > radon is the underlying reporting tool and xenon is a "monitor" -
>> meaning
>> > > it will fail if a threshold is reached.
>> > >
>> > > To save you the time:
>> > > radon cc -nd heat
>> > > heat/engine/stack.py
>> > > M 809:4 Stack.delete - E
>> > > M 701:4 Stack.update_task - D
>> > > heat/engine/resources/server.py
>> > > M 738:4 Server.handle_update - D
>> > > M 891:4 Server.validate - D
>> > > heat/openstack/common/jsonutils.py
>> > > F 71:0 to_primitive - D
>> > > heat/openstack/common/config/generator.py
>> > > F 252:0 _print_opt - D
>> > > heat/tests/v1_1/fakes.py
>> > > M 240:4 FakeHTTPClient.post_servers_1234_action - F
>> > >
>> > > It ranks the complexity from A (best) upwards, the command above (-nd)
>> > > says only show D or worse.
>> > > If you look at these methods they are getting out of hand and are
>> > > becoming difficult to understand.
>> > > I like the idea of having a threshold that says we are not going to
>> just
>> > > keep adding to the complexity
>> > > of these methods.
>> > >
>> > > This can be enforced with:
>> > > xenon --max-absolute E heat
>> > > ERROR:xenon:block "heat/tests/v1_1/fakes.py:240
>> post_servers_1234_action"
>> > > has a rank of F
>> > >
>> > > [1] https://pypi.python.org/pypi/radon
>> > > [2] https://pypi.python.org/pypi/xenon
>> > >
>> > > If people are open to this, I'd like to add these to the
>> test-requirements
>> > > and trial this in Heat
>> > > (as part of the pep8 tox target).
>> > >
>> >
>> > I think the idea of gating on complexity is a great idea and would like
>> to
>> > see nova adopt this as well. But why not just use flake8's built in
>> stuff?
>> >
>> >
>> > >
>> > > Regards
>> > > Angus
>> > >
>> > > ___
>> > > OpenStack-dev mailing list
>> > > OpenStack-dev@lists.openstack.org
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> > >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Wow!!  Great topic, and a lot of opportunities in Cinder for the Kilo
release.  Thanks!!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Angus Salkeld
On Fri, Oct 17, 2014 at 2:09 PM, Joe Gordon  wrote:

>
> On Thu, Oct 16, 2014 at 8:53 PM, Morgan Fainberg <
> morgan.fainb...@gmail.com> wrote:
>
>> I agree we should use the flake8 built-in if at all possible. I think complexity
>> checking will definitely help us in the long run in keeping code maintainable.
>>
>
> Well this is scary:
>
> ./nova/virt/libvirt/driver.py:3736:1: C901 'LibvirtDriver._get_guest_config' 
> is too complex (67)
>
> http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py#n373
>
> to
>
> http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py#n4113
>
>
>
>
> First step in fixing this, put a cap on it:  
> https://review.openstack.org/129125
>
>
Thanks for that, I mostly copied your patch for Heat:
https://review.openstack.org/#/c/129126/

I wish it would print out the value for all the modules, though; that is
what is nice about radon.

-Angus


>
>
>>
>> +1 from me.
>>
>>
>> —
>> Morgan Fainberg
>>
>>
>> On October 16, 2014 at 20:45:35, Joe Gordon (joe.gord...@gmail.com)
>> wrote:
>> > On Thu, Oct 16, 2014 at 8:11 PM, Angus Salkeld
>> > wrote:
>> >
>> > > Hi all
>> > >
>> > > I came across some tools [1] & [2] that we could use to make sure we
>> don't
>> > > increase our code complexity.
>> > >
>> > > Has anyone had any experience with these or other tools?
>> > >
>> >
>> >
>> > Flake8 (and thus hacking) has built in McCabe Complexity checking.
>> >
>> > flake8 --select=C --max-complexity 10
>> >
>> > https://github.com/flintwork/mccabe
>> > http://flake8.readthedocs.org/en/latest/warnings.html
>> >
>> > Example on heat: http://paste.openstack.org/show/121561
>> > Example in nova (max complexity of 20):
>> > http://paste.openstack.org/show/121562
>> >
>> >
>> > >
>> > > radon is the underlying reporting tool and xenon is a "monitor" -
>> meaning
>> > > it will fail if a threshold is reached.
>> > >
>> > > To save you the time:
>> > > radon cc -nd heat
>> > > heat/engine/stack.py
>> > > M 809:4 Stack.delete - E
>> > > M 701:4 Stack.update_task - D
>> > > heat/engine/resources/server.py
>> > > M 738:4 Server.handle_update - D
>> > > M 891:4 Server.validate - D
>> > > heat/openstack/common/jsonutils.py
>> > > F 71:0 to_primitive - D
>> > > heat/openstack/common/config/generator.py
>> > > F 252:0 _print_opt - D
>> > > heat/tests/v1_1/fakes.py
>> > > M 240:4 FakeHTTPClient.post_servers_1234_action - F
>> > >
>> > > It ranks the complexity from A (best) upwards, the command above (-nd)
>> > > says only show D or worse.
>> > > If you look at these methods they are getting out of hand and are
>> > > becoming difficult to understand.
>> > > I like the idea of having a threshold that says we are not going to
>> just
>> > > keep adding to the complexity
>> > > of these methods.
>> > >
>> > > This can be enforced with:
>> > > xenon --max-absolute E heat
>> > > ERROR:xenon:block "heat/tests/v1_1/fakes.py:240
>> post_servers_1234_action"
>> > > has a rank of F
>> > >
>> > > [1] https://pypi.python.org/pypi/radon
>> > > [2] https://pypi.python.org/pypi/xenon
>> > >
>> > > If people are open to this, I'd like to add these to the
>> test-requirements
>> > > and trial this in Heat
>> > > (as part of the pep8 tox target).
>> > >
>> >
>> > I think the idea of gating on complexity is a great idea and would like
>> to
>> > see nova adopt this as well. But why not just use flake8's built in
>> stuff?
>> >
>> >
>> > >
>> > > Regards
>> > > Angus
>> > >
>> > > ___
>> > > OpenStack-dev mailing list
>> > > OpenStack-dev@lists.openstack.org
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> > >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Steve Kowalik
On 17/10/14 15:23, Angus Salkeld wrote:
> Thanks for that, I mostly copied your patch for Heat:
> https://review.openstack.org/#/c/129126/
> 
> I wish it would print out the value for all the modules tho', that is
> what is nice about radon.

It's a little horrible, but we can do the same thing:

(pep8)steven@undermined:~/openstack/openstack/heat% flake8
--max-complexity 0 | grep 'is too complex' | sort -k 2 -rn -t '(' | head
./heat/engine/stack.py:809:1: C901 'Stack.delete' is too complex (28)
./heat/tests/v1_1/fakes.py:240:1: C901
'FakeHTTPClient.post_servers_1234_action' is too complex (24)
./heat/engine/resources/server.py:738:1: C901 'Server.handle_update' is
too complex (23)
./doc/source/conf.py:62:1: C901 'write_autodoc_index' is too complex (18)
./heat/engine/stack.py:701:1: C901 'Stack.update_task' is too complex (17)
./heat/engine/resources/instance.py:709:1: C901 'Instance.handle_update'
is too complex (17)
./heat/engine/rsrc_defn.py:340:1: C901 'ResourceDefinition.__getitem__'
is too complex (16)
./heat/engine/resources/random_string.py:175:1: C901
'RandomString._generate_random_string' is too complex (16)
./heat/api/cfn/v1/stacks.py:277:1: C901
'StackController.create_or_update' is too complex (16)
./heat/engine/watchrule.py:358:1: C901 'rule_can_use_sample' is too
complex (15)

Cheers,
-- 
Steve
Oh, in case you got covered in that Repulsion Gel, here's some advice
the lab boys gave me: [paper rustling] DO NOT get covered in the
Repulsion Gel.
 - Cave Johnson, CEO of Aperture Science

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Mike Spreitzer
I like the idea of measuring complexity.  I looked briefly at `python -m 
mccabe`.  It seems to measure each method independently.  Is this really 
fair?  If I have a class with some big methods, and I break it down into 
more numerous and smaller methods, then the largest method gets smaller, 
but the number of methods gets larger.  A large number of methods is 
itself a form of complexity.  It is not clear to me that said re-org has 
necessarily made the class easier to understand.  I can also break one 
class into two, but it is not clear to me that the project has necessarily 
become easier to understand.  While it is true that when you truly make a 
project easier to understand you sometimes break it into more classes, it 
is also true that you can do a bad job of re-organizing a set of classes 
while still reducing the size of the largest method.  Has the McCabe 
metric been evaluated on Python projects?  There is a danger in focusing 
on what is easy to measure if that is not really what you want to 
optimize.

BTW, I find that one of the complexity issues for me when I am learning 
about a Python class is doing the whole-program type inference so that I 
know what the arguments are.  It seems to me that if you want to measure 
complexity of Python code then something like the complexity of the 
argument typing should be taken into account.

Regards,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Angus Salkeld
On Fri, Oct 17, 2014 at 3:24 PM, Mike Spreitzer  wrote:

> I like the idea of measuring complexity.  I looked briefly at `python -m
> mccabe`.  It seems to measure each method independently.  Is this really
> fair?


I think it is a good starting point. You still need to write methods that
do one thing well (simple and easy to read/test by themselves). But it stops
complex methods from growing out of hand: the check simply fails, and then
you have to look at refactoring. I think reviews can cover the "how good is
your refactor" part.
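
To make that concrete, a toy sketch (not from any real tree) of what the
per-method score does when a big method is split up; `python -m mccabe` and
flake8's C901 score each function on its own:

def classify(n):             # roughly 4: base 1 plus three ifs
    if n < 0:
        return 'negative'
    if n == 0:
        return 'zero'
    if n % 2:
        return 'odd'
    return 'even'

def _sign(n):                # roughly 3
    if n < 0:
        return 'negative'
    if n == 0:
        return 'zero'
    return None

def _parity(n):              # roughly 2
    if n % 2:
        return 'odd'
    return 'even'

def classify_refactored(n):  # roughly 2: the branches moved, they didn't vanish
    sign = _sign(n)
    if sign is not None:
        return sign
    return _parity(n)

The total number of branches is unchanged, which is your point; what the
gate buys you is that no single method is allowed to keep absorbing them.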


>  If I have a class with some big methods, and I break it down into more
> numerous and smaller methods, then the largest method gets smaller, but the
> number of methods gets larger.  A large number of methods is itself a form
> of complexity.  It is not clear to me that said re-org has necessarily made
> the class easier to understand.  I can also break one class into two, but
> it is not clear to me that the project has necessarily become easier to
> understand.


No, but a well written class *should* be easy to understand in isolation. I
don't think there is a tool to say 'your code sucks'.

-A


>  While it is true that when you truly make a project easier to understand
> you sometimes break it into more classes, it is also true that you can do a
> bad job of re-organizing a set of classes while still reducing the size of
> the largest method.  Has the McCabe metric been evaluated on Python
> projects?  There is a danger in focusing on what is easy to measure if that
> is not really what you want to optimize.
>
> BTW, I find that one of the complexity issues for me when I am learning
> about a Python class is doing the whole-program type inference so that I
> know what the arguments are.  It seems to me that if you want to measure
> complexity of Python code then something like the complexity of the
> argument typing should be taken into account.
>
> Regards,
> Mike
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Michael Davies
On Fri, Oct 17, 2014 at 2:39 PM, Joe Gordon  wrote:

>
> First step in fixing this, put a cap on it:  
> https://review.openstack.org/129125
>

Thanks Joe - I've just put up a similar patch for Ironic:
https://review.openstack.org/129132

-- 
Michael Davies   mich...@the-davies.net
Rackspace Australia
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2014-10-16 22:24:30 -0700:
> I like the idea of measuring complexity.  I looked briefly at `python -m 
> mccabe`.  It seems to measure each method independently.  Is this really 
> fair?  If I have a class with some big methods, and I break it down into 
> more numerous and smaller methods, then the largest method gets smaller, 
> but the number of methods gets larger.  A large number of methods is 
> itself a form of complexity.  It is not clear to me that said re-org has 
> necessarily made the class easier to understand.  I can also break one 
> class into two, but it is not clear to me that the project has necessarily 
> become easier to understand.  While it is true that when you truly make a 
> project easier to understand you sometimes break it into more classes, it 
> is also true that you can do a bad job of re-organizing a set of classes 
> while still reducing the size of the largest method.  Has the McCabe 
> metric been evaluated on Python projects?  There is a danger in focusing 
> on what is easy to measure if that is not really what you want to 
> optimize.
> 
> BTW, I find that one of the complexity issues for me when I am learning 
> about a Python class is doing the whole-program type inference so that I 
> know what the arguments are.  It seems to me that if you want to measure 
> complexity of Python code then something like the complexity of the 
> argument typing should be taken into account.
> 

Fences don't solve problems. Fences make it harder to cause problems.

Of course you can still do the wrong thing and make the code worse. But
you can't do _this_ wrong thing without asserting why you need to.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] add cyclomatic complexity check to pep8 target

2014-10-16 Thread Daniel P. Berrange
On Fri, Oct 17, 2014 at 03:03:43PM +1100, Michael Still wrote:
> I think nova wins. We have:
> 
> ./nova/virt/libvirt/driver.py:3736:1: C901
> 'LibvirtDriver._get_guest_config' is too complex (67)

IMHO this tool is of pretty dubious value. I mean that function is long
for sure, but it is by no means a serious problem in the Nova libvirt
codebase. The stuff it complains about in the libvirt/config.py file is
just an incredibly stupid thing to highlight.

We've got plenty of big problems that need addressing in the OpenStack
codebase and I don't see this tool highlighting any of them. Better to
have people focus on solving actual real problems we have than trying
to get some arbitrary code analysis score to hit a magic value.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev