Re: [openstack-dev] [Nova] Monitoring plugin file names

2013-08-05 Thread Gary Kotton


From: Wang, Shane [mailto:shane.w...@intel.com]
Sent: Monday, August 05, 2013 7:37 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] Monitoring plugin file names

I prefer nova/compute/plugins/virt/libvirt, because I think a plugin might not 
call libvirt or any virt driver.

[Gary Kotton] This works for me :)

By the way, can nova core or those who are interested in the bp review our 
patch sets at 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/ubs,n,z?

Thanks.
--
Shane

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Sunday, August 04, 2013 1:41 AM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [Nova] Monitoring plugin file names

Hi,
As part of the blueprint utilization-aware-scheduling
(https://blueprints.launchpad.net/openstack/?searchtext=utilization-aware-scheduling)
a plugin has been added. I have an issue with the placement of the drivers
(the code is looking good :)) and would like to know what the community thinks.
Here are a few examples:

1.   https://review.openstack.org/#/c/35760/17 - a new file has been added -
nova/compute/plugins/libvirt_cpu_monitor_plugin.py
(https://review.openstack.org/#/c/35760/17/nova/compute/plugins/libvirt_cpu_monitor_plugin.py)

2.   https://review.openstack.org/#/c/39190/ - a new file has been added -
nova/compute/plugins/libvirt_memory_monitor_plugin.py
(https://review.openstack.org/#/c/39190/1/nova/compute/plugins/libvirt_memory_monitor_plugin.py)
I think that these monitoring plugins should either reside in the directory
nova/virt/libvirt/plugins or nova/compute/plugins/virt/libvirt.
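
For concreteness, the two layouts under discussion would look roughly like
this (file names are illustrative only):

nova/virt/libvirt/plugins/cpu_monitor.py             (virt-driver tree)
nova/compute/plugins/virt/libvirt/cpu_monitor.py     (compute plugins tree)
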
It would be interesting to know what others think.
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Autogenerating the Nova v3 API specification

2013-08-05 Thread Christopher Yeoh
Hi,

I'd like to extend the information we produce with the Nova v3 API samples
in order to make it easier to automate as much as possible the generation
of a specification document. This should make it easier to keep the
documentation more accurate and up to date in the future. I believe we can
do this if we generate a meta file for each xml and json response file which
includes some information describing the method for the api sample and ties
together the request and response files.

I've put together a bit of a prototype here (the patch is very ugly at the
moment, just a proof of concept):

https://review.openstack.org/#/c/40169/

As an example, for the hosts extension method that puts a host into or out of
maintenance mode, a file called host-put-maintenance-resp.json.meta is
created:

{
    "description": "Enables a host or puts it in maintenance mode.",
    "extension": "os-hosts",
    "extension_description": "Manages physical hosts.",
    "method": "PUT",
    "request": "host-put-maintenance-req.json",
    "response": "host-put-maintenance-resp.json",
    "section_name": "Hosts",
    "status": 200,
    "url": "os-hosts/{host_name}"
}

A separate metafile is created for each api sample response rather than
trying to accumulate them by extension, because this way it allows for the
tests to still be run in parallel. The metafile also adds logging of the
expected status code, which is not currently done and which I think is an
important part of the API.

On the documentation side we'd have a script that collates all the
metafiles and produces the bulk of the specification document.
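
A minimal sketch of what I imagine that script looking like (purely
illustrative - none of it exists yet):

import glob
import json
from collections import defaultdict

def collate(sample_dir):
    # Group the generated .meta files by API section so a doc tool can
    # render one chapter per extension.
    sections = defaultdict(list)
    for path in glob.glob('%s/*.meta' % sample_dir):
        with open(path) as f:
            meta = json.load(f)
        sections[meta['section_name']].append(meta)
    for name, methods in sorted(sections.items()):
        print('== %s ==' % name)
        for m in sorted(methods, key=lambda m: m['url']):
            print('%s %s -> %s: %s'
                  % (m['method'], m['url'], m['status'], m['description']))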

The changes to the api sample test code are fairly minor:

class HostsSampleJsonTest(api_sample_base.ApiSampleTestBaseV3):
    extension_name = "os-hosts"
    section_name = "Hosts"
    section_doc = "Manages physical hosts."

    def test_host_startup(self):
        response = self._do_get(
            'os-hosts/%s/startup', self.compute.host, 'host_name',
            api_desc='Starts a host.')
        subs = self._get_regexes()
        self._verify_response('host-get-startup', subs, response, 200)

    def test_host_maintenance(self):
        response = self._do_put(
            'os-hosts/%s', self.compute.host, 'host_name',
            'host-put-maintenance-req', {},
            api_desc='Enables a host or puts it in maintenance mode.')
        subs = self._get_regexes()
        self._verify_response('host-put-maintenance-resp', subs,
                              response, 200)


- some definitions per extension and a description for the method per test -
so I don't think it's a significant burden for developers or reviewers.

I'd like to know what people think about heading in this direction, and if
there is any other information we should include. I'm not currently
intending on including this in the first pass of porting the api samples
tests to v3 (I don't think there is time) nor to backport to V2.

Regards,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Ceilometer and nova compute cells

2013-08-05 Thread Julien Danjou
On Fri, Aug 02 2013, Doug Hellmann wrote:

 On Fri, Aug 2, 2013 at 7:47 AM, Julien Danjou jul...@danjou.info wrote:
 That would need the RPC layer to connect to different rabbitmq servers.
 Not sure that's supported yet.


 We'll have that problem in the cell's collector, then, too, right?

If you have an AMQP server per cell and a Ceilometer installation per
cell, that'd work. But I can't see how you can aggregate at a higher
level.

-- 
Julien Danjou
;; Free Software hacker ; freelance consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Event API Access Controls

2013-08-05 Thread Julien Danjou
On Sat, Aug 03 2013, Herndon, John Luke (HPCS - Ft. Collins) wrote:

Hi John,

 Hello, I'm currently implementing the event api blueprint[0], and am
 wondering what access controls we should impose on the event api. The
 purpose of the blueprint is to provide a StackTach equivalent in the
 ceilometer api. I believe that StackTach is used as an internal tool,
 with no access for end users. Given that the event api is targeted at
 administrators, I am currently thinking that it should be limited to
 admin users only. However, I wanted to ask for input on this topic. Any
 arguments for opening it up so users can look at events for their
 resources? Any arguments for not doing so?

You should definitely use the policy system we have in Ceilometer to
check that the user is authenticated and has admin privileges. We
already have such a mechanism in ceilometer.api.acl.

I don't see any point to expose raw operator system data to the users.
That could even be dangerous security wise.
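
Roughly along these lines - a sketch only, not the actual
ceilometer.api.acl code; it just illustrates checking the X-Roles header
that keystone's auth_token middleware sets on validated requests:

import webob.exc

def require_admin(request):
    # auth_token middleware puts the caller's validated roles in X-Roles
    roles = [r.strip().lower()
             for r in request.headers.get('X-Roles', '').split(',')]
    if 'admin' not in roles:
        raise webob.exc.HTTPForbidden('admin role required')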

-- 
Julien Danjou
// Free Software hacker / freelance consultant
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Looking for some help understanding default meters

2013-08-05 Thread Julien Danjou
On Fri, Aug 02 2013, Thomas Maddox wrote:

Hi Thomas,

 I've been poking around to get an understanding of what some of these
 default meters mean in the course of researching this Glance bug
 (https://bugs.launchpad.net/ceilometer/+bug/1201701). I was wondering if
 anyone could explain to me what the instance meter is. The unit 'instance'
 sort of confuses me when each one of these meters is tied to a single
 resource (instance), especially because it looks like a count of all
 notifications regarding a particular instance that hit the bus. Here's some
 output for one of the instances I spun up:
 http://paste.openstack.org/show/42963/.

Are you talking about instance:m1.nano like counters?
These are old counters we introduced a while back to count the number of
instances directly, by summing up all these counters. That's something
that should now be done via the API -- I'll look into removing them.

About the general instance counter, that's just a gauge counter which
always has the value 1, counting the number of 'instances' on a particular
resource -- in this case, the instance itself. So it's just some sort of
heartbeat counting instances.
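
From memory, a sample for it looks roughly like this (field values are
illustrative):

{
    "counter_name": "instance",
    "counter_type": "gauge",
    "counter_unit": "instance",
    "counter_volume": 1,
    "resource_id": "<the instance uuid>",
    "timestamp": "2013-08-05T10:00:00",
    "resource_metadata": {...}
}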

 Another concern I have is I think I
 may have found another bug, because I can delete the instance shown in this
 paste, and it still has a resource state description of 'scheduling' long
 after it's been deleted: http://paste.openstack.org/show/42962/, much like
 the Glance issue I'm currently working on.

When you use resource-show, Ceilometer just returns the latest metadata
it has about the resource. So this should be equal to the
resource_metadata field of the most recent sample it has in its
database (hint: you could go and check this in the db yourself to be
sure).
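
For instance, on the MongoDB backend something like this should show it
(collection name from memory, so double-check):

db.meter.find({'resource_id': '<your instance uuid>'})
        .sort({'timestamp': -1}).limit(1)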

Now, 2 options:
- the latest sample retrieved by Ceilometer shows different metadata,
  so there's a bug in Ceilometer
- the latest sample retrieved by Ceilometer shows the same information,
  so:
   a. a message arrived late and out of order to Ceilometer, so the
   resource metadata is outdated -- we can't do much about it
   b. this is actually what Nova sends to Ceilometer -- most likely a
   bug in Nova.

-- 
Julien Danjou
;; Free Software hacker ; freelance consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Cells and quota reservation expiry

2013-08-05 Thread Kieran Spear
Hi all,

We're having some issues with quota reservations not being deleted. I 
understand this can happen in certain cases and that's why they have an expiry 
time? But I've also noticed that reservations never expire in our system either.

Looking at the code, I think this is because the periodic task to handle 
deleting expired reservations lives in nova-scheduler while the reservations 
themselves are stored in the top-cell db. Is this a bug? Where's a good place 
to move the task to? The cells manager?

ps. What's a sane low value for the reservation expiry time?
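
For reference, the knob I mean is nova.conf's reservation_expire, in
seconds (the default is a day, if I'm reading nova/quota.py right):

[DEFAULT]
# number of seconds until an unused quota reservation expires
reservation_expire = 86400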

Cheers,
Kieran


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Nomination for Mehdi Abaakouk

2013-08-05 Thread Nicolas Barcet
+1


On Wed, Jul 31, 2013 at 8:55 PM, Lu, Lianhao lianhao.lu at intel.com wrote:

 +1 -Lianhao

 Angus Salkeld wrote on 2013-07-31:
  On 31/07/13 10:56 +0200, Julien Danjou wrote:
  Hi,

  I'd like to propose to add Mehdi Abaakouk (sileht) to ceilometer-core.
  He has been a valuable contributor for the last months, doing a lot of
  work in the alarming blueprints, and useful code reviews.

  +1

  --
  Julien Danjou
  -- Free Software hacker - freelance consultant
  -- http://julien.danjou.info
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Nomination for Mehdi Abaakouk

2013-08-05 Thread Julien Danjou
On Wed, Jul 31 2013, Julien Danjou wrote:

 I'd like to propose to add Mehdi Abaakouk (sileht) to ceilometer-core.
 He has been a valuable contributor for the last months, doing a lot of
 work in the alarming blueprints, and useful code reviews.

I've proceeded and added Mehdi to ceilometer-core.

Welcome Mehdi!

-- 
Julien Danjou
;; Free Software hacker ; freelance consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] Cluster launch error

2013-08-05 Thread Thierry Carrez
Linus Nova wrote:
 I installed OpenStack Savanna on the OpenStack Grizzly release. As you can
 see in savanna.log, savanna-api starts and operates correctly.

This is a development mailing-list, focused on development discussions
about the future Havana release. Questions about OpenStack usage (or
already-released versions of OpenStack) should be posted to the general
openst...@lists.openstack.org mailing-list instead.

See: https://wiki.openstack.org/wiki/Mailing_Lists

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Enabling neutron gating

2013-08-05 Thread Sean Dague

On 08/04/2013 12:09 PM, Thierry Carrez wrote:

Nachi Ueno wrote:

It looks like the neutron gating failure rate has improved to match the
non-neutron one, so I would like to suggest enabling neutron gating again.


+1


If those numbers are still valid as of today, I think we should turn it 
back on.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Alarming should be outside of Ceilometer as a separate package.

2013-08-05 Thread Sandy Walsh


On 08/04/2013 08:24 PM, Angus Salkeld wrote:
 On 02/08/13 13:26 -0300, Sandy Walsh wrote:


 On 08/02/2013 12:27 PM, Eoghan Glynn wrote:

 On 08/01/2013 07:22 PM, Doug Hellmann wrote:



  On Thu, Aug 1, 2013 at 10:31 AM, Sandy Walsh
  sandy.wa...@rackspace.com wrote:

 Hey y'all,

 I've had a little thorn in my claw on this topic for a while and
 thought
 I'd ask the larger group.

 I applaud the efforts of the people working on the alarming
 additions
 to
 Ceilometer, but I've got concerns that we're packaging things the
 wrong way.

 I fear we're making another Zenoss/Nagios with Ceilometer. It's
 trying
 to do too much.

 The current trend in the monitoring work (#monitoringlove) is
 to build
 your own stack from a series of components. These components
 take in
 inputs, process them and spit out outputs.
 Collectors/Transformers/Publishers. This is the model CM is
 built on.

 Making an all-singing-all-dancing monolithic monitoring package
 is the
 old way of building these tools. People want to use best-of-breed
 components for their monitoring stack. I'd like to be able to use
  riemann.io for my stream manager, diamond for my
  collector, logstash for my parser, etc. Alarming should just be
  another consumer.

 CM should do one thing well. Collect data from openstack, store
 and
 process them, and make them available to other systems via the
 API or
 publishers. That's all. It should understand these events
 better than
 any other product out there. It should be able to produce
 meaningful
 metrics/meters from these events.

 Alarming should be yet another consumer of the data CM
 produces. Done
 right, the If-This-Then-That nature of the alarming tool could be
 re-used by the orchestration team or perhaps even scheduling.
 Intertwining it with CM is making the whole thing too complex
 and rigid
 (imho).
 
 Heat is using the alarm api.

Perhaps you've misread the intent of that paragraph? It was to
illustrate the *other* projects that could benefit from a standalone
alarming module.

 

 CM should be focused on extending our publishers and input
 plug-ins.

 I'd like to propose that alarming becomes its own project
 outside of
 Ceilometer. Or, at the very least, its own package, external of
 the
 Ceilometer code base. Perhaps it still lives under the CM
 moniker, but
 packaging-wise, I think it should live as a separate code base.

 It is currently implemented as a pair of daemons (one to monitor the
 alarm state, another to send the notifications). Both daemons use a
 ceilometer client to talk to the REST API to consume the sample
 data or
 get the alarm details, as required. It looks like alarms are triggered
 by sending RPC cast message, and that those in turn trigger the
 webhook
 invocation. That seems pretty loosely coupled, as far as the runtime
 goes. Granted, it's still in the main ceilometer code tree, but that
 doesn't say anything about how the distros will package it.

 I'll admit I haven't been closely involved in the development of this
 feature, so maybe my quick review of the code missed something that is
 bringing on this sentiment?

  No, you hit the nail on the head. It's nothing to do with the
  implementation, it's purely the packaging and having it co-exist within
  ceilometer. Since it has its own services, uses Oslo, the CM client, and
  operates via the public API, it should be able to live outside the main
  CM codebase. My concern is that it has a different mandate than CM (or
  the CM mandate is too broad).

  What really brought it on for me was doing code reviews for CM and
  hitting all this alarm stuff and thinking this is a mental context
  switch from what CM does, it really doesn't belong here. (though I'm
  happy to help out with the reviews)

 -S

 Hi Sandy,

  In terms of distro packaging, the case I'm most familiar with (Fedora
  and derivatives) already splits out the ceilometer packaging in a fairly
  fine-grained manner (with separate RPMs for the various services and
  agents). I'd envisage a similar packaging approach being followed for
  the alarming services, so for deployments for which alarming is not
  required, this functionality won't be foisted on anyone.

 Thanks for the feedback Eoghan.

  I don't imagine that should be a big problem. Packaging in the sense of
  the code base is a different issue. If, for all intents and purposes,
  alarming is a separate system - uses external APIs, only uses sanctioned
  CM client libraries, is distro packaged separately and optionally
  installed/deployed - then I don't understand why it has to live in the
  CM codebase?

  Now we could think about splitting it out even further to aid the
  sort of composability you desire; however, this functionality is needed
  by Heat, so it makes sense for it to live in one of the integrated
  projects (as 

Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-05 Thread Dina Belova
Hello, everyone!


Patrick, Julien, thank you so much for your comments. As for the moments
Patrick mentioned in his letter, I'll describe our vision for them below.


1) Patrick, thank you for the idea! I think it would be great to add not
only a 'post-lease actions policy', but also a 'start-lease actions policy'.
I mean having two policies for what happens to a (virtual) resource when
the lease starts: 'start VM automatically' or 'start VM manually'. This
means the user may choose not to use the reserved resources at all, if that
is the behaviour they need.

2) We really believe that creating the lease first, and then using its id
with all the OpenStack projects, is a better idea than 'filling' the lease
with resources at the moment of its creation. I'll try to explain why.
First of all, for virtual reservations we'd need to proxy the Nova,
Cinder, etc. APIs through the Reservation API to reserve a VM, volume or
anything else. The workflow for VM/volume/etc. creation is really
complicated, and only the services written to do this should do it, in our
opinion. Second, this makes adding new reservations to an existing lease
simple and user friendly. Finally, we would not have to copy all the
dashboard pages for instance/volume/... creation into the reservation
Dashboard tab. As for physical reservations, as you mentioned, there is
currently no way to 'create' them the way virtual resources are created
in, for example, Nova's API. That's why there are two ways to solve this
problem and reserve them. The first is to reserve them from the
Reservation Service as is implemented now and described in our document
(the WF-2b part of it). The second variant (which seems more elegant, but
more complicated as well) is to implement the needed parts as a Nova API
extension, to let Nova do the things it does best - managing hosts, VMs,
etc. Our concern here is to avoid doing things Nova (or any other service)
can do much better.


3) We completely agree with you! Our 'nested reservation' vision was
intended only to give the user the opportunity to check the reservation
status of complex virtual resources (stacks) by checking the status of all
their 'nested' components, like VMs, networks, etc. This can be done just
as well by using Heat without the reservation service. Now we are thinking
about reservation as the reservation of an OpenStack resource that has an
ID in the OpenStack service DB, no matter how complex it is (VM, network,
floating IP, stack, etc.)


4) We were thinking about the Reservation Scheduler as a service that
controls the lease life cycle (starting, ending, sending user
notifications, etc.) and communicates with the Reservation Manager via
RPC. The Reservation Manager can send user notifications about an
approaching lease end using Ceilometer (this question has to be
researched). As for the time needed to start a physical reservation or a
complex virtual one, which is used for preparations and settings, I think
it would be better for the user to amortise it over the lease period,
because for physical resources it depends heavily on hardware resources,
and for virtual ones on hardware, network, and the geo location of DCs.


Thank you,

Dina.


On Mon, Aug 5, 2013 at 1:22 PM, Julien Danjou jul...@danjou.info wrote:

 On Fri, Aug 02 2013, Patrick Petit wrote:

  3. The proposal specifies that a lease can contain a combo of different
 resources types reservations (instances, volumes, hosts, Heat
 stacks, ...) that can even be nested and that the reservation
 service will somehow orchestrate their deployment when the lease
 kicks in. In my opinion, many use cases (at least ours) do not
 warrant for that level of complexity and so, if that's something
 that is need to support your use cases, then it should be delivered
 as module that can be loaded optionally in the system. Our preferred
 approach is to use Heat for deployment orchestration.

 I agree that this is not something Climate should be in charge. If the
 user wants to reserve a set of services and deploys them automatically,
 Climate should provide the lease and Heat the deployment orchestration.
 Also, for example, it may be good to be able to reserve automatically
 the right amount of resources needed to deploy a Heat stack via Climate.

 --
 Julien Danjou
 // Free Software hacker / freelance consultant
 // http://julien.danjou.info

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Looking for some help understanding default meters

2013-08-05 Thread Thomas Maddox
Hey Julien,

On 8/5/13 3:14 AM, Julien Danjou jul...@danjou.info wrote:

On Fri, Aug 02 2013, Thomas Maddox wrote:

Hi Thomas,

 I've been poking around to get an understanding of what some of these
 default meters mean in the course of researching this Glance bug
 (https://bugs.launchpad.net/ceilometer/+bug/1201701). I was wondering if
 anyone could explain to me what the instance meter is. The unit
'instance'
 sort of confuses me when each one of these meters is tied to a single
 resource (instance), especially because it looks like a count of all
 notifications regarding a particular instance that hit the bus. Here's
some
 output for one of the instances I spun up:
 http://paste.openstack.org/show/42963/.

Are you talking about instance:m1.nano like counters?
These are old counters we introduced a while back to count the number of
instances directly, by summing up all these counters. That's something
that should now be done via the API -- I'll look into removing them.

About the general instance counter, that's just a gauge counter which
always has the value 1, counting the number of 'instances' on a particular
resource -- in this case, the instance itself. So it's just some sort of
heartbeat counting instances.

I was talking about both of them. Okay, so it is just detecting activity
and existence.


 Another concern I have is I think I
 may have found another bug, because I can delete the instance shown in
this
 paste, and it still has a resource state description of 'scheduling'
long
 after it's been deleted: http://paste.openstack.org/show/42962/, much
like
 the Glance issue I'm currently working on.

When you use resource-show, Ceilometer just returns the latest metadata
it has about the resource. So this should be equal to the
resource_metadata field of the most recent sample it has in its
database (hint: you could go and check this in the db yourself to be
sure).

Now, 2 options:
- the latest sample retrieved by Ceilometer shows different metadata,
  so there's a bug in Ceilometer
- the latest sample retrieved by Ceilometer shows the same information,
  so:
   a. a message arrived late and out of order to Ceilometer, so the
   resource metadata is outdated -- we can't do much about it
   b. this is actually what Nova sends to Ceilometer -- most likely a
   bug in Nova.

That was my thinking too. Judging by it being a scheduling event after the
instance was well into the active state, I would be more inclined to think
the first option is the case for the described bug(s).

Thinking about it, the latter option seems to describe a very real concern
going forward that didn't occur to me when I was wandering around the
code. Specifically regarding option 2a, if message 2 arrives at CM before
message 1 because it ended up on a faster route, then message 1 will
overwrite the metadata from message 2 and we record an incorrect state.
Isn't the nature of network comms for messages at the application layer to
potentially be out of order and in the case of UDP, even lost? What is the
leftover purpose of resource-show when we can't trust its output to
represent the actual state of whatever resource is in question? It seems
that timestamps could be used to prevent overwriting of the latest state
by checking that the incoming notification doesn't have a timestamp less
than the already recorded one. I hope I'm not seeing a problem that
doesn't exist here or misunderstanding something. If so, please correct me!

Thanks again for the help! :)

-Thomas


-- 
Julien Danjou
;; Free Software hacker ; freelance consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Looking for some help understanding default meters

2013-08-05 Thread Julien Danjou
On Mon, Aug 05 2013, Thomas Maddox wrote:

 Thinking about it, the latter option seems to describe a very real concern
 going forward that didn't occur to me when I was wandering around the
 code. Specifically regarding option 2a, if message 2 arrives at CM before
 message 1 because it ended up on a faster route, then message 1 will
 overwrite the metadata from message 2 and we record an incorrect state.
 Isn't the nature of network comms for messages at the application layer to
 potentially be out of order and in the case of UDP, even lost? What is the
 leftover purpose of resource-show when we can't trust its output to
 represent the actual state of whatever resource is in question? It seems
 that timestamps could be used to prevent overwriting of the latest state
 by checking that the incoming notification doesn't have a timestamp less
 than the already recorded one. I hope I'm not seeing a problem that
 doesn't exist here or misunderstanding something. If so, please correct me!

No, you're absolutely right. Checking the timestamp before we override the
resource metadata would be a great idea. Would you mind reporting a bug
first, so we can schedule a fix?
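
Something along these lines, I'd imagine (an illustrative sketch with
made-up storage helpers, not the actual collector code):

def record_sample(conn, sample):
    # Always store the sample itself; the history must stay complete.
    conn.store_sample(sample)
    resource = conn.get_resource(sample['resource_id'])
    # Only promote this sample's metadata to the resource's 'latest
    # known' metadata if it is not older than what we already recorded.
    if (resource is None
            or sample['timestamp'] >= resource['last_timestamp']):
        conn.update_resource(sample['resource_id'],
                             metadata=sample['resource_metadata'],
                             last_timestamp=sample['timestamp'])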

-- 
Julien Danjou
;; Free Software hacker ; freelance consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Autogenerating the Nova v3 API specification

2013-08-05 Thread Anne Gentle
On Mon, Aug 5, 2013 at 8:55 AM, John Garbutt j...@johngarbutt.com wrote:

 Given we seem to be leaning towards WSME:
 http://lists.openstack.org/pipermail/openstack-dev/2013-August/012954.html

 Could we not try to make WSME give us the documentation we need?

 Not sure if its feasible, but it seems like there is a good start to
 that already available:
 https://wsme.readthedocs.org/en/latest/document.html


John, this looks interesting, but I have reservations. Do you know if you
can suppress the SOAP and ExtDirect entries?

Also we already have available the current way to create API docs for
http://api.openstack.org/api-ref.html.

I'm not excited about an inconsistent user experience for reading REST API
docs for one project but not others. Where will it be published? How will
readers find it? And so on. It's good for Compute to lead the way and try
things, but I'd like to know if other projects are willing to follow?
Looking for input from other projects.

Anne


 John

 On 5 August 2013 08:44, Christopher Yeoh cbky...@gmail.com wrote:
  Hi,
 
  I'd like to extend the information we produce with the Nova v3 API
  samples in order to make it easier to automate as much as possible the
  generation of a specification document. This should make it easier to
  keep the documentation more accurate and up to date in the future. I
  believe we can do this if we generate a meta file for each xml and json
  response file which includes some information describing the method for
  the api sample and ties together the request and response files.

  I've put together a bit of a prototype here (the patch is very ugly at
  the moment, just a proof of concept):
 
  https://review.openstack.org/#/c/40169/
 
  As an example, for the hosts extension method that puts a host into or
  out of maintenance mode, a file called
  host-put-maintenance-resp.json.meta is created:

  {
      "description": "Enables a host or puts it in maintenance mode.",
      "extension": "os-hosts",
      "extension_description": "Manages physical hosts.",
      "method": "PUT",
      "request": "host-put-maintenance-req.json",
      "response": "host-put-maintenance-resp.json",
      "section_name": "Hosts",
      "status": 200,
      "url": "os-hosts/{host_name}"
  }
 
  A separate metafile is created for each api sample response rather than
  trying to accumulate them by extension, because this way it allows for
  the tests to still be run in parallel. The metafile also adds logging of
  the expected status code, which is not currently done and which I think
  is an important part of the API.


Wow, what's your guess on how many files this will be? Trying to think of
this from a doc management standpoint, for troubleshooting when the output
is incorrect.


 
   On the documentation side we'd have a script that collates all the
   metafiles and produces the bulk of the specification document.


I'd like to see the results of that script, and it's good you're thinking
about docs. But if you create the docs from the code it's not quite a spec -
what if the code is incorrect? Maybe I misunderstand.


 
  The changes to the api sample test code are fairly minor:

  class HostsSampleJsonTest(api_sample_base.ApiSampleTestBaseV3):
      extension_name = "os-hosts"
      section_name = "Hosts"
      section_doc = "Manages physical hosts."

      def test_host_startup(self):
          response = self._do_get(
              'os-hosts/%s/startup', self.compute.host, 'host_name',
              api_desc='Starts a host.')
          subs = self._get_regexes()
          self._verify_response('host-get-startup', subs, response, 200)

      def test_host_maintenance(self):
          response = self._do_put(
              'os-hosts/%s', self.compute.host, 'host_name',
              'host-put-maintenance-req', {},
              api_desc='Enables a host or puts it in maintenance mode.')
          subs = self._get_regexes()
          self._verify_response('host-put-maintenance-resp', subs,
                                response, 200)
 
 
  - some definitions per extension and a description for the method per
  test - so I don't think it's a significant burden for developers or
  reviewers.
 
  I'd like to know what people think about heading in this direction, and
 if
  there is any other information we should include. I'm not currently
  intending on including this in the first pass of porting the api samples
  tests to v3 (I don't think there is time) nor to backport to V2.
 


I fully support these efforts, and I just want to be sure I understand the
eventual outcomes and ramifications.
Thanks,
Anne


   Regards,
 
  Chris
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
annegen...@justwriteclick.com

Re: [openstack-dev] [nova][glance] Future of nova's image API

2013-08-05 Thread John Garbutt
On 3 August 2013 03:07, Christopher Yeoh cbky...@gmail.com wrote:
 Some people had concerns about exposing the glance api publicly and so
 wanted to retain the images support in Nova.
 So the consensus seemed to be to leave the images support in, but to demote
 it from core. So people who don't want it exclude the os-images extension.

I think a lot of the concern was around RBAC, but it seems most of that
will be fixed by the end of Havana:
https://blueprints.launchpad.net/glance/+spec/api-v2-property-protection

Given v3 will not be finished till Icehouse, maybe we should look
at removing the os-images extension for now, and putting it back in for
Icehouse if it causes people real headaches?

 Just as I write this I've realised that the servers api currently returns
 links to the image used for the instance. And that won't be valid if the
 images extension is not loaded. So probably have some work to do there to
 support  that properly.

Have we decided a good strategy for this in v3? Referring to image in
glance, and networks and ports in neutron.

The pragmatic part of me says:
* just use the uuid - it's what the users will input when booting servers

But I wonder if a REST purist would say:
* an image is a REST resource, so we should have a URL pointing to the
exposed glance service?

What do you think? I just want to make sure we make a deliberate choice.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Autogenerating the Nova v3 API specification

2013-08-05 Thread Christopher Yeoh
On Mon, 5 Aug 2013 14:55:15 +0100
John Garbutt j...@johngarbutt.com wrote:
 Given we seem to be leaning towards WSME:
 http://lists.openstack.org/pipermail/openstack-dev/2013-August/012954.html
 
 Could we not try to make WSME give us the documentation we need?
 
 Not sure if its feasible, but it seems like there is a good start to
 that already available:
 https://wsme.readthedocs.org/en/latest/document.html

Hrm, it's not clear from there how the API samples are generated.
But more generally I have a concern with making the V3 API specification
dependent on getting WSME merged - since I think it's a reasonably big
chunk of work and certainly won't land until sometime in the Icehouse
timeframe. In the meantime, without some automation of the process, it's
likely we won't have a V3 API spec, as there are around 60 extensions
(with all their methods) to document.

Regards,

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Autogenerating the Nova v3 API specification

2013-08-05 Thread Christopher Yeoh
On Mon, 5 Aug 2013 09:15:33 -0500
Anne Gentle annegen...@justwriteclick.com wrote:

 On Mon, Aug 5, 2013 at 8:55 AM, John Garbutt j...@johngarbutt.com
 wrote:
  On 5 August 2013 08:44, Christopher Yeoh cbky...@gmail.com wrote:
    A separate metafile is created for each api sample response
    rather than trying to accumulate them by extension, because this
    way it allows for the tests to still be run in parallel. The
    metafile also adds logging of the expected status code, which is
    not currently done and which I think is an important part of the
    API.
 
 
  Wow, what's your guess on how many files this will be? Trying to think
  of this from a doc management standpoint, for troubleshooting when the
  output is incorrect.

Well, around 60 extensions with say an average of 4-5 methods each, times
2 for JSON/XML. Note that we already generate a lot more files than that
for the api samples. We could theoretically collate the files in a post
process on the Nova side, I guess, which would mean one meta file per
extension instead, but I don't think it really matters whether that
happens on the Nova tree side or the documentation side.

I share the concerns around troubleshooting. It would always have to be
a case of either fixing the script or fixing the source data (api samples
or metadata), and not manually patching the end result.

  
    On the documentation side we'd have a script that collates all the
    metafiles and produces the bulk of the specification document.
 
 
  I'd like to see the results of that script, and it's good you're
  thinking about docs. But if you create the docs from the code it's
  not quite a spec - what if the code is incorrect? Maybe I
  misunderstand.

The script doesn't exist yet, but Kersten is looking at what would need
to be done (I don't understand enough of what is required at the doc end
to write it myself).

We currently do create the docs from the code, so saying it's a
specification is a bit backwards. It'd be a document of how we actually
behave rather than how we're theoretically supposed to. But that's
essentially what api.openstack.org/api_refs is now, with the api samples
generated from code.

 
 I fully support these efforts, and I just want to be sure I
 understand the eventual outcomes and ramifications.

Cool - just throwing this out now to get a bit of a sanity check around
it. And I do want to establish that there is enough information in the
metafile to automate the process. I'd eventually like to get to the
point where the API doc generation stays pretty much in sync with the
code itself with little manual intervention (probably just resyncing
the api doc tree with the nova tree every now and then).

Regards,

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] Cluster launch error

2013-08-05 Thread Sergey Lukjanov
Hi Thierry,

it looks like this question is about trunk or a recently released Savanna
version, so this mailing list is the right place for such a question, isn't
it? On the other hand, since we in Savanna have separate releases, maybe
openstack-dev is the only right place for Savanna-related questions?

Thank you.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Aug 5, 2013, at 13:49, Thierry Carrez thie...@openstack.org wrote:

 Linus Nova wrote:
  I installed OpenStack Savanna on the OpenStack Grizzly release. As you can
  see in savanna.log, savanna-api starts and operates correctly.
 
 This is a development mailing-list, focused on development discussions
 about the future Havana release. Questions about OpenStack usage (or
 already-released versions of OpenStack) should be posted to the general
 openst...@lists.openstack.org mailing-list instead.
 
 See: https://wiki.openstack.org/wiki/Mailing_Lists
 
 Regards,
 
 -- 
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Autogenerating the Nova v3 API specification

2013-08-05 Thread John Garbutt
Hi Anne,

On 5 August 2013 15:15, Anne Gentle annegen...@justwriteclick.com wrote:
 On Mon, Aug 5, 2013 at 8:55 AM, John Garbutt j...@johngarbutt.com wrote:

 Given we seem to be leaning towards WSME:
 http://lists.openstack.org/pipermail/openstack-dev/2013-August/012954.html

 Could we not try to make WSME give us the documentation we need?

 Not sure if its feasible, but it seems like there is a good start to
 that already available:
 https://wsme.readthedocs.org/en/latest/document.html

 John, this looks interesting, but I have reservations. Do you know if you
 can suppress the SOAP and ExtDirect entries?

Sorry no idea, but we certainly would have to.

 Also we already have available the current way to create API docs for
 http://api.openstack.org/api-ref.html.

 I'm not excited about an inconsistent user experience for reading REST API
 docs for one project but not others. Where will it be published? How will
 readers find it? And so on. It's good for Compute to lead the way and try
 things, but I'd like to know if other projects are willing to follow?
 Looking for input from other projects.

Sorry I was not very clear. I certainly agree with your concerns.

My main concern would be adding a lot of documentation and lots of
validation, and them not being in sync.

We may need to create our own doc generator, and it might only be
loosely based on what they already have upstream for WSME.

Thinking about this a little more, we might want to use their
generator to create info required for the current inputs to the API
doc generation.

However, really just wondering if someone has tried this out for real?
Does it seem feasible?

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Event API Access Controls

2013-08-05 Thread Herndon, John Luke (HPCS - Ft. Collins)
Hi Julien,

On 8/5/13 2:04 AM, Julien Danjou jul...@danjou.info wrote:

On Sat, Aug 03 2013, Herndon, John Luke (HPCS - Ft. Collins) wrote:

Hi John,

  Hello, I'm currently implementing the event api blueprint[0], and am
  wondering what access controls we should impose on the event api. The
  purpose of the blueprint is to provide a StackTach equivalent in the
  ceilometer api. I believe that StackTach is used as an internal tool,
  with no access for end users. Given that the event api is targeted at
  administrators, I am currently thinking that it should be limited to
  admin users only. However, I wanted to ask for input on this topic. Any
  arguments for opening it up so users can look at events for their
  resources? Any arguments for not doing so?

You should definitely use the policy system we have in Ceilometer to
check that the user is authenticated and has admin privileges. We
already have such a mechanism in ceilometer.api.acl.

I don't see any point to expose raw operator system data to the users.
That could even be dangerous security wise.

This plan sounds good to me. We can enable/disable the event api for
users, but is there a way to restrict a user to viewing only his/her
events using the policy system? Or do we not need to do that?
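
For instance, I was picturing a policy.json rule along these lines (the
rule name is invented, just to illustrate the idea):

{
    "telemetry:events:index": "role:admin or project_id:%(project_id)s"
}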

-john


-- 
Julien Danjou
// Free Software hacker / freelance consultant
// http://julien.danjou.info



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Ceilometer and nova compute cells

2013-08-05 Thread Sandy Walsh


On 08/05/2013 04:49 AM, Julien Danjou wrote:
 On Fri, Aug 02 2013, Doug Hellmann wrote:
 
 On Fri, Aug 2, 2013 at 7:47 AM, Julien Danjou jul...@danjou.info wrote:
  That would need the RPC layer to connect to different rabbitmq servers.
  Not sure that's supported yet.


 We'll have that problem in the cell's collector, then, too, right?
 
  If you have an AMQP server per cell and a Ceilometer installation per
  cell, that'd work. But I can't see how you can aggregate at a higher
  level.

At RAX we have one StackTach per region with one worker (collector) per
cell. This should work for CM as well, no? Our reports are therefore per
region, but per cell is available if needed (metadata is included from
each worker).

We expect to have the functionality available from CM by using the
metadata from the underlying events.


 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Event API Access Controls

2013-08-05 Thread Julien Danjou
On Mon, Aug 05 2013, Herndon, John Luke (HPCS - Ft. Collins) wrote:

 This plans sounds good to me. We can enable/disable the event api for
 users, but is there a way to restrict a user to viewing only his/her
 events using the policy system? Or do we not need to do that?

There may be, but we don't want to do that.

-- 
Julien Danjou
;; Free Software hacker ; freelance consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] amending Trove incubation to support NRDB

2013-08-05 Thread Michael Basnight
As per the request from the TC, here is a work-in-progress review for the
redis impl; it's a POC. Please understand that it's more about seeing how
this affects the trove codebase than scrutinizing why I did X or Y in the
redis impl, or whether I should include config value Z.

https://review.openstack.org/#/c/40239/ 


On Jul 29, 2013, at 4:02 PM, Jay Pipes wrote:

 On 07/29/2013 05:12 PM, Michael Basnight wrote:
  Rackspace is interested in creating a redis implementation for Trove,
  and haomai wang is looking to leverage Trove for leveldb integration.
  I've done a proof of concept for redis: it was ~200 lines of guest impl
  code, and I had to make one small change to the core create API, but one
  that will be the new default for creating instances - adding a return of
  a root password. The other differences were that the /users and
  /databases extensions for mysql were not working (for obvious reasons).
  The reason NRDB was not originally part of Trove was a decision that
  there was nothing to show it had a valid impl without substantial
  differences to the API/core system [1].
 
  As you allude to above, it's all about the API :) As long as the API
  does not become either too convoluted from needing to conform to the
  myriad KVS/NRDB standards, or so generic as to be detrimental to
  relational databases, I'm cool with it.
 
  See around 20:35:42. Originally we had petitioned for a RDDB / NRDB system 
  [2].
 
 The path for it is to basically add a redis.py to the guest impl and to 
 instruct the api that redis is the default (config file). Then the 
 api/functionality behaves the same. Feel free to ask questions on list 
 before the tc meeting!
 
 All good in my opinion.
 
 Best,
 -jay
 
 [1] 
 http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-04-30-20.02.log.html
 [2] https://wiki.openstack.org/wiki/ReddwarfAppliesForIncubation#Summary:
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Proposal to add Nikola Đipanov to nova-core

2013-08-05 Thread Russell Bryant
On 07/31/2013 03:10 PM, Russell Bryant wrote:
 Greetings,
 
 I propose that we add Nikola Đipanov to the nova-core team [1].
 
 Nikola has been actively contributing to nova for a while now, both in
 code and reviews.  He provides high quality reviews. so I think he would
 make a good addition to the review team.
 
 https://review.openstack.org/#/q/reviewer:ndipa...@redhat.com,n,z
 
 https://review.openstack.org/#/q/owner:ndipa...@redhat.com,n,z
 
 Please respond with +1/-1.

Enough +1s and no objections.  Welcome to the team, Nikola!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Enabling neutron gating

2013-08-05 Thread James E. Blair
Sean Dague s...@dague.net writes:

 On 08/04/2013 12:09 PM, Thierry Carrez wrote:
 Nachi Ueno wrote:
  It looks like the neutron gating failure rate has improved to match the
  non-neutron one, so I would like to suggest enabling neutron gating
  again.

 +1

 If those numbers are still valid as of today, I think we should turn it 
 back on.

   -Sean

Here is a commit to turn it back on: https://review.openstack.org/40250

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Is WSME really suitable? (Was: [nova] Autogenerating the Nova v3 API specification)

2013-08-05 Thread Mac Innes, Kiall
On 05/08/13 16:09, John Garbutt wrote:
  On 5 August 2013 15:15, Anne Gentle annegen...@justwriteclick.com wrote:
  On Mon, Aug 5, 2013 at 8:55 AM, John Garbutt j...@johngarbutt.com wrote:
 
 Given we seem to be leaning towards WSME:
 http://lists.openstack.org/pipermail/openstack-dev/2013-August/012954.html
 
 Could we not try to make WSME give us the documentation we need?
 
 Not sure if its feasible, but it seems like there is a good start to
 that already available:
 https://wsme.readthedocs.org/en/latest/document.html
 
 John, this looks interesting, but I have reservations. Do you know if you
 can suppress the SOAP and ExtDirect entries?
 Sorry no idea, but we certainly would have to.


While the topic of WSME is open - has anyone actually tried using it?

IMO it's just not ready. Simple things like returning a 404 are not 
currently supported[1].

I would be very cautious about assuming WSME can support anything we 
need when the absolute fundamentals of building a REST API are totally MIA.

Thanks,
Kiall

[1]: 
https://bitbucket.org/cdevienne/wsme/issue/10/returning-404-or-basically-any-status-code

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Looking for some help understanding default meters

2013-08-05 Thread Thomas Maddox
Reported bug: https://bugs.launchpad.net/ceilometer/+bug/1208547


On 8/5/13 8:45 AM, Thomas Maddox thomas.mad...@rackspace.com wrote:

Yep, I'll do that this morning. Thanks!

On 8/5/13 8:40 AM, Julien Danjou jul...@danjou.info wrote:

On Mon, Aug 05 2013, Thomas Maddox wrote:

 Thinking about it, the latter option seems to describe a very real
concern
 going forward that didn't occur to me when I was wandering around the
 code. Specifically regarding option 2a, if message 2 arrives at CM
before
 message 1 because it ended up on a faster route, then message 1 will
 overwrite the metadata from message 2 and we record an incorrect state.
 Isn't the nature of network comms for messages at the application layer
to
 potentially be out of order and in the case of UDP, even lost? What is
the
 leftover purpose of resource-show when we can't trust its output to
 represent the actual state of whatever resource is in question? It
seems
 that timestamps could be used to prevent overwriting of the latest
state
 by checking that the incoming notification doesn't have a timestamp
less
 than the already recorded one. I hope I'm not seeing a problem that
 doesn't exist here or misunderstanding something. If so, please correct
me!

No, you're absolutely right. Checking the timestamp before we override the
resource metadata would be a great idea. Would you mind reporting a bug
first, so we can schedule a fix?

-- 
Julien Danjou
;; Free Software hacker ; freelance consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Enabling neutron gating

2013-08-05 Thread Nachi Ueno
Thanks!

2013/8/5 Clark Boylan clark.boy...@gmail.com:
 On Mon, Aug 5, 2013 at 10:21 AM, James E. Blair jebl...@openstack.org wrote:
 Sean Dague s...@dague.net writes:

 On 08/04/2013 12:09 PM, Thierry Carrez wrote:
 Nachi Ueno wrote:
  It looks like the neutron gating failure rate has improved to match the
  non-neutron one, so I would like to suggest enabling neutron gating
  again.

 +1

 If those numbers are still valid as of today, I think we should turn it
 back on.

   -Sean

 Here is a commit to turn it back on: https://review.openstack.org/40250

 -Jim

 This change has been merged, gate-tempest-devstack-vm-neutron is now
 voting and gating again.

 Clark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Future of nova's image API

2013-08-05 Thread Joe Gordon
On Fri, Aug 2, 2013 at 7:07 PM, Christopher Yeoh cbky...@gmail.com wrote:



 Hi Joe,
 Am on my phone so can't find the links at the moment but there was some
 discussion around this when working out what we should leave out of the v3
 api. Some people had concerns about exposing the glance api publicly and so
 wanted to retain the images support in Nova.
 So the consensus seemed to be to leave the images support in, but to
 demote it from core. So people who don't want it exclude the os-images
 extension.

 Just as I write this I've realised that the servers api currently returns
 links to the image used for the instance. And that won't be valid if the
 images extension is not loaded. So probably have some work to do there to
 support  that properly.


This sounds like part of a bigger question for V3 API.  Can someone
actually run nova with just the core API?



 Regards,

 Chris



  On Sat, Aug 3, 2013 at 6:54 AM, Joe Gordon joe.gord...@gmail.com wrote:

 Hi All,

  even though Glance has been pulled out of Nova years ago, Nova still has
  an images API that proxies back to Glance. Since Nova is in the process
  of creating a new, V3, API, we now have a chance to re-evaluate this API.

 * Do we still need this in Nova, is there any reason to not just use
 Glance directly?  I have vague concerns about making Glance API publicly
 accessible, but I am not sure what the underlying reason is
 * If it is still needed in Nova today, can we remove it in the future and
 if so what is the timeline?

 best,
 Joe Gordon


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Future of nova's image API

2013-08-05 Thread Mark Washenberger
On Mon, Aug 5, 2013 at 7:26 AM, John Garbutt j...@johngarbutt.com wrote:

 On 3 August 2013 03:07, Christopher Yeoh cbky...@gmail.com wrote:
  Some people had concerns about exposing the glance api publicly and so
  wanted to retain the images support in Nova.
  So the consensus seemed to be to leave the images support in, but to
 demote
  it from core. So people who don't want it exclude the os-images
 extension.

 I think a lot of the concern was around RBAC, but seems most of that
 will be fixed by the end of Havana:
 https://blueprints.launchpad.net/glance/+spec/api-v2-property-protection


I don't think this is a big issue. The RBAC approach to properties is just
an attempt to formalize what large public clouds are already doing in their
forks to manage info about image billing. It's not really a critical blocker
for public adoption.




 Given v3 will not be finished until Icehouse, maybe we should look
 at removing the os-images extension for now, and putting it back in for
 Icehouse if it causes people real headaches?

  Just as I write this I've realised that the servers api currently returns
  links to the image used for the instance. And that won't be valid if the
  images extension is not loaded. So probably have some work to do there to
  support  that properly.

 Have we decided a good strategy for this in v3? Referring to image in
 glance, and networks and ports in neutron.

 The pragmatic part of me says:
 * just use the uuid; it's what the users will input when booting servers

 But I wonder if a REST purist would say:
 * an image is a REST resource, so we should have a URL pointing to the
 exposed glance service?


 What do you think? I just want to make sure we make a deliberate choice.

 John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Future of nova's image API

2013-08-05 Thread Joe Gordon
On Mon, Aug 5, 2013 at 9:11 AM, Roman Bogorodskiy rbogorods...@mirantis.com
 wrote:

   Joe Gordon wrote:

  Hi All,
 
  even though Glance was pulled out of Nova years ago, Nova still has an
  images API that proxies back to Glance.  Since Nova is in the process of
  creating a new, V3, API, we now have a chance to re-evaluate this API.
 
  * Do we still need this in Nova, is there any reason to not just use
 Glance
  directly?  I have vague concerns about making Glance API publicly
  accessible, but I am not sure what the underlying reason is

 From the end user point of view, images are strongly tied to
 logical models nova operates with, such as servers, flavors etc. So for
an API user, it would be more convenient to manage all these in a
 single API, IMHO.



-1, I think Monty stated this well

Honestly, I think we should ditch it. Glance is our image service, not
nova, we should use it. For user-experience stuff,
python-openstackclient should be an excellent way to expose both through
a single tool without needing to proxy one service through another.

We don't want nova to be the proxy for all other services; that
partially defeats the purpose of splitting them off.  We have better ways
of making everything look like a single API, such as:

* better python-openstackclient
* Make sure all services can run as one endpoint, on the same port.  So a
REST call to $IP/images/ would go to glance and a call to $IP/instances
would go to nova
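
(A hedged sketch of that second bullet: a plain WSGI path dispatcher, where
'glance_app', 'nova_app' and 'not_found_app' are made-up placeholders rather
than real OpenStack entry points.)

    def make_router(routes, default_app):
        # Send each request to the first backend whose path prefix matches;
        # anything else falls through to default_app.
        def router(environ, start_response):
            path = environ.get('PATH_INFO', '')
            for prefix, app in routes:
                if path.startswith(prefix):
                    return app(environ, start_response)
            return default_app(environ, start_response)
        return router

    # router = make_router([('/images', glance_app),
    #                       ('/instances', nova_app)], not_found_app)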




  * If it is still needed in Nova today, can we remove it in the future and
  if so what is the timeline?
 
  best,
  Joe Gordon



 Roman Bogorodskiy



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Future of nova's image API

2013-08-05 Thread Joshua Harlow
+1 to not having nova be the "all the things" proxy. Hopefully openstack client
can help here, and its usage where needed/applicable.

Sent from my really tiny device...

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [ml2] Configuration file sections for MechanismDrivers

2013-08-05 Thread Kyle Mestery (kmestery)
While working on the OpenDaylight ML2 MechanismDriver, one thing which cropped
up was configuration file options for MechanismDrivers and where those should
be stored. I was initially of the opinion we should put all configuration
sections into the ml2 configuration file, but this could get crowded, and would
make the sample file confusing. The other option is to create them in separate
configuration files per MechanismDriver, and pass those on the command line
when starting the Neutron server. This keeps things tidy from the perspective
that you would only need to modify the ones you plan to run with ML2.
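
(For concreteness, a hedged sketch of what the second option could look like
with oslo.config, which Neutron already uses; the group name, option names,
and the extra file name below are made up for illustration.)

    from oslo.config import cfg

    # Hypothetical per-driver options, registered under their own group.
    odl_opts = [
        cfg.StrOpt('controller_url',
                   help='URL of the OpenDaylight controller'),
        cfg.IntOpt('timeout', default=10,
                   help='HTTP timeout, in seconds'),
    ]
    cfg.CONF.register_opts(odl_opts, group='ml2_odl')

    # The separate file would then be passed alongside the main one:
    #   neutron-server --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
    #                  --config-file /etc/neutron/plugins/ml2/ml2_conf_odl.ini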

So my question is, which route do people prefer here?

I'll have to make some changes to the ML2 devstack configuration to support 
this no matter which way we go forward, as it currently doesn't support the 
concept of separate configuration sections for MechanismDrivers.

Thanks,
Kyle
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] upcoming sprint agenda

2013-08-05 Thread Robert Collins
Hi, we're sketching an agenda here -
https://etherpad.openstack.org/tripleo-havana-sprint

Note that it's primarily a 'doing' event, not a 'meeting' event, so we
don't expect this to be rigid.

Current thoughts are to bring everyone up to speed, then focus on key
issues for H - but please feel free to fixup the etherpad!

-Rob



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Cloud Services

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Looking for some help understanding default meters

2013-08-05 Thread Monsyne Dragon


On 8/5/13 8:40 AM, Julien Danjou jul...@danjou.info wrote:

On Mon, Aug 05 2013, Thomas Maddox wrote:

 Thinking about it, the latter option seems to describe a very real concern
 going forward that didn't occur to me when I was wandering around the
 code. Specifically regarding option 2a, if message 2 arrives at CM before
 message 1 because it ended up on a faster route, then message 1 will
 overwrite the metadata from message 2 and we record an incorrect state.
 Isn't the nature of network comms for messages at the application layer to
 potentially be out of order and in the case of UDP, even lost? What is the
 leftover purpose of resource-show when we can't trust its output to
 represent the actual state of whatever resource is in question? It seems
 that timestamps could be used to prevent overwriting of the latest state
 by checking that the incoming notification doesn't have a timestamp less
 than the already recorded one. I hope I'm not seeing a problem that
 doesn't exist here or misunderstanding something. If so, please correct me!

No you're absolutely right. Checking the timestamp before we override
resource metadata would be a great idea. Would you mind reporting a bug
first, so we can schedule to fix it?

It's probably good to keep in mind that AMQP does not guarantee order of
delivery. 
At any point in the future, if we need to rely on ordering, we will need
to check timestamps too.
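
(A hedged sketch of the guard being proposed; the field names are
illustrative, not the actual ceilometer schema.)

    def resolve_resource_state(stored, incoming):
        # Keep whichever sample carries the newest timestamp, so a stale
        # out-of-order message cannot overwrite newer resource metadata.
        if stored is None or incoming['timestamp'] >= stored['timestamp']:
            return incoming
        return stored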


-- 
Julien Danjou
;; Free Software hacker ; freelance consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Call for testing: 2013.1.3 candidate tarballs

2013-08-05 Thread Alan Pevec
Hi all,

We are scheduled to publish Nova, Keystone, Glance, Networking, Cinder and
Horizon 2013.1.3 releases on Thursday Aug 8.
Ceilometer and Heat were incubating in Grizzly so they're not covered
by the stable
branch policy but Ceilometer and Heat teams are preparing 2013.1.3 releases on
their own, to coincide with other 2013.1.3 stable releases.

The list of issues fixed so far can be seen here:

  https://launchpad.net/nova/+milestone/2013.1.3
  https://launchpad.net/keystone/+milestone/2013.1.3
  https://launchpad.net/glance/+milestone/2013.1.3
  https://launchpad.net/neutron/+milestone/2013.1.3
  https://launchpad.net/cinder/+milestone/2013.1.3
  https://launchpad.net/horizon/+milestone/2013.1.3
  https://launchpad.net/heat/+milestone/2013.1.3
  https://launchpad.net/ceilometer/+milestone/2013.1.3

We'd appreciate anyone who could test the candidate 2013.1.3 tarballs:

  http://tarballs.openstack.org/nova/nova-stable-grizzly.tar.gz
  http://tarballs.openstack.org/keystone/keystone-stable-grizzly.tar.gz
  http://tarballs.openstack.org/glance/glance-stable-grizzly.tar.gz
  http://tarballs.openstack.org/neutron/neutron-stable-grizzly.tar.gz [*]
  http://tarballs.openstack.org/cinder/cinder-stable-grizzly.tar.gz
  http://tarballs.openstack.org/horizon/horizon-stable-grizzly.tar.gz
  http://tarballs.openstack.org/heat/heat-stable-grizzly.tar.gz
  http://tarballs.openstack.org/ceilometer/ceilometer-stable-grizzly.tar.gz


Thanks
Alan

[*] Stable Networking tarball will be renamed to
quantum-2013.1.3.tar.gz before upload to Launchpad

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] Python dependencies: PyPI vs distro packages

2013-08-05 Thread Joshua Harlow
This does sound like a neat approach, a hybrid if you will. Might be something
to try :-)

Sometimes I wish package managers were better, especially with regard to 
'complex' (not really that complex in reality) dependencies.

Any possibility of opening up said Makefiles? It'd be interesting to look at :-)

From: Jay Buffington m...@jaybuff.com
Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: Monday, August 5, 2013 3:37 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [DevStack] Python dependencies: PyPI vs distro
packages

I've been doing continuous deployment via rpms on RHEL 6u3 of glance, keystone,
neutron and nova for about six months now.  I used Anvil for the first three
months, but it required constant updating of dependency versions and it didn't
support quantum.  Also, yum is terrible at managing dependency ranges, which
openstack components often have (e.g. SQLAlchemy>=0.7.8,<=0.7.9).  I'm not
convinced DNF[1] will be much better.

I found it easier to just build a virtualenv and then put all those files in an
rpm.  Then make non-python dependencies (like libxslt) dependencies of that rpm.
Since we did that the number of daily packaging headaches have dropped
significantly.   We have a bunch of Makefiles and jenkins jobs that drive this
on every commit to components we care about.

I was unaware of RDO (or it hadn't been released) when I switched our builds
over.  I'd take a serious look at it, but right now I'm happy with our solution.

[1] http://fedoraproject.org/wiki/Features/DNF


On Mon, Aug 5, 2013 at 10:03 AM, Dean Troyer dtro...@gmail.com wrote:
[Moving a discussion from https://review.openstack.org/40019 to the ML
to get a wider audience]

We've been around this block more than once so let's get it all
documented in one place and see where to go next.  Skip down to
# for more actual discussion...

Given:
* OpenStack now has an official list of Python package versions
required in https://review.openstack.org/p/openstack/requirements
* This list is rarely completely available in any packaged Linux distro
* Developers like new packages that fix their immediate problem
* Packagers dislike the treadmill of constantly upgrading packages for
many reasons (stability, effort, etc)
* Running OpenStack on certain long-term-stability distros (RHEL6) is
seriously a challenge due to the number of out-of-date components,
specifically here many of the python-* packages.
* Fedora and RHEL6 have a nasty configuration of telling pip to
install packages into the same location as RPM-installed packages
setting up hard-to-diagnose problems and irritating sysadmins
everywhere.  FTR, Debian/Ubuntu configure pip to install into
/usr/local and put '/usr/local/lib/python2.7/dist-packages' ahead of
'/usr/lib/python2.7/dist-packages' in sys.path.
* Replacing setuptools on RHEL6 illustrated another issue: removing
python-setuptools took with it a number of other packages that
depended on it.
* OpenStack devs are not in the packaging business.  This has been
decided [citation required?].  Fortunately those in the packaging
business do contribute to OpenStack (THANKS!) and do make a huge
effort to keep up with things like the Ubuntu Cloud Archive and Red
Hat's RDO.

The last week or so of attempting to install Python prereqs from
requirements.txt and installing pip 1.4 to support that rather than
re-invent the wheel and all of the fallout from that illustrates
clearly that neither approach is going to solve our problem.

Summary of the discussion in the review (paraphrased, read the review
to see where I got it wrong):

* packages are evil: we develop and gate based on what is available in
requirements.txt and a non-zero number of those are only in PyPI
* Anvil solved this already: resolves requirements into the RPM
package list and packages anything required from PyPI
* ongoing worries about pip and apt/rpm overwriting each other as
additional things are installed
* packages are necessary:

#

My specific responses:

* proposals to use a tool to automatically decide between package and
PyPI (harlowja, sdague):  this works well on the surface, but anything
that does not take into account the dependencies in these packages
going BOTH ways is going to fail.  For example: on RHEL6 setuptools is
0.6.10, we want 0.9.8 (the merged release w/distribute).  Removing
python-setuptools will also remove python-nose, numpy and other
packages depending on what is installed.  So fine, those can be
re-installed with pip.  But a) we don't want to rebuild numpy (just
bear with me here), and b) the packaged python-nose 0.10.4 meets the
version requirement in requirements.txt so the package will be
re-installed, bringing with it python-setuptools 0.6.10 overwriting
the pip installation of 0.9.8.
* 
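
(One hedged sketch of the version check such a tool needs, using
pkg_resources, which ships with setuptools: it answers whether an installed,
possibly distro-packaged, version already satisfies a requirements.txt pin,
though it says nothing about the reverse dependencies described above.)

    import pkg_resources

    def satisfied(requirement_str):
        # True if an already-installed distribution meets the requirement.
        req = pkg_resources.Requirement.parse(requirement_str)
        try:
            dist = pkg_resources.get_distribution(req.project_name)
        except pkg_resources.DistributionNotFound:
            return False
        return dist in req

    # e.g. satisfied('nose>=0.10') is True when the packaged nose meets the pin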

Re: [openstack-dev] Keystone Split Backend LDAP Question

2013-08-05 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
I have been inserting debug logging and stack traces into the code base to help 
find out what is and is not happening.


* I am able to connect the LDAP backend to our Enterprise Directory and
perform a REST “get an unscoped token” from keystone. Following is the result:

    Connection: keep-alive
    Content-Length: 259
    Content-Type: application/json
    Date: Fri, 26 Jul 2013 21:49:16 GMT
    Vary: X-Auth-Token
    X-Subject-Token: cae95a17517245798acb17c47b8eb74b

    {
        "token": {
            "issued_at": "2013-07-26T21:49:16.951821Z",
            "extras": {},
            "methods": [
                "password"
            ],
            "expires_at": "2045-04-03T19:49:16.951738Z",
            "user": {
                "domain": {
                    "id": "default",
                    "name": "Default"
                },
                "id": "mark.m.mil...@hp.com",
                "name": "mark.m.mil...@hp.com"
            }
        }
    }

* When I attempt to assign a role to the user:

    keystone user-role-add --user mark.m.mil...@hp.com \
        --role-id 7fb862d10b5c46679b4334eae9c73a46 \
        --tenant-id 9798b027472d4f459d231c005977b3ac

the “identity/controllers/get_users()” method is called instead of the
“get_user_by_name()” method.


Does anyone know why or how to fix this or if what I am trying to do even works?

Regards,

Mark Miller


From: Miller, Mark M (EB SW Cloud - RD - Corvallis)
Sent: Friday, August 02, 2013 4:00 PM
To: OpenStack Development Mailing List; Adam Young (ayo...@redhat.com); Dolph 
Mathews (dolph.math...@gmail.com); Yee, Guang
Subject: Re: [openstack-dev] Keystone Split Backend LDAP Question

Hello,

With some minor tweaking of the keystone common/ldap/core.py file, I have been 
able to authenticate and get an unscoped token for a user from an LDAP 
Enterprise Directory. I want to continue testing but I have some questions that 
need to be answered before I can continue.


1.   Do I need to add the user from the LDAP server to the Keystone SQL 
database or will the H-2 code search the LDAP server?

2.   When I performed a “keystone user-list” the following log file entries 
were written indicating that keystone was attempting to get all the users on 
the massive Enterprise Directory. How do we limit this query to just the one
user or group of users we are interested in? (A sketch of one approach follows
below.)

2013-07-23 14:04:31 DEBUG [keystone.common.ldap.core] LDAP bind:
dn=cn=CloudOSKeystoneDev, ou=Applications, o=hp.com
2013-07-23 14:04:32 DEBUG [keystone.common.ldap.core] In get_connection 6
user: cn=CloudOSKeystoneDev, ou=Applications, o=hp.com
2013-07-23 14:04:32 DEBUG [keystone.common.ldap.core] MY query in
_ldap_get_all: ()
2013-07-23 14:04:32 DEBUG [keystone.common.ldap.core] LDAP search:
dn=ou=People,o=hp.com, scope=2, query=(), attrs=['businessCategory',
'userPassword', 'hpStatus', 'mail', 'uid']

3.   Next I want to acquire a scoped token. How do I assign the LDAP user 
to a local project?
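
(On question 2 above, a hedged illustration of constraining the search with a
real LDAP filter instead of an empty query. The python-ldap calls exist as
shown; the server URL, bind password, and uid value are placeholders.)

    import ldap

    conn = ldap.initialize('ldap://ldap.example.com')
    conn.simple_bind_s('cn=CloudOSKeystoneDev,ou=Applications,o=hp.com',
                       'service-password')
    results = conn.search_s(
        'ou=People,o=hp.com',
        ldap.SCOPE_SUBTREE,
        filterstr='(uid=some.user)',   # one user instead of the whole tree
        attrlist=['mail', 'uid'])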

Regards,

Mark Miller
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] instances fail to boot on el6 (glance schema error issue)

2013-08-05 Thread Dan Prince
As of an hour ago the el6 (Centos) builds in SmokeStack all started failing. 
I've documented the initial issue I'm seeing in this ticket:

 https://bugs.launchpad.net/nova/+bug/1208656

The issue seems to be that we now hit a SchemaError which bubbles up from 
glanceclient when the new direct download plugin code runs. This only seems to 
happen on distributions using python 2.6 as I'm not seeing the same thing on 
Fedora.

This stack trace also highlights the fact that the Glance v2 API now seems to 
be a requirement for Nova... and I'm not sure this is a good thing considering 
we still use the v1 API for many things as well. Ideally we'd have all
Nova-to-Glance communication use a single version of the Glance API (either v1
or v2... not both), right?



Sorry I didn't catch this one sooner. We only recently enabled Centos testing 
again (due to some resource limitations). Plus I just got back from vacation. :)

Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Enabling neutron gating

2013-08-05 Thread Sean Dague
Well, the neutron gate is still unstable; it blocked the
global-requirements change from going in today. Bug filed here:
https://bugs.launchpad.net/neutron/+bug/1208661


Can someone from the neutron team that has the ability to change bug 
priorities please bump that to critical?


-Sean

On 08/05/2013 02:59 PM, Nachi Ueno wrote:

Thanks!





--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Future of nova's image API

2013-08-05 Thread Christopher Yeoh
On Tue, Aug 6, 2013 at 4:41 AM, Joe Gordon joe.gord...@gmail.com wrote:






 This sounds like part of a bigger question for V3 API.  Can someone
 actually run nova with just the core API?


It's definitely a bug if we can't.

Regards,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone Split Backend LDAP Question

2013-08-05 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Adam,

Great suggestion. Using the v3 API I have been able to grant a project role to
an LDAP user:

    user:    mark.m.mil...@hp.com
    project: 9798b027472d4f459d231c005977b3ac
    roles:   {"roles": [{"id": "7fb862d10b5c46679b4334eae9c73a46"}]}
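
(For reference, a hedged sketch of the v3 call behind that grant; the PUT path
is the published v3 role-assignment API, while the endpoint URL and token
below are placeholders.)

    import requests

    base = 'http://keystone.example.com:5000/v3'
    resp = requests.put(
        '%s/projects/%s/users/%s/roles/%s' % (
            base,
            '9798b027472d4f459d231c005977b3ac',    # project
            'mark.m.mil...@hp.com',                # user
            '7fb862d10b5c46679b4334eae9c73a46'),   # role
        headers={'X-Auth-Token': 'ADMIN_TOKEN'})
    # 204 No Content indicates success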


Mark

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Monday, August 05, 2013 5:29 PM
To: Miller, Mark M (EB SW Cloud - RD - Corvallis)
Cc: OpenStack Development Mailing List; Dolph Mathews 
(dolph.math...@gmail.com); Yee, Guang
Subject: Re: Keystone Split Backend LDAP Question

On 08/02/2013 06:59 PM, Miller, Mark M (EB SW Cloud - RD - Corvallis) wrote:
Hello,

With some minor tweaking of the keystone common/ldap/core.py file, I have been 
able to authenticate and get an unscoped token for a user from an LDAP 
Enterprise Directory. I want to continue testing but I have some questions that 
need to be answered before I can continue.


1.  Do I need to add the user from the LDAP server to the Keystone SQL 
database or will the H-2 code search the LDAP server?
No.  There is no entry in SQL for the user, only in LDAP.


2.  When I performed a keystone user-list the following log file entries 
were written indicating that keystone was attempting to get all the users on 
the massive Enterprise Directory. How do we limit this query to just the one 
user or group of users we are interested in?

2013-07-23 14:04:31 DEBUG [keystone.common.ldap.core] LDAP bind:
dn=cn=CloudOSKeystoneDev, ou=Applications, o=hp.com
2013-07-23 14:04:32 DEBUG [keystone.common.ldap.core] In get_connection 6
user: cn=CloudOSKeystoneDev, ou=Applications, o=hp.com
2013-07-23 14:04:32 DEBUG [keystone.common.ldap.core] MY query in
_ldap_get_all: ()
2013-07-23 14:04:32 DEBUG [keystone.common.ldap.core] LDAP search:
dn=ou=People,o=hp.com, scope=2, query=(), attrs=['businessCategory',
'userPassword', 'hpStatus', 'mail', 'uid']

I think this bug is filed here:
https://bugs.launchpad.net/keystone/+bug/1205150

I've grabbed it.



3.  Next I want to acquire a scoped token. How do I assign the LDAP user to 
a local project?
Use the normal Keystone API for that.  The project and assignments all happen
in the SQL backend.




Regards,

Mark Miller

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] Python dependencies: PyPI vs distro packages

2013-08-05 Thread Ian Wienand
On Mon, Aug 05, 2013 at 12:03:07PM -0500, Dean Troyer wrote:
 * proposals to use a tool to automatically decide between package and
 PyPI (harlowja, sdague):  this works well on the surface, but anything
 that does not take into account the dependencies in these packages
 going BOTH ways is going to fail.  For example: on RHEL6 setuptools is
 0.6.10, we want 0.9.8 (the merged release w/distribute).  Removing
 python-setuptools will also remove python-nose, numpy and other
 packages depending on what is installed.  So fine, those can be
 re-installed with pip.  But a) we don't want to rebuild numpy (just
 bear with me here), and b) the packaged python-nose 0.10.4 meets the
 version requirement in requirements.txt so the package will be
 re-installed, bringing with it python-setuptools 0.6.10 overwriting
 the pip installation of 0.9.8.

I think Anvil is working with the package management system so that
scenario doesn't happen.  The "fine, those can be re-installed with
pip" bit is where the problem occurs.

The Anvil process is, as I understand it:

 1) parse requirements.txt
 2) see what can be satisfied via yum
 3) pip download the rest
 4) remove downloaded dependencies that are satisfied by yum
 5) make rpms of now remaining pip downloads
 6) install the *whole lot*

The "whole lot" bit is important, because you can't have conflicts
there.  Say requirements.txt brings in setuptools-0.9.8; Anvil will
create a python-setuptools 0.9.8 package.  If rpm-packaged nose relies on
*exactly* python-setuptools 0.6.10, there will be a conflict -- I
think the installation would fail to complete.  But likely, that
package doesn't care and gets its dep satisfied by 0.9.8 [1]

Because you're not using pip to install directly, you don't have this
confusion around who owns files in /usr/lib/python2.6/site-packages
and have rpm or pip overwriting each other -- RPM owns all the files
and that's that.

 Removing python-setuptools will also remove python-nose, numpy and
 other packages depending on what is installed.

Nowhere is python-setuptools removed; just upgraded.

Recently trying out Anvil, it seems to have the correct solution to my
mind.

-i

[1] From a quick look at Anvil I don't think it would handle this
situation, which is probably unsolvable (if an rpm package wants one
version and devstack wants another, and it all has to be in
/usr/lib/python2.6/site-packages, then *someone* is going to lose).
But I don't think exact-version (= or ==) dependencies in rpms are too
common, so you can just drop in the new version and hope it remains
backwards compatible :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ml2] Configuration file sections for MechanismDrivers

2013-08-05 Thread Andre Pech
Hey Kyle,

We're currently going with the second option you describe - having separate
configuration files per mechanism driver, and passing these in on the
command line when starting Neutron. This feels much cleaner than putting
all configuration options into the ML2 config file, especially as the
number of mechanism drivers grows.

Andre



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Dropping or weakening the 'only import modules' style guideline - H302

2013-08-05 Thread Robert Collins
I wanted to get a temperature reading from everyone on this style guideline.

My view on it is that it's a useful heuristic but shouldn't be a
golden rule applied everywhere. Things like matchers are designed to be
used as a DSL:
self.assertThat(foo, Or(Equals(1), Equals(2)))

rather than what H302 enforces:
self.assertThat(foo, matchers.Or(matchers.Equals(1),
matchers.Equals(2)))

Further, conflicting module names become harder to manage, when one
could import just the thing.

Some arguments for requiring imports of modules:
 - makes the source of symbols obvious
   - Actually, it has no impact on that as the import is still present
and clear in the file. import * would obfuscate things, but I'm not
arguing for that.
   - and package/module names can (and are!) still ambiguous. Like
'test.' - what's that? - consult the imports.
 - makes mocking more reliable
   - This is arguably the case, but it's a mirage: it isn't a complete
solution because modules still need to be mocked at every place they
are dereferenced : only import modules helps to the extent that one
never mocks modules. Either way this failure mode of mocking is
usually very obvious IME : but keeping the rule as a recommendation,
*particularly* when crossing layers to static resources is a good
idea (see the sketch just after this list).
 - It's in the Google Python style guide
(http://google-styleguide.googlecode.com/svn/trunk/pyguide.html?showone=Imports#Imports)
   - shrug :)
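
To make the mocking point concrete, a hedged sketch with hypothetical
modules a.py and b.py:

    import mock

    # a.py does:  from os import path    (imports "just the thing")
    # b.py does:  import os              (H302-style module import)

    # Either way, the patch has to target the name where it is looked up:
    with mock.patch('a.path') as fake_path:
        fake_path.exists.return_value = True

    with mock.patch('b.os') as fake_os:
        fake_os.path.exists.return_value = True

i.e. module-only imports change where the patch points, not whether you
have to know the lookup site.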

What I'd like us to do is weaken it from a MUST to a MAY, unless no one
cares about it at all, in which case let's just turn it off entirely.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dropping or weakening the 'only import modules' style guideline - H302

2013-08-05 Thread Alex Gaynor
I'd favor weakening or removing this requirement. Besides Google, I've never
seen any other Python project which enforced this standard, and I think
it's a very weak heuristic for readability.

Alex






-- 
I disapprove of what you say, but I will defend to the death your right to
say it. -- Evelyn Beatrice Hall (summarizing Voltaire)
The people's good is the highest law. -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dropping or weakening the 'only import modules' style guideline - H302

2013-08-05 Thread Monty Taylor



Enforcing it is hard. The code that does it has to import and then make
guesses on failures.

Also - I agree with Robert on this. I _like_ writing my code to not
import bazillions of things... but I think the hard and fast rule makes
things crappy at times.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dropping or weakening the 'only import modules' style guideline - H302

2013-08-05 Thread Clint Byrum


You've convinced me. Monty's point about the complexity of making sure
what is imported in code is actually a module makes it expensive without
much benefit.

I say remove it entirely.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] devstack modifying git repository

2013-08-05 Thread Chmouel Boudjnah
Hello,

I just saw that
https://github.com/openstack-dev/devstack/commit/6c84463071e1ff23e20e4ef4fb863aba0732bebc
has just landed.

I understand the good reasons for that, and thanks for the work that
has been done on it, but it has the weird side effect of devstack
modifying your source tree when you run it.

My workflow when working on a feature/bug (and I suspect I am not the
only one) is usually: run devstack, hack the source, run unit tests,
unstack.sh/stack.sh to run devstack again, etc.

When I commit, that updated requirements file would be committed too. Is
this a side effect we want to force? Should the committer commit those
changes or remove them before running git-review? This seems like extra
work for committers/reviewers.

Chmouel

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Scheduler sub-group meeting agenda 8/6

2013-08-05 Thread Dugger, Donald D
A few things we can go over this time:

1) Instance groups
2) Overall scheduler plan
3) Multiple active scheduler policies

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] V3 Extensions Discoverability

2013-08-05 Thread Jamie Lennox
Hi all, 

Partially in response to the trusts API review in keystoneclient
(https://review.openstack.org/#/c/39899/ ) and my work on keystone API
version discoverability (spell-check disagrees but I'm going to assume
that's a word - https://review.openstack.org/#/c/38414/ ) I was thinking
about how we should be able to know what/if an extension is available. I
even made a basic blueprint for how i think it should work:
https://blueprints.launchpad.net/python-keystoneclient/+spec/keystoneclient-extensions
 and then realized that GET /extensions is only a V2 API.
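
(For comparison, a hedged example of the v2 call from Python; the /extensions
path and the extensions/values response shape follow the published v2 API,
while the endpoint URL is a placeholder.)

    import requests

    resp = requests.get('http://keystone.example.com:5000/v2.0/extensions')
    for ext in resp.json()['extensions']['values']:
        print('%s: %s' % (ext['alias'], ext['name']))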

Is this intentional? I was starting to make a review to add it to
identity-api but is there the intention that extensions should show up
within the endpoint APIs? There is no reason it couldn't work that way
and DELETE would just fail. 

I am not convinced that it is a good idea though and I just want to
check if this is something that has been discussed or purposefully
handled this way or something we need to add.


Thanks, 

Jamie 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev