[openstack-dev] [Horizon] The source language of Horizon in Transifex

2013-08-13 Thread Ying Chun Guo

Hi,

Now the source language of Horizon in Transifex is set to en_US, not en. As a
result, when the translations are pulled from Transifex, some dummy characters
end up in en.po, which causes errors in the unit tests.

I can't find a way to change this setting in Transifex. I think the only way
to fix it is to re-upload the resources with the source language set to en,
and then delete the existing resources.
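For reference, the source language is normally declared in the transifex-client
configuration when a resource is pushed. A rough illustration of the corrected
setting (the resource name and file paths here are only examples, not the
actual Horizon ones):

[main]
host = https://www.transifex.com

[horizon.horizon-translations]
file_filter = horizon/locale/<lang>/LC_MESSAGES/django.po
source_file = horizon/locale/en/LC_MESSAGES/django.po
source_lang = en
type = PO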

Please let me know whether the Horizon development team is aware of this issue
and has any plans to fix it. Thanks.

Regards
Daisy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Extension to volume creation (filesystem and label)

2013-08-13 Thread Greg Poirier
On Tue, Aug 13, 2013 at 1:37 PM, Caitlin Bestler <
caitlin.best...@nexenta.com> wrote:

> I'm not following something here. What is the point of dictating a
> specific FS format when the compute node will be the one applying
> the interpretation?
>

This is the rabbit hole that made me start to rethink our approach.

Our goal is to make it so that developers can self-service provision
additional storage for themselves, grow filesystems, etc., without having to
... well... know how. There are so many approaches to this (within the
OpenStack community) that we thought, "why not just make this something that
Cinder can do?"


> Isn't a 120 GB volume which the VM will interpret as an EXT4 FS just
> a 120 GB volume that has a *hint* attached to it?
>

Yes, in the same sense that a 120 GB volume is just a starting point on a
disk with a hint attached to it.


> And would there be any reason to constrain in advance the set of hints
> that could be offered?


Simplicity.

I think that what would make this idea tractable would be to abstract away
the filesystem-level stuff into an abstract factory that Cinder would use.
Each FS type would implement the factory accordingly and register itself
somehow with Cinder. So Cinder operators would have a range of choices for
making filesystems available to users.
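To make that concrete, here is a minimal, self-contained sketch of the idea in
plain Python. This is not actual Cinder code; the registration mechanism and
class names are purely illustrative:

import subprocess

_FS_FACTORIES = {}


def register_fs(name):
    """Class decorator: register a filesystem formatter under a type name."""
    def wrapper(cls):
        _FS_FACTORIES[name] = cls
        return cls
    return wrapper


@register_fs('ext4')
class Ext4Formatter(object):
    def make_fs(self, device, label):
        # mkfs.ext4 accepts a volume label (up to 16 characters) via -L.
        subprocess.check_call(['mkfs.ext4', '-L', label, device])


def format_volume(device, fs_type, label):
    try:
        formatter = _FS_FACTORIES[fs_type]()
    except KeyError:
        raise ValueError('Filesystem type %r is not offered here' % fs_type)
    formatter.make_fs(device, label)


# e.g. format_volume('/dev/vdb', 'ext4', 'data-volume')

Operators would then control which formatter classes get registered, which is
the "range of choices" mentioned above.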

Of course, that's anything but simple, right?

What we really want (and are comfortable working with) is predictability
and consistency.

Currently, if I specify a device name via nova volume-attach, we can end up
in a state where our metadata regarding the attachment is incorrect. E.g. a
device is attached as /dev/vdb, but I specified /dev/vdc and the
attachment's 'device' parameter says /dev/vdc.

We have the alternative of using the truncated
/dev/disk/by-id/virtio- to find the attached volume, but
you cannot guarantee that there will not be a collision in an environment
with a sufficient number of volumes.

I'd be personally satisfied if that weren't truncated and would move along.
It's something else I'm looking into.
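For anyone unfamiliar with the by-id approach, a small illustration of what
that lookup looks like today (the volume ID below is made up, and the
20-character cut-off reflects my understanding of how the virtio serial gets
truncated):

import os

volume_id = '0a1b2c3d-4e5f-6789-abcd-ef0123456789'  # hypothetical volume UUID
link = '/dev/disk/by-id/virtio-' + volume_id[:20]   # only the truncated serial is visible
if os.path.exists(link):
    print(os.path.realpath(link))                   # e.g. /dev/vdb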
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Blueprint for Amazon VPC API support

2013-08-13 Thread Rudra Rugge
Hi All,

A blueprint has been registered to add Amazon VPC API support.
Currently, Amazon EC2 API support already exists in OpenStack.
Please review the blueprint and the attached specification. The
specification covers all the VPC APIs and describes how they map
to OpenStack constructs.

Blueprint
https://blueprints.launchpad.net/nova/+spec/aws-vpc-support

Specification
https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support

The changes are orthogonal to the Amazon EC2 APIs but follow the
same model as the EC2 API source code. In addition, new unit
tests have been added to cover all the VPC functionality.

Please review the blueprint - all comments are welcome.

Thank you,
Rudra
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Proposal for approving Starting by scheduler development blueprint.

2013-08-13 Thread cosmos cosmos
Hello. 

My name is Rucia, and I work for Samsung SDS.



I am currently developing start logic that goes through nova-scheduler, so that
host resources can be used efficiently.

This function is already implemented in our Folsom-based release.



It is used with iSCSI targets such as HP SAN storage.



This is slightly different from the original version.

If you start an instance after stopping it, the instance will be started on the
optimal compute host.

The host is selected through nova-scheduler.





Current behavior:

1. The start logic in OpenStack Nova does not use the scheduler.

2. The instance is started on the host where it was created.





Proposed behavior:

1. When a stopped instance is started, it is started on a host selected by
nova-scheduler.

2. When the VM starts, the resources are checked through check_resource_limit().



Pros

- Resources can be used efficiently.

- When you start a virtual machine, this avoids errors caused by a lack of
resources on the host.



Below are my blueprint and wiki page.

Thanks



https://blueprints.launchpad.net/nova/+spec/start-instance-by-scheduler

https://wiki.openstack.org/wiki/Start

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Swift, netifaces, PyPy, and cffi

2013-08-13 Thread Joe Gordon
On Tue, Aug 13, 2013 at 6:56 PM, Clint Byrum  wrote:

> Excerpts from Alex Gaynor's message of 2013-08-13 14:58:56 -0700:
> > Hi all,
> >
> > (This references this changeset: https://review.openstack.org/#/c/38415/
> )
> >
> > One of the goals I've been working at has been getting swift running on
> > PyPy (and from there, the rest of OpenStack). The last blocking issue in
> > swift is that it currently uses netifaces, which is a C-extension that
> > doesn't work on PyPy. I've proposed to replace this dependency with a cffi
> based
> > binding to the system.
>

I assume you have seen
http://vish.everyone.me/running-openstack-nova-with-pypy



> >
> > For those not familiar, cffi is a tool for binding to C libraries,
> similar
> > to ctypes (in the stdlib), except more expressive, less error prone, and
> > faster; some of our downstream dependencies already use it.
> >
> > One of the issues that came up in this review however, is that cffi is
> not
> > packaged in the most recent Ubuntu LTS (and likely other distributions),
> > although it is available in raring, and in a PPA (
> > http://packages.ubuntu.com/raring/python-cffi and
> >
> https://launchpad.net/~pypy/+archive/ppa?field.series_filter=precise respectively
> ).
> >
> > As a result of this, we wanted to get some feedback on which direction is
> > best to go:
> >
> > a) cffi-only approach, this is obviously the simplest approach, and works
> > everywhere (assuming you can install a PPA, use pip, or similar for cffi)
>
> There are a lot of dependencies of Grizzly and Havana that aren't in
> the official release of Ubuntu 12.04. That is why Canonical created
> the cloud archive, so that users can keep everything that isn't
> "OpenStack+Dependencies" on the LTS.
>
> The fact that cffi is already available in a release makes it even
> more likely that it will be a straight forward backport to the cloud
> archive. However, is Ubuntu 12.04's pypy 1.8 sufficient?  Ubuntu 13.04
> and 12.10 have 1.9, and saucy (the presumed 13.10) has 2.0.2.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Skipping tests in tempest via config file

2013-08-13 Thread Matt Riedemann
I have the same issue.  I run a subset of the tempest tests via nose on a 
RHEL 6.4 VM directly against the site-packages (not using virtualenv). I'm 
running on x86_64, ppc64 and s390x and have different issues on all of 
them (a mix of DB2 on x86_64 and MySQL on the others, and different 
nova/cinder drivers on each).  What I had to do was just make a nose.cfg 
for each of them and throw that into ~/ for each run of the suite.
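For what it's worth, the per-platform file is nothing fancy -- a hypothetical
sketch of the kind of nose.cfg I drop into ~/ (the excluded test names here are
made up; only the mechanism is real):

[nosetests]
verbosity=2
# skip tests known to fail on this particular platform/driver combination
exclude=(test_foo_on_db2|test_bar_on_s390x)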

The switch from nose to testr hasn't impacted me because I'm not using a 
venv.  However, there was a change this week that broke me on python 2.6 
and I opened this bug:

https://bugs.launchpad.net/tempest/+bug/1212071 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Ian Wienand 
To: openstack-dev@lists.openstack.org, 
Date:   08/13/2013 09:13 PM
Subject:[openstack-dev] Skipping tests in tempest via config file



Hi,

I proposed a change to tempest that skips tests based on a config file
directive [1].  Reviews were inconclusive and it was requested the
idea be discussed more widely.

Of course issues should go upstream first.  However, sometimes test
failures are triaged to a local/platform problem, and it is preferable
to keep everything else running by skipping the problematic tests
while they're being worked on.

My perspective is one of running tempest in a mixed CI environment
with RHEL, Fedora, etc.  Python 2.6 on RHEL doesn't support testr (it
doesn't do the setUpClass calls required by tempest) and nose
upstream has some quirks that make it hard to work with the tempest
test layout [2].

Having a common place in the tempest config to set these skips is
more convenient than having to deal with the multiple testing
environments.

Another proposal is to have a separate JSON file of skipped tests.  I
don't feel strongly but it does seem like another config file.

-i

[1] https://review.openstack.org/#/c/39417/
[2] https://github.com/nose-devs/nose/pull/717

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS: Support for explicit commit

2013-08-13 Thread Sridar Kandaswamy (skandasw)
Hi All:

In discussing this with some more folks from a deployment perspective, managing
rules for PCI compliance and audit requirements is quite important. As Sumit
points out below, this can provide a gate for any audit checks before the rules
are actually applied on the backend. Another use case discussed was that
firewall rule sets are often bloated, because admins hesitate to remove old and
unused rules when no one wants to take a chance on the effects. This could also
serve as a validation point before an actual update takes effect on a commit.

Thanks

Sridar 

-Original Message-
From: Sumit Naiksatam [mailto:sumitnaiksa...@gmail.com] 
Sent: Monday, August 12, 2013 12:24 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] FWaaS: Support for explicit commit

Hi Aaron,

I seem to have missed this email from you earlier. Compared to existing
Neutron resources, the FWaaS Firewall resource and workflow are slightly
different, since this is a two-step process. The rules/policy creation is
decoupled (for audit reasons) from its application on the backend firewall.
Hence the need for the commit-like operation, which expresses the intent that
the current state of the rules/policy be applied to the backend firewall. We
can provide capabilities for bulk creation/update of rules/policies as well,
but that, I believe, is independent of this.
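To make the proposed semantics concrete, here is a toy model in plain Python.
This is not Neutron code or the patch below; it only illustrates how rule edits
stay pending until an explicit commit applies them in one step:

class Firewall(object):
    def __init__(self):
        self.committed_rules = []   # what the backend firewall is enforcing
        self.pending_rules = []     # rule resources edited since the last commit

    def update_rules(self, rules):
        # Only the rule resources change; the backend is untouched.
        self.pending_rules = list(rules)

    def show(self):
        return {'committed': self.committed_rules, 'pending': self.pending_rules}

    def commit(self):
        # All pending changes are applied to the backend in one atomic step.
        self.committed_rules = list(self.pending_rules)


fw = Firewall()
fw.update_rules(['allow tcp 443', 'deny any'])
print(fw.show())   # backend unchanged, both rules still pending
fw.commit()
print(fw.show())   # both rules applied together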

I posted a patch yesterday night for this 
(https://review.openstack.org/#/c/41353/).

Thanks,
~Sumit.

On Wed, Aug 7, 2013 at 5:19 PM, Aaron Rosen  wrote:
> Hi Sumit,
>
> Neutron has a concept of bulk creation, where multiple things can be
> created in one API request rather than N (and then be implemented
> atomically on the backend). In my opinion, I think it would be better 
> to implement a bulk update/delete operation rather than a commit. I 
> think that having something like this that is generic could be useful 
> to other api's in neutron.
>
> I do agree that one has to keep track of the order in which they are
> changing/adding/deleting rules so that they don't allow two things to
> communicate that shouldn't be allowed to. If someone wanted to perform
> this type of bulk atomic change now could they create a new profile 
> with the rules they desire and then switch out which profile is 
> attached to the firewall?
>
> Best,
>
> Aaron
>
>
> On Wed, Aug 7, 2013 at 3:40 PM, Sumit Naiksatam 
> 
> wrote:
>>
>> We had some discussion on this during the Neutron IRC meeting, and 
>> per that discussion I have created a blueprint for this:
>>
>> https://blueprints.launchpad.net/neutron/+spec/neutron-fwaas-explicit
>> -commit
>>
>> Further comments can be posted on the blueprint whiteboard and/or the 
>> design spec doc.
>>
>> Thanks,
>> ~Sumit.
>>
>> On Fri, Aug 2, 2013 at 6:43 PM, Sumit Naiksatam 
>>  wrote:
>> > Hi All,
>> >
>> > In Neutron Firewall as a Service (FWaaS), we currently support an 
>> > implicit commit mode, wherein a change made to a firewall_rule is 
>> > propagated immediately to all the firewalls that use this rule (via 
>> > the firewall_policy association), and the rule gets applied in the 
>> > backend firewalls. This might be acceptable, however this is 
>> > different from the explicit commit semantics which most firewalls support.
>> > Having an explicit commit operation ensures that multiple rules can 
>> > be applied atomically, as opposed to in the implicit case where 
>> > each rule is applied atomically and thus opens up the possibility 
>> > of security holes between two successive rule applications.
>> >
>> > So the proposal here is quite simple -
>> >
>> > * When any changes are made to the firewall_rules 
>> > (added/deleted/updated), no changes will happen on the firewall 
>> > (only the corresponding firewall_rule resources are modified).
>> >
>> > * We will support an explicit commit operation on the firewall 
>> > resource. Any changes made to the rules since the last commit will 
>> > now be applied to the firewall when this commit operation is invoked.
>> >
>> > * A show operation on the firewall will show a list of the 
>> > currently committed rules, and also the pending changes.
>> >
>> > Kindly respond if you have any comments on this.
>> >
>> > Thanks,
>> > ~Sumit.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Skipping tests in tempest via config file

2013-08-13 Thread Ian Wienand
Hi,

I proposed a change to tempest that skips tests based on a config file
directive [1].  Reviews were inconclusive and it was requested the
idea be discussed more widely.

Of course issues should go upstream first.  However, sometimes test
failures are triaged to a local/platform problem, and it is preferable
to keep everything else running by skipping the problematic tests
while they're being worked on.

My perspective is one of running tempest in a mixed CI environment
with RHEL, Fedora, etc.  Python 2.6 on RHEL doesn't support testr (it
doesn't do the setUpClass calls required by tempest) and nose
upstream has some quirks that make it hard to work with the tempest
test layout [2].

Having a common place in the tempest config to set these skips is
more convenient than having to deal with the multiple testing
environments.

Another proposal is to have a separate JSON file of skipped tests.  I
don't feel strongly but it does seem like another config file.

-i

[1] https://review.openstack.org/#/c/39417/
[2] https://github.com/nose-devs/nose/pull/717

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Swift, netifaces, PyPy, and cffi

2013-08-13 Thread Clint Byrum
Excerpts from Alex Gaynor's message of 2013-08-13 14:58:56 -0700:
> Hi all,
> 
> (This references this changeset: https://review.openstack.org/#/c/38415/)
> 
> One of the goals I've been working at has been getting swift running on
> PyPy (and from there, the rest of OpenStack). The last blocking issue in
> swift is that it currently uses netifaces, which is a C-extension that
> doesn't work on PyPy. I've proposed to replace this dependency with a cffi based
> binding to the system.
> 
> For those not familiar, cffi is a tool for binding to C libraries, similar
> to ctypes (in the stdlib), except more expressive, less error prone, and
> faster; some of our downstream dependencies already use it.
> 
> One of the issues that came up in this review however, is that cffi is not
> packaged in the most recent Ubuntu LTS (and likely other distributions),
> although it is available in raring, and in a PPA (
> http://packages.ubuntu.com/raring/python-cffi and
> https://launchpad.net/~pypy/+archive/ppa?field.series_filter=precise respectively).
> 
> As a result of this, we wanted to get some feedback on which direction is
> best to go:
> 
> a) cffi-only approach, this is obviously the simplest approach, and works
> everywhere (assuming you can install a PPA, use pip, or similar for cffi)

There are a lot of dependencies of Grizzly and Havana that aren't in
the official release of Ubuntu 12.04. That is why Canonical created
the cloud archive, so that users can keep everything that isn't
"OpenStack+Dependencies" on the LTS.

The fact that cffi is already available in a release makes it even
more likely that it will be a straight forward backport to the cloud
archive. However, is Ubuntu 12.04's pypy 1.8 sufficient?  Ubuntu 13.04
and 12.10 have 1.9, and saucy (the presumed 13.10) has 2.0.2.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]Possiblility to run multiple hypervisors in a single deployment

2013-08-13 Thread Konglingxian
Hi all:

When I read (http://docs.openstack.org/trunk/openstack-ops/content/compute_nodes.html),
 there is a note as follows:
"It is also possible to run multiple hypervisors in a single deployment using 
Host Aggregates or Cells. However, an individual compute node can only run a 
single hypervisor at a time."

I think this is not entirely correct: it should be based on the premise that
the multiple hypervisors support the same Neutron plugin.

Am I right? Any hints are appreciated.


Lingxian Kong
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Allows to set the memory parameters for an Instance

2013-08-13 Thread Jae Sang Lee
Yes, there are instance resource quotas, but a memory parameter doesn't
exist. Using libvirt memtune, I'd like to set the memory parameters for a
VM.
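For reference, this is the kind of libvirt domain XML the blueprint aims to
expose (the element names come from libvirt's memtune support; the values here
are only illustrative):

  <memtune>
    <hard_limit unit='KiB'>4194304</hard_limit>
    <soft_limit unit='KiB'>2097152</soft_limit>
    <swap_hard_limit unit='KiB'>6291456</swap_hard_limit>
    <min_guarantee unit='KiB'>1048576</min_guarantee>
  </memtune>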



2013/8/13 Shake Chen 

> Maybe use Flavor Extra Specs.
>
> https://wiki.openstack.org/wiki/FlavorExtraSpecsKeyList
>
>
> On Sun, Aug 11, 2013 at 3:49 PM, Jae Sang Lee  wrote:
>
>>
>>
>> I've registered a blueprint to allow setting the advanced memory
>> parameters for an instance
>>
>> https://blueprints.launchpad.net/nova/+spec/libvirt-memtune-for-instance
>>
>>
>> Would it be possible to review it (and maybe get an approval or not)?
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Shake Chen
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [ceilometer] Periodic Auditing In Glance

2013-08-13 Thread Neal, Phil
I'm a little concerned that a batch payload won't align with "exists" events
generated from other services. To my recollection, Cinder, Trove and Neutron
all emit exists events on a per-instance basis... a consumer would have to
figure out a way to handle/unpack these separately if they needed a granular
feed. Not the end of the world, I suppose, but a bit inconsistent.

And a minor quibble: batching would also make it a much bigger issue if a
consumer missed a notification... though I guess you could counteract that by
increasing the frequency (but wouldn't that defeat the purpose?)

> 
> 
> 
> On 08/13/2013 04:35 PM, Andrew Melton wrote:
> >> I'm just concerned with the type of notification you'd send. It has to
> >> be enough fine grained so we don't lose too much information.
> >
> > It's a tough situation, sending out an image.exists for each image with
> > the same payload as say image.upload would likely create TONS of traffic.
> > Personally, I'm thinking about a batch payload, with a bare minimum of the
> > following values:
> >
> > 'payload': [{'id': 'uuid1', 'owner': 'tenant1', 'created_at':
> > 'some_date', 'size': 1},
> >{'id': 'uuid2', 'owner': 'tenant2', 'created_at':
> > 'some_date', 'deleted_at': 'some_other_date', 'size': 2}]
> >
> > That way the audit job/task could be configured to emit in batches which
> > a deployer could tweak the settings so as to not emit too many messages.
> > I definitely welcome other ideas as well.
> 
> Would it be better to group by tenant vs. image?
> 
> One .exists per tenant that contains all the images owned by that tenant?
> 
> -S
> 
> 
> > Thanks,
> > Andrew Melton
> >
> >
> > On Tue, Aug 13, 2013 at 4:27 AM, Julien Danjou  > > wrote:
> >
> > On Mon, Aug 12 2013, Andrew Melton wrote:
> >
> > > So, my question to the Ceilometer community is this, does this
> > sound like
> > > something Ceilometer would find value in and use? If so, would this be
> > > something
> > > we would want most deployers turning on?
> >
> > Yes. I think we would definitely be happy to have the ability to drop
> > our pollster at some time.
> > I'm just concerned with the type of notification you'd send. It has to
> > be enough fine grained so we don't lose too much information.
> >
> > --
> > Julien Danjou
> > // Free Software hacker / freelance consultant
> > // http://julien.danjou.info
> >
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Kieran Spear

On 14/08/2013, at 7:40 AM, Jay Pipes  wrote:

> On 08/13/2013 05:04 PM, Gabriel Hurley wrote:
>> I have been one of the earliest, loudest, and most consistent PITA's about 
>> pagination, so I probably oughta speak up. I would like to state three facts:
>> 
>> 1. Marker + limit (e.g. forward-only) pagination is horrific for building a 
>> user interface.
>> 2. Pagination doesn't scale.
>> 3. OpenStack's APIs have historically had useless filtering capabilities.
>> 
>> In a world where pagination is a "must-have" feature we need to have page 
>> number + limit pagination in order to build a reasonable UI. Ironically 
>> though, I'm in favor of ditching pagination altogether. It's the 
>> lowest-common denominator, used because we as a community haven't buckled 
>> down and built meaningful ways for our users to get to the data they really 
>> want.
>> 
>> Filtering is great, but it's only 1/3 of the solution. Let me break it down 
>> with problems and high level "solutions":
>> 
>> Problem 1: I know what I want and I need to find it.
>> Solution: filtering/search systems.
> 
> This is a good place to start. Glance has excellent filtering/search 
> capabilities -- built in to the API from early on in the Essex timeframe, and 
> only expanded over the last few releases.
> 
> Pagination solutions should build on a solid filtering/search functionality 
> in the API, where there is a consistent sort key and direction (either 
> hard-coded or user-determined, doesn't matter).
> 
> Limit/offset pagination solutions (forward and backwards paging, random 
> skip-to-a-page) are inefficient from a SQL query perspective and should be a 
> last resort, IMO, compared to limit/marker. With some smart session-storage 
> of a page's markers, backwards paging with limit/marker APIs is certainly 
> possible -- just store the previous page's marker.

Not just the previous page's marker, but the marker of every previous page 
since we would like to be able to click the previous button more than once. Any 
previous markers we store are also likely to become stale pretty quickly. And 
all this is based on the assumption that the user's session even started at the 
first 'page' - it could be they followed a link from elsewhere in Horizon or 
the greater internet.

I completely agree with Dolph that this is something the client shouldn't need 
to care about at all. The next/prev links returned with each page of results 
should hide all of this. next/prev links also make it trivial for the client to 
discover whether there's even a next page at all, since we don't want to make a 
user click a link to go to an empty page.

Having said that, I think we can improve the current marker/limit system 
without hurting performance if we split the marker into 'before' and 'after' 
parameters. That way all the information needed to go forward or backwards is 
included in the results for the current page. Supporting 'before' should be as 
simple as reversing the sort order and then flipping the order of the results.
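To sketch what I mean (plain Python, purely illustrative -- not Keystone code):
forward paging keeps the normal sort, while backward paging reverses the sort
to fetch the page and then flips it back before returning it.

def page(items, limit, after=None, before=None):
    items = sorted(items)
    if after is not None:
        rows = [x for x in items if x > after][:limit]
    elif before is not None:
        # reverse the sort order, take the page, then restore the original order
        rows = [x for x in reversed(items) if x < before][:limit]
        rows.reverse()
    else:
        rows = items[:limit]
    return rows


names = ['alice', 'bob', 'carol', 'dave', 'erin']
print(page(names, 2, after='bob'))    # ['carol', 'dave']
print(page(names, 2, before='dave'))  # ['bob', 'carol']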


Kieran


> 
>> Problem 2: I don't know what I want, and it may or may not exist.
>> Solution: tailored discovery mechanisms.
> 
> This should not be a use case that we spend much time on. Frankly, this use 
> case can be summarized as "the window shopper scenario". Providing a quality 
> search/filtering mechanism, including the *API* itself providing REST-ful 
> discovery of the filters and search criteria the API supports, is way more 
> important...
> 
>> Problem 3: I need to know something about *all* the data in my system.
>> Solution: reporting systems.
> 
> Sure, no disagreement there.
> 
>> We've got the better part of none of that.
> 
> I disagree. Some of the APIs have support for a good bit of search/filtering. 
> We just need to bring all the projects up to search speed, Captain.
> 
> Best,
> -jay
> 
> p.s. I very often go to the second and third pages of Google searches. :) But 
> I never skip to the 127th page of results.
> 
> > But I'd like to solve these issues. I have lots of thoughts on all of 
> > those, and I think the UX and design communities can offer a lot in terms 
> > of the usability of the solutions we come up with. Even more, I think this 
> > would be an awesome working group session at the next summit to talk about 
> > nothing other than "how can we get rid of pagination?"
>> 
>> As a parting thought, what percentage of the time do you click to the second 
>> page of results in Google?
>> 
>> All the best,
>> 
>> - Gabriel
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-13 Thread Simo Sorce
On Tue, 2013-08-13 at 17:20 -0500, Dolph Mathews wrote:
> With regard
> to: https://blueprints.launchpad.net/keystone/+spec/key-distribution-server
> 
Well I am of course biased so take my comments with a grain of salt,
that said...
> 
> During today's project status meeting [1], the state of KDS was
> discussed [2]. To quote ttx directly: "we've been bitten in the past
> with late security-sensitive stuff" and "I'm a bit worried to ship
> late code with such security implications as a KDS."

Is ttx going to review any "security implications"? The code does not
mature just because it sits there untouched for more or less time.

>  I share the same concern, especially considering the API only
> recently went up for formal review [3],

While the API may be important, it has little to no bearing on the
security properties of the underlying code and mechanism.
The document to review to understand and/or criticize the "security
implications" is this: https://wiki.openstack.org/wiki/MessageSecurity
and it has been available for quite a few months.

>  and the WIP implementation is still failing smokestack [4].

This is a red herring. Unfortunately, SmokeStack doesn't say why it is
failing, but we suppose it is due to something Python 2.6 doesn't like
(only the CentOS machine fails). I have been developing on 2.7 and was
planning to do a final test on a machine with 2.6 once I had reviews
agreeing that no more fundamental changes were needed.
> 
> I'm happy to see the reviews in question continue to receive their
> fair share of attention over the next few weeks, but can (and should?)
> merging be delayed until icehouse while more security-focused eyes
> have time to review the code?

I would agree to this only if you can name individuals that are going to
do a "security review", otherwise I see no real reason to delay, as it
will cost time to keep patches up to date, and I'd rather not do that if
no one is lining up to do a "security review".

FWIW I did circulate the design for the security mechanism internally in
Red Hat to some people with some expertise in crypto matters.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] security_groups extension in nova api v3

2013-08-13 Thread Melanie Witt
On Aug 13, 2013, at 2:11 AM, Day, Phil wrote:

> If we really want to get clean separation between Nova and Neutron in the V3
> API, should we consider making the Nova V3 API only accept lists of port IDs
> in the server create command?
> 
> That way there would be no need to ever pass security group information into
> Nova.
> 
> Any cross project co-ordination (for example automatically creating ports) 
> could be handled in the client layer, rather than inside Nova.

Server create is always (until there's a separate layer) going to go
cross-project, calling other APIs like Neutron and Cinder while an instance is
being provisioned. For that reason, I tend to think it's OK to offer some extra
convenience: automatically creating ports if needed, and being able to
specify security groups.

For the associate and disassociate, the only convenience is being able to use 
the instance display name and security group name, which is already handled at 
the client layer. It seems a clearer case of duplicating what neutron offers.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-13 Thread Russell Bryant
On 08/13/2013 06:20 PM, Dolph Mathews wrote:
> With regard
> to: https://blueprints.launchpad.net/keystone/+spec/key-distribution-server
> 
> During today's project status meeting [1], the state of KDS was
> discussed [2]. To quote ttx directly: "we've been bitten in the past
> with late security-sensitive stuff" and "I'm a bit worried to ship late
> code with such security implications as a KDS." I share the same
> concern, especially considering the API only recently went up for formal
> review [3], and the WIP implementation is still failing smokestack [4].
> 
> I'm happy to see the reviews in question continue to receive their fair
> share of attention over the next few weeks, but can (and should?)
> merging be delayed until icehouse while more security-focused eyes have
> time to review the code?
> 
> Ceilometer and nova would both be affected by a delay, as both have use
> cases for consuming trusted messaging [5] (a dependency of the bp in
> question).

The longer this takes, the longer it is until we can make use of it.
However, at this point, deferring doesn't affect Nova much.  Landing at
the end of Havana vs the beginning of Icehouse doesn't change that
Icehouse would be the earliest Nova would start making use of it.

I would really like to see this as a priority to land ASAP in Icehouse
if it gets deferred.  Otherwise, other projects such as Nova can't make
any plans to build something with it in Icehouse, pushing this out yet
another 6 months.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Meeting agenda for Wed August 14th at 2000 UTC

2013-08-13 Thread Steven Hardy
The Heat team holds a weekly meeting in #openstack-meeting, see

https://wiki.openstack.org/wiki/Meetings/HeatAgenda for more details

The next meeting is on Wed August 14th at 2000 UTC

Current topics for discussion:
- Review last weeks actions
- Reminder re Havana_Release_Schedule FeatureProposalFreeze
- h3 blueprint status
- Open discussion

If anyone has any other topic to discuss, please add to the wiki.

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-13 Thread Dolph Mathews
With regard to:
https://blueprints.launchpad.net/keystone/+spec/key-distribution-server

During today's project status meeting [1], the state of KDS was discussed
[2]. To quote ttx directly: "we've been bitten in the past with late
security-sensitive stuff" and "I'm a bit worried to ship late code with
such security implications as a KDS." I share the same concern, especially
considering the API only recently went up for formal review [3], and the
WIP implementation is still failing smokestack [4].

I'm happy to see the reviews in question continue to receive their fair
share of attention over the next few weeks, but can (and should?) merging
be delayed until icehouse while more security-focused eyes have time to
review the code?

Ceilometer and nova would both be affected by a delay, as both have use
cases for consuming trusted messaging [5] (a dependency of the bp in
question).

Thanks for your feedback!

[1]:
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2013-08-13.log
[2]: http://paste.openstack.org/raw/44075/
[3]: https://review.openstack.org/#/c/40692/
[4]: https://review.openstack.org/#/c/37118/
[5]: https://blueprints.launchpad.net/oslo/+spec/trusted-messaging
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] GPU passthrough support blueprints for OpenStack

2013-08-13 Thread Brian Schott
Are there more recent blueprints related to adding GPU pass-through support?  
All that I can find are some stale blueprints that I created around the Cactus 
timeframe (while wearing a different hat) that are pretty out of date.

I just heard a rumor that folks are doing Nvidia GRID K2 GPU passthrough with 
KVM successfully using linux 3.10.6 kernel with RHEL.

In addition, Lorin and I did some GPU passthrough testing back in the spring 
with GRID K2 on HyperV, libvirt+xen, and XenServer.  Slides are here:
http://www.slideshare.net/bfschott/nimbis-schott-openstackgpustatus20130618

The virtualization support for  GPU-enabled virtual desktops and GPGPU seems to 
have stabilized this year for server deployments.  How is this going to be 
supported in OpenStack?

Brian

-
Brian Schott, CTO
Nimbis Services, Inc.
brian.sch...@nimbisservices.com
ph: 443-274-6064  fx: 443-274-6060





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Swift, netifaces, PyPy, and cffi

2013-08-13 Thread Alex Gaynor
Hi all,

(This references this changeset: https://review.openstack.org/#/c/38415/)

One of the goals I've been working at has been getting swift running on
PyPy (and from there, the rest of OpenStack). The last blocking issue in
swift is that it currently uses netifaces, which is a C-extension that
doesn't work on PyPy. I've proposed to replace this dependency with a cffi based
binding to the system.

For those not familiar, cffi is a tool for binding to C libraries, similar
to ctypes (in the stdlib), except more expressive, less error prone, and
faster; some of our downstream dependencies already use it.
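To give a flavour of the style (this is not the actual patch under review --
just a minimal ABI-mode cffi binding to a single libc function, roughly
analogous to what the netifaces replacement does for getifaddrs()):

import cffi

ffi = cffi.FFI()
ffi.cdef("unsigned int if_nametoindex(const char *ifname);")
C = ffi.dlopen(None)  # use the symbols already linked into the process (libc)


def interface_index(name):
    """Return the kernel index of a network interface, or 0 if unknown."""
    return C.if_nametoindex(name.encode('ascii'))


print(interface_index('lo'))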

One of the issues that came up in this review however, is that cffi is not
packaged in the most recent Ubuntu LTS (and likely other distributions),
although it is available in raring, and in a PPA (
http://packages.ubuntu.com/raring/python-cffi and
https://launchpad.net/~pypy/+archive/ppa?field.series_filter=precise respectively).

As a result of this, we wanted to get some feedback on which direction is
best to go:

a) cffi-only approach, this is obviously the simplest approach, and works
everywhere (assuming you can install a PPA, use pip, or similar for cffi)
b) wait until the next LTS to move to this approach (requires waiting until
2014 for PyPy support)
c) Support using either netifaces or cffi: most complex, and most code,
plus "one or the other" dependencies aren't well supported by most tools as
far as I know.

Thoughts?
Alex

-- 
"I disapprove of what you say, but I will defend to the death your right to
say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Blueprint: launch-time configurable kernel-id, ramdisk-id, and kernel command line

2013-08-13 Thread Russell Bryant
On 08/13/2013 05:47 PM, Dennis Kliban wrote:
> I have just created a new blueprint: 
> https://blueprints.launchpad.net/nova/+spec/expose-ramdisk-kernel-and-command-line-via-rest-and-cli
> 
> I realize that some of this work overlaps with: 
> https://blueprints.launchpad.net/nova/+spec/improve-boot-from-volume
> which is an umbrella blueprint for: 
> https://blueprints.launchpad.net/nova/+spec/improve-block-device-handling
> 
> I can see that a lot of work has been done for the above blueprints, but I 
> was not clear on the progress with regard to exposing kernel-id and 
> ramdisk-id.  Perhaps I don't need to implement this?   
> 
> The second change proposed in the blueprint has not been addressed in any 
> other blueprints.  Does anyone think that adding the ability to pass in the
> kernel command line at launch time would be problematic?

This specific part wasn't implemented as a part of those blueprints (and
isn't planned to be right now).

The functionality generally seems reasonable to me.  It will have to
wait for Icehouse, though.  We already have too many things in the queue
for Havana.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] can't install devstack - nova-api did not start

2013-08-13 Thread XINYU ZHAO
I updated mine, but still no luck -- I still get this error. Did you just
update devstack, or did you uninstall oslo.config as well?

 + /opt/stack/new/keystone/bin/keystone-manage db_sync
2013-08-13 12:18:10 Traceback (most recent call last):
2013-08-13 12:18:10   File
"/opt/stack/new/keystone/bin/keystone-manage", line 16, in 
2013-08-13 12:18:10 from keystone import cli
2013-08-13 12:18:10   File "/opt/stack/new/keystone/keystone/cli.py",
line 25, in 
2013-08-13 12:18:10 from oslo.config import cfg
2013-08-13 12:18:10 ImportError: No module named config



On Mon, Aug 12, 2013 at 11:41 PM, Roman Gorodeckij  wrote:

> Updating devstack to latest revision solves my problem.
>
> Sent from my iPhone
>
> On 2013 Rugp. 13, at 05:00, XINYU ZHAO  wrote:
>
> Hi Sean
> I uninstalled the oslo.config 1.1.1 version and ran devstack, but this
> time it stopped at
>
> 2013-08-09 18:55:16 + /opt/stack/new/keystone/bin/keystone-manage db_sync
> 2013-08-09 18:55:16 Traceback (most recent call last):
> 2013-08-09 18:55:16   File "/opt/stack/new/keystone/bin/keystone-manage", 
> line 16, in 
> 2013-08-09 18:55:16 from keystone import cli
> 2013-08-09 18:55:16   File "/opt/stack/new/keystone/keystone/cli.py", line 
> 23, in 
> 2013-08-09 18:55:16 from oslo.config import cfg
> 2013-08-09 18:55:16 ImportError: No module named config
> 2013-08-09 18:55:16 + [[ PKI == \P\K\I ]]
>
>
> An unexpected error prevented the server from fulfilling your request.
> (ProgrammingError) (1146, "Table 'keystone.service' doesn't exist") 'INSERT
> INTO service (id, type, extra) VALUES (%s, %s, %s)'
> ('32578395572b4cf2a70ba70b6031cd1d', 'identity', '{"name": "keystone",
> "description": "Keystone Identity Service"}') (HTTP 500)
> 2013-08-12 18:36:45 + KEYSTONE_SERVICE=
> 2013-08-12 18:36:45 + keystone endpoint-create --region RegionOne
> --service_id --publicurl http://127.0.0.1:5000/v2.0 --adminurl
> http://127.0.0.1:35357/v2.0 --internalurl http://127.0.0.1:5000/v2.0
>
> it seems that oslo.config was not properly imported after I re-installed
> it,
> but when I list the pip installations, it is there.
>
> /usr/local/bin/pip freeze |grep oslo.config
> -e git+
> http://10.145.81.234/openstackci/gerrit/p/oslo.config@c65d70c02494805ce50b88f343f8fafe7a521724#egg=oslo.config-master
> root@devstack-4:/# /usr/local/bin/pip search oslo.config
> oslo.config   - Oslo configuration API
>   INSTALLED: 1.2.0.a192.gc65d70c
>   LATEST:1.1.1
>
>
>
> On Sat, Aug 10, 2013 at 7:07 AM, Sean Dague  wrote:
>
>> Silly pip, trix are for kids.
>>
>> Ok, well:
>>
>> sudo pip install -I oslo.config==1.1.1
>>
>> then pip uninstall oslo.config
>>
>> On 08/09/2013 06:58 PM, Roman Gorodeckij wrote:
>>
>>> stack@hp:~/devstack$ sudo pip install oslo.config
>>> Requirement already satisfied (use --upgrade to upgrade): oslo.config in
>>> /opt/stack/oslo.config
>>> Requirement already satisfied (use --upgrade to upgrade): six in
>>> /usr/local/lib/python2.7/dist-packages (from oslo.config)
>>> Cleaning up...
>>> stack@hp:~/devstack$ sudo pip uninstall oslo.config
>>> Can't uninstall 'oslo.config'. No files were found to uninstall.
>>> stack@hp:~/devstack$
>>>
>>> stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
>>> | touch "/opt/stack/status/stack/n-api.failure"nova &&
>>> /usr/local/bin/nova-api |
>>>
>>> Traceback (most recent call last):
>>>File "/usr/local/bin/nova-api", line 6, in 
>>>  from nova.cmd.api import main
>>>File "/opt/stack/nova/nova/cmd/api.py", line 29, in 
>>>  from nova import config
>>>File "/opt/stack/nova/nova/config.py", line 22, in 
>>>  from nova.openstack.common.db.sqlalchemy import session as
>>> db_session
>>>File 
>>> "/opt/stack/nova/nova/openstack/common/db/sqlalchemy/session.py",
>>> line 279, in 
>>>  deprecated_opts=[cfg.DeprecatedOpt('sql_connection',
>>> AttributeError: 'module' object has no attribute 'DeprecatedOpt'
>>>
>>> nothing changed.
>>>
>>> On Aug 9, 2013, at 6:11 PM, Sean Dague  wrote:
>>>
>>>  This should be addressed by the latest devstack, however because we
 moved to oslo.config out of git, some install environments might still have
 oslo.config 1.1.0 somewhere, that pip no longer sees (so can't uninstall)

 sudo pip install oslo.config
 sudo pip uninstall oslo.config

 rerun devstack, see if it works.

 -Sean

 On 08/09/2013 09:14 AM, Roman Gorodeckij wrote:

> Tried to install devstack to dedicated server, ip's are defined.
>
> Here's the output:
>
> 13-08-09 09:06:28 ++ echo -ne '\015'
>
> 2013-08-09 09:06:28 + NL=$'\r'
> 2013-08-09 09:06:28 + screen -S stack -p n-api -X stuff 'cd
> /opt/stack/nova && /usr/local/bin/nova-api || touch
> "/opt/stack/status/stack/n-api.failure"'
> 2013-08-09 09:06:28 + echo 'Waiting for nova-api to start...'
> 2013-08-09 09:06:28 Waiting for nova-api to start...
> 2013-08-09 09:06:28 + wait_for_se

[openstack-dev] Blueprint: launch-time configurable kernel-id, ramdisk-id, and kernel command line

2013-08-13 Thread Dennis Kliban
I have just created a new blueprint: 
https://blueprints.launchpad.net/nova/+spec/expose-ramdisk-kernel-and-command-line-via-rest-and-cli

I realize that some of this work overlaps with: 
https://blueprints.launchpad.net/nova/+spec/improve-boot-from-volume
which is an umbrella blueprint for: 
https://blueprints.launchpad.net/nova/+spec/improve-block-device-handling

I can see that a lot of work has been done for the above blueprints, but I was 
not clear on the progress with regard to exposing kernel-id and ramdisk-id.  
Perhaps I don't need to implement this?   

The second change proposed in the blueprint has not been addressed in any other
blueprints.  Does anyone think that adding the ability to pass in the kernel
command line at launch time would be problematic?

Thanks,
Dennis

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Meeting Tuesday August 13th at 19:00 UTC

2013-08-13 Thread Elizabeth Krumbach Joseph
On Mon, Aug 12, 2013 at 10:40 AM, Elizabeth Krumbach Joseph
 wrote:
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting tomorrow, Tuesday August 13th, at 19:00 UTC in
> #openstack-meeting

Meeting log and minutes:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-08-13-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-08-13-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-08-13-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Jay Pipes

On 08/13/2013 05:04 PM, Gabriel Hurley wrote:

I have been one of the earliest, loudest, and most consistent PITA's about 
pagination, so I probably oughta speak up. I would like to state three facts:

1. Marker + limit (e.g. forward-only) pagination is horrific for building a 
user interface.
2. Pagination doesn't scale.
3. OpenStack's APIs have historically had useless filtering capabilities.

In a world where pagination is a "must-have" feature we need to have page 
number + limit pagination in order to build a reasonable UI. Ironically though, I'm in 
favor of ditching pagination altogether. It's the lowest-common denominator, used because 
we as a community haven't buckled down and built meaningful ways for our users to get to 
the data they really want.

Filtering is great, but it's only 1/3 of the solution. Let me break it down with problems 
and high level "solutions":

Problem 1: I know what I want and I need to find it.
Solution: filtering/search systems.


This is a good place to start. Glance has excellent filtering/search 
capabilities -- built in to the API from early on in the Essex 
timeframe, and only expanded over the last few releases.
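For instance, the image API already accepts requests along these lines
(parameter values here are only illustrative):

GET /v2/images?status=active&sort_key=name&sort_dir=asc&limit=20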


Pagination solutions should build on a solid filtering/search 
functionality in the API, where there is a consistent sort key and 
direction (either hard-coded or user-determined, doesn't matter).


Limit/offset pagination solutions (forward and backwards paging, random 
skip-to-a-page) are inefficient from a SQL query perspective and should 
be a last resort, IMO, compared to limit/marker. With some smart 
session-storage of a page's markers, backwards paging with limit/marker 
APIs is certainly possible -- just store the previous page's marker.



Problem 2: I don't know what I want, and it may or may not exist.
Solution: tailored discovery mechanisms.


This should not be a use case that we spend much time on. Frankly, this 
use case can be summarized as "the window shopper scenario". Providing a 
quality search/filtering mechanism, including the *API* itself providing 
REST-ful discovery of the filters and search criteria the API supports, 
is way more important...



Problem 3: I need to know something about *all* the data in my system.
Solution: reporting systems.


Sure, no disagreement there.


We've got the better part of none of that.


I disagree. Some of the APIs have support for a good bit of 
search/filtering. We just need to bring all the projects up to search 
speed, Captain.


Best,
-jay

p.s. I very often go to the second and third pages of Google searches. 
:) But I never skip to the 127th page of results.


> But I'd like to solve these issues. I have lots of thoughts on all of 
those, and I think the UX and design communities can offer a lot in 
terms of the usability of the solutions we come up with. Even more, I 
think this would be an awesome working group session at the next summit 
to talk about nothing other than "how can we get rid of pagination?"


As a parting thought, what percentage of the time do you click to the second 
page of results in Google?

All the best,

 - Gabriel


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Migrating to testr parallel in tempest

2013-08-13 Thread Clark Boylan
On Tue, Aug 13, 2013 at 1:25 PM, Matthew Treinish  wrote:
>
> Hi everyone,
>
> So for the past month or so I've been working on getting tempest to work 
> stably
> with testr in parallel. As part of this you may have noticed the testr-full
> jobs that get run on the zuul check queue. I was using that job to debug some
> of the more obvious race conditions and stability issues with running tempest
> in parallel. After a bunch of fixes to tempest and finding some real bugs in
> some of the projects things seem to have smoothed out.
>
> So I pushed the testr-full run to the gate queue earlier today. I'll be 
> keeping
> track of the success rate of this job vs the serial job and use this as the
> determining factor before we push this live to be the default for all tempest
> runs. So assuming that the success rate matches up well enough with serial job
> on the gate queue then I will push out the change that will migrate all the
> voting jobs to run in parallel hopefully either Friday afternoon or early next
> week. Also, if anyone has any input on what threshold they feel is good enough
> for this I'd welcome any input on that. For example, do we want to ensure
> a >= 1:1 match for job success? Or would something like 90% as stable as the
> serial job be good enough considering the speed advantage. (The parallel runs
> take about half as much time as a full serial run, the parallel job normally
> finishes in ~25-30min) Since this affects almost every project I don't want to
> define this threshold without input from everyone.
>
> After there is some more data for the gate queue's parallel job I'll have some
> pretty graphite graphs that I can share comparing the success trends between
> the parallel and serial jobs.
>
> So at this point we're in the home stretch and I'm asking for everyone's help
> in getting this merged. So, if everyone who is reviewing and pushing commits
> could watch the results from these non-voting jobs and if things fail on the
> parallel job but not the serial job please investigate the failure and open a
> bug if necessary. If it turns out to be a bug in tempest please link it 
> against
> this blueprint:
>
> https://blueprints.launchpad.net/tempest/+spec/speed-up-tempest
>
> so that I'll give it the attention it deserves. I'd hate to get this close to
> getting this merged and have a bit of racy code get merged at the last second
> and block us for another week or two.
>
> I feel that we need to get this in before the H3 rush starts up as it will 
> help
> everyone get through the extra review load faster.
>
Getting this in before the H3 rush would be very helpful. When we made
the switch with Nova's unittests we fixed as many of the test bugs
that we could find, merged the change to switch the test runner, then
treated all failures as very high priority bugs that received
immediate attention. Getting this in before H3 will give everyone a
little more time to debug any potential new issues exposed by Jenkins
or people running the tests locally.

I think we should be bold here and merge this as soon as we have good
numbers that indicate the trend is for these tests to pass. Graphite
can give us the pass to fail ratios over time, as long as these trends
are similar for both the old nosetest jobs and the new testr job I say
we go for it. (Disclaimer: most of the projects I work on are not
affected by the tempest jobs; however, I am often called upon to help
sort out issues in the gate).

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reminder: Oslo project meeting

2013-08-13 Thread Mark McLoughlin
Hi

We're having an IRC meeting on Friday to sync up again on the messaging
work going on:

  https://wiki.openstack.org/wiki/Meetings/Oslo
  https://etherpad.openstack.org/HavanaOsloMessaging

Feel free to add other topics to the wiki

See you on #openstack-meeting at 1400 UTC

Thanks,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Gabriel Hurley
I have been one of the earliest, loudest, and most consistent PITA's about 
pagination, so I probably oughta speak up. I would like to state three facts:

1. Marker + limit (e.g. forward-only) pagination is horrific for building a 
user interface.
2. Pagination doesn't scale.
3. OpenStack's APIs have historically had useless filtering capabilities.

In a world where pagination is a "must-have" feature we need to have page 
number + limit pagination in order to build a reasonable UI. Ironically though, 
I'm in favor of ditching pagination altogether. It's the lowest-common 
denominator, used because we as a community haven't buckled down and built 
meaningful ways for our users to get to the data they really want.

Filtering is great, but it's only 1/3 of the solution. Let me break it down 
with problems and high level "solutions":

Problem 1: I know what I want and I need to find it.
Solution: filtering/search systems.

Problem 2: I don't know what I want, and it may or may not exist.
Solution: tailored discovery mechanisms.

Problem 3: I need to know something about *all* the data in my system.
Solution: reporting systems.

We've got the better part of none of that. But I'd like to solve these issues. 
I have lots of thoughts on all of those, and I think the UX and design 
communities can offer a lot in terms of the usability of the solutions we come 
up with. Even more, I think this would be an awesome working group session at 
the next summit to talk about nothing other than "how can we get rid of 
pagination?"

As a parting thought, what percentage of the time do you click to the second 
page of results in Google?

All the best,

- Gabriel


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Migrating to testr parallel in tempest

2013-08-13 Thread Jay Pipes

On 08/13/2013 04:25 PM, Matthew Treinish wrote:


Hi everyone,

So for the past month or so I've been working on getting tempest to work stably
with testr in parallel. As part of this you may have noticed the testr-full
jobs that get run on the zuul check queue. I was using that job to debug some
of the more obvious race conditions and stability issues with running tempest
in parallel. After a bunch of fixes to tempest and finding some real bugs in
some of the projects things seem to have smoothed out.

So I pushed the testr-full run to the gate queue earlier today. I'll be keeping
track of the success rate of this job vs the serial job and use this as the
determining factor before we push this live to be the default for all tempest
runs. So assuming that the success rate matches up well enough with serial job
on the gate queue then I will push out the change that will migrate all the
voting jobs to run in parallel hopefully either Friday afternoon or early next
week. Also, if anyone has any input on what threshold they feel is good enough
for this I'd welcome any input on that. For example, do we want to ensure
a >= 1:1 match for job success? Or would something like 90% as stable as the
serial job be good enough considering the speed advantage. (The parallel runs
take about half as much time as a full serial run, the parallel job normally
finishes in ~25-30min) Since this affects almost every project I don't want to
define this threshold without input from everyone.

After there is some more data for the gate queue's parallel job I'll have some
pretty graphite graphs that I can share comparing the success trends between
the parallel and serial jobs.

So at this point we're in the home stretch and I'm asking for everyone's help
in getting this merged. So, if everyone who is reviewing and pushing commits
could watch the results from these non-voting jobs and if things fail on the
parallel job but not the serial job please investigate the failure and open a
bug if necessary. If it turns out to be a bug in tempest please link it against
this blueprint:

https://blueprints.launchpad.net/tempest/+spec/speed-up-tempest

so that I'll give it the attention it deserves. I'd hate to get this close to
getting this merged and have a bit of racy code get merged at the last second
and block us for another week or two.

I feel that we need to get this in before the H3 rush starts up as it will help
everyone get through the extra review load faster.


Fantastic work on this, Matthew. Appreciate the effort immensely.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Extension to volume creation (filesystem and label)

2013-08-13 Thread Caitlin Bestler

On 8/12/2013 9:37 AM, Greg Poirier wrote:




Oh, we don't want to get super fancy with it. We would probably only
support one filesystem type and not partitions. E.g. You request a 120GB
volume and you get a 120GB Ext4 FS mountable by label.



I'm not following something here. What is the point of dictating a
specific FS format when the compute node will be the one applying
the interpretation?

Isn't a 120 GB volume which the VM will interpret as an EXT4 FS just
a 120 GB volume that has a *hint* attached to it?

And would there be any reason to constrain in advance the set of hints
that could be offered?
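
For what it's worth, if the hint route were taken it needn't be more than a
piece of volume metadata that the compute side (or guest tooling) is free to
act on or ignore. A minimal sketch with python-cinderclient (the 'fstype' and
'fslabel' keys here are made up for illustration, not an existing convention):

    # Sketch only: record the desired filesystem as a *hint* on the volume.
    # Cinder never interprets it; whoever attaches the volume decides whether
    # to mkfs/label the device. Credentials and metadata keys are placeholders.
    from cinderclient.v2 import client

    cinder = client.Client('demo', 'secret', 'demo-tenant',
                           'http://keystone.example.com:5000/v2.0')

    volume = cinder.volumes.create(
        size=120,                                            # GB
        name='dev-data',
        metadata={'fstype': 'ext4', 'fslabel': 'dev-data'},  # the hint
    )
    print(volume.id, volume.metadata)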



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Migrating to testr parallel in tempest

2013-08-13 Thread Matthew Treinish

Hi everyone,

So for the past month or so I've been working on getting tempest to work stably
with testr in parallel. As part of this you may have noticed the testr-full
jobs that get run on the zuul check queue. I was using that job to debug some
of the more obvious race conditions and stability issues with running tempest
in parallel. After a bunch of fixes to tempest and finding some real bugs in
some of the projects things seem to have smoothed out.

So I pushed the testr-full run to the gate queue earlier today. I'll be keeping
track of the success rate of this job vs the serial job and use this as the
determining factor before we push this live to be the default for all tempest
runs. So assuming that the success rate matches up well enough with serial job
on the gate queue then I will push out the change that will migrate all the
voting jobs to run in parallel hopefully either Friday afternoon or early next
week. Also, if anyone has any input on what threshold they feel is good enough
for this I'd welcome any input on that. For example, do we want to ensure 
a >= 1:1 match for job success? Or would something like 90% as stable as the
serial job be good enough considering the speed advantage. (The parallel runs
take about half as much time as a full serial run, the parallel job normally
finishes in ~25-30min) Since this affects almost every project I don't want to
define this threshold without input from everyone.
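
For whatever threshold we pick, the check itself is simple arithmetic; a rough
sketch, with made-up numbers standing in for the Graphite counts:

    # Made-up counts; the real numbers would come from the graphite trends.
    serial = {'passed': 940, 'failed': 60}
    parallel = {'passed': 910, 'failed': 90}

    def success_rate(job):
        return job['passed'] / float(job['passed'] + job['failed'])

    relative = success_rate(parallel) / success_rate(serial)
    # ">= 1:1" would require relative >= 1.0; "90% as stable" would be >= 0.9.
    print('parallel is %.1f%% as stable as serial' % (relative * 100))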

After there is some more data for the gate queue's parallel job I'll have some
pretty graphite graphs that I can share comparing the success trends between
the parallel and serial jobs.

So at this point we're in the home stretch and I'm asking for everyone's help
in getting this merged. So, if everyone who is reviewing and pushing commits
could watch the results from these non-voting jobs and if things fail on the
parallel job but not the serial job please investigate the failure and open a
bug if necessary. If it turns out to be a bug in tempest please link it against
this blueprint:

https://blueprints.launchpad.net/tempest/+spec/speed-up-tempest

so that I'll give it the attention it deserves. I'd hate to get this close to
getting this merged and have a bit of racy code get merged at the last second
and block us for another week or two.

I feel that we need to get this in before the H3 rush starts up as it will help
everyone get through the extra review load faster.

Thanks,

Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [ceilometer] Periodic Auditing In Glance

2013-08-13 Thread Sandy Walsh


On 08/13/2013 04:35 PM, Andrew Melton wrote:
>> I'm just concerned with the type of notification you'd send. It has to
>> be enough fine grained so we don't lose too much information.
> 
> It's a tough situation, sending out an image.exists for each image with
> the same payload as say image.upload would likely create TONS of traffic.
> Personally, I'm thinking about a batch payload, with a bare minimum of the
> following values:
> 
> 'payload': [{'id': 'uuid1', 'owner': 'tenant1', 'created_at':
> 'some_date', 'size': 1},
>{'id': 'uuid2', 'owner': 'tenant2', 'created_at':
> 'some_date', 'deleted_at': 'some_other_date', 'size': 2}]
> 
> That way the audit job/task could be configured to emit in batches which
> a deployer could tweak the settings so as to not emit too many messages.
> I definitely welcome other ideas as well.

Would it be better to group by tenant vs. image?

One .exists per tenant that contains all the images owned by that tenant?
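
Roughly this, if the audit task grouped by owner before emitting (field names
follow Andrew's example payload; the emit call is just a stand-in):

    # Illustrative only: build one image.exists payload per tenant.
    from collections import defaultdict

    def exists_payloads_by_tenant(images):
        by_tenant = defaultdict(list)
        for image in images:
            by_tenant[image['owner']].append(image)
        for owner, owned in by_tenant.items():
            yield {'owner': owner, 'images': owned}

    # for payload in exists_payloads_by_tenant(audited_images):
    #     emit('image.exists', payload)   # stand-in for the real notifier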

-S


> Thanks,
> Andrew Melton
> 
> 
> On Tue, Aug 13, 2013 at 4:27 AM, Julien Danjou  > wrote:
> 
> On Mon, Aug 12 2013, Andrew Melton wrote:
> 
> > So, my question to the Ceilometer community is this, does this
> sound like
> > something Ceilometer would find value in and use? If so, would this be
> > something
> > we would want most deployers turning on?
> 
> Yes. I think we would definitely be happy to have the ability to drop
> our pollster at some time.
> I'm just concerned with the type of notification you'd send. It has to
> be enough fine grained so we don't lose too much information.
> 
> --
> Julien Danjou
> // Free Software hacker / freelance consultant
> // http://julien.danjou.info
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [ceilometer] Periodic Auditing In Glance

2013-08-13 Thread Andrew Melton
> I'm just concerned with the type of notification you'd send. It has to
> be enough fine grained so we don't lose too much information.

It's a tough situation, sending out an image.exists for each image with
the same payload as say image.upload would likely create TONS of traffic.
Personally, I'm thinking about a batch payload, with a bare minimum of the
following values:

'payload': [{'id': 'uuid1', 'owner': 'tenant1', 'created_at': 'some_date',
'size': 1},
   {'id': 'uuid2', 'owner': 'tenant2', 'created_at':
'some_date', 'deleted_at': 'some_other_date', 'size': 2}]

That way the audit job/task could be configured to emit in batches, with
settings a deployer could tweak so as to not emit too many messages.
I definitely welcome other ideas as well.
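
To make the batching a bit more concrete, the audit task could do something
like the following (the batch size and the emit callable are illustrative, not
a proposal for an actual notifier interface):

    # Sketch: slice the audited images into deployer-tunable chunks and emit
    # one image.exists notification per chunk.
    def emit_exists_batches(images, emit, batch_size=500):
        for start in range(0, len(images), batch_size):
            emit('image.exists', {'images': images[start:start + batch_size]})

    # emit_exists_batches(audited_images, notifier_send, batch_size=500)
    # where 'notifier_send' is a placeholder for whatever notifier we end up using.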

Thanks,
Andrew Melton


On Tue, Aug 13, 2013 at 4:27 AM, Julien Danjou  wrote:

> On Mon, Aug 12 2013, Andrew Melton wrote:
>
> > So, my question to the Ceilometer community is this, does this sound like
> > something Ceilometer would find value in and use? If so, would this be
> > something
> > we would want most deployers turning on?
>
> Yes. I think we would definitely be happy to have the ability to drop
> our pollster at some time.
> I'm just concerned with the type of notification you'd send. It has to
> be enough fine grained so we don't lose too much information.
>
> --
> Julien Danjou
> // Free Software hacker / freelance consultant
> // http://julien.danjou.info
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-13 Thread Thomas Maddox
Hello!

I was having some chats yesterday with both Julien and Doug regarding some 
thoughts that occurred to me while digging through CM and Doug suggested that I 
bring them up on the dev list for everyone's benefit and discussion.

My bringing this up is intended to help myself and others get a better 
understanding of why it's this way, whether we're on the correct course, and, 
if not, how we get to it. I'm not expecting anything to change quickly or 
necessarily at all from this. Ultimately the question I'm asking is: are we 
addressing the correct use cases with the correct API calls, such that users can 
expect certain behavior without having to know the internals? For context, this 
is mostly using the SQLAlchemy implementation for these questions, but the API 
questions apply overall.

My concerns:

  *   Driving get_resources() with the Meter table instead of the Resource 
table. This is mainly because of the additional filtering available in the 
Meter table, which allows us to satisfy a use case like getting a list of 
resources a user had during a period of time to get meters to compute billing 
with. The semantics are tripping me up a bit; the question this boiled down to 
for me was: why use a resource query to get meters to show usage by a tenant? I 
was curious about why we needed the timestamp filtering when looking at 
Resources, and why we would use Resource as a way to get at metering data, 
rather than a Meter request itself? This was answered by resources being the 
current vector to get at metering data for a tenant in terms of resources, if I 
understood correctly.
  *   With this implementation, we have to do aggregation to get at the 
discrete Resources (via the Meter table) rather than just filtering the already 
distinct ones in the Resource table.
  *   This brought up some confusion with the API for me with the major use 
cases I can think of:
 *   As a new consumer of this API, I would think that 
/resource/<resource_id> would get me details for a resource, e.g. current 
state, when it was created, last updated/used timestamp, who owns it; not the 
attributes from the first sample to come through about it
 *   I would think that /meter/?q.field=resource_id&q.value=<resource_id> 
ought to get me a list of meter(s) details for a specific resource, e.g. name, 
unit, and origin; but not a huge mixture of samples.
*   Additionally /meter/?q.field=user_id&q.value=<user_id> would get me 
a list of all meters that are currently related to the user
 *   The ultimate use case, for billing queries, I would think that 
/meter/<meter_name>/statistics?<query>&<period>&(<group by>) would get me 
the measurements for that meter to bill for (sketched below).
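
For concreteness, those calls would look roughly like the following as plain
HTTP against the v2 API (host, token, meter name and IDs are placeholders; the
query string just follows the q.field/q.op/q.value convention, and the real
routes are the plural /resources and /meters):

    # Placeholders throughout; only illustrating the shape of the requests.
    import requests

    BASE = 'http://ceilometer.example.com:8777/v2'
    HEADERS = {'X-Auth-Token': 'TOKEN'}

    # details for one resource
    requests.get(BASE + '/resources/RESOURCE_ID', headers=HEADERS)

    # meters attached to a specific resource
    requests.get(BASE + '/meters',
                 params={'q.field': 'resource_id', 'q.op': 'eq',
                         'q.value': 'RESOURCE_ID'},
                 headers=HEADERS)

    # statistics for one meter over a billing period
    requests.get(BASE + '/meters/cpu_util/statistics',
                 params={'q.field': 'timestamp', 'q.op': 'ge',
                         'q.value': '2013-08-01T00:00:00'},
                 headers=HEADERS)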

If I understand correctly, one main intent driving this is wanting to avoid end 
users having to write a bunch of API requests themselves from the billing side 
and instead just drill down from payloads for each resource to get the billing 
information for their customers. It also looks like there's a BP to add 
grouping functionality to statistics queries to allow us this functionality 
easily (this one, I think: 
https://blueprints.launchpad.net/ceilometer/+spec/api-group-by).

I'm new to this project, so I'm trying to get a handle on how we got here and 
maybe offer some outside perspective, if it's needed or wanted. =]

Thank you all in advance for your time with this. I hope this is productive!

Cheers!

-Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Jay Pipes

On 08/13/2013 01:43 PM, Henry Nash wrote:

Jay,

Thanks for all the various links - most useful.


No problem. It's an old problem in the OpenStack world that's been 
discussed many times before :) Might as well take advantage of prior 
art/discussions... I personally had a change of heart on the issue a 
couple years ago after the initial discussions.



To map this into keystone context, if we were to follow this logic we would:

1) Support 'limit' and 'marker' (as opposed to 'page', 'page_szie', or anything 
else).  These would be standard, independent of what backing store keystone was 
using.  If neither are included in the url, then we return the first N entires, 
where N is defined by the cloud provider.  This ensures that for at least 
smaller deployments, non-pagination aware clients still work.  If either 
'limit' or 'marker' are specified, then we paginate, passing them down into the 
driver layer wherever possible to ensure efficiency (some drivers may not be 
able to support pagination, hence we will do this, inefficiently, at a higher 
layer)
2) If we are paginating at the driver level, we must, by definition, be doing 
all the filtering down there as well (otherwise it all gets mucked)
3) We should look at supporting the other standard options (sort order etc.), 
but irrespective of that, by definition, we must ensure that we any driver that 
is paginating must be getting is entries back in a consistent order (otherwise, 
again, pagination doesn't work reliably)


Yup, all of the above matches my understanding. Filter support 
should come first, followed by paging-parameter pushdown to the engines.


Best,
-jay


On 13 Aug 2013, at 18:10, Jay Pipes wrote:


On 08/13/2013 12:55 PM, Lyle, David (Cloud Services) wrote:

The marker/limit pagination scheme is inferior.


A bold statement that flies in the face of experience and the work already done 
in all the other projects.


The use of page/page_size allows access to arbitrary pages, whereas 
limit/marker only allows forward progress.


I don't see this as a particularly compelling use case considering the 
performance manifestations of using LIMIT OFFSET pagination.


In Horizon's use case, with page/page_size we can provide the user access to 
any page they have already visited, rather than just the previous page (using 
prev/next links returned in the response).


I don't see this as a particularly useful thing, but in any case, you could 
still do this by keeping the markers for previous pages on the client (Horizon) 
side.

The point of marker/limit is to eliminate poor performance of LIMIT OFFSET 
queries and to force proper index usage in the listing queries.

You can see the original discussion about this from more than two years and 
even see where I was originally arguing for a LIMIT OFFSET strategy and was 
brought around to the current limit/marker strategy by the responses of Justin 
Santa Barbara and Greg Holt:

https://lists.launchpad.net/openstack/msg02548.html

Best,
-jay


-David

On 08/13/2013 10:29 AM, Pipes, Jay wrote:


On 08/13/2013 03:05 AM, Yee, Guang wrote:

Passing the query parameters, whatever they are, into the driver if
the given driver supports pagination and allowing the driver to
override the manager default pagination functionality seem reasonable to me.



Please do use the standards that are supported in other OpenStack services 
already: limit, marker, sort_key and sort_dir.



Pagination is meaningless without a sort key and direction, so picking a sensible 
default for user/project records is good. I'd go with either created_at (what 
Glance/Nova/Cinder use..) or with the user/project >UUID.



The Glance DB API pagination is well-documented and clean [1]. I highly 
recommend it as a starting point.



Nova uses the same marker/limit/sort_key/sort_dir options for queries that it 
allows pagination on. An example is the
instance_get_all_by_filters() call [2].



Cinder uses the same marker/limit/sort_key/sort_dir options for query 
pagination as well. [3]



Finally, I'd consider supporting the standard change-since parameter for listing operations. 
Both Nova [4] and Glance [5] support the parameter, which is useful for tools that poll the 
APIs for "new" >events/records.



In short, go with what is already a standard in the other projects...



Best,
-jay



[1]
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L429
[2]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1709
[3]
https://github.com/openstack/cinder/blob/master/cinder/common/sqlalchemyutils.py#L33
[4]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1766
[5]
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L618





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Jay Pipes
On 08/13/2013 01:51 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
wrote:

I have been following this exchange of ideas on how to solve/implement 
pagination. I would ask you to keep in mind that a solution needs to take into 
account a split LDAP/SQL backend (you are not always dealing with a single 
Keystone SQL database). Having a split backend means that the query information 
is divided between both backends and that you may not have as much flexibility 
with the LDAP backend


Yes, absolutely understood and a good point.

In the case of engines that don't support filtering, ordering, or other 
DB-like operations, a pagination implementation in the controller 
would have to be provided. Not efficient, but better than nothing.
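
Something along these lines, sketched very roughly (and assuming the backend
can only hand back the full, unfiltered list):

    # Rough sketch of a controller-level fallback: fetch everything from the
    # backend, then apply ordering and marker/limit in Python. Inefficient,
    # as noted above, but it keeps the API contract uniform across drivers.
    def paginate_in_controller(entities, marker=None, limit=30, key='id'):
        entities = sorted(entities, key=lambda e: e[key])  # consistent order
        if marker is not None:
            ids = [e[key] for e in entities]
            start = ids.index(marker) + 1 if marker in ids else 0
            entities = entities[start:]
        return entities[:limit]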


-jay


Mark.

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, August 13, 2013 10:10 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone] Pagination

On 08/13/2013 12:55 PM, Lyle, David (Cloud Services) wrote:

The marker/limit pagination scheme is inferior.


A bold statement that flies in the face of experience and the work already done 
in all the other projects.

  >The use of page/page_size allows access to arbitrary pages, whereas 
limit/marker only allows forward progress.

I don't see this as a particularly compelling use case considering the 
performance manifestations of using LIMIT OFFSET pagination.

  >In Horizon's use case, with page/page_size we can provide the user access to 
any page they have already visited, rather than just the previous page (using 
prev/next links returned in the response).

I don't see this as a particularly useful thing, but in any case, you could 
still do this by keeping the markers for previous pages on the client (Horizon) 
side.

The point of marker/limit is to eliminate poor performance of LIMIT OFFSET 
queries and to force proper index usage in the listing queries.

You can see the original discussion about this from more than two years and 
even see where I was originally arguing for a LIMIT OFFSET strategy and was 
brought around to the current limit/marker strategy by the responses of Justin 
Santa Barbara and Greg Holt:

https://lists.launchpad.net/openstack/msg02548.html

Best,
-jay


-David

On 08/13/2013 10:29 AM, Pipes, Jay wrote:


On 08/13/2013 03:05 AM, Yee, Guang wrote:

Passing the query parameters, whatever they are, into the driver if
the given driver supports pagination and allowing the driver to
override the manager default pagination functionality seem reasonable to me.



Please do use the standards that are supported in other OpenStack services 
already: limit, marker, sort_key and sort_dir.



Pagination is meaningless without a sort key and direction, so picking a sensible 
default for user/project records is good. I'd go with either created_at (what 
Glance/Nova/Cinder use..) or with the user/project >UUID.



The Glance DB API pagination is well-documented and clean [1]. I highly 
recommend it as a starting point.



Nova uses the same marker/limit/sort_key/sort_dir options for queries
that it allows pagination on. An example is the
instance_get_all_by_filters() call [2].



Cinder uses the same marker/limit/sort_key/sort_dir options for query
pagination as well. [3]



Finally, I'd consider supporting the standard change-since parameter for listing operations. 
Both Nova [4] and Glance [5] support the parameter, which is useful for tools that poll the 
APIs for "new" >events/records.



In short, go with what is already a standard in the other projects...



Best,
-jay



[1]
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/
api.py#L429
[2]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.
py#L1709
[3]
https://github.com/openstack/cinder/blob/master/cinder/common/sqlalch
emyutils.py#L33
[4]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.
py#L1766
[5]
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/
api.py#L618





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ANVIL] Missing openvswitch dependency for basic-neutron.yaml persona

2013-08-13 Thread Joshua Harlow
Haha, no problem. Darn time differences.

So some other useful links that I think will be helpful.

- 
https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-neutron.spec

This one is likely the biggest part of the issue, since it is the combination 
of all of neutron into 1 package (which has sub-packages).

- One of those sub-packages is 
https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-neutron.spec#L274

This is pulling in the openvswitch part, which I think u don't want (at least 
not always; it's wanted if neutron is going to use it, which under certain 
plugins it will).

As you've seen it likely shouldn't be installing/needing that if 
https://github.com/stackforge/anvil/blob/master/anvil/components/configurators/neutron_plugins/linuxbridge.py
 is used.

This should be coming from the following config 'get_option' call (the value 
itself will come from the yaml files):

https://github.com/stackforge/anvil/blob/master/anvil/components/configurators/neutron.py#L49

So I think what can be done is a couple of things:

  1.  Don't include sub-packages that we don't want (the spec files are 
cheetah templates, so this can be done 
dynamically).
  2.  See if there is a way to make yum (or via yyoom) not pull in the 
dependencies for a sub-package when it won't be used (?)
  3.  Always build openvswitch (not as preferable) and include it 
(https://github.com/stackforge/anvil/blob/master/tools/build-openvswitch.sh)
 *   I think the RDO repos might have some of these components.
 *   
http://openstack.redhat.com/Frequently_Asked_Questions#For_which_distributions_does_RDO_provide_packages.3F
 *   This means we can just include the RDO repo rpm (like epel) and use 
the openvswitch version there, instead of building our own.

Hope some of this offers some good pointers.

-Josh

From: Sylvain Bauza <sylvain.ba...@bull.net>
Date: Tuesday, August 13, 2013 9:52 AM
To: Joshua Harlow <harlo...@yahoo-inc.com>
Cc: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [ANVIL] Missing openvswitch dependency for 
basic-neutron.yaml persona

Well, then I have to read thru the docs to see how it can be done thru a config 
option... =)

Nope, I won't be able to catch you up on IRC, time difference you know :-)
Anyway, let me go thru it, I'll try to sort it out.

I RTFM'd all the anvil docs, but do you have any other pointer for me ?

Thanks,
-Sylvain

Le 13/08/2013 18:39, Joshua Harlow a écrit :
Well open switch is likely needed still when it's really needed right? So I 
think there is a need for it. It just might have to be a dynamic choice (based 
on a config option) instead of a static choice. Make sense??

The other personas don't use neutron so I think that's how they work, since 
nova-network base functionality still exists.

Any patches would be great, will be on irc soon if u want to discuss more.

Josh

Sent from my really tiny device...

On Aug 13, 2013, at 9:23 AM, "Sylvain Bauza" <sylvain.ba...@bull.net> wrote:

Do you confirm the basic idea would be to get rid of any openvswitch reference 
in rhel.yaml ?
If so, wouldn't it be breaking other personas ?

I can provide a patch so the team would review it.

-Sylvain

Le 13/08/2013 17:57, Joshua Harlow a écrit :
It likely shouldn't be needed :)

I haven't personally messes around with the neutron persona to much and I know 
that it just underwent the "great rename of 2013" so u might be hitting issues 
due to that.

Try seeing if u can adjust the yaml file and if not I am on irc to help more.

Sent from my really tiny device...

On Aug 12, 2013, at 9:14 AM, "Sylvain Bauza" <sylvain.ba...@bull.net> wrote:

Hi,

./smithy -a install -p conf/personas/in-a-box/basic-neutron.yaml is failing 
because of openvswitch missing.
See logs here [1].

Does anyone knows why openvswitch is needed when asking for linuxbridge in 
components/neutron.yaml ?
Shall I update distros/rhel.yaml ?

-Sylvain



[1] : http://pastebin.com/TFkDrrDc


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
I have been following this exchange of ideas on how to solve/implement 
pagination. I would ask you to keep in mind that a solution needs to take into 
account a split LDAP/SQL backend (you are not always dealing with a single 
Keystone SQL database). Having a split backend means that the query information 
is divided between both backends and that you may not have as much flexibility 
with the LDAP backend

Mark.

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Tuesday, August 13, 2013 10:10 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone] Pagination

On 08/13/2013 12:55 PM, Lyle, David (Cloud Services) wrote:
> The marker/limit pagination scheme is inferior.

A bold statement that flies in the face of experience and the work already done 
in all the other projects.

 >The use of page/page_size allows access to arbitrary pages, whereas 
 >limit/marker only allows forward progress.

I don't see this as a particularly compelling use case considering the 
performance manifestations of using LIMIT OFFSET pagination.

 >In Horizon's use case, with page/page_size we can provide the user access to 
 >any page they have already visited, rather than just the previous page (using 
 >prev/next links returned in the response).

I don't see this as a particularly useful thing, but in any case, you could 
still do this by keeping the markers for previous pages on the client (Horizon) 
side.

The point of marker/limit is to eliminate poor performance of LIMIT OFFSET 
queries and to force proper index usage in the listing queries.

You can see the original discussion about this from more than two years and 
even see where I was originally arguing for a LIMIT OFFSET strategy and was 
brought around to the current limit/marker strategy by the responses of Justin 
Santa Barbara and Greg Holt:

https://lists.launchpad.net/openstack/msg02548.html

Best,
-jay

> -David
>
> On 08/13/2013 10:29 AM, Pipes, Jay wrote:
>
>> On 08/13/2013 03:05 AM, Yee, Guang wrote:
>>> Passing the query parameters, whatever they are, into the driver if 
>>> the given driver supports pagination and allowing the driver to 
>>> override the manager default pagination functionality seem reasonable to me.
>
>> Please do use the standards that are supported in other OpenStack services 
>> already: limit, marker, sort_key and sort_dir.
>
>> Pagination is meaningless without a sort key and direction, so picking a 
>> sensible default for user/project records is good. I'd go with either 
>> created_at (what Glance/Nova/Cinder use..) or with the user/project >UUID.
>
>> The Glance DB API pagination is well-documented and clean [1]. I highly 
>> recommend it as a starting point.
>
>> Nova uses the same marker/limit/sort_key/sort_dir options for queries 
>> that it allows pagination on. An example is the
>> instance_get_all_by_filters() call [2].
>
>> Cinder uses the same marker/limit/sort_key/sort_dir options for query 
>> pagination as well. [3]
>
>> Finally, I'd consider supporting the standard change-since parameter for 
>> listing operations. Both Nova [4] and Glance [5] support the parameter, 
>> which is useful for tools that poll the APIs for "new" >events/records.
>
>> In short, go with what is already a standard in the other projects...
>
>> Best,
>> -jay
>
>> [1]
>> https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/
>> api.py#L429
>> [2]
>> https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.
>> py#L1709
>> [3]
>> https://github.com/openstack/cinder/blob/master/cinder/common/sqlalch
>> emyutils.py#L33
>> [4]
>> https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.
>> py#L1766
>> [5]
>> https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/
>> api.py#L618
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Henry Nash
Jay,

Thanks for all the various links - most useful.

To map this into keystone context, if we were to follow this logic we would:

1) Support 'limit' and 'marker' (as opposed to 'page', 'page_size', or anything 
else).  These would be standard, independent of what backing store keystone was 
using.  If neither are included in the url, then we return the first N entries, 
where N is defined by the cloud provider.  This ensures that for at least 
smaller deployments, non-pagination aware clients still work.  If either 
'limit' or 'marker' are specified, then we paginate, passing them down into the 
driver layer wherever possible to ensure efficiency (some drivers may not be 
able to support pagination, hence we will do this, inefficiently, at a higher 
layer)
2) If we are paginating at the driver level, we must, by definition, be doing 
all the filtering down there as well (otherwise it all gets mucked)
3) We should look at supporting the other standard options (sort order etc.), 
but irrespective of that, by definition, we must ensure that any driver that 
is paginating is getting its entries back in a consistent order (otherwise, 
again, pagination doesn't work reliably); a sketch of this at the driver layer 
follows below
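
As a very rough illustration of 1) and 2) at the SQL driver layer
(SQLAlchemy-flavoured and purely hypothetical, not the actual keystone driver
interface; 'User' stands in for the mapped model):

    def list_users(session, filters=None, marker=None, limit=None,
                   sort_key='id', sort_dir='asc'):
        query = session.query(User)

        for attr, value in (filters or {}).items():         # filter first
            query = query.filter(getattr(User, attr) == value)

        sort_col = getattr(User, sort_key)                   # consistent order
        query = query.order_by(sort_col.desc() if sort_dir == 'desc'
                               else sort_col.asc())

        if marker is not None:                               # start after the marker row
            marker_val = getattr(session.query(User).get(marker), sort_key)
            query = query.filter(sort_col < marker_val if sort_dir == 'desc'
                                 else sort_col > marker_val)

        if limit is not None:
            query = query.limit(limit)
        return query.all()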

Henry
On 13 Aug 2013, at 18:10, Jay Pipes wrote:

> On 08/13/2013 12:55 PM, Lyle, David (Cloud Services) wrote:
>> The marker/limit pagination scheme is inferior.
> 
> A bold statement that flies in the face of experience and the work already 
> done in all the other projects.
> 
> >The use of page/page_size allows access to arbitrary pages, whereas 
> >limit/marker only allows forward progress.
> 
> I don't see this as a particularly compelling use case considering the 
> performance manifestations of using LIMIT OFFSET pagination.
> 
> >In Horizon's use case, with page/page_size we can provide the user access to 
> >any page they have already visited, rather than just the previous page 
> >(using prev/next links returned in the response).
> 
> I don't see this as a particularly useful thing, but in any case, you could 
> still do this by keeping the markers for previous pages on the client 
> (Horizon) side.
> 
> The point of marker/limit is to eliminate poor performance of LIMIT OFFSET 
> queries and to force proper index usage in the listing queries.
> 
> You can see the original discussion about this from more than two years and 
> even see where I was originally arguing for a LIMIT OFFSET strategy and was 
> brought around to the current limit/marker strategy by the responses of 
> Justin Santa Barbara and Greg Holt:
> 
> https://lists.launchpad.net/openstack/msg02548.html
> 
> Best,
> -jay
> 
>> -David
>> 
>> On 08/13/2013 10:29 AM, Pipes, Jay wrote:
>> 
>>> On 08/13/2013 03:05 AM, Yee, Guang wrote:
 Passing the query parameters, whatever they are, into the driver if
 the given driver supports pagination and allowing the driver to
 override the manager default pagination functionality seem reasonable to 
 me.
>> 
>>> Please do use the standards that are supported in other OpenStack services 
>>> already: limit, marker, sort_key and sort_dir.
>> 
>>> Pagination is meaningless without a sort key and direction, so picking a 
>>> sensible default for user/project records is good. I'd go with either 
>>> created_at (what Glance/Nova/Cinder use..) or with the user/project >UUID.
>> 
>>> The Glance DB API pagination is well-documented and clean [1]. I highly 
>>> recommend it as a starting point.
>> 
>>> Nova uses the same marker/limit/sort_key/sort_dir options for queries that 
>>> it allows pagination on. An example is the
>>> instance_get_all_by_filters() call [2].
>> 
>>> Cinder uses the same marker/limit/sort_key/sort_dir options for query 
>>> pagination as well. [3]
>> 
>>> Finally, I'd consider supporting the standard change-since parameter for 
>>> listing operations. Both Nova [4] and Glance [5] support the parameter, 
>>> which is useful for tools that poll the APIs for "new" >events/records.
>> 
>>> In short, go with what is already a standard in the other projects...
>> 
>>> Best,
>>> -jay
>> 
>>> [1]
>>> https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L429
>>> [2]
>>> https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1709
>>> [3]
>>> https://github.com/openstack/cinder/blob/master/cinder/common/sqlalchemyutils.py#L33
>>> [4]
>>> https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1766
>>> [5]
>>> https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L618
>> 
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> ___
> OpenSt

Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Jay Pipes

On 08/13/2013 12:55 PM, Lyle, David (Cloud Services) wrote:

The marker/limit pagination scheme is inferior.


A bold statement that flies in the face of experience and the work 
already done in all the other projects.


>The use of page/page_size allows access to arbitrary pages, whereas 
limit/marker only allows forward progress.


I don't see this as a particularly compelling use case considering the 
performance manifestations of using LIMIT OFFSET pagination.


>In Horizon's use case, with page/page_size we can provide the user 
access to any page they have already visited, rather than just the 
previous page (using prev/next links returned in the response).


I don't see this as a particularly useful thing, but in any case, you 
could still do this by keeping the markers for previous pages on the 
client (Horizon) side.
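
E.g. a tiny marker stack on the Horizon side would be enough; purely as an
illustration:

    # Purely illustrative: remember the marker used to fetch each page so a
    # "previous page" link can sit on top of forward-only marker/limit paging.
    class MarkerHistory(object):
        def __init__(self):
            self._stack = [None]          # page 1 is fetched with no marker

        def forward(self, last_item_id):
            # call with the id of the last item on the current page
            self._stack.append(last_item_id)
            return last_item_id           # marker for the next request

        def back(self):
            if len(self._stack) > 1:
                self._stack.pop()         # drop the current page's marker
            return self._stack[-1]        # marker for the previous page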


The point of marker/limit is to eliminate poor performance of LIMIT 
OFFSET queries and to force proper index usage in the listing queries.


You can see the original discussion about this from more than two years ago 
and even see where I was originally arguing for a LIMIT OFFSET strategy 
and was brought around to the current limit/marker strategy by the 
responses of Justin Santa Barbara and Greg Holt:


https://lists.launchpad.net/openstack/msg02548.html

Best,
-jay


-David

On 08/13/2013 10:29 AM, Pipes, Jay wrote:


On 08/13/2013 03:05 AM, Yee, Guang wrote:

Passing the query parameters, whatever they are, into the driver if
the given driver supports pagination and allowing the driver to
override the manager default pagination functionality seem reasonable to me.



Please do use the standards that are supported in other OpenStack services 
already: limit, marker, sort_key and sort_dir.



Pagination is meaningless without a sort key and direction, so picking a sensible 
default for user/project records is good. I'd go with either created_at (what 
Glance/Nova/Cinder use..) or with the user/project >UUID.



The Glance DB API pagination is well-documented and clean [1]. I highly 
recommend it as a starting point.



Nova uses the same marker/limit/sort_key/sort_dir options for queries that it 
allows pagination on. An example is the
instance_get_all_by_filters() call [2].



Cinder uses the same marker/limit/sort_key/sort_dir options for query 
pagination as well. [3]



Finally, I'd consider supporting the standard change-since parameter for listing operations. 
Both Nova [4] and Glance [5] support the parameter, which is useful for tools that poll the 
APIs for "new" >events/records.



In short, go with what is already a standard in the other projects...



Best,
-jay



[1]
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L429
[2]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1709
[3]
https://github.com/openstack/cinder/blob/master/cinder/common/sqlalchemyutils.py#L33
[4]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1766
[5]
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L618





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V Meeting Minutes

2013-08-13 Thread Peter Pouliot
Hi Everyone,

Here are the minutes from today's Hyper-V meeting.

Minutes:
http://eavesdrop.openstack.org/meetings/_hyper_v/2013/_hyper_v.2013-08-13-16.02.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/_hyper_v/2013/_hyper_v.2013-08-13-16.02.txt
Log:
http://eavesdrop.openstack.org/meetings/_hyper_v/2013/_hyper_v.2013-08-13-16.02.log.html


Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive, Cambridge, MA 02142
ppoul...@microsoft.com | Tel: +1(857) 453 6436

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Lyle, David (Cloud Services)
The marker/limit pagination scheme is inferior.  The use of page/page_size 
allows access to arbitrary pages, whereas limit/marker only allows forward 
progress.   In Horizon's use case, with page/page_size we can provide the user 
access to any page they have already visited, rather than just the previous 
page (using prev/next links returned in the response).  

-David

On 08/13/2013 10:29 AM, Pipes, Jay wrote:

>On 08/13/2013 03:05 AM, Yee, Guang wrote:
>> Passing the query parameters, whatever they are, into the driver if 
>> the given driver supports pagination and allowing the driver to 
>> override the manager default pagination functionality seem reasonable to me.

>Please do use the standards that are supported in other OpenStack services 
>already: limit, marker, sort_key and sort_dir.

>Pagination is meaningless without a sort key and direction, so picking a 
>sensible default for user/project records is good. I'd go with either 
>created_at (what Glance/Nova/Cinder use..) or with the user/project >UUID.

>The Glance DB API pagination is well-documented and clean [1]. I highly 
>recommend it as a starting point.

>Nova uses the same marker/limit/sort_key/sort_dir options for queries that it 
>allows pagination on. An example is the
>instance_get_all_by_filters() call [2].

>Cinder uses the same marker/limit/sort_key/sort_dir options for query 
>pagination as well. [3]

>Finally, I'd consider supporting the standard change-since parameter for 
>listing operations. Both Nova [4] and Glance [5] support the parameter, which 
>is useful for tools that poll the APIs for "new" >events/records.

>In short, go with what is already a standard in the other projects...

>Best,
>-jay

>[1]
>https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L429
>[2]
>https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1709
>[3]
>https://github.com/openstack/cinder/blob/master/cinder/common/sqlalchemyutils.py#L33
>[4]
>https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1766
>[5]
>https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L618




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ANVIL] Missing openvswitch dependency for basic-neutron.yaml persona

2013-08-13 Thread Sylvain Bauza
Well, then I have to read thru the docs to see how it can be done thru a 
config option... =)


Nope, I won't be able to catch you up on IRC, time difference you know :-)
Anyway, let me go thru it, I'll try to sort it out.

I RTFM'd all the anvil docs, but do you have any other pointer for me ?

Thanks,
-Sylvain

Le 13/08/2013 18:39, Joshua Harlow a écrit :
Well open switch is likely needed still when it's really needed right? 
So I think there is a need for it. It just might have to be a dynamic 
choice (based on a config option) instead of a static choice. Make sense??


The other personas don't use neutron so I think that's how they work, 
since nova-network base functionality still exists.


Any patches would be great, will be on irc soon if u want to discuss more.

Josh

Sent from my really tiny device...

On Aug 13, 2013, at 9:23 AM, "Sylvain Bauza" wrote:


Do you confirm the basic idea would be to get rid of any openvswitch 
reference in rhel.yaml ?

If so, wouldn't it be breaking other personas ?

I can provide a patch so the team would review it.

-Sylvain

Le 13/08/2013 17:57, Joshua Harlow a écrit :

It likely shouldn't be needed :)

I haven't personally messes around with the neutron persona to much 
and I know that it just underwent the "great rename of 2013" so u 
might be hitting issues due to that.


Try seeing if u can adjust the yaml file and if not I am on irc to 
help more.


Sent from my really tiny device...

On Aug 12, 2013, at 9:14 AM, "Sylvain Bauza" wrote:



Hi,

./smithy -a install -p conf/personas/in-a-box/basic-neutron.yaml is 
failing because of openvswitch missing.

See logs here [1].

Does anyone knows why openvswitch is needed when asking for 
linuxbridge in components/neutron.yaml ?

Shall I update distros/rhel.yaml ?

-Sylvain



[1] : http://pastebin.com/TFkDrrDc


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to support new Cinder driver for CloudByte's Elastistor

2013-08-13 Thread Amit Das
Thanks John for the updates.

I shall work towards setting up the Blueprint & Gerrit. We have the code
ready but are pretty late in participating with the community.

Regards,
Amit
*CloudByte Inc.* 


On Tue, Aug 13, 2013 at 8:31 PM, John Griffith
wrote:

> Hi Amit,
>
> I think part of what Thierry was alluding to was the fact that feature
> freeze for Grizzly is next week.  Also in the past we've been trying to
> make sure that folks did not introduce BP's for new drivers in the last
> release mile-stone.  There are other folks that are in this position
> however they've also proposed their BP's for their driver and sent updates
> to the Cinder team since H1.
>
> That being said, if you already have working code that you think is ready
> and can be submitted we can see what the rest of the Cinder team thinks.
>  No promises though that your code will make it in, there are a number of
> things already in process that will take priority in terms of review time
> etc.
>
> Thanks,
> John
>
>
> On Tue, Aug 13, 2013 at 8:42 AM, Amit Das  wrote:
>
>> Thanks a lot... This should give us a head start.
>>
>> Regards,
>> Amit
>> *CloudByte Inc.* 
>>
>>
>> On Tue, Aug 13, 2013 at 5:14 PM, Thierry Carrez wrote:
>>
>>> Amit Das wrote:
>>> > We have implemented a CINDER driver for our QoS aware storage solution
>>> > (CloudByte Elastistor).
>>> >
>>> > We would like to integrate this driver code with the next version of
>>> > OpenStack (Havana).
>>> >
>>> > Please let us know the approval processes to be followed for this new
>>> > driver support.
>>>
>>> See https://wiki.openstack.org/wiki/Release_Cycle and
>>> https://wiki.openstack.org/wiki/Blueprints for the beginning of an
>>> answer.
>>>
>>> Note that we are pretty late in the Havana cycle with lots of features
>>> which have been proposed a long time ago still waiting for reviews and
>>> merging... so it's a bit unlikely that a new feature would be added now
>>> to that already-overloaded backlog.
>>>
>>> --
>>> Thierry Carrez (ttx)
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ANVIL] Missing openvswitch dependency for basic-neutron.yaml persona

2013-08-13 Thread Joshua Harlow
Well openvswitch is likely needed still when it's really needed right? So I 
think there is a need for it. It just might have to be a dynamic choice (based 
on a config option) instead of a static choice. Make sense??

The other personas don't use neutron so I think that's how they work, since 
nova-network base functionality still exists.

Any patches would be great, will be on irc soon if u want to discuss more.

Josh

Sent from my really tiny device...

On Aug 13, 2013, at 9:23 AM, "Sylvain Bauza" <sylvain.ba...@bull.net> wrote:

Do you confirm the basic idea would be to get rid of any openvswitch reference 
in rhel.yaml ?
If so, wouldn't it be breaking other personas ?

I can provide a patch so the team would review it.

-Sylvain

Le 13/08/2013 17:57, Joshua Harlow a écrit :
It likely shouldn't be needed :)

I haven't personally messes around with the neutron persona to much and I know 
that it just underwent the "great rename of 2013" so u might be hitting issues 
due to that.

Try seeing if u can adjust the yaml file and if not I am on irc to help more.

Sent from my really tiny device...

On Aug 12, 2013, at 9:14 AM, "Sylvain Bauza" <sylvain.ba...@bull.net> wrote:

Hi,

./smithy -a install -p conf/personas/in-a-box/basic-neutron.yaml is failing 
because of openvswitch missing.
See logs here [1].

Does anyone knows why openvswitch is needed when asking for linuxbridge in 
components/neutron.yaml ?
Shall I update distros/rhel.yaml ?

-Sylvain



[1] : http://pastebin.com/TFkDrrDc


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Jay Pipes

On 08/13/2013 03:05 AM, Yee, Guang wrote:

Passing the query parameters, whatever they are, into the driver if the
given driver supports pagination and allowing the driver to override the
manager default pagination functionality seem reasonable to me.


Please do use the standards that are supported in other OpenStack 
services already: limit, marker, sort_key and sort_dir.


Pagination is meaningless without a sort key and direction, so picking a 
sensible default for user/project records is good. I'd go with either 
created_at (what Glance/Nova/Cinder use..) or with the user/project UUID.


The Glance DB API pagination is well-documented and clean [1]. I highly 
recommend it as a starting point.


Nova uses the same marker/limit/sort_key/sort_dir options for queries 
that it allows pagination on. An example is the 
instance_get_all_by_filters() call [2].


Cinder uses the same marker/limit/sort_key/sort_dir options for query 
pagination as well. [3]


Finally, I'd consider supporting the standard change-since parameter for 
listing operations. Both Nova [4] and Glance [5] support the parameter, 
which is useful for tools that poll the APIs for "new" events/records.


In short, go with what is already a standard in the other projects...

Best,
-jay

[1] 
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L429
[2] 
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1709
[3] 
https://github.com/openstack/cinder/blob/master/cinder/common/sqlalchemyutils.py#L33
[4] 
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1766
[5] 
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L618





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ANVIL] Missing openvswitch dependency for basic-neutron.yaml persona

2013-08-13 Thread Sylvain Bauza
Do you confirm the basic idea would be to get rid of any openvswitch 
reference in rhel.yaml ?

If so, wouldn't it be breaking other personas ?

I can provide a patch so the team would review it.

-Sylvain

Le 13/08/2013 17:57, Joshua Harlow a écrit :

It likely shouldn't be needed :)

I haven't personally messes around with the neutron persona to much 
and I know that it just underwent the "great rename of 2013" so u 
might be hitting issues due to that.


Try seeing if u can adjust the yaml file and if not I am on irc to 
help more.


Sent from my really tiny device...

On Aug 12, 2013, at 9:14 AM, "Sylvain Bauza" wrote:



Hi,

./smithy -a install -p conf/personas/in-a-box/basic-neutron.yaml is 
failing because of openvswitch missing.

See logs here [1].

Does anyone knows why openvswitch is needed when asking for 
linuxbridge in components/neutron.yaml ?

Shall I update distros/rhel.yaml ?

-Sylvain



[1] : http://pastebin.com/TFkDrrDc


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to support new Cinder driver for CloudByte's Elastistor

2013-08-13 Thread John Griffith
On Tue, Aug 13, 2013 at 9:01 AM, John Griffith
wrote:

> Hi Amit,
>
> I think part of what Thierry was alluding to was the fact that feature
> freeze for Grizzly is next week.  Also in the past we've been trying to
> make sure that folks did not introduce BP's for new drivers in the last
> release mile-stone.  There are other folks that are in this position
> however they've also proposed their BP's for their driver and sent updates
> to the Cinder team since H1.
>
> That being said, if you already have working code that you think is ready
> and can be submitted we can see what the rest of the Cinder team thinks.
>  No promises though that your code will make it in, there are a number of
> things already in process that will take priority in terms of review time
> etc.
>
> Thanks,
> John
>
>
> On Tue, Aug 13, 2013 at 8:42 AM, Amit Das  wrote:
>
>> Thanks a lot... This should give us a head start.
>>
>> Regards,
>> Amit
>> *CloudByte Inc.* 
>>
>>
>> On Tue, Aug 13, 2013 at 5:14 PM, Thierry Carrez wrote:
>>
>>> Amit Das wrote:
>>> > We have implemented a CINDER driver for our QoS aware storage solution
>>> > (CloudByte Elastistor).
>>> >
>>> > We would like to integrate this driver code with the next version of
>>> > OpenStack (Havana).
>>> >
>>> > Please let us know the approval processes to be followed for this new
>>> > driver support.
>>>
>>> See https://wiki.openstack.org/wiki/Release_Cycle and
>>> https://wiki.openstack.org/wiki/Blueprints for the beginning of an
>>> answer.
>>>
>>> Note that we are pretty late in the Havana cycle with lots of features
>>> which have been proposed a long time ago still waiting for reviews and
>>> merging... so it's a bit unlikely that a new feature would be added now
>>> to that already-overloaded backlog.
>>>
>>> --
>>> Thierry Carrez (ttx)
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
I should clarify my posting, next week (August 21st) is a FeatureProposal
freeze for the Cinder project.  Further explanation here: [1]

[1] https://wiki.openstack.org/wiki/FeatureProposalFreeze
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron a quick qpid revert

2013-08-13 Thread David Ripton

On 08/13/2013 09:57 AM, Dan Prince wrote:

All of my Neutron tests are failing this morning in SmokeStack. We need a quick 
revert to fix the qpid RPC implementation:

https://review.openstack.org/41689

https://bugs.launchpad.net/neutron/+bug/1211778

I figure we may as well revert this quick and then just wait on oslo.messaging 
to fix the original RPC concern here?


Thanks Dan.  That's my mistake, for pulling over the entire latest 
impl_qpid.py rather than just my tiny fix to it.  I'll redo the patch.


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ANVIL] Missing openvswitch dependency for basic-neutron.yaml persona

2013-08-13 Thread Joshua Harlow
It likely shouldn't be needed :)

I haven't personally messed around with the neutron persona too much and I know 
that it just underwent the "great rename of 2013" so u might be hitting issues 
due to that.

Try seeing if u can adjust the yaml file and if not I am on irc to help more.

Sent from my really tiny device...

On Aug 12, 2013, at 9:14 AM, "Sylvain Bauza" (sylvain.ba...@bull.net) wrote:

Hi,

./smithy -a install -p conf/personas/in-a-box/basic-neutron.yaml is failing 
because of openvswitch missing.
See logs here [1].

Does anyone know why openvswitch is needed when asking for linuxbridge in 
components/neutron.yaml?
Shall I update distros/rhel.yaml ?

-Sylvain



[1] :  http://pastebin.com/TFkDrrDc


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Henry Nash

On 13 Aug 2013, at 16:03, Dolph Mathews wrote:

> 
> On Tue, Aug 13, 2013 at 3:10 AM, Henry Nash  wrote:
> Hi
> 
> So a few comebacks to the various comments:
> 
> 1) While I understand the idea that a client would follow the next/prev links 
> returned in collections, I wasn't aware that we considered 'page'/'per-page' 
> as not standardized. We list these explicitly throughout the identity API 
> spec (look in each List 'entity' example).
> 
> They were essentially relics from a very early draft of the spec that were 
> thoughtlessly copy/pasted around (I'm guilty of this myself)... they were 
> recently cleaned up and removed from the spec.

Unfortunately every API list call in the spec (e.g. Users, Groups, Projects, 
Domains etc.) has the following in its list of items supported for query:

query_string: page (optional)
query_string: per_page (optional, default 30)

Are you suggesting that both are driver dependent, or just the 'page' item?  I 
assume it can't be both - otherwise a client would need to specify page size 
differently based on the driver in use.

>  
> How I imagined it would work would be:
> 
> a) If a client did not include 'page' in the url we would not paginate
> 
> Make that a deployment option? per_page could simply default to a very high 
> value.
>  
> b) Once we are paginating, a client can either build the next/prevs urls 
> themselves if they want (by incrementing/decrementing the page number), or 
> just follow the next/prev links (which come with the appropriate 'page=x' in 
> them) returned in the collection which saves them having to do this.
> 
> I'm obviously very opposed to this because it unreasonably forces a single 
> approach to pagination across all drivers.

So I can see the advantage of not having to do that - although I guess the 
counter argument is that it is the job of a driver to map non-specific APIs to 
a particular implementation based on the underlying technology.  You could 
consider page definitions as just part of the api url to be mapped (at least 
that's how I had been thinking about it to date).  The pro of using 
standardized terms is that we can support a mixture of clients that do and do 
not support pagination (since we can infer if they support pagination based on 
whether they specify 'page' in the query string).

>  
> c) Regarding implementation, the controller would continue to be able to 
> paginate on behalf of drivers that couldn't, but those paginate-aware drivers 
> would take over that capability (and indicate the state of the pagination 
> to the controller so that it can build the correct next/prev links)
> 
> 2) On the subject of huge enumerates, options are:
> a) Support a backend manager scoped (i.e. identity/assignment/token) limit in 
> the conf file which would be honored by drivers.  Assuming that you set this 
> larger than your pagination limit, this would make sense whether your driver 
> is paginating or not in terms of minimizing the delay in responding data as 
> well as not messing up pagination.  In the non-paginated case when we hit the 
> limit, should we indicate this to the client?  Maybe a 206 return code?  
> Although i) not quite sure that meets http standards, and ii) would we break 
> a bunch of clients by doing this?
> 
> I'm not clear on what kind of limit you're referring to? A 206 sounds 
> unexpected for this use case though.
>  
> b) We scrap the whole idea of pagination, and just set a conf limit as in 
> 2a).  To make this work of course, we must implement any defined filters in 
> the backend (otherwise we still end up with today's performance problems - 
> remember that today, in general,  filtering is done in the controller on a 
> full enumeration of the entities in question).  I was planning to implement 
> this backend filtering anyway as part of (or on top of) my change, since we 
> are holding (at least one of) our hands behind our backs right now by not 
> doing so.  And our filters need to be powerful, do we support wildcards for 
> example, e.g. GET /users?name = fred*  ?
> 
> There were some discussions on this topic from about a year ago that I'd love 
> to continue. I don't want to invent a new "language," but we do need to 
> settle on an approach that we can apply across a wide variety of backends. 
> That probably means keeping it very simple (like your example). Asterisks 
> need to be URL encoded, though. One suggestion I particularly liked (which 
> happens to avoid claiming perfectly valid characters - asterisks - as special 
> characters) was to adopt the syntax used in the django ORM's filter function:
> 
>   ?name__startswith=Fred
>   ?name__istartswith=fred
>   ?name__endswith=Fred
>   ?name__iendswith=fred
>   ?name__contains=Fred
>   ?name__icontains=fred
> 
> This probably represents the immediately useful subset of parameters for us, 
> but for more:
> 
>   https://docs.djangoproject.com/en/dev/topics/db/queries/
> 
>  
> Henry
> 
> On 13 Aug 2013, at 04:40, Ada

Re: [openstack-dev] Proposal to support new Cinder driver for CloudByte's Elastistor

2013-08-13 Thread John Griffith
Hi Amit,

I think part of what Thierry was alluding to was the fact that feature
freeze for Havana is next week.  Also in the past we've been trying to
make sure that folks did not introduce BP's for new drivers in the last
release milestone.  There are other folks that are in this position
however they've also proposed their BP's for their driver and sent updates
to the Cinder team since H1.

That being said, if you already have working code that you think is ready
and can be submitted we can see what the rest of the Cinder team thinks.
 No promises though that your code will make it in, there are a number of
things already in process that will take priority in terms of review time
etc.

Thanks,
John


On Tue, Aug 13, 2013 at 8:42 AM, Amit Das  wrote:

> Thanks a lot... This should give us a head start.
>
> Regards,
> Amit
> *CloudByte Inc.* 
>
>
> On Tue, Aug 13, 2013 at 5:14 PM, Thierry Carrez wrote:
>
>> Amit Das wrote:
>> > We have implemented a CINDER driver for our QoS aware storage solution
>> > (CloudByte Elastistor).
>> >
>> > We would like to integrate this driver code with the next version of
>> > OpenStack (Havana).
>> >
>> > Please let us know the approval processes to be followed for this new
>> > driver support.
>>
>> See https://wiki.openstack.org/wiki/Release_Cycle and
>> https://wiki.openstack.org/wiki/Blueprints for the beginning of an
>> answer.
>>
>> Note that we are pretty late in the Havana cycle with lots of features
>> which have been proposed a long time ago still waiting for reviews and
>> merging... so it's a bit unlikely that a new feature would be added now
>> to that already-overloaded backlog.
>>
>> --
>> Thierry Carrez (ttx)
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Dolph Mathews
On Tue, Aug 13, 2013 at 3:10 AM, Henry Nash wrote:

> Hi
>
> So a few comebacks to the various comments:
>
> 1) While I understand the idea that a client would follow the next/prev
> links returned in collections, I wasn't aware that we considered
> 'page'/'per-page' as not standardized. We list these explicitly throughout
> the identity API spec (look in each List 'entity' example).
>

They were essentially relics from a very early draft of the spec that were
thoughtlessly copy/pasted around (I'm guilty of this myself)... they were
recently cleaned up and removed from the spec.


> How I imagined it would work would be:
>
> a) If a client did not include 'page' in the url we would not paginate
>

Make that a deployment option? per_page could simply default to a very high
value.


> b) Once we are paginating, a client can either build the next/prevs urls
> themselves if they want (by incrementing/decrementing the page number), or
> just follow the next/prev links (which come with the appropriate 'page=x'
> in them) returned in the collection which saves them having to do this.
>

I'm obviously very opposed to this because it unreasonably forces a single
approach to pagination across all drivers.


> c) Regarding implementation, the controller would continue to be able to
> paginate on behalf of drivers that couldn't, but those paginate-aware
> drivers would take over that capability (and indicate the state of the
> pagination to the controller so that it can build the correct
> next/prev links)
>
> 2) On the subject of huge enumerates, options are:
> a) Support a backend manager scoped (i.e. identity/assignment/token) limit
> in the conf file which would be honored by drivers.  Assuming that you set
> this larger than your pagination limit, this would make sense whether your
> driver is paginating or not in terms of minimizing the delay in responding
> data as well as not messing up pagination.  In the non-paginated case when
> we hit the limit, should we indicate this to the client?  Maybe a 206
> return code?  Although i) not quite sure that meets http standards, and ii)
> would we break a bunch of clients by doing this?
>

I'm not clear on what kind of limit you're referring to? A 206 sounds
unexpected for this use case though.


> b) We scrap the whole idea of pagination, and just set a conf limit as in
> 2a).  To make this work of course, we must implement any defined filters in
> the backend (otherwise we still end up with today's performance problems -
> remember that today, in general,  filtering is done in the controller on a
> full enumeration of the entities in question).  I was planning to implement
> this backend filtering anyway as part of (or on top of) my change, since we
> are holding (at least one of) our hands behind our backs right now by not
> doing so.  And our filters need to be powerful, do we support wildcards for
> example, e.g. GET /users?name = fred*  ?
>

There were some discussions on this topic from about a year ago that I'd
love to continue. I don't want to invent a new "language," but we do need
to settle on an approach that we can apply across a wide variety of
backends. That probably means keeping it very simple (like your example).
Asterisks need to be URL encoded, though. One suggestion I particularly
liked (which happens to avoid claiming perfectly valid characters -
asterisks - as special characters) was to adopt the syntax used in the
django ORM's filter function:

  ?name__startswith=Fred
  ?name__istartswith=fred
  ?name__endswith=Fred
  ?name__iendswith=fred
  ?name__contains=Fred
  ?name__icontains=fred

This probably represents the immediately useful subset of parameters for
us, but for more:

  https://docs.djangoproject.com/en/dev/topics/db/queries/
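
To make that concrete, here's a tiny illustrative sketch (not keystone code --
the function and the in-memory matching are made up) of how such a key could
be split into an attribute and an operator; a real driver would translate the
same (attribute, operator) pair into an LDAP or SQL expression instead of the
Python check used here:

# illustrative only -- not keystone code
OPS = {
    'startswith': lambda value, query: value.startswith(query),
    'istartswith': lambda value, query: value.lower().startswith(query.lower()),
    'endswith': lambda value, query: value.endswith(query),
    'iendswith': lambda value, query: value.lower().endswith(query.lower()),
    'contains': lambda value, query: query in value,
    'icontains': lambda value, query: query.lower() in value.lower(),
}

def matches(entity, key, query):
    """entity: dict-like; key: e.g. 'name__istartswith'; query: filter value."""
    attr, _, op = key.partition('__')
    compare = OPS.get(op, lambda value, q: value == q)
    return compare(entity[attr], query)

users = [{'name': 'Fred'}, {'name': 'frederic'}, {'name': 'Bob'}]
print([u for u in users if matches(u, 'name__istartswith', 'fred')])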


> Henry
>
> On 13 Aug 2013, at 04:40, Adam Young wrote:
>
>  On 08/12/2013 09:22 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis)
> wrote:
>
> The main reason I use user lists (i.e. keystone user-list) is to get the
> list of usernames/IDs for other keystone commands. I do not see the value
> of showing all of the users in an LDAP server when they are not part of the
> keystone database (i.e. do not have roles assigned to them). Performing a
> “keystone user-list” command against the HP Enterprise Directory locks up
> keystone for about 1 ½ hours in that it will not perform any other commands
> until it is done.  If it is decided that user lists are necessary, then at
> a minimum they need to be paged to return control back to keystone for
> another command.
>
>
> We need a way to tell HP ED to limit the number of rows, and to do
> filtering.
>
> We have a bug for the second part.  I'll open one for the limit.
>
>  
>
> Mark
>
> *From:* Adam Young [mailto:ayo...@redhat.com]
> *Sent:* Monday, August 12, 2013 5:27 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* Re: [openstack-dev] [keystone] Pagination
>
>
> On 08/12/2013 05:34 PM, Henry Nash wrote:
>
> Hi 

Re: [openstack-dev] Proposal to support new Cinder driver for CloudByte's Elastistor

2013-08-13 Thread Amit Das
Thanks a lot... This should give us a head start.

Regards,
Amit
*CloudByte Inc.* 


On Tue, Aug 13, 2013 at 5:14 PM, Thierry Carrez wrote:

> Amit Das wrote:
> > We have implemented a CINDER driver for our QoS aware storage solution
> > (CloudByte Elastistor).
> >
> > We would like to integrate this driver code with the next version of
> > OpenStack (Havana).
> >
> > Please let us know the approval processes to be followed for this new
> > driver support.
>
> See https://wiki.openstack.org/wiki/Release_Cycle and
> https://wiki.openstack.org/wiki/Blueprints for the beginning of an answer.
>
> Note that we are pretty late in the Havana cycle with lots of features
> which have been proposed a long time ago still waiting for reviews and
> merging... so it's a bit unlikely that a new feature would be added now
> to that already-overloaded backlog.
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Savanna issue

2013-08-13 Thread Matthew Farrellee

On 08/04/2013 12:01 PM, Linus Nova wrote:

HI,

I installed OpenStack Savanna in the OpenStack Grizzly release. As you can
see in savanna.log, the savanna-api starts and operates correctly.

When I launch the cluster, the VMs start correctly but soon after they
are removed as shown in the log file.

Do you have any ideas on what is happening?

Best regards.

Linus Nova


Linus,

I don't know if your issue has been resolved, but if it hasn't I invite 
you to ask it at -


   https://answers.launchpad.net/savanna/+addquestion

Best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting agenda

2013-08-13 Thread Peter Pouliot
Hi All,

Agenda for today's meeting is as follows.


* H3 Milestones

* Current patches in for review

o   Nova

o   Cinder

* Hyper-V Puppet Module Updates

* CI Discussion

Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive,Cambridge, MA 02142
ppoul...@microsoft.com | Tel: +1(857) 453 6436

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-13 Thread Dina Belova
Patrick, I have had an opportunity to take just a quick look at the Haizea
project (only its ideas and some common things). Actually we did not have much
time to investigate it in a better way, so we'll do that this week.


On Tue, Aug 13, 2013 at 5:50 PM, Patrick Petit wrote:

>  Hi Dina,
> Sounds great! Speaking on behalf of Francois feel free to proceed with
> points below. I don't think he would have issues with that. We'll close the
> loop when he returns. BTW, did you get a chance to take a look at Haizea's
> design and implementation?
> Thanks
> Patrick
> On 8/13/13 3:08 PM, Dina Belova wrote:
>
>  Patrick, we are really glad we've found the way to deal with both use
> cases.
>
>
>  As for your patches, that are on review and were already merged, we are
> thinking about the following actions to commit:
>
>
>  1) Oslo was merged, but it is a slightly old variant (with the setup and
> version modules, which are not really used now because of the new per-project
> approach).
> So we (Mirantis) can update it as a first step.
>
>  2) We need to implement a comfortable-to-use DB layer to allow the use of
> different DB types (SQL and NoSQL as well), so that's the second step. Here
> we'll also create new abstractions like lease and physical or virtual
> reservations (I think we can really implement it before the end of August).
>
>
>  3) After that we'll have the opportunity to modify Francois' patches for
> the physical hosts reservation so that they become part of our new common
> vision.
>
>
>  Thank you.
>
>
> On Tue, Aug 13, 2013 at 4:23 PM, Patrick Petit wrote:
>
>>  Hi Nikolay,
>> Please see comments inline.
>> Thanks
>> Patrick
>>
>> On 8/12/13 5:28 PM, Nikolay Starodubtsev wrote:
>>
>>  Hi, again!
>>
>>  Patrick, I’ll try to explain why we believe in some base actions like
>> instance starting/deleting in Climate. We are thinking about the following
>> workflow (that will be quite comfortable and user friendly, and now we have
>> more than one customer who really want it):
>>
>>  1) User goes to the OpenStack dashboard and asks Heat to reserve
>> several stacks.
>>
>>  2) Heat goes to the Climate and creates all needed leases. Also Heat
>> reserves all resources for these stacks.
>>
>>  3) When time comes, user goes to the OpenStack cloud and here we think
>> he wants to see already working stacks (ideal version) or (at least)
>> already started. If no, user will have to go to the Dashboard and wake up
>> all the stacks he or she reserved. This means several actions, that may be
>> done for the user automatically, because it will be needed to do them no
>> matter what is the aim for these stacks - if user reserves them, he / she
>> needs them.
>>
>>  We understand, that there are situations when these actions may be done
>> by some other system (like some hypothetical Jenkins). But if we speak
>> about users, this will be useful. We also understand that this default way
>> of behavior should be implemented in some kind of long term life cycle
>> management system (which is not Heat), but we have no one in the OpenStack
>> now. Because the best way to implement it is to use Convection, which is
>> only a proposal now...
>>
>>  That’s why we think that for the behavior like “user just reserves
>> resources and then does anything he / she wants to” physical leases are
>> better variant, when user may reserve several nodes and use it in different
>> ways. For the virtual reservations it will be better to start / delete them
>> as a default way (for something unusual Heat may be used and modified).
>>
>>  Okay. So let's bootstrap it this way then. There will be two different
>> ways the reservation service will deal with reservations depending on
>> whether it's physical or virtual. All things being equal, future will tell
>> how things settle. We will focus on the physical host reservation side of
>> things. I think having this contradictory debate helped to understand each
>> other's use cases and requirements that the initial design has to cope with.
>> Francois who already submitted a bunch of code for review will not return
>> from vacation until the end of August. So things on our side are a little
>> on the standby until he returns but it might help if you could take a look
>> at it. I suggest you start with your vision and we will iterate from there.
>> Is that okay with you?
>>
>>
>>
>>  Do you think that this workflow is useful too and if so can you propose
>> another implementation  variant for it?
>>
>>  Thank you.
>>
>>
>>
>>  On Mon, Aug 12, 2013 at 1:55 PM, Patrick Petit 
>> wrote:
>>
>>>  On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:
>>>
>>> Hello, Patrick!
>>>
>>> We have several reasons to think that for the virtual resources this
>>> possibility is interesting. If we speak about physical resources, user may
>>> use them in the different ways, that's why it is impossible to include base
>>> actions with them to the reservation service. But speaking about virtual
>>> reservations, let's imagine user wants to reserve 

[openstack-dev] Neutron a quick qpid revert

2013-08-13 Thread Dan Prince
All of my Neutron tests are failing this morning in SmokeStack. We need a quick 
revert to fix the qpid RPC implementation:

https://review.openstack.org/41689

https://bugs.launchpad.net/neutron/+bug/1211778

I figure we may as well revert this quick and then just wait on oslo.messaging 
to fix the original RPC concern here?

Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-13 Thread Patrick Petit

Hi Dina,
Sounds great! Speaking on behalf of Francois feel free to proceed with 
points below. I don't think he would have issues with that. We'll close 
the loop when he returns. BTW, did you get a chance to take a look at 
Haizea's design and implementation?

Thanks
Patrick
On 8/13/13 3:08 PM, Dina Belova wrote:


Patrick, we are really glad we've found the way to deal with both use 
cases.



As for your patches, that are on review and were already merged, we 
are thinking about the following actions to commit:



1) Oslo was merged, but it is a slightly old variant (with the setup and 
version modules, which are not really used now because of the new 
per-project approach). So we (Mirantis) can update it as a first step.


2) We need to implement a comfortable-to-use DB layer to allow the use of 
different DB types (SQL and NoSQL as well), so that's the second step. 
Here we'll also create new abstractions like lease and physical or 
virtual reservations (I think we can really implement it before the end 
of August).



3) After that we'll have the opportunity to modify Francois' patches 
for the physical hosts reservation so that they become part of our new 
common vision.



Thank you.



On Tue, Aug 13, 2013 at 4:23 PM, Patrick Petit > wrote:


Hi Nikolay,
Please see comments inline.
Thanks
Patrick

On 8/12/13 5:28 PM, Nikolay Starodubtsev wrote:


Hi, again!


Patrick, I'll try to explain why we believe in some base
actions like instance starting/deleting in Climate. We are
thinking about the following workflow (that will be quite
comfortable and user friendly, and now we have more than one
customer who really want it):


1) User goes to the OpenStack dashboard and asks Heat to reserve
several stacks.


2) Heat goes to the Climate and creates all needed leases. Also
Heat reserves all resources for these stacks.


3) When time comes, user goes to the OpenStack cloud and here we
think he wants to see already working stacks (ideal version) or
(at least) already started. If no, user will have to go to the
Dashboard and wake up all the stacks he or she reserved. This
means several actions, that may be done for the user
automatically, because it will be needed to do them no matter
what is the aim for these stacks - if user reserves them, he /
she needs them.


We understand, that there are situations when these actions may
be done by some other system (like some hypothetical Jenkins).
But if we speak about users, this will be useful. We also
understand that this default way of behavior should be
implemented in some kind of long term life cycle management
system (which is not Heat), but we have no one in the OpenStack
now. Because the best way to implement it is to use Convection,
which is only a proposal now...


That's why we think that for the behavior like "user just
reserves resources and then does anything he / she wants to"
physical leases are better variant, when user may reserve several
nodes and use it in different ways. For the virtual reservations
it will be better to start / delete them as a default way (for
something unusual Heat may be used and modified).


Okay. So let's bootstrap it this way then. There will be two
different ways the reservation service will deal with reservations
depending on whether it's physical or virtual. All things being
equal, future will tell how things settle. We will focus on the
physical host reservation side of things. I think having this
contradictory debate helped to understand each other's use cases
and requirements that the initial design has to cope with.
Francois who already submitted a bunch of code for review will not
return from vacation until the end of August. So things on our
side are a little on the standby until he returns but it might
help if you could take a look at it. I suggest you start with your
vision and we will iterate from there. Is that okay with you?




Do you think that this workflow is useful too and if so can you
propose another implementation  variant for it?


Thank you.




On Mon, Aug 12, 2013 at 1:55 PM, Patrick Petit
mailto:patrick.pe...@bull.net>> wrote:

On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:

Hello, Patrick!

We have several reasons to think that for the virtual
resources this possibility is interesting. If we speak about
physical resources, user may use them in the different ways,
that's why it is impossible to include base actions with
them to the reservation service. But speaking about virtual
reservations, let's imagine user wants to reserve virtual
machine. He knows everything about it - its parameters,
flavor and time to be leased for. Really, in this case user
wants to have already working (or at

[openstack-dev] [Swift] Swift 1.9.1 released

2013-08-13 Thread John Dickinson
Swift 1.9.1, as described below, has been released. Download links to the 
tarball are at https://launchpad.net/swift/havana/1.9.1


--John


On Aug 7, 2013, at 10:21 AM, John Dickinson  wrote:

> Today we have released Swift 1.9.1 (RC1).
> 
> The tarball for the RC is at
> http://tarballs.openstack.org/swift/swift-milestone-proposed.tar.gz
> 
> This release was initially prompted by a bug found by Peter Portante
> (https://bugs.launchpad.net/swift/+bug/1196932) and includes a patch
> for it. All clusters are recommended to upgrade to this new release.
> As always, you can upgrade to this version of Swift with no end-user
> downtime.
> 
> In addition to the patch mentioned above, this release contains a few
> other important features:
> 
> * The default worker count has changed from 1 to auto. The new default
>  value for workers in the proxy, container, account & object wsgi
>  servers will spawn as many workers as you have cpu cores.
> 
> * A "reveal_sensitive_prefix" config parameter was added to the
>  proxy_logging config. This value allows the auth token to be
>  obscured in the logs.
> 
> * The Keystone middleware will now enforce that the reseller_prefix
>  ends in an underscore. Previously, this was a recommendation, and
>  now it is enforced.
> 
> There are several other changes in this release. I'd encourage you to
> read the full changelog at
> https://github.com/openstack/swift/blob/master/CHANGELOG.
> 
> On the community side, this release includes the work of 7 new
> contributors. They are:
> 
> Alistair Coles (alistair.co...@hp.com)
> Thomas Leaman (thomas.lea...@hp.com)
> Dirk Mueller (d...@dmllr.de)
> Newptone (xingc...@unitedstack.com)
> Jon Snitow (other...@swiftstack.com)
> TheSriram (sri...@klusterkloud.com)
> Koert van der Veer (ko...@cloudvps.com)
> 
> Thanks to everyone for your hard work. I'm very happy with where Swift
> is and where we are going together.
> 
> --John
> 
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: Proposal for approving Auto HA development blueprint.

2013-08-13 Thread yongiman
To realize the auto HA function, we need a monitoring service like Ceilometer.

Ceilometer monitors the status of compute nodes (network interface connection, 
health check, etc.).

What I focus on is that this operation happens automatically.

Nova exposes an auto HA API. When Nova receives an auto HA API call, the VMs 
automatically migrate to the auto HA host (which is an extra compute node used 
only for auto HA).

All of the auto HA information is stored in the auto_ha_hosts table.

In this table, the used column of the auto HA host is changed to true.

The administrator checks the broken compute node and fixes (or replaces) it.

After the compute node is fixed, the VMs are migrated back to operating compute 
nodes. Now the auto HA host is empty again.

When the number of running VMs on the auto HA host is zero, a periodic task sets 
the used column back to false so the host can be used again.
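
As a rough illustration of that periodic task, here is a toy sketch (the
auto_ha_hosts table and its used column follow the description above; all
other names and the in-memory data are made up, this is not real Nova code):

# toy sketch, not Nova code
auto_ha_hosts = [
    {'id': 1, 'host': 'spare-1', 'used': True},
    {'id': 2, 'host': 'spare-2', 'used': False},
]
running_vms = {'spare-1': [], 'spare-2': []}   # host -> VMs currently on it

def reset_idle_auto_ha_hosts():
    """Periodic task: mark an auto HA host as free again once it runs no VMs."""
    for entry in auto_ha_hosts:
        if entry['used'] and not running_vms.get(entry['host']):
            entry['used'] = False   # host can be used for the next failure

reset_idle_auto_ha_hosts()
print(auto_ha_hosts)   # spare-1 becomes available again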

The combination with a monitoring service is important. However, in this 
blueprint I want to realize Nova's auto HA operation.

My wiki page is still being built; I will fill it out as soon as possible.

I am looking forward to your advice. Thank you very much~!
 



Sent from my iPad

On 2013. 8. 13., at 8:01 PM, balaji patnala  wrote:

> This could be a potential candidate for a new service, like Ceilometer or Heat,
> for OpenStack to provide High Availability of VMs. A good topic to discuss at
> the Summit for implementation post-Havana release.
> 
> On Tue, Aug 13, 2013 at 12:03 PM, Alex Glikson  wrote:
>> Agree. Some enhancements to Nova might be still required (e.g., to handle 
>> resource reservations, so that there is enough capacity), but the end-to-end 
>> framework probably should be outside of existing services, probably talking 
>> to Nova, Ceilometer and potentially other components (maybe Cinder, Neutron, 
>> Ironic), and 'orchestrating' failure detection, fencing and recovery. 
>> Probably worth a discussion at the upcoming summit. 
>> 
>> 
>> Regards, 
>> Alex 
>> 
>> 
>> 
>> From: Konglingxian
>> To: OpenStack Development Mailing List
>> ,
>> Date: 13/08/2013 07:07 AM
>> Subject: [openstack-dev] Re: Proposal for approving Auto HA
>> development blueprint.
>> 
>> 
>> 
>> Hi yongiman: 
>>   
>> Your idea is good, but I think the auto HA operation is not OpenStack’s 
>> business. IMO, Ceilometer offers ‘monitoring’, Nova  offers ‘evacuation’, 
>> and you can combine them to realize HA operation. 
>>   
>> So, I’m afraid I can’t understand the specific implementation details very 
>> well. 
>>   
>> Any different opinions? 
>>   
>> From: yongi...@gmail.com [mailto:yongi...@gmail.com]
>> Sent: 12 August 2013 20:52
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] Proposal for approving Auto HA development
>> blueprint. 
>>   
>>   
>>   
>> Hi, 
>>   
>> Now, I am developing auto ha operation for vm high availability. 
>>   
>> This function runs entirely automatically.
>>   
>> It needs other service like ceilometer. 
>>   
>> ceilometer monitors compute nodes. 
>>   
>> When Ceilometer detects a broken compute node, it sends an API call to Nova,
>> which exposes an auto HA API.
>>   
>> When the auto HA call is received, Nova performs the auto HA operation.
>>   
>> All auto HA enabled VMs which are running on the broken host are migrated to
>> the auto HA host, which is an extra compute node used only for the Auto-HA function.
>>   
>> Below is my blueprint and wiki page. 
>>   
>> Wiki page is not yet completed. Now I am adding lots of information for this 
>> function. 
>>   
>> Thanks 
>>   
>> https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken 
>>   
>> https://wiki.openstack.org/wiki/Autoha
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Question about get_meters query using a JOIN

2013-08-13 Thread Thomas Maddox
Hey team,

I was curious about why we went for a JOIN here rather than just using the 
meter table initially? 
https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/impl_sqlalchemy.py#L336-L391.
 Doug had mentioned that some performance testing had gone on with some of 
these queries, so before writing up requests to change this to the meter table 
only, I wanted to see if this was a result of that performance testing? Like 
the JOIN was less expensive than a DISTINCT.
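
For illustration, here is a toy SQLAlchemy sketch of the two query shapes I
mean (the models below are made up for the example and are not the actual
ceilometer schema):

# toy sketch only -- not the real ceilometer models or query
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Resource(Base):
    __tablename__ = 'resource'
    id = Column(String(255), primary_key=True)
    project_id = Column(String(255))

class Meter(Base):
    __tablename__ = 'meter'
    id = Column(Integer, primary_key=True)
    counter_name = Column(String(255))
    counter_type = Column(String(255))
    resource_id = Column(String(255), ForeignKey('resource.id'))
    project_id = Column(String(255))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Shape A: join meter -> resource, roughly what a JOIN-based get_meters does
join_q = (session.query(Meter)
          .join(Resource, Resource.id == Meter.resource_id)
          .filter(Resource.project_id == 'p1'))

# Shape B: stay on the meter table only and de-duplicate with DISTINCT
distinct_q = (session.query(Meter.counter_name, Meter.counter_type,
                            Meter.resource_id)
              .filter(Meter.project_id == 'p1')
              .distinct())

print(join_q.all())
print(distinct_q.all())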

Cheers!

-Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-13 Thread Dina Belova
Patrick, we are really glad we've found the way to deal with both use cases.


As for your patches, that are on review and were already merged, we are
thinking about the following actions to commit:


1) Oslo was merged, but it is a slightly old variant (with the setup and
version modules, which are not really used now because of the new per-project
approach).
So we (Mirantis) can update it as a first step.


2) We need to implement a comfortable-to-use DB layer to allow the use of
different DB types (SQL and NoSQL as well), so that's the second step. Here
we'll also create new abstractions like lease and physical or virtual
reservations (I think we can really implement it before the end of August).


3) After that we'll have the opportunity to modify Francois' patches for
the physical hosts reservation so that they become part of our new common
vision.


Thank you.
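
As a purely illustrative sketch of the kind of DB-layer abstraction meant in
point 2 above (class and method names here are my own invention, not existing
Climate code), each backend -- SQLAlchemy or a NoSQL store -- would implement
the same small interface:

# illustrative sketch only, not Climate code
class LeaseDriver(object):
    """Interface every storage backend (SQL, NoSQL, ...) would implement."""

    def create_lease(self, lease_values):
        """Persist a lease and its (physical or virtual) reservations."""
        raise NotImplementedError()

    def get_lease(self, lease_id):
        raise NotImplementedError()

    def delete_lease(self, lease_id):
        raise NotImplementedError()

class InMemoryLeaseDriver(LeaseDriver):
    """Toy stand-in for a real SQL- or NoSQL-backed driver."""

    def __init__(self):
        self._leases = {}

    def create_lease(self, lease_values):
        self._leases[lease_values['id']] = lease_values
        return lease_values

    def get_lease(self, lease_id):
        return self._leases.get(lease_id)

    def delete_lease(self, lease_id):
        self._leases.pop(lease_id, None)

driver = InMemoryLeaseDriver()
driver.create_lease({'id': 'lease-1',
                     'reservations': [{'type': 'physical:host', 'count': 2}]})
print(driver.get_lease('lease-1'))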


On Tue, Aug 13, 2013 at 4:23 PM, Patrick Petit wrote:

>  Hi Nikolay,
> Please see comments inline.
> Thanks
> Patrick
>
> On 8/12/13 5:28 PM, Nikolay Starodubtsev wrote:
>
>  Hi, again!
>
>  Patrick, I’ll try to explain why we believe in some base actions like
> instance starting/deleting in Climate. We are thinking about the following
> workflow (that will be quite comfortable and user friendly, and now we have
> more than one customer who really want it):
>
>  1) User goes to the OpenStack dashboard and asks Heat to reserve several
> stacks.
>
>  2) Heat goes to the Climate and creates all needed leases. Also Heat
> reserves all resources for these stacks.
>
>  3) When time comes, user goes to the OpenStack cloud and here we think
> he wants to see already working stacks (ideal version) or (at least)
> already started. If no, user will have to go to the Dashboard and wake up
> all the stacks he or she reserved. This means several actions, that may be
> done for the user automatically, because it will be needed to do them no
> matter what is the aim for these stacks - if user reserves them, he / she
> needs them.
>
>  We understand, that there are situations when these actions may be done
> by some other system (like some hypothetical Jenkins). But if we speak
> about users, this will be useful. We also understand that this default way
> of behavior should be implemented in some kind of long term life cycle
> management system (which is not Heat), but we have no one in the OpenStack
> now. Because the best may to implement it is to use Convection, that is
> only proposal now...
>
>  That’s why we think that for the behavior like “user just reserves
> resources and then does anything he / she wants to” physical leases are
> better variant, when user may reserve several nodes and use it in different
> ways. For the virtual reservations it will be better to start / delete them
> as a default way (for something unusual Heat may be used and modified).
>
> Okay. So let's bootstrap it this way then. There will be two different
> ways the reservation service will deal with reservations depending on
> whether it's physical or virtual. All things being equal, future will tell
> how things settle. We will focus on the physical host reservation side of
> things. I think having this contradictory debate helped to understand each
> other's use cases and requirements that the initial design has to cope with.
> Francois who already submitted a bunch of code for review will not return
> from vacation until the end of August. So things on our side are a little
> on the standby until he returns but it might help if you could take a look
> at it. I suggest you start with your vision and we will iterate from there.
> Is that okay with you?
>
>
>
>  Do you think that this workflow is useful too and if so can you propose
> another implementation  variant for it?
>
>  Thank you.
>
>
>
>  On Mon, Aug 12, 2013 at 1:55 PM, Patrick Petit wrote:
>
>>  On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:
>>
>> Hello, Patrick!
>>
>> We have several reasons to think that for the virtual resources this
>> possibility is interesting. If we speak about physical resources, user may
>> use them in the different ways, that's why it is impossible to include base
>> actions with them to the reservation service. But speaking about virtual
>> reservations, let's imagine user wants to reserve virtual machine. He knows
>> everything about it - its parameters, flavor and time to be leased for.
>> Really, in this case user wants to have already working (or at least
>> starting to work) reserved virtual machine and it would be great to include
>> this opportunity to the reservation service.
>>
>>  We are thinking about base actions for the virtual reservations that
>> will be supported by Climate, like boot/delete for instance, create/delete
>> for volume and create/delete for the stacks. The same will be with volumes,
>> IPs, etc. As for more complicated behaviour, it may be implemented in Heat.
>> This will make reservations simpler to use for the end users.
>>
>> Don't you think so?
>>
>>  Well yes and an

Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-13 Thread Patrick Petit

Hi Nikolay,
Please see comments inline.
Thanks
Patrick
On 8/12/13 5:28 PM, Nikolay Starodubtsev wrote:


Hi, again!


Patrick, I’ll try to explain why we believe in some base actions
like instance starting/deleting in Climate. We are thinking about the 
following workflow (that will be quite comfortable and user friendly, 
and now we have more than one customer who really want it):



1) User goes to the OpenStack dashboard and asks Heat to reserve 
several stacks.



2) Heat goes to the Climate and creates all needed leases. Also Heat 
reserves all resources for these stacks.



3) When time comes, user goes to the OpenStack cloud and here we think 
he wants to see already working stacks (ideal version) or (at least) 
already started. If no, user will have to go to the Dashboard and wake 
up all the stacks he or she reserved. This means several actions, that 
may be done for the user automatically, because it will be needed to 
do them no matter what is the aim for these stacks - if user reserves 
them, he / she needs them.



We understand, that there are situations when these actions may be 
done by some other system (like some hypothetical Jenkins). But if we 
speak about users, this will be useful. We also understand that this 
default way of behavior should be implemented in some kind of long 
term life cycle management system (which is not Heat), but we have no 
one in the OpenStack now. Because the best way to implement it is to
use Convection, which is only a proposal now...



That’s why we think that for the behavior like “user just reserves 
resources and then does anything he / she wants to” physical leases 
are better variant, when user may reserve several nodes and use it in 
different ways. For the virtual reservations it will be better to 
start / delete them as a default way (for something unusual Heat may 
be used and modified).


Okay. So let's bootstrap it this way then. There will be two different 
ways the reservation service will deal with reservations depending on 
whether it's physical or virtual. All things being equal, future will 
tell how things settle. We will focus on the physical host reservation 
side of things. I think having this contradictory debate helped to 
understand each other's use cases and requirements that the initial
design has to cope with. Francois who already submitted a bunch of code 
for review will not return from vacation until the end of August. So 
things on our side are a little on the standby until he returns but it 
might help if you could take a look at it. I suggest you start with your 
vision and we will iterate from there. Is that okay with you?




Do you think that this workflow is useful too and if so can you 
propose another implementation  variant for it?



Thank you.




On Mon, Aug 12, 2013 at 1:55 PM, Patrick Petit > wrote:


On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:

Hello, Patrick!

We have several reasons to think that for the virtual resources
this possibility is interesting. If we speak about physical
resources, user may use them in the different ways, that's why it
is impossible to include base actions with them to the
reservation service. But speaking about virtual reservations,
let's imagine user wants to reserve virtual machine. He knows
everything about it - its parameters, flavor and time to be
leased for. Really, in this case user wants to have already
working (or at least starting to work) reserved virtual machine
and it would be great to include this opportunity to the
reservation service.
We are thinking about base actions for the virtual reservations
that will be supported by Climate, like boot/delete for instance,
create/delete for volume and create/delete for the stacks. The
same will be with volumes, IPs, etc. As for more complicated
behaviour, it may be implemented in Heat. This will make
reservations simpler to use for the end users.

Don't you think so?

Well yes and and no. It really depends upon what you put behind
those lease actions. The view I am trying to sustain is separation
of duties to keep the service simple, ubiquitous and non
prescriptive of a certain kind of usage pattern. In other words,
keep Climate for reservation of capacity (physical or virtual),
Heat for orchestration, and so forth. ... Consider for example the
case of reservation as a non technical act but rather as a
business enabler for wholesales activities. Don't need, and
probably don't want to start or stop any resource there. I do not
deny that there are cases where it is desirable but then how
reservations are used and composed together at the end of the day
mainly depends on exogenous factors which couldn't be anticipated
because they are driven by the business.

And so, rather than coupling reservations with wired resource
instantiation actions, I would rathe

Re: [openstack-dev] Proposal to support new Cinder driver for CloudByte's Elastistor

2013-08-13 Thread Thierry Carrez
Amit Das wrote:
> We have implemented a CINDER driver for our QoS aware storage solution
> (CloudByte Elastistor).
> 
> We would like to integrate this driver code with the next version of
> OpenStack (Havana).
> 
> Please let us know the approval processes to be followed for this new
> driver support.

See https://wiki.openstack.org/wiki/Release_Cycle and
https://wiki.openstack.org/wiki/Blueprints for the beginning of an answer.

Note that we are pretty late in the Havana cycle with lots of features
which have been proposed a long time ago still waiting for reviews and
merging... so it's a bit unlikely that a new feature would be added now
to that already-overloaded backlog.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] On the road to v0.2: the stable branch 'release-0.2' is created

2013-08-13 Thread Denis Koryavov
Hello folks,

We are in the homestretch to a new stable release - Murano v0.2. All
planned blueprints (see [1]) are implemented or are in 'beta' state. Thus,
today we prepared a branch 'release-0.2' which is intended to be the
stable release.

Starting today, all v0.2-related commits should be pushed to this branch.
To do this, just do the following:

git checkout release-0.2
git checkout -b MY-TOPIC-BRANCH
git commit
git review release-0.2

(for more information please see [2]).

First of all, the branch is intended for bug fixing and stabilization of
our code base, so acceptance of new code will be limited. If you want to
commit a big change, it is better to push it to the 'master' branch, which is
open for new features from today.

The final release is scheduled on 5th September.

[1] https://launchpad.net/murano/+milestone/0.2
[2] https://wiki.openstack.org/wiki/GerritJenkinsGithub#Milestones

Have a nice day.

--
Denis
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Savanna PTL election proposal

2013-08-13 Thread Matthew Farrellee

On 08/13/2013 07:29 AM, Thierry Carrez wrote:

Matthew Farrellee wrote:

  2. Candidate nomination -
   a. anyone can list names in
https://etherpad.openstack.org/savanna-ptl-candidates-0
   b. anyone mentioned during this week's IRC meeting
   c. both (a) and (b)
   - Current direction is to be inclusive and thus (c)


We do self-nomination (people who want to run nominate themselves)
because then you don't have to go through the painful step of
*confirming* candidates (people may not agree to run).


That's good to know and a nice modification.



  3. Electorate -
   a. all AUTHORS on the Savanna repositories
   b. all committers (git log --author) on Savanna repos since Grizzly
release
   c. all committers since Savanna inception
   d. savanna-core members (currently 2 people)
   e. committers w/ filter on number of commits or size of commits
   - Current direction is to be broadly inclusive (not (d) or (e)) thus
(a), it is believed that (a) ~= (b) ~= (c).


If you want to make it "like OpenStack" it should be all Savanna recent
authors (last year), as given by git. Maybe the infra team could even
give you a list of emails for use in CIVS.


Also good to know. I'll mention this during our meeting when folks are 
agreeing on the proposal options.


FYI, Savanna is < 1 yo and the AUTHORS file is automatically updated 
when new commits come in.




  4. Duration of election -
   a. 1 week (from 15 Aug meeting to 22 Aug meeting)
  5. Term -
   a. effective immediately through next full OpenStack election cycle
(i.e. now until "I" release, 6 mo+)
   b. effective immediately until min(6 mo, incubation)
   c. effective immediately until end of incubation
   - Current direction is any option that aligns with the standard
OpenStack election cycle


I think (a) would work well.


Thanks for the feedback!


Best,


matt


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reminder: Project & release status meeting - 21:00 UTC

2013-08-13 Thread Thierry Carrez
Today in the Project & release status meeting, more havana-3 goodness.

Feel free to add extra topics to the agenda:
[1] http://wiki.openstack.org/Meetings/ProjectMeeting

All Technical Leads for integrated programs should be present (if you
can't make it, please name a substitute on [1]). Other program leads and
everyone else is very welcome to attend.

The meeting will be held at 21:00 UTC on the #openstack-meeting channel
on Freenode IRC. You can look up how this time translates locally at:
[2] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130813T21

See you there,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Savanna PTL election proposal

2013-08-13 Thread Thierry Carrez
Matthew Farrellee wrote:
>  2. Candidate nomination -
>   a. anyone can list names in
> https://etherpad.openstack.org/savanna-ptl-candidates-0
>   b. anyone mentioned during this week's IRC meeting
>   c. both (a) and (b)
>   - Current direction is to be inclusive and thus (c)

We do self-nomination (people who want to run nominate themselves)
because then you don't have to go through the painful step of
*confirming* candidates (people may not agree to run).

>  3. Electorate -
>   a. all AUTHORS on the Savanna repositories
>   b. all committers (git log --author) on Savanna repos since Grizzly
> release
>   c. all committers since Savanna inception
>   d. savanna-core members (currently 2 people)
>   e. committers w/ filter on number of commits or size of commits
>   - Current direction is to be broadly inclusive (not (d) or (e)) thus
> (a), it is believed that (a) ~= (b) ~= (c).

If you want to make it "like OpenStack" it should be all Savanna recent
authors (last year), as given by git. Maybe the infra team could even
give you a list of emails for use in CIVS.

>  4. Duration of election -
>   a. 1 week (from 15 Aug meeting to 22 Aug meeting)
>  5. Term -
>   a. effective immediately through next full OpenStack election cycle
> (i.e. now until "I" release, 6 mo+)
>   b. effective immediately until min(6 mo, incubation)
>   c. effective immediately until end of incubation
>   - Current direction is any option that aligns with the standard
> OpenStack election cycle

I think (a) would work well.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Planning to build Openstack Private Cloud...

2013-08-13 Thread Thierry Carrez
Jay Kumbhani wrote:
> I am planning to build Openstack Private cloud for my company. We have bunch 
> of Dell blade servers but very old - so probably investing on resources on it 
> is not good idea.
> 
> I am looking for acquiring new Server with high amount of CPU and Memory 
> resources. Can anyone suggest the best suited server brand and model for 
> Openstack deployment (Basically for Openstack compute node)? We are looking 
> for building infrastructure for minimum of ~100 VM's can run concurrently 
> with 1-4 GB of RAM allocation. 
> 
> It would be great help if you can provide suitable server brand and model & 
> with reason.
> 
> Appreciate and Thanks in advance

This is a development mailing-list, focused on discussing the future of
OpenStack -- your question is unlikely to get the best answer here, if
any. You should post to the general openstack mailing-list instead
(openst...@lists.openstack.org). For more information, see:

https://wiki.openstack.org/wiki/Mailing_Lists

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: Proposal for approving Auto HA development blueprint.

2013-08-13 Thread balaji patnala
This could be a potential candidate for a new service, like Ceilometer or Heat,
for OpenStack to provide High Availability of VMs. A good topic to discuss at
the Summit for implementation post-Havana release.

On Tue, Aug 13, 2013 at 12:03 PM, Alex Glikson  wrote:

> Agree. Some enhancements to Nova might be still required (e.g., to handle
> resource reservations, so that there is enough capacity), but the
> end-to-end framework probably should be outside of existing services,
> probably talking to Nova, Ceilometer and potentially other components
> (maybe Cinder, Neutron, Ironic), and 'orchestrating' failure detection,
> fencing and recovery.
> Probably worth a discussion at the upcoming summit.
>
>
> Regards,
> Alex
>
>
>
> From: Konglingxian
> To: OpenStack Development Mailing List <
> openstack-dev@lists.openstack.org>,
> Date: 13/08/2013 07:07 AM
> Subject: [openstack-dev] Re: Proposal for approving Auto HA
> development blueprint.
> --
>
>
>
> Hi yongiman:
>
> Your idea is good, but I think the auto HA operation is not OpenStack’s
> business. IMO, Ceilometer offers ‘monitoring’, Nova  offers ‘evacuation’,
> and you can combine them to realize HA operation.
>
> So, I’m afraid I can’t understand the specific implementation details very
> well.
>
> Any different opinions?
>
> *From:* yongi...@gmail.com [mailto:yongi...@gmail.com]
> *Sent:* 12 August 2013 20:52
> *To:* openstack-dev@lists.openstack.org
> *Subject:* Re: [openstack-dev] Proposal for approving Auto HA development
> blueprint.
>
>
>
> Hi,
>
> Now, I am developing auto ha operation for vm high availability.
>
> This function runs entirely automatically.
>
> It needs other service like ceilometer.
>
> ceilometer monitors compute nodes.
>
> When Ceilometer detects a broken compute node, it sends an API call to Nova,
> which exposes an auto HA API.
>
> When the auto HA call is received, Nova performs the auto HA operation.
>
> All auto HA enabled VMs which are running on the broken host are migrated
> to the auto HA host, which is an extra compute node used only for the Auto-HA function.
>
> Below is my blueprint and wiki page.
>
> Wiki page is not yet completed. Now I am adding lots of information for
> this function.
>
> Thanks
>
> *https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken*
>
> *https://wiki.openstack.org/wiki/Autoha*
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Can we use two nova schedulers at the same time?

2013-08-13 Thread Russell Bryant
On 08/13/2013 05:57 AM, sudheesh sk wrote:
> I have one quick question regarding the 3rd point you have mentioned
> (Multiple scheduler configurations within a single (potentially
> heterogeneous) Nova deployment)
> 
> In this case ultimately when a VM is created - would it have gone
> through all the schedulers or just one scheduler which was dynamically
> selected?

The plan has been to dynamically choose a scheduler (and its config) and
use only that.

> Is there any chance of having 2 schedulers  impacting creation of one VM?

Can you explain a bit more about your use case here and how you would
expect such a thing to work?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Can we use two nova schedulers at the same time?

2013-08-13 Thread sudheesh sk
I have one quick question regarding the 3rd point you have mentioned (Multiple 
scheduler configurations
within a single (potentially heterogeneous) Nova deployment)

In this case ultimately when a VM is created - would it have gone through all 
the schedulers or just one scheduler which was dynamically selected?
Is there any chance of having 2 schedulers  impacting creation of one VM?


Thanks,
Sudheesh


 From: Alex Glikson 
To: sudheesh sk ; OpenStack Development Mailing List 
 
Sent: Tuesday, 13 August 2013 1:45 PM
Subject: Re: [openstack-dev] Can we use two nova schedulers at the same time?
 


There are roughly three cases. 
1. Multiple identical instances of the
scheduler service. This is typically done to increase scalability, and
is already supported (although sometimes may result in provisioning failures
due to race conditions between scheduler instances). There is a single
queue of provisioning requests, all the scheduler instances are subscribed,
and each request will be processed by one of the instances (randomly, more
or less). I think this is not the option that you referred to, though. 
2. Multiple cells, each having its own
scheduler. This is also supported, but is applicable only if you decide
to use cells (e.g., in large-scale geo-distributed deployments). 
3. Multiple scheduler configurations
within a single (potentially heterogeneous) Nova deployment, with dynamic
selection of configuration/policy at run time (for simplicity let's assume
just one scheduler service/runtime). This capability is under development
(https://review.openstack.org/#/c/37407/) , targeting Havana. The current design
is that the admin will be able to override scheduler properties (such as
driver, filters, etc) using flavor extra specs. In some cases you would
want to combine this capability with a mechanism that would ensure disjoint
partitioning of the managed compute nodes between the drivers. This can
be currently achieved by using host aggregates and AggregateInstanceExtraSpec
filter of FilterScheduler. For example, if you want to apply driver_A on
hosts in aggregate_X, and driver_B on hosts in aggregate_Y, you would have
flavor AX specifying driver_A and properties that would map to aggregate_X,
and similarly for BY. 

Hope this helps. 

Regards, 
Alex 



From: sudheesh sk
To: "openstack-dev@lists.openstack.org",
Date: 13/08/2013 10:30 AM
Subject: [openstack-dev] Can we use two nova schedulers at the same time?

 


Hi, 

1) Can nova have more than one scheduler
at a time? Standard Scheduler + one custom scheduler? 

2) If its possible to add multiple schedulers
- how we should configure it. lets say I have a scheduler called 'Scheduler'
. So nova conf may look like below scheduler_manager = 
nova.scheduler.filters.SchedulerManager
scheduler_driver = nova.scheduler.filter.Scheduler Then how can I add a
second scheduler 

3) If there are 2 schedulers - will both
of these called when creating a VM? 


I am asking these questions based on a response
I got from ask openstack forum 

Thanks, 
Sudheesh___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] security_groups extension in nova api v3

2013-08-13 Thread Day, Phil
Hi All,

If we really want to get clean separation between Nova and Neutron in the V3 
API, should we consider making the Nova V3 API only accept lists of port IDs in 
the server create command?

That way there would be no need to ever pass security group information into 
Nova.

Any cross-project co-ordination (for example, automatically creating ports) 
could be handled in the client layer, rather than inside Nova.
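
For illustration, the client-side flow could look roughly like the sketch below
(credentials, endpoints and the *_ID values are placeholders, and this is just
one possible shape, not a settled V3 contract):

# Hypothetical sketch: create the port (with its security groups) in Neutron
# first, then hand Nova nothing but the port ID.
from neutronclient.v2_0 import client as neutron_client
from novaclient.v1_1 import client as nova_client

AUTH_URL = 'http://keystone:5000/v2.0'              # placeholder endpoint
NETWORK_ID = 'replace-with-a-network-uuid'          # placeholder IDs
SECGROUP_ID = 'replace-with-a-security-group-uuid'
IMAGE_ID = 'replace-with-an-image-uuid'
FLAVOR_ID = '1'

neutron = neutron_client.Client(username='demo', password='secret',
                                tenant_name='demo', auth_url=AUTH_URL)
nova = nova_client.Client('demo', 'secret', 'demo', AUTH_URL)

# Security group handling stays entirely on the Neutron side of the fence.
port = neutron.create_port({'port': {'network_id': NETWORK_ID,
                                     'security_groups': [SECGROUP_ID]}})

# Nova only ever sees the port ID.
nova.servers.create(name='test-vm', image=IMAGE_ID, flavor=FLAVOR_ID,
                    nics=[{'port-id': port['port']['id']}])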

Phil 

> -Original Message-
> From: Melanie Witt [mailto:melw...@yahoo-inc.com]
> Sent: 09 August 2013 23:05
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [nova] security_groups extension in nova api v3
> 
> Hi All,
> 
> I did the initial port of the security_groups api extension to v3 and have 
> been
> testing it out in devstack while adding the expected_errors decorator to it.
> 
> The guidance so far on network-related extensions in v3 is not to duplicate
> actions that can be accomplished through the neutron api and assuming nova-
> network deprecation is imminent. So, the only actions left in the extension 
> are
> the associate/disassociate security group with instance.
> 
> However, when security_group_api = neutron, all associate/disassociate will do
> is call the neutron api to update the port for the instance (port device_id ==
> instance uuid) and append the specified security group. I'm wondering if this
> falls under the nova proxying we don't want to be doing and if
> associate/disassociate should be removed from the extension for v3.
> 
> If removed, it would leave the extension only providing support for
> server_create (cyeoh has a patch up for review).
> 
> Any opinions?
> 
> Thanks,
> Melanie
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [ceilometer] Periodic Auditing In Glance

2013-08-13 Thread Julien Danjou
On Mon, Aug 12 2013, Andrew Melton wrote:

> So, my question to the Ceilometer community is this, does this sound like
> something Ceilometer would find value in and use? If so, would this be
> something
> we would want most deployers turning on?

Yes. I think we would definitely be happy to have the ability to drop
our pollster at some time.
I'm just concerned with the type of notification you'd send. It has to
be fine-grained enough that we don't lose too much information.

-- 
Julien Danjou
// Free Software hacker / freelance consultant
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Nova_tests failing in jenkins

2013-08-13 Thread Julien Danjou
On Mon, Aug 12 2013, Herndon, John Luke (HPCS - Ft. Collins) wrote:

> The nova_tests are failing for a couple of different Ceilometer reviews,
> due to 'module' object has no attribute 'add_driver'.
>
> This review (https://review.openstack.org/#/c/41316/) had nothing to do
> with the nova_tests, yet they are failing. Any clue what's going on?
>
> Apologies if there is an obvious answer - I've never encountered this
> before.

FTR, Terri opened a bug about it:
  https://bugs.launchpad.net/ceilometer/+bug/1211532

-- 
Julien Danjou
# Free Software hacker # freelance consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Can we use two nova schedulers at the same time?

2013-08-13 Thread Alex Glikson
There are roughly three cases.
1. Multiple identical instances of the scheduler service. This is 
typically done to increase scalability, and is already supported (although 
sometimes may result in provisioning failures due to race conditions 
between scheduler instances). There is a single queue of provisioning 
requests, all the scheduler instances are subscribed, and each request 
will be processed by one of the instances (randomly, more or less). I 
think this is not the option that you referred to, though.
2. Multiple cells, each having its own scheduler. This is also supported, 
but is applicable only if you decide to use cells (e.g., in large-scale 
geo-distributed deployments).
3. Multiple scheduler configurations within a single (potentially 
heterogeneous) Nova deployment, with dynamic selection of 
configuration/policy at run time (for simplicity let's assume just one 
scheduler service/runtime). This capability is under development (
https://review.openstack.org/#/c/37407/) , targeting Havana. The current 
design is that the admin will be able to override scheduler properties 
(such as driver, filters, etc) using flavor extra specs. In some cases you 
would want to combine this capability with a mechanism that would ensure 
disjoint partitioning of the managed compute nodes between the drivers. 
This can be currently achieved by using host aggregates and 
AggregateInstanceExtraSpec filter of FilterScheduler. For example, if you 
want to apply driver_A on hosts in aggregate_X, and driver_B on hosts in 
aggregate_Y, you would have flavor AX specifying driver_A and properties 
that would map to aggregate_X, and similarly for BY.
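
As a rough illustration of the aggregate-based partitioning piece only (the
per-flavor driver override from the review above is still in progress and is
not shown; the aggregate, host, flavor and metadata-key names below are
placeholders, and AggregateInstanceExtraSpecsFilter has to be enabled in
scheduler_default_filters):

# Hedged sketch using python-novaclient to tie flavor AX to aggregate agg_X.
from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin',
                     'http://keystone:5000/v2.0', service_type='compute')

# Group the hosts meant for one scheduling policy into an aggregate and tag it.
agg_x = nova.aggregates.create('agg_X', None)
nova.aggregates.add_host(agg_x, 'compute-01')
nova.aggregates.set_metadata(agg_x, {'sched_profile': 'profile_A'})

# Give flavor AX a matching extra spec, so that with
# AggregateInstanceExtraSpecsFilter enabled its instances only land in agg_X.
flavor_ax = nova.flavors.find(name='AX')
flavor_ax.set_keys({'sched_profile': 'profile_A'})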

Hope this helps.

Regards,
Alex



From:   sudheesh sk 
To: "openstack-dev@lists.openstack.org" 
, 
Date:   13/08/2013 10:30 AM
Subject:[openstack-dev] Can we use two nova schedulers at the same 
time?



Hi,

1) Can nova have more than one scheduler at a time? Standard Scheduler + 
one custom scheduler?

2) If its possible to add multiple schedulers - how we should configure 
it. lets say I have a scheduler called 'Scheduler' . So nova conf may look 
like below scheduler_manager = nova.scheduler.filters.SchedulerManager 
scheduler_driver = nova.scheduler.filter.Scheduler Then how can I add a 
second scheduler

3) If there are 2 schedulers - will both of these called when creating a 
VM?


I am asking these questions based on a response I got from ask openstack 
forum

Thanks,
Sudheesh___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Henry Nash
Hi

So, a few comebacks to the various comments:

1) While I understand the idea that a client would follow the next/prev links 
returned in collections, I wasn't aware that we considered 'page'/'per-page' as 
not standardized.   We list these explicitly throughout the identity API spec 
(look in each List 'entity' example).  How I imagined it would work would be:

a) If a client did not include 'page' in the url we would not paginate
b) Once we are paginating, a client can either build the next/prev urls 
themselves if they want (by incrementing/decrementing the page number), or just 
follow the next/prev links (which come with the appropriate 'page=x' in them) 
returned in the collection, which saves them having to do this.
c) Regarding implementation, the controller would continue to be able to 
paginate on behalf of drivers that couldn't, but paginate-aware drivers 
would take over that capability (and indicate the state of the pagination to 
the controller so that it can build the correct next/prev links).
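
To make (b)/(c) concrete, here is a minimal sketch of how a controller might
build those links (the helper name and return shape are invented for
illustration, not the actual keystone code):

def build_page_links(base_url, query_string, total_pages):
    """Build self/previous/next links from the current query string (sketch)."""
    page = int(query_string.get('page', 1))
    per_page = int(query_string.get('per_page', 30))

    def link(p):
        return '%s?page=%d&per_page=%d' % (base_url, p, per_page)

    return {
        'self': link(page),
        'previous': link(page - 1) if page > 1 else None,
        'next': link(page + 1) if page < total_pages else None,
    }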

2) On the subject of huge enumerates, options are:
a) Support a backend-manager-scoped (i.e. identity/assignment/token) limit in 
the conf file which would be honored by drivers.  Assuming that you set this 
larger than your pagination limit, this would make sense whether your driver is 
paginating or not, in terms of minimizing the delay in returning data as well 
as not messing up pagination.  In the non-paginated case, when we hit the limit 
should we indicate this to the client?  Maybe a 206 return code?  Although i) 
I'm not quite sure that meets HTTP standards, and ii) would we break a bunch of 
clients by doing this?
b) We scrap the whole idea of pagination, and just set a conf limit as in 2a).  
To make this work of course, we must implement any defined filters in the 
backend (otherwise we still end up with today's performance problems - remember 
that today, in general,  filtering is done in the controller on a full 
enumeration of the entities in question).  I was planning to implement this 
backend filtering anyway as part of (or on top of) my change, since we are 
holding (at least one of) our hands behind our backs right now by not doing so. 
And our filters need to be powerful; do we support wildcards, for example, e.g. 
GET /users?name=fred* ?
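
For 2a)/2b), a minimal sketch of what a conf-driven limit plus backend
filtering could look like in a SQL driver (the option name, model and helper
below are illustrative only, not keystone's actual interface):

from oslo.config import cfg
from sqlalchemy import Column, String
from sqlalchemy.ext.declarative import declarative_base

CONF = cfg.CONF
CONF.register_opts(
    [cfg.IntOpt('list_limit', default=None,
                help='Maximum number of entities returned by a list call')],
    group='identity')

Base = declarative_base()


class User(Base):
    __tablename__ = 'user'
    id = Column(String(64), primary_key=True)
    name = Column(String(255))


def list_users(session, filters):
    """Apply filters and the conf limit in the driver, not the controller."""
    query = session.query(User)
    for attr, value in filters.items():
        if value.endswith('*'):
            # Crude wildcard support, e.g. GET /users?name=fred*
            query = query.filter(getattr(User, attr).like(value[:-1] + '%'))
        else:
            query = query.filter(getattr(User, attr) == value)
    limit = CONF.identity.list_limit
    if limit is not None:
        query = query.limit(limit)
    return query.all()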
 
Henry

On 13 Aug 2013, at 04:40, Adam Young wrote:

> On 08/12/2013 09:22 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis) wrote:
>> The main reason I use user lists (i.e. keystone user-list) is to get the 
>> list of usernames/IDs for other keystone commands. I do not see the value of 
>> showing all of the users in an LDAP server when they are not part of the 
>> keystone database (i.e. do not have roles assigned to them). Performing a 
>> “keystone user-list” command against the HP Enterprise Directory locks up 
>> keystone for about 1 ½ hours in that it will not perform any other commands 
>> until it is done.  If it is decided that user lists are necessary, then at a 
>> minimum they need to be paged to return control back to keystone for another 
>> command.
> 
> We need a way to tell HP ED to limit the number of rows, and to do filtering.
> 
> We have a bug for the second part.  I'll open one for the limit.
> 
>>  
>> Mark
>>  
>> From: Adam Young [mailto:ayo...@redhat.com] 
>> Sent: Monday, August 12, 2013 5:27 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [keystone] Pagination
>>  
>> On 08/12/2013 05:34 PM, Henry Nash wrote:
>> Hi
>>  
>> I'm working on extending the pagination into the backends.  Right now, we 
>> handle the pagination in the v3 controller classand in fact it is 
>> disabled right now and we return the whole list irrespective of whether 
>> page/per-page is set in the query string, e.g.:
>> Pagination is a broken concept. We should not be returning lists so long 
>> that we need to paginate.  Instead, we should have query limits, and filters 
>> to refine the queries.
>> 
>> Some people are doing full user lists against LDAP.  I don't need to tell 
>> you how broken that is.  Why do we allow user-list at the Domain (or 
>> unscoped level)?  
>> 
>> I'd argue that we should drop enumeration of objects in general, and 
>> certainly limit the number of results that come back.  Pagination in LDAP 
>> requires cursors, and thus continuos connections from Keystone to 
>> LDAP...this is not a scalable solution.
>> 
>> Do we really need this?
>> 
>> 
>> 
>>  
>> def paginate(cls, context, refs):
>> """Paginates a list of references by page & per_page query 
>> strings."""
>> # FIXME(dolph): client needs to support pagination first
>> return refs
>>  
>> page = context['query_string'].get('page', 1)
>> per_page = context['query_string'].get('per_page', 30)
>> return refs[per_page * (page - 1):per_page * page]
>>  
>> I wonder both for the V3 controller (which still needs to handle pagination 
>> for backends that do not support it)

Re: [openstack-dev] Weight normalization in scheduler

2013-08-13 Thread Álvaro López García
Hi again.

Thank you for your reply, Sandy. Some more comments inline.

On Thu 01 Aug 2013 (10:04), Sandy Walsh wrote:
> On 08/01/2013 09:51 AM, Álvaro López García wrote:
> > On Thu 01 Aug 2013 (09:07), Sandy Walsh wrote:
> >> On 08/01/2013 04:24 AM, Álvaro López García wrote:
> >>> Hi all.
> >>>
> >>> TL;DR: I've created a blueprint [1] regarding weight normalization.
> >>> I would be very glad if somebody could examine and comment it.
> >>
> >> Something must have changed. It's been a while since I've done anything
> >> with the scheduler, but normalized weights is the way it was designed
> >> and implemented.
> > 
> > It seems reasonable, but it is not there anymore:
> > 
> > class RAMWeigher(weights.BaseHostWeigher):
> > (...)
> > def _weigh_object(self, host_state, weight_properties):
> > """Higher weights win.  We want spreading to be the default."""
> > return host_state.free_ram_mb
> 
> Hmm, that's unfortunate. We use our own weighing functions internally,
> so perhaps we were unaffected by this change.

And that is why we spotted this. We wanted to implement our very own
functions apart from the RAMWeigher and we found that raw values were
being used.

> >> The separate Weighing plug-ins are responsible for taking the specific
> >> units (cpu load, disk, ram, etc) and converting them into normalized
> >> 0.0-1.0 weights. Internally the plug-ins can work however they like, but
> >> their output should be 0-1.
> > 
> > With the current code, this is not true. Anyway, I think this responsibility
> > should be implemented in the BaseWeightHandler rather than each weigher.
> > This way each weigher can return whatever they want, but we will be
> > always using a correct value.
> 
> I think the problem with moving it to the base handler is that the base
> doesn't know the max range of the value ... of course, this could be
> passed down. But yeah, we wouldn't want to duplicate the normalization
> code itself in every function.

With the code in [1] the weigher can specify the maximum and minimum
values a weight can range over, if needed (in most cases just taking
these values from the list of returned values should be enough), and
the BaseWeightHandler will normalize the list before adding the weights
up for the objects.

I do not see any real advantage in doing it in each weigher. Apart
from code duplication, it is difficult to maintain in the long term,
since any change to the normalization would have to be propagated to all
the weighers (ok, right now there's only one ;-) ).

[1] https://review.openstack.org/#/c/27160
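
For reference, the normalization done once in the BaseWeightHandler can be as
simple as the sketch below (the code in [1] may differ in naming and edge-case
handling):

def normalize(weights, minval=None, maxval=None):
    """Map raw weigher outputs onto the 0.0-1.0 range (minimal sketch)."""
    if not weights:
        return []
    minval = min(weights) if minval is None else minval
    maxval = max(weights) if maxval is None else maxval
    if minval == maxval:
        # All objects weigh the same; any constant in [0, 1] will do.
        return [0.0] * len(weights)
    return [(w - minval) / float(maxval - minval) for w in weights]

# e.g. free RAM in MB across three hosts
print(normalize([512, 2048, 8192]))   # -> [0.0, 0.2, 1.0]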

Cheers,
-- 
Álvaro López García  al...@ifca.unican.es
Instituto de Física de Cantabria http://alvarolopez.github.io
Ed. Juan Jordá, Campus UC  tel: (+34) 942 200 969
Avda. de los Castros s/n
39005 Santander (SPAIN)
_
http://xkcd.com/571/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Can we use two nova schedulers at the same time?

2013-08-13 Thread sudheesh sk
Hi,

1) Can nova have more than one scheduler at a time?   Standard Scheduler + one 
custom scheduler?

2) If it's possible to add multiple schedulers - how should we configure it? 
Let's say I have a scheduler called 'Scheduler'. So nova.conf may look like 
below:
scheduler_manager = nova.scheduler.filters.SchedulerManager
scheduler_driver = nova.scheduler.filter.Scheduler
Then how can I add a second scheduler?

3) If there are 2 schedulers - will both of these be called when creating a VM?


I am asking these questions based on a response I got from ask openstack forum

Thanks,
Sudheesh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Difference between RBAC policies that are stored in policy.json and policies that can be created using openstack/identity/v3/policies

2013-08-13 Thread sudheesh sk
Hi ,

I am trying to understand the difference between RBAC policies that are stored in 
policy.json and policies that can be created using 
openstack/identity/v3/policies.
I got an answer from the openstack forum that I can use both DB and policy.json 
based implementations for RBAC policy management.

Can you please tell me how to use DB-based RBAC? I can elaborate my question:
 1. In policy.json (keystone) I am able to define a rule called admin_required
 2. Similarly, I can define rules like custom_role_required
 3. Then I can reference this rule for each service API (for example:
identity:list_users = custom_role_required), as in the sketch below.
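
For reference, the policy.json half of those three steps would look roughly
like this (the custom rule and role names are only examples):

{
    "admin_required": "role:admin or is_admin:1",
    "custom_role_required": "role:custom_role",
    "identity:list_users": "rule:custom_role_required"
}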

How can I use this for DB based RBAC policies?

Also, there is code like self.policy_api.enforce(context, creds, 
'admin_required', {}) in many places (this is in wsgi.py).

How can I utilize the same code and at the same time move the policy definition 
to DB

Thanks a million,
Sudheesh___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ANVIL] Missing openvswitch dependency for basic-neutron.yaml persona

2013-08-13 Thread Sylvain Bauza

Cross-posting to openstack-ops@.
Maybe someone has experienced the same issue and worked around it?

-Sylvain

Le 12/08/2013 18:10, Sylvain Bauza a écrit :

Hi,

./smithy -a install -p conf/personas/in-a-box/basic-neutron.yaml is 
failing because openvswitch is missing.

See logs here [1].

Does anyone know why openvswitch is needed when asking for 
linuxbridge in components/neutron.yaml?

Shall I update distros/rhel.yaml ?

-Sylvain



[1] : http://pastebin.com/TFkDrrDc




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Yee, Guang
Passing the query parameters, whatever they are, into the driver if the
given driver supports pagination and allowing the driver to override the
manager default pagination functionality seem reasonable to me.
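
A minimal sketch of that shape (the class and method names here are invented
for illustration, not keystone's actual interface):

class EntityManager(object):
    """Default pagination in the manager, overridable by the driver."""

    def __init__(self, driver):
        self.driver = driver

    def list_entities(self, query_params):
        if hasattr(self.driver, 'list_entities_paginated'):
            # A paginate-aware driver gets the raw query parameters and
            # decides itself how to interpret them (marker/limit, page, ...).
            return self.driver.list_entities_paginated(query_params)
        # Fallback: full enumeration, sliced with the page/per_page defaults.
        refs = self.driver.list_entities()
        page = int(query_params.get('page', 1))
        per_page = int(query_params.get('per_page', 30))
        return refs[per_page * (page - 1):per_page * page]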

 

 

Guang

 

 

From: Dolph Mathews [mailto:dolph.math...@gmail.com] 
Sent: Monday, August 12, 2013 8:22 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [keystone] Pagination

 

 

On Mon, Aug 12, 2013 at 7:51 PM, Jamie Lennox  wrote:

I'm not sure where it would make sense within the API to return the name
of the page/per_page variables to the client that doesn't involve having
already issued the call (ie returning the names within the links box
means you've already issued the query).

 

I think you're missing the point (and you're right: that wouldn't make sense
at all). The API client follows links. The controller builds links. The
driver defines it's own pagination interface to build related links.

 

If the client is forced to understand the pagination interface then the
abstraction is broken.

 

If we standardize on the
page/per_page combination

 

There doesn't need to be a "standard."

 

then this can be handled at the controller
level then the driver has permission to simply ignore it - or have the
controller do the slicing after the driver has returned.

 

Correct. This sort of "default" pagination can be implemented by the
manager, and overridden by a specific driver.

 


To weigh in on the other question i think it should be checked that page
is an integer, unless per_page is specified in which case default to 1.

For example:

GET /v3/users?page=

I would expect to return all users as page is not set. However:

GET /v3/users?per_page=30

As per_page is useless without a page i think we can default to page=1.

As an aside are we indexing from 1?

 

Rhetorical: why not index from -1 and count in base 64? This is all
arbitrary and can vary by driver.

 


On Mon, 2013-08-12 at 19:05 -0500, Dolph Mathews wrote:
> The way paginated links are defined by the v3 API (via `next` and
> `previous` links), it can be completely up to the driver as to what
> the query parameters look like. So, the client shouldn't have (nor
> require) any knowledge of how to build query parameters for
> pagination. It just needs to follow the links it's given.
>
>
> 'page' and 'per_page' are trivial for the controller to implement (as
> it's just slicing into an list... as shown)... so that's a reasonable
> default behavior (for when a driver does not support pagination).
> However, if the underlying driver DOES support pagination, it should
> provide a way for the controller to ask for the query parameters
> required to specify the next/previous links (so, one driver could
> return `marker` and `limit` parameters while another only exposes the
> `page` number, but not quantity `per_page`).
>
>
> On Mon, Aug 12, 2013 at 4:34 PM, Henry Nash
>  wrote:
> Hi
>
>
> I'm working on extending the pagination into the backends.
>  Right now, we handle the pagination in the v3 controller
> classand in fact it is disabled right now and we return
> the whole list irrespective of whether page/per-page is set in
> the query string, e.g.:
>
>
> def paginate(cls, context, refs):
> """Paginates a list of references by page & per_page
> query strings."""
> # FIXME(dolph): client needs to support pagination
> first
> return refs
>
>
> page = context['query_string'].get('page', 1)
> per_page = context['query_string'].get('per_page', 30)
> return refs[per_page * (page - 1):per_page * page]
>
>
> I wonder both for the V3 controller (which still needs to
> handle pagination for backends that do not support it) and the
> backends that do... whether we could use whether 'page' is
> defined in the query-string as an indicator as to whether we
> should paginate or not?  That way clients who can handle it
> can ask for it, those that don't will just get everything.
>
>
> Henry
>
>
>
>
>
>
> --
>
>
> -Dolph

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 

-- 

 

-Dolph 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev