[openstack-dev] [Horizon] The source language of Horizon in Transifex

2013-08-14 Thread Ying Chun Guo

Hi,

Now the source language of Horizon in Transifex is set as en_US, not en. So
when pulling the translations
from Transifex, there will be some dummy characters in en.po, which will
cause errors in unit
tests.

I can't find a way to change the setting in Transifex. I think the only way
to fix it is to
re-upload the resources with the source language set to en, and delete the
existing resources.
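
(For reference: with the Transifex client the source language is declared per resource when it is registered; a minimal .tx/config sketch, where the resource slug and file paths are placeholders rather than Horizon's actual configuration:)

# .tx/config (illustrative entry only)
[horizon.django-po]
source_lang = en
source_file = horizon/locale/en/LC_MESSAGES/django.po
file_filter = horizon/locale/<lang>/LC_MESSAGES/django.po
type = PO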

Please let me know if the Horizon development team knows about this issue and has any
plans to fix it. Thanks

Regards
Daisy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for approving Auto HA development blueprint.

2013-08-14 Thread Konglingxian
Hi yongiman:

I wonder what the difference is between your ‘auto HA’ API and ‘evacuate’.


Lingxian Kong
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com

From: yongi...@gmail.com [mailto:yongi...@gmail.com]
Sent: Tuesday, August 13, 2013 9:12 PM
To: OpenStack Development Mailing List
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Re: Proposal for approving Auto HA development 
blueprint.

To realize the auto HA function, we need a monitoring service like Ceilometer.


Ceilometer monitors the status of compute nodes (network interface connection, 
health check, etc.).


What I focus on is that this operation proceeds automatically.


Nova exposes an auto HA API. When Nova receives an auto HA API call, VMs automatically 
migrate to the auto HA host (an extra compute node reserved only for auto HA).


All auto HA information is stored in the auto_ha_hosts table.



In this table, the used column of the auto HA host is set to true.


The administrator checks the broken compute node and fixes (or replaces) it.


After fixing the compute node, the VMs are migrated back to operating compute nodes, and 
the auto HA host is empty again.



When the number of running VMs on the auto HA host is zero, a periodic task resets the 
used column to false so the host can be used again.
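
(A minimal, self-contained sketch of that periodic task, using SQLite and hypothetical table contents for illustration; the auto_ha_hosts.used column name comes from this message, everything else is assumed and is not the blueprint's actual code.)

import sqlite3

def reset_unused_auto_ha_hosts(conn):
    """Periodic task: free auto HA hosts that no longer run any VMs."""
    cur = conn.cursor()
    cur.execute("SELECT host FROM auto_ha_hosts WHERE used = 1")
    for (host,) in cur.fetchall():
        cur.execute("SELECT COUNT(*) FROM instances WHERE host = ?", (host,))
        if cur.fetchone()[0] == 0:
            # No VMs left on this auto HA host: mark it available again.
            cur.execute("UPDATE auto_ha_hosts SET used = 0 WHERE host = ?", (host,))
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE auto_ha_hosts (host TEXT PRIMARY KEY, used INTEGER);
        CREATE TABLE instances (uuid TEXT PRIMARY KEY, host TEXT);
        INSERT INTO auto_ha_hosts VALUES ('ha-host-1', 1);  -- VMs already migrated away
    """)
    reset_unused_auto_ha_hosts(conn)
    print(conn.execute("SELECT host, used FROM auto_ha_hosts").fetchall())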


Integration with a monitoring service is important. However, in this blueprint I 
want to realize Nova's auto HA operation.


My wiki page is still being built; I will fill it out as soon as possible.


I am looking forward to your advice. Thank you very much!




Sent from my iPad

On 2013. 8. 13., at 8:01 PM, balaji patnala patnala...@gmail.com wrote:
This is a potential candidate for a new OpenStack service, like Ceilometer or Heat, 
to provide high availability of VMs. Good topic to discuss at the Summit for 
implementation post Havana release.
On Tue, Aug 13, 2013 at 12:03 PM, Alex Glikson glik...@il.ibm.com wrote:
Agree. Some enhancements to Nova might still be required (e.g., to handle 
resource reservations, so that there is enough capacity), but the end-to-end 
framework probably should be outside of existing services, probably talking to 
Nova, Ceilometer and potentially other components (maybe Cinder, Neutron, 
Ironic), and 'orchestrating' failure detection, fencing and recovery.
Probably worth a discussion at the upcoming summit.


Regards,
Alex



From: Konglingxian konglingx...@huawei.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: 13/08/2013 07:07 AM
Subject: [openstack-dev] Re: Proposal for approving Auto HA development blueprint.




Hi yongiman:

Your idea is good, but I think the auto HA operation is not OpenStack’s 
business. IMO, Ceilometer offers ‘monitoring’, Nova offers ‘evacuation’, and 
you can combine them to realize the HA operation.

So, I’m afraid I can’t understand the specific implementation details very well.

Any different opinions?

From: yongi...@gmail.com [mailto:yongi...@gmail.com]
Sent: 12 August 2013 20:52
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Proposal for approving Auto HA development blueprint.



Hi,

Now, I am developing an auto HA operation for VM high availability.

This function progresses entirely automatically.

It needs another service, like Ceilometer.

Ceilometer monitors compute nodes.

When Ceilometer detects a broken compute node, it sends an API call to the
auto HA API that Nova exposes.

When it receives the auto HA call, Nova starts the auto HA operation.

All auto-HA-enabled VMs running on the broken host are migrated to the 
auto HA host, which is an extra compute node used only for the Auto-HA function.

Below is my blueprint and wiki page.

The wiki page is not yet complete. I am now adding lots of information about this 
function.

Thanks

https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken

https://wiki.openstack.org/wiki/Autoha
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Horizon] The source language of Horizon in Transifex

2013-08-14 Thread Julie Pichon
Hi Daisy,

Ying Chun Guo guoyi...@cn.ibm.com wrote:
 Now the source language of Horizon in Transifex is set as en_US, not en. So
 when pulling the translations
 from Transifex, there will be some dummy characters in en.po, which will
 cause errors in unit
 tests.
 
 I don't find a way to change the setting in Transifex. I think the only way
 to fix it is to
 re-upload the resources with source language setting as en, and delete the
 existing resources.
 
 Please let me know if Horizon development team know the issue and have any
 plans to fix it. Thanks

We have a bug open for tracking this, 1179526 [0]. Akihiro Motoki
suggested a temporary change that prevents the issues we saw with
unit tests, which we should implement. I think at this point we
were planning on reaching out to our contacts at Transifex (...if
we have any?) to see if it would be possible to fix this without
having to delete and recreate the project.

Regards,

Julie

[0] https://bugs.launchpad.net/horizon/+bug/1179526


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Question about get_meters query using a JOIN

2013-08-14 Thread Julien Danjou
On Tue, Aug 13 2013, Thomas Maddox wrote:

 I was curious about why we went for a JOIN here rather than just using the
 meter table initially?
 https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/impl_sqlalchemy.py#L336-L391.
 Doug had mentioned that some performance testing had gone on with some of
 these queries, so before writing up requests to change this to the meter
 table only, I wanted to see if this was a result of that performance
 testing? Like the JOIN was less expensive than a DISTINCT.

Because the SQL driver has historically been a straight conversion from the
MongoDB driver, which worked that way.
The MongoDB driver had these two collections, meter and resources, and
used both of them to construct this data.

The original idea was to register resources in the eponymous table, and use
it to filter. Unfortunately, that didn't work out, especially with API v2,
which allows filtering on many things. Among other things, there were
timestamp fields in the resource table that were used to filter based on
timestamp, but this was a naive implementation that failed pretty quickly,
and we removed it and replaced it with a more solid one based on the meter
table.

What you see here is what remains from that time, and in the end, this
resource table should be removed AFAICT, in MongoDB as well.
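
(To make the contrast concrete, a minimal sketch of a meter-table-only query of the kind discussed above; the model and column names are hypothetical and this is not Ceilometer's actual driver code.)

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Meter(Base):
    # One row per sample; meter definitions are recovered with DISTINCT,
    # so no JOIN against a separate resource table is needed.
    __tablename__ = 'meter'
    id = Column(Integer, primary_key=True)
    counter_name = Column(String(255))
    counter_type = Column(String(255))
    counter_unit = Column(String(255))
    resource_id = Column(String(255))
    project_id = Column(String(255))
    timestamp = Column(DateTime)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    query = session.query(
        Meter.counter_name,
        Meter.counter_type,
        Meter.counter_unit,
        Meter.resource_id,
    ).distinct()
    for name, ctype, unit, resource_id in query:
        print(name, ctype, unit, resource_id)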

-- 
Julien Danjou
;; Free Software hacker ; freelance consultant
;; http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for approving Starting by scheduler development blueprint.

2013-08-14 Thread Konglingxian
I’m interested in this and want to hear others’ opinions.


Lingxian Kong
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com

From: cosmos cosmos [mailto:cosmos0...@gmail.com]
Sent: Wednesday, August 14, 2013 10:32 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Proposal for approving Starting by scheduler 
development blueprint.

Hello.
My name is Rucia, from Samsung SDS.

Now, I am developing start logic via nova-scheduler for efficient use of host 
resources.
This function is already implemented in the Folsom release version.

It is used for iSCSI targets such as HP SAN storage.

This is slightly different from the original version:
if you start an instance after stopping it, the instance will be started on the 
optimal compute host.
The host is selected through nova-scheduler.

Before Logic
1. The start logic of OpenStack Nova does not originally use the scheduler.
2. The instance starts on the host where it was created.

After Logic
1. When a stopped instance is started, it is changed to start on a host 
selected by nova-scheduler (see the sketch after the Pros list below).
2. When the VM starts, the resources are checked through check_resource_limit().

Pros
- You can use resources efficiently.
- When you start a virtual machine, you avoid the errors caused by a lack of 
resources on a single host.
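
(A minimal, self-contained sketch of the idea: pick a start host through a scheduler-style filter instead of reusing the original host. All names are hypothetical except check_resource_limit(), which is mentioned above; this is not the blueprint's actual code.)

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_ram_mb: int
    free_vcpus: int

def check_resource_limit(host: Host, ram_mb: int, vcpus: int) -> bool:
    """Reject hosts that cannot fit the instance being started."""
    return host.free_ram_mb >= ram_mb and host.free_vcpus >= vcpus

def select_host_for_start(hosts, ram_mb, vcpus):
    """Pick the host with the most free RAM among those that pass the check."""
    candidates = [h for h in hosts if check_resource_limit(h, ram_mb, vcpus)]
    if not candidates:
        raise RuntimeError("No valid host found to start the instance")
    return max(candidates, key=lambda h: h.free_ram_mb)

hosts = [Host("compute1", 2048, 2), Host("compute2", 8192, 8)]
print(select_host_for_start(hosts, ram_mb=4096, vcpus=2).name)  # compute2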

Below is my blueprint and wiki page.
Thanks

https://blueprints.launchpad.net/nova/+spec/start-instance-by-scheduler
https://wiki.openstack.org/wiki/Start
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Difference between RBAC polices thats stored in policy.json and policies that can be created using openstack/identity/v3/policies

2013-08-14 Thread Henry Nash
Hi Sudheesh,

Using v3/policies is just a way of allowing other OpenStack projects (nova, 
glance, etc.) a place to centrally store/access their policy files.  Keystone 
does not interpret any of the data you store here - it is simply acting as a 
central repository (where you can store a big blob of data that is, in effect, 
your policy file).  So the only place you can set policies is in the policy 
file.
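
(For reference, a minimal policy.json sketch along the lines Sudheesh describes below; the rule names admin_required, custom_role_required and identity:list_users are taken from his message, and the rule bodies are illustrative rather than a recommended configuration.)

{
    "admin_required": "role:admin or is_admin:1",
    "custom_role_required": "role:custom_role",
    "identity:list_users": "rule:custom_role_required"
}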

Henry
On 13 Aug 2013, at 08:22, sudheesh sk wrote:

 Hi ,
 
 I am trying to understand the difference between RBAC policies that are stored in 
 policy.json and policies that can be created using 
 openstack/identity/v3/policies.
 
 I got an answer on the OpenStack forum that I can use both DB- and policy.json-based 
 implementations for RBAC policy management.
 
 Can you please tell me how to use DB-based RBAC? To elaborate my question:
  1. In policy.json (Keystone) I am able to define a rule called admin_required. 
  2. Similarly, I can define rules like custom_role_required. 
  3. Then I can add this rule against each service (e.g. 
 identity:list_users = custom_role_required). How can I use this for DB-based 
 RBAC policies? Also, there is code like self.policy_api.enforce(context, 
 creds, 'admin_required', {}) in many places (this is in wsgi.py). 
 
 How can I utilize the same code and at the same time move the policy 
 definition to the DB?
 
 Thanks a million,
 Sudheesh
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-14 Thread Julien Danjou
On Tue, Aug 13 2013, Thomas Maddox wrote:

Hi Thomas,

   *   Driving get_resources() with the Meter table instead of the
   Resource table. This is mainly because of the additional filtering
   available in the Meter table, which allows us to satisfy a use case
   like getting a list of resources a user had during a period of time
   to get meters to compute billing with. The semantics are tripping me
   up a bit; the question this boiled down to for me was: why use a
   resource query to get meters to show usage by a tenant? I was
   curious about why we needed the timestamp filtering when looking at
   Resources, and why we would use Resource as a way to get at metering
   data, rather than a Meter request itself? This was answered by
   resources being the current vector to get at metering data for a
   tenant in terms of resources, if I understood correctly.
   *   With this implementation, we have to do aggregation to get at
   the discrete Resources (via the Meter table) rather than just
   filtering the already distinct ones in the Resource table.

I think I already answered that in a previous email when I said drop
the resource table. :)

   *   This brought up some confusion with the API for me with the
   major use cases I can think of:
  *   As a new consumer of this API, I would think that
   /resource/resource_id would get me details for a resource, e.g.
   current state, when it was created, last updated/used timestamp, who
   owns it; not the attributes from the first sample to come through
   about it

s/first/last/ actually

I wouldn't disagree if you had such improvements to propose.
However, we're pretty flexible in Ceilometer as we don't only allow
metering OpenStack; be careful that something like who owns it might
change in systems other than OpenStack, for example, depending on the
time range you filter on.

  *   I would think that
  /meter/?q.field=resource_id&q.value=resource_id ought to get me
  a list of meter(s) details for a specific resource, e.g. name,
  unit, and origin; but not a huge mixture of samples.

That could be a nice improvement indeed.

 *   Additionally /meter/?q.field=user_id&q.value=user_id
 would get me a list of all meters that are currently related
 to the user

Same as above.

  *   The ultimate use case, for billing queries, I would think
  that /meter/meter_id/statistics?time filters&user(resource_id)
  would get me the measurements for
  that meter to bill for.

We'd like that too, though it's not always perfect since we don't handle
the different counter types explicitly.
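
(To illustrate the kind of queries being discussed, a hedged example against the Ceilometer v2 HTTP API; the endpoint, token, meter name and IDs are placeholders, and the exact field/operator spelling should be checked against the deployed API.)

import requests

BASE = "http://ceilometer.example.com:8777/v2"
HEADERS = {"X-Auth-Token": "<keystone-token>"}

# Meters attached to one resource.
r = requests.get(
    BASE + "/meters",
    headers=HEADERS,
    params={"q.field": "resource_id", "q.op": "eq", "q.value": "<resource-uuid>"},
)
print(r.json())

# Statistics for one meter, restricted to a user and a time range.
r = requests.get(
    BASE + "/meters/cpu_util/statistics",
    headers=HEADERS,
    params=[
        ("q.field", "user_id"), ("q.op", "eq"), ("q.value", "<user-uuid>"),
        ("q.field", "timestamp"), ("q.op", "ge"), ("q.value", "2013-08-01T00:00:00"),
    ],
)
print(r.json())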

 If I understand correctly, one main intent driving this is wanting to
 avoid end users having to write a bunch of API requests themselves
 from the billing side and instead just drill down from payloads for
 each resource to get the billing information for their customers. It
 also looks like there's a BP to add grouping functionality to
 statistics queries to allow us this functionality easily (this one, I
 think:
 https://blueprints.launchpad.net/ceilometer/+spec/api-group-by).

Yes.

 I'm new to this project, so I'm trying to get a handle on how we got
 here and maybe offer some outside perspective, if it's needed or
 wanted. =]

I think you got the picture right. We're trying to improve the API, but
we're always happy to get help. There's a sort of meta blueprint:

  https://blueprints.launchpad.net/ceilometer/+spec/api-v2-improvement

With various ideas to improve the API. It's assigned to me, though I
didn't implement most of the ideas there, and won't probably have time
to implement them all, so feel free to contribute!

-- 
Julien Danjou
/* Free Software hacker * freelance consultant
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ANVIL] Missing openvswitch dependency for basic-neutron.yaml persona

2013-08-14 Thread Sylvain Bauza
Thanks for the links, pretty useful. I do understand the process, but I 
have to admit I don't see which Cheetah placeholder I would use for 
doing a big 'if' statement conditioning the package 
openstack-neutron-openvswitch on the core_plugin yaml option.


As you said, this is not enough; if requested, openvswitch should also 
either be compiled or fetched from RDO.

I filed a bug : https://bugs.launchpad.net/anvil/+bug/1212165


Anyway, I'm quite interested in doing option 1 that you mentioned; I 
still need to understand things, though. Could you be more precise about the 
way the spec files are populated?

Thanks,
-Sylvain


On 13/08/2013 19:55, Joshua Harlow wrote:

Haha, no problem. Darn time differences.

So some other useful links that I think will be helpful.

- https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-neutron.spec

This one is likely the biggest part of the issue, since it is the 
combination of all of neutron into 1 package (which has sub-packages).


- One of those sub-packages is 
https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-neutron.spec#L274


This is pulling in the openvswitch part, which I think you don't want (at 
least not always; it's wanted if Neutron is going to use it, 
which under certain plugins it will).


As you've seen it likely shouldn't be installing/needing that if 
https://github.com/stackforge/anvil/blob/master/anvil/components/configurators/neutron_plugins/linuxbridge.py is 
used.


This should be coming from the following config (which will come from 
the yaml files) 'get_option' 'call':


https://github.com/stackforge/anvil/blob/master/anvil/components/configurators/neutron.py#L49

So I think what can be done is a couple of things:

 1. Don't include sub-packages that we don't want (the spec files are
    cheetah http://www.cheetahtemplate.org/ templates, so this can
    be done dynamically; see the sketch after this list).
 2. See if there is a way to make yum (or via yyoom) not pull in the
dependencies for a sub-package when it won't be used (?)
 3. Always build openvswitch (not as preferable) and include it
(https://github.com/stackforge/anvil/blob/master/tools/build-openvswitch.sh)

  * I think the RDO repos might have some of these components.
  * 
http://openstack.redhat.com/Frequently_Asked_Questions#For_which_distributions_does_RDO_provide_packages.3F
  * This means we can just include the RDO repo rpm (like epel and
use that openvswitch version there) instead of build your own.
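
(Regarding option 1, a rough sketch of the kind of Cheetah conditional being discussed; the variable name, plugin value and sub-package layout are hypothetical, and this is not ANVIL's actual template.)

## Hypothetical excerpt of a Cheetah-templated RPM spec: the openvswitch
## sub-package is only generated when the yaml option selects that plugin.
#if $core_plugin == 'openvswitch'
%package openvswitch
Summary: Neutron openvswitch plugin
Requires: %{name} = %{version}-%{release}
Requires: openvswitch

%description openvswitch
Neutron plugin that uses Open vSwitch.
#end if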

Hope some of this offers some good pointers.

-Josh

From: Sylvain Bauza sylvain.ba...@bull.net
Date: Tuesday, August 13, 2013 9:52 AM
To: Joshua Harlow harlo...@yahoo-inc.com
Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [ANVIL] Missing openvswitch dependency 
for basic-neutron.yaml persona


Well, then I have to read thru the docs to see how it can be done thru 
a config option... =)


Nope, I won't be able to catch you up on IRC, time difference you know :-)
Anyway, let me go thru it, I'll try to sort it out.

I RTFM'd all the anvil docs, but do you have any other pointer for me ?

Thanks,
-Sylvain

On 13/08/2013 18:39, Joshua Harlow wrote:
Well, openvswitch is likely still needed when it's really needed, 
right? So I think there is a need for it. It just might have to be a 
dynamic choice (based on a config option) instead of a static choice. 
Make sense??


The other personas don't use neutron so I think that's how they work, 
since nova-network base functionality still exists.


Any patches would be great; I will be on IRC soon if you want to discuss 
more.


Josh

Sent from my really tiny device...

On Aug 13, 2013, at 9:23 AM, Sylvain Bauza sylvain.ba...@bull.net wrote:


Do you confirm the basic idea would be to get rid of any openvswitch 
reference in rhel.yaml ?

If so, wouldn't it be breaking other personas ?

I can provide a patch so the team would review it.

-Sylvain

On 13/08/2013 17:57, Joshua Harlow wrote:

It likely shouldn't be needed :)

I haven't personally messed around with the neutron persona too much, 
and I know that it just underwent the great rename of 2013, so you 
might be hitting issues due to that.


Try seeing if you can adjust the yaml file, and if not I am on IRC to 
help more.


Sent from my really tiny device...

On Aug 12, 2013, at 9:14 AM, Sylvain Bauza sylvain.ba...@bull.net wrote:



Hi,

./smithy -a install -p conf/personas/in-a-box/basic-neutron.yaml 
is failing because of openvswitch missing.

See logs here [1].

Does anyone know why openvswitch is needed when asking for 
linuxbridge in components/neutron.yaml ?

Shall I update distros/rhel.yaml ?

-Sylvain



[1] : http://pastebin.com/TFkDrrDc



Re: [openstack-dev] Swift, netifaces, PyPy, and cffi

2013-08-14 Thread Chmouel Boudjnah
On Tue, Aug 13, 2013 at 11:58 PM, Alex Gaynor alex.gay...@gmail.com wrote:

 a) cffi-only approach, this is obviously the simplest approach, and works
 everywhere (assuming you can install a PPA, use pip, or similar for cffi)
 b) wait until the next LTS to move to this approach (requires waiting
 until 2014 for PyPy support)
 c) Support using either netifaces or cffi: most complex, and most code,
 plus one or the other dependencies aren't well supported by most tools as
 far as I know.


I think it sucks for large public clouds already in production to have to
package a new Python module and deploy it, when most of them are
pretty conservative with what goes into production, and we can't really
assume they are going to update all their LTS to the new version when it's
released.

Having said that, there are people interested in knowing what performance
improvements can come out of your work (thanks for that) to get PyPy
support.

If there is a vote I would go for c); then, potentially, in the future we
could unify it to just use cffi.
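
(A minimal sketch of option c: try a cffi-based helper first and fall back to netifaces when it is not available. The cffi-backed module name is hypothetical; only the netifaces calls are the real library API, and this is not Swift's actual code.)

def get_interface_addresses():
    try:
        # Hypothetical cffi-backed module; swap in whatever the real helper is called.
        from my_cffi_netutils import list_addresses
        return list_addresses()
    except ImportError:
        import netifaces
        addrs = []
        for iface in netifaces.interfaces():
            for family, entries in netifaces.ifaddresses(iface).items():
                addrs.extend(entry['addr'] for entry in entries if 'addr' in entry)
        return addrs

if __name__ == '__main__':
    print(get_interface_addresses())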

When you say 'plus one or the other dependencies aren't well
supported by most tools as far as I know', are you talking about apt/pip
etc.? I would assume that this is up to the distro packager, who knows
whether the targeted distro has the cffi package. As far as pip-requires goes,
I would conservatively keep the dependency on netifaces.


Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] GPU passthrough support blueprints for OpenStack

2013-08-14 Thread Bob Ball
Hi Brian,

Instead of specific GPU pass-through there are blueprints for generic PCI pass 
through at https://blueprints.launchpad.net/nova/+spec/pci-passthrough-base - 
these are being worked on for Havana and we're hopeful they will get in 
(possibly libvirt only) 

In terms of vGPU rather than whole-GPU pass through, the slides are correct 
that XenServer support is very nearly here: 
http://www.xenserver.org/discuss-virtualization/q-and-a/nvidia-vgpu-on-xenserver-eg-gpu-hypervisor.html

Support for static allocation of vGPU is likely going to be the first phase - 
and adding it as a resource similar to RAM/CPU isn't likely to be proposed 
until a second phase.

As such, I think the integration should be very similar to PCI pass through 
above.
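
(For orientation, a sketch of the kind of Nova configuration the generic PCI pass-through work introduces; at the time of this thread the blueprints were still under review, so the option names, vendor/product IDs and flavor key are illustrative assumptions rather than settled interfaces.)

# Compute node nova.conf (illustrative values only)
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id": "10de", "product_id": "11bf"}
pci_alias = {"name": "gpu", "vendor_id": "10de", "product_id": "11bf"}

# Expose the device through a flavor extra spec (again illustrative)
nova flavor-key m1.gpu set "pci_passthrough:alias"="gpu:1"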


Bob 

 -Original Message-
 From: Brian Schott [mailto:brian.sch...@nimbisservices.com]
 Sent: 13 August 2013 23:07
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] GPU passthrough support blueprints for OpenStack
 
 Are there more recent blueprints related to adding GPU pass-through
 support?  All that I can find are some stale blueprints that I created around
 the Cactus timeframe (while wearing a different hat) that are pretty out of
 date.
 
 I just heard a rumor that folks are doing Nvidia GRID K2 GPU passthrough
 with KVM successfully using linux 3.10.6 kernel with RHEL.
 
 In addition, Lorin and I did some GPU passthrough testing back in the spring
 with GRID K2 on HyperV, libvirt+xen, and XenServer.  Slides are here:
 http://www.slideshare.net/bfschott/nimbis-schott-
 openstackgpustatus20130618
 
 The virtualization support for  GPU-enabled virtual desktops and GPGPU
 seems to have stabilized this year for server deployments.  How is this going
 to be supported in OpenStack?
 
 Brian
 
 -
 Brian Schott, CTO
 Nimbis Services, Inc.
 brian.sch...@nimbisservices.com
 ph: 443-274-6064  fx: 443-274-6060
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Ceilometer support

2013-08-14 Thread Gary Kotton
[inline image: image001.jpg]
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Does Nova v2API need to know about any Neutron objects apart from ports ? (was RE: [nova] security_groups extension in nova api v3)

2013-08-14 Thread Day, Phil
 
 On Aug 13, 2013, at 2:11 AM, Day, Phil wrote:
 
 If we really want to get clean separation between Nova and Neutron in the V3
 API should we consider making the Nova V3 API only accept lists of port ids in
 the server create command ?

 That way there would be no need to ever pass security group information
 into Nova.

 Any cross project co-ordination (for example automatically creating ports)
 could be handled in the client layer, rather than inside Nova.
 
 Server create is always (until there's a separate layer) going to go cross 
 project
 calling other apis like neutron and cinder while an instance is being
 provisioned. For that reason, I tend to think it's ok to give some extra
 convenience of automatically creating ports if needed, and being able to
 specify security groups.

I think there's a difference between the current interaction with Cinder, where 
Nova will accept a UUID of a cinder object to attach to, and the interaction 
with Neutron where Nova actually creates the Port Objects in Neutron, and as a 
result also needs to accept parameters that are attributes of the foreign 
object.

Fundamentally the relationship between objects in Nova and Neutron is the 
instance<->port mapping - so I'm suggesting that's all that needs to be 
reflected in the V3 API.
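
(As an illustration of the ports-only flow with the client tools of that era; the network name, security group and port UUID are placeholders.)

# Create the port in Neutron first, attaching any security groups there...
neutron port-create private-net --security-group web-sg
# ...then boot the server against the pre-created port, so Nova only ever
# sees a port UUID.
nova boot --flavor m1.small --image cirros --nic port-id=<port-uuid> my-server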

If we consider some of the problems we've had in Havana (Neutron port quota 
running out during server create, errors from the neutron client being reported 
back as 500 errors in Nova, etc.), it seems to me that a lot of those would have 
been avoided if we had a much simpler model and pushed the cross-service 
complexity / convenience up to the client layer.


 
 For the associate and disassociate, the only convenience is being able to use 
 the
 instance display name and security group name, which is already handled at
 the client layer. It seems a clearer case of duplicating what neutron offers.

Agree that this is a clearer case - but in the context of V3 API changes I 
think we should look at the wider issue.

Cheers,
Phil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] GPU passthrough support blueprints for OpenStack

2013-08-14 Thread Alessandro Pilotti
Hi Brian,

On the Hyper-V side we have Remote-Fx support in the works for Havana: 
https://blueprints.launchpad.net/nova/+spec/hyper-v-remotefx
The code will be ready for review around next week.

Thanks,

Alessandro



On Aug 14, 2013, at 01:07, Brian Schott brian.sch...@nimbisservices.com wrote:

Are there more recent blueprints related to adding GPU pass-through support?  
All that I can find are some stale blueprints that I created around the Cactus 
timeframe (while wearing a different hat) that are pretty out of date.

I just heard a rumor that folks are doing Nvidia GRID K2 GPU passthrough with 
KVM successfully using linux 3.10.6 kernel with RHEL.

In addition, Lorin and I did some GPU passthrough testing back in the spring 
with GRID K2 on HyperV, libvirt+xen, and XenServer.  Slides are here:
http://www.slideshare.net/bfschott/nimbis-schott-openstackgpustatus20130618

The virtualization support for  GPU-enabled virtual desktops and GPGPU seems to 
have stabilized this year for server deployments.  How is this going to be 
supported in OpenStack?

Brian

-
Brian Schott, CTO
Nimbis Services, Inc.
brian.sch...@nimbisservices.com
ph: 443-274-6064  fx: 443-274-6060



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] dnsmasq doesn't send DHCPACK

2013-08-14 Thread Alvise Dorigo
Dear list,
I'm facing a problem related to network bring-up inside a Scientific Linux 5 
virtual instance.
My Openstack installation has been made with packstack (with options --allinone 
--os-swift-install=n --os-quantum-install=n --os-cinder-install=n), using the 
latest Grizzly, on a Scientific Linux 6.4 host machine:

[root@lxadorigo log]# rpm -qa|egrep 'openstack|qemu|virt|dnsmasq'

qemu-img-0.12.1.2-2.355.el6_4.6.x86_64
openstack-packstack-2013.1.1-0.24.dev660.el6.noarch
dnsmasq-utils-2.48-13.el6.x86_64
libvirt-client-0.10.2-18.el6_4.9.x86_64
libvirt-python-0.10.2-18.el6_4.9.x86_64
openstack-nova-scheduler-2013.1.3-1.el6.noarch
openstack-glance-2013.1.2-2.el6.noarch
openstack-nova-common-2013.1.3-1.el6.noarch
openstack-nova-network-2013.1.3-1.el6.noarch
gpxe-roms-qemu-0.9.7-6.9.el6.noarch
libvirt-0.10.2-18.el6_4.9.x86_64
openstack-nova-compute-2013.1.3-1.el6.noarch
openstack-utils-2013.1-8.el6.noarch
python-django-openstack-auth-1.0.7-1.el6.noarch
openstack-nova-api-2013.1.3-1.el6.noarch
qemu-kvm-0.12.1.2-2.355.el6_4.6.x86_64
openstack-nova-conductor-2013.1.3-1.el6.noarch
openstack-nova-cert-2013.1.3-1.el6.noarch
kernel-2.6.32-358.114.1.openstack.el6.gre.2.x86_64
virt-what-1.11-1.2.el6.x86_64
openstack-nova-novncproxy-2013.1.3-1.el6.noarch
openstack-dashboard-2013.1.1-1.el6.noarch
kernel-firmware-2.6.32-358.114.1.openstack.el6.gre.2.noarch
openstack-keystone-2013.1.3-1.el6.noarch
openstack-nova-console-2013.1.3-1.el6.noarch
dnsmasq-2.48-13.el6.x86_64

[root@lxadorigo log]# uname -r
2.6.32-358.114.1.openstack.el6.gre.2.x86_64


When I upload a Scientific Linux 6 AMI image (linked to a proper kernel and 
initramfs) and then I start it, I do not have any problem with it; the instance 
starts and this is the response the dhcp client gets from the dnsmasq:

Aug 14 13:48:43 lxadorigo dnsmasq[3321]: read /etc/hosts - 2 addresses
Aug 14 13:48:43 lxadorigo dnsmasq[3321]: read 
/var/lib/nova/networks/nova-br100.conf
Aug 14 13:48:45 lxadorigo kernel: kvm: 24134: cpu0 disabled perfctr wrmsr: 0xc1 
data 0xabcd
Aug 14 13:48:51 lxadorigo kernel: device vnet0 entered promiscuous mode
Aug 14 13:48:51 lxadorigo kernel: br100: port 1(vnet0) entering forwarding state
Aug 14 13:48:51 lxadorigo qemu-kvm: Could not find keytab file: 
/etc/qemu/krb5.tab: No such file or directory
Aug 14 13:48:54 lxadorigo ntpd[2283]: Listening on interface #15 vnet0, 
fe80::fc16:3eff:fe16:fbbc#123 Enabled
Aug 14 13:48:56 lxadorigo kernel: kvm: 24392: cpu0 unhandled wrmsr: 0x391 data 
200f
Aug 14 13:49:04 lxadorigo dnsmasq-dhcp[3321]: DHCPDISCOVER(br100) 
fa:16:3e:16:fb:bc 
Aug 14 13:49:04 lxadorigo dnsmasq-dhcp[3321]: DHCPOFFER(br100) 192.168.32.4 
fa:16:3e:16:fb:bc 
Aug 14 13:49:04 lxadorigo dnsmasq-dhcp[3321]: DHCPREQUEST(br100) 192.168.32.4 
fa:16:3e:16:fb:bc 
Aug 14 13:49:04 lxadorigo dnsmasq-dhcp[3321]: DHCPACK(br100) 192.168.32.4 
fa:16:3e:16:fb:bc sl64-5gb



If I upload a Scientific Linux 5 image, when its dhclient asks dnsmasq for 
negotiation, this is dnsmasq's answer:

Aug 14 14:08:53 lxadorigo dnsmasq[25019]: read /etc/hosts - 2 addresses
Aug 14 14:08:53 lxadorigo dnsmasq[25019]: read 
/var/lib/nova/networks/nova-br100.conf
Aug 14 14:08:55 lxadorigo kernel: kvm: 26374: cpu0 disabled perfctr wrmsr: 0xc1 
data 0xabcd
Aug 14 14:09:00 lxadorigo kernel: device vnet0 entered promiscuous mode
Aug 14 14:09:00 lxadorigo kernel: br100: port 1(vnet0) entering forwarding state
Aug 14 14:09:00 lxadorigo qemu-kvm: Could not find keytab file: 
/etc/qemu/krb5.tab: No such file or directory
Aug 14 14:09:03 lxadorigo ntpd[2283]: Listening on interface #18 vnet0, 
fe80::fc16:3eff:fea4:1cb8#123 Enabled
Aug 14 14:09:12 lxadorigo dnsmasq-dhcp[25019]: DHCPDISCOVER(br100) 
fa:16:3e:a4:1c:b8 
Aug 14 14:09:12 lxadorigo dnsmasq-dhcp[25019]: DHCPOFFER(br100) 192.168.32.2 
fa:16:3e:a4:1c:b8 
Aug 14 14:09:20 lxadorigo dnsmasq-dhcp[25019]: DHCPDISCOVER(br100) 
fa:16:3e:a4:1c:b8 
Aug 14 14:09:20 lxadorigo dnsmasq-dhcp[25019]: DHCPOFFER(br100) 192.168.32.2 
fa:16:3e:a4:1c:b8 
Aug 14 14:09:28 lxadorigo dnsmasq-dhcp[25019]: DHCPDISCOVER(br100) 
fa:16:3e:a4:1c:b8 
Aug 14 14:09:28 lxadorigo dnsmasq-dhcp[25019]: DHCPOFFER(br100) 192.168.32.2 
fa:16:3e:a4:1c:b8 
Aug 14 14:09:35 lxadorigo dnsmasq-dhcp[25019]: DHCPDISCOVER(br100) 
fa:16:3e:a4:1c:b8 
Aug 14 14:09:35 lxadorigo dnsmasq-dhcp[25019]: DHCPOFFER(br100) 192.168.32.2 
fa:16:3e:a4:1c:b8 
Aug 14 14:09:42 lxadorigo dnsmasq-dhcp[25019]: DHCPDISCOVER(br100) 
fa:16:3e:a4:1c:b8 
Aug 14 14:09:42 lxadorigo dnsmasq-dhcp[25019]: DHCPOFFER(br100) 192.168.32.2 
fa:16:3e:a4:1c:b8 
Aug 14 14:09:56 lxadorigo dnsmasq-dhcp[25019]: DHCPDISCOVER(br100) 
fa:16:3e:a4:1c:b8 
Aug 14 14:09:56 lxadorigo dnsmasq-dhcp[25019]: DHCPOFFER(br100) 192.168.32.2 
fa:16:3e:a4:1c:b8 
Aug 14 14:10:07 lxadorigo dnsmasq-dhcp[25019]: DHCPDISCOVER(br100) 
fa:16:3e:a4:1c:b8 
Aug 14 14:10:07 lxadorigo dnsmasq-dhcp[25019]: DHCPOFFER(br100) 192.168.32.2 
fa:16:3e:a4:1c:b8 

without any DHCPACK message. Now, I'm not a network and/or dnsmasq 

Re: [openstack-dev] dnsmasq doesn't send DHCPACK

2013-08-14 Thread Oleg Gelbukh
Hello, Alvise

It is possible that the version of dnsmasq and the lease time are the issue:

https://bugs.launchpad.net/nova/+bug/887162

http://markmail.org/message/7kjf4hljszpydsrx#query:+page:1+mid:7kjf4hljszpydsrx+state:results
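
(If the lease time turns out to be the culprit, the usual workaround in those threads is to hand out longer leases from nova-network; a hedged nova.conf sketch, with an illustrative value:)

# /etc/nova/nova.conf on the node running nova-network (value is illustrative;
# the default lease handed out via dnsmasq is much shorter)
[DEFAULT]
dhcp_lease_time = 86400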

Hope this helps.

--
Best regards,
Oleg Gelbukh
Mirantis Inc.



On Wed, Aug 14, 2013 at 4:19 PM, Alvise Dorigo alvise.dor...@pd.infn.it wrote:

 Dear list,
 I'm facing a problem related with network bring up inside a virtual
 instance Scientific Linux 5.
 My Openstack installation has been made with packstack (with options
 --allinone --os-swift-install=n --os-quantum-install=n
 --os-cinder-install=n), using the latest Grizzly, on a Scientific Linux 6.4
 host machine:

 [root@lxadorigo log]# rpm -qa|egrep 'openstack|qemu|virt|dnsmasq'

 qemu-img-0.12.1.2-2.355.el6_4.6.x86_64
 openstack-packstack-2013.1.1-0.24.dev660.el6.noarch
 dnsmasq-utils-2.48-13.el6.x86_64
 libvirt-client-0.10.2-18.el6_4.9.x86_64
 libvirt-python-0.10.2-18.el6_4.9.x86_64
 openstack-nova-scheduler-2013.1.3-1.el6.noarch
 openstack-glance-2013.1.2-2.el6.noarch
 openstack-nova-common-2013.1.3-1.el6.noarch
 openstack-nova-network-2013.1.3-1.el6.noarch
 gpxe-roms-qemu-0.9.7-6.9.el6.noarch
 libvirt-0.10.2-18.el6_4.9.x86_64
 openstack-nova-compute-2013.1.3-1.el6.noarch
 openstack-utils-2013.1-8.el6.noarch
 python-django-openstack-auth-1.0.7-1.el6.noarch
 openstack-nova-api-2013.1.3-1.el6.noarch
 qemu-kvm-0.12.1.2-2.355.el6_4.6.x86_64
 openstack-nova-conductor-2013.1.3-1.el6.noarch
 openstack-nova-cert-2013.1.3-1.el6.noarch
 kernel-2.6.32-358.114.1.openstack.el6.gre.2.x86_64
 virt-what-1.11-1.2.el6.x86_64
 openstack-nova-novncproxy-2013.1.3-1.el6.noarch
 openstack-dashboard-2013.1.1-1.el6.noarch
 kernel-firmware-2.6.32-358.114.1.openstack.el6.gre.2.noarch
 openstack-keystone-2013.1.3-1.el6.noarch
 openstack-nova-console-2013.1.3-1.el6.noarch
 dnsmasq-2.48-13.el6.x86_64

 [root@lxadorigo log]# uname -r
 2.6.32-358.114.1.openstack.el6.gre.2.x86_64


 When I upload a Scientific Linux 6 AMI image (linked to a proper kernel
 and initramfs) and then I start it, I do not have any problem with it; the
 instance starts and this is the response the dhcp client gets from the
 dnsmasq:

 Aug 14 13:48:43 lxadorigo dnsmasq[3321]: read /etc/hosts - 2 addresses
 Aug 14 13:48:43 lxadorigo dnsmasq[3321]: read
 /var/lib/nova/networks/nova-br100.conf
 Aug 14 13:48:45 lxadorigo kernel: kvm: 24134: cpu0 disabled perfctr wrmsr:
 0xc1 data 0xabcd
 Aug 14 13:48:51 lxadorigo kernel: device vnet0 entered promiscuous mode
 Aug 14 13:48:51 lxadorigo kernel: br100: port 1(vnet0) entering forwarding
 state
 Aug 14 13:48:51 lxadorigo qemu-kvm: Could not find keytab file:
 /etc/qemu/krb5.tab: No such file or directory
 Aug 14 13:48:54 lxadorigo ntpd[2283]: Listening on interface #15 vnet0,
 fe80::fc16:3eff:fe16:fbbc#123 Enabled
 Aug 14 13:48:56 lxadorigo kernel: kvm: 24392: cpu0 unhandled wrmsr: 0x391
 data 200f
 Aug 14 13:49:04 lxadorigo dnsmasq-dhcp[3321]: DHCPDISCOVER(br100)
 fa:16:3e:16:fb:bc
 Aug 14 13:49:04 lxadorigo dnsmasq-dhcp[3321]: DHCPOFFER(br100)
 192.168.32.4 fa:16:3e:16:fb:bc
 Aug 14 13:49:04 lxadorigo dnsmasq-dhcp[3321]: DHCPREQUEST(br100)
 192.168.32.4 fa:16:3e:16:fb:bc
 Aug 14 13:49:04 lxadorigo dnsmasq-dhcp[3321]: DHCPACK(br100) 192.168.32.4
 fa:16:3e:16:fb:bc sl64-5gb



 If I upload a Scientific Linux 5, when its dhclient asks the dnsmasq for
 negotiation this is the dnsmasq's answer:

 Aug 14 14:08:53 lxadorigo dnsmasq[25019]: read /etc/hosts - 2 addresses
 Aug 14 14:08:53 lxadorigo dnsmasq[25019]: read
 /var/lib/nova/networks/nova-br100.conf
 Aug 14 14:08:55 lxadorigo kernel: kvm: 26374: cpu0 disabled perfctr wrmsr:
 0xc1 data 0xabcd
 Aug 14 14:09:00 lxadorigo kernel: device vnet0 entered promiscuous mode
 Aug 14 14:09:00 lxadorigo kernel: br100: port 1(vnet0) entering forwarding
 state
 Aug 14 14:09:00 lxadorigo qemu-kvm: Could not find keytab file:
 /etc/qemu/krb5.tab: No such file or directory
 Aug 14 14:09:03 lxadorigo ntpd[2283]: Listening on interface #18 vnet0,
 fe80::fc16:3eff:fea4:1cb8#123 Enabled
 Aug 14 14:09:12 lxadorigo dnsmasq-dhcp[25019]: DHCPDISCOVER(br100)
 fa:16:3e:a4:1c:b8
 Aug 14 14:09:12 lxadorigo dnsmasq-dhcp[25019]: DHCPOFFER(br100)
 192.168.32.2 fa:16:3e:a4:1c:b8
 Aug 14 14:09:20 lxadorigo dnsmasq-dhcp[25019]: DHCPDISCOVER(br100)
 fa:16:3e:a4:1c:b8
 Aug 14 14:09:20 lxadorigo dnsmasq-dhcp[25019]: DHCPOFFER(br100)
 192.168.32.2 fa:16:3e:a4:1c:b8
 Aug 14 14:09:28 lxadorigo dnsmasq-dhcp[25019]: DHCPDISCOVER(br100)
 fa:16:3e:a4:1c:b8
 Aug 14 14:09:28 lxadorigo dnsmasq-dhcp[25019]: DHCPOFFER(br100)
 192.168.32.2 fa:16:3e:a4:1c:b8
 Aug 14 14:09:35 lxadorigo dnsmasq-dhcp[25019]: DHCPDISCOVER(br100)
 fa:16:3e:a4:1c:b8
 Aug 14 14:09:35 lxadorigo dnsmasq-dhcp[25019]: DHCPOFFER(br100)
 192.168.32.2 fa:16:3e:a4:1c:b8
 Aug 14 14:09:42 lxadorigo dnsmasq-dhcp[25019]: DHCPDISCOVER(br100)
 fa:16:3e:a4:1c:b8
 Aug 14 14:09:42 lxadorigo dnsmasq-dhcp[25019]: DHCPOFFER(br100)
 192.168.32.2 fa:16:3e:a4:1c:b8
 Aug 14 14:09:56 

Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-14 Thread Sandy Walsh


On 08/14/2013 06:17 AM, Julien Danjou wrote:
 On Tue, Aug 13 2013, Thomas Maddox wrote:
 
 Hi Thomas,
 
   *   Driving get_resources() with the Meter table instead of the
   Resource table. This is mainly because of the additional filtering
   available in the Meter table, which allows us to satisfy a use case
   like getting a list of resources a user had during a period of time
   to get meters to compute billing with. The semantics are tripping me
   up a bit; the question this boiled down to for me was: why use a
   resource query to get meters to show usage by a tenant? I was
   curious about why we needed the timestamp filtering when looking at
   Resources, and why we would use Resource as a way to get at metering
   data, rather than a Meter request itself? This was answered by
   resources being the current vector to get at metering data for a
   tenant in terms of resources, if I understood correctly.
   *   With this implementation, we have to do aggregation to get at
   the discrete Resources (via the Meter table) rather than just
   filtering the already distinct ones in the Resource table.
 
 I think I already answered that in a previous email when I said drop
 the resource table. :)
 
   *   This brought up some confusion with the API for me with the
   major use cases I can think of:
  *   As a new consumer of this API, I would think that
   /resource/resource_id would get me details for a resource, e.g.
   current state, when it was created, last updated/used timestamp, who
   owns it; not the attributes from the first sample to come through
   about it
 
 s/first/last/ actually
 
 I wouldn't disagree if you had such improvements to propose.
 However, we're pretty flexible in Ceilometer as we don't allow only
 metering OpenStack; be careful of somethings like who owns it might
 change in other systems than OpenStack for example, depending on the
 time range you filter on.
 
  *   I would think that
   /meter/?q.field=resource_id&q.value=resource_id ought to get me
  a list of meter(s) details for a specific resource, e.g. name,
  unit, and origin; but not a huge mixture of samples.
 
 That could be a nice improvement indeed.
 
  *   Additionally /meter/?q.field=user_id&q.value=user_id
 would get me a list of all meters that are currently related
 to the user
 
 Same as above.
 
  *   The ultimate use case, for billing queries, I would think
   that /meter/meter_id/statistics?time filters&user(resource_id)
   would get me the measurements for
  that meter to bill for.
 
 We'd like that too, though it's not always perfect since we don't handle
 the different counter types explicitly.
 
 If I understand correctly, one main intent driving this is wanting to
 avoid end users having to write a bunch of API requests themselves
 from the billing side and instead just drill down from payloads for
 each resource to get the billing information for their customers. It
 also looks like there's a BP to add grouping functionality to
 statistics queries to allow us this functionality easily (this one, I
 think:
 https://blueprints.launchpad.net/ceilometer/+spec/api-group-by).
 
 Yes.
 
 I'm new to this project, so I'm trying to get a handle on how we got
 here and maybe offer some outside perspective, if it's needed or
 wanted. =]
 
 I think you got the picture right. We're trying to improve the API, but
 we always happy to get help. There's a sort of meta blueprint:
 
   https://blueprints.launchpad.net/ceilometer/+spec/api-v2-improvement
 
 With various ideas to improve the API. It's assigned to me, though I
 didn't implement most of the ideas there, and won't probably have time
 to implement them all, so feel free to contribute!
 

+1 ... I think that would clear up a lot of confusion, not only in the
api but also in the underlying db models.


 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Ceilometer support

2013-08-14 Thread Eugene Nikanorov
Hi Gary,

Isn't it something that 'metering support' is going to solve?
A couple of patches addressing this are on review.

Thanks,
Eugene.


On Wed, Aug 14, 2013 at 3:47 PM, Gary Kotton gkot...@vmware.com wrote:

 [image: https://fbcdn-sphotos-f-a.akamaihd.net/hphotos-ak-prn2/1098074_725247340838055_1378530957_n.jpg]

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


image001.jpg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-14 Thread Adam Young

On 08/13/2013 06:20 PM, Dolph Mathews wrote:
With regard to: 
https://blueprints.launchpad.net/keystone/+spec/key-distribution-server


During today's project status meeting [1], the state of KDS was 
discussed [2]. To quote ttx directly: we've been bitten in the past 
with late security-sensitive stuff and I'm a bit worried to ship 
late code with such security implications as a KDS. I share the same 
concern, especially considering the API only recently went up for 
formal review [3], and the WIP implementation is still failing 
smokestack [4].


Since KDS is a security tightening in a case where there is no security 
at all, adding it can only improve security.


It is a relatively simple extension on the Keystone side.  The 
corresponding change is in the client, and that has already merged.




I'm happy to see the reviews in question continue to receive their 
fair share of attention over the next few weeks, but can (and should?) 
merging be delayed until icehouse while more security-focused eyes 
have time to review the code?


Ceilometer and nova would both be affected by a delay, as both have 
use cases for consuming trusted messaging [5] (a dependency of the bp 
in question).


Thanks for you feedback!

[1]: 
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2013-08-13.log

[2]: http://paste.openstack.org/raw/44075/
[3]: https://review.openstack.org/#/c/40692/
[4]: https://review.openstack.org/#/c/37118/
[5]: https://blueprints.launchpad.net/oslo/+spec/trusted-messaging



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] GPU passthrough support blueprints for OpenStack

2013-08-14 Thread Brian Schott
Thanks for the pointers.  I'll try to work with the USC-ISI team that currently 
maintains those stories and see if we can reconcile or close the old 
blueprints.  

-
Brian Schott, CTO
Nimbis Services, Inc.
brian.sch...@nimbisservices.com
ph: 443-274-6064  fx: 443-274-6060



On Aug 14, 2013, at 6:03 AM, Bob Ball bob.b...@citrix.com wrote:

 Hi Brian,
 
 Instead of specific GPU pass-through there are blueprints for generic PCI 
 pass through at 
 https://blueprints.launchpad.net/nova/+spec/pci-passthrough-base - these are 
 being worked on for Havana and we're hopeful they will get in (possibly 
 libvirt only) 
 
 In terms of vGPU rather than whole-GPU pass through, the slides are correct 
 that XenServer support is very nearly here: 
 http://www.xenserver.org/discuss-virtualization/q-and-a/nvidia-vgpu-on-xenserver-eg-gpu-hypervisor.html
 
 Support for static allocation of vGPU is likely going to be the first phase - 
 and adding it as a resource similar to RAM/CPU isn't likely to be proposed 
 until a second phase.
 
 As such, I think the integration should be very similar to PCI pass through 
 above.
 
 
 Bob 
 
 -Original Message-
 From: Brian Schott [mailto:brian.sch...@nimbisservices.com]
 Sent: 13 August 2013 23:07
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] GPU passthrough support blueprints for OpenStack
 
 Are there more recent blueprints related to adding GPU pass-through
 support?  All that I can find are some stale blueprints that I created around
 the Cactus timeframe (while wearing a different hat) that are pretty out of
 date.
 
 I just heard a rumor that folks are doing Nvidia GRID K2 GPU passthrough
 with KVM successfully using linux 3.10.6 kernel with RHEL.
 
 In addition, Lorin and I did some GPU passthrough testing back in the spring
 with GRID K2 on HyperV, libvirt+xen, and XenServer.  Slides are here:
 http://www.slideshare.net/bfschott/nimbis-schott-
 openstackgpustatus20130618
 
 The virtualization support for  GPU-enabled virtual desktops and GPGPU
 seems to have stabilized this year for server deployments.  How is this going
 to be supported in OpenStack?
 
 Brian
 
 -
 Brian Schott, CTO
 Nimbis Services, Inc.
 brian.sch...@nimbisservices.com
 ph: 443-274-6064  fx: 443-274-6060
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-14 Thread Dolph Mathews
On Tue, Aug 13, 2013 at 5:54 PM, Simo Sorce s...@redhat.com wrote:

 On Tue, 2013-08-13 at 17:20 -0500, Dolph Mathews wrote:
  With regard
  to:
 https://blueprints.launchpad.net/keystone/+spec/key-distribution-server
 
 Well I am of course biased so take my comments with a grain of salt,
 that said...
 
  During today's project status meeting [1], the state of KDS was
  discussed [2]. To quote ttx directly: we've been bitten in the past
  with late security-sensitive stuff and I'm a bit worried to ship
  late code with such security implications as a KDS.

 Is ttx going to review any security implications ? The code does not
 mature just because it sits there untouched for more or less time.

   I share the same concern, especially considering the API only
  recently went up for formal review [3],

 While the API may be important it has little to no bearing over the
 security properties of the underlying code and mechanism.
 The document to review to understand and/or criticize the security
 implications is this: https://wiki.openstack.org/wiki/MessageSecurity
 and it has been available for quite a few months.

   and the WIP implementation is still failing smokestack [4].

 This is a red herring, unfortunately Smokestack doesn't say why it is
 failing but we suppose it is due to something python 2.6 doesn't like
 (only the centos machine fails). I have been developing on 2.7 and was
 planning to do a final test on a machine with 2.6 once I had reviews
 agreeing no more fundamental changes were needed.


My mistake - glancing through the patchset history I thought it was
SmokeStack that was regularly failing, but it appears to be mostly Jenkins
failures with SmokeStack failing most recently.

Either way, I try to avoid reviewing code that is still failing automated
tests, so I have yet to review the KDS implementation at all.


 
  I'm happy to see the reviews in question continue to receive their
  fair share of attention over the next few weeks, but can (and should?)
  merging be delayed until icehouse while more security-focused eyes
  have time to review the code?

 I would agree to this only if you can name individuals that are going to
 do a security review, otherwise I see no real reason to delay, as it
 will cost time to keep patches up to date, and I'd rather not do that if
 no one is lining up to do a security review.


keystone-core, at least... that's part of our responsibility. The commit
message also lacks a SecurityImpact tag.



 FWIW I did circulate the design for the security mechanism internally in
 Red Hat to some people with some expertise in crypto matters.


I'd love to see their feedback provided publicly.



 Simo.

 --
 Simo Sorce * Red Hat, Inc * New York




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Question about get_meters query using a JOIN

2013-08-14 Thread Thomas Maddox


On 8/14/13 3:26 AM, Julien Danjou jul...@danjou.info wrote:

On Tue, Aug 13 2013, Thomas Maddox wrote:

 I was curious about why we went for a JOIN here rather than just using
the
 meter table initially?
 
https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/impl_sqlalchemy.py#L336-L391.
 Doug had mentioned that some performance testing had gone on with some
of
 these queries, so before writing up requests to change this to the meter
 table only, I wanted to see if this was a result of that performance
 testing? Like the JOIN was less expensive than a DISTINCT.

Because the SQL driver has been historically a straight conversion from
MongoDB, who was working that way.
The MongoDB driver had these 2 collections, meter and resources, and
used both of them to construct these data.

The original idea was to register resources in the eponym table, and use
it to filter. Unfortunately, that didn't work out, especially with APIv2
that allows to filter on many things. Among other things, there was
timestamp fields in the resource table that were used to filter based on
timestamp, but this was a naive implementation that failed pretty soon,
and we removed and replace with a more solid one based on the meter
table.

What you see here are the rest of that time, and in the end, this
resource table should be removed AFAICT, in MongoDB also.

Ahhh yep, when I look at the two side-by-side I can see what you're
saying. Thanks for the explanation!

So, then should I write up a BP for improvements of this sort, like the
API improvement one? It seems like we may get better results if the SQL
implementation is more SQL-like than Mongo-like since they are
fundamentally different things. I noticed some notes in there also about
the JOIN being especially expensive.


-- 
Julien Danjou
;; Free Software hacker ; freelance consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-14 Thread Thierry Carrez
Simo Sorce wrote:
 During today's project status meeting [1], the state of KDS was
 discussed [2]. To quote ttx directly: we've been bitten in the past
 with late security-sensitive stuff and I'm a bit worried to ship
 late code with such security implications as a KDS.
 
 Is ttx going to review any security implications ? The code does not
  mature just because it sits there untouched for more or less time.

This is me wearing my vulnerability management hat. The trick is that
we (the VMT) have to support security issues for code that will be
shipped in stable/havana. The most embarrassing security issues we had
in the past were with code that didn't see a fair amount of time in
master before we had to start supporting it.

So for us there is a big difference between landing the KDS now and have
it security-supported after one month of usage, and landing it in a few
weeks and have it security-supported after 7 months of usage. After 7
months I'm pretty sure most of the embarrassing issues will be ironed out.

I don't really want us to repeat the mistakes of the past where we
shipped really new code in keystone that ended up not really usable, but
which we still had to support security-wise due to our policy.

By security implications, I mean that this is a domain (like, say,
token expiration) where even basic bugs can easily create a
vulnerability. We just don't have the bandwidth to ship an embargoed
security advisory for every bug that will be found in the KDS one month
from now.

 I would agree to this only if you can name individuals that are going to
 do a security review, otherwise I see no real reason to delay, as it
 will cost time to keep patches up to date, and I'd rather not do that if
 no one is lining up to do a security review.

 FWIW I did circulate the design for the security mechanism internally in
 Red Hat to some people with some expertise in crypto matters.

Are you saying it won't have significantly fewer issues in 7 months just
by the virtue of being landed in master and put into use in various
projects ? Or that it was so thoroughly audited that my fears are
unwarranted ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-14 Thread Thomas Maddox
On 8/14/13 10:29 AM, Julien Danjou jul...@danjou.info wrote:

On Wed, Aug 14 2013, Thomas Maddox wrote:

 Am I misunderstanding the code? This looks like it's returning the first
 sample's details:
 
 https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/impl_mongodb.py#L578-L584. When I change the metadata attribute's
aggregation
 function from $first to $last, I get the latest state of the resource,
 which corrects the bug I'm working on. Otherwise, a newly built instance
 sits in a 'scheduling' state, according to the API call
 (https://bugs.launchpad.net/ceilometer/+bug/1208547).

Haha! I'm pretty sure it used to be last. The aggregate() function is
recent, so that may be a regression that we didn't catch. Anyway the
intention is last, not first.
I blame missing tests!
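
For illustration only, here's a rough pymongo-style aggregation over the meter
collection (field names assumed, not the actual impl_mongodb.py code) showing
the difference: with the samples sorted by timestamp, $last keeps the newest
sample's metadata per resource, while $first keeps the oldest and reproduces
the stuck-in-'scheduling' symptom.

from pymongo import MongoClient

db = MongoClient()['ceilometer']

pipeline = [
    {'$sort': {'timestamp': 1}},                      # oldest -> newest
    {'$group': {
        '_id': '$resource_id',
        # '$first' here would freeze the metadata at the oldest sample;
        # '$last' picks up the most recent state of the resource.
        'metadata': {'$last': '$resource_metadata'},
        'last_timestamp': {'$max': '$timestamp'},
    }},
]
result = db.meter.aggregate(pipeline)  # one group per resource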

Lol, understood. I'll roll that in with my bug fix since it's related. =]


 That's definitely a good point; I didn't know that. I suppose if we
wanted
 to make this API change, it'd have to be 'who owns it currently' as part
 of the contract for what details are returned. The event body or samples
 can give the historical details when desired. From a billing
perspective,
 it'd be good to know ownership over time in order to bill appropriately
 for binary ownership billing rather than usage. H...

Yes... I admit we stuck to very simple cases and assumptions in the API.
There are a lot of corner cases we aren't handling correctly, but it never
mattered so far. We're trying to get better at it. As I already stated
at some point, we need more tests for these corner cases and more fixes. :)

Cheers!

-Thomas


-- 
Julien Danjou
# Free Software hacker # freelance consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Blueprint proposal - Import / Export images with user properties

2013-08-14 Thread Emilien Macchi
Hi,


I would like to discuss two blueprint proposals here (maybe I could
merge them into one if you prefer):

https://blueprints.launchpad.net/glance/+spec/api-v2-export-properties
https://blueprints.launchpad.net/glance/+spec/api-v2-import-properties

*Use case* :
I would like to set specific properties on an image which could
represent a signature and be useful for licensing requirements, for example.
To do that, I should be able to export an image with its user properties
included.

Then, a user could reuse the exported image in the public cloud, and
Glance will be aware of its properties.
Obviously, we need the import / export feature.

The idea here is to be able to identify an image after cloning or
whatever with a property field. Of course, the user could break it by
editing the image manually, but I assume he / she won't.


Let me know if you have any thoughts and if the blueprint is valuable.

Regards,

-- 
Emilien Macchi

# OpenStack Engineer
// eNovance Inc.  http://enovance.com
// ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
// 10 rue de la Victoire 75009 Paris



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-14 Thread Thierry Carrez
Dolph Mathews wrote:
 With regard
 to: https://blueprints.launchpad.net/keystone/+spec/key-distribution-server
 [...]

Dolph: you don't mention Barbican at all, does that mean that the issue
is settled and the KDS should live in keystone ?

A side-benefit of landing early in Icehouse rather than late in Havana,
IMHO, was that we could put everyone in the same room (and around the
same beers) and get a single path forward.

I'm a bit worried (that's with my release management hat on) that if
everyone discovers that Barbican is the way to go for key distribution
at the Hong-Kong summit, we would now have to deprecate the in-Keystone
KDS over several releases just because it landed a few weeks too early.

That said I haven't followed closely the latest discussions on this, so
maybe this is not as much duplication of effort as I think ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Question about get_meters query using a JOIN

2013-08-14 Thread Julien Danjou
On Wed, Aug 14 2013, Thomas Maddox wrote:

 Ahhh yep, when I look at the two side-by-side I can see what you're
 saying. Thanks for the explanation!

 So, then should I write up a BP for improvements of this sort, like the
 API improvement one? It seems like we may get better results if the SQL
 implementation is more SQL-like than Mongo-like since they are
 fundamentally different things. I noticed some notes in there also about
 the JOIN being especially expensive.

Honestly, I wouldn't require a blueprint for that, but if you prefer
writing one first because you think it's going to take a while or be
split into a lot of patches, go ahead. At this stage of development
we're not going to wait on it anyway. :)

-- 
Julien Danjou
# Free Software hacker # freelance consultant
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Create a tar.gz stream from a block device

2013-08-14 Thread Mate Lakat
Hi Stackers,

I would like to create a readable tar.gz stream, inside that stream I
want to see only one file, and that file's content should be read from a
device.

The virtual disk is attached to domU as a block device (/dev/xvdb)
I would like to produce a tar.gz, that contains the contents of
/dev/xvdb, file name inside the tar.gz does not matter.

Here is my current solution:
https://review.openstack.org/41651

The main problem is that tarfile does not support adding content
incrementally; it only has an addfile method:
http://docs.python.org/2/library/tarfile.html

tfile = tarfile.open(fileobj=output, mode='w|gz')
...
tfile.addfile(tinfo, fileobj=input_file)

And for the glance client, I need to provide a readable file-like:

image_service.update(self.context, image_id, metadata,
 image_stream, purge_props=False)

The change solves the issue by creating a separate process to build
the tar.gz stream, and gives that process' output stream to glance.
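
Roughly, the idea looks like this -- just a simplified sketch of the
fork-and-pipe approach, not the actual change (member name, cleanup and
error handling are hand-waved):

import os
import tarfile

def tarred_gz_stream(device_path, member_name='disk'):
    # Build the tar.gz in a child process and hand the parent a readable
    # pipe end, so the consumer (e.g. the glance upload) just sees a file.
    read_fd, write_fd = os.pipe()
    if os.fork() == 0:
        os.close(read_fd)
        output = os.fdopen(write_fd, 'wb')
        device = open(device_path, 'rb')
        tinfo = tarfile.TarInfo(name=member_name)
        # Block devices report no useful st_size, so size them by seeking.
        tinfo.size = os.lseek(device.fileno(), 0, os.SEEK_END)
        os.lseek(device.fileno(), 0, os.SEEK_SET)
        archive = tarfile.open(fileobj=output, mode='w|gz')
        archive.addfile(tinfo, fileobj=device)
        archive.close()
        output.close()
        os._exit(0)
    os.close(write_fd)
    return os.fdopen(read_fd, 'rb')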

Any other ideas?

Many thanks,
-- 
Mate Lakat

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Migrating to testr parallel in tempest

2013-08-14 Thread Ben Nemec

On 2013-08-13 16:39, Clark Boylan wrote:
On Tue, Aug 13, 2013 at 1:25 PM, Matthew Treinish 
mtrein...@kortar.org wrote:


Hi everyone,

So for the past month or so I've been working on getting tempest to 
work stably
with testr in parallel. As part of this you may have noticed the 
testr-full
jobs that get run on the zuul check queue. I was using that job to 
debug some
of the more obvious race conditions and stability issues with running 
tempest
in parallel. After a bunch of fixes to tempest and finding some real 
bugs in

some of the projects things seem to have smoothed out.

So I pushed the testr-full run to the gate queue earlier today. I'll 
be keeping
track of the success rate of this job vs the serial job and use this 
as the
determining factor before we push this live to be the default for all 
tempest
runs. So assuming that the success rate matches up well enough with 
serial job
on the gate queue then I will push out the change that will migrate 
all the
voting jobs to run in parallel hopefully either Friday afternoon or 
early next
week. Also, if anyone has any input on what threshold they feel is 
good enough
for this I'd welcome any input on that. For example, do we want to 
ensure
a = 1:1 match for job success? Or would something like 90% as stable 
as the
serial job be good enough considering the speed advantage. (The 
parallel runs
take about half as much time as a full serial run, the parallel job 
normally
finishes in ~25-30min) Since this affects almost every project I don't 
want to

define this threshold without input from everyone.

After there is some more data for the gate queue's parallel job I'll 
have some
pretty graphite graphs that I can share comparing the success trends 
between

the parallel and serial jobs.

So at this point we're in the home stretch and I'm asking for 
everyone's help
in getting this merged. So, if everyone who is reviewing and pushing 
commits
could watch the results from these non-voting jobs and if things fail 
on the
parallel job but not the serial job please investigate the failure and 
open a
bug if necessary. If it turns out to be a bug in tempest please link 
it against

this blueprint:

https://blueprints.launchpad.net/tempest/+spec/speed-up-tempest

so that I'll give it the attention it deserves. I'd hate to get this 
close to
getting this merged and have a bit of racy code get merged at the last 
second

and block us for another week or two.

I feel that we need to get this in before the H3 rush starts up as it 
will help

everyone get through the extra review load faster.


Getting this in before the H3 rush would be very helpful. When we made
the switch with Nova's unittests we fixed as many of the test bugs
that we could find, merged the change to switch the test runner, then
treated all failures as very high priority bugs that received
immediate attention. Getting this in before H3 will give everyone a
little more time to debug any potential new issues exposed by Jenkins
or people running the tests locally.

I think we should be bold here and merge this as soon as we have good
numbers that indicate the trend is for these tests to pass. Graphite
can give us the pass to fail ratios over time, as long as these trends
are similar for both the old nosetest jobs and the new testr job I say
we go for it. (Disclaimer: most of the projects I work on are not
affected by the tempest jobs; however, I am often called upon to help
sort out issues in the gate).


I'm inclined to agree.  It's not as if we don't have transient failures 
now, and if we're looking at a 50% speedup in recheck/verify times then 
as long as the new version isn't significantly less stable it should be 
a net improvement.


Of course, without hard numbers we're kind of discussing in a vacuum 
here.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-14 Thread Yee, Guang
It's just an extension; it shouldn't be treated differently as long as it
follows the rules and regulations.

 

1.  Bp

2.  Spec (identity-api)

3.  Server-side changes (keystone)

4.  Client-side changes if any (python-keystoneclient)

 

If the OpenStack security community is participating in the code reviews, that
would be even more awesome.

 

 

Guang

 

 

From: Adam Young [mailto:ayo...@redhat.com] 
Sent: Wednesday, August 14, 2013 6:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp
until icehouse?

 

On 08/13/2013 06:20 PM, Dolph Mathews wrote:

With regard to:
https://blueprints.launchpad.net/keystone/+spec/key-distribution-server

 

During today's project status meeting [1], the state of KDS was discussed
[2]. To quote ttx directly: we've been bitten in the past with late
security-sensitive stuff and I'm a bit worried to ship late code with such
security implications as a KDS. I share the same concern, especially
considering the API only recently went up for formal review [3], and the WIP
implementation is still failing smokestack [4].


Since KDS is a security tightening in a case where there is no security at
all, adding it can only improve security.

It is a relatively simple extension from the keystone side. The
corresponding change is in the client, and that has already merged.




 

I'm happy to see the reviews in question continue to receive their fair
share of attention over the next few weeks, but can (and should?) merging be
delayed until icehouse while more security-focused eyes have time to review
the code?

 

Ceilometer and nova would both be affected by a delay, as both have use
cases for consuming trusted messaging [5] (a dependency of the bp in
question).

 

Thanks for your feedback!

 

[1]:
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2013-08-13.log

[2]: http://paste.openstack.org/raw/44075/

[3]: https://review.openstack.org/#/c/40692/

[4]: https://review.openstack.org/#/c/37118/

[5]: https://blueprints.launchpad.net/oslo/+spec/trusted-messaging

 






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 



smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Swift, netifaces, PyPy, and cffi

2013-08-14 Thread Ben Nemec

On 2013-08-13 16:58, Alex Gaynor wrote:

One of the issues that came up in this review however, is that cffi is
not packaged in the most recent Ubuntu LTS (and likely other
distributions), although it is available in raring, and in a PPA
(http://packages.ubuntu.com/raring/python-cffi [2]
and https://launchpad.net/~pypy/+archive/ppa?field.series_filter=precise
[3] respectively).

As a result of this, we wanted to get some feedback on which direction
is best to go:

a) cffi-only approach, this is obviously the simplest approach, and
works everywhere (assuming you can install a PPA, use pip, or similar
for cffi)
b) wait until the next LTS to move to this approach (requires waiting
until 2014 for PyPy support)
c) Support using either netifaces or cffi: most complex, and most
code, plus one or the other dependencies aren't well supported by
most tools as far as I know.


It doesn't appear to me that this is available for RHEL yet, although it 
looks like they're working on it: 
https://admin.fedoraproject.org/updates/python-cffi-0.6-4.el6


That's also going to need to happen before we can do this, I think.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-14 Thread Mac Innes, Kiall
So, Are we saying that UIs built on OpenStack APIs shouldn't be able to 
show traditional pagination controls? Or am I missing how this should 
work with marker/limit?

e.g. for 11 pages of content, something like: 1 2 3 .. 10 11

Thanks,
Kiall

On 13/08/13 22:45, Jay Pipes wrote:
 On 08/13/2013 05:04 PM, Gabriel Hurley wrote:
 I have been one of the earliest, loudest, and most consistent PITA's about 
 pagination, so I probably oughta speak up. I would like to state three facts:

 1. Marker + limit (e.g. forward-only) pagination is horrific for building a 
 user interface.
 2. Pagination doesn't scale.
 3. OpenStack's APIs have historically had useless filtering capabilities.

 In a world where pagination is a must-have feature we need to have page 
 number + limit pagination in order to build a reasonable UI. Ironically 
 though, I'm in favor of ditching pagination altogether. It's the 
 lowest-common denominator, used because we as a community haven't buckled 
 down and built meaningful ways for our users to get to the data they really 
 want.

 Filtering is great, but it's only 1/3 of the solution. Let me break it down 
 with problems and high level solutions:

 Problem 1: I know what I want and I need to find it.
 Solution: filtering/search systems.

 This is a good place to start. Glance has excellent filtering/search
 capabilities -- built in to the API from early on in the Essex
 timeframe, and only expanded over the last few releases.

 Pagination solutions should build on a solid filtering/search
 functionality in the API, where there is a consistent sort key and
 direction (either hard-coded or user-determined, doesn't matter).

 Limit/offset pagination solutions (forward and backwards paging, random
 skip-to-a-page) are inefficient from a SQL query perspective and should
 be a last resort, IMO, compared to limit/marker. With some smart
 session-storage of a page's markers, backwards paging with limit/marker
 APIs is certainly possible -- just store the previous page's marker.
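
To make the limit/marker side of that concrete, here's a generic SQLAlchemy
sketch (the Image model is just a stand-in, not any project's real schema):

from sqlalchemy import Column, Integer, String, asc
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Image(Base):
    __tablename__ = 'images'
    id = Column(Integer, primary_key=True)   # consistent, unique sort key
    name = Column(String(255))

def list_images(session, limit, marker=None):
    query = session.query(Image).order_by(asc(Image.id))
    if marker is not None:
        # Start strictly after the last row the previous page ended on, so
        # the database seeks via the index instead of scanning an OFFSET.
        query = query.filter(Image.id > marker)
    return query.limit(limit).all()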

 Problem 2: I don't know what I want, and it may or may not exist.
 Solution: tailored discovery mechanisms.

 This should not be a use case that we spend much time on. Frankly, this
 use case can be summarized as the window shopper scenario. Providing a
 quality search/filtering mechanism, including the *API* itself providing
 REST-ful discovery of the filters and search criteria the API supports,
 is way more important...

 Problem 3: I need to know something about *all* the data in my system.
 Solution: reporting systems.

 Sure, no disagreement there.

 We've got the better part of none of that.

 I disagree. Some of the APIs have support for a good bit of
 search/filtering. We just need to bring all the projects up to search
 speed, Captain.

 Best,
 -jay

 p.s. I very often go to the second and third pages of Google searches.
 :) But I never skip to the 127th page of results.

But I'd like to solve these issues. I have lots of thoughts on all of
 those, and I think the UX and design communities can offer a lot in
 terms of the usability of the solutions we come up with. Even more, I
 think this would be an awesome working group session at the next summit
 to talk about nothing other than how can we get rid of pagination?

 As a parting thought, what percentage of the time do you click to the second 
 page of results in Google?

 All the best,

   - Gabriel


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Blueprint proposal - Import / Export images with user properties

2013-08-14 Thread Mark Washenberger
Let's dig into this a bit more so that I can understand it.

Given that we have properties that we want to export with an image, where
would those properties be stored? Somewhere in the image data itself? I
believe some image formats support metadata, but I can't imagine all of
them would. Is there a specific format you're thinking of using?


On Wed, Aug 14, 2013 at 8:36 AM, Emilien Macchi emilien.mac...@enovance.com
 wrote:

  Hi,


 I would like to discuss here about two blueprint proposal (maybe could I
 merge them into one if you prefer) :

 https://blueprints.launchpad.net/glance/+spec/api-v2-export-properties
 https://blueprints.launchpad.net/glance/+spec/api-v2-import-properties

 *Use case* :
 I would like to set specific properties to an image which could represent
 a signature, and useful for licensing requirements for example.
 To do that, I should be able to export an image with user properties
 included.

 Then, a user could reuse the exported image in the public cloud, and
 Glance will be aware about its properties.
 Obviously, we need the import / export feature.

 The idea here is to be able to identify an image after cloning or whatever
 with a property field. Of course, the user could break it in editing the
 image manually, but I consider he / she won't.


 Let me know if you have any thoughts and if the blueprint is valuable.

  Regards,

 --
 Emilien Macchi
 
 # OpenStack Engineer
 // eNovance Inc.  http://enovance.com
 // ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
 // 10 rue de la Victoire 75009 Paris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Skipping tests in tempest via config file

2013-08-14 Thread Alexius Ludeman
I am running tempest on OEL 6.3 (aka RHEL 6.3) and I had issues with python
2.6 and skipException[3], so now I'm using python 2.7 just for tempest.  I
also had difficulty with yum and python module dependencies and made the
transition to venv.  This has reduced the yum dependency nightmare greatly.

Now that testr is the default for tempest, testr does not appear to support
--exclusion[1] or --stop[2].

I have a work around for --exclusion, by:

testr list-tests | egrep -v regex-exclude-list > unit-tests.txt
testr --load-list unit-tests.txt

I do not have a work around for --stop.

[1]https://bugs.launchpad.net/testrepository/+bug/1208610
[2]https://bugs.launchpad.net/testrepository/+bug/1211926
[3]https://bugs.launchpad.net/tempest/+bug/1202815



On Tue, Aug 13, 2013 at 7:25 PM, Matt Riedemann mrie...@us.ibm.com wrote:

 I have the same issue.  I run a subset of the tempest tests via nose on a
 RHEL 6.4 VM directly against the site-packages (not using virtualenv).  I'm
 running on x86_64, ppc64 and s390x and have different issues on all of them
 (a mix of DB2 on x86_64 and MySQL on the others, and different nova/cinder
 drivers on each).  What I had to do was just make a nose.cfg for each of
 them and throw that into ~/ for each run of the suite.

 The switch from nose to testr hasn't impacted me because I'm not using a
 venv.  However, there was a change this week that broke me on python 2.6
 and I opened this bug:

https://bugs.launchpad.net/tempest/+bug/1212071



 Thanks,

 *MATT RIEDEMANN*
 Advisory Software Engineer
 Cloud Solutions and OpenStack Development
 --
Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com
 [image: IBM]

 3605 Hwy 52 N
 Rochester, MN 55901-1407
 United States





 From:Ian Wienand iwien...@redhat.com
 To:openstack-dev@lists.openstack.org,
 Date:08/13/2013 09:13 PM
 Subject:[openstack-dev] Skipping tests in tempest via config file
 --



 Hi,

 I proposed a change to tempest that skips tests based on a config file
 directive [1].  Reviews were inconclusive and it was requested the
 idea be discussed more widely.

 Of course issues should go upstream first.  However, sometimes test
 failures are triaged to a local/platform problem and it is preferable
 to keep everything else running by skipping the problematic tests
 while its being worked on.

 My perspective is one of running tempest in a mixed CI environment
 with RHEL, Fedora, etc.  Python 2.6 on RHEL doesn't support testr (it
 doesn't do the setUpClass calls required by temptest) and nose
 upstream has some quirks that make it hard to work with the tempest
 test layout [2].

Having a common place in the tempest config to set these skips is
more convenient than having to deal with the multiple testing
 environments.

 Another proposal is to have a separate JSON file of skipped tests.  I
 don't feel strongly but it does seem like another config file.

 -i

 [1] https://review.openstack.org/#/c/39417/
 [2] https://github.com/nose-devs/nose/pull/717

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] SQL meter table schema improvements

2013-08-14 Thread Julien Danjou
On Wed, Aug 14 2013, Jay Pipes wrote:

 I submitted a bug report to improve the schema of the meter table to reduce
 duplication:

 https://bugs.launchpad.net/ceilometer/+bug/1211985

 This would only affect the SQL storage driver.

 Interested in hearing thoughts on the above recommendations, which are:

 1) Replace counter_type with an integer column counter_type_id that
 references a lookup table (counter_type)
 2) Replace counter_unit with an integer column counter_unit_id that
 references a lookup table (counter_unit)
 3) Replace counter_name with an integer column counter_id that references a
 lookup table (counter)

 Just those three changes would reduce the overall row size for records in
 the meter table by about 10%, by my calculations, which is a good
 improvement for a relatively painless set of modifications.
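
As a concrete illustration of what (1)-(3) would look like, a minimal
SQLAlchemy sketch (table and column names assumed, only one of the three
lookup tables shown -- this is not a proposed migration):

from sqlalchemy import Column, Float, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class CounterType(Base):
    __tablename__ = 'counter_type'
    id = Column(Integer, primary_key=True)
    type = Column(String(255), unique=True)   # e.g. 'gauge', 'cumulative'

class Meter(Base):
    __tablename__ = 'meter'
    id = Column(Integer, primary_key=True)
    # The repeated string column becomes a small integer foreign key;
    # counter_unit_id and counter_id would follow the same pattern.
    counter_type_id = Column(Integer, ForeignKey('counter_type.id'))
    counter_volume = Column(Float)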

That means we'll have to do 3 joins most of the time. Is that less
costly than fetching these right away from the record?

-- 
Julien Danjou
-- Free Software hacker - freelance consultant
-- http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-14 Thread Jay Pipes

On 08/14/2013 12:25 PM, Mac Innes, Kiall wrote:

So, Are we saying that UIs built on OpenStack APIs shouldn't be able to
show traditional pagination controls? Or am I missing how this should
work with marker/limit?


No, not quite what I'm saying. The operation to get the total number of 
pages -- or more explicitly, the operation to get the *exact* number of 
pages in a list result -- is expensive, and in order to be reasonably 
efficient, some level of caching is almost always needed.


However, being able to page forwards and backwards is absolutely 
possible with limit/marker solutions. It simply requires the paging 
client (in this case, Horizon) to store the list of previously seen 
page links returned in listing results (there is a next and prev link in 
the returned list images results, for example).
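
A minimal sketch of that client-side bookkeeping -- the list call, its
parameters and the 'id' key are all assumed, purely to show the idea:

class PageWalker(object):
    def __init__(self, list_items, limit=20):
        self.list_items = list_items   # e.g. a wrapper around the API call
        self.limit = limit
        self.markers = [None]          # marker that will start each page

    def next_page(self):
        items = self.list_items(limit=self.limit, marker=self.markers[-1])
        if items:
            # The next page starts after the last item of this one.
            self.markers.append(items[-1]['id'])
        return items

    def prev_page(self):
        # Drop the marker queued for the upcoming page and the one that
        # produced the current page, then re-fetch; that lands on the
        # previous page without any OFFSET support from the API.
        if len(self.markers) > 2:
            self.markers = self.markers[:-2]
        else:
            self.markers = [None]
        return self.next_page()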



e.g. for 11 pages of content, something like: 1 2 3 .. 10 11


Yeah, that's not an efficient operation unless you have some sort of 
caching in place. You can use things like MySQL's SQL_CALC_FOUND_ROWS, 
but that is not efficient since instead of stopping the query after 
LIMIT rows, you end up executing the entire query to determine the 
number of rows that *would* have been returned if no LIMIT was applied. 
In order to make such a thing efficient, you'd want to cache the value 
of SQL_CALC_FOUND_ROWS in the session and use that to calculate the 
number of pages.


It's something that can be done, but isn't, IMHO, worth it to get the 
traditional UI you describe. Instead, a good filter/search UI would be 
better, with just next/prev links.


Best,
-jay


Thanks,
Kiall

On 13/08/13 22:45, Jay Pipes wrote:

On 08/13/2013 05:04 PM, Gabriel Hurley wrote:

I have been one of the earliest, loudest, and most consistent PITA's about 
pagination, so I probably oughta speak up. I would like to state three facts:

1. Marker + limit (e.g. forward-only) pagination is horrific for building a 
user interface.
2. Pagination doesn't scale.
3. OpenStack's APIs have historically had useless filtering capabilities.

In a world where pagination is a must-have feature we need to have page 
number + limit pagination in order to build a reasonable UI. Ironically though, I'm in 
favor of ditching pagination altogether. It's the lowest-common denominator, used because 
we as a community haven't buckled down and built meaningful ways for our users to get to 
the data they really want.

Filtering is great, but it's only 1/3 of the solution. Let me break it down with problems 
and high level solutions:

Problem 1: I know what I want and I need to find it.
Solution: filtering/search systems.


This is a good place to start. Glance has excellent filtering/search
capabilities -- built in to the API from early on in the Essex
timeframe, and only expanded over the last few releases.

Pagination solutions should build on a solid filtering/search
functionality in the API, where there is a consistent sort key and
direction (either hard-coded or user-determined, doesn't matter).

Limit/offset pagination solutions (forward and backwards paging, random
skip-to-a-page) are inefficient from a SQL query perspective and should
be a last resort, IMO, compared to limit/marker. With some smart
session-storage of a page's markers, backwards paging with limit/marker
APIs is certainly possible -- just store the previous page's marker.


Problem 2: I don't know what I want, and it may or may not exist.
Solution: tailored discovery mechanisms.


This should not be a use case that we spend much time on. Frankly, this
use case can be summarized as the window shopper scenario. Providing a
quality search/filtering mechanism, including the *API* itself providing
REST-ful discovery of the filters and search criteria the API supports,
is way more important...


Problem 3: I need to know something about *all* the data in my system.
Solution: reporting systems.


Sure, no disagreement there.


We've got the better part of none of that.


I disagree. Some of the APIs have support for a good bit of
search/filtering. We just need to bring all the projects up to search
speed, Captain.

Best,
-jay

p.s. I very often go to the second and third pages of Google searches.
:) But I never skip to the 127th page of results.

But I'd like to solve these issues. I have lots of thoughts on all of
those, and I think the UX and design communities can offer a lot in
terms of the usability of the solutions we come up with. Even more, I
think this would be an awesome working group session at the next summit
to talk about nothing other than how can we get rid of pagination?


As a parting thought, what percentage of the time do you click to the second 
page of results in Google?

All the best,

   - Gabriel


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list

Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-14 Thread Thierry Carrez
Adam Young wrote:
 On 08/13/2013 06:20 PM, Dolph Mathews wrote:
 During today's project status meeting [1], the state of KDS was
 discussed [2]. To quote ttx directly: we've been bitten in the past
 with late security-sensitive stuff and I'm a bit worried to ship
 late code with such security implications as a KDS. I share the same
 concern, especially considering the API only recently went up for
 formal review [3], and the WIP implementation is still failing
 smokestack [4].
 
 Since KDS is a security tightening in acase where there is no security
 at all, adding it in can only improve security.

It's not really a question of more security or less security... It's
about putting young sensitive code into a release, with the risk of
having to issue a lot of security advisories for early bugs.

I'm all for that code to land in the icehouse master branch as soon as
it opens and that it gets put into good use by projects throughout the
icehouse development cycle. I just think the benefits of waiting
outweigh the benefits of landing it now.

I explained why I prefer it to land in a few weeks rather than now...
Can someone explain why they prefer the reverse ? Why does it have to be
in havana ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Swift, netifaces, PyPy, and cffi

2013-08-14 Thread Alex Gaynor
I just chatted with the Python product owner at Red Hat, he says this is
going to make its way to the next step later today (this past weekend was
a Fedora conference), so this should be happening soon.

Joe: Yup, I'm familiar with that piece (I had lunch with Vish the other
week and he's the one who suggested Swift as the best place to get started
with OpenStack + PyPy). For those who don't know I'm one of the core
developers of PyPy :)

Alex



On Wed, Aug 14, 2013 at 9:24 AM, Ben Nemec openst...@nemebean.com wrote:

 On 2013-08-13 16:58, Alex Gaynor wrote:

 One of the issues that came up in this review however, is that cffi is
 not packaged in the most recent Ubuntu LTS (and likely other
 distributions), although it is available in raring, and in a PPA
  (http://packages.ubuntu.com/raring/python-cffi [2]
  and https://launchpad.net/~pypy/+archive/ppa?field.series_filter=precise
 [3] respectively).


 As a result of this, we wanted to get some feedback on which direction
 is best to go:

 a) cffi-only approach, this is obviously the simplest approach, and
 works everywhere (assuming you can install a PPA, use pip, or similar
 for cffi)
 b) wait until the next LTS to move to this approach (requires waiting
 until 2014 for PyPy support)
 c) Support using either netifaces or cffi: most complex, and most
 code, plus one or the other dependencies aren't well supported by
 most tools as far as I know.


 It doesn't appear to me that this is available for RHEL yet, although it
  looks like they're working on it: https://admin.fedoraproject.org/updates/python-cffi-0.6-4.el6

 That's also going to need to happen before we can do this, I think.

 -Ben


  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
I disapprove of what you say, but I will defend to the death your right to
say it. -- Evelyn Beatrice Hall (summarizing Voltaire)
The people's good is the highest law. -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ANVIL] Missing openvswitch dependency for basic-neutron.yaml persona

2013-08-14 Thread Joshua Harlow
Hi Sylvain,

Thanks for looking it over.

I think another anvil dev has started looking at the bug but here is an 
explanation of the spec process anyway :-)

Preparing: which does the creation of the SRPMs
0. https://github.com/stackforge/anvil/blob/master/anvil/actions/prepare.py#L71
  a. This then creates a yum class (which will do the work, this was abstracted 
so that it's relatively easy to, say, have deb packaging later if ever wanted…)
1. https://github.com/stackforge/anvil/blob/master/anvil/packaging/yum.py#L150 
then gets called will either package the openstack dependencies or the 
openstack component itself.
  a. Lets go down the openstack component itself path for now (the other path 
is simpler).
2. https://github.com/stackforge/anvil/blob/master/anvil/packaging/yum.py#L572 
gets called
  a. This will create the SRPM (and in that SRPM will be the spec file itself), 
the rest of that function creates the parameters for the spec file to be 
expanded with (see _write_spec_file also for how requirements, epoch get 
determined).

Building: aka, converting the SRPMs - RPMs
0. https://github.com/stackforge/anvil/blob/master/anvil/actions/build.py#L44
1. https://github.com/stackforge/anvil/blob/master/anvil/packaging/yum.py#L172
  a. This is actually a simpler process since yum has a nice way to easily 
translate SRPMs-RPMs so all we do is use a make file with a parallel number of 
jobs to do that.

So your hook could be at #2 if you wanted to not include the openvswitch logic 
in the spec file: you can hook in there to determine whether, say, openvswitch is 
in this variable (which each instance/component that gets built should have 
populated, 
https://github.com/stackforge/anvil/blob/master/anvil/components/base.py#L30) 
and then easily switch the spec file's inclusion of the openvswitch packaging 
logic on or off.
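
Something along these lines is what that switch could look like when the spec
parameters get built -- a very rough sketch, with parameter and option names
assumed rather than anvil's actual ones:

def build_spec_params(instance):
    # Rough illustration only: derive a template flag from the persona's
    # configured core plugin so the Cheetah spec template can decide
    # whether to emit the openstack-neutron-openvswitch sub-package.
    params = {
        'name': instance.name,
        'version': instance.get_option('version'),
    }
    core_plugin = instance.get_option('core_plugin') or ''
    params['with_openvswitch'] = 'openvswitch' in core_plugin
    return params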

I hope that makes sense; try printing out the subsystems variable (which comes 
in from 
https://github.com/stackforge/anvil/blob/master/conf/personas/in-a-box/basic.yaml#L37,
for example).

-Josh

From: Sylvain Bauza sylvain.ba...@bull.net
Date: Wednesday, August 14, 2013 2:23 AM
To: Joshua Harlow harlo...@yahoo-inc.com
Cc: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [ANVIL] Missing openvswitch dependency for 
basic-neutron.yaml persona

Thanks for the links, pretty useful. I do understand the process, but I have to 
admit I don't see which Cheetah placeholder I would use for doing a big 'if' 
statement conditioning the package openstack-neutron-openvswitch on the 
core_plugin yaml option.

As you said, this is not enough: if asked, openvswitch should also either be 
compiled or fetched from RDO.
I filed a bug : https://bugs.launchpad.net/anvil/+bug/1212165


Anyway, I'm very interested in doing option 1 that you mentioned; I still need 
to understand things, though. Could you be more precise about the way the spec 
files are populated?

Thanks,
-Sylvain


Le 13/08/2013 19:55, Joshua Harlow a écrit :
Haha, no problem. Darn time differences.

So some other useful links that I think will be helpful.

- 
https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-neutron.spec

This one is likely the biggest part of the issue, since it is the combination 
of all of neutron into 1 package (which has sub-packages).

- One of those sub-packages is 
https://github.com/stackforge/anvil/blob/master/conf/templates/packaging/specs/openstack-neutron.spec#L274

This is pulling in the openvswitch part, which I think you don't want (at least 
not always; it's wanted if neutron is going to use it, which under certain 
plugins it will).

As you've seen it likely shouldn't be installing/needing that if 
https://github.com/stackforge/anvil/blob/master/anvil/components/configurators/neutron_plugins/linuxbridge.py
 is used.

This should be coming from the following config (which will come from the yaml 
files) 'get_option' 'call':

https://github.com/stackforge/anvil/blob/master/anvil/components/configurators/neutron.py#L49

So I think what can be done is a couple of things:

  1.  Don't include sub-packages that we don't want (the spec files are 
cheetah (http://www.cheetahtemplate.org/) templates, so this can be done 
dynamically).
  2.  See if there is a way to make yum (or via yyoom) not pull in the 
dependencies for a sub-package when it won't be used (?)
  3.  Always build openvswitch (not as preferable) and include it 
(https://github.com/stackforge/anvil/blob/master/tools/build-openvswitch.sh)
 *   I think the RDO repos might have some of these components.
 *   http://openstack.redhat.com/Frequently_Asked_Questions#For_which_distributions_does_RDO_provide_packages.3F
 *   This means we can just include the RDO repo rpm (like epel) and use 
that openvswitch version there, instead of building your own.

Hope some of this helps.

Re: [openstack-dev] Skipping tests in tempest via config file

2013-08-14 Thread Matt Riedemann
I put a nose.cfg with my excludes in the user's root and it works to run 
nosetests via the virtual environment like this:

tempest/tools/./with_venv.sh nosetests
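
For anyone wanting to do the same, the nose.cfg in question is just a plain
nose config file in the home directory; a hypothetical example (the excluded
test names here are made up):

[nosetests]
# skip anything whose name matches these regexes
exclude=(test_resize_server|test_live_migration)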

I had to use the run_tests.sh script in tempest to create the virtual 
environment, but after that running tempest via nose within the venv 
wasn't a problem.  Of course, I didn't want to duplicate the test runs 
when setting up the venv via run_tests.sh, so I created it with the -p 
option to only run pep8 after it was set up (I'm not aware of a way to tell 
it to not run any tests and simply set up the environment).

Going back to the bug I opened last night for failures on py26, it's fixed 
with this patch: https://review.openstack.org/#/c/39346/ 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Alexius Ludeman l...@lexinator.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   08/14/2013 11:39 AM
Subject:Re: [openstack-dev] Skipping tests in tempest via config 
file



I am running tempest on OEL 6.3 (aka RHEL 6.3) and I had issues with 
python 2.6 and skipException[3], so now I'm using python 2.7 just for 
tempest.  I also had difficulty with yum and python module dependency and 
made the transition to venv.  This has reduced the yum dependency 
nightmare greatly.

now that testr is default for tempest.  testr does not appear to support 
--exclusion[1] or --stop[2].

I have a work around for --exclusion, by:
testr list-tests | egrep -v regex-exclude-list > unit-tests.txt
testr --load-list unit-tests.txt

I do not have a work around for --stop.

[1]https://bugs.launchpad.net/testrepository/+bug/1208610
[2]https://bugs.launchpad.net/testrepository/+bug/1211926
[3]https://bugs.launchpad.net/tempest/+bug/1202815



On Tue, Aug 13, 2013 at 7:25 PM, Matt Riedemann mrie...@us.ibm.com 
wrote:
I have the same issue.  I run a subset of the tempest tests via nose on a 
RHEL 6.4 VM directly against the site-packages (not using virtualenv). 
 I'm running on x86_64, ppc64 and s390x and have different issues on all 
of them (a mix of DB2 on x86_64 and MySQL on the others, and different 
nova/cinder drivers on each).  What I had to do was just make a nose.cfg 
for each of them and throw that into ~/ for each run of the suite. 

The switch from nose to testr hasn't impacted me because I'm not using a 
venv.  However, there was a change this week that broke me on python 2.6 
and I opened this bug: 

https://bugs.launchpad.net/tempest/+bug/1212071 



Thanks, 

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development 

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com 


3605 Hwy 52 N
Rochester, MN 55901-1407
United States





From:Ian Wienand iwien...@redhat.com 
To:openstack-dev@lists.openstack.org, 
Date:08/13/2013 09:13 PM 
Subject:[openstack-dev] Skipping tests in tempest via config file 




Hi,

I proposed a change to tempest that skips tests based on a config file
directive [1].  Reviews were inconclusive and it was requested the
idea be discussed more widely.

Of course issues should go upstream first.  However, sometimes test
failures are triaged to a local/platform problem and it is preferable
to keep everything else running by skipping the problematic tests
while its being worked on.

My perspective is one of running tempest in a mixed CI environment
with RHEL, Fedora, etc.  Python 2.6 on RHEL doesn't support testr (it
doesn't do the setUpClass calls required by tempest) and nose
upstream has some quirks that make it hard to work with the tempest
test layout [2].

Having a common place in the tempest config to set these skips is
more convenient than having to deal with the multiple testing
environments.

Another proposal is to have a separate JSON file of skipped tests.  I
don't feel strongly but it does seem like another config file.

-i

[1] https://review.openstack.org/#/c/39417/
[2] https://github.com/nose-devs/nose/pull/717

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-14 Thread Simo Sorce
On Wed, 2013-08-14 at 12:35 -0300, Thierry Carrez wrote:
 Simo Sorce wrote:
  During today's project status meeting [1], the state of KDS was
  discussed [2]. To quote ttx directly: we've been bitten in the past
  with late security-sensitive stuff and I'm a bit worried to ship
  late code with such security implications as a KDS.
  
  Is ttx going to review any security implications ? The code does not
  mature just because is sit there untouched for more or less time.
 
 This is me wearing my vulnerability management hat on. The trick is that
 we (the VMT) have to support security issues for code that will be
 shipped in stable/havana. The most embarrassing security issues we had
 in the past were with code that didn't see a fair amount of time in
 master before we had to start supporting it.
 
 So for us there is a big difference between landing the KDS now and have
 it security-supported after one month of usage, and landing it in a few
 weeks and have it security-supported after 7 months of usage. After 7
 months I'm pretty sure most of the embarrassing issues will be ironed out.
 
 I don't really want us to repeat the mistakes of the past where we
 shipped really new code in keystone that ended up not really usable, but
 which we still had to support security-wise due to our policy.
 
 By security implications, I mean that this is a domain (like, say,
 token expiration) where even basic bugs can easily create a
 vulnerability. We just don't have the bandwidth to ship an embargoed
 security advisory for every bug that will be found in the KDS one month
 from now.

I understand and appreciate that, so are you saying you want to veto KDS
introduction in Havana on this ground ?

  I would agree to this only if you can name individuals that are going to
  do a security review, otherwise I see no real reason to delay, as it
  will cost time to keep patches up to date, and I'd rather not do that if
  no one is lining up to do a security review.
 
  FWIW I did circulate the design for the security mechanism internally in
  Red Hat to some people with some expertise in crypto matters.
 
 Are you saying it won't have significantly less issues in 7 months just
 by the virtue of being landed in master and put into use in various
 projects ? Or that it was so thoroughly audited that my fears are
 unwarranted ?

Bugs can always happen, and whether 7 months of being used in development
makes a difference when it comes to security-relevant bugs I can't say.
I certainly am not going to claim my work flawless, I know better than
that :)

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-14 Thread Simo Sorce
On Wed, 2013-08-14 at 12:41 -0300, Thierry Carrez wrote:
 Dolph Mathews wrote:
  With regard
  to: https://blueprints.launchpad.net/keystone/+spec/key-distribution-server
  [...]
 
 Dolph: you don't mention Barbican at all, does that mean that the issue
 is settled and the KDS should live in keystone ?
 
 A side-benefit of landing early in Icehouse rather than late in Havana,
 IMHO, was that we could put everyone in the same room (and around the
 same beers) and get a single path forward.
 
 I'm a bit worried (that's with my release management hat on) that if
 everyone discovers that Barbican is the way to go for key distribution
 at the Hong-Kong summit, we would now have to deprecate the in-Keystone
 KDS over several releases just because it landed a few weeks too early.
 
 That said I haven't followed closely the latest discussions on this, so
 maybe this is not as much duplication of effort as I think ?

For the Nth time KDS and Barbican do not do the same job, no more than
Keystone auth paths and barbican do the same job. All three use crypto
and 'keys' ... in completely different ways.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-14 Thread Simo Sorce
On Wed, 2013-08-14 at 14:06 -0300, Thierry Carrez wrote:
 Adam Young wrote:
  On 08/13/2013 06:20 PM, Dolph Mathews wrote:
  During today's project status meeting [1], the state of KDS was
  discussed [2]. To quote ttx directly: we've been bitten in the past
  with late security-sensitive stuff and I'm a bit worried to ship
  late code with such security implications as a KDS. I share the same
  concern, especially considering the API only recently went up for
  formal review [3], and the WIP implementation is still failing
  smokestack [4].
  
  Since KDS is a security tightening in acase where there is no security
  at all, adding it in can only improve security.
 
 It's not really a question of more security or less security... It's
 about putting young sensitive code into a release, with the risk of
 having to issue a lot of security advisories for early bugs.
 
 I'm all for that code to land in the icehouse master branch as soon as
 it opens and that it gets put into good use by projects throughout the
 icehouse development cycle. I just think the benefits of waiting
 outweigh the benefits of landing it now.
 
 I explained why I prefer it to land in a few weeks rather than now...
 Can someone explain why they prefer the reverse ? Why does it have to be
 in havana ?

Because it was painful to rebase due to the migrations code; however,
since Adam landed the code that splits migrations so that extensions can
have their own separate code for that, I think the burden will be
substantially lower.

If this is your final word on the matter I'll take notice that the work
will be deferred till Icehouse and I will slightly demote its priority
in my work queue.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Migrating to testr parallel in tempest

2013-08-14 Thread Alexius Ludeman
I kind of hijacked another thread with my testr problems, but I want to
reiterate them directly on this one as they are the pain points I had from the
transition.

testr does not appear to support --exclusion[1] or --stop[2].

I have a work around for --exclusion, by:

testr list-tests | egrep -v regex-exclude-list > unit-tests.txt
testr --load-list unit-tests.txt

I do not have a work around for --stop.

[1]https://bugs.launchpad.net/testrepository/+bug/1208610
[2]https://bugs.launchpad.net/testrepository/+bug/1211926


On Tue, Aug 13, 2013 at 1:25 PM, Matthew Treinish mtrein...@kortar.orgwrote:


 Hi everyone,

 So for the past month or so I've been working on getting tempest to work
 stably
 with testr in parallel. As part of this you may have noticed the testr-full
 jobs that get run on the zuul check queue. I was using that job to debug
 some
 of the more obvious race conditions and stability issues with running
 tempest
 in parallel. After a bunch of fixes to tempest and finding some real bugs
 in
 some of the projects things seem to have smoothed out.

 So I pushed the testr-full run to the gate queue earlier today. I'll be
 keeping
 track of the success rate of this job vs the serial job and use this as the
 determining factor before we push this live to be the default for all
 tempest
 runs. So assuming that the success rate matches up well enough with serial
 job
 on the gate queue then I will push out the change that will migrate all the
 voting jobs to run in parallel hopefully either Friday afternoon or early
 next
 week. Also, if anyone has any input on what threshold they feel is good
 enough
 for this I'd welcome any input on that. For example, do we want to ensure
 a = 1:1 match for job success? Or would something like 90% as stable as
 the
 serial job be good enough considering the speed advantage. (The parallel
 runs
 take about half as much time as a full serial run, the parallel job
 normally
 finishes in ~25-30min) Since this affects almost every project I don't
 want to
 define this threshold without input from everyone.

 After there is some more data for the gate queue's parallel job I'll have
 some
 pretty graphite graphs that I can share comparing the success trends
 between
 the parallel and serial jobs.

 So at this point we're in the home stretch and I'm asking for everyone's
 help
 in getting this merged. So, if everyone who is reviewing and pushing
 commits
 could watch the results from these non-voting jobs and if things fail on
 the
 parallel job but not the serial job please investigate the failure and
 open a
 bug if necessary. If it turns out to be a bug in tempest please link it
 against
 this blueprint:

 https://blueprints.launchpad.net/tempest/+spec/speed-up-tempest

 so that I'll give it the attention it deserves. I'd hate to get this close
 to
 getting this merged and have a bit of racy code get merged at the last
 second
 and block us for another week or two.

 I feel that we need to get this in before the H3 rush starts up as it will
 help
 everyone get through the extra review load faster.

 Thanks,

 Matt Treinish

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] ack(), reject() and requeue() support in rpc ...

2013-08-14 Thread Sandy Walsh
At Eric's request in https://review.openstack.org/#/c/41979/ I'm
bringing this to the ML for feedback.

Currently, oslo-common rpc behaviour is to always ack() a message no
matter what.

For billing purposes we can't afford to drop important notifications
(like *.exists). We only want to ack() if no errors are raised by the
consumer, otherwise we want to requeue the message.

Now, once we introduce this functionality, we will also need to support
.reject() semantics.

The use-case we've seen for this is:
1. grab notification
2. write to disk
3. do some processing on that notification, which raises an exception.
4. the event is requeued and steps 2-3 repeat very quickly. Lots of
duplicate records. In our case we've blown out our database.

Since each notification has a unique message_id, it's easy to detect
events we've seen before and .reject() them. There's a branch coming for
that as well.
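
To make the proposed semantics concrete, here's a rough consumer-side sketch.
The message object is kombu-style (ack()/reject()/requeue()); persist() and
process() are placeholders, and the duplicate tracking is deliberately naive:

seen_message_ids = set()

def persist(body):
    pass   # placeholder: write the notification to disk / database

def process(body):
    pass   # placeholder: billing pipeline, may raise

def on_notification(body, message, ack_on_error=False):
    msg_id = body.get('message_id')
    if msg_id in seen_message_ids:
        # Already persisted once; reject instead of writing a duplicate.
        message.reject()
        return
    try:
        persist(body)
        seen_message_ids.add(msg_id)
        process(body)
    except Exception:
        if ack_on_error:
            message.ack()      # current behaviour: ack no matter what
        else:
            message.requeue()  # proposed: give the message back for a retry
        return
    message.ack()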

This is really only a concern for the notification mechanism ... rpc
should not need this, though I can think of lots of places it would be
handy with orchestration and the scheduler. But, notifications are where
the billing-related things live.

By default, everything is the same, ack-on-error=True. But, for some
consumers we need to support ack-on-error=False.

Personally, I think this is must-have functionality.

Look forward to your thoughts.
-S

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] SQL meter table schema improvements

2013-08-14 Thread Dan Prince


- Original Message -
 From: Jay Pipes jaypi...@gmail.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Wednesday, August 14, 2013 12:42:29 PM
 Subject: [openstack-dev] [Ceilometer] SQL meter table schema improvements
 
 Hi all,
 
 I submitted a bug report to improve the schema of the meter table to
 reduce duplication:
 
 https://bugs.launchpad.net/ceilometer/+bug/1211985
 
 This would only affect the SQL storage driver.
 
 Interested in hearing thoughts on the above recommendations, which are:
 
 1) Replace counter_type with an integer column counter_type_id that
 references a lookup table (counter_type)
 2) Replace counter_unit with an integer column counter_unit_id that
 references a lookup table (counter_unit)
 3) Replace counter_name with an integer column counter_id that
 references a lookup table (counter)
 
 Just those three changes would reduce the overall row size for records
 in the meter table by about 10%, by my calculations, which is a good
 improvement for a relatively painless set of modifications.

Jay:

Looks promising. Have you done any perf testing or do you have any numbers on 
how this affects performance other than the data size calculation?

 
 Thanks,
 -jay
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Migrating to testr parallel in tempest

2013-08-14 Thread Dan Prince


- Original Message -
 From: Matthew Treinish mtrein...@kortar.org
 To: openstack-dev@lists.openstack.org
 Sent: Tuesday, August 13, 2013 4:25:14 PM
 Subject: [openstack-dev] Migrating to testr parallel in tempest
 
 
 Hi everyone,
 
 So for the past month or so I've been working on getting tempest to work
 stably
 with testr in parallel. As part of this you may have noticed the testr-full
 jobs that get run on the zuul check queue. I was using that job to debug some
 of the more obvious race conditions and stability issues with running tempest
 in parallel. After a bunch of fixes to tempest and finding some real bugs in
 some of the projects things seem to have smoothed out.
 
 So I pushed the testr-full run to the gate queue earlier today. I'll be
 keeping
 track of the success rate of this job vs the serial job and use this as the
 determining factor before we push this live to be the default for all tempest
 runs. So assuming that the success rate matches up well enough with serial
 job
 on the gate queue then I will push out the change that will migrate all the
 voting jobs to run in parallel hopefully either Friday afternoon or early
 next
 week. Also, if anyone has any input on what threshold they feel is good
 enough
 for this I'd welcome any input on that. For example, do we want to ensure
 a >= 1:1 match for job success? Or would something like 90% as stable as the
 serial job be good enough considering the speed advantage. (The parallel runs
 take about half as much time as a full serial run, the parallel job normally
 finishes in ~25-30min) Since this affects almost every project I don't want
 to
 define this threshold without input from everyone.

Nice work on the speedups!

Regarding the stability... Having tests which fail for stability reasons is 
concerning, especially if the rate is as high as 10%. I personally get 
frustrated even at the 1% level.

Let's see where the numbers fall. If we try it out we can always switch back 
if it is causing too many rechecks or failed gates, right?

If it does work, great. If not, I might prefer to see us keep a leaner set of 
gating Tempest tests so that runtime stays down in serial mode while we work 
the bugs out of parallel mode.

Dan


 
 After there is some more data for the gate queue's parallel job I'll have
 some
 pretty graphite graphs that I can share comparing the success trends between
 the parallel and serial jobs.
 
 So at this point we're in the home stretch and I'm asking for everyone's help
 in getting this merged. So, if everyone who is reviewing and pushing commits
 could watch the results from these non-voting jobs and if things fail on the
 parallel job but not the serial job please investigate the failure and open a
 bug if necessary. If it turns out to be a bug in tempest please link it
 against
 this blueprint:
 
 https://blueprints.launchpad.net/tempest/+spec/speed-up-tempest
 
 so that I'll give it the attention it deserves. I'd hate to get this close to
 getting this merged and have a bit of racy code get merged at the last second
 and block us for another week or two.
 
 I feel that we need to get this in before the H3 rush starts up as it will
 help
 everyone get through the extra review load faster.
 
 Thanks,
 
 Matt Treinish
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Why is network and subnet modeled separately?

2013-08-14 Thread Jay Pipes

On 08/14/2013 04:31 PM, Jay Buffington wrote:

network is layer 2 and subnet is layer 3.


That's quite confusing.


 A common use case would be that one tenant creates a network with a
 10.1.1.0/24 subnet.  Another tenant wants to use that same network, so
 they create a new network and can also create the 10.1.1.0/24 subnet.


Why not just call them all networks?

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Skipping tests in tempest via config file

2013-08-14 Thread Robert Collins
On 14 August 2013 14:10, Ian Wienand iwien...@redhat.com wrote:
 Hi,

 I proposed a change to tempest that skips tests based on a config file
 directive [1].  Reviews were inconclusive and it was requested the
 idea be discussed more widely.

 Of course issues should go upstream first.  However, sometimes test
 failures are triaged to a local/platform problem and it is preferable
 to keep everything else running by skipping the problematic tests
 while it's being worked on.

There are arguments for and against having testr manage skipping.
Testr definitely could do that, but should it? I suspect it's case by
case. Skipping if you're on a small memory environment is really a
test runner problem - you can probe for memory and skip. Skipping
because Python2.6 doesn't run a bunch of tests likewise - it's
internal to the test runner.

Skipping because of a policy the test runner can't know about sounds
like something testr can/should know about.
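
As a rough sketch of the config-driven variant under discussion (purely
illustrative; this is not the proposed tempest change, and LOCAL_SKIPS stands
in for whatever a local config file would provide):

    import functools

    # Hypothetical mapping, e.g. loaded from a local config file.
    LOCAL_SKIPS = {
        'tempest.api.compute.test_servers.TestServers.test_resize':
            'triaged as a platform-local problem',
    }

    def skip_if_locally_disabled(func):
        """Skip a test method when a local policy says so."""
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            test_id = '%s.%s.%s' % (func.__module__,
                                    self.__class__.__name__,
                                    func.__name__)
            reason = LOCAL_SKIPS.get(test_id)
            if reason:
                self.skipTest(reason)  # unittest/testtools both provide this
            return func(self, *args, **kwargs)
        return wrapper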

-Rob
-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-14 Thread Thierry Carrez
Simo Sorce wrote:
 On Wed, 2013-08-14 at 12:35 -0300, Thierry Carrez wrote:
 Simo Sorce wrote:
 During today's project status meeting [1], the state of KDS was
 discussed [2]. To quote ttx directly: we've been bitten in the past
 with late security-sensitive stuff and I'm a bit worried to ship
 late code with such security implications as a KDS.

 Is ttx going to review any security implications? The code does not
 mature just because it sits there untouched for more or less time.

 This is me wearing my vulnerability management hat on. The trick is that
 we (the VMT) have to support security issues for code that will be
 shipped in stable/havana. The most embarrassing security issues we had
 in the past were with code that didn't see a fair amount of time in
 master before we had to start supporting it.

 So for us there is a big difference between landing the KDS now and have
 it security-supported after one month of usage, and landing it in a few
 weeks and have it security-supported after 7 months of usage. After 7
 months I'm pretty sure most of the embarrassing issues will be ironed out.

 I don't really want us to repeat the mistakes of the past where we
 shipped really new code in keystone that ended up not really usable, but
 which we still had to support security-wise due to our policy.

 By security implications, I mean that this is a domain (like, say,
 token expiration) where even basic bugs can easily create a
 vulnerability. We just don't have the bandwidth to ship an embargoed
 security advisory for every bug that will be found in the KDS one month
 from now.
 
 I understand and appreciate that, so are you saying you want to veto KDS
 introduction in Havana on this ground?

It's more of a trade-off: I want the benefits to exceed the drawbacks.
Since I see this drawback, I'd like to understand the benefits so that
we can collectively make the right trade-off... Does this really need to
be in Havana, and why? Or is it preferable to have it really early in
Icehouse?

Note that I can't really veto anything as long as the PTL wants it in :)

 Are you saying it won't have significantly fewer issues in 7 months just
 by virtue of having landed in master and been put into use in various
 projects? Or that it was so thoroughly audited that my fears are
 unwarranted?
 
 Bugs can always happen, and whether 7 months of being used in development
 makes a difference when it comes to security-relevant bugs I can't say.
 I certainly am not going to claim my work is flawless, I know better than
 that :)

Damn, you escaped my trap :)

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Migrating to testr parallel in tempest

2013-08-14 Thread Robert Collins
On 15 August 2013 08:02, Alexius Ludeman l...@lexinator.com wrote:
 I kind of hijacked another thread with my testr problems, but I want to
 reiterate them directly on this one, as these are the pain points I had with
 the transition.

 testr does not appear to support --exclusion[1] or --stop[2].

1 is tricky to do well; 2 likewise, but there is a trivial workaround
in serial mode: tell the runner to stop early. I also replied in the
other thread.

-Rob
-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Migrating to testr parallel in tempest

2013-08-14 Thread Matthew Treinish
On Wed, Aug 14, 2013 at 11:05:35AM -0500, Ben Nemec wrote:
 On 2013-08-13 16:39, Clark Boylan wrote:
 On Tue, Aug 13, 2013 at 1:25 PM, Matthew Treinish
 mtrein...@kortar.org wrote:
 
 Hi everyone,
 
 So for the past month or so I've been working on getting tempest
 to work stably
 with testr in parallel. As part of this you may have noticed the
 testr-full
 jobs that get run on the zuul check queue. I was using that job
 to debug some
 of the more obvious race conditions and stability issues with
 running tempest
 in parallel. After a bunch of fixes to tempest and finding some
 real bugs in
 some of the projects things seem to have smoothed out.
 
 So I pushed the testr-full run to the gate queue earlier today.
 I'll be keeping
 track of the success rate of this job vs the serial job and use
 this as the
 determining factor before we push this live to be the default
 for all tempest
 runs. So assuming that the success rate matches up well enough
 with serial job
 on the gate queue then I will push out the change that will
 migrate all the
 voting jobs to run in parallel hopefully either Friday afternoon
 or early next
 week. Also, if anyone has any input on what threshold they feel
 is good enough
 for this I'd welcome any input on that. For example, do we want
 to ensure
 a >= 1:1 match for job success? Or would something like 90% as
 stable as the
 serial job be good enough considering the speed advantage. (The
 parallel runs
 take about half as much time as a full serial run, the parallel
 job normally
 finishes in ~25-30min) Since this affects almost every project I
 don't want to
 define this threshold without input from everyone.
 
 After there is some more data for the gate queue's parallel job
 I'll have some
 pretty graphite graphs that I can share comparing the success
 trends between
 the parallel and serial jobs.
 
 So at this point we're in the home stretch and I'm asking for
 everyone's help
 in getting this merged. So, if everyone who is reviewing and
 pushing commits
 could watch the results from these non-voting jobs and if things
 fail on the
 parallel job but not the serial job please investigate the
 failure and open a
 bug if necessary. If it turns out to be a bug in tempest please
 link it against
 this blueprint:
 
 https://blueprints.launchpad.net/tempest/+spec/speed-up-tempest
 
 so that I'll give it the attention it deserves. I'd hate to get
 this close to
 getting this merged and have a bit of racy code get merged at
 the last second
 and block us for another week or two.
 
 I feel that we need to get this in before the H3 rush starts up
 as it will help
 everyone get through the extra review load faster.
 
 Getting this in before the H3 rush would be very helpful. When we made
 the switch with Nova's unittests we fixed as many of the test bugs
 that we could find, merged the change to switch the test runner, then
 treated all failures as very high priority bugs that received
 immediate attention. Getting this in before H3 will give everyone a
 little more time to debug any potential new issues exposed by Jenkins
 or people running the tests locally.
 
 I think we should be bold here and merge this as soon as we have good
 numbers that indicate the trend is for these tests to pass. Graphite
 can give us the pass to fail ratios over time, as long as these trends
 are similar for both the old nosetest jobs and the new testr job I say
 we go for it. (Disclaimer: most of the projects I work on are not
 affected by the tempest jobs; however, I am often called upon to help
 sort out issues in the gate).
 
 I'm inclined to agree.  It's not as if we don't have transient
 failures now, and if we're looking at a 50% speedup in
 recheck/verify times then as long as the new version isn't
 significantly less stable it should be a net improvement.
 
 Of course, without hard numbers we're kind of discussing in a vacuum
 here.
 

I also would like to get this in sooner rather than later and fix the bugs as
they come in. But, I'm wary of doing this because there isn't a proven success
history yet. No one likes gate resets, and I've only been running it on the
gate queue for a day now.

So here is the graphite graph that I'm using to watch parallel vs serial in the
gate queue:
https://tinyurl.com/pdfz93l

On that graph the blue and yellow lines show the number of jobs that succeeded,
grouped together in per-hour buckets (yellow being parallel and blue serial).

Then the red line shows failures; a horizontal bar means that there is no
difference in the number of failures between serial and parallel. When it dips
negative it is showing a failure in parallel that wasn't on a serial run at the
same time. When it goes positive it is showing a failure on serial that doesn't
occur on parallel at the same time. But, because the serial runs take longer,
the failures happen at an offset. So if the plot shows a parallel failure
followed closely by a serial failure then that is probably on the same commit
and 

[openstack-dev] [State-Management] Agenda for tomorrow's meeting at 2000 UTC

2013-08-14 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in 
#openstack-meeting on Thursdays at 2000 UTC. The next meeting is tomorrow, Aug 
14!!!

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow

## Agenda (30-60 mins): ##

- Discuss any action items from the last meeting.
- Discuss the ongoing status of the overall effort and any needed coordination.
- Discuss any new engine or block design concepts/reviews…
- Talk about progress with regard to cinder/nova/ironic/heat/trove integration.
- Discuss any other potential new use-cases for said library.
- Discuss any other ideas, problems, issues, solutions, questions (and 
more).

Any other topics are welcome :-)

See you all soon!

-Josh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Blueprint proposal - Import / Export images with user properties

2013-08-14 Thread Emilien Macchi
Thanks, Mark, it's exactly the kind of feedback I was looking for.

As you suggested, I'm going to make a single blueprint for OVF.


Regards,

Emilien Macchi

# OpenStack Engineer
// eNovance Inc.  http://enovance.com
// ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
// 10 rue de la Victoire 75009 Paris

On 08/14/2013 07:39 PM, Mark Washenberger wrote:
 I think this could fit alongside a current blueprint we've discussed
 (https://blueprints.launchpad.net/glance/+spec/iso-image-metadata)
 that does similar things for metadata in ISOs.

 In general, I think the sane way to add a feature like this is as an
 optional container-format-specific plugin for import and export. Since
 the import and export features are still in pretty early stages of
 development (advanced on design though!), I don't expect such a
 feature would land until mid-Icehouse, just FYI.

 Can you restructure these blueprints as a single bp feature to
 export/import metadata in ovf?


 On Wed, Aug 14, 2013 at 10:09 AM, Emilien Macchi
 emilien.mac...@enovance.com wrote:

 Hi Mark,


 I was thinking of the OVF container format first since, as far as I
 know, it does support metadata.


 Thanks,


 Emilien Macchi
 
 # OpenStack Engineer
 // eNovance Inc.  http://enovance.com
 // ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
 // 10 rue de la Victoire 75009 Paris

 On 08/14/2013 06:34 PM, Mark Washenberger wrote:
 Let's dig into this a bit more so that I can understand it.

 Given that we have properties that we want to export with an
 image, where would those properties be stored? Somewhere in the
 image data itself? I believe some image formats support metadata,
 but I can't imagine all of them would. Is there a specific format
 you're thinking of using?


 On Wed, Aug 14, 2013 at 8:36 AM, Emilien Macchi
 emilien.mac...@enovance.com wrote:

 Hi,


 I would like to discuss two blueprint proposals here
 (maybe I could merge them into one if you prefer):

 
 https://blueprints.launchpad.net/glance/+spec/api-v2-export-properties
 
 https://blueprints.launchpad.net/glance/+spec/api-v2-import-properties

 *Use case*:
 I would like to set specific properties on an image which
 could represent a signature, useful for licensing
 requirements, for example.
 To do that, I should be able to export an image with its user
 properties included.

 Then, a user could reuse the exported image in the public
 cloud, and Glance will be aware of its properties.
 Obviously, we need the import / export feature.

 The idea here is to be able to identify an image after
 cloning (or similar operations) with a property field. Of course, the
 user could break it by editing the image manually, but I
 assume he/she won't.


 Let me know if you have any thoughts and if the blueprint is
 valuable.

 Regards,

 -- 
 Emilien Macchi
 
 # OpenStack Engineer
 // eNovance Inc.  http://enovance.com
 // ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
 // 10 rue de la Victoire 75009 Paris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Looking to get opinions on scheduler bug 1212478

2013-08-14 Thread Matt Riedemann
For this bug: https://bugs.launchpad.net/nova/+bug/1212478 

I can see the argument about not wanting the scheduler to keep trying to 
find a new host for the migration when the specific targeted host didn't 
work, but this seems like a design change, so I wanted to bring it up on the 
mailing list before trying to triage this.  My initial thought is: why have 
the RetryFilter configured for the scheduler if you don't want it to 
retry? But maybe you want to support both targeted and open (non-targeted) 
migrations.
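
For context, the RetryFilter just excludes hosts that were already attempted 
for the current request. Roughly (a simplified, hypothetical sketch, not 
nova's actual filter code):

    def host_passes(host_state, filter_properties):
        """Skip hosts that already failed this scheduling request."""
        retry = filter_properties.get('retry')
        if not retry:
            return True          # retries not enabled; exclude nothing
        attempted = retry.get('hosts', [])
        return host_state.host not in attempted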



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Looking to get opinions on scheduler bug 1212478

2013-08-14 Thread Joseph W Cropper
Hi Matt--

After some additional investigation and sifting through the code, I found that 
the RetryFilter and live migration are unrelated.  I've posted some additional 
comments there and invalidated the bug.  Thanks for the quick turnaround.

Thanks,



Joe Cropper
Cloud Systems Software Development and x86 TCEM
3605 Hwy 52 N, Rochester, MN 55901-1407
Phone: 507-253-1976
E-mail: jwcro...@us.ibm.com





From:   Matt Riedemann/Rochester/IBM@IBMUS
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   08/14/2013 08:44 PM
Subject:    [openstack-dev] [nova] Looking to get opinions on scheduler bug 1212478



For this bug: https://bugs.launchpad.net/nova/+bug/1212478 

I can see the argument about not wanting the scheduler to keep trying to 
find a new host for the migration when the specific targeted host didn't 
work, but this seems like a design change, so I wanted to bring it up on the 
mailing list before trying to triage this.  My initial thought is: why have 
the RetryFilter configured for the scheduler if you don't want it to 
retry? But maybe you want to support both targeted and open (non-targeted) 
migrations.



Thanks, 

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development 

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com 


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev