Re: [openstack-dev] [oslo] nominating Victor Stinner for the Oslo core reviewers team

2014-04-22 Thread Mark McLoughlin
On Mon, 2014-04-21 at 12:39 -0400, Doug Hellmann wrote:
 I propose that we add Victor Stinner (haypo on freenode) to the Oslo
 core reviewers team.
 
 Victor is a Python core contributor, and works on the development team
 at eNovance. He created trollius, a port of Python 3's tulip/asyncio
 module to Python 2, at least in part to enable a driver for
 oslo.messaging. He has been quite active with Python 3 porting work in
 Oslo and some other projects, and organized a sprint to work on the
 port at PyCon last week. The patches he has written for the python 3
 work have all covered backwards-compatibility so that the code
 continues to work as before under python 2.
 
 Given his background, skills, and interest, I think he would be a good
 addition to the team.

Sounds good to me!

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Migrate to PostgreSQL 9

2014-04-22 Thread Mike Scherbakov
Ideally such things should land at the beginning of a new release cycle, and
definitely not after feature freeze. Let's make an exception only if it is
very well tested by many people on dev environments.
If we make an exception, I suggest switching quickly, in order to have
more time for testing before the release, with the possibility to revert
at any moment.

Thanks,


On Tue, Apr 22, 2014 at 1:53 AM, Vladimir Kuklin vkuk...@mirantis.com wrote:

 +MAX_ULONG :-)


 On Tue, Apr 22, 2014 at 1:28 AM, Dmitry Borodaenko 
 dborodae...@mirantis.com wrote:

 +++

 We already use PostgreSQL 9 on some of our dev boxes, and Nailgun
 works fine in fake mode and unit tests, so the risk of upgrading it
 now is minimal. I agree with Dmitry P. that it will cost us more to
 postpone it and make that upgrade a part of the Fuel upgrade.

 -DmitryB

 On Mon, Apr 21, 2014 at 1:58 PM, Jay Pipes jaypi...@gmail.com wrote:
  On Tue, 2014-04-22 at 00:55 +0400, Dmitry Pyzhov wrote:
  We use PostgreSQL 8 on the master node right now. At some point we will
  have to migrate to version 9, and the database migration can become a
  painful part of the master node upgrade at that point.
 
  At the moment some of our developers use PostgreSQL 9 in their
  environments and see no issues. Should we enforce the upgrade before the
  5.0 release?
 
  ++
 
  -jay
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Dmitry Borodaenko

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: Re: [Nova][Neutron][Cinder][Heat] Should we support tags for os resources?

2014-04-22 Thread Shake Chen
HP Cloud now provides a tag feature; VMs can now be tagged.

Maybe we should study how HP Cloud achieves this feature.


On Tue, Apr 22, 2014 at 10:02 AM, Huangtianhua huangtian...@huawei.com wrote:

  Thanks very much.



 I have registered a blueprint for nova:

 https://blueprints.launchpad.net/nova/+spec/add-tags-for-os-resources



 The simple plan is:

 1.   Add a tags API (create tags/delete tags/describe tags) to the v3
 API

 2.   Change the implementation for instances from “metadata” to “tags”



 Your suggestions?



 Thanks

 *From:* Jay Pipes [mailto:jaypi...@gmail.com]
 *Sent:* 22 April 2014 3:46
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] Re: [Nova][Neutron][Cinder][Heat] Should we
 support tags for os resources?



 Absolutely. Feel free.



 On Mon, Apr 21, 2014 at 4:48 AM, Huangtianhua huangtian...@huawei.com
 wrote:

 I plan to register a blueprints in nova for record this. Can I?


 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: 20 April 2014 21:06
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova][Neutron][Cinder][Heat] Should we support
 tags for os resources?


 On Sun, 2014-04-20 at 08:35 +, Huangtianhua wrote:
  Hi all:
 
  Currently, the EC2 API of OpenStack only has tag support (metadata)
  for instances. And there is already a blueprint to add support
  for volumes and volume snapshots using “metadata”.
 
  AWS supports adding tags to a lot of other resources, such as
  image/subnet/securityGroup/networkInterface (port).
 
  I think we should support tags for these resources. There may be no
  “metadata” property for these resources, so we should add
  “metadata” to support resource tags and change the related APIs.

 Hi Tianhua,

 In OpenStack, generally, the choice was made to use maps of key/value
 pairs instead of lists of strings (tags) to annotate objects exposed in the
 REST APIs. OpenStack REST APIs inconsistently call these maps of key/value
 pairs:

  * properties (Glance Image and Cinder Volume, respectively)
  * extra_specs (Nova InstanceType)
  * metadata (Nova Instance, Aggregate and InstanceGroup, Neutron)
  * metadetails (Nova Aggregate and InstanceGroup)
  * system_metadata (Nova Instance -- differs from normal metadata in
 that the key/value pairs are 'owned' by Nova, not a user...)

 Personally, I think tags are a cleaner way of annotating objects when the
 annotation is coming from a normal user. Tags represent by far the most
 common way for REST APIs to enable user-facing annotation of objects in a
 way that is easy to search on. I'd love to see support for tags added to
 any searchable/queryable object in all of the OpenStack APIs.
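To illustrate the difference, here is a small self-contained Python sketch (hypothetical in-memory objects, not actual OpenStack code) of why tag search is simpler for callers than key/value search:

```python
# Hypothetical, simplified in-memory objects -- not actual Nova/Glance code.
servers = [
    {"id": "s1", "tags": {"web", "prod"}, "metadata": {"env": "prod", "role": "web"}},
    {"id": "s2", "tags": {"db"}, "metadata": {"env": "dev", "role": "db"}},
]

def by_tag(objs, tag):
    # Tag search: one membership test; the caller only needs a single string.
    return [o["id"] for o in objs if tag in o["tags"]]

def by_meta(objs, key, value):
    # Key/value search: the caller must know both the key and the value.
    return [o["id"] for o in objs if o["metadata"].get(key) == value]

assert by_tag(servers, "prod") == ["s1"]
assert by_meta(servers, "env", "dev") == ["s2"]
```

The point is only the calling convention: a tag query takes one argument, while a metadata query requires callers to agree on both key and value.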

 I'd also like to see cleanup of the aforementioned inconsistencies in how
 maps of key/value pairs are both implemented and named throughout the
 OpenStack APIs. Specifically, I'd like to see this implemented in the next
 major version of the Compute API:

  * Removal of the metadetails term
  * All key/value pairs can only be changed by users with elevated
 privileges, i.e. system-controlled (normal users should use tags)
  * Call all these key/value pair combinations properties -- technically,
 metadata is data about data, like the size of an integer. These
 key/value pairs are just data, not data about data.
  * Identify key/value pairs that are relied on by all of Nova to be a
 specific key and value combination, and make these things actual real
 attributes on some object model -- since that is a much greater guard for
 the schema of an object and enables greater performance by allowing both
 type safety of the underlying data and removes the need to search by both a
 key and a value.

 Best,
 -jay


   ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Shake Chen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Reviews wanted for new TripleO elements

2014-04-22 Thread Macdonald-Wallace, Matthew
Apologies to both.

I have been asking in IRC but with little luck and had missed that original 
email.

I'll stick to IRC... :)

Matt

 -Original Message-
 From: Ben Nemec [mailto:openst...@nemebean.com]
 Sent: 21 April 2014 16:59
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Tripleo] Reviews wanted for new TripleO elements
 
 Please don't make review requests on the list.  Details here:
 http://lists.openstack.org/pipermail/openstack-dev/2013-
 September/015264.html
 
 Thanks.
 
 -Ben
 
 On 04/20/2014 02:44 PM, Macdonald-Wallace, Matthew wrote:
  Hi all,
 
  Can I please ask for some reviews on the following:
 
  https://review.openstack.org/#/c/87226/ - Install checkmk_agent
  https://review.openstack.org/#/c/87223/ - Install icinga cgi interface
 
  I already have a couple of +1s and jenkins is happy; all I need is +2
  and +A! :)
 
  Thanks,
 
  Matt
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] nominating Victor Stinner for the Oslo core reviewers team

2014-04-22 Thread Flavio Percoco

On 21/04/14 12:39 -0400, Doug Hellmann wrote:

I propose that we add Victor Stinner (haypo on freenode) to the Oslo
core reviewers team.

Victor is a Python core contributor, and works on the development team
at eNovance. He created trollius, a port of Python 3's tulip/asyncio
module to Python 2, at least in part to enable a driver for
oslo.messaging. He has been quite active with Python 3 porting work in
Oslo and some other projects, and organized a sprint to work on the
port at PyCon last week. The patches he has written for the python 3
work have all covered backwards-compatibility so that the code
continues to work as before under python 2.

Given his background, skills, and interest, I think he would be a good
addition to the team.


+1



Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova Baremetal] disk-image builder tool error

2014-04-22 Thread Jander lu
Hi,

I am following this wiki page (https://wiki.openstack.org/wiki/Baremetal),
using devstack to set up a nova baremetal provisioning environment, but I
met one problem, described below:

when I use the disk-image-builder (
https://github.com/openstack/diskimage-builder.git) command to build an
image, there is no output file and the error message is confusing to me; I
don't know what I should do next. Below is the output of the console.

stack@helpme:~/diskimage-builder$ bin/disk-image-create -u base -o my-image
Building elements: base  base
Expanded element dependencies to: base
Building in /tmp/image.LB2Dc8vH
dib-run-parts Mon Apr 21 19:05:11 EDT 2014 Running
/tmp/image.LB2Dc8vH/hooks/root.d/01-ccache
dib-run-parts Mon Apr 21 19:05:11 EDT 2014 01-ccache completed
--- PROFILING ---

Target: root.d

Script Seconds
---  --

01-ccache 0.042

- END PROFILING -
Please include at least one distribution root element.
stack@helpme:~/diskimage-builder$ cd ~


I think the wiki content for both Baremetal and Disk-image-builder is not
up to date.

Could someone point me at how to get this working?

thanks!!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova Baremetal] disk-image builder tool error

2014-04-22 Thread Ramakrishnan G
As the error suggests, please use one of the root distribution elements
(ubuntu, fedora, etc).  disk-image-create -h will give some examples.



On Tue, Apr 22, 2014 at 1:24 PM, Jander lu lhcxx0...@gmail.com wrote:

 Hi,

 I am following this wiki page (https://wiki.openstack.org/wiki/Baremetal),
 using devstack to set up a nova baremetal provisioning environment, but I
 met one problem, described below:

 when I use the disk-image-builder (
 https://github.com/openstack/diskimage-builder.git) command to build an
 image, there is no output file and the error message is confusing to me; I
 don't know what I should do next. Below is the output of the console.

 stack@helpme:~/diskimage-builder$ bin/disk-image-create -u base -o
 my-image
 Building elements: base  base
 Expanded element dependencies to: base
 Building in /tmp/image.LB2Dc8vH
 dib-run-parts Mon Apr 21 19:05:11 EDT 2014 Running
 /tmp/image.LB2Dc8vH/hooks/root.d/01-ccache
 dib-run-parts Mon Apr 21 19:05:11 EDT 2014 01-ccache completed
 --- PROFILING ---

 Target: root.d

 Script Seconds
 ---  --

 01-ccache 0.042

 - END PROFILING -
 Please include at least one distribution root element.
 stack@helpme:~/diskimage-builder$ cd ~


 I think the wiki content for both Baremetal and Disk-image-builder is not
 up to date.

 Could someone point me at how to get this working?

 thanks!!

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Ramesh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova Baremetal] disk-image builder tool error

2014-04-22 Thread Clint Byrum
Excerpts from Jander lu's message of 2014-04-22 00:54:18 -0700:
 Hi,
 
 I am following this wiki page(https://wiki.openstack.org/wiki/Baremetal ),

That web page is out of date. Will try to fix it in the next 48 hours.

  by using devstack to set up a nova baremetal provisioning environment, but I
  met one problem, described below:
  
  when I use the disk-image-builder (
  https://github.com/openstack/diskimage-builder.git) command to build an
  image, there is no output file and the error message is confusing to me; I
  don't know what I should do next. Below is the output of the console.
 
 stack@helpme:~/diskimage-builder$ bin/disk-image-create -u base -o my-image
 Building elements: base  base
 Expanded element dependencies to: base
 Building in /tmp/image.LB2Dc8vH
 dib-run-parts Mon Apr 21 19:05:11 EDT 2014 Running
 /tmp/image.LB2Dc8vH/hooks/root.d/01-ccache
 dib-run-parts Mon Apr 21 19:05:11 EDT 2014 01-ccache completed
 --- PROFILING ---
 
 Target: root.d
 
 Script Seconds
 ---  --
 
 01-ccache 0.042
 
 - END PROFILING -
 Please include at least one distribution root element.

You need to specify 'debian', 'fedora', 'opensuse' or 'ubuntu' at the
very least.
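For example (the element name here is an assumption; run disk-image-create -h for the list available in your checkout):

```shell
# Add a distribution root element such as 'ubuntu' to the element list;
# '-u' leaves the image uncompressed and '-o' names the output file.
bin/disk-image-create -u ubuntu -o my-image
```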

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Use cases document

2014-04-22 Thread Samuel Bercovici
Hi,

I have seen a few additions to
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?pli=1
I think it would make sense to keep this document updated with the use cases
that were discussed on the ML.
A use case that I have seen is missing relates to availability zones.
Please feel free to update this and add your own use cases to the document.

I have also added sections for Cloud Admin/Cloud Operator use cases. Please add 
additional use cases based on your experience.

Regards,
-Sam.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Question on Nova Compute on Havana

2014-04-22 Thread Prashant Upadhyaya
Hi guys,

When a Compute Node restarts, how does Nova Compute obtain the 'compute node
id' (assuming the compute node exists at the controller), so that the periodic
compute_node_update RPC can be sent from Nova Compute towards the controller
with the appropriate compute node id?

Regards
-Prashant



DISCLAIMER: This message is proprietary to Aricent and is intended solely for 
the use of the individual to whom it is addressed. It may contain privileged or 
confidential information and should not be circulated or used for any purpose 
other than for what it is intended. If you have received this message in error, 
please notify the originator immediately. If you are not the intended 
recipient, you are notified that you are strictly prohibited from using, 
copying, altering, or disclosing the contents of this message. Aricent accepts 
no responsibility for loss or damage arising from the use of the information 
transmitted by this email including damage from virus.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-22 Thread Samuel Bercovici
Hi,

The work on SSL termination has started and is very near completion.
The blueprint is at
https://blueprints.launchpad.net/neutron/+spec/lbaas-ssl-termination and the
wiki is at https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL
Do you see anything missing there?


Regards,
-Sam.




-Original Message-
From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
Sent: Saturday, April 19, 2014 2:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario 
question


On Apr 18, 2014, at 10:21 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 Howdy, folks!
 
 Could someone explain to me the SSL usage scenario where it makes 
 sense to re-encrypt traffic traffic destined for members of a back-end 
 pool?  SSL termination on the load balancer makes sense to me, but I'm 
 having trouble understanding why one would be concerned about then 
 re-encrypting the traffic headed toward a back-end app server. (Why 
 not just use straight TCP load balancing in this case, and save the 
 CPU cycles on the load balancer?)
 

1. Some customers want their servers to be external to our data centers: for
example, the load balancer is in Chicago, with Rackspace hosting the load
balancers and the back-end pool members being on Amazon AWS servers. (We don't
know why they would do this, but a lot are doing it.) They can't simply audit
the links between AWS and our data centers for PCI, with lots of backbones
being crossed, so they just want encryption to their back-end pool members.
Also note that Amazon has chosen to support encryption:
http://aws.amazon.com/about-aws/whats-new/2011/10/04/amazon-s3-announces-server-side-encryption-support/
They've had it for a while now, and for whatever reason a lot of customers are
now demanding it from us as well.

I agree they could simply use HTTPS load balancing, but they seem to think
providers that don't offer encryption are inferior feature-wise.

2. Users on providers that are incapable of one-armed-with-source-NAT load
balancing (see the link below) are at the mercy of using X-Forwarded-For
style headers to determine the original source of a connection (a must if
they want to know where abusive connections are coming from). Under
traditional NAT routing the source IP will always be the load balancer's IP,
so X-Forwarded-For has been the traditional method of showing the server the
client address (this applies to HTTP load balancing as well). But in the case
of SSL, the load balancer, unless it is decrypting traffic, won't be able to
inject these headers. And when the pool members are on an external network it
is prudent to allow for encryption; this pretty much forces them to use a
trusted load balancer as a man in the middle to decrypt, add X-Forwarded-For,
then encrypt to the back end.

http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_One_Arm_Mode_with_Source_NAT_on_the_Cisco_Application_Control_Engine_Configuration_Example
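A minimal sketch in plain Python (hypothetical helper, no real TLS involved) of the point above: a header can only be injected into the decrypted byte stream, which is why plain SSL/TCP passthrough cannot add X-Forwarded-For:

```python
def inject_xff(decrypted_request: bytes, client_ip: str) -> bytes:
    # Insert an X-Forwarded-For header after the request line. This is only
    # possible once the load balancer has terminated TLS; with passthrough,
    # the request is opaque ciphertext and cannot be modified.
    head, _, body = decrypted_request.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    lines.insert(1, b"X-Forwarded-For: " + client_ip.encode())
    return b"\r\n".join(lines) + b"\r\n\r\n" + body

req = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
out = inject_xff(req, "203.0.113.7")
assert b"X-Forwarded-For: 203.0.113.7" in out
```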


3. Unless I'm mistaken, it looks like encryption was already a part of the API
or was accepted as a requirement for Neutron LBaaS:
https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL#Current_design
Is this document still valid?

4. We also assumed we were expected to support the use cases described in
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?pli=1
where case 7 specifically asks for re-encryption.


 We terminate a lot of SSL connections on our load balancers, but have 
 yet to have a customer use this kind of functionality.  (We've had a 
 few ask about it, usually because they didn't understand what a load 
 balancer is supposed to do-- and with a bit of explanation they went 
 either with SSL termination on the load balancer + clear text on the 
 back-end, or just straight TCP load balancing.)

We terminate a lot of SSL connections on our load balancers as well, and we
get a lot of pressure for this kind of functionality. I think you have no
customers using that functionality because you are unable to offer it, which
is the case for us as well. But due to a significant amount of pressure we
have a solution ready and waiting for testing on our CLB 1.0 offering.

We wish it were the case for us that only a few users request this feature,
but we have customers that really do want their back-end pool members on a
separate, non-secure network, or worse, want this as a more advanced form of
HTTPS passthrough (TCP load balancing, as you're describing it).

Providers may be able to secure their load balancers, but they may not always
be able to secure the connections to and from the back end. Users who want
end-to-end encrypted connectivity, but also want the load balancer to be
capable of making intelligent decisions (requiring decryption at the load
balancer) and of injecting useful headers going to the back-end pool members,
still need encryption functionality.


Re: [openstack-dev] [Glance] Ideas needed for v2 registry testing

2014-04-22 Thread Erno Kuvaja

Hi Mark,

I'd like to get roughly the same coverage for the api+reg as we have for 
api+sqlalchemy. The problem is that currently we are not running any 
functional tests against the v2 api+reg. I tried to enable the existing v2 
tests for the registry as well, but due to some fundamental differences 
they will not work with noauth like they do on v1.


BR,
Erno

On 18/04/14 16:23, Mark Washenberger wrote:

Hi Erno,

Just looking for a little more information here. What are the 
particular areas around keystone integration in the v2 api+registry 
stack that you want to test? Is the v2 api + v2 registry stack using 
keystone differently than how v1 api + v1 registry stack uses it?


Thanks


On Fri, Apr 18, 2014 at 6:35 AM, Erno Kuvaja kuv...@hp.com 
mailto:kuv...@hp.com wrote:


Hi all,

I have been trying to enable functional testing for Glance API v2
using data_api = glance.db.registry.api without great success.

The current functionality of the v2 api+reg relies on the fact
that keystone is used, and our current tests do not facilitate
that expectation.

I do not like either option I have managed to come up with, so now
is the time to call for help. Currently the only way I see that we
could run the registry tests is to convert our functional tests to
use keystone instead of noauth, or to write a test suite that
bypasses the API server and targets the registry directly. Neither
of these is great: starting keystone would make the already
long-running functional tests even longer and more of a resource
hog, and on top of that we would need to pull in keystone just to
run glance tests; on the other hand, bypassing the API server would
not give us any guarantee that the behavior of glance is the same
regardless of which data_api is used.

At this point any ideas/discussion would be more than welcome how
we can make these tests running on both configurations.

Thanks,
Erno

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Summit] Topic review for Atlanta

2014-04-22 Thread Thierry Carrez
Robert Collins wrote:
 I've pulled the summit talks into an etherpad
 (https://etherpad.openstack.org/p/tripleo-icehouse-summit) - btw, who
 can review these within the system itself?

As the lead for the TripleO topic, you should be able to review them
(you should have a Review topic button near the top of the page). Let
me know if that's not the case.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Fix migration that breaks Grenade jobs

2014-04-22 Thread Anna Kamyshnikova
Hello everyone!

I'm working on fixing bug 1307344. I found a solution that will fix the
Grenade jobs and will work for online and offline migrations.
https://review.openstack.org/87935 But I faced the problem that Metering
usage won't be fixed, as we need to create 2 tables (meteringlabels,
meteringlabelrules). I tried to create both in patch set #7, but that won't
work for offline migration. In fact, to fix Grenade it is enough to create
the meteringlabels table, which is done in my change in the last patch set,
#8. I want to ask reviewers to take a look at this change and suggest
something or approve it. I'm available on IRC (akamyshnikova) or by email.
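A rough sketch of the idempotent-create idea behind the fix, using the stdlib sqlite3 module and illustrative column names (the real Neutron migration uses alembic and a different schema):

```python
import sqlite3

# Create the missing table only if it is not already present, so the same
# migration step is safe on fresh installs and on already-upgraded
# deployments. Column names here are illustrative, not the Neutron schema.
DDL = """
CREATE TABLE IF NOT EXISTS meteringlabels (
    id TEXT PRIMARY KEY,
    tenant_id TEXT,
    name TEXT,
    description TEXT
)
"""

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
conn.execute(DDL)  # running the step twice must not fail

tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
assert "meteringlabels" in tables
```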

Regards
Ann
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Fix migration that breaks Grenade jobs

2014-04-22 Thread Jakub Libosvar
On 04/22/2014 10:53 AM, Anna Kamyshnikova wrote:
 Hello everyone!
 
 I'm working on fixing bug 1307344. I found a solution that will fix the
 Grenade jobs and will work for online and offline migrations.
 https://review.openstack.org/87935 But I faced the problem that Metering
 usage won't be fixed, as we need to create 2 tables (meteringlabels,
 meteringlabelrules). I tried to create both in patch set #7, but that won't
 work for offline migration. In fact, to fix Grenade it is enough to
 create the meteringlabels table, which is done in my change in the last
 patch set, #8. I want to ask reviewers to take a look at this change and
 suggest something or approve it. I'm available on IRC (akamyshnikova) or
 by email.
 
 Regards 
 Ann
Hi Ann,

Good suggestion for getting out of the failing job, but I don't think it
should go into 33c3db036fe4_set_length_of_description_field_metering.py,
because this failure is grenade-specific while the real issue is that
we're not able to add a new service plugin to an already deployed Neutron.

I think the same workaround you proposed in the 87935 review should go
into grenade itself (the from-havana/upgrade-neutron script), just to have
the job working on the havana-to-icehouse upgrade. It's a bit of an ugly
workaround, but IMHO so far the best solution to reach a stable job in a
short time.

Kuba

 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Updates to the template for Neutron BPs

2014-04-22 Thread Jaume Devesa
Another question about this, Kyle:

once merged, will there be a place where we can read these approved specs, or
should we build them locally? I tried to find the approved *nova-specs* on
docs.openstack.org, but I couldn't find them.

Regards,
jaume


On 21 April 2014 09:02, Mandeep Dhami dh...@noironetworks.com wrote:


 Got it. Thanks.

 Regards,
 Mandeep


 On Sun, Apr 20, 2014 at 11:49 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

  That's because the spec was proposed to the juno/ folder. Look at
  https://raw.githubusercontent.com/openstack/neutron-specs/master/doc/source/index.rst:
  if a spec is in the juno/ folder, the contents page shows it as approved.

  Once merged, it means approved, right? So it is going to be OK after
  merge. Though a better reminder than just draft in the URL could be
  required if many start to mess this up...


 On Mon, Apr 21, 2014 at 10:43 AM, Kevin Benton blak...@gmail.com wrote:

 Yes. It shows up in the approved section since it's just a build of the
 patch as-is.

 The link is titled gate-neutron-specs-docs in the message from Jenkins.

 --
 Kevin Benton


 On Sun, Apr 20, 2014 at 11:37 PM, Mandeep Dhami dh...@noironetworks.com
  wrote:

  Just for clarification: the Jenkins link in the description puts the
  generated HTML in the Juno approved specs section even though the
  blueprint is still being reviewed. Am I looking at the right link?

 Regards,
 Mandeep


 On Sun, Apr 20, 2014 at 10:54 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Yes, thanks, that's exactly what I was looking for!


 On Mon, Apr 21, 2014 at 12:03 AM, Kyle Mestery 
 mest...@noironetworks.com wrote:

 On Sat, Apr 19, 2014 at 5:11 PM, Mike Scherbakov
 mscherba...@mirantis.com wrote:
  Hi Kyle,
   I built the template and it looks awesome. We are considering using the
  same approach for Fuel.
  
   My assumption is that a spec will be under review for a negotiation
  period, which may be quite a while. In my opinion, it is not always very
   convenient to read a spec in gerrit.
 
 Agreed, though for some specs, this is actually an ok way to do
 reviews.

   Did you guys have any thoughts on auto-building these specs into HTML
  on every patch upload? So we could go somewhere and see the built
  results, without a requirement to fetch neutron-specs and run tox. The
  possible drawback is that the reader won't see gerrit comments...
 
  I followed what Nova was doing and committed code into
  openstack-infra/config which allows some jenkins jobs to run when
 we commit to the neutron-specs gerrit. [1]. As an example, look at
 this commit here [2]. If you look at the latest Jenkins run, you'll
 see a link which takes you to an HTML generated document [3] which you
 can review in lieu of the raw restructured text in gerrit. That will
 actually generate all the specs in the repository, so you'll see the
 Example Spec along with the Nuage one I used for reference here.

 Hope that helps!
 Kyle

 [1] https://review.openstack.org/#/c/88069/
 [2] https://review.openstack.org/#/c/88690/
 [3]
 http://docs-draft.openstack.org/90/88690/3/check/gate-neutron-specs-docs/fe4282a/doc/build/html/

  Thanks,
 
 
  On Sat, Apr 19, 2014 at 12:08 AM, Kyle Mestery 
 mest...@noironetworks.com
  wrote:
 
  Hi folks:
 
  I just wanted to let people know that we've merged a few patches
 [1]
  to the neutron-specs repository over the past week which have
 updated
  the template.rst file. Specifically, Nachi has provided some
  instructions for using Sphinx diagram tools in lieu of
 asciiflow.com.
  Either approach is fine for any Neutron BP submissions, but Nachi's
  patch has some examples of using both approaches. Bob merged a
 patch
  which shows an example of defining REST APIs with attribute tables.
 
  Just an update for anyone proposing BPs for Juno at the moment.
 
  Thanks!
  Kyle
 
  [1]
 
 https://review.openstack.org/#/q/status:merged+project:openstack/neutron-specs,n,z
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Mike Scherbakov
  #mihgen
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen








 --
 Kevin Benton


Re: [openstack-dev] [Devstack] add support for ceph

2014-04-22 Thread Soren Hansen
2014-04-18 12:32 GMT+02:00 Sean Dague s...@dague.net:
 On 04/18/2014 12:03 AM, Scott Devoid wrote:
 So I have had a chance to look over the whole review history again. I
 agree with Sean Dague and Dean Troyer's concerns that the current
 patch affects code outside of lib/storage and extras.d. We should
 make the Devstack extension system more flexible to allow for more
 extensions.  Although I am not sure if this responsibility falls
 completely in the lap of those wishing to integrate Ceph.
 Where should it fall? This has been pretty common with trying to bring
 in anything major, the general plumbing needs to come from that same
 effort. It's also a pretty sane litmus test of whether this is a drive-by
 contribution that will get no support in the future (and thus just
 expects Dean and me to go fix things), or something which will have
 someone actively contributing to keep things working in the future.

As far as litmus tests go, this is a very biased one.

If someone has the skill, time, and inclination to undertake whatever
refactoring is needed to add their bit of functionality, yes, that's an
indication that they'd be able to maintain it long term. Whether they'll
have the time and/or inclination to actually do so is anybody's guess.

I think it's an unnecessarily high bar to set for every contributor. We
happily accept patches that don't require major refactoring or plumbing
changes regardless of whether or not the contributor would have had the
skill, time or inclination to do those things as well, simply because
this contributor's changes didn't happen to require such efforts.

Conversely, just because someone has the skill, time, and inclination to
add a small, simple features and actively maintain it forever and ever,
doesn't mean they have the skill, time or inclination to undertake a
major refactoring.

I'm not saying it's your job to do it all instead, but if a contributor
provides a patch that is useful and is proven to work (devstack runs are
easily reproducible) and it doesn't break anything else (ensured by the
gate), it seems like some amount of work should fall on the core team to
make sure things move forward either by:

a) accepting the patch as is and tending to the refactoring later on,

b) doing the refactoring straight away and getting the contributor to
adjust their code accordingly, or

c) providing enough guidance for the contributor to be able to do the
refactoring.

 My concern is that there is a lot of code in devstack. And every time
 I play with a different set of options we don't enable in the gate,
 things get brittle. For instance, Fedora support gets broken all the
 time, because it's not tested in the gate.

How can we gate on it, if devstack doesn't support it?

If we (the OpenStack project) or someone else actually cares about
Fedora support, we/they should be running the tests. If no one cares
enough to test it, let it be broken. If it's dragging anything else down
with it somehow, rip it out.

But, if devstack doesn't support it, how do we expect people to run
these tests?

 Something as big as using ceph for storage back end across a range of
 services is big. And while there have been patches, I've yet to see
 anyone volunteer 3rd party testing here to help us keep it working.

There's obviously a chicken-and-egg situation there.

Sure, we could repeat the exact test runs done in OpenStack and report
back whether they worked for us. However, that's unlikely to actually be
useful to OpenStack (repeating the same test with the same configuration
in a very similar environment isn't likely to be very informative).

Also, it's almost entirely useless to us, because our real environment
will be running with a different configuration, so we're not actually
getting much additional confidence in the changes coming in from
upstream by running the 3rd party tests.

Only once the toolchain allows us to run tests with a configuration that
actually vaguely mimics our real environment will we (or OpenStack for
that matter) have anything to gain from having us run 3rd party tests.

 Some of the late reverts in nova for icehouse hit this same kind of
 issue, where once certain rbd paths were lit in the code base within
 24hrs we had user reports coming back of things exploding. That makes
 me feel like there are a lot of daemons lurking here, and if this is
going to be a devstack mode that people are going to use a lot,
 then it needs to be something that's tested.

This sounds completely backwards to me.

Devstack is how things are deployed in OpenStack's gates. I'd expect
people wanting to provide 3rd party testing back to OpenStack to try to
use the same tools to do so, i.e. devstack.

Devstack is supposed (at least as I understand it) to be exactly the tool
that lets us test things in a reproducible fashion, but it seems to me that
you're saying that things need to already be covered by other methods of
testing before they can get into devstack?

Had devstack had support for 

Re: [openstack-dev] [oslo] nominating Victor Stinner for the Oslo core reviewers team

2014-04-22 Thread Julien Danjou
On Mon, Apr 21 2014, Doug Hellmann wrote:

 I propose that we add Victor Stinner (haypo on freenode) to the Oslo
 core reviewers team.

+1

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info




Re: [openstack-dev] [Fuel] Migrate to Postrgresql 9

2014-04-22 Thread Dmitry Pyzhov
I've created a ticket: https://bugs.launchpad.net/fuel/+bug/1311030


On Tue, Apr 22, 2014 at 10:54 AM, Mike Scherbakov
mscherba...@mirantis.comwrote:

 Ideally such things should land at the beginning of new release cycle, and
 definitely not after feature freeze. Let's make an exception, if it is very
 well tested by many on dev envs.
 If we make an exception, I suggest to switch it quickly - in order to have
 more time before release for testing, with the possibility to revert it
 back at any moment.

 Thanks,


 On Tue, Apr 22, 2014 at 1:53 AM, Vladimir Kuklin vkuk...@mirantis.comwrote:

 +MAX_ULONG :-)


 On Tue, Apr 22, 2014 at 1:28 AM, Dmitry Borodaenko 
 dborodae...@mirantis.com wrote:

 +++

 We already use PostgreSQL 9 on some of our dev boxes, and Nailgun
 works fine in fake mode and unit tests, so the risk of upgrading it
 now is minimal. I agree with Dmitry P. that it will cost us more to
 postpone it and make that upgrade a part of Fuel upgrade.

 -DmitryB

 On Mon, Apr 21, 2014 at 1:58 PM, Jay Pipes jaypi...@gmail.com wrote:
  On Tue, 2014-04-22 at 00:55 +0400, Dmitry Pyzhov wrote:
  We use PostgreSQL 8 on the master node right now. At some point we will
  have to migrate to the 9th version, and the database migration could
  become a painful part of the master node upgrade at that point.
 
  At the moment part of our developers use psql9 in their environments
  and see no issues. Should we enforce upgrade before 5.0 release?
 
  ++
 
  -jay
 
 
 



 --
 Dmitry Borodaenko





 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com





 --
 Mike Scherbakov
 #mihgen





Re: [openstack-dev] [qa] Selecting Compute tests by driver/hypervisor

2014-04-22 Thread Sean Dague
On 04/21/2014 06:52 PM, Daryl Walleck wrote:
 I nearly opened a spec for this, but I’d really like to get some
 feedback first. One of the challenges I’ve seen lately for Nova teams
 not using KVM or Xen (Ironic and LXC are just a few) is how to properly
 run the subset of Compute tests that will run for their hypervisor or
 driver. Regexes are what Ironic went with, but I’m not sure how well
 that will work long term since it’s very much dependent on naming
 conventions. The good thing is that the capabilities for each
 hypervisor/driver are well defined
 (https://wiki.openstack.org/wiki/HypervisorSupportMatrix), so it’s just
 a matter of how to convey that information. I see a few ways forward
 from here:
 
  
 
 1.   Expand the compute_features_group config section to include all
 Compute actions and make sure all tests that require specific
 capabilities have skipIfs or raise a skipException. This option seems like
 it would require the least work within Tempest, but the size of the
 config will continue to grow as more Nova actions are added.
 
 2.   Create a new decorator class like was done with service tags
 that defines what drivers the test does or does not work for, and have
 the definitions of the different driver capabilities be referenced by
 the decorator. This is nice because it gets rid of the config creep, but
 it’s also yet another decorator, which may not be desirable.
 
  
 
 I’m going to continue working through both of these possibilities, but
 any feedback either solution would be appreciated.

Ironic mostly went with regexes for expediency to get something gating
before their driver actually implements the requirements for the compute
API.

Nova API is Nova API; the compute driver should be irrelevant. The part
that is optional is specified by extensions (at the granularity level of
an extension enable/disable). Creating all the knobs that are optional
for extensions is good, and we're definitely not there yet. However, if
an API behaves differently based on compute driver, that's a problem
with that compute driver.

I realize today that we're not there yet, but we have to be headed in
that direction. The diagnostics API was an instance where this was
pretty bad, and meant it was in no way an API, because the client had no
idea what data payload it was getting back.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [qa] Selecting Compute tests by driver/hypervisor

2014-04-22 Thread Daniel P. Berrange
On Mon, Apr 21, 2014 at 10:52:39PM +, Daryl Walleck wrote:
 I nearly opened a spec for this, but I'd really like to get some
 feedback first. One of the challenges I've seen lately for Nova
 teams not using KVM or Xen (Ironic and LXC are just a few) is
 how to properly run the subset of Compute tests that will run for
 their hypervisor or driver. Regexes are what Ironic went with,
 but I'm not sure how well that will work long term since it's
 very much dependent on naming conventions. The good thing is
 that the capabilities for each hypervisor/driver are well defined
 (https://wiki.openstack.org/wiki/HypervisorSupportMatrix), so
 it's just a matter of how to convey that information. I see a
 few ways forward from here:
 
 1.   Expand the compute_features_group config section to include
 all Compute actions and make sure all tests that require specific
 capabilities have skipIfs or raise a skipException. This option seems like
 it would require the least work within Tempest, but the size of the
 config will continue to grow as more Nova actions are added.
 
 2.   Create a new decorator class like was done with service tags
 that defines what drivers the test does or does not work for, and have
 the definitions of the different driver capabilities be referenced by
 the decorator. This is nice because it gets rid of the config creep,
 but it's also yet another decorator, which may not be desirable.
 
 I'm going to continue working through both of these possibilities,
 but any feedback either solution would be appreciated.

It strikes me that if the test suites have a problem determining
support status of APIs with the currently active driver, then the
applications using OpenStack will likely suffer the same problem.
Given that, it'd be desirable to ensure we can solve it in a general
way, rather than only considering the test suite's needs.

On the Nova side of things, I think it would be important to ensure
that there is a single OperationNotSupported exception that is
always raised when the API tries to exercise a feature that is not
available with a specific hypervisor driver. If a test case in the
test suite ever receives OperationNotSupported it could then just
mark that test case as skipped rather than having exception propagate
to result in a fail.  To me the nice thing about such an approach is
that you do not need to ever maintain a matrix of supported features,
as the test suite would just do the right thing whenever the driver is
updated.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Nova][Neutron][Cinder][Heat]Should we support tags for os resources?

2014-04-22 Thread Thomas Spatzier
Jay Pipes jaypi...@gmail.com wrote on 20/04/2014 15:05:51:

 From: Jay Pipes jaypi...@gmail.com
 To: openstack-dev@lists.openstack.org
 Date: 20/04/2014 15:07
 Subject: Re: [openstack-dev] [Nova][Neutron][Cinder][Heat]Should we
 support tags for os resources?

 On Sun, 2014-04-20 at 08:35 +, Huangtianhua wrote:
  Hi all:
 
  Currently, the EC2 API of OpenStack only has tags support (metadata)
  for instances. And there is already a blueprint to add support
  for volumes and volume snapshots using “metadata”.
 
  There are a lot of resources, such as
  image/subnet/securityGroup/networkInterface(port), for which AWS
  supports adding tags.
 
  I think we should support tags for these resources. There may be no
  property “metadata” for these resources, so we should add
  “metadata” to support the resource tags, and change the related APIs.

 Hi Tianhua,

 In OpenStack, generally, the choice was made to use maps of key/value
 pairs instead of lists of strings (tags) to annotate objects exposed in
 the REST APIs. OpenStack REST APIs inconsistently call these maps of
 key/value pairs:

  * properties (Glance, Cinder Image, Volume respectively)
  * extra_specs (Nova InstanceType)
  * metadata (Nova Instance, Aggregate and InstanceGroup, Neutron)
  * metadetails (Nova Aggregate and InstanceGroup)
  * system_metadata (Nova Instance -- differs from normal metadata in
 that the key/value pairs are 'owned' by Nova, not a user...)

 Personally, I think tags are a cleaner way of annotating objects when
 the annotation is coming from a normal user. Tags represent by far the
 most common way for REST APIs to enable user-facing annotation of
 objects in a way that is easy to search on. I'd love to see support for
 tags added to any searchable/queryable object in all of the OpenStack
 APIs.

Fully agree. Tags should be something a normal end user can use to make the
objects he is working with searchable for his purposes.
And this is likely something different from system-controlled properties
that _all_ users (not the one specific end user) can rely on.


 I'd also like to see cleanup of the aforementioned inconsistencies in
 how maps of key/value pairs are both implemented and named throughout
 the OpenStack APIs. Specifically, I'd like to see this implemented in
 the next major version of the Compute API:

+1 on making this uniform across the various projects. This would make it
much more intuitive.


  * Removal of the metadetails term
   * All key/value pairs can only be changed by users with elevated
  privileges, i.e. they are system-controlled (normal users should use tags)

+1 on this, because this would be data that other users or projects rely on
- see also my use case below.

  * Call all these key/value pair combinations properties --
 technically, metadata is data about data, like the size of an
 integer. These key/value pairs are just data, not data about data.

+1 on properties

  * Identify key/value pairs that are relied on by all of Nova to be a
 specific key and value combination, and make these things actual real
 attributes on some object model -- since that is a much greater guard
 for the schema of an object and enables greater performance by allowing
 both type safety of the underlying data and removes the need to search
 by both a key and a value.

Makes a lot of sense to me. So are you suggesting to have a set of
well-defined property names per resource but still store them in the
properties name-value map? Or would you rather make those part of the
resource schema?

BTW, here is a use case in the context of which we have been thinking about
that topic: we opened a BP for allowing constraint based selection of
images for Heat templates, i.e. instead of saying something like (using
pseudo template language)

image ID must be in [fedora-19-x86_64, fedora-20-x86_64]

say something like

architecture must be x86_64, distro must be fedora, version must be
between 19 and 20

(see also [1]).

This of course would require the existence of well-defined properties in
glance so an image selection query in Heat can work.
As long as properties are just custom properties, we would require a lot
of discipline from everyone to maintain properties correctly. And the
implementation in Heat could be kind of tolerant, i.e. give it a try, and
if the query fails just fail the stack creation. But if this is likely to
happen in 90% of all environments, the usefulness is questionable.

Here is a link to the BP I mentioned:
[1]
https://blueprints.launchpad.net/heat/+spec/constraint-based-flavors-and-images
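(A rough sketch of the selection logic such a query implies; the image
records and property names below are illustrative, not the actual glance
client API:)

```python
def select_image(images, architecture, distro, min_version, max_version):
    """Return the first image whose well-defined properties satisfy the
    constraints, or None if no image matches (the caller could then
    fail the stack creation, as described above)."""
    for image in images:
        props = image.get('properties', {})
        try:
            version = float(props.get('version'))
        except (TypeError, ValueError):
            continue  # image lacks a usable 'version' property
        if (props.get('architecture') == architecture
                and props.get('distro') == distro
                and min_version <= version <= max_version):
            return image
    return None
```

This only works if the properties are maintained consistently, which is
exactly the discipline problem raised above.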

Regards,
Thomas


 Best,
 -jay





Re: [openstack-dev] [qa] Selecting Compute tests by driver/hypervisor

2014-04-22 Thread Sean Dague
On 04/22/2014 06:55 AM, Daniel P. Berrange wrote:
 On Mon, Apr 21, 2014 at 10:52:39PM +, Daryl Walleck wrote:
 I nearly opened a spec for this, but I'd really like to get some
 feedback first. One of the challenges I've seen lately for Nova
 teams not using KVM or Xen (Ironic and LXC are just a few) is
 how to properly run the subset of Compute tests that will run for
 their hypervisor or driver. Regexes are what Ironic went with,
 but I'm not sure how well that will work long term since it's
 very much dependent on naming conventions. The good thing is
 that the capabilities for each hypervisor/driver are well defined
 (https://wiki.openstack.org/wiki/HypervisorSupportMatrix), so
 it's just a matter of how to convey that information. I see a
 few ways forward from here:

 1.   Expand the compute_features_group config section to include
 all Compute actions and make sure all tests that require specific
  capabilities have skipIfs or raise a skipException. This option seems like
 it would require the least work within Tempest, but the size of the
 config will continue to grow as more Nova actions are added.

 2.   Create a new decorator class like was done with service tags
 that defines what drivers the test does or does not work for, and have
 the definitions of the different driver capabilities be referenced by
 the decorator. This is nice because it gets rid of the config creep,
 but it's also yet another decorator, which may not be desirable.

 I'm going to continue working through both of these possibilities,
 but any feedback either solution would be appreciated.
 
 It strikes me that if the test suites have a problem determining
 support status of APIs with the currently active driver, then the
 applications using OpenStack will likely suffer the same problem.
 Given that it'd be desirable to ensure we can solve it in a general
 way, rather than only consider the test suite needs.
 
 On the Nova side of things, I think it would be important to ensure
 that there is a single OperationNotSupported exception that is
 always raised when the API tries to exercise a feature that is not
 available with a specific hypervisor driver. If a test case in the
 test suite ever receives OperationNotSupported it could then just
 mark that test case as skipped rather than having exception propagate
 to result in a fail.  To me the nice thing about such an approach is
 that you do not need to ever maintain a matrix of supported features,
 as the test suite would just do the right thing whenever the driver is
 updated.

Agreed. Though I think we probably want the Nova API to be explicit
about which parts of the API are allowed to throw a Not Supported,
because I don't think it's a blanket OK. On API endpoints where it is
OK, we can convert Not Supported into a skip.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [horizon][sahara] Merging Sahara-UI Dashboard code into horizon

2014-04-22 Thread Jaromir Coufal

Hey Chad,

thank you very much for starting this thread.

Let me start with a short introduction to my thoughts about the OpenStack 
Dashboard's direction and our latest work there. I am working towards the 
higher goal of making the OpenStack UI a stable, simple and coherent 
story for our end users, which means that it follows their workflows and 
enables them to be effective and happy when using it.


Historically, the UI somewhat resembled admin views, showing models 
and their objects in tables. We are improving here.


One part leading towards the high-level goal is the information 
architecture reorganization which we started at the last summit; it got 
blocked a little by the RBAC implementation, though it is still an 
ongoing process.


My concern is that the UI is starting to become a place where other 
projects just add another dashboard or panel in order to get incubated. 
I don't want this to happen, so I would like to put more thought into 
where the Sahara functionality belongs and how it works in relationship 
with the functionality and projects which are already in.


To understand Sahara better, I read the wiki page and various 
documentation and watched screencasts. Going through all of the materials 
raised various questions, which might be caused by my lack of knowledge 
about Sahara. So in the following paragraphs, I would like to pose some 
questions (some might be rhetorical) and suggest a few solutions to 
improve the placement of Sahara features.


1) What is the relationship of Sahara to Heat? Isn't Heat supposed to 
deal with provisioning such a group of VMs? Is a Hadoop cluster somehow 
so different from other stacks that it can't be handled by Heat?


The reason why I am asking is that it looks to me like what you are 
doing with Sahara in the first phase is actually designing your cluster 
with specific node types, saving it as a template (which only 
specifies which node types and how many instances should run) and then 
provisioning this cluster. To me it looks more or less the same as how 
Heat is supposed to work:

* Node Type == Heat Resource (Type)
* Cluster Template == Heat Template
* Cluster == Heat Stack

Therefore I am a little hesitant to add more views which serve a 
similar purpose. If they are different, can you please be a bit more 
specific about the differences?


2) Many of the views (menu items) mostly reflect the data model, but I 
don't think there need to be that many of them.


Current views:
* Clusters, Cluster Templates, Node Group Templates, Job Executions, 
Jobs, Job Binaries, Data Sources, Image Registry, Plugins


Suggested structure A:
* Orchestration
- Clusters / Stacks (one view including Clusters, Cluster Templates)
- Node Group Templates
* Data Processing
- Overview
- Jobs (one view including Job Executions, Jobs, Job Binaries)
- Data Sources
? Image Registry - shouldn't this be part of the already existing Images 
catalog (why separate them?)
? Plugins - Do they have to be managed via the UI? Can't they all be 
enabled by default and configured somewhere else in the preferences? I 
think it will confuse users to manage plugins from the top-level main 
navigation - it's not something the user visits regularly.


The question here is whether Clusters, Cluster Templates and Node Group 
Templates aren't supposed to be reflected somehow in Heat, since Heat is 
the tool for Orchestration and it already deals with that. Because if we 
do this, then Clusters look like a duplication of Heat Stacks (already 
described above in paragraph #1).


If the answer is no, then suggested structure B:
* Data Processing (specific only for Hadoop):
   - Clusters
   - Node Group Templates
   - Jobs
   - Data Sources

I lean towards suggestion A, as far as I understand Sahara.


Few comments inline:

On 2014/17/04 21:06, Chad Roberts wrote:

Per blueprint  
https://blueprints.launchpad.net/horizon/+spec/merge-sahara-dashboard we are 
merging the Sahara Dashboard UI code into the Horizon code base.

Over the last week, I have been working on making this merge happen and along 
the way some interesting questions have come up.  Hopefully, together we can 
make the best possible decisions.

Sahara is the Data Processing platform for OpenStack.  During incubation and prior to 
that, a Horizon dashboard plugin was developed to work with the data processing API.  Our 
original implementation was a separate dashboard that we would activate by adding to 
HORIZON_CONFIG and INSTALLED_APPS.  The layout gave us a root of Sahara on 
the same level as Admin and Project.  Under Sahara, we have 9 panels that make-up the 
entirety of the functionality for the Sahara dashboard.

Over the past week there seem to be at least 2 questions that have come up.  
I'd like to get input from anyone interested.

1)  Where should the functionality live within the Horizon UI? So far, 2 
options have been presented.
 a)  In a separate dashboard 

[openstack-dev] [sahara] apache hive no longer distributing 0.11.0 - image-elements don't build

2014-04-22 Thread Matthew Farrellee

https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-ubuntu/d37fe82/console.html
https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-fedora/da83f57/console.html
https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-centos/27d14d1/console.html

all fail with...

--2014-04-21 22:35:45-- 
http://www.apache.org/dist/hive/hive-0.11.0/hive-0.11.0-bin.tar.gz
Resolving www.apache.org (www.apache.org)... 192.87.106.229, 
140.211.11.131, 2001:610:1:80bc:192:87:106:229
Connecting to www.apache.org (www.apache.org)|192.87.106.229|:80... 
connected.

HTTP request sent, awaiting response... 404 Not Found
2014-04-21 22:35:45 ERROR 404: Not Found.

it looks like the 0.11.0 tarball is no more (Apache moves old releases 
off the main mirror; an archived copy should still exist under 
http://archive.apache.org/dist/hive/),

http://www.apache.org/dist/hive/

it'll take me a day or two to build up an image w/ 0.12.0 (or better: 
0.13.0) and see if it works so we can upgrade the dib scripts.


if someone from mirantis wants to cache a copy of 0.11.0 on 
sahara-files, update the url and file a bug about updating to 0.13.0, 
i'll fast track a +2/+A.


best,


matt



Re: [openstack-dev] [qa] Selecting Compute tests by driver/hypervisor

2014-04-22 Thread Russell Bryant
On 04/22/2014 07:23 AM, Sean Dague wrote:
 On 04/22/2014 06:55 AM, Daniel P. Berrange wrote:
 On Mon, Apr 21, 2014 at 10:52:39PM +, Daryl Walleck wrote:
 I nearly opened a spec for this, but I'd really like to get
 some feedback first. One of the challenges I've seen lately for
 Nova teams not using KVM or Xen (Ironic and LXC are just a few)
 is how to properly run the subset of Compute tests that will
 run for their hypervisor or driver. Regexes are what Ironic
 went with, but I'm not sure how well that will work long term
 since it's very much dependent on naming conventions. The good
 thing is that the capabilities for each hypervisor/driver are
 well defined 
 (https://wiki.openstack.org/wiki/HypervisorSupportMatrix), so 
 it's just a matter of how to convey that information. I see a 
 few ways forward from here:
 
 1.   Expand the compute_features_group config section to
 include all Compute actions and make sure all tests that
 require specific capabilities have skipIfs or raise a
 skipException. This option seems like it would require the least
 work within Tempest, but the size of the config will continue
 to grow as more Nova actions are added.
 
 2.   Create a new decorator class like was done with
 service tags that defines what drivers the test does or does
 not work for, and have the definitions of the different driver
 capabilities be referenced by the decorator. This is nice
 because it gets rid of the config creep, but it's also yet
 another decorator, which may not be desirable.
 
 I'm going to continue working through both of these
 possibilities, but any feedback either solution would be
 appreciated.
 
 It strikes me that if the test suites have a problem determining 
 support status of APIs with the currently active driver, then
 the applications using OpenStack will likely suffer the same
 problem. Given that it'd be desirable to ensure we can solve it
 in a general way, rather than only consider the test suite
 needs.
 
 On the Nova side of things, I think it would be important to
 ensure that there is a single OperationNotSupported exception
 that is always raised when the API tries to exercise a feature
 that is not available with a specific hypervisor driver. If a
 test case in the test suite ever receives OperationNotSupported
 it could then just mark that test case as skipped rather than
 having exception propagate to result in a fail.  To me the nice
 thing about such an approach is that you do not need to ever
 maintain a matrix of supported features, as the test suite
 would just do the right thing whenever the driver is updated.
 
 Agreed. Though I think we probably want the Nova API to be
 explicit about what parts of the API it's ok to throw a Not
 Supported. Because I don't think it's a blanket ok. On API
 endpoints where this is ok, we can convert not supported to a
 skip.

Definitely agreed with Dan's points here.

We already raise NotImplementedError for this in the code.  Assuming
it makes it all the way back up to the API, it should be converted to
a 501 Not Implemented response.

It doesn't look like this is handled in a general way in the API code.
 Each API extension that supports this is handling it manually.
However, it is used a lot.  A quick grep in nova/api shows 50 cases of
raising webob.exc.HTTPNotImplemented.

It would be nice to identify the specific cases where this isn't
working as expected.
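To illustrate the general handling Russell describes, here is a simplified sketch (not Nova's actual code) of converting NotImplementedError into a 501 response once, in a common wrapper, instead of each extension raising webob.exc.HTTPNotImplemented by hand; the handler names and the (status, body) return shape are made up for illustration:

```python
# Simplified sketch: map NotImplementedError from a driver to a uniform
# 501 Not Implemented response in one place.
def fault_wrapper(handler):
    """Wrap an API handler; convert NotImplementedError to a 501 response."""
    def wrapped(*args, **kwargs):
        try:
            return 200, handler(*args, **kwargs)
        except NotImplementedError as exc:
            # The driver lacks this capability: surface it uniformly.
            return 501, "Not Implemented: %s" % exc
    return wrapped

@fault_wrapper
def suspend_server(server_id):
    # Stand-in for an API method whose driver lacks the feature.
    raise NotImplementedError("suspend not supported by this driver")

@fault_wrapper
def list_servers():
    return ["server-1"]
```

With this shape, extensions would not need per-endpoint 501 handling; the wrapper does it for all of them.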

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] wrap_instance_event() swallows return codes....on purpose?

2014-04-22 Thread Russell Bryant
On 04/21/2014 06:01 PM, Chris Friesen wrote:
 Hi all,
 
 In compute/manager.py the function wrap_instance_event() just calls
 function().
 
 This means that if it's used to decorate a function that returns a
 value, then the caller will never see the return code.
 
 Is this a bug, or is the expectation that we would only ever use this
 wrapper for functions that don't return a value?

Looks like a bug to me.  Nice catch.

Want to submit a patch for this?
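A minimal reproduction of the bug and the one-line fix might look like the following; the bookkeeping the real decorator does around the call is elided:

```python
import functools

# The decorator must return the decorated function's result; the reported
# bug is that it called function() without 'return', so callers always
# saw None.
def wrap_instance_event(function):
    @functools.wraps(function)
    def decorated_function(self, context, *args, **kwargs):
        # ... event start/finish bookkeeping elided ...
        return function(self, context, *args, **kwargs)  # 'return' is the fix
    return decorated_function

class Manager:
    @wrap_instance_event
    def reboot_instance(self, context):
        return "rebooted"
```

Without the `return`, `Manager().reboot_instance(ctx)` would evaluate to `None` even though the wrapped method returns a value.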

-- 
Russell Bryant



Re: [openstack-dev] [qa] Selecting Compute tests by driver/hypervisor

2014-04-22 Thread Daniel P. Berrange
On Tue, Apr 22, 2014 at 07:23:20AM -0400, Sean Dague wrote:
 On 04/22/2014 06:55 AM, Daniel P. Berrange wrote:
  On Mon, Apr 21, 2014 at 10:52:39PM +, Daryl Walleck wrote:
  I nearly opened a spec for this, but I'd really like to get some
  feedback first. One of the challenges I've seen lately for Nova
  teams not using KVM or Xen (Ironic and LXC are just a few) is
  how to properly run the subset of Compute tests that will run for
  their hypervisor or driver. Regexes are what Ironic went with,
  but I'm not sure how well that will work long term since it's
  very much dependent on naming conventions. The good thing is
  that the capabilities for each hypervisor/driver are well defined
  (https://wiki.openstack.org/wiki/HypervisorSupportMatrix), so
  it's just a matter of how to convey that information. I see a
  few ways forward from here:
 
  1.   Expand the compute_features_group config section to include
  all Compute actions and make sure all tests that require specific
  capabilities have skipIfs or raise a skipException. This option seems
  like it would require the least work within Tempest, but the size of the
  config will continue to grow as more Nova actions are added.
 
  2.   Create a new decorator class like was done with service tags
  that defines what drivers the test does or does not work for, and have
  the definitions of the different driver capabilities be referenced by
  the decorator. This is nice because it gets rid of the config creep,
  but it's also yet another decorator, which may not be desirable.
 
  I'm going to continue working through both of these possibilities,
  but any feedback on either solution would be appreciated.
  
  It strikes me that if the test suites have a problem determining
  support status of APIs with the currently active driver, then the
  applications using OpenStack will likely suffer the same problem.
  Given that, it'd be desirable to ensure we can solve it in a general
  way, rather than only considering the test suite's needs.
  
  On the Nova side of things, I think it would be important to ensure
  that there is a single OperationNotSupported exception that is
  always raised when the API tries to exercise a feature that is not
  available with a specific hypervisor driver. If a test case in the
  test suite ever receives OperationNotSupported it could then just
  mark that test case as skipped rather than having the exception propagate
  to result in a failure.  To me the nice thing about such an approach is
  that you do not need to ever maintain a matrix of supported features,
  as the test suite would just do the right thing whenever the driver is
  updated.
 
 Agreed. Though I think we probably want the Nova API to be explicit
 about what parts of the API it's ok to throw a Not Supported. Because I
 don't think it's a blanket ok. On API endpoints where this is ok, we can
 convert not supported to a skip.

Yep, this ties into a discussion I recall elsewhere about specifying
exactly which parts of the Nova API are considered mandatory features
for drivers to implement.
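The skip-on-NotSupported idea from this thread can be sketched as follows; the fake client stands in for a real Nova client, and the exception type is illustrative (a real test would key off an HTTP 501 response):

```python
import unittest

# A driver "not supported" signal becomes a skip, not a failure, so the
# test suite never needs to maintain a per-driver feature matrix.
class FakeComputeClient:
    def suspend_server(self, server_id):
        raise NotImplementedError("driver does not support suspend")

class ServerActionsTest(unittest.TestCase):
    def test_suspend(self):
        client = FakeComputeClient()
        try:
            client.suspend_server("some-id")
        except NotImplementedError as exc:
            self.skipTest(str(exc))  # marked skipped rather than failed
```

Running this against a driver without suspend support records a skip; against a capable driver, the test body runs normally.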


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Neutron] Fix migration that breaks Grenade jobs

2014-04-22 Thread Salvatore Orlando
When I initially spoke to the infra team regarding this problem, they
suggested that just fixing migrations so that the job could run was not a
real option.
I tend to agree with this statement.
However, I'm open to options for getting grenade going until the migration
problem is solved. Ugly workarounds might be fine, as long as we don't
do anything that a real deployer would never do.

Personally, I still think the best way for getting grenade to work is to
ensure previous_rev and current_rev have the same configuration. For the
havana/icehouse upgrade, this will mean that devstack for icehouse should
not add the metering plugin. Jakub and I are overdue for a discussion on
whether this would be feasible or not.

Change 87935 is acceptable as a fix for that specific migration.
However, it does not fix the general issue, whose root cause is that
currently the state of the neutron database depends on configuration
settings; migrations are therefore idempotent only as long as the plugin
configuration is not changed, which is not the case here.

Salvatore
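For illustration, the idea behind the fix under review (create the metering tables unconditionally and idempotently, so the schema no longer depends on which service plugins were enabled when earlier migrations ran) can be sketched as follows; sqlite3 stands in for the real alembic/SQLAlchemy machinery, and the column lists are abridged:

```python
import sqlite3

# Idempotent, configuration-independent table creation: running the
# migration twice, or on a database where the plugin was never enabled,
# leaves the same schema either way.
def upgrade(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS meteringlabels ("
        "id TEXT PRIMARY KEY, name TEXT, description TEXT)"
    )
    conn.execute(
        "CREATE TABLE IF NOT EXISTS meteringlabelrules ("
        "id TEXT PRIMARY KEY, "
        "metering_label_id TEXT REFERENCES meteringlabels(id))"
    )
```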


On 22 April 2014 11:14, Jakub Libosvar libos...@redhat.com wrote:

 On 04/22/2014 10:53 AM, Anna Kamyshnikova wrote:
  Hello everyone!
 
 I'm working on fixing bug 1307344. I found a solution that will fix
 Grenade jobs and will work for online and offline migrations.
  https://review.openstack.org/87935 But I faced the problem that Metering
  usage won't be fixed as we need to create 2 tables (meteringlabels,
  meteringlabelrules). I tried to create both in patch set #7 but it won't
  work for offline migration. In fact to fix Grenade it is enough to
  create meteringlabels table, that is done in my change in the last patch
  set #8. I want to ask reviewers to take a look at this change and
  suggest something or approve it. I'm available on IRC(akamyshnikova) or
  by email.
 
  Regards
  Ann
 Hi Ann,

 Good suggestion for how to get out of the failing job, but I don't think it
 should go into 33c3db036fe4_set_length_of_description_field_metering.py,
 because this failure is Grenade-specific while the real issue is the fact
 that we're not able to add a new service plugin to an already deployed Neutron.

 I think the same workaround you proposed in the 87935 review should go
 to grenade itself (from-havana/upgrade-neutron script) just to have the
 job working on the havana-icehouse upgrade. It's a bit of an ugly
 workaround, but imho so far the best solution for reaching a stable job
 in a short time.

 Kuba

 
 
 





Re: [openstack-dev] [qa] Selecting Compute tests by driver/hypervisor

2014-04-22 Thread Russell Bryant
On 04/22/2014 08:35 AM, Daniel P. Berrange wrote:
 On Tue, Apr 22, 2014 at 07:23:20AM -0400, Sean Dague wrote:
 On 04/22/2014 06:55 AM, Daniel P. Berrange wrote:
 On Mon, Apr 21, 2014 at 10:52:39PM +, Daryl Walleck wrote:
 I nearly opened a spec for this, but I'd really like to get some
 feedback first. One of the challenges I've seen lately for Nova
 teams not using KVM or Xen (Ironic and LXC are just a few) is
 how to properly run the subset of Compute tests that will run for
 their hypervisor or driver. Regexes are what Ironic went with,
 but I'm not sure how well that will work long term since it's
 very much dependent on naming conventions. The good thing is
 that the capabilities for each hypervisor/driver are well defined
 (https://wiki.openstack.org/wiki/HypervisorSupportMatrix), so
 it's just a matter of how to convey that information. I see a
 few ways forward from here:

 1.   Expand the compute_features_group config section to include
 all Compute actions and make sure all tests that require specific
 capabilities have skipIfs or raise a skipException. This option seems
 like it would require the least work within Tempest, but the size of the
 config will continue to grow as more Nova actions are added.

 2.   Create a new decorator class like was done with service tags
 that defines what drivers the test does or does not work for, and have
 the definitions of the different driver capabilities be referenced by
 the decorator. This is nice because it gets rid of the config creep,
 but it's also yet another decorator, which may not be desirable.

 I'm going to continue working through both of these possibilities,
 but any feedback on either solution would be appreciated.

 It strikes me that if the test suites have a problem determining
 support status of APIs with the currently active driver, then the
 applications using OpenStack will likely suffer the same problem.
 Given that, it'd be desirable to ensure we can solve it in a general
 way, rather than only considering the test suite's needs.

 On the Nova side of things, I think it would be important to ensure
 that there is a single OperationNotSupported exception that is
 always raised when the API tries to exercise a feature that is not
 available with a specific hypervisor driver. If a test case in the
 test suite ever receives OperationNotSupported it could then just
 mark that test case as skipped rather than having the exception propagate
 to result in a failure.  To me the nice thing about such an approach is
 that you do not need to ever maintain a matrix of supported features,
 as the test suite would just do the right thing whenever the driver is
 updated.

 Agreed. Though I think we probably want the Nova API to be explicit
 about what parts of the API it's ok to throw a Not Supported. Because I
 don't think it's a blanket ok. On API endpoints where this is ok, we can
 convert not supported to a skip.
 
 Yep, this ties into a discussion I recall elsewhere about specifying
 exactly which parts of the Nova API are considered mandatory features
 for drivers to implement.

I put that in as a proposal to discuss at the design summit.

http://summit.openstack.org/cfp/details/55

-- 
Russell Bryant



Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-22 Thread Salvatore Orlando
From previous requirements discussions, I recall that:
- A control plane outage is unavoidable (I think everybody agrees here)
- Data plane outages should be avoided at all costs; small l3 outages
deriving from the transition to the l3 agent from the network node might be
allowed.

However, an L2 data plane outage on the instance NIC, albeit small, would
probably still cause existing TCP connections to be terminated.
I'm not sure if this can be accepted; however, if there is no way to avoid
it, we should probably consider tolerating it.

It would be good to know what kind of modifications the NIC needs; perhaps
no data plane downtime is needed.
Regarding the libvirt version, I think it's ok to have no-downtime migrations
only for deployments running at least a certain version of libvirt.

Salvatore


On 21 April 2014 13:18, Akihiro Motoki mot...@da.jp.nec.com wrote:


 (2014/04/21 18:10), Oleg Bondarev wrote:


 On Fri, Apr 18, 2014 at 9:10 PM, Kyle Mestery 
  mest...@noironetworks.com wrote:

 On Fri, Apr 18, 2014 at 8:52 AM, Oleg Bondarev obonda...@mirantis.com
 wrote:
  Hi all,
 
  While investigating possible options for Nova-network to Neutron
 migration
  I faced a couple of issues with libvirt.
  One of the key requirements for the migration is that instances should
 stay
  running and don't need restarting. In order to meet this requirement we
 need
  to either attach new nic to the instance or update existing one to plug
 it
  to the Neutron network.
 
  Thanks for looking into this Oleg! I just wanted to mention that if
 we're trying to plug a new NIC into the VM, this will likely require
 modifications in the guest. The new NIC will likely have a new PCI ID,
 MAC, etc., and thus the guest would have to switch to this. Therefore,
 I think it may be better to try and move the existing NIC from a nova
 network onto a neutron network.


  Yeah, I agree that modifying the existing NIC is the preferred way.


 Thanks for investigating ways of migrating from nova-network to neutron.
 I think we need to define the levels of the migration.
 We can't satisfy all requirements at the same time, so we need to
 determine/clarify
 some reasonable limitations on the migration.

 - datapath downtime
   - no downtime
   - a small period of downtime
   - rebooting an instnace
 - API and management plane downtime
 - Combination of the above

 I think modifying the existing NIC requires plugging and unplugging a
 device in some way (plug/unplug a network interface to the VM? move a tap
 device from the nova-network bridge to the neutron bridge?). It leads to a
 small downtime. On the other hand, adding a new interface requires the
 guest to deal with network migration (though it can potentially provide
 no-downtime migration at the infra level).
 IMO a small downtime can be accepted in cloud use cases and it is a good
 starting point.

 Thanks,
 Akihiro




  So what I've discovered is that attaching a new network device is only
  applied
  on the instance after reboot although VIR_DOMAIN_AFFECT_LIVE flag is
 passed
  to
  the libvirt call attachDeviceFlags():
 
 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1412
  Is that expected? Are there any other options to apply new nic without
  reboot?
 
  I also tried to update existing nic of an instance by using libvirt
  updateDeviceFlags() call,
  but it fails with the following:
  'this function is not supported by the connection driver: cannot modify
  network device configuration'
  The libvirt API spec (http://libvirt.org/hvsupport.html) shows 0.8.0 as
  the minimal qemu version for the virDomainUpdateDeviceFlags call; kvm
  --version on
 my
  setup shows
  'QEMU emulator version 1.0 (qemu-kvm-1.0)'
  Could someone please point what am I missing here?
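For reference, the live-attach path being discussed can be sketched with the libvirt Python bindings. Only the XML builder is exercised here; the attach itself needs a running libvirt domain, and the MAC/bridge values are made up for illustration:

```python
# Build the <interface> device XML that attachDeviceFlags() expects for a
# bridged NIC, then apply it to the live domain.
def build_interface_xml(mac, bridge, model="virtio"):
    """Return libvirt <interface> device XML for a bridged NIC."""
    return (
        f"<interface type='bridge'>"
        f"<mac address='{mac}'/>"
        f"<source bridge='{bridge}'/>"
        f"<model type='{model}'/>"
        f"</interface>"
    )

def attach_nic_live(dom, mac, bridge):
    import libvirt  # requires the libvirt Python bindings
    xml = build_interface_xml(mac, bridge)
    # AFFECT_LIVE applies to the running guest; adding AFFECT_CONFIG also
    # persists the device across reboots, so it survives a restart.
    flags = libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG
    dom.attachDeviceFlags(xml, flags)
```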
 
  What does libvirtd -V show for the libvirt version? On my Fedora 20
 setup, I see the following:

 [kmestery@fedora-mac neutron]$ libvirtd -V
 libvirtd (libvirt) 1.1.3.4
 [kmestery@fedora-mac neutron]$


  On my Ubuntu 12.04 it shows:
   $ libvirtd --version
  libvirtd (libvirt) 0.9.8


 Thanks,
 Kyle

  Any help on the above is much appreciated!
 
  Thanks,
  Oleg
 
 
 











Re: [openstack-dev] [sahara] apache hive no longer distributing 0.11.0 - image-elements don't build

2014-04-22 Thread Matthew Farrellee

On 04/22/2014 08:21 AM, Matthew Farrellee wrote:

https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-ubuntu/d37fe82/console.html

https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-fedora/da83f57/console.html

https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-centos/27d14d1/console.html


all fail with...

--2014-04-21 22:35:45--
http://www.apache.org/dist/hive/hive-0.11.0/hive-0.11.0-bin.tar.gz
Resolving www.apache.org (www.apache.org)... 192.87.106.229,
140.211.11.131, 2001:610:1:80bc:192:87:106:229
Connecting to www.apache.org (www.apache.org)|192.87.106.229|:80...
connected.
HTTP request sent, awaiting response... 404 Not Found
2014-04-21 22:35:45 ERROR 404: Not Found.

it looks like the 0.11.0 tarball is no more,

http://www.apache.org/dist/hive/

it'll take me a day or two to build up an image w/ 0.12.0 (or better:
0.13.0) and see if it works so we can upgrade the dib scripts.

if someone from mirantis wants to cache a copy of 0.11.0 on
sahara-files, update the url and file a bug about updating to 0.13.0,
i'll fast track a +2/+A.

best,


matt


fyi - i found a copy of hive 0.11.0 in archive.apache.org and have filed,

https://review.openstack.org/#/c/89561/

best,


matt
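The approach behind the fix (fall back to archive.apache.org, which keeps old releases after dist/ drops them) can be sketched as follows; the function names and the pluggable opener are illustrative, not the actual dib element code:

```python
import urllib.request

# apache.org/dist only carries current releases; archive.apache.org keeps
# everything. Try the live mirror first, then fall back to the archive.
def hive_urls(version):
    tarball = "hive-%s-bin.tar.gz" % version
    return [
        "http://www.apache.org/dist/hive/hive-%s/%s" % (version, tarball),
        "http://archive.apache.org/dist/hive/hive-%s/%s" % (version, tarball),
    ]

def fetch_hive(version, opener=urllib.request.urlopen):
    last_error = None
    for url in hive_urls(version):
        try:
            return url, opener(url)  # first URL that answers wins
        except OSError as exc:
            last_error = exc  # e.g. the 404 seen in the build logs
    raise last_error
```

The `opener` parameter exists so the fallback logic can be exercised without network access.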



Re: [openstack-dev] [sahara] apache hive no longer distributing 0.11.0 - image-elements don't build

2014-04-22 Thread Sergey Lukjanov
Nice catch. As discussed, it's fixed by using archive.apache.org

The change request https://review.openstack.org/#/c/89561/ is approved already.

On Tue, Apr 22, 2014 at 4:21 PM, Matthew Farrellee m...@redhat.com wrote:
 https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-ubuntu/d37fe82/console.html
 https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-fedora/da83f57/console.html
 https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-centos/27d14d1/console.html

 all fail with...

 --2014-04-21 22:35:45--
 http://www.apache.org/dist/hive/hive-0.11.0/hive-0.11.0-bin.tar.gz
 Resolving www.apache.org (www.apache.org)... 192.87.106.229, 140.211.11.131,
 2001:610:1:80bc:192:87:106:229
 Connecting to www.apache.org (www.apache.org)|192.87.106.229|:80...
 connected.
 HTTP request sent, awaiting response... 404 Not Found
 2014-04-21 22:35:45 ERROR 404: Not Found.

 it looks like the 0.11.0 tarball is no more,

 http://www.apache.org/dist/hive/

 it'll take me a day or two to build up an image w/ 0.12.0 (or better:
 0.13.0) and see if it works so we can upgrade the dib scripts.

 if someone from mirantis wants to cache a copy of 0.11.0 on sahara-files,
 update the url and file a bug about updating to 0.13.0, i'll fast track a
 +2/+A.

 best,


 matt




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.



Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-22 Thread Eugene Nikanorov
Hi Brandon,


On Tue, Apr 22, 2014 at 6:58 AM, Brandon Logan
brandon.lo...@rackspace.com wrote:

 Hello Eugene!

 Are you talking about seeing the code in a simplified approach for a
 single create call using the current API objects, or one that uses
 objects created based on the proposal?

I'm talking about actually implementing the single-call API within the
existing code (the LBaaS plugin). Let's see what it takes. It obviously
should not account for each and every case we have in mind, but at least
it should allow existing functionality (single vip, single pool).


 I was experimenting over the weekend on getting a single create call in
 the current API model.  I was able to implement it pretty easy but did
 run into some issues.  Now, since this was just a quick test, and to
 save time I did not implement it the correct way, only a way in which it
 accepted a single create call and did everything else the usual way.  If
 it were actually a blueprint and up for a merge I would have done it the
 proper way (and with everyone else's input).  If you want to see that
  code let me know, it's just on a branch of a fork.  Nothing really much
 to see though, implemented in the easiest way possible.  For what its
 worth though, it did speed up the creation of an actual load balancer by
 75% on average.

I'd prefer to see such a patch on Gerrit.




 The current way to define an extension's resources and objects is using
 a dictionary that defines the resource, object expected for POSTs and
 PUTs, and plugin methods to be implemented.  This dictionary is passed
 to the neutron API controller that does validation, defaulting, and
 checks if an attribute of an object is required and if it can be changed
 on posts and puts.  This currently does not support defaults for 2nd
 level nested dictionary objects, and doesn't support validation,
 defaulting, or required attributes for any nesting level after the
 2nd.

 This can easily be added in obviously (smells like recursion will come
 in handy),

That also smells like a bit of generic work for the Neutron extension
framework, which has limited support for 2nd-level resources right now.

but it should be noted that even the resource and API object
  schema definitions for what we need for a single create call are not
  supported right now.

 Maybe there's some way to allow the extensions to define their own
 validation for their own resources and objects.  That's probably another
 topic for another day, though.
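The recursion alluded to above might be sketched as follows; the spec format and the load-balancer-style example below are made up for illustration, not the actual Neutron attribute map:

```python
# Hypothetical sketch: apply defaults and required-attribute checks at
# any nesting depth, not just the 2nd level, by recursing into sub-specs.
def apply_defaults(spec, data):
    """Fill defaults and enforce required keys, recursing into 'sub' specs."""
    out = dict(data)
    for name, attr in spec.items():
        if name not in out:
            if attr.get("required"):
                raise ValueError("missing required attribute: %s" % name)
            if "default" in attr:
                out[name] = attr["default"]
        if "sub" in attr and isinstance(out.get(name), dict):
            out[name] = apply_defaults(attr["sub"], out[name])
    return out

# A nested spec: pool is required; its nested session_persistence dict
# gets a defaulted type at the 3rd nesting level.
LB_SPEC = {
    "name": {"default": ""},
    "pool": {
        "required": True,
        "sub": {
            "protocol": {"default": "HTTP"},
            "session_persistence": {
                "default": {},
                "sub": {"type": {"default": "SOURCE_IP"}},
            },
        },
    },
}
```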


Yes, I think most extensions provide both additional resources and
additional validation methods.

Thanks,
Eugene.


 On Fri, 2014-04-18 at 17:53 +0400, Eugene Nikanorov wrote:
   3. Could you describe the most complicated use case
   that your single-call API supports? Again, please be
   very specific here.
  Same data can be derived from the link above.
 
 
 
 
  Ok, I'm actually not seeing any complicated examples, but I'm
  guessing that any attributes at the top of the page could be
  expanded on according to the syntax described.
 
 
  Hmmm...  one of the drawbacks I see with a one-call
  approach is you've got to have really good syntax checking for
  everything right from the start, or (if you plan to handle
  primitives one at a time) a really solid roll-back strategy if
  anything fails or has problems, cleaning up any primitives
  that might already have been created before the whole call
  completes.
 
 
  The alternative is to not do this with primitives... but then
  I don't see how that's possible either. (And certainly not
  easy to write tests for:  The great thing about small
  primitives is their methods tend to be easier to unit test.)
 
  These are good arguments! That's why I'd like to actually see the
  code (even a simplified approach could work as a first step); I
  think it could make a lot of things clearer.
 
 
  Thanks,
  Eugene.




Re: [openstack-dev] [Neutron] Fix migration that breaks Grenade jobs

2014-04-22 Thread Jakub Libosvar
On 04/22/2014 02:38 PM, Salvatore Orlando wrote:
 When I initially spoke to the infra team regarding this problem, they
 suggested that just fixing migrations so that the job could run was
 not a real option.
 I tend to agree with this statement.
 However, I'm open to options for getting grenade going until the
 migration problem is solved. Ugly workarounds might be fine, as long as
 we won't be doing anything that a real deployer would never do.
 
 Personally, I still think the best way for getting grenade to work is to
 ensure previous_rev and current_rev have the same configuration. For the
 havana/icehouse upgrade, this will mean that devstack for icehouse
 should not add the metering plugin.

Is it possible to run Icehouse tempest without metering tests?

Kuba

 I and Jakub are overdue a discussion
 on whether this would be feasible or not.
 
 Change 87935 is acceptable as a fix for that specific migration.
 However it does not fix the general issue, where the root cause is that
 currently the state of the neutron database depends on configuration
 settings, and therefore migrations are idempotent as long as the plugin
 configuration is not changed, which is not the case.
 
 Salvatore
 
 
 On 22 April 2014 11:14, Jakub Libosvar libos...@redhat.com
 mailto:libos...@redhat.com wrote:
 
 On 04/22/2014 10:53 AM, Anna Kamyshnikova wrote:
  Hello everyone!
 
  I'm working on fixing bug 1307344. I found out solution that will fix
  Grenade jobs and will work for online and offline migrations.
  https://review.openstack.org/87935 But I faced the problem that
 Metering
  usage won't be fixed as we need to create 2 tables (meteringlabels,
  meteringlabelrules). I tried to create both in patch set #7 but it
 won't
  work for offline migration. In fact to fix Grenade it is enough to
  create meteringlabels table, that is done in my change in the last
 patch
  set #8. I want to ask reviewers to take a look at this change and
  suggest something or approve it. I'm available on
 IRC(akamyshnikova) or
  by email.
 
  Regards
  Ann
 Hi Ann,
 
 Good suggestion how to get out of failing job but I don't think it
 should go to 33c3db036fe4_set_length_of_description_field_metering.py
 because this failure is grenade specific while the real issue is a fact
 that we're not able to add new service plugin to already deployed
 Neutron.
 
 I think the same workaround you proposed in the 87935 review should go
 to grenade itself (from-havana/upgrade-neutron script) just to have the
 job working on havana-icehouse upgrade. It's a bit of ugly workaround
 though but imho so far the best solution to reach stable job in a short
 time.
 
 Kuba
 
 
 
 
 
 
 
 
 
 
 




Re: [openstack-dev] [Neutron] Fix migration that breaks Grenade jobs

2014-04-22 Thread Jakub Libosvar
On 04/22/2014 03:57 PM, Jakub Libosvar wrote:
 On 04/22/2014 02:38 PM, Salvatore Orlando wrote:
 When I initially spoke to the infra team regarding this problem, they
 suggested that just fixing migrations so that the job could run was
 not a real option.
 I tend to agree with this statement.
 However, I'm open to options for getting grenade going until the
 migration problem is solved. Ugly workarounds might be fine, as long as
 we won't be doing anything that a real deployer would never do.

 Personally, I still think the best way for getting grenade to work is to
 ensure previous_rev and current_rev have the same configuration. For the
 havana/icehouse upgrade, this will mean that devstack for icehouse
 should not add the metering plugin.
 
 Is it possible to run Icehouse tempest without metering tests?
Oh, I see, we can pass a regexp to tox.

 
 Kuba
 
 I and Jakub are overdue a discussion
 on whether this would be feasible or not.

 Change 87935 is acceptable as a fix for that specific migration.
 However it does not fix the general issue, where the root cause is that
 currently the state of the neutron database depends on configuration
 settings, and therefore migrations are idempotent as long as the plugin
 configuration is not changed, which is not the case.

 Salvatore


 On 22 April 2014 11:14, Jakub Libosvar libos...@redhat.com
 mailto:libos...@redhat.com wrote:

 On 04/22/2014 10:53 AM, Anna Kamyshnikova wrote:
  Hello everyone!
 
  I'm working on fixing bug 1307344. I found out solution that will fix
   Grenade jobs and will work for online and offline migrations.
  https://review.openstack.org/87935 But I faced the problem that
 Metering
  usage won't be fixed as we need to create 2 tables (meteringlabels,
  meteringlabelrules). I tried to create both in patch set #7 but it
 won't
  work for offline migration. In fact to fix Grenade it is enough to
  create meteringlabels table, that is done in my change in the last
 patch
  set #8. I want to ask reviewers to take a look at this change and
  suggest something or approve it. I'm available on
 IRC(akamyshnikova) or
  by email.
 
  Regards
  Ann
 Hi Ann,

 Good suggestion how to get out of failing job but I don't think it
 should go to 33c3db036fe4_set_length_of_description_field_metering.py
 because this failure is grenade specific while the real issue is a fact
 that we're not able to add new service plugin to already deployed
 Neutron.

 I think the same workaround you proposed in the 87935 review should go
 to grenade itself (from-havana/upgrade-neutron script) just to have the
 job working on havana-icehouse upgrade. It's a bit of ugly workaround
 though but imho so far the best solution to reach stable job in a short
 time.

 Kuba

 
 
 







 
 
 




Re: [openstack-dev] [Neutron] Fix migration that breaks Grenade jobs

2014-04-22 Thread Sean Dague
On 04/22/2014 09:57 AM, Jakub Libosvar wrote:
 On 04/22/2014 02:38 PM, Salvatore Orlando wrote:
 When I initially spoke to the infra team regarding this problem, they
 suggested that just fixing migrations so that the job could run was
 not a real option.
 I tend to agree with this statement.
 However, I'm open to options for getting grenade going until the
 migration problem is solved. Ugly workarounds might be fine, as long as
 we won't be doing anything that a real deployer would never do.

 Personally, I still think the best way for getting grenade to work is to
 ensure previos_rev and current_rev have the same configuration. For the
 havana/icehouse upgrade, this will mean that devstack for icehouse
 should not add the metering plugin.
 
 Is it possible to run Icehouse tempest without metering tests?

The running of the tests isn't the problem, it's the setting up of the
services where it blows up.

Given that all migrations are chained (i.e. we test havana -> icehouse,
and are about to light up icehouse -> master), that means effectively
turning off metering as a service for testing entirely.

Which is basically what an end user would need to do, because the
current service structure means that if you don't enable a new neutron
service the moment it's introduced, you have to do manual database
changes to do so later.

I do really think that before pushing a work around here, there needs to
be a clear plan on how this is supposed to work in the future. Because I
fear that will get lost.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-22 Thread Jesse Pretorius
On 22 April 2014 14:58, Salvatore Orlando sorla...@nicira.com wrote:

 From previous requirements discussions,


There's a track record of discussions on the whiteboard here:
https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NEUTRON] [IPv6] [VPNaaS] - IPSec by default on each Tenant router, the beginning of the Opportunistic Encryption era (rfc4322 ?)...

2014-04-22 Thread Carl Baldwin
Keys are distributed via dns records.

https://tools.ietf.org/html/rfc4322

Carl
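For readers unfamiliar with RFC 4322-style opportunistic encryption: each host publishes its IPsec public key in DNS, and peers look it up before negotiating a tunnel. The sketch below parses the presentation form of an IPSECKEY record (RFC 4025), the record type modern OE implementations use for this. The field layout follows the RFC, but the sample record and helper names are invented for illustration.

```python
from dataclasses import dataclass
import base64


@dataclass
class IPSecKey:
    precedence: int    # lower wins, like MX preference
    gateway_type: int  # 0 = no gateway, 1 = IPv4, 2 = IPv6, 3 = domain name
    algorithm: int     # 1 = DSA, 2 = RSA
    gateway: str
    public_key: bytes  # decoded key material


def parse_ipseckey(rdata: str) -> IPSecKey:
    """Parse the presentation form of an IPSECKEY rdata string."""
    prec, gw_type, alg, gateway, *key = rdata.split()
    return IPSecKey(int(prec), int(gw_type), int(alg), gateway,
                    base64.b64decode("".join(key)))


# Invented sample record: precedence 10, IPv4 gateway, RSA key.
record = parse_ipseckey("10 1 2 192.0.2.38 AQNRU3mg")
```

A resolver doing OE would fetch such a record for the peer's address (typically via reverse DNS) and feed the key into IKE negotiation.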
On Apr 21, 2014 5:35 PM, Kevin Benton blak...@gmail.com wrote:

 This is interesting. How is key distribution handled when I want to use OE
 with someone like Google.com for example?


 On Thu, Apr 17, 2014 at 12:07 PM, Martinx - ジェームズ 
 thiagocmarti...@gmail.com wrote:

 Guys,

 I here thinking about IPSec when with IPv6 and, one of the first
 ideas/wishes of IPv6 scientists, was to always deploy it with IPSec
 enabled, always (I've heard). But, this isn't well diffused by now. Who is
 actually using IPv6 Opportunistic Encryption?!

 For example: With O.E., we'll be able to make a IPv6 IPSec VPN with
 Google, so we can ping6 google.com safely... Or with Twitter,
 Facebook! Or whatever! That is the purpose of Opportunistic Encryption, am
 I right?!

 Then, with OpenStack, we might have a muiti-Region or even a multi-AZ
 cloud, based on the topology Per-Tenant Routers with Private Networks,
 for example, so, how hard it will be to deploy the Namespace routers with
 IPv6+IPSec O.E. just enabled by default?

 I'm thinking about this:


  * IPv6 Tenant 1 subnet A <-> IPv6 Router + IPSec O.E. <-> *Internet
  IPv6* <-> IPv6 Router + IPSec O.E. <-> IPv6 Tenant 1 subnet B


 So, with O.E., it will be simpler (from the tenant's point of view) to
 safely interconnect multiple tenant's subnets, don't you guys think?!

 Amazon in the other hand, for example, provides things like VPC
 Peering, or VPN Instances, or NAT instances, as a solution to
 interconnect creepy IPv4 networks... We don't need none of this kind of
 solutions when with IPv6... Right?!

 Basically, the OpenStack VPNaaS (O.E.) will come enabled at the Namespace
 Router by default, without the tenant even knowing it is there, but of
 course, we can still show that IPv6-IPSec-VPN at the Horizon Dashboard,
 when established, just for fun... But tenants will never need to think
 about it...   =)

 And to share the IPSec keys, the stuff required for Opportunistic
 Encryption to gracefully works, each OpenStack in the wild, can become a
 *pod*, which will form a network of *pods*, I mean, independently
 owned *pods* which interoperate to form the *Opportunistic Encrypt
 Network of OpenStack Clouds*.

 I'll try to make a comparison here, as an analogy, do you guys have ever
 heard about the DIASPORA* Project? No, take a look:
 http://en.wikipedia.org/wiki/Diaspora_(social_network)

 I think that, OpenStack might be for the Opportunistic Encryption, what
 DIASPORA* Project is for Social Networks!

 If OpenStack can share its keys (O.E. stuff) in someway, with each other,
 we can easily build a huge network of OpenStacks, and then, each one will
 naturally talk with each other, using a secure connection.

 I would love to hear some insights from you guys!

 Please, keep in mind that I never deployed a IPSec O.E. before, this is
 just an idea I had... If I'm wrong, ignore this e-mail.


 References:

 https://tools.ietf.org/html/rfc4322

 https://groups.google.com/d/msg/ipv6hackers/3LCTBJtr-eE/Om01uHUcf9UJ

 http://www.inrialpes.fr/planete/people/chneuman/OE.html


 Best!
 Thiago

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Updates to the template for Neutron BPs

2014-04-22 Thread Anne Gentle
On Tue, Apr 22, 2014 at 4:42 AM, Jaume Devesa devv...@gmail.com wrote:

 Another question about this, Kyle:

 once merged, will there be a place where to read these approved specs, or
 should we run it locally? I tried to find the approved *nova-specs *in
 docs.openstack.org, but I couldn't find them.


Hi Jaume,
Since these are specifications and not created features (yet), they will
not be published on docs.openstack.org but on specs.openstack.org/$project.

See http://markmail.org/message/du6djpz3unbdgzpm for more details. I don't
believe the site is set up quite yet.

Thanks,
Anne



 Regards,
 jaume


 On 21 April 2014 09:02, Mandeep Dhami dh...@noironetworks.com wrote:


 Got it. Thanks.

 Regards,
 Mandeep


 On Sun, Apr 20, 2014 at 11:49 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 That's because spec was proposed to the juno/ folder. Look at
 https://raw.githubusercontent.com/openstack/neutron-specs/master/doc/source/index.rst,
  if a spec is in the juno/ folder, then the contents show it as an approved one.

  Once merged, it means approved, right? So it is going to be OK after
  merge. Though a better reminder than just draft in the URL may be
  needed if many start to mess it up...


 On Mon, Apr 21, 2014 at 10:43 AM, Kevin Benton blak...@gmail.comwrote:

 Yes. It shows up in the approved section since it's just a build of the
 patch as-is.

 The link is titled gate-neutron-specs-docs in the message from Jenkins.

 --
 Kevin Benton


 On Sun, Apr 20, 2014 at 11:37 PM, Mandeep Dhami 
 dh...@noironetworks.com wrote:

 Just for clarification. Jenkins link in the description puts the
 generated HTML in the section Juno approved specs even tho' the
 blueprint is still being reviewed. Am I looking at the right link?

 Regards,
 Mandeep


 On Sun, Apr 20, 2014 at 10:54 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Yes, thanks, that's exactly what I was looking for!


 On Mon, Apr 21, 2014 at 12:03 AM, Kyle Mestery 
 mest...@noironetworks.com wrote:

 On Sat, Apr 19, 2014 at 5:11 PM, Mike Scherbakov
 mscherba...@mirantis.com wrote:
  Hi Kyle,
  I built template and it looks awesome. We are considering to use
 same
  approach for Fuel.
 
  My assumption is that spec will be on review for a negotiation
 time, which
  is going to be quite a while. In my opinion, it is not always very
  convenient to read spec in gerrit.
 
 Agreed, though for some specs, this is actually an ok way to do
 reviews.

  Did you guys have any thoughts on auto-build these specs into html
 on every
  patch upload? So we could go somewhere and see built results,
 without a
  requirement to fetch neutron-specs, and run tox? The possible
 drawback is
  that reader won't see gerrit comments..
 
  I followed what Nova was doing and committed code into
 openstack-infra/config which allows for some jenkins jobs to run when
 we commit to the neutron-specs gerrit. [1]. As an example, look at
 this commit here [2]. If you look at the latest Jenkins run, you'll
 see a link which takes you to an HTML generated document [3] which
 you
 can review in lieu of the raw restructured text in gerrit. That will
 actually generate all the specs in the repository, so you'll see the
 Example Spec along with the Nuage one I used for reference here.

 Hope that helps!
 Kyle

 [1] https://review.openstack.org/#/c/88069/
 [2] https://review.openstack.org/#/c/88690/
 [3]
 http://docs-draft.openstack.org/90/88690/3/check/gate-neutron-specs-docs/fe4282a/doc/build/html/

  Thanks,
 
 
  On Sat, Apr 19, 2014 at 12:08 AM, Kyle Mestery 
 mest...@noironetworks.com
  wrote:
 
  Hi folks:
 
  I just wanted to let people know that we've merged a few patches
 [1]
  to the neutron-specs repository over the past week which have
 updated
  the template.rst file. Specifically, Nachi has provided some
  instructions for using Sphinx diagram tools in lieu of
 asciiflow.com.
  Either approach is fine for any Neutron BP submissions, but
 Nachi's
  patch has some examples of using both approaches. Bob merged a
 patch
  which shows an example of defining REST APIs with attribute
 tables.
 
  Just an update for anyone proposing BPs for Juno at the moment.
 
  Thanks!
  Kyle
 
  [1]
 
 https://review.openstack.org/#/q/status:merged+project:openstack/neutron-specs,n,z
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Mike Scherbakov
  #mihgen
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen

 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [nova][qa][all] Home of rendered specs

2014-04-22 Thread Mike Scherbakov
Hi folks,
would it be possible to have the same benefit for stackforge-hosted
projects?

Thank you,


On Fri, Mar 28, 2014 at 6:02 PM, Anne Gentle a...@openstack.org wrote:




 On Thu, Mar 27, 2014 at 6:25 PM, Joe Gordon joe.gord...@gmail.com wrote:

 Hi All,

 Now that nova and qa are beginning to use specs repos [0][1]. Instead of
 being forced to read raw RST or relying on github [3],  we want a domain
 where we can publish the fully rendered sphinxdocs based specs (rendered
 with oslosphinx of course). So how about:

   specs.openstack.org/$project

 specs instead of docs because docs.openstack.org should only contain
 what is actually implemented so keeping specs in another subdomain is an
 attempt to avoid confusion as we don't expect every approved blueprint to
 get implemented.



 Thanks for this, Joe and all!

 Anne


  Best,
 Joe


 [0] http://git.openstack.org/cgit/openstack/nova-specs/
 [1] http://git.openstack.org/cgit/openstack/qa-specs/
 [3]
 https://github.com/openstack/nova-specs/blob/master/specs/template.rst


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Updates to the template for Neutron BPs

2014-04-22 Thread Jaume Devesa
Great! Thanks Anne!


On 22 April 2014 16:44, Anne Gentle a...@openstack.org wrote:




 On Tue, Apr 22, 2014 at 4:42 AM, Jaume Devesa devv...@gmail.com wrote:

 Another question about this, Kyle:

 once merged, will there be a place where to read these approved specs, or
 should we run it locally? I tried to find the approved *nova-specs *in
 docs.openstack.org, but I couldn't find them.


 Hi Jaume,
 Since these are specifications and not created features (yet), they will
 not be published on docs.openstack.org but on specs.openstack.org/$project
 .

 See http://markmail.org/message/du6djpz3unbdgzpm for more details. I
 don't believe the site is set up quite yet.

 Thanks,
 Anne



 Regards,
 jaume



Re: [openstack-dev] [Openstack] [nova] Havana - Icehouse upgrades with cells

2014-04-22 Thread Chris Behrens

On Apr 19, 2014, at 11:08 PM, Sam Morrison sorri...@gmail.com wrote:

 Thanks for the info Chris, I’ve actually managed to get things working. 
 Haven’t tested everything fully but seems to be working pretty good.
 
 On 19 Apr 2014, at 7:26 am, Chris Behrens cbehr...@codestud.com wrote:
 
 The problem here is that Havana is not going to know how to backport the 
 Icehouse object, even if had the conductor methods to do so… unless you’re 
 running the Icehouse conductor. But yes, your nova-computes would also need 
 the code to understand to hit conductor to do the backport, which we must 
 not have in Havana?
 
 OK this conductor api method was actually backported to Havana; it kept its 
 1.62 version for the method, but in the Havana conductor manager it is set to 1.58.
 That is easily fixed but then it gets worse. I may be missing something but 
 the object_backport method doesn’t work at all and looking at the signature 
 never worked?
 I’ve raised a bug: https://bugs.launchpad.net/nova/+bug/1308805

(CCing openstack-dev and Dan Smith)

That looked wrong to me as well, and then I talked with Dan Smith and he 
reminded me the RPC deserializer would turn that primitive into an object on 
the conductor side. The primitive there is the full primitive we use to wrap 
the object with the versioning information, etc.

Does your backport happen to not pass the full object primitive?  Or maybe 
missing the object RPC deserializer on conductor? (I would think that would 
have to be set in Havana)  nova/service.py would have:

194 serializer = objects_base.NovaObjectSerializer()
195
196 self.rpcserver = rpc.get_server(target, endpoints, serializer)
197 self.rpcserver.start()

I’m guessing that’s there… so I would think maybe the object_backport call you 
have is not passing the full primitive.
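The mechanism under discussion can be sketched in miniature: the receiver's deserializer compares the wrapped primitive's version against what it supports and, when the primitive is newer, asks a backporter to downgrade it first. This is a simplified illustration of the idea behind the object serializer and object_backport, not Nova's actual wire format or API; all names and the version-history handling are placeholders.

```python
# Highest object version this (older) service understands.
SUPPORTED = "1.58"


def _ver(v: str) -> tuple:
    """Turn '1.62' into (1, 62) so versions compare numerically."""
    return tuple(int(p) for p in v.split("."))


def backport(primitive: dict, target: str) -> dict:
    """Stand-in for a conductor backport: a real implementation walks
    the object's version history; here we just drop a field that did
    not exist at the target version."""
    data = {k: v for k, v in primitive["data"].items() if k != "new_field"}
    return {"version": target, "data": data}


def deserialize(primitive: dict, backporter) -> dict:
    """Deserialize a wrapped primitive, backporting if it is too new."""
    if _ver(primitive["version"]) > _ver(SUPPORTED):
        primitive = backporter(primitive, SUPPORTED)
    return primitive["data"]


# A 'newer' primitive arriving over RPC gets downgraded transparently.
obj = {"version": "1.62", "data": {"uuid": "abc", "new_field": 1}}
result = deserialize(obj, backport)
```

The key point from the thread is that the backporter must receive the full wrapped primitive (version info included), not just the bare data, or it cannot know what to downgrade.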

I don’t have the time to peek at your code on github right this second, but 
maybe later. :)

- Chris


 
 This also means that if you don’t want your computes on Icehouse yet, you 
 must actually be using nova-conductor and not use_local=True for it. (I saw 
 the patch go up to fix the objects use of conductor API… so I’m guessing you 
 must be using local right now?)
 
 Yeah we still haven’t moved to use conductor so if you also don’t use 
 conductor you’ll need the simple fix at bug: 
 https://bugs.launchpad.net/nova/+bug/1308811
 
 So, I think an upgrade process could be:
 
 1) Backport the ‘object backport’ code into Havana.
 2) Set up *Icehouse* nova-conductor in your child cells and use_local=False 
 on your nova-computes
 3) Restart your nova-computes.
 4) Update *all* nova-cells processes (in all cells) to Icehouse. You can 
 keep use_local=False on these, but you’ll need that object conductor API 
 patch.
 
 At this point you’d have all nova-cells and all nova-conductors on Icehouse 
 and everything else on Havana. If the Havana computes are able to talk to 
 the Icehouse conductors, they should be able to backport any newer object 
 versions. Same with nova-cells receiving older objects from nova-api. It 
 should be able to backport them.
 
 After this, you should be able to upgrade nova-api… and then probably 
 upgrade your nova-computes on a cell-by-cell basis.
 
 I don’t *think* nova-scheduler is getting objects yet, especially if you’re 
 somehow magically able to get builds to work in what you tested so far. :) 
 But if it is, you may find that you need to insert an upgrade of your 
 nova-schedulers to Icehouse between steps 3 and 4 above…or maybe just after 
 #4… so that it can backport objects, also.
 
 I still doubt this will work 100%… but I dunno. :)  And I could be missing 
 something… but… I wonder if that makes sense?
 
 What I have is an Icehouse API cell and a Havana compute cell and havana 
 compute nodes with the following changes:
 
 Change the method signature of attach_volume to match icehouse, the 
 additional arguments are optional and don’t seem to break things if you 
 ignore them.
 https://bugs.launchpad.net/nova/+bug/1308846
 
 Needed a small fix for unlocking, there is a race condition that I have a fix 
 for but haven’t pushed up.
 
 Then I hacked up a fix for object back porting.
 The code is at 
 https://github.com/NeCTAR-RC/nova/commits/nectar/havana-icehouse-compat
 The last three commits are the fixes needed. 
 I still need to push up the unlocking one and also a minor fix for metadata 
 syncing with deleting and notifications.
 
 Would love to get the object back porting stuff fixed properly from someone 
 who knows how all the object stuff works.
 
 Cheers,
 Sam

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack/GSoC - Welcome!

2014-04-22 Thread Julie Pichon
On 21/04/14 22:15, Davanum Srinivas wrote:
 Hi Team,
 
 Please join me in welcoming the following students to our GSoC program
 [1]. Congrats everyone. Now the hard work begins :) Have fun as well.
 
 Artem Shepelev
 Kumar Rishabh
 Manishanker Talusani
 Masaru Nomura
 Prashanth Raghu
 Tzanetos Balitsaris
 Victoria Martínez de la Cruz

Congratulations and welcome, everyone! I hope you have a very
interesting summer with OpenStack.

It's really great to see OpenStack participate in GSoC. Thanks to all
the people who helped make it happen!

Julie


 
 -- dims
 
 [1] https://www.google-melange.com/gsoc/org2/google/gsoc2014/openstack
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] wrap_instance_event() swallows return codes....on purpose?

2014-04-22 Thread Chris Friesen

On 04/22/2014 06:34 AM, Russell Bryant wrote:

On 04/21/2014 06:01 PM, Chris Friesen wrote:

Hi all,

In compute/manager.py the function wrap_instance_event() just calls
function().

This means that if it's used to decorate a function that returns a
value, then the caller will never see the return code.

Is this a bug, or is the expectation that we would only ever use this
wrapper for functions that don't return a value?


Looks like a bug to me.  Nice catch.

Want to submit a patch for this?


Can do.

Chris
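For readers following along, the reported bug and its one-line fix can be sketched as a pair of decorators. The names are illustrative, not the actual Nova code:

```python
import functools


def wrap_instance_event(function):
    """Sketch of the reported bug: the decorator calls function()
    but drops its return value, so decorated methods yield None."""
    @functools.wraps(function)
    def decorated_function(self, *args, **kwargs):
        # ... event bookkeeping would happen here ...
        function(self, *args, **kwargs)  # bug: result discarded
    return decorated_function


def wrap_instance_event_fixed(function):
    """The fix: propagate the wrapped function's result."""
    @functools.wraps(function)
    def decorated_function(self, *args, **kwargs):
        return function(self, *args, **kwargs)
    return decorated_function


class Manager:
    @wrap_instance_event
    def broken(self):
        return "value"

    @wrap_instance_event_fixed
    def fixed(self):
        return "value"
```

Calling `Manager().broken()` returns None even though the method returns a value, which is exactly the symptom described above.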


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][qa][all] Home of rendered specs

2014-04-22 Thread Hayes, Graham
Hi,

I would second that - if there was a way stackforge projects could use
it, that would be great.

Graham

On Tue, 2014-04-22 at 18:52 +0400, Mike Scherbakov wrote:
 Hi folks,
 
  would it be possible to have the same benefit for stackforge-hosted
  projects?
 
 
 Thank you,
 
 
 
 On Fri, Mar 28, 2014 at 6:02 PM, Anne Gentle a...@openstack.org
 wrote:
 
 
 
 
 
 
 On Thu, Mar 27, 2014 at 6:25 PM, Joe Gordon
 joe.gord...@gmail.com wrote:
 
 Hi All,
 
 
 Now that nova and qa are beginning to use specs repos
 [0][1]. Instead of being forced to read raw RST or
 relying on github [3],  we want a domain where we can
 publish the fully rendered sphinxdocs based specs
 (rendered with oslosphinx of course). So how about:
 
 
   specs.openstack.org/$project
 
 
 specs instead of docs because docs.openstack.org
 should only contain what is actually implemented so
 keeping specs in another subdomain is an attempt to
 avoid confusion as we don't expect every approved
 blueprint to get implemented.
 
 
 
 
 
 
 Thanks for this, Joe and all!
 
 
 Anne
  
 
 Best,
 Joe
 
 
 
 
 [0] http://git.openstack.org/cgit/openstack/nova-specs/
 [1] http://git.openstack.org/cgit/openstack/qa-specs/
 [3] 
 https://github.com/openstack/nova-specs/blob/master/specs/template.rst
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 
 
 
 -- 
 
 Mike Scherbakov
 #mihgen
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] nominating Victor Stinner for the Oslo core reviewers team

2014-04-22 Thread Ben Nemec

Makes sense to me, +1.

On 04/21/2014 11:39 AM, Doug Hellmann wrote:

I propose that we add Victor Stinner (haypo on freenode) to the Oslo
core reviewers team.

Victor is a Python core contributor, and works on the development team
at eNovance. He created trollius, a port of Python 3's tulip/asyncio
module to Python 2, at least in part to enable a driver for
oslo.messaging. He has been quite active with Python 3 porting work in
Oslo and some other projects, and organized a sprint to work on the
port at PyCon last week. The patches he has written for the python 3
work have all covered backwards-compatibility so that the code
continues to work as before under python 2.

Given his background, skills, and interest, I think he would be a good
addition to the team.

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Vote Vote Vote

2014-04-22 Thread Anita Kuno
We have about 3ish more days for the voting for the TC election.

Please check your email for Subject: Poll: OpenStack Technical
Committee (TC) Election - April 2014 and vote.

Here is the email with the full details about this election:
http://lists.openstack.org/pipermail/openstack-dev/2014-April/033173.html

Please include yourself in selecting the leadership for the technical
flow of OpenStack.

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Looking for experienced guide to understand libvirt driver

2014-04-22 Thread Solly Ross
Hi Scott,

I've actually been diving through those Nova code paths recently, to work on a 
BP
to move the Nova libvirt driver to use libvirt storage pools for image backends,
so I may be able to help you.  Just FYI, if my blueprint gets accepted, it may
actually reduce the amount of code that you have to write, since libvirt already
has a (limited) storage driver for sheepdog (see the BP here: 
https://review.openstack.org/#/c/86947/)

Best Regards,
Solly Ross

- Original Message -
From: Scott Devoid dev...@anl.gov
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Monday, April 21, 2014 7:38:13 PM
Subject: [openstack-dev] [nova] Looking for experienced guide to understand 
libvirt driver

Hi folks! 

I am working to add Sheepdog as a disk backend for the libvirt driver. I have a 
blueprint started and an early version of the code. However, I am having trouble 
working my way through the code in the libvirt driver. The storage code 
doesn't feel very modular to start with, and my changes only seem to make it 
worse; e.g. adding more if blocks to 400-line functions. 

Is there an experienced contributor that could spend 30 minutes walking through 
parts of the code? 

- Blueprint: https://review.openstack.org/#/c/82584/ 
- Nova code: https://review.openstack.org/#/c/74148/ 
- Devstack code: https://review.openstack.org/#/c/89434/ 

Thanks, 
~ Scott 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Agent manager customization

2014-04-22 Thread Carl Baldwin
Cedric,

I'm just getting back from a short vacation.  Please excuse the delayed reply.

I have a feeling that this subject may have been discussed in the past
before I was very active in Neutron.  So, others may chime in if I'm
missing something.

For the customizations that you're making, it sounds like some sort of
hook system would work best.  You are currently using inheritance to
achieve it but I worry that the L3 agent class has not been designed
for this inheritance and may not be entirely suitable for your needs.
What has been your experience?  Have you found it easy to maintain
your subclass as the L3 agent evolves?  If not, what problems have you
seen?  Are there parts of the agent design that made it difficult or
awkward?

I suspect that a well-designed and stable hook system will better suit
your needs in the long run.  However, nothing like that exists in the
agent now.

Is there some synergy here with the L3 Vendor plugins summit topic
proposal [1].  Could you look through that proposal and the linked
blueprints with that in mind?

Carl

[1] http://summit.openstack.org/cfp/details/81
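For context, a generic hook system of the kind nova.hooks provides could look roughly like this: the agent fires named events and operator-supplied callables run without subclassing the manager. All names here are illustrative, not actual Neutron or Nova interfaces.

```python
from collections import defaultdict

# event name -> list of operator-registered callables
_hooks = defaultdict(list)


def add_hook(event, fn):
    """Register a callable to run when `event` fires."""
    _hooks[event].append(fn)


def notify_hooks(event, *args, **kwargs):
    """Run every callable registered for `event` (agent calls this)."""
    for fn in _hooks[event]:
        fn(*args, **kwargs)


# Operator customization, e.g. adding iptables rules after deploy:
applied = []
add_hook("router.post_deploy",
         lambda router_id: applied.append(("iptables", router_id)))

# Inside the (hypothetical) agent code path:
notify_hooks("router.post_deploy", "router-1")
```

The appeal over inheritance is that the agent keeps a stable, small surface (the event names and their arguments) while custom behavior lives entirely in operator code.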

On Fri, Apr 18, 2014 at 9:11 AM, zz elle zze...@gmail.com wrote:
 Hi everyone,


  I would like to propose a change to simplify/allow L3 agent manager
  customization, and I would like the community's feedback.


  Just to clarify my context: I deploy OpenStack for small, specific business
  use cases, and I often customize it because of specific use-case needs.
  In particular, most of the time I must customize L3 agent behavior in order
 to:
 - add custom iptables rules in the router (on router/port post-deployment),
 - remove custom iptables rules in the router (on port pre-undeployment),
 - update router config through sysctl (on router post-deployment),
 - start an application in the router (on router/port post-deployment),
 - stop an application in the router (on router/port pre-undeployment),
 - etc ...
  Currently (Havana, Icehouse), I create my own L3 agent manager which extends
  the neutron one,
  and I replace the neutron-l3-agent binary, since it's not possible to
  change/hook the L3 agent manager implementation by configuration.


 What would be the correct way to allow l3 agent manager customization ?
  - Allow to specify l3 agent manager implementation through configuration
   == like the option router_scheduler_driver which allows to change router
 scheduler implementation
  - Allow to hook l3 agent manager implementation
   == like the generic hook system in nova (nova.hooks used in
 nova.compute.api)
   == or like the neutron ML2 mechanism hook system
 (neutron.plugins.ml2.driver_api:MechanismDriver)
  - Other idea ?


 It seems the same question could be asked for the dhcp agent ?


 Thanks,

 Cedric (zzelle@irc)


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Service VMs

2014-04-22 Thread Edgar Magana Perdomo (eperdomo)
Folks,

I think it is correct to have Service VMs in a separate project because it
seems to me a very specific implementation (form factor) of a Service
Insertion use case.
However, there are few interesting services sessions proposed for the
summit intended to enhance all areas of the "Networking Services" such as
DB, plugin and the agent side.

I strongly recommend to have all services-related sessions together and
finally to decide as a team, which path will be for Neutron on this area,
we should not be extending more and more the current framework that was
not designed (if it was ever designed) for the functionality that we are
giving it today. Let¹s consider a couple of already presented proposals on
this area:

https://wiki.openstack.org/wiki/Neutron/ServiceInsertion


I will update the wiki with the latest ideas in this area. But please, as
a team we should finally start considering a "Generic Services Insertion"
framework, regardless of the backend technology and the service form factor
(bare metal, VM, etc.).

Cheers,

Edgar


On 4/21/14, 6:41 PM, Kyle Mestery mest...@noironetworks.com wrote:

On Mon, Apr 21, 2014 at 4:20 PM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:
 On Mon, Apr 21, 2014 at 3:07 PM, Kyle Mestery
mest...@noironetworks.com wrote:
 For the upcoming Summit there are 3 sessions filed around Service
 VMs in Neutron. After discussing this with a few different people,
 I'd like to propose the idea that the Service VM work be moved out
 of Neutron and into its own project on stackforge. There are a few
 reasons for this:

 How long do you anticipate the project needing to live on stackforge
 before it can move to a place where we can introduce symmetric gating
 with the projects that use it?

The patches for this (look at the BP here [1]) have been in review for
a while now as WIP. I think it's reasonable to expect that moving this
to stackforge would let the authors and others interested collaborate
faster. I expect this would take a cycle on stackforge before we could
talk about other projects using this. But honestly, that's a better
question for Isaku and Bob.

 Who is going to drive the development work?

For that, I'm thinking Isaku and Bob (copied above) would be the ones
driving it. But anyone else who is interested should feel free to jump
in as well.

Thanks,
Kyle

[1] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms

 Doug


 1. There is nothing Neutron specific about service VMs.
 2. Service VMs can perform services for other OpenStack projects.
 3. The code is quite large and may be better served being inside its
 own project.

 Moving the work out of Neutron and into its own project would allow
 for separate velocity for this project, and for code to be shared for
 the Service VM work for things other than Neutron services.

 I'm starting this email thread now to get people's feedback on this
 and see what comments others have. I've specifically copied Isaku and
 Bob, who both filed summit sessions on this and have done a lot of
 work in this area to date.

 Thanks,
 Kyle



Re: [openstack-dev] [Fuel] Migration to packages, step 2/2

2014-04-22 Thread Dmitry Pyzhov
Mirror cleanup is done. No more gems, no more source files for third-party
tools. Only CentOS and Ubuntu packages.

The next, and last, action item is to create the astute package during the
ISO build.

On Mon, Apr 21, 2014 at 2:10 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 How does it impact the development process? If I change the code of, let's
 say, shotgun, and then run make iso, will I get an ISO with my code of
 shotgun? What about other packages whose sources I did not touch (let's
 say nailgun)?

 Everything is packaged, with two exceptions: *astute* and third-party
 packages for nailgun-agent. We are working on it.

 How far are we from implementing a simple command to build an OpenStack
 package from source?

 Not even a bit closer. Design for OpenStack packages is in progress.

 What is the time difference; did the ISO build become faster? Can you
 provide numbers?

 A little bit faster. I did not perform precise measurements, and we still
 need to remove the gem mirror. It will be something like a decrease from
 22 minutes to 17 minutes.

 We still have puppet modules not packaged, right? Do we have plans for
 packaging them too?

 Yes, puppet is not packaged. It is in our post-5.0 plans.

 On Sat, Apr 19, 2014 at 12:55 AM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 That's cool actually.
 I have a few specific questions:

   1. How does it impact the development process? If I change the code of,
   let's say, shotgun, and then run make iso, will I get an ISO with my
   code of shotgun? What about other packages whose sources I did not
   touch (let's say nailgun)?
   2. How far are we from implementing a simple command to build an
   OpenStack package from source?
   3. What is the time difference; did the ISO build become faster? Can
   you provide numbers?
   4. We still have puppet modules not packaged, right? Do we have plans
   for packaging them too?

 I assume we will document the usage of this somewhere in dev docs too.

 Thanks,


 On Fri, Apr 18, 2014 at 6:06 PM, Dmitry Pyzhov dpyz...@mirantis.comwrote:

 Guys,

 I've removed ability to use eggs packages on master node:
 https://review.openstack.org/#/c/88012/

 Next step is to remove gems mirror:
 https://review.openstack.org/#/c/88278/
 It will be merged when the osci team fixes the rubygem-yajl-ruby
 package. Hopefully on Monday.

 From that moment, all our code will be installed everywhere from
 packages. And there will be an option to build packages during the ISO
 build or to use pre-built packages from our mirrors.





 --
 Mike Scherbakov
 #mihgen






Re: [openstack-dev] [TripleO] [Tuskar] Undercloud Ceilometer

2014-04-22 Thread Neal, Phil
 From: Ladislav Smola [mailto:lsm...@redhat.com]
 Sent: Wednesday, April 16, 2014 8:37 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [TripleO] [Tuskar] Undercloud Ceilometer
 
 No response so far, but -1 on the image element for making Ceilometer
 optional.

Sorry for the delayed response, Ladislav. It turns out that the mailing list
was filtering out these TripleO mails for me.

Let me add a little context to that -1: given that a TripleO user may not want 
to enable a UI layer at the undercloud level (there's a use case for using the 
undercloud solely for spinning up the overcloud), I think we want to support as 
small a footprint as possible. 

 
 OK, so what about having variable in devtest_variables: USE_TRIPLEO_UI.
 

I like this approach better...in fact I will look into adding something similar 
into the changes I'm making to enable Ceilometer by default in the overcloud 
control node: https://review.openstack.org/#/c/89625/1

 It would add Undercloud Ceilometer, Tuskar-UI and Horizon. And Overcloud
 SNMPd.
 
 Defaulted to USE_TRIPLEO_UI=1 so we have UI stuff in CI.
 
 How does it sound?
 
Perhaps specify something like UNDERCLOUD_USE_TRIPLEO_UI to be more specific
about where this will be deployed.
 
 On 04/14/2014 01:31 PM, Ladislav Smola wrote:
  Hello,
 
  I am planning to add Ceilometer to the Undercloud by default. Since
  Tuskar-UI uses it as the primary source of metering samples, and Tuskar
  should be in the Undercloud by default, it made sense to me.
 
  So is my assumption correct or there are some reasons not to do this?
 
  Here are the reviews, that are adding working Undercloud Ceilometer:
  https://review.openstack.org/#/c/86915/
  https://review.openstack.org/#/c/86917/  (depends on the template
 change)
  https://review.openstack.org/#/c/87215/
 
  Configuration for automatically obtaining stats from all Overcloud
  nodes via SNMP will follow soon.
 
  Thanks,
  Ladislav
 
 


Re: [openstack-dev] [neutron] Service VMs

2014-04-22 Thread Sumit Naiksatam
Edgar, there is a weekly IRC meeting to discuss Neutron
advanced services related topics -
https://wiki.openstack.org/wiki/Meetings/AdvancedServices

Service insertion and chaining is one of them, and there is a sub team
working on it. Per the PTL, there will soon be a standing item in the
Neutron IRC meeting as well to provide updates on this.

Thanks,
~Sumit.



On Tue, Apr 22, 2014 at 9:19 AM, Edgar Magana Perdomo (eperdomo)
eperd...@cisco.com wrote:
 Folks,

 I think it is correct to have Service VMs on a separate project because it
 seems to me a very specific implementation (form factor) of a Service
 Insertion use case.
 However, there are a few interesting services sessions proposed for the
 summit intended to enhance all areas of the "Networking Services", such as
 the DB, plugin, and agent side.

 I strongly recommend having all services-related sessions together and
 finally deciding as a team which path Neutron will take in this area;
 we should not keep extending the current framework, which was
 not designed (if it was ever designed) for the functionality that we are
 giving it today. Let's consider a couple of already presented proposals in
 this area:

 https://wiki.openstack.org/wiki/Neutron/ServiceInsertion


 I will update the wiki with the latest ideas in this area. But please, as
 a team we should finally start considering a "Generic Services Insertion"
 framework, regardless of the backend technology and the service form factor
 (bare metal, VM, etc.).

 Cheers,

 Edgar


 On 4/21/14, 6:41 PM, Kyle Mestery mest...@noironetworks.com wrote:

On Mon, Apr 21, 2014 at 4:20 PM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:
 On Mon, Apr 21, 2014 at 3:07 PM, Kyle Mestery
mest...@noironetworks.com wrote:
 For the upcoming Summit there are 3 sessions filed around Service
 VMs in Neutron. After discussing this with a few different people,
 I'd like to propose the idea that the Service VM work be moved out
 of Neutron and into it's own project on stackforge. There are a few
 reasons for this:

 How long do you anticipate the project needing to live on stackforge
 before it can move to a place where we can introduce symmetric gating
 with the projects that use it?

The patches for this (look at the BP here [1]) have been in review for
a while now as WIP. I think it's reasonable to expect that moving this
to stackforge would let the authors and others interested collaborate
faster. I expect this would take a cycle on stackforge before we could
talk about other projects using this. But honestly, that's a better
question for Isaku and Bob.

 Who is going to drive the development work?

For that, I'm thinking Isaku and Bob (copied above) would be the ones
driving it. But anyone else who is interested should feel free to jump
in as well.

Thanks,
Kyle

[1] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms

 Doug


 1. There is nothing Neutron specific about service VMs.
 2. Service VMs can perform services for other OpenStack projects.
 3. The code is quite large and may be better served being inside its
 own project.

 Moving the work out of Neutron and into its own project would allow
 for separate velocity for this project, and for code to be shared for
 the Service VM work for things other than Neutron services.

 I'm starting this email thread now to get people's feedback on this
 and see what comments others have. I've specifically copied Isaku and
 Bob, who both filed summit sessions on this and have done a lot of
 work in this area to date.

 Thanks,
 Kyle



[openstack-dev] [Heat] Design summit preparation - Next steps for Heat Software Orchestration

2014-04-22 Thread Thomas Spatzier

Hi all,

following up on Zane's request from end of last week, I wanted to kick off
some discussion on the ML around a design summit session proposal titled
"Next steps for Heat Software Orchestration". I guess there will be things
that can be sorted out this way and others that can be refined so we can
have a productive session in Atlanta. I am basically copying the complete
contents of the session proposal below so we can iterate on various points.
If it turns out that we need to split off threads, we can do that at a
later point.

The session proposal itself is here:
http://summit.openstack.org/cfp/details/306

And here are the details:

With the Icehouse release, Heat includes an implementation of software
orchestration (kudos to Steve Baker and Jun Jie Nan) which enables clean
separation of any kind of software configuration from compute instances and
thus enables a great new set of features. The implementation for software
orchestration in Icehouse has probably been the major chunk of work to
achieve a first end-to-end flow for software configuration thru scripts,
Chef or Puppet, but there is more work to be done to enable Heat for more
software orchestration use cases beyond the current support.
Below are a couple of use cases, and more importantly, thoughts on design
options of how those use cases can be addressed.

#1 Enable software components for full lifecycle:
With the current design, software components defined thru SoftwareConfig
resources allow for only one config (e.g. one script) to be specified.
Typically, however, a software component has a lifecycle that is hard to
express in a single script. For example, software must be installed
(created), there should be support for suspend/resume handling, and it
should be possible to allow for deletion-logic. This is also in line with
the general Heat resource lifecycle.
By means of the optional 'actions' property of SoftwareConfig it is
possible today to specify at which lifecycle action of a SoftwareDeployment
resource the single config hook shall be executed at runtime. However, for
modeling complete handling of a software component, this would require a
number of separate SoftwareConfig and SoftwareDeployment resources to be
defined which makes a template more verbose than it would have to be.
As an optimization, SoftwareConfig could allow for providing several hooks
to address all default lifecycle operations that would then be triggered
thru the respective lifecycle actions of a SoftwareDeployment resource.
Resulting SoftwareConfig definitions could then look like the one outlined
below. I think this would fit nicely into the overall Heat resource model
for actions beyond stack-create (suspend, resume, delete). Furthermore,
this will also enable a closer alignment and straight-forward mapping to
the TOSCA Simple Profile YAML work done at OASIS and the heat-translator
StackForge project.

So, in a short, stripped-down version, a SoftwareConfig could look like:

my_sw_config:
  type: OS::Heat::SoftwareConfig
  properties:
create_config: # the hook for software install
suspend_config: # hook for suspend action
resume_config: # hook for resume action
delete_config: # hook for delete action

When such a SoftwareConfig gets associated to a server via
SoftwareDeployment, the SoftwareDeployment resource lifecycle
implementation could trigger the respective hooks defined in SoftwareConfig
(if a hook is not defined, a no-op is performed). This way, all config
related to one piece of software is nicely defined in one place.
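A matching deployment could then stay as simple as today, with the stack
lifecycle action selecting which hook runs. The following is a hypothetical
HOT sketch; only the config and server properties exist in today's
SoftwareDeployment interface, the rest is behaviour implied by the proposal:

```yaml
my_server:
  type: OS::Nova::Server
  # ... image, flavor, user_data_format: SOFTWARE_CONFIG ...

my_sw_deployment:
  type: OS::Heat::SoftwareDeployment
  properties:
    config: { get_resource: my_sw_config }
    server: { get_resource: my_server }
    # On stack-create the create_config hook would be triggered; on
    # stack-suspend the suspend_config hook, and so on for resume and
    # delete. A hook that is not defined would be a no-op.
```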


#2 Enable ad-hoc actions on software components:
Apart from basic resource lifecycle hooks, it would be desirable to allow
for invocation of add-hoc actions on software. Examples would be the ad-hoc
creation of DB backups, application of patches, or creation of users for an
application. Such hooks (implemented as scripts, Chef recipes or Puppet
manifests) could be defined in the same way as basic lifecycle hooks. They
could be triggered by doing property updates on the respective
SoftwareDeployment resources (just a thought and to be discussed during
design sessions).
I think this item could help bridging over to some discussions raised by
the Murano team recently (my interpretation: being able to trigger actions
from workflows). It would add a small feature on top of the current
software orchestration in Heat and keep definitions in one place. And it
would allow triggering by something or somebody else (e.g. a workflow)
probably using existing APIs.
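As a purely illustrative sketch (the backup_config and trigger_action names
are invented here for the example, not proposed syntax), an ad-hoc hook plus
its trigger-by-property-update might look like:

```yaml
my_sw_config:
  type: OS::Heat::SoftwareConfig
  properties:
    create_config: # install script, as before
    backup_config: # ad-hoc hook, e.g. a DB backup script

my_sw_deployment:
  type: OS::Heat::SoftwareDeployment
  properties:
    config: { get_resource: my_sw_config }
    server: { get_resource: my_server }
    trigger_action: backup  # a stack-update changing this value would
                            # run the backup_config hook on the server
```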


#3 address known limitations of Heat software orchestration
As of today, there already are a couple of known limitations or points where
we have seen the need for additional discussion and design work. Below is a
collection of such issues.
Maybe some are already being worked on; others need more discussion.

#3.1 software deployment should run just once:
A bug has been raised because with today's implementation it can happen
that SoftwareDeployments get executed multiple 

[openstack-dev] [Ironic] Supporting preserve_ephemeral in rebuild()

2014-04-22 Thread David Shrewsbury
Hi,

I'm working on implementing rebuild() in the nova.virt.ironic driver so
that we can support the --preserve-ephemeral option. I have a design
question and would love some feedback on it.

The way to trigger a deploy is to set the provision state to ACTIVE.
However, for a rebuild, we cannot currently use this, since the API will
reject the request, saying that the target state and the current provision
state are the same.

I can think of a couple of ways around this:

(1) If target and current provision state are ACTIVE, go ahead and allow
the (re)deploy.

(2) Add a new provision state that would set the instance to a sort of
temporary limbo state, expecting to be redeployed at some point by setting
target to ACTIVE (as normal).

Both changes would require changing NodeStatesController.provision() for
the new behaviour. However, I'm not sure which is
preferable, or if there is another option I haven't considered.

Thoughts?

-Dave
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-22 Thread racha
Hi Vikash,
   I am wondering why you need to have specs approved to have things
working as you want. There's nothing that prevents you from having OpenStack
support whatever you want, except probably for vendor proprietary plugins.
Install OpenStack with Neutron, search for one of the multi-NIC patches that
enable this in Nova and apply it to your installation, and voila, you can
have nova boot VMs with multiple vNICs on the same Neutron network. If you
want to test your setup with a public cloud provider that allows this, you
can look into Amazon EC2.

Best Regards,
Racha



On Wed, Apr 16, 2014 at 3:48 AM, Vikash Kumar 
vikash.ku...@oneconvergence.com wrote:

 Hi,

  I want to launch one VM which will have two Ethernet interfaces with
 IPs from a single subnet. Is this supported in OpenStack now? Any suggestions?


 Thanx



Re: [openstack-dev] [Ironic] Supporting preserve_ephemeral in rebuild()

2014-04-22 Thread Clint Byrum
Excerpts from David Shrewsbury's message of 2014-04-22 10:16:42 -0700:
 Hi,
 
 I'm working on implementing rebuild() in the nova.virt.ironic driver so
 that we can support the --preserve-ephemeral option. I have a design
 question and would love some feedback on it.
 
 The way to trigger a deploy is to set the provision state to ACTIVE.
 However, for a rebuild, we cannot currently use this, since the API will
 reject the request, saying that the target state and the current provision
 state are the same.
 
 I can think of a couple of ways around this:
 
 (1) If target and current provision state are ACTIVE, go ahead and allow
 the (re)deploy.
 
 (2) Add a new provision state that would set the instance to a sort of
 temporary limbo state, expecting to be redeployed at some point by setting
 target to ACTIVE (as normal).

I'm not familiar with Ironic's internals, but I think rebuild is a
special state, since it involves a pretty violent change to the box
(overwriting the disks) that is definitely not the same as being in an
ACTIVE state where the user might expect it would be working.



Re: [openstack-dev] [Openstack][Neutron] 2 NICs on Instance Creation not working

2014-04-22 Thread Hopper, Justin
Aaron,

This is true; adding this to the disk image builder element would not be an
issue, I just did not know it was a required step.

Thanks,

Justin Hopper
Software Engineer - DBaaS
irc: juice | gpg: EA238CF3 | twt: @justinhopper

From:  Aaron Rosen aaronoro...@gmail.com
Reply-To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:  Monday, April 21, 2014 at 22:11
To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Openstack][Neutron] 2 NICs on Instance
Creation not working

Hi, 

I'm guessing the scripts inside your guest are only set up to configure DHCP
on the first interface. See /etc/network/interfaces.
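For example, on an Ubuntu Precise guest the second NIC typically needs its
own stanza in that file (a sketch assuming the interfaces are named eth0
and eth1):

```
# /etc/network/interfaces
auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp
```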

Best, 

Aaron


On Mon, Apr 21, 2014 at 4:59 PM, Hopper, Justin justin.hop...@hp.com
wrote:
 They are on separate Networks.
 
 Justin Hopper
 Software Engineer - DBaaS
 irc: juice | gpg: EA238CF3 | twt: @justinhopper
 
 From: Kevin Benton blak...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, April 21, 2014 at 16:54
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Openstack][Neutron] 2 NICs on Instance Creation
 not working
 
 Are the two NICs on the same or different networks? Currently there is a
 limitation of Nova that does not permit two NICs to be attached to the same
 Neutron network. 
 
 --
 Kevin Benton
 
 
 On Mon, Apr 21, 2014 at 4:44 PM, Hopper, Justin justin.hop...@hp.com wrote:
  So we are trying to create an instance (Precise Cloud Image) via nova with
  two NICs.  It appears that the second interface does not get configured.
  Does the image itself need to contain the configuration for the 2nd
  interface, or is this something that Neutron/Nova should take care of for
  us automatically?  I guess the same issue would arise if you attached a
  2nd interface to the instance after it was created (via nova
  interface-attach).
 
 Thanks,
 
 Justin Hopper
 Software Engineer - DBaaS
 irc: juice | gpg: EA238CF3 | twt: @justinhopper
 
 
 
 
 
 -- 
 Kevin Benton
 
 







[openstack-dev] Proposing Thomas Spatzier for heat-core

2014-04-22 Thread Zane Bitter

I'd like to propose that we add Thomas Spatzier to the heat-core team.

Thomas has been involved in and consistently contributing to the Heat 
community for around a year, since the time of the Havana design summit. 
His code reviews are of extremely high quality IMO, and he has been 
reviewing at a rate consistent with a member of the core team[1].


One thing worth addressing is that Thomas has only recently started 
expanding the focus of his reviews from HOT-related changes out into the 
rest of the code base. I don't see this as an obstacle - nobody is 
familiar with *all* of the code, and we trust core reviewers to know 
when we are qualified to give +2 and when we should limit ourselves to 
+1 - and as far as I know nobody else is bothered either. However, if 
you have strong feelings on this subject nobody will take it personally 
if you speak up :)


Heat Core team members, please vote on this thread. A quick reminder of 
your options[2]:

+1  - five of these are sufficient for acceptance
 0  - abstention is always an option
-1  - this acts as a veto

cheers,
Zane.


[1] http://russellbryant.net/openstack-stats/heat-reviewers-30.txt
[2] https://wiki.openstack.org/wiki/Heat/CoreTeam#Adding_or_Removing_Members



Re: [openstack-dev] [tripleo] Default paths in os-*-config projects

2014-04-22 Thread Ben Nemec

On 04/15/2014 10:32 PM, Clint Byrum wrote:

Excerpts from Steve Baker's message of 2014-04-15 16:30:32 -0700:

On 15/04/14 13:30, Clint Byrum wrote:

Excerpts from Ben Nemec's message of 2014-04-14 15:41:23 -0700:

Right now the os-*-config projects default to looking for their files in
/opt/stack, with an override env var provided for other locations.  For
packaging purposes it would be nice if they defaulted to a more
FHS-compliant location like /var/lib.  For devtest we could either
override the env var or simply install the appropriate files to /var/lib.

This was discussed briefly in IRC and everyone seemed to be onboard with
the change, but Robert wanted to run it by the list before we make any
changes.  If anyone objects to changing the default, please reply here.
   I'll take silence as agreement with the move. :-)


+1 from me for doing FHS compliance. :)

/var/lib is not actually FHS compliant as it is for Variable state
information. os-collect-config does have such things, and does use
/var/lib. But os-refresh-config reads executables and os-apply-config
reads templates, neither of which will ever be variable state
information.

/usr/share would be the right place, as it is Architecture independent
data. I suppose if somebody wants to compile a C program as an o-r-c
script we could rethink that, but I'd just suggest they drop it in a bin
dir and exec it from a one line shell script in the /usr/share.

So anyway, I suggest:

/usr/share/os-apply-config/templates
/usr/share/os-refresh-config/scripts

With the usual hierarchy underneath.

+1, but might I suggest the orc location be:
/usr/libexec/os-refresh-config/*.d



Good catch. I had not read the latest draft of FHS 3.0, and indeed
libexec is included. Seems daft to base on 2.3 if 3.0 is likely to be
released this summer.


We'll need to continue to support the non-FHS paths for at least a few
releases as well.


Instead of supporting both paths, how about the orc and oac elements set
OS_REFRESH_CONFIG_BASE_DIR and OS_CONFIG_APPLIER_TEMPLATES to
/opt/stack/... until tripleo is ready to switch? With some prep changes
it should be possible to make the flag-day change to require only
changing the value of these env vars in tripleo-image-templates.



I'm not worried about TripleO. I'm worried about all the other users who
may be relying on the old defaults. I think the proper way to handle it
is something like this:

if os.path.exists(old_templates_dir):
    logger.warn("%s is deprecated. Please use %s" % (old_templates_dir,
                                                     new_templates_dir))
    templates_dir = merge_dirs(old_templates_dir, new_templates_dir)

We may be an 0.x release, but I think this is straight forward enough
to support being gentle with any users we don't know about.


Okay, thanks for the good suggestions.  I've proposed a couple of 
changes to switch the default and still support the old defaults.


I did not change the tools to pull from both directories if they exist 
because I thought that could potentially be confusing if someone had 
stale files in the old location and was expecting to use the new 
location.  I think it's less confusing to make the switch 
all-or-nothing.  Plus it makes the code change less complex :-).


https://review.openstack.org/#/c/89667/
https://review.openstack.org/#/c/89668/

Thanks.

-Ben




Re: [openstack-dev] [Heat] Proposing Thomas Spatzier for heat-core

2014-04-22 Thread Zane Bitter

Resending with [Heat] in the subject line. My bad.

On 22/04/14 14:21, Zane Bitter wrote:

I'd like to propose that we add Thomas Spatzier to the heat-core team.

Thomas has been involved in and consistently contributing to the Heat
community for around a year, since the time of the Havana design summit.
His code reviews are of extremely high quality IMO, and he has been
reviewing at a rate consistent with a member of the core team[1].

One thing worth addressing is that Thomas has only recently started
expanding the focus of his reviews from HOT-related changes out into the
rest of the code base. I don't see this as an obstacle - nobody is
familiar with *all* of the code, and we trust core reviewers to know
when we are qualified to give +2 and when we should limit ourselves to
+1 - and as far as I know nobody else is bothered either. However, if
you have strong feelings on this subject nobody will take it personally
if you speak up :)

Heat Core team members, please vote on this thread. A quick reminder of
your options[2]:
+1  - five of these are sufficient for acceptance
  0  - abstention is always an option
-1  - this acts as a veto

cheers,
Zane.


[1] http://russellbryant.net/openstack-stats/heat-reviewers-30.txt
[2]
https://wiki.openstack.org/wiki/Heat/CoreTeam#Adding_or_Removing_Members



Re: [openstack-dev] Proposing Thomas Spatzier for heat-core

2014-04-22 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-04-22 11:21:40 -0700:
 I'd like to propose that we add Thomas Spatzier to the heat-core team.
 
 Thomas has been involved in and consistently contributing to the Heat 
 community for around a year, since the time of the Havana design summit. 
 His code reviews are of extremely high quality IMO, and he has been 
 reviewing at a rate consistent with a member of the core team[1].
 
 One thing worth addressing is that Thomas has only recently started 
 expanding the focus of his reviews from HOT-related changes out into the 
 rest of the code base. I don't see this as an obstacle - nobody is 
 familiar with *all* of the code, and we trust core reviewers to know 
 when we are qualified to give +2 and when we should limit ourselves to 
 +1 - and as far as I know nobody else is bothered either. However, if 
 you have strong feelings on this subject nobody will take it personally 
 if you speak up :)
 
 Heat Core team members, please vote on this thread. A quick reminder of 
 your options[2]:
 +1  - five of these are sufficient for acceptance
   0  - abstention is always an option
 -1  - this acts as a veto
 

+1!



[openstack-dev] [Ironic] Status of the agent driver

2014-04-22 Thread Jim Rollenhagen
Hi folks! Deva and I talked a bit more about the agent driver last night, and I 
wanted to give everyone a quick status update on where we stand with merging 
the agent driver into Ironic itself.

First off, we’ve taken all of the agent driver patches we had and squashed them 
into the main agent patch here: https://review.openstack.org/#/c/84795/

That patch still depends on two other patches:
* https://review.openstack.org/#/c/81391/
* https://review.openstack.org/#/c/81919/

which should be close to landing.

The plan going forward is to continue to iterate on 84795 until it lands. Not 
everything is complete yet, but I would prefer to land it and file bugs, 
etc. for missing features or things that are broken. The patch is already 
pretty large and getting a bit unwieldy.

What we know is not ready today (I’d like to land these in later patches, but 
feedback welcome on that):
* tear_down() is not fully implemented.
* Networking things are not fully implemented.
* More hardware info coming from the agent should be stored in the database 
(IMO).
* The agent and PXE drivers should have similar driver_info and instance_info - 
this is not true today.
* The agent currently relies on a static DHCP configuration rather than the 
Neutron support the PXE driver uses - which means the agent cannot be used 
side-by-side with other drivers today. This should be fixed but may take a fair 
amount of work.
* There are quite a few TODOs littered around - some are functional things, 
others are optimizations. If there are some that should be implemented before 
landing this, we’re happy to do so.

We would appreciate it if folks could start reviewing this patch, in case there 
are things I missed in this list.

One last thing: testing. We plan to add tempest tests for this driver sooner 
than later. I think having similar or identical driver_info, and using Neutron 
for DHCP, etc, will simplify these tests, and possibly converge them to one 
test. That said, I’d like to start writing the tempest tests now and converge 
as we go.

So, with all that, thoughts?

// jim


[openstack-dev] [Climate] Query leases by project_id/domain_id

2014-04-22 Thread Fuente, Pablo A
Hi
I'm trying to tackle this bug
(https://bugs.launchpad.net/climate/+bug/1304435). The options that I'm
considering are:

1 - Add the project_id query parameter to the leases API
2 - Use the X_PROJECT_ID header

I prefer the first option, but I would like to know if there is a
general OpenStack approach for implementing this. BTW, I'm planning to
apply the same criteria for making queries for Domains.
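As a sketch, option 1 looks something like this on the server side — the endpoint path, lease fields, and `list_leases` helper here are purely illustrative, not Climate's actual implementation:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical lease records; only the fields needed for the sketch.
LEASES = [
    {"id": "l1", "project_id": "p1"},
    {"id": "l2", "project_id": "p2"},
]

def list_leases(url):
    """Filter leases by an optional project_id query parameter (option 1)."""
    params = parse_qs(urlparse(url).query)
    wanted = params.get("project_id")   # a list of values, or None
    if wanted is None:
        return LEASES
    return [lease for lease in LEASES if lease["project_id"] in wanted]

# Filtered and unfiltered requests against the toy data:
assert [l["id"] for l in list_leases("/v1/leases?project_id=p2")] == ["l2"]
assert len(list_leases("/v1/leases")) == 2
```

The same pattern extends naturally to a `domain_id` parameter.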

Pablo.


Re: [openstack-dev] [Tripleo] Reviews wanted for new TripleO elements

2014-04-22 Thread Ryan Brady


- Original Message -
 From: Ben Nemec openst...@nemebean.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, April 21, 2014 11:59:22 AM
 Subject: Re: [openstack-dev] [Tripleo] Reviews wanted for new TripleO elements
 
 Please don't make review requests on the list.  Details here:
 http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html
 

I just updated the wiki about this under the Review team section.  Some folks may 
have missed the email, while those who are new to OpenStack
probably wouldn't know what to look for or how far back in the mailing list 
archives to go.

-r

 Thanks.
 
 -Ben
 
 On 04/20/2014 02:44 PM, Macdonald-Wallace, Matthew wrote:
  Hi all,
 
  Can I please ask for some reviews on the following:
 
  https://review.openstack.org/#/c/87226/ - Install checkmk_agent
  https://review.openstack.org/#/c/87223/ - Install icinga cgi interface
 
  I already have a couple of +1s and jenkins is happy, all I need is +2 and
  +A! :)
 
  Thanks,
 
  Matt
 


Re: [openstack-dev] [qa] Selecting Compute tests by driver/hypervisor

2014-04-22 Thread Daryl Walleck
I see your point. I had assumed that the Hypervisor support matrix was something 
that was blessed, but that's what I get for assuming. :-) Once there's a list 
of required operations, I think that would become more clear. 

That said, there are server actions right now that even KVM doesn't implement 
(Change Password), which was one of the reasons I first implemented feature 
flags in Tempest. There are also some actions that don't necessarily make sense 
for certain drivers (resizing a bare metal server). If we make no assumptions 
about the underlying driver, shouldn't these capability flags go away 
altogether, or stay for convenience purposes? 
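For illustration, a config-driven capability flag (option 1 above) can be as small as a skip decorator keyed on a config attribute. The names below (`FeatureFlags`, `requires_capability`) are invented for the sketch, not actual Tempest code:

```python
import unittest

class FeatureFlags:
    """Toy stand-in for a compute_features_group config section."""
    change_password = False   # capability this driver lacks
    resize = True             # capability this driver has

CONF = FeatureFlags()

def requires_capability(name):
    """Skip a test unless the named capability is enabled in config."""
    return unittest.skipUnless(
        getattr(CONF, name, False),
        "driver does not support %s" % name)

class ServerActionsTest(unittest.TestCase):
    @requires_capability("change_password")
    def test_change_password(self):
        self.fail("should have been skipped")

    @requires_capability("resize")
    def test_resize(self):
        self.assertTrue(CONF.resize)

# Running the suite: one test skips, one passes.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(ServerActionsTest).run(result)
assert len(result.skipped) == 1 and result.wasSuccessful()
```

Option 2 would move the capability lookup from config into per-driver definitions, but the decorator shape stays the same.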

Either way, I'll hold off on this idea until after the summit discussion. 
Thanks for the feedback!

Daryl

-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: Tuesday, April 22, 2014 5:40 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] Selecting Compute tests by driver/hypervisor

On 04/21/2014 06:52 PM, Daryl Walleck wrote:
 I nearly opened a spec for this, but I’d really like to get some 
 feedback first. One of the challenges I’ve seen lately for Nova teams 
 not using KVM or Xen (Ironic and LXC are just a few) is how to 
 properly run the subset of Compute tests that will run for their 
 hypervisor or driver. Regexes are what Ironic went with, but I’m not 
 sure how well that will work long term since it’s very much dependent 
 on naming conventions. The good thing is that the capabilities for 
 each hypervisor/driver are well defined 
 (https://wiki.openstack.org/wiki/HypervisorSupportMatrix), so it’s 
 just a matter of how to convey that information. I see a few ways 
 forward from here:
 
  
 
 1.   Expand the compute_features_group config section to include all
 Compute actions and make sure all tests that require specific 
 capabilities have skipIfs or raise a skipException. This options seems 
 it would require the least work within Tempest, but the size of the 
 config will continue to grow as more Nova actions are added.
 
 2.   Create a new decorator class like was done with service tags
 that defines what drivers the test does or does not work for, and have 
 the definitions of the different driver capabilities be referenced by 
 the decorator. This is nice because it gets rid of the config creep, 
 but it’s also yet another decorator, which may not be desirable.
 
  
 
 I’m going to continue working through both of these possibilities, but 
 any feedback either solution would be appreciated.

Ironic mostly went with regexes for expediency to get something gating before 
their driver actually implements the requirements for the compute API.

Nova API is Nova API, the compute driver should be irrelevant. The part that is 
optional is specified by extensions (at the granularity level of an extension 
enable/disable). Creating all the knobs that are optional for extensions is 
good, and we're definitely not there yet. However if an API behaves differently 
based on compute driver, that's a problem with that compute driver.

I realize today that we're not there yet, but we have to be headed in that 
direction. The diagnostics API was an instance where this was pretty bad, and 
meant it was in no way an API, because the client had no idea what data payload 
it was getting back.

-Sean

--
Sean Dague
http://dague.net



Re: [openstack-dev] [Heat] Proposing Thomas Spatzier for heat-core

2014-04-22 Thread Randall Burt
+1

On Apr 22, 2014, at 1:43 PM, Zane Bitter zbit...@redhat.com wrote:

 Resending with [Heat] in the subject line. My bad.
 
 On 22/04/14 14:21, Zane Bitter wrote:
 I'd like to propose that we add Thomas Spatzier to the heat-core team.
 
 Thomas has been involved in and consistently contributing to the Heat
 community for around a year, since the time of the Havana design summit.
 His code reviews are of extremely high quality IMO, and he has been
 reviewing at a rate consistent with a member of the core team[1].
 
 One thing worth addressing is that Thomas has only recently started
 expanding the focus of his reviews from HOT-related changes out into the
 rest of the code base. I don't see this as an obstacle - nobody is
 familiar with *all* of the code, and we trust core reviewers to know
 when we are qualified to give +2 and when we should limit ourselves to
 +1 - and as far as I know nobody else is bothered either. However, if
 you have strong feelings on this subject nobody will take it personally
 if you speak up :)
 
 Heat Core team members, please vote on this thread. A quick reminder of
 your options[2]:
 +1  - five of these are sufficient for acceptance
  0  - abstention is always an option
 -1  - this acts as a veto
 
 cheers,
 Zane.
 
 
 [1] http://russellbryant.net/openstack-stats/heat-reviewers-30.txt
 [2]
 https://wiki.openstack.org/wiki/Heat/CoreTeam#Adding_or_Removing_Members
 


Re: [openstack-dev] [Heat] Proposing Thomas Spatzier for heat-core

2014-04-22 Thread Thomas Herve

 Resending with [Heat] in the subject line. My bad.
 
 On 22/04/14 14:21, Zane Bitter wrote:
  I'd like to propose that we add Thomas Spatzier to the heat-core team.
 
  Thomas has been involved in and consistently contributing to the Heat
  community for around a year, since the time of the Havana design summit.
  His code reviews are of extremely high quality IMO, and he has been
  reviewing at a rate consistent with a member of the core team[1].
 
  One thing worth addressing is that Thomas has only recently started
  expanding the focus of his reviews from HOT-related changes out into the
  rest of the code base. I don't see this as an obstacle - nobody is
  familiar with *all* of the code, and we trust core reviewers to know
  when we are qualified to give +2 and when we should limit ourselves to
  +1 - and as far as I know nobody else is bothered either. However, if
  you have strong feelings on this subject nobody will take it personally
  if you speak up :)
 
  Heat Core team members, please vote on this thread. A quick reminder of
  your options[2]:
  +1  - five of these are sufficient for acceptance
0  - abstention is always an option
  -1  - this acts as a veto
 
  cheers,
  Zane.

+1!

That's the beginning of the Thomases versus the Steves.


-- 
Thomas



[openstack-dev] [keystone] Catalog Backend in Deployments (Templated, SQL, etc)

2014-04-22 Thread Morgan Fainberg
During the weekly Keystone meeting, the topic of improving the Catalog was 
brought up. This topic is in the context of preparing for the design summit 
session on the Service Catalog. There are currently limitations in the 
templated catalog that do not exist in the SQL backed catalog. In an effort to 
provide the best support for the catalog going forward, the Keystone team would 
like to get feedback on the use of the various catalog backends.  

What we are looking for:
1. In your OpenStack deployments, which catalog backend are you using?
2. Which Keystone API version are you using?
This information will help us to prioritize updates to the catalog over the 
next development cycle (Juno) as well as identify if any changes need to be 
back-ported. In the long term, there is a desire to target new features and 
functionality for the Service Catalog to the SQL (and for testing KVS) 
backends, limiting enhancements and new development done on the templated 
catalog backend.

Please feel free to respond via the survey below or via email to the mailing 
list.

Keystone Catalog Backend Usage Survey: https://www.surveymonkey.com/s/3DL7FTY

Cheers,
Morgan

—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com


Re: [openstack-dev] [Heat] Proposing Thomas Spatzier for heat-core

2014-04-22 Thread Steven Dake

HOT seemed like a job for Ethan Hunt.

Nice work on finishing the job!

big +1 from me


On 04/22/2014 11:43 AM, Zane Bitter wrote:

Resending with [Heat] in the subject line. My bad.

On 22/04/14 14:21, Zane Bitter wrote:

I'd like to propose that we add Thomas Spatzier to the heat-core team.

Thomas has been involved in and consistently contributing to the Heat
community for around a year, since the time of the Havana design summit.
His code reviews are of extremely high quality IMO, and he has been
reviewing at a rate consistent with a member of the core team[1].

One thing worth addressing is that Thomas has only recently started
expanding the focus of his reviews from HOT-related changes out into the
rest of the code base. I don't see this as an obstacle - nobody is
familiar with *all* of the code, and we trust core reviewers to know
when we are qualified to give +2 and when we should limit ourselves to
+1 - and as far as I know nobody else is bothered either. However, if
you have strong feelings on this subject nobody will take it personally
if you speak up :)

Heat Core team members, please vote on this thread. A quick reminder of
your options[2]:
+1  - five of these are sufficient for acceptance
  0  - abstention is always an option
-1  - this acts as a veto

cheers,
Zane.


[1] http://russellbryant.net/openstack-stats/heat-reviewers-30.txt
[2]
https://wiki.openstack.org/wiki/Heat/CoreTeam#Adding_or_Removing_Members



Re: [openstack-dev] [SWIFT] Delete operation problem

2014-04-22 Thread Clay Gerrard
409 on DELETE (object?) is a pretty specific error.  That should mean that
the timestamp assigned to the delete is earlier than the timestamp of the
data file.

Most likely it means that you're getting some time drift on your proxies (but
that assumes multi-node), or maybe that you're reusing names between threads
and your object servers see PUT(ts1) PUT(ts3) DELETE(ts2) - but that'd be
a pretty tight race...

Should all be logged - try and find a DELETE that went 409 and trace the
transaction id.
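The timestamp rule above reduces to a one-line comparison. This is an illustrative model of the object server's decision, not Swift's actual code:

```python
def handle_delete(data_file_ts, delete_ts):
    """Illustrative model of an object server rejecting a DELETE whose
    timestamp is not newer than the stored data file's timestamp."""
    if delete_ts <= data_file_ts:
        return 409   # Conflict: clock drift, or an out-of-order race
    return 204       # No Content: delete accepted

# The PUT(ts1) PUT(ts3) DELETE(ts2) race described above loses and 409s:
assert handle_delete(data_file_ts=3.0, delete_ts=2.0) == 409
# The normal case, with the delete's timestamp strictly newer:
assert handle_delete(data_file_ts=1.0, delete_ts=2.0) == 204
```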


On Mon, Apr 21, 2014 at 5:54 AM, taurus huang huanggeng.8...@gmail.comwrote:

 Please provide the log file: /var/log/swift/swift.log   AND
 /var/log/keystone/keystone.log


 On Mon, Apr 21, 2014 at 11:55 AM, Sumit Gaur sumitkg...@gmail.com wrote:

 Hi
  I'm using the jclouds lib integrated with an OpenStack Swift + Keystone
  combination. Things are working fine except the stability test. After 20-30
  hours of testing, jclouds/SWIFT starts degrading in TPS and keeps going down
  over time.

 1) I am running the (PUT-GET-DEL) cycle in 10 parallel threads.
  2) I am getting a lot of 409 responses and DELETE failures back from
  SWIFT.


  Can somebody help me figure out what is going wrong here?

 Thanks
 sumit



Re: [openstack-dev] [Heat] Proposing Thomas Spatzier for heat-core

2014-04-22 Thread Jason Dunsmore
+1

On Tue, Apr 22 2014, Steven Dake wrote:

 HOT seemed like a job for Ethan Hunt.

 Nice  work on finishing the job!

 big +1 from me


 On 04/22/2014 11:43 AM, Zane Bitter wrote:
 Resending with [Heat] in the subject line. My bad.

 On 22/04/14 14:21, Zane Bitter wrote:
 I'd like to propose that we add Thomas Spatzier to the heat-core team.

 Thomas has been involved in and consistently contributing to the Heat
 community for around a year, since the time of the Havana design summit.
 His code reviews are of extremely high quality IMO, and he has been
 reviewing at a rate consistent with a member of the core team[1].

 One thing worth addressing is that Thomas has only recently started
 expanding the focus of his reviews from HOT-related changes out into the
 rest of the code base. I don't see this as an obstacle - nobody is
 familiar with *all* of the code, and we trust core reviewers to know
 when we are qualified to give +2 and when we should limit ourselves to
 +1 - and as far as I know nobody else is bothered either. However, if
 you have strong feelings on this subject nobody will take it personally
 if you speak up :)

 Heat Core team members, please vote on this thread. A quick reminder of
 your options[2]:
 +1  - five of these are sufficient for acceptance
   0  - abstention is always an option
 -1  - this acts as a veto

 cheers,
 Zane.


 [1] http://russellbryant.net/openstack-stats/heat-reviewers-30.txt
 [2]
 https://wiki.openstack.org/wiki/Heat/CoreTeam#Adding_or_Removing_Members



[openstack-dev] [Neutron][LBaaS] API discussion

2014-04-22 Thread Eugene Nikanorov
Hi folks,

I've added some API examples illustrating API/object model proposals on the
wiki
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion

Here's the link (also, at the bottom of the wiki page):
https://etherpad.openstack.org/p/neutron-lbaas-api-proposals


Thanks,
Eugene.


Re: [openstack-dev] [Heat] Proposing Thomas Spatzier for heat-core

2014-04-22 Thread Steven Hardy
On Tue, Apr 22, 2014 at 02:43:08PM -0400, Zane Bitter wrote:
 Resending with [Heat] in the subject line. My bad.
 
 On 22/04/14 14:21, Zane Bitter wrote:
 I'd like to propose that we add Thomas Spatzier to the heat-core team.
 
 Thomas has been involved in and consistently contributing to the Heat
 community for around a year, since the time of the Havana design summit.
 His code reviews are of extremely high quality IMO, and he has been
 reviewing at a rate consistent with a member of the core team[1].
 
 One thing worth addressing is that Thomas has only recently started
 expanding the focus of his reviews from HOT-related changes out into the
 rest of the code base. I don't see this as an obstacle - nobody is
 familiar with *all* of the code, and we trust core reviewers to know
 when we are qualified to give +2 and when we should limit ourselves to
 +1 - and as far as I know nobody else is bothered either. However, if
 you have strong feelings on this subject nobody will take it personally
 if you speak up :)
 
 Heat Core team members, please vote on this thread. A quick reminder of
 your options[2]:
 +1  - five of these are sufficient for acceptance
   0  - abstention is always an option
 -1  - this acts as a veto

+1, I agree, great work Thomas! :)

Steve



[openstack-dev] [neutron] Design Summit Sessions

2014-04-22 Thread Kyle Mestery
Folks:

Just a note to all who have had a design summit session accepted. As
was pointed out to me by Maru, it's perfectly fine and actually
encouraged to start discussing ideas for your session on the ML and
IRC before the Summit. If you can lay the foundation for your 40 (or
20) minute slot, all the better. It's critical to make effective use
of the face to face time we all get at the Summit.

Also, another note if your session was declined. We were almost 3:1
oversubscribed in Neutron. A declined session doesn't mean your BP
will not make it into Neutron. You can follow the
directions here [1] to get a BP accepted for Juno.

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Blueprints#Neutron



Re: [openstack-dev] [Ironic] Supporting preserve_ephemeral in rebuild()

2014-04-22 Thread Devananda van der Veen
On Tue, Apr 22, 2014 at 10:52 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from David Shrewsbury's message of 2014-04-22 10:16:42 -0700:
  Hi,
 
  I'm working on implementing rebuild() in the nova.virt.ironic driver so
  that we can support the --preserve-ephemeral option. I have a design
  question and would love some feedback on it.
 
  The way to trigger a deploy is to set the provision state to ACTIVE.
  However, for a rebuild, we cannot currently use this, since the API will
  return an error saying that the target state and the current provision
  state are the same.
 
  I can think of a couple of ways around this:
 
  (1) If target and current provision state are ACTIVE, go ahead and allow
  the (re)deploy.
 
  (2) Add a new provision state that would set the instance to a sort of
  temporary limbo state, expecting to be redeployed at some point by
 setting
  target to ACTIVE (as normal).

 I'm not familiar with Ironic's internals, but I think rebuild is a
 special state, since it involves a pretty violent change to the box
 (overwriting the disks) that is definitely not the same as being in an
 ACTIVE state where the user might expect it would be working.


I see this question as referring to what should be sent to the
nodes/UUID/states/provision endpoint to trigger the rebuild operation. The
operation should be distinct from the operations to provision (target:
active) or wipe (target: none), so I think we need a new value to represent
this, akin to sending reboot to the nodes/UUID/states/power endpoint.

As far as how the API represents the state of the node during a rebuild, I
would expect the node's provision_state to represent that it is being built
while that is in progress, with the provision_state being deploying,
wait call-back, and so forth, and the target_provision_state to be
active. I don't think this needs a new state, and the API can present the
same series of state changes as a normal deployment would, to make it
easier for the client to follow the progress.


-Devananda


[openstack-dev] Congrats and welcome to OPW interns!

2014-04-22 Thread Anne Gentle
Hi all,
I want to warmly welcome our new interns joining us through the GNOME
Outreach Program for Women. Here's a little bit of information about the
participants and their projects.

Virginia Gresham is in Guilford, CT and Deer Isle, ME (east coast US,
represent!) She'll work on a persona research and design project for the
Horizon dashboard. That'll include Horizon usability tests. Liz Blanchard
and Ju Lim are her mentors.

Ana Malagon will work out of New Haven, CT doing period-spanning statistics
for Ceilometer with mentor Eoghan Glynn.

Nataliia Uvarova (AAzza on IRC) is in Gjøvik, Norway and Kiev, Ukraine
(that's a commute to me, maybe I will check that geography).  She'll work
on Py3K support in Marconi with Flavio Percoco and Alejandro Cabrera.

I also want to show our appreciation to the mentors who worked with many
applicants and who did such a great job gathering projects and getting
first patches submitted and reviewed. Julie Pichon especially shone while
sorting through the many applicants and working with other organizations
who also get great interns through this program. One of our prior interns,
Terri Yu, did a fantastic job helping applicants and recruiting great
people for OpenStack. I know there are many more who helped this round and
I can't say enough about what super humans we have here.

The official announcement is here:
https://wiki.gnome.org/OutreachProgramForWomen/2014/MayAugust#OpenStack

With OPW and GSoC, we have 10 interns with many more mentors working on
OpenStack. This is fantastic! Thanks everyone who is making these programs
a reality for our community.

Anne


Re: [openstack-dev] [keystone] Catalog Backend in Deployments (Templated, SQL, etc)

2014-04-22 Thread Jay Pipes
On Tue, 2014-04-22 at 12:38 -0700, Morgan Fainberg wrote:
 During the weekly Keystone meeting, the topic of improving the Catalog
 was brought up. This topic is in the context of preparing for the
 design summit session on the Service Catalog. There are currently
 limitations in the templated catalog that do not exist in the SQL
 backed catalog. In an effort to provide the best support for the
 catalog going forward, the Keystone team would like to get feedback on
 the use of the various catalog backends.  
 
 
 What we are looking for:
  1. In your OpenStack deployments, which catalog backend are you
 using?

templated.

  2. Which Keystone API version are you using?

2

Best,
-jay

p.s. You probably want to ask this on the openstack-operators list.




[openstack-dev] [solum] Environments Working Group

2014-04-22 Thread Roshan Agrawal
Let us meet to develop a POV on
1. Which OpenStack program/project should Environments live under
2. What projects does Environments depend on (Heat, Keystone, OpenStack 
Congress, etc.)

Here is a set of environment use cases to frame the notion of environments -
https://wiki.openstack.org/wiki/Solum/Environments

Please indicate your availability for an IRC meeting on the doodle poll:
http://doodle.com/n4w9gmekwz58ekdz

It is worth noting that we are scheduling with participants across 5 time 
zones! [US CST, US PST, Europe, Australia, India].



Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-22 Thread James E. Blair
Zaro zaro0...@gmail.com writes:

 Hello All.  The OpenStack infra team has been working to put
 everything in place so that we can upgrade review.o.o from Gerrit
 version 2.4.4 to version 2.8.4  We are happy to announce that we are
 finally ready to make it happen!

 We will begin the upgrade on Monday, April 28th at 1600 UTC (the
 OpenStack recommended 'off' week).

 We would like to advise that you can expect a couple hours of downtime
 followed by several more hours of automated systems not quite working
 as expected.  Hopefully you shouldn't notice anyway because you should
 all be on vacation :)

Hi,

This is a reminder that next week, Gerrit will be unavailable for a few
hours starting at 1600 UTC on April 28th.

There are a few changes that will impact developers.  We will have more
detailed documentation about this soon, but here are the main things you
should know about:

* The Important Changes view is going away.  Instead, Gerrit 2.8
  supports complex custom dashboards.  We will have an equivalent of the
  Important Changes screen implemented as a custom dashboard.

* The Approval review label will be renamed to Workflow.  The +1
  value will still be Approved and will be available to core
  developers -- nothing about the approval process is changing.

* The new Workflow label will have a -1 Work In Progress value which
  will replace the Work In Progress button and review state.  Core
  reviewers and change owners will have permission to set that value
  (which will be removed when a new patchset is uploaded).

* We will also take this opportunity to change Gerrit's SSH host key.
  We will supply instructions for updating your known_hosts file.  As a
  reminder, you can always verify the fingerprints on this page:
  https://review.openstack.org/#/settings/ssh-keys

-Jim



Re: [openstack-dev] [Heat] Proposing Thomas Spatzier for heat-core

2014-04-22 Thread Steve Baker
+1 nice work Thomas!

On 23/04/14 06:43, Zane Bitter wrote:
 Resending with [Heat] in the subject line. My bad.

 On 22/04/14 14:21, Zane Bitter wrote:
 I'd like to propose that we add Thomas Spatzier to the heat-core team.

 Thomas has been involved in and consistently contributing to the Heat
 community for around a year, since the time of the Havana design summit.
 His code reviews are of extremely high quality IMO, and he has been
 reviewing at a rate consistent with a member of the core team[1].

 One thing worth addressing is that Thomas has only recently started
 expanding the focus of his reviews from HOT-related changes out into the
 rest of the code base. I don't see this as an obstacle - nobody is
 familiar with *all* of the code, and we trust core reviewers to know
 when we are qualified to give +2 and when we should limit ourselves to
 +1 - and as far as I know nobody else is bothered either. However, if
 you have strong feelings on this subject nobody will take it personally
 if you speak up :)

 Heat Core team members, please vote on this thread. A quick reminder of
 your options[2]:
 +1  - five of these are sufficient for acceptance
   0  - abstention is always an option
 -1  - this acts as a veto

 cheers,
 Zane.


 [1] http://russellbryant.net/openstack-stats/heat-reviewers-30.txt
 [2]
 https://wiki.openstack.org/wiki/Heat/CoreTeam#Adding_or_Removing_Members

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [Oslo] [Ironic] Can we change rpc_thread_pool_size default value?

2014-04-22 Thread Devananda van der Veen
Hi!

When a project is using oslo.messaging, how can we change our default
rpc_thread_pool_size?

---
Background

Ironic has hit a bug where a flood of API requests can deplete the RPC
worker pool on the other end and cause things to break in very bad ways.
Apparently, nova-conductor hit something similar a while back too. There've
been a few long discussions on IRC about it, tracked partially here:
  https://bugs.launchpad.net/ironic/+bug/1308680

tldr; a way we can fix this is to set the rpc_thread_pool_size very small
(eg, 4) and keep our conductor.worker_pool size near its current value (eg,
64). I'd like these to be the default option values, rather than require
every user to change the rpc_thread_pool_size in their local ironic.conf
file.

We're also about to switch from the RPC module in oslo-incubator to using
the oslo.messaging library.

Why are these related? Because it looks impossible for us to change the
default for this option from within Ironic: the option is registered when
EventletExecutor is instantiated (rather than when the module is loaded).

https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_executors/impl_eventlet.py#L76
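
To make the ordering problem concrete, here is a minimal self-contained
sketch of the failure mode. The classes below are stand-ins, not the real
oslo.config/oslo.messaging code, but they mimic the relevant behavior:
overriding the default of an option that has not been registered yet raises
an error, and the executor only registers the option when instantiated.

```python
# Stand-in classes (NOT the real oslo.config/oslo.messaging code) showing
# why a consumer cannot override the default before the executor exists.

class NoSuchOptError(Exception):
    pass

class Conf:
    """Toy config object mimicking oslo.config's registration behavior."""
    def __init__(self):
        self._opts = {}       # registered option name -> default value
        self._overrides = {}  # default overrides set by consumers

    def register_opt(self, name, default):
        self._opts.setdefault(name, default)

    def set_default(self, name, value):
        # Like oslo.config, overriding the default of an option that has
        # not been registered yet is an error.
        if name not in self._opts:
            raise NoSuchOptError(name)
        self._overrides[name] = value

    def get(self, name):
        return self._overrides.get(name, self._opts[name])

class EventletExecutor:
    def __init__(self, conf):
        # The option is only registered here, at instantiation time...
        conf.register_opt('rpc_thread_pool_size', default=64)
        self.pool_size = conf.get('rpc_thread_pool_size')

CONF = Conf()
try:
    # ...so a consumer like Ironic cannot do this at import/startup time:
    CONF.set_default('rpc_thread_pool_size', 4)
except NoSuchOptError:
    print('too early: option not registered yet')

EventletExecutor(CONF)
CONF.set_default('rpc_thread_pool_size', 4)  # works only after instantiation
print(CONF.get('rpc_thread_pool_size'))      # -> 4
```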


Thanks,
Devananda


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-22 Thread Salvatore Orlando
It's great to see that there is activity on the launchpad blueprint as well.
From what I heard, Oleg should have already translated the various
discussions into a list of functional requirements (or something like that).

If that is correct, it might be a good idea to share them with relevant
stakeholders (operators and developers), define an actionable plan for
Juno, and then distribute tasks.
It would be a shame if it turns out several contributors are working on
this topic independently.

Salvatore


On 22 April 2014 16:27, Jesse Pretorius jesse.pretor...@gmail.com wrote:

 On 22 April 2014 14:58, Salvatore Orlando sorla...@nicira.com wrote:

 From previous requirements discussions,


 There's a track record of discussions on the whiteboard here:
 https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [SWIFT] Delete operation problem

2014-04-22 Thread Sumit Gaur
Hi Clay,
Thanks for responding. First, about the setup: it is a multi-node env, and
the problem is with deleting objects.

I checked the proxy and storage node clocks; they are in sync. Also, the
storage nodes run only the main daemons, i.e. no auditor and the others.

I traced the 409 call and it is coming from the object server.
I am generating a random id for each key, so there is no reuse of the same
name. But after one put(t1) the next call is del(t1); could this be a
problem? I think they are sync calls.

Regards
Sumit

On Apr 23, 2014 5:14 AM, Clay Gerrard clay.gerr...@gmail.com wrote:

 409 on DELETE (object?) is a pretty specific error.  That should mean
that the timestamp assigned to the delete is earlier than the timestamp of
the data file.

 Most likely it means that you're getting some time-drift on your proxies (but
that assumes multi-node), or maybe that you're reusing names between threads
and your object servers see PUT(ts1) PUT(ts3) DELETE(ts2) - but that'd be
a pretty tight race...

 Should all be logged - try and find a DELETE that went 409 and trace the
transaction id.
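
The timestamp rule Clay describes can be sketched like this (an assumed
simplification of the object server's check, not Swift's actual code):

```python
# Assumed simplification of the object server's check, not Swift's actual
# code: a DELETE whose timestamp is not newer than the stored data file's
# timestamp gets a 409 Conflict.
def handle_delete(data_file_ts, delete_req_ts):
    if delete_req_ts <= data_file_ts:
        return 409  # request is older than (or equal to) the stored data
    return 204      # object deleted

# Healthy case: PUT stamped at t=100.0, DELETE stamped slightly later.
assert handle_delete(100.0, 100.5) == 204
# Clock drift between proxies (or an out-of-order PUT/DELETE race): the
# DELETE carries an earlier timestamp and is refused.
assert handle_delete(100.0, 99.7) == 409
```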


 On Mon, Apr 21, 2014 at 5:54 AM, taurus huang huanggeng.8...@gmail.com
wrote:

 Please provide the log file: /var/log/swift/swift.log   AND
/var/log/keystone/keystone.log


 On Mon, Apr 21, 2014 at 11:55 AM, Sumit Gaur sumitkg...@gmail.com
wrote:

 Hi
  I am using the jclouds lib integrated with an OpenStack Swift + Keystone
combination. Things are working fine except in the stability test. After 20-30
hours of testing, jclouds/SWIFT starts degrading in TPS and keeps going down
over time.

 1) I am running the (PUT-GET-DEL) cycle in 10 parallel threads.
 2) I am getting a lot of 409 and DEL failure for the response too from
SWIFT.


  Can somebody help me figure out what is going wrong here?

 Thanks
 sumit

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-22 Thread Jay Pipes
Hi Vincent, Zhi, Huiba, sorry for delayed response. See comments inline.

On Tue, 2014-04-22 at 10:59 +0800, Sheng Bo Hou wrote:
 I actually support the idea Huiba has proposed, and I am thinking of
 how to optimize the large data transfer(for example, 100G in a short
 time) as well. 
 I registered two blueprints in nova-specs, one is for an image upload
 plug-in to upload the image to
 glance(https://review.openstack.org/#/c/84671/), the other is a data
 transfer plug-in(https://review.openstack.org/#/c/87207/) for data
 migration among nova nodes. I would like to see other transfer
 protocols, like FTP, bitTorrent, p2p, etc, implemented for data
 transfer in OpenStack besides HTTP. 
 
 Data transfer may have many use cases. I summarize them into two
 catalogs. Please feel free to comment on it. 
 1. The machines are located in one network, e.g. one domain, one
 cluster, etc. The characteristic is the machines can access each other
 directly via the IP addresses(VPN is beyond consideration). In this
 case, data can be transferred via iSCSI, NFS, and definitive zero-copy
 as Zhiyan mentioned. 
 2. The machines are located in different networks, e.g. two data
 centers, two firewalls, etc. The characteristic is the machines can
 not access each other directly via the IP addresses(VPN is beyond
 consideration). The machines are isolated, so they can not be
 connected with iSCSI, NFS, etc. In this case, data have to go via the
 protocols, like HTTP, FTP, p2p, etc. I am not sure whether zero-copy
 can work for this case. Zhiyan, please help me with this doubt. 
 
 I guess for data transfer, including image downloading, image
 uploading, live migration, etc, OpenStack needs to taken into account
 the above two catalogs for data transfer.

For live migration, we use shared storage so I don't think it's quite
the same as getting/putting image bits from/to arbitrary locations.

   It is hard to say that one protocol is better than another, and that one
  approach prevails over another (BitTorrent is very cool, but if there is
  only one source and only one target, it would not be much faster than
  a direct FTP). The key is the use
 case(FYI:http://amigotechnotes.wordpress.com/2013/12/23/file-transmission-with-different-sharing-solution-on-nas/).

Right, a good solution would allow for some flexibility via multiple
transfer drivers.

 Jay Pipes has suggested we figure out a blueprint for a separate
 library dedicated to the data(byte) transfer, which may be put in oslo
 and used by any projects in need (Hoping Jay can come in:-)). Huiba,
 Zhiyan, everyone else, do you think we come up with a blueprint about
 the data transfer in oslo can work?

Yes, so I believe the most appropriate solution is to create a library
-- in oslo or a standalone library like taskflow -- that would offer a
simple byte streaming library that could be used by nova.image to expose
a neat and clean task-based API.

Right now, there is a bunch of random image transfer code spread
throughout nova.image and in each of the virt drivers there seems to be
different re-implementations of similar functionality. I propose we
clean all that up and have nova.image expose an API so that a virt
driver could do something like this:

from nova.image import api as image_api

...

task = image_api.copy(from_path_or_uri, to_path_or_uri)
# do some other work
copy_task_result = task.wait()

Within nova.image.api.copy(), we would use the aforementioned transfer
library to move the image bits from the source to the destination using
the most appropriate method.
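
A hypothetical sketch of such a transfer library (all names below are
invented for illustration, not the actual oslo or nova API): a task-based
copy() that dispatches to a transfer driver by URI scheme and returns a
handle with wait().

```python
# Hypothetical sketch (invented names, not the actual oslo/nova API):
# a task-based copy() that picks a transfer driver by URI scheme.
import shutil
import threading
import urllib.request

class CopyTask:
    """Tiny future-like handle: the copy runs in the background, wait() joins."""
    def __init__(self, fn, src, dst):
        self._result = None
        self._thread = threading.Thread(target=self._run, args=(fn, src, dst))
        self._thread.start()

    def _run(self, fn, src, dst):
        self._result = fn(src, dst)

    def wait(self):
        self._thread.join()
        return self._result

def _http_copy(src, dst):
    # Stream the bytes rather than buffering the whole image in memory.
    with urllib.request.urlopen(src) as resp, open(dst, 'wb') as out:
        shutil.copyfileobj(resp, out)
    return dst

def _file_copy(src, dst):
    shutil.copy(src.replace('file://', '', 1), dst)
    return dst

# "Most appropriate method" dispatch; more drivers (ftp, p2p, ...) would
# just be new entries in this table.
_DRIVERS = {'http': _http_copy, 'https': _http_copy, 'file': _file_copy}

def copy(from_uri, to_path):
    scheme = from_uri.split('://', 1)[0]
    return CopyTask(_DRIVERS[scheme], from_uri, to_path)
```

A virt driver would then do `task = copy(uri, path); ...; task.wait()` as in
the snippet above, without caring which transport moved the bytes.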

Best,
-jay




[openstack-dev] [Ceilometer][Nova][Glance]

2014-04-22 Thread Hachem Chraiti
Hi,
How can I detect, or trigger on, the launching of a new instance, an image,
or any new event for all meters, in order to save them into a database for
later use? Thanks.

Sincerly,
Chraiti Hachem,
Software Engineer


[openstack-dev] [Nova] sort_dir parameter

2014-04-22 Thread Cindy Lu


Hi,

Does Nova GET API support sort_dir and sort_key?  I would like to pass in a
parameter similar to what the Glance API currently has:
http://docs.openstack.org/developer/glance/glanceapi.html#filtering-images-lists.
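
For illustration, this is how the Glance-style sort parameters from the
linked docs would look on the wire; whether Nova's API honors them is
exactly the open question here.

```python
# Glance-style list sorting/filtering parameters (per the linked docs).
# Whether Nova accepts sort_key/sort_dir is the question being asked.
from urllib.parse import urlencode

params = {'sort_key': 'created_at', 'sort_dir': 'desc', 'limit': 20}
query = urlencode(sorted(params.items()))
print('/v2/images?' + query)
# -> /v2/images?limit=20&sort_dir=desc&sort_key=created_at
```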

Thank you,

Cindy


Re: [openstack-dev] [qa] Selecting Compute tests by driver/hypervisor

2014-04-22 Thread Adam Gandelman
On Tue, Apr 22, 2014 at 4:23 AM, Sean Dague s...@dague.net wrote:


 Agreed. Though I think we probably want the Nova API to be explicit
 about what parts of the API it's ok to throw a Not Supported. Because I
 don't think it's a blanket ok. On API endpoints where this is ok, we can
 convert not supported to a skip.

 -Sean


I'd favor going even further and let any such exception convert to a skip,
at least in the main test suite.  Keep Tempest a point-and-shoot suite that
can be pointed at any cloud and do the right thing.  We can add another
test or utility (perhaps in tools/?) to interpret results and attach
meaning to them WRT skips/fails validated against individual projects'
current notion of what is mandatory.   A list of these tests must pass
against any driver instead of a driver feature matrix.   This would allow
such policies to easily change over time outside of the actual test code.

-Adam


Re: [openstack-dev] 答复: 答复: [Nova][Neutron][Cinder][Heat]Should we support tags for os resources?

2014-04-22 Thread Jay Pipes
On Tue, 2014-04-22 at 02:02 +, Huangtianhua wrote:
 Thanks very much.
 
  
 
 I have register the blueprints for nova.  
 
 https://blueprints.launchpad.net/nova/+spec/add-tags-for-os-resources
 
  
 
 The simple plan is:
 
 1.  Add the tags api (create tags/delete tags/describe tags) for
 v3 api
 
  2.  Change the implementation for instances from “metadata” to “tags”
 
  
 
 Your suggestions?

Hi again,

The Nova blueprint process has changed. We now use a Gerrit repository
to submit, review, and approve blueprint specifications. Please see here
for information on how to submit a spec for the proposed blueprint:

https://wiki.openstack.org/wiki/Blueprints#Nova

Thank you!
-jay





Re: [openstack-dev] [Nova][Neutron][Cinder][Heat]Should we support tags for os resources?

2014-04-22 Thread Jay Pipes
On Tue, 2014-04-22 at 13:14 +0200, Thomas Spatzier wrote:
 snip
   * Identify key/value pairs that are relied on by all of Nova to be a
  specific key and value combination, and make these things actual real
  attributes on some object model -- since that is a much greater guard
  for the schema of an object and enables greater performance by allowing
  both type safety of the underlying data and removes the need to search
  by both a key and a value.
 
 Makes a lot of sense to me. So are you suggesting to have a set of
 well-defined property names per resource but still store them in the
 properties name-value map? Or would you rather make those part of the
 resource schema?

I'd rather have the common ones in the resource schema itself, since
that is, IMHO, better practice for enforcing consistency and type
safety.
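
To illustrate the distinction (invented names, not Nova's actual models):
free-form properties keep everything as strings and force key-and-value
matching, while attributes promoted into the schema are typed and support
range queries directly.

```python
# Illustrative only (invented names, not Nova's actual models): free-form
# properties vs. attributes promoted into the resource schema.
from dataclasses import dataclass

# Free-form key/value properties: everything is a string, and a lookup has
# to match both key and value.
props = {'architecture': 'x86_64', 'os_distro': 'fedora', 'os_version': '20'}

# Well-known attributes in the schema: typed, so a constraint like
# "version between 19 and 20" is a plain integer range query.
@dataclass
class Image:
    architecture: str
    distro: str
    version: int

images = [Image('x86_64', 'fedora', 19), Image('x86_64', 'fedora', 20),
          Image('x86_64', 'ubuntu', 14)]
matches = [i for i in images
           if i.architecture == 'x86_64' and i.distro == 'fedora'
           and 19 <= i.version <= 20]
assert len(matches) == 2
```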

 BTW, here is a use case in the context of which we have been thinking about
 that topic: we opened a BP for allowing constraint based selection of
 images for Heat templates, i.e. instead of saying something like (using
 pseudo template language)
 
 image ID must be in [fedora-19-x86_64, fedora-20-x86_64]
 
 say something like
 
 architecture must be x86_64, distro must be fedora, version must be
 between 19 and 20
 
 (see also [1]).
 
 This of course would require the existence of well-defined properties in
 glance so an image selection query in Heat can work.

Zactly :)

 As long as properties are just custom properties, we would require a lot
 of discipline from every to maintain properties correctly.

Yep, and you'd need to keep in sync with the code in Nova that currently
maintains these properties. :)

Best,
-jay

  And the
 implementation in Heat could be kind of tolerant, i.e. give it a try, and
 if the query fails just fail the stack creation. But if this is likely to
 happen in 90% of all environments, the usefulness is questionable.
 
 Here is a link to the BP I mentioned:
 [1]
 https://blueprints.launchpad.net/heat/+spec/constraint-based-flavors-and-images
 
 Regards,
 Thomas
 
 
  Best,
  -jay
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [qa] Selecting Compute tests by driver/hypervisor

2014-04-22 Thread Sean Dague
On 04/22/2014 07:33 PM, Adam Gandelman wrote:
 On Tue, Apr 22, 2014 at 4:23 AM, Sean Dague s...@dague.net wrote:
 
 
 Agreed. Though I think we probably want the Nova API to be explicit
 about what parts of the API it's ok to throw a Not Supported. Because I
 don't think it's a blanket ok. On API endpoints where this is ok, we can
 convert not supported to a skip.
 
 -Sean
 
 
 I'd favor going even further and let any such exception convert to a
 skip, at least in the main test suite.  Keep Tempest a point-and-shoot
 suite that can be pointed at any cloud and do the right thing.  We can
 add another test or utility (perhaps in tools/?) to interpret results
 and attach meaning to them WRT skips/fails validated against individual
 projects' current notion of what is mandatory.   A list of these tests
 must pass against any driver instead of a driver feature matrix.   This
 would allow such policies to easily change over time outside of the
 actual test code.

I think the policy about what's allowed to be not implemented or not
shouldn't be changing so quickly that it needs to be left to the
projects to decide after the fact.

It also really needs to be documented in our API. If a not implemented
is allowable for a particular API call, the client needs to know that's
valid response, in advance.

There are a lot of challenges with interpreting things after the fact,
especially if your path was cut short because things just started
skipping. And past experiences with auto skipping mostly meant that
project breaks got in (which is why we don't do that any more).

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [SWIFT] Delete operation problem

2014-04-22 Thread Sumit Gaur
Further to above details
please find related logs ...

*object server*

4:07:21:20 +] DELETE
/vdb1/844/AUTH_215c458021024a8c87471157a7040644/suntmp7/objkeye385cf9c-ad96-423f-8c8d-94d001631f8b
409 - DELETE
http://www.example.com/v2/AUTH_215c458021024a8c87471157a7040644/suntmp7/objkeye385cf9c-ad96-423f-8c8d-94d001631f8b;
txcb3b4b47278e483ca560e-005350d271 proxy-server 9619 0.0004

or

[22/Apr/2014:08:30:07 +] DELETE
/vdb1/238/AUTH_215c458021024a8c87471157a7040644/zoom43/objkeyf03550a3-2be9-4b59-b12b-df75378aea14
409 - DELETE
http://www.example.com/v2/AUTH_215c458021024a8c87471157a7040644/zoom43/objkeyf03550a3-2be9-4b59-b12b-df75378aea14;
tx0bc79d62fec445d28dfab-005356288f proxy-server 2142 0.0005




On Wed, Apr 23, 2014 at 8:20 AM, Sumit Gaur sumitkg...@gmail.com wrote:

 Hi clay,
 Thanks for responding , first about setup , it is multi node env, and
 problem is with delete object .

 I checked proxy and storage nodes clock, they are in sync. Also storage
 node run only main daemons I.e. no auditor, and others.

 I traced the 409 call and it is coming from object server.
 I am generating random id for key so no reuse of same name.  But after one
 put(t1) next is del(t1).is this could be a problem...I think they are
 sync calls.

 Regards
 Sumit

 On Apr 23, 2014 5:14 AM, Clay Gerrard clay.gerr...@gmail.com wrote:
 
  409 on DELETE (object?) is a pretty specific error.  That should mean
 that the timestamp assigned to the delete is earlier than the timestamp of
 the data file.
 
  Most likely mean that you're getting some time-drift on your proxies
 (but that assumes multi-node) or maybe that you're reusing names between
 threads and your object server's see PUT(ts1) PUT(ts3) DELETE(ts2) - but
 that'd be a pretty tight race...
 
  Should all be logged - try and find a DELETE that went 409 and trace the
 transaction id.
 
 
  On Mon, Apr 21, 2014 at 5:54 AM, taurus huang huanggeng.8...@gmail.com
 wrote:
 
  Please provide the log file: /var/log/swift/swift.log   AND
 /var/log/keystone/keystone.log
 
 
  On Mon, Apr 21, 2014 at 11:55 AM, Sumit Gaur sumitkg...@gmail.com
 wrote:
 
  Hi
  I using jclouds lib integrated with Openstack Swift+ keystone
 combination. Things are working fine except stability test. After 20-30
 hours of test jclouds/SWIFT start degrading in TPS and keep going down over
 the time.
 
  1) I am running the (PUT-GET-DEL) cycle in 10 parallel threads.
  2) I am getting a lot of 409 and DEL failure for the response too from
 SWIFT.
 
 
  Can sombody help me what is going wrong here ?
 
  Thanks
  sumit
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




[openstack-dev] [nova] nova-specs and python-novaclient

2014-04-22 Thread Joe Gordon
Hi All,

Several folks have submitted python-novaclient blueprints to nova specs for
the Juno Release [0][1], but since python-novaclient isn't part of the
integrated release this doesn't really make sense. Furthermore the template
we have has sections that make no sense for the client (such as 'REST API
impact').

So how should we handle python-novaclient blueprints? Keep them in
nova-specs in a separate directory? Separate repo?

I think generalize the nova-specs repo from a repo for blueprints for just
nova to a repo for all 'compute program' blueprints. Right now that would
just cover nova and python-novaclient, but may include other repositories
in the future.

best,
Joe



[0] https://review.openstack.org/#/c/88320/
[1] https://review.openstack.org/#/c/89468/


Re: [openstack-dev] [nova] nova-specs and python-novaclient

2014-04-22 Thread Jay Pipes
On Tue, 2014-04-22 at 17:00 -0700, Joe Gordon wrote:
 Hi All,
 
 
 Several folks have submitted python-novaclient blueprints to nova
 specs for the Juno Release [0][1], but since python-novaclient isn't
 part of the integrated release this doesn't really make sense.
 Furthermore the template we have has sections that make no sense for
 the client (such as 'REST API impact'). 
 
 
 So how should we handle python-novaclient blueprints? Keep them in
 nova-specs in a separate directory? Separate repo?
 
 
 I think generalize the nova-specs repo from a repo for blueprints for
 just nova to a repo for all 'compute program' blueprints. Right now
 that would just cover nova and python-novaclient, but may include
 other repositories in the future.

+1

-jay





Re: [openstack-dev] [Heat] Proposing Thomas Spatzier for heat-core

2014-04-22 Thread Angus Salkeld

On 22/04/14 14:43 -0400, Zane Bitter wrote:

Resending with [Heat] in the subject line. My bad.

On 22/04/14 14:21, Zane Bitter wrote:

I'd like to propose that we add Thomas Spatzier to the heat-core team.


+1

-Angus



Thomas has been involved in and consistently contributing to the Heat
community for around a year, since the time of the Havana design summit.
His code reviews are of extremely high quality IMO, and he has been
reviewing at a rate consistent with a member of the core team[1].

One thing worth addressing is that Thomas has only recently started
expanding the focus of his reviews from HOT-related changes out into the
rest of the code base. I don't see this as an obstacle - nobody is
familiar with *all* of the code, and we trust core reviewers to know
when we are qualified to give +2 and when we should limit ourselves to
+1 - and as far as I know nobody else is bothered either. However, if
you have strong feelings on this subject nobody will take it personally
if you speak up :)

Heat Core team members, please vote on this thread. A quick reminder of
your options[2]:
+1  - five of these are sufficient for acceptance
 0  - abstention is always an option
-1  - this acts as a veto

cheers,
Zane.


[1] http://russellbryant.net/openstack-stats/heat-reviewers-30.txt
[2]
https://wiki.openstack.org/wiki/Heat/CoreTeam#Adding_or_Removing_Members



Re: [openstack-dev] [nova] nova-specs and python-novaclient

2014-04-22 Thread Joe Gordon
On Tue, Apr 22, 2014 at 5:04 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Tue, 2014-04-22 at 17:00 -0700, Joe Gordon wrote:
  Hi All,
 
 
  Several folks have submitted python-novaclient blueprints to nova
  specs for the Juno Release [0][1], but since python-novaclient isn't
  part of the integrated release this doesn't really make sense.
  Furthermore the template we have has sections that make no sense for
  the client (such as 'REST API impact').
 
 
  So how should we handle python-novaclient blueprints? Keep them in
  nova-specs in a separate directory? Separate repo?
 
 
  I think generalize the nova-specs repo from a repo for blueprints for
  just nova to a repo for all 'compute program' blueprints. Right now
  that would just cover nova and python-novaclient, but may include
  other repositories in the future.


Here is a proof of concept: https://review.openstack.org/#/c/89725/



 +1

 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova] nova-specs and python-novaclient

2014-04-22 Thread Michael Still
My biggest concern with your proof of concept is that it would require
all outstanding blueprints to do a rebase, which sounds painful. Could
we perhaps create a subdirectory for novaclient, and keep the nova
stuff at the top level until most things have landed?

Michael

On Wed, Apr 23, 2014 at 10:24 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Tue, Apr 22, 2014 at 5:04 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Tue, 2014-04-22 at 17:00 -0700, Joe Gordon wrote:
  Hi All,
 
 
  Several folks have submitted python-novaclient blueprints to nova
  specs for the Juno Release [0][1], but since python-novaclient isn't
  part of the integrated release this doesn't really make sense.
  Furthermore the template we have has sections that make no sense for
  the client (such as 'REST API impact').
 
 
  So how should we handle python-novaclient blueprints? Keep them in
  nova-specs in a separate directory? Separate repo?
 
 
  I think generalize the nova-specs repo from a repo for blueprints for
  just nova to a repo for all 'compute program' blueprints. Right now
  that would just cover nova and python-novaclient, but may include
  other repositories in the future.


 Here is a proof of concept: https://review.openstack.org/#/c/89725/



 +1

 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rackspace Australia



Re: [openstack-dev] [Nova] sort_dir parameter

2014-04-22 Thread Matt Riedemann



On 4/22/2014 6:22 PM, Cindy Lu wrote:

Hi,

Does Nova GET API support sort_dir and sort_key?  I would like to pass
in a parameter similar to what the Glance API currently has:
http://docs.openstack.org/developer/glance/glanceapi.html#filtering-images-lists.

Thank you,

Cindy





Hi Cindy,

I think the answer is 'no' today but Steve Kaufer has a nova blueprint 
spec up for review [1] now to change that.


[1] https://review.openstack.org/#/c/84451/

--

Thanks,

Matt Riedemann




[openstack-dev] [Openstack] [Nova] BP about usb-passthrough RE: Change in openstack/nova-specs[master]: Support specify USB controller for USB-passthrough

2014-04-22 Thread Yuanjing (D)
Hi

I have proposed three BPs about USB-passthrough.
1. USB-passthrough is the core function I want to provide, which is in
https://review.openstack.org/#/c/86404/.
2. The function of specifying a USB controller for USB-passthrough refines
the use of USB-passthrough, and is in https://review.openstack.org/#/c/88337/.
3. The function of specifying a USB controller is the prerequisite of
specifying a USB controller for USB-passthrough, and is in
https://review.openstack.org/#/c/88334.

Some background: I want to explain in detail why I suggest providing this
function.

We provide VDI (Virtual Desktop) and server virtualization solutions for
customers, and our customers have strong requirements for using USB devices.
 
The typical use cases and our solutions are described below:

1. In the VDI solution, customers want to use local USB printers or USB
scanners with a TC (Thin Client). Because remote desktop protocols like ICA
already support USB redirection, customers only need to attach the USB
device to the TC, and the protocol can map the USB device to the VM.

2. In the virtualization solution, when starting or restarting some
business-critical applications, a connected USB key is needed for
authentication; some applications even need a daily authentication by USB
key. We suggest the following solutions:
(1) Using a physical USB-hub box and USB redirection over TCP/IP. Customers
need to buy the USB hub and install software in the guest OS; the software
helps redirect the USB device to the VM.

(2) Using the USB-passthrough and USB hot-plug functions provided by our
virtualization software. The end users (normally application or system
administrators) insert USB devices into the host containing the VM, then
can see the USB device list in the portal and choose a USB device to attach.

This solution has the advantages that:
1. It doesn't need additional physical devices.

2. It doesn't need a special server to run a SPICE client for USB
redirection.

3. Business-critical applications commonly need a stable and long-standing
USB key to attach, and USB-passthrough may be more stable than USB
redirection over TCP/IP or a remote desktop protocol.

Any advice is welcome.

Thanks


-Original Message-
From: Jay Pipes (Code Review) [mailto:rev...@openstack.org] 
Sent: Wednesday, April 23, 2014 6:59 AM
To: Yuanjing (D)
Cc: Daniel Berrange; Joe Gordon
Subject: Change in openstack/nova-specs[master]: Support specify USB controller 
for USB-passthrough

Jay Pipes has posted comments on this change.

Change subject: Support specify USB controller for USB-passthrough 
..


Patch Set 1: I would prefer that you didn't merge this

Jing, what is the difference between this blueprint and 
https://review.openstack.org/#/c/88334/? Are they the same?

--
To view, visit https://review.openstack.org/88337
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I26c81c61754af883b8de4c1ffe58384b87b22a77
Gerrit-PatchSet: 1
Gerrit-Project: openstack/nova-specs
Gerrit-Branch: master
Gerrit-Owner: Jing Yuan yj.y...@huawei.com
Gerrit-Reviewer: Daniel Berrange berra...@redhat.com
Gerrit-Reviewer: Jay Pipes jaypi...@gmail.com
Gerrit-Reviewer: Jenkins
Gerrit-Reviewer: Joe Gordon joe.gord...@gmail.com


  1   2   >