[openstack-dev] [TripleO][CI] all overcloud jobs failing

2014-03-28 Thread Robert Collins
Swift changed the permissions on the swift ring object file which
broke tripleo deployments of swift. (root:root mode 0600 files are not
readable by the 'swift' user). We've got a patch in flight
(https://review.openstack.org/#/c/83645/) that will fix this, but
until that lands please don't spend a lot of time debugging why your
overcloud tests fail :). (Also please don't land any patch that might
affect the undercloud functionality or overcloud until the fix is
landed).
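Until the fix lands, affected deployments can also be checked and corrected by
hand. A minimal sketch of that check (Python; the ring file path and target
mode here are assumptions, not the content of the tripleo patch):

    import os
    import pwd
    import stat

    RING = '/etc/swift/object.ring.gz'  # example ring file path

    st = os.stat(RING)
    swift_uid = pwd.getpwnam('swift').pw_uid
    # root:root mode 0600 is unreadable by the 'swift' user, so hand the
    # file to 'swift' and make it group-readable.
    if st.st_uid != swift_uid or not (st.st_mode & stat.S_IRGRP):
        os.chown(RING, swift_uid, -1)
        os.chmod(RING, 0o640)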

Btw Swift folk - 'check experimental' runs the tripleo jobs in all
projects, so if you have any concerns about impacting deployments, please
run 'check experimental' before approving things ;)

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Operators Design Summit ideas for Atlanta

2014-03-28 Thread Tom Fifield
Thanks to those projects that responded. I've proposed sessions in 
swift, ceilometer, tripleO and horizon.


On 17/03/14 07:54, Tom Fifield wrote:

All,

Many times we've heard a desire for more feedback and interaction from
users. However, their attendance at design summit sessions has met with
varied success.

However, last summit, by happy accident, a swift session turned into
something a lot more user-driven. A competent user was able to describe
their use case, and the developers were able to put a number of
questions to them. In this way, some of the assumptions about the way
certain things were implemented, and the various priorities of future
plans, became clearer. It worked really well ... perhaps this is
something we'd like to have happen for all the projects?

*Idea*: Add an ops session for each project in the design summit
https://etherpad.openstack.org/p/ATL-ops-dedicated-design-summit-sessions


Most operators running OpenStack tend to treat it more holistically than
those coding it. They are aware of, but don't necessarily think or work
in terms of, project breakdowns. To this end, I'd imagine that such
sessions would:

  * have as a primary purpose developers asking the operators
questions and requesting information

  * allow operators to tell the developers things (give feedback) as a
secondary purpose that could potentially be covered better in a
cross-project session

  * need good moderation, for example to push operator-to-operator
discussion into forums with more time available (eg
https://etherpad.openstack.org/p/ATL-ops-unconference-RFC )

  * be reinforced by having good volunteer users in potentially every
design summit session
(https://etherpad.openstack.org/p/ATL-ops-in-design-sessions )


Anyway, just a strawman - please jump on the etherpad
(https://etherpad.openstack.org/p/ATL-ops-dedicated-design-summit-sessions)
or leave your replies here!


Regards,


Tom


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] sample config files should be ignored in git...

2014-03-28 Thread Chmouel Boudjnah
On Thu, Mar 27, 2014 at 7:29 PM, Kurt Griffiths 
kurt.griffi...@rackspace.com wrote:

 P.S. - Any particular reason this script wasn't written in Python? Seems
 like that would avoid a lot of cross-platform gotchas.



I think it just needs someone stepping up to do it.

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Best practices for uploading large amounts of data

2014-03-28 Thread Serge Kovaleff
Hi Iliia,

I would take a look into BSON http://bsonspec.org/

Cheers,
Serge Kovaleff

On Thu, Mar 27, 2014 at 8:23 PM, Illia Khudoshyn ikhudos...@mirantis.com wrote:

 Hi, Openstackers,

 I'm currently working on adding bulk data load functionality to MagnetoDB.
 This functionality implies inserting huge amounts of data (billions of
 rows, gigabytes of data). The data being uploaded is a set of JSONs (for
 now). The question I'm interested in is a way of data transportation. For
 now I do streaming HTTP POST request from the client side with
 gevent.pywsgi on the server side.

 Could anybody suggest any (better?) approach for the transportation,
 please?
 What are the best practices for that?

 Thanks in advance.

 --

 Best regards,

 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.



 38, Lenina ave. Kharkov, Ukraine

 www.mirantis.com

 www.mirantis.ru



 Skype: gluke_work

 ikhudos...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
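As a concrete reference for the transport Illia describes above - a streaming
HTTP POST consumed by gevent.pywsgi - here is a minimal server-side sketch
(Python 2 style, matching gevent of this era). The port, chunk size and
line-delimited JSON framing are assumptions for illustration, not MagnetoDB's
actual protocol:

    import json
    from gevent.pywsgi import WSGIServer

    def bulk_load_app(environ, start_response):
        if environ['REQUEST_METHOD'] != 'POST':
            start_response('405 Method Not Allowed',
                           [('Content-Type', 'text/plain')])
            return ['POST only\n']
        stream = environ['wsgi.input']  # read the body without buffering it all
        count = 0
        buf = ''
        while True:
            chunk = stream.read(65536)
            if not chunk:
                break
            buf += chunk
            # Assume one JSON document per line; parse each completed line.
            while '\n' in buf:
                line, buf = buf.split('\n', 1)
                if line.strip():
                    row = json.loads(line)  # hand the row to the storage backend
                    count += 1
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['loaded %d rows\n' % count]

    if __name__ == '__main__':
        WSGIServer(('0.0.0.0', 8080), bulk_load_app).serve_forever()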


Re: [openstack-dev] Dependency freeze exception for happybase (I would like version 0.8)

2014-03-28 Thread Thierry Carrez
Thomas Goirand wrote:
 I'd like to ask everyone's opinion here. Is it ok to do a freeze
 exception in this case? If yes (please, everyone, agree! :) ), then
  would >=0.8 or >=0.4,!=0.6,!=0.7 be better?

At this point I think it's safest to go with >=0.4,!=0.6,!=0.7, *if*
Ceilometer folks confirm that 0.8 is fine by them. That way distros that
are stuck with 0.5 are not otherwise adversely affected.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][qa][all] Home of rendered specs

2014-03-28 Thread Thierry Carrez
Joe Gordon wrote:
 Now that nova and qa are beginning to use specs repos [0][1]. Instead of
 being forced to read raw RST or relying on github [3],  we want a domain
 where we can publish the fully rendered sphinxdocs based specs (rendered
 with oslosphinx of course). So how about:
 
   specs.openstack.org/$project http://specs.openstack.org/$project
 
 specs instead of docs because docs.openstack.org
 http://docs.openstack.org should only contain what is actually
 implemented so keeping specs in another subdomain is an attempt to avoid
 confusion as we don't expect every approved blueprint to get implemented.

Great idea.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][qa][all] Home of rendered specs

2014-03-28 Thread Chmouel Boudjnah
Thierry Carrez wrote:
 specs instead of docs because docs.openstack.org
  http://docs.openstack.org should only contain what is actually
  implemented so keeping specs in another subdomain is an attempt to avoid
  confusion as we don't expect every approved blueprint to get implemented.

 Great idea.

Great idea indeed! that would allow them to be nicely indexed by the
search engines!

Chmouel.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dependency freeze exception for happybase (I would like version 0.8)

2014-03-28 Thread Nadya Privalova
Hi folks,

Running tests against 0.8. Will update you ASAP.

Thanks,
Nadya


On Fri, Mar 28, 2014 at 1:39 PM, Thierry Carrez thie...@openstack.org wrote:

 Thomas Goirand wrote:
  I'd like to ask everyone's opinion here. Is it ok to do a freeze
  exception in this case? If yes (please, everyone, agree! :) ), then
  would >=0.8 or >=0.4,!=0.6,!=0.7 be better?

 At this point I think it's safest to go with >=0.4,!=0.6,!=0.7, *if*
 Ceilometer folks confirm that 0.8 is fine by them. That way distros that
 are stuck with 0.5 are not otherwise adversely affected.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Horizon] Searching for a new name for Tuskar UI

2014-03-28 Thread Dougal Matthews

On 27/03/14 18:11, Jay Dobies wrote:

It might be good to do a similar thing as Keystone does. We could keep
python-tuskarclient focused only on Python bindings for Tuskar (but keep
whatever CLI we already implemented there, for backwards compatibility),
and implement CLI as a plugin to OpenStackClient. E.g. when you want to
access Keystone v3 API features (e.g. domains resource), then
python-keystoneclient provides only Python bindings; it no longer
provides a CLI.


+1



+1 also, I completely agree and almost brought this up but wanted to
stick on the topic of names. So essentially one standard python-*client
named after the API with only bindings and another CLI client named
after the UI.

Dougal

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Best practices for uploading large amounts of data

2014-03-28 Thread Maksym Iarmak
2014-03-28 11:29 GMT+02:00 Serge Kovaleff skoval...@mirantis.com:

 Hi Iliia,

 I would take a look into BSON http://bsonspec.org/

 Cheers,
 Serge Kovaleff

 On Thu, Mar 27, 2014 at 8:23 PM, Illia Khudoshyn 
 ikhudos...@mirantis.com wrote:

 Hi, Openstackers,

 I'm currently working on adding bulk data load functionality to
 MagnetoDB. This functionality implies inserting huge amounts of data
 (billions of rows, gigabytes of data). The data being uploaded is a set of
 JSONs (for now). The question I'm interested in is a way of data
 transportation. For now I do streaming HTTP POST request from the client
 side with gevent.pywsgi on the server side.

 Could anybody suggest any (better?) approach for the transportation,
 please?
 What are the best practices for that?

 Thanks in advance.

 --

 Best regards,

 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.



 38, Lenina ave. Kharkov, Ukraine

 www.mirantis.com

 www.mirantis.ru



 Skype: gluke_work

 ikhudos...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Best practices for uploading large amounts of data

2014-03-28 Thread Maksym Iarmak
Hi guys,

I suggest taking a look at how Swift and Ceph do such things.


2014-03-28 12:33 GMT+02:00 Maksym Iarmak miar...@mirantis.com:




 2014-03-28 11:29 GMT+02:00 Serge Kovaleff skoval...@mirantis.com:

 Hi Iliia,

 I would take a look into BSON http://bsonspec.org/

 Cheers,
 Serge Kovaleff

 On Thu, Mar 27, 2014 at 8:23 PM, Illia Khudoshyn ikhudos...@mirantis.com
  wrote:

 Hi, Openstackers,

 I'm currently working on adding bulk data load functionality to
 MagnetoDB. This functionality implies inserting huge amounts of data
 (billions of rows, gigabytes of data). The data being uploaded is a set of
 JSONs (for now). The question I'm interested in is a way of data
 transportation. For now I do streaming HTTP POST request from the client
 side with gevent.pywsgi on the server side.

 Could anybody suggest any (better?) approach for the transportation,
 please?
 What are the best practices for that?

 Thanks in advance.

 --

 Best regards,

 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.



 38, Lenina ave. Kharkov, Ukraine

 www.mirantis.com

 www.mirantis.ru



 Skype: gluke_work

 ikhudos...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Best practices for uploading large amounts of data

2014-03-28 Thread Chmouel Boudjnah
Maksym Iarmak wrote:
 I suggest taking a look at how Swift and Ceph do such things.
Under Swift (and Ceph, via the radosgw, which implements the Swift API) we are
using POST and PUT, which has been working relatively well.

Chmouel

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Docker environment for Climate now ready

2014-03-28 Thread Sylvain Bauza

Hi folks,

I made a quick and dirty Dockerfile for creating a Docker image 
containing Climate trunk and starting services.


You can find the source there : https://github.com/sbauza/docker_climate

That's a third option for deploying Climate, to be used rather for 
correctly isolating Climate.


Let me know your thoughts,
-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-28 Thread Jesse Pretorius
On 27 March 2014 20:52, Chris Friesen chris.frie...@windriver.com wrote:

 It'd be nice to be able to do a heat template where you could specify
 things like put these three servers on separate hosts from each other, and
 these other two servers on separate hosts from each other (but maybe on the
 same hosts as the first set of servers), and they all have to be on the
 same network segment because they talk to each other a lot and I want to
 minimize latency, and they all need access to the same shared instance
 storage for live migration.


Surely this can be achieved with:
1) Configure compute hosts with shared storage and on the same switch
infrastructure in a host aggregate, with an AZ set in the aggregate
(setting the AZ gives visibility to the end-user)
2) Ensure that both the GroupAntiAffinityFilter and AvailabilityZoneFilter
are set up on the scheduler
3) Boot the instances using the availability zone and group scheduler hints
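Roughly, step 3 with python-novaclient might look like the sketch below. The
credentials, image UUID, flavor, AZ and group names are placeholders, and the
'group' scheduler hint is the one GroupAntiAffinityFilter consumes:

    from novaclient.v1_1 import client

    # Placeholder credentials; point these at your own Keystone and tenant.
    nova = client.Client('myuser', 'mypassword', 'myproject',
                         'http://keystone:5000/v2.0')

    for i in range(3):
        nova.servers.create(
            name='web-%d' % i,
            image='11111111-2222-3333-4444-555555555555',    # image UUID
            flavor='2',
            availability_zone='rack-az-1',                   # AZ set on the aggregate
            scheduler_hints={'group': 'web-anti-affinity'})  # anti-affinity group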
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-28 Thread Day, Phil
 Personally, I feel it is a mistake to continue to use the Amazon concept
 of an availability zone in OpenStack, as it brings with it the
 connotation from AWS EC2 that each zone is an independent failure
 domain. This characteristic of EC2 availability zones is not enforced in
 OpenStack Nova or Cinder, and therefore creates a false expectation for
 Nova users.

I think this is backwards training, personally. I think azs as separate failure
domains were done like that for a reason by amazon, and make good sense. 
What we've done is overload that with cells, aggregates etc which should 
have a better interface and are a different concept. Redefining well 
understood terms because they don't suit your current implementation is a 
slippery slope, and overloading terms that already have a meaning in the 
industry is just annoying.

+1
I don't think there is anything wrong with identifying new use cases and 
working out how to cope with them:

 - First we generalized Aggregates
- Then we mapped AZs onto aggregates as a special mutually exclusive group
- Now we're recognizing that maybe we need to make those changes to support AZs 
more generic so we can create additional groups of mutually exclusive aggregates

That all feels like good evolution.

But I don't see why that means we have to fit that in under the existing 
concept of AZs - why can't we keep AZs as they are and have a better thing 
called Zones that is just an OSAPI concept and is better than AZs?
Arguments around not wanting to add new options to create server seem a bit 
weak to me - for sure we don't want to add them in an uncontrolled way, but if 
we have a new, richer, concept we should be able to express that separately.

I'm still not personally convinced by the new use cases of racks having 
orthogonal power failure domains and switch failure domains - it seems to me 
from a practical perspective that it becomes really hard to work out where to 
separate VMs so that they don't share a failure mode. Every physical DC 
design I've been involved with tries to get the different failure domains to 
align. However, if the use case makes sense to someone then I'm not against 
extending aggregates to support multiple mutually exclusive groups.

I think I see a Design Summit session emerging here

Phil
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Docker environment for Climate now ready

2014-03-28 Thread Nikolay Starodubtsev
Great! Thank you, Sylvain!



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1


2014-03-28 14:57 GMT+04:00 Sylvain Bauza sylvain.ba...@bull.net:

  Hi folks,

 I made a quick and dirty Dockerfile for creating a Docker image containing
 Climate trunk and starting services.

 You can find the source there : https://github.com/sbauza/docker_climate

 That's a third option for deploying Climate, to be used rather for
 correctly isolating Climate.

 Let me know your thoughts,
 -Sylvain

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Docker environment for Climate now ready

2014-03-28 Thread Sylvain Bauza

On 28/03/2014 12:04, Nikolay Starodubtsev wrote:

Great! Thank you, Sylvain!



The README.md file is a bit minimal; it still requires a Devstack 
running elsewhere for access to rabbitmq, Nova and Keystone.


I'm currently looking at dockenstack [1] to see how I could have a 
running devstack within a container, but it relies on a forked Nova repo 
plus some extra patches supporting the Docker driver, as the driver 
has recently been removed from Nova (and will reappear in Juno) due to 
lack of CI.


I'm still wondering if it's worth running qemu inside a container and 
what could be the side effects.


-Sylvain



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1



2014-03-28 14:57 GMT+04:00 Sylvain Bauza sylvain.ba...@bull.net:


Hi folks,

I made a quick and dirty Dockerfile for creating a Docker image
containing Climate trunk and starting services.

You can find the source there :
https://github.com/sbauza/docker_climate

That's a third option for deploying Climate, to be used rather for
correctly isolating Climate.

Let me know your thoughts,
-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Regarding on iptables in openstack

2014-03-28 Thread shiva m
Hi,

I installed devstack-havana on ubuntu-13.10. I see iptables-save with all
iptables rules. Can anyone please tell me how to add a new rule, or edit
the iptables-save output, on OpenStack?

Thanks
Shiva
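For what it's worth, iptables-save prints the current ruleset rather than being
a config file to edit; rules are normally changed with the iptables command (or
iptables-restore). Note also that nova-network/neutron manage their own chains
and may rewrite them, so custom rules are safest in a dedicated chain. A
minimal sketch, with an example chain name and rule:

    import subprocess

    def iptables(*args):
        subprocess.check_call(['iptables'] + list(args))

    # Keep custom rules in a dedicated chain so they are easy to find and are
    # not mixed into chains that the OpenStack services rebuild.
    try:
        iptables('-N', 'my-custom')  # create the chain; fails if it exists
    except subprocess.CalledProcessError:
        pass
    iptables('-I', 'INPUT', '1', '-j', 'my-custom')  # hook it into INPUT once
    iptables('-A', 'my-custom', '-p', 'tcp', '--dport', '8080', '-j', 'ACCEPT')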
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Best practices for uploading large amounts of data

2014-03-28 Thread Dmitriy Ukhlov

On 03/28/2014 11:29 AM, Serge Kovaleff wrote:

Hi Iliia,

I would take a look into BSON http://bsonspec.org/

Cheers,
Serge Kovaleff

On Thu, Mar 27, 2014 at 8:23 PM, Illia Khudoshyn 
ikhudos...@mirantis.com wrote:


Hi, Openstackers,

I'm currently working on adding bulk data load functionality to
MagnetoDB. This functionality implies inserting huge amounts of
data (billions of rows, gigabytes of data). The data being
uploaded is a set of JSONs (for now). The question I'm interested
in is a way of data transportation. For now I do streaming HTTP
POST request from the client side with gevent.pywsgi on the server
side.

Could anybody suggest any (better?) approach for the
transportation, please?
What are the best practices for that?

Thanks in advance.

-- 


Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.

38, Lenina ave. Kharkov, Ukraine

www.mirantis.com

www.mirantis.ru

Skype: gluke_work

ikhudos...@mirantis.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi Iliia,
I guess if we are talking about Cassandra batch loading, the fastest way is 
to generate SSTables locally and load them into Cassandra via JMX or 
sstableloader

http://www.datastax.com/dev/blog/bulk-loading

If you want to implement bulk load via the magnetodb layer (not to Cassandra 
directly) you could try to use a simple TCP socket and implement your own 
binary protocol (using BSON, for example). HTTP is a text protocol, so using a 
TCP socket can help you avoid the overhead of base64 encoding. In my 
opinion, working with HTTP and BSON is a doubtful solution, 
because you would use two-phase encoding and decoding - 1) object to BSON, 
2) BSON to base64, 3) base64 to BSON, 4) BSON to object - instead of just 
1) object to JSON, 2) JSON to object in the case of 
HTTP + JSON.
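The base64 overhead mentioned above is easy to quantify with a small
self-contained sketch (purely synthetic payload, no MagnetoDB code involved):

    import base64
    import json
    import os

    payload = os.urandom(1024 * 1024)       # 1 MiB of fake binary row data

    raw_size = len(payload)                 # what a binary protocol would send
    b64 = base64.b64encode(payload).decode('ascii')
    json_size = len(json.dumps({'data': b64}))  # what HTTP + JSON would send

    print('raw:  %d bytes' % raw_size)
    print('json: %d bytes (%.0f%% overhead)' % (
        json_size, 100.0 * (json_size - raw_size) / raw_size))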


HTTP streaming, as far as I know, is an asynchronous type of HTTP. You can 
expect performance gains thanks to skipping the generation of an HTTP 
response on the server side, and the wait for that response on the client 
side, for each chunk. But you still need to send almost the same amount of 
data. So if network throughput is your bottleneck - it doesn't help. If the 
server side is your bottleneck - it doesn't help either.


Also note that in any case, the MagnetoDB Cassandra 
Storage currently converts your data to a CQL query, which is also text. It 
would be nice to implement the MagnetoDB BatchWriteItem operation via 
Cassandra sstable generation and loading via sstableloader, but 
unfortunately, as far as I know, that functionality is implemented only for 
the Java world.


--
Best regards,
Dmitriy Ukhlov
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Best practices for uploading large amounts of data

2014-03-28 Thread Aleksandr Chudnovets
Dmitriy Ukhlov wrote:

   I guess if we are talking about Cassandra batch loading, the fastest way
 is to generate SSTables locally and load them into Cassandra via JMX or
 sstableloader
 http://www.datastax.com/dev/blog/bulk-loading


 Good idea, Dmitriy. IMHO bulk load is a back-end-specific task. So using
specialized tools seems like a good idea to me.

Regards,
Alexander Chudnovets
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dependency freeze exception for happybase (I would like version 0.8)

2014-03-28 Thread Nadya Privalova
Today I've tested 0.6 and 0.8. They are acceptable for us. But 0.4 is not.
So I'd like to support Thomas's suggestion of a freeze exception for
happybase.

Thanks, Nadya


On Fri, Mar 28, 2014 at 1:56 PM, Nadya Privalova nprival...@mirantis.com wrote:

 Hi folks,

 Running tests against 0.8. Will update you ASAP.

 Thanks,
 Nadya


 On Fri, Mar 28, 2014 at 1:39 PM, Thierry Carrez thie...@openstack.org wrote:

 Thomas Goirand wrote:
  I'd like to ask everyone's opinion here. Is it ok to do a freeze
  exception in this case? If yes (please, everyone, agree! :) ), then
  would >=0.8 or >=0.4,!=0.6,!=0.7 be better?

 At this point I think it's safest to go with >=0.4,!=0.6,!=0.7, *if*
 Ceilometer folks confirm that 0.8 is fine by them. That way distros that
 are stuck with 0.5 are not otherwise adversely affected.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-28 Thread Belmiro Moreira
+1 for Phil comments.
I agree that VMs should spread between different default avzs if user
doesn't define one at boot time.
There is a blueprint for that feature that unfortunately didn't make it for
icehouse.
https://blueprints.launchpad.net/nova/+spec/schedule-set-availability-zones

Belmiro



On Fri, Mar 28, 2014 at 12:01 PM, Day, Phil philip@hp.com wrote:

  Personally, I feel it is a mistake to continue to use the Amazon concept
  of an availability zone in OpenStack, as it brings with it the
  connotation from AWS EC2 that each zone is an independent failure
  domain. This characteristic of EC2 availability zones is not enforced in
  OpenStack Nova or Cinder, and therefore creates a false expectation for
  Nova users.

 I think this is backwards training, personally. I think azs as separate
 failure
 domains were done like that for a reason by amazon, and make good sense.
 What we've done is overload that with cells, aggregates etc which should
 have a better interface and are a different concept. Redefining well
 understood terms because they don't suit your current implementation is a
 slippery slope, and overloading terms that already have a meaning in the
 industry is just annoying.

 +1
 I don't think there is anything wrong with identifying new use cases and
 working out how to cope with them:

  - First we generalized Aggregates
 - Then we mapped AZs onto aggregates as a special mutually exclusive group
 - Now we're recognizing that maybe we need to make those changes to
 support AZs more generic so we can create additional groups of mutually
 exclusive aggregates

 That all feels like good evolution.

 But I don't see why that means we have to fit that in under the existing
 concept of AZs - why can't we keep AZs as they are and have a better thing
 called Zones that is just an OSAPI concept and is better than AZs?
  Arguments around not wanting to add new options to create server seem a
 bit weak to me - for sure we don't want to add them in an uncontrolled way,
 but if we have a new, richer, concept we should be able to express that
 separately.

 I'm still not personally convinced by the new use cases of racks having
 orthogonal power failure domains and switch failure domains - it seems to
 me from a practical perspective that it becomes really hard to work out
 where to separate VMs so that they don't share a failure mode. Every
 physical DC design I've been involved with tries to get the different
 failure domains to align. However, if the use case makes sense to
 someone then I'm not against extending aggregates to support multiple
 mutually exclusive groups.

 I think I see a Design Summit session emerging here

 Phil


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dependency freeze exception for happybase (I would like version 0.8)

2014-03-28 Thread Sean Dague
Given that some RCs have already shipped, I feel like the ship has
sailed here, because projects already have branches outside of master
with the requirements files they are going to have for icehouse, and the
normal auto-propose requirements process doesn't work here.

So I'm -0 on this for icehouse.

-Sean

On 03/28/2014 08:48 AM, Nadya Privalova wrote:
 Today I've tested 0.6 and 0.8. They are acceptable for us. But 0.4 is
 not. So I'd like to support Thomas's suggestion of a freeze exception
 for happybase.
 
 Thanks, Nadya
 
 
 On Fri, Mar 28, 2014 at 1:56 PM, Nadya Privalova
 nprival...@mirantis.com wrote:
 
 Hi folks,
 
 Running tests against 0.8. Will update you ASAP.
 
 Thanks,
 Nadya
 
 
 On Fri, Mar 28, 2014 at 1:39 PM, Thierry Carrez
 thie...@openstack.org wrote:
 
 Thomas Goirand wrote:
  I'd like to ask everyone's opinion here. Is it ok to do a freeze
  exception in this case? If yes (please, everyone, agree! :) ), then
  would >=0.8 or >=0.4,!=0.6,!=0.7 be better?
 
 At this point I think it's safest to go with >=0.4,!=0.6,!=0.7, *if*
 Ceilometer folks confirm that 0.8 is fine by them. That way distros that
 are stuck with 0.5 are not otherwise adversely affected.
 
 --
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dependency freeze exception for happybase (I would like version 0.8)

2014-03-28 Thread Sergey Lukjanov
Sean, happybase is used only in Ceilometer.

On Fri, Mar 28, 2014 at 5:04 PM, Sean Dague s...@dague.net wrote:
 Given that some RCs have already shipped, I feel like the ship has
 sailed here, because projects already have branches outside of master
 with the requirements files they are going to have for icehouse, and the
 normal auto propose requirements doesn't work here.

 So I'm -0 on this for icehouse.

 -Sean

 On 03/28/2014 08:48 AM, Nadya Privalova wrote:
 Today I've tested 0.6 and 0.8. They are acceptable for us. But 0.4 is
 not. So I'd like to support Thomas's suggestion of a freeze exception
 for happybase.

 Thanks, Nadya


 On Fri, Mar 28, 2014 at 1:56 PM, Nadya Privalova
 nprival...@mirantis.com wrote:

 Hi folks,

 Running tests against 0.8. Will update you ASAP.

 Thanks,
 Nadya


 On Fri, Mar 28, 2014 at 1:39 PM, Thierry Carrez
 thie...@openstack.org wrote:

 Thomas Goirand wrote:
  I'd like to ask everyone's opinion here. Is it ok to do a freeze
  exception in this case? If yes (please, everyone, agree! :) ), then
  would >=0.8 or >=0.4,!=0.6,!=0.7 be better?

 At this point I think it's safest to go with >=0.4,!=0.6,!=0.7, *if*
 Ceilometer folks confirm that 0.8 is fine by them. That way distros that
 are stuck with 0.5 are not otherwise adversely affected.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Not running for PTL

2014-03-28 Thread Russell Bryant
Now that PTL nominations are open for the Juno cycle, it seems
appropriate that I should make my intentions clear.  I do not plan to
run for PTL this cycle.  Nova PTL is truly a full time job.  I'm
planning to take a month off after the Icehouse release for personal
reasons so I feel it's best for the project to let someone else take
over this time.

Nova has a strong group of leaders [1] that work together to make the
project work.  The nova-core team [2] works hard reviewing code to merge
thousands of patches a year without letting the code fall apart.  A
subset of nova-core, the nova-drivers [3] team, has gone above and
beyond during the Icehouse cycle to help review blueprints to set
direction for the project.  Additionally, we have others dedicated to
organizing sub-teams [4]. I'm incredibly thankful for the dedication and
leadership of all of these people.

Thank you all for the opportunity to serve as the Nova PTL for Havana
and Icehouse.  It is truly an honor to work with all of the people that
make OpenStack happen.

[1] https://wiki.openstack.org/wiki/Nova#People
[2] https://review.openstack.org/#/admin/groups/25,members
[3] https://launchpad.net/~nova-drivers/+members#active
[4] https://wiki.openstack.org/wiki/Nova#Nova_subteams

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] PTL Candidacy

2014-03-28 Thread Dan Smith
Hi all,

I would like to run for the OpenStack Compute (Nova) PTL position.

Qualifications
--------------
I have been working almost exclusively on Nova since mid-2012, and have
been on the nova-core team since late 2012. I am also a member of
nova-drivers, where I help to target and prioritize blueprints to help
shape and focus the direction of the project. I spend a lot of time
reviewing code from all over the Nova tree, and am regularly in the top
five reviewers:

  http://russellbryant.net/openstack-stats/nova-reviewers-90.txt

and I have sustained that level of activity consistently over time:

  http://russellbryant.net/openstack-stats/nova-reviewers-365.txt

My focus since I started has been on improving Nova's live upgrade
capabilities, which started with significant contributions to completion
of the no-db-compute blueprint, creation of the conductor service, and
most recently the concept and implementation for the NovaObject work. I
have been in or at the top of the list of contributors by commit count
for a few cycles now:


https://github.com/openstack/nova/graphs/contributors?from=2012-07-25&to=2014-03-22&type=c

  http://www.ohloh.net/p/novacc/contributors


http://www.stackalytics.com/?release=icehouse&metric=commits&project_type=openstack&module=&company=&user_id=danms

Icehouse Accomplishments
------------------------
This past cycle, I worked to get us to the point at which we could
successfully perform live upgrades for a subset of scenarios from Havana
to the Icehouse release. With the help of many folks, this is now a
reality, with an upstream gating test to prevent regressions going
forward. This is largely possible due to the no-db-compute and
NovaObject efforts in the past, which provide us an architecture of
version compatibility.

Late in the Icehouse cycle, I also worked with developers from Neutron
to design and implement a system for coordination between services. This
allows us to better integrate Nova's network cache and instance
modification tasks with Neutron's processes for increased reliability
and performance.

Looking forward to Juno
-----------------------
Clearly, as Nova continues to grow, the difficult task of scaling the
leadership is increasingly important. In the Icehouse cycle, we gained
some momentum around this, specifically with involving the entire
nova-drivers team in the task of targeting and prioritizing blueprints.
The creation of the nova-specs repo will help organize the task of
proposing new work, but will add some additional steps to the process. I
plan to continue to lean on the drivers team as a whole for keeping up
with blueprint-related tasks. Further, we gained blueprint and bug czars
in John Garbutt and Tracy Jones, both of whom have done an excellent
job of wrangling the paperwork involved with tracking these items. I
think that delegation is extremely important, and something we should
attempt to replicate for other topics.

The most tactile issue around scaling the project is, of course, the
review bandwidth and latency. Russell did a fantastic job of keeping
fresh blood on the nova-core team, which both encourages existing
members to exhibit a high level of activity, as well as encourages other
contributors to aim for the level of activity and review quality needed
to be on the core team. I plan to continue to look for ways to increase
communication between the core team members, as well as keep it packed
with people capable of devoting time to the important task of reviewing
code submissions.

Another excellent win for the Nova project in Icehouse was the
requirement for third-party CI testing of our virtualization drivers.
Not only did this significantly improve our quality and
regression-spotting abilities on virt drivers, but it also spurred other
projects to require the same from their contributing vendors. Going
forward, I think we need to increase focus on the success rate for each
of these systems which will help us trust them when they report failure.
Additionally, I think it is important for us to define a common and
minimum set of functions that a virt driver must support. Currently, our
hypervisor support matrix shows a widely-varying amount of support for
some critical things that a user would expect from a driver integrated
in our tree.

Thanks for your consideration!

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder]Persistance layer for cinder + taskflow

2014-03-28 Thread Kekane, Abhishek
Hello everyone,

Currently I am working on adding a persistence layer for the create_volume API 
using taskflow.
Could you please give your opinions on whether it is a good idea to add the 
taskflow tables to the existing cinder database or to create a new database for 
taskflow?

Thanks & Regards,

Abhishek Kekane



__
Disclaimer: This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.  If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Best practices for uploading large amounts of data

2014-03-28 Thread Romain Hardouin
Bulk loading with sstableloader is blazingly fast (the price to pay is that it's 
not portable, of course). 
Also it's network-efficient thanks to SSTable compression. If the network is 
not a limiting factor then LZ4 will be great.




On Friday, 28 March 2014 at 13:46, Aleksandr Chudnovets achudnov...@mirantis.com 
wrote:
 
Dmitriy Ukhlov wrote:

 I guess if we are talking about Cassandra batch loading, the fastest way is to 
 generate SSTables locally and load them into Cassandra via JMX or sstableloader
http://www.datastax.com/dev/blog/bulk-loading



 Good idea, Dmitriy. IMHO bulk load is a back-end-specific task. So using 
specialized tools seems like a good idea to me.

Regards,
Alexander Chudnovets

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] PTL Candidacy

2014-03-28 Thread James E. Blair
Hi,

I would like to announce my candidacy for the Infrastructure PTL.

I have developed and operated the project infrastructure for several
years and have been honored to serve as the PTL for the Icehouse cycle.

I was instrumental not only in creating the project gating system and
development process, but also in scaling it from three projects to 250.

We face more scaling challenges of a different nature this cycle.
Interest in our development tools and processes in their own right has
increased dramatically.  This is great for us and the project as it
provides new opportunities for contributions.  Helping the
infrastructure projects evolve into more widely useful tools while
maintaining the sharp focus on serving OpenStack's needs that made them
compelling in the first place is a challenge I look forward to.

The amazing growth of the third-party CI system is an area where we can
make a lot of improvement.  During Icehouse, everyone was under deadline
pressure so we tried to limit system or process modifications that would
impact that.  During Juno, I would like to improve the experience both
for third-party CI providers as well as the developers who use their
results.

I am thrilled to be a part of one of the most open free software project
infrastructures, and I would very much like to continue to serve as its
PTL.

Thanks,

Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder]Persistance layer for cinder + taskflow

2014-03-28 Thread Joshua Harlow
An idea that might be good is to start off using SQLite as the taskflow 
persistence backend.

Get that working using SQLite files (which should be fine as a persistence 
method for most usages) and then after this works there can be discussion 
around moving those tables to something like MySQL and the benefits/drawbacks 
of doing that.

What do you think?

Start small then grow seems to be a good approach.
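A minimal sketch of what that could look like with taskflow's persistence API
(the connection strings and file path are examples; check them against the
taskflow release in use):

    import contextlib

    from taskflow.persistence import backends

    def make_backend(connection='sqlite:////var/lib/cinder/taskflow.db'):
        backend = backends.fetch({'connection': connection})
        with contextlib.closing(backend.get_connection()) as conn:
            conn.upgrade()  # create/upgrade the taskflow tables
        return backend

    # Later the same code can target a MySQL database instead, e.g.:
    # make_backend('mysql://cinder:secret@localhost/cinder')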

Sent from my really tiny device...

On Mar 28, 2014, at 7:11 AM, Kekane, Abhishek 
abhishek.kek...@nttdata.com wrote:


Hello everyone,

Currently I am working on adding a persistence layer for the create_volume API 
using taskflow.
Could you please give your opinions on whether it is a good idea to add the 
taskflow tables to the existing cinder database or to create a new database for 
taskflow?

Thanks & Regards,

Abhishek Kekane



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] PTL Candidacy

2014-03-28 Thread Anita Kuno
confirmed

On 03/28/2014 11:19 AM, James E. Blair wrote:
 Hi,
 
 I would like to announce my candidacy for the Infrastructure PTL.
 
 I have developed and operated the project infrastructure for several
 years and have been honored to serve as the PTL for the Icehouse cycle.
 
 I was instrumental not only in creating the project gating system and
 development process, but also in scaling it from three projects to 250.
 
 We face more scaling challenges of a different nature this cycle.
 Interest in our development tools and processes in their own right has
 increased dramatically.  This is great for us and the project as it
 provides new opportunities for contributions.  Helping the
 infrastructure projects evolve into more widely useful tools while
 maintaining the sharp focus on serving OpenStack's needs that made them
 compelling in the first place is a challenge I look forward to.
 
 The amazing growth of the third-party CI system is an area where we can
 make a lot of improvement.  During Icehouse, everyone was under deadline
 pressure so we tried to limit system or process modifications that would
 impact that.  During Juno, I would like to improve the experience both
 for third-party CI providers as well as the developers who use their
 results.
 
 I am thrilled to be a part of one of the most open free software project
 infrastructures, and I would very much like to continue to serve as its
 PTL.
 
 Thanks,
 
 Jim
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Ml2Plugin] Setting _original_network in NetworkContext:

2014-03-28 Thread Mathieu Rohon
Hi Nader,

I don't think this parameter could be used in this case. As Andre said,
the original_network is useful for update and delete commands. It
would lead to misunderstandings if we used this param in other cases,
and particularly in create commands.
I'm still thinking that the result of super(Ml2Plugin,
self).create_network(context, network) should have the network extension
information [1]. Did you talk with Salvatore about reverting his
change and using another workaround?

[1] https://answers.launchpad.net/neutron/+question/245773

Mathieu

On Thu, Mar 27, 2014 at 5:24 PM, Nader Lahouti nader.laho...@gmail.com wrote:
 Hi Andre,

 Thanks for your reply.

 There is no existing network. The scenario is for the first time that we
 create a network with an extension. Consider: a mechanism driver adds an
 attribute (through extensions) to the network resource. When a user creates a
 network, the attribute is set and is present in the 'network' parameter
 when calling create_network() in Ml2Plugin.
 But when create_network_pre/post_commit is called, the attribute won't be
 available to the mechanism driver, because the attribute is not included in
 the network object passed to the MD - as I mentioned in a previous email, the
 'result' does not have the new attribute.


 Thanks,
 Nader.








 On Wed, Mar 26, 2014 at 3:52 PM, Andre Pech ap...@aristanetworks.com
 wrote:

 Hi Nader,

 When I wrote this, the intention was that original_network only really
 makes sense during an update_network call (ie when there's an existing
 network that you are modifying). In a create_network call, the assumption is
 that no network exists yet, so there is no original network to set.

 Can you provide a bit more detail on the case where there's an existing
 network when create_network is called? Sorry, I didn't totally follow when
 this would happen.

 Thanks
 Andre


 On Tue, Mar 25, 2014 at 8:45 AM, Nader Lahouti nader.laho...@gmail.com
 wrote:

 Hi All,

 In the current Ml2Plugin code when 'create_network' is called, as shown
 below:



 def create_network(self, context, network):
     net_data = network['network']
     ...
     session = context.session
     with session.begin(subtransactions=True):
         self._ensure_default_security_group(context, tenant_id)
         result = super(Ml2Plugin, self).create_network(context, network)
         ...
         mech_context = driver_context.NetworkContext(self, context, result)
         self.mechanism_manager.create_network_precommit(mech_context)
         ...



 the original_network parameter is not set (the default is None) when
 instantiating NetworkContext, and as a result the mech_context has only the
 value of the network object returned from super(Ml2Plugin,
 self).create_network().

 This causes issues when a mechanism driver needs to use the original
 network parameters (given to create_network), especially when an extension
 is used for the network resources.

 (The 'result' only has the network attributes without extensions, which is
 used to set the '_network' in the NetworkContext object.)

 Even using extension function registration via
 db_base_plugin_v2.NeutronDbPluginV2.register_dict_extend_funcs(...) won't
 help, as the network object that is passed to the registered function does
 not include the extension parameters.


 Is there any reason that the original_network is not set when
 initializing the NetworkContext? Would that cause any issue to set it to
 'net_data' so that any mechanism driver can use original network parameters
 as they are available when create_network is called?
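 A sketch of the change being asked about - passing net_data as the original
 network so mechanism drivers can see the extension attributes - would
 presumably be a one-line adjustment to the snippet above (illustrative only,
 not a reviewed patch):

     mech_context = driver_context.NetworkContext(self, context, result,
                                                  original_network=net_data)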


 Appreciate your comments.


 Thanks,

 Nader.





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]

2014-03-28 Thread Adam Young

On 03/28/2014 12:15 PM, Hachem Chraiti wrote:

Hi,
Can I have some UML diagrams for the Ceilometer project?
Or even diagrams for Keystone too?

Sincerely,
Chraiti Hachem
  software engineer


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I'm working on some for Keystone, but won't have them ready for a 
while.  Some older ones are here:


http://adam.younglogic.com/presentations/KeystoneFolsom/

But they are pretty thin.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone]

2014-03-28 Thread Hachem Chraiti
Hi,
Can I have some UML diagrams for the Keystone project?
For objects like (User, Project, Domain, Resource, Instance, ...).

Sincerely,
Chraiti Hachem
  software engineer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Last day for ATC free summit registration !

2014-03-28 Thread Thierry Carrez
Reminder: This is the *last* day for ATCs to use their Atlanta summit
registration discount:

-------- Original message --------
Sujet:  [Openstack] ATC Reminder: Summit Registration
Date :  Fri, 28 Mar 2014 08:43:02 -0500
De :Claire Massey cla...@openstack.org

Hi everyone,

Just a quick reminder that registration prices for the May 2014
OpenStack Summit in Atlanta will increase TODAY, March 28 at 11:55pm
CST.  This is also the deadline for ATCs to register for the Summit for
free.

We already provided all active technical contributors (ATCs) who
contributed to the Havana release or Icehouse release (prior to March 7,
2014) with a USD $600-off discount code to register for a Full Access
Pass to the Summit - this means that *_all ATCs can register for the
Summit for FREE, but only if you use the code to register by 11:55pm CST
TODAY_*.  If you use the ATC code to register after March 28 then a fee
will be charged.

When you register on EventBrite, you will need to enter your code before
you select the Full Access level pass. It's easy to overlook, so please
reference the following
illustration:
https://www.dropbox.com/s/durn7bgi3jatjeq/HowToUseCode_AtlantaSummit.png

REGISTER HERE - https://openstacksummitmay2014.eventbrite.co.uk
Please email eve...@openstack.org with any Summit related questions.

Cheers,
Claire

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dependency freeze exception for happybase (I would like version 0.8)

2014-03-28 Thread Thierry Carrez
Thierry Carrez wrote:
 Julien Danjou wrote:
 On Thu, Mar 27 2014, Thomas Goirand wrote:

  -happybase>=0.4,<=0.6
  +happybase>=0.8

 Good for me, and Ceilometer is the only one using happybase as far as I
 know so that shouldn't be a problem.
 
  OK so I would be fine with happybase>=0.4,!=0.6,!=0.7 as it allows 0.8
 to be run without adversely impacting downstreams who don't have it
 available.

Actually that should be happybase>=0.4,!=0.7 since 0.6 was allowed before.

-- 
Thierry Carrez (ttx)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dependency freeze exception for happybase (I would like version 0.8)

2014-03-28 Thread Sean Dague
On 03/28/2014 12:50 PM, Thierry Carrez wrote:
 Thierry Carrez wrote:
 Julien Danjou wrote:
 On Thu, Mar 27 2014, Thomas Goirand wrote:

  -happybase>=0.4,<=0.6
  +happybase>=0.8

 Good for me, and Ceilometer is the only one using happybase as far as I
 know so that shouldn't be a problem.

  OK so I would be fine with happybase>=0.4,!=0.6,!=0.7 as it allows 0.8
 to be run without adversely impacting downstreams who don't have it
 available.
 
  Actually that should be happybase>=0.4,!=0.7 since 0.6 was allowed before.

The review has comments about 0.7 being the version in Debian and Ubuntu
right now https://review.openstack.org/#/c/82438/.

Which is problematic, as that's a version that's specifically being
called out by the ceilometer team as not usable. Do we know why, and if
it can be worked around?

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dealing with changes of plans, rejections and other annoyances

2014-03-28 Thread Stefano Maffulli
On 03/27/2014 06:25 PM, Anita Kuno wrote:
 Back to my point, when newcomers come into an opensource project with no
 understanding of what opensource means or how to communicate in an
 opensource project, that creates additional learning demand that might
 not have been in evidence previously.

Indeed, your words resonate a lot with what I'm sensing is happening
within OpenStack.

 I wonder if in addition to large numbers of new contributors, there is
 an expectation from new contributors that was not in evidence
 previously. I also wonder if the new contributors are coming to
 openstack with a different understanding of what it means to contribute
 to openstack, compared to new contributors a year ago.

I wonder that too. I've started a journey to better understand newcomer
dynamics and motivations, and to improve their journey from the
outside to the inside circle.

I also want to understand better what points of friction we have
already, inside the existing community. How do we identify, deal with
and resolve conflicts inside such a circle?

 I am glad that the newcomers training includes how to review a patch.
 I wonder if there should be any part that includes how to
 help/support/teach others in the training. If we teach supporting the
 growth of others as an expectation, then the developers can scale
 better, at least that has been my experience.

This is a good suggestion. My expectation is that training people in the
values of collaboration and the processes and tools early on will make them
good citizens, which in my mind also means being ready to help others.

The problem of how to scale this Upstream Training program is already
present, not sure yet what the solution will be. Delivering a successful
session in and after Atlanta is still my #1 priority at the moment.

Thanks,
Stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][qa][all] Home of rendered specs

2014-03-28 Thread Anne Gentle
On Thu, Mar 27, 2014 at 6:25 PM, Joe Gordon joe.gord...@gmail.com wrote:

 Hi All,

 Now that nova and qa are beginning to use specs repos [0][1]. Instead of
 being forced to read raw RST or relying on github [3],  we want a domain
 where we can publish the fully rendered sphinxdocs based specs (rendered
 with oslosphinx of course). So how about:

   specs.openstack.org/$project

 specs instead of docs because docs.openstack.org should only contain what
 is actually implemented so keeping specs in another subdomain is an attempt
 to avoid confusion as we don't expect every approved blueprint to get
 implemented.



Thanks for this, Joe and all!

Anne


 Best,
 Joe


 [0] http://git.openstack.org/cgit/openstack/nova-specs/
 [1] http://git.openstack.org/cgit/openstack/qa-specs/
 [3] https://github.com/openstack/nova-specs/blob/master/specs/template.rst


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Not running for PTL

2014-03-28 Thread Jay Pipes
On Fri, 2014-03-28 at 09:57 -0400, Russell Bryant wrote:
 Now that PTL nominations are open for the Juno cycle, it seems
 appropriate that I should make my intentions clear.  I do not plan to
 run for PTL this cycle.  Nova PTL is truly a full time job.  I'm
 planning to take a month off after the Icehouse release for personal
 reasons so I feel it's best for the project to let someone else take
 over this time.

A well deserved bit of time off.

 Nova has a strong group of leaders [1] that work together to make the
 project work.  The nova-core team [2] works hard reviewing code to merge
 thousands of patches a year without letting the code fall apart.  A
 subset of nova-core, the nova-drivers [3] team, has gone above and
 beyond during the Icehouse cycle to help review blueprints to set
 direction for the project.  Additionally, we have others dedicated to
 organizing sub-teams [4]. I'm incredibly thankful for the dedication and
 leadership of all of these people.
 
 Thank you all for the opportunity to serve as the Nova PTL for Havana
 and Icehouse.  It is truly an honor to work with all of the people that
 make OpenStack happen.

Thank *you*, Russell, for the effort you've put into the PTL job. It is
truly a full time job, and you've displayed patience and professionalism
throughout your time as PTL.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job - Last 4 days failures

2014-03-28 Thread Matt Riedemann



On 3/27/2014 8:00 AM, Salvatore Orlando wrote:


On 26 March 2014 19:19, James E. Blair jebl...@openstack.org wrote:

Salvatore Orlando sorla...@nicira.com writes:

  On another note, we noticed that the duplicated jobs currently
executed for
  redundancy in neutron actually seem to point all to the same
build id.
  I'm not sure then if we're actually executing each job twice or just
  duplicating lines in the jenkins report.

Thanks for catching that, and I'm sorry that didn't work right.  Zuul is
in fact running the jobs twice, but it is only looking at one of them
when sending reports and (more importantly) deciding whether the change
has succeeded or failed.  Fixing this is possible, of course, but turns
out to be a rather complicated change.  Since we don't make heavy use of
this feature, I lean toward simply instantiating multiple instances of
identically configured jobs and invoking them (eg neutron-pg-1,
neutron-pg-2).

Matthew Treinish has already worked up a patch to do that, and I've
written a patch to revert the incomplete feature from Zuul.


That makes sense to me. I think it is just a matter of how the results
are reported to gerrit, since from what I gather in logstash the jobs are
executed twice for each new patchset or recheck.


For the status of the full job, I gave a look at the numbers reported by
Rossella.
All the bugs are already known; some of them are not even bugs; others
have recently been fixed (given the time span of Rossella's analysis, and
the fact that it also covers non-rebased patches, this kind of false
positive is possible).

Of all full job failures, 44% should be discarded.
Bug 1291611 (12%) is definitely not a neutron bug... hopefully.
Bug 1281969 (12%) is really too generic.
It bears the hallmark of bug 1283522, and therefore the high number might
be due to the fact that trunk was plagued by this bug until a few days
before the analysis.
However, it's worth noting that there is also another instance of lock
timeout which has caused 11 failures in full job in the past week.
A new bug has been filed for this issue:
https://bugs.launchpad.net/neutron/+bug/1298355
Bug 1294603 was related to a test now skipped. It is still being debated
whether the problem lies in test design, neutron LBaaS or neutron L3.

The following bugs seem not to be neutron bugs:
1290642, 1291920, 1252971, 1257885

Bug 1292242 appears to have been fixed while the analysis was going on
Bug 1277439 instead is already known to affects neutron jobs occasionally.

The actual state of the job is perhaps better than what the raw numbers
say. I would keep monitoring it, and then make it voting after the
Icehouse release is cut, so that we'll be able to deal with possible
higher failure rate in the quiet period of the release cycle.



-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I reported this bug [1] yesterday.  This was hit in our internal Tempest 
runs on RHEL 6.5 with x86_64 and the nova libvirt driver with the 
neutron openvswitch ML2 driver.  We're running without tenant isolation 
on python 2.6 (no testr yet) so the tests are in serial.  We're running 
basically the full API/CLI/Scenarios tests though, no filtering on the 
smoke tag.


Out of 1,971 tests run, we had 3 failures where a nova instance failed
to spawn because networking callback events failed, i.e. neutron sends a
server event request to nova with a bad URL, so nova API rejects it and
the networking request in neutron server then fails.  As linked in the
bug report I'm seeing the same neutron server log error showing up in 
logstash for community jobs but it's not 100% failure.  I haven't seen 
the n-api log error show up in logstash though.


Just bringing this to people's attention in case anyone else sees it.

[1] https://bugs.launchpad.net/nova/+bug/1298640

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dependency freeze exception for happybase (I would like version 0.8)

2014-03-28 Thread Thierry Carrez
Julien Danjou wrote:
 On Thu, Mar 27 2014, Thomas Goirand wrote:
 
  -happybase>=0.4,<=0.6
  +happybase>=0.8
 
 Good for me, and Ceilometer is the only one using happybase as far as I
 know so that shouldn't be a problem.

OK so I would be fine with happybase>=0.4,!=0.6,!=0.7 as it allows 0.8
to be run without adversely impacting downstreams who don't have it
available.

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Regarding on iptables in openstack

2014-03-28 Thread Solly Ross
Hi Shiva,
This list is for development discussion and questions only.  Please use the 
Operator List 
(http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators)
or the general list for questions about deploying or operating OpenStack.

Best Regards,
Solly Ross

- Original Message -
From: shiva m anjane...@gmail.com
To: openstack-dev@lists.openstack.org
Sent: Friday, March 28, 2014 7:50:47 AM
Subject: [openstack-dev] Regarding on iptables in openstack

Hi, 

I installed devstack-havana on ubuntu-13.10. I see iptables-save with all the
iptables rules. Can anyone please help me with how to add a new rule or edit
the iptables-save output on OpenStack?

Thanks 
Shiva 
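
For reference, a generic illustration of the kind of rule manipulation being
asked about (plain Linux administration, not OpenStack-specific; the rule
itself is an arbitrary example and must run as root):

    # Append a rule only if an identical one is not already present;
    # "iptables -C" checks for an exact match of the given rule.
    import subprocess

    RULE = ["INPUT", "-p", "tcp", "--dport", "8080", "-j", "ACCEPT"]

    if subprocess.call(["iptables", "-C"] + RULE) != 0:
        subprocess.check_call(["iptables", "-A"] + RULE)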

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PTL Candidacy

2014-03-28 Thread Anita Kuno
confirmed

On 03/28/2014 10:07 AM, Dan Smith wrote:
 Hi all,
 
 I would like to run for the OpenStack Compute (Nova) PTL position.
 
 Qualifications
 --------------
 I have been working almost exclusively on Nova since mid-2012, and have
 been on the nova-core team since late 2012. I am also a member of
 nova-drivers, where I help to target and prioritize blueprints to help
 shape and focus the direction of the project. I spend a lot of time
 reviewing code from all over the Nova tree, and am regularly in the top
 five reviewers:
 
   http://russellbryant.net/openstack-stats/nova-reviewers-90.txt
 
 and I have sustained that level of activity consistently over time:
 
   http://russellbryant.net/openstack-stats/nova-reviewers-365.txt
 
 My focus since I started has been on improving Nova's live upgrade
 capabilities, which started with significant contributions to completion
 of the no-db-compute blueprint, creation of the conductor service, and
 most recently the concept and implementation for the NovaObject work. I
 have been in or at the top of the list of contributors by commit count
 for a few cycles now:
 
 
 https://github.com/openstack/nova/graphs/contributors?from=2012-07-25&to=2014-03-22&type=c
 
   http://www.ohloh.net/p/novacc/contributors
 
 
 http://www.stackalytics.com/?release=icehouse&metric=commits&project_type=openstack&module=&company=&user_id=danms
 
 Icehouse Accomplishments
 ------------------------
 This past cycle, I worked to get us to the point at which we could
 successfully perform live upgrades for a subset of scenarios from Havana
 to the Icehouse release. With the help of many folks, this is now a
 reality, with an upstream gating test to prevent regressions going
 forward. This is largely possible due to the no-db-compute and
 NovaObject efforts in the past, which provide us an architecture of
 version compatibility.
 
 Late in the Icehouse cycle, I also worked with developers from Neutron
 to design and implement a system for coordination between services. This
 allows us to better integrate Nova's network cache and instance
 modification tasks with Neutron's processes for increased reliability
 and performance.
 
 Looking forward to Juno
 
 Clearly, as Nova continues to grow, the difficult task of scaling the
 leadership is increasingly important. In the Icehouse cycle, we gained
 some momentum around this, specifically with involving the entire
 nova-drivers team in the task of targeting and prioritizing blueprints.
 The creation of the nova-specs repo will help organize the task of
 proposing new work, but will add some additional steps to the process. I
 plan to continue to lean on the drivers team as a whole for keeping up
 with blueprint-related tasks. Further, we gained blueprint and bug czars
 in John Garbutt and Tracy Jones, both of which have done an excellent
 job of wrangling the paperwork involved with tracking these items. I
 think that delegation is extremely important, and something we should
 attempt to replicate for other topics.
 
 The most tactile issue around scaling the project is, of course, the
 review bandwidth and latency. Russell did a fantastic job of keeping
 fresh blood on the nova-core team, which both encourages existing
 members to exhibit a high level of activity, as well as encourages other
 contributors to aim for the level of activity and review quality needed
 to be on the core team. I plan to continue to look for ways to increase
 communication between the core team members, as well as keep it packed
 with people capable of devoting time to the important task of reviewing
 code submissions.
 
 Another excellent win for the Nova project in Icehouse was the
 requirement for third-party CI testing of our virtualization drivers.
 Not only did this significantly improve our quality and
 regression-spotting abilities on virt drivers, but it also spurred other
 projects to require the same from their contributing vendors. Going
 forward, I think we need to increase focus on the success rate for each
 of these systems which will help us trust them when they report failure.
 Additionally, I think it is important for us to define a common and
 minimum set of functions that a virt driver must support. Currently, our
 hypervisor support matrix shows a widely-varying amount of support for
 some critical things that a user would expect from a driver integrated
 in our tree.
 
 Thanks for your consideration!
 
 --Dan
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] QA Program PTL Candidacy

2014-03-28 Thread Matthew Treinish
I would like to run for the QA Program PTL position.

I'm currently a core reviewer on the Tempest and elastic-recheck projects and a
member of the stable maint team. I have been working on Tempest and on improving
overall OpenStack quality since I started contributing to the project late in
the Folsom cycle. 

One of the coolest things I've seen since I first started working on OpenStack
is the incredible growth in the tempest community. For example, back during
Folsom Tempest was a relatively small project that had just recently begun
gating with very few contributors. Now it is one of the top 5 most active
OpenStack projects, which I feel is a testament to the OpenStack community's
dedication to maintaining the quality of the project.

I have worked on some key areas since I started working on OpenStack. The
biggest of these is probably enabling Tempest to run in parallel in the gate
using testr. This not only improved run time but also helped us shake loose
additional bugs in all the projects by making parallel requests. As a result
of all the new bugs it shook loose in the gate, I helped Joe Gordon get
elastic-recheck off the ground so that we could track all of the bugs and
prioritize things.
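
For readers unfamiliar with the mechanics, a minimal sketch of the kind of
concurrency testr builds on, using testtools' ConcurrentTestSuite (testr
itself forks worker processes speaking subunit; the test names here are
invented):

    import unittest

    from testtools import ConcurrentTestSuite
    from testtools.testsuite import iterate_tests

    class ExampleTest(unittest.TestCase):
        def test_one(self):
            self.assertEqual(2, 1 + 1)

        def test_two(self):
            self.assertTrue(True)

    def split(suite):
        # one single-test suite per worker; testr partitions by timing data
        return [unittest.TestSuite([t]) for t in iterate_tests(suite)]

    suite = unittest.TestLoader().loadTestsFromTestCase(ExampleTest)
    result = unittest.TestResult()
    ConcurrentTestSuite(suite, split).run(result)
    print(result.testsRun, "tests,", len(result.errors), "errors")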

This past cycle I have mostly been concentrating on adding unit testing for
Tempest. Additionally, I've been working to improve the user experience of
Tempest by cleaning up its UX (config file, run scripts, etc.) and building
additional tooling to make configuring and running Tempest simpler.

One of my top priorities for the Juno cycle will be bringing the level of gating
for all the integrated projects up to the same level and trying to have clear
definitions of what testing should be expected for the projects. Part of this
will also be working on cleaning up and defining exactly what the test matrix
should be in the gate to both balance resources and testing coverage.
Additionally, I think another large push for the Juno cycle should be on the
accessibility of the QA tooling, whether it be tempest, elastic-recheck,
grenade, or any of the projects. During Icehouse we saw a large number of new
contributors to QA projects. However what I've seen is that new contributors
often don't have a clear place to reference exactly how all the projects work.
So I think working on making the documentation and tooling around the QA
projects should also be a priority for Juno.

An opinion that I share with Sean from his PTL candidacy email:

I believe the QA PTL role is mostly one of working across projects, as
what's most needed to ensure OpenStack quality at any point in time
may be: more tests, or it may be adjusting the infrastructure to
address a hole, or it may be debugging an issue with developers in any
one of our growing number of integrated projects, or clients, or libraries.


My commit history:
https://review.openstack.org/#/q/owner:treinish,n,z
My review history:
https://review.openstack.org/#/q/reviewer:treinish,n,z

Or for those fond of pie charts:
http://www.stackalytics.com/?release=all&metric=commits&project_type=openstack&module=&company=&user_id=treinish

Thanks,

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Domains prototype in Nova

2014-03-28 Thread Henrique Truta
Hi all!

I've been working on a prototype of Domains in Nova. In that prototype the
user is now able to do the following API calls with a domain scoped token:

GET v2/domains/{domain_id}/servers: Lists servers whose projects belong to
the given domain
GET v2/domains/{domain_id}/servers/{server_id}: Gets details from the given
server
DELETE v2/domains/{domain_id}/servers/{server_id}: Deletes the given server
POST v2/domains/{domain_id}/servers/{server_id}/action: Reboots the given
server

Could you help me test these functionalities and review the code?

The code can be found in my github repo (
https://github.com/henriquetruta/nova) on the domains-prototype branch.
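
For anyone trying it, a hedged sketch of exercising the new endpoints with a
domain-scoped token (the endpoint URL, IDs and token below are placeholders,
and error handling is omitted):

    import json
    import requests

    NOVA = "http://controller:8774/v2"
    HEADERS = {"X-Auth-Token": "a-domain-scoped-token",
               "Content-Type": "application/json"}
    domain, server = "mydomain", "a1b2c3"

    # List servers whose projects belong to the domain
    r = requests.get("%s/domains/%s/servers" % (NOVA, domain),
                     headers=HEADERS)
    print(r.status_code, r.text)

    # Reboot one of them through the action endpoint
    body = json.dumps({"reboot": {"type": "SOFT"}})
    r = requests.post("%s/domains/%s/servers/%s/action"
                      % (NOVA, domain, server),
                      headers=HEADERS, data=body)
    print(r.status_code)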

Thanks!

-- 
--
Ítalo Henrique Costa Truta
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer]

2014-03-28 Thread Hachem Chraiti
Hi,
Can I have some UML diagrams for the Ceilometer project?
Or even diagrams for Keystone too?

Sincerely,
Chraiti Hachem
  software engineer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Meeting minutes

2014-03-28 Thread Dina Belova
Hello stackers!

Thanks for taking part in our meeting - meeting minutes are:

Minutes:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-03-28-15.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-03-28-15.00.txt
Log:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-03-28-15.00.log.html

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [horizon] [nova]

2014-03-28 Thread Ryan Hallisey
Currently, when you delete a tenant that has 1 or more running instances, the
tenant will be deleted without warning and the running instance(s) will be
left in place. Should there be a warning, and should the admin be allowed to
do this at all?  Or should it be left as is?

Thanks,
-Ryan Hallisey
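
For illustration, one shape such a guard could take, as a hedged sketch using
the era's python-novaclient and keystoneclient (credentials and URLs are
placeholders, and the client constructors may differ between versions):

    from keystoneclient.v2_0 import client as ks_client
    from novaclient.v1_1 import client as nova_client

    AUTH_URL = "http://controller:5000/v2.0/"
    nova = nova_client.Client("admin", "secret", "admin", AUTH_URL)
    keystone = ks_client.Client(username="admin", password="secret",
                                tenant_name="admin", auth_url=AUTH_URL)

    def delete_tenant(tenant_id):
        # refuse (or warn) while the tenant still owns instances
        servers = nova.servers.list(search_opts={"all_tenants": 1,
                                                 "tenant_id": tenant_id})
        if servers:
            raise RuntimeError("tenant %s still owns %d instance(s)"
                               % (tenant_id, len(servers)))
        keystone.tenants.delete(tenant_id)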


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [horizon] [nova]

2014-03-28 Thread Dolph Mathews
FWIW, that issue is tracked here:
https://bugs.launchpad.net/keystone/+bug/967832

On Fri, Mar 28, 2014 at 1:02 PM, Ryan Hallisey rhall...@redhat.com wrote:

 Currently, when you delete a tenant that has 1 or more running instances,
 the tenant will be deleted without warning and the running instance(s) will
 be left in place. Should there be a warning, and should the admin be allowed
 to do this at all?  Or should it be left as is?

 Thanks,
 -Ryan Hallisey


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder]Persistance layer for cinder + taskflow

2014-03-28 Thread Duncan Thomas
On 28 March 2014 14:38, Joshua Harlow harlo...@yahoo-inc.com wrote:
 An idea that might be good to do is to start off using SQLite as the
 taskflow persistence backend.

 Get that working using SQLite files (which should be fine as a persistence
 method for most usages) and then after this works there can be discussion
 around moving those tables to something like MySQL and the
 benefits/drawbacks of doing that.

 What do you think?

 Start small then grow seems to be a good approach.

I'd say add tables to the existing database first, as it adds the smallest
number of new moving parts, then look at a new database if and only if
there seems to be evidence that one is needed. As a systems operator,
it is hard enough managing one db, let alone N local dbs...
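
For concreteness, a hedged sketch of Joshua's suggestion: pointing TaskFlow's
persistence at a SQLite file. This follows the pattern in taskflow's
examples; the exact API may vary by version, and the path is a placeholder.

    import contextlib

    from taskflow.persistence import backends

    backend = backends.fetch({"connection": "sqlite:////tmp/taskflow.db"})
    with contextlib.closing(backend.get_connection()) as conn:
        conn.upgrade()  # create the persistence tables if missing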

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PTL Candidacy

2014-03-28 Thread Russell Bryant
On 03/28/2014 10:07 AM, Dan Smith wrote:
 Hi all,
 
 I would like to run for the OpenStack Compute (Nova) PTL position.

Awesome.  :-)

Dan has been doing an amazing job in many areas of Nova.  He has been a
prolific contributor and code reviewer.  He has emerged as one of the
top technical leaders of the project as a key participant in design
discussions.  He has also been an active member of the nova-drivers team
that helps review blueprints to set direction for the project.  Lastly,
Dan has taken an interest and participated in discussions around project
policy and process.

I feel confident that Dan would be a fantastic PTL for Nova.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Docs] PTL Candidacy

2014-03-28 Thread Anne Gentle
Hi all,
I'm writing to announce my candidacy for the Documentation Program
Technical Lead (PTL).

I have been working on OpenStack upstream documentation since September
2010, and am currently serving in this role. I recently summarized the many
community and collaboration methods we use to create and maintain
documentation across the core OpenStack programs. [1] I think these open
source, open documentation methods are what keep OpenStack docs vibrant and
alive.

I can't take credit for all the work that goes on in our community, but
please let me highlight the results the coordinated teams and individuals
have produced:

- The Documentation team has grown and matured in the past release, and we
released simultaneously with the code for the first time with the Havana
release. We are on track to do that again for Icehouse.

- We have a translation toolchain working this year and I'm constantly
amazed at the outpouring of dedication from our translation community.

- Our coordinated documentation tool chains that enable automation and
continuous publication are working seamlessly with the various projects
across OpenStack.

- We're releasing an O'Reilly edition of the OpenStack Operations Guide.

- The API Reference at http://api.openstack.org/api-ref.html got a complete
refresh to provide a responsive web design and to streamline the underlying
CSS and JS code.

- We have been incubating the open source training manuals team within the
OpenStack Documentation program. They've produced an Associate Training
Guide, with outlines and schedules for an Operator Training Guide, a
Developer Training Guide, and an Architect Training Guide.

- While our focus has been on both Guides for operators and installers as
well as API reference documentation, I am interested in working with the
app developer community to build documentation collaboratively that fits
their needs. Everett Toews recently updated the API docs landing page to
take this first step.

I hope you can support my continuing efforts for the wide scope and breadth
of serving the documentation needs in this community.

Thanks,
Anne


[1]
http://justwriteclick.com/2014/03/21/how-to-build-openstack-docs-and-contributors-through-community/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] QA Program PTL Candidacy

2014-03-28 Thread Anita Kuno
confirmed

On 03/28/2014 01:29 PM, Matthew Treinish wrote:
 I would like to run for the QA Program PTL position.
 
 I'm currently a core reviewer on the Tempest and elastic-recheck projects
 and a member of the stable maint team. I have been working on Tempest and
 on improving overall OpenStack quality since I started contributing to the
 project late in
 the Folsom cycle. 
 
 One of the coolest things I've seen since I first started working on OpenStack
 is the incredible growth in the tempest community. For example, back during
 Folsom Tempest was a relatively small project that had just recently begun
 gating with very few contributors. Now it is one of the top 5 most active
 OpenStack projects, which I feel is a testament to the OpenStack community's
 dedication to maintaining the quality of the project.
 
 I have worked on some key areas since I started working on OpenStack. The
 biggest of these is probably enabling Tempest to run in parallel in the gate
 using testr. This not only improved run time but also helped us shake loose
 additional bugs in all the projects by making parallel requests. As a result
 of all the new bugs it shook loose in the gate, I helped Joe Gordon get
 elastic-recheck off the ground so that we could track all of the bugs and
 prioritize things.
 
 This past cycle I have mostly been concentrating on adding unit testing for
 Tempest. Additionally, I've been working to improve the user experience of
 Tempest by cleaning up its UX (config file, run scripts, etc.) and building
 additional tooling to make configuring and running Tempest simpler.
 
 One of my top priorities for the Juno cycle will be bringing the level of
 gating for all the integrated projects up to the same level and trying to
 have clear
 definitions of what testing should be expected for the projects. Part of this
 will also be working on cleaning up and defining exactly what the test matrix
 should be in the gate to both balance resources and testing coverage.
 Additionally, I think another large push for the Juno cycle should be on the
 accessibility of the QA tooling, whether it be tempest, elastic-recheck,
 grenade, or any of the projects. During Icehouse we saw a large number of new
 contributors to QA projects. However what I've seen is that new contributors
 often don't have a clear place to reference exactly how all the projects work.
 So I think working on making the documentation and tooling around the QA
 projects should also be a priority for Juno.
 
 An opinion that I share with Sean from his PTL candidacy email:
 
 I believe the QA PTL role is mostly one of working across projects, as
 what's most needed to ensure OpenStack quality at any point in time
 may be: more tests, or it may be adjusting the infrastructure to
 address a hole, or it may be debugging an issue with developers in any
 one of our growing number of integrated projects, or clients, or libraries.
 
 
 My commit history:
 https://review.openstack.org/#/q/owner:treinish,n,z
 My review history:
 https://review.openstack.org/#/q/reviewer:treinish,n,z
 
 Or for those fond of pie charts:
 http://www.stackalytics.com/?release=all&metric=commits&project_type=openstack&module=&company=&user_id=treinish
 
 Thanks,
 
 -Matt Treinish
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Docs] PTL Candidacy

2014-03-28 Thread Anita Kuno
confirmed

On 03/28/2014 02:29 PM, Anne Gentle wrote:
 Hi all,
 I'm writing to announce my candidacy for the Documentation Program
 Technical Lead (PTL).
 
 I have been working on OpenStack upstream documentation since September
 2010, and am currently serving in this role. I recently summarized the many
 community and collaboration methods we use to create and maintain
 documentation across the core OpenStack programs. [1] I think these open
 source, open documentation methods are what keep OpenStack docs vibrant and
 alive.
 
 I can't take credit for all the work that goes on in our community, but
 please let me highlight the results the coordinated teams and individuals
 have produced:
 
 - The Documentation team has grown and matured in the past release, and we
 released simultaneously with the code for the first time with the Havana
 release. We are on track to do that again for Icehouse.
 
 - We have a translation toolchain working this year and I'm constantly
 amazed at the outpouring of dedication from our translation community.
 
 - Our coordinated documentation tool chains that enable automation and
 continuous publication are working seamlessly with the various projects
 across OpenStack.
 
 - We're releasing an O'Reilly edition of the OpenStack Operations Guide.
 
 - The API Reference at http://api.openstack.org/api-ref.html got a complete
 refresh to provide a responsive web design and to streamline the underlying
 CSS and JS code.
 
 - We have been incubating the open source training manuals team within the
 OpenStack Documentation program. They've produced an Associate Training
 Guide, with outlines and schedules for an Operator Training Guide, a
 Developer Training Guide, and an Architect Training Guide.
 
 - While our focus has been on both Guides for operators and installers as
 well as API reference documentation, I am interested in working with the
 app developer community to build documentation collaboratively that fits
 their needs. Everett Toews recently updated the API docs landing page to
 take this first step.
 
 I hope you can support my continuing efforts for the wide scope and breadth
 of serving the documentation needs in this community.
 
 Thanks,
 Anne
 
 
 [1]
 http://justwriteclick.com/2014/03/21/how-to-build-openstack-docs-and-contributors-through-community/
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [Openstack] ATC Reminder: Summit Registration

2014-03-28 Thread Mark Collier
FYI, you can avoid the #1 problem ATCs have when redeeming codes if you
look at this first:

https://www.dropbox.com/s/durn7bgi3jatjeq/HowToUseCode_AtlantaSummit.png

Don't be that stacker who forgets to put the promo code in the right box
at the beginning :)

Mark
 -- Forwarded message --
From: Claire Massey cla...@openstack.org
Date: Mar 28, 2014 8:51 AM
Subject: [Openstack] ATC Reminder: Summit Registration
To: commun...@lists.openstack.org, openst...@lists.openstack.org
Cc: Shari Mahrdt sh...@openstack.org

Hi everyone,

Just a quick reminder that registration prices for the May 2014 OpenStack
Summit in Atlanta will increase TODAY, March 28 at 11:55pm CST.  This is
also the deadline for ATCs to register for the Summit for free.

We already provided all active technical contributors (ATCs) who
contributed to the Havana release or Icehouse release (prior to March 7,
2014) with a USD $600-off discount code to register for a Full Access Pass
to the Summit - this means that *all ATCs can register for the Summit for
FREE, but only if you use the code to register by 11:55pm CST TODAY*.  If
you use the ATC code to register after March 28 then a fee will be charged.

When you register on EventBrite, you will need to enter your code before
you select the Full Access level pass. It's easy to overlook, so please
reference the following illustration:
https://www.dropbox.com/s/durn7bgi3jatjeq/HowToUseCode_AtlantaSummit.png


*REGISTER HERE - https://openstacksummitmay2014.eventbrite.co.uk*
Please email eve...@openstack.org with any Summit related questions.

Cheers,
Claire



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-28 Thread Jay Pipes
On Fri, 2014-03-28 at 13:56 +0100, Belmiro Moreira wrote:
 +1 to Phil's comments.
 I agree that VMs should spread between different default AZs if the user
 doesn't define one at boot time.
 There is a blueprint for that feature that unfortunately didn't make
 it for icehouse.
 https://blueprints.launchpad.net/nova/+spec/schedule-set-availability-zones

That's not at all what I've been talking about.

I'm talking about the concept of EC2 availability zones being improperly
applied in Nova and leading to wrong expectations of the user. Thus, I
am proposing to drop the concept of EC2 AZs in the next major API
version and instead have a general compute node container that may or
may not contain other generic containers of compute nodes.

The HostAggregate concept in Nova was a hack to begin with (it was
originally just XenServer-specific), and then was further hacked to
allow tagging a host aggregate with an AZ -- ostensibly to expose
something to the end user that could be used to direct scheduler hints.

I'm proposing getting rid of the host aggregate hack (or maybe evolving
it?) as well as the availability zone concept and replacing them with a
more flexible generic container object that may be hierarchical in
nature.
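
As a rough illustration of the shape of such a container (a toy sketch, not
Nova code; every name here is invented), with failure-domain as an explicit,
advertised trait rather than an implied one:

    class ComputeContainer(object):
        def __init__(self, name, failure_domain=False):
            self.name = name
            self.failure_domain = failure_domain  # explicit trait
            self.children = []
            self.hosts = []

        def add(self, child):
            self.children.append(child)
            return child

        def all_hosts(self):
            # walk the hierarchy and collect every host underneath
            hosts = list(self.hosts)
            for child in self.children:
                hosts.extend(child.all_hosts())
            return hosts

    region = ComputeContainer("region-1")
    feed_a = region.add(ComputeContainer("power-feed-a", failure_domain=True))
    rack1 = feed_a.add(ComputeContainer("rack-1"))
    rack1.hosts.append("compute-001")
    print(region.all_hosts())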

Best,
-jay





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] PTL Candidacy

2014-03-28 Thread Michael Still
Hi.

I would like to run for the OpenStack Compute PTL position as well.

I have been an active nova developer since late 2011, and have been a
core reviewer for quite a while. I am currently serving on the
Technical Committee, where I have recently been spending my time
liaising with the board about how to define what software should be
able to use the OpenStack trade mark. I've also served on the
vulnerability management team, and as nova bug czar in the past.

I have extensive experience running Open Source community groups,
having served on the TC, been the Director for linux.conf.au 2013, as
well as serving on the boards of various community groups over the
years.

In Icehouse I hired a team of nine software engineers who are all
working 100% on OpenStack at Rackspace Australia, developed and
deployed the turbo hipster third party CI system along with Joshua
Hesketh, as well as writing nova code. I recognise that if I am
successful I will need to rearrange my work responsibilities, and my
management is supportive of that.

The future
----------

To be honest, I've thought for a while that the PTL role in OpenStack
is poorly named. Specifically, it's the T that bothers me. Sure, we
need strong technical direction for our programs, but putting it in
the title raises technical direction above the other aspects of the
job. Compute at the moment is in an interesting position -- we're
actually pretty good on technical direction and we're doing
interesting things. What we're not doing well on is the social aspects
of the PTL role.

When I first started hacking on nova I came from an operations
background where I hadn't written open source code in quite a while. I
feel like I'm reasonably smart, but nova was certainly the largest
python project I'd ever seen. I submitted my first patch, and it was
rejected -- as it should have been. However, Vishy then took the time
to sit down with me and chat about what needed to change, and how to
improve the patch. That's really why I'm still involved with
OpenStack, Vishy took an interest and was always happy to chat. I'm
told by others that they have had similar experiences.

I think that's what compute is lacking at the moment. For the last few
cycles we've focused on the technical, and now the social aspects are
our biggest problem. I think this is a pendulum, and perhaps in a
release or two we'll swing back to needing to re-emphasise on
technical aspects, but for now we're doing poorly on social things.
Some examples:

- we're not keeping up with code reviews because we're reviewing the
wrong things. We have a high volume of patches which are unlikely to
ever land, but we just reject them. So far in the Icehouse cycle we've
seen 2,334 patchsets proposed, of which we approved 1,233. Along the
way, we needed to review 11,747 revisions. We don't spend enough time
working with the proposers to improve the quality of their code so
that it will land. Specifically, whilst review comments in gerrit are
helpful, we need to identify up and coming contributors and help them
build a relationship with a mentor outside gerrit. We can reduce the
number of reviews we need to do by improving the quality of initial
proposals.

- we're not keeping up with bug triage or, worse, actually closing
bugs. I think part of this is that people want to land their features,
but part of it is also that closing bugs is super frustrating at the
moment. It can take hours (or days) to replicate and then diagnose a
bug. You propose a fix, and then it takes weeks to get reviewed. I'd
like to see us tweak the code review process to prioritise bug fixes
over new features for the Juno cycle. We should still land features,
but we should obsessively track review latency for bug fixes. Compute
fails if we're not producing reliable production grade code.

- I'd like to see us focus more on consensus building. We're a team
after all, and when we argue solely about the technical aspects of a
problem we ignore the fact that we're teaching the people involved a
behaviour that will continue on. Ultimately if we're not a welcoming
project that people want to code on, we'll run out of developers. I
personally want to be working on compute in five years, and I want the
compute of the future to be a vibrant, friendly, supportive place. We
get there by modelling the behaviour we want to see in the future.

So, some specific actions I think we should take:

- when we reject a review from a relatively new contributor, we should
try and pair them up with a more experienced developer to get some
coaching. That experienced dev should take point on code reviews for
the new person so that they receive low-latency feedback as they
learn. Once the experienced dev is ok with a review, nova-core can
pile on to actually get the code approved. This will reduce the
workload for nova-core (we're only reviewing things which are of a
known good standard), while improving the experience for new
contributors.

- we should obsessively 

Re: [openstack-dev] QA Program PTL Candidacy

2014-03-28 Thread Sean Dague
On 03/28/2014 01:29 PM, Matthew Treinish wrote:
 I would like to run for the QA Program PTL position.
 
 I'm currently a core reviewer on the Tempest and elastic-recheck projects
 and a member of the stable maint team. I have been working on Tempest and
 on improving overall OpenStack quality since I started contributing to the
 project late in
 the Folsom cycle. 
 
 One of the coolest things I've seen since I first started working on OpenStack
 is the incredible growth in the tempest community. For example, back during
 Folsom Tempest was a relatively small project that had just recently begun
 gating with very few contributors. Now it is one of the top 5 most active
 OpenStack projects, which I feel is a testament to the OpenStack community's
 dedication to maintaining the quality of the project.
 
 I have worked on some key areas since I started working on OpenStack. The
 biggest of these is probably enabling Tempest to run in parallel in the gate
 using testr. This not only improved run time but also helped us shake loose
 additional bugs in all the projects by making parallel requests. As a result
 of all the new bugs it shook loose in the gate, I helped Joe Gordon get
 elastic-recheck off the ground so that we could track all of the bugs and
 prioritize things.
 
 This past cycle I have mostly been concentrating on adding unit testing for
 Tempest. Additionally, I've been working to improve the user experience of
 Tempest by cleaning up its UX (config file, run scripts, etc.) and building
 additional tooling to make configuring and running Tempest simpler.
 
 One of my top priorities for the Juno cycle will be bringing the level of
 gating for all the integrated projects up to the same level and trying to
 have clear
 definitions of what testing should be expected for the projects. Part of this
 will also be working on cleaning up and defining exactly what the test matrix
 should be in the gate to both balance resources and testing coverage.
 Additionally, I think another large push for the Juno cycle should be on the
 accessibility of the QA tooling, whether it be tempest, elastic-recheck,
 grenade, or any of the projects. During Icehouse we saw a large number of new
 contributors to QA projects. However what I've seen is that new contributors
 often don't have a clear place to reference exactly how all the projects work.
 So I think working on making the documentation and tooling around the QA
 projects should also be a priority for Juno.
 
 An opinion that I share with Sean from his PTL candidacy email:
 
 I believe the QA PTL role is mostly one of working across projects, as
 what's most needed to ensure OpenStack quality at any point in time
 may be: more tests, or it may be adjusting the infrastructure to
 address a hole, or it may be debugging an issue with developers in any
 one of our growing number of integrated projects, or clients, or libraries.
 
 
 My commit history:
 https://review.openstack.org/#/q/owner:treinish,n,z
 My review history:
 https://review.openstack.org/#/q/reviewer:treinish,n,z
 
 Or for those fond of pie charts:
 http://www.stackalytics.com/?release=all&metric=commits&project_type=openstack&module=&company=&user_id=treinish
 
 Thanks,
 
 -Matt Treinish

I think Matt would be an excellent choice for QA PTL, and I'm happy that
he's put his hat in the ring.

I will not be running for QA PTL in this next cycle. I've been QA PTL
for the last 2 cycles, since we created the program, and I feel like we
are strongest as a community when we rotate leadership roles regularly.
I think now is a great time for new eyes on the QA program.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PTL Candidacy

2014-03-28 Thread Boris Pavlovic
+2 =)


On Fri, Mar 28, 2014 at 11:21 PM, Michael Still mi...@stillhq.com wrote:

 Hi.

 I would like to run for the OpenStack Compute PTL position as well.

 I have been an active nova developer since late 2011, and have been a
 core reviewer for quite a while. I am currently serving on the
 Technical Committee, where I have recently been spending my time
 liaising with the board about how to define what software should be
 able to use the OpenStack trade mark. I've also served on the
 vulnerability management team, and as nova bug czar in the past.

 I have extensive experience running Open Source community groups,
 having served on the TC, been the Director for linux.conf.au 2013, as
 well as serving on the boards of various community groups over the
 years.

 In Icehouse I hired a team of nine software engineers who are all
 working 100% on OpenStack at Rackspace Australia, developed and
 deployed the turbo hipster third party CI system along with Joshua
 Hesketh, as well as writing nova code. I recognise that if I am
 successful I will need to rearrange my work responsibilities, and my
 management is supportive of that.

 The future
 ----------

 To be honest, I've thought for a while that the PTL role in OpenStack
 is poorly named. Specifically, it's the T that bothers me. Sure, we
 need strong technical direction for our programs, but putting it in
 the title raises technical direction above the other aspects of the
 job. Compute at the moment is in an interesting position -- we're
 actually pretty good on technical direction and we're doing
 interesting things. What we're not doing well on is the social aspects
 of the PTL role.

 When I first started hacking on nova I came from an operations
 background where I hadn't written open source code in quite a while. I
 feel like I'm reasonably smart, but nova was certainly the largest
 python project I'd ever seen. I submitted my first patch, and it was
 rejected -- as it should have been. However, Vishy then took the time
 to sit down with me and chat about what needed to change, and how to
 improve the patch. That's really why I'm still involved with
 OpenStack, Vishy took an interest and was always happy to chat. I'm
 told by others that they have had similar experiences.

 I think that's what compute is lacking at the moment. For the last few
 cycles we've focused on the technical, and now the social aspects are
 our biggest problem. I think this is a pendulum, and perhaps in a
 release or two we'll swing back to needing to re-emphasise on
 technical aspects, but for now we're doing poorly on social things.
 Some examples:

 - we're not keeping up with code reviews because we're reviewing the
 wrong things. We have a high volume of patches which are unlikely to
 ever land, but we just reject them. So far in the Icehouse cycle we've
 seen 2,334 patchsets proposed, of which we approved 1,233. Along the
 way, we needed to review 11,747 revisions. We don't spend enough time
 working with the proposers to improve the quality of their code so
 that it will land. Specifically, whilst review comments in gerrit are
 helpful, we need to identify up and coming contributors and help them
 build a relationship with a mentor outside gerrit. We can reduce the
 number of reviews we need to do by improving the quality of initial
 proposals.

 - we're not keeping up with bug triage or, worse, actually closing
 bugs. I think part of this is that people want to land their features,
 but part of it is also that closing bugs is super frustrating at the
 moment. It can take hours (or days) to replicate and then diagnose a
 bug. You propose a fix, and then it takes weeks to get reviewed. I'd
 like to see us tweak the code review process to prioritise bug fixes
 over new features for the Juno cycle. We should still land features,
 but we should obsessively track review latency for bug fixes. Compute
 fails if we're not producing reliable production grade code.

 - I'd like to see us focus more on consensus building. We're a team
 after all, and when we argue solely about the technical aspects of a
 problem we ignore the fact that we're teaching the people involved a
 behaviour that will continue on. Ultimately if we're not a welcoming
 project that people want to code on, we'll run out of developers. I
 personally want to be working on compute in five years, and I want the
 compute of the future to be a vibrant, friendly, supportive place. We
 get there by modelling the behaviour we want to see in the future.

 So, some specific actions I think we should take:

 - when we reject a review from a relatively new contributor, we should
 try and pair them up with a more experienced developer to get some
 coaching. That experienced dev should take point on code reviews for
 the new person so that they receive low-latency feedback as they
 learn. Once the experienced dev is ok with a review, nova-core can
 pile on to actually get the code approved. This will reduce the
 

Re: [openstack-dev] [Nova] PTL Candidacy

2014-03-28 Thread Anita Kuno
confirmed

On 03/28/2014 03:21 PM, Michael Still wrote:
 Hi.
 
 I would like to run for the OpenStack Compute PTL position as well.
 
 I have been an active nova developer since late 2011, and have been a
 core reviewer for quite a while. I am currently serving on the
 Technical Committee, where I have recently been spending my time
 liaising with the board about how to define what software should be
 able to use the OpenStack trade mark. I've also served on the
 vulnerability management team, and as nova bug czar in the past.
 
 I have extensive experience running Open Source community groups,
 having served on the TC, been the Director for linux.conf.au 2013, as
 well as serving on the boards of various community groups over the
 years.
 
 In Icehouse I hired a team of nine software engineers who are all
 working 100% on OpenStack at Rackspace Australia, developed and
 deployed the turbo hipster third party CI system along with Joshua
 Hesketh, as well as writing nova code. I recognise that if I am
 successful I will need to rearrange my work responsibilities, and my
 management is supportive of that.
 
 The future
 ----------
 
 To be honest, I've thought for a while that the PTL role in OpenStack
 is poorly named. Specifically, it's the T that bothers me. Sure, we
 need strong technical direction for our programs, but putting it in
 the title raises technical direction above the other aspects of the
 job. Compute at the moment is in an interesting position -- we're
 actually pretty good on technical direction and we're doing
 interesting things. What we're not doing well on is the social aspects
 of the PTL role.
 
 When I first started hacking on nova I came from an operations
 background where I hadn't written open source code in quite a while. I
 feel like I'm reasonably smart, but nova was certainly the largest
 python project I'd ever seen. I submitted my first patch, and it was
 rejected -- as it should have been. However, Vishy then took the time
 to sit down with me and chat about what needed to change, and how to
 improve the patch. That's really why I'm still involved with
 OpenStack, Vishy took an interest and was always happy to chat. I'm
 told by others that they have had similar experiences.
 
 I think that's what compute is lacking at the moment. For the last few
 cycles we've focused on the technical, and now the social aspects are
 our biggest problem. I think this is a pendulum, and perhaps in a
 release or two we'll swing back to needing to re-emphasise on
 technical aspects, but for now we're doing poorly on social things.
 Some examples:
 
 - we're not keeping up with code reviews because we're reviewing the
 wrong things. We have a high volume of patches which are unlikely to
 ever land, but we just reject them. So far in the Icehouse cycle we've
 seen 2,334 patchsets proposed, of which we approved 1,233. Along the
 way, we needed to review 11,747 revisions. We don't spend enough time
 working with the proposers to improve the quality of their code so
 that it will land. Specifically, whilst review comments in gerrit are
 helpful, we need to identify up and coming contributors and help them
 build a relationship with a mentor outside gerrit. We can reduce the
 number of reviews we need to do by improving the quality of initial
 proposals.
 
 - we're not keeping up with bug triage or, worse, actually closing
 bugs. I think part of this is that people want to land their features,
 but part of it is also that closing bugs is super frustrating at the
 moment. It can take hours (or days) to replicate and then diagnose a
 bug. You propose a fix, and then it takes weeks to get reviewed. I'd
 like to see us tweak the code review process to prioritise bug fixes
 over new features for the Juno cycle. We should still land features,
 but we should obsessively track review latency for bug fixes. Compute
 fails if we're not producing reliable production grade code.
 
 - I'd like to see us focus more on consensus building. We're a team
 after all, and when we argue solely about the technical aspects of a
 problem we ignore the fact that we're teaching the people involved a
 behaviour that will continue on. Ultimately if we're not a welcoming
 project that people want to code on, we'll run out of developers. I
 personally want to be working on compute in five years, and I want the
 compute of the future to be a vibrant, friendly, supportive place. We
 get there by modelling the behaviour we want to see in the future.
 
 So, some specific actions I think we should take:
 
 - when we reject a review from a relatively new contributor, we should
 try and pair them up with a more experienced developer to get some
 coaching. That experienced dev should take point on code reviews for
 the new person so that they receive low-latency feedback as they
 learn. Once the experienced dev is ok with a review, nova-core can
 pile on to actually get the code approved. This will reduce the
 workload for 

Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-28 Thread Jay Pipes
On Fri, 2014-03-28 at 11:01 +, Day, Phil wrote:
  Personally, I feel it is a mistake to continue to use the Amazon concept
  of an availability zone in OpenStack, as it brings with it the
  connotation from AWS EC2 that each zone is an independent failure
  domain. This characteristic of EC2 availability zones is not enforced in
  OpenStack Nova or Cinder, and therefore creates a false expectation for
  Nova users.
 
 I think this is backwards training, personally. I think azs as separate 
 failure
 domains were done like that for a reason by amazon, and make good sense. 
 What we've done is overload that with cells, aggregates etc which should 
 have a better interface and are a different concept. Redefining well
 understood terms because they don't suit your current implementation is a
 slippery slope, and overloading terms that already have a meaning in the
 industry is just annoying.
 
 +1
 I don't think there is anything wrong with identifying new use cases and 
 working out how to cope with them:
 
  - First we generalized Aggregates
 - Then we mapped AZs onto aggregates as a special mutually exclusive group
 - Now we're recognizing that maybe we need to make those changes to support 
 AZs more generic so we can create additional groups of mutually exclusive 
 aggregates
 
 That all feels like good evolution.

I see your point, though I'm not sure it was all that good.

 But I don't see why that means we have to fit that in under the existing 
 concept of AZs - why can't we keep AZs as they are and have a better thing 
 called Zones that is just an OSAPI concept and is better than AZs?

Phil, that is exactly what I am proposing. For the next major version of
the Nova API, introducing a new generic container of compute resources.

  Arguments around not wanting to add new options to create server seem a bit 
 weak to me 

Who has made that argument?

 - for sure we don't want to add them in an uncontrolled way, but if we have a 
 new, richer, concept we should be able to express that separately.

Which is what I've proposed, no?

 I'm still not personally convinced by the use cases of racks having
 orthogonal power failure domains and switch failure domains - from a
 practical perspective it seems to me that it becomes really hard to work
 out where to separate VMs so that they don't share a failure mode.  Every
 physical DC design I've been involved with tries to get the different
 failure domains to align.  However, if the use case makes sense to someone
 then I'm not against extending aggregates to support multiple mutually
 exclusive groups.

My point is that if we had a concept of a generic container of compute
resources, then whether that container is an independent failure domain
would simply be an explicit trait of the container. As it
stands now with availability zones, being an independent failure domain
is an *implied* trait that isn't enforced by Nova, Neutron or Cinder.

Best,
-jay

 
 Phil
  
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] SR-IOV and IOMMU check

2014-03-28 Thread Steve Gordon
- Original Message -
 Hi, all
 
 Currently openstack can support SR-IOV device pass-through (at least there
 are some patches for this), but the prerequisite is that both IOMMU and
 SR-IOV must be enabled correctly. It seems there is no robust way to check
 this in openstack. I have implemented a way to do this and hope it can
 be committed upstream; this can help find the issue beforehand, instead
 of letting kvm report the no IOMMU found issue only once the VM is started.
 I didn't find an appropriate place to put this - do you think it is
 necessary? Where can it be put? I welcome your advice and thank you in
 advance.

What's the mechanism you are using on the host side to determine that IOMMU is 
supported/enabled?

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-28 Thread CARVER, PAUL
Jay Pipes wrote: 
I'm proposing getting rid of the host aggregate hack (or maybe evolving
it?) as well as the availability zone concept and replacing them with a
more flexible generic container object that may be hierarchical in
nature.

Is the thing you're proposing to replace them with something that already
exists or a brand new thing you're proposing should be created?

We need some sort of construct that allows the tenant to be confident that
they aren't going to lose multiple VMs simultaneously due to a failure of
underlying hardware. The semantics of it need to be easily comprehensible
to the tenant, otherwise you'll get people thinking they're protected because
they built a redundant pair of VMs but sheer bad luck results in them losing
them both at the same time.

We're using availability zone for that currently and it seems to serve the
purpose in a way that's easy to explain to a tenant.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-28 Thread Jay Pipes
On Fri, 2014-03-28 at 19:38 +0000, CARVER, PAUL wrote:
 Jay Pipes wrote: 
 I'm proposing getting rid of the host aggregate hack (or maybe evolving
 it?) as well as the availability zone concept and replacing them with a
 more flexible generic container object that may be hierarchical in
 nature.
 
 Is the thing you're proposing to replace them with something that already
 exists or a brand new thing you're proposing should be created?

Either an evolution of the host aggregate concept (possibly renamed) or
a brand new concept.

 We need some sort of construct that allows the tenant to be confident that
 they aren't going to lose multiple VMs simultaneously due to a failure of
 underlying hardware.

Tenants currently assume this is the case if they are using multiple
availability zones, but there is nothing in Nova that actually prevents
multiple availability zones from sharing hardware.

Frankly, this is an SLA thing, and should not be part of the API, IMO.
If a deployer wishes to advertise an SLA that says this container of
compute resources is a failure domain, then they should be free to make
that SLA and even include it in a description of said generic container
of compute resource, but there should be no *implicit* SLAs.

  The semantics of it need to be easily comprehensible
 to the tenant, otherwise you'll get people thinking they're protected because
 they built a redundant pair of VMs but sheer bad luck results in them losing
 them both at the same time.

Umm, that's possible today. There is an implicit trust right now in the
API that availability zones are independent failure domains. And what I
am telling you is that no such constraint exists in the implementation
of Nova availability zones (exposed via host aggregate).

 We're using availability zone for that currently and it seems to serve the
 purpose in a way that's easy to explain to a tenant.

It may be easy to explain to a tenant -- simply because of its use in
AWS. But that doesn't mean it's something that is real in practice.
You're furthering a false trust if you explain to tenants that an
availability zone is an independent failure domain when it can easily
NOT be an independent failure domain because of the exposure of
availability zones through the host aggregate concept (which themselves
may overlap hardware and therefore spoil the promise of independent
failure domains).

Thus, we need a different concept than availability zone to expose to
users. Thus, my proposal.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder]Persistance layer for cinder + taskflow

2014-03-28 Thread Jay Pipes
On Fri, 2014-03-28 at 18:16 +0000, Duncan Thomas wrote:
 On 28 March 2014 14:38, Joshua Harlow harlo...@yahoo-inc.com wrote:
  An idea that might be good to do is to start off using SQLite as the
  taskflow persistence backend.
 
  Get that working using SQLite files (which should be fine as a persistence
  method for most usages) and then after this works there can be discussion
  around moving those tables to something like MySQL and the
  benefits/drawbacks of doing that.
 
  What do u think?
 
  Start small then grow seems to be a good approach.
 
 I'd say add tables to the existing database first, it adds the smallest
 number of new moving parts, then look at a new database if and only if
 there seems to be evidence that one is needed. As a systems operator,
 it is hard enough managing one db let alone N local dbs...

My suggestion would be to add a new CONF option
(persistent_db_connection?) that would default to the value of
CONF.db.connection. That way you default to using the same DB as the
main Cinder tables, but automagically provide the deployer with the
ability to handle the taskflow tables separately if they want.
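
For concreteness, a minimal sketch of what I mean, assuming oslo.config and
illustrative option/group names (this is not an actual Cinder patch):

# Hedged sketch: option name, group names and fallback behaviour are
# illustrative assumptions, not real Cinder configuration.
from oslo.config import cfg

CONF = cfg.CONF

# Stand-in for the existing main DB option (normally registered by
# cinder / oslo.db itself).
CONF.register_opts([cfg.StrOpt('connection',
                               default='mysql://cinder@dbhost/cinder')],
                   group='database')

CONF.register_opts([
    cfg.StrOpt('persistent_db_connection',
               default=None,
               help='Connection string for taskflow persistence; '
                    'defaults to the main database connection.'),
], group='taskflow')


def taskflow_connection():
    # Fall back to the main Cinder DB unless the deployer configured a
    # dedicated connection for the taskflow tables.
    return (CONF.taskflow.persistent_db_connection or
            CONF.database.connection)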

best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dependency freeze exception for happybase (I would like version 0.8)

2014-03-28 Thread Nadya Privalova
Sean, please see my comment in the bug regarding 0.7. Unfortunately I have
nothing to suggest here. If the distribution contains only 0.7 we cannot work
with HBase through happybase, and I don't see any solution.
But anyway, if you change the requirements for happybase (I guess we need to
get 0.7 back, no?) could you please remove 0.4? 0.4 doesn't support
connection pools, which we need.

Thanks,
Nadya


On Fri, Mar 28, 2014 at 9:07 PM, Sean Dague s...@dague.net wrote:

 On 03/28/2014 12:50 PM, Thierry Carrez wrote:
  Thierry Carrez wrote:
  Julien Danjou wrote:
  On Thu, Mar 27 2014, Thomas Goirand wrote:
 
  -happybase>=0.4,<=0.6
  +happybase>=0.8
 
  Good for me, and Ceilometer is the only one using happybase as far as I
  know so that shouldn't be a problem.
 
  OK so I would be fine with happybase>=0.4,!=0.6,!=0.7 as it allows 0.8
  to be run without adversely impacting downstreams who don't have it
  available.
 
  Actually that should be happybase>=0.4,!=0.7 since 0.6 was allowed
 before.
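
 (For anyone sanity-checking these specifiers, a throwaway Python sketch
 using setuptools' pkg_resources shows which releases each one admits:)

 import pkg_resources

 req = pkg_resources.Requirement.parse('happybase>=0.4,!=0.7')
 for version in ('0.4', '0.6', '0.7', '0.8'):
     # prints True for 0.4, 0.6 and 0.8; False for the excluded 0.7
     print('%s allowed: %s' % (version, version in req))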

 The review has comments about 0.7 being the version in Debian and Ubuntu
 right now https://review.openstack.org/#/c/82438/.

 Which is problematic, as that's a version that's specifically being
 called out by the ceilometer team as not useable. Do we know why, and if
 it can be worked around?

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dependency freeze exception for happybase (I would like version 0.8)

2014-03-28 Thread Julien Danjou
On Thu, Mar 27 2014, Thomas Goirand wrote:

 -happybase>=0.4,<=0.6
 +happybase>=0.8

Good for me, and Ceilometer is the only one using happybase as far as I
know so that shouldn't be a problem.

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ML2 Type driver for supporting network overlays, with more than 4K seg

2014-03-28 Thread Mathieu Rohon
Hi,


The more I think about your use case, the more I think you should
create a BP to have tenant networks based on interfaces created with
the VDP protocol.
I'm not a VDP specialist, but if it creates some VLAN-backed interfaces,
you might match those physical interfaces with the
physical_interface_mappings parameter in your ml2_conf.ini. Then you
could create flat networks backed by those interfaces.
SR-IOV use cases also talk about using vif_type 802.1qbg:
https://wiki.openstack.org/wiki/Nova-neutron-sriov
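
(As a rough illustration - the physnet and interface names below are made
up, and the exact agent section depends on the deployment, e.g.
bridge_mappings for the OVS agent - the ml2_conf.ini mapping I have in mind
would look something like:)

[ml2]
type_drivers = flat,vlan
tenant_network_types = flat

[ml2_type_flat]
flat_networks = physnet_vdp

[linux_bridge]
# Map the logical physnet onto the VDP-created, VLAN-backed interface so
# flat tenant networks can be built on top of it.
physical_interface_mappings = physnet_vdp:eth2.100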


Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-28 Thread Chris Friesen

On 03/28/2014 05:01 AM, Jesse Pretorius wrote:

On 27 March 2014 20:52, Chris Friesen chris.frie...@windriver.com wrote:

It'd be nice to be able to do a heat template where you could
specify things like put these three servers on separate hosts from
each other, and these other two servers on separate hosts from each
other (but maybe on the same hosts as the first set of servers), and
they all have to be on the same network segment because they talk to
each other a lot and I want to minimize latency, and they all need
access to the same shared instance storage for live migration.


Surely this can be achieved with:
1) Configure compute hosts with shared storage and on the same switch
infrastructure in a host aggregate, with an AZ set in the aggregate
(setting the AZ gives visibility to the end-user)
2) Ensure that both the GroupAntiAffinityFilter and
AvailabilityZoneFilter are set up on the scheduler
3) Boot the instances using the availability zone and group scheduler hints
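
(As a rough sketch of what step 3 looks like with python-novaclient -
credentials, endpoint, names, image/flavor IDs and the group hint value
here are all illustrative:)

from novaclient.v1_1 import client

nova = client.Client('demo', 'password', 'demo-tenant',
                     'http://keystone.example.com:5000/v2.0')
# Boot into the shared-storage AZ, passing the group scheduler hint that
# the anti-affinity filters act on.
nova.servers.create('app-server-1',
                    image='<image-uuid>',
                    flavor='<flavor-id>',
                    availability_zone='shared-storage-az',
                    scheduler_hints={'group': '<group-uuid-or-name>'})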


Last I checked, heat doesn't support creating server groups.  Is it 
possible to use GroupAntiAffinityFilter without server groups?


I'm thinking of a setup where you may have multiple shared storage 
zones, such that not all compute nodes have access to the same storage 
(for performance reasons).


Similarly, in a large environment it's possible that compute nodes don't 
all have access to the same network.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder]Persistance layer for cinder + taskflow

2014-03-28 Thread Joshua Harlow
That'd work too, allowing for trying out/deploying/using this feature in 
different ways if it's wanted.

Hopefully not too many issues; if so, u know where to find us ;-)

-Josh

From: Jay Pipes jaypi...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, March 28, 2014 at 12:52 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Cinder]Persistance layer for cinder + taskflow

On Fri, 2014-03-28 at 18:16 +0000, Duncan Thomas wrote:
On 28 March 2014 14:38, Joshua Harlow harlo...@yahoo-inc.com wrote:
 An idea that might be good to do is to start off using SQLite as the
 taskflow persistence backend.

 Get that working using SQLite files (which should be fine as a persistence
 method for most usages) and then after this works there can be discussion
 around moving those tables to something like MySQL and the
 benefits/drawbacks of doing that.

 What do u think?

 Start small then grow seems to be a good approach.
I'd say add tables to the existing database first, it adds the smallest
number of new moving parts, then look at a new database if and only if
there seems to be evidence that one is needed. As a systems operator,
it is hard enough managing one db let alone N local dbs...

My suggestion would be to add a new CONF option
(persistent_db_connection?) that would default to the value of
CONF.db.connection. That way you default to using the same DB as the
main Cinder tables, but automagically provide the deployer with the
ability to handle the taskflow tables separately if they want.

best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dependency freeze exception for happybase (I would like version 0.8)

2014-03-28 Thread Sean Dague
So how did Ceilometer get into this situation? Because the ceilometer
requirements are happybase>=0.4,<=0.6

So we told distributions at Feature Freeze that this was an adequate version
for ceilometer, and now at RC we're telling them it's not. And that's
honestly too late for most of them.

So I remain -1 on this at this point (if ttx wants to override, that's
his call). And we need to figure out how we don't end up in the same
position in Juno.

-Sean

On 03/28/2014 03:56 PM, Nadya Privalova wrote:
 Sean, please see my comment in the bug regarding 0.7. Unfortunately I have
 nothing to suggest here. If the distribution contains only 0.7 we cannot
 work with HBase through happybase, and I don't see any solution.
 But anyway, if you change the requirements for happybase (I guess we need
 to get 0.7 back, no?) could you please remove 0.4? 0.4 doesn't support
 connection pools, which we need.
 
 Thanks,
 Nadya
 
 
 On Fri, Mar 28, 2014 at 9:07 PM, Sean Dague s...@dague.net wrote:
 
 On 03/28/2014 12:50 PM, Thierry Carrez wrote:
  Thierry Carrez wrote:
  Julien Danjou wrote:
  On Thu, Mar 27 2014, Thomas Goirand wrote:
 
  -happybase>=0.4,<=0.6
  +happybase>=0.8
 
  Good for me, and Ceilometer is the only one using happybase as
 far as I
  know so that shouldn't be a problem.
 
  OK so I would be fine with happybase>=0.4,!=0.6,!=0.7 as it
 allows 0.8
  to be run without adversely impacting downstreams who don't have it
  available.
 
  Actually that should be happybase>=0.4,!=0.7 since 0.6 was
 allowed before.
 
 The review has comments about 0.7 being the version in Debian and Ubuntu
 right now https://review.openstack.org/#/c/82438/.
 
 Which is problematic, as that's a version that's specifically being
 called out by the ceilometer team as not useable. Do we know why, and if
 it can be worked around?
 
 -Sean
 
 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Problem plugging I/F into Neutron...

2014-03-28 Thread Paul Michali (pcm)
Hi,

I have a VM that I start up outside of OpenStack (as a short-term solution, 
until we get it working inside a Nova VM), using KVM. It has scripts associated 
with the three interfaces that are created, to hook this VM into Neutron. One 
I/F is on br-ex (connected to the “public” network for DevStack), another to 
br-int (connected to a management network that is created), and a third is 
connected to br-int (connected to the “private” network for DevStack). It’s 
understood these are hacks to get things going and can be brittle. With 
DevStack, I have a vanilla localrc, so I am using ML2, without any ML2 settings 
specified.

Now, the first two scripts use internal Neutron client calls to create the 
port, and then plug the VIF. The third uses Neutron to create the port, and 
then Nova to plug the VIF. I don’t know why - I inherited the scripts.

On one system, where Nova is based on commit b3e2e05 (10 days ago), this all 
works just peachy. Interfaces are hooked in and I can ping to my hearts 
content. On another system, that I just reimaged today, using the latest and 
greatest OpenStack projects, the third script fails.

I talked to Nova folks, and the vif is now an object, instead of a plain dict, 
and therefore calls on the object fail (as the script just provides a dict). I 
started trying to convert the vif to an object, but in discussing with a 
co-worker, we thought that we could also use Neutron calls for all of the setup 
of this third interface.

Well, I tried, and the port is created, but unlike the other system, the port 
is DOWN, and I cannot ping to or from it (the other ports still work fine, with 
this newer OpenStack repo). One difference is that the port is showing  
{port_filter: true, ovs_hybrid_plug: true} for binding:vif_details, in the 
neutron port-show output. On the older system this is empty (so this must be 
due to new changes in Neutron?)


Here is the Neutron based code (trimmed) to do the create and plugging:

import neutron.agent.linux.interface as vif_driver
from neutron.agent.common import config  # register_root_helper lives here (Icehouse tree)
from neutronclient.common import exceptions as qcexp
from neutronclient.neutron import client as qclient
from oslo.config import cfg

qc = qclient.Client('2.0', auth_url=KEYSTONE_URL, username=user, 
tenant_name=tenant, password=pw)

prefix, net_name = interface.split('__')
port_name = net_name + '_p'
try:
nw_id = qc.list_networks(name=net_name)['networks'][0]['id']
except qcexp.NeutronClientException as e:
…

p_spec = {'port': {'admin_state_up': True,
   'name': port_name,
   'network_id': nw_id,
   'mac_address': mac_addr,
   'binding:host_id': hostname,
   'device_id': vm_uuid,
   'device_owner': 'compute:None'}}

try:
port = qc.create_port(p_spec)
except qcexp.NeutronClientException as e:
...

port_id = port['port']['id']
br_name = 'br-int'

conf = cfg.CONF
config.register_root_helper(conf)
conf.register_opts(vif_driver.OPTS)

driver = vif_driver.OVSInterfaceDriver(cfg.CONF)
driver.plug(nw_id, port_id, interface, mac_addr, br_name)

Finally, here are the questions (hope you stuck with the long message)…

Any idea why the neutron version is not working? I know there were a bunch of 
recent changes.
Is there a way for me to turn off the ovs_hybrid_plug and port_filter flags? 
Should I?
Should I go back to using Nova and build a VIF object?
If so, any reason why the Neutron version would not work?
Is there a way to do a similar thing, but using the northbound APIs (so it 
isn’t as brittle)?

Thanks in advance!

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Call for testing: 2013.2.3 candidate tarballs

2014-03-28 Thread Adam Gandelman
Hi all,

We are scheduled to publish the 2013.2.3 release on Thursday April 3 for
Ceilometer, Cinder, Glance, Heat, Horizon, Keystone, Neutron and Nova.

The list of issues fixed so far can be seen here:

  https://launchpad.net/ceilometer/+milestone/2013.2.3
  https://launchpad.net/cinder/+milestone/2013.2.3
  https://launchpad.net/glance/+milestone/2013.2.3
  https://launchpad.net/heat/+milestone/2013.2.3
  https://launchpad.net/horizon/+milestone/2013.2.3
  https://launchpad.net/keystone/+milestone/2013.2.3
  https://launchpad.net/neutron/+milestone/2013.2.3
  https://launchpad.net/nova/+milestone/2013.2.3

We'd appreciate anyone who could test the candidate 2013.2.3 tarballs:

  http://tarballs.openstack.org/ceilometer/ceilometer-stable-havana.tar.gz
  http://tarballs.openstack.org/cinder/cinder-stable-havana.tar.gz
  http://tarballs.openstack.org/glance/glance-stable-havana.tar.gz
  http://tarballs.openstack.org/heat/heat-stable-havana.tar.gz
  http://tarballs.openstack.org/horizon/horizon-stable-havana.tar.gz
  http://tarballs.openstack.org/keystone/keystone-stable-havana.tar.gz
  http://tarballs.openstack.org/neutron/neutron-stable-havana.tar.gz
  http://tarballs.openstack.org/nova/nova-stable-havana.tar.gz

Effective immediately, stable/havana branches enter freeze until release on
Thursday (April 3).

Thanks
Adam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dependency freeze exception for happybase (I would like version 0.8)

2014-03-28 Thread James E. Blair
Sean Dague s...@dague.net writes:

 So how did Ceilometer get into this situation? Because the ceilometer
 requirements are happybase>=0.4,<=0.6

Is this a case where testing minimums might have helped?

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Ml2Plugin] Setting _original_network in NetworkContext:

2014-03-28 Thread Nader Lahouti
Hi Mathieu,

Thanks a lot for your reply.

Even in neutron/neutron/db/db_base_plugin_v2.py, create_network()
passes the network object:

    def create_network(self, context, network):
        """Handle creation of a single network."""
        # single request processing
        n = network['network']  # <== 'n' has all the network info (including extensions)
        # NOTE(jkoelker) Get the tenant_id outside of the session to avoid
        #                unneeded db action if the operation raises
        tenant_id = self._get_tenant_id_for_create(context, n)
        with context.session.begin(subtransactions=True):
            args = {'tenant_id': tenant_id,
                    'id': n.get('id') or uuidutils.generate_uuid(),
                    'name': n['name'],
                    'admin_state_up': n['admin_state_up'],
                    'shared': n['shared'],
                    'status': n.get('status', constants.NET_STATUS_ACTIVE)}
            network = models_v2.Network(**args)  # <== 'network' does not include extensions
            context.session.add(network)
        return self._make_network_dict(network, process_extensions=False)

Even if process_extensions is set to True, we still have the issue.

If using original_network causes confusion, can we add a new parameter and
use it in the mechanism driver?
Also, I haven't received any reply from Salvatore.

* Another issue with the 

Re: [openstack-dev] [nova] SR-IOV and IOMMU check

2014-03-28 Thread Luohao (brian)
This is the approach mentioned by linux-kvm.org

http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM

3. reboot and verify that your system has IOMMU support

AMD Machine
dmesg | grep AMD-Vi
 ...
 AMD-Vi: Enabling IOMMU at 0000:00:00.2 cap 0x40
 AMD-Vi: Lazy IO/TLB flushing enabled
 AMD-Vi: Initialized for Passthrough Mode
 ...
Intel Machine
dmesg | grep -e DMAR -e IOMMU
 ...
 DMAR:DRHD base: 0x00feb03000 flags: 0x0
 IOMMU feb03000: ver 1:0 cap c9008020e30260 ecap 1000
 ...
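
(A minimal Python sketch of wrapping such a check - note this is an
assumption-laden alternative that reads /sys/kernel/iommu_groups, which is
only populated when the kernel actually has an IOMMU driver active, rather
than grepping dmesg:)

import os


def iommu_enabled():
    # One subdirectory per IOMMU group appears here when the host booted
    # with a working IOMMU; the directory is empty or absent otherwise.
    path = '/sys/kernel/iommu_groups'
    return os.path.isdir(path) and bool(os.listdir(path))


print('IOMMU enabled: %s' % iommu_enabled())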



-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: Saturday, March 29, 2014 3:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] SR-IOV and IOMMU check

- Original Message -
 Hi, all
 
 Currently openstack can support SR-IOV device pass-through (at least 
 there are some patches for this), but the prerequisite is that both 
 IOMMU and SR-IOV must be enabled correctly. It seems there is no 
 robust way to check this in openstack. I have implemented a way to do 
 this and hope it can be committed upstream; this can help find 
 the issue beforehand, instead of letting kvm report the no 
 IOMMU found issue only once the VM is started. I didn't find an 
 appropriate place to put this - do you think it is necessary? Where 
 can it be put? I welcome your advice and thank you in advance.

What's the mechanism you are using on the host side to determine that IOMMU is 
supported/enabled?

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature about QEMU Assisted online-extend volume

2014-03-28 Thread Zhangleiqiang (Trump)
Hi, Duncan:
Thanks for your advice. 

About the summit session you mentioned, what can I do for it?


--
zhangleiqiang (Trump)

Best Regards

 -Original Message-
 From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
 Sent: Friday, March 28, 2014 12:43 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Feature about QEMU Assisted
 online-extend volume
 
 It sounds like a useful feature, and there are a growing number of touch 
 points
 for libvirt assisted cinder features. A summit session to discuss how that
 interface should work (hopefully get a few nova folks there as well, the
 interface has two ends) might be a good idea
 
 On 27 March 2014 16:15, Trump.Zhang zhangleiqi...@gmail.com wrote:
  The online-extend volume feature aims to extend a cinder volume which is
  in use, and to grow the corresponding disk in the instance without
  stopping the instance.
 
 
  The background is that John Griffith has proposed a BP ([1]) aimed to
  provide a cinder extension to enable extending in-use/attached volumes.
  After discussing with Paul Marshall, the assignee of this BP, I learned he
  only focuses on the OpenVZ driver currently, so I want to take on the
  libvirt/qemu work based on his current work.
 
  Whether a volume can be extended is determined by Cinder. However, if
  we want the capacity of the corresponding disk in the instance to grow,
  Nova must be involved.
 
  Libvirt provides a block_resize interface for this situation. For
  QEMU, the internal workflow for block_resize is as follows:
 
  1) Drain all IO of this disk from the instance
  2) If the backend of the disk is a normal file, such as raw, qcow2, etc.,
  qemu will do the *extend* work
  3) If the backend of the disk is a block device, qemu will first check
  whether there is enough free space on the device, and only if so will it
  do the *extend* work.
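
(As an illustration of the libvirt primitive being discussed - a hedged
libvirt-python sketch; the domain name and device alias are made-up
examples:)

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')  # hypothetical domain
# Grow the guest-visible disk 'vdb' to 20 GiB. The size argument is in
# KiB by default; QEMU drains in-flight I/O on the disk before resizing.
dom.blockResize('vdb', 20 * 1024 * 1024, 0)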
 
  So I think the online-extend volume will need to be QEMU-assisted, which
  is similar to BP [2].
 
  Do you think we should introduce this feature?
 
  [1]
  https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-exte
  nsion [2]
  https://blueprints.launchpad.net/nova/+spec/qemu-assisted-snapshots
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 --
 Duncan Thomas
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] SR-IOV and IOMMU check

2014-03-28 Thread Steve Gordon
- Original Message -
 This is the approach mentioned by linux-kvm.org
 
 http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM
 
 3. reboot and verify that your system has IOMMU support
 
 AMD Machine
 dmesg | grep AMD-Vi
  ...
  AMD-Vi: Enabling IOMMU at 0000:00:00.2 cap 0x40
  AMD-Vi: Lazy IO/TLB flushing enabled
  AMD-Vi: Initialized for Passthrough Mode
  ...
 Intel Machine
 dmesg | grep -e DMAR -e IOMMU
  ...
  DMAR:DRHD base: 0x00feb03000 flags: 0x0
  IOMMU feb03000: ver 1:0 cap c9008020e30260 ecap 1000
  ...

Right, but the question is whether grepping dmesg is an acceptable/stable API 
to be relying on from the Nova level. Basically what I'm saying is the reason 
there isn't a robust way to check this from OpenStack is that there doesn't 
appear to be a robust way to check this from the kernel?

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-docs] [cinder] recover Sheepdog documentation

2014-03-28 Thread Steve Gordon


- Original Message -
 On 28/03/14 14:00, Takazawa Shiori wrote:
  Hello
 
 
  I would like to add a document about Sheepdog for Commodity Storage
  Back-end Technologies. Will this work for users?
 
 
  http://docs.openstack.org/trunk/openstack-ops/content/storage_decision.html
 
 
  In previous documentation Sheepdog was included, but in the current
  version it has been dropped.
 
  Do you know the reason?
 
  http://dream.daynight.jp/openstack/openstack-ops/content/storage_decision.html
 
  (Sorry only found japanese version)
 
 
  Sheepdog is a distributed storage system and can be used from Cinder,
  Glance and Swift.
 
  It was introduced at the OpenStack Summit 2013 in Hong Kong.
 
  http://sheepdog.github.io/sheepdog/_static/sheepdog-openstack.pdf
 
 
 Hmm, I don't see sheepdog on
 https://wiki.openstack.org/wiki/CinderSupportMatrix
 
 I wonder why that is?
 
 Regards,
 
 
 Tom

Not sure, it appears to be in tree?

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev