Re: [openstack-dev] cgroups cpu share allocation in grizzly seems incorrect

2013-08-27 Thread Qiu Yu
On Fri, Aug 23, 2013 at 6:00 AM, Chris Friesen
chris.frie...@windriver.com wrote:

 I just noticed that in Grizzly regardless of the number of vCPUs the value
 of /sys/fs/cgroup/cpu/libvirt/qemu/instance-X/cpu.shares seems to be the
 same.  If we were overloaded, this would give all instances the same cpu
 time regardless of the number of vCPUs in the instance.

 Is this design intent?  It seems to me that it would be more correct to have
 the instance value be multiplied by the number of vCPUs.

I think it makes sense to give each vCPU an equal weight for scheduling's
sake. This makes each vCPU an equal entity with the same computing power.

For enforcing a per-vCPU hard limit using the CFS quota/period, please
check the following BP for reference.
https://blueprints.launchpad.net/nova/+spec/quota-instance-resource
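
For anyone following along, here is a minimal sketch (plain Python, not
Nova code) of the two knobs being discussed; the cgroup path, the base
share value and the helper names are all illustrative assumptions:

    CGROUP = "/sys/fs/cgroup/cpu/libvirt/qemu/instance-000001ca"

    def set_relative_weight(vcpus, base_shares=1024):
        # Chris's suggestion: weight the instance by its vCPU count so
        # that, under contention, a 4-vCPU guest gets 4x the CPU time
        # of a 1-vCPU guest.
        with open(CGROUP + "/cpu.shares", "w") as f:
            f.write(str(base_shares * vcpus))

    def set_hard_cap(vcpus, percent, period_us=100000):
        # What the quota-instance-resource BP targets: an absolute
        # ceiling, e.g. percent=50 caps each vCPU at half a host CPU.
        with open(CGROUP + "/cpu.cfs_period_us", "w") as f:
            f.write(str(period_us))
        with open(CGROUP + "/cpu.cfs_quota_us", "w") as f:
            f.write(str(int(period_us * vcpus * percent / 100)))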

--
Qiu Yu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Marconi

2013-08-27 Thread Flavio Percoco

On 26/08/13 15:34 -0500, Anne Gentle wrote:
Hi Kurt, 


There's a thread that John Griffith started about 3rd party storage drivers,
where the code lives, how to review, how to ensure quality and maintenance, see
http://lists.openstack.org/pipermail/openstack-dev/2013-July/012557.html.

It won't answer all your questions, but it gives you an idea of the
pluggable architecture and maintenance.
Anne



Hey Anne,

Thanks for pointing that out. 


We'd like to have a set of core drivers and let other drivers live
outside Marconi's code base. We're currently using stevedore, which
makes this possible and very easy. 
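
For context, a minimal sketch of the stevedore pattern; the entry-point
namespace and driver name below are made-up placeholders, not Marconi's
actual ones:

    from stevedore import driver

    def load_storage_driver(name):
        # Drivers register under a setuptools entry-point namespace, so
        # out-of-tree packages can ship their own drivers without
        # touching the core code base.
        mgr = driver.DriverManager(namespace='marconi.storage',
                                   name=name,
                                   invoke_on_load=True)
        return mgr.driver

    # storage = load_storage_driver('sqlite')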


That being said, we'll also need, as John mentioned in that thread, a
way to qualify external drivers and maybe have them listed in a file -
or linked as submodules - so that users know which ones are supposed
to work with the current version.

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Cinder] Driver qualification

2013-08-27 Thread Flavio Percoco

On 25/07/13 18:44 -0600, John Griffith wrote:

Hey Everyone,

Something I've been kicking around for quite a while now but never really been
able to get around to is the idea of requiring that drivers in Cinder run a
qualification test and submit the results prior to introduction into Cinder.



FWIW, big +1.

This is something we'll face in Marconi as well. We've been figuring
out a set of drivers that should live in the code base and we'd like
to keep all other drivers out of it. However, this means we need to
have a way to:

1) Know what drivers exist out there - at least have them listed
somewhere for implementers.

2) Make sure those drivers are compliant before being listed anywhere
in the project.

We have some base unit tests that test the lifecycle of all of Marconi's
resources, and we'll most probably do the same for functional tests.
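
For context, a rough sketch of that pattern - a shared lifecycle-test
mixin that every driver implementation must pass; all names here are
illustrative, not actual Marconi code:

    import unittest

    class DriverLifecycleTests(object):
        # Mixin: a driver package subclasses this alongside
        # unittest.TestCase and provides create_driver(); the shared
        # lifecycle tests then run against that driver.
        def create_driver(self):
            raise NotImplementedError

        def test_queue_lifecycle(self):
            d = self.create_driver()
            d.create_queue('q1')
            self.assertIn('q1', d.list_queues())
            d.delete_queue('q1')
            self.assertNotIn('q1', d.list_queues())

    class SQLiteDriverTest(DriverLifecycleTests, unittest.TestCase):
        def create_driver(self):
            from somepackage.storage import sqlite  # hypothetical driver
            return sqlite.Driver()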


I'd love to see this happen; to help make it happen, feel free
to ping me.

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Requesting feedback on review 35759

2013-08-27 Thread Gary Kotton

 -Original Message-
 From: Wang, Shane [mailto:shane.w...@intel.com]
 Sent: Tuesday, August 27, 2013 6:31 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Nova] Requesting feedback on review 35759
 
 Hi,
 
 We submitted the patches for bp
 https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling
 1+ month ago.
 The first patch is to add a column to save metrics collected by plugins -
 https://review.openstack.org/#/c/35759/.
 Is anyone interested in that? Would it be possible to get some
 reviews for it?

 [Gary Kotton]  I have taken a look. I too have patches in from over a month 
ago and understand the frustration... 

 
 Thanks.
 --
 Shane
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Add a library js for creating charts

2013-08-27 Thread Julien Danjou
On Mon, Aug 26 2013, Maxime Vidori wrote:

 Currently, the charts for Horizon are directly created with D3. Maybe if we
 add a js library on top of d3 it will be easier and development will be
 faster. A blueprint was created at
 https://blueprints.launchpad.net/horizon/+spec/horizon-chart.js We actually
 need some reviews or feedback.

It sounds like a good plan to pick Rickshaw. Better to build on top of
it and contribute back to it than to start cold or reinvent the
wheel.

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Need help with Alembic...

2013-08-27 Thread Boris Pavlovic
Jay,

I should probably share our work around the DB with you.

Migrations should be run only in production and only for production
backends (e.g. psql and mysql).
In tests we should use schemas created by the models
(BASE.metadata.create_all()).

We are not able to use this approach at the moment because we don't have
any mechanism to check that MODELS and SCHEMAS are EQUAL.
And actually, MODELS and SCHEMAS are DIFFERENT.

E.g. in Ceilometer we have a BP that syncs models and migrations:
https://blueprints.launchpad.net/ceilometer/+spec/ceilometer-db-sync-models-with-migrations
(in other projects we are doing the same)

And we are also working on (oslo) generic tests that check that models
and migrations are equal:
https://review.openstack.org/#/c/42307/


So our roadmap (in this case) is:
1) Soft switch to alembic (with code that allows having sqlalchemy-migrate
and alembic migrations at the same time)
2) Sync models and migrations (fix DB schemas as well)
3) Add the generic test from oslo that checks all of this
4) Use BASE.metadata.create_all() for schema creation instead of migrations
(see the sketch below)
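
A tiny sketch of what step 4 looks like in a test fixture (assuming
SQLAlchemy declarative models; the model here is illustrative, not the
actual ceilometer code):

    import sqlalchemy
    from sqlalchemy.ext.declarative import declarative_base

    BASE = declarative_base()

    class Meter(BASE):
        __tablename__ = 'meter'
        id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
        name = sqlalchemy.Column(sqlalchemy.String(255))

    def setup_test_schema(engine):
        # In tests: build the schema straight from the models,
        # no migrations involved.
        BASE.metadata.create_all(engine)

    def teardown_test_schema(engine):
        BASE.metadata.drop_all(engine)

    # engine = sqlalchemy.create_engine('sqlite://')  # in-memory for tests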


But in OpenStack is not so simple to implement such huge changes, so it
take some time=)


Best regards,
Boris Pavlovic
---
Mirantis Inc.

On Tue, Aug 27, 2013 at 12:02 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/26/2013 03:40 PM, Herndon, John Luke (HPCS - Ft. Collins) wrote:

 Jay -

 It looks like there is an error in the migration script that causes it to
 abort:

 AttributeError: 'ForeignKeyConstraint' object has no attribute 'drop'

 My guess is the migration runs on the first test, creates the event_type
 table fine, but exits with the above error, so the migration is not
 complete. Thus every subsequent test tries to migrate the db, and
 notices that event_type already exists.


 I've corrected that particular mistake and pushed an updated migration
 script.

 Best,
 -jay



  -john

 On 8/26/13 1:15 PM, Jay Pipes jaypi...@gmail.com wrote:

  I just noticed that every single test case for SQL-driver storage is
 executing every single migration upgrade before every single test case
 run:

 https://github.com/openstack/ceilometer/blob/master/ceilometer/tests/db.py#L46

 https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/impl_sqlalchemy.py#L153

 instead of simply creating a new database schema from the models in the
 current source code base using a call to sqlalchemy.MetaData.create_all().

 This results in re-running migrations over and over again, instead of
 having dedicated migration tests that would test each migration
 individually, as is the case in projects like Glance...

 Is this intentional?

 Best,
 -jay

 On 08/26/2013 02:59 PM, Sandy Walsh wrote:

 I'm getting the same problem with a different migration (mine is
 complaining that a column already exists)

 http://paste.openstack.org/show/44512/

 I've compared it to the other migrations and it seems fine.

 -S

 On 08/26/2013 02:34 PM, Jay Pipes wrote:

 Hey all,

 I'm trying to figure out what is going wrong with my code for this
 patch:

 https://review.openstack.org/41316

 I had previously added a sqlalchemy-migrate migration script to add an
 event_type table, and had that working, but then was asked to instead
 use Alembic for migrations. So, I removed the sqlalchemy-migrate
 migration file and added an Alembic migration [1].

 Unfortunately, I am getting the following error when running tests:

 OperationalError: (OperationalError) table event_type already exists
 u'\nCREATE TABLE event_type (\n\tid INTEGER NOT NULL, \n\tdesc
 VARCHAR(255), \n\tPRIMARY KEY (id), \n\tUNIQUE (desc)\n)\n\n' ()

 The migration adds the event_type table. I've seen this error occur
 before when using SQLite due to SQLite's ALTER TABLE statement not
 allowing the rename of a column. In the sqlalchemy-migrate migration, I
 had a specialized SQLite migration upgrade [2] and downgrade [3]
 script,
 but I'm not sure how I am supposed to handle this in Alembic. Could
 someone help me out?

 Thanks,
 -jay

 [1]

 https://review.openstack.org/#/c/41316/16/ceilometer/storage/sqlalchemy/alembic/versions/49036dfd_add_event_types.py

 [2]

 https://review.openstack.org/#/c/41316/14/ceilometer/storage/sqlalchemy/migrate_repo/versions/013_sqlite_upgrade.sql

 [3]

 https://review.openstack.org/#/c/41316/14/ceilometer/storage/sqlalchemy/migrate_repo/versions/013_sqlite_downgrade.sql
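
For what it's worth, newer Alembic releases grew batch operations that
handle exactly this SQLite limitation by doing the create/copy/drop
dance behind the scenes; a sketch, with illustrative table and column
names:

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        op.create_table(
            'event_type',
            sa.Column('id', sa.Integer, primary_key=True),
            sa.Column('desc', sa.String(255), unique=True),
        )
        # Renaming a column on SQLite needs batch mode, which rebuilds
        # the table instead of relying on ALTER TABLE:
        with op.batch_alter_table('event') as batch_op:
            batch_op.alter_column('unique_name_id',
                                  new_column_name='event_type_id')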



Re: [openstack-dev] [Nova] Requesting feedback on review 35759

2013-08-27 Thread Gary Kotton


 -Original Message-
 From: Wang, Shane [mailto:shane.w...@intel.com]
 Sent: Tuesday, August 27, 2013 10:51 AM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [Nova] Requesting feedback on review 35759
 
 Thank you, Gary, a little bit of frustration but still have passion :)

 [Gary Kotton] Cool. Nice to hear. 

 
 --
 Shane
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Propose Liang Chen for heat-core

2013-08-27 Thread Steven Hardy
On Thu, Aug 22, 2013 at 04:57:31PM +0100, Steven Hardy wrote:
 Hi,
 
 I'd like to propose that we add Liang Chen to the heat-core team[1]
 
 Liang has been doing some great work recently, consistently providing good
 review feedback[2][3], and also sending us some nice patches[4][5], 
 implementing
 several features and fixes for Havana.
 
 Please respond with +1/-1.

Thanks for all the responses, I've now added Liang to heat-core, welcome!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Interested in a mid-Icehouse-cycle Nova meet-up?

2013-08-27 Thread Daniel P. Berrange
On Mon, Aug 19, 2013 at 05:42:00PM -0400, Russell Bryant wrote:
 Greetings,
 
 Some OpenStack programs have started a nice trend of getting together in
 the middle of the development cycle.  These meetups can serve a number
 of useful purposes: community building, ramping up new contributors,
 tackling hard problems by getting together in the same room, and more.
 
 I am in the early stages of attempting to plan a Nova meet-up for the
 middle of the Icehouse cycle.  To start, I need to get a rough idea of
 how much interest there is.
 
 I have very little detail at this point, other than I'm looking at
 locations in the US, and that it would be mid-cycle (January/February).

Is openstack looking to have a strong presence at FOSDEM 2014? I didn't
make it to FOSDEM this year, but IIUC, there were quite a few openstack
contributors & talks in 2013.

IOW, should we consider holding the meetup in Brussels just before/after
FOSDEM, so that people who want/need to attend both can try to maximise
utilization of their often limited travel budgets and/or minimise the
number of days lost to travelling?

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] savanna-extra and hadoop-patch

2013-08-27 Thread Ruslan Kamaldinov
Matt, 

From the bug description:
Affects Version/s: 1.2.0, 2.0.3-alpha

Target Version/s: 3.0.0, 2.3.0


So, it seems that Hadoop folks don't intend to include this patch into Hadoop 
1.x


Ruslan 


On Tuesday, August 27, 2013 at 2:41 PM, Matthew Farrellee wrote:

 Howdy Ivan,
 
 FYI, https://issues.apache.org/jira/browse/HADOOP-8545 is currently 
 targeting 1.2.0 and 2.0.3-alpha. And the code (HADOOP-8545-034.patch) 
 appears to provide support to Hadoop 1.x HDFS, though I may be missing 
 something.
 
 I'd suggest adding a Swift HCFS repo only if the code is not 
 destined to go to Apache Hadoop.
 
 +1 discuss at meeting
 
 Best,
 
 
 s/matt/erik/
 
 On 08/27/2013 01:45 AM, Sergey Lukjanov wrote:
  Hi Erik,
  
  First of all, savanna-extra has been created exactly for such needs -
  to store all the stuff that we need but that couldn't be placed in
  other repos. Initially it contained the elements and a pre-built jar
  with the Swift HCFS. Now the latter has been moved to the CDN, and
  it's a good idea to make a separate project for the elements.
  
  As for the Swift HCFS, the code attached to HADOOP-8545 is targeted
  at Hadoop 2 and should be patched to work with Hadoop 1.x correctly.
  So, that's why we added it to the extra repo. It looks like it's ok
  to add one more repo for the Swift HCFS near savanna at stackforge,
  like the HCFS for Gluster[0].
  
  So, let's discuss both of the migrations at the next IRC team meeting.
  
  [0] https://github.com/gluster/hadoop-glusterfs
  
  Sincerely yours,
  Sergey Lukjanov
  Savanna Technical Lead
  Mirantis Inc.
  
  On Aug 27, 2013, at 5:18, Matthew Farrellee m...@redhat.com wrote:
  
   https://review.openstack.org/#/c/42926/
   
   I didn't get back to this on Friday and it got merged this morning, so 
   here's my feedback.
   
   The savanna-extra repository now appears to hold (a) DIB image elements 
   as well as (b) the source for the Swift backed HCFS (Hadoop Compatible 
   File System) implementation.
   
   If I understand this correctly, (b) is actually the patch set that is 
   being proposed to the Apache Hadoop community. That patch set has not 
   been accepted and is being tracked in HADOOP-8545[0], which appears 
   stalled since July 2013.
   
   Let's break Savanna's DIB elements out of savanna-extra and into 
   savanna-image-elements. It has a clear path forward and a good definition 
   of scope.
   
   Let's also leave savanna-extra as a grab bag, whose only occupant is 
   currently the Swift code. Eventually that code will need a proper home, 
   either contributed to Apache Hadoop or broken out as its own project.
   
   Best,
   
   
   matt
   
   [0] https://issues.apache.org/jira/browse/HADOOP-8545
   




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] savanna-extra and hadoop-patch

2013-08-27 Thread Matthew Farrellee

Oh good catch. That's some poor UX.

We should find out why it isn't targeted for a 1.x release.

Best,


matt

On 08/27/2013 06:48 AM, Ruslan Kamaldinov wrote:

Matt,

From the bug description:
Affects Version/s: 1.2.0, 2.0.3-alpha

Target Version/s: 3.0.0, 2.3.0


So, it seems that Hadoop folks don't intend to include this patch into Hadoop 
1.x


Ruslan


On Tuesday, August 27, 2013 at 2:41 PM, Matthew Farrellee wrote:


Howdy Ivan,

FYI, https://issues.apache.org/jira/browse/HADOOP-8545 is currently
targeting 1.2.0 and 2.0.3-alpha. And the code (HADOOP-8545-034.patch)
appears to provide support to Hadoop 1.x HDFS, though I may be missing
something.

I'd suggest adding a Swift HCFS repo only if the code is not
destined to go to Apache Hadoop.

+1 discuss at meeting

Best,


s/matt/erik/





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Quantum resource URL with more than two levels

2013-08-27 Thread B Veera-B37207
Hi,

The current infrastructure provided in Quantum [Grizzly] for building
Quantum API resource URLs, using the base function 'base.create_resource()'
and RESOURCE_ATTRIBUTE_MAP/SUB_RESOURCE_ATTRIBUTE_MAP, supports only
two-level URIs.
Example:
GET  /lb/pools/pool_id/members/member_id

Some applications may need more than two levels of URL support. Example:
GET /lb/pools/pool_id/members/member_id/xyz/xyz_id
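
For reference, a rough sketch of how a sub-resource is declared today;
the structure follows the Grizzly-era LBaaS extension, and the attribute
details are illustrative:

    SUB_RESOURCE_ATTRIBUTE_MAP = {
        'members': {
            'parent': {'collection_name': 'pools',
                       'member_name': 'pool'},
            'parameters': {
                'address': {'allow_post': True, 'allow_put': False,
                            'is_visible': True},
            }
        }
        # A third level ('xyz' under members) has nowhere to hang here:
        # 'parent' describes exactly one ancestor, hence the two-level
        # limit in base.create_resource().
    }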

If anybody is interested in this, we would like to contribute it as a BP and
take it upstream.

Regards,
Veera.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Add a library js for creating charts

2013-08-27 Thread Ladislav Smola
I have prepared a testing implementation of Rickshaw wrapped into a 
general line chart and connected it to Ceilometer here:

https://review.openstack.org/#/c/35590/
(rendering is mostly copied from the examples, with some parts from Maxime 
Vidori)


Rickshaw really works like a charm. I think it will be the best choice.

It is a work in progress and the statistics data needs to be 
formatted correctly, but it shows this could work.


I will extract the parts into the correct Blueprints and I will start the 
blueprints for implementing the charts in the dashboard, then 
connect them through dependencies, so people can start implementing this 
in the dashboard.


There is an ongoing UX discussion about ceilometer and the charts in 
the dashboard and how they will look. I expect we will use 
scatterplot, pie and bar charts (we are using these on overview pages in 
tuskar-ui), so these charts should probably be packed in a similar manner 
(though only the scatterplot is in Rickshaw).



On 08/27/2013 10:14 AM, Julien Danjou wrote:

On Mon, Aug 26 2013, Maxime Vidori wrote:


Currently, the charts for Horizon are directly created with D3. Maybe if we
add a js library on top of d3 it will be easier and development will be
faster. A blueprint was created at
https://blueprints.launchpad.net/horizon/+spec/horizon-chart.js We actually
need some reviews or feedback.

It sounds like a good plan to pick Rickshaw. Better to build on top of
it and contribute back to it than to start cold or reinvent the
wheel.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About multihost patch review

2013-08-27 Thread Yongsheng Gong
First, 'be like nova-network' is a merit for some deployments.
Second, allowing the admin to decide at runtime which network will be
multihosted lets neutron continue using the current network node
(dhcp agent) mode at the same time.

If we force networks to be multihosted when the configuration option
enable_multihost is true, and the administrator then wants to switch
back to the normal neutron way, he/she must modify the configuration
item and restart.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About multihost patch review

2013-08-27 Thread Maru Newby

On Aug 26, 2013, at 4:06 PM, Edgar Magana emag...@plumgrid.com wrote:

 Hi Developers,
 
 Let me explain my point of view on this topic and please share your thoughts 
 in order to merge this new feature ASAP.
 
 My understanding is that multi-host is nova-network HA  and we are 
 implementing this bp 
 https://blueprints.launchpad.net/neutron/+spec/quantum-multihost for the same 
 reason.
 So, If in neutron configuration admin enables multi-host:
 etc/dhcp_agent.ini
 
 # Support multi host networks
 # enable_multihost = False
 
 Why do tenants need to be aware of this? They should just create networks in 
 the way they normally do and not by adding the multihost extension.

I was pretty confused until I looked at the nova-network HA doc [1].  The 
proposed design would seem to emulate nova-network's multi-host HA option, 
where it was necessary to both run nova-network on every compute node and 
create a network explicitly as multi-host.  I'm not sure why nova-network was 
implemented in this way, since it would appear that multi-host is basically 
all-or-nothing.  Once nova-network services are running on every compute node, 
what does it mean to create a network that is not multi-host?

So, to Edgar's question - is there a reason other than 'be like nova-network' 
for requiring neutron multi-host to be configured per-network?


m.

1: 
http://docs.openstack.org/trunk/openstack-compute/admin/content/existing-ha-networking-options.html


 I could be totally wrong and crazy, so please provide some feedback.
 
 Thanks,
 
 Edgar
 
 
 From: Yongsheng Gong gong...@unitedstack.com
 Date: Monday, August 26, 2013 2:58 PM
 To: Kyle Mestery (kmestery) kmest...@cisco.com, Aaron Rosen 
 aro...@nicira.com, Armando Migliaccio amigliac...@vmware.com, Akihiro 
 MOTOKI amot...@gmail.com, Edgar Magana emag...@plumgrid.com, Maru Newby 
 ma...@redhat.com, Nachi Ueno na...@nttmcl.com, Salvatore Orlando 
 sorla...@nicira.com, Sumit Naiksatam sumit.naiksa...@bigswitch.com, Mark 
 McClain mark.mccl...@dreamhost.com, Gary Kotton gkot...@vmware.com, 
 Robert Kukura rkuk...@redhat.com
 Cc: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: About multihost patch review
 
 Hi,
 Edgar Magana has commented to say:
 'This is the part that for me is confusing and I will need some clarification 
 from the community. Do we expect to have the multi-host feature as an 
 extension or something that will naturally work as long as the deployment 
 includes more than one Network Node. In my opinion, Neutron deployments with 
 more than one Network Node by default should call DHCP agents in all those 
 nodes without the need to use an extension. If the community has decided to 
 do this by extensions, then I am fine' at
 https://review.openstack.org/#/c/37919/11/neutron/extensions/multihostnetwork.py
 
 I have commented back, what is your opinion about it?
 
 Regards,
 Yong Sheng Gong
 
 
 On Fri, Aug 16, 2013 at 9:28 PM, Kyle Mestery (kmestery) kmest...@cisco.com 
 wrote:
 Hi Yong:
 
 I'll review this and try it out today.
 
 Thanks,
 Kyle
 
 On Aug 15, 2013, at 10:01 PM, Yongsheng Gong gong...@unitedstack.com wrote:
 
  The multihost patch is there for a long long time, can someone help to 
  review?
  https://review.openstack.org/#/c/37919/
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About multihost patch review

2013-08-27 Thread Maru Newby

On Aug 26, 2013, at 9:39 PM, Yongsheng Gong gong...@unitedstack.com wrote:

 First, 'be like nova-network' is a merit for some deployments.

I'm afraid 'merit' is a bit vague for me.  Would you please elaborate?
 

 Second, allowing the admin to decide at runtime which network will be 
 multihosted lets neutron continue using the current network node (dhcp 
 agent) mode at the same time.

If multi-host and non-multi-host networks are permitted to co-exist (because 
configuration is per-network), won't compute nodes have to be allowed to be 
heterogeneous (some multi-host capable, some not)? And won't Nova then need to 
schedule VMs configured with multi-host networks on compatible nodes? I don't 
recall mention of this issue in the blueprint or design doc, and would 
appreciate pointers to where this decision was documented.


 
 If we force networks to be multihosted when the configuration option 
 enable_multihost is true, and the administrator then wants to switch back to 
 the normal neutron way, he/she must modify the configuration item and restart.

I'm afraid I don't follow - are you suggesting that configuring multi-host 
globally will be harder on admins than the change under review? Switching to 
non-multi-host under the current proposal involves reconfiguring and restarting 
an awful lot of agents, to say nothing of the db changes.


m. 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Interested in a mid-Icehouse-cycle Nova meet-up?

2013-08-27 Thread Thierry Carrez
Daniel P. Berrange wrote:
 Is openstack looking to have a strong presence at FOSDEM 2014 ? I didn't
 make it to FOSDEM this year, but IIUC, there were quite a few openstack
 contributors  talks in 2013.

Yes, we are aiming for a devroom again at FOSDEM this year.

 IOW, should we consider holding the meetup in Brussels just before/after
 FOSDEM, so that people who want/need to attend both can try to maximise
 utilization of their often limited travel budgets and/or minimise the
 number of days lost to travelling ?

I would certainly like that, but I'm not sure the center of gravity for
Nova contributors is in Europe :)

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reminder: Project release status meeting - 21:00 UTC

2013-08-27 Thread Thierry Carrez
Today in the Project & release status meeting, we are one week away from
FeatureFreeze. We'll review the remaining blueprints before the final rush.

Feel free to add extra topics to the agenda:
[1] http://wiki.openstack.org/Meetings/ProjectMeeting

All Technical Leads for integrated programs should be present (if you
can't make it, please name a substitute on [1]). Other program leads and
everyone else are very welcome to attend.

The meeting will be held at 21:00 UTC on the #openstack-meeting channel
on Freenode IRC. You can look up how this time translates locally at:
[2] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130827T21

See you there,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] live-snapshot/cloning of virtual machines

2013-08-27 Thread Russell Bryant
On 08/26/2013 08:15 PM, Tim Smith wrote:
 Hi all,
 
 On Mon, Aug 19, 2013 at 11:49 PM, Bob Ball bob.b...@citrix.com wrote:
 
 I agree with the below from a XenServer perspective.  As with
 vmware, XenServer supports live snapshotting and creating multiple
 clones from that live snapshot.
 
 I understand that there is a XenAPI equivalent in the works and
 therefore would argue the API changes need to be accepted as a minimum.
 
 
 Can nova technical leadership provide clarification on the current
 standing of this blueprint? Two hypervisor vendors have expressed plans
 for supporting this feature, and one has specifically requested that the
 API changes be merged, but it appears that both the API changeset [1]
 and the novaclient support [2] have been rejected pending libvirt
 support (which has assumedly been ruled out for the Havana release).
 
 [1] https://review.openstack.org/#/c/34036/
 [2] https://review.openstack.org/#/c/43777/ 
  
 
 In order to minimize the feature divergence between hypervisors, I'd
 also argue that we should accept the libvirt implementation even if
 it uses unsupported APIs - perhaps disabled by default with a
 suitable warning that it isn't considered safe by libvirt/QEmu.
 
 
 It's understandable that changes to the libvirt driver would be held
 back until libvirt/qemu-upstream support for live snapshotting is
 established (if ever), but given that other vendors whose release
 cadences don't necessarily align with the nova release schedule have
 expressed plans to support the interface it's unclear why lack of
 libvirt driver support would block the entire blueprint.

Two other driver maintainers have expressed interest in it, but AFAIK,
there are no implementations of this feature ready for review and
merging for these drivers.  Given that's the case, it doesn't make any
sense to me to merge the API with no ability to use it.  I'm only saying
it should wait until it can be merged with something that makes it usable.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] live-snapshot/cloning of virtual machines

2013-08-27 Thread Russell Bryant
On 08/27/2013 10:06 AM, Alessandro Pilotti wrote:
 We are also planning to implement the live snapshot feature in the
 Hyper-V driver during the next release cycle. 
 
 I'm personally in favour of publishing the APIs in Havana, as this would
 provide a stable baseline at the beginning of the release cycle and also

The API is published already.  What matters even more than the API for
you as a driver maintainer is the driver interface, which is actually
already merged.  It went in before it became clear the libvirt patch
wouldn't go in, but I don't think there's any reason to remove it now.

 give the ability to users and third parties to backport the driver's
 feature to Havana (outside of the official repo of course).

If you're backporting stuff anyway, you can backport the API patch, as
well.  I see no sense in delivering an API to *everyone* that can't be used.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Russell Bryant
On 08/27/2013 10:43 AM, Daniel P. Berrange wrote:
 I tend to focus the bulk of my review activity on the libvirt driver,
 since that's where most of my knowledge is. I've recently done some
 reviews outside this area to help reduce our backlog, but I'm not
 so comfortable approving stuff in many of the general infrastructure
 shared areas since I've not done much work on those areas of code.
 
 I think Nova is large enough that it is (mostly) beyond the scope of any
 one person to know all areas of Nova code well enough to do quality
 reviews. IOW, as we grow the nova-core team further, it may be worth
 adding more reviewers who have strong knowledge of specific areas &
 can focus their review energy in those areas, even if their review
 count will be low when put in the context of nova as a whole.

I'm certainly open to that.

Another way I try to do this unofficially is to give certain +1s a whole
lot of weight when I'm looking at a patch.  I do this regularly when
looking over patches to hypervisor drivers I'm not very familiar with.

Another thing we could consider is taking this approach more officially.
Oslo has started doing this for its incubator.  A maintainer of a part
of the code not on oslo-core has their +1 treated as a +2 on that code.

http://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Daniel P. Berrange
On Tue, Aug 27, 2013 at 10:55:03AM -0400, Russell Bryant wrote:
 On 08/27/2013 10:43 AM, Daniel P. Berrange wrote:
  I tend to focus the bulk of my review activity on the libvirt driver,
  since that's where most of my knowledge is. I've recently done some
  reviews outside this area to help reduce our backlog, but I'm not
  so comfortable approving stuff in many of the general infrastructure
  shared areas since I've not done much work on those areas of code.
  
  I think Nova is large enough that it is (mostly) beyond the scope of any
  one person to know all areas of Nova code well enough to do quality
  reviews. IOW, as we grow the nova-core team further, it may be worth
  adding more reviewers who have strong knowledge of specific areas &
  can focus their review energy in those areas, even if their review
  count will be low when put in the context of nova as a whole.
 
 I'm certainly open to that.
 
 Another way I try to do this unofficially is give certain +1s a whole
 lot of weight when I'm looking at a patch.  I do this regularly when
 looking over patches to hypervisor drivers I'm not very familiar with.
 
 Another thing we could consider is take this approach more officially.
 Oslo has started doing this for its incubator.  A maintainer of a part
 of the code not on oslo-core has their +1 treated as a +2 on that code.
 
 http://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS

Yes, just having a list of expert maintainers for each area of Nova
would certainly be helpful in identifying whose comments to give
more weight, regardless of anything else we might do.

Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] live-snapshot/cloning of virtual machines

2013-08-27 Thread David Scannell
On Tue, Aug 27, 2013 at 10:34 AM, Russell Bryant rbry...@redhat.com wrote:

 On 08/27/2013 10:06 AM, Alessandro Pilotti wrote:
  We are also planning to implement the live snapshot feature in the
  Hyper-V driver during the next release cycle.
 
  I'm personally in favour of publishing the APIs in Havana, as this would
  provide a stable baseline at the beginning of the release cycle and also

 The API is published already.  What matters even more than the API for
 you as a driver maintainer is the driver interface, which is actually
 already merged.  It went in before it became clear the libvirt patch
 wouldn't go in, but I don't think there's any reason to remove it now.


Since the API is published already, where is the harm in offering a backing
implementation of it? This completes the picture and leaves only the virt
driver maintainers to finish up the work, and they can do that in their own
time based on their own priorities and release schedules.

Ultimately, the API implementation and the virt driver work are being done by
two distinct groups. I don't think it's beneficial to block one group's
efforts because another group has different priorities, especially since both
groups have expressed a desire to see the work in.


  give the ability to users and third parties to backport the driver's
  feature to Havana (outside of the official repo of course).

 If you're backporting stuff anyway, you can backport the API patch, as
 well.  I see no sense in delivering an API to *everyone* that can't be
 used.

 Why require the additional hassle of backporting the API patch (which
affects a different set of nodes/services than backporting pure driver
support)? Especially since the API patch simply fills in the implementation
of the published API.

I understand that in the current outlook, Icehouse will be the release where
this feature really shines because it'll be supported by most of the virt
drivers. However, during the 6 months of the Havana release there is a rough
cut of the functionality available in Vish's libvirt patch. Yes, it is not
the long-term solution, it is unsupported by the libvirt maintainers, and it
comes with a bunch of caveats around its use. But early adopters can
certainly use this patch to experiment with this API and see what interesting
workflows come out of it. That way they will be ready for when Icehouse lands
with full support and it is ready for primetime.

Thanks,
David Scannell
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Joe Gordon
On Tue, Aug 27, 2013 at 11:04 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Tue, Aug 27, 2013 at 10:55:03AM -0400, Russell Bryant wrote:
  On 08/27/2013 10:43 AM, Daniel P. Berrange wrote:
   I tend to focus the bulk of my review activity on the libvirt driver,
   since that's where most of my knowledge is. I've recently done some
   reviews outside this area to help reduce our backlog, but I'm not
   so comfortable approving stuff in many of the general infrastructure
   shared areas since I've not done much work on those areas of code.
  
   I think Nova is large enough that it is (mostly) beyond the scope of any
   one person to know all areas of Nova code well enough to do quality
   reviews. IOW, as we grow the nova-core team further, it may be worth
   adding more reviewers who have strong knowledge of specific areas &
   can focus their review energy in those areas, even if their review
   count will be low when put in the context of nova as a whole.
 
  I'm certainly open to that.
 
  Another way I try to do this unofficially is give certain +1s a whole
  lot of weight when I'm looking at a patch.  I do this regularly when
  looking over patches to hypervisor drivers I'm not very familiar with.
 
  Another thing we could consider is take this approach more officially.
  Oslo has started doing this for its incubator.  A maintainer of a part
  of the code not on oslo-core has their +1 treated as a +2 on that code.
 
  http://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS

 Yes, just having a list of expert maintainers for each area of Nova
 would certainly be helpful in identifying whose comments to place
 more weight by, regardless of anything else we might do.


I think we can dynamically generate this based on git log/blame and gerrit
statistics per file.  For example, if someone has authored half the lines in
a file or reviewed most of the patches that touched that file, they are
probably very familiar with the file and would be a good person to
review any change.
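
A quick sketch of that idea using plain git (no gerrit API; the
threshold is made up for illustration):

    import collections
    import subprocess

    def top_authors(path, min_share=0.5):
        # Count surviving lines per author in a file via `git blame`.
        out = subprocess.check_output(
            ['git', 'blame', '--line-porcelain', path]).decode('utf-8')
        counts = collections.Counter(
            line[len('author '):] for line in out.splitlines()
            if line.startswith('author '))
        total = sum(counts.values())
        return [(author, n / float(total))
                for author, n in counts.most_common()
                if n / float(total) >= min_share]

    # print(top_authors('nova/virt/libvirt/driver.py'))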



 Daniel
 --
 |: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o-  http://virt-manager.org :|
 |: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] live-snapshot/cloning of virtual machines

2013-08-27 Thread Russell Bryant
On 08/27/2013 10:53 AM, Alessandro Pilotti wrote:
 That's IMO a different story: backporting a driver is usually quite
 trivial as it affects only one service (nova-compute) and one
 interaction point with Nova (the driver's interface). Between Havana and
 Grizzly, for example, the entire Hyper-V driver can be backported without
 substantial issues. On the deployment side, we have to care only about
 updating the code which runs on the compute nodes, using vanilla
 OpenStack components on the controller and remaining nodes.
 
 Backporting the public APIs is a whole different story; it affects way
 more components that need to be deployed (nova-api as a minimum of
 course), with way more interaction points that might turn into patching
 hell.

Do you really know that?  This is pretty hand wavy.  I think you're
making this backport out to be _way_ more complicated than it is.  I
don't see why it's any more complicated than a virt driver feature backport.

 What about publishing the API as blacklisted by default? This way it
 would be available only to users that enable it explicitly, while still
 supporting the scenario described above.

It still makes no sense to me to merge an API for a feature that can't
be used.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Alessandro Pilotti
On Aug 27, 2013, at 18:40 , Joe Gordon joe.gord...@gmail.com wrote:




On Tue, Aug 27, 2013 at 11:04 AM, Daniel P. Berrange berra...@redhat.com wrote:
On Tue, Aug 27, 2013 at 10:55:03AM -0400, Russell Bryant wrote:
 On 08/27/2013 10:43 AM, Daniel P. Berrange wrote:
  I tend to focus the bulk of my review activity on the libvirt driver,
  since that's where most of my knowledge is. I've recently done some
  reviews outside this area to help reduce our backlog, but I'm not
  so comfortable approving stuff in many of the general infrastructure
  shared areas since I've not done much work on those areas of code.
 
  I think Nova is large enough that it is (mostly) beyond the scope of any
  one person to know all areas of Nova code well enough to do quality
  reviews. IOW, as we grow the nova-core team further, it may be worth
  adding more reviewers who have strong knowledge of specific areas &
  can focus their review energy in those areas, even if their review
  count will be low when put in the context of nova as a whole.

 I'm certainly open to that.

 Another way I try to do this unofficially is give certain +1s a whole
 lot of weight when I'm looking at a patch.  I do this regularly when
 looking over patches to hypervisor drivers I'm not very familiar with.

 Another thing we could consider is take this approach more officially.
 Oslo has started doing this for its incubator.  A maintainer of a part
 of the code not on oslo-core has their +1 treated as a +2 on that code.

 http://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS

Yes, just having a list of expert maintainers for each area of Nova
would certainly be helpful in identifying whose comments to place
more weight by, regardless of anything else we might do.

I think we can dynamically generate this based on git log/blame and gerrit 
statistics per file. For example, if someone has authored half the lines in a 
file or reviewed most of the patches that touched that file, they are probably 
very familiar with the file and would be a good person to review any change.

+1 :-)




Daniel
--
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org :|
|: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] live-snapshot/cloning of virtual machines

2013-08-27 Thread Russell Bryant
On 08/27/2013 12:04 PM, Alessandro Pilotti wrote:
 
 
 
 On Aug 27, 2013, at 18:52 , Russell Bryant rbry...@redhat.com wrote:
 
 On 08/27/2013 10:53 AM, Alessandro Pilotti wrote:
 That's IMO a different story: backporting a driver is usually quite
 trivial as it affects only one service (nova-compute) and one
 interaction point with Nova (the driver's interface). Between Havana and
 Grizzly for example, the entire Hyper-V driver can be backported without
 substantial issues. On the deployment side, we have to care only about
 updating the code which runs on the compute nodes, using vanilla
 OpenStack components on the controller and remaining nodes.

 Backporting the public APIs is a whole different story, it affects way
 more components that need to be deployed (nova-api as a minimum of
 course), with way more interaction points that might turn into patching
 hell.

 Do you really know that?  This is pretty hand wavy.  I think you're
 making this backport out to be _way_ more complicated than it is.  I
 don't see why it's any more complicated than a virt driver feature
 backport.
 
 No, that's why I used might instead of will :-)
 
 More important than the coding issue, there's the deployment and support
 issue for additional components that need to be maintained outside of the
 main code repo.
 
 What about publishing the API as blacklisted by default? This way it
 would be available only to users that enable it explicitly, while still
 supporting the scenario described above.
 It still makes no sense to me to merge an API for a feature that can't
 be used.
 
 This depends on the definition of "can't be used":
 
 It's a feature that can't be used in the Havana code base, but I can
 assure you that we would make good use of it by backporting the I release code.

Used by code *in the tree*.  If you're backporting anything, you're on
your own.  I find it completely unreasonable to ask the upstream project
to worry about supporting that kind of thing.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] live-snapshot/cloning of virtual machines

2013-08-27 Thread Russell Bryant
On 08/27/2013 12:04 PM, Tim Smith wrote:
 
 On Tue, Aug 27, 2013 at 8:52 AM, Russell Bryant rbry...@redhat.com wrote:
 
  What about publishing the API as blacklisted by default? This way it
  would be available only to users that enable it explicitly, while
 still
  supporting the scenario described above.
 
 It still makes no sense to me to merge an API for a feature that can't
 be used.
 
 
 While it's true that there won't be an in-tree driver that supports the
 API for this release cycle, we have a commercial driver that supports it
 (https://github.com/gridcentric/cobalt).
 
 Having the API standardized in Havana would ensure that client support
 is immediately available for our users as well as for the other
 hypervisor vendors should they release a supporting driver in the next 9
 months. I believe there is precedent for publishing a nova API for those
 purposes.

IMO, to be the healthiest project we can be, we must focus on what code
is actually a part of Nova.  If you'd like to submit your changes for
inclusion into Nova, then we can talk.

What you are seeing here is a part of the pain of maintaining a fork.  I
am not OK with shifting part of that burden on to the upstream project
when it doesn't help the upstream project *at all*.

When we have supporting code to make the feature usable, then the API
can go in.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-dev] Rechecks and Reverifies

2013-08-27 Thread John Griffith
This message has gone out a number of times but I want to stress
(particularly to those submitting to Cinder) the importance of logging
accurate recheck information.  Please take the time to view the logs on a
Jenkins fail before blindly entering recheck no bug.  This is happening
fairly frequently and quite frankly it does us no good if we don't look at
the failure and capture things that might be going wrong in the tests.

It's not hard, the CI team has put forth a good deal of effort to actually
make it pretty easy.  There's even a how to proceed link provided upon
failure to walk you through the steps.  The main thing is you have to look
at the console output from your failed job.  Also just FYI, pep8 and
py26/27 failures are very rarely no bug; they are usually a real problem
in your patch.  It would be good to pay particular attention to these
before hitting recheck no bug.
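
As a concrete illustration, the flow should be: open the console log linked
from the Jenkins comment, find the first real error or traceback, match it
to a known bug on the recheck page (or file a new one), and only then leave
a comment of the form (the bug number here is just an example):

    recheck bug 1194943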

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Need help with Alembic...

2013-08-27 Thread Jay Pipes

On 08/27/2013 11:22 AM, Sandy Walsh wrote:

On 08/27/2013 05:32 AM, Boris Pavlovic wrote:

Jay,

I should probably share to you about our work around DB.

Migrations should be run only in production and only for production
backends (e.g. psql and mysql)
In tests we should use Schemas created by Models
(BASE.metadata.create_all())

We are not able to use this approach at the moment because we don't have
any mechanism to check that MODELS and SCHEMAS are EQUAL.
And actually MODELS and SCHEMAS are DIFFERENT.

E.g. in Ceilometer we have a BP that syncs models and migrations
https://blueprints.launchpad.net/ceilometer/+spec/ceilometer-db-sync-models-with-migrations
(in other projects we are doing the same)

And also we are working on (oslo) generic tests that check that
models and migrations are equal:
https://review.openstack.org/#/c/42307/


So our roadmap (in this case) is:
1) Soft switch to alembic (with code that allows having sqla-migrate
and alembic migrations at the same time)
2) Sync Models and Migrations (fix DB schemas also)
3) Add from oslo generic test that checks all this stuff
4) Use BASE.create_all() for Schema creation instead of migrations.


But in OpenStack it is not so simple to implement such huge changes, so it
takes some time=)


Hmm, so I'm not sure how to proceed here?

Should we be using alembic or the older migrations right now?

Are there any examples of what you're proposing here?


Yeah, basically I am just confused in the case of Ceilometer, 
specifically. There are sqlalchemy-migrate migrations [1] and then there 
are Alembic migrations [2] all in the same project.


I have no idea how they are supposed to work together, but it seems like 
the SQLalchemy migrations are supposed to be run before the Alembic 
migrations [3]. However, I don't think the Alembic migrations have even 
been tested, frankly. [4]


What I believe is an appropriate course of action would be to do a 
single patch (hopefully BEFORE Havana is released) that would:


a) Remove all SQLAlchemy-migrate migrations
b) Create an initial Alembic migration script that would set the 
database state to the model state that should have existed at the end 
of the existing SQLAlchemy migrations (currently, migration 012)
c) Have the initial Alembic migration remove the sqlalchemy-migrate 
versioning table and history, if it exists.
d) Remove the existing code in the base storage test case that calls 
conn.upgrade() in the setUp() method (!)
e) Have the base storage test case for the sqlalchemy driver simply call 
sqlalchemy.MetaData.create_all() instead of running any migrations
f) Have EVERY Alembic migration tested individually with a database 
schema that has actual data in it (like is done in Glance with the 
sqlalchemy-migrate migrations)
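
For illustration, b) and c) might look roughly like the following sketch of
an initial Alembic revision (only one example table shown; a real script
would create the full schema as of migration 012, and sa.inspect assumes
SQLAlchemy >= 0.8):

    # Hypothetical initial Alembic revision collapsing the old
    # sqlalchemy-migrate history.
    import sqlalchemy as sa
    from alembic import op

    revision = '000_initial'
    down_revision = None

    def upgrade():
        bind = op.get_bind()
        tables = sa.inspect(bind).get_table_names()
        # c) drop the old sqlalchemy-migrate bookkeeping table if present
        if 'migrate_version' in tables:
            op.drop_table('migrate_version')
        # b) create the schema as it stood at migration 012 (one table
        # shown here as an example)
        if 'event_type' not in tables:
            op.create_table(
                'event_type',
                sa.Column('id', sa.Integer, primary_key=True),
                sa.Column('desc', sa.String(255), unique=True))

    def downgrade():
        raise NotImplementedError('downgrade from initial state')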


That said, Alembic has no support for common ALTER TABLE operations with 
SQLite [5], and so I recommend having the migration tests in f) above 
only tested on production database engines (MySQL, PostgreSQL, etc).


Thoughts?
-jay

[1] 
https://github.com/openstack/ceilometer/tree/master/ceilometer/storage/sqlalchemy/migrate_repo/versions
[2] 
https://github.com/openstack/ceilometer/tree/master/ceilometer/storage/sqlalchemy/alembic/versions
[3] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/sqlalchemy/migration.py#L68

[4] https://bugs.launchpad.net/ceilometer/+bug/1217156
[5] 
https://bitbucket.org/zzzeek/alembic/issue/21/column-renames-not-supported-on-sqlite


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev] Rechecks and Reverifies

2013-08-27 Thread Alex Gaynor
I wonder if there's any sort of automation we can apply to this, for
example giving known rechecks signatures so that if a failure matches
a signature, the recheck is applied automatically.
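
Something as simple as this sketch might cover the common cases (the
patterns and bug numbers are invented for illustration):

    # Map known-transient failure signatures to their recheck bugs.
    import re

    SIGNATURES = {
        r'Timed out waiting for thing .* to become ACTIVE': 1192014,
        r'Connection to neutron failed: Maximum attempts reached': 1211915,
    }

    def suggest_recheck(console_log):
        for pattern, bug in SIGNATURES.items():
            if re.search(pattern, console_log):
                return 'recheck bug %d' % bug
        return None  # unknown failure: a human still has to look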

Alex


On Tue, Aug 27, 2013 at 9:18 AM, John Griffith
john.griff...@solidfire.com wrote:

 This message has gone out a number of times but I want to stress
 (particularly to those submitting to Cinder) the importance of logging
 accurate recheck information.  Please take the time to view the logs on a
 Jenkins fail before blindly entering recheck no bug.  This is happening
 fairly frequently and quite frankly it does us no good if we don't look at
 the failure and capture things that might be going wrong in the tests.

 It's not hard, the CI team has put forth a good deal of effort to actually
 make it pretty easy.  There's even a how to proceed link provided upon
 failure to walk you through the steps.  The main thing is you have to look
 at the console output from your failed job.  Also just FYI, pep8 and
 py26/27 failures are very rarely no bug; they are usually a real problem
 in your patch.  It would be good to pay particular attention to these
 before hitting recheck no bug.

 Thanks,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
I disapprove of what you say, but I will defend to the death your right to
say it. -- Evelyn Beatrice Hall (summarizing Voltaire)
The people's good is the highest law. -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] Service Relationships and Dependencies

2013-08-27 Thread John Speidel
Some services/components are related or have dependencies on other 
services and components. As an example, in HDP, the Hive service depends 
on HBase and Zookeeper. In Savanna, there is no way to express this 
relationship. If a user wanted to deploy Hive, they would need to know to 
install both HBase and Zookeeper a priori. Also, because the list of 
service components (node processes) that is provided to a user to be used 
in node groups is a flat list, only the component name gives any 
indication as to what service the components belong to. Because of this, 
it will likely be difficult for the user to understand exactly what 
components are required to be installed for a given service(s). 
Currently, the HDP stack consists of approximately 25 service components.



A primary reason that it isn't currently possible to express 
service/component relationships is that topology is defined from the 
bottom up. This means that a user first selects components and assigns 
them to a node template. The user's first interaction is with components, 
not services. Currently, the user will not know if a given topology is 
valid until an attempt is made to deploy a cluster and validate is 
called on the plugin. At this point, if the topology were invalid, the 
user would need to go back and create new node and cluster templates.




One way to express service relationships would be to define topology top 
down, with the user first selecting services. After selecting services, 
the related service components could be listed and the required 
components could be noted. This approach is a significant change to how 
Savanna currently works, has not been thoroughly thought through, and 
is only meant to promote conversation on the matter.
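
To make the idea concrete, a rough sketch (service names taken from the HDP 
example above; the structure itself is hypothetical):

    # Plugin-declared service dependencies and a validation pass.
    SERVICE_DEPS = {
        'HIVE': ['HBASE', 'ZOOKEEPER'],
        'HBASE': ['ZOOKEEPER'],
    }

    def missing_dependencies(selected_services):
        selected = set(selected_services)
        missing = {}
        for svc in selected:
            required = [dep for dep in SERVICE_DEPS.get(svc, [])
                        if dep not in selected]
            if required:
                missing[svc] = required
        return missing

    # missing_dependencies(['HIVE']) -> {'HIVE': ['HBASE', 'ZOOKEEPER']}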




After making new services available from the HDP plugin, it is clear 
that defining a desired (valid) topology will be very difficult and 
error prone with the current Savanna architecture. I look forward to 
discussing solutions to this matter with the community.



-John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Need help with Alembic...

2013-08-27 Thread Jay Pipes

On 08/27/2013 04:32 AM, Boris Pavlovic wrote:

Jay,

I should probably share to you about our work around DB.

Migrations should be run only in production and only for production
backends (e.g. psql and mysql)
In tests we should use Schemas created by Models
(BASE.metadata.create_all())


Agree on both.


We are not able to use this approach at the moment because we don't have
any mechanism to check that MODELS and SCHEMAS are EQUAL.
And actually MODELS and SCHEMAS are DIFFERENT.


Sorry, I don't understand the connection... how does not having a 
codified way of determining the difference between model and schema 
(BTW, this does exist in sqlalchemy-migrate... look at the 
compare_model_to_db method) not allow you to use metadata.create_all() 
in tests or mean that you can't run migrations only in production?
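
For the tests, that would boil down to something like this sketch (module
path assumed; a real version would reuse Ceilometer's own engine plumbing):

    import sqlalchemy as sa
    import testtools

    from ceilometer.storage.sqlalchemy import models

    class TestBase(testtools.TestCase):
        def setUp(self):
            super(TestBase, self).setUp()
            self.engine = sa.create_engine('sqlite://')
            # Build the schema straight from the models, no migrations.
            models.Base.metadata.create_all(self.engine)

        def tearDown(self):
            models.Base.metadata.drop_all(self.engine)
            super(TestBase, self).tearDown()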



E.g. in Ceilometer we have a BP that syncs models and migrations
https://blueprints.launchpad.net/ceilometer/+spec/ceilometer-db-sync-models-with-migrations
(in other projects we are doing the same)

And also we are working on (oslo) generic tests that check that
models and migrations are equal:
https://review.openstack.org/#/c/42307/


OK, cool.


So our roadmap (in this case) is:
1) Soft switch to alembic (with code that allows having sqla-migrate
and alembic migrations at the same time)


I don't see the point in this at all... I would rather see patches that 
just switch to Alembic and get rid of SQLAlchemy-migrate. Create an 
initial Alembic migration that has the last state of the database schema 
under SQLAlchemy-migrate... and then delete SA-Migrate.



2) Sync Models and Migrations (fix DB schemas also)
3) Add from oslo generic test that checks all this stuff
4) Use BASE.create_all() for Schema creation instead of migrations.


This is already done in some projects, IIRC... (Glance used to be this 
way, at least)



But in OpenStack it is not so simple to implement such huge changes, so it
takes some time=)


Best regards,
Boris Pavlovic
---
Mirantis Inc.










On Tue, Aug 27, 2013 at 12:02 AM, Jay Pipes jaypi...@gmail.com wrote:

On 08/26/2013 03:40 PM, Herndon, John Luke (HPCS - Ft. Collins) wrote:

Jay -

It looks there is an error in the migration script that causes
it to abort:

AttributeError: 'ForeignKeyConstraint' object has no attribute
'drop'

My guess is the migration runs on the first test, creates event
types
table fine, but exits with the above error, so migration is not
complete. Thus every subsequent test tries to migrate the db, and
notices that event types already exists.


I'd corrected that particular mistake and pushed an updated
migration script.

Best,
-jay



-john

On 8/26/13 1:15 PM, Jay Pipes jaypi...@gmail.com wrote:

I just noticed that every single test case for SQL-driver
storage is
executing every single migration upgrade before every single
test case
run:


https://github.com/openstack/ceilometer/blob/master/ceilometer/tests/db.py#L46

https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/impl_sqlalchemy.py#L153

instead of simply creating a new database schema from the
models in the
current source code base using a call to
sqlalchemy.MetaData.create_all().

This results in re-running migrations over and over again,
instead of
having dedicated migration tests that would test each migration
individually, as is the case in projects like Glance...

Is this intentional?

Best,
-jay

On 08/26/2013 02:59 PM, Sandy Walsh wrote:

I'm getting the same problem with a different migration
(mine is
complaining that a column already exists)

http://paste.openstack.org/show/44512/

I've compared it to the other migrations and it seems fine.

-S

On 08/26/2013 02:34 PM, Jay Pipes wrote:

Hey all,

I'm trying to figure out what is going wrong with my
code for this
patch:

https://review.openstack.org/41316

I had previously added a sqlalchemy-migrate
migration script to add an
event_type table, and had that working, but then was
asked to instead
 

Re: [openstack-dev] live-snapshot/cloning of virtual machines

2013-08-27 Thread Daniel P. Berrange
On Tue, Aug 27, 2013 at 12:13:49PM -0400, Russell Bryant wrote:
 On 08/27/2013 12:04 PM, Tim Smith wrote:
  
  On Tue, Aug 27, 2013 at 8:52 AM, Russell Bryant rbry...@redhat.com wrote:
  
   What about publishing the API as blacklisted by default? This way it
   would be available only to users that enable it explicitly, while
  still
   supporting the scenario described above.
  
  It still makes no sense to me to merge an API for a feature that can't
  be used.
  
  
  While it's true that there won't be an in-tree driver that supports the
  API for this release cycle, we have a commercial driver that supports it
  (https://github.com/gridcentric/cobalt).
  
  Having the API standardized in Havana would ensure that client support
  is immediately available for our users as well as for the other
  hypervisor vendors should they release a supporting driver in the next 9
  months. I believe there is precedent for publishing a nova API for those
  purposes.
 
 IMO, to be the healthiest project we can be, we must focus on what code
 is actually a part of Nova.  If you'd like to submit your changes for
 inclusion into Nova, then we can talk.
 
 What you are seeing here is a part of the pain of maintaining a fork.  I
 am not OK with shifting part of that burden on to the upstream project
 when it doesn't help the upstream project *at all*.
 
 When we have supporting code to make the feature usable, then the API
 can go in.

Totally agreed with this. Supporting APIs in Nova with no in-tree users,
to satisfy the requirements of out of tree drivers should be an explicit
non-goal of the community IMHO. If a 3rd party does not wish to contribute
their code to Nova codebase, then it is expected that they take on all the
burden of doing the extra integration work their fork/branch implies.

From a code review POV, I would also not be satisfied doing review of APIs
without illustration of an in-tree driver wired up to it. Doing API
design is hard work, and I've been burnt too many times on other projects
adding APIs without in tree users, which then had to be thrown away or
replaced when the in-tree user finally arrived. So I would be very much
against adding any APIs without in-tree users, even ignoring the fact
that I think the live VM cloning concept as a whole is flawed.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] live-snapshot/cloning of virtual machines

2013-08-27 Thread Dan Smith
 While it's true that there won't be an in-tree driver that supports
 the API for this release cycle, we have a commercial driver that
 supports it ( https://github.com/gridcentric/cobalt).

IMHO, out of tree virt drivers are completely out of scope here. We
change the virt driver API at will, adding, removing, and changing
things without any concern for what it may do to anything outside of
our tree. Until we commit to a stable virt driver API, the above
argument holds no weight for me.
 
 Having the API standardized in Havana would ensure that client
 support is immediately available for our users as well as for the
 other hypervisor vendors should they release a supporting driver in
 the next 9 months. I believe there is precedent for publishing a nova
 API for those purposes.

Having an API codified (and committed-to) before we have a working
implementation that uses it in Nova is asking for trouble, IMHO.

Personally, I think the hardest bit is already in the tree, which is
the piece that allows the API to communicate with a fictional
live-snapshot-supporting virt driver. Backporting the API and writing a
virt driver to the interface that is there is cake compared to what it
would take to bolt on the plumbing part.

IIRC, the plumbing was speculatively merged ahead of the libvirt and
API pieces, but perhaps the safest (and least confusing) thing for us
to do would be to yank it out prior to Havana?

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V Meeting minutes

2013-08-27 Thread Alessandro Pilotti
Today's Hyper-V meeting minutes:

Minutes:
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-08-27-16.06.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-08-27-16.06.txt
Log:
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-08-27-16.06.log.html


Thanks,

Alessandro
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Attribute validation questions (from review comments on 43559)

2013-08-27 Thread Paul Michali
Hi!

For VPNaaS there are some attributes, like the Dead Peer Detection interval and 
timeout, that have some dependencies (timeout should be > interval).  Another 
example is the minimum value for the MTU attribute, which would differ, 
depending upon whether IPv4 or IPv6 is being used.

I see that api/v2/attributes.py allows one to check an individual attribute. I 
have these questions:

Is there a way to validate an attribute that has a dependency?
If so, is that using some other mechanism than the validators in attributes.py?
For the DPD example, those two attributes are part of a dict. Is there a way to 
validate the dict, in addition to the individual attributes (can I have a 
validator on the dict)?
What about validating un-related attributes, like MTU? I'm not sure yet how 
I'll know whether IPv4 or IPv6 is selected (I assume by checking the peer_cidr 
attribute), as the user could specify MTU first.
If we don't test the MTU based on IPv4/6, then what do I use for a lower limit 
- the IPv4 limit of 68?
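
For reference, here is roughly what I imagine a dict-level validator could
look like, following the (data, valid_values) convention in attributes.py
(a sketch only, not tested):

    def _validate_dpd_dict(data, valid_values=None):
        if not isinstance(data, dict):
            return "'%s' is not a dictionary" % (data,)
        interval = data.get('interval')
        timeout = data.get('timeout')
        if interval is not None and timeout is not None:
            if timeout <= interval:
                return ("DPD timeout %s must be greater than "
                        "interval %s" % (timeout, interval))
        # Returning None means the dict validated OK.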

Thanks!

PCM (Paul Michali)

MAIL p...@cisco.com
IRC   pcm_  (irc.freenode.net)
TW   @pmichali



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Need help with Alembic...

2013-08-27 Thread Boris Pavlovic
Jay,


 We are not able to use this approach at the moment because we don't have
 any mechanism to check that MODELS and SCHEMAS are EQUAL.
 And actually MODELS and SCHEMAS are DIFFERENT.


Sorry, I don't understand the connection... how does not having a codified
way of determining the difference between model and schema (BTW, this does
exist in sqlalchemy-migrate... look at the compare_model_to_db method) not
allow you to use metadata.create_all() in tests or mean that you can't run
migrations only in production?


There is no method out of the box that will properly compare models with
migrations (especially in our case of supporting alembic and
sqlalchemy-migrate together)



 2) Sync Models and Migrations (fix DB schemas also)
 3) Add from oslo generic test that checks all this stuff
 4) Use BASE.create_all() for Schema creation instead of migrations.


This is already done in some projects, IIRC... (Glance used to be this way,
at least)


And it is totally unsafe (because the results of models and migrations are
different)


On Tue, Aug 27, 2013 at 8:30 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/27/2013 04:32 AM, Boris Pavlovic wrote:

 Jay,

 I should probably share to you about our work around DB.

 Migrations should be run only in production and only for production
 backends (e.g. psql and mysql)
 In tests we should use Schemas created by Models
 (BASE.metadata.create_all())


 Agree on both.


  We are not able to use this approach at the moment because we don't have
 any mechanism to check that MODELS and SCHEMAS are EQUAL.
 And actually MODELS and SCHEMAS are DIFFERENT.


 Sorry, I don't understand the connection... how does not having a codified
 way of determining the difference between model and schema (BTW, this does
 exist in sqlalchemy-migrate... look at the compare_model_to_db method) not
 allow you to use metadata.create_all() in tests or mean that you can't run
 migrations only in production?


  E.g. in Ceilometer we have a BP that syncs models and migrations
 https://blueprints.launchpad.net/ceilometer/+spec/ceilometer-db-sync-models-with-migrations
 (in other projects we are doing the same)

 And also we are working on (oslo) generic tests that check that
 models and migrations are equal:
 https://review.openstack.org/#/c/42307/


 OK, cool.


  So our roadmap (in this case) is:
 1) Soft switch to alembic (with code that allows having sqla-migrate
 and alembic migrations at the same time)


 I don't see the point in this at all... I would rather see patches that
 just switch to Alembic and get rid of SQLAlchemy-migrate. Create an initial
 Alembic migration that has the last state of the database schema under
 SQLAlchemy-migrate... and then delete SA-Migrate.


  2) Sync Models and Migrations (fix DB schemas also)
 3) Add from oslo generic test that checks all this stuff
 4) Use BASE.create_all() for Schema creation instead of migrations.


 This is already done in some projects, IIRC... (Glance used to be this
 way, at least)

  But in OpenStack it is not so simple to implement such huge changes, so it
 takes some time=)


 Best regards,
 Boris Pavlovic
 ---
 Mirantis Inc.










 On Tue, Aug 27, 2013 at 12:02 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/26/2013 03:40 PM, Herndon, John Luke (HPCS - Ft. Collins) wrote:

 Jay -

 It looks there is an error in the migration script that causes
 it to abort:

 AttributeError: 'ForeignKeyConstraint' object has no attribute
 'drop'

 My guess is the migration runs on the first test, creates event
 types
 table fine, but exits with the above error, so migration is not
 complete. Thus every subsequent test tries to migrate the db,
 and
 notices that event types already exists.


 I'd corrected that particular mistake and pushed an updated
 migration script.

 Best,
 -jay



 -john

 On 8/26/13 1:15 PM, Jay Pipes jaypi...@gmail.com wrote:

 I just noticed that every single test case for SQL-driver
 storage is
 executing every single migration upgrade before every single
 test case
 run:

 https://github.com/openstack/ceilometer/blob/master/ceilometer/tests/db.py#L46

 https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/imp
 

Re: [openstack-dev] [Ceilometer] Need help with Alembic...

2013-08-27 Thread Alexei Kornienko

Hello,

Please see my answers inline and my previous email regarding this topic:

Hello,

This conversion is actually quite simple. We are currently working to 
support alembic migrations in ceilometer:

https://blueprints.launchpad.net/ceilometer/+spec/convert-to-alembic

For now we agreed that the conversion process will be the following:
1) Create a folder for alembic migrations and configure the engine correctly
2) Run alembic migrations after sqlalchemy-migrate (alembic creates a 
separate stamp table by default)
3) Create new migrations in alembic
4) Sync models with migrations* - 
https://blueprints.launchpad.net/ceilometer/+spec/ceilometer-db-sync-models-with-migrations
5) Remove old migration files after the end of the support period (2 
releases)

* This is needed to remove the need for a base migration, so that a clean 
database can be created from the models directly without migrations.
This allows us to simply drop old migrations without compacting 
them into one big migration (133_folsom.py in nova, for example)


Please share your thoughts about the proposed process.
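
For step 2, a rough sketch of what the combined upgrade entry point could 
look like (repository and config paths are illustrative):

    from alembic import command as alembic_command
    from alembic.config import Config
    from migrate.versioning import api as versioning_api

    def upgrade(db_url):
        # First replay the legacy sqlalchemy-migrate history...
        versioning_api.upgrade(
            db_url, 'ceilometer/storage/sqlalchemy/migrate_repo')
        # ...then let alembic (with its own stamp table) take over.
        cfg = Config('ceilometer/storage/sqlalchemy/alembic/alembic.ini')
        cfg.set_main_option('sqlalchemy.url', db_url)
        alembic_command.upgrade(cfg, 'head')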

Regards,
Alexei Kornienko

On 08/27/2013 07:30 PM, Jay Pipes wrote:

On 08/27/2013 04:32 AM, Boris Pavlovic wrote:

Jay,

I should probably share to you about our work around DB.

Migrations should be run only in production and only for production
backends (e.g. psql and mysql)
In tests we should use Schemas created by Models
(BASE.metadata.create_all())


Agree on both.


We are not able to use this approach at the moment because we don't have
any mechanism to check that MODELS and SCHEMAS are EQUAL.
And actually MODELS and SCHEMAS are DIFFERENT.


Sorry, I don't understand the connection... how does not having a 
codified way of determining the difference between model and schema 
(BTW, this does exist in sqlalchemy-migrate... look at the 
compare_model_to_db method) not allow you to use metadata.create_all() 
in tests or mean that you can't run migrations only in production?
As Boris said we'll use 2 completely different ways to create the DB schema 
in production and test environments. Because of this we won't be able to 
guarantee that the code is correct unless we have a dedicated test that 
ensures that we work with the same DB schema in both envs.



E.g. in Ceilometer we have a BP that syncs models and migrations
https://blueprints.launchpad.net/ceilometer/+spec/ceilometer-db-sync-models-with-migrations 


(in other projects we are doing the same)

And also we are working on (oslo) generic tests that check that
models and migrations are equal:
https://review.openstack.org/#/c/42307/


OK, cool.


So our roadmap (in this case) is:
1) Soft switch to alembic (with code that allows having sqla-migrate
and alembic migrations at the same time)


I don't see the point in this at all... I would rather see patches 
that just switch to Alembic and get rid of SQLAlchemy-migrate. Create 
an initial Alembic migration that has the last state of the database 
schema under SQLAlchemy-migrate... and then delete SA-Migrate.
It's not as simple as it seems. Please take into account that we have 
to keep SA-migrate + migrations during the maintenance period for all 
projects. Because of this we have to go the long way and keep both engines 
running before we'll be able to completely remove SA-migrate.





2) Sync Models and Migrations (fix DB schemas also)
3) Add from oslo generic test that checks all this stuff
4) Use BASE.create_all() for Schema creation instead of migrations.


This is already done in some projects, IIRC... (Glance used to be this 
way, at least)



But in OpenStack it is not so simple to implement such huge changes, so it
takes some time=)


Best regards,
Boris Pavlovic
---
Mirantis Inc.










On Tue, Aug 27, 2013 at 12:02 AM, Jay Pipes jaypi...@gmail.com wrote:

On 08/26/2013 03:40 PM, Herndon, John Luke (HPCS - Ft. Collins) 
wrote:


Jay -

It looks there is an error in the migration script that causes
it to abort:

AttributeError: 'ForeignKeyConstraint' object has no attribute
'drop'

My guess is the migration runs on the first test, creates event
types
table fine, but exits with the above error, so migration is not
complete. Thus every subsequent test tries to migrate the 
db, and

notices that event types already exists.


I'd corrected that particular mistake and pushed an updated
migration script.

Best,
-jay



-john

On 8/26/13 1:15 PM, Jay Pipes jaypi...@gmail.com wrote:

I just noticed that every single test case for SQL-driver
storage is
executing every single migration upgrade before every single
test case
run:

https://github.com/openstack/ceilometer/blob/master/ceilometer/tests/db.py

Re: [openstack-dev] [Ceilometer] Need help with Alembic...

2013-08-27 Thread Boris Pavlovic
Hi,

Instead of spending tons of time on these mailing lists, let's make
code and reviews:

There is already a roadmap for what we should do.

1. Sync DB.Models with the result of migrations
2. Sync migrations for different backends
3. (OSLO) Merge the checker that all is synced in
https://review.openstack.org/#/c/42307/
4. (Ceilometer) add 3 in Ceilometer https://review.openstack.org/#/c/43872/
5. Use in the TestCase class a DB created by models instead of by migrations
6. (Nova then Oslo and other projects) Run tests against all backends (MySQL,
PostgreSQL) https://review.openstack.org/#/c/42142/


Then we won't need to provide sqlite support in migrations and can use Alembic
without any problems.
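
For item 3, the generic check boils down to something like this sketch
(names only; the real test should also compare types, indexes and FKs):

    import sqlalchemy as sa

    def schema_mismatches(models_metadata, migrated_engine):
        # Reflect the schema that the migrations actually produced.
        reflected = sa.MetaData()
        reflected.reflect(bind=migrated_engine)
        mismatches = []
        model_tables = set(models_metadata.tables)
        db_tables = set(reflected.tables)
        for name in model_tables ^ db_tables:
            mismatches.append('table %s only on one side' % name)
        for name in model_tables & db_tables:
            model_cols = set(models_metadata.tables[name].columns.keys())
            db_cols = set(reflected.tables[name].columns.keys())
            for col in model_cols ^ db_cols:
                mismatches.append('%s.%s only on one side' % (name, col))
        return mismatches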

Best regards,
Boris Pavlovic
---
Mirantis Inc.



On Tue, Aug 27, 2013 at 8:54 PM, Boris Pavlovic bo...@pavlovic.me wrote:

 Jay,


  We are not able to use this approach at the moment because we don't have
 any mechanism to check that MODELS and SCHEMAS are EQUAL.
 And actually MODELS and SCHEMAS are DIFFERENT.


 Sorry, I don't understand the connection... how does not having a codified
 way of determining the difference between model and schema (BTW, this does
 exist in sqlalchemy-migrate... look at the compare_model_to_db method) not
 allow you to use metadata.create_all() in tests or mean that you can't run
 migrations only in production?


 There is no method out of the box that will properly compare models with
 migrations (especially in our case of supporting alembic and
 sqlalchemy-migrate together)



 2) Sync Models and Migrations (fix DB schemas also)
 3) Add from oslo generic test that checks all this stuff
 4) Use BASE.create_all() for Schema creation instead of migrations.


 This is already done in some projects, IIRC... (Glance used to be this
 way, at least)


 And it is totally unsafe (because the results of models and migrations are
 different)


 On Tue, Aug 27, 2013 at 8:30 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/27/2013 04:32 AM, Boris Pavlovic wrote:

 Jay,

 I should probably share to you about our work around DB.

 Migrations should be run only in production and only for production
 backends (e.g. psql and mysql)
 In tests we should use Schemas created by Models
 (BASE.metadata.create_all())


 Agree on both.


  We are not able to use this approach at the moment because we don't have
 any mechanism to check that MODELS and SCHEMAS are EQUAL.
 And actually MODELS and SCHEMAS are DIFFERENT.


 Sorry, I don't understand the connection... how does not having a
 codified way of determining the difference between model and schema (BTW,
 this does exist in sqlalchemy-migrate... look at the compare_model_to_db
 method) not allow you to use metadata.create_all() in tests or mean that
 you can't run migrations only in production?


  E.g. in Ceilometer we have a BP that syncs models and migrations
 https://blueprints.launchpad.net/ceilometer/+spec/ceilometer-db-sync-models-with-migrations
 (in other projects we are doing the same)

 And also we are working on (oslo) generic tests that check that
 models and migrations are equal:
 https://review.openstack.org/#/c/42307/


 OK, cool.


  So our roadmap (in this case) is:
 1) Soft switch to alembic (with code that allows having sqla-migrate
 and alembic migrations at the same time)


 I don't see the point in this at all... I would rather see patches that
 just switch to Alembic and get rid of SQLAlchemy-migrate. Create an initial
 Alembic migration that has the last state of the database schema under
 SQLAlchemy-migrate... and then delete SA-Migrate.


  2) Sync Models and Migrations (fix DB schemas also)
 3) Add from oslo generic test that checks all this stuff
 4) Use BASE.create_all() for Schema creation instead of migrations.


 This is already done in some projects, IIRC... (Glance used to be this
 way, at least)

  But in OpenStack it is not so simple to implement such huge changes, so it
 takes some time=)


 Best regards,
 Boris Pavlovic
 ---
 Mirantis Inc.










 On Tue, Aug 27, 2013 at 12:02 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/26/2013 03:40 PM, Herndon, John Luke (HPCS - Ft. Collins)
 wrote:

 Jay -

 It looks there is an error in the migration script that causes
 it to abort:

 AttributeError: 'ForeignKeyConstraint' object has no attribute
 'drop'

 My guess is the migration runs on the first test, creates event
 types
 table fine, but exits with the above error, so migration is not
 complete. Thus every subsequent test tries to migrate the db,
 and
 notices that event types already exists.


 I'd corrected that particular mistake and pushed an updated
 migration script.

 Best,
 -jay



 -john

 On 8/26/13 1:15 PM, Jay Pipes jaypi...@gmail.com
 

Re: [openstack-dev] Neutron + Grenade (or other upgrade testing)

2013-08-27 Thread Maru Newby

On Aug 26, 2013, at 10:23 AM, Dean Troyer dtro...@gmail.com wrote:

 On Mon, Aug 26, 2013 at 10:50 AM, Maru Newby ma...@redhat.com wrote:
 Is anyone working on/planning on adding support for neutron to grenade?  Or 
 is there any other automated upgrade testing going on for neutron?
 
 We deliberately avoided migrations in Grenade (like Nova Volume -> Cinder) as 
 we wanted to focus on upgrades within projects.  Migrations will necessarily 
 be much more complicated, especially Nova Network -> Neutron.  At some point 
 Neutron should be added to Grenade, but only as a release upgrade step for 
 some basic configuration.
 
 That said, I'm sure there would be great appreciation for a recipe to 
 duplicate an existing Nova Network config in Neutron.  We can debate if that 
 belongs in Grenade should it ever exist…

I was referring to upgrades within projects - in this case Quantum to Neutron.  
I'm assuming that belongs in grenade?


m. 



 dt
 
 -- 
 
 Dean Troyer
 dtro...@gmail.com
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Matt Dietz
Good idea!

Only thing I would point out is there are a fair amount of changes, especially 
lately, where code is just moving from one portion of the project to another, 
so there may be cases where someone ends up being authoritative over code they 
don't totally understand.

From: Alessandro Pilotti apilo...@cloudbasesolutions.com
Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: Tuesday, August 27, 2013 10:48 AM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Frustrations with review wait times

On Aug 27, 2013, at 18:40, Joe Gordon joe.gord...@gmail.com wrote:




On Tue, Aug 27, 2013 at 11:04 AM, Daniel P. Berrange berra...@redhat.com wrote:
On Tue, Aug 27, 2013 at 10:55:03AM -0400, Russell Bryant wrote:
 On 08/27/2013 10:43 AM, Daniel P. Berrange wrote:
  I tend to focus the bulk of my review activity on the libvirt driver,
  since that's where most of my knowledge is. I've recently done some
  reviews outside this area to help reduce our backlog, but I'm not
  so comfortable approving stuff in many of the general infrastructure
  shared areas since I've not done much work on those areas of code.
 
  I think Nova is large enough that it is (mostly) beyond the scope of any
  one person to know all areas of Nova code well enough to do quality
  reviews. IOW, as we grow the nova-core team further, it may be worth
  adding more reviewers who have strong knowledge of specific areas and
  can focus their review energy in those areas, even if their review
  count will be low when put in the context of nova as a whole.

 I'm certainly open to that.

 Another way I try to do this unofficially is give certain +1s a whole
 lot of weight when I'm looking at a patch.  I do this regularly when
 looking over patches to hypervisor drivers I'm not very familiar with.

 Another thing we could consider is take this approach more officially.
 Oslo has started doing this for its incubator.  A maintainer of a part
 of the code not on oslo-core has their +1 treated as a +2 on that code.

 http://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS

Yes, just having a list of expert maintainers for each area of Nova
would certainly be helpful in identifying whose comments to place
more weight on, regardless of anything else we might do.

I think we can dynamically generate this based on git log/blame and gerrit 
statistics per file.  For example if someone has authored half the lines in a 
file or reviewed most of the patches that touched that file, they are probably 
very familiar with the file and would be a good person to review any change.

+1 :-)




Daniel
--
|: http://berrange.com  -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev] Rechecks and Reverifies

2013-08-27 Thread Clark Boylan
On Tue, Aug 27, 2013 at 10:15 AM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from John Griffith's message of 2013-08-27 09:42:37 -0700:
 On Tue, Aug 27, 2013 at 10:26 AM, Alex Gaynor alex.gay...@gmail.com wrote:

  I wonder if there's any sort of automation we can apply to this, for
  example giving known rechecks signatures so that if a failure matches
  a signature, the recheck is applied automatically.
 

 I think we kinda already have that, the recheck list and the bug ID
 assigned to it, no?  Automatically scanning said list and doing the recheck
 automatically seems like overkill in my opinion.  At some point human
 thought/interaction is required and I don't think it's too much to ask a
 technical contributor to simply LOOK at the output from the test runs
 against their patches and help out a bit. At the very least if you didn't
 test your patch yourself and waited for Jenkins to tell you it's broken I
 would hope that a submitter would at least be motivated to fix their own
 issue that they introduced.


 It is worth thinking about though, because "ask a technical contributor
 to simply LOOK" is a lot more expensive than "let a script confirm the
 failure and tack it onto the list for rechecks".

 Ubuntu has something like this going for all of their users and it is
 pretty impressive.

 Apport and/or whoopsie see crashes and look at the
 backtraces/coredumps/etc and then (with user permission) submit a
 signature to the backend. It is then analyzed and the result is this:

 http://errors.ubuntu.com/

 Known false positives are shipped along side packages so that they do
 not produce noise, and known points of pain for debugging are eased by
 including logs and other things in bug reports when users are running
 the dev release. This results in a much better metric for what bugs to
 address first. IIRC update-manager also checks in with a URL that is
 informed partially by this data about whether or not to update packages,
 so if there is a high fail rate early on, the server side will basically
 signal update-manager "don't update right now".

 I'd love to see our CI system enhanced to do all of the pattern
 matching to group failures by common patterns, and then when a technical
 contributor looks at these groups they have tons of data points to _fix_
 the problem rather than just spending their precious time identifying it.

 The point of the recheck system, IMHO, isn't to make running rechecks
 easier, it is to find and fix bugs.

This is definitely worth thinking about and we had a session on
dealing with CI logs to do interesting things like update bugs and
handle rechecks automatically at the Havana summit[0]. Since then we
have built a logstash + elasticsearch system[1] that filters many of
our test logs and indexes a subset of what was filtered (typically
anything with a log level greater than DEBUG). Building this system is
step one in being able to detect anomalous logs, update bugs, and
potentially perform automatic rechecks with the appropriate bug.
Progress has been somewhat slow, but the current setup should be
mostly stable. If anyone is interested in poking at these tools to do
interesting automation with them feel free to bug the Infra team.
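
As a taste of what poking at it looks like, a minimal query sketch (the
endpoint and index naming are guesses, not the production deployment's):

    import json
    import urllib2

    def count_hits(es_url, signature):
        # Count indexed log lines matching a failure signature.
        query = {'query': {'query_string': {'query': signature}}, 'size': 0}
        req = urllib2.Request(es_url + '/_search', json.dumps(query),
                              {'Content-Type': 'application/json'})
        return json.load(urllib2.urlopen(req))['hits']['total']

    # count_hits('http://localhost:9200/logstash-2013.08.27',
    #            'message:"Timed out waiting" AND loglevel:ERROR')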

That said, we won't have something super automagic like that before
the end of Havana making John's point an important one. If previous
release feature freezes are any indication we will continue to put
more pressure on the CI system as we near Havana's feature freeze. Any
unneeded rechecks or reverifies can potentially slow the whole process
down for everyone. We should be running as many tests as possible
locally before pushing to Gerrit (this is as simple as running `tox`)
and making a best effort to identify the bugs that cause failures when
performing rechecks or reverifies.

[0] https://etherpad.openstack.org/havana-ci-logging
[1] http://ci.openstack.org/logstash.html

Thank you,
Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] live-snapshot/cloning of virtual machines

2013-08-27 Thread Adin Scannell
On Tue, Aug 27, 2013 at 12:13 PM, Russell Bryant rbry...@redhat.com wrote:

 IMO, to be the healthiest project we can be, we must focus on what code
  is actually a part of Nova.  If you'd like to submit your changes for
 inclusion into Nova, then we can talk.


That's ultimately what we're trying to accomplish here.

What you are seeing here is a part of the pain of maintaining a fork.  I
 am not OK with shifting part of that burden on to the upstream project
 when it doesn't help the upstream project *at all*.


We're not maintaining a fork, nor trying to shift burden. That's unfair. We
have a long term interest in the success of OpenStack, like everyone here.
We're not asking upstream to do anything. This isn't adversarial.

There are plenty of things that don't help nova directly, but certainly
enable a vibrant ecosystem. For example, having extensible APIs and
pluggable backends is critical to the philosophy and success of OpenStack
as a whole.

We absolutely understand that having solid, in-tree implementations is also
important. That's why as a part of the blueprint, there was a pan-community
effort made to create a libvirt implementation. Although that particular
effort has hit some speed bumps, merging the API extension would still
benefit members of the community by simplifying deployments (i.e. for us)
and Havana backports (i.e. needing to provide only an updated compute
driver w/ config change). Other hypervisor driver maintainers have also
expressed the desire to see it merged to speed and simplify development of
in-tree implementations moving forward. It's not just going to get dropped
on the floor.

In the end, I think that the need to have the reference implementation land
at the same moment should be balanced against community interests. Yes, we
really want to see the API in Havana, but it seems we're not alone. I would
understand holding off if there were substantial downsides to merging, or
if multiple hypervisor vendors had expressed a desire to see it not merged.
But that doesn't seem to be the case. Six months is a long time, and
ultimately the evolution of an open source ecosystem is just as important
as the code itself.

Thanks,
-Adin
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Russell Bryant
On 08/27/2013 01:30 PM, Matt Dietz wrote:
 Good idea!
 
 Only thing I would point out is there are a fair amount of changes,
 especially lately, where code is just moving from one portion of the
 project to another, so there may be cases where someone ends up being
 authoritative over code they don't totally understand. 

Right.  While some automation can provide some insight, it certainly can
not make any decisions in this area, IMO.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] New Nova hypervisor: Docker

2013-08-27 Thread Sam Alba
Hi all,

We've been working hard during the last couple of weeks with some
people. Brian Waldon helped a lot designing the Glance integration and
driver testing. Dean Troyer helped a lot on bringing Docker support in
Devstack[1]. On top of that, we got several feedback on the Nova code
review which definitely helped to improve the code.

The blueprint[2] explains what Docker brings to Nova and how to use it.

Before getting it merged into Nova core, the code lives on github[3].

Our goal right now is to have everything ready to get merged for the
Havana release. We need help for getting the code reviewed[4] on time.


[1] https://review.openstack.org/#/c/40759/
[2] 
https://github.com/dotcloud/openstack-docker/blob/master/docs/nova_blueprint.md
[3] https://github.com/dotcloud/openstack-docker
[4] https://review.openstack.org/#/c/32960/

-- 
@sam_alba

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] puppet heat config file

2013-08-27 Thread Steven Hardy
On Mon, Aug 26, 2013 at 02:50:16PM +1000, Ian Wienand wrote:
 Hi,
 
 The current heat puppet modules don't work to create the heat config
 file [1]
 
 My first attempt [2] created separate config files for each heat
 component.  It was pointed out that configuration had been
 consolidated into a single file [3].  My second attempt [4] did this,
 but consensus seems to be lacking that this will work.

So.. This change appears to have been poorly communicated, both within the
team and the wider community, so my apologies for that.

I would welcome feedback from the contributor of this change (and those who
reviewed/approved it who probably understand this better than I do),
however my understanding is the following:

- The old per-service config files should still work for backwards
  compatibility/transition

- The new consolidated heat.conf file should work fine[1], and is recommended

- If both old and new versions exist, the old ones seem to take precedence,
  but (despite both versions existing in heat/master atm) this is not
  recommended, and probably the root-cause of your issues?

 As Mathieu alludes to, it does seem that there is a critical problem
 with the single config file in that it is not possible to specify
 separate bind_port's to individual daemons [5].  The current TOT
 config files [6] don't seem to provide a clear example to work from?

[1] except for this issue:

Yes, it appears this issue needs fixing, using the consolidated config
file, there's no way to specify per-service non-default options (but heat
should still work fine using the default bind_host/bind_port/log_file)

I've raised a bug to track fixing this:

https://launchpad.net/bugs/1217463
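
One possible shape for the fix (a sketch only, not the agreed design) is to
keep the single heat.conf but register each API's options under its own
group, so per-service bind_port overrides become possible:

    from oslo.config import cfg

    CONF = cfg.CONF

    def register_api_opts(group, default_port):
        opts = [
            cfg.StrOpt('bind_host', default='0.0.0.0',
                       help='Address to bind the server to'),
            cfg.IntOpt('bind_port', default=default_port,
                       help='Port to bind the server to'),
        ]
        CONF.register_opts(opts, group=group)

    register_api_opts('heat_api', 8004)
    register_api_opts('heat_api_cfn', 8000)
    # heat.conf would then carry sections like:
    #   [heat_api]
    #   bind_port = 8004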

 What output should the puppet modules be producing?  Would it make
 sense for them to create the multiple-configuration-file scenario for
 now, and migrate to the single-configuration-file at some future
 point; since presumably heat will remain backwards compatible for some
 time?

I think we should fix the bug above and they should create the new format,
but if sticking with the multiple-configuration-file scenario allows you to
progress in the short term, then that seems like a reasonable workaround.

It seems we have the following tasks to complete from a Heat perspective:
- Fix bug #1217463, in a backwards compatible way
- Update all the docs to reference the new config file
- Discuss packaging impact with downstream packagers (particularly we'll
  need to consider how upgrades should work..)
- Remove the old config files from the heat master tree (there is a review
  for this but it's currently abandoned:
  https://review.openstack.org/#/c/40257/)

I hope this clarifies things a bit, please do let us know/raise bugs if you
find things which are not working as expected while we work through this
transition.

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev] Rechecks and Reverifies

2013-08-27 Thread John Griffith
On Tue, Aug 27, 2013 at 11:47 AM, Clark Boylan clark.boy...@gmail.comwrote:

 On Tue, Aug 27, 2013 at 10:15 AM, Clint Byrum cl...@fewbar.com wrote:
  Excerpts from John Griffith's message of 2013-08-27 09:42:37 -0700:
  On Tue, Aug 27, 2013 at 10:26 AM, Alex Gaynor alex.gay...@gmail.com
 wrote:
 
   I wonder if there's any sort of automation we can apply to this, for
   example giving known rechecks signatures so that if a failure matches
   a signature, the recheck is applied automatically.
  
 
  I think we kinda already have that, the recheck list and the bug ID
  assigned to it, no?  Automatically scanning said list and doing the
 recheck
  automatically seems like overkill in my opinion.  At some point human
  thought/interaction is required and I don't think it's too much to ask a
  technical contributor to simply LOOK at the output from the test runs
  against their patches and help out a bit. At the very least if you
 didn't
  test your patch yourself and waited for Jenkins to tell you it's broken
 I
  would hope that a submitter would at least be motivated to fix their own
  issue that they introduced.
 
 
  It is worth thinking about though, because "ask a technical contributor
  to simply LOOK" is a lot more expensive than "let a script confirm the
  failure and tack it onto the list for rechecks".
 
  Ubuntu has something like this going for all of their users and it is
  pretty impressive.
 
  Apport and/or whoopsie see crashes and look at the
  backtraces/coredumps/etc and then (with user permission) submit a
  signature to the backend. It is then analyzed and the result is this:
 
  http://errors.ubuntu.com/
 
  Known false positives are shipped along side packages so that they do
  not produce noise, and known points of pain for debugging are eased by
  including logs and other things in bug reports when users are running
  the dev release. This results in a much better metric for what bugs to
  address first. IIRC update-manager also checks in with a URL that is
  informed partially by this data about whether or not to update packages,
  so if there is a high fail rate early on, the server side will basically
  signal update-manager "don't update right now".
 
  I'd love to see our CI system enhanced to do all of the pattern
  matching to group failures by common patterns, and then when a technical
  contributor looks at these groups they have tons of data points to _fix_
  the problem rather than just spending their precious time identifying it.
 
  The point of the recheck system, IMHO, isn't to make running rechecks
  easier, it is to find and fix bugs.
 
 This is definitely worth thinking about and we had a session on
 dealing with CI logs to do interesting things like update bugs and
 handle rechecks automatically at the Havana summit[0]. Since then we
 have built a logstash + elasticsearch system[1] that filters many of
 our test logs and indexes a subset of what was filtered (typically
 anything with a log level greater than DEBUG). Building this system is
 step one in being able to detect anomalous logs, update bugs, and
 potentially perform automatic rechecks with the appropriate bug.
 Progress has been somewhat slow, but the current setup should be
 mostly stable. If anyone is interested in poking at these tools to do
 interesting automation with them feel free to bug the Infra team.

 That said, we won't have something super automagic like that before
 the end of Havana making John's point an important one. If previous
 release feature freezes are any indication we will continue to put
 more pressure on the CI system as we near Havana's feature freeze. Any
 unneeded rechecks or reverifies can potentially slow the whole process
 down for everyone. We should be running as many tests as possible
 locally before pushing to Gerrit (this is as simple as running `tox`)
 and making a best effort to identify the bugs that cause failures when
 performing rechecks or reverifies.

 [0] https://etherpad.openstack.org/havana-ci-logging
 [1] http://ci.openstack.org/logstash.html

 Thank you,
 Clark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


The automation ideas are great, no argument there; I didn't mean to imply they
weren't or to discount them.  I just don't want the intent of the message to
get lost in all the things we could do going forward.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread John Griffith
On Tue, Aug 27, 2013 at 12:14 PM, Russell Bryant rbry...@redhat.com wrote:

 On 08/27/2013 01:30 PM, Matt Dietz wrote:
  Good idea!
 
  Only thing I would point out is there are a fair amount of changes,
  especially lately, where code is just moving from one portion of the
  project to another, so there may be cases where someone ends up being
  authoritative over code they don't totally understand.

 Right.  While some automation can provide some insight, it certainly can
 not make any decisions in this area, IMO.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


All great ideas, but really isn't the core of the issue that the rate of new
patches > rate of available reviewers?

Seems to me that with the growth of the projects and more people
contributing, the number of people actively involved in reviews is not
keeping pace.  Then throw in all of the new projects, which take at least
a portion of someone who used to do all Nova all the time; now they're
spreading that workload across 3 or 4 projects, so it seems the only
solution is more reviewers.

Prioritizing and assigning maintainers is a great idea, and I think we've
all kinda fallen into that unofficially anyway, but there is a need for more
quality reviewers, and to be quite honest, with all of the new projects
coming into play I think that problem is going to continue into the next
release as well.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Matt Riedemann
Going back to the original discussion, something I've noticed recently is 
the large patches coming through tied to blueprints.  In at least a few 
cases I've made comments in patches asking that they be broken up so they 
are more easily digested.  The wiki also covers that area:

https://wiki.openstack.org/wiki/GitCommitMessages#Things_to_avoid_when_creating_commits
 


Discussing this point in IRC today, I raised one of my primary issues with 
reviewing large patches (typically for a blueprint): how much harder they 
make it to verify the code is adequately covered with unit tests.

One thought is it'd be cool if we could get code coverage reports tied to 
the patches so we know if a given patch is severely lacking in test 
coverage (when it's not obvious).

This was pointed out:

http://logs.openstack.org/5c/5cc63c91d045f7a37136107053f71db1d8edf425/post/nova-coverage/e91683d/cover/
 


Which is nice, and it gives the commit, but when I asked around in 
#openstack-infra about it, apparently that's only run post-queue on merged 
commits, so it doesn't help you with a review.  The infra guys said they'd 
toyed with doing coverage reports in the check queue but it took too long 
(instrumenting the code for coverage added too much time to the check).

However, with the recent push for running parallel tests with testr, it 
sounds like it might be worth looking at check queue coverage reports 
again, which might be a good tool for improving review efficiency.  This is 
probably something to pursue again after h3.
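
For anyone who wants a rough local approximation in the meantime, here is a 
minimal sketch using the `coverage` package (the source path and test 
directory are assumptions about a typical nova checkout, not the actual 
infra job definition):

    import unittest

    import coverage

    # Measure only the nova package; this roughly mirrors the per-file
    # HTML layout of the post-queue report, but runs locally against the
    # patch under review.
    cov = coverage.coverage(source=['nova'])
    cov.start()

    # Run whatever slice of the suite is relevant to the patch.
    suite = unittest.defaultTestLoader.discover('nova/tests')
    unittest.TextTestRunner().run(suite)

    cov.stop()
    cov.save()
    cov.html_report(directory='cover')  # then browse cover/index.html

Comparing that report before and after applying a patch gives a crude 
per-patch coverage delta until something lands in the check queue.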



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Russell Bryant rbry...@redhat.com
To: openstack-dev@lists.openstack.org, 
Date:   08/27/2013 01:19 PM
Subject:Re: [openstack-dev] [Nova] Frustrations with review wait 
times



On 08/27/2013 01:30 PM, Matt Dietz wrote:
 Good idea!
 
 Only thing I would point out is there are a fair amount of changes,
 especially lately, where code is just moving from one portion of the
 project to another, so there may be cases where someone ends up being
 authoritative over code they don't totally understand. 

Right.  While some automation can provide some insight, it certainly can
not make any decisions in this area, IMO.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Meeting agenda for Wed August 28th at 2000 UTC

2013-08-27 Thread Steven Hardy
The Heat team holds a weekly meeting in #openstack-meeting, see

https://wiki.openstack.org/wiki/Meetings/HeatAgenda for more details

The next meeting is on Wed August 28th at 2000 UTC

Current topics for discussion:
* Review last week's actions
* Reminder re Havana_Release_Schedule FeatureProposalFreeze
* h3 blueprint status
* Single config file confusion/issues and way forward
* moving rackspace cloud server specific resources out of heat tree
* Open discussion

If anyone has any other topic to discuss, please add to the wiki.

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Heat mission statement

2013-08-27 Thread Steven Hardy
We had some recent discussions regarding the Heat mission statement and
came up with:

To explicitly model the relationships between OpenStack resources of all
kinds; and to harness those models, expressed in forms accessible to both
humans and machines, to manage infrastructure resources throughout the
lifecycle of applications.

The ideas, iterations and some discussion is captured in this etherpad:

https://etherpad.openstack.org/heat-mission

If anyone has any remaining comments, please speak now, but I think most of
those involved in the discussion thus-far have reached the point of wishing
to declare it final ;)

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V Meeting minutes

2013-08-27 Thread Peter Pouliot

Thank you.  I appreciate you handling it.

P

Sent from my Verizon Wireless 4G LTE Smartphone



 Original message 
From: Alessandro Pilotti apilo...@cloudbasesolutions.com
Date: 08/27/2013 12:40 PM (GMT-05:00)
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Hyper-V Meeting minutes


Today's Hyper-V meeting minutes:

Minutes: 
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-08-27-16.06.html
Minutes (text):  
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-08-27-16.06.txt
Log: 
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-08-27-16.06.log.html


Thanks,

Alessandro
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-neutronclient] Need to get some eyes on a change if possible

2013-08-27 Thread Justin Hammond
I have a really simple one-line change that looked like it was going to get 
merged, but then infra had their issues. Since then I fear that it'll just 
linger. I appreciate anyone who can take the time to check out this simple 
change.

It is at:

https://review.openstack.org/#/c/42242/

Thank you for your time,

Justin Hammond
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Meeting Tuesday August 27th at 19:00 UTC

2013-08-27 Thread Elizabeth Krumbach Joseph
On Mon, Aug 26, 2013 at 11:56 AM, Elizabeth Krumbach Joseph
l...@princessleia.com wrote:
 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting tomorrow, Tuesday August 27th, at 19:00 UTC in
 #openstack-meeting

Logs and minutes:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-08-27-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-08-27-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-08-27-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] multiple-scheduler-drivers blueprint

2013-08-27 Thread Russell Bryant
Greetings,

One of the important things to strive for in our community is consensus.
 When there's not consensus, we should take a step back and see if we
need to change directions.

There has been a lot of iterating on this feature, and I'm afraid we
still don't have consensus around the design.  Phil Day has been posting
some really good feedback on the review.  I asked Joe Gordon to take a
look and provide another opinion.  He agreed with Phil that we really
need to have scheduler policies be a first-class API citizen.

So, that pushes this feature out to Icehouse, as it doesn't seem
possible to get this done in the required timeframe for Havana.

If you'd really like to push to get this into Havana, please make your
case.  :-)

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][tempest][rdo] looking for opinions on change 43298

2013-08-27 Thread Matt Riedemann
This change:

https://review.openstack.org/#/c/43298/ 

Is attempting to fix a bug where a tempest test fails when nova-manage 
--version is different from nova-manage version when using a RHEL 6 
installation rather than devstack.

Pavel points out an RDO bug that was filed back in April to address the 
issue: https://bugzilla.redhat.com/show_bug.cgi?id=952811 

That RDO bug hasn't gotten any attention though (I wasn't aware of it when 
I reported the launchpad bug).

So my question is, is this worth changing in Tempest, or should we expect 
that nova-manage --version will always equal nova-manage version?  I'm 
not even really sure how they are getting their values; one appears to be 
coming from the python distribution and one from the rpm (it looks like 
argparse must do something there).
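
For illustration, here is a minimal sketch of how the two outputs can 
diverge (the names below are hypothetical, not nova's actual code; the point 
is just that one string can come from the Python package metadata while the 
other carries a downstream package qualifier):

    # Hypothetical illustration of the two version sources disagreeing.
    PYTHON_PACKAGE_VERSION = '2013.1.3'   # e.g. from setup/egg metadata
    VENDOR_SUFFIX = '1.el6'               # e.g. injected at RPM build time

    def dash_dash_version():
        # what an argparse-style --version action might print
        return PYTHON_PACKAGE_VERSION

    def version_subcommand():
        # what a package-aware "version" subcommand might print
        return '%s-%s' % (PYTHON_PACKAGE_VERSION, VENDOR_SUFFIX)

    assert dash_dash_version() != version_subcommand()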



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Robert Collins
On 28 August 2013 06:32, John Griffith john.griff...@solidfire.com wrote:

 All great ideas, but really isn't the core of the issue that the rate of new
 patches > rate of available reviewers?

 Seems to me that with the growth of the projects and more people
 contributing, the number of people actively involved in reviews is not
 keeping pace.  Then throw in all of the new projects, which take at least
 a portion of someone who used to do all Nova all the time; now they're
 spreading that workload across 3 or 4 projects, so it seems the only
 solution is more reviewers.

 Prioritizing and assigning maintainers is a great idea, and I think we've all
 kinda fallen into that unofficially anyway, but there is a need for more
 quality reviewers, and to be quite honest, with all of the new projects coming
 into play I think that problem is going to continue into the next release
 as well.

I suspect so too. In the tripleo projects, I personally try daily to
ensure that any proposal in the tripleo projects gets a review (that
hasn't already been -1'd by another reviewer or failed validation).
Clearly this doesn't scale on an individual basis as the rate picks up
- but it's more important for scaling both the review team and
contributor sizes that feedback on proposals is received promptly and
early.

So I'd like to throw two ideas into the mix.

Firstly, consider having a rota - ideally 24x5 but that will need some
more geographical coverage I suspect for many projects - of folk who
spend a dedicated time period only reviewing. Reviewing is hard, and
you need to take breaks and let the brain decompress - so the rota
might be broken down into 2-hour periods or something. Launchpad [the
project, not the site] did this with considerable success : every
qualified reviewer committed to a time slot and didn't *try* to code -
they focused on reviews. A variation on this was to focus on doing
reviews when the reviewee was around - they'd get pinged on IRC and
then look at the review and be able to ask questions in realtime.
Really busy projects might need more concurrent reviewers, and we'd
want to ensure +2'd stuff gets a second +2-capable review quickly, so
it doesn't go stale unnecessarily.

Separately, and this is perhaps contentious; maybe folk that are
reviewers should deliberately not take on large bodies of work and
instead focus on being 50% or even 75% time doing reviews? The most
readily available source of skilled reviewers familiar with our code
bases is the existing reviewers... where companies are sponsoring us
to work on OpenStack, this should be a straightforward discussion
- we all want OpenStack to be wildly successful, and this is a very
important part of scaling projects, though it is at the cost of
contributor time. However, bottlenecks are where work builds up, and
we don't have a 'write some code' bottleneck : we have 'design the
system' and 'review changes' bottlenecks.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev] Rechecks and Reverifies

2013-08-27 Thread Alex Gaynor
Indeed, sorry for the distraction!

Alex


On Tue, Aug 27, 2013 at 11:23 AM, John Griffith john.griff...@solidfire.com
 wrote:




 On Tue, Aug 27, 2013 at 11:47 AM, Clark Boylan clark.boy...@gmail.com wrote:

 On Tue, Aug 27, 2013 at 10:15 AM, Clint Byrum cl...@fewbar.com wrote:
  Excerpts from John Griffith's message of 2013-08-27 09:42:37 -0700:
  On Tue, Aug 27, 2013 at 10:26 AM, Alex Gaynor alex.gay...@gmail.com
 wrote:
 
   I wonder if there's any sort of automation we can apply to this, for
   example having known rechecks have signatures, and if a failure matches
   the signature it auto-applies the recheck.
  
 
  I think we kinda already have that, the recheck list and the bug ID
  assigned to it, no?  Automatically scanning said list and doing the
  recheck automatically seems like overkill in my opinion.  At some point
  human thought/interaction is required, and I don't think it's too much to
  ask a technical contributor to simply LOOK at the output from the test
  runs against their patches and help out a bit. At the very least, if you
  didn't test your patch yourself and waited for Jenkins to tell you it's
  broken, I would hope that the submitter would at least be motivated to
  fix the issue they introduced.
 
 
  It is worth thinking about though, because asking a technical contributor
  to simply LOOK is a lot more expensive than letting a script confirm the
  failure and tack it onto the list for rechecks.
 
  Ubuntu has something like this going for all of their users and it is
  pretty impressive.
 
  Apport and/or whoopsie see crashes and look at the
  backtraces/coredumps/etc and then (with user permission) submit a
  signature to the backend. It is then analyzed and the result is this:
 
  http://errors.ubuntu.com/
 
  Known false positives are shipped alongside packages so that they do
  not produce noise, and known points of pain for debugging are eased by
  including logs and other things in bug reports when users are running
  the dev release. This results in a much better metric for what bugs to
  address first. IIRC update-manager also checks in with a URL that is
  informed partially by this data about whether or not to update packages,
  so if there is a high fail rate early on, the server side will basically
  signal update-manager not to update right now.
 
  I'd love to see our CI system enhanced to do all of the pattern
  matching to group failures by common patterns, and then when a technical
  contributor looks at these groups they have tons of data points to _fix_
  the problem rather than just spending their precious time identifying
 it.
 
  The point of the recheck system, IMHO, isn't to make running rechecks
  easier, it is to find and fix bugs.
 
 This is definitely worth thinking about and we had a session on
 dealing with CI logs to do interesting things like update bugs and
 handle rechecks automatically at the Havana summit[0]. Since then we
 have built a logstash + elasticsearch system[1] that filters many of
 our test logs and indexes a subset of what was filtered (typically
 anything with a log level greater than DEBUG). Building this system is
 step one in being able to detect anomalous logs, update bugs, and
 potentially perform automatic rechecks with the appropriate bug.
 Progress has been somewhat slow, but the current setup should be
 mostly stable. If anyone is interested in poking at these tools to do
 interesting automation with them feel free to bug the Infra team.

 That said, we won't have something super automagic like that before
 the end of Havana making John's point an important one. If previous
 release feature freezes are any indication we will continue to put
 more pressure on the CI system as we near Havana's feature freeze. Any
 unneeded rechecks or reverifies can potentially slow the whole process
 down for everyone. We should be running as many tests as possible
 locally before pushing to Gerrit (this is as simple as running `tox`)
 and making a best effort to identify the bugs that cause failures when
 performing rechecks or reverifies.

 [0] https://etherpad.openstack.org/havana-ci-logging
 [1] http://ci.openstack.org/logstash.html

 Thank you,
 Clark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 The automation ideas are great, no argument there; I didn't mean to imply
 they weren't or to discount them.  I just don't want the intent of the
 message to get lost in all the things we could do going forward.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
I disapprove of what you say, but I will defend to the death your right to
say it. -- Evelyn Beatrice Hall (summarizing Voltaire)
The people's good is the highest law. -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 

[openstack-dev] [savanna] migration to pbr completed

2013-08-27 Thread Sergey Lukjanov
Hi folks,

migration of all Savanna sub projects to pbr has been completed.

Please, inform us and/or create bugs for all packaging-related issues.

Thanks.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Heat mission statement

2013-08-27 Thread Robert Collins
On 28 August 2013 06:54, Steven Hardy sha...@redhat.com wrote:
 We had some recent discussions regarding the Heat mission statement and
 came up with:

 To explicitly model the relationships between OpenStack resources of all
 kinds; and to harness those models, expressed in forms accessible to both
 humans and machines, to manage infrastructure resources throughout the
 lifecycle of applications.

Bingo!

 The ideas, iterations and some discussion is captured in this etherpad:

 https://etherpad.openstack.org/heat-mission

 If anyone has any remaining comments, please speak now, but I think most of
 those involved in the discussion thus-far have reached the point of wishing
 to declare it final ;)

I think there is some confusion about implementation vs intent here
:). Or at least I hope so. I wouldn't expect Nova's mission statement
to talk about 'modelling virtual machines' : modelling is internal
jargon, not a mission!

What you want, IMO, is for a moderately technical sysadmin to read the
mission statement and go 'hell yeahs, I want to use Heat'.

Create a human and machine accessible service for managing the entire
lifecycle of infrastructure and applications within OpenStack clouds.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TRIPLEO] Derekh for tripleo core

2013-08-27 Thread Robert Collins
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

- Derek is reviewing fairly regularly and has got a sense of the
culture etc now, I think.

So - calling for votes for Derek to become a TripleO core reviewer!

I think we're nearly at the point where we can switch to the 'two
+2's' model - what do you think?

Also tsk! to those cores who aren't reviewing as regularly :)

Cheers,
Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Modal form without redirect

2013-08-27 Thread Toshiyuki Hayashi
Hi all,

I’m working on customizing the modal form for the topology view, and I would
like to prevent it from redirecting after submitting.
https://github.com/openstack/horizon/blob/master/horizon/static/horizon/js/horizon.modals.js#L110
According to this code, if there is no redirect_header, the modal
form won't redirect. But I couldn't figure out how to remove the redirect
information from the HTTP header.
For example, suppose I want to remove the redirect from LaunchInstance:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py#L508
How should I do that?
I tried success_url = None, but it doesn't work.
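
The closest I've gotten is something like the following (an untested sketch; 
it assumes the redirect header is only attached when the underlying view 
itself returns an HTTP redirect):

    from django import http

    from horizon import workflows

    class NoRedirectWorkflowView(workflows.WorkflowView):
        def post(self, request, *args, **kwargs):
            response = super(NoRedirectWorkflowView, self).post(
                request, *args, **kwargs)
            if request.is_ajax() and response.status_code in (301, 302):
                # Swallow the redirect so the modal JS never sees a
                # redirect header and leaves the page alone.
                return http.HttpResponse(status=200)
            return response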

If you have any idea, that would be great.

Regards,
Toshiyuki

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] Derekh for tripleo core

2013-08-27 Thread Chris K
+1 here


On Tue, Aug 27, 2013 at 2:25 PM, Robert Collins
robe...@robertcollins.net wrote:

 http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
 http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

 - Derek is reviewing fairly regularly and has got a sense of the
 culture etc now, I think.

 So - calling for votes for Derek to become a TripleO core reviewer!

 I think we're nearly at the point where we can switch to the 'two
 +2's' model - what do you think?

 Also tsk! to those cores who aren't reviewing as regularly :)

 Cheers,
 Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] Derekh for tripleo core

2013-08-27 Thread Chris Jones
Hi

On 27 August 2013 22:25, Robert Collins robe...@robertcollins.net wrote:

 So - calling for votes for Derek to become a TripleO core reviewer


+1


 I think we're nearly at the point where we can switch to the 'two
 +2's' model - what do you think?


Selfishly I'd quite like to see a little more EU core reviewer presence,
but in reality there's not many hours where we'll be potentially unable to
land things. That aside, I like the idea.

-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Heat mission statement

2013-08-27 Thread Zane Bitter

On 27/08/13 23:13, Robert Collins wrote:

I think there is some confusion about implementation vs intent here
:). Or at least I hope so. I wouldn't expect Nova's mission statement
to talk about 'modelling virtual machines' : modelling is internal
jargon, not a mission!


So, I don't really agree with either of those points. Nova, at its core, 
deals with virtual machines, while Heat deals with abstract 
representations of resources. Talking about models in the Heat mission 
statement seems about as out of place as talking about VMs would be in 
the Nova one. And model is not a term we use anywhere internally. It's 
not intended to be internal jargon (which would be one level of 
abstraction below Heat-the-service), it's intended to be at one level of 
abstraction _above_ Heat-the-service.


That said...


Create a human and machine accessible service for managing the entire
lifecycle of infrastructure and applications within OpenStack clouds.


Sounds good enough.

- ZB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About multihost patch review

2013-08-27 Thread Tom Fifield
On 27/08/13 15:23, Maru Newby wrote:
 
 On Aug 26, 2013, at 9:39 PM, Yongsheng Gong gong...@unitedstack.com wrote:
 
 First 'be like nova-network' is a merit for some deployments.
 
 I'm afraid 'merit' is a bit vague for me.  Would you please elaborate?

One area of 'merit' here is migration from nova-network to
neutron. If there's something exactly analogous to something that
already exists, it's easier to move across.

 
 Second, allowing the admin to decide which network will be multihosted at 
 runtime will enable neutron to continue using the current network node 
 (dhcp agent) mode at the same time.
 
 If multi-host and non- multi-host networks are permitted to co-exist (because 
 configuration is per-network), won't compute nodes have to be allowed to be 
 heterogenous (some multi-host capable, some not)?  And won't Nova then need 
 to schedule VMs configured with multi-host networks on compatible nodes?  I 
 don't recall mention of this issue in the blueprint or design doc, and would 
 appreciate pointers to where this decision was documented.
 
 

 If we force the network to be multihosted when the configuration item 
 enable_multihost is true, and the administrator then wants to switch back 
 to the normal neutron way, he/she must modify the configuration item and 
 then restart.
 
 I'm afraid I don't follow - are you suggesting that configuring multi-host 
 globally will be harder on admins than the change under review?  Switching to 
 non multi-host under the current proposal involves reconfiguring and 
 restarting an awful lot of agents, to say nothing of the db changes.
 
 
 m. 
 
 



 On Tue, Aug 27, 2013 at 9:14 AM, Maru Newby ma...@redhat.com wrote:

 On Aug 26, 2013, at 4:06 PM, Edgar Magana emag...@plumgrid.com wrote:

 Hi Developers,

 Let me explain my point of view on this topic and please share your 
 thoughts in order to merge this new feature ASAP.

 My understanding is that multi-host is nova-network HA  and we are 
 implementing this bp 
 https://blueprints.launchpad.net/neutron/+spec/quantum-multihost for the 
 same reason.
 So, If in neutron configuration admin enables multi-host:
 etc/dhcp_agent.ini

 # Support multi host networks
 # enable_multihost = False

  Why do tenants need to be aware of this? They should just create networks 
 in the way they normally do and not by adding the multihost extension.

 I was pretty confused until I looked at the nova-network HA doc [1].  The 
 proposed design would seem to emulate nova-network's multi-host HA option, 
 where it was necessary to both run nova-network on every compute node and 
 create a network explicitly as multi-host.  I'm not sure why nova-network 
 was implemented in this way, since it would appear that multi-host is 
 basically all-or-nothing.  Once nova-network services are running on every 
 compute node, what does it mean to create a network that is not multi-host?

 So, to Edgar's question - is there a reason other than 'be like 
 nova-network' for requiring neutron multi-host to be configured per-network?


 m.

 1: 
 http://docs.openstack.org/trunk/openstack-compute/admin/content/existing-ha-networking-options.html


 I could be totally wrong and crazy, so please provide some feedback.

 Thanks,

 Edgar


 From: Yongsheng Gong gong...@unitedstack.com
 Date: Monday, August 26, 2013 2:58 PM
 To: Kyle Mestery (kmestery) kmest...@cisco.com, Aaron Rosen 
 aro...@nicira.com, Armando Migliaccio amigliac...@vmware.com, Akihiro 
 MOTOKI amot...@gmail.com, Edgar Magana emag...@plumgrid.com, Maru Newby 
 ma...@redhat.com, Nachi Ueno na...@nttmcl.com, Salvatore Orlando 
 sorla...@nicira.com, Sumit Naiksatam sumit.naiksa...@bigswitch.com, 
 Mark McClain mark.mccl...@dreamhost.com, Gary Kotton 
 gkot...@vmware.com, Robert Kukura rkuk...@redhat.com
 Cc: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: About multihost patch review

 Hi,
 Edgar Magana has commented to say:
 'This is the part that for me is confusing and I will need some 
 clarification from the community. Do we expect to have the multi-host 
 feature as an extension or something that will natural work as long as the 
 deployment include more than one Network Node. In my opinion, Neutron 
 deployments with more than one Network Node by default should call DHCP 
 agents in all those nodes without the need to use an extension. If the 
 community has decided to do this by extensions, then I am fine' at
 https://review.openstack.org/#/c/37919/11/neutron/extensions/multihostnetwork.py

 I have commented back, what is your opinion about it?

 Regards,
 Yong Sheng Gong


 On Fri, Aug 16, 2013 at 9:28 PM, Kyle Mestery (kmestery) 
 kmest...@cisco.com wrote:
 Hi Yong:

 I'll review this and try it out today.

 Thanks,
 Kyle

 On Aug 15, 2013, at 10:01 PM, Yongsheng Gong gong...@unitedstack.com 
 wrote:

 The multihost patch has been there for a long, long time. Can someone help 
 review?
 https://review.openstack.org/#/c/37919/




 
 
 

Re: [openstack-dev] [heat] puppet heat config file

2013-08-27 Thread Angus Salkeld

On 27/08/13 19:20 +0100, Steven Hardy wrote:

On Mon, Aug 26, 2013 at 02:50:16PM +1000, Ian Wienand wrote:

Hi,

The current heat puppet modules don't work to create the heat config
file [1]

My first attempt [2] created separate config files for each heat
component.  It was pointed out that configuration had been
consolidated into a single file [3].  My second attempt [4] did this,
but consensus seems to be lacking that this will work.


So.. This change appears to have been poorly communicated, both within the
team and the wider community, so my apologies for that.

I would welcome feedback from the contributor of this change (and those who
reviewed/approved it who probably understand this better than I do),
however my understanding is the following:

- The old per-service config files should still work for backwards
 compatibility/transition

Yes they do (and will).



- The new consolidated heat.conf file should work fine[1], and is recommended

- If both old and new versions exist, the old ones seem to take precedence,
 but (despite both versions existing in heat/master atm) this is not
 recommended, and probably the root-cause of your issues?


Not really, it's that we need the wsgi options in a group.


As Mathieu alludes to, it does seem that there is a critical problem
with the single config file in that it is not possible to specify
separate bind_port's to individual daemons [5].  The current TOT
config files [6] don't seem to provide a clear example to work from?


[1] except for this issue:

Yes, it appears this issue needs fixing, using the consolidated config
file, there's no way to specify per-service non-default options (but heat
should still work fine using the default bind_host/bind_port/log_file)


I have a patch up that tries to deal with this:
https://review.openstack.org/#/c/43697/
(I might need to add config_file there too)
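
For reference, the direction there is roughly the following (a sketch with 
hypothetical option names and defaults; see the review for the real change):

    # Sketch: register the per-service wsgi options in their own groups
    # with oslo.config, so one consolidated heat.conf can carry a
    # [heat_api] (or [heat_api_cfn], etc.) section per service.
    from oslo.config import cfg

    api_opts = [
        cfg.StrOpt('bind_host', default='0.0.0.0',
                   help='Address for heat-api to bind to.'),
        cfg.IntOpt('bind_port', default=8004,
                   help='Port for heat-api to listen on.'),
    ]

    cfg.CONF.register_opts(api_opts, group='heat_api')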



I've raised a bug to track fixing this:

https://launchpad.net/bugs/1217463


There is already this one:
https://bugs.launchpad.net/heat/+bug/1209141




What output should the puppet modules be producing?  Would it make
sense for them to create the multiple-configuration-file scenario for
now, and migrate to the single-configuration-file at some future
point; since presumably heat will remain backwards compatible for some
time?


I think we should fix the bug above and they should create the new format,
but if sticking with the multiple-configuration-file scenario allows you to
progress in the short term, then that seems like a reasonable workaround.

It seems we have the following tasks to complete from a Heat perspective:
- Fix bug #1217463, in a backwards compatible way


https://review.openstack.org/#/c/43697/


- Update all the docs to reference the new config file
- Discuss packaging impact with downstream packagers (particularly we'll
 need to consider how upgrades should work..)
- Remove the old config files from the heat master tree (there is a review
 for this but it's currently abandoned:
 https://review.openstack.org/#/c/40257/)


- devstack support for the new heat.conf too.



I hope this clarifies things a bit. Please do let us know/raise bugs if you
find things which are not working as expected while we work through this
transition.


I'll try to sort this out.

-Angus



Thanks,

Steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][vmware] VMwareAPI sub-team reviews update 2013-08-27

2013-08-27 Thread Shawn Hartsock
Greetings stackers!

We are deep in the freeze ... so here's what people are working on in the 
VMwareAPI sub-team, and here are the reviews, ordered from most ready on top 
to least ready at the bottom. Some of these are *very* ready, with 8 +1 
reviews... others need some attention and revision. Let's try to get no more 
than a day between a review getting a negative vote and a revision. The merge 
freeze for blueprints is September 5th, so please keep the pressure on.

Needs one more core review/approval:
* NEW, https://review.openstack.org/#/c/40105/ ,'VMware: use VM uuid for volume 
attach and detach'
https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
core votes,1, non-core votes,5, down votes, 0
* NEW, https://review.openstack.org/#/c/42619/ ,'fix broken WSDL logic'
https://bugs.launchpad.net/nova/+bug/1171215
core votes,1, non-core votes,1, down votes, 0

Ready for core reviewer:
* NEW, https://review.openstack.org/#/c/40245/ ,'Nova support for vmware cinder 
driver'
https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
core votes,0, non-core votes,3, down votes, 0
* NEW, https://review.openstack.org/#/c/37819/ ,'VMware image clone strategy 
settings and overrides'
https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
core votes,0, non-core votes,4, down votes, 0
* NEW, https://review.openstack.org/#/c/30282/ ,'Multiple Clusters using single 
compute service'

https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
core votes,0, non-core votes,5, down votes, 0
* NEW, https://review.openstack.org/#/c/33100/ ,'Fixes host stats for 
VMWareVCDriver'
https://bugs.launchpad.net/nova/+bug/1190515
core votes,0, non-core votes,4, down votes, 0
* NEW, https://review.openstack.org/#/c/30628/ ,'Fix VCDriver to pick the 
datastore that has capacity'
https://bugs.launchpad.net/nova/+bug/1171930
core votes,0, non-core votes,8, down votes, 0
* NEW, https://review.openstack.org/#/c/40029/ ,'VMware: Config Drive Support'
https://bugs.launchpad.net/nova/+bug/1206584
core votes,0, non-core votes,5, down votes, 0
* NEW, https://review.openstack.org/#/c/40298/ ,'Fix snapshot in VMwareVCDriver'
https://bugs.launchpad.net/nova/+bug/1184807
core votes,0, non-core votes,6, down votes, 0

Needs VMware API expert review:
* NEW, https://review.openstack.org/#/c/43268/ ,'VMware: enable VNC access 
without user having to enter password'
https://bugs.launchpad.net/nova/+bug/1215352
core votes,0, non-core votes,4, down votes, 0
* NEW, https://review.openstack.org/#/c/36882/ ,'Fix VMware fakes'
https://bugs.launchpad.net/nova/+bug/1200482
core votes,0, non-core votes,3, down votes, 0
* NEW, https://review.openstack.org/#/c/41387/ ,'VMware: Nova boot from cinder 
volume'
https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
core votes,0, non-core votes,3, down votes, 0
* NEW, https://review.openstack.org/#/c/35633/ ,'Enhance the vCenter driver to 
support FC volume attach'
https://blueprints.launchpad.net/nova/+spec/fc-support-for-vcenter-driver
core votes,0, non-core votes,1, down votes, 0
* NEW, https://review.openstack.org/#/c/41657/ ,'Fix VMwareVCDriver to support 
multi-datastore'
https://bugs.launchpad.net/nova/+bug/1104994
core votes,0, non-core votes,3, down votes, 0
* NEW, https://review.openstack.org/#/c/43721/ ,'VMware: handle exceptions from 
RetrievePropertiesEx correctly'
https://bugs.launchpad.net/nova/+bug/1216961
core votes,0, non-core votes,3, down votes, 0
* NEW, https://review.openstack.org/#/c/43621/ ,'VMware: Handle case when there 
are no hosts in cluster'
https://bugs.launchpad.net/nova/+bug/1197041
core votes,0, non-core votes,2, down votes, 0

Needs discussion/work (has -1):
* NEW, https://review.openstack.org/#/c/37659/ ,'Enhance VMware instance disk 
usage'
https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
core votes,0, non-core votes,1, down votes, -1
* NEW, https://review.openstack.org/#/c/42024/ ,'VMWare: Disabling linked clone 
doesn't cache images'
https://bugs.launchpad.net/nova/+bug/1207064
core votes,0, non-core votes,1, down votes, -3
* NEW, https://review.openstack.org/#/c/43582/ ,'Fixes host stats for 
VMWareVCDriver'
https://bugs.launchpad.net/nova/+bug/1190515
core votes,0, non-core votes,0, down votes, -1
* NEW, https://review.openstack.org/#/c/34903/ ,'Deploy vCenter templates'

https://blueprints.launchpad.net/nova/+spec/deploy-vcenter-templates-from-vmware-nova-driver
core votes,0, non-core votes,2, down votes, -2
* NEW, https://review.openstack.org/#/c/33504/ ,'VMware: nova-compute crashes 
if VC not available'
https://bugs.launchpad.net/nova/+bug/1192016
core votes,0, non-core votes,2, down votes, -1
* NEW, https://review.openstack.org/#/c/43665/ ,'VMware: Validate the returned 
object data prior to 

Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Wang, Shane
Definitely, +1 ;-)

--
Shane

From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: Tuesday, August 27, 2013 11:40 PM
To: Daniel P. Berrange; OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] Frustrations with review wait times



On Tue, Aug 27, 2013 at 11:04 AM, Daniel P. Berrange 
berra...@redhat.com wrote:
On Tue, Aug 27, 2013 at 10:55:03AM -0400, Russell Bryant wrote:
 On 08/27/2013 10:43 AM, Daniel P. Berrange wrote:
  I tend to focus the bulk of my review activity on the libvirt driver,
  since that's where most of my knowledge is. I've recently done some
  reviews outside this area to help reduce our backlog, but I'm not
  so comfortable approving stuff in many of the general infrastructure
  shared areas since I've not done much work on those areas of code.
 
  I think Nova is large enough that it (mostly) beyond the scope of any
  one person to know all areas of Nova code well enough todo quality
  reviews. IOW, as we grow the nova-core team further, it may be worth
  adding more reviewers who have strong knowledge of specific areas and
  can focus their review energy in those areas, even if their review
  count will be low when put in the context of nova as a whole.

 I'm certainly open to that.

 Another way I try to do this unofficially is to give certain +1s a whole
 lot of weight when I'm looking at a patch.  I do this regularly when
 looking over patches to hypervisor drivers I'm not very familiar with.

 Another thing we could consider is take this approach more officially.
 Oslo has started doing this for its incubator.  A maintainer of a part
 of the code not on oslo-core has their +1 treated as a +2 on that code.

 http://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS
Yes, just having a list of expert maintainers for each area of Nova
would certainly be helpful in identifying whose comments to place
more weight by, regardless of anything else we might do.

I think we can dynamically generate this based on git log/blame and gerrit 
statistics per file.  For example, if someone has authored half the lines in a 
file or reviewed most of the patches that touched that file, they are probably 
very familiar with the file and would be a good person to review any change.
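
As a rough illustration of the git half of that (the gerrit side would need 
its REST API; the file path below is just an example):

    # Rank authors of a file by commit count using git log; run from a
    # checkout of the repository in question.
    import subprocess
    from collections import Counter

    def top_authors(path, repo='.', limit=5):
        out = subprocess.check_output(
            ['git', 'log', '--follow', '--format=%ae', '--', path],
            cwd=repo)
        return Counter(out.splitlines()).most_common(limit)

    print(top_authors('nova/virt/libvirt/driver.py'))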


Daniel
--
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About multihost patch review

2013-08-27 Thread Maru Newby

On Aug 27, 2013, at 3:27 PM, Tom Fifield t...@openstack.org wrote:

 On 27/08/13 15:23, Maru Newby wrote:
 
 On Aug 26, 2013, at 9:39 PM, Yongsheng Gong gong...@unitedstack.com wrote:
 
 First 'be like nova-network' is a merit for some deployments.
 
 I'm afraid 'merit' is a bit vague for me.  Would you please elaborate?
 
 One area of 'merit' here is migration from nova-network to
 neutron. If there's something exactly analogous to something that
 already exists, it's easier to move across.

I apologize for being unclear, but I don't think there is any question that 
neutron needs a multi-host HA capability.  The question is not  one of 
function, but of implementation.  

I don't believe that the design of a feature being proposed for neutron should 
be acceptable simply because it reuses an implementation strategy used by 
nova-network.  Neutron's architecture may allow different decisions to be made, 
and we may have learned from nova-network's example.  In any case, reviewers 
need to understand the 'why' behind design decisions, and it doesn't appear to 
me that there is sufficient documentation justifying the current proposal's 
approach.  Only once we have more information will we be able to make an 
educated decision as to the quality of the proposal.


m.


 
 
 Second, allowing the admin to decide which network will be multihosted at 
 runtime will enable neutron to continue using the current network node 
 (dhcp agent) mode at the same time.
 
 If multi-host and non- multi-host networks are permitted to co-exist 
 (because configuration is per-network), won't compute nodes have to be 
 allowed to be heterogenous (some multi-host capable, some not)?  And won't 
 Nova then need to schedule VMs configured with multi-host networks on 
 compatible nodes?  I don't recall mention of this issue in the blueprint or 
 design doc, and would appreciate pointers to where this decision was 
 documented.
 
 
 
 If we force the network to be multihosted when the configuration item 
 enable_multihost is true, and the administrator then wants to switch back 
 to the normal neutron way, he/she must modify the configuration item and 
 then restart.
 
 I'm afraid I don't follow - are you suggesting that configuring multi-host 
 globally will be harder on admins than the change under review?  Switching 
 to non multi-host under the current proposal involves reconfiguring and 
 restarting an awful lot of agents, to say nothing of the db changes.
 
 
 m. 
 
 
 
 
 
 On Tue, Aug 27, 2013 at 9:14 AM, Maru Newby ma...@redhat.com wrote:
 
 On Aug 26, 2013, at 4:06 PM, Edgar Magana emag...@plumgrid.com wrote:
 
 Hi Developers,
 
 Let me explain my point of view on this topic and please share your 
 thoughts in order to merge this new feature ASAP.
 
 My understanding is that multi-host is nova-network HA  and we are 
 implementing this bp 
 https://blueprints.launchpad.net/neutron/+spec/quantum-multihost for the 
 same reason.
 So, If in neutron configuration admin enables multi-host:
 etc/dhcp_agent.ini
 
 # Support multi host networks
 # enable_multihost = False
 
  Why do tenants need to be aware of this? They should just create networks 
 in the way they normally do and not by adding the multihost extension.
 
 I was pretty confused until I looked at the nova-network HA doc [1].  The 
 proposed design would seem to emulate nova-network's multi-host HA option, 
 where it was necessary to both run nova-network on every compute node and 
 create a network explicitly as multi-host.  I'm not sure why nova-network 
 was implemented in this way, since it would appear that multi-host is 
 basically all-or-nothing.  Once nova-network services are running on every 
 compute node, what does it mean to create a network that is not multi-host?
 
 So, to Edgar's question - is there a reason other than 'be like 
 nova-network' for requiring neutron multi-host to be configured per-network?
 
 
 m.
 
 1: 
 http://docs.openstack.org/trunk/openstack-compute/admin/content/existing-ha-networking-options.html
 
 
 I could be totally wrong and crazy, so please provide some feedback.
 
 Thanks,
 
 Edgar
 
 
 From: Yongsheng Gong gong...@unitedstack.com
 Date: Monday, August 26, 2013 2:58 PM
 To: Kyle Mestery (kmestery) kmest...@cisco.com, Aaron Rosen 
 aro...@nicira.com, Armando Migliaccio amigliac...@vmware.com, Akihiro 
 MOTOKI amot...@gmail.com, Edgar Magana emag...@plumgrid.com, Maru 
 Newby ma...@redhat.com, Nachi Ueno na...@nttmcl.com, Salvatore Orlando 
 sorla...@nicira.com, Sumit Naiksatam sumit.naiksa...@bigswitch.com, 
 Mark McClain mark.mccl...@dreamhost.com, Gary Kotton 
 gkot...@vmware.com, Robert Kukura rkuk...@redhat.com
 Cc: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: About multihost patch review
 
 Hi,
 Edgar Magana has commented to say:
 'This is the part that for me is confusing and I will need some 
 clarification from the community. Do we expect to have the multi-host 
 feature as an extension or something that 

Re: [openstack-dev] Blueprint for Nova native image building

2013-08-27 Thread Ian McLeod
Russell,

Thanks for the reminder to follow up on list.

To summarize my read of this thread, the strong consensus seems to be to
keep the detailed mechanics of building individual operating systems out
of nova itself.  It also seems there's no universally agreed upon
alternative location for it within the existing components, though the
pluggable asynchronous import proposal for glance seems like a sensible
candidate.

What we've opted to do for the moment is focus on expanding our original
standalone proof of concept to support more operating systems and more
cleanly support both network- and DVD/ISO-based installs, in part based
on the expanded block device mapping work that Nikola has been doing.

We're also working on a smaller/targeted enhancement to nova to allow
specifying a kernel command line, in addition to a ramdisk and kernel
when booting an instance.  This will allow us to avoid crafting custom
boot media when installing Linux under both KVM and, most likely, Xen
hypervisors.  Dennis' initial patch set for this:

https://review.openstack.org/#/c/43513/
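
To make the intent concrete, here is a purely illustrative sketch (it 
assumes the patch above lands; the 'os_command_line' image property name is 
an assumption about the interface under review, not a confirmed API):

    import glanceclient

    GLANCE_URL = 'http://glance.example.com:9292'   # placeholder endpoint
    AUTH_TOKEN = 'TOKEN'                            # placeholder token

    glance = glanceclient.Client('1', endpoint=GLANCE_URL, token=AUTH_TOKEN)

    # kernel_id/ramdisk_id are existing image properties; the kernel
    # command line would ride along as another property, avoiding custom
    # boot media for kickstart-style installs.
    glance.images.update(
        'INSTALL_IMAGE_ID',
        properties={
            'kernel_id': 'KERNEL_IMAGE_ID',
            'ramdisk_id': 'RAMDISK_IMAGE_ID',
            'os_command_line': 'ks=http://example.com/f19.ks console=ttyS0',
        })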

-Ian

On Mon, 2013-08-26 at 13:46 -0400, Russell Bryant wrote:
 I believe this is where the thread stopped a couple weeks ago.  I was
 just curious what has happened since.  How did you interpret all of the
 feedback given, and what direction have you decided to take next?
 
 Thanks,
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Michael Still
[Concerns over review wait times in the nova project]

I think that we're also seeing the fact that nova-cores are also
developers. nova-core members have the same feature freeze deadline,
and that means that to a certain extent we need to stop reviewing in
order to get our own code ready by the deadline.

The strength of nova-core is that its members are active developers,
so I think a reviewer caste would be a mistake. I am also not saying
that nova-core should get different deadlines (although more leniency
with exceptions would be nice).

So, I think lower review rates around deadlines are just a fact of life.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Joshua Harlow
Why not a rotation, though? I could see it being beneficial to, say, have a 
group of active developers code for a release, then have those developers 
rotate to a reviewer-only position (and rotate again every release). This 
allows for a flow of knowledge between reviewers and a different set of 
coders (instead of a looping flow, since reviewers are also coders).

For a big project like nova the workload could be spread out more like that.

Just a thought... 

Might not be feasible, but it could be an idea to strive towards.

Sent from my really tiny device...

On Aug 27, 2013, at 7:48 PM, Michael Still mi...@stillhq.com wrote:

 [Concerns over review wait times in the nova project]
 
 I think that we're also seeing the fact that nova-cores are also
 developers. nova-core members have the same feature freeze deadline,
 and that means that to a certain extent we need to stop reviewing in
 order to get our own code ready by the deadline.
 
 The strength of nova-core is that its members are active developers,
 so I think a reviewer caste would be a mistake. I am also not saying
 that nova-core should get different deadlines (although more leniency
 with exceptions would be nice).
 
 So, I think lower review rates around deadlines are just a fact of life.
 
 Michael
 
 -- 
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Christopher Yeoh
On Wed, Aug 28, 2013 at 12:15 PM, Michael Still mi...@stillhq.com wrote:

 [Concerns over review wait times in the nova project]

  I think that we're also seeing the fact that nova-cores are also
 developers. nova-core members have the same feature freeze deadline,
 and that means that to a certain extent we need to stop reviewing in
 order to get our own code ready by the deadline.

 The strength of nova-core is that its members are active developers,
 so I think a reviewer caste would be a mistake.


+1


 So, I think lower review rates around deadlines are just a fact of life.


Yes, and we should really encourage people to submit their patches as early
in the cycle as possible. It will be interesting to look at the stats
afterwards to see if the feature proposal deadline two weeks ahead of the
freeze has helped. If so, perhaps it should be brought even further back.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Mike Spreitzer
Joshua, I do not think such a strict and coarse scheduling is a practical 
way to manage developers, who have highly individualized talents, 
backgrounds, and interests.

Regards,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-27 Thread Jiang, Yunhong


 -Original Message-
 From: Michael Still [mailto:mi...@stillhq.com]
 Sent: Tuesday, August 27, 2013 7:45 PM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [Nova] Frustrations with review wait times
 
 [Concerns over review wait times in the nova project]
 
  I think that we're also seeing the fact that nova-cores are also
 developers. nova-core members have the same feature freeze deadline,
 and that means that to a certain extent we need to stop reviewing in
 order to get our own code ready by the deadline.
 

+1. Nova cores are very kind and helpful, but I think they are really 
overloaded because they are both core developers and core reviewers.

-jyh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev