Re: [Openstack] Availability Zones in Grizzly are not recognized

2013-07-05 Thread Joe Gordon
On Tue, Jul 2, 2013 at 10:21 AM, Vogl, Yves v...@adesso-mobile.de wrote:

  Hi,

 I want to segregate my installation into "Availability Zones".
 I've configured a working multi-host installation with the following hosts:


 [FIGURE 1]
 # nova availability-zone-list
 +-----------------------+--------------------------------------+
 | Name                  | Status                               |
 +-----------------------+--------------------------------------+
 | internal              | available                            |
 | |- Controller         |                                      |
 | | |- nova-conductor   | enabled :-) 2013-07-02T08:05:57.00   |
 | | |- nova-scheduler   | enabled :-) 2013-07-02T08:05:59.00   |
 | | |- nova-consoleauth | enabled :-) 2013-07-02T08:06:02.00   |
 | |- hvm-A              |                                      |
 | | |- nova-network     | enabled :-) 2013-07-02T08:05:58.00   |
 | |- hvm-B              |                                      |
 | | |- nova-network     | enabled :-) 2013-07-02T08:05:56.00   |
 | |- hvm-C              |                                      |
 | | |- nova-network     | enabled :-) 2013-07-02T08:05:57.00   |
 | |- hvm-D              |                                      |
 | | |- nova-network     | enabled :-) 2013-07-02T08:06:01.00   |
 | nova                  | available                            |
 | |- hvm-A              |                                      |
 | | |- nova-compute     | enabled :-) 2013-07-02T08:06:03.00   |
 | |- hvm-B              |                                      |
 | | |- nova-compute     | enabled :-) 2013-07-02T08:05:55.00   |
 | |- hvm-C              |                                      |
 | | |- nova-compute     | enabled :-) 2013-07-02T08:06:00.00   |
 | |- hvm-D              |                                      |
 | | |- nova-compute     | enabled :-) 2013-07-02T08:06:00.00   |
 +-----------------------+--------------------------------------+

 Now I want to introduce another "Availability Zone" to get something like
 this:


 [FIGURE 2]
 # nova availability-zone-list
 +-----------------------+--------------------------------------+
 | Name                  | Status                               |
 +-----------------------+--------------------------------------+
 | internal              | available                            |
 | |- Controller         |                                      |
 | | |- nova-conductor   | enabled :-) 2013-07-02T08:05:57.00   |
 | | |- nova-scheduler   | enabled :-) 2013-07-02T08:05:59.00   |
 | | |- nova-consoleauth | enabled :-) 2013-07-02T08:06:02.00   |
 | |- hvm-A              |                                      |
 | | |- nova-network     | enabled :-) 2013-07-02T08:05:58.00   |
 | |- hvm-B              |                                      |
 | | |- nova-network     | enabled :-) 2013-07-02T08:05:56.00   |
 | |- hvm-C              |                                      |
 | | |- nova-network     | enabled :-) 2013-07-02T08:05:57.00   |
 | |- hvm-D              |                                      |
 | | |- nova-network     | enabled :-) 2013-07-02T08:06:01.00   |
 | zone_1                | available                            |
 | |- hvm-A              |                                      |
 | | |- nova-compute     | enabled :-) 2013-07-02T08:06:03.00   |
 | |- hvm-B              |                                      |
 | | |- nova-compute     | enabled :-) 2013-07-02T08:05:55.00   |
 | |- hvm-C              |                                      |
 | | |- nova-compute     | enabled :-) 2013-07-02T08:06:00.00   |
 | zone_2                | available                            |
 | |- hvm-D              |                                      |
 | | |- nova-compute     | enabled :-) 2013-07-02T08:06:00.00   |
 +-----------------------+--------------------------------------+


 So I configured on the Controller:

  # Scheduler
 default_schedule_zone=zone_1
 default_availability_zone=zone_1


 And on the compute nodes in zone_1:

 # Scheduler
 default_availability_zone=zone_1

 and on the compute nodes in zone_2:

 # Scheduler
 default_availability_zone=zone_2

 It seems that those values are ignored, because no new availability zones
 show up. It still looks like [FIGURE 1].

 Am I missing something?


This should help; in Grizzly, availability zones are defined via host
aggregate metadata, so per-node default_availability_zone settings won't
create new zones:

http://russellbryantnet.wordpress.com/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
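Concretely, a zone is created by putting hosts into a host aggregate whose metadata sets `availability_zone`. A toy model of how a host's zone is derived (illustrative names and data only, not the actual nova code):

```python
# Toy model of how Grizzly derives availability zones from host
# aggregates. Illustrative sketch only -- not the actual nova code.

# An AZ is a host aggregate whose metadata carries "availability_zone".
aggregates = [
    {"hosts": {"hvm-A", "hvm-B", "hvm-C"},
     "metadata": {"availability_zone": "zone_1"}},
    {"hosts": {"hvm-D"},
     "metadata": {"availability_zone": "zone_2"}},
]

def availability_zone_of(host, default="nova"):
    """Return a host's AZ from its aggregate metadata, else the default."""
    for agg in aggregates:
        if host in agg["hosts"] and "availability_zone" in agg["metadata"]:
            return agg["metadata"]["availability_zone"]
    # Hosts in no AZ aggregate fall back to default_availability_zone.
    return default

print(availability_zone_of("hvm-D"))  # zone_2
```

The point being: the zone a host reports follows the aggregate it belongs to, not any flag set in that host's nova.conf.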



 --

 Kind regards,
 Yves Vogl

 adesso mobile solutions GmbH
 Head of IT Operations
 Stockholmer Allee 24 | 44269 Dortmund
 T +49 231 930 9379 | F +49 231 930 9317
 Mail: v...@adesso-mobile.de | Web: www.adesso-mobile.de

Re: [Openstack] Reg: Compute node resources sharing Clarification

2013-05-31 Thread Joe Gordon
On Fri, May 31, 2013 at 8:20 AM, Dhanasekaran Anbalagan
bugcy...@gmail.com wrote:

 Hi Guys,

 I would like to know how resource sharing works in OpenStack.
 Assume there are two compute nodes, each with 4 physical cores and 16 GB
 of physical RAM. Would I be able to start an instance with 8 cores and
 32 GB of RAM? How is this handled in OpenStack?


Currently this is not supported; an instance must fit entirely on a single
compute node.
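The reason is that the scheduler places each instance on exactly one host and filters hosts on their individual capacity. A minimal sketch of that per-host check (illustrative, not the real RamFilter/CoreFilter code):

```python
# Illustrative per-host fit check: an instance must fit on ONE node;
# the resources of two nodes are never pooled. Not the real nova filters.
hosts = [
    {"name": "node-1", "vcpus": 4, "ram_mb": 16 * 1024},
    {"name": "node-2", "vcpus": 4, "ram_mb": 16 * 1024},
]

def candidate_hosts(req_vcpus, req_ram_mb):
    """Return hosts that can individually satisfy the request."""
    return [h["name"] for h in hosts
            if h["vcpus"] >= req_vcpus and h["ram_mb"] >= req_ram_mb]

print(candidate_hosts(2, 4096))       # ['node-1', 'node-2']
print(candidate_hosts(8, 32 * 1024))  # [] -- no single host can satisfy it
```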


 Please guide me.

 -Dhanasekaran.


 Did I learn something today? If not, I wasted it.

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] [OpenStack] JSON Naming convention

2013-05-28 Thread Joe Gordon
We cannot go back and change past APIs, but standardization would be nice
for future APIs such as nova V3.
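For what it's worth, bridging the two conventions client-side is a one-liner; a small sketch (my own helper, not part of any OpenStack client):

```python
import re

def camel_to_snake(name):
    """Convert camelCase JSON keys like 'publicURL' to 'public_url'."""
    return re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", name).lower()

print(camel_to_snake("publicURL"))        # public_url
print(camel_to_snake("tenantName"))       # tenant_name
print(camel_to_snake("endpoints_links"))  # endpoints_links (already snake_case)
```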


On Fri, May 17, 2013 at 1:20 PM, Balamurugan V G balamuruga...@gmail.com wrote:

 Hi,

 I see that the REST API uses both camelCase and underscore-separated
 names for JSON attributes. An example is below:


 curl -X POST http://1.1.1.1:5000/v2.0/tokens \
   -H 'Content-type: application/json' \
   -d '{"auth": {"passwordCredentials": {"username": "admin", "password": "admin_pass"}, "tenantName": "test"}}'

 {"access": {"token": {"expires": "2013-05-18T06:10:07Z", "id": "d92aebd5112f44b9b13dfabbf4283110",
    "tenant": {"enabled": true, "description": null, "name": "automation", "id": "4a8c3029619e4afbaa39f85fcd121003"}},
   "serviceCatalog": [
    {"endpoints": [{"adminURL": "http://2.2.2.2:8774/v2/4a8c3029619e4afbaa39f85fcd121003",
      "region": "RegionOne", "internalURL": "http://2.2.2.2:8774/v2/4a8c3029619e4afbaa39f85fcd121003",
      "id": "5c3598dc6b384a598046471858b55417", "publicURL": "http://1.1.1.1:8774/v2/4a8c3029619e4afbaa39f85fcd121003"}],
     "endpoints_links": [], "type": "compute", "name": "nova"},
    {"endpoints": [{"adminURL": "http://2.2.2.2:9696/", "region": "RegionOne",
      "internalURL": "http://2.2.2.2:9696/", "id": "18ffe92b8db242d8b39991b4dceafe6c",
      "publicURL": "http://1.1.1.1:9696/"}],
     "endpoints_links": [], "type": "network", "name": "quantum"},
    {"endpoints": [{"adminURL": "http://2.2.2.2:9292/v2", "region": "RegionOne",
      "internalURL": "http://2.2.2.2:9292/v2", "id": "91bce3f20f704165bd430629a1446baf",
      "publicURL": "http://1.1.1.1:9292/v2"}],
     "endpoints_links": [], "type": "image", "name": "glance"},
    {"endpoints": [{"adminURL": "http://2.2.2.2:8776/v1/4a8c3029619e4afbaa39f85fcd121003",
      "region": "RegionOne", "internalURL": "http://2.2.2.2:8776/v1/4a8c3029619e4afbaa39f85fcd121003",
      "id": "710bb818196f4f06a12f4efc4e32b47a", "publicURL": "http://1.1.1.1:8776/v1/4a8c3029619e4afbaa39f85fcd121003"}],
     "endpoints_links": [], "type": "volume", "name": "cinder"},
    {"endpoints": [{"adminURL": "http://2.2.2.2:8773/services/Admin", "region": "RegionOne",
      "internalURL": "http://2.2.2.2:8773/services/Cloud", "id": "01b49f1552dd486cba9499e567aa3774",
      "publicURL": "http://1.1.1.1:8773/services/Cloud"}],
     "endpoints_links": [], "type": "ec2", "name": "ec2"},
    {"endpoints": [{"adminURL": "http://2.2.2.2:35357/v2.0", "region": "RegionOne",
      "internalURL": "http://2.2.2.2:5000/v2.0", "id": "f6305996eb8a49ff8fca9c40f1a78ae8",
      "publicURL": "http://1.1.1.1:5000/v2.0"}],
     "endpoints_links": [], "type": "identity", "name": "keystone"}],
   "user": {"username": "admin", "roles_links": [], "id": "b254901420994f3895fed48073761b00",
    "roles": [{"name": "admin"}], "name": "admin"},
   "metadata": {"is_admin": 0, "roles": ["55e3d24acce64b8bb29955945de47d21"]}}}

 For example, 'publicURL' and 'endpoints_links'.

 Is there any plan/blueprint to standardize on one naming convention?

 Regards,
 Balu



Re: [Openstack] [Nova] Creating instances with custom UUIDs

2013-04-04 Thread Joe Gordon
On Wed, Apr 3, 2013 at 7:05 PM, Chris Behrens cbehr...@codestud.com wrote:

 I'm having a hard time understanding the original problem.  nova boot
 should return in milliseconds.  There's no blocking on provisioning.


The only thing that could block is DB access, as AFAIK the RPC to the
scheduler is still pass by reference.

- Chris

 On Apr 3, 2013, at 8:32 PM, Rafael Rosa rafaelros...@gmail.com wrote:

 API-wise, I was thinking about something like nova boot
 --custom-instance-uuid ABC... or similar. To avoid problems with any
 current implementation, I would make it disabled by default and add a
 config option to enable it.

 As for collisions, my take is that if you're passing a custom UUID you
 know what you're doing and are generating them in a way that won't
 produce duplicates. With standard UUID generators, the probability of a
 collision is vanishingly small.
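For scale, the birthday-bound arithmetic on random UUIDs (a version 4 UUID carries 122 random bits) makes the point, assuming a good RNG:

```python
# Back-of-the-envelope birthday bound for random (version 4) UUIDs,
# which carry 122 random bits.
n = 10**9          # a billion instances
space = 2**122     # number of distinct random UUIDs
p_collision = n * (n - 1) / (2 * space)  # birthday approximation
print(p_collision)  # on the order of 1e-19
```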

 Thanks for the feedback :)

 Rafael Rosa Fu


 2013/4/3 Michael Still mi...@stillhq.com

 On Thu, Apr 4, 2013 at 9:16 AM, Rafael Rosa rafaelros...@gmail.com
 wrote:
  Hi,
 
  In our OpenStack installation we have an issue when creating new
  instances: we need to execute some long-running processes before calling
  nova boot, and the call blocks for the end user for a while. We would
  like to return immediately to the caller with the final instance UUID and
  do the work in the background, but the UUID is only generated during
  actual instance creation, which is a no-go in our situation.

 The instance_create database call already accepts an instance UUID as
 an argument, so that bit looks like it should work out well for you.
 So, I guess this is mostly a case of working out how you want the API
 to work.

 Personally, I would have no problem with something like this, so long
 as we could somehow reserve the instance UUID so that another caller
 doesn't try and create an instance with the same UUID while you're
 doing your slow thing.

 Cheers,
 Michael




Re: [Openstack] openstack segregation and flavors

2013-04-02 Thread Joe Gordon
See
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/type_filter.py
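The idea behind that filter, roughly: a host only accepts a flavor matching what it already runs, so flavors end up segregated onto groups of hosts. A toy version of the type-affinity check (illustrative sketch, not the actual nova filter code):

```python
# Toy type-affinity check: a host only accepts a flavor if every
# instance already on it uses the same flavor (empty hosts take anything).
# Illustrative sketch, not the actual nova filter.
def host_passes(host_instance_types, requested_type):
    return all(t == requested_type for t in host_instance_types)

print(host_passes([], "m1.large"))                        # True (empty host)
print(host_passes(["m1.large", "m1.large"], "m1.large"))  # True
print(host_passes(["m1.small"], "m1.large"))              # False (would mix types)
```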

On Mon, Apr 1, 2013 at 8:30 PM, Jean-Daniel BUSSY silversurfer...@gmail.com
 wrote:

 Hi stackers,

 I am looking for a way to separate groups of hosts, each with their own
 instance flavors.
 Of the aggregation levels [cells, regions, availability zones, aggregates],
 at which level are flavors grouped?
 Since regions have their own components and their own API, I would say
 flavors are grouped at the region level, but I recall seeing that
 availability zones group flavors.

 Any hint?

 regards

 *BUSSY Jean-Daniel*




Re: [Openstack] AggregateInstanceExtraSpecs very slow?

2013-02-25 Thread Joe Gordon
On Sun, Feb 24, 2013 at 3:31 PM, Sam Morrison sorri...@gmail.com wrote:

 I have been playing with the AggregateInstanceExtraSpecs filter and can't
 get it to work.

 In our staging environment it works fine with 4 compute nodes, I have 2
 aggregates to split them into 2.

 When I try to do the same in our production environment which has 80
 compute nodes (splitting them again into 2 aggregates) it doesn't work.

 nova-scheduler starts to go very slow,  I scheduled an instance and gave
 up after 5 minutes, it seemed to be taking ages and the host was at 100%
 cpu. Also got about 500 messages in rabbit that were unacknowledged.


What does the nova-scheduler log say?  Where are the unacknowledged
RabbitMQ messages sent from?


 We are running stable/folsom. Does anyone else have this issue or know if
 there have been any fixes in Grizzly relating to this? I couldn't see any
 bugs about it.

 Thanks,
 Sam


Re: [Openstack] AggregateInstanceExtraSpecs very slow?

2013-02-25 Thread Joe Gordon
On Mon, Feb 25, 2013 at 6:14 PM, Sam Morrison sorri...@gmail.com wrote:

 Hi Joe,

 On 26/02/2013, at 11:19 AM, Joe Gordon j...@cloudscaling.com wrote:

 On Sun, Feb 24, 2013 at 3:31 PM, Sam Morrison sorri...@gmail.com wrote:

 I have been playing with the AggregateInstanceExtraSpecs filter and can't
 get it to work.

 In our staging environment it works fine with 4 compute nodes, I have 2
 aggregates to split them into 2.

 When I try to do the same in our production environment which has 80
 compute nodes (splitting them again into 2 aggregates) it doesn't work.

 nova-scheduler starts to go very slow,  I scheduled an instance and gave
 up after 5 minutes, it seemed to be taking ages and the host was at 100%
 cpu. Also got about 500 messages in rabbit that were unacknowledged.


  What does the nova-scheduler log say?  Where are the unacknowledged
  RabbitMQ messages sent from?


 Logs are below. Note the large time gap between evaluating each host; this
 is pretty much instantaneous without this filter.

 I can't figure out how to see an unacknowledged message in rabbit, but my
 guess is it is the compute service updates from all the compute nodes.
 These aren't happening, and I think that is why the later attempts to
 schedule are rejected with "is disabled or has not been heard from in a
 while".

 Do you see anything that could be an issue? Flags we use for scheduler are
 below also:

 Thanks for your help,
 Sam


 # Scheduler Flags
 compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
 ram_allocation_ratio=1.0
 cpu_allocation_ratio=0.92
 reserved_host_memory_mb=1024
 reserved_host_disk_mb=0

 scheduler_default_filters=RetryFilter,AggregateInstanceExtraSpecsFilter,RamFilter,CoreFilter,ComputeFilter
 compute_fill_first_cost_fn_weight=1.0



 2013-02-25 10:01:35 DEBUG nova.scheduler.filter_scheduler
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Attempting to build 1
 instance(s) schedule_run_instance /usr/lib/python2.7/dist-packages/nova/sc
 heduler/filter_scheduler.py:66
 2013-02-25 10:01:35 DEBUG nova.scheduler.filters.retry_filter
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Previously tried hosts:
 [].  (host=qh2-rcc27) host_passes /usr/lib/python2.7/dist-packages/n
 ova/scheduler/filters/retry_filter.py:39
 2013-02-25 10:02:13 DEBUG nova.scheduler.host_manager
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Host filter passes for
 qh2-rcc27 passes_filters
 /usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:178
 2013-02-25 10:02:13 DEBUG nova.scheduler.filters.retry_filter
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Previously tried hosts:
 [].  (host=qh2-rcc26) host_passes
 /usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:39
 2013-02-25 10:02:51 DEBUG nova.scheduler.host_manager
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Host filter function
 bound method CoreFilter.host_passes of
 nova.scheduler.filters.core_filter.CoreFilter object at 0x43f7a50 failed
 for qh2-rcc26 passes_filters
 /usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:175
 2013-02-25 10:02:51 DEBUG nova.scheduler.filters.retry_filter
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Previously tried hosts:
 [].  (host=qh2-rcc25) host_passes
 /usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:39
 2013-02-25 10:03:28 DEBUG nova.scheduler.filters.compute_filter
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] host 'qh2-rcc25':
 free_ram_mb:71086 free_disk_mb:3035136 is disabled or has not been heard
 from in a while host_passes
 /usr/lib/python2.7/dist-packages/nova/scheduler/filters/compute_filter.py:37
 2013-02-25 10:03:28 DEBUG nova.scheduler.host_manager
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Host filter function
 bound method ComputeFilter.host_passes of
 nova.scheduler.filters.compute_filter.ComputeFilter object at 0x43f7210
 failed for qh2-rcc25 passes_filters
 /usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:175
 2013-02-25 10:03:28 DEBUG nova.scheduler.filters.retry_filter
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Previously tried hosts:
 [].  (host=qh2-rcc24) host_passes
 /usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:39
 2013-02-25 10:04:05 DEBUG nova.scheduler.filters.compute_filter
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] host 'qh2-rcc24':
 free_ram_mb:99758 free_disk_mb:3296256 is disabled or has not been heard
 from in a while host_passes
 /usr/lib/python2.7/dist-packages/nova/scheduler/filters/compute_filter.py:37
 2013-02-25 10:04:05 DEBUG nova.scheduler.host_manager
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Host filter function
 bound method ComputeFilter.host_passes of
 nova.scheduler.filters.compute_filter.ComputeFilter object at 0x43f7210
 failed for qh2-rcc24 passes_filters
 /usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:175
 2013-02-25 10:04:05 DEBUG nova.scheduler.filters.retry_filter
 [req-d7c77ff6
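The timestamps in that log make the symptom concrete: each host evaluation takes roughly 37-38 seconds (10:01:35 → 10:02:13 → 10:02:51 → …). A rough estimate for a full pass over 80 hosts, assuming the per-host cost stays constant:

```python
# Rough estimate from the log timestamps above: each host filter pass
# takes ~37-38 s, so a single scheduling request over 80 hosts needs:
seconds_per_host = 38
hosts = 80
total_min = seconds_per_host * hosts / 60
print(round(total_min))  # roughly 51 minutes for one request
```

That also explains the unacknowledged messages piling up: while the scheduler grinds through one request, everything else queues behind it.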

Re: [Openstack] Reinstating Trey Morris for Nova Core

2013-01-23 Thread Joe Gordon
+1

On Wed, Jan 23, 2013 at 7:58 AM, Chris Behrens cbehr...@codestud.com wrote:

 +1

 On Jan 22, 2013, at 5:38 PM, Matt Dietz matt.di...@rackspace.com wrote:

  All,
 
 I think Trey Morris has been doing really well on reviews again, so
 I'd
  like to propose him to be reinstated for Nova core. Thoughts?
 
  -Dietz
 
 
 


Re: [Openstack] Discussion / proposal: deleted column marker

2012-10-03 Thread Joe Gordon
+1 to the design proposed here

Looks like there is already a session proposed for Grizzly summit:
http://summit.openstack.org/cfp/details/63

best,
Joe

On Wed, Oct 3, 2012 at 10:07 AM, Federico Innocenti 
federico.innoce...@hp.com wrote:


 +1 to the design proposed here.

 Even without touching anything nova-specific, simply from a database
 perspective the soft-delete approach has proven to be a poor solution to
 most of the problems it promises to solve.

 In addition to what Stan already pointed out, let me recap some things you
 may already know, so that we have a complete picture, this time solely
 from a db point of view:

 - restoring a record is more than doing set deleted=0. It is about
 recovering the whole graph of references in the database. Besides the
 complexity of a restore procedure, it makes it impossible to selectively
 recover just the information we want.

 - tables keep growing even when live data (deleted=0) is a small
 percentage of the total. This takes more space/time for backups and
 maintenance operations.

 - every query requires, for each table involved, an additional filter on
 deleted=0 and thus an additional scan, even though most DBMSs are likely
 able to optimize queries discriminating on a binary flag.

 - using unique constraints and foreign keys is impossible. The nova
 database schema now holds more foreign keys than in the old days, but
 unless real deletions are performed they are worthless, as they cannot
 protect the database from inconsistencies. And in Nova we saw a number of
 inconsistencies arise in large deployments.


 What the soft delete approach tries (badly) to do is in practice to keep
 an archive of historical data. The best archiving solution could be left to
 the choice of the single vendor for the time being (until a more
 comprehensive notification system is in place), since every major DBMS
 provides its own facilities to implement it. In MySQL as in many other
 databases you can write db triggers which insert the row being deleted to a
 shadow table in the same db or in another db.
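That last suggestion is easy to demo. Here is a self-contained sketch using SQLite (MySQL triggers are analogous): deleted rows are archived into a shadow table by a trigger, so the live table can use real deletes:

```python
# Demo of the trigger-based archiving described above, using SQLite
# (the MySQL version is analogous). Deleted rows land in a shadow table.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE shadow_instances (id INTEGER, name TEXT, deleted_at TEXT);
    CREATE TRIGGER archive_instance AFTER DELETE ON instances
    BEGIN
        INSERT INTO shadow_instances VALUES (OLD.id, OLD.name, datetime('now'));
    END;
""")
db.execute("INSERT INTO instances VALUES (1, 'vm-1')")
db.execute("DELETE FROM instances WHERE id = 1")  # a hard delete, no deleted flag
print(db.execute("SELECT id, name FROM shadow_instances").fetchall())  # [(1, 'vm-1')]
```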

 Cheers,
 Federico Innocenti





Re: [Openstack] Generalised host aggregates in Folsom

2012-09-19 Thread Joe Gordon
On Wed, Sep 19, 2012 at 10:18 AM, Day, Phil philip@hp.com wrote:

  Hi Folks,

 Trying to catch up (I'm thinking of changing my middle name to "catch-up"
 :-)) with the generalisation of host aggregates. Looking at the code, it
 seems the chain for adding a host to an aggregate still ends up calling
 the virt layer:

  api/openstack/compute/contrib/aggregates/AggregateController/action()

  compute/api/AggregateAPI/add_host_to_aggregate()

  RPC

  compute/manager/add_aggregate_host()

  virt/add_to_aggregate()

 I thought the change was to be able to create aggregates that can be
 linked to a hypervisor concept, but could also just be a way of "tagging"
 hosts into pools for other scheduler reasons. Am I missing something?


The RPC component is there to ensure XenAPI still works.  In the libvirt
driver, add_to_aggregate() is a noop.

So you can create an aggregate that is linked to a hypervisor, or one that
is just a way of tagging hosts.



  Thanks,

  Phil



Re: [Openstack] Generalised host aggregates in Folsom

2012-09-19 Thread Joe Gordon
On Wed, Sep 19, 2012 at 1:14 PM, Day, Phil philip@hp.com wrote:

  Thanks Joe,

  I was anticipating something more complex to be able to say when an
  aggregate should or shouldn't be linked to the hypervisor, and overlooked
  the obvious.

  So just to make sure I've got it: on libvirt systems an aggregate can be
  used for anything (because of the no-op in the driver), but on Xen
  systems it's still linked to the hypervisor pools?


Libvirt can be used for anything.

And on Xen an aggregate can be a hypervisor pool or anything else,
depending on the aggregate metadata (
https://github.com/openstack/nova/blob/master/nova/virt/xenapi/pool.py#L80
https://github.com/openstack/nova/blob/master/nova/virt/xenapi/pool_states.py
)

 Thanks

 Phil



Re: [Openstack] [nova] nova-manage is getting deprecated?

2012-07-23 Thread Joe Gordon
On Fri, Jul 20, 2012 at 11:43 AM, Tong Li liton...@us.ibm.com wrote:

  A while back, there was a comment on a nova-manage defect stating that
 nova-manage is being deprecated. Can anyone tell me what the replacement
 will be, and when? Thanks.


Last I heard, python-novaclient will be replacing most of nova-manage.
 There will always be a few commands that cannot be run via the API
(python-novaclient), such as db sync, so those will stay in nova-manage.

best,
Joe



 Tong Li
 Emerging Technologies  Standards
 Building 501/B205
 liton...@us.ibm.com



Re: [Openstack] Capacity based scheduling: What updated free_ram_mb in Folsom

2012-07-13 Thread Joe Gordon
Phil,

This may shine some light.

https://bugs.launchpad.net/nova/+bug/1016273

best,
Joe
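The short version of where that discussion landed: rather than maintaining free_ram_mb in the database, the scheduler's host state recomputes it from the instances assigned to the host. A rough sketch of that accounting (illustrative only, not the actual nova host_manager code):

```python
# Sketch of the capacity accounting the filter scheduler does in memory:
# free RAM is derived from the instance list, not stored in the DB.
# Illustrative only -- not the actual nova host_manager code.
def free_ram_mb(total_ram_mb, reserved_mb, instances):
    used = sum(inst["memory_mb"] for inst in instances)
    return total_ram_mb - reserved_mb - used

instances = [{"memory_mb": 2048}, {"memory_mb": 4096}]
print(free_ram_mb(16384, 1024, instances))  # 9216
```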

On Fri, Jul 13, 2012 at 10:27 AM, Day, Phil philip@hp.com wrote:

 Hi Jay,

 If I read it correctly, that updates the *_used values that show actual
 consumption, but what I was looking for is the updates to the allocated
 values (free_ram_mb / free_disk_gb) that were added for schedulers that
 don't want to overcommit.

 I remember some detailed discussion with Chris and Sandy about how best to
 implement this in the face of multiple schedulers, failing creates, etc.,
 some of which involved the notification system.

 Brian Elliot pointed me to where the filter scheduler just recalculates
 these values from the instance table, so I guess the plans to maintain
 the info in the database were dropped along the way.

 Cheers,
 Phil

 -Original Message-
 From: openstack-bounces+philip.day=hp@lists.launchpad.net [mailto:
 openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of Jay
 Pipes
 Sent: 13 July 2012 17:36
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Capacity based scheduling: What updated
 free_ram_mb in Folsom

 Hi Phil,

 The nova.db.api.compute_node_update() call is what the individual virt
 drivers call to update the compute node stats. grep for that and you'll see
 where the calls to set the compute node data are called.

 Best,
 -jay

 On 07/13/2012 09:38 AM, Day, Phil wrote:
  Hi Folks,
 
 
 
  I was reviewing a code change to add generic retries for build
  failures ( https://review.openstack.org/#/c/9540/2 ), and wanted to be
  sure that it wouldn’t invalidate the capacity accounting used by the
 scheduler.
 
 
 
  However I've been sitting here for a while working through the Folsom
  scheduler code trying to understand how the capacity based scheduling
  now works, and I’m sure I’m missing something obvious but I just can’t
  work out where the free_ram_mb value in the compute_node table gets
 updated.
 
 
 
   I can see the database api method to update the values,
   compute_node_utilization_update(), but it doesn't look as if anything
   in the code ever calls it?
 
 
 
  From when I last looked at this / various discussions here and at the
  design summits I thought the approach was that:
 
   -  The scheduler would make a call (rather than a cast) to the
   compute manager, which would then do some verification work, update the
   DB table whilst in the context of that call, and then start a thread
   to complete the spawn.  The need to go all the way to the compute node
   as a call was to avoid race conditions from multiple schedulers.  (The
   change I'm looking at is part of a blueprint to avoid such a race, so
   maybe I imagined the change from cast to call?)
 
 
 
  -  On a delete, the capacity_notifer (which had to be configured
  into the list_notifier) would detect the delete message, and decrement
  the database values.
 
 
 
  But now I look through the code it looks as if the scheduler is still
  doing a cast (scheduler/driver),  and although I can see the database
  api call to update the values, compute_node_utilization_update(),  it
  doesn’t look as if anything in the code ever calls that ?
 
 
 
  The ram_filter scheduler seems to use the free_ram_mb value, and that
  value seems to come from the host_manager in the scheduler which is
  read from the Database,  but I can't for the life of me work out where
  these values are updated in the Database.
 
 
 
  The capacity_notifier, which used to decrement values on a VM deletion
  only (according to the comments the increment was done in the
  scheduler) seems to have now disappeared altogether in the move of the
  notifier to openstack/common ?
 
 
 
  So I’m sure I’m missing some other even more cunning plan on how to
  keep the values current, but I can’t for the life of me work out what
  it is – can someone fill me in please ?
 
 
 
  Thanks,
 
  Phil
 
 
 
 
 


Re: [Openstack] PEP8 checks

2012-07-09 Thread Joe Gordon
On Mon, Jul 9, 2012 at 3:01 PM, Dave Walker davewal...@ubuntu.com wrote:

 On Mon, Jul 02, 2012 at 08:28:04AM -0400, Monty Taylor wrote:
 
 
  On 07/02/2012 06:46 AM, John Garbutt wrote:
   Hi,
  
   I noticed I can now run the pep8 tests like this (taken from Jenkins
 job):
   tox -v -epep8
   ...
   pep8: commands succeeded
   congratulations :)
  
   But the old way to run tests seems to fail:
   ./run-tests.sh -p
   ...
   File
 /home/johngar/openstack/nova/.venv/local/lib/python2.7/site-packages/pep8.py,
 line 1220, in check_logical
   for result in self.run_check(check, argument_names):
   TypeError: 'NoneType' object is not iterable
  
   Is this expected?
   Did I just miss an email about this change?
 
  I cannot reproduce this on my system. Can you run
  bash -x run_tests.sh -p and pastebin the output? Also,
  tools/with_venv.sh pep8 --version just to be sure.
 

 Hi,

 The issue is that as of a recent change to upstream pep8 [1], the
 additional pep8 rules in tools/hacking.py need to be changed from
 returns to yields.. :(


Proof of Concept:  https://review.openstack.org/#/c/9569
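
Concretely, the upstream change means each extra check in tools/hacking.py has to become a generator. A rough illustration of the new shape (the rule itself is invented for this sketch, not one of nova's actual checks):

```python
# Sketch of a tools/hacking.py-style logical-line check after the
# pep8 change referenced above: checks must *yield* their
# (offset, message) results instead of returning them.  The rule
# itself is invented for illustration, not one of nova's checks.
def check_todo_has_bug(logical_line):
    pos = logical_line.find("TODO")
    if pos >= 0 and "bug" not in logical_line.lower():
        yield pos, "X999: TODO without a bug reference"
```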



 [1]
 https://github.com/jcrocholl/pep8/commit/b9f72b16011aac981ce9e3a47fd0ffb1d3d2e085

 Kind Regards,

 Dave Walker dave.wal...@canonical.com
 Engineering Manager,
 Ubuntu Server Infrastructure



Re: [Openstack] Openstack special sauce checking tool??

2012-06-28 Thread Joe Gordon
Josh,

https://github.com/openstack/nova/blob/master/tools/hacking.py

It runs when you do a ./run_tests.sh -p in nova.
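
As a rough illustration of the kind of "special sauce" checks such a tool applies on top of pep8/pylint (the two rules below are assumptions for this sketch; nova's real rules live in tools/hacking.py):

```python
import re

# Toy version of the kind of extra style checks hacking.py runs on
# top of pep8/pylint.  These two rules are illustrative assumptions,
# not nova's actual rule set.
def check_import_style(lines):
    violations = []
    for num, line in enumerate(lines, 1):
        if re.match(r"\s*from\s+\S+\s+import\s+\*", line):
            violations.append((num, "wildcard import"))
        if re.match(r"\s*import\s+\w+\s*,", line):
            violations.append((num, "one import per line"))
    return violations
```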

On Thu, Jun 28, 2012 at 10:15 AM, Joshua Harlow harlo...@yahoo-inc.comwrote:

  Hi all,

 I remember hearing once that someone had a openstack import/hacking style
 checking tool.

 I was wondering if such a thing existed to verify the openstack way
 of doing imports and other special checks to match the openstack style.

 I know a lot of us run pep8/pylint, but those don’t catch these special
 sauce checks

 And it would be nice to not have to manually check these (visually...) but
 be able to run a tool that would just do the basic checks.

 Does such a thing exist??

 Thx,

 Josh



Re: [Openstack] gerrit reviews change?

2012-06-13 Thread Joe Gordon
I was wondering the same thing...

On Wed, Jun 13, 2012 at 3:11 PM, Kevin L. Mitchell 
kevin.mitch...@rackspace.com wrote:

 For the past few days, I have noticed that I no longer get emails when
 new changes are pushed, when changes I've commented on have new patch
 sets pushed, or when changes I've commented on are finally merged.  I do
 receive emails when comments are made on changes I've commented on, but
 the other emails are MIA.  What's up?  I depended on those emails to
 tell me when I needed to re-review a change or stop tracking a change
 because it merged…
 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com




Re: [Openstack] Lossage in nova test suite?

2012-06-04 Thread Joe Gordon
On Mon, Jun 4, 2012 at 5:47 PM, Gabe Westmaas
gabe.westm...@rackspace.comwrote:

 Should we revert this change till we get it cleared up?


+1



 On 6/4/12 8:29 PM, James E. Blair cor...@inaugust.com wrote:

 On 06/04/2012 04:00 PM, Kevin L. Mitchell wrote:
  Today I've noticed some significant problems with nova's test suite
  leaving literally hundreds of python processes out.  I'm guessing that
  this has to do with the unit tests for the multiprocess patch, which was
  just approved.  This could be causing problems with jenkins, too…
 
  Anybody have any other insights?
 
 Yes, several Jenkins slaves have been taken out by running nova unit
 tests.  The one that I am able to log into seems to be continuously
 respawning python processes.  Other slaves are inaccessible due to having
 exhausted their RAM.
 
 I note that all of the tests run after that change merged carry this
 warning notice from Jenkins:
 
Process leaked file descriptors. See
 http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build
 for more information
 
 So I think it's fair to say that Jenkins corroborates your suspicion
 that change introduced a problem with leaking processes.
 
 This is affecting any Jenkins slave that the nova unit tests job runs
 on, which in turn affects jobs for unrelated projects that happen to
 later run on that slave.
 
 In addition to correcting this problem, I believe we should add a build
 step to Jenkins to ensure that all of the test processes have terminated
 correctly.
 
 -Jim
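
Such a cleanup/verification step might look roughly like this (a sketch only: the process-name pattern and the /proc scan are assumptions about how one could detect leftover test processes, not the actual Jenkins configuration):

```python
import os

def leaked_test_processes(pattern="nosetests"):
    """Return PIDs of processes whose command line matches pattern.

    Illustrative sketch of the suggested post-build check; the
    pattern and the /proc scan are assumptions.
    """
    if not os.path.isdir("/proc"):
        return []  # /proc only exists on Linux
    pids = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit() or int(entry) == os.getpid():
            continue
        try:
            with open("/proc/%s/cmdline" % entry, "rb") as f:
                cmdline = f.read().replace(b"\x00", b" ").decode("utf-8", "replace")
        except OSError:
            continue  # process exited while we were scanning
        if pattern in cmdline:
            pids.append(int(entry))
    return pids
```

A Jenkins build step could fail the job (and kill the stragglers) whenever this returns a non-empty list.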
 


Re: [Openstack] [Nova] EC2 api testing

2012-05-07 Thread Joe Gordon
On Fri, May 4, 2012 at 10:09 AM, Martin Packman 
martin.pack...@canonical.com wrote:

 At the Folsom Design Summit we discussed[1] trying to collaborate on a
 test suite for EC2 api support. Currently nova supports the common
 stuff pretty well but has differing behaviour in a lot of edge cases.
 Having a separate suite, along the lines of tempest, that could be run
 against other existing clouds as well as OpenStack would let us test
 the tests as well, and would be useful for other projects.

 Various parties have done work in this direction in the past, the
 trick is going to be combining it into something we can all use. The
 existing code I know about includes aws-compat[2], Openstack-EC2[3],


We would be happy to merge our aws-compat[2] project with another project.
 IMHO it is easier to maintain one group project than a meta-tool and a few
child tools.

~Joe


 the tests in nova itself, some experimental code in awsome, and an
 Enstratus test suite. I'm hoping to find out more about the Enstratus
 code, James Urquhart suggested opening the remaining parts would be a
 reasonable step. Is there anything else out there we should look at as
 well?

 Are there any strong opinions over the right way of getting started on
 this?

 Martin


 [1] Nova EC2 compatibility sesson etherpad
 http://etherpad.openstack.org/FolsomEC2Compatibility
 [2] https://github.com/cloudscaling/aws-compat
 [3] https://github.com/yahoo/Openstack-EC2



Re: [Openstack] need help on passing unit/integration tests

2012-05-07 Thread Joe Gordon
Yun,

I just ran the unit tests on a Mac via './run_tests.sh' and could not
reproduce your error.  But then again I was using Python 2.7 and not Python 2.6.

best,
~Joe

On Mon, May 7, 2012 at 9:01 AM, Yun Mao yun...@gmail.com wrote:

 Hi guys,

 I can't get my master branch freshly off github to pass the
 run_test.sh script. The errors are as follows. Tried on mac and ubuntu
 12.04.. Any ideas? Thanks,

 Yun


 ==
 ERROR: test_json (nova.tests.test_log.JSONFormatterTestCase)
 --
 Traceback (most recent call last):
  File /Users/maoy/git/nova/nova/tests/test_log.py, line 183, in test_json
data = json.loads(self.stream.getvalue())
  File
 /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/__init__.py,
 line 307, in loads
return _default_decoder.decode(s)
  File
 /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py,
 line 319, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File
 /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py,
 line 338, in raw_decode
raise ValueError(No JSON object could be decoded)
 ValueError: No JSON object could be decoded
   begin captured logging  
 test-json: DEBUG: This is a log line
 -  end captured logging  -

 ==
 ERROR: test_json_exception (nova.tests.test_log.JSONFormatterTestCase)
 --
 Traceback (most recent call last):
  File /Users/maoy/git/nova/nova/tests/test_log.py, line 207, in
 test_json_exception
data = json.loads(self.stream.getvalue())
  File
 /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/__init__.py,
 line 307, in loads
return _default_decoder.decode(s)
  File
 /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py,
 line 319, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File
 /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py,
 line 338, in raw_decode
raise ValueError(No JSON object could be decoded)
 ValueError: No JSON object could be decoded
   begin captured logging  
 test-json: ERROR: This is exceptional
 Traceback (most recent call last):
  File /Users/maoy/git/nova/nova/tests/test_log.py, line 203, in
 test_json_exception
raise Exception('This is exceptional')
 Exception: This is exceptional
 -  end captured logging  -

 ==
 FAIL: test_deserialize_remote_exception
 (nova.tests.rpc.test_common.RpcCommonTestCase)
 --
 Traceback (most recent call last):
  File /Users/maoy/git/nova/nova/tests/rpc/test_common.py, line 98,
 in test_deserialize_remote_exception
self.assertTrue('test message' in unicode(after_exc))
 AssertionError

 ==
 FAIL: test_deserialize_remote_exception_user_defined_exception
 (nova.tests.rpc.test_common.RpcCommonTestCase)
 --
 Traceback (most recent call last):
  File /Users/maoy/git/nova/nova/tests/rpc/test_common.py, line 127,
 in test_deserialize_remote_exception_user_defined_exception
self.assertTrue('raise FakeUserDefinedException' in unicode(after_exc))
 AssertionError

 ==
 FAIL: test_call_converted_exception
 (nova.tests.rpc.test_kombu.RpcKombuTestCase)
 --
 Traceback (most recent call last):
  File /Users/maoy/git/nova/nova/test.py, line 87, in _skipper
func(*args, **kw)
  File /Users/maoy/git/nova/nova/tests/rpc/test_kombu.py, line 369,
 in test_call_converted_exception
self.assertTrue(value in unicode(exc))
 AssertionError:
   begin captured logging  
 nova.rpc.common: INFO: Connected to AMQP server on localhost:5672
 nova.rpc.common: INFO: Connected to AMQP server on localhost:5672
 nova.rpc.amqp: ERROR: Exception during message handling
 2012-05-07 11:54:32 TRACE nova.rpc.amqp Traceback (most recent call last):
 2012-05-07 11:54:32 TRACE nova.rpc.amqp   File
 /Users/maoy/git/nova/nova/rpc/amqp.py, line 263, in _process_data
 2012-05-07 11:54:32 TRACE nova.rpc.amqp rval =
 node_func(context=ctxt, **node_args)
 2012-05-07 11:54:32 TRACE nova.rpc.amqp   File
 /Users/maoy/git/nova/nova/tests/rpc/common.py, line 264, in
 fail_converted
 2012-05-07 11:54:32 TRACE nova.rpc.amqp raise
 

Re: [Openstack] [OpenStack][Nova] Minimum required code coverage per file

2012-04-26 Thread Joe Gordon
It would be nice to initially see the code coverage delta per merge proposal
as a comment in gerrit (similar to SmokeStack), and not as a gating factor.



Kevin,  should we start copying openstack-common tests to client projects?
 Or just make sure to not count openstack-common code in the code coverage
numbers for client projects?

best,
Joe

On Wed, Apr 25, 2012 at 7:30 PM, Tim Simpson tim.simp...@rackspace.comwrote:

  Great point Justin. I've worked on projects where this has happened
 repeatedly and it's a drag.

  --
 *From:* 
 openstack-bounces+tim.simpson=rackspace@lists.launchpad.net[openstack-bounces+tim.simpson=
 rackspace@lists.launchpad.net] on behalf of Justin Santa Barbara [
 jus...@fathomdb.com]
 *Sent:* Wednesday, April 25, 2012 5:20 PM
 *To:* Monty Taylor

 *Cc:* openstack@lists.launchpad.net
 *Subject:* Re: [Openstack] [OpenStack][Nova] Minimum required code
 coverage per file

  One concern I have is this: suppose we find that a code block is
 unnecessary, or can be refactored more compactly, but it has test coverage.
  Then removing it would make the % coverage fall.

  We want to remove the code, but we'd have to add unrelated tests to the
 same merge because otherwise the test coverage % would fall?

  I think we can certainly enhance the metrics, but I do have concerns
 over strict gating (particularly per file, where the problem is more likely
 to occur than per-project)

  Maybe the gate could be that line count of uncovered lines must not
  increase, unless the new % coverage is above 80%.

  Or we could simply have a gate bypass.
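
The rule sketched above could be expressed as (function name and signature are assumptions for illustration):

```python
# Sketch of the gate suggested above: the count of uncovered lines
# must not grow, unless overall coverage already exceeds the
# threshold.  Names and signature are illustrative assumptions.
def gate_passes(old_uncovered, new_uncovered, new_pct, threshold=80.0):
    return new_uncovered <= old_uncovered or new_pct > threshold
```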

  Justin

 On Wed, Apr 25, 2012 at 2:45 PM, Monty Taylor mord...@inaugust.comwrote:

 Hey - funny story - in responding to Justin I re-read the original email
 and realized it was asking for a static low number, which we _can_ do -
 at least project-wide. We can't do per-file yet, nor can we fail on a
 downward inflection... and I've emailed Justin about that.

 If we have consensus on gating on project-wide threshold, I can
 certainly add adding that to the gate to the todo list. (If we decide to
 do that, I'd really like to make that be openstack-wide rather than just
 nova... although I imagine it might take a few weeks to come to
 consensus on what the project-wide low number should be.

 Current numbers on project-wide lines numbers:

 nova: 79%
 glance: 75%
 keystone: 81%
 swift: 80%
 horizon: 91%

 Perhaps we get nova and glance up to 80 and then set the threshold for 80?

 Also, turns out we're not running this on the client libs...

 Monty

 On 04/25/2012 03:53 PM, Justin Santa Barbara wrote:
   If you let me know in a bit more detail what you're looking for, I can
  probably whip something up.  Email me direct?
 
  Justin
 
 
  On Wed, Apr 25, 2012 at 6:59 AM, Monty Taylor mord...@inaugust.com
   mailto:mord...@inaugust.com wrote:
 
 
 
  On 04/24/2012 10:08 PM, Lorin Hochstein wrote:
  
   On Apr 24, 2012, at 4:11 PM, Joe Gordon wrote:
  
   Hi All,
  
   I would like to propose a minimum required code coverage level
 per
   file in Nova.  Say 80%.  This would mean that any new
 feature/file
   should only be accepted if it has over 80% code coverage.
  Exceptions
   to this rule would be allowed for code that is covered by skipped
   tests (as long as 80% is reached when the tests are not skipped).
  
  
   I like the idea of looking at code coverage numbers. For any
  particular
   merge proposal, I'd also like to know whether it increases or
  decreases
   the overall code coverage of the project. I don't think we should
 gate
   on this, but it would be helpful for a reviewer to see that,
  especially
   for larger proposals.
 
  Yup... Nati requested this a couple of summits ago - main issue is
 that
  while we run code coverage and use the jenkins code coverage plugin
 to
  track the coverage numbers, the plugin doesn't fully support this
  particular kind of report.
 
  HOWEVER - if any of our fine java friends out there want to chat
 with me
  about adding support to the jenkins code coverage plugin to track
 and
  report this, I will be thrilled to put it in as a piece of reported
  information.
 
   With 193 python files in nova/tests, Nova unit tests produce 85%
   overall code coverage (calculated with ./run_test.sh -c [1]).
   But 23%
    of files (125 files) have lower than 80% code coverage (30 tests
   skipped on my machine).  Getting all files to hit the 80% code
   coverage mark should be one of the goals for Folsom.
  
  
   I would really like to see a visualization of the code coverage
   distribution, in order to help spot the outliers.
  
  
   Along these lines, there's been a lot of work in the software
   engineering research community about predicting which parts of the
  code
   are most likely to contain bugs

[Openstack] [OpenStack][Nova] Minimum required code coverage per file

2012-04-24 Thread Joe Gordon
Hi All,

I would like to propose a minimum required code coverage level per file in
Nova.  Say 80%.  This would mean that any new feature/file should only be
accepted if it has over 80% code coverage.  Exceptions to this rule would
be allowed for code that is covered by skipped tests (as long as 80% is
reached when the tests are not skipped).

With 193 python files in nova/tests, Nova unit tests produce 85% overall
code coverage (calculated with ./run_tests.sh -c [1]).  But 23% of files
(125 files) have lower than 80% code coverage (30 tests skipped on my
machine).  Getting all files to hit the 80% code coverage mark should be
one of the goals for Folsom.

Some files with low coverage:

nova/flags 18%

nova/tests/integrated/test_servers 27%

nova/testing/runner 31%

nova/api/ec2/faults 36%

nova/network/quantum/client  36%

nova/network/quantum/melange_connection  38%

nova/openstack/common/iniparser 40%

nova/openstack/common/cfg 41%

nova/tests/db/fakes 41%

nova/console/api  44%

nova/image/s3 50%

nova/api/ec2/__init__  53%

nova/notifier/log_notifier  56%


best,
Joe Gordon



[1] With https://review.openstack.org/#/c/6750/
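
The proposed per-file gate could be sketched as follows (the input format, a mapping of file name to coverage percentage as parsed from a coverage report, is an assumption for illustration):

```python
# Sketch of the proposed per-file coverage gate: given a mapping of
# file name to coverage percentage (e.g. parsed from a coverage
# report), list the files that fall below the threshold.
def files_below_threshold(coverage, threshold=80):
    return sorted(name for name, pct in coverage.items() if pct < threshold)
```

A gate job would then reject a change if this list grows (or is non-empty for new files).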


Re: [Openstack] [Nova] removing nova-direct-api

2012-04-10 Thread Joe Gordon
Looks like a unanimous decision.

Here is the blueprint.

https://blueprints.launchpad.net/nova/+spec/remove-nova-direct-api

best,
Joe


On Mon, Apr 9, 2012 at 3:49 PM, Devin Carlen de...@openstack.org wrote:

 +1  It doesn't have to go home but it can't stay here.


 On Apr 9, 2012, at 12:31 PM, Kevin L. Mitchell wrote:

  On Mon, 2012-04-09 at 11:58 -0700, Vishvananda Ishaya wrote:
  +1 to removal.  I just tested to see if it still works, and due to our
  policy checking and loading objects before sending them into
  compute.api, it no longer functions. Probably wouldn't be too hard to
  fix it, but clearly no one is using it so lets axe it.
 
  Also +1 for removal.  I discovered this thing when I was first trying to
  figure out how the API worked, and it confused me no end…
  --
  Kevin L. Mitchell kevin.mitch...@rackspace.com
 
 


[Openstack] [Nova] removing nova-direct-api

2012-04-09 Thread Joe Gordon
Hi All,

The other day I noticed that in addition to EC2 and OpenStack APIs there is
a third API type: nova-direct-api.  As best I can tell, this was used
early on for development/testing before the EC2 and OpenStack APIs were
mature.

My question is, since most of the code hasn't been touched in over a year
and we have two mature documented APIs, is anyone using this?  If not, I
propose to remove it.


Proposed Change:  https://review.openstack.org/6375


best,
Joe


Re: [Openstack] Caching strategies in Nova ...

2012-03-23 Thread Joe Gordon
+1

Documenting these findings would be nice too.


best,
Joe

On Fri, Mar 23, 2012 at 2:15 PM, Justin Santa Barbara
jus...@fathomdb.comwrote:

 This is great: hard numbers are exactly what we need.  I would love to see
 a statement-by-statement SQL log with timings from someone that has a
 performance issue.  I'm happy to look into any DB problems that
 demonstrates.

 The nova database is small enough that it should always be in-memory (if
 you're running a million VMs, I don't think asking for one gigabyte of RAM
 on your DB is unreasonable!)

 If it isn't hitting disk, PostgreSQL or MySQL with InnoDB can serve 10k
 'indexed' requests per second through SQL on a low-end ($1000) box.  With
 tuning you can get 10x that.  Using one of the SQL bypass engines (e.g.
 MySQL HandlerSocket) can supposedly give you 10x again.  Throwing money at
 the problem in the form of multi-processor boxes (or disks if you're I/O
 bound) can probably get you 10x again.

 However, if you put a DB on a remote host, you'll have to wait for a
 network round-trip per query.  If your ORM is doing a 1+N query, the total
 read time will be slow.  If your DB is doing a sync on every write, writes
 will be slow.  If the DB isn't tuned with a sensible amount of cache (at
 least as big as the DB size), it will be slow(er).  Each of these has a
 very simple fix for OpenStack.

 Relational databases have very efficient caching mechanisms built in.  Any
 out-of-process cache will have a hard time beating it.  Let's make sure the
 bottleneck is the DB, and not (for example) RabbitMQ, before we go off a
 huge rearchitecture.

 Justin




 On Thu, Mar 22, 2012 at 7:53 PM, Mark Washenberger 
 mark.washenber...@rackspace.com wrote:

 Working on this independently, I created a branch with some simple
 performance logging around the nova-api, and individually around
 glance, nova.db, and nova.rpc calls. (Sorry, I only have a local
 copy and its on a different computer right now, and probably needs
 a rebase. I will rebase and publish it on GitHub tomorrow.)

 With this logging, I could get some simple profiling that I found
 very useful. Here is a GH project with the analysis code as well
 as some nova-api logs I was using as input.

 https://github.com/markwash/nova-perflog

 With these tools, you can get a wall-time profile for individual
 requests. For example, looking at one server create request (and
 you can run this directly from the checkout as the logs are saved
 there):

 markw@poledra:perflogs$ cat nova-api.vanilla.1.5.10.log | python
 profile-request.py req-3cc0fe84-e736-4441-a8d6-ef605558f37f
 key                                        count  avg
 nova.api.openstack.wsgi.POST   1  0.657
 nova.db.api.instance_update1  0.191
 nova.image.show1  0.179
 nova.db.api.instance_add_security_group1  0.082
 nova.rpc.cast  1  0.059
 nova.db.api.instance_get_all_by_filters1  0.034
 nova.db.api.security_group_get_by_name 2  0.029
 nova.db.api.instance_create1  0.011
 nova.db.api.quota_get_all_by_project   3  0.003
 nova.db.api.instance_data_get_for_project  1  0.003

 key  count  total
 nova.api.openstack.wsgi  1  0.657
 nova.db.api 10  0.388
 nova.image   1  0.179
 nova.rpc 1  0.059

 All times are in seconds. The nova.rpc time is probably high
 since this was the first call since server restart, so the
 connection handshake is probably included. This is also probably
 1.5 months stale.

 The conclusion I reached from this profiling is that we just plain
 overuse the db (and we might do the same in glance). For example,
 whenever we do updates, we actually re-retrieve the item from the
 database, update its dictionary, and save it. This is double the
 cost it needs to be. We also handle updates for data across tables
 inefficiently, where they could be handled in single database round
 trip.

 In particular, in the case of server listings, extensions are just
 rough on performance. Most extensions hit the database again
 at least once. This isn't really so bad, but it clearly is an area
 where we should improve, since these are the most frequent api
 queries.

 I just see a ton of specific performance problems that are easier
 to address one by one, rather than diving into a general (albeit
 obvious) solution such as caching.
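
The kind of simple per-call wall-time logging described above could be sketched roughly like this (a sketch only: the names and aggregation format are illustrative, not the actual branch's code):

```python
import time
from collections import defaultdict

# Aggregated wall-time stats keyed by call name, in the spirit of
# the per-request profiling described above (names illustrative).
_stats = defaultdict(lambda: {"count": 0, "total": 0.0})

def timed(name):
    """Record call count and total wall time under the given key."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return func(*args, **kwargs)
            finally:
                rec = _stats[name]
                rec["count"] += 1
                rec["total"] += time.time() - start
        return wrapper
    return decorator

def report():
    """Return {name: (count, avg_seconds)} like the table above."""
    return dict((k, (v["count"], v["total"] / v["count"]))
                for k, v in _stats.items())
```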


 Sandy Walsh sandy.wa...@rackspace.com said:

  We're doing tests to find out where the bottlenecks are, caching is the
  most obvious solution, but there may be others. Tools like memcache do a
  really good job of sharing memory across servers so we don't have to
  reinvent the wheel or hit the db at all.
 
  In addition to looking into caching technologies/approaches we're gluing
  together some tools for finding those bottlenecks. Our first step will
  be finding them, then squashing them 

[Openstack-poc] [Bug 954488] Re: nova/openstack/common/cfg.py: 'nova.api.openstack.contrib.standard_extensions' is non-existant

2012-03-13 Thread Joe Gordon
** Also affects: openstack-common
   Importance: Undecided
   Status: New

** Changed in: openstack-common
 Assignee: (unassigned) = Joe Gordon (joe-gordon0)

-- 
You received this bug notification because you are a member of OpenStack
Common Drivers, which is the registrant for openstack-common.
https://bugs.launchpad.net/bugs/954488

Title:
  nova/openstack/common/cfg.py:
  'nova.api.openstack.contrib.standard_extensions' is non-existant

Status in OpenStack Compute (Nova):
  Fix Committed
Status in openstack-common:
  In Progress

Bug description:
  nova/openstack/common/cfg.py   refers to
  'nova.api.openstack.contrib.standard_extensions' but this is a non-
  existant path.

  
  nova/openstack/common/cfg.py:
  
  DEFAULT_EXTENSIONS = [
  'nova.api.openstack.contrib.standard_extensions'
  ]
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/954488/+subscriptions

___
Mailing list: https://launchpad.net/~openstack-poc
Post to : openstack-poc@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-poc
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Enabling data deduplication on Swift

2012-03-10 Thread Joe Gordon
Paulo, Caitlin,


Can SHA-1 collisions be generated?  If so, can you point me to the article?

Also, why compare hashes in the first place?  Linux 'Kernel Samepage
Merging', which does page deduplication for KVM, does a full compare to be
safe [1].  Even if collisions can't be generated, what are the odds of a
collision (for SHA-1 and SHA-256) happening by chance when using Swift at
scale?


best,
Joe Gordon

 




[1] http://www.linux-kvm.com/sites/default/files/KvmForum2008_KSM.pdf
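
A toy sketch of the hash-then-full-compare approach being asked about (not Swift code; SHA-256 plus a byte-for-byte compare on a hash hit mirrors the KSM-style "verify before merging" idea):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: SHA-256 fingerprints plus a full
    byte-for-byte compare on a hash hit.  A sketch, not Swift code.
    """

    def __init__(self):
        self._blobs = {}  # digest -> bytes

    def put(self, data):
        """Store data; return (digest, deduplicated?)."""
        digest = hashlib.sha256(data).hexdigest()
        existing = self._blobs.get(digest)
        if existing is not None:
            # Full compare guards against a (vanishingly unlikely)
            # collision, at the cost of reading the existing blob.
            if existing != data:
                raise ValueError("hash collision detected")
            return digest, True
        self._blobs[digest] = data
        return digest, False

    def get(self, digest):
        return self._blobs[digest]
```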


On Fri, Mar 9, 2012 at 4:44 PM, Caitlin Bestler caitlin.best...@nexenta.com
 wrote:

  Paulo,

 I believe you'll find that we're thinking along the same lines. Please
 review my proposal at http://etherpad.openstack.org/P9MMYSWE6U

 One quick observation is that SHA-1 is totally inadequate for
 fingerprinting objects in a public object store. An attacker could easily
 predict the fingerprint of content likely to be posted, generate alternate
 content that had the same SHA-1 fingerprint and pre-empt
 the signature. For example: an ISO of an open source OS distribution. If I
 get my false content with the same fingerprint into the
 repository first then everyone who downloads that ISO will get my altered
 copy.

 SHA-256 is really needed to make this type of attack infeasible.

 I also think that distributed deduplication works very well with object
 versioning. Your comments on the proposal cited above
 would be great to hear.

 *From:* openstack-bounces+caitlin.bestler=nexenta@lists.launchpad.net
 [mailto:openstack-bounces+caitlin.bestler=nexenta@lists.launchpad.net]
 *On Behalf Of* Paulo Ricardo Motta Gomes
 *Sent:* Thursday, March 08, 2012 1:19 PM
 *To:* openstack@lists.launchpad.net
 *Subject:* [Openstack] Enabling data deduplication on Swift

 Hello everyone,

 I'm a student of the European Master in Distributed Computing (EMDC)
 currently working on my master thesis on distributed content-addressable
 storage/deduplication.

 I'm happy to announce I will be contributing the outcome of my thesis work
 to OpenStack by enabling both object-level and block-level deduplication
 functionality on Swift (
 https://answers.launchpad.net/swift/+question/156862).

 I have written a detailed blog post where I describe the initial
 architecture of my solution:
 http://paulormg.com/2012/03/05/enabling-deduplication-in-a-distributed-object-storage/

 Feedback from the OpenStack/Swift community would be very appreciated.

 Cheers,

 Paulo

 --
 European Master in Distributed Computing - www.kth.se/emdc
 Royal Institute of Technology - KTH
 Instituto Superior Técnico - IST
 http://paulormg.com



Re: [Openstack] please compress the public nova.git repository (~100MB - 30MB)

2012-02-29 Thread Joe Gordon
If you just want to install from source or just look at the current code
you can do a:

git clone git://github.com/openstack/nova.git --depth 1

best,
Joe

On Tue, Feb 28, 2012 at 8:06 AM, Jim Meyering j...@meyering.net wrote:

 Hello,

 Looking at nova.git for the first time, I cloned it.
 Surprised that it was so much larger than the others, and too large for
 the size of the source and the amount of history, I tried to compress it.
 That reduced the repository size from nearly 100MiB to just 30:

  $ git clone git://github.com/openstack/nova.git
  $ git-repo-compress .git
  97M .git
  Counting objects: 101858, done.
  Delta compression using up to 12 threads.
  Compressing objects: 100% (100594/100594), done.
  Writing objects: 100% (101858/101858), done.
  Total 101858 (delta 80708), reused 18388 (delta 0)
  150.91user 81.16system 0:35.05elapsed 662%CPU (0avgtext+0avgdata
 324536maxresident)k
  29056inputs+59312outputs (381major+252932minor)pagefaults 0swaps
  started Tue 2012-02-28 16:40:42 +0100
  Tue 2012-02-28 16:41:17 +0100
  30M .git

 The command I used is just a wrapper around git repack:

  git-repo-compress () {
  local d=$1
  du -sh $d
  start=$(date)
  env time git --git-dir=$d repack -afd --window=250 --depth=250
  echo started $start
  date
  du -sh $d
  }

 Future cloners (and those who push, too) will be
 grateful if someone with access to the server would
 do the same thing to the public nova.git repository.
 Compressing the repo improves server performance, too.

 These other repositories compressed well, too:
  size of .git repo (MiB)
   current  compressed
  swift  9.8M2.3M
  keystone11M9.9M
  horizon4.1M3.2M
  glance 5.2M1.9M
  quantum3.0M1.4M



Re: [Openstack] run_tests.sh (-x | --stop) is broken, optparse and nosetests

2012-02-15 Thread Joe Gordon
Sounds like a plan to me Monty.  Lets take this offline.

best,
Joe

On Tue, Feb 14, 2012 at 8:51 PM, Monty Taylor mord...@inaugust.com wrote:

 Hi!

 On 02/14/2012 07:29 PM, Joe Gordon wrote:
  Hi Developers,
 
  I have been looking at https://bugs.launchpad.net/nova/+bug/931608,
  run_tests.sh (-x | --stop) is broken.  A fix was committed, but it only
  stopped ./run_tests.sh -x from failing; it did not restore the
  ./run_tests.sh -x functionality.

  -x (--stop) is a nosetests parameter that gets passed
  in via nova/testing/runner.py:367.  sys.argv for nova/testing/runner.py
  contains two sets of parameters: nova parameters and arbitrary nosetests
  parameters.  The nova flags (hide-elapsed etc.) are handled via
  'flags.FLAGS.register_cli_opt', and the nosetests parameters are generated
  from the args return value of optparse (nova/openstack/common/cfg.py:768:
  (values, args) = self._oparser.parse_args(self._args) ).

  The problem is that when optparse sees an unknown flag (such as '-x') it
  terminates the program without raising an error.  Additionally, the 'args'
  return value doesn't contain the flags, only the flag arguments (only 'a'
  in '-x a'), so there is no way to pass just the unknown flags on to
  nosetests.  Finally, nosetests uses optparse itself, so passing sys.argv
  with a set of nova arguments to nosetests will cause the tests to break
  too.

  While I can write a hack to look for 'hide-elapsed' or other nova flags,
  I am looking for a more elegant solution.  Should that solution live in
  the cfg module?

 So - you and I may be working in parallel here, but in different
 directions.

 I'm currently working on getting install_venv/with_venv replaced by tox.
 That may not sound related to your issue, but step #2 of that project is
 to get nova/testing/runner.py replaced by pure nosetests (with the
 openstack.nose_plugin installed so that we keep our pretty output).

 My vote is - gimme a hand with that, and we can then pass things
 straight through to their proper place, and we can stop carrying a
 custom test runner.

 Monty




[Openstack] run_tests.sh (-x | --stop) is broken, optparse and nosetests

2012-02-14 Thread Joe Gordon
Hi Developers,

I have been looking at https://bugs.launchpad.net/nova/+bug/931608,
run_tests.sh (-x | --stop) is broken.  A fix was committed, but it only
stopped ./run_tests.sh -x from failing; it did not restore the
./run_tests.sh -x functionality.

-x (--stop) is a nosetests parameter that gets passed in via
nova/testing/runner.py:367.  sys.argv for nova/testing/runner.py contains
two sets of parameters: nova parameters and arbitrary nosetests
parameters.  The nova flags (hide-elapsed etc.) are handled via
'flags.FLAGS.register_cli_opt', and the nosetests parameters are generated
from the args return value of optparse (nova/openstack/common/cfg.py:768:
  (values, args) = self._oparser.parse_args(self._args) ).

The problem is that when optparse sees an unknown flag (such as '-x') it
terminates the program without raising an error.  Additionally, the 'args'
return value doesn't contain the flags, only the flag arguments (only 'a'
in '-x a'), so there is no way to pass just the unknown flags on to
nosetests.  Finally, nosetests uses optparse itself, so passing sys.argv
with a set of nova arguments to nosetests will cause the tests to break
too.

While I can write a hack to look for 'hide-elapsed' or other nova flags, I
am looking for a more elegant solution.  Should that solution live in the
cfg module?
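For what it's worth, optparse's successor argparse has a parse_known_args()
method that returns the unrecognized arguments instead of exiting, which is
exactly the split we need. A minimal sketch of the idea (illustrative only;
nova's cfg module is built on optparse today):

```python
import argparse

# Illustrative sketch: recognize our own flags and collect everything
# else, so the leftovers could be forwarded to nosetests untouched.
# Unlike optparse, parse_known_args() does not exit on an unknown
# option; it returns the unrecognized arguments as a list.
parser = argparse.ArgumentParser()
parser.add_argument('--hide-elapsed', action='store_true')

known, leftover = parser.parse_known_args(['--hide-elapsed', '-x', 'a'])
print(known.hide_elapsed)  # True
print(leftover)            # ['-x', 'a'] -- ready to hand to nosetests
```

That keeps the nova flags and the arbitrary nosetests flags apart without
any knowledge of nosetests' option set.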


best,
Joe Gordon




Re: [Openstack] Memory quota in nova-compute nodes.

2012-01-19 Thread Joe Gordon
Hi Jorge,

I have two questions:

1) Has anyone optimized nova to work in a HPC environment like you
describe? Such as an intelligent scheduler that will generate VMs that
consume x percent of a physical machines resources (so you don't end up
with one machine with two separate Hadoop instances competing for
resources)?

2) Why not use something like
http://en.wikipedia.org/wiki/TORQUE_Resource_Manager or
http://hadoop.apache.org/common/docs/r0.16.4/hod.html?

best,
Joe Gordon


On Thu, Jan 19, 2012 at 10:49 AM, Jorge Luiz Correa corre...@gmail.com wrote:

 Hmm, I'm currently studying cgroups to try this with libvirt and KVM
 (all nodes here are Linux).

 My case is a test that can be very useful for us. We have about 150
 computers spread over the LAN. These computers are desktops and notebooks
 that are underutilized. So our test scenario is not an isolated
 datacenter, which I think is the ideal scenario for private clouds.

 We need to run some simulations that require, most of the time, a lot of
 processor nodes without much memory (for example, to run Hadoop). Since
 the computers have 8 GB or 16 GB of memory and are only used to run
 office applications, they are mostly idle. We are thinking of attaching
 them to the cloud (the controller of a private cloud) and using these
 idle resources. But we want to ensure nova-compute does not interfere
 with the computers' usability (where we can define what counts as
 usability, e.g. 2 cores and 4 GB of memory). These idle resources across
 the LAN can be VERY useful, and they are cheap (they have already been
 purchased)!

 And we have a laboratory with 20 good hosts that is used only during
 certain periods. When the lab isn't in use, we can use all of its hosts'
 resources.

 This is our test scenario.

 Regards.

 :)


 On Thu, Jan 19, 2012 at 4:00 PM, Christian Berendt
 bere...@b1-systems.de wrote:

 Hi Jorge.

  I would like to know if it's possible to configure a quota on each
  nova-compute node. For example, I set up new hardware with 8 GB of
  memory and install nova-compute, but I want only 4 GB of memory to be
  used (dedicated to nova-compute). Is that possible? If so, how can I
  configure it?

 I can't remember such a function at the moment, but it's relatively
 simple to implement such a feature (at least for Linux systems) using
 cgroups.
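 A rough sketch of that cgroups idea (assuming the cgroup v1 memory
 controller is mounted at /sys/fs/cgroup/memory, which varies by
 distribution; the cgroup name is made up here, and this needs root):

```python
import os

# Illustrative sketch: cap a process tree -- e.g. nova-compute and
# whatever it spawns -- at 4 GiB via the cgroup v1 memory controller.
CGROUP = '/sys/fs/cgroup/memory/nova-compute'  # hypothetical group name
LIMIT_BYTES = 4 * 1024 ** 3  # 4 GiB

os.makedirs(CGROUP, exist_ok=True)
with open(os.path.join(CGROUP, 'memory.limit_in_bytes'), 'w') as f:
    f.write(str(LIMIT_BYTES))

# Attach a PID to the cgroup; its children inherit the limit.
with open(os.path.join(CGROUP, 'tasks'), 'w') as f:
    f.write(str(os.getpid()))
```

 Anything in that group (and its descendants) would then be confined to
 the 4 GiB limit, leaving the rest of the host's memory for the desktop.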

 Can you please describe your use case? At the moment I don't see where I
 would use this feature. Why should I install nova-compute on a bare-metal
 system with 32 GByte of memory and only use 16 GByte?

 Bye, Christian.

 --
 Christian Berendt
 Linux / Unix Consultant & Developer
 Mail: bere...@b1-systems.de

 B1 Systems GmbH
 Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
 GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537






 --
 - MSc. Correa, J.L.






[Openstack] Nova HACKING compliance testing tool

2012-01-04 Thread Joe Gordon
Hi Everyone,

I have started working on a tool to test for nova HACKING compliance.
 Although I have implemented only three rules (more on the way), it already
flags 115 HACKING compliance issues.

If you are interested in adding more tests or fixing some current problems,
the code can be found on github (
https://github.com/cloudscaling/nova-HACKING)
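Each rule boils down to a simple line-oriented check. A hypothetical sketch
in the same spirit (the rule codes and logic here are invented for
illustration, not taken from the actual tool):

```python
import re

def check_line(physical_line):
    """Illustrative HACKING-style checks (hypothetical rule codes,
    not the real tool's implementation)."""
    if len(physical_line.rstrip('\n')) > 79:
        yield 'N999: line longer than 79 characters'
    if '\t' in physical_line:
        yield 'N998: tab character found'
    # HACKING asks TODO comments to name an owner: "# TODO(name): ..."
    if re.search(r'#\s*TODO(?!\()', physical_line):
        yield 'N997: TODO without an owner, use TODO(name)'

for msg in check_line('x = 1\t# TODO fix this\n'):
    print(msg)  # flags N998 and N997
```

Running each source line through a pile of small generators like this makes
it easy to contribute new rules one function at a time.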



best,
Joe Gordon