[Openstack] When a tarmac job is failing...

2011-04-12 Thread Soren Hansen
...19 times out of 20, it's because a branch that is approved has
neither a commit message nor a description.

The commit message is what ends up in the changelog. The description
is whatever you want to tell the reviewer to make their lives easier.
If only the description is set, it gets used as the commit message as
well.

Guys... Take those 20 seconds to write a description. I think it's rude
to just throw patches at people and expect them to review them without
context.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



[Openstack] Reminder: OpenStack team meeting - 21:00 UTC

2011-04-12 Thread Thierry Carrez
Hello everyone,

As a reminder, our weekly team meeting will take place at 21:00 UTC
this Tuesday in #openstack-meeting on IRC.

In particular, we'll discuss the green light for RCFreeze and the
Cactus release, or whether we think we can get a better-tested release
with a short delay.

Check out how that time translates for *your* timezone:
http://timeanddate.com/s/207x

See the meeting agenda, and edit the wiki to add new topics for discussion:
http://wiki.openstack.org/Meetings

Cheers,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Proposal for Justin Santa Barbara to join Nova-Core

2011-04-12 Thread Soren Hansen
+1 from me, too.

As per the process, if no one objects within 5 business days (my
interpretation), i.e. before Thursday, I'll get Justin added to the
nova-core team.

-- 
Soren Hansen
Ubuntu Developer    http://www.ubuntu.com/
OpenStack Developer http://www.openstack.org/



[Openstack] distributed and heterogeneous schedulers

2011-04-12 Thread Brian Schott

I'm trying to understand how best to implement our architecture-aware scheduler 
for Diablo:
https://blueprints.launchpad.net/nova/+spec/schedule-instances-on-heterogeneous-architectures

Right now our scheduler is similar in approach to SimpleScheduler, with a few
extra filters on the instances and compute_nodes table queries for the cpu_arch
and xpu_arch fields that we added.  For example, for a -t cg1.4xlarge GPU
instance type, the scheduler reads instance_types.cpu_arch = x86_64 and
instance_types.xpu_arch = fermi, then filters on the corresponding
compute_nodes and instances fields. http://wiki.openstack.org/HeterogeneousInstanceTypes
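
In simplified form, the extra filtering amounts to something like this (a
sketch only, not the actual code in our branch; the host-list plumbing is
elided):

    # Simplified sketch of our arch-aware filtering (illustrative, not
    # the actual branch code): match the instance type's cpu_arch and
    # xpu_arch fields against the fields we added to compute_nodes rows.
    def filter_hosts(instance_type, compute_nodes):
        """Return the compute nodes whose architecture fields match."""
        wanted_cpu = instance_type.get('cpu_arch')   # e.g. 'x86_64'
        wanted_xpu = instance_type.get('xpu_arch')   # e.g. 'fermi'
        matches = []
        for node in compute_nodes:
            if wanted_cpu and node.get('cpu_arch') != wanted_cpu:
                continue
            if wanted_xpu and node.get('xpu_arch') != wanted_xpu:
                continue
            matches.append(node)
        return matches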

That's OK for Cactus, but going beyond that, I'm struggling to reconcile these 
different blueprints:
https://blueprints.launchpad.net/nova/+spec/advanced-scheduler
https://blueprints.launchpad.net/nova/+spec/distributed-scheduler

- How is the instance_metadata table used?  I see the cpu_arch, xpu_arch, and
other fields we added as the same class of data as the vcpus, local_gb, or
mem_mb fields, which is why I put them in the instances table.  Virtualization
type is in a similar class.  I think of metadata as less well-defined
constraints passed to the scheduler, like "near vol-12345678".

- Will your capabilities scheduler, constraint scheduler, and/or distributed 
schedulers understand different available hardware resources on compute nodes?

- Should there be an instance_types_metadata table for things like cpu_arch 
rather than our current approach?  

As long as we can inject a -t cg1.4xlarge at one end and have that get routed 
to a compute node with GPU hardware on the other end, we're not tied to the 
centralized database implementation.

Thanks,
Brian

PS: I sent this to the mailing list a week ago and didn't get a reply, and now
I can't even find it in the openstack list archive.  Is anyone else having
their posts quietly rejected?


Re: [Openstack] distributed and heterogeneous schedulers

2011-04-12 Thread Jay Pipes
Hi Brian, comments inline :)

On Tue, Apr 12, 2011 at 12:34 PM, Brian Schott bfsch...@gmail.com wrote:

 I'm trying to understand how best to implement our architecture-aware 
 scheduler for Diablo:
 https://blueprints.launchpad.net/nova/+spec/schedule-instances-on-heterogeneous-architectures

 Right now our scheduler is similar in approach to SimpleScheduler with a few 
 extra filters on instances and compute_nodes table queries for the cpu_arch 
 and xpu_arch fields that we added.  For example, for -t cg1.4xlarge GPU 
 instance type the scheduler reads instance_types.cpu_arch=x86_64 and 
 instance_types.xpu_arch = fermi, then filters the respective compute_node 
 and instance fields. http://wiki.openstack.org/HeterogeneousInstanceTypes

 That's OK for Cactus, but going beyond that, I'm struggling to reconcile 
 these different blueprints:
 https://blueprints.launchpad.net/nova/+spec/advanced-scheduler
 https://blueprints.launchpad.net/nova/+spec/distributed-scheduler

 - How is the instance_metadata table used?  I see the cpu_arch, xpu_arch 
 and other fields we added as of the same class of data as vcpus, local_gb, or 
 mem_mb fields, which is why I put them in the instances table.  
 Virtualization type is of a similar class.  I think of meta-data as less 
 defined constraints passed to the scheduler like near vol-12345678.

:( I've brought this up before as well. The term metadata is used
incorrectly to refer to custom key/value attributes of something
instead of referring to data about the data (for instance, the type
and length constraints of a data field).

Unfortunately, because the OpenStack API itself uses the term
"metadata", that's what the table was named and that's how key/value
pairs are referred to in the code.

We have at least three choices here:

1) Continue to add fields to the instances table (or compute_nodes
table) for these main attributes like cpu_arch, etc.
2) Use the custom key/value table (instance_metadata) to store these
attribute names and their values
3) Do both 1) and 2)

I would prefer that we use 1) above for fields that are common to all
nodes (and thus can be NOT NULL fields in the database and be properly
indexed), and that all other attributes that are not common to all
nodes use the instance_metadata table.
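
To make the distinction concrete, here is a rough sketch of what I mean,
SQLAlchemy-style, with illustrative table and column names (not the actual
nova schema):

    # Rough sketch; names are illustrative, not the actual nova schema.
    # Well-known attributes become typed, NOT NULL, indexable columns;
    # everything else goes in the generic key/value table.
    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        id = Column(Integer, primary_key=True)
        # 1) common to all nodes: real columns, NOT NULL, indexed
        cpu_arch = Column(String(32), nullable=False, index=True)
        vcpus = Column(Integer, nullable=False)

    class InstanceMetadata(Base):
        __tablename__ = 'instance_metadata'
        id = Column(Integer, primary_key=True)
        instance_id = Column(Integer, ForeignKey('instances.id'))
        # 2) everything that isn't common: free-form key/value pairs
        key = Column(String(255))    # e.g. 'near'
        value = Column(String(255))  # e.g. 'vol-12345678'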

Thoughts?

 - Will your capabilities scheduler, constraint scheduler, and/or distributed 
 schedulers understand different available hardware resources on compute nodes?

I was assuming they would understand different available hardware
resources by querying a database table that housed attributes
pertaining to a single host or a group of hosts (a zone).
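
Something like the following, conceptually (a pure sketch; the table and
column names are made up for illustration):

    # Pure sketch: hosts (or zones) advertise hardware attributes in a
    # table the scheduler can query. Table/column names are invented.
    HOSTS_MATCHING_ARCH = """
        SELECT host
          FROM host_capabilities
         WHERE cpu_arch = :cpu_arch
           AND xpu_arch = :xpu_arch
    """
    # Executed through whatever DB layer the scheduler uses, e.g.
    # connection.execute(text(HOSTS_MATCHING_ARCH),
    #                    cpu_arch='x86_64', xpu_arch='fermi')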

 - Should there be an instance_types_metadata table for things like cpu_arch 
 rather than our current approach?

There could be, if those fields were added as main attributes on the
instances table. If those attributes are added to the
instance_metadata table as custom key/value pairs, then no, that
wouldn't make much sense.

 As long as we can inject a -t cg1.4xlarge at one end and have that get 
 routed to a compute node with GPU hardware on the other end, we're not tied 
 to the centralized database implementation.

I don't see how having the database implementation be centralized or
not affects the above statement. Could you elaborate?

 PS: I sent this to the mailing list a week ago and didn't get a reply, now 
 can't even find this in the openstack list archive.  Anyone else having their 
 posts quietly rejected?

I saw the original, if you are referring to this one:

https://lists.launchpad.net/openstack/msg01645.html

Cheers!
-jay



Re: [Openstack] distributed and heterogeneous schedulers

2011-04-12 Thread Ed Leafe
On Apr 12, 2011, at 12:34 PM, Brian Schott wrote:

 - Will your capabilities scheduler, constraint scheduler, and/or distributed 
 schedulers understand different available hardware resources on compute nodes?


The distributed scheduler has the concept of 'capabilities', and would
be able to select resources based on VM type.
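
Conceptually, the matching boils down to something like this (a sketch
only, not the real code):

    # Conceptual sketch of capability matching (illustrative only): a
    # request names the capabilities it needs, hosts advertise theirs.
    def host_satisfies(requested, advertised):
        """True if the host advertises every requested capability."""
        return all(advertised.get(key) == value
                   for key, value in requested.items())

    # e.g. host_satisfies({'cpu_arch': 'x86_64', 'xpu_arch': 'fermi'},
    #                     {'cpu_arch': 'x86_64', 'xpu_arch': 'fermi',
    #                      'hypervisor': 'kvm'})  # -> True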


-- Ed Leafe






[Openstack] Enhancements to Glance in Diablo? Input welcomed

2011-04-12 Thread Jay Pipes
Hey all,

We're in the planning stages for Diablo now, working on putting
together blueprints, which turn into sessions at the design summit.

I know the Glance team is small and our project narrow in scope, but
it would be great to get some feedback from the list about stuff you'd
like to see included in Glance in the Diablo release.

Some possible thoughts:

* Authn/authz - This is a big one, but dependent on the overall
discussion of federated auth going on in the Nova/Swift communities.
Glance will most likely just follow suit with what Nova does.
* Image conversion. This actually already has a blueprint, but maybe
good for a detailed discussion at the summit? See
https://blueprints.launchpad.net/glance/+spec/image-file-conversion
* Metrics - for instance, tracking operations performed (read/write,
bytes out/in, ...). Would this even be useful?
* Integration with more backend storage systems?
* XML support in the API?
* Having Glance understand what is contained in the disk images by
inspecting them on upload?
* A Glance dashboard app?

Please feel free to expand on any of the above and add any suggestions
you have on the future direction of Glance. Your input is truly
appreciated.

Cheers!
jay



[Openstack] RCFreeze pushed back one day

2011-04-12 Thread Thierry Carrez
Hey everyone,

During the recent meeting we decided to push back RCFreeze by one day,
in order to be able to squash the release-critical issues that were
found in recent testing.

RCFreeze will occur at the end of the day on Wednesday. The bugs we'll
work on during that extra time are:

https://launchpad.net/nova/+milestone/cactus-rc (5 bugs)
https://launchpad.net/glance/+milestone/cactus-rc (4 bugs)
https://launchpad.net/swift/+milestone/1.3-rc (empty)

The release schedule was modified accordingly at:
http://wiki.openstack.org/CactusReleaseSchedule

Cheers,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack
