Re: [openstack-dev] [charms] Deployment guide stable/rocky cut

2018-08-31 Thread Edward Hope-Morley
Hi Frode, I think it would be a good idea to add a link to the charm
deployment guide at the following page:

https://docs.openstack.org/rocky/deploy/

- Ed


On 17/08/18 08:47, Frode Nordahl wrote:
> Hello OpenStack charmers,
>
> I am writing to inform you that a `stable/rocky` branch has been cut
> for the `openstack/charm-deployment-guide` repository.
>
> Should there be any further updates to the guide before the release
> the changes will need to be landed in `master` and then back-ported to
> `stable/rocky`.
>
> -- 
> Frode Nordahl
> Software Engineer
> Canonical Ltd.
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Edward Hope-Morley
Moving this to the ml as requested; I would appreciate
comments/thoughts/feedback.

So, I recently proposed a small patch to the oslo rpc code (initially in
oslo-incubator then moved to oslo.messaging) which extends the existing
support for limiting the rpc thread pool so that concurrent requests can
be limited based on type/method. The blueprint and patch are here:

https://blueprints.launchpad.net/oslo.messaging/+spec/rpc-concurrency-control

The basic idea is that if you have a server with limited resources you may
want to restrict operations that would impact those resources, e.g. live
migrations on a specific hypervisor or volume formatting on a particular
volume node. This patch allows you, admittedly in a very crude way, to
apply a fixed limit to a set of rpc methods. I would like to know
whether or not people think this sort of thing would be useful, or
whether it alludes to a more fundamental issue that should be dealt with
in a different manner.
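For illustration, the kind of per-method cap described above could be
sketched as a semaphore-based decorator. This is a hypothetical sketch of
the idea only; the actual blueprint patches the oslo rpc dispatcher rather
than using a decorator, and the names here are made up:

```python
import functools
import threading

def limit_concurrency(max_concurrent):
    """Cap how many invocations of an rpc method may run at once.

    Hypothetical sketch of the blueprint's idea; not the real patch.
    """
    sem = threading.Semaphore(max_concurrent)

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            with sem:  # excess callers block here until a slot frees up
                return func(*args, **kwargs)
        return wrapper
    return decorator

@limit_concurrency(2)  # e.g. at most two concurrent live migrations
def live_migrate(instance_id):
    return "migrating %s" % instance_id
```

Note that, as discussed later in the thread, callers blocked on the
semaphore still occupy a thread from the shared rpc pool.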

Thoughts?

Ed.



Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Edward Hope-Morley
On 27/11/13 15:49, Daniel P. Berrange wrote:
 On Wed, Nov 27, 2013 at 02:45:22PM +, Edward Hope-Morley wrote:
 [...]
 Based on this description of the problem I have some observations

  - I/O load from the guest OS itself is just as important to consider
as I/O load from management operations Nova does for a guest. Both
have the capability to impose denial-of-service on a host. IIUC, the
flavour specs have the ability to express resource constraints for
the virtual machines to prevent a guest-OS-initiated DOS attack.

  - I/O load from live migration is attributable to the running
virtual machine. As such I'd expect that any resource controls
associated with the guest (from the flavour specs) should be
applied to control the load from live migration.

Unfortunately life isn't quite this simple with KVM/libvirt
currently. For networking we've associated each virtual TAP
device with traffic shaping filters. For migration you have
to set a bandwidth cap explicitly via the API. For network
based storage backends, you don't directly control network
usage, but instead I/O operations/bytes. Ultimately though
there should be a way to enforce limits on anything KVM does;
similarly I expect other hypervisors can do the same.

  - I/O load from operations that Nova does on behalf of a guest
that may be running, or may be yet to launch. These are not
directly known to the hypervisor, so existing resource limits
won't apply. Nova however should have some capability for
applying resource limits to I/O intensive things it does and
somehow associate them with the flavour limits, or some global
per-user cap perhaps.

 Thoughts?
 Overall I think that trying to apply caps on the number of API calls
 that can be made is not really a credible way to avoid users inflicting
 a DOS attack on the host OS. Not least because it does nothing to control
 what a guest OS itself may do. If you do caps based on the number of API
 calls in a time period, you end up having to do an extremely pessimistic
 calculation - basically you have to consider the worst case for any single
 API call, even if most don't hit the worst case. This is going to hurt
 scalability of the system as a whole IMHO.

 Regards,
 Daniel
Daniel, thanks for this. These are all valid points and essentially tie
in with the fundamental issue of dealing with DOS attacks, but for this bp
I actually want to stay away from that area. That is, this is not intended
to solve any tenant-based attack issues in the rpc layer (although that
definitely warrants a discussion, e.g. how do we stop a single tenant
from consuming the entire thread pool with requests). Rather, I'm
thinking more from a QOS perspective, i.e. to allow an admin to account
for a resource bias, e.g. a slow RAID controller, on a given node (not
necessarily Nova/HV) which could be alleviated with this sort of crude
rate limiting. Of course, one problem with this approach is that
blocked/limited requests still reside in the same pool as other requests,
so if we did want to use this it may be worth considering offloading
blocked requests or giving them their own pool altogether.
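The "own pool altogether" idea could look roughly like the following
hypothetical sketch, which routes a configured set of rate-limited methods
to a small dedicated executor so that blocked requests cannot starve the
main rpc pool (all names and the method set here are illustrative, not
from the actual patch):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: limited methods get their own small pool so that
# callers waiting on a slow resource do not consume main-pool threads.
MAIN_POOL = ThreadPoolExecutor(max_workers=64)
LIMITED_POOL = ThreadPoolExecutor(max_workers=2)  # e.g. slow RAID node
LIMITED_METHODS = {"create_volume", "copy_volume_to_image"}

def dispatch(method_name, func, *args, **kwargs):
    """Submit an rpc call to the appropriate pool."""
    pool = LIMITED_POOL if method_name in LIMITED_METHODS else MAIN_POOL
    return pool.submit(func, *args, **kwargs)
```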

...or maybe this is just pie in the sky after all.

Ed.



Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Edward Hope-Morley
On 27/11/13 17:43, Daniel P. Berrange wrote:
 [...]
 I don't think it is valid to ignore tenant-based attacks in this. You
 have a single resource here and it can be consumed by the tenant
 OS, by the VM associated with the tenant or by Nova itself. As such,
 IMHO adding rate limiting to Nova APIs alone is a non-solution because
 you've still left it wide open to starvation by any number of other
 routes which are arguably even more critical to address than the API
 calls.

 Daniel
Daniel, maybe I have misunderstood you here but with this optional
extension I am (a) not intending to solve DOS issues and (b) not
ignoring DOS issues since I do not expect to be adding any beyond or
accentuating those

Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Edward Hope-Morley
On 27/11/13 18:20, Daniel P. Berrange wrote:
 [...]
 Daniel, maybe I have misunderstood you here but with this optional
 extension I am (a) not intending

Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Edward Hope-Morley
On 27/11/13 19:34, Daniel P. Berrange wrote:
 [...]

Re: [openstack-dev] [Cinder] PTL candidacy

2013-09-27 Thread Edward Hope-Morley
+1

Duncan has always been helpful and insightful, as well as keen to
maintain good standards within the project, and while the competition is
strong I am happy to support Duncan's candidacy.

On 26/09/13 16:44, Duncan Thomas wrote:
 I would like to run for election as Cinder PTL for the upcoming
 Icehouse release.

 I've been involved with OpenStack for more than 2 years; I've been an
 active and vocal member of the Cinder core team since cinder was
 formed and have contributed variously to debates, reviews, designs
 and code. Before that I was involved with high-performance compute
 clusters and networking, both as a developer and from an ops
 perspective.

 I think Cinder is a strong and healthy project, and I'd like to
 continue to build on the great work John Griffith has been doing as
 PTL. We have at least 16 different back-ends supported, and have been
 very successful in allowing many levels of contribution and
 involvement.

 If elected, my main drives for the Icehouse release will be:

 - Cross project coordination - several features have suffered somewhat
 from the fact that coordination is needed between cinder and other
 projects, particularly nova and horizon. I'd like to work with the PTL
 and core team of those projects to see what we can do to better align
 expectations and synchronisation between projects, so that features
 like volume encryption, read-only volumes, ACLs etc. can be landed
 more smoothly

 - Deployment issues - several large companies now deploy code from
 trunk between releases, and perform regular rolling releases. I'd like
 to focus on what makes that difficult and what we can do in terms of
 reviews, testing and design to make that a smoother process. This
 includes tying into Oslo and other projects that are working on this.
 Task-flow is a good example of a project that made significant useful
 progress by working with cinder as a first user before moving out to
 other projects.

 - Grow the cinder community, and encourage new contributions in the form
 of testing and validation as well as new features. Generally keep the
 fantastic inclusive nature of the cinder project going, and encourage
 the healthy debates that have allowed us to come up with great
 solutions.

 - Blueprint management - Many blueprints are currently very thin
 indeed, often no more than a sentence or two. I'd like to see more
 push-back on blueprints that do not provide a reasonable amount of detail
 before the code comes along, in order to allow discussion and debate
 earlier in the development cycle.

 There are many other sub-projects within cinder, such as driver
 validation, that I support and intend to do my best to see succeed.







[openstack-dev] [Glance] v2 api upload image-size issue with rbd backend store

2013-09-04 Thread Edward Hope-Morley
Hi,

I'm hitting an issue with v2 api upload() and not sure the best way to
fix it so would appreciate some opinions/suggestions.

https://bugs.launchpad.net/glance/+bug/1213880
https://bugs.launchpad.net/python-glanceclient/+bug/1220197

So, currently doing cinder upload-to-image fails with the v2 glance api
and the RBD backend store. This is because v2 uses upload() (as opposed to
update() in v1) and does not accept an image size. The v2 Glance api
upload() implementation checks the request content-length (which is
currently always zero), then tries to create an RBD image of size
zero and write to it, which fails. I have tried different solutions:

1. if image size is zero, resize for each chunk then write.

2. set content-length in glanceclient to size of image

Problem with 1 is that it implicitly disables 'Transfer-Encoding:
chunked', i.e. disables chunking. Problem with 2 is that you get 2 RTTs of
network latency per write, plus the overhead of a resize.

So, I now think the best way to do this would be to modify the upload
call to allow the glanceclient to send x-image-meta-size so that the
backend knows how big the image will be, create the image, then write the
chunk(s) incrementally (kind of like the swift store).
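To make the suggestion concrete, here is a rough sketch of the backend
side once the client announces the image size up front. Everything here
(the store classes and their APIs) is a hypothetical stand-in for an
RBD-like store, not real Glance code:

```python
class FakeImage:
    """Stand-in for an RBD image handle (hypothetical API)."""
    def __init__(self, buf):
        self.buf = buf

    def write(self, offset, chunk):
        self.buf[offset:offset + len(chunk)] = chunk

class FakeStore:
    """Stand-in for an RBD-like store that must know sizes at create time."""
    def __init__(self):
        self.images = {}

    def create(self, image_id, size):
        buf = bytearray(size)  # allocate once, at the announced size
        self.images[image_id] = buf
        return FakeImage(buf)

def write_image(store, image_id, data_iter, announced_size):
    """Create the image at its announced size, then stream chunks in,
    keeping chunked transfer encoding intact and avoiding the per-chunk
    resizes of option 1."""
    image = store.create(image_id, announced_size)
    offset = 0
    for chunk in data_iter:
        image.write(offset, chunk)
        offset += len(chunk)
    return offset
```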

Suggestions?

Ed.



Re: [openstack-dev] Cinder Backup documentation - to which doc it should go?

2013-09-04 Thread Edward Hope-Morley
Hi Ronan,

I have a bug open (which I am guilty of letting slip) to amend the cinder
backup docs:

https://bugs.launchpad.net/openstack-manuals/+bug/1205359

I implemented the Ceph backup driver a while back and was intending to
clean up the backup section in the docs while adding info on backup to
Ceph. Did you get round to implementing your changes? If so, can we get
info on ceph backup in there too? (If not, I'll get my arse in gear and
do it myself.)

Ed.

On 03/09/13 11:50, Ronen Kat wrote:

 I noticed the complaints about code submissions without accompanying
 documentation, so I am ready to do my part for Cinder backup.
 I have just one little question.
 Not being up to date on the current set of OpenStack manuals, and as I
 noticed that the block storage admin guide lost a lot of content, to which
 document(s) should I add the Cinder backup documentation?

 The documentation includes:
 1. Backup configuration
 2. General description of Cinder backup (commands, features, etc)
 3. Description of the available backup drivers

 Should all three go to the same place? Or different documents?

 Thanks,

 Regards,
 __
 Ronen I. Kat
 Storage Research
 IBM Research - Haifa
 Phone: +972.3.7689493
 Email: ronen...@il.ibm.com






[openstack-dev] [cinder] lost exception context

2013-07-08 Thread Edward Hope-Morley
Hi all,

I just noticed the following bug -
https://bugs.launchpad.net/cinder/+bug/1197648

which is very similar to an issue I recently looked at in Swift -
https://bugs.launchpad.net/swift/+bug/1181146

Basically these problems are the result of a bug in
python-eventlet < 0.13, i.e.
https://bitbucket.org/eventlet/eventlet/issue/149/yield-in-except-clause-with-wilcard-raise

This has now been fixed and released, so please update your package and
hopefully these issues will cease.
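For reference, the problematic pattern from the linked eventlet issue
looks like the following illustrative sketch. Modern CPython preserves the
exception context across the yield; the pre-0.13 eventlet bug meant a
greenlet switch at that point could clobber it, so the bare raise would
re-raise the wrong exception:

```python
def backup_volume():
    """Illustration of the pattern: a yield (a potential greenlet
    switch point under eventlet) inside an except clause, followed by
    a bare re-raise that relies on the saved exception context."""
    try:
        raise ValueError("original error")
    except ValueError:
        yield "cleaning up"  # under pre-0.13 eventlet, another greenlet
                             # could run here and overwrite the current
                             # exception context
        raise  # bare raise: re-raises whatever exception context is current
```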

Ed.
