[openstack-dev] [swift] debug output truncated

2013-10-17 Thread Snider, Tim
I have swift version 1.9.1-dev loaded. The debug output listing the first curl
command is truncated. Is there any way to display the full command that was
issued? Has this been corrected in a later version?
swift --verbose --debug -V 1.0 -A http://10.113.193.189/auth -U rados:swift  -K 
123 list
DEBUG:swiftclient:REQ: curl -i http://10.113.193.189/auth -X GET
-TRUNCATED

DEBUG:swiftclient:RESP STATUS: 204

DEBUG:swiftclient:REQ: curl -i http://10.113.193.189/swift/v1?format=json -X 
GET -H X-Auth-Token: 
AUTH_rgwtk0b007261646f733a7377696674858977c11983ed06452e6152a8f52212be6929858c8738ce0c0c6c7950c30c3abdf6162e

DEBUG:swiftclient:RESP STATUS: 200

DEBUG:swiftclient:RESP BODY: []


python -c 'import swift; print swift.__version__'
1.9.1-dev
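
(Note: the request that gets truncated is just the initial v1.0 auth call, so it
can be reproduced by hand in the meantime. Assuming the usual v1.0 auth headers
that the client sends, something like

curl -v -H 'X-Auth-User: rados:swift' -H 'X-Auth-Key: 123' http://10.113.193.189/auth

will show the full request and response headers, including the storage URL and
token that the later requests use.)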

Thanks,
Tim




[openstack-dev] [swift] 503 Service Unavailable errors

2013-09-16 Thread Snider, Tim
When I'm doing large transfers, Swift often returns 503 errors with
"proxy-server Object PUT exceptions during send, 1/2 required connections" in
the log file.
Is this an indication of network issues, or can someone explain the cause and a
possible solution?
Thanks

*   Trying 192.168.10.90... connected
  % Total    % Received % Xferd  Average Speed   Time     Time     Time  Current
                                 Dload  Upload   Total    Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
> PUT /v1/AUTH_test/load/1gbfile51_0 HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 192.168.10.90:8080
> Accept: */*
> X-Auth-Token: AUTH_tk422c579f97bb4da69528820add184204
> Content-Length: 1073741824

} [data not shown]
  0 1024M    0     0    0 3040k      0  3304k  0:05:17 --:--:--  0:05:17 3307k
  0 1024M    0     0    0 4512k      0  2058k  0:08:29  0:00:02  0:08:27 2060k
  0 1024M    0     0    0 5200k      0  1772k  0:09:51  0:00:02  0:09:49 1772k
  0 1024M    0     0    0 6544k      0  1668k  0:10:28  0:00:03  0:10:25 1669k
  0 1024M    0     0    0 8960k      0  1835k  0:09:31  0:00:04  0:09:27 1835k
  1 1024M    0     0    1 12.7M      0  2215k  0:07:53  0:00:05  0:07:48 2012k
  1 1024M    0     0    1 19.2M      0  2859k  0:06:06  0:00:06  0:06:00 3233k
  2 1024M    0     0    2 22.0M      0  2768k  0:06:18  0:00:08  0:06:10 3327k
  2 1024M    0     0    2 26.4M      0  3058k  0:05:42  0:00:08  0:05:34 4159k
  2 1024M    0     0    2 29.5M      0  2905k  0:06:00  0:00:10  0:05:50 3851k
  2 1024M    0     0    2 29.5M      0  2650k  0:06:35  0:00:11  0:06:24 3113k
  2 1024M    0     0    2 29.5M      0  2436k  0:07:10  0:00:12  0:06:58 1907k
  2 1024M    0     0    2 29.5M      0  2254k  0:07:45  0:00:13  0:07:32 1455k
  2 1024M    0     0    2 29.5M      0  2097k  0:08:19  0:00:14  0:08:05  560k
  2 1024M    0     0    2 29.5M      0  1961k  0:08:54  0:00:15  0:08:39     0
  2 1024M    0     0    2 29.5M      0  1841k  0:09:29  0:00:16  0:09:13     0
  2 1024M    0     0    2 29.5M      0  1736k  0:10:04  0:00:17  0:09:47     0
  2 1024M    0     0    2 29.5M      0  1641k  0:10:38  0:00:18  0:10:20     0
< HTTP/1.1 503 Service Unavailable
< Content-Length: 212
< Content-Type: text/html; charset=UTF-8
< X-Trans-Id: tx3c2f6133fc2e4b43bebc939aea2ae17f
< Date: Mon, 16 Sep 2013 02:02:37 GMT
* HTTP error before end of send, stop sending

{ [data not shown]
  2 1024M  100   212    2 29.5M     11  1586k  0:11:00  0:00:19  0:10:41     0
  2 1024M  100   212    2 29.5M     11  1586k  0:11:00  0:00:19  0:10:41     0
* Closing connection #0
<html>
<head>
  <title>503 Service Unavailable</title>
</head>
<body>
  <h1>503 Service Unavailable</h1>
  The server is currently unavailable. Please try again at a later time.<br /><br />
</body>


ssh -i /root/.ssh/id_rsa  root@10.113.193.90 grep 
tx3c2f6133fc2e4b43bebc939aea2ae17f /var/log/swift/*
/var/log/swift/proxy.error:Sep 15 19:02:37 swift14 proxy-server Object PUT 
exceptions during send, 1/2 required connections (txn: 
tx3c2f6133fc2e4b43bebc939aea2ae17f) (client_ip: 192.168.10.69)
/var/log/swift/proxy.log:Sep 15 19:02:37 swift14 proxy-server 192.168.10.69 
192.168.10.69 16/Sep/2013/02/02/37 PUT /v1/AUTH_test/load/1gbfile51_0 HTTP/1.0 
503 - 
curl/7.22.0%20%28x86_64-pc-linux-gnu%29%20libcurl/7.22.0%20OpenSSL/1.0.1%20zlib/1.2.3.4%20libidn/1.23%20librtmp/2.3
 test%2CAUTH_tk422c579f97bb4da69528820add184204 29556736 212 - 
tx3c2f6133fc2e4b43bebc939aea2ae17f - 19.0232 -
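
A follow-up step worth trying (only a sketch; the storage-node addresses and
object-server log names below are assumptions patterned on the proxy.error path
above) is to grep the same transaction ID on the object servers, to see which of
the backend connections dropped during the PUT:

for h in 192.168.10.91 192.168.10.92; do
  ssh -i /root/.ssh/id_rsa root@$h "grep tx3c2f6133fc2e4b43bebc939aea2ae17f /var/log/swift/object*"
done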


Thanks,
Tim


[openstack-dev] [swift] ssbench - recreating connection messages on both master and remote clients

2013-09-12 Thread Snider, Tim
I'm running ssbench with remote clients. The following message appears on all
remote nodes and on the master. Is this an indication of networking problems? If
so, what should I look at?

INFO:root:ConnectionPool: re-creating connection...
INFO:root:{'block_size': None, 'container': 'ssbench_000194', 'name': 
'500mb.1_000211', 'size_str': '500mb.1', 'network_timeout': 20.0, 
'auth_kwargs': {'insecure': '', 'storage_urls': None, 'token': None, 
'auth_version': '1.0', 'os_options': {'region_name': None, 'tenant_id': None, 
'auth_token': None, 'endpoint_type': None, 'tenant_name': None, 'service_type': 
None, 'object_storage_url': None}, 'user': 'test:tester', 'key': 'testing', 
'cacert': None, 'auth_url': 'http://192.168.10.68:8080/auth/v1.0'}, 
'head_first': False, 'type': 'upload_object', 'connect_timeout': 10.0, 'size': 
524288000} succeeded after 8 tries

Thanks,
Tim

Timothy Snider
Strategic Planning & Architecture - Advanced Development
NetApp
316-636-8736 Direct Phone
316-213-0223 Mobile Phone
tim.sni...@netapp.com
netapp.com (http://www.netapp.com/?ref_source=eSig)



[openstack-dev] [Swift] Small cluster size

2013-09-05 Thread Snider, Tim
I'd like to get input from the community on a 'realistic' size of a small Swift
cluster that might be deployed and used in the field for production. SAIO / test
/ lab setups aren't a consideration. I'm interested in hearing about both
private and public cluster sizes that are deployed for production use. Four
nodes or fewer seems pretty small - 6 or 8 seems like a more realistic size for
a small cluster. But I don't have any actual data or customer experience to back
those assumptions.
Followup questions:
Given that cluster size,  do all nodes act as both Swift proxy and storage 
nodes? I assume they do.
How big does a cluster get before node roles are separated?
Thanks for the input,
Tim



Re: [openstack-dev] [swift] temp auth tokens expire immediately

2013-08-21 Thread Snider, Tim
OK - in /etc/memcache.conf, the IP to listen on needs to be 0.0.0.0, not the
actual node IP address:
# IP to listen on
-l 0.0.0.0
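
The listen address also has to line up with what the proxies expect; for
reference, a typical cache section in /etc/swift/proxy-server.conf looks roughly
like the sketch below (the IPs are placeholders, not taken from this cluster):

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.10.87:11211,192.168.10.88:11211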

Thx.
From: Snider, Tim
Sent: Wednesday, August 21, 2013 5:08 PM
To: openstack-dev@lists.openstack.org
Subject: [swift] temp auth tokens expire immediately

I've reconfigured my swift cluster and restarted the memcache service on all the
swift nodes. Now I have temp auth tokens expiring immediately.
I've checked the memcache server lines in /etc/swift/proxy-server.conf - those
seem to be sane. What am I missing?


root@controller11:~/ssbench-0.2.16#
curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' 
http://192.168.10.68:8080/auth/v1.0
* About to connect() to 192.168.10.68 port 8080 (#0)
*   Trying 192.168.10.68... connected
> GET /auth/v1.0 HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 192.168.10.68:8080
> Accept: */*
> X-Storage-User: test:tester
> X-Storage-Pass: testing

< HTTP/1.1 200 OK
< X-Storage-Url: http://192.168.10.88:8080/v1/AUTH_test
< X-Storage-Token: AUTH_tk768c95e7330f46efa88d82787a34e9e4
< X-Auth-Token: AUTH_tk768c95e7330f46efa88d82787a34e9e4
< X-Trans-Id: tx450df531523b450eaa966b3365250f80
< Content-Length: 0
< Date: Wed, 21 Aug 2013 22:00:38 GMT

* Connection #0 to host 192.168.10.68 left intact
* Closing connection #0
root@controller11:~/ssbench-0.2.16#
curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' 
http://192.168.10.68:8080/auth/v1.0
* About to connect() to 192.168.10.68 port 8080 (#0)
*   Trying 192.168.10.68... connected
> GET /auth/v1.0 HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 192.168.10.68:8080
> Accept: */*
> X-Storage-User: test:tester
> X-Storage-Pass: testing

< HTTP/1.1 200 OK
< X-Storage-Url: http://192.168.10.87:8080/v1/AUTH_test
< X-Storage-Token: AUTH_tk21ef1ade731e462e91d8d6d3a1d8cbbe
< X-Auth-Token: AUTH_tk21ef1ade731e462e91d8d6d3a1d8cbbe
< X-Trans-Id: txe513acba005f42d298eb7c563bb1e6a3
< Content-Length: 0
< Date: Wed, 21 Aug 2013 22:00:39 GMT

* Connection #0 to host 192.168.10.68 left intact
* Closing connection #0
root@controller11:~/ssbench-0.2.16#
curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' 
http://192.168.10.68:8080/auth/v1.0
* About to connect() to 192.168.10.68 port 8080 (#0)
*   Trying 192.168.10.68... connected
> GET /auth/v1.0 HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 192.168.10.68:8080
> Accept: */*
> X-Storage-User: test:tester
> X-Storage-Pass: testing

< HTTP/1.1 200 OK
< X-Storage-Url: http://192.168.10.88:8080/v1/AUTH_test
< X-Storage-Token: AUTH_tkf433982fff3d484591826d1a02e54f7c
< X-Auth-Token: AUTH_tkf433982fff3d484591826d1a02e54f7c
< X-Trans-Id: txc009e567a213466a9de01ab14004c053
< Content-Length: 0
< Date: Wed, 21 Aug 2013 22:00:41 GMT

* Connection #0 to host 192.168.10.68 left intact
* Closing connection #0
root@controller11:~/ssbench-0.2.16#

Thanks,
Tim


[openstack-dev] Swift storage allocation

2013-07-18 Thread Snider, Tim
Are there LUN sizing recommendations / ratios for allocating storage between
containers, accounts, and objects? I.e., given 1 TB of total storage capacity,
how many LUNs should be created and how should they be allocated?

The main use case is probably a consideration -- backups might require less
account and container storage and more object storage than consumer storage
usage, where there would be more individual customers / accounts.
I thought there might be some general guidelines.
The alternative would be not to differentiate; then Swift would use storage as
needed.
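
For context (a sketch, not a recommendation - the device names, ports, and
weights below are made up): the split between account, container, and object
storage in Swift comes from which devices are added to each of the three rings,
so "not differentiating" simply means listing the same devices in all three ring
builders, e.g.:

swift-ring-builder account.builder   add r1z1-192.168.10.87:6002/sdb1 100
swift-ring-builder container.builder add r1z1-192.168.10.87:6001/sdb1 100
swift-ring-builder object.builder    add r1z1-192.168.10.87:6000/sdb1 100

followed by a rebalance of each builder.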

Thanks,
Tim

Timothy Snider
Strategic Planning & Architecture - Advanced Development
NetApp
316-636-8736 Direct Phone
316-213-0223 Mobile Phone
tim.sni...@netapp.com
netapp.com (http://www.netapp.com/?ref_source=eSig)



Re: [openstack-dev] swift-bench 1.9.1-dev - AttributeError: Values instance has no attribute 'containers'

2013-07-10 Thread Snider, Tim
Oops - that's a swift-bench error, not an ssbench error.
Sorry.

From: Snider, Tim
Sent: Tuesday, July 09, 2013 9:23 PM
To: openstack-dev@lists.openstack.org
Subject: swift-bench 1.9.1-dev - AttributeError: Values instance has no 
attribute 'containers'


I recently downloaded swift 1.9.1-dev.

swift-bench gets the following error. What can I change to get this working
successfully?

Thanks,

Tim



root@controller21:~/ssbench-0.2.16#
python -c 'import swift; print swift.__version__'
1.9.1-dev
root@controller21:~/ssbench-0.2.16#

swift-bench -A http://localHost:8080/auth/v1.0 -K testing  -U test:tester -s 10 
-n 2 -g 1
swift-bench 2013-07-09 19:17:00,338 INFO Auth version: 1.0
Traceback (most recent call last):
  File "/usr/bin/swift-bench", line 149, in <module>
    controller.run()
  File "/root/swift/swift/common/bench.py", line 372, in run
    puts = BenchPUT(self.logger, self.conf, self.names)
  File "/root/swift/swift/common/bench.py", line 450, in __init__
    self.containers = conf.containers
AttributeError: Values instance has no attribute 'containers'


[openstack-dev] Swift debugging / performance - large latencies seen.

2013-07-09 Thread Snider, Tim
I have 2 openstack clusters running the Folsom release with multiple Swift 
nodes. I also have a small setup that is running only Swift with a single node. 
 I'm noticing very large Swift I/O latencies (seconds long) on the openstack 
clusters - ssbench output snippet is below. Performance is approximately 
identical on the openstack clusters. The Swift only cluster performs much 
better.
Setup differences:
Openstack clusters are using Keystone authentication - Swift only setup uses 
temp auth.
Multiple Swift nodes on openstack clusters - Swift only has single node.

I also see that all object I/Os are being sent to the 2nd Swift node; there are
no objects on the 1st Swift node. Both nodes are running swift-proxy services.
I have LOG_LOCAL0 set in the environment and also logging enabled in 
/etc/swift/swift.conf, but haven't seen any log entries made.

What can I look at to debug the cause of the excessive latencies on both of 
these stacks?
I'd also like to determine why all objects are on the 2nd swift node and none 
on the 1st node.
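
For the placement question, two read-only checks (a sketch; the paths are the
usual defaults and the account/container/object names are placeholders) show
where Swift thinks data should land and how the devices are weighted:

swift-ring-builder /etc/swift/object.builder
swift-get-nodes /etc/swift/object.ring.gz AUTH_test some_container some_object

If the first node's devices are missing from the object ring or carry zero
weight, everything will end up on the second node.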

I'm supposed to do performance evaluations but need to fix the latency problem 
first.

Ssbench output snippet:
TOTAL
   Count:    50  Average requests per second:   9.2
                             min        max       avg    std_dev   95%-ile                     Worst latency TX ID
   First-byte latency:    0.067 -    2.513     0.390  (  0.604)     1.948  (all obj sizes)   txae75691d37d544b4ac0cfe3b8cba7f38
   Last-byte  latency:    0.067 -    3.337     0.430  (  0.695)     1.997  (all obj sizes)   txdcedb82227654b338daa85751f6d1232
   First-byte latency:    0.070 -    2.513     0.542  (  0.749)     2.255  (   tiny objs)    txae75691d37d544b4ac0cfe3b8cba7f38
   Last-byte  latency:    0.070 -    2.514     0.468  (  0.659)     1.997  (   tiny objs)    txae75691d37d544b4ac0cfe3b8cba7f38
   First-byte latency:    0.067 -    1.884     0.251  (  0.382)     0.695  (  small objs)    tx2ceec827f3304530b01a0d5993eea2e8
   Last-byte  latency:    0.067 -    3.337     0.385  (  0.732)     1.884  (  small objs)    txdcedb82227654b338daa85751f6d1232

Thanks,
Tim

Timothy Snider
Strategic Planning & Architecture - Advanced Development
NetApp
316-636-8736 Direct Phone
316-213-0223 Mobile Phone
tim.sni...@netapp.com
netapp.com (http://www.netapp.com/?ref_source=eSig)




Re: [openstack-dev] Swift debugging / performance - large latencies seen.

2013-07-09 Thread Snider, Tim
Thanks for the hint. Is there documentation on how to run Keystone under Apache
with multiple processes?
I'm pretty raw with Python, Ruby, Apache ...
Thx.

From: Chmouel Boudjnah [mailto:chmo...@enovance.com]
Sent: Tuesday, July 09, 2013 8:22 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Swift debugging / performance - large latencies 
seen.

On Tue, Jul 9, 2013 at 2:40 PM, Snider, Tim tim.sni...@netapp.com wrote:
I have 2 openstack clusters running the Folsom release with multiple Swift 
nodes. I also have a small setup that is running only Swift with a single node. 
 I'm noticing very large Swift I/O latencies (seconds long) on the openstack 
clusters - ssbench output snippet is below. Performance is approximately 
identical on the openstack clusters. The Swift only cluster performs much 
better.

Keystone performance can be pretty awful unless you are using something other
than the default WSGI container configuration (single-process eventlet, I
think). I would suggest you try running it under Apache with multiple processes.
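
(For anyone following along, a minimal sketch of what running Keystone under
Apache/mod_wsgi with multiple processes looks like - the script path, process
count, and user are assumptions and vary by distro and release:

Listen 5000
<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=4 threads=1 user=keystone
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /var/www/cgi-bin/keystone/main
</VirtualHost>

plus a similar vhost for the admin port, 35357. The keystone source tree ships
example httpd files that can be copied into place.)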

See the discussion from the last summit about Keystone performance here:

https://etherpad.openstack.org/havana-keystone-performance

Chmouel.


Re: [openstack-dev] Swift debugging / performance - large latencies seen.

2013-07-09 Thread Snider, Tim
That helped -- using tempauth instead of Keystone improved performance
significantly on the OpenStack cluster.
However, performance is still slower than on my Swift-only setup, where commands
are sent directly to the Swift node instead of going through the controller node
in the OpenStack cluster.
How does communication between the controller and the Swift nodes affect Swift
performance?
I've also noticed that on the OpenStack cluster all objects are stored on the
2nd Swift node and none on the 1st. I'm wondering if that could also be a factor
in the slow performance.

Keystone:
TOTAL
   Count:    50  Average requests per second:   9.2
                             min        max       avg    std_dev   95%-ile                     Worst latency TX ID
   First-byte latency:    0.067 -    2.513     0.390  (  0.604)     1.948  (all obj sizes)   txae75691d37d544b4ac0cfe3b8cba7f38
   Last-byte  latency:    0.067 -    3.337     0.430  (  0.695)     1.997  (all obj sizes)   txdcedb82227654b338daa85751f6d1232
   First-byte latency:    0.070 -    2.513     0.542  (  0.749)     2.255  (   tiny objs)    txae75691d37d544b4ac0cfe3b8cba7f38
   Last-byte  latency:    0.070 -    2.514     0.468  (  0.659)     1.997  (   tiny objs)    txae75691d37d544b4ac0cfe3b8cba7f38
   First-byte latency:    0.067 -    1.884     0.251  (  0.382)     0.695  (  small objs)    tx2ceec827f3304530b01a0d5993eea2e8
   Last-byte  latency:    0.067 -    3.337     0.385  (  0.732)     1.884  (  small objs)    txdcedb82227654b338daa85751f6d1232

Tempauth:
   Count:    50  Average requests per second:  65.7
                             min        max       avg    std_dev   95%-ile                     Worst latency TX ID
   First-byte latency:    0.006 -    0.073     0.014  (  0.015)     0.055  (all obj sizes)   tx69bf033a246645808b2c6a280e334f15
   Last-byte  latency:    0.006 -    0.248     0.047  (  0.070)     0.198  (all obj sizes)   txb8cf5dc0ce264eb08a1e05edbbf5a40f
   First-byte latency:    0.006 -    0.073     0.017  (  0.020)     0.072  (   tiny objs)    tx69bf033a246645808b2c6a280e334f15
   Last-byte  latency:    0.006 -    0.248     0.053  (  0.072)     0.195  (   tiny objs)    txb8cf5dc0ce264eb08a1e05edbbf5a40f
   First-byte latency:    0.006 -    0.026     0.010  (  0.005)     0.026  (  small objs)    tx65d1fd4b6ae049bb902442ac4c28ffe9
   Last-byte  latency:    0.006 -    0.218     0.040  (  0.066)     0.198  (  small objs)    txbfd6ebc74ed04068affd17c123572a44

Swift Only:
TOTAL
   Count:    50  Average requests per second: 397.0
                             min        max       avg    std_dev   95%-ile                     Worst latency TX ID
   First-byte latency:    0.003 -    0.007     0.005  (  0.001)     0.006  (all obj sizes)   None
   Last-byte  latency:    0.003 -    0.046     0.008  (  0.009)     0.029  (all obj sizes)   None
   First-byte latency:    0.003 -    0.007     0.005  (  0.001)     0.007  (   tiny objs)    None
   Last-byte  latency:    0.003 -    0.046     0.008  (  0.010)     0.027  (   tiny objs)    None
   First-byte latency:    0.004 -    0.006     0.005  (  0.001)     0.006  (  small objs)    None
   Last-byte  latency:    0.004 -    0.043     0.008  (  0.009)     0.029  (  small objs)    None

From: Chmouel Boudjnah [mailto:chmo...@enovance.com]
Sent: Tuesday, July 09, 2013 8:22 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Swift debugging / performance - large latencies 
seen.

On Tue, Jul 9, 2013 at 2:40 PM, Snider, Tim tim.sni...@netapp.com wrote:
I have 2 openstack clusters running the Folsom release with multiple Swift 
nodes. I also have a small setup that is running only Swift with a single node. 
 I'm noticing very large Swift I/O latencies (seconds long) on the openstack 
clusters - ssbench output snippet is below. Performance is approximately 
identical on the openstack clusters. The Swift only cluster performs much 
better.

Keystone performance can be pretty awful unless you are using something other
than the default WSGI container configuration (single-process eventlet, I
think). I would suggest you try running it under Apache with multiple processes.

See the discussion from the last summit about Keystone performance here:

https://etherpad.openstack.org/havana-keystone-performance

Chmouel.