[Openstack] documentation bug in openstack - redhat install guide (wrong admin_user)

2012-10-26 Thread ikke
Hi,

I just went through setting up Keystone and Glance on Fedora 17 with
the F18 Folsom preview repos for the OpenStack RPMs. It seems the
instructions contain an error:

The guide creates the users here:
http://docs.openstack.org/trunk/openstack-compute/install/yum/content/setting-up-tenants-users-and-roles-manually.html

and then uses them incorrectly here, causing Keystone to block access
to image creation:

http://docs.openstack.org/trunk/openstack-compute/install/yum/content/configure-glance-files.html

The doc uses admin_user=admin and admin_tenant=service in two of the
config files (glance-api + glance-registry), even though it never
creates an admin user in the service tenant in the first doc link
where the users get created.

After changing admin_user to glance, and admin_password to glance's
password, it starts working.
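In other words, the authtoken settings in both glance config files
should end up roughly like this (a sketch only; exact section and key
names may differ slightly between releases, and the password is
whatever the glance user was created with):

```
admin_tenant_name = service
admin_user = glance
admin_password = <glance's password>
```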

There used to be a comment box in the docs, but that doesn't seem to
be the case anymore, so I'll whine here instead ;)

BR,

 ikke

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] running HA cluster of guests within openstack

2012-04-16 Thread ikke
One more item for the HA feature list: hot plugging.

2.8. Hot-plug pre-warning events
- Nova should tell the registered client that a node/guest is going to
be shut off, and the remote entity would be given time to ack that.
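A rough sketch of what that pre-warning handshake could look like
(all names are hypothetical, just to illustrate the ack-with-deadline
idea, not any existing Nova API):

```python
import time

def notify_pre_shutoff(ack_received, timeout=5.0, poll=0.1):
    """Tell the registered client a guest is about to be shut off and
    give it `timeout` seconds to acknowledge before proceeding anyway.

    ack_received() is a stand-in for checking whether the client has
    acked the event; in real life this would be an RPC/notification.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if ack_received():
            return "acked"      # client is ready, shut off now
        time.sleep(poll)
    return "timed-out"          # proceed without the ack
```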



Re: [Openstack] running HA cluster of guests within openstack

2012-04-16 Thread ikke
On Fri, Apr 13, 2012 at 5:54 PM, Martin Gerhard Loschwitz wrote:
> STONITH events from within virtual machines. I have something cooking here
> using the latest version of Pacemaker; should this turn out to work, it
> would make many things a lot easier. I'll elaborate a little bit more on
> this once I have it working the way I want it.
>
> Concerning the general subject of virtual machines (and clustered VMs for
> that matter) within OpenStack, I think there is some stuff missing in Nova
> that would be necessary (granted -- in one way or another, it would be
> possible to make Pacemaker deal with VMs that have failed within Nova, but
> in my eyes, that'd be crazy). Nova knows what VMs are supposed to be there
> and Nova can find out which VMs are in fact running and which are not, so
> I think Nova should make sure that those VMs that are supposed to run are,
> well, running :)
>
> Best regards
> Martin

Good to hear, I'm looking forward to hearing more about your project.
It sounds like it would be a plug-in to the earlier mentioned project:

http://wiki.openstack.org/ResourceMonitorAlertsandNotifications

You are right: one cannot have several "heads" making the HA
decisions in the cluster. Nova should either handle it, or delegate
it to someone else and let go. But isn't that exactly what Nova is
there for? So it's Nova's job.

BR,
 -it



Re: [Openstack] running HA cluster of guests within openstack

2012-04-16 Thread ikke
On Fri, Apr 13, 2012 at 2:53 PM, Pádraig Brady  wrote:
> On 04/13/2012 10:31 AM, ikke wrote:
> I'll just point out two early stage projects
> that used in combination can provide a HA solution.
>
> http://wiki.openstack.org/Heat
> http://wiki.openstack.org/ResourceMonitorAlertsandNotifications
> cheers,
> Pádraig.

Thanks for the links, I'll look into them. Having a pluggable
monitoring interface looks good. From a quick look I can't tell how
the local driver connects to libvirt, or whether alerts are delivered
promptly or via periodic polling. I need to take a further look into it.

Hopefully a local HW watchdog emulated in QEMU could be connected to
the plugin framework, to allow fast reaction times when a guest gets
stuck.

Also, it would make sense to make an immediate local decision about
rebooting a stuck guest, instead of taking the time to report it
centrally and waiting for the central manager's decision.
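The local-first policy I have in mind could be sketched like this
(names are hypothetical, for illustration only):

```python
def recover_stuck_guest(seconds_since_heartbeat, local_limit,
                        restart_guest, report_event):
    """Local-first recovery sketch: if the guest's watchdog/heartbeat
    is older than local_limit seconds, restart it on this host
    immediately and report afterwards, rather than waiting for a
    central manager to decide."""
    if seconds_since_heartbeat > local_limit:
        restart_guest()
        report_event("guest restarted locally")
        return True
    return False
```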

cheers,
Ilkka



Re: [Openstack] running HA cluster of guests within openstack

2012-04-15 Thread ikke
On Fri, Apr 13, 2012 at 5:45 PM, Jason Kölker  wrote:
> On Fri, 2012-04-13 at 12:31 +0300, ikke wrote:
>
>> 1. Private networks between guests
>>   -> Doable now using Quantum
>> 1.1. Defining VLANs visible to guest machines to separate clusters
>> internal traffic,
>>        VLAN tags should not be stripped by host (QinQ)
>
> VLANs and Quantum private networks are pretty much the same thing, why
> would you want both?

For legacy reasons. The cluster currently handles its internal
network with VLANs, and for that the cloud layer should simply
virtualize the HW functionality. It would need to provide the VLAN
layer for guests for the time being, until the guests can be modified
not to require it and to handle VLAN network configuration via
OpenStack interfaces instead.

Some of the questions stem from this legacy need. OpenStack offers
similar functionality, but if you intend to bring legacy apps into
the cloud as-is, plenty of modifications are needed to adapt the
legacy SW to cloud concepts. Adaptation takes time, and in some cases
it may be cheaper and faster to adapt the cloud layer to provide the
legacy HW as a virtualized HW abstraction layer.

By legacy SW I mean a HUGE amount of code written over decades, which
is not easily modifiable.

>> 1.2. Set pre-defined MAC addresses for the guests, needed by non-IP
>>        traffic within the guest cluster (layer2 addressing)
> If you send the mac address to Melange when you create the interface it
> will record it for that instance:
>
> http://melange.readthedocs.org/en/latest/apidoc.html#interfaces

Thanks for the link, it is exactly what I was looking for!

 -it



[Openstack] running HA cluster of guests within openstack

2012-04-13 Thread ikke
I'm likely not the first one to ask this, but since I didn't find a
thread about it, I'll start one.

Is there any shared experience available on the capabilities of
OpenStack for running a cluster of guests in the cloud? Do you have
experience with the following questions, or links to more info? The
questions relate to running a legacy HA cluster in a virtual
environment and moving it into the cloud...

1. Private networks between guests
   -> Doable now using Quantum
1.1. Defining VLANs visible to guest machines to separate the cluster's
     internal traffic; VLAN tags should not be stripped by the host (QinQ)
1.2. Setting pre-defined MAC addresses for the guests, needed by non-IP
     traffic within the guest cluster (layer-2 addressing)
   - will Melange do this? According to the docs it's not planned.
2. HA capabilities
2.1. Failure notification times need to be fast, i.e. no TCP timeout allowed
   - there seems to be some activity to integrate Pacemaker
2.2. Failure notification of both guests and hosts needs to be included
2.3. The guest cluster controller should be able to monitor the states
     and get fast notifications of the events
   - rather in milliseconds than in seconds
   - basically the host should have a parent of the guest pid notifying
     of a child-process failure
   - the host should have a virtual watchdog noticing when a guest is stuck
2.4. Failure recovery time: how fast can OpenStack bring up a failed guest?
   - any measurements of the time from failure to noticing it,
     and of the time until the guest is restarted and back up?
2.5. Virtual HW manager (guest isolation)
   - Any plans to integrate a piece from which the state of a guest
     could be reliably queried? E.g. guaranteeing that if I ask to
     power off another guest, it gets done within a given time
     (milliseconds) and does not hang on e.g. some TCP timeout, which
     would lead to a split-brain case of running two similar guests
     simultaneously: starting another guest to replace a shut-down
     one, but due to some communications error the first one didn't
     really shut down before the new one is already up.
   - should be able to reliably cut off a guest's network and disk
     access to guarantee the above case
2.6. Shared disks
   - Could there be a shared SCSI device concept for the legacy HW
     abstraction?
   - QEMU/KVM supports this; what would it take to make OpenStack
     understand such disk devices?
2.7. Isolation of redundant nodes
   - In some cases nodes need to back each other up (2N, N+1); there
     should be a way to make sure they run on different hosts.
   - This project might be aiming for that?
     http://wiki.openstack.org/DistributedScheduler
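For 2.7, the core of what I'm after could be sketched as a scheduler
filter (a toy illustration with hypothetical names, not the
DistributedScheduler API):

```python
def pick_host(hosts, guests_on_host, redundancy_group):
    """Toy anti-affinity placement for item 2.7: return the first host
    that runs none of the new guest's redundancy-group peers, or None
    if every host already holds a peer."""
    for host in hosts:
        if not (guests_on_host.get(host, set()) & redundancy_group):
            return host
    return None
```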

That was something off the top of my head; it would be interesting to
hear your thoughts on the issues. The need comes from the telco
world, which would need a telco cloud with such near-real-time
features in it. Certainly the same applies to many other legacy
environments too.

BR,

 Ilkka Tengvall



[Openstack] upgrade path from oneiric + managedit diablo to precise+essex?

2012-04-12 Thread ikke
Hi,

is there a documented upgrade path yet from the managedit Diablo repo
on Oneiric to Precise with Essex?

I installed my cluster according to the instructions from
docs.openstack.org for the basic setup with the dashboard. There are
still some things that don't work (errors), and I think it might be
better to upgrade instead of fixing the old setup. Do you have
experience with what exactly breaks, or is it as straightforward as
just upgrading the packages and restarting (it never is :) )?

 -i



[Openstack] authentication help needed, added keystone to system

2012-01-26 Thread ikke
Could anyone please explain the relation between zones in nova-manage
and regions in keystone-manage, and help me get auth working again?

My Fedora test system got messed up after installing Keystone. I now
suspect the region/zone mismatch could be the reason for the
authentication failure. Should they be the same?

I got to this point by copy-pasting too much of the instructions
without fully understanding the details... :( The system worked
before Keystone.


---
# nova-manage host list
host      zone
blade5    nova
blade6    nova
blade7    nova
blade8    nova
---


---
# keystone-manage endpointTemplates list
All EndpointTemplates
service      region      Public URL
---
nova         RegionOne   http://10.20.106.105:8774/v1.1/%tenant_id%
glance       RegionOne   http://10.20.106.105:9292/v1
swift        RegionOne   http://10.20.106.105:8080/v1/AUTH_%tenant_id%
keystone     RegionOne   http://10.20.106.105:5000/v2.0
nova_compat  RegionOne   http://10.20.106.105:8774/v1.0/
---

this works for admin:

---
$ curl -d '{"auth":{"passwordCredentials":{"username": "admin",
"password": "secret"}}}' -H "Content-type: application/json"
http://node1:35357/v2.0/tokens
{"access": {"token": {"expires": "2015-02-05T00:00:00", "id": "999888777666",
    "tenant": {"id": "2", "name": "admin"}},
  "serviceCatalog": [
    {"endpoints": [{"adminURL": "http://10.0.0.1:8774/v1.1/2",
        "region": "RegionOne",
        "internalURL": "http://10.0.0.1:8774/v1.1/2",
        "publicURL": "http://10.20.106.105:8774/v1.1/2"}],
     "type": "compute", "name": "nova"},
    {"endpoints": [{"adminURL": "http://10.0.0.1:9292/v1",
        "region": "RegionOne",
        "internalURL": "http://10.0.0.1:9292/v1",
        "publicURL": "http://10.20.106.105:9292/v1"}],
     "type": "image", "name": "glance"},
    {"endpoints": [{"adminURL": "http://10.0.0.1:8080/v1.0/",
        "region": "RegionOne",
        "internalURL": "http://10.0.0.1:8080/v1/AUTH_2",
        "publicURL": "http://10.20.106.105:8080/v1/AUTH_2"}],
     "type": "storage", "name": "swift"},
    {"endpoints": [{"adminURL": "http://10.0.0.1:35357/v2.0",
        "region": "RegionOne",
        "internalURL": "http://10.0.0.1:5000/v2.0",
        "publicURL": "http://10.20.106.105:5000/v2.0"}],
     "type": "identity", "name": "keystone"},
    {"endpoints": [{"adminURL": "http://10.0.0.1:8774/v1.0",
        "region": "RegionOne",
        "internalURL": "http://10.0.0.1:8774/v1.0",
        "publicURL": "http://10.20.106.105:8774/v1.0/"}],
     "type": "compute", "name": "nova_compat"}],
  "user": {"id": "2", "roles": [{"id": "4", "name": "Admin"},
    {"id": "4", "name": "Admin"}, {"id": "4", "name": "Admin"},
    {"id": "6", "name": "KeystoneServiceAdmin"}], "name": "admin"}}}
---

but as a regular user it always gives an access error:

---
$ curl -d '{"auth":{"passwordCredentials":{"username": "demo",
"password": "guest"}}}' -H "Content-type: application/json"
http://node1:8774/v1.1/tokens

 
  401 Unauthorized

  401 Unauthorized
  This server could not verify that you are authorized to access the
  document you requested. Either you supplied the wrong credentials
  (e.g., bad password), or your browser does not understand how to
  supply the credentials required.
  Authentication required

---
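One thing I notice when comparing the two requests: the working admin
request targets Keystone (port 35357, /v2.0/tokens), while the
failing user request targets Nova's API port (8774, /v1.1/tokens), so
the 401 may come from Nova rather than from any region/zone setup. A
user token request against Keystone's public port would look like
this (node1/demo/guest are the example values from above):

```python
import json

# Token request for keystone's public v2.0 API (port 5000), using the
# demo credentials from above; note the failing request above targets
# nova's port 8774 instead.
url = "http://node1:5000/v2.0/tokens"
body = json.dumps({"auth": {"passwordCredentials": {
    "username": "demo", "password": "guest"}}})
```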

What possibly could cause this?

---
# tail  -1 /var/log/keystone/admin.log
2012-01-26 16:11:01  WARNING [eventlet.wsgi.server] 10.0.0.1 - -
[26/Jan/2012 16:11:01] "POST /v2.0/tokens HTTP/1.1" 200 1519 0.084546
---



versions:

$ rpm -qa 'openstack*'
openstack-nova-doc-2011.3-18.fc17.noarch
openstack-glance-doc-2011.3-2.fc16.noarch
openstack-glance-2011.3-2.fc16.noarch
openstack-swift-doc-1.4.4-1.fc17.noarch
openstack-nova-2011.3-18.fc17.noarch
openstack-keystone-2011.3.1-2.fc17.noarch



Re: [Openstack] Proposal for new devstack (v2?)

2012-01-23 Thread ikke
On Sat, Jan 21, 2012 at 5:47 AM, Joshua Harlow  wrote:
>
> Note rhel6 isn’t fully there yet. But in progress ;)
>

Is anyone working on a Fedora version of it? Any known major issues
preventing it? I quickly added Fedora labels next to RHEL6 in the
code, and added the db.py stuff. In a quick test it does the Nova
MySQL config, then stops at the rabbitmq password change, with the
command returning exit code 2.
diff --git a/conf/pkgs/db.json b/conf/pkgs/db.json
index b044d10..d684551 100644
--- a/conf/pkgs/db.json
+++ b/conf/pkgs/db.json
@@ -66,5 +66,35 @@
 }
 ]
 }
+},
+"fedora-16": {
+"mysql": {
+"version": "5.5.18-1.fc16",
+"allowed": ">=",
+"removable": true
+},
+"mysql-server": {
+"version": "5.5.18-1.fc16",
+"allowed": ">=",
+"removable": true,
+"post-install": [
+{ 
+# Make sure it'll start on reboot
+"run_as_root": true,
+"cmd" : [ "chkconfig", "mysqld", "on"]
+},
+{ 
+# Start the mysql service
+"run_as_root": true,
+"cmd" : [ "service", "mysqld", "start"]
+},
+{  
+# Set the root password
+"run_as_root": true,
+"cmd" : [ "mysqladmin", "-u", "root", 
+   "password", "%PASSWORD%" ]
+}
+]
+}
 }
 }
diff --git a/devstack/components/db.py b/devstack/components/db.py
index b694397..9895e0b 100644
--- a/devstack/components/db.py
+++ b/devstack/components/db.py
@@ -28,10 +28,10 @@ MYSQL = 'mysql'
 DB_ACTIONS = {
 MYSQL: {
 #hopefully these are distro independent, these should be since they are invoking system init scripts
-'start': ["service", "mysql", 'start'],
-'stop': ["service", 'mysql', "stop"],
-'status': ["service", 'mysql', "status"],
-'restart': ["service", 'mysql', "status"],
+'start': ["service", "mysqld", 'start'],
+'stop': ["service", 'mysqld', "stop"],
+'status': ["service", 'mysqld', "status"],
+'restart': ["service", 'mysqld', "restart"],
 #
 'create_db': ['mysql', '--user=%USER%', '--password=%PASSWORD%', '-e', 'CREATE DATABASE %DB%;'],
 'drop_db': ['mysql', '--user=%USER%', '--password=%PASSWORD%', '-e', 'DROP DATABASE IF EXISTS %DB%;'],
diff --git a/devstack/progs/actions.py b/devstack/progs/actions.py
index 7478a52..9bf17ff 100644
--- a/devstack/progs/actions.py
+++ b/devstack/progs/actions.py
@@ -43,6 +43,7 @@ LOG = logging.getLogger("devstack.progs.actions")
 _PKGR_MAP = {
 settings.UBUNTU11: apt.AptPackager,
 settings.RHEL6: yum.YumPackager,
+settings.FEDORA16: yum.YumPackager,
 }
 
 # This is used to map an action to a useful string for
diff --git a/devstack/settings.py b/devstack/settings.py
index 305ad55..534b6dd 100644
--- a/devstack/settings.py
+++ b/devstack/settings.py
@@ -25,6 +25,7 @@ LOG = logging.getLogger("devstack.settings")
 # ie in the pkg/pip listings so update there also!
 UBUNTU11 = "ubuntu-oneiric"
 RHEL6 = "rhel-6"
+FEDORA16 = "fedora-16"
 
 # What this program is called
 PROG_NICE_NAME = "DEVSTACK"
@@ -36,7 +37,8 @@ POST_INSTALL = 'post-install'
 # Default interfaces for network ip detection
 IPV4 = 'IPv4'
 IPV6 = 'IPv6'
-DEFAULT_NET_INTERFACE = 'eth0'
+#DEFAULT_NET_INTERFACE = 'eth0'
+DEFAULT_NET_INTERFACE = 'br_iscsi'
 DEFAULT_NET_INTERFACE_IP_VERSION = IPV4
 
 # Component name mappings
@@ -120,6 +122,7 @@ STACK_CONFIG_LOCATION = os.path.join(STACK_CONFIG_DIR, "stack.ini")
 KNOWN_DISTROS = {
 UBUNTU11: re.compile('Ubuntu(.*)oneiric', re.IGNORECASE),
 RHEL6: re.compile('redhat-6\.(\d+)', re.IGNORECASE),
+FEDORA16: re.compile('fedora-16(.*)verne', re.IGNORECASE),
 }
 
 


[Openstack] how to verify nova uses openstackx extensions?

2012-01-18 Thread ikke
Hi,

I'm struggling with my dashboard setup on Fedora, and I would like to
verify that Nova sees the openstackx extensions. I don't see anything
about it in the logs:

grep -ri openstackx /var/log/{nova,keystone,glance}/*

returns nothing. The system is Fedora 16 + OpenStack from Rawhide,
and Horizon from git using the diablo branch.

The only thing related to the keyword admin (as in the openstackx
extensions) is this log entry, which doesn't necessarily relate to
the issue:


2012-01-11 13:48:25,026 AUDIT extensions [-] Loading extension file:
admin_only.py
2012-01-11 13:48:25,026 WARNING extensions [-] Did not find expected name
"Admin_only" in
/usr/lib/python2.7/site-packages/nova/api/openstack/contrib/admin_only.py


I have this in my nova.conf:


--osapi_extensions_path=/home/user/src/openstackx/extensions
--osapi_extension=nova.api.openstack.v2.contrib.standard_extensions
--osapi_compute_extension=extensions.admin.Admin
--osapi_extension=extensions.admin.Admin


I know openstackx is deprecated, but so far I have failed to find a
substitute...

BR,

 -ikke


[Openstack] dashboard on fedora, misconfigured Nova url in keystone or missing openstackx extensions

2012-01-16 Thread ikke
Hi,

I'm trying to get my cluster to run OpenStack on Fedora. I can't get
the dashboard to work; can someone please help a bit?

I end up with this error on the web page:

"Error: Unable to get service info: This error may be caused by a
misconfigured Nova url in keystone's service catalog, or by missing
openstackx extensions in Nova. See the Horizon README."

and

"Error: Unable to get usage info: This error may be caused by a
misconfigured Nova url in keystone's service catalog, or by missing
openstackx extensions in Nova. See the Horizon README."

On the command line it says:

"
ApiException fetching usage list in instance usage on date range
"2012-01-01 00:00:00 to 2012-01-16 12:17:17.391849"
Traceback (most recent call last):
  File
"/root/src/horizon/horizon/horizon/dashboards/syspanel/instances/views.py",
line 83, in usage
datetime_end)
  File "/root/src/horizon/horizon/horizon/api/deprecated.py", line 50, in
inner
return f(*args, **kwargs)
  File "/root/src/horizon/horizon/horizon/api/nova.py", line 297, in
usage_list
return [Usage(u) for u in extras_api(request).usage.list(start, end)]
  File
"/root/src/horizon/.horizon-venv/src/openstackx/openstackx/extras/usage.py",
line 12, in list
return self._list("/extras/usage?start=%s&end=%s" % (start.isoformat(),
end.isoformat()), "usage")
  File
"/root/src/horizon/.horizon-venv/src/openstackx/openstackx/api/base.py",
line 27, in _list
resp, body = self.api.connection.get(url)
  File
"/root/src/horizon/.horizon-venv/src/openstackx/openstackx/api/connection.py",
line 78, in get
return self._cs_request(url, 'GET', **kwargs)
  File
"/root/src/horizon/.horizon-venv/src/openstackx/openstackx/api/connection.py",
line 63, in _cs_request
**kwargs)
  File
"/root/src/horizon/.horizon-venv/src/openstackx/openstackx/api/connection.py",
line 48, in request
raise exceptions.from_response(resp, body)
NotFound:  This error may be caused by a misconfigured Nova url in
keystone's service catalog, or  by missing openstackx extensions in Nova.
See the Horizon README. (HTTP 404)
[16/Jan/2012 14:17:17] "GET /syspanel/ HTTP/1.1" 200 6784
"

My system is fedora 16 with openstack and dependencies pulled from rawhide:

openstack-glance-2011.3-2.fc16.noarch
openstack-swift-doc-1.4.4-1.fc17.noarch
openstack-nova-2011.3-18.fc17.noarch
openstack-keystone-2011.3.1-2.fc17.noarch

and the horizon/dashboard is pulled directly from git:

git clone  https://github.com/openstack/horizon

I had OpenStack running before Keystone. Then I noticed the dashboard
requires Keystone, so I installed it and verified that the curl login
works. I configured the admin token in the dashboard's local config,
and I added this option to nova.conf:

--osapi_extensions_path=/root/src/horizon/.horizon-venv/src/openstackx/extensions

I have a feeling the dashboard from git is not compatible with the
older version of Nova from Rawhide; could some openstackx stuff be
missing? I pointed the above osapi extensions option at a directory
under Horizon, but I don't know if that's right. The Fedora OpenStack
packages don't contain any openstackx stuff.

I manage to log in to the dashboard as admin, so I assume Keystone
works OK. But what should I do about openstackx?

Here are my endpoint configs:

-
keystone-manage service add nova compute "Nova Compute Service"
keystone-manage service add glance image "Glance Image Service"
keystone-manage service add swift storage "Swift Object Storage Service"
keystone-manage service add keystone identity "Keystone Identity Service"

keystone-manage endpointTemplates add RegionOne nova \
http://10.20.106.105:8774/v1.1/%tenant_id% \
http://10.0.0.1:8774/v1.1/%tenant_id% \
http://10.0.0.1:8774/v1.1/%tenant_id% \
1 1

keystone-manage endpointTemplates add RegionOne glance \
http://10.20.106.105:9292/v1 \
http://10.0.0.1:9292/v1 \
http://10.0.0.1:9292/v1 \
1 1

keystone-manage endpointTemplates add RegionOne swift \
http://10.20.106.105:8080/v1/AUTH_%tenant_id% \
http://10.0.0.1:8080/v1.0/ \
http://10.0.0.1:8080/v1/AUTH_%tenant_id% \
1 1

keystone-manage endpointTemplates add RegionOne keystone \
http://10.20.106.105:5000/v2.0 \
http://10.0.0.1:35357/v2.0 \
http://10.0.0.1:5000/v2.0 \
1 1

keystone-manage endpoint add admin identity
keystone-manage endpoint add admin glance
keystone-manage endpoint add admin nova
keystone-manage endpoint add admin keystone
-
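Since the dashboard error is about the Nova URL in Keystone's service
catalog, one way to double-check what a token response actually
advertises is to pull the compute URLs out of it (a sketch; the
sample catalog below is a trimmed-down stand-in for the real
response):

```python
import json

def compute_urls(token_response):
    """Collect the publicURLs of all 'compute' services from a
    keystone v2.0 token response -- the URLs Horizon complains about
    when they are wrong or missing."""
    catalog = json.loads(token_response)["access"]["serviceCatalog"]
    return [ep["publicURL"]
            for svc in catalog if svc["type"] == "compute"
            for ep in svc["endpoints"]]

# Trimmed sample catalog, shaped like the curl output earlier in the
# thread (values are illustrative).
sample = json.dumps({"access": {"serviceCatalog": [
    {"type": "compute", "name": "nova",
     "endpoints": [{"publicURL": "http://10.20.106.105:8774/v1.1/2"}]},
    {"type": "identity", "name": "keystone",
     "endpoints": [{"publicURL": "http://10.20.106.105:5000/v2.0"}]}]}})
```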

Any hints welcome!

 - ikke