Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-18 Thread Joshua Harlow

Chris Friesen wrote:

On 05/16/2017 10:45 AM, Joshua Harlow wrote:

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock



And always get a write lock.

It is a slightly different way of getting those locks (via a context
manager) but the implementation underneath is a deque; so fairness
should be assured in FIFO order...


I'm going ahead and doing this. Your docs for fasteners don't actually
say that lock.ReaderWriterLock.write_lock() provides fairness. If you're
going to ensure that stays true, it might make sense to document that fact.


Sounds great. I was starting to, but then got busy with other stuff :-P



Am I correct that fasteners.InterProcessLock is basically as fair as the
underlying OS-specific lock? (Which should be reasonably fair except for
process scheduler priority.)


Yup, that IMHO would be fair; it's just fcntl under the covers (at least 
on Linux). Though from what I remember at 
https://github.com/harlowja/fasteners/issues/26#issuecomment-253543912 
the lock class there seemed a little nicer (though more complex). That 
guy, I think, was going to propose some kind of merge, but that never 
seemed to appear.
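
For reference, a minimal sketch of the interprocess variant (the lock
file path is only an example; any writable path works):

import fasteners

# Backed by fcntl byte-range locking on Linux, so fairness is whatever
# the kernel gives you.
lock = fasteners.InterProcessLock('/tmp/example.lock')

with lock:
    pass  # critical section, exclusive across processes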





Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-18 Thread Chris Friesen

On 05/16/2017 10:45 AM, Joshua Harlow wrote:

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock


And always get a write lock.

It is a slightly different way of getting those locks (via a context manager)
but the implementation underneath is a deque; so fairness should be assured in
FIFO order...


I'm going ahead and doing this.  Your docs for fasteners don't actually say that 
lock.ReaderWriterLock.write_lock() provides fairness.  If you're going to ensure 
that stays true, it might make sense to document that fact.


Am I correct that fasteners.InterProcessLock is basically as fair as the 
underlying OS-specific lock?  (Which should be reasonably fair except for 
process scheduler priority.)


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-16 Thread Joshua Harlow

Chris Friesen wrote:

On 05/16/2017 10:45 AM, Joshua Harlow wrote:

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock



And always get a write lock.

It is a slightly different way of getting those locks (via a context
manager) but the implementation underneath is a deque; so fairness
should be assured in FIFO order...


That might work as a local patch, but doesn't help the more general case
of fair locking in OpenStack. The alternative to adding fair locks in
oslo would be to add fairness code to all the various OpenStack services
that use locking, which seems to miss the whole point of oslo.


Replace 'openstack community' with 'python community'? ;)



In the implementation above it might also be worth using one condition
variable per waiter, since that way you can wake up only the next waiter
in line rather than waking up everyone only to have all-but-one of them
go back to sleep right away.



Ah good idea, I'll see about doing/adding/changing that.

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-16 Thread Chris Friesen

On 05/16/2017 10:45 AM, Joshua Harlow wrote:

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock


And always get a write lock.

It is a slightly different way of getting those locks (via a context manager)
but the implementation underneath is a deque; so fairness should be assured in
FIFO order...


That might work as a local patch, but doesn't help the more general case of fair 
locking in OpenStack.  The alternative to adding fair locks in oslo would be to 
add fairness code to all the various OpenStack services that use locking, which 
seems to miss the whole point of oslo.


In the implementation above it might also be worth using one condition variable 
per waiter, since that way you can wake up only the next waiter in line rather 
than waking up everyone only to have all-but-one of them go back to sleep right 
away.


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-16 Thread Joshua Harlow

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock

And always get a write lock.

It is a slightly different way of getting those locks (via a context 
manager) but the implementation underneath is a deque; so fairness 
should be assured in FIFO order...


https://github.com/harlowja/fasteners/blob/master/fasteners/lock.py#L139

and

https://github.com/harlowja/fasteners/blob/master/fasteners/lock.py#L220
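
A minimal sketch of the suggested usage (do_critical_work here is just a
placeholder):

import fasteners

rw_lock = fasteners.ReaderWriterLock()

def do_critical_work():
    pass  # stand-in for whatever needs the lock

# Taking only write locks gives plain mutual exclusion; waiters are
# tracked in a deque internally, so they should be granted the lock in
# FIFO order.
with rw_lock.write_lock():
    do_critical_work()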

-Josh

Chris Friesen wrote:

On 05/15/2017 03:42 PM, Clint Byrum wrote:


In order to implement fairness you'll need every lock request to happen
in a FIFO queue. This is often implemented with a mutex-protected queue
of condition variables. Since the mutex for the queue is only held while
you append to the queue, you will always get the items from the queue
in the order they were written to it.

So you have lockers add themselves to the queue and wait on their
condition variable, and then a thread running all the time that reads
the queue and acts on each condition to make sure only one thread is
activated at a time (or that one thread can just always do all the work
if the arguments are simple enough to put in a queue).


Do you even need the extra thread? The implementations I've seen for a
ticket lock (in C at least) usually have the unlock routine wake up the
next pending locker.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-16 Thread Chris Friesen

On 05/15/2017 03:42 PM, Clint Byrum wrote:


In order to implement fairness you'll need every lock request to happen
in a FIFO queue. This is often implemented with a mutex-protected queue
of condition variables. Since the mutex for the queue is only held while
you append to the queue, you will always get the items from the queue
in the order they were written to it.

So you have lockers add themselves to the queue and wait on their
condition variable, and then a thread running all the time that reads
the queue and acts on each condition to make sure only one thread is
activated at a time (or that one thread can just always do all the work
if the arguments are simple enough to put in a queue).


Do you even need the extra thread?  The implementations I've seen for a ticket 
lock (in C at least) usually have the unlock routine wake up the next pending 
locker.
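
A compact sketch of that shape in Python (illustrative only): each
waiter gets its own wakeup primitive, and release() hands the lock
directly to the next in line, so no dispatcher thread is needed.

import collections
import threading

class TicketLock(object):
    """Illustrative fair lock: release() wakes only the next waiter."""

    def __init__(self):
        self._mutex = threading.Lock()
        self._waiters = collections.deque()
        self._held = False

    def acquire(self):
        with self._mutex:
            if not self._held:
                self._held = True
                return
            # One event per waiter, so only the head of the line wakes.
            event = threading.Event()
            self._waiters.append(event)
        event.wait()

    def release(self):
        with self._mutex:
            if self._waiters:
                # Ownership passes straight to the next pending locker;
                # _held stays True across the handoff.
                self._waiters.popleft().set()
            else:
                self._held = False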


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Clint Byrum
Excerpts from Ben Nemec's message of 2017-05-15 15:48:33 -0500:
> 
> On 05/15/2017 03:24 PM, Doug Hellmann wrote:
> > Excerpts from Legacy, Allain's message of 2017-05-15 19:20:46 +:
> >>> -Original Message-
> >>> From: Doug Hellmann [mailto:d...@doughellmann.com]
> >>> Sent: Monday, May 15, 2017 2:55 PM
> >> <...>
> >>>
> >>> Excerpts from Legacy, Allain's message of 2017-05-15 18:35:58 +:
>  import eventlet
>  eventlet.monkey_patch
> >>>
> >>> That's not calling monkey_patch -- there are no '()'. Is that a typo?
> >>
> >> Yes, sorry, that was a typo when I put it into the email.  It did have ()
> >> at the end.
> >>
> >>>
> >>> lock() claims to work differently when monkey_patch() has been called.
> >>> Without doing the monkey patching, I would expect the thread to have to
> >>> explicitly yield control.
> >>>
> >>> Did you see the problem you describe in production code, or just in this
> >>> sample program?
> >>
> >> We see this in production code.  I included the example to boil this
> >> down to a simple enough scenario to be understood in this forum
> >> without the distraction of superfluous code.
> >>
> >
> > OK. I think from the Oslo team's perspective, this is likely to be
> > considered a bug in the application. The concurrency library is not
> > aware that it is running in an eventlet thread, so it relies on the
> > application to call the monkey patching function to inject the right
> > sort of lock class.  If that was done in the wrong order, or not
> > at all, that would cause this issue.
> 
> Does oslo.concurrency make any fairness promises?  I don't recall that 
> it does, so it's not clear to me that this is a bug.  I thought fair 
> locking was one of the motivations behind the DLM discussion.  My view 
> of the oslo.concurrency locking was that it is solely concerned with 
> preventing concurrent access to a resource.  There's no queuing or 
> anything that would ensure a given consumer can't grab the same lock 
> repeatedly.
> 

DLM is more about fairness between machines, not threads.

However, I'd agree that oslo.concurrency isn't making fairness
guarantees. It does claim to return a threading.Semaphore or
semaphore.Semaphore, neither of which facilitate fairness (nor would a
full-fledged mutex).

In order to implement fairness you'll need every lock request to happen
in a FIFO queue. This is often implemented with a mutex-protected queue
of condition variables. Since the mutex for the queue is only held while
you append to the queue, you will always get the items from the queue
in the order they were written to it.

So you have lockers add themselves to the queue and wait on their
condition variable, and then a thread running all the time that reads
the queue and acts on each condition to make sure only one thread is
activated at a time (or that one thread can just always do all the work
if the arguments are simple enough to put in a queue).
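
As a rough sketch of that second variant (names are illustrative), the
work itself is funneled through a FIFO queue to a single service thread:

import threading
import Queue  # "queue" on Python 3

work_queue = Queue.Queue()

def dispatcher():
    # Runs forever, executing one item at a time in arrival order.
    while True:
        func, args, done = work_queue.get()
        func(*args)
        done.set()

t = threading.Thread(target=dispatcher)
t.daemon = True
t.start()

def run_fairly(func, *args):
    # Callers block until their item has been processed, so the effect
    # is a fair, FIFO-ordered critical section.
    done = threading.Event()
    work_queue.put((func, args, done))
    done.wait()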

> I'm also not really surprised that this example serializes all the 
> workers.  The operation being performed in each thread is simple enough 
> that it probably completes before a context switch could reasonably 
> occur, greenthreads or not.  Unfortunately one of the hard parts of 
> concurrency is that the "extraneous" details of a use case can end up 
> being important.
> 

It also gets hardware sensitive when you have true multi-threading,
since a user on a 2 core box will see different results than a 4 core.

> >
> > The next step is to look at which application had the problem, and under
> > what circumstances. Can you provide more detail there?
> 
> +1, although as I noted above I'm not sure this is actually a "bug".  It 
> would be interesting to know what real world use case is causing a 
> pathologically bad lock pattern though.
> 

I think it makes sense, especially in the greenthread example where
you're immediately seeing activity on the recently active socket and
thus just staying in that greenthread.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Ben Nemec



On 05/15/2017 03:24 PM, Doug Hellmann wrote:

Excerpts from Legacy, Allain's message of 2017-05-15 19:20:46 +:

-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com]
Sent: Monday, May 15, 2017 2:55 PM

<...>


Excerpts from Legacy, Allain's message of 2017-05-15 18:35:58 +:

import eventlet
eventlet.monkey_patch


That's not calling monkey_patch -- there are no '()'. Is that a typo?


Yes, sorry, that was a typo when I put it into the email.  It did have ()
at the end.



lock() claims to work differently when monkey_patch() has been called.
Without doing the monkey patching, I would expect the thread to have to
explicitly yield control.

Did you see the problem you describe in production code, or just in this
sample program?


We see this in production code.   I included the example to boil this down to
a simple enough scenario to be understood in this forum without the
distraction of superfluous code.



OK. I think from the Oslo team's perspective, this is likely to be
considered a bug in the application. The concurrency library is not
aware that it is running in an eventlet thread, so it relies on the
application to call the monkey patching function to inject the right
sort of lock class.  If that was done in the wrong order, or not
at all, that would cause this issue.


Does oslo.concurrency make any fairness promises?  I don't recall that 
it does, so it's not clear to me that this is a bug.  I thought fair 
locking was one of the motivations behind the DLM discussion.  My view 
of the oslo.concurrency locking was that it is solely concerned with 
preventing concurrent access to a resource.  There's no queuing or 
anything that would ensure a given consumer can't grab the same lock 
repeatedly.


I'm also not really surprised that this example serializes all the 
workers.  The operation being performed in each thread is simple enough 
that it probably completes before a context switch could reasonably 
occur, greenthreads or not.  Unfortunately one of the hard parts of 
concurrency is that the "extraneous" details of a use case can end up 
being important.




The next step is to look at which application had the problem, and under
what circumstances. Can you provide more detail there?


+1, although as I noted above I'm not sure this is actually a "bug".  It 
would be interesting to know what real world use case is causing a 
pathologically bad lock pattern though.


-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Doug Hellmann
Excerpts from Legacy, Allain's message of 2017-05-15 19:20:46 +:
> > -Original Message-
> > From: Doug Hellmann [mailto:d...@doughellmann.com]
> > Sent: Monday, May 15, 2017 2:55 PM
> <...>
> > 
> > Excerpts from Legacy, Allain's message of 2017-05-15 18:35:58 +:
> > > import eventlet
> > > eventlet.monkey_patch
> > 
> > That's not calling monkey_patch -- there are no '()'. Is that a typo?
> 
> Yes, sorry, that was a typo when I put it into the email.  It did have ()
> at the end.
> 
> > 
> > lock() claims to work differently when monkey_patch() has been called.
> > Without doing the monkey patching, I would expect the thread to have to
> > explicitly yield control.
> > 
> > Did you see the problem you describe in production code, or just in this
> > sample program?
> 
> We see this in production code.   I included the example to boil this down to 
> a simple enough scenario to be understood in this forum without the 
> distraction of superfluous code. 
> 

OK. I think from the Oslo team's perspective, this is likely to be
considered a bug in the application. The concurrency library is not
aware that it is running in an eventlet thread, so it relies on the
application to call the monkey patching function to inject the right
sort of lock class.  If that was done in the wrong order, or not
at all, that would cause this issue.
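
For illustration, the ordering being described (note the parentheses on
the call, and that the patching happens before the other imports):

import eventlet
eventlet.monkey_patch()  # actually called, and called first

# Only import things that create locks after patching, so they pick up
# the green-aware primitives.
from oslo_concurrency import lockutils

synchronized = lockutils.synchronized_with_prefix('foo')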

The next step is to look at which application had the problem, and under
what circumstances. Can you provide more detail there?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Legacy, Allain
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com]
> Sent: Monday, May 15, 2017 2:55 PM
<...>
> 
> Excerpts from Legacy, Allain's message of 2017-05-15 18:35:58 +:
> > import eventlet
> > eventlet.monkey_patch
> 
> That's not calling monkey_patch -- there are no '()'. Is that a typo?

Yes, sorry, that was a typo when I put it into the email.  It did have ()
at the end.

> 
> lock() claims to work differently when monkey_patch() has been called.
> Without doing the monkey patching, I would expect the thread to have to
> explicitly yield control.
> 
> Did you see the problem you describe in production code, or just in this
> sample program?

We see this in production code.   I included the example to boil this down to 
a simple enough scenario to be understood in this forum without the 
distraction of superfluous code. 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Doug Hellmann
Excerpts from Legacy, Allain's message of 2017-05-15 18:35:58 +:
> Can someone comment on whether the following scenario has been discussed
> before or whether this is viewed by the community as a bug?
> 
> While debugging a couple of different issues, our investigation has led
> us down the path of needing to look at whether the oslo concurrency lock
> utilities are working properly or not.  What we found is that it is
> possible for a greenthread to continuously acquire a lock even though
> there are other threads queued up waiting for the lock.
> 
> For instance, a greenthread acquires a lock, does some work, releases
> the lock, and then needs to repeat this process over several iterations.
> While the first greenthread holds the lock, other greenthreads come along and
> attempt to acquire the lock.  Those subsequent greenthreads are added to the
> waiters list and suspended.  The observed behavior is that as long as the
> first greenthread continues to run without ever yielding it will always
> re-acquire the lock even before any of the waiters.
> 
> To illustrate my point I have included a short program that shows the
> effect of multiple threads contending for a lock with and without
> voluntarily yielding.   The code follows, but the output from both
> sample runs is included here first.
> 
> In both examples the output is formatted as "worker=XXX: YYY" where XXX
> is the worker number, and YYY is the number of times the worker has been
> executed while holding the lock.
> 
> In the first example, notice that each worker gets to finish all of its
> tasks before any subsequent worker gets to run even once.
> 
> In the second example, notice that the workload is fair and each worker
> gets to hold the lock once before passing it on to the next in line.
> 
> Example1 (without voluntarily yielding):
> =
> worker=0: 1
> worker=0: 2
> worker=0: 3
> worker=0: 4
> worker=1: 1
> worker=1: 2
> worker=1: 3
> worker=1: 4
> worker=2: 1
> worker=2: 2
> worker=2: 3
> worker=2: 4
> worker=3: 1
> worker=3: 2
> worker=3: 3
> worker=3: 4
> 
> 
> 
> Example2 (with voluntarily yielding):
> =
> worker=0: 1
> worker=1: 1
> worker=2: 1
> worker=3: 1
> worker=0: 2
> worker=1: 2
> worker=2: 2
> worker=3: 2
> worker=0: 3
> worker=1: 3
> worker=2: 3
> worker=3: 3
> worker=0: 4
> worker=1: 4
> worker=2: 4
> worker=3: 4
> 
> 
> 
> Code:
> =
> import eventlet
> eventlet.monkey_patch

That's not calling monkey_patch -- there are no '()'. Is that a typo?

lock() claims to work differently when monkey_patch() has been
called. Without doing the monkey patching, I would expect the thread
to have to explicitly yield control.

Did you see the problem you describe in production code, or just in this
sample program?

Doug

> 
> from oslo_concurrency import lockutils
> 
> workers = {}
> 
> synchronized = lockutils.synchronized_with_prefix('foo')
> 
> @synchronized('bar')
> def do_work(index):
>     global workers
>     workers[index] = workers.get(index, 0) + 1
>     print "worker=%s: %s" % (index, workers[index])
> 
> 
> def worker(index, nb_jobs, sleep):
>     for x in xrange(0, nb_jobs):
>         do_work(index)
>         if sleep:
>             eventlet.greenthread.sleep(0)  # yield
>     return index
> 
> 
> # hold the lock before starting workers to make sure that all workers queue up
> # on the lock before any of them actually get to run.
> @synchronized('bar')
> def start_work(pool, nb_workers=4, nb_jobs=4, sleep=False):
>     for i in xrange(0, nb_workers):
>         pool.spawn(worker, i, nb_jobs, sleep)
> 
> 
> print "Example1:  sleep=False"
> workers = {}
> pool = eventlet.greenpool.GreenPool()
> start_work(pool)
> pool.waitall()
> 
> 
> print "Example2:  sleep=True"
> workers = {}
> pool = eventlet.greenpool.GreenPool()
> start_work(pool, sleep=True)
> pool.waitall()
> 
> 
> 
> 
> Regards,
> Allain
> 
> 
> Allain Legacy, Software Developer, Wind River
> direct 613.270.2279  fax 613.492.7870 skype allain.legacy
> 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
> 
>  
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Legacy, Allain
Can someone comment on whether the following scenario has been discussed
before or whether this is viewed by the community as a bug?

While debugging a couple of different issues, our investigation has led
us down the path of needing to look at whether the oslo concurrency lock
utilities are working properly or not.  What we found is that it is
possible for a greenthread to continuously acquire a lock even though
there are other threads queued up waiting for the lock.

For instance, a greenthread acquires a lock, does some work, releases
the lock, and then needs to repeat this process over several iterations.
While the first greenthread holds the lock, other greenthreads come along and
attempt to acquire the lock.  Those subsequent greenthreads are added to the
waiters list and suspended.  The observed behavior is that as long as the
first greenthread continues to run without ever yielding it will always
re-acquire the lock even before any of the waiters.

To illustrate my point I have included a short program that shows the
effect of multiple threads contending for a lock with and without
voluntarily yielding.   The code follows, but the output from both
sample runs is included here first.

In both examples the output is formatted as "worker=XXX: YYY" where XXX
is the worker number, and YYY is the number of times the worker has been
executed while holding the lock.

In the first example, notice that each worker gets to finish all of its
tasks before any subsequent worker gets to run even once.

In the second example, notice that the workload is fair and each worker
gets to hold the lock once before passing it on to the next in line.

Example1 (without voluntarily yielding):
=
worker=0: 1
worker=0: 2
worker=0: 3
worker=0: 4
worker=1: 1
worker=1: 2
worker=1: 3
worker=1: 4
worker=2: 1
worker=2: 2
worker=2: 3
worker=2: 4
worker=3: 1
worker=3: 2
worker=3: 3
worker=3: 4



Example2 (with voluntarily yielding):
=
worker=0: 1
worker=1: 1
worker=2: 1
worker=3: 1
worker=0: 2
worker=1: 2
worker=2: 2
worker=3: 2
worker=0: 3
worker=1: 3
worker=2: 3
worker=3: 3
worker=0: 4
worker=1: 4
worker=2: 4
worker=3: 4



Code:
=
import eventlet
eventlet.monkey_patch

from oslo_concurrency import lockutils

workers = {}

synchronized = lockutils.synchronized_with_prefix('foo')

@synchronized('bar')
def do_work(index):
    global workers
    workers[index] = workers.get(index, 0) + 1
    print "worker=%s: %s" % (index, workers[index])


def worker(index, nb_jobs, sleep):
    for x in xrange(0, nb_jobs):
        do_work(index)
        if sleep:
            eventlet.greenthread.sleep(0)  # yield
    return index


# hold the lock before starting workers to make sure that all workers queue up
# on the lock before any of them actually get to run.
@synchronized('bar')
def start_work(pool, nb_workers=4, nb_jobs=4, sleep=False):
    for i in xrange(0, nb_workers):
        pool.spawn(worker, i, nb_jobs, sleep)


print "Example1:  sleep=False"
workers = {}
pool = eventlet.greenpool.GreenPool()
start_work(pool)
pool.waitall()


print "Example2:  sleep=True"
workers = {}
pool = eventlet.greenpool.GreenPool()
start_work(pool, sleep=True)
pool.waitall()




Regards,
Allain


Allain Legacy, Software Developer, Wind River
direct 613.270.2279  fax 613.492.7870 skype allain.legacy
350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5

 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev