On Thu, 2010-12-30 at 16:17 +0000, Steven van der Vegt wrote:
> Today I finished rewriting the hashtable locking logic. I've added the diff
> below.
Thanks for the patch, we'll have a closer look at it. A little problem
to start with: the reason we avoided thread locks is the threads library
implementation. If you look at the man page, you'll see that, depending
on scheduler support, it is not guaranteed that a write lock will get
priority over read locks. From the man page of pthread_rwlock_rdlock(3):
    If the Thread Execution Scheduling option is supported, and the
    threads involved in the lock are executing with the scheduling
    policies SCHED_FIFO or SCHED_RR, the calling thread shall not
    acquire the lock if a writer holds the lock or if writers of
    higher or equal priority are blocked on the lock; otherwise, the
    calling thread shall acquire the lock.

    If the Thread Execution Scheduling option is supported, and the
    threads involved in the lock are executing with the
    SCHED_SPORADIC scheduling policy, the calling thread shall not
    acquire the lock if a writer holds the lock or if writers of
    higher or equal priority are blocked on the lock; otherwise, the
    calling thread shall acquire the lock.
And also:
    With a large number of readers, and relatively few writers,
    there is the possibility of writer starvation. If there are
    threads waiting for an exclusive write lock on the read/write
    lock and there are threads that currently hold a shared read
    lock, the shared read lock request will be granted.
That means that under heavy load the thread waiting for the write lock
may never acquire it: at any given time a number of read locks may be
active, since multiple threads can hold the read lock simultaneously.
I am not entirely sure how to solve this.
> Still a few considerations to make:
> For now I use an array of a fixed size which is used to store the elements to
> be deleted.
That is not a problem - you can easily use a second hash table just to
temporarily hold the values to be deleted.
> This array is filled (with a read lock) through the do_all lhash function,
> which calls the t_old function. If the array is full, do_all can't be
> stopped, so t_old just returns. After do_all is finished, the items are
> deleted (with a write lock). There are a few things that come to mind with
> this solution. The size of the array is fixed. What if every minute more
> items are added than deleted? I suggest that we make this size variable:
> either set it in the config or calculate it.
>
> Also, I found a bug in the Makefile. I edited the pound.h file but the
> poundctl binary didn't rebuild. I think there's something wrong with the
> dependencies.
I would check the system date - we never ran into this problem.
--
Robert Segall
Apsis GmbH
Postfach, Uetikon am See, CH-8707
Tel: +41-32-512 30 19
--
To unsubscribe send an email with subject unsubscribe to [email protected].
Please contact [email protected] for questions.