> I'm more concerned about dealing with large numbers of simultaneous
> clients (say 20,000 who all hit at 10 AM) and I've run into problems
> with both dbm and mysql where at a certain point of write activity
> you basically can't keep up.  These problems may be solvable but
> timings just below the problem threshold don't give you much warning
> about what is going to happen when your locks begin to overlap. 

If you are using an RDBMS which has atomic operations, you can safely turn
off locking in Apache::Session.  The current locking scheme in
Apache::Session only prevents scrambling the data in non-atomic backing
stores such as plain files on disk.  You can turn locking off by using the
NullLocker class.
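
For illustration, here is a minimal sketch of what that looks like.  This
assumes a version of Apache::Session that ships the Flex wrapper and a
Null lock class (older releases may expose the same thing as NullLocker),
and the database name and credentials are made up:

    use strict;
    use DBI;
    use Apache::Session::Flex;

    # Hypothetical connection details, for illustration only.
    my $dbh = DBI->connect('dbi:mysql:sessions', 'user', 'password',
                           { RaiseError => 1 });

    my %session;
    tie %session, 'Apache::Session::Flex', undef, {
        Store     => 'MySQL',    # atomic backing store
        Lock      => 'Null',     # no locking at all
        Generate  => 'MD5',
        Serialize => 'Storable',
        Handle    => $dbh,
    };

    $session{visits}++;          # updates go straight to the RDBMS
    untie %session;              # flushes the session back to the store

Since the RDBMS guarantees that each row update is atomic, skipping the
locker costs you nothing in terms of data integrity.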

Version 1.5 has an option (called Transaction) which provides
transactional read consistency.  This is achieved by completely
serializing all access: when the option is set, the session object
acquires an exclusive lock before reading its contents from the backing
store, and holds that exclusive lock until the object is destroyed.
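
As a rough sketch (assuming the MySQL backing store; the exact
constructor arguments may vary between versions), the option is passed in
the tie arguments:

    use strict;
    use DBI;
    use Apache::Session::MySQL;

    my $dbh = DBI->connect('dbi:mysql:sessions', 'user', 'password',
                           { RaiseError => 1 });

    my $session_id = $ARGV[0];   # hypothetical: an existing session id

    my %session;
    tie %session, 'Apache::Session::MySQL', $session_id, {
        Handle      => $dbh,
        LockHandle  => $dbh,
        Transaction => 1,        # exclusive lock taken before the read,
                                 # held until the object is destroyed
    };

    # Everything between here and untie sees a consistent view.
    $session{last_hit} = time();
    untie %session;              # releases the exclusive lock

The price of that consistency is that only one process can touch a given
session at a time, so use it only where you really need it.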

On the subject of locking, I think that the daemon locker is going to be
the fastest solution for a large-scale operation.  Currently, the
semaphore locker is the slowest (and also only scales to one machine), and
the file locker is only slightly better.

-jwb
