I need to guarantee that only one process at a time enters a subroutine foo() for a particular argument.

That is, if one process is in a call to foo(1), another call to foo(1) will block, but a call to foo(2) could proceed.

This needs to be guaranteed across multiple servers, as the calls to foo() manipulate multiple shared objects in the database.

Even though foo() isn't directly associated with one database table (and thus I can't rely on database transactions directly), I figured I could use the database to enforce the mutexes.

My idea was to create a mutexes table with, say, 1024 rows:

  create table mutexes (id int primary key); -- indexed so the update locks exactly one row
  insert into mutexes values (0);
  ...
  insert into mutexes values (1023);
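As an aside, seeding 1024 rows by hand is tedious; a short loop over a DBI handle does the same thing (a sketch, assuming a connected handle in $dbh with AutoCommit off and the table already created):

```perl
# Seed one row per bucket. Assumes $dbh is an open DBI connection
# and the mutexes table already exists.
my $sth = $dbh->prepare("insert into mutexes (id) values (?)");
$sth->execute($_) for 0 .. 1023;
$dbh->commit;
```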

Then on a call to foo(), I hash the argument to an integer in 0..1023 and reserve that row with a dummy update:

  sub foo {
      my ($id) = @_;

      my $hash = $id % 1024;
      my $dbh = DBI->connect(..., AutoCommit => 0);

      # do() executes the statement with the bind values; the row lock
      # taken by the UPDATE is held until commit or rollback.
      $dbh->do("update mutexes set id = ? where id = ?", undef, $hash, $hash);

      ...  # mutual exclusion guaranteed in here

      $dbh->commit(); # or $dbh->rollback() - not sure which is cheaper
  }

I'm aware of the deadlocking potential of mutexes, but will avoid that by only reserving one row per process at a time. I'm also aware that some unnecessary serialization may occur due to hash collisions, but I'm not too worried about that and can always increase the number of buckets if needed.
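To make the collision point concrete: any two arguments that are congruent mod 1024 land in the same bucket, so their calls serialize even though they're logically independent:

```perl
# Example collision: ids 1 and 1025 hash to the same bucket,
# so foo(1) and foo(1025) would block each other.
my $buckets = 1024;
print 1    % $buckets, "\n";   # prints 1
print 1025 % $buckets, "\n";   # also prints 1
```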

This seems to work in testing. Just wanted to find out if it makes sense, if there's a CPAN module that already does this (couldn't find one), or if there are problems that could cause this to blow up.

Thanks!
Jon
