According to POSIX 1003.1c-1995, no such mutex-altering function exists.

The pthread_mutexattr_gettype()/settype() functions are defined by X/Open XSH5 (Unix98). I would suggest writing a wrapper for OSs that don't implement recursive locks (it's easy enough to make your own implementation: just check pthread_self() before deciding whether to lock the mutex, potentially again). Either that, or the recursive locks can be eliminated.

Just for the record, OS X, Solaris 5.8, FreeBSD 4.8, and LinuxThreads support the UNIX98 version, so perhaps this isn't so important after all.

On Thursday, June 26, 2003, at 08:45 PM, Philip Yarra wrote:

On Thu, 26 Jun 2003 11:19 am, Philip Yarra wrote:

there appears to still be a problem
occurring at "EXEC SQL DISCONNECT con_name". I'll look into it tonight if I

I did some more poking around last night, and believe I have found the issue:
RedHat Linux 7.3 (the only distro I have access to currently) ships with a
fairly challenged pthreads implementation. The default mutex type (which you
get from PTHREAD_MUTEX_INITIALIZER) is, according to the man page,
PTHREAD_MUTEX_FAST_NP, which is not a recursive mutex. If a thread owns a
mutex and attempts to lock the mutex again, it will hang.

By initialising the two mutexes that are used recursively (debug_mutex and
connections_mutex) as recursive mutexes, I got my sample app to work
flawlessly on RedHat Linux 7.3.

Sadly, the _NP suffix is used to indicate non-portable, so of course my
FreeBSD box steadfastly refused to compile it. Darn.

The correct way to do this appears to be:

pthread_mutexattr_t mattr;
pthread_mutexattr_init(&mattr);
pthread_mutexattr_settype(&mattr, PTHREAD_MUTEX_RECURSIVE);

(I will verify this against FreeBSD when I get home, and the Tru64 man page
indicates support for this too, so I'll test that later.) It won't work on
RedHat Linux 7.3, though... I guess some sort of fallback might do it, if we
could detect the problem during configure. How is this sort of detection
handled in other cases (such as long long, etc.)?
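One guess at what such a configure-guarded fallback could look like; the HAVE_PTHREAD_MUTEX_RECURSIVE symbol is invented here and would have to be set by a configure probe (PTHREAD_MUTEX_RECURSIVE is an enum value in glibc, so a plain #ifdef on it won't work):

```c
/* Pick whichever recursive-mutex constant the platform provides.
 * HAVE_PTHREAD_MUTEX_RECURSIVE is a hypothetical configure result. */
#ifdef HAVE_PTHREAD_MUTEX_RECURSIVE
#define ECPG_RECURSIVE_MUTEX PTHREAD_MUTEX_RECURSIVE	/* Unix98 */
#else
#define ECPG_RECURSIVE_MUTEX PTHREAD_MUTEX_RECURSIVE_NP	/* LinuxThreads */
#endif

pthread_mutexattr_init(&mattr);
pthread_mutexattr_settype(&mattr, ECPG_RECURSIVE_MUTEX);
```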

The other solution I can think of is to eradicate the two recursive locks I found.

One is simple: ECPGlog calls ECPGdebug, and they share debug_mutex. It ought
to be okay to use a different mutex for each of these functions (there's a
risk someone might call ECPGdebug while someone else is running through
ECPGlog, but I think that is less likely, since it is a debug mechanism).

The second recursive lock I found is ECPGdisconnect calling
ECPGget_connection, both of which share a mutex. Would it be okay if we did
the following:

ECPGdisconnect() still locks connections_mutex, but calls
ECPGget_connection_nr() instead of ECPGget_connection()

ECPGget_connection() becomes a locking wrapper, which locks connections_mutex
then calls ECPGget_connection_nr()

ECPGget_connection_nr() is a non-locking function which implements what
ECPGget_connection() currently does.

I'm not sure if this sort of thing is okay (and there may be other recursive
locking scenarios that I haven't exercised yet).

What approach should I take? I'm leaning towards eradicating recursive locks,
unless someone has a good reason not to.

All this does kinda raise the interesting question of why it worked at all
on FreeBSD... probably different scheduling and blind luck, I suppose.

FreeBSD 4.8 must have PTHREAD_MUTEX_RECURSIVE as default mutex type. I'm a bit
concerned about FreeBSD 4.2 though - I noticed (before I blew it away in
favour of 4.8) that its pthreads implementation came from a package called
linuxthreads.tgz - it might have inherited the same problematic behaviour.
Could someone with access to or knowledge of FreeBSD 4.2 check what the
default mutex type is there?

Regards, Philip.

I can just see the ad for 7.3's pthreads implementation:
"Fast mutexes: zero to deadlock in 6.9 milliseconds!"

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

