> Well, I'm late to this discussion, but it would seem to me that quite
> a few things are wrong with that ...
> First, my_errno=&errno;  might be more appropriate, after all, you need
> to reference the address of errno, not the current value, right? But that
> would also assume errno is declared as an int.  Some implementations,
> like glibc, in bits/errno.h define errno as
>     #define errno (*__errno_location())
> ... so doing what you suggest wouldn't exactly be legal.

The &errno was a typo. As for the second part, the platform knows its
definition of 'errno', so it doesn't have to worry about it being different.
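
For what it's worth, here is a minimal sketch of what that ends up meaning
under glibc specifically (my assumption; other C libraries spell it
differently). Because errno expands to (*__errno_location()), taking its
address just yields the calling thread's own errno slot, so caching that
pointer is still thread-safe:

    #include <errno.h>
    #include <stdio.h>

    int main(void)
    {
        /* Under glibc, errno is (*__errno_location()), so &errno is
         * simply the address __errno_location() returns -- a per-thread
         * location, not a shared global. */
        int *my_errno = &errno;

        *my_errno = ERANGE;                  /* same object as errno */
        printf("errno is now %d\n", errno);  /* prints ERANGE's value */
        return 0;
    }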

> So assuming your system really declares errno as an int, and not a
> define to a function, you wouldn't just be able to use 'my_errno'
> instead of errno ... maybe (*my_errno), since obviously, you can't
> forget to dereference it, right?

Right. Again, I'm talking about an internal optimization. It wasn't meant to
be runnable code.
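
Purely to illustrate the kind of internal translation I mean (a sketch, not
proposed code), the idea is to fetch the address once and dereference it
afterwards, which stays correct on a threaded libc precisely because the
address is per-thread:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void do_work(void)
    {
        int *my_errno = &errno;   /* fetched once for this thread */
        long v;

        *my_errno = 0;
        v = strtol("99999999999999999999", NULL, 10);
        if (*my_errno == ERANGE)  /* dereference instead of naming errno */
            printf("overflow (clamped to %ld): %s\n",
                   v, strerror(*my_errno));
    }

    int main(void)
    {
        do_work();
        return 0;
    }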

Really, it's an extraneous example. The point I'm trying to make is very
simple -- you cannot expect thread safety if the platform provides a way to
ask for it and you choose not to do so.
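
To make "asking for it" concrete (a sketch; the exact macro and compiler
flag are platform-dependent): on the older Unix libcs where thread safety
is opt-in, <errno.h> only gives you the per-thread errno when the reentrant
feature macro is defined, so a threaded build is expected to request it,
typically with something like cc -D_REENTRANT (or -pthread, which usually
implies it):

    #include <errno.h>

    /* Guard for threaded code on platforms where the per-thread errno is
     * opt-in.  If neither macro is defined, errno may be a single global
     * int shared by every thread. */
    #if !defined(_REENTRANT) && !defined(_THREAD_SAFE)
    #  error "build with -D_REENTRANT (or -pthread) for the thread-safe errno"
    #endif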

If you break the rules, optimizations and internal translations may turn
your code into something other than you meant it to be. If you follow the
rules and that happens, the optimization is broken. If you don't follow the
rules, the platform will eventually show you where you should have.
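
A classic instance of that (my example, not one from this thread): an
unsynchronized flag that "works" in an unoptimized build until the compiler
legitimately hoists the load out of the loop and the wait spins forever.

    #include <pthread.h>

    static int done;      /* shared, but no lock, no volatile, no atomics */

    static void *worker(void *arg)
    {
        (void)arg;
        done = 1;         /* the store the spinning thread may never observe */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, worker, NULL);
        while (!done)     /* data race: the optimizer may read `done` once
                             and turn this into while (1) */
            ;
        pthread_join(t, NULL);
        return 0;
    }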

The literature is full of examples of code that broke the rules, seemed to
work, and then failed in horrible ways in production. We can either learn
from the literature or ignore it.

DS


______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       openssl-dev@openssl.org
Automated List Manager                           [EMAIL PROTECTED]
