Re: Check NULL pointers or not...

2019-12-02 Thread Salz, Rich
  *   In a production environment, it is almost never appropriate to simply 
crash in an uncontrolled manner (i.e. dereferencing a NULL pointer).

Applications that want this can check parameters themselves before calling the 
function.

Saying "C runtime arguments don't hold" only because they go against your position :)
FILE *f = fopen("/", "w");
fprintf(f, "hello world");
Is a programming error.  No two ways about it. The C runtime doesn’t protect 
against that programming error.
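The caller-side check that contract implies might be sketched as follows (`write_greeting` is a hypothetical illustration, not code from the thread):

```c
#include <stdio.h>

/* fopen() reports failure by returning NULL; it is the caller's job
 * to test for that before use, exactly as in the example above. */
int write_greeting(const char *path)
{
    FILE *f = fopen(path, "w");

    if (f == NULL)          /* e.g. fopen("/", "w") fails on most systems */
        return 0;           /* report the error instead of crashing */
    fprintf(f, "hello world");
    fclose(f);
    return 1;
}
```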

The OpenSSL contract says things like "foo points to a {some type of object}".
Except for the free routines – where tolerating NULL was new in 1.1 – it says
nothing about NULL (i2d/d2i also excepted).



Re: Check NULL pointers or not...

2019-12-02 Thread Salz, Rich
if (!ossl_assert(ptr != NULL)) {
ERR_raise(ERR_LIB_WHATEVER, ERR_R_PASSED_NULL_PARAMETER);
return 0;
}

If the project decides to do this, I would like a configure option to ONLY turn 
on ossl_assert so that previous behavior can be restored without debug code 
penalties.
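A sketch of how such a compile-time switch might work; the macro and option names here are invented, not real OpenSSL configure options:

```c
#include <stddef.h>

/* Hypothetical build switch: defining OSSL_STRICT_NULL restores the
 * previous behaviour by compiling the NULL checks away entirely, so a
 * release build pays no cost for debug-oriented checking. */
#ifdef OSSL_STRICT_NULL
# define ossl_check_null(ptr) 1            /* check compiles to nothing */
#else
# define ossl_check_null(ptr) ((ptr) != NULL)
#endif

int add_one(const int *ptr)
{
    if (!ossl_check_null(ptr))
        return 0;       /* graceful error in the default build */
    return *ptr + 1;    /* with -DOSSL_STRICT_NULL this crashes on NULL */
}
```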



Re: Check NULL pointers or not...

2019-11-29 Thread Matt Caswell



On 29/11/2019 10:29, Matthias St. Pierre wrote:
> 
> 
> On 29.11.19 10:22, Matt Caswell wrote:
>>
>>  if (!ossl_assert(ptr != NULL)) {
>>  ERR_raise(ERR_LIB_WHATEVER, ERR_R_PASSED_NULL_PARAMETER);
>>  return 0;
>>  }
>>
> 
> I still dislike the odd way in which the assertion needs to be formulated,
> with the double negation. With the `ossl_is_null()` macro, which I proposed
> in #7218, the same condition would read
> 
>     if (ossl_is_null(ptr)) {
>     ERR_raise(ERR_LIB_WHATEVER, ERR_R_PASSED_NULL_PARAMETER);
>     return 0;
>     }
> 
> 
> Isn't that much more readable and easier to understand?

Yes, I quite like that approach. This is just a refinement of the
"middle ground" option though.

Matt


Re: Check NULL pointers or not...

2019-11-29 Thread Matthias St. Pierre




On 29.11.19 11:29, Matthias St. Pierre wrote:
> 
> On 29.11.19 10:22, Matt Caswell wrote:
>> 
>>  if (!ossl_assert(ptr != NULL)) {
>>  ERR_raise(ERR_LIB_WHATEVER, ERR_R_PASSED_NULL_PARAMETER);
>>  return 0;
>>  }
>> 
> 
> I still dislike the odd way in which the assertion needs to be formulated,
> with the double negation. With the `ossl_is_null()` macro, which I proposed
> in #7218, the same condition would read
> 
>     if (ossl_is_null(ptr)) {
>     ERR_raise(ERR_LIB_WHATEVER, ERR_R_PASSED_NULL_PARAMETER);
>     return 0;
>     }
> 
> Isn't that much more readable and easier to understand?
> 
> For more examples like that, see the change set
> 
> https://github.com/openssl/openssl/pull/7218/files
> 
> Matthias





Moreover, in the debug build you get the error message  "Invalid NULL pointer:" instead 
of a generic "Assertion Failed:"

https://github.com/openssl/openssl/pull/7218/files#diff-6e9d962dc8c30948fdf827ad471ec11dR41-R44
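A plausible sketch of the idea behind such a macro (the real definition lives in PR #7218 and may differ; `DEBUG_BUILD` is a stand-in for the actual debug-build guard):

```c
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* In a debug build, a NULL pointer aborts with a specific message
 * naming the offending expression; in a release build the macro just
 * evaluates the test so the caller can return an error. */
#ifdef DEBUG_BUILD
# define ossl_is_null(p) \
    ((p) == NULL \
     ? (fprintf(stderr, "Invalid NULL pointer: %s\n", #p), abort(), 1) \
     : 0)
#else
# define ossl_is_null(p) ((p) == NULL)
#endif

int frob(const char *ptr)
{
    if (ossl_is_null(ptr))
        return 0;   /* release build: error return instead of a crash */
    return 1;
}
```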





Re: Check NULL pointers or not...

2019-11-29 Thread Matthias St. Pierre




On 29.11.19 10:22, Matt Caswell wrote:
> 
>  if (!ossl_assert(ptr != NULL)) {
>  ERR_raise(ERR_LIB_WHATEVER, ERR_R_PASSED_NULL_PARAMETER);
>  return 0;
>  }



I still dislike the odd way in which the assertion needs to be formulated,
with the double negation. With the `ossl_is_null()` macro, which I proposed
in #7218, the same condition would read

if (ossl_is_null(ptr)) {
ERR_raise(ERR_LIB_WHATEVER, ERR_R_PASSED_NULL_PARAMETER);
return 0;
}


Isn't that much more readable and easier to understand?


For more examples like that, see the change set

https://github.com/openssl/openssl/pull/7218/files


Matthias




Re: Check NULL pointers or not...

2019-11-29 Thread Dr Paul Dale
Oops, you are correct.  I was under the mistaken impression that ossl_assert 
compiled to nothing outside of debug mode.


Pauli
-- 
Dr Paul Dale | Distinguished Architect | Cryptographic Foundations 
Phone +61 7 3031 7217
Oracle Australia




> On 29 Nov 2019, at 7:22 pm, Matt Caswell  wrote:
> 
> 
> 
> On 29/11/2019 08:38, Dr Paul Dale wrote:
>> I’d prefer option 1 or the middle ground.  I’ve lost count of the
>> number of times I’ve seen programs crashing in the crypto library
>> which required mammoth debugging efforts to irrefutably demonstrate
>> that the caller is at fault rather than the crypto library :(
>> 
>> Option 1 would be preferable from this point of view but it can cause
>> a performance hit — most of the time it wouldn’t matter but when it
>> does it would be a big deal.  The middle ground doesn’t entail any
>> performance loss in production code (it does in debug but that
>> shouldn’t be relevant).
> 
> 
> I think you misunderstand the middle ground option:
> 
>if (!ossl_assert(ptr != NULL)) {
>ERR_raise(ERR_LIB_WHATEVER, ERR_R_PASSED_NULL_PARAMETER);
>return 0;
>}
> 
> In debug code this will crash if ptr is NULL. In production code this
> acts exactly like option 1 - so has exactly the same performance hit.
> 
> For the record my preference is the middle ground option as being the
> norm for new code and where we make a significant refactor of old code.
> If something truly is performance critical then we can choose not to do
> it in those cases.
> 
> Matt
> 



Re: Check NULL pointers or not...

2019-11-29 Thread Tim Hudson
On Fri, Nov 29, 2019 at 7:08 PM Tomas Mraz  wrote:

> The "always check for NULL pointers" approach does not avoid
> catastrophical errors in applications.


I didn't say it avoided all errors (nor did anyone else on the thread that
I've read) - but it does avoid a whole class of errors.

And for that particular context there are many things you can do to
mitigate it - and incorrect handling of EVP_CipherUpdate itself is very
common - where error returns are completely ignored.
We could reasonably define that it should wipe out the output buffer on any
error condition - that would make the function safer in a whole pile of
contexts.
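That mitigation could be sketched as follows (a hypothetical wrapper, not a real OpenSSL function):

```c
#include <stddef.h>
#include <string.h>

/* On any failure, scrub the output buffer before returning, so a
 * caller that ignores the return value cannot accidentally send
 * stale plaintext that is still sitting in the buffer. */
int wiping_update(void *ctx, unsigned char *out, size_t outlen)
{
    if (ctx == NULL) {              /* stands in for any error path */
        memset(out, 0, outlen);     /* fail closed: wipe the output */
        return 0;
    }
    /* ... real encryption of out[0..outlen) would happen here ... */
    return 1;
}
```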

However that is talking about a different issue IMHO.

Tim.


Re: Check NULL pointers or not...

2019-11-29 Thread Matt Caswell



On 29/11/2019 08:38, Dr Paul Dale wrote:
> I’d prefer option 1 or the middle ground.  I’ve lost count of the
> number of times I’ve seen programs crashing in the crypto library
> which required mammoth debugging efforts to irrefutably demonstrate
> that the caller is at fault rather than the crypto library :(
> 
> Option 1 would be preferable from this point of view but it can cause
> a performance hit — most of the time it wouldn’t matter but when it
> does it would be a big deal.  The middle ground doesn’t entail any
> performance loss in production code (it does in debug but that
> shouldn’t be relevant).


I think you misunderstand the middle ground option:

if (!ossl_assert(ptr != NULL)) {
ERR_raise(ERR_LIB_WHATEVER, ERR_R_PASSED_NULL_PARAMETER);
return 0;
}

In debug code this will crash if ptr is NULL. In production code this
acts exactly like option 1 - so has exactly the same performance hit.

For the record my preference is the middle ground option as being the
norm for new code and where we make a significant refactor of old code.
If something truly is performance critical then we can choose not to do
it in those cases.

Matt



Re: Check NULL pointers or not...

2019-11-29 Thread Tomas Mraz
The "always check for NULL pointers" approach does not avoid
catastrophic errors in applications. For example, let's say an
application encrypts some plaintext in-place and sends it out as
encrypted. Let's say we check for a NULL EVP_CIPHER_CTX in
EVP_CipherUpdate(), but the app does not bother checking the error
return, just as it did not bother for EVP_CIPHER_CTX_new(). The
application will then happily (and silently) send out plaintext
instead of ciphertext.
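That failure mode can be modelled with a self-contained sketch (mock types and functions, not the real EVP API):

```c
#include <stddef.h>
#include <string.h>

typedef struct { int ready; } MOCK_CTX;

static MOCK_CTX *mock_ctx_new(void)
{
    return NULL;                    /* simulate allocation failure */
}

static int mock_cipher_update(MOCK_CTX *ctx, unsigned char *buf, size_t len)
{
    if (ctx == NULL)
        return 0;                   /* NULL is checked: error, not a crash */
    memset(buf, 'X', len);          /* stand-in for in-place encryption */
    return 1;
}

/* Careless caller: never checks a return value. */
void careless_send(unsigned char *buf, size_t len)
{
    MOCK_CTX *ctx = mock_ctx_new();      /* NULL, but never checked */
    mock_cipher_update(ctx, buf, len);   /* fails, but never checked */
    /* buf now goes on the wire unchanged, i.e. still plaintext */
}
```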

-- 
Tomáš Mráz
No matter how far down the wrong road you've gone, turn back.
  Turkish proverb
[You'll know whether the road is wrong if you carefully listen to your
conscience.]




Re: Check NULL pointers or not...

2019-11-29 Thread Dr Paul Dale
I’d prefer option 1 or the middle ground.  I’ve lost count of the number of 
times I’ve seen programs crashing in the crypto library which required mammoth 
debugging efforts to irrefutably demonstrate that the caller is at fault rather 
than the crypto library :(

Option 1 would be preferable from this point of view but it can cause a 
performance hit — most of the time it wouldn’t matter but when it does it would 
be a big deal.  The middle ground doesn’t entail any performance loss in 
production code (it does in debug but that shouldn’t be relevant).


Pauli




> On Wed, Nov 27, 2019 at 10:38:41AM +0100, Richard Levitte wrote:
>> Depending on who you're asking, you get different responses.
>> 
>> The topic is: should we check pointers coming from applications before
>> using them or not?
>> 
>> There are two positions on this:
>> 
>> 1. Yes we should so we don't crash processes unnecessarily, and we
>>   should return an error if pointers are NULL (unless that's valid
>>   input, such as for the freeing functions).  There is a generic
>>   error reason code for it, ERR_R_PASSED_NULL_PARAMETER.
>> 
>> 2. No we should not, as we don't cater for user programming errors.
>>   Also, it allows us to catch our own errors early.
>> 
>> For a while, our pendulum was going with option 2, but lately, we seem
>> to go back to option 1.
>> 
>> Both options have their pros and cons:
>> 
>> 1. cons: there's the very real possibility that users don't check for
>> errors as they should, and things go wrong further on,
>> sometimes in ways where the actual cause is very hard to
>> figure out.
>>   pros: programs will not spuriously crash because we didn't catch an
>> internal corner case in our tests.
>> 
>> 2. cons: programs may crash, sometimes due to our own mistakes,
>> sometimes due to user errors.
>>   pros: some very subtle bugs are found, either quicker or at all.
>> An actual case is PR 7630 [1], which was uncovered because
>> there was a crash due to a NULL being used (this was
>> ultimately due to users not checking errors).
>> 
>> As a middle ground, and perhaps a way to satisfy both camps, we could
>> use ossl_assert() in our checks for input NULL, as follows:
>> 
>>if (!ossl_assert(ptr != NULL)) {
>>ERR_raise(ERR_LIB_WHATEVER, ERR_R_PASSED_NULL_PARAMETER);
>>return 0;
>>}
>> 
>> This means that in a debug build, there may be crashes due to pointers
>> being NULL, while in a release build, the caller gets an error.
>> 
>> Thoughts?




Re: Check NULL pointers or not...

2019-11-29 Thread Tim Hudson
The way I view the issue is to look at what happens when things go wrong -
what is the impact - and evaluate the difference in behaviour between the
approaches.
You have to start from the premise that (in general) software is not tested
for all possible usage models - i.e. test coverage isn't at 100% - so there
are always code paths and circumstances that can occur that simply haven't
been hit during testing.

That means that having a special assert enabled build doesn't solve the
problem - in that a gap in testing always remains - so the "middle ground"
doesn't alter the fundamentals as such and does also lead to basically
testing something other than what is running in a default (non-debug)
build. Testing precisely what you will actually run is a fairly normal
principle to work from - but many overlook those sorts of differences.

In a production environment, it is almost never appropriate to simply crash
in an uncontrolled manner (i.e. dereferencing a NULL pointer).
There are many contexts that generate real issues where a more graceful
failure mode (i.e. anything other than crashing) results in a much better
user outcome.
Denial-of-service attacks often rely on this sort of issue - basically
recognising that testing doesn't cover everything and finding unusual
circumstances that create some form of load on a system that impacts other
things.
You are effectively adding in a whole pile of abort() references in the
code by not checking - that is the actual effect - and abort isn't
something that should ever be called in production code.

The other rather practical thing is that when you do check for incoming
NULL pointers, you end up with far fewer reports of issues in a library -
as you aren't sitting in the call chain of a crash - and that in itself
saves a lot of time when dealing with users. Many people will report issues
like this - but if they get an error return rather than a crash they
(generally) keep looking at their own code. And developer time for OpenSSL
(like many projects) is the scarcest resource that we have. Anything that
reduces the time spent triaging crashes lets people perform more useful work
than debugging a coding error of an OpenSSL user.

Another simple argument that helps to make a choice is that whatever we do
we need to be consistent in our handling of things - and at the moment many
functions check and many functions do not.
And if we make things consistent we certainly don't really have the option
of undoing any of the null checks because that will break code - it is
effectively part of the API contract - that it is safe to call certain
functions with NULL arguments. Adding extra checks from an API contract
point of view is harmless, removing them isn't an option. And if we want
consistency then we pretty much have only one choice - to check everywhere.

There are *very few places* where such checks will have a substantial
performance impact in real-world contexts.

Arguments about the C runtime library not checking simply aren't relevant.
The C runtime library doesn't set the API contracts and usage model that we
use. And there are C runtime functions that do check.

What we should be looking at, in my view, is the impact when things go
wrong - and a position that basically says the caller is never allowed to
make mistakes, even for simple things like this which are easy to check,
isn't appropriate. Checking also reduces the burden on the project (and it
increases users' perception of the quality, because they end up looking at
their own bugs rather than thinking that OpenSSL has a major issue with an
uncontrolled crash - even if they don't report it publicly).

Tim.


Re: Check NULL pointers or not...

2019-11-28 Thread Tomas Mraz
On Thu, 2019-11-28 at 11:19 -0800, Benjamin Kaduk wrote:
> On Wed, Nov 27, 2019 at 10:38:41AM +0100, Richard Levitte wrote:
> > Depending on who you're asking, you get different responses.
> > 
> > The topic is: should we check pointers coming from applications
> > before
> > using them or not?
> > 
> > There are two positions on this:
> > 
> > 1. Yes we should so we don't crash processes unnecessarily, and we
> >should return an error if pointers are NULL (unless that's valid
> >input, such as for the freeing functions).  There is a generic
> >error reason code for it, ERR_R_PASSED_NULL_PARAMETER.
> > 
> > 2. No we should not, as we don't cater for user programming errors.
> >Also, it allows us to catch our own errors early.
> > 
> > For a while, our pendulum was going with option 2, but lately, we
> > seem
> > to go back to option 1.
> > 
> > Both options have their pros and cons:
> > 
> > 1. cons: there's the very real possibility that users don't check
> > for
> >  errors as they should, and things go wrong further on,
> >  sometimes in ways where the actual cause is very hard to
> >  figure out.
> >pros: programs will not spuriously crash because we didn't catch
> > an
> >  internal corner case in our tests.
> > 
> > 2. cons: programs may crash, sometimes due to our own mistakes,
> >  sometimes due to user errors.
> >pros: some very subtle bugs are found, either quicker or at all.
> >  An actual case is PR 7630 [1], which was uncovered because
> >  there was a crash due to a NULL being used (this was
> >  ultimately due to users not checking errors).
> > 
> > As a middle ground, and perhaps a way to satisfy both camps, we
> > could
> > use ossl_assert() in our checks for input NULL, as follows:
> > 
> > if (!ossl_assert(ptr != NULL)) {
> > ERR_raise(ERR_LIB_WHATEVER, ERR_R_PASSED_NULL_PARAMETER);
> > return 0;
> > }
> > 
> > This means that in a debug build, there may be crashes due to
> > pointers
> > being NULL, while in a release build, the caller gets an error.
> > 
> > Thoughts?
> 
> My recollection was that we had some reasonable support for this
> "middle
> ground" approach.  I don't personally have strong enough feelings to
> advocate a specific position, and will try to adhere to the group
> consensus
> going forward.

I'd prefer option 2 or the middle-ground option. However, if we choose
the middle ground, will we try to apply it to old code, or will we just
add it to new API calls or when the code of an existing API call is
being significantly reworked anyway?

Also, there might be exceptional cases where taking option 1 is
actually a better choice, but I do not think this should be the norm.

Also, in some cases where the API call is called very frequently in
performance-sensitive code it might be a good idea to choose option 2
even if we decide to use the middle ground otherwise.
-- 
Tomáš Mráz




Re: Check NULL pointers or not...

2019-11-28 Thread Benjamin Kaduk
On Wed, Nov 27, 2019 at 10:38:41AM +0100, Richard Levitte wrote:
> Depending on who you're asking, you get different responses.
> 
> The topic is: should we check pointers coming from applications before
> using them or not?
> 
> There are two positions on this:
> 
> 1. Yes we should so we don't crash processes unnecessarily, and we
>should return an error if pointers are NULL (unless that's valid
>input, such as for the freeing functions).  There is a generic
>error reason code for it, ERR_R_PASSED_NULL_PARAMETER.
> 
> 2. No we should not, as we don't cater for user programming errors.
>Also, it allows us to catch our own errors early.
> 
> For a while, our pendulum was going with option 2, but lately, we seem
> to go back to option 1.
> 
> Both options have their pros and cons:
> 
> 1. cons: there's the very real possibility that users don't check for
>  errors as they should, and things go wrong further on,
>  sometimes in ways where the actual cause is very hard to
>  figure out.
>pros: programs will not spuriously crash because we didn't catch an
>  internal corner case in our tests.
> 
> 2. cons: programs may crash, sometimes due to our own mistakes,
>  sometimes due to user errors.
>pros: some very subtle bugs are found, either quicker or at all.
>  An actual case is PR 7630 [1], which was uncovered because
>  there was a crash due to a NULL being used (this was
>  ultimately due to users not checking errors).
> 
> As a middle ground, and perhaps a way to satisfy both camps, we could
> use ossl_assert() in our checks for input NULL, as follows:
> 
> if (!ossl_assert(ptr != NULL)) {
> ERR_raise(ERR_LIB_WHATEVER, ERR_R_PASSED_NULL_PARAMETER);
> return 0;
> }
> 
> This means that in a debug build, there may be crashes due to pointers
> being NULL, while in a release build, the caller gets an error.
> 
> Thoughts?

My recollection was that we had some reasonable support for this "middle
ground" approach.  I don't personally have strong enough feelings to
advocate a specific position, and will try to adhere to the group consensus
going forward.

-Ben


Re: AW: Check NULL pointers or not...

2019-11-28 Thread Richard Levitte
On Wed, 27 Nov 2019 11:53:01 +0100,
Dr. Matthias St. Pierre wrote:
> 
> FYI: there was a related discussion a year ago on GitHub about an 
> ossl_is_null() macro,
> 
> https://github.com/openssl/openssl/pull/7218
> 
> which ended up undecided.

Thank you.  I had totally forgotten that one, so reminder appreciated.

Looking at the diff shows me that there's a precedent for what I
proposed (using ossl_assert).

Cheers,
Richard

-- 
Richard Levitte levi...@openssl.org
OpenSSL Project http://www.openssl.org/~levitte/


Check NULL pointers or not...

2019-11-27 Thread Richard Levitte
Depending on who you're asking, you get different responses.

The topic is: should we check pointers coming from applications before
using them or not?

There are two positions on this:

1. Yes we should so we don't crash processes unnecessarily, and we
   should return an error if pointers are NULL (unless that's valid
   input, such as for the freeing functions).  There is a generic
   error reason code for it, ERR_R_PASSED_NULL_PARAMETER.

2. No we should not, as we don't cater for user programming errors.
   Also, it allows us to catch our own errors early.

For a while, our pendulum was going with option 2, but lately, we seem
to go back to option 1.

Both options have their pros and cons:

1. cons: there's the very real possibility that users don't check for
 errors as they should, and things go wrong further on,
 sometimes in ways where the actual cause is very hard to
 figure out.
   pros: programs will not spuriously crash because we didn't catch an
 internal corner case in our tests.

2. cons: programs may crash, sometimes due to our own mistakes,
 sometimes due to user errors.
   pros: some very subtle bugs are found, either quicker or at all.
 An actual case is PR 7630 [1], which was uncovered because
 there was a crash due to a NULL being used (this was
 ultimately due to users not checking errors).

As a middle ground, and perhaps a way to satisfy both camps, we could
use ossl_assert() in our checks for input NULL, as follows:

if (!ossl_assert(ptr != NULL)) {
ERR_raise(ERR_LIB_WHATEVER, ERR_R_PASSED_NULL_PARAMETER);
return 0;
}

This means that in a debug build, there may be crashes due to pointers
being NULL, while in a release build, the caller gets an error.

Thoughts?

Cheers,
Richard

[1] https://github.com/openssl/openssl/pull/7630
