> I disagree. Anish is talking about expecting coherent failure
> behaviour from a function when you pass it an invalid pointer as a
> parameter. Checking for NULL pointers in a function expecting a
> pointer parameter is a waste of time. Most pointer variables are
> automatic variables, and thus they are not initialized to 0. In that
> context, checking for NULL is like checking for 24.
>
> What makes sense, and of course is mandatory, is to check for a NULL
> pointer after calling a function that _explicitly_ sets its return
> value to NULL on failure or some other condition.
Let's use an example so I get your point.
##
/*
 * Test: pdf_list_create_002
 * Description:
 *   Try to create an empty list given a NULL list pointer.
 * Success condition:
 *   Returns PDF_EBADDATA
 */
START_TEST (pdf_list_create_002)
{
  fail_if (pdf_list_create (l_comp, l_disp, 0, NULL) != PDF_EBADDATA);
}
END_TEST
##
Is that unit test useless? If so, I wasted a long time checking for
NULL pointers and writing those tests. :-/ (dumb me)
Well, a check like:

  pdf_list_create (foo, bar, baz, list)
  {
    ...
    if (list == NULL)
      return PDF_EBADDATA;
    ...
  }
would protect the function against a single case of invalid pointer.
But, as I mentioned earlier, in C there is no way (AFAIK) to determine
whether a pointer is valid or invalid. So you are covering the case
list == NULL, but not list == 24 (which is very likely an invalid
address), nor list == 25 or list == 1.
So IMHO it is not worth checking for NULL pointers inside functions
taking pointers as parameters, _except_ when the function
documentation says something about a possible NULL value, like:
Parameters
- Filesystem
A filesystem. If NULL then the default filesystem will be used.
--
Jose E. Marchesi <[EMAIL PROTECTED]>
<[EMAIL PROTECTED]>
GNU Spain http://es.gnu.org
GNU Project http://www.gnu.org