alloca patch

2012-12-20 Thread Patrick Welche
Having seen the message that a release might be imminent, I had a look
at the patches for autoconf in
http://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/devel/autoconf/patches

so the attached might want to go in...

Cheers,

Patrick
From ffc83effa49340314d71ff266d94b512e1f00e3a Mon Sep 17 00:00:00 2001
From: Patrick Welche pr...@cam.ac.uk
Date: Thu, 20 Dec 2012 09:41:38 +
Subject: [PATCH] AC_FUNC_ALLOCA: Don't define a prototype for alloca() on BSDs

* alloca() is defined in stdlib.h on BSDs. From
  http://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/devel/autoconf/patches/patch-aa
---
 lib/autoconf/functions.m4 | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/autoconf/functions.m4 b/lib/autoconf/functions.m4
index de7a6b8..c7b37df 100644
--- a/lib/autoconf/functions.m4
+++ b/lib/autoconf/functions.m4
@@ -384,6 +384,8 @@ AC_CACHE_CHECK([for alloca], ac_cv_func_alloca_works,
 # ifdef _MSC_VER
 #  include <malloc.h>
 #  define alloca _alloca
+# elif defined(__NetBSD__) || defined(__FreeBSD__) || defined(__DragonFly__) || defined(__OpenBSD__)
+#  include <stdlib.h>
 # else
 #  ifdef HAVE_ALLOCA_H
 #   include <alloca.h>
-- 
1.8.0.1



Re: alloca patch

2012-12-20 Thread Eric Blake
On 12/20/2012 02:51 AM, Patrick Welche wrote:
 Having seen the message that a release might be imminent, I had a look
 at the patches for autoconf in
 http://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/devel/autoconf/patches
 
 so the attached might want to go in...
 

 +++ b/lib/autoconf/functions.m4
 @@ -384,6 +384,8 @@ AC_CACHE_CHECK([for alloca], ac_cv_func_alloca_works,
  # ifdef _MSC_VER
  #  include <malloc.h>
  #  define alloca _alloca
 +# elif defined(__NetBSD__) || defined(__FreeBSD__) || defined(__DragonFly__) || defined(__OpenBSD__)
 +#  include <stdlib.h>

Does it harm things to unconditionally include <stdlib.h> even on
non-BSD platforms?  If not, that would be a simpler solution.
Furthermore, I'm reluctant to patch this without also patching the
documentation to match, where we currently suggest:

#if defined STDC_HEADERS || defined HAVE_STDLIB_H
# include <stdlib.h>
#endif
#include <stddef.h>
#ifdef HAVE_ALLOCA_H
# include <alloca.h>
#elif !defined alloca
# ifdef __GNUC__
#  define alloca __builtin_alloca
# elif defined _AIX
#  define alloca __alloca
# elif defined _MSC_VER
#  include <malloc.h>
#  define alloca _alloca
# elif !defined HAVE_ALLOCA
#  ifdef  __cplusplus
extern "C"
#  endif
void *alloca (size_t);
# endif
#endif

Or maybe the problem is that our test for ac_cv_func_alloca_works
doesn't match the documentation, since it is only doing:

#ifdef __GNUC__
# define alloca __builtin_alloca
#else
# ifdef _MSC_VER
#  include <malloc.h>
#  define alloca _alloca
# else
#  ifdef HAVE_ALLOCA_H
#   include <alloca.h>
#  else
#   ifdef _AIX
 #pragma alloca
#   else
#ifndef alloca /* predefined by HP cc +Olibcalls */
void *alloca (size_t);
#endif
#   endif
#  endif
# endif
#endif
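For context, a minimal configure.ac exercising AC_FUNC_ALLOCA (a hypothetical sketch, not taken from the thread) would cause the probe above to run and define HAVE_ALLOCA_H / HAVE_ALLOCA in config.h:

```m4
# Hypothetical minimal configure.ac for trying the alloca probe.
AC_INIT([alloca-demo], [0.1])
AC_CONFIG_HEADERS([config.h])
AC_PROG_CC
AC_FUNC_ALLOCA
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```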

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org





Re: alloca patch

2012-12-20 Thread Patrick Welche
On Thu, Dec 20, 2012 at 08:31:43AM -0700, Eric Blake wrote:
 Or maybe the problem is that our test for ac_cv_func_alloca_works
 doesn't match the documentation, since it is only doing:

Indeed - the version in the documentation wouldn't need the patch...
I'll just check this...

Cheers,

Patrick



Re: Enabling compiler warning flags

2012-12-20 Thread Jeffrey Walton
On Tue, Dec 18, 2012 at 12:28 AM, David A. Wheeler
dwhee...@dwheeler.com wrote:
 Jim Meyering said:
 Did you realize that several GNU projects now enable virtually
 every gcc warning that is available (even including those that
 are new in the upcoming gcc-4.8, for folks that use bleeding edge gcc)
 via gnulib's manywarnings.m4 configure-time tests?

 Of course, there is a list of warnings that we do disable,
 due to their typical lack of utility and the invasiveness
 of changes required to suppress them.

 Is there any way that the autoconf (or automake) folks could make compiler 
 warnings much, much easier to enable?  Preferably enabled by default when you 
 start packaging something? For example, could gnulib warnings and 
 manywarnings be distributed and enabled as *part* of autoconf? If not, could 
 autoconf at least strongly advertise the existence of these, and include 
 specific instructions to people on how to quickly install it? The autoconf 
 section on gnulib never even *MENTIONS* the warnings and manywarnings 
 stuff!  And while automake has warnings, they are for the automake 
 configuration file... not for compilation.

 Compiler warning flags cost nearly nothing to turn on when you're *starting* 
 a project, but they're harder to enable later (a thousand warnings about the 
 same thing later is harder than fixing it the first time). And while some 
 warnings are nonsense, their use can make the resulting software much, much 
 better. If we got people to turn on warning flags all over the place, during 
 development, a lot of bugs would simply disappear.
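For readers unfamiliar with the gnulib macros mentioned above, a rough configure.ac sketch follows. It assumes warnings.m4 and manywarnings.m4 have been copied from gnulib into m4/; treat the exact macro invocations and the warning list as assumptions to verify against gnulib's documentation:

```m4
# Sketch: enable (nearly) all GCC warnings via gnulib's macros.
gl_MANYWARN_ALL_GCC([ws])                  # ws = every known GCC warning
nw="-Wsystem-headers -Wpadded"             # illustrative exclusion list
gl_MANYWARN_COMPLEMENT([ws], [$ws], [$nw]) # ws := ws minus nw
for w in $ws; do
  gl_WARN_ADD([$w], [WARN_CFLAGS])         # keep only flags gcc accepts
done
AC_SUBST([WARN_CFLAGS])
```

Packages then add $(WARN_CFLAGS) to AM_CFLAGS in Makefile.am.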

To further muddy the water, there are also preprocessor macros that
affect security!

Debug configurations can/should have _DEBUG and DEBUG preprocessor
macros; while Release configurations should/must have _NDEBUG and
NDEBUG preprocessor macros. Posix only observes NDEBUG
(http://pubs.opengroup.org/onlinepubs/009604499/basedefs/assert.h.html).
The additional Debug and Release preprocessor macros help ensure the
'proper' or 'more complete' uptake of third party libraries (such as
SQLite and SQLCipher).

Other libraries also add additional macro dependencies. For example
Objective C Release configurations also need NS_BLOCK_ASSERTIONS=1
defined.

If a project does not observe proper preprocessor macros for a
configuration, a project could fall victim to runtime assertions and
actually DoS itself after the assert calls abort(). The ISC's DNS
server comes to mind (confer: there are CVEs assigned for the errant
behavior, and it's happened more than once!
http://www.google.com/#q=isc+dns+assert+dos).

So there you have it: all the elements of a secure toolchain. It
includes the preprocessor (macros), the compiler (warnings), and
linker (platform security integration). Many people don't realize all
the details that go into getting a project set up correctly, long
before the first line of code is ever written. And it applies to
Makefiles, Eclipse, Net Beans, Xcode, Visual Studio, et al. It's not
just limited to one tool or one platform.

Jeff

___
Autoconf mailing list
Autoconf@gnu.org
https://lists.gnu.org/mailman/listinfo/autoconf


Re: Enabling compiler warning flags

2012-12-20 Thread Bob Friesenhahn

On Thu, 20 Dec 2012, Jeffrey Walton wrote:


If a project does not observe proper preprocessor macros for a
configuration, a project could fall victim to runtime assertions and
actually DoS itself after the assert calls abort(). The ISC's DNS


The falling victim to runtime assertions is the same as falling 
victim to a bug.  It is not necessarily true that removing the 
assertion is better than suffering from the unhandled bug.  Once again 
this is a program/situation-specific issue.


You keep repeating standard recipes which are not proper/best for all 
software.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: Enabling compiler warning flags

2012-12-20 Thread Russ Allbery
Jeffrey Walton noloa...@gmail.com writes:

 If a project does not observe proper preprocessor macros for a
 configuration, a project could fall victim to runtime assertions and
 actually DoS itself after the assert calls abort(). The ISC's DNS server
 comes to mind (confer: there are CVE's assigned for the errant behavior,
 and its happened more than once!
 http://www.google.com/#q=isc+dns+assert+dos).

It's very rare for it to be sane to continue after an assert().  That
would normally mean a serious coding error on the part of the person who
wrote the assert().  The whole point of assert() is to establish
invariants which, if violated, would result in undefined behavior.
Continuing after an assert() could well lead to an even worse security
problem, such as a remote system compromise.

The purpose of the -DNDEBUG compile-time option is not to achieve
additional security by preventing a DoS, but rather to gain additional
*performance* by removing all the checks done via assert().  If your goal
is to favor security over performance, you never want to use -DNDEBUG.

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/



Re: Enabling compiler warning flags

2012-12-20 Thread Jeffrey Walton
Hi Russ,

On Thu, Dec 20, 2012 at 3:49 PM, Russ Allbery r...@stanford.edu wrote:
 Jeffrey Walton noloa...@gmail.com writes:

 If a project does not observe proper preprocessor macros for a
 configuration, a project could fall victim to runtime assertions and
 actually DoS itself after the assert calls abort(). The ISC's DNS server
 comes to mind (confer: there are CVE's assigned for the errant behavior,
 and its happened more than once!
 http://www.google.com/#q=isc+dns+assert+dos).

 It's very rare for it to be sane to continue after an assert().  That
 would normally mean a serious coding error on the part of the person who
 wrote the assert().  The whole point of assert() is to establish
 invariants which, if violated, would result in undefined behavior.
 Continuing after an assert() could well lead to an even worse security
 problem, such as a remote system compromise.
So, I somewhat disagree with you here. I think the differences are
philosophical because I could never find guidance from standard bodies
(such as Posix or IEEE) on rationales or goals behind NDEBUG and the
intention of the abort() behind an assert().

First, an observation: if all the use cases are accounted for (positive
and negative), code *lacking* NDEBUG will never fire an assert. The
default case of 'fail' is enough to ensure this. You would be
surprised (or maybe not) how many functions don't have the default
'fail' case. Any code that lacks NDEBUG because it depends upon
assert()'s abort() is defective by design. That includes the ISC's
DNS server and their assertion/abort scheme (critical infrastructure,
no less).

Under no circumstance is a program allowed to abort(). It processes as
expected or it fails gracefully. If it fails gracefully, it can exit()
if it likes. But it does not crash, and it does not abort().

Here's the philosophical difference (that will surely draw criticism):
asserts are a debug/diagnostic tool to aid in development. They have
no place in release code. I'll take it a step further: Posix asserts
are useless during development under a debugger because they eventually
lead to SIGTERM. A much better approach in practice is to SIGTRAP.

Code under my purview must (1) validate all parameters and (2) check
all return values. Not only must there be logic to fail the function
if anything goes wrong, *everything* must be asserted to alert of the
point of first failure. In this respect, asserts create self-debugging
code.

I found developers did not like assert in debug configurations. They
did not like asserts because of SIGTERM, which meant the developers
did not fully assert. That caused the code to be non-compliant. The
root cause was they did not like eating the dogfood of their own bugs.
So I had to rewrite the asserts to use SIGTRAP, which made them very
happy (they could make a mental note and continue on debugging). Code
improved dramatically after that - we were always aware of the first
point of failure, without needing breakpoints and detailed inspection.

 The purpose of the -DNDEBUG compile-time option is not to achieve
 additional security by preventing a DoS, but rather to gain additional
 *performance* by removing all the checks done via assert().  If your goal
 is to favor security over performance, you never want to use -DNDEBUG.
Probably another philosophical difference: (1) code must be correct.
(2) code should be secure. (3) code can be efficient. NDEBUG just
removes the debugging/diagnostic aides, so it does help with (3). (1)
is achieved because there is a separate if/then/else that handles the
proper failure of a function in a release configuration.

I know many will disagree, but I will put my money where my mouth is:
I have code in the field (secure containers and secure channels) that
has taken either no bug reports or only a handful of them (fewer than
3). They were developed with the discipline described above, and they
include a complete suite of negative, multi-threaded self tests that
ensure graceful failures. I don't care too much about the positive
test cases since I can hire a kid from a third world country for $10
or $15 US a day to copy/paste code that works under the 'good' cases.

Can anyone else claim to have a non-trivial code base that does not
suffer defects (with a reasonable but broad definition of defect)?

Anyway, sorry about the philosophicals. I know it does not lend much
to the thread.

Jeff



Re: Enabling compiler warning flags

2012-12-20 Thread Bob Friesenhahn

On Thu, 20 Dec 2012, Jeffrey Walton wrote:


The falling victim to runtime assertions is the same as falling victim to
a bug.  It is not necessarily true that removing the assertion is better
than suffering from the unhandled bug.  Once again this is a
program/situation-specific issue.

Well, I can't think of a situation where an abort or crash is
preferred over gracefully handling a failure that could be handled
with an exit. In this case, the program is already in a code path -
why not just fail the function rather than abort? But then again, I
don't think like many others (as you can probably tell). So I could be
missing something.


Assertions are intended for detecting unexpected conditions. 
External inputs to the program do not count as 'unexpected condition' 
and so one should never write an assertion for external inputs.  When 
an unexpected condition occurs, the best thing to do is to dump core 
so that it is possible to figure out how the impossible happened.


I agree with Russ Allbery that the primary reason to disable 
assertions is to avoid the performance penalty.  In properly-written 
code (such as your own) these assertions should not be firing anyway.


In my own performance-tuned software which uses many assert 
statements, I find the performance benefit from removing assertions to 
be virtually unmeasurable.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: Enabling compiler warning flags

2012-12-20 Thread Paul Eggert

On 12/20/2012 01:32 PM, Jeffrey Walton wrote:

Posix asserts
are useless during development under a debugger because they eventually
lead to SIGTERM. A much better approach in practice is to SIGTRAP.


I didn't follow all that message, but this part doesn't appear
to be correct.  In POSIX, when assert() fails it leads to SIGABRT.

More generally, I'd rather focus this mailing list's energy into
improving Autoconf rather than worrying too much about
philosophical considerations.
