Re: US-CERT Vulnerability Note VU#162289

2008-05-12 Thread Ian Lance Taylor
Robert C. Seacord [EMAIL PROTECTED] writes:

 Once a new version or patch is available that will warn users that
 this optimization is taking place, I will recommend that we change the
 work around from "Avoid newer versions of gcc" to "Avoid affected
 versions of gcc" and/or recommend that users download the patch /
 revision.

The behaviour of pointer overflow has now changed as of the following
(as yet unreleased) versions:

gcc 4.2.4
gcc 4.3.1
gcc 4.4.0

and all subsequent versions (4.2.x where x >= 4, 4.3.y where y >= 1,
4.z where z >= 4).

The optimization under discussion is for comparisons between P + V1
and P + V2, where P is the same pointer and V1 and V2 are variables of
some integer type.  The C/C++ language standards permit this to be
reduced to a comparison between V1 and V2.  However, if V1 or V2 are
such that the sum with P overflows, then the comparison of V1 and V2
will not yield the same result as actually computing P + V1 and P + V2
and comparing the sums.

The new behaviour as of the above releases is that this optimization
is performed by default at -O2 and above, including -Os.  It is not
performed by default at -O1 or (of course) -O0.  The optimization may
be enabled for -O1 with the -fstrict-overflow option.  The
optimization may be disabled for -O2 and above with the
-fno-strict-overflow option.

When the optimization is enabled, cases where it occurs may be
detected by using -Wstrict-overflow=N where N >= 3.  Note that using
this warning option is likely to yield a number of false positive
reports--cases where this or other overflow optimizations are being
applied, but where there is no actual problem.
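
For illustration (a sketch added here for clarity, not part of Ian's message;
the function and variable names are only examples), the pattern in question
looks like this:

    /* A comparison of P + V1 against P + V2 may be reduced to a comparison
       of V1 against V2.  If either addition would overflow, the reduced
       form need not agree with actually computing the sums. */
    int in_order(char *p, int v1, int v2)
    {
        return p + v1 < p + v2;   /* may be compiled as: return v1 < v2; */
    }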

Please see the gcc manual for more information about these options.

Ian


Re: US-CERT Vulnerability Note VU#162289

2008-05-12 Thread Robert C. Seacord

Ian,

Sounds great, thanks,  I'll work with Chad to get the vul note updated 
accordingly.


rCs



Re: US-CERT Vulnerability Note VU#162289

2008-04-26 Thread Robert Dewar

Daniel Jacobowitz wrote:

On Fri, Apr 25, 2008 at 11:45:25AM -0400, Paul Koning wrote:

 Robert To me, the whole notion of this vulnerability note is flawed
 Robert in that respect. You can write a lengthy and useful book on
 Robert pitfalls in C that must be avoided, but I see no reason to
 Robert turn such a book into a cert advisory, let alone pick out a
 Robert single arbitrary example on a particular compiler!

I think that comment is absolutely correct.


The R in CERT is Response (at least it used to be; I can't find an
expansion on their web site...).  They're responding to a problem that
was reported to them, and alerting others to the problem.  We can
argue about the details, but not about the need to respond.


But surely they are not in the general business of responding to
comments of the form:

I have an incorrect C program that is undefined by the standard,
and it did not behave as I expected it to!

If so, I can imagine lots more comments!

They can respond, but the response should be This program
is incorrect C, and its semantics are not defined by C, security
critical programs should always avoid use of such constructs.

End of (canned) response

Somehow implying that the compiler is at fault for not providing
expected semantics for programs where the programmer has no right
to expect anything is technically unsound and confusing.

Yes, it is often the case that incorrect programs will do what
is expected (whatever that may be) one day, and not the next day.
That is what undefined is about!






RE: US-CERT Vulnerability Note VU#162289

2008-04-25 Thread Mark Syddall
Robert C. Seacord wrote on :

 I am getting tired with the personal/organizational attacks.
 If you expect a response, please keep your comments professional.

  Will you address the methodological flaws in your study, or do you consider
them to be personal/organizational attacks?


cheers,
  DaveK
-- 
Can't think of a witty .sigline today




RE: US-CERT Vulnerability Note VU#162289

2008-04-25 Thread Dave Korn
Mark Syddall wrote:
^

  Should read Dave Korn.  Apologies all recipients, we had mailserver
trouble at this end and somehow some of the resends got the admin's From: line
on them by mistake.


 Robert C. Seacord wrote on :
 
 I am getting tired with the personal/organizational attacks.
 If you expect a response, please keep your comments professional.
 
   Will you address the methodological flaws in your study, or do you
 consider them to be personal/organizational attacks?



 
cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: US-CERT Vulnerability Note VU#162289

2008-04-25 Thread Robert Dewar

One thing to realize in this discussion is that it is not
possible in general to warn when a programmer is depending
on undefined behavior, since by definition you cannot in
general guess what misunderstandings the programmer has
about C, and hence what behavior is expected.

There are some cases where you can guess well enough that
you can provide warnings that won't have an excessive
number of annoying false positives, and warnings in
such cases are appropriate.

But some contributions to this thread have seemed to
come near to a position of requiring warnings whenever
the compiler does something that might violate expectations
of a programmer writing undefined C code, and as a general
principle this is fundamentally impossible.



Re: US-CERT Vulnerability Note VU#162289

2008-04-25 Thread Robert Dewar

Another general point is that conceptually this is not
an optimization issue at all.

The programmer writes code that is undefined according
to the standard.

Whatever expectation the programmer has for this code
is based on either a fundamental misunderstanding of
the semantics of C, or there is a simple bug.

When you write such code, the compiler may do anything,
whether or not optimization is turned on. It's not a
security risk that the compiler fails to provide some
kind of behavior that the standard does not define.

The security risk is in writing undefined code; such
code has a (potentially serious) bug. Sure, this is
indeed a case where switching from one compiler to
another, one architecture to another, one version
of a particular compiler to another, one set of
compilation switches to another etc can change the
behavior. It's even possible for such code to behave
differently on the same day with none of these
conditions changed, depending e.g. on what was
run before or is running at the same time.

If you write in a language which is not 100% safe
in this regard, you do have to worry about safety
considerations caused by undefined code, and depend
on proofs, code reviews, tools etc to ensure that
your code is free of such defects. That's worrisome,
but any bugs in supposedly secure/safe code are
worrisome, and there is no real reason to single
out the particular class of bugs corresponding to
use of undefined semantics in C.

To me, the whole notion of this vulnerability note
is flawed in that respect. You can write a lengthy
and useful book on pitfalls in C that must be
avoided, but I see no reason to turn such a book
into a cert advisory, let alone pick out a single
arbitrary example on a particular compiler!


Re: US-CERT Vulnerability Note VU#162289

2008-04-25 Thread Robert Dewar

Paul Koning wrote:


That said, it certainly is helpful if the compiler can detect some
undefined actions and warn about them.  But that doesn't create a duty
to warn about all of them.


If it were reasonable to require a compiler to generate a warning
for a particular case, the standard would have made it an error.
The whole point in allowing undefined behavior is that in certain
cases, it is too onerous for a compiler to be required to detect
the undefined behavior, so it is not required to do so.


Re: US-CERT Vulnerability Note VU#162289

2008-04-25 Thread Joel Sherrill

Robert Dewar wrote:

Paul Koning wrote:

  

That said, it certainly is helpful if the compiler can detect some
undefined actions and warn about them.  But that doesn't create a duty
to warn about all of them.



If it were reasonable to require a compiler to generate a warning
for a particular case, the standard would have made it an error.
The whole point in allowing undefined behavior is that in certain
cases, it is too onerous for a compiler to be required to detect
the undefined behavior, so it is not required to do so.
  

I recall something in the Ada LRM that a conforming
Ada program did not depend on undefined or implementation
dependent behavior.  The example I remember being
used to explain it to me was a program depending upon
the precise order of tasks executing.  That can obviously
vary based upon interrupts, CPU speed, time slice
quantum and a number of tasking implementation decisions.

When you talk undefined, the program is questionable
at best.  I like the Ada LRM because it tries to be very
clean about this in a way that a programmer can understand
and try to do the right thing.

--
Joel Sherrill, Ph.D.              Director of Research & Development
[EMAIL PROTECTED]                 On-Line Applications Research
Ask me about RTEMS: a free RTOS   Huntsville AL 35805
  Support Available (256) 722-9985




RE: US-CERT Vulnerability Note VU#162289

2008-04-25 Thread Dave Korn
Robert Dewar wrote on :

 One thing to realize in this discussion is that it is not
 possible in general to warn when a programmer is depending
 on undefined behavior, since by definition you cannot in
 general guess what misunderstandings the programmer has
 about C, and hence what behavior is expected.
 
 There are some cases where you can guess well enough that
 you can provide warnings that won't have an excessive
 number of annoying false positives, and warnings in
 such cases are appropriate.
 
 But some contributions to this thread have seemed to
 come near to a position of requiring warnings whenever
 the compiler does something that might violate expectations
 of a programmer writing undefined C code, and as a general
 principle this is fundamentally impossible.


  And some have been pleas for the compiler to DWIM.  Which, while it would be
nice, is not just impossible, but conceptually flawed to the point of
meaninglessness because it begs the question.  This is because whatever the
compiler assumes you /actually/ meant when it's trying to DWYM would have to
be based on a set of rules/heuristics/inferences that would be just as liable
as the current optimisation vs. undefined behaviour rules to misunderstandings
that end up introducing security violations - but with the added disadvantage
of not being in accord with a well-defined standard of behaviour, but being
some ad-hoc and unpredictable set of transformations that you can't even know
to avoid.

  In short, any attempt by the compiler to DWIM would just move the same class
of problem around, but make it harder to detect; a lose-lose situation for
everyone.  At least with a clear standard, it's possible to make guarantees
about when and whether a program is correct or not.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: US-CERT Vulnerability Note VU#162289

2008-04-25 Thread Paul Koning
 Robert == Robert Dewar [EMAIL PROTECTED] writes:

 Robert Another general point is that conceptually this is not an
 Robert optimization issue at all.

 Robert The programmer writes code that is undefined according to the
 Robert standard.  ...

 Robert To me, the whole notion of this vulnerability note is flawed
 Robert in that respect. You can write a lengthy and useful book on
 Robert pitfalls in C that must be avoided, but I see no reason to
 Robert turn such a book into a cert advisory, let alone pick out a
 Robert single arbitrary example on a particular compiler!

I think that comment is absolutely correct.

I would add one point: "undefined" (or the equivalent) is a term that
appears in many language standards, not just in the C standard.  For
example, Algol 68 very precisely defined "undefined" (with essentially
the meaning we have discussed).

Given Robert's comment it seems to me that the right approach is to
withdraw the proposed vulnerability note entirely.

 paul



Re: US-CERT Vulnerability Note VU#162289

2008-04-25 Thread Daniel Jacobowitz
On Fri, Apr 25, 2008 at 11:45:25AM -0400, Paul Koning wrote:
  Robert To me, the whole notion of this vulnerability note is flawed
  Robert in that respect. You can write a lengthy and useful book on
  Robert pitfalls in C that must be avoided, but I see no reason to
  Robert turn such a book into a cert advisory, let alone pick out a
  Robert single arbitrary example on a particular compiler!
 
 I think that comment is absolutely correct.

The R in CERT is Response (at least it used to be; I can't find an
expansion on their web site...).  They're responding to a problem that
was reported to them, and alerting others to the problem.  We can
argue about the details, but not about the need to respond.

-- 
Daniel Jacobowitz
CodeSourcery


Re: US-CERT Vulnerability Note VU#162289

2008-04-25 Thread Mark Mitchell

Daniel Jacobowitz wrote:


The R in CERT is Response (at least it used to be; I can't find an
expansion on their web site...).  They're responding to a problem that
was reported to them, and alerting others to the problem.  We can
argue about the details, but not about the need to respond.


I agree.

I am not happy about some of the ways in which CERT has singled out GCC 
in this situation.  That said, warning people that applications that use 
a particular style of security check are broken is a perfectly 
reasonable thing to do, especially given that the broken security check 
is known to be present in real-world applications.  In fact, warning 
people that such code worked with GCC X, but not GCC X+1, is useful; 
some people may not be able to audit the code, but may be able to 
control whether or not they upgrade to a new compiler, and this is 
useful data for those people.  But, it should be made clear that 
switching from GCC to icc (or whatever) is not a solution, since many of 
those compilers also do the optimization.  (Never mind the risks that 
making a major change to your build environment entails...)


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


RE: US-CERT Vulnerability Note VU#162289

2008-04-25 Thread Dave Korn
Daniel Jacobowitz wrote on :

 On Fri, Apr 25, 2008 at 11:45:25AM -0400, Paul Koning wrote:
  Robert To me, the whole notion of this vulnerability note is flawed
  Robert in that respect. You can write a lengthy and useful book on
  Robert pitfalls in C that must be avoided, but I see no reason to
  Robert turn such a book into a cert advisory, let alone pick out a
  Robert single arbitrary example on a particular compiler!
 
 I think that comment is absolutely correct.
 
 The R in CERT is Response (at least it used to be; I can't find an
 expansion on their web site...).  They're responding to a problem that
 was reported to them, and alerting others to the problem.  We can
 argue about the details, but not about the need to respond.

  But the E is Emergency.  This is not an emergency and does not demand an
*urgent* (and hence rushed and methodologically flawed) response; this is just
one more facet of the problems inherent in the design of the C language that
have been going on since /forever/.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: US-CERT Vulnerability Note VU#162289

2008-04-25 Thread Florian Weimer
* Robert Dewar:

 To me, the whole notion of this vulnerability note
 is flawed in that respect. You can write a lengthy
 and useful book on pitfalls in C that must be
 avoided, but I see no reason to turn such a book
 into a cert advisory,

I think it's useful to point out in security advisories widespread
coding mistakes which are particularly security-related.  Perhaps I'm
biased because I did that for incorrect integer overflow checks in C
code back in 2002.  My motivation back then was that advisories were
published about common configuration mistakes, even though the
underlying tool was working as documented--and misusing a compiler seems
to fall in the same category.


Re: US-CERT Vulnerability Note VU#162289

2008-04-24 Thread Florian Weimer
* Brad Roberts:

 Additionally, the linked to notes for GCC are reflective of the original 
 inaccuracies: 

 http://www.kb.cert.org/vuls/id/CRDY-7DWKWM

 Vendor Statement
 No statement is currently available from the vendor regarding this 
 vulnerability.

Chad, it would be helpful if you could tell the GCC folks what steps
they need to take to publish their view under Vendor Statement
(presumably in a link to the GCC web site).

Note to the GCC folks: I've had trouble getting vendor statements
identified as such on the CERT web site before, so if there are some
difficulties, you shouldn't attribute this to the current controversy.


Re: US-CERT Vulnerability Note VU#162289

2008-04-24 Thread Neil Booth
David Miller wrote:-

 From: Joe Buck [EMAIL PROTECTED]
 Date: Wed, 23 Apr 2008 08:24:44 -0700
 
  If CERT is to maintain its reputation, it needs to do better.  The warning
  is misdirected in any case; given the very large number of compilers that
  these coding practices cause trouble for, you need to focus on the bad
  coding practices, not on unfair demonization of new GCC releases.
 
 In my opinion CERT's advisory has been nothing but an unfair FUD
 campaign on compilers, and GCC specifically, and has seriously
 devalued CERT's advisories, in general, which were already of low
 value to begin with.
 
 It looks similar to a news article run by a newspaper that is losing
 money and has no real news to write about, but yet they have to write
 about something.
 
The worst part of this fiasco is that GCC's reputation has been
 unfairly harmed in one way or another, and there is nothing CERT can
 do to rectify the damage they've caused.

I'm appalled that the original description hasn't been corrected.  The
text reads:

  Some C compilers optimize away pointer arithmetic overflow tests that
  depend on undefined behavior without providing a diagnostic (a warning).
  Applications containing these tests may be vulnerable to buffer
  overflows if compiled with these compilers.

  I. Description
  In the C language, given the types:

char *buf;
int len;

  some C compilers will assume that buf+len >= buf. 

which is an entirely bogus description of the problem.  That this
incorrect description of the state of affairs has been left to
stand only shows that CERT, and those responsible for this advisory,
have completely failed to understand what the real issue is here.
Further, the fact that the advisory stands in this erroneous
form, despite it having been pointed out to them many times over
the past weeks on this forum at least, seriously undermines their
credibility in the eyes of any informed observer.

At a minimum the wording should be something more like:

  In the C language, given an object OBJ and a pointer BUF into OBJ,

char *buf;
int len;

  the C standard requires that the result of

buf + len

  must point within, or one byte beyond, the object BUF.  A program
  that does not satisfy this constraint is erroneous, and many
  compilers take advantage of this constraint to optimize code more
  effectively.  Unfortunately much existing code is not well written
  and sometimes erroneous in this regard, and hence may not behave
  as originally intended when compiled with optimizations enabled.

Neil.


Re: US-CERT Vulnerability Note VU#162289

2008-04-24 Thread Robert C. Seacord

Neil,

I'm not sure I understand what you mean by the following:

A program that does not satisfy this constraint is erroneous, and many compilers take advantage of this constraint to optimize code more effectively. 

Just because a program contains undefined behavior does not mean that it is 
erroneous.  It simply gives the compiler latitude in how to handle the 
undefined behavior while still conforming.

One possibility is that GCC could handle these constructs in a consistent
manner.  That is, GCC clearly implements modwrap semantics.  Given this, I
think the behavior exhibited in this case is inconsistent.  If, on the other
hand, GCC implemented saturation semantics, it would make perfect sense to
optimize out this check.
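
For illustration only (an editorial sketch, not Robert's code), the two
kinds of semantics being contrasted might look like this when the pointer is
viewed as an unsigned machine word; neither behavior is required by ISO C,
which simply leaves the out-of-object addition undefined:

    #include <stdint.h>

    /* "modwrap": the addition wraps around, so the result can compare less
       than the original pointer and a "buf + len < buf" check can fire. */
    uintptr_t add_modwrap(uintptr_t buf, uintptr_t len)
    {
        return buf + len;                       /* unsigned wrap-around */
    }

    /* "saturation": the addition clamps at the top of the address space, so
       the result is never less than the original pointer and the check can
       never fire -- which would be consistent with removing it. */
    uintptr_t add_saturating(uintptr_t buf, uintptr_t len)
    {
        uintptr_t sum = buf + len;
        return sum < buf ? UINTPTR_MAX : sum;
    }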


I'll review your comments below and consider additional changes to the text of 
the vul note.  What we would really like to see is a diagnostic for this 
condition, ideally enabled by -Wall.  Once a version of the compiler that 
provides such a diagnostic is available, I will recommend we change the US-CERT 
Addendum to recommend that developers upgrade to that version of the compiler, 
and compile using -Wall or whatever the appropriate flag turns out to be.

I am getting tired with the personal/organizational attacks.  If you expect a 
response, please keep your comments professional.

rCs





Re: US-CERT Vulnerability Note VU#162289

2008-04-24 Thread Andreas Schwab
Neil Booth [EMAIL PROTECTED] writes:

 At a minimum the wording should be something more like:

   In the C language, given an object OBJ and a pointer BUF into OBJ,

 char *buf;
 int len;

   the C standard requires that the result of

   buf + len

   must point within, or one byte beyond, the object BUF.

ITYM OBJ here-^^^

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
And now for something completely different.


Re: US-CERT Vulnerability Note VU#162289

2008-04-24 Thread Andrew Haley
Robert C. Seacord wrote:
 Neil,
 
 I'm not sure I understand what you mean by the following:
 
 A program that does not satisfy this constraint is erroneous, and many
 compilers take advantage of this constraint to optimize code more
 effectively. 
 Just because a program contains undefined behavior does not mean that
 it is erroneous.

This is the crux of our disagreement.  To me, and I imagine almost
everyone else on the gcc list, any program that contains undefined 
behaviour is *by definition* erroneous.

It is erroneous because there is no way to determine what the program
should do.  The program is, quite literally, meaningless.

Certainly, a compiler writer can extend the language to give a compiler-
specific definition to that behaviour, in which case it's no longer
undefined.  But that is not true in this particular case.

 One possibility is that GCC could handle these constructs in a
 consistent manner.  That is, GCC clearly implements modwrap semantics.
 Given this, I think the behavior exhibited in this case is inconsistent.
 If, on the other hand, GCC implemented saturation semantics, it
 would make perfect sense to optimize out this check.

gcc implements ISO C semantics, with some extensions.  We could
extend the language in the way you suggest, but it would be very
difficult formally to specify such an extension.  I don't think
it's something we should do.

Andrew.


Re: US-CERT Vulnerability Note VU#162289

2008-04-24 Thread Mark Mitchell

Robert C. Seacord wrote:

The following article encapsulates my understanding of undefined 
behavior based on these discussions:


MSC15-A. Do not depend on undefined behavior 
https://www.securecoding.cert.org/confluence/display/seccode/MSC15-A.+Do+not+depend+on+undefined+behavior


I think that looks like a fine article, at first glance.  But, of 
course, the behavior is undefined, so, as your article says, people 
shouldn't use it.


That said, the GCC developers have already implemented a warning 
option to warn about the case where optimizations are made on this 
basis.  I do not know if it is in -Wall or not.  I doubt it, because 
it would probably create far too many false positives.


I believe that if you use the option -Wstrict-overflow=5 the development 
version of the compiler will warn about this test case.  This option was 
implemented in response to your report, and will be in GCC 4.4.  Users 
who want to can of course download the development source code for GCC 
and build it today, even before it is released, to check their code.  (I 
wouldn't recommend using the development version to build things, but 
people could use it to get the warning, if they want to do that.)
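
A sketch of how that audit might look (editorial; the file name is
hypothetical, the flag and version are the ones Mark mentions):

    /* The check from the vulnerability note.  Per Mark's message, compiling
       this with a GCC 4.4 development snapshot, e.g.
           gcc -O2 -Wstrict-overflow=5 -c check.c
       should produce a warning that the comparison is being folded away. */
    int f(char *buf, int len) {
        len = 1 << 30;
        if (buf + len < buf)
            return 1;
        return 0;
    }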


Declaring it vulnerable while not declaring those others to be 
vulnerable is unfair.  


We are still evaluating other compilers.  If we find that they exhibit 
the same behavior, we will list them as vulnerable as well.


When can we expect that you will have completed that investigation?  I 
can appreciate the desire for an independent investigation, but given 
the data we have already provided, it should be a pretty simple matter 
to verify this.


Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: US-CERT Vulnerability Note VU#162289

2008-04-24 Thread Paul Schlie
Mark Mitchell wrote:

 ...

 And:

 Addition or subtraction of a pointer into, or just beyond, an array object
 and an integer type produces a result that does not point into, or just
 beyond, the same array object (6.5.6).

 is undefined behavior.

So then unless the compiler can determine that all pointers passed to foo,
for example below, represent a pointer to some Nth element of some array
from which not more than N will be effectively subtracted, the optimization
(if it can be called that) can not be performed, as a sum of a pointer and
an arbitrary integer may validly be less than said pointer, as follows:

foo(char* p){
    if (p+(char*)-1 < p)
        something; }

char a[] = {0,1,2,3};

foo(a+2);

I believe.




Re: US-CERT Vulnerability Note VU#162289

2008-04-24 Thread Paul Koning
 Paul == Paul Schlie [EMAIL PROTECTED] writes:

 Paul Mark Mitchell wrote:
  ...
  
  And:
  
  Addition or subtraction of a pointer into, or just beyond, an
  array object and an integer type produces a result that does not
  point into, or just beyond, the same array object (6.5.6).
  
  is undefined behavior.

 Paul So then unless the compiler can determine that all pointers
 Paul passed to foo, for example below, represent a pointer to some
 Paul Nth element of some array from which not more than N will be
 Paul effectively subtracted, the optimization (if it can be called
 Paul that) can not be performed, as a sum of a pointer and an
 Paul arbitrary integer may validly be less than said pointer, ...

I think you're misinterpreting what undefined means.

The C standard doesn't constrain what a compiler does for undefined
programs -- it only imposes requirements for programs that do not do
undefined things.

So an optimization that does what I mean only for defined cases,
but does something surprising for undefined cases, is a valid
optimization.  The compiler is under NO obligation to check if you've
done something undefined -- instead, it is allowed to assume that you
are NOT doing undefined things.

So, for example: pointer arithmetic in programs that obey the C
standard cannot cause overflow, or arithmetic wrap, or things like
that.  Only undefined pointer arithmetic can do that.  Therefore the
compiler is allowed to make optimizations that do what I mean only
for the no-wrap or no-overflow cases.

As others have pointed out, programs that contain undefined actions
are broken, and any security issues caused are the fault of those
programs.  The compiler is NOT at fault.

That said, it certainly is helpful if the compiler can detect some
undefined actions and warn about them.  But that doesn't create a duty
to warn about all of them.

   paul


Re: US-CERT Vulnerability Note VU#162289

2008-04-23 Thread Chad Dougherty

Mark Mitchell wrote:
However, I'm surprised that only GCC is listed as vulnerable at the 
bottom of the page.  We've provided information about a lot of other 
compilers that do the same optimization.  Why is the status for 
compilers from Microsoft, Intel, IBM, etc. listed as Unknown instead 
of Vulnerable?




The vendors listed in that section are the ones we've contacted asking 
for a statement.  The note gets updated as we get information from them. 
 We won't include information about other vendors without either a 
statement from them or independent verification of their affectedness.


-Chad


Re: US-CERT Vulnerability Note VU#162289

2008-04-23 Thread Chad Dougherty

Brad Roberts wrote:
Additionally, the linked to notes for GCC are reflective of the original 
inaccuracies: 


http://www.kb.cert.org/vuls/id/CRDY-7DWKWM

Vendor Statement
No statement is currently available from the vendor regarding this 
vulnerability.


US-CERT Addendum
Vendors and developers using the GNU C compiler should consider 
downgrading their version of gcc or sticking with versions of the gcc 
compiler (before version 4.1) that do not perform the offending 
optimization. In the case of gcc, it should be emphasized that this is a 
change of behavior in the later versions of the compiler.




Why is this inaccurate?  The objections to the original version of the 
note on this list were that it appeared to advocate dumping gcc in favor 
of another compiler that may do the same optimization.  This addendum 
merely suggests considering using an older version of gcc.


-Chad


Re: US-CERT Vulnerability Note VU#162289

2008-04-23 Thread David Miller
From: Chad Dougherty [EMAIL PROTECTED]
Date: Wed, 23 Apr 2008 07:52:26 -0400

   We won't include information about other vendors without either a 
 statement from them or independent verification of their affectedness.

How, may I ask, did that policy apply to the GCC vendor
when this all got started?


Re: US-CERT Vulnerability Note VU#162289

2008-04-23 Thread Chad Dougherty

David Miller wrote:

How, may I ask, did that policy apply to the GCC vendor
when this all got started?


Our own testing of multiple versions of gcc on multiple platforms and 
subsequent confirmation by Mark that it was intentional, desired 
behavior.  This all occurred prior to even the initial version of the note.


-Chad


Re: US-CERT Vulnerability Note VU#162289

2008-04-23 Thread David Miller
From: Chad Dougherty [EMAIL PROTECTED]
Date: Wed, 23 Apr 2008 08:37:11 -0400

 David Miller wrote:
  How, may I ask, did that policy apply to the GCC vendor
  when this all got started?
 
 Our own testing of multiple versions of gcc on multiple platforms and 
 subsequent confirmation by Mark that it was intentional, desired 
 behavior.  This all occurred prior to even the initial version of the note.

CERT is asking these vendors for approval for the text they will add
mentioning anything about their product.  That's the bit I'm talking
about.

They are getting protection and consideration that was not really
afforded to GCC.

CERT treated GCC differently.


Re: US-CERT Vulnerability Note VU#162289

2008-04-23 Thread Chad Dougherty

David Miller wrote:

CERT is asking these vendors for approval for the text they will add
mentioning anything about their product.  That's the bit I'm talking
about.

They are getting protection and consideration that was not really
afforded to GCC.

CERT treated GCC differently.


This is not true.  The Statement section of the vendor status is for 
official, usually verbatim, statements from the vendor.  The Addendum 
section is reserved for our own comments, even those that may contradict 
the vendor's response if we have reason to do so.


You'll note that the suggestion about _considering_ using older versions 
of gcc appears in the addendum.


-Chad


Re: US-CERT Vulnerability Note VU#162289

2008-04-23 Thread Richard Guenther
On Wed, Apr 23, 2008 at 1:59 PM, Chad Dougherty [EMAIL PROTECTED] wrote:
 Brad Roberts wrote:

  Additionally, the linked to notes for GCC are reflective of the original
 inaccuracies:
  http://www.kb.cert.org/vuls/id/CRDY-7DWKWM
 
  Vendor Statement
  No statement is currently available from the vendor regarding this
 vulnerability.
 
  US-CERT Addendum
  Vendors and developers using the GNU C compiler should consider
 downgrading their version of gcc or sticking with versions of the gcc
 compiler (before version 4.1) that do not perform the offending
 optimization. In the case of gcc, it should be emphasized that this is a
 change of behavior in the later versions of the compiler.
 
 

  Why is this inaccurate?  The objections to the original version of the note
 on this list were that it appeared to advocate dumping gcc in favor of
 another compiler that may do the same optimization.  This addendum merely
 suggests considering using an older version of gcc.

Which is in general bad advice, as older gcc versions may have wrong-code
bugs that are serious and have security implications if such bugs apply to
your code.

Richard.


Re: US-CERT Vulnerability Note VU#162289

2008-04-23 Thread Joe Buck
On Wed, Apr 23, 2008 at 09:06:56AM -0400, Chad Dougherty wrote:
 David Miller wrote:
 CERT is asking these vendors for approval for the text they will add
 mentioning anything about their product.  That's the bit I'm talking
 about.
 
 They are getting protection and consideration that was not really
 afforded to GCC.
 
 CERT treated GCC differently.
 
 This is not true.  The Statement section of the vendor status is for 
 official, usually verbatim, statements from the vendor.  The Addendum 
 section is reserved for our own comments, even those that may contradict 
 the vendor's response if we have reason to do so.

I disagree; it is true.  You did not ask for approval before adding GCC to
vulnerable.  We have demonstrated to you by independent testing that
other compilers are also vulnerable, and have provided the steps that you
can use to confirm this.  But you are dragging your feet on including
other compilers on your Vulnerable list.  Meanwhile, you still have an
unfairly slanted advisory.

If CERT is to maintain its reputation, it needs to do better.  The warning
is misdirected in any case; given the very large number of compilers that
these coding practices cause trouble for, you need to focus on the bad
coding practices, not on unfair demonization of new GCC releases.


Re: US-CERT Vulnerability Note VU#162289

2008-04-23 Thread Daniel Berlin
On Wed, Apr 23, 2008 at 9:55 AM, Richard Guenther
[EMAIL PROTECTED] wrote:

 On Wed, Apr 23, 2008 at 1:59 PM, Chad Dougherty [EMAIL PROTECTED] wrote:
   Brad Roberts wrote:

  Which is in general bad advice, as older gcc versions may have wrong-code
  bugs that are serious and have security implications if such bugs apply to
  your code.


Don't worry, they can issue advisories about those too


Re: US-CERT Vulnerability Note VU#162289

2008-04-23 Thread Joe Buck
On Wed, Apr 23, 2008 at 12:25:06PM -0400, Daniel Berlin wrote:
 On Wed, Apr 23, 2008 at 9:55 AM, Richard Guenther
 [EMAIL PROTECTED] wrote:
 
  On Wed, Apr 23, 2008 at 1:59 PM, Chad Dougherty [EMAIL PROTECTED] wrote:
Brad Roberts wrote:
 
   Which is in general bad advice, as older gcc versions may have wrong-code
   bugs that are serious and have security implications if such bugs apply to
   your code.
 
 
 Don't worry, they can issue advisories about those too

But not about any other compiler, at least not without the vendor's
approval.


RE: US-CERT Vulnerability Note VU#162289

2008-04-23 Thread Dave Korn
Mark Mitchell wrote on :

 Chad Dougherty wrote:
 
 The vulnerability note has been significantly reworked to focus on the
 issue of undefined behavior handling in the compiler and the fact that
 conforming implementations are not required to warn of this condition.
 I've tried to incorporate many of the valid concerns that were raised on
 this list in response to the original vulnerability note.
 
 Thank you for making the update; this is a big improvement.
 
 However, I'm surprised that only GCC is listed as vulnerable at the
 bottom of the page.  We've provided information about a lot of other
 compilers that do the same optimization.  Why is the status for
 compilers from Microsoft, Intel, IBM, etc. listed as Unknown instead
 of Vulnerable?


  I would like to see CERT publish a consistent and reliable methodology for
their assessment of this vulnerability in various compilers, and the full
results obtained from running this test methodology against the various listed
compilers, in enough detail that anyone can reproduce them.  Consistency,
commensurability and reproducibility are at the heart of the scientific
method.

  rCs has been trying very hard to address the issues raised with some code
examples, but what we've ended up seeing is differing and incommensurable code
examples that have been evaluated by differing and incommensurable standards
of validation - sometimes evaluating the generated assembly, sometimes just
looking at the output of the program (which is an indirect inference at best),
even in one notable case just throwing up his hands because he found the
generated assembly harder to examine.  I don't want to be mean about it, but
someone who is not utterly fluent in inspecting compiler-generated assembly
code should probably not be tasked with performing this analysis.  It does not
inspire any confidence in whatever other results you have obtained without
showing us your working.

  Some cases appear to only be demonstrable in an artificial testcase where
the integer is a known constant at compile time, and vary with the value of
that constant.

  These results are then presented in a table which simply lists a single
status for each vendor, when most vendors ship multiple products which may
differ in their behaviour.

  This is not just trivium or pedantry.  The procedures you have followed are
seriously flawed and unscientific, patchy, ad-hoc and inconsistent.  The data
is fudged.  The conclusions cannot be relied upon.

  The flaws in your methodology are so severe, and yet CERT's insistence on
trying to draw invalid conclusions from these tests is so virulent, that your
actions seem inexplicably biased.  The way in which you are operating a
blatant double standard, where you present a behaviour in gcc as a serious
flaw where you excuse, overlook or ignore the exact same behaviour in every
other vendor's compiler, gives credence to what would otherwise be a ludicrous
conspiracy theory: that CERT is acting under politically-inspired direction to
attack Open Source software by spreading FUD about it being insecure.  Not
being paranoid, I can see the alternative (cock-up not conspiracy)
explanation: that this VU was written in a hurry based only on the
notification you received relating to gcc, and was not planned properly, nor
researched properly, nor written accurately, in the course of its rushed
creation.  However, the longer and more insistently you attempt to defend the
indefensible process through which you have arrived at this invalid outcome,
the more you undermine that line of defence and make the
politically-motivated-attack-on-OSS theory plausible, and you are after all a
government agency, and we do after all have a documented history in recent
years of the administration interfering with the publications of its agencies
and demanding the suppression of scientifically accurate facts and their
replacement by politically inspired factually incorrect information.  There is
no reason that we know of to suppose CERT would be any more or less immune to
this sort of pressure than, for example, federally-funded scientific bodies
studying climate change.

  But, damaging though that is to CERT as an organisation, it's still beside
the point, because regardless of your motivation, the fact is that your
procedures have been unscientific and do not support the conclusions you have
attempted to draw from them.




  This VU falls massively far below the standards we have come to expect from
CERT, and should be withdrawn and reworked from scratch, before it causes
further damage to your reputation for professionalism and independence.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



RE: US-CERT Vulnerability Note VU#162289

2008-04-23 Thread Gerald.Williams
Dave Korn wrote:
[ ... lots of exciting commentary on scientific method/etc.
  that I leave out for the protection of the innocent ... ]

Huzzah! Way to stick it to the man! :-) :-)

   This VU falls massively far below the standards we have come to expect
 from CERT, and should be withdrawn and reworked from scratch

Good idea, although they already did rework it, and I doubt
they're going to withdraw it when it really is a potential
vulnerability that was apparently detected in the wild.

Looking through the new version, it doesn't seem all that
bad to me. The only problem is the GCC note, which has an
untempered recommendation to consider old versions. That
warning is still misguided, but you're not going to get
very far trying to say it is entirely wrong. There *may
be* someone that could be negatively affected by moving
to a new version, and RCS has implied that they can name
a case where this is true. Maybe we can convince them to
temper the warning, I guess. [I mean really, changing the
compiler in any way could trigger vulnerabilities if you
have no idea what you're shoving into it. If you cannot
depend at all on the quality of your code, test it well
and never recompile it. But that path can easily devolve
into a religious debate.]

Meanwhile, there is an opportunity for a vendor response
that will be added verbatim. Is anyone working on one for
GCC? I think that would go a long way.

gsw


Re: US-CERT Vulnerability Note VU#162289

2008-04-23 Thread David Miller
From: Joe Buck [EMAIL PROTECTED]
Date: Wed, 23 Apr 2008 08:24:44 -0700

 If CERT is to maintain its reputation, it needs to do better.  The warning
 is misdirected in any case; given the very large number of compilers that
 these coding practices cause trouble for, you need to focus on the bad
 coding practices, not on unfair demonization of new GCC releases.

In my opinion CERT's advisory has been nothing but an unfair FUD
campaign on compilers, and GCC specifically, and has seriously
devalued CERT's advisories, in general, which were already of low
value to begin with.

It looks similar to a news article run by a newspaper that is losing
money and has no real news to write about, but yet they have to write
about something.

The worst part of this fiasco is that GCC's reputation has been
unfairly harmed in one way or another, and there is nothing CERT can
do to rectify the damage they've caused.


Re: US-CERT Vulnerability Note VU#162289

2008-04-22 Thread Chad Dougherty

Joe Buck wrote:

Thanks.  I hope that you will correct the advisory promptly to avoid any
implication that one should switch from GCC to a different compiler based
on this issue, since we've already established that most of GCC's
competitors perform similar optimizations under some circumstances (even if
the particular example that appears in the CERT report is not affected,
other, similar examples will be, particularly if they appear in a loop).

Both CERT and GCC have their reputations to consider here, and I think
that this advisory has damaged the reputations of *both*.



The vulnerability note has been significantly reworked to focus on the 
issue of undefined behavior handling in the compiler and the fact that 
conforming implementations are not required to warn of this condition. 
I've tried to incorporate many of the valid concerns that were raised on 
this list in response to the original vulnerability note.



The advisory should emphasize the solution of auditing buffer overflow
checks to make sure that they are correct C, and should help people
write such checks correctly.


The vulnerability note itself essentially punts this issue to the 
corresponding documents in our Secure Coding standard.


-Chad


Re: US-CERT Vulnerability Note VU#162289

2008-04-22 Thread Mark Mitchell

Chad Dougherty wrote:

The vulnerability note has been significantly reworked to focus on the 
issue of undefined behavior handling in the compiler and the fact that 
conforming implementations are not required to warn of this condition. 
I've tried to incorporate many of the valid concerns that were raised on 
this list in response to the original vulnerability note.


Thank you for making the update; this is a big improvement.

However, I'm surprised that only GCC is listed as vulnerable at the 
bottom of the page.  We've provided information about a lot of other 
compilers that do the same optimization.  Why is the status for 
compilers from Microsoft, Intel, IBM, etc. listed as Unknown instead 
of Vulnerable?


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: US-CERT Vulnerability Note VU#162289

2008-04-22 Thread Brad Roberts
On Tue, 22 Apr 2008, Mark Mitchell wrote:

 Chad Dougherty wrote:
 
  The vulnerability note has been significantly reworked to focus on the issue
  of undefined behavior handling in the compiler and the fact that conforming
  implementations are not required to warn of this condition. I've tried to
  incorporate many of the valid concerns that were raised on this list in
  response to the original vulnerability note.
 
 Thank you for making the update; this is a big improvement.
 
 However, I'm surprised that only GCC is listed as vulnerable at the bottom
 of the page.  We've provided information about a lot of other compilers that
 do the same optimization.  Why is the status for compilers from Microsoft,
 Intel, IBM, etc. listed as Unknown instead of Vulnerable?
 
 -- 
 Mark Mitchell
 CodeSourcery
 [EMAIL PROTECTED]
 (650) 331-3385 x713

Additionally, the linked to notes for GCC are reflective of the original 
inaccuracies: 

http://www.kb.cert.org/vuls/id/CRDY-7DWKWM

Vendor Statement
No statement is currently available from the vendor regarding this 
vulnerability.

US-CERT Addendum
Vendors and developers using the GNU C compiler should consider 
downgrading their version of gcc or sticking with versions of the gcc 
compiler (before version 4.1) that do not perform the offending 
optimization. In the case of gcc, it should be emphasized that this is a 
change of behavior in the later versions of the compiler.

Later,
Brad


Re: US-CERT Vulnerability Note VU#162289

2008-04-20 Thread Nicola Musatti

David Edelsohn wrote:

Nicola,

Please send the project files to Robert Seacord.


Done.

Cheers,
Nicola
--
Nicola.Musatti at gmail dot com
Home: http://nicola.musatti.googlepages.com/home
Blog: http://wthwdik.wordpress.com/



RE: US-CERT Vulnerability Note VU#162289

2008-04-20 Thread Rupert Wood
Nicola Musatti wrote:

 _main PROC

 ; 12   :  char * b = "0123456789";
 ; 13   :  for ( int l = 0; l < 1 << 30; ++l )
 ; 14   :  f(b, l);
 ; 15   : }
 
   xor eax, eax
   ret 0
 _main ENDP

Note that it optimised away your whole program! It could blank out f() because 
it never needed to call it. Try marking it __declspec(dllexport) and you'll see 
it *does not* get optimised away.
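
A sketch of that variant (editorial, not Rupert's own code;
__declspec(dllexport) is the MSVC-specific attribute he refers to):

    /* Exporting f() forces MSVC to emit a standalone copy of the function,
       so it cannot be discarded merely because main() never needs the call;
       the comparison inside the emitted body may still be folded away. */
    __declspec(dllexport) int f(char *buf, int len) {
        len = 1 << 30;
        if (buf + len < buf)
            return 1;
        return 0;
    }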

That said, from my experiments VC will optimise f() away to 0 when inlining it.

Rupert.




Re: US-CERT Vulnerability Note VU#162289

2008-04-20 Thread Nicola Musatti

Rupert Wood wrote:

Nicola Musatti wrote:


_main   PROC

; 12   :    char * b = "0123456789";
; 13   :    for ( int l = 0; l < 1 << 30; ++l )
; 14   :        f(b, l);
; 15   : }

    xor eax, eax
    ret 0
_main   ENDP


Note that it optimised away your whole program! It could blank out
f() because it never needed to call it.


That's true, although f() was still compiled to the equivalent of 
'return 0;'.

 This can be made more evident by changing f() to

#include <iostream>

int f(char *buf, int len) {
    int res = 0;
    len = 1 << 30;
    if (buf + len < buf)
        res = 1;
    std::cout << res << '\n';
    return res;
}

The resulting f() amounts to

std::cout << 0 << '\n';
return 0;

Which is still inlined into main().
Cheers,
Nicola
--
Nicola.Musatti at gmail dot com
Home: http://nicola.musatti.googlepages.com/home
Blog: http://wthwdik.wordpress.com/



Re: US-CERT Vulnerability Note VU#162289

2008-04-19 Thread Nicola Musatti
Sorry to be so late in joining this discussion. I'm the person who 
originally notified Mark Mitchell about Microsoft's compiler performing 
this same optimization under certain conditions. Since mailing Mark on 
the subject I also tried VC++ 2008, and it behaves exactly like its 
predecessor. Here's my test program, directly derived from Mark's test case:


int f(char *buf, int len) {
  len = 1 << 30;
  if (buf + len < buf)
    return 1;

  return 0;
}

int main()
{
    char * b = "0123456789";
    for ( int l = 0; l < 1 << 30; ++l )
        f(b, l);
}

This is the command line shown by the IDE:
/Ox /GL /D WIN32 /D _CONSOLE /D _UNICODE /D UNICODE /Gm /EHsc 
/MDd /GS- /Za /FAs /FaDebug\\ /FoDebug\\ /FdDebug\vc90.pdb /W4 /c 
/Wp64 /Zi /TP .\cert.cpp


This is the assembler listing generated by VC++ 2008. VC++ 2005's 
listing differs only in the first line:


; Listing generated by Microsoft (R) Optimizing Compiler Version 
15.00.21022.08


TITLE   d:\src\cert\cert.cpp
.686P
.XMM
include listing.inc
.model  flat


$SG-5   DB  '0123456789', 00H
PUBLIC  ?f@@[EMAIL PROTECTED]   ; f
; Function compile flags: /Ogtpy
; File d:\src\cert\cert.cpp
_TEXT   SEGMENT
?f@@[EMAIL PROTECTED] PROC  ; f

; 2:   len = 1 << 30;
; 3:   if (buf + len < buf)
; 4: return 1;
; 5:
; 6:
; 7:   return 0;

xor eax, eax

; 8: }

ret 0
?f@@[EMAIL PROTECTED] ENDP  ; f
_TEXT   ENDS
PUBLIC  _main
; Function compile flags: /Ogtpy
_TEXT   SEGMENT
_main   PROC

; 12   :    char * b = "0123456789";
; 13   :    for ( int l = 0; l < 1 << 30; ++l )
; 14   :f(b, l);
; 15   : }

xor eax, eax
ret 0
_main   ENDP
_TEXT   ENDS
END

I can make my project files available if anybody is interested.

I also tested CodeGear (Borland)'s C++ Builder 2007 compiler and as far 
as I can tell it doesn't perform this optimization. Maybe we should all 
switch to their product ;-)


Cheers,
Nicola Musatti
--
Nicola.Musatti at gmail dot com
Home: http://nicola.musatti.googlepages.com/home
Blog: http://wthwdik.wordpress.com/



Re: US-CERT Vulnerability Note VU#162289

2008-04-19 Thread David Edelsohn
Nicola,

Please send the project files to Robert Seacord.

David

--- Forwarded Message

Date: Sat, 19 Apr 2008 19:12:13 +0200
From: Robert C. Seacord [EMAIL PROTECTED]
To: David Edelsohn [EMAIL PROTECTED]
CC: [EMAIL PROTECTED]
Subject: Re: Nicola Musatti: Re: US-CERT Vulnerability Note VU#162289

David,

Sure, please email us the project files.

Thanks,
rCs

--- End of Forwarded Message



Re: US-CERT Vulnerability Note VU#162289

2008-04-18 Thread Ralph Loader
BTW,

It occurs to me that the optimisation is just as likely (if not more
likely) to remove security holes as it is to introduce them.

If someone writes a comparison involving 'buf+len' because they
incorrectly ignored a possibility of 'buf+len' wrapping, then
the optimisation fixes, not breaks, the code.

My evidence-free opinion is that this scenario is in fact more likely
than the apparently contrived example in the CERT note.

Ralph.

PS  The CERT note refers to existing programs broken by the
optimisation.  I wrote to their feedback address asking about the
existing programs and heard nothing back, which makes me suspect that
the authors of the note don't know of any, despite their claim.


 The GCC SC was aware of this CERT posting before it was public.  Our 
 feeling is that this is not a GCC bug, although it is something that
 we would like GCC to warn about.  I talked to Ian Taylor and he
 agreed to work on the warning.
 
 Some other compilers do this optimization too, so compiling similar 
 applications with them will also lead to the removal of the test.
 We've identified one specific highly-optimizing compiler that does
 this optimization.  If you have information about other such
 compilers, please send it to me, and I will forward to CERT.  We
 would like to counter the idea that GCC is unique in this regard.
 (In fact, I expect that the reason that this issue was found with GCC
 is that GCC is used so widely!)
 
 Here is the test program:
 
 int f(char *buf, int len) {
   len = 1 << 30;
   if (buf + len < buf)
     return 1;
 
   return 0;
 }
 
 If you know of a non-GCC compiler that optimizes away the test (so
 that the function always returns 0), please post here, and let me
 know the name, version number, command-line options, etc. you used to
 demonstrate that.
 
 Thanks,
 


Re: US-CERT Vulnerability Note VU#162289

2008-04-15 Thread Paolo Bonzini



A theoretical argument for why somebody might write problematic code
is http://www.fefe.de/openldap-mail.txt .


But that's like putting the cart before the horse (and complaining 
that it does not work).


You find a security problem, you find a solution, you find the compiler 
optimizes away, you blame the compiler.  You don't look for an 
alternative, which would be the most sensible: compare the length with 
the size, without unnecessary pointer arithmetic.  Since the length is 
unsigned, it's enough to do this:


  if (len > (size_t) (max - ptr))
/* overflow */ ;

Paolo
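
For concreteness, here is a minimal sketch of the check Paolo describes, using hypothetical names (ptr, buf_end, len) rather than anything from the thread: the length is compared against the space that remains, so no out-of-range pointer is ever formed.

#include <stddef.h>

/* Returns nonzero if reading len bytes starting at ptr would run past
   buf_end.  ptr and buf_end are assumed to point into, or one past the
   end of, the same buffer, so buf_end - ptr is well defined. */
int would_overflow(const char *ptr, const char *buf_end, size_t len)
{
    return len > (size_t)(buf_end - ptr);
}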


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Paul Schlie

 (as an aside, as most target implementations treat pointers as unsigned
 values, its not clear that presuming signed integer overflow semantics are
 a reasonable choice for pointer comparison optimization)

 The point is not of presuming signed integer overflow semantics (I was
 corrected on this by Ian Taylor). It is of presuming that pointers never
 move before the beginning of their object. If you have an array of 20
 elements, pointers &a[0] to &a[20] are valid (accessing a[20] is not valid),
 but the compiler can assume that the program does not refer to a[-2].

 Paolo

Yes (and in which case if the compiler is smart enough to recognize
this it should generate an error, not emit arbitrary [or absence] of
code); but the example in question was:

void f(char *buf)  {
 unsigned int len = 0xFF00u; /* or similar */

if (buf+len < buf) puts("true");

}

In which case buf is merely a pointer which may point to any char, not a
char within a particular array, implying buf+len is also just a pointer,
ultimately being compared against buf;

If all such pointers are presumed to be restricted to pointing to the
element they were originally assigned, then all composite pointer arithmetic
such as buf+len would be invalid. All this being said, I understand that in
general this is an anomalous case, however on small embedded machines with
small memory spaces or when writing drivers or memory allocators, such
pointer arithmetic may be perfectly legitimate it would seem.




Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Richard Guenther
On Mon, Apr 14, 2008 at 1:55 PM, Paul Schlie [EMAIL PROTECTED] wrote:


   (as an aside, as most target implementations treat pointers as unsigned
   values, its not clear that presuming signed integer overflow semantics are
   a reasonable choice for pointer comparison optimization)
  
   The point is not of presuming signed integer overflow semantics (I was
   corrected on this by Ian Taylor). It is of presuming that pointers never
   move before the beginning of their object. If you have an array of 20
   elements, pointers &a[0] to &a[20] are valid (accessing a[20] is not 
 valid),
   but the compiler can assume that the program does not refer to a[-2].
  
   Paolo

  Yes (and in which case if the compiler is smart enough to recognize
  this it should generate an error, not emit arbitrary [or absents] of
  code); but the example in question was:

  void f(char *buf)  {
   unsigned int len = 0xFF00u; /* or similar */


  if (buf+len < buf) puts("true");

  }

  In which case buf is merely a pointer which may point to any char, not a
  char within a particular array, implying buf+len is also just a pointer,
  ultimately being compared against buf;

  If all such pointers are presumed to be restricted to pointing to the
  element they were originally assigned, then all composite pointer arithmetic
  such as buf+len would be invalid. All this being said, I understand that in
  general this is an anomalous case, however on small embedded machines with
  small memory spaces or when writing drivers or memory allocators, such
  pointer arithmetic may be perfectly legitimate it would seem.

In absence of any declared object (like with this testcase where we just
have an incoming pointer to some unknown object) the compiler can
still assume that any valid object ends at the end of the address space.
Thus, an object either declared or allocated via malloc never wraps
around to address zero.  Thus, ptr + int never overflows.

Richard.
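
To see what that assumption buys, here is a source-level sketch (an illustration only, not a description of GCC internals): under the rule that a pointer plus a non-negative integer never compares below the pointer, the two functions below behave identically, so a compiler exploiting the rule may compile the first exactly like the second.

/* Hypothetical illustration: with the no-wrap assumption, buf + len can
   compare below buf only when len is negative, so the pointer test
   collapses to a test on len alone. */
int check_ptr(char *buf, int len) { return buf + len < buf; }
int check_len(char *buf, int len) { (void)buf; return len < 0; }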


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

Paul Schlie wrote:

(as an aside, as most target implementations treat pointers as unsigned
values, its not clear that presuming signed integer overflow semantics are
a reasonable choice for pointer comparison optimization)

The point is not of presuming signed integer overflow semantics (I was
corrected on this by Ian Taylor). It is of presuming that pointers never
move before the beginning of their object. If you have an array of 20
elements, pointers &a[0] to &a[20] are valid (accessing a[20] is not valid),
but the compiler can assume that the program does not refer to a[-2].

Paolo


Yes (and in which case if the compiler is smart enough to recognize
this it should generate an error, not emit arbitrary [or absents] of
code); but the example in question was:

void f(char *buf)  {
 unsigned int len = 0xFF00u; /* or similar */

if (buf+len < buf) puts("true");

}

In which case buf is merely a pointer which may point to any char, not a
char within a particular array, implying buf+len is also just a pointer,
ultimately being compared against buf;


nope, that may be in your mind what C means, but it's not what the
C language says. A pointer can only point within the allocated
object. Given that constraint, it is obvious that the condition can
never be true. Of course if the compiler elides the test on this
basis, it is nice if it warns you (certainly there is no basis
for an error message in the above code).


If all such pointers are presumed to be restricted to pointing to the
element they were originally assigned, then all composite pointer arithmetic
such as buf+len would be invalid. 


no, that's just wrong, the computation buf+len is valid if the
resulting pointer is within the original object, a condition that
cannot be caught statically, and with typical implementations
cannot be caught dynamically in all cases either.

You might want to think of an implementation of C where pointers are
a pair, a base pointer and an offset, and pointer arithmetic only
works on the offset. This is a valid implementation, and is in some
sense the formal semantic model of C. It is used in some debugging
versions of C that always check this condition at run time.

It was also used in effect on the 286, where pointers at the
hardware level are segment+offset, and it is valid in C on the
286 to do arithmetic only on the offset part of the address,
and there were indeed C compilers that worked this way.
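
For readers who have not seen such an implementation, here is a small sketch of the base-plus-offset model described above (hypothetical names, purely illustrative): checking implementations carry something like this around so that every arithmetic step can be validated.

#include <assert.h>
#include <stddef.h>

struct checked_ptr {
    char   *base;    /* start of the allocated object */
    size_t  size;    /* size of the object in bytes   */
    size_t  offset;  /* current position, 0 .. size   */
};

/* Pointer + integer: only the offset moves, and the result must stay
   within [0, size], mirroring what the C standard actually permits. */
static struct checked_ptr cp_add(struct checked_ptr p, ptrdiff_t n)
{
    assert(n >= -(ptrdiff_t)p.offset &&
           n <= (ptrdiff_t)(p.size - p.offset));
    p.offset += (size_t)n;
    return p;
}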

Alex Stepanov told me once that he preferred Ada to C, because Ada
has proper pointer arithmetic (via the type Integer_Address) which
is defined to work in Ada in the manner that Paul mistakenly expects
for C. Integer_Address would be a bit of a pain to implement on
a 286 :-)

Of course in C in practice, pointers are just machine addresses,
and more general pointer arithmetic does work, but any program
taking advantage of this is not written in the C language.



All this being said, I understand that in

general this is an anomalous case, however on small embedded machines with
small memory spaces or when writing drivers or memory allocators, such
pointer arithmetic may be perfectly legitimate it would seem.





Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

Richard Guenther wrote:


In absence of any declared object (like with this testcase where we just
have an incoming pointer to some unknown object) the compiler can
still assume that any valid object ends at the end of the address space.
Thus, an object either declared or allocated via malloc never wraps
around to address zero.  Thus, ptr + int never overflows.


Indeed,

An interesting case is the special allowance to point just past the
end of an array if the pointer is not dereferenced, this allows the
C idiom

   for (x = arr; x < &arr[10]; x++) ...

where arr has bounds 0..9, the limit pointer is used only for
testing, and this test must be valid. This means that you can't
have an array allocated up to the extreme end of the address
space if this would not work properly. I remember this issue
arising on the 286, where the maximum size of an array was
one element less than 64K bytes on one compiler to avoid
this anomaly.
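
A self-contained version of that idiom, for reference (the array and its size are arbitrary here): the limit pointer &arr[10] may be formed and compared against, but is never dereferenced.

#include <stdio.h>

int main(void)
{
    int arr[10] = {0};
    int *x;

    /* Pointing one element past the end is permitted as long as the
       pointer is only compared, never dereferenced. */
    for (x = arr; x < &arr[10]; x++)
        *x = (int)(x - arr);

    printf("%d\n", arr[9]);   /* prints 9 */
    return 0;
}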


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Andreas Schwab
Robert Dewar [EMAIL PROTECTED] writes:

 Alex Stepanov told me once that he preferred Ada to C, because Ada
 has proper pointer arithmetic (via the type Integer_Address) which
 is defined to work in Ada in the manner that Paul mistakenly expects
 for C. Integer_Address would be a bit of a pain to implement on
 a 286 :-)

In C it is called uintptr_t.

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
And now for something completely different.
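
As a sketch of Andreas's point (hypothetical function name; uintptr_t comes from <stdint.h> and is optional in C99): once the address is converted to uintptr_t, the arithmetic is ordinary unsigned integer arithmetic with defined modular wrap-around, so the test below cannot be discarded merely on the grounds that pointers never wrap.

#include <stdint.h>
#include <stddef.h>

/* Compares addresses as plain unsigned integers.  Whether the wrapped
   value means anything useful is implementation-specific, but the
   comparison itself is well defined. */
int sum_wraps(const char *buf, size_t len)
{
    uintptr_t p = (uintptr_t)buf;
    return p + len < p;
}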


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Joe Buck

Robert C. Seacord wrote:
  i agree that the optimization is allowed by C99.  i think this is a
  quality of implementation issue,  and that it would be preferable for
  gcc to emphasize security over performance, as might be expected.

On Sun, Apr 13, 2008 at 11:51:00PM +0200, Florian Weimer wrote:
 I don't think this is reasonable.  If you use GCC and its C frontend,
 you want performance, not security.

Furthermore, there are a number of competitors to GCC.  These competitors
do not advertise better security than GCC.  Instead they claim better
performance (though such claims should be taken with a grain of salt).
To achieve high performance, it is necessary to take advantage of all of
the opportunities for optimization that the C language standard permits.

For CERT to simultaneously argue that GCC should be crippled (to
emphasize security over performance) but that nothing negative should
be said about competing compilers is the height of irresponsibility.
Any suggestion that users should avoid new versions of GCC will drive
users to competing compilers that optimize at least as aggressively.



Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Florian Weimer
* Robert Dewar:

 Florian Weimer wrote:
 * Robert C. Seacord:

 i agree that the optimization is allowed by C99.  i think this is a
 quality of implementation issue,  and that it would be preferable for
 gcc to emphasize security over performance, as might be expected.

 I don't think this is reasonable.  If you use GCC and its C frontend,
 you want performance, not security.

 I find this a *VERY* dubious claim, in my experience VERY few users
 are at the boundary where small factors in performance are critical,
 but MANY users are definitely concerned with security.

Existing safe C implementations take a performance hit which is a factor
between 5 and 11 in terms of execution time.  There is some new research
that seems to get away with a factor less than 2, but it's pretty recent
and I'm not sure if it's been reproduced independently.  If GCC users
are actually willing to take that hit for some gain in security
(significant gains for a lot of legacy code, of course), then most of
the recent work on GCC has been wasted.  I don't think this is the case.

Keep in mind it's not the comparison that's the real problem here, it's
the subsequent buffer overflow.  And plugging that hole in full
generality is either difficult to do, or involves a significant run-time
performance overhead (or both).

 To me, dubious optimizations like this at the very least should
 be optional and able to be turned off.

Why is this optimization dubious?  We would need to look at real-world
code to tell, and so far, we haven't heard anything about the context in
which the issue was originally encountered.


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Ian Lance Taylor
Florian Weimer [EMAIL PROTECTED] writes:

 To me, dubious optimizations like this at the very least should
 be optional and able to be turned off.

 Why is this optimization dubious?  We would need to look at real-world
 code to tell, and so far, we haven't heard anything about the context in
 which the issue was originally encountered.

The basis of the optimization in question is
http://gcc.gnu.org/PR27039 .

A theoretical argument for why somebody might write problematic code
is http://www.fefe.de/openldap-mail.txt .

I don't know where, or even if, such code is actually found in the
wild.

Ian


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

Ian Lance Taylor wrote:


A theoretical argument for why somebody might write problematic code
is http://www.fefe.de/openldap-mail.txt .

I don't know where, or even if, such code is actually found in the
wild.

Ian


Fair enough question. The other question of course is how much this
optimization saves.


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Joe Buck
On Mon, Apr 14, 2008 at 04:27:40PM -0400, Robert Dewar wrote:
 Ian Lance Taylor wrote:
 
 A theoretical argument for why somebody might write problematic code
 is http://www.fefe.de/openldap-mail.txt .
 
 I don't know where, or even if, such code is actually found in the
 wild.
 
 Ian
 
 Fair enough question. The other question of course is how much this
 optimization saves.

The big savings are in loops, where the compiler can determine that
it doesn't have to consider the possibility of aliasing and can therefore
use values in registers instead of reloading from memory.
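
A hypothetical example of the kind of loop where the assumption matters (an illustration, not a measurement): if buf + len cannot wrap, the compiler knows the loop below executes exactly len iterations and can keep the bound and the induction pointer in registers, unroll, or vectorize accordingly.

#include <stddef.h>

/* Illustrative only: assumes buf points to at least len bytes.  The
   exit test p < buf + len is easy to analyse precisely when buf + len
   is known not to wrap around the end of the address space. */
void zero_bytes(char *buf, size_t len)
{
    char *p;
    for (p = buf; p < buf + len; ++p)
        *p = 0;
}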


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

Florian Weimer wrote:


Existing safe C implementations take a performance hit which is a factor
between 5 and 11 in terms of execution time.  There is some new research
that seems to get away with a factor less than 2, but it's pretty recent
and I'm not sure if it's been reproduced independently.  If GCC users
are actually willing to take that hit for some gain in security
(significant gains for a lot of legacy code, of course), then most of
the recent work on GCC has been wasted.  I don't think this is the case.


This is wholly excessive rhetoric, it's using a common invalid device
in argument, sometimes called extenso ad absurdum. It goes like this

You advocate A

But then to be consistent you should also advocate B,C,D,E

I will now argue against the combination A,B,C,D,E :-) :-)

These implementations that are 5-11 times slower are doing all sorts
of things that

a) I am not advocating in this discussion
b) I would not advocate in any case

Are you really saying that this particular optimization is costly to
eliminate? If so I just don't believe that allegation without data.


Keep in mind it's not the comparison that's the real problem here, it's
the subsequent buffer overflow.  And plugging that hole in full
generality is either difficult to do, or involves a significant run-time
performance overhead (or both).


And there you go! I do NOT advocate plugging that hole in full
generality, so go try this argument on someone who does (I don't
think there are any such people around here!)



To me, dubious optimizations like this at the very least should
be optional and able to be turned off.


Why is this optimization dubious?  We would need to look at real-world
code to tell, and so far, we haven't heard anything about the context in
which the issue was originally encountered.


An optimization is dubious to me if

a) it produces surprising changes in behavior (note the importance of
the word surprising here)

b) it does not provide significant performance gains (note the
importance of the word significant here).

I find this optimization qualifies as meeting both criteria a) and b),
so that's why I consider it dubious.



Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

Joe Buck wrote:

On Mon, Apr 14, 2008 at 04:27:40PM -0400, Robert Dewar wrote:

Ian Lance Taylor wrote:


A theoretical argument for why somebody might write problematic code
is http://www.fefe.de/openldap-mail.txt .

I don't know where, or even if, such code is actually found in the
wild.

Ian

Fair enough question. The other question of course is how much this
optimization saves.


The big savings are in loops, where the compiler can determine that
it doesn't have to consider the possibility of aliasing and can therefore
use values in registers instead of reloading from memory.


Savings? yes
Big? (from this particular optimization) dubious without data.


RE: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Gerald.Williams
Robert Dewar wrote:
 An optimziation is dubious to me if
 
 a) it produces surprising changes in behavior (note the importance of
 the word surprising here)
 
 b) it does not provide significant performance gains (note the
 importance of the word significant here).
 
 I find this optimization qualifies as meeting both criteria a) and b),
 so that's why I consider it dubious.

I don't think this is a particularly fruitful argument to be
having at this stage.

It's already been acknowledged that the source code is wrong
to assume that the compiler knows about wrapping of pointers.
The real issue at this stage is how to warn users who may be
using GCC and implicitly relying on its old behavior, without
unintentionally pushing people in the wrong direction. Since
this optimization is performed by many other compilers, the
ship has already sailed on this one, so to speak. [In fact,
after GCC does something to warn users about this, it'll be
much safer than those other compilers.]

I agree that on the face of it, it seems like you wouldn't
want to optimize away tests like this when you can know that
pointer arithmetic really does look the same as unsigned
arithmetic (for a particular architecture, etc.). However,
sometimes an optimization may enable thirty more, so I for
one am not going to argue against it. Especially not when
many other compilers do it also.

-Jerry

P.S. I'm having some déjà vu, recalling discussions back in
the GCC 2.7 days about whether it was really OK to change
the behavior for signed arithmetic to support devices with
saturation. We've come a long way since then.


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

[EMAIL PROTECTED] wrote:


It's already been acknowledged that the source code is wrong
to assume that the compiler knows about wrapping of pointers.
The real issue at this stage is how to warn users who may be
using GCC and implicitly relying on its old behavior, without
unintentionally pushing people in the wrong direction. Since
this optimization is performed by many other compilers, the
ship has already sailed on this one, so to speak.


that's a strong argument I agree


[In fact,
after GCC does something to warn users about this, it'll be
much safer than those other compilers.]


For sure you want a warning, the compiler should never be
removing explicit tests in the users code without generating
a warning I would think.


I agree that on the face of it, it seems like you wouldn't
want to optimize away tests like this when you can know that
pointer arithmetic really does look the same as unsigned
arithmetic (for a particular architecture, etc.). However,
sometimes an optimization may enable thirty more, so I for
one am not going to argue against it. Especially not when
many other compilers do it also.


Not sure I agree with that, but I can certainly live with
a warning.


-Jerry

P.S. I'm having some déjà vu, recalling discussions back in
the GCC 2.7 days about whether it was really OK to change
the behavior for signed arithmetic to support devices with
saturation. We've come a long way since then.


:-)


Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Joe Buck
On Mon, Apr 14, 2008 at 06:42:30PM -0400, Robert Dewar wrote:
 [In fact,
 after GCC does something to warn users about this, it'll be
 much safer than those other compilers.]
 
 For sure you want a warning, the compiler should never be
 removing explicit tests in the users code without generating
 a warning I would think.

I vaguely recall a paper from Dawson Engler's group (the people who
did the Stanford Checker and Coverity) about warnings for dead code
removal.  They are often bugs if seen in straight-line code, but
macros as well as inlining of functions will produce many warnings
of this kind.  They focused their work on addressing what the user
could be expected to know, the idea being to issue warnings if
the code on a single level is redundant, but suppress warnings
if the redundant text came from macros or inlining.



Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Paul Schlie
 Robert Dewar [EMAIL PROTECTED]
 
 Richard Guenther wrote:
 
 In absence of any declared object (like with this testcase where we just
 have an incoming pointer to some unknown object) the compiler can
 still assume that any valid object ends at the end of the address space.
 Thus, an object either declared or allocated via malloc never wraps
 around to address zero.  Thus, ptr + int never overflows.
 
 Indeed,
 
 An interesting case is the special allowance to point just past the
 end of an array if the pointer is not deferenced, this allows the
 C idiom
 
 for (x = arr; x < &arr[10]; x++) ...
 
 where arr has bounds 0..9, the limit pointer is used only for
 testing, and this test must be valid. This means that you can't
 have an array allocated up to the extreme end of the address
 space if this would not work properly. I remember this issue
 arising on the 286, where the maximum size of an array was
 one element less than 64K bytes on one compiler to avoid
 this anomoly.

Further, although admittedly contrived, if a pointer to a second
element of an array is passed, to which an (unsigned) -1 is added, which in
effect generates an unsigned wrapping overflow and ends up pointing to the
first element of the same array; whereby such a sum will be both less than
the pointer passed and be known to reference the first element of the same
array. (So thereby if for some reason someone wants to pass not a pointer
to the first element of an array, but rather the Nth element and
subsequently access the Nth-X element, it is conceivable that one may want
to verify that the resulting pointer will always be less than the pointer
passed, and seemingly be legitimate.)
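
To make the contrast concrete, a small sketch (hypothetical array a, illustrative only): the subtraction form stays within the rules, while the form that adds (unsigned) -1 relies on the address computation wrapping, which is exactly what the optimization is entitled to assume never happens.

#include <stddef.h>

void example(void)
{
    int a[8];
    int *second = &a[1];

    /* Well defined: the result still points into the array. */
    int *first_ok = second - 1;

    /* Wrap-dependent: formally this asks for second + SIZE_MAX and only
       lands back on &a[0] if the address arithmetic wraps around, which
       the standard does not guarantee and the optimizer may assume
       never occurs. */
    int *first_wrap = second + (size_t)-1;

    (void)first_ok;
    (void)first_wrap;
}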




Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Robert Dewar

Joe Buck wrote:

On Mon, Apr 14, 2008 at 06:42:30PM -0400, Robert Dewar wrote:

[In fact,
after GCC does something to warn users about this, it'll be
much safer than those other compilers.]

For sure you want a warning, the compiler should never be
removing explicit tests in the users code without generating
a warning I would think.


I vaguely recall a paper from Dawson Engler's group (the people who
did the Stanford Checker and Coverity) about warnings for dead code
removal.  They are often bugs if seen in straight-line code, but
macros as well as inlining of functions will produce many warnings
of this kind.  They focused their work on addressing what the user
could be expected to know, the idea being to issue warnings if
the code on a single level is redundant, but suppress warnings
if the redundant text came from macros or inlining.


Right, we have heuristics in the Ada front end along these lines.
For instance, you generally want to be warned if a test is
always true or always false, but if the test is

   if XYZ then

where XYZ is a boolean constant, then probably this is
conditional compilation type activity that is legitimate.



Re: US-CERT Vulnerability Note VU#162289

2008-04-14 Thread Ian Lance Taylor
Robert Dewar [EMAIL PROTECTED] writes:

 An optimziation is dubious to me if

 a) it produces surprising changes in behavior (note the importance of
 the word surprising here)

 b) it does not provide significant performance gains (note the
 importance of the word significant here).

 I find this optimization qualifies as meeting both criteria a) and b),
 so that's why I consider it dubious.

To me, this particular optimization does not meet criteria a).  I never
write code that constructs a pointer which does not point to any
object, because I know that is invalid.  I think it is rather weird
that anybody would write code like this.  If I want to check whether
an index is out of bounds, I compare the index and the length of the
buffer.  I don't use a pointer comparison to check for an out of
bounds index.

I don't know about criteria b), I haven't measured.  It's easy to come
up with test cases where it gives a performance gain--many loops with
a pointer induction variable will benefit--but I don't know how
significant they are.

Ian
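
A sketch of the style of check Ian describes (hypothetical names, illustrative only): the bounds test is done on the integers, so no questionable pointer is ever constructed and there is nothing for the optimizer to discard.

#include <stddef.h>

/* Returns a pointer to element index, or NULL if it is out of range. */
char *element(char *buf, size_t buflen, size_t index)
{
    if (index >= buflen)
        return NULL;
    return buf + index;
}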


Re: US-CERT Vulnerability Note VU#162289

2008-04-13 Thread Florian Weimer
* Robert C. Seacord:

 i agree that the optimization is allowed by C99.  i think this is a
 quality of implementation issue,  and that it would be preferable for
 gcc to emphasize security over performance, as might be expected.

I don't think this is reasonable.  If you use GCC and its C frontend,
you want performance, not security.  After all, the real issue is not
the missing comparison instruction, but the fact that this might lead to
subsequent unwanted code execution.  There are C implementations that
run more or less unmodified C code in an environment which can detect
such misuse, but they come at a performance cost few are willing to pay.
Without such a sacrifice, the only thing you can come up with is "We are
secure, sometimes, but we can't tell you precisely when", which doesn't
seem to be that helpful.

The problem I see with this particular case is that the type of
comparison you describe is bogus even if translated to straight machine
code because it assumes an awful amount of detail about the target
environment.  Give us a real-world example of an affected program that
runs on a significant number of GCC-supported platforms, and at least I
would be more inclined to view this as an issue in GCC.  Just to be
clear, I wouldn't be surprised if some real-world code breaks, but this
code hasn't been written in a portable style in the first place and
should be considered buggy anyway.

C programmers usually don't care much about the corner cases of the
language.  It's pretty easy to come up with things which are generally
assumed to work, but are not actually guaranteed by the standard.  There
are several ways to deal with this, like better reading material for
programmers, or some C variant that writes down the perceived implicit
programmer expectations.  But tweaking the compiler to match these
expectations, without a semi-formal description of what they are, isn't
worth the effort, IMHO.

 i think we mean what we say, which is *Avoid newer versions of gcc
 and *avoiding the use of gcc versions 4.2 and later.  i don't see
 any verbiage that says use a different compiler.

Better use older GCC versions is still disparaging to GCC developers,
like you can only rely on CERT/CC advisories issued in 2007 and earlier
years.  And it's been mentioned in various places that this is
incorrect, some 4.1 versions show similar behavior.


Re: US-CERT Vulnerability Note VU#162289

2008-04-13 Thread Paul Schlie
 Florian Weimer wrote:
 
 Robert C. Seacord wrote:

 i agree that the optimization is allowed by C99.  i think this is a
 quality of implementation issue,  and that it would be preferable for
 gcc to emphasize security over performance, as might be expected.
 
 I don't think this is reasonable.  If you use GCC and its C frontend,
 you want performance, not security.  After all, the real issue is not
 the missing comparison instruction, but the fact that this might lead to
 subsequent unwanted code execution. ...

 The problem I see with this particular case is that the type of
 comparison you describe is bogus even if translated to straight machine
 code because it assumes an awful amount of detail about the target
 environment. ...

No (nor is it necessarily a question of security over performance), but
rather a question of whether any optimization should functionally change
observable program behavior, which in general it should be clear it never
should, as such schizophrenic optimizations will only tend to introduce
inconsistent results which, regardless of their legality, most often
provide no value to anyone. (Faster is better, but not at the expense of
inconsistency; nor should optimizations be used to misguidedly and
intentionally break non-portable code, as non-portable code may still
represent perfectly legal and useful programs, and breaking it is not the
way to encourage the authoring of portable programs.)

Thereby clearly optimizations should be sensitive to the behavior of the
target in the absence of such optimizations; if a target wraps arithmetic
(as most if not all targets factually do, regardless of whether a language
spec says they must), pretending that it doesn't when an observable behavior
is known to depend on it will only needlessly and unproductively
introduce a functional bug (presuming the program behaved as intended
in its absence, regardless of the code's portability).

Aggressive optimizations (particularly those which do not coincide with
the factual behavior of most targets) should arguably only be invoked
by explicit request (i.e. not at a generic level of optimization). IMHO.

(as an aside, as most target implementations treat pointers as unsigned
values, its not clear that presuming signed integer overflow semantics are
a reasonable choice for pointer comparison optimization)




Re: US-CERT Vulnerability Note VU#162289

2008-04-13 Thread Robert Dewar

Paul Schlie wrote:

Florian Weimer wrote:


Robert C. Seacord wrote:

i agree that the optimization is allowed by C99.  i think this is a
quality of implementation issue,  and that it would be preferable for
gcc to emphasize security over performance, as might be expected.

I don't think this is reasonable.  If you use GCC and its C frontend,
you want performance, not security.  After all, the real issue is not
the missing comparison instruction, but the fact that this might lead to
subsequent unwanted code execution. ...

The problem I see with this particular case is that the type of
comparison you describe is bogus even if translated to straight machine
code because it assumes an awful amount of detail about the target
environment. ...


No, (nor necessarily a question of security over performance) but
rather a question of whether any optimization should functionally change
observable program behavior, which in general it should be clear it never
should,


Didn't we tread down this path before with someone who had extreme
views on this topic, perhaps it was you Paul, can't remember.

Anyway, the view that optimization can never change observable
behavior is going MUCH too far, if, as seems the case, you are
including programs with undefined behavior. It most likely
would preclude almost ALL optimization, because if you have a
program which references uninitialized variables, then the values
of these variables could be changed by ANY optimization.

At another level, any optimization changes the performance and
timing, and if you consider the length of time as an observable
behavior, then by definition you would preclude all optimization.
It is in any case easy enough to write a program which does totally
different things depending on how long parts of it take to run.

So though I generally am conservative about optimizations that affect
widely expected behavior, I don't for a moment buy this extreme position.



Aggressive optimizations (particularly those which do not coincide with
the factual behavior of most targets) should arguably only be invoked
by explicit request (i.e. not at a generic level of optimization). IMHO.


Ah yes, it *was* you :-)
I remember this bogus attempt to talk about the factual behavior of
a target, which seems complete nonsense to me.


Re: US-CERT Vulnerability Note VU#162289

2008-04-13 Thread Paolo Bonzini



(as an aside, as most target implementations treat pointers as unsigned
values, its not clear that presuming signed integer overflow semantics are
a reasonable choice for pointer comparison optimization)


The point is not of presuming signed integer overflow semantics (I was 
corrected on this by Ian Taylor).  It is of presuming that pointers 
never move before the beginning of their object.  If you have an array 
of 20 elements, pointers &a[0] to &a[20] are valid (accessing a[20] is 
not valid), but the compiler can assume that the program does not refer 
to a[-2].


Paolo


RE: US-CERT Vulnerability Note VU#162289

2008-04-11 Thread Gerald.Williams
Robert C. Seacord wrote:
 Here is another version of the program (same compiler version/flags).
[...]
 void test_signed(char *buf) {
 signed int len;
[...]
 if((buf+len < buf) != ((uintptr_t)buf+len < (uintptr_t)buf))
 printf(" BUG!");
[...]
 void test_unsigned(char *buf) {
 unsigned int len;
[...]
 if((buf+len < buf) != ((uintptr_t)buf+len < (uintptr_t)buf))
 printf(" BUG!");
[...]
 The unsigned test was one we performed on the gcc versions.  I added
the
 signed test, but it didn't make a difference on Visual Studio.  My
 understanding is that it shouldn't, because the real issue here is
 pointer arithmetic and the resulting type should always be a pointer.

I'm not sure what you mean by that last statement.

I think we've already established that those tests that would
print BUG aren't actually finding bugs in the compiler. It
is not correct to assume that adding a value to a char pointer
will wrap around in the same way as if you added a value to an
unsigned number. You also cannot assume that adding a value to
a signed number will cause it to wrap. (GCC had optimized away
checks for the latter already. There are now reports that many
other compilers may optimize away both tests.)

Are we in agreement on this? The fact that your example prints
BUG! seems to imply that it is invalid to optimize away these
tests, which it isn't.

I was under the impression that the only issue here is that
there is a change in behavior. That's a very fine line to walk
when many other compilers do this. In fact, you can run into
the same change of behavior when switching from unoptimized
debug versions to optimized release versions. Recommending not
using a recent GCC as a possible remedy is dangerous (some
would probably say irresponsible). What you really mean is,
Use an older GCC or some other compiler that is known not to
take advantage of this optimization.

-Jerry

P.S. Has anyone checked how far back in the line of Microsoft
compilers you have to go to before they don't do this same
optimization (i.e., can we show irrefutably that this warning
should apply in the same degree to their compilers as well,
even without the debug versus release version argument)?
I suppose you only have to go as far back as the free version,
by inferring from the debug vs. release argument. :-)


Re: US-CERT Vulnerability Note VU#162289

2008-04-11 Thread Robert C. Seacord

Gerald,

Comments below.

 My
understanding is that it shouldn't, because the real issue here is
pointer arithmetic and the resulting type should always be a pointer.



I'm not sure what you mean by that last statement.
  
my understanding of the C99 standard is that adding an integer and a 
pointer always results in a pointer.  i'm not a compiler expert, but i 
would think that the rules that apply to adding a pointer and an integer 
of any signedness would be the same, and are different from the rules 
that would apply to adding signed or unsigned integers. 

VU#162289 only applies to pointer arithmetic.  I need to go back and 
check, but I don't believe the signedness of the integer involved is a 
factor.

I think we've already established that those tests that would
print BUG aren't actually finding bugs in the compiler.
Sorry about the string literal "Bug".  This was just an artifact from 
the original program.  I should have changed it to "optimization detected".

 It
is not correct to assume that adding a value to a char pointer
will wrap around in the same way as if you added a value to an
unsigned number. 

i think this depends on what you mean by value.

You also cannot assume that adding a value to
a signed number will cause it to wrap. (GCC had optimized away
checks for the latter already. There are now reports that many
other compilers may optimize away both tests.)
  

ditto.

Are we in agreement on this? The fact that your example prints
BUG! seems to imply that it is invalid to optimize away these
tests, which it isn't.
  
i agree that the optimization is allowed by C99.  i think this is a 
quality of implementation issue,  and that it would be preferable for 
gcc to emphasize security over performance, as might be expected.

I was under the impression that the only issue here is that
there is a change in behavior. That's a very fine line to walk
when many other compilers do this. In fact, you can run into
the same change of behavior when switching from unoptimized
debug versions to optimized release versions. Recommending not
using a recent GCC as a possible remedy is dangerous (some
would probably say irresponsible).
this was only one of several solutions listed, and not the first one 
listed.  You will see that the Compliant Solution provided in the 
referenced secure coding rule:
ARR38-C. Do not add or subtract an integer to a pointer if the resulting 
value does not refer to an element within the array 
https://www.securecoding.cert.org/confluence/display/seccode/ARR38-C.+Do+not+add+or+subtract+an+integer+to+a+pointer+if+the+resulting+value+does+not+refer+to+an+element+within+the+array

recommends modifying the code.

 What you really mean is,
Use an older GCC or some other compiler that is known not to
take advantage of this optimization.
  
i think we mean what we say, which is *Avoid newer versions of gcc and 
*avoiding the use of gcc versions 4.2 and later.  i don't see any 
verbiage that says use a different compiler.

P.S. Has anyone checked how far back in the line of Microsoft
compilers you have to go to before they don't do this same
optimization 
our tests show that the 2005 version of the compiler does not perform 
this optimization.  i have not yet tested a newer version of the compiler.


rCs




RE: US-CERT Vulnerability Note VU#162289

2008-04-11 Thread Gerald.Williams
Robert C. Seacord wrote:
 this was only one of several solutions listed, and not the first one
 listed.

Yes, CERT did the right thing by recommending first that the
code be changed (kudos for that).

  What you really mean is,
 Use an older GCC or some other compiler that is known not to
 take advantage of this optimization.
 i think we mean what we say, which is *Avoid newer versions of gcc
and
 *avoiding the use of gcc versions 4.2 and later.  i don't see any
 verbiage that says use a different compiler.

I hope you can understand why that particular phrasing would be
viewed with some scorn, at least on the GCC list. Presumably,
the intent really is to suggest using a compiler that doesn't
have that optimization, not don't use recent GCC versions.

 our tests shows that the 2005 version of the compiler does not perform
 this optimization.  i have not yet tested a newer version of the
compiler.

There was a report (forwarded by Mark Mitchell) of Microsoft
Visual C++ 2005 performing that optimization (the resultant
object code was shown). Have you verified that this report
was false? If not, it may be that you were using a different
set of options or a different version of that compiler.

-Jerry


Re: US-CERT Vulnerability Note VU#162289

2008-04-11 Thread Ian Lance Taylor
Robert C. Seacord [EMAIL PROTECTED] writes:

  What you really mean is,
 Use an older GCC or some other compiler that is known not to
 take advantage of this optimization.
   
 i think we mean what we say, which is *Avoid newer versions of gcc
 and *avoiding the use of gcc versions 4.2 and later.  i don't see
 any verbiage that says use a different compiler.

I know I'm biased, but I think use a different compiler is clearly
implied by the text of the advisory.  If the advisory mentioned that
other compilers also implement the same optimization, then that
implication would not be there.

Ian


Re: US-CERT Vulnerability Note VU#162289

2008-04-11 Thread David Miller
From: Ian Lance Taylor [EMAIL PROTECTED]
Date: Fri, 11 Apr 2008 11:04:38 -0700

 Robert C. Seacord [EMAIL PROTECTED] writes:
 
   What you really mean is,
  Use an older GCC or some other compiler that is known not to
  take advantage of this optimization.

  i think we mean what we say, which is *Avoid newer versions of gcc
  and *avoiding the use of gcc versions 4.2 and later.  i don't see
  any verbiage that says use a different compiler.
 
 I know I'm biased, but I think use a different compiler is clearly
 implied by the text of the advisory.  If the advisory mentioned that
 other compilers also implement the same optimization, then that
 implication would not be there.

I completely agree.



Re: US-CERT Vulnerability Note VU#162289

2008-04-11 Thread Robert C. Seacord

Gerald,

There was a report (forwarded by Mark Mitchell) of Microsoft
Visual C++ 2005 performing that optimization (the resultant
object code was shown). Have you verified that this report
was false? 


both chad and i have tested this with various options on Visual C++ 2005 
and we have not found any combination of options that will cause this 
optimization to occur.


rCs

--
Robert C. Seacord
Senior Vulnerability Analyst
CERT/CC 


Work: 412-268-7608
FAX: 412-268-6989



Re: US-CERT Vulnerability Note VU#162289

2008-04-11 Thread Robert C. Seacord

Ian,

I know I'm biased, but I think use a different compiler is clearly
implied by the text of the advisory.  If the advisory mentioned that
other compilers also implement the same optimization, then that
implication would not be there.
  
yes, i agree we should make this change, and warn against assuming this 
optimization is not performed on other compilers.


if i understand you correctly (and based on our own tests) none of the 
compilation flags we've discussed address this issue, so we should also 
remove this as a solution.


thanks,
rCs


--
Robert C. Seacord
Senior Vulnerability Analyst
CERT/CC 


Work: 412-268-7608
FAX: 412-268-6989



Re: US-CERT Vulnerability Note VU#162289

2008-04-11 Thread Paul Koning
 Ian == Ian Lance Taylor [EMAIL PROTECTED] writes:

 Ian Robert C. Seacord [EMAIL PROTECTED] writes:
  What you really mean is, Use an older GCC or some other compiler
  that is known not to take advantage of this optimization.
  
  i think we mean what we say, which is *Avoid newer versions of
  gcc and *avoiding the use of gcc versions 4.2 and later.  i
  don't see any verbiage that says use a different compiler.

 Ian I know I'm biased, but I think use a different compiler is
 Ian clearly implied by the text of the advisory.  If the advisory
 Ian mentioned that other compilers also implement the same
 Ian optimization, then that implication would not be there.

I don't know if you're biased, actually.  

My reading is the same.  If the recommendation is avoid GCC 4.2 or
later that translates to use a compiler other than GCC 4.2 or later
(that's just standard set algebra).  Clearly that set includes:

a. GCC 4.1 or before
b. Compilers other than GCC

So the CERT note as worded is recommending (a) or (b) as workarounds.
And it is clear that (b) is NOT a correct workaround.  

Therefore the text of the note as currently worded is wrong and has to
be repaired.

   paul



Re: US-CERT Vulnerability Note VU#162289

2008-04-11 Thread Mark Mitchell

Robert C. Seacord wrote:

Gerald,

There was a report (forwarded by Mark Mitchell) of Microsoft
Visual C++ 2005 performing that optimization (the resultant
object code was shown). Have you verified that this report
was false? 


both chad and i have tested this with various options on Visual C++ 2005 
and we have not found any combination of options that will cause this 
optimization to occur.


I have not personally confirmed the VC++ 2005 report.  However, the code 
that I posted was pretty explicit about the optimization options used 
and the resulting code.  I've attached the entire listing here to try to 
give you more information; it includes the exact version of the compiler 
and shows the source code tested in comments.


Another user wrote to me regarding an unnamed version of MSVC that:


It appears that when adding large values (like 0xf000)
the if() check isn't optimized away, but for smaller
values it is (same as with gcc).
Small test:
char *buf="test";
unsigned int len = 0x4000;
if(buf+len < buf) {
LOG_MSG(1);
} else {
LOG_MSG(2);
}

It doesn't matter if len is int or unsigned int.
A maybe relevant thing is that everything up to
0x7fff for len will trigger the optimization,
whereas larger values (larger in terms of unsigned
32bit int) don't.


Presumably, when MSVC thinks the value is negative (when cast to a 
signed type), then it doesn't do the optimization, but, for positive 
values it does.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713
; Listing generated by Microsoft (R) Optimizing Compiler Version 
14.00.50727.762 

TITLE   d:\src\prova\cert\cert.cpp
.686P
.XMM
include listing.inc
.model  flat


$SG-5   DB  00H
PUBLIC  ?f@@[EMAIL PROTECTED]   ; f
; Function compile flags: /Ogtpy
; File d:\src\prova\cert\cert.cpp
_TEXT   SEGMENT
?f@@[EMAIL PROTECTED] PROC  ; f

; 2:   len = 1 << 30;
; 3:   if (buf + len < buf)
; 4: return 1;
; 5: 
; 6: 
; 7:   return 0;

xor eax, eax

; 8: }

ret 0
?f@@[EMAIL PROTECTED] ENDP  ; f
_TEXT   ENDS
PUBLIC  _main
; Function compile flags: /Ogtpy
_TEXT   SEGMENT
_main   PROC

; 12   :char * b = "";
; 13   :for ( int l = 0; l < 1 << 30; ++l )
; 14   :f(b, l);
; 15   : }

xor eax, eax
ret 0
_main   ENDP
_TEXT   ENDS
END


Re: US-CERT Vulnerability Note VU#162289

2008-04-11 Thread Joe Buck
On Fri, Apr 11, 2008 at 03:00:14PM -0400, Robert C. Seacord wrote:
 Ian,
 I know I'm biased, but I think use a different compiler is clearly
 implied by the text of the advisory.  If the advisory mentioned that
 other compilers also implement the same optimization, then that
 implication would not be there.
   
 yes, i agree we should make this change, and warn against assuming this 
 optimization is not performed on other compilers.

Thanks.  I hope that you will correct the advisory promptly to avoid any
implication that one should switch from GCC to a different compiler based
on this issue, since we've already established that most of GCC's
competitors perform similar optimizations under some circumstances (even if
the particular example that appears in the CERT report is not affected,
other, similar examples will be, particularly if they appear in a loop).

Both CERT and GCC have their reputations to consider here, and I think
that this advisory has damaged the reputations of *both*.

 if i understand you correctly (and based on our own tests) none of the 
 compilation flags we've discussed address this issue, so we should also 
 remove this as a solution.

The advisory should emphasize the solution of auditing buffer overflow
checks to make sure that they are correct C, and should help people
write such checks correctly.


Re: US-CERT Vulnerability Note VU#162289

2008-04-10 Thread Robert C. Seacord

Gerald,

Here is another version of the program (same compiler version/flags). 


#include <stdio.h>
#include <stdint.h>   /* for uintptr_t */

void test_signed(char *buf) {
   signed int len;

   len = 1<<30;

   printf("buf = %p; buf+len = %p; buf+len < buf = %d %d",
   buf, buf+len, buf+len < buf, (uintptr_t)buf+len < (uintptr_t)buf);
   if((buf+len < buf) != ((uintptr_t)buf+len < (uintptr_t)buf))
   printf(" BUG!");
   printf("\n");
}


void test_unsigned(char *buf) {
   unsigned int len;

   len = 1<<30;

   printf("buf = %p; buf+len = %p; buf+len < buf = %d %d",
   buf, buf+len, buf+len < buf, (uintptr_t)buf+len < (uintptr_t)buf);
   if((buf+len < buf) != ((uintptr_t)buf+len < (uintptr_t)buf))
   printf(" BUG!");
   printf("\n");
}

int main(void) {
   test_signed(0);
   test_signed((char*)0x7000);
   test_signed((char*)0xf000);

   test_unsigned(0);
   test_unsigned((char*)0x7000);
   test_unsigned((char*)0xf000);
   return 0;
}

output:

buf = ; buf+len = 4000; buf+len < buf = 0 0
buf = 7000; buf+len = B000; buf+len < buf = 0 0
buf = F000; buf+len = 3000; buf+len < buf = 1 1
buf = ; buf+len = 4000; buf+len < buf = 0 0
buf = 7000; buf+len = B000; buf+len < buf = 0 0
buf = F000; buf+len = 3000; buf+len < buf = 1 1

The unsigned test was one we performed on the gcc versions.  I added the 
signed test, but it didn't make a difference on Visual Studio.  My 
understanding is that it shouldn't, because the real issue here is 
pointer arithmetic and the resulting type should always be a pointer.


rCs


Robert C. Seacord wrote:
  

void f(char *buf)  {
  unsigned int len = len = 0xFF00;

  if (buf+len < buf) puts("true");

}



You need to be more precise. That is not the same example
that you quoted for GCC.

In fact, if you vary the criteria too much, you will find
situations where GCC already behaved that way. The test in
the following example is optimized out by old versions of
GCC (certainly my version 3.4.5 compiler does it, with no
warnings even when using -Wall):

 int f(char *buf, int i)
 {
 i = 1<<30;

 if ((int)buf + i < (int)buf)
 return 0;

 return 1;
 }

That's quite a bit less changed than your example, which
brings unsigned-ness into the picture. [This is exactly
the problem--signed overflow and pointer overflow aren't
defined, unlike unsigned overflow.]

Given that current Microsoft compilers reportedly exhibit
this behavior, it sounds like the advisory is going to at
least need some significant rewriting. :-)

-Jerry
  




RE: US-CERT Vulnerability Note VU#162289

2008-04-09 Thread Gerald.Williams
Robert C. Seacord wrote:
 void f(char *buf)  {
   unsigned int len = len = 0xFF00;
 
   if (buf+len < buf) puts("true");
 
 }

You need to be more precise. That is not the same example
that you quoted for GCC.

In fact, if you vary the criteria too much, you will find
situations where GCC already behaved that way. The test in
the following example is optimized out by old versions of
GCC (certainly my version 3.4.5 compiler does it, with no
warnings even when using -Wall):

 int f(char *buf, int i)
 {
 i = 1<<30;

 if ((int)buf + i < (int)buf)
 return 0;

 return 1;
 }

That's quite a bit less changed than your example, which
brings unsigned-ness into the picture. [This is exactly
the problem--signed overflow and pointer overflow aren't
defined, unlike unsigned overflow.]

Given that current Microsoft compilers reportedly exhibit
this behavior, it sounds like the advisory is going to at
least need some significant rewriting. :-)

-Jerry


RE: US-CERT Vulnerability Note VU#162289

2008-04-08 Thread Dave Korn
Mark Mitchell wrote on :

 Mark Mitchell wrote:
 
 I've been told that Intel's ICC compiler also does this optimization:
 
 Apparently, IAR's Atmel AVR compiler does this optimization as well.


  Say, how do I get gcc to actually do this?  I can't reproduce this in a
real-world test.  I would have thought that the call to bar should be
optimised away in the example below, but it doesn't seem to be (tested on a
cygwin host with a recent gcc built from trunk rev.133266):


~ $ gcc -O3 -S -xc -o 2.s -
extern void foo (char *buf, int len);
extern void bar (char *buf);

void foo (char *buf, int len)
{
  if (buf+len < buf)
  {
bar (buf);
  }
  return;
}

void delay (int time)
{
  int i;
  for (i = 0; i < time; i++) ;
}

~ $ cat 2.s
.file   
.text
.p2align 4,,15
.globl _delay
.def_delay; .scl2;  .type   32; .endef
_delay:
pushl   %ebp
movl%esp, %ebp
popl%ebp
ret
.p2align 4,,15
.globl _foo
.def_foo;   .scl2;  .type   32; .endef
_foo:
pushl   %ebp
movl%esp, %ebp
movl12(%ebp), %eax
testl   %eax, %eax
js  L7
popl%ebp
ret
.p2align 4,,7
L7:
popl%ebp
jmp _bar
.def_bar;   .scl2;  .type   32; .endef
~ $ gcc --version
gcc (GCC) 4.4.0 20080316 (experimental)
Copyright (C) 2008 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

~ $


  It looks to me like it got changed into a test if len is negative, which is
right, isn't it?

  Does this optimisation perhaps *only* happen in artificial testcases like
the one at the start of the thread, where 'len' is a compile-time constant
that gcc *knows* is positive?  Because if so, surely the CERT alert is
more-or-less spurious, rather than
perhaps-at-least-a-bit-useful-to-people-who-write-invalid-code?

  BTW, as you might also notice in that example, Gcc now optimises away empty
'delay' loops.  Unlike the impossible-range-check optimisation, this really is
new behaviour, at least since 3.x series.  Theoretically, this too could have
security implications for incorrect code.  Maybe there should be another
warning issued?


cheers,
  DaveK
-- 
Can't think of a witty .sigline today
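
As an aside on that last point, the usual way to keep such a busy-wait alive is sketched below (illustrative, not a recommendation): making the counter volatile obliges the compiler to perform every load and store of it, so the empty loop cannot be deleted even though its result is unused.

void delay(int time)
{
    volatile int i;

    /* Each access to i is observable behaviour the compiler must
       preserve, so the loop body is not optimised away. */
    for (i = 0; i < time; i++)
        ;
}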



Re: US-CERT Vulnerability Note VU#162289

2008-04-08 Thread Daniel Jacobowitz
On Tue, Apr 08, 2008 at 01:09:18PM +0100, Dave Korn wrote:
 ~ $ gcc -O3 -S -xc -o 2.s -
 extern void foo (char *buf, int len);
 extern void bar (char *buf);
 
 void foo (char *buf, int len)
 {
   if (buf+len < buf)
   {
 bar (buf);
   }
   return;
 }

Note that if buf is a char *, there's no way to know that it's the
start of an object.  So you're not testing the same thing they were
talking about; calling foo (&str[2], -1) is completely valid C.


-- 
Daniel Jacobowitz
CodeSourcery


Re: US-CERT Vulnerability Note VU#162289

2008-04-08 Thread Marc Lehmann
On Mon, Apr 07, 2008 at 02:15:14PM -0400, Robert C. Seacord [EMAIL 
PROTECTED] wrote:
 The advisory suggests that people not use GCC.  
 no, it does not.  it suggests they may not want to use the latest 
 versions.  this is one possible work around.  we never say use another 
 compiler.

Yes, it does, you are twisting words, let me quote:

   Avoid newer versions of gcc

   Application developers and vendors of large codebases that cannot be audited
   for use of the defective length checks are urged to avoiding the use of gcc
   versions 4.2 and later.

While using an older version of gcc is a way to follow that suggestion,
using another compiler is as well. So the advisory certainly *does*
suggest to not use GCC as _one_ way of working around this vulnerability
(not strictly speaking) in gcc.

If you think that the advisory should not suggest that users not use gcc,
then it should not say so.

As it is written, it clearly suggests that switching compilers will help
to avoid this problem, and this is a great disservice for people, as it
might drive people to another compiler that cert *knows* does the same
optimisation but doesn't feel like mentioning because it is not a recent
change or the compiler is not popular enough (some of the compilers
certainly are very popular, though, so this all sounds very hypocritical).

In any case, if the intention of the advisory was to make people go to
older releases of gcc and not switch to other compilers, then it clearly
fails to say so. Instead, it *does* suggest that switching compilers is
one way around this issue.

 Some compilers (including, at least, GCC, PathScale, and xlc) 
 optimize away incorrectly coded checks for overflow.  Applications 
 containing these incorrectly coded checks may be vulnerable if 
 compiled with these compilers.
 ok, i'll review again for tone.  generally we don't try to make these 
 notes overly broad; they are only meant to draw attention to a specific 
 issue.

But the specific issue is that compilers do that optimisation, and that the
code is buggy. The issue here is clearly not gcc - about every optimising
compiler does this optimisation, and by singling out gcc it seems as if
somehow gcc would be the problem, when clearly the code is.

The biggest problem is that cert knows that this is a common problem, but
suggests that, among others, switching compilers would somehow solve this
- a great disservice to people who rely on accurate information.

Again, the issue here is buggy code, not gcc.

-- 
The choice of a   Deliantra, the free code+content MORPG
  -==- _GNU_  http://www.deliantra.net
  ==-- _   generation
  ---==---(_)__  __   __  Marc Lehmann
  --==---/ / _ \/ // /\ \/ /  [EMAIL PROTECTED]
  -=/_/_//_/\_,_/ /_/\_\


Re: US-CERT Vulnerability Note VU#162289

2008-04-08 Thread Mark Mitchell

Daniel Jacobowitz wrote:

On Tue, Apr 08, 2008 at 01:09:18PM +0100, Dave Korn wrote:

~ $ gcc -O3 -S -xc -o 2.s -
extern void foo (char *buf, int len);
extern void bar (char *buf);

void foo (char *buf, int len)
{
  if (buf+len < buf)
  {
bar (buf);
  }
  return;
}


Note that if buf is a char *, there's no way to know that it's the
start of an object.  So you're not testing the same thing they were
talking about; calling foo (&str[2], -1) is completely valid C.


Exactly.

Dave, that's why my test example had the:

  len = 1 << 30;

line.  The compiler has to know that the value of len is non-negative in 
order to do the optimization.  Using an unsigned int len parameter 
should also give it that information, but the version I had was designed 
to closely resemble the case shown to me by CERT, which used a signed 
variable.
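
[A minimal pair for illustration, not taken from the original message, contrasting the two situations described above. In g the compiler knows nothing about the sign of len, so the comparison can at most be reduced to a sign test; in f the assignment makes len a known non-negative constant, so under the pointer-overflow assumption the whole test can be folded away.]

/* Unknown sign: the check may be reduced to "len < 0" but not removed.  */
int g (char *buf, int len)
{
  if (buf + len < buf)
    return 1;
  return 0;
}

/* Known non-negative: the check can be optimized down to "return 0".  */
int f (char *buf, int len)
{
  len = 1 << 30;
  if (buf + len < buf)
    return 1;
  return 0;
}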


Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: US-CERT Vulnerability Note VU#162289

2008-04-08 Thread Mark Mitchell

Mark Mitchell wrote:

Mark Mitchell wrote:


I've been told that Intel's ICC compiler also does this optimization:


Apparently, IAR's Atmel AVR compiler does this optimization as well. 
That CPU has 16-bit addresses, so the tester changed the test case to 
use 1 << 14 instead of 1 << 30.


I've also been told that Microsoft Visual C++ 2005 compiler does this 
optimization.


Chad, Robert, are you going to update the CERT notice to reflect all of 
this additional information about other compilers that also do this 
optimization?


Here is the code generated:

; Listing generated by Microsoft (R) Optimizing Compiler Version 
14.00.50727.762

...
; Function compile flags: /Ogtpy

...
; 2:   len = 1 << 30;
; 3:   if (buf + len < buf)
; 4: return 1;
; 5:
; 6:
; 7:   return 0;

xor eax, eax

; 8: }

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: US-CERT Vulnerability Note VU#162289

2008-04-08 Thread Robert C. Seacord

Mark,

I will update the CERT C Secure Coding rule with a list of compilers, 
once we complete a fact check.  Chad is responsible for updating the vul 
note, so I'll need to discuss this with him.


Specifically with regards to MSVC 2005, I thought Chad had already 
checked this and found that it did not exhibit this behavior.  I just 
tested the following program.


#include <stdio.h>

void f(char *buf)  {
 unsigned int len = len = 0xFF00;

 if (buf+len < buf) puts("true");

}

int main(void)
{
   char buffer[100];
   f(buffer);
   return 0;
}

and compiled it with Microsoft Visual Studio 2005 Version 8.0.50727.42 
with the following flags:


/Od /D WIN32 /D _DEBUG /D _CONSOLE /D _UNICODE /D UNICODE /Gm 
/EHsc /RTC1 /MDd /FoDebug\\ /FdDebug\vc80.pdb /W4 /nologo /c /Wp64 
/ZI /TC /errorReport:prompt


And on the disassembly view I see:

 if (buf+len < buf) puts("true");
004113CB  mov eax,dword ptr [buf]
004113CE  add eax,dword ptr [len]
004113D1  cmp eax,dword ptr [buf]
004113D4  jae f+4Dh (4113EDh)
004113D6  mov esi,esp
004113D8  push offset string "true" (41563Ch)
004113DD  call dword ptr [__imp__puts (4182B8h)]
004113E3  add esp,4
004113E6  cmp esi,esp
004113E8  call @ILT+305(__RTC_CheckEsp) (411136h)

And the program prints out true.

Any explanation why I am getting different behavior? 


If I change the flags to

/O2 /D WIN32 /D _DEBUG /D _CONSOLE /D _UNICODE /D UNICODE /Gm 
/EHsc /MDd /FoDebug\\ /FdDebug\vc80.pdb /W4 /nologo /c /Wp64 /TC 
/errorReport:prompt


with /O2 optimization, true is still printed although it is harder to 
examine the disassembly.


rCs



Mark Mitchell wrote:

Mark Mitchell wrote:


I've been told that Intel's ICC compiler also does this optimization:


Apparently, IAR's Atmel AVR compiler does this optimization as well. 
That CPU has 16-bit addresses, so the tester changed the test case to 
use 1 << 14 instead of 1 << 30.


I've also been told that Microsoft Visual C++ 2005 compiler does this 
optimization.


Chad, Robert, are you going to update the CERT notice to reflect all 
of this additional information about other compilers that also do this 
optimization?


Here is the code generated:

; Listing generated by Microsoft (R) Optimizing Compiler Version 
14.00.50727.762

...
; Function compile flags: /Ogtpy

...
; 2:   len = 1 << 30;
; 3:   if (buf + len < buf)
; 4: return 1;
; 5:
; 6:
; 7:   return 0;

xor eax, eax

; 8: }





Re: US-CERT Vulnerability Note VU#162289

2008-04-08 Thread Mark Mitchell

Robert C. Seacord wrote:

I will update the CERT C Secure Coding rule with a list of compilers, 
once we complete a fact check.  Chad is responsible for updating the vul 
note, so I'll need to discuss this with him.


Thanks.

Specifically with regards to MSVC 2005, I thought Chad had already 
checked this and found that it did not exhibit this behavior.  I just 
tested the following program.


I should clarify that I didn't personally do the test with MSVC -- or 
the other compilers mentioned.  (I tried to suggest that in the wording 
of my email, but I should have been more explicit.)  I was reporting 
information that had been given to me.



#include <stdio.h>

void f(char *buf)  {
 unsigned int len = len = 0xFF00;

 if (buf+len < buf) puts("true");

}


That's a little different than my test case, in that in your case len is 
unsigned.  Your case should make the optimization easier for the 
compiler to do, not harder -- but it would be worth trying with signed 
int.  I'd also eliminate the double-assignment to len.  My test case was:


  int f(char *buf, int len) {
len = 1 << 30;
if (buf + len < buf)
  return 1;
return 0;
  }

which is what the person who emailed me about MSVC tested.  I did 
include the optimization flags they used in that email; they appeared in 
a comment in the MSVC output.  I don't know what those flags mean, 
though; I'm not an expert on MSVC usage.


Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


RE: US-CERT Vulnerability Note VU#162289

2008-04-08 Thread Dave Korn
Robert C. Seacord wrote on :

 Specifically with regards to MSVC 2005, I thought Chad had already
 checked this and found that it did not exhibit this behavior.  I just
 tested the following program. 
 
 #include <stdio.h>
 
 void f(char *buf)  {
   unsigned int len = len = 0xFF00;

  I'm sure you didn't mean to do that, but I doubt it affects the results.

  BTW, that's a very different number from (1 << 30).  How about trying again
with a 'u' suffix?  Or with something that doesn't have the sign bit set?

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: US-CERT Vulnerability Note VU#162289

2008-04-08 Thread Florian Weimer
* Robert C. Seacord:

 I agree with you that the behavior that gcc exhibits in this case is
 permitted by the ISO/IEC 9899:1999 C specification
 http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1124.pdf
 (§6.5.6p8).   I believe the vulnerability is that gcc may *silently*
 discard the overflow checks and that this is a recent change in
 behavior. 

The problem is that there is no overflow check in the code.  At a purely
syntactic level, it appears that there is an overflow check.  But this
is true for integer overflows, too.

There are some issues that are debatable, like non-two's-complement
arithmetic for signed integers (also a feature of GCC and other
compilers)--or operator new[] overflows, for which I'd really like to
see run-time checks.

But treating C pointers as machine addresses is pretty clearly flawed
code.  For instance, are pointers compared as signed or unsigned
integers at the instruction level?  Which behavior do you need so that
your checks actually work as expected?
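
[One possible rewrite, sketched here for illustration and not taken from the thread: pass the number of bytes actually available and compare sizes instead of pointers, so no out-of-range pointer is ever formed and nothing depends on how pointers wrap or compare at the instruction level.]

#include <stddef.h>

/* Returns nonzero if a request for len bytes does not fit in the
   avail bytes available at buf.  The check is well defined for any
   argument values because buf + len is never computed.  */
int request_too_long (const char *buf, size_t avail, size_t len)
{
  (void) buf;              /* buf itself is not needed for the check */
  return len > avail;
}

The cost is that the caller has to know the buffer's size, but that information is exactly what a correct check needs.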


Re: US-CERT Vulnerability Note VU#162289

2008-04-08 Thread Robert C. Seacord

Dave,

I made at least one change you suggested:

#include <stdio.h>

void f(char *buf)  {
 unsigned int len = 0xFF00u;

 if (buf+len < buf) puts("true");

}

int main(void) {
   char buffer[100];
   printf("buffer addr = %p.\n", buffer);
   f(buffer);
   return 0;
}

And compiled with flags:

/O2 /GL /D WIN32 /D NDEBUG /D _CONSOLE /D _UNICODE /D UNICODE 
/FD /EHsc /MD /FoRelease\\ /FdRelease\vc80.pdb /W4 /nologo /c /Wp64 
/Zi /TC /errorReport:prompt


The output of this program is:

buffer addr = 0012FF18.
true

The reason the len is so high is to force an overflow when added to 
0x0012FF18.


BTW, I also tried compiling this as a C++ program under visual studio 
and got the same result.


rCs



Robert C. Seacord wrote on :

  

Specifically with regards to MSVC 2005, I thought Chad had already
checked this and found that it did not exhibit this behavior.  I just
tested the following program. 


#include <stdio.h>

void f(char *buf)  {
  unsigned int len = len = 0xFF00;



  I'm sure you didn't mean to do that, but I doubt it affects the results.

  BTW, that's a very different number from (1 << 30).  How about trying again
with a 'u' suffix?  Or with something that doesn't have the sign bit set?

cheers,
  DaveK
  




Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread Paolo Bonzini

Rainer Emrich wrote:

http://www.kb.cert.org/vuls/id/162289

Any comments?


See http://www.airs.com/blog/archives/120 for a good blog post by Ian 
Lance Taylor about this issue.  Since GCC 4.2, -Wstrict-overflow=5 can be 
used to find cases where optimizations break code that relies on overflow 
behaviour not specified by the standard.


Also, -ftrapv is a little broken and may have false negatives.  On the 
other hand, -fwrapv should not cause any problems.


If you find that -fwrapv hinders performance of your application, you 
can also try -fwrapv -funsafe-loop-optimizations 
-Wunsafe-loop-optimizations.  This will restrict overflow assumptions 
to those needed to optimize loops, and also warn whenever the compiler 
made this kind of assumptions.  You can then audit any warning that you 
get to see if they have security implications for your application.
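
[A small sketch of how the options above might be combined; the file and function names are made up. The idea is to keep wrapping semantics everywhere except where the loop optimizer needs stronger assumptions, and to audit whatever -Wunsafe-loop-optimizations reports.]

/* loop.c -- compile, for example, with:
     gcc -O2 -fwrapv -funsafe-loop-optimizations -Wunsafe-loop-optimizations -c loop.c
   and review any warnings about assumptions made while optimizing loops.  */
int sum_to (int n)
{
  int i, s = 0;
  for (i = 0; i <= n; i++)   /* termination for n == INT_MAX depends on overflow assumptions */
    s += i;
  return s;
}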


Paolo


Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread Mark Mitchell

Paolo Bonzini wrote:


http://www.kb.cert.org/vuls/id/162289


The GCC SC was aware of this CERT posting before it was public.  Our 
feeling is that this is not a GCC bug, although it is something that we 
would like GCC to warn about.  I talked to Ian Taylor and he agreed to 
work on the warning.


Some other compilers do this optimization too, so compiling similar 
applications with them will also lead to the removal of the test.  We've 
identified one specific highly-optimizing compiler that does this 
optimization.  If you have information about other such compilers, 
please send it to me, and I will forward to CERT.  We would like to 
counter the idea that GCC is unique in this regard.  (In fact, I expect 
that the reason that this issue was found with GCC is that GCC is used 
so widely!)


Here is the test program:

int f(char *buf, int len) {
  len = 1 << 30;
  if (buf + len < buf)
return 1;

  return 0;
}

If you know of a non-GCC compiler that optimizes away the test (so that 
the function always returns 0), please post here, and let me know the 
name, version number, command-line options, etc. you used to demonstrate 
that.
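
[For anyone running the test, a possible driver, sketched here and not part of the original message. Keeping f in its own translation unit is one way to prevent the address of buffer from being propagated into it.]

#include <stdio.h>

int f (char *buf, int len);   /* the test function above, compiled separately */

int main (void)
{
  char buffer[100];
  /* If the compiler removed the comparison, this always prints 0.
     If the comparison survives, the result depends on where buffer
     happens to be placed in the address space.  */
  printf ("%d\n", f (buffer, 0));
  return 0;
}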


Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread Robert C. Seacord


Mark,

Comments below.
The GCC SC was aware of this CERT posting before it was public. Our 
feeling is that this is not a GCC bug, although it is something that 
we would like GCC to warn about. I talked to Ian Taylor and he agreed 
to work on the warning.

I agree with you that the behavior that gcc exhibits in this case is 
permitted by the ISO/IEC 9899:1999 C specification 
http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1124.pdf 
(§6.5.6p8).   I believe the vulnerability is that gcc may *silently* 
discard the overflow checks and that this is a recent change in behavior. 

Once a new version or patch is available that will warn users that this 
optimization is taking place, I will recommend that we change the work 
around from Avoid newer versions of gcc to Avoid effected versions of 
gcc and/or recommend that users download the patch / revision.


You are also right that the popularity of gcc is one of the reasons we 
decided to publish on this.  If you identify other compilers that a) are 
relatively popular, b) have changed their behavior recently, and c) 
silently optimize out overflow checks we will consider publishing 
vulnerability notes for those compilers as well.


rCs
**

--
Robert C. Seacord
Senior Vulnerability Analyst
CERT/CC 


Work: 412-268-7608
FAX: 412-268-6989



Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread Andrew Pinski
On Mon, Apr 7, 2008 at 10:28 AM, Robert C. Seacord [EMAIL PROTECTED] wrote:
  I believe the vulnerability is that gcc may *silently* discard the overflow
 checks and that this is a recent change in behavior.

No it is not recent, unless you consider 1998 recent :).  I don't know
how many times but we have not changed the behavior of GCC with
respect of signed integer overflow being undefined.  Since the loop
optimizers have said this before, we just added an extra pass which
depends on it more.  I guess you did not read the GCC mailing list
before posting this Vulnerability because we already discussed this
many many times before around the time GCC 4.2.0 came out.

Also try -Wstrict-overflow=5 in GCC 4.2.3 and in GCC 4.3.0, we already
warn about most if not all cases already.

Thanks,
Andrew Pinski


Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread Robert C. Seacord

Andrew,

We'll also add:

-Wstrict-overflow=5

As a work around.  


You are right, I don't regularly read the GCC mailing lists as GCC is not our 
only concern.  This problem came to our attention because it affected one of 
your users.  We did consult with Mark before publishing.

rCs



On Mon, Apr 7, 2008 at 10:28 AM, Robert C. Seacord [EMAIL PROTECTED] wrote:
  

 I believe the vulnerability is that gcc may *silently* discard the overflow
checks and that this is a recent change in behavior.



No it is not recent, unless you consider 1998 recent :).  I don't know
how many times but we have not changed the behavior of GCC with
respect of signed integer overflow being undefined.  Since the loop
optimizers have said this before, we just added an extra pass which
depends on it more.  I guess you did not read the GCC mailing list
before posting this Vulnerability because we already discussed this
many many times before around the time GCC 4.2.0 came out.

Also try -Wstrict-overflow=5 in GCC 4.2.3 and in GCC 4.3.0, we already
warn about most if not all cases already.

Thanks,
Andrew Pinski
  



--
Robert C. Seacord
Senior Vulnerability Analyst
CERT/CC 


Work: 412-268-7608
FAX: 412-268-6989



Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread Ian Lance Taylor
Andrew Pinski [EMAIL PROTECTED] writes:

 On Mon, Apr 7, 2008 at 10:28 AM, Robert C. Seacord [EMAIL PROTECTED] wrote:
  I believe the vulnerability is that gcc may *silently* discard the overflow
 checks and that this is a recent change in behavior.

 No it is not recent, unless you consider 1998 recent :).  I don't know
 how many times but we have not changed the behavior of GCC with
 respect of signed integer overflow being undefined.  Since the loop
 optimizers have said this before, we just added an extra pass which
 depends on it more.  I guess you did not read the GCC mailing list
 before posting this Vulnerability because we already discussed this
 many many times before around the time GCC 4.2.0 came out.

 Also try -Wstrict-overflow=5 in GCC 4.2.3 and in GCC 4.3.0, we already
 warn about most if not all cases already.

No, read the advisory carefully (from a compiler perspective, it's
unfortunate that it does not give a complete test case).  This
advisory is not about undefined signed overflow.  This is about adding
a value to a pointer such that the pointer no longer points into the
same object, which is a completely different type of undefined
behaviour.  C99 section 6.5.6, paragraph 8.  gcc's behaviour changed
with the patch for PR 27039.

Although this is not undefined signed overflow, it is not unrelated
from the user's perspective, and I plan to incorporate it into the
-fstrict-overflow/ -Wstrict-overflow regime.
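
[A small sketch, for illustration only, of the two distinct kinds of undefined behaviour being discussed: signed integer overflow on the one hand, and pointer arithmetic that leaves the object (C99 6.5.6p8) on the other.]

/* Undefined because the signed addition itself may overflow when
   a == INT_MAX; no pointers are involved.  */
int signed_case (int a)
{
  return a + 1 > a;
}

/* Undefined when buf + len does not point into (or one past the end
   of) the same object as buf -- C99 6.5.6, paragraph 8 -- even though
   no integer arithmetic overflows at the source level.  */
int pointer_case (char *buf, int len)
{
  return buf + len < buf;
}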

Ian


Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread Joe Buck
On Mon, Apr 07, 2008 at 01:28:21PM -0400, Robert C. Seacord wrote:
 You are also right that the popularity of gcc is one of the reasons we 
 decided to publish on this.  If you identify other compilers that a) are 
 relatively popular, b) have changed their behavior recently, and c) 
 silently optimize out overflow checks we will consider publishing 
 vulnerability notes for those compilers as well.

What is the justification for requirement b)?  We identified two distinct
proprietary compilers that also do this optimization, but it isn't a
recent change in behavior.


Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread Mark Mitchell

Robert C. Seacord wrote:

You are also right that the popularity of gcc is one of the reasons we 
decided to publish on this.  If you identify other compilers that a) are 
relatively popular, b) have changed their behavior recently, and c) 
silently optimize out overflow checks we will consider publishing 
vulnerability notes for those compilers as well.


I have sent CERT information about two other popular optimizing 
compilers which do this optimization.  Those compilers may have done it 
for longer than GCC, or not; I'm not sure.  But, users of those 
compilers are just as vulnerable.


The advisory suggests that people not use GCC.  If you don't mention 
that other compilers also do this, you may just prompt people to switch 
from GCC to some other compiler that behaves in the same way.


The tone of the note also suggests that GCC is uniquely defective in 
some way.  The title of the note mentions GCC, and the overview suggests 
that GCC is doing something wrong:


Some versions of gcc may silently discard certain checks for overflow. 
Applications compiled with these versions of gcc may be vulnerable to 
buffer overflows.


Why not change the overview to something like:

Some compilers (including, at least, GCC, PathScale, and xlc) optimize 
away incorrectly coded checks for overflow.  Applications containing 
these incorrectly coded checks may be vulnerable if compiled with these 
compilers.


?

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread Robert C. Seacord

Joe,

Response below.

On Mon, Apr 07, 2008 at 01:28:21PM -0400, Robert C. Seacord wrote:
  
You are also right that the popularity of gcc is one of the reasons we 
decided to publish on this.  If you identify other compilers that a) are 
relatively popular, b) have changed their behavior recently, and c) 
silently optimize out overflow checks we will consider publishing 
vulnerability notes for those compilers as well.



What is the justification for requirement b)?  We identified two distinct
proprietary compilers that also do this optimization, but it isn't a
recent change in behavior.
  
my thinking is that if this behavior has been in place for many years, 
for example, users will have had the opportunity to discover the changed 
behavior.  our goal here is to disseminate this information more quickly.


on a related topic, we are trying to produce secure coding standards 
that deal with these issues in general.


we've begun the addition of several rules around the topic of pointer 
arithmetic following the release of this vul note. 

The most relevant is titled "ARR38-C. Do not add or subtract an integer 
to a pointer if the resulting value does not refer to an element within 
the array" and can be found here:


https://www.securecoding.cert.org/confluence/x/SgHm

We are hopeful that these rules and recommendations will help developers 
address these issues in the general sense.


rCs

--
Robert C. Seacord
Senior Vulnerability Analyst
CERT/CC 


Work: 412-268-7608
FAX: 412-268-6989



Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread David Edelsohn
 Robert C Seacord writes:

Robert I believe the vulnerability is that gcc may *silently* 
Robert discard the overflow checks and that this is a recent change in 
behavior. 


Robert You are also right that the popularity of gcc is one of the reasons we 
Robert decided to publish on this.  If you identify other compilers that a) 
are 
Robert relatively popular, b) have changed their behavior recently, and c) 
Robert silently optimize out overflow checks we will consider publishing 
Robert vulnerability notes for those compilers as well.

All optimizing compilers should silently remove the check in the
process of optimizing the example code.  Compilers generally do not warn
by default when processing code in a way that conforms to a language
standard.

I believe that GCC developers are disappointed that CERT has
chosen to single out GCC using the above set of criteria while most other
compilers perform the same transformation.  If CERT wants to warn about
poorly-written code, it should focus on that vulnerability.

David


