Re: MIT discovered issue with gcc

2013-11-28 Thread Joel Rees
On Thu, Nov 28, 2013 at 6:10 AM, Wade Richards w...@wabyn.net wrote:
 One of the links Mark posted earlier addresses the "The compiler should
 issue warnings" issue.  The short answer is: because of macro expansion and
 other code-rearranging optimizations (inlining functions, loop unrolling,
 pulling expressions out of a loop, etc.), undefined code appears and is
 removed more often than you'd expect.  Issuing a warning *every time* this
 happens would generate many confusing warnings that users wouldn't like.

I'm taking a course in embedded programming at the local employment
training center to brush up on skills I never lost, for reasons that
I won't bother to explain. The teacher during the interview to C, when
introducing pointers, was about to tell the students to not bother
introducing a pointer to an eight byte array of characters because
that wasn't enough memory to worry about.

And I'm sitting here remembering the business about dereferencing the
NULL pointer, and sysads leaving the bottom page of RAM allocated and
active to keep running code running. The problem has been
mentioned elsewhere in this thread, I think. But we aren't looking at
it straight.

Silently dropping code whose behavior is not defined within
the standard is very similar to silently leaving a page allocated at
the lowest addresses.

** Silently ** is the problem.

If programmers get used to using the bottom page of RAM as an
implicitly allocated volatile but temporary storage area, that becomes
part of the de facto standard, and if the 800-pound gorilla decides it
should then become part of the formal standard, who's to argue?

If programmers get used to saying things they don't mean because the
compiler silently optimizes it away because it's not defined according
to the standard, they learn to misunderstand the code they produce.
That's not good, is it?

 Also, the deeper you get into the optimized code, the harder it is to issue
 meaningful source-level warnings.  E.g. when the compiler optimizes:

Even unintelligible error messages would be better than silence.

You can interpret the old story about Ariane 5 in many ways, but I'm
thinking that silently optimizing improper code away doesn't help
systems to not crash.

 static int decimate(x) { return x/10; }
 int foo() {
int a=INT_MAX;
int b;
    for(i=0; i<100; ++i) { b=max(i, decimate(a*10));}

Why are we expecting the compiler to optimize that away for us?

Undefined behavior and system dependent behavior are two separate
things. Conflating them in the standard is going to lead to more
Ariane 5 kinds of crashes.

Anyway, if we can go deep enough in the optimizations to see that it
hits undefined behavior, going far enough to emit a warning is the
responsible behavior, not punting.

return b;
 }


 into

 int foo() { return INT_MAX; }


 What warnings should appear for which lines?

Optimizing it in the way you suggest is not the same as optimizing out
undefined behavior. There is, in fact, no reason to expect the
compiler to convert it in the way you suggest over some other
conversions.

However, to work with your example, my naive intuition would suggest
that the first warning would be in the call to decimate( a * 10 ); in
other words, the familiar "significance lost in expression" could be
augmented with something like "for initial value", and then ", may
produce unintended results". Or, for something new and friendly,
"Saturation on this processor results in invariant result of looped
expression. Check that this is an acceptable optimization."

 http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html (third
 page).

  --- Wade


 On Nov 27, 2013, at 12:19, Octavio Alvarez alvar...@alvarezp.ods.org
 wrote:

 On 26/11/13 11:37, Mark Haase wrote:

 Compiler developers, for better or worse, reserve the right to do

 whatever they want with undefined behavior, and it's up to the person

 writing the C code to not include undefined behavior in their own program.


 That's a fallacy.

It is, indeed, a fallacy: it conflates an invalid argument with a false conclusion.

 The fact that a compiler does not violate the standard
 does not imply it is behaving sane. Thus, not violating the standard does
 not imply not having a bug.

 Considering a programmer would not ever *ever* want to fall into undefined
 behavior, the compiler should just issue warnings before making any kind of
 assumptions based after undefined behavior. Those warnings could be silenced
 with flags. This is a way of saying "yes, I'm sure of what I'm doing."

 Therefore, a Linux distribution has 2 choices: (1) wait for upstream

 patches for bugs/vulnerabilities as they are found, or (2) recompile all

 packages with optimizations disabled. I don't think proposal #2 would

 get very far...

And, according to the article that started this thread, isn't going to
do the job, either, since many of our primary compilers now optimize
more than they are able to warn about even at the lowest level of
optimization.

Re: MIT discovered issue with gcc

2013-11-28 Thread Joel Rees
Ick.

On Thu, Nov 28, 2013 at 8:28 PM, Joel Rees joel.r...@gmail.com wrote:
 On Thu, Nov 28, 2013 at 6:10 AM, Wade Richards w...@wabyn.net wrote:
 [...]
 I'm taking a course in embedded programming at the local employment
 training center to brush up on skills I never lost, for reasons that
 I won't bother to explain. The teacher during the interview to C, when

introduction to C

 introducing pointers, was about to tell the students to not bother
 introducing a pointer to an eight byte array of characters because

not bother initializing a pointer

 that wasn't enough memory to worry about.

[...]

Uninitialized pointers in my thought processes.

-- 
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/CAAr43iOwc7SYK1UP6p5SaWOg0MAuNdhp01QhnvzZFJSsOwN=k...@mail.gmail.com



Re: MIT discovered issue with gcc [OT]

2013-11-28 Thread Scott Ferguson
On 28/11/13 22:33, Joel Rees wrote:
snipped
 [...]
 
 Uninitialized pointers in my thought processes.
 

Made perfect sense to me.

I use ld.so.preload for everything. It's great.


Kind regards


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/52972c23@gmail.com



Re: MIT discovered issue with gcc

2013-11-28 Thread Octavio Alvarez
On 11/28/2013 03:28 AM, Joel Rees wrote:
 And, according to the article that started this thread, isn't going to
 do the job, either, since many of our primary compilers now optimize
 more than they are able to warn about even at the lowest level of
 optimization.

This should be enough to throw away that compiler.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/52977d81.8030...@alvarezp.ods.org



Re: MIT discovered issue with gcc

2013-11-27 Thread David L. Craig
On 13Nov27:1423+1100, Scott Ferguson wrote:

 On 27/11/13 13:49, David L. Craig wrote:

  On 13Nov26:1545-0500, David L. Craig wrote:
  
  On 13Nov26:1437-0500, Mark Haase wrote:
 
  Therefore, a Linux distribution has 2 choices: (1) wait for upstream
  patches for bugs/vulnerabilities as they are found, or (2) recompile all
  packages with optimizations disabled. I don't think proposal #2 would get
  very far...
 
  Well, there's always -O1 as opposed to no optimization.
  BTW, -O1 is the minimum permitted for making gcc or glibc,
  I forget which.
  
  I'm rebuilding glibc 2.18 now with -O1 after it refused -O0,
  but binutils 2.23.2, gcc 4.8.1, and g++ 4.8.1 are fine with
  -O0.
 
 And what was the result of poptck (STACK) when you tested them?

I haven't gotten that far yet, and it may be a while, since I want
to verify the internal tests and checks first but expect and dejagnu
aren't building using the deoptimized binaries (I'm using LFS 7.4
stable).  So perhaps someone way ahead of me with LLVM/CLANG would
like to report on this behavior.
-- 
not cent from sell
May the LORD God bless you exceedingly abundantly!

Dave_Craig__
So the universe is not quite as you thought it was.
 You'd better rearrange your beliefs, then.
 Because you certainly can't rearrange the universe.
__--from_Nightfall_by_Asimov/Silverberg_


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20131127123708.GA14024@dlc-dt



Re: MIT discovered issue with gcc

2013-11-27 Thread Scott Ferguson
On 27/11/13 23:37, David L. Craig wrote:
 On 13Nov27:1423+1100, Scott Ferguson wrote:
 
 On 27/11/13 13:49, David L. Craig wrote:
 
 On 13Nov26:1545-0500, David L. Craig wrote:

 On 13Nov26:1437-0500, Mark Haase wrote:

 Therefore, a Linux distribution has 2 choices: (1) wait for upstream
 patches for bugs/vulnerabilities as they are found, or (2) recompile all
 packages with optimizations disabled. I don't think proposal #2 would get
 very far...

 Well, there's always -O1 as opposed to no optimization.
 BTW, -O1 is the minimum permitted for making gcc or glibc,
 I forget which.

 I'm rebuilding glibc 2.18 now with -O1 after it refused -O0,
 but binutils 2.23.2, gcc 4.8.1, and g++ 4.8.1 are fine with
 -O0.

 And what was the result of poptck (STACK) when you tested them?
 
 I haven't gotten that far yet, and it may be a while, since I want
 to verify the internal tests and checks first but expect and dejagnu
 aren't building using the deoptimized binaries (I'm using LFS 7.4
 stable).  So perhaps someone way ahead of me with LLVM/CLANG would
 like to report on this behavior.
 


I was hoping you'd do the work for me. (please)
:)


Kind regards.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/5295ebe7.50...@gmail.com



Re: MIT discovered issue with gcc

2013-11-27 Thread David L. Craig
On 13Nov27:2356+1100, Scott Ferguson wrote:
 On 27/11/13 23:37, David L. Craig wrote:
  On 13Nov27:1423+1100, Scott Ferguson wrote:
  
  On 27/11/13 13:49, David L. Craig wrote:
  
  On 13Nov26:1545-0500, David L. Craig wrote:
 
  On 13Nov26:1437-0500, Mark Haase wrote:
 
  Therefore, a Linux distribution has 2 choices: (1) wait for upstream
  patches for bugs/vulnerabilities as they are found, or (2) recompile all
  packages with optimizations disabled. I don't think proposal #2 would 
  get
  very far...
 
  Well, there's always -O1 as opposed to no optimization.
  BTW, -O1 is the minimum permitted for making gcc or glibc,
  I forget which.
 
  I'm rebuilding glibc 2.18 now with -O1 after it refused -O0,
  but binutils 2.23.2, gcc 4.8.1, and g++ 4.8.1 are fine with
  -O0.
 
  And what was the result of poptck (STACK) when you tested them?
  
  I haven't gotten that far yet, and it may be a while, since I want
  to verify the internal tests and checks first but expect and dejagnu
  aren't building using the deoptimized binaries (I'm using LFS 7.4
  stable).  So perhaps someone way ahead of me with LLVM/CLANG would
  like to report on this behavior.
 
 I was hoping you'd do the work for me. (please)
 :)

I'll keep at it but I recommend not holding your breath.  Right now
I'm rebuilding using '-O1' for CFLAGS and CXXFLAGS to see if expect
and dejagnu get happy, but I suspect you're really interested in the
-O0 behavior.  Is anyone else interested?
 -- 
 To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
 with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
 Archive: http://lists.debian.org/5295ebe7.50...@gmail.com

-- 
not cent from sell
May the LORD God bless you exceedingly abundantly!

Dave_Craig__
So the universe is not quite as you thought it was.
 You'd better rearrange your beliefs, then.
 Because you certainly can't rearrange the universe.
__--from_Nightfall_by_Asimov/Silverberg_


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20131127131700.GB14024@dlc-dt



Re: MIT discovered issue with gcc

2013-11-27 Thread Octavio Alvarez

On 26/11/13 11:37, Mark Haase wrote:

Compiler developers, for better or worse, reserve the right to do
whatever they want with undefined behavior, and it's up to the person
writing the C code to not include undefined behavior in their own program.


That's a fallacy. The fact that a compiler does not violate the standard 
does not imply it is behaving sane. Thus, not violating the standard 
does not imply not having a bug.


Considering a programmer would not ever *ever* want to fall into 
undefined behavior, the compiler should just issue warnings before 
making any kind of assumptions based after undefined behavior. Those 
warnings could be silenced with flags. This is a way of saying "yes, I'm sure 
of what I'm doing."



Therefore, a Linux distribution has 2 choices: (1) wait for upstream
patches for bugs/vulnerabilities as they are found, or (2) recompile all
packages with optimizations disabled. I don't think proposal #2 would
get very far...


What about adding cppcheck warnings and gcc -Wall -pedantic be added to 
Lintian?


Or what about changing debhelper to pass some -f flags by default?


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org

Archive: http://lists.debian.org/529653df.6010...@alvarezp.ods.org



Re: MIT discovered issue with gcc

2013-11-27 Thread Wade Richards
One of the links Mark posted earlier addresses the "The compiler should issue 
warnings" issue.  The short answer is: because of macro expansion and other 
code-rearranging optimizations (inlining functions, loop unrolling, pulling 
expressions out of a loop, etc.), undefined code appears and is removed more 
often than you'd expect.  Issuing a warning *every time* this happens would 
generate many confusing warnings that users wouldn't like. 

Also, the deeper you get into the optimized code, the harder it is to issue 
meaningful source-level warnings.  E.g. when the compiler optimizes:
 static int decimate(x) { return x/10; }
 int foo() {
int a=INT_MAX;
int b;
    for(i=0; i<100; ++i) { b=max(i, decimate(a*10));}
return b;
 }

into 

 int foo() { return INT_MAX; }


What warnings should appear for which lines?

http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html (third 
page).

 --- Wade


 On Nov 27, 2013, at 12:19, Octavio Alvarez alvar...@alvarezp.ods.org wrote:
 
 On 26/11/13 11:37, Mark Haase wrote:
 Compiler developers, for better or worse, reserve the right to do
 whatever they want with undefined behavior, and it's up to the person
 writing the C code to not include undefined behavior in their own program.
 
 That's a fallacy. The fact that a compiler does not violate the standard does 
 not imply it is behaving sane. Thus, not violating the standard does not 
 imply not having a bug.
 
 Considering a programmer would not ever *ever* want to fall into undefined 
 behavior, the compiler should just issue warnings before making any kind of 
 assumptions based after undefined behavior. Those warnings could be silenced 
 with flags. This is a way of saying "yes, I'm sure of what I'm doing."
 
 Therefore, a Linux distribution has 2 choices: (1) wait for upstream
 patches for bugs/vulnerabilities as they are found, or (2) recompile all
 packages with optimizations disabled. I don't think proposal #2 would
 get very far...
 
 What about adding cppcheck warnings and gcc -Wall -pedantic be added to 
 Lintian?
 
 Or what about changing debhelper to pass some -f flags by default?
 
 
 -- 
 To UNSUBSCRIBE, email to debian-security-requ...@lists.debian.org
 with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
 Archive: http://lists.debian.org/529653df.6010...@alvarezp.ods.org
 


Re: MIT discovered issue with gcc

2013-11-27 Thread Octavio Alvarez

On 27/11/13 13:10, Wade Richards wrote:

Also, the deeper you get into the optimized code, the harder it is to
issue meaningful source-level warnings.  E.g. when the compiler optimizes:

static int decimate(x) { return x/10; }
int foo() {
   int a=INT_MAX;
   int b;
    for(i=0; i<100; ++i) { b=max(i, decimate(a*10));}
   return b;
}


into


int foo() { return INT_MAX; }


What warnings should appear for which lines?


Hi, thanks for the reply. I really hope I'm not missing your point here, 
but here it goes:


Speaking as a programmer, the following would be nice:

"Warning (or error): a*10 can cause signed integer overflow on line 5, 
which is undefined behavior. Not optimizing anything beyond this point 
for the rest of the function."


If I'm sure this is what I intend, and not to pay the non-optimization 
penalty, I would (and should, anyway) rewrite it like this:


int foo() {
   int a=INT_MAX;
   int b;
   int i;
    for(i=0; i<100; ++i) { b=max(i, a);}
   return b;
}

... which, assuming a good max() function, would fall into the confines 
of defined behavior and thus, the compiler should feel free to optimize 
away whatever it wants without making crazy assumptions. The concept is: 
because it is defined behavior, the compiler knows what the code flow 
will be.


This would make both my code and the compiler predictable.

In the following code, the compiler should feel free to optimize away 
without throwing any errors:


int foo() {
   int a=INT_MAX / 11;
   int b;
   int i;
    for(i=0; i<100; ++i) { b=max(i, decimate(a*10));}
   return b;
}

In the following case I should get a warning because 'a' is not confined:

int foo(int a) {
   int b;
   int i;
    for(i=0; i<100; ++i) { b=max(i, decimate(a*10));}
   return b;
}

The following code should be fully reduced to return a:

int foo(int a) {
   if (a > 214748364)
return -1; /* or whatever */
   int b;
   int i;
    for(i=0; i<100; ++i) { b=max(i, decimate(a*10));}
   return b;
}

Not sure if I'm making my point: I don't think any programmer would ever 
want his program to go into UB intentionally. Consequently, the compiler 
should never ever assume or suppose or guess anything at all. It should 
always infer, but never suppose. Using UB implies "supposing" 
as opposed to "inferring".


Best regards.


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org

Archive: http://lists.debian.org/5296946b.7080...@alvarezp.ods.org



Re: MIT discovered issue with gcc

2013-11-26 Thread Michael Stone

On Mon, Nov 25, 2013 at 03:10:07PM -0700, Bob Proulx wrote:

In those systems the zero page is initially bit-zero and reading from
the zero point will return zero values from the contents there.  If
the program writes to the zero page then subsequent reads will return
whatever was written there.  This is bad behavior that was the default
due to bugs in much legacy software.  Unmapping the zero page will
cause those programs to segfault and therefore the vendors default to
having the page mapped to avoid support calls from their customers.

...

This is one of the areas that needs to be addressed when people port
software developed on a legacy Unix system over to a GNU/Linux system.
If the software wasn't written with this in mind then it might be
buggy and will need runtime testing to verify it.


To be fair, the software was already buggy, and likely had 
nearly-impossible-to-diagnose runtime errors caused by null pointer 
derefs yielding whatever junk was left in memory. 


Mike Stone


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org

Archive: 
http://lists.debian.org/7f8d5e92-56c6-11e3-a86f-001cc0cda...@msgid.mathom.us



Re: MIT discovered issue with gcc

2013-11-26 Thread Miles Fidelman
Going back through the discussion on this thread, I'm taken by two main 
reactions:


- discussion of the specific class of bugs/security holes
- a lot of comments that this is an issue for upstream

What I haven't seen, so I'll add it to the discussion, is that this 
strikes me as an issue for WAY upstream - i.e., if gcc's optimizer is 
opening a class of security holes - then it's gcc that has to be fixed, 
after which that class of holes would go away after the next build of 
any impacted package.


Miles Fidelman


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org

Archive: http://lists.debian.org/5294ee82.8050...@meetinghouse.net



Re: MIT discovered issue with gcc

2013-11-26 Thread Mark Haase
Miles, the GCC developers don't consider this to be a bug, and so I doubt
that any of it will be fixed. For example, here is a bug cited in the
paper:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30475

If you have a moment, read through that thread. It gets pretty testy as the
developers argue over whether or not it's a bug. Eventually it was closed
as 'invalid', i.e. not really a true bug. It's not just GCC, either. Take a
look at this series of blog posts by the LLVM team:

http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html

Compiler developers, for better or worse, reserve the right to do whatever
they want with undefined behavior, and it's up to the person writing the C
code to not include undefined behavior in their own program.

Therefore, a Linux distribution has 2 choices: (1) wait for upstream
patches for bugs/vulnerabilities as they are found, or (2) recompile all
packages with optimizations disabled. I don't think proposal #2 would get
very far...



On Tue, Nov 26, 2013 at 1:54 PM, Miles Fidelman
mfidel...@meetinghouse.netwrote:

 Going back through the discussion on this thread, I'm taken by two main
 reactions:

 - discussion of the specific class of bugs/security holes
 - a lot of comments that this is an issue for upstream

 What I haven't seen, so I'll add it to the discussion, is that this
 strikes me as an issue for WAY upstream - i.e., if gcc's optimizer is
 opening a class of security holes - then it's gcc that has to be fixed,
 after which that class of holes would go away after the next build of any
 impacted package.

 Miles Fidelman



 --
 To UNSUBSCRIBE, email to debian-security-requ...@lists.debian.org
 with a subject of unsubscribe. Trouble? Contact
 listmas...@lists.debian.org
 Archive: http://lists.debian.org/5294ee82.8050...@meetinghouse.net




-- 
Mark E. Haase
CISSP, CEH
Sr. Security Software Engineer
www.lunarline.com
3300 N Fairfax Drive, Suite 308, Arlington, VA 22201
202-815-0201

Solutions Built on Security TM
Lunarline, Inc. is an ISO 9001 and CMMI Level 2 Certified SDVOSB
Information Assurance / Cyber Security Services Company.


Re: MIT discovered issue with gcc

2013-11-26 Thread Miles Fidelman

Wow... that really is kind of testy. And... point taken.

Mark Haase wrote:
Miles, the GCC developers don't consider this to be a bug, and so I 
doubt that any of it will be fixed. For example, here is a bug 
cited in the paper:


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30475

If you have a moment, read through that thread. It gets pretty testy 
as the developers argue over whether or not it's a bug. Eventually it 
was closed as 'invalid', i.e. not really a true bug. It's not just 
GCC, either. Take a look at this series of blog posts by the LLVM team:


http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html

Compiler developers, for better or worse, reserve the right to do 
whatever they want with undefined behavior, and it's up to the person 
writing the C code to not include undefined behavior in their own program.


Therefore, a Linux distribution has 2 choices: (1) wait for upstream 
patches for bugs/vulnerabilities as they are found, or (2) recompile 
all packages with optimizations disabled. I don't think proposal #2 
would get very far...




On Tue, Nov 26, 2013 at 1:54 PM, Miles Fidelman 
mfidel...@meetinghouse.net mailto:mfidel...@meetinghouse.net wrote:


Going back through the discussion on this thread, I'm taken by two
main reactions:

- discussion of the specific class of bugs/security holes
- a lot of comments that this is an issue for upstream

What I haven't seen, so I'll add it to the discussion, is that
this strikes me as an issue for WAY upstream - i.e., if gcc's
optimizer is opening a class of security holes - then it's gcc
that has to be fixed, after which that class of holes would go
away after the next build of any impacted package.

Miles Fidelman



-- 
To UNSUBSCRIBE, email to debian-security-requ...@lists.debian.org

mailto:debian-security-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact
listmas...@lists.debian.org mailto:listmas...@lists.debian.org
Archive: http://lists.debian.org/5294ee82.8050...@meetinghouse.net




--
Mark E. Haase
CISSP, CEH
Sr. Security Software Engineer
www.lunarline.com http://www.lunarline.com
3300 N Fairfax Drive, Suite 308, Arlington, VA 22201
202-815-0201

Solutions Built on Security TM
Lunarline, Inc. is an ISO 9001 and CMMI Level 2 Certified SDVOSB 
Information Assurance / Cyber Security Services Company.



--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org

Archive: http://lists.debian.org/529501af.2040...@meetinghouse.net



Re: MIT discovered issue with gcc

2013-11-26 Thread David L. Craig
On 13Nov26:1437-0500, Mark Haase wrote:

 Therefore, a Linux distribution has 2 choices: (1) wait for upstream
 patches for bugs/vulnerabilities as they are found, or (2) recompile all
 packages with optimizations disabled. I don't think proposal #2 would get
 very far...

Well, there's always -O1 as opposed to no optimization.
BTW, -O1 is the minimum permitted for making gcc or glibc,
I forget which.
-- 
not cent from sell
May the LORD God bless you exceedingly abundantly!

Dave_Craig__
So the universe is not quite as you thought it was.
 You'd better rearrange your beliefs, then.
 Because you certainly can't rearrange the universe.
__--from_Nightfall_by_Asimov/Silverberg_


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20131126204538.GA11480@dlc-dt



Re: MIT discovered issue with gcc

2013-11-26 Thread Florian Weimer
* Bob Proulx:

 In those systems the zero page is initially bit-zero and reading from
 the zero point will return zero values from the contents there.  If
 the program writes to the zero page then subsequent reads will return
 whatever was written there.  This is bad behavior that was the default
 due to bugs in much legacy software.  Unmapping the zero page will
 cause those programs to segfault and therefore the vendors default to
 having the page mapped to avoid support calls from their customers.

There is also an optimization which allows better code generation for
loops over linked lists.  But for that, a read-only mapping is
sufficient.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/877gbuq0f3@mid.deneb.enyo.de



Re: MIT discovered issue with gcc

2013-11-26 Thread David L. Craig
On 13Nov26:1545-0500, David L. Craig wrote:

 On 13Nov26:1437-0500, Mark Haase wrote:
 
  Therefore, a Linux distribution has 2 choices: (1) wait for upstream
  patches for bugs/vulnerabilities as they are found, or (2) recompile all
  packages with optimizations disabled. I don't think proposal #2 would get
  very far...
 
 Well, there's always -O1 as opposed to no optimization.
 BTW, -O1 is the minimum permitted for making gcc or glibc,
 I forget which.

I'm rebuilding glibc 2.18 now with -O1 after it refused -O0,
but binutils 2.23.2, gcc 4.8.1, and g++ 4.8.1 are fine with
-O0.
-- 
not cent from sell
May the LORD God bless you exceedingly abundantly!

Dave_Craig__
So the universe is not quite as you thought it was.
 You'd better rearrange your beliefs, then.
 Because you certainly can't rearrange the universe.
__--from_Nightfall_by_Asimov/Silverberg_


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20131127024922.GA12419@dlc-dt



Re: MIT discovered issue with gcc

2013-11-26 Thread Scott Ferguson
On 27/11/13 13:49, David L. Craig wrote:
 On 13Nov26:1545-0500, David L. Craig wrote:
 
 On 13Nov26:1437-0500, Mark Haase wrote:

 Therefore, a Linux distribution has 2 choices: (1) wait for upstream
 patches for bugs/vulnerabilities as they are found, or (2) recompile all
 packages with optimizations disabled. I don't think proposal #2 would get
 very far...

 Well, there's always -O1 as opposed to no optimization.
 BTW, -O1 is the minimum permitted for making gcc or glibc,
 I forget which.
 
 I'm rebuilding glibc 2.18 now with -O1 after it refused -O0,


 but binutils 2.23.2, gcc 4.8.1, and g++ 4.8.1 are fine with
 -O0.
 


And what was the result of poptck (STACK) when you tested them?


Kind regards


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/529565ca.9070...@gmail.com



Re: MIT discovered issue with gcc

2013-11-25 Thread Andrew McGlashan
On 25/11/2013 12:15 AM, Henrique de Moraes Holschuh wrote:
 Well, my best guess is that this is going to be considered upstream issues
 by the majority of the package maintainers, and thus they won't get much
 attention downstream (in Debian) until they start causing large headaches.

That's my greatest worry: it will almost always be someone else's problem.

When the problems extend right up to the kernel, it is a worry; if the
programming practices that give these results are normal and desired
though, the compiler needs to be *fixed*  or a simpler fix might be
just to recompile without letting the errant behaviour occur, but alas,
from this thread (as you would expect), it isn't that simple. :(

 So, yes, users should be concerned (but not alarmed).

Cheers
A.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/52936a72.4060...@affinityvision.com.au



Re: MIT discovered issue with gcc

2013-11-25 Thread Joe Pfeiffer
Robert Baron robertbartlettba...@gmail.com writes:

 Aren't many of the constructs used as examples in the paper commonly used
 in C programming?  For example, it is very common to see a function that has a
 pointer as a parameter defined as:

 int func(void *ptr)
     {
     if(!ptr) return SOME_ERROR;
     /* rest of function*/
     return 1;
     }

 Isn't it interesting that their one example will potentially dereference the
 null pointer even before compiler optimizations (from the paper):

 struct tun_struct *tun = ...;
 struct sock *sk = tun->sk;
 if (!tun) return POLLERR; 

 The check to see that tun is non-null should occur before use, as in - quite
 frankly it is useless to check after as tun cannot be the null pointer (the
 program hasn't crashed):

 struct tun_struct *tun = ...;
 if (!tun) return POLLERR; 
 struct sock *sk = tun->sk;

The paper points out that the code contains a bug; the claim in the
paper is that it is a minor bug as written (it only gets past the
tun->sk dereference if page 0 has somehow been made readable), but
becomes a possible privilege escalation after the check has been
optimized away.

 I am under the impression that these problems are rather widely known among c
 programmers (perhaps not the kids fresh out of college).  But this is why
 teams need to have experienced people. 

 Furthermore, it is very common to find code that works before optimization,
 and fails at certain optimization levels.  Recently, I was compiling a library
 that failed its own tests under the optimization level set in the makefile but
 passed its own test at a lower level of optimization.

Isn't that, and an analysis of when this can happen, the main point of
the paper?


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1bvbzgtj13@snowball.wb.pfeifferfamily.net



Re: MIT discovered issue with gcc

2013-11-25 Thread Joe Pfeiffer
Robert Baron robertbartlettba...@gmail.com writes:

 Second question:

 Doesn't memcpy allow for overlapping memory, but strcpy does not?  Isn't this
 why memcpy is preferred over strcpy?

According to the man page for memcpy, "The memory areas must not
overlap.  Use memmove(3) if the memory areas do overlap."

strcpy will stop copying at the first null byte.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1br4a4tiy3@snowball.wb.pfeifferfamily.net



Re: MIT discovered issue with gcc

2013-11-25 Thread Bob Proulx
Robert Baron wrote:
 struct tun_struct *tun = ...;
 struct sock *sk = tun->sk;
 if (!tun) return POLLERR;
 
  The check to see that tun is non-null should occur before use, as in -
 quite frankly it is useless to check after as tun cannot be the null
 pointer (the program hasn't crashed):

In Debian the default runtime environment does not map the zero page.
Therefore accessing the zero page will produce a segfault.  On Debian
the above would be trapped at runtime.  But that isn't the default on
every system.  In particular in classic legacy Unix systems the zero
page was typically mapped and available.

In those systems the zero page was initially all zero bits, and reading
from the zero page would return zero values from the contents there.  If
the program wrote to the zero page then subsequent reads would return
whatever was written there.  This bad behavior was the default due to
bugs in much legacy software: unmapping the zero page would cause those
programs to segfault, and therefore the vendors defaulted to having the
page mapped to avoid support calls from their customers.

  man ld (on HP-UX)
   -z Arrange for run-time dereferencing of null
  pointers to produce a SIGSEGV signal.  (This is
  the complement of the -Z option.  -Z is the
  default.)

   -Z This is the default. Allow run-time dereferencing
  of null pointers.  See the discussions of -Z and
  pointers in cc(1).  (This is the complement of the
  -z option.)

This is one of the areas that needs to be addressed when people port
software developed on a legacy Unix system over to a GNU/Linux system.
If the software wasn't written with this in mind then it might be
buggy and will need runtime testing to verify it.

Thankfully today the culture has changed: test-driven development is
much more the norm, and the zero page is unmapped by default.

 I am under the impression that these problems are rather widely known among
 c programmers (perhaps not the kids fresh out of college).  But this is why
 teams need to have experienced people.

I can only say, yes, of course.  :-)

 Furthermore, it is very common to find code that works before optimization,
 and fails at certain optimization levels.  Recently, I was compiling a
 library that failed its own tests under the optimization level set in the
 makefile but passed its own test at a lower level of optimization.

Optimizer bugs have been known for as long as there have been
optimizers.  There isn't anything new here.

Bob

P.S.  On a project I worked on I had a hard time convincing the team
that we should turn on the ld -z option for our project so as to find
these types of bugs.  It was counter culture to take this type of
action!  They didn't want it to have a crash at run time.  They didn't
have any culture of testing in development.  The only test was in the
live run of the program by customers.  Therefore it would mostly be
the customers who would experience the crash.  I had to promise that I
would personally react quickly and chase down any of these bugs that
were found.




Re: MIT discovered issue with gcc

2013-11-24 Thread Henrique de Moraes Holschuh
On Sat, 23 Nov 2013, Michael Tautschnig wrote:
 This should be taken with a grain of salt. (I'm doing research in the area of
 automated software analysis myself.) It clearly is a well-written paper with a
 nice tool. Yet unstable code results from code that would otherwise be
 considered bogus anyway (they give a nice list in Figure 3 in their paper), 
 thus
 it is not necessarily the case that compilers introduce completely new bugs -
 they just might make the existing ones worse. The use of the term
 vulnerabilities could be very misleading here: not all bugs yield security
 issues - many of them might just lead to unexpected behaviour, and not be
 exploitable to gain elevated privileges or the like.

The bugs the paper is about, if I recall correctly, are real code bugs made
dormant by the internal workings of the compiler (often only in some
optimization levels, so the bug might show up at -O0 and not at -O2, for
example).

Obviously these are an issue for Debian.  Not only would we like to be
able to use clang/llvm as a real alternative in the not-too-distant
future (say, 3 years from now), which would likely awaken many of these
latent bugs, but any major gcc upgrade can also awaken a subset of them.

Whether these dormant bugs will cause information security issues or not
(and most of them wouldn't), they're still a problem.

  This looks very serious indeed, but a quick search of Debian mailing
  lists didn't show anything being acknowledged for this issue should
  Debian users be concerned?

Well, my best guess is that this is going to be considered upstream issues
by the majority of the package maintainers, and thus they won't get much
attention downstream (in Debian) until they start causing large headaches.

So, yes, users should be concerned (but not alarmed).

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20131124131530.ga3...@khazad-dum.debian.net



Re: MIT discovered issue with gcc

2013-11-23 Thread Michael Tautschnig
Hi Andrew, hi all,

 I understand that Debian has a bunch of vulnerabilities as described in
 the following PDF.
 
 http://pdos.csail.mit.edu/~xi/papers/stack-sosp13.pdf
 
 Just a small quote:
 
 This paper presents the first systematic approach for
 reasoning about and detecting unstable code. We implement
 this approach in a static checker called Stack, and
 use it to show that unstable code is present in a wide
 range of systems software, including the Linux kernel and
 the Postgres database. We estimate that unstable code
 exists in 40% of the 8,575 Debian Wheezy packages that
 contain C/C++ code. We also show that compilers are
 increasingly taking advantage of undefined behavior for
 optimizations, leading to more vulnerabilities related to
 unstable code.

This should be taken with a grain of salt. (I'm doing research in the area of
automated software analysis myself.) It clearly is a well-written paper with a
nice tool. Yet unstable code results from code that would otherwise be
considered bogus anyway (they give a nice list in Figure 3 in their paper), thus
it is not necessarily the case that compilers introduce completely new bugs -
they just might make the existing ones worse. The use of the term
"vulnerabilities" could be very misleading here: not all bugs yield security
issues - many of them might just lead to unexpected behaviour, and not be
exploitable to gain elevated privileges or the like.

Consider the fact that Debian's source packages contain more than 200 million
lines of code. If we trust Steve McConnell's Code Complete book, industry
average lies at 15-50 errors per 1000 lines of code, which is more than 1 in 100
lines. In a very simplified way of reasoning, I'd dare to conclude that at least
2 million further bugs remain to be discovered.

 
 This looks very serious indeed, but a quick search of Debian mailing
 lists didn't show anything being acknowledged for this issue should
 Debian users be concerned?
 

Probably not more than before, but as much as always: you are using code that
hasn't been proved to be correct. But with open-source software at least you
know what code you are using, and which bugs are being found.

Hope this helps,
Michael






Re: MIT discovered issue with gcc

2013-11-23 Thread Brad Alexander
On Sat, Nov 23, 2013 at 6:18 AM, Michael Tautschnig m...@debian.org wrote:


 
  This looks very serious indeed, but a quick search of Debian mailing
  lists didn't show anything being acknowledged for this issue should
  Debian users be concerned?
 

 Probably not more than before, but as much as always: you are using code
 that hasn't been proved to be correct. But with open-source software at
 least you know what code you are using, and which bugs are being found.


What I have told people in presentations is that the only truly secure
computer is one that is turned off, unplugged, packed in concrete, and
fired into the sun. Any program at a level not very much above Hello World
in the language of your choice is likely to have bugs. I mean, you would
have to swear off all software, turn off your computers, get rid of your
cell phone, etc. At this point, I'm not quite willing to go that far. As
Michael said, it's something to be aware of, but not something to keep you
awake at night worrying.

--b


Re: MIT discovered issue with gcc

2013-11-23 Thread Joel Rees
Deja gnu?

On Sat, Nov 23, 2013 at 10:34 AM, Andrew McGlashan
andrew.mcglas...@affinityvision.com.au wrote:
 Hi,

 The following link shows the issue in a nutshell:

 http://www.securitycurrent.com/en/research/ac_research/mot-researchers-uncover-security-flaws-in-c

 [it refers to the PDF that I mentioned]

 --
 Kind Regards
 AndrewM

I seem to remember discussing the strange optimizations that optimized
away range checks because the code that was being firewalled had to
be correct.

Ten years ago, it was engineers that understood pointers but didn't
understand logic. This time around, maybe it's a new generation of
sophomoric programmers, or maybe we have moles in our ranks.

The sky is not falling, but it sounds like I don't want to waste my
time with Clang yet. And I probably need to go make myself persona
non-grata again in some C language forums

-- 
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/caar43io_4l7+vil8vqzpzro+fdm1vhpphepomp88hiwbn+f...@mail.gmail.com



Re: MIT discovered issue with gcc

2013-11-23 Thread Robert Baron
Aren't many of the constructs used as examples in the paper commonly
used in C programming?  For example, it is very common to see a function
that has a pointer as a parameter defined as:

int func(void *ptr)
{
if(!ptr) return SOME_ERROR;
/* rest of function*/
return 1;
}

Isn't it interesting that their one example will potentially dereference
the null pointer even before compiler optimizations (from the paper):

struct tun_struct *tun = ...;
struct sock *sk = tun->sk;
if (!tun) return POLLERR;

 The check to see that tun is non-null should occur before use, as in -
quite frankly it is useless to check after as tun cannot be the null
pointer (the program hasn't crashed):

struct tun_struct *tun = ...;
if (!tun) return POLLERR;
struct sock *sk = tun->sk;

I am under the impression that these problems are rather widely known among
c programmers (perhaps not the kids fresh out of college).  But this is why
teams need to have experienced people.

Furthermore, it is very common to find code that works before optimization,
and fails at certain optimization levels.  Recently, I was compiling a
library that failed its own tests under the optimization level set in the
makefile but passed its own test at a lower level of optimization.

PS: I liked their first example, as it appears to be problematic.



On Sat, Nov 23, 2013 at 8:17 AM, Joel Rees joel.r...@gmail.com wrote:

 Deja gnu?

 On Sat, Nov 23, 2013 at 10:34 AM, Andrew McGlashan
 andrew.mcglas...@affinityvision.com.au wrote:
  Hi,
 
  The following link shows the issue in a nutshell:
 
 
 http://www.securitycurrent.com/en/research/ac_research/mot-researchers-uncover-security-flaws-in-c
 
  [it refers to the PDF that I mentioned]
 
  --
  Kind Regards
  AndrewM

 I seem to remember discussing the strange optimizations that optimized
 away range checks because the code that was being firewalled had to
 be correct.

 Ten years ago, it was engineers that understood pointers but didn't
 understand logic. This time around, maybe it's a new generation of
 sophomoric programmers, or maybe we have moles in our ranks.

 The sky is not falling, but it sounds like I don't want to waste my
 time with Clang yet. And I probably need to go make myself persona
 non-grata again in some C language forums

 --
 Joel Rees

 Be careful where you see conspiracy.
 Look first in your own heart.


 --
 To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
 with a subject of unsubscribe. Trouble? Contact
 listmas...@lists.debian.org
 Archive:
 http://lists.debian.org/caar43io_4l7+vil8vqzpzro+fdm1vhpphepomp88hiwbn+f...@mail.gmail.com




Re: MIT discovered issue with gcc

2013-11-23 Thread Robert Baron
Second question:

Doesn't memcpy allow for overlapping memory, but strcpy does not?  Isn't
this why memcpy is preferred over strcpy?


On Sat, Nov 23, 2013 at 10:09 AM, Robert Baron 
robertbartlettba...@gmail.com wrote:

 Aren't many of the constructs used as examples in the paper commonly
 used in C programming?  For example, it is very common to see a function
 that has a pointer as a parameter defined as:

 int func(void *ptr)
 {
 if(!ptr) return SOME_ERROR;
 /* rest of function*/
 return 1;
 }

 Isn't it interesting that their one example will potentially dereference
 the null pointer even before compiler optimizations (from the paper):

 struct tun_struct *tun = ...;
 struct sock *sk = tun->sk;
 if (!tun) return POLLERR;

  The check to see that tun is non-null should occur before use, as in -
 quite frankly it is useless to check after as tun cannot be the null
 pointer (the program hasn't crashed):

 struct tun_struct *tun = ...;
 if (!tun) return POLLERR;
 struct sock *sk = tun->sk;

 I am under the impression that these problems are rather widely known
 among c programmers (perhaps not the kids fresh out of college).  But this
 is why teams need to have experienced people.

 Furthermore, it is very common to find code that works before
 optimization, and fails at certain optimization levels.  Recently, I was
 compiling a library that failed its own tests under the optimization level
 set in the makefile but passed its own test at a lower level of
 optimization.

 PS: I liked their first example, as it appears to be problematic.



 On Sat, Nov 23, 2013 at 8:17 AM, Joel Rees joel.r...@gmail.com wrote:

 Deja gnu?

 On Sat, Nov 23, 2013 at 10:34 AM, Andrew McGlashan
 andrew.mcglas...@affinityvision.com.au wrote:
  Hi,
 
  The following link shows the issue in a nutshell:
 
 
 http://www.securitycurrent.com/en/research/ac_research/mot-researchers-uncover-security-flaws-in-c
 
  [it refers to the PDF that I mentioned]
 
  --
  Kind Regards
  AndrewM

 I seem to remember discussing the strange optimizations that optimized
 away range checks because the code that was being firewalled had to
 be correct.

 Ten years ago, it was engineers that understood pointers but didn't
 understand logic. This time around, maybe it's a new generation of
 sophomoric programmers, or maybe we have moles in our ranks.

 The sky is not falling, but it sounds like I don't want to waste my
 time with Clang yet. And I probably need to go make myself persona
 non-grata again in some C language forums

 --
 Joel Rees

 Be careful where you see conspiracy.
 Look first in your own heart.


 --
 To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
 with a subject of unsubscribe. Trouble? Contact
 listmas...@lists.debian.org
 Archive:
 http://lists.debian.org/caar43io_4l7+vil8vqzpzro+fdm1vhpphepomp88hiwbn+f...@mail.gmail.com





Re: MIT discovered issue with gcc

2013-11-23 Thread Oliver Schneider
On 2013-11-23 15:18, Robert Baron wrote:
 Second question:
 
 Doesn't memcpy allow for overlapping memory, but strcpy does not?  Isn't
 this why memcpy is preferred over strcpy?

IIRC memcpy does not, but memmove does.

See: http://linux.die.net/man/3/memcpy


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/5290ca48.4050...@gmxpro.net



Re: MIT discovered issue with gcc

2013-11-23 Thread Michael Tautschnig
[...]
 Isn't it interesting that their one example will potentially dereference
 the null pointer even before compiler optimizations (from the paper):
 
 struct tun_struct *tun = ...;
 struct sock *sk = tun->sk;
 if (!tun) return POLLERR;
 
  The check to see that tun is non-null should occur before use, as in -
 quite frankly it is useless to check after as tun cannot be the null
 pointer (the program hasn't crashed):
 
[...]

They do say in the paper that the code possibly dereferences a null pointer,
irrespective of optimisation or not.  Thus the code was always broken; it
might just have been missed, because compilers could have considered
reordering the instructions, or maybe substituted the expression tun->sk for sk.

Best,
Michael





Re: MIT discovered issue with gcc

2013-11-23 Thread Mark Haase
The researchers' point was that an attacker might be able to remap that memory 
page so that dereferencing a null pointer would NOT segfault. (I don't actually 
know how feasible this is; I'm just paraphrasing their argument. They footnote 
this claim but I didn't bother to read the cited sources.)

Checking if tun is null is [apparently] a valid precautionary measure -- not 
useless -- except an optimizer might remove it. The order of these statements 
is definitely wrong, but the authors are claiming that this optimization turns 
an otherwise innocuous bug into an exploitable vulnerability. 

Anyway, I don't see what this has to do with Debian. It's an interesting paper, 
but Debian can't find and fix all upstream bugs, nor do I think most users 
would be happy if suddenly everything was compiled without any optimizations. 

--
Mark E. Haase

 On Nov 23, 2013, at 10:09 AM, Robert Baron robertbartlettba...@gmail.com 
 wrote:
 
 Aren't many of the constructs used as examples in the paper commonly used 
 in C programming?  For example, it is very common to see a function that 
 has a pointer as a parameter defined as:
 
 int func(void *ptr)
 {
 if(!ptr) return SOME_ERROR;
 /* rest of function*/
 return 1;
 }
 
 Isn't it interesting that their one example will potentially dereference the 
 null pointer even before compiler optimizations (from the paper):
 
 struct tun_struct *tun = ...;
 struct sock *sk = tun->sk;
 if (!tun) return POLLERR;
 
 The check to see that tun is non-null should occur before use, as in - quite 
 frankly it is useless to check after as tun cannot be the null pointer (the 
 program hasn't crashed):
 
 struct tun_struct *tun = ...;
 if (!tun) return POLLERR;
 struct sock *sk = tun->sk;
 
 I am under the impression that these problems are rather widely known among c 
 programmers (perhaps not the kids fresh out of college).  But this is why 
 teams need to have experienced people. 
 
 Furthermore, it is very common to find code that works before optimization, 
 and fails at certain optimization levels.  Recently, I was compiling a 
 library that failed its own tests under the optimization level set in the 
 makefile but passed its own test at a lower level of optimization.
 
 PS: I liked their first example, as it appears to be problematic.
 
 
 On Sat, Nov 23, 2013 at 8:17 AM, Joel Rees joel.r...@gmail.com wrote:
 Deja gnu?
 
 On Sat, Nov 23, 2013 at 10:34 AM, Andrew McGlashan
 andrew.mcglas...@affinityvision.com.au wrote:
  Hi,
 
  The following link shows the issue in a nutshell:
 
  http://www.securitycurrent.com/en/research/ac_research/mot-researchers-uncover-security-flaws-in-c
 
  [it refers to the PDF that I mentioned]
 
  --
  Kind Regards
  AndrewM
 
 I seem to remember discussing the strange optimizations that optimized
 away range checks because the code that was being firewalled had to
 be correct.
 
 Ten years ago, it was engineers that understood pointers but didn't
 understand logic. This time around, maybe it's a new generation of
 sophomoric programmers, or maybe we have moles in our ranks.
 
 The sky is not falling, but it sounds like I don't want to waste my
 time with Clang yet. And I probably need to go make myself persona
 non-grata again in some C language forums
 
 --
 Joel Rees
 
 Be careful where you see conspiracy.
 Look first in your own heart.
 
 
 --
 To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
 with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
 Archive: 
 http://lists.debian.org/caar43io_4l7+vil8vqzpzro+fdm1vhpphepomp88hiwbn+f...@mail.gmail.com
 


Re: MIT discovered issue with gcc

2013-11-23 Thread Darius Jahandarie
On Sat, Nov 23, 2013 at 1:16 PM, Mark Haase mark.ha...@lunarline.com wrote:
 Anyway, I don't see what this has to do with Debian. It's an interesting
 paper, but Debian can't find and fix all upstream bugs, nor do I think most
 users would be happy if suddenly everything was compiled without any
 optimizations.

Although Debian *developers* can't find and fix all upstream bugs, the
Debian project, as the funnel between code and users, provides an
interesting location to perform this sort of automated static analysis
on all source code flowing through it, and present that information
to both the package maintainers and users of the packages.

-- 
Darius Jahandarie


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/cafanwtw2r+2w0e3ewvcmse-zcvbxfgugs6sp+ppu9q6gv7x...@mail.gmail.com



Re: MIT discovered issue with gcc

2013-11-23 Thread Stan Hoeppner
On 11/22/2013 7:34 PM, Andrew McGlashan wrote:

 http://www.securitycurrent.com/en/research/ac_research/mot-researchers-uncover-security-flaws-in-c

"the team ran Stack against the Debian Linux archive, of which 8575 out
of 17432 packages contained C/C++ code.  For a whopping 3471 packages,
STACK detected at least one instance of unstable code."

So 3471 Wheezy packages had one or more instances of gcc-introduced
anomalies.  And the kernel binary they tested had 32.

As an end user I'm not worried about this at all.  But I'd think
developers may want to start taking a closer look at how gcc does its
optimizations and creates these anomalies.  If the flaws are serious
they should obviously take steps to mitigate or eliminate this.

I didn't read the full paper yet, but I'm wondering how/if the
optimization flag plays a part in this.  I.e. does -O2 produce these
bugs but -O0 (default) or -Og (debugging) does not?

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/52911cb9.5010...@hardwarefreak.com



Re: MIT discovered issue with gcc

2013-11-23 Thread Neal Murphy
On Saturday, November 23, 2013 04:23:05 PM Stan Hoeppner wrote:

 I didn't read the full paper yet, but I'm wondering how/if the
 optimization flag plays a part in this.  I.e. does -O2 produce these
 bugs but -O0 (default) or -Og (debugging) does not?

Or -O3...


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/201311231636.18718.neal.p.mur...@alum.wpi.edu



Re: MIT discovered issue with gcc

2013-11-23 Thread Joel Rees
[Not sure this really needs to be cc-ed to security@]

On Sun, Nov 24, 2013 at 12:09 AM, Robert Baron
robertbartlettba...@gmail.com wrote:
 Aren't many of the constructs used as examples in the paper commonly
 used in C programming?  For example, it is very common to see a function that
 has a pointer as a parameter defined as:

 int func(void *ptr)
 {
 if(!ptr) return SOME_ERROR;
 /* rest of function*/
 return 1;
 }

 Isn't it interesting that their one example will potentially dereference the
 null pointer even before compiler optimizations (from the paper):

 struct tun_struct *tun = ...;
 struct sock *sk = tun->sk;
 if (!tun) return POLLERR;

 The check to see that tun is non-null should occur before use, as in - quite
 frankly it is useless to check after as tun cannot be the null pointer (the
 program hasn't crashed):

This one has been thrashed to death.

Yes, the standard (after considerable reworking overseen by certain
groups with an axe to grind) says that, not only is dereferencing
before testing evil (i.e., undefined), but even adding to a pointer
before testing it is evil.

Committees really should not be allowed to define language semantics.
Make suggestions, sure, but actually define them, no.


 struct tun_struct *tun = ...;
 if (!tun) return POLLERR;
 struct sock *sk = tun->sk;

Yes, this arrangement is less liable to induce error on the part of
the programmer.

The compiler should be immune to such issues of induced error,
especially if it is able to reliably optimize out theoretically
undefined code (which is seriously, seriously evil).

 I am under the impression that these problems are rather widely known among
 c programmers (perhaps not the kids fresh out of college).  But this is why
 teams need to have experienced people.

 Furthermore, it is very common to find code that works before optimization,
 and fails at certain optimization levels.  Recently, I was compiling a
 library that failed its own tests under the optimization level set in the
 makefile but passed its own test at a lower level of optimization.

Completely separate issue.

 PS: I liked their first example, as it appears to be problematic.

As I noted (too obliquely, perhaps?) in the comments you top-posted
over, this is nothing at all new. The holy grail of
optimization has been known to induce undefined behavior in compiler
writers since way before B or even Algol.

The guys responsible for optimization sometimes forget that falsifying
an argument is not falsifying the conclusion, among other things.

 On Sat, Nov 23, 2013 at 8:17 AM, Joel Rees joel.r...@gmail.com wrote:

 Deja gnu?

 On Sat, Nov 23, 2013 at 10:34 AM, Andrew McGlashan
 andrew.mcglas...@affinityvision.com.au wrote:
  Hi,
 
  The following link shows the issue in a nutshell:
 
 
  http://www.securitycurrent.com/en/research/ac_research/mot-researchers-uncover-security-flaws-in-c
 
  [it refers to the PDF that I mentioned]
 
  --
  Kind Regards
  AndrewM

 I seem to remember discussing the strange optimizations that optimized
 away range checks because the code that was being firewalled had to
 be correct.

 Ten years ago, it was engineers that understood pointers but didn't
 understand logic. This time around, maybe it's a new generation of
 sophomoric programmers, or maybe we have moles in our ranks.

 The sky is not falling, but it sounds like I don't want to waste my
 time with Clang yet. And I probably need to go make myself persona
 non-grata again in some C language forums

 --
 Joel Rees

 Be careful where you see conspiracy.
 Look first in your own heart.

-- 
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/CAAr43iMM1OuT_cxYADAophvNtT95VxP=bfj+-nosgp+7agf...@mail.gmail.com



Re: MIT discovered issue with gcc

2013-11-23 Thread Joel Rees
On Sun, Nov 24, 2013 at 12:18 AM, Robert Baron
robertbartlettba...@gmail.com wrote:
 Second question:

 Doesn't memcpy allow for overlapping memory, but strcpy does not?  Isn't
 this why memcpy is preferred over strcpy?
[...]

The reason memcpy() is preferred over strcpy() is the same as the
reason strncpy() is preferred over strcpy().

memcpy() is actually considered a no-no in some circles, and perhaps
correctly so.  (Especially in C++, where classes are supposed to
define their own copying, and it's almost always more optimal to
explicitly copy each member instead of calculating the size, mass
copying, and going back and overwriting the members that are subject
to issues like deep copy. Remember that memcpy() is able to copy an
odd number of bytes, so the size calculation contains a bit more than
is obvious to the programmer.)

-- 
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/caar43iof7wca3zugtzewvhddjzk98ava2oatgw_mavruoyz...@mail.gmail.com



Re: MIT discovered issue with gcc

2013-11-23 Thread Joel Rees
On Sun, Nov 24, 2013 at 6:23 AM, Stan Hoeppner s...@hardwarefreak.com wrote:
 On 11/22/2013 7:34 PM, Andrew McGlashan wrote:

 http://www.securitycurrent.com/en/research/ac_research/mot-researchers-uncover-security-flaws-in-c

 the team ran Stack against the Debian Linux archive, of which 8575 out
 of 17432 packages contained C/C++ code.  For a whopping 3471 packages,
 STACK detected at least one instance of unstable code.

 So 3471 Wheezy packages had one or more instances of gcc-introduced
 anomalies.  And the kernel binary they tested had 32.

 As an end user I'm not worried about this at all.  But I'd think
 developers may want to start taking a closer look at how gcc does its
 optimizations and creates these anomalies.  If the flaws are serious
 they should obviously takes steps to mitigate or eliminate this.

 I didn't read the full paper yet, but I'm wondering how/if the
 optimization flag plays a part in this.  I.e. does -O2 produce these
 bugs but -O0 (default) or -Og (debugging) does not?

The paper says some of the surprise optimizations happen at even the
default optimization level.

And I remember one that definitely does, although I don't remember
where I put the code where I played with it.

-- 
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/CAAr43iNP44POMkEYUB6c+iuXceHHFTCM+1bexE5XvKaP=-q...@mail.gmail.com



Re: MIT discovered issue with gcc

2013-11-23 Thread Paul Wise
On Sun, Nov 24, 2013 at 3:53 AM, Darius Jahandarie wrote:

 Although Debian *developers* can't find and fix all upstream bugs, the
 Debian project, as the funnel between code and users, provides an
 interesting location to perform this sort of automated static analysis
 on all source code flowing through it, and present that information
 to both the package maintainers and users of the packages.

Some Debian folks are working on that in conjunction with Fedora. We
could use some help, especially with packaging new checkers and with
writing firehose output converters for existing checkers. Please get
involved, links below.

PS: STACK isn't currently possible to package because it needs a
special build of llvm that isn't in Debian yet.

https://fedoraproject.org/wiki/StaticAnalysis
https://github.com/fedora-static-analysis/firehose
http://debile.debian.net/
http://firewoes.debian.net/
http://debuild.me/
https://wiki.debian.org/HowToPackageForDebian#Check_points_for_any_package

-- 
bye,
pabs

http://wiki.debian.org/PaulWise


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/caktje6hzr-p2nnhku_24rp7vgsb02jet_fb9cy2bwurcgaa...@mail.gmail.com



Re: MIT discovered issue with gcc

2013-11-22 Thread Andrew McGlashan
Hi,

The following link shows the issue in a nutshell:

http://www.securitycurrent.com/en/research/ac_research/mot-researchers-uncover-security-flaws-in-c

[it refers to the PDF that I mentioned]

-- 
Kind Regards
AndrewM


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/52900637.1080...@affinityvision.com.au