Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-29 Thread Florian Weimer
* Kenneth Van Wyk:

> 1) the original author of the defect thought that s/he was doing
> things correctly in using strncpy (vs. strcpy).

> 2) the original author had apparently been doing static source
> analysis using David Wheeler's Flawfinder tool, as we can tell from
> the comments.

This is not a first, BTW.  The Real folks have always been a bit
overzealous when adding those "Flawfinder: ignore" annotations:


___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-28 Thread David A. Wheeler
On the comment:
> | I am not disagreeing with the fact that static source analysis is a
> | good thing; I am just saying that this is a case where it failed (or
> | maybe the user/developer of it failed or misunderstood its use). Fair
> | enough that on this particular list you are going to defend source
> | analysis over any other method, it is about secure coding after all,
> | but I definitely still strongly disagree that other methods wouldn't
> | have found this bug.

Actually, I am _not_ of the opinion that analysis tools are always 
"better" than any other method.  I don't really believe in a silver 
bullet, but if I had to pick one, "developer education" would be my 
silver bullet, not analysis tools.  (Basically, a "fool with a tool is 
still a fool".)  I believe that for secure software you need a SET of 
methods, and tool use is just a part of it.

That said, I think tools that search for vulnerabilities usually need to 
be PART of the answer for secure software in today's world.  Customers 
are generally _unwilling_ to reduce the amount of functionality they 
want to something we can easily prove correct, and formally proving 
programs correct has not scaled well yet (though I commend the work to 
overcome this).   No language can prevent all vulnerabilities from being 
written in the first place.  Human review is _great_, but it's costly in 
many circumstances and it often misses things that tools _can_ pick up. 
So we end up needing analysis tools as part of the process; even though 
current tools have a HUGE list of problems, NOT using them is often 
worse. Other methods may have found this bug, but those methods 
typically don't scale well.

> As with almost everything in software engineering, sad to say, there is
> very little objective evidence.  It's hard and expensive to gather, and
> those who are in a position to pay for the effort rarely see a major
> payoff from making it.

This is a serious problem with software development as a whole; there's 
almost no science behind it at all.   Science requires repeatable 
experiments with measurable results, but most decisions in software 
development are based on guesses and fads, not hard scientific data.

I'm not saying educated guesses are always bad; when you don't have 
data, and you need to do something NOW, educated guesses are often the 
best you can do... and we'll never know EVERYTHING.  The problem is that 
we rarely perform or publish the experiments to get better, so we don't 
make real PROGRESS.  And we don't do the experiments in part because few 
organizations fund actual, publishable scientific work to determine 
which software development approaches work best. 
  (There are exceptions, but it sure isn't the norm.)  We have some good 
science on very low-level stuff like big-O/complexity theory, syntactic 
models, and so on - hey, we can prove trivial programs correct! But 
we're hopeless if you want to use science to make decisions about 50 
million line programs.  Which design approach or processes are best, and 
for what purpose - and can you show me the experimental results to 
justify the claim?  There IS some, but not much.  We lack the scientific 
information necessary to make decisions about many real-world (big) 
applications, and what's worse, we lack a societal process to grow that 
pool of information.  I've no idea how to fix that.

--- David A. Wheeler




Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-28 Thread Leichter, Jerry
On Thu, 28 Jun 2007, J. M. Seitz wrote:
| Hey there,
|  
| > If you couldn't insert "ignore" directives, many people 
| > wouldn't use such tools at all, and would release code with 
| > vulnerabilities that WOULD be found by such tools.
| 
| Of course, much like an IDS, you have to find the baseline and adjust
| your ruleset according to the norm; if it is constantly firing on
| someone accessing /index.html of your website, then that's working
| against you.
| 
| I am not disagreeing with the fact that static source analysis is a
| good thing; I am just saying that this is a case where it failed (or
| maybe the user/developer of it failed or misunderstood its use). Fair
| enough that on this particular list you are going to defend source
| analysis over any other method, it is about secure coding after all,
| but I definitely still strongly disagree that other methods wouldn't
| have found this bug.
| 
| Shall we take a look at the customer lists of the big source analyzer
| companies, and then cross-map that to the number of vulnerabilities
| released? 
It would be great to have that information.  Even better would be to
classify the vulnerabilities in two buckets:  Those that the analyzer
was expected to find, and those that it's not ready for.

You would have to allow for the time it takes from when the analyzer
starts being used to when software that has actually gone through not
just the analyzer but the remediation process hits the streets.
For large companies and widely-used commercial products, that can be a
long time.

However ... I think the chances of getting that kind of data for
commercial projects are just about nil.  Companies generally consider
the details of
their software development processes proprietary, and at most will give
you broad generalities about the kinds of tools and processes they use.
(Given the cost of these analyzers, you'd think that potential customers
would want some data about actual payoffs.  However, I think they
recognize that no such data exists at this point.  A prudent customer
might well collect such data for himself to help in deciding whether the
contract for the analyzer is worth renewing - though even this kind of
careful analysis is very rare in the industry.)

An OSS project might be a better starting place for this kind of study.
I know that Coverity has analyzed a couple of significant pieces of OSS
for free (including, I believe, the Linux and NetBSD kernels).  It's
likely that other analyzer vendors have done something similar, though I
haven't heard about it.  A study showing how using an analyzer led to an
x% decrease in reported bugs would make for great sales copy.  (Of
course, there's always the risk that the analyzer doesn't actually help
make the software more secure!)

|   Why are we still finding bugs in software developed under the SDL?
| Why are we still finding bugs in software that has been analyzed
| before the compiler has run? Why are these companies like Fortify
| charging an arm and a leg for such a technology when the bughunters
| are still beating the snot out of this stuff? 
I'm not sure that's a fair comparison.  The defenders have to plug *all*
the holes, including those using techniques that weren't known at the
time the software was produced.  The attackers only have to find one
hole.

|   You guys all have much
| more experience on that end, so I am looking forward to your
| responses!
As with almost everything in software engineering, sad to say, there is
very little objective evidence.  It's hard and expensive to gather, and
those who are in a position to pay for the effort rarely see a major
payoff from making it.  Which would you rather do:  Put up $1,000,000 to
do a study which *might* show your software/methodology/whatever helps,
or pay $1,000,000 to hire a bunch of "feet on the street" to sell more
copies?  So we have to go by gut feel and engineering judgement.  Those
certainly say, for most people, that static analyzers will help.  Then
again, I'm sure most people on this list will argue that strong static
typing is essential for secure, reliable software.  "Everyone has known
that" for 20 years or more.  Except that ... if you look around, you'll
find many arguments these days that the run-time typing characteristic
of languages like Python or Ruby is just as good and lets you produce
software much faster.  Which argument is true?  You'd think that after
50 years of software development, we'd at least know how to frame a
test ... but even that seems beyond the state of the art.

-- Jerry

| Cheers! 
| 
| JS
| 

Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-28 Thread J. M. Seitz
 
Hey there,
 
> If you couldn't insert "ignore" directives, many people 
> wouldn't use such tools at all, and would release code with 
> vulnerabilities that WOULD be found by such tools.

Of course, much like an IDS, you have to find the baseline and adjust your
ruleset according to the norm; if it is constantly firing on someone
accessing /index.html of your website, then that's working against you. 

I am not disagreeing with the fact that static source analysis is a good
thing; I am just saying that this is a case where it failed (or maybe the
user/developer of it failed or misunderstood its use). Fair enough that on
this particular list you are going to defend source analysis over any other
method, it is about secure coding after all, but I definitely still strongly
disagree that other methods wouldn't have found this bug. 

Shall we take a look at the customer lists of the big source analyzer
companies, and then cross-map that to the number of vulnerabilities
released? Why are we still finding bugs in software developed under the
SDL? Why are we still finding bugs in software that has been analyzed
before the compiler has run? Why are these companies like Fortify charging
an arm and a leg for such a technology when the bughunters are still
beating the snot out of this stuff? You guys all have much more experience
on that end, so I am looking forward to your responses!

Cheers! 

JS



Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-28 Thread David A. Wheeler
In this discussion:
> | This is a perfect example of how a source code analysis tool failed,
> | because you let a developer tell it to NOT scan it. :) I wonder if
> | there are flags like that in Fortify?
> There are flags like that in *every* source code scanner I know of.  The
> state of the art is just not at a point where you don't need a way to
> turn off warnings for false positives.

That's exactly right, unfortunately.  To compensate for the problem of 
people inserting bad ignore directives, many scanning tools _also_ 
include an "ignore the ignores" command.  For example, flawfinder has a 
--neverignore (-n) flag that "ignores the ignore command".  I believe 
that such an option ("ignore ignores") is critically necessary for any 
tool that has "ignore" directives, to address this very problem.

If you couldn't insert "ignore" directives, many people wouldn't use 
such tools at all, and would release code with vulnerabilities that 
WOULD be found by such tools.

--- David A. Wheeler




Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-27 Thread Leichter, Jerry
| This is a perfect example of how a source code analysis tool failed,
| because you let a developer tell it to NOT scan it. :) I wonder if
| there are flags like that in Fortify?
There are flags like that in *every* source code scanner I know of.  The
state of the art is just not at a point where you don't need a way to
turn off warnings for false positives.  The way you do it can be more or
less sophisticated, but even the most sophisticated can be fooled by
misunderstandings - much less deliberate lies - by the developer.  For
example, Coverity has a nice feature that lets you provide it with
information about the characteristics of functions to which the scanner
doesn't have access.  Rather than define a whole little language for
defining such things, Coverity has you write a dummy function that
simply undertakes the actions it will need to know about.  For example,
to tell Coverity that void f(char* p) will write to *p, you provide
the dummy function:

void f(char* p) { *p = 0; }

If you accidentally - or deliberately - provide:

void f(char* p) { *p; }

you've just told Coverity that f() dereferences p, but never writes
through it.  Needless to say, any subsequent analysis will be flawed.

Binary analysis doesn't, in principle, suffer from the problem of not
having access to everything - though even it may have a problem with
system calls (and of course in practice there may be other
difficulties).

"Unavailable source/binary" is a very simple issue.  There are all kinds
of things that are beyond the capabilities of current analysis engines -
and always will be, given the Turing-completeness of all languages
people actually use.  To take a simple example:  A properly-written
quicksort that, before scanning a subfile, sorts the first, middle, and
last element in place before using the middle element as the pivot, does
not need to do any array-bounds checking:  The element values will make
it impossible to walk "out of" the subfile.  This is trivial to prove,
but I doubt any scanner will notice.  It will complain that you *might*
exceed the array bounds (a sketch of such a partition follows the list
below).  You basically have two alternatives at this point:

1.  Re-write the code to make it amenable to analysis.
2.  Somehow tell the analyzer to back off.
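
For reference, here is a minimal sketch (hypothetical code written for
this discussion, not taken from any product) of the median-of-three
partition described above.  Neither inner scan tests its index against
the subfile bounds, yet neither can leave the subfile:

    /* sort3() leaves the smallest of the three samples at a[lo] and the
       largest at a[hi], so a[lo] <= pivot and a[hi] >= pivot act as
       sentinels: the scans below stop before leaving the subfile even
       though neither loop checks its index.  Easy to prove by hand, but a
       scanner will typically flag a possible out-of-bounds access. */
    static void swap(int *a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    static void sort3(int *a, int lo, int mid, int hi)
    {
        if (a[mid] < a[lo]) swap(a, mid, lo);
        if (a[hi]  < a[mid]) swap(a, hi, mid);
        if (a[mid] < a[lo]) swap(a, mid, lo);
    }

    /* Partition a[lo..hi]; the caller guarantees hi - lo >= 2 (a full
       quicksort would hand smaller subfiles to insertion sort). */
    static int partition(int *a, int lo, int hi)
    {
        int mid = lo + (hi - lo) / 2;
        int pivot, i, j;

        sort3(a, lo, mid, hi);
        swap(a, mid, hi - 1);             /* park the pivot next to the end */
        pivot = a[hi - 1];
        i = lo;
        j = hi - 1;
        for (;;) {
            while (a[++i] < pivot) ;      /* stops at a[hi-1] == pivot at worst */
            while (a[--j] > pivot) ;      /* stops at a[lo]  <= pivot at worst */
            if (i >= j) break;
            swap(a, i, j);
        }
        swap(a, i, hi - 1);               /* move the pivot to its final slot */
        return i;
    }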

The theoretically-correct approach is (1) - but it may not be practical
in particular cases.  For quicksort applied to a type with a fast
comparison operation, doing that can double the time spent in the inner
loop and kill performance.  In many cases, the performance or other
impact is minor, but someone has to go off and restructure and retest
the code.  How much such effort can be justified to deal with an
"impossible" situation, just because the analyzer is too stupid to
understand it?  Paradoxically, doing this can also lead to difficulty in
later maintenance:  Someone looking at the code will ask "why are we
testing for null here, the value can't possibly be null because of
XYZ..."  You don't want to be wasting your time going down such
pointless paths every time you need to change the code.  (Yes, comments
can help - but all too often they aren't read or updated.)

Finally, I've seen cases where adding a check to keep the analyzer happy
can encourage a later bug.  Consider a function that as part of some
computation we can prove always produces a non-negative value.  We take
the square root of this value without checking.  The analyzer complains.
So even though it can't matter, we take the absolute value before
taking the square root.  All is well.  Later, the definition of the
function is changed, and the first computation *may* produce a negative
value, which is then used in some other way.  A quick check of the code
reveals that negative values "are already being handled", so no attempt
is made to update the code.  Boom.
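
A tiny hypothetical sketch of that last pattern (function names invented
for illustration):

    #include <math.h>

    /* energy() provably returns a non-negative value, at least for now. */
    static double energy(double v)
    {
        return v * v;
    }

    static double root_energy(double v)
    {
        /* fabs() is redundant today; it exists only to quiet the analyzer.
           If energy() is later changed so that it can return a negative
           value, this line silently "handles" the new case instead of
           surfacing the bug. */
        return sqrt(fabs(energy(v)));
    }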

-- Jerry



Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread Steven M. Christey

> On 6/26/07 4:25 PM, "Wall, Kevin" <[EMAIL PROTECTED]> wrote:
>
> I mean, was the fix really rocket science that it had to take THAT
> LONG??? IMHO, no excuse for taking that long.

Some major vendor organizations, most notably Oracle and Microsoft, have
frequently stated that they can't always fix even simple vulnerabilities
instantly, because they have batteries of tests and platforms to verify
that the fix won't damage anything else.  I can see why this would be the
case, although I rarely hear vendors talk about what they're doing to make
their response time faster.  Open source vendors likely have similar
challenges, though maybe not on such a large scale.

I'd be interested to hear from the SDLC/CMM consultant types who work with
vendors on process, about *why* this is the case.

And in terms of future challenges: how can the lifecycle process be
changed so that developers can quickly and correctly fix show-stopping
issues (including/especially vulnerabilities)?  It would seem to me that
one way that vendors can compete, but don't, is in how quickly and
smoothly they fix issues in existing functionality, which might be a large
part of the operational expenses for an IT consumer.

- Steve


Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread J. M. Seitz
Hey all,

> 1) the original author of the defect thought that s/he was 
> doing things correctly in using strncpy (vs. strcpy).
> 2) the original author had apparently been doing static 
> source analysis using David Wheeler's Flawfinder tool, as we 
> can tell from the comments.
> 

This is humorous; suppose they put it there intentionally and created the
flawfinder tag so that bughunters doing a quick code scan wouldn't see it :)
Conspiracy theory++! But on the other hand, if you can make big bucks
selling 0-days, and can write code, why wouldn't you try to sneak a few into
an open source app?

> Mind you, the overrun can only be exploited when specific 
> characters are used as input to the loop in the code.  Thus, 
> I'm inclined to think that this is an interesting example of 
> a bug that would have been extraordinarily difficult to find 
> using black box testing, even fuzzing.  The iDefense team 
> doesn't say how the (anonymous) person who reported it found 
> it, but I for one would be really curious to hear that story.

I disagree, and do we include reverse engineering as black-box
testing? For example, maybe a straight-up RFC-style fuzzer wouldn't have hit
this one immediately, but there is the possibility that it could have
eventually found that code path; even a dumb fuzzer *could* have. Now let's
take something like DeMott's EFS system, which uses code coverage and a
genetic algorithm to hammer further and further into the code. Once it hit
this basic block of assembly, it may have found that the necessary
characters to continue through this code path had to be mutated or included
in a recombination for the next generation (its fitness score would be
higher). It's not unreasonable... I have seen it do it myself! 

Now if an RE guy had looked at this (and some of us prefer
disassembled binaries over C source), it's VERY plausible that they would
have found that path, and found the way to exploit it. Take a look at my
blog posting at
http://www.openrce.org/blog/view/775/Tracing_Naughty_Functions where I drop
some subtle hints on how to quickly find these dangerous functions, and
begin determining the best path towards them. Definitely not a new
approach...

This is a perfect example of how a source code analysis tool failed, because
you let a developer tell it to NOT scan it. :) I wonder if there are flags
like that in Fortify?

JS



Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread Paco Hope
On 6/26/07 4:25 PM, "Wall, Kevin" <[EMAIL PROTECTED]> wrote:

> I mean, was the fix really rocket science that it had to take THAT LONG???
> IMHO, no excuse for taking that long.

8 months seems awfully long, but it doesn't surprise me that a big organization 
takes a really long time to get things like this out the door. I have worked 
with a lot of software companies over the years, and few have their entire test 
suite (unit, integration, system, regression) fully automated. It just doesn't 
work that way. People on this list would be just as condemning of a company 
that rushed a fix out the door with inadequate testing and managed to ship a 
new buffer overflow in the fix for an old one. Furthermore, it's not like the 
source code had been sitting idle with no modifications to it. They were surely 
in the middle of a dev cycle where they were adding lots of features and 
testing and so on. They have business priorities to address, since those 
features (presumably) are what bring revenue in the door and keep competitors 
off their market turf.

So, if everyone dropped everything they were doing and focused solely on fixing 
this one issue and doing a full battery of tests until it was release-worthy, 
it would have gone out a lot faster. But a company that did that on every bug 
that was reported would get no features released and go out of business. They 
have to weigh the impact of missing product goals versus the risk of exploits 
of a buffer overflow. I'm not sure we can categorically say (none of us being 
RealNetworks people) that they made the wrong decision. We don't have the 
information.

Paco
--
Paco Hope, CISSP
Technical Manager, Cigital, Inc
http://www.cigital.com/ * +1.703.585.7868
Software Confidence. Achieved.



Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread Wall, Kevin
Ken,

You wrote...
> Mind you, the overrun can only be exploited when specific characters  
> are used as input to the loop in the code.  Thus, I'm inclined to  
> think that this is an interesting example of a bug that would have  
> been extraordinarily difficult to find using black box testing, even  
> fuzzing.
> <...deleted...>
> The iDefense team doesn't say how the (anonymous) person  
> who reported it found it, but I for one would be really curious to  
> hear that story.

Reading from the iDefense security advisory on this, it says:

  IV. DETECTION

  iDefense has confirmed the existence of this vulnerability in version
  10.5-GOLD of RealNetworks' RealPlayer and HelixPlayer. Confirmation of
  the existence of this vulnerability within HelixPlayer was done via
  SOURCE CODE REVIEW. Older versions are assumed to be vulnerable. 

(Emphasis mine.)

So it looks like it was discovered manually, possibly with the aid of a
static source code analyzer that ignores Flawfinder comments.
Apparently, you missed that because of your jet lag. ;-)

The sad thing is that based on the documented "Disclosure Timeline", it
seems that almost 8 full months passed before the vendor (RealNetworks)
responded with a fix. I mean, was the fix really rocket science that it
had to take THAT LONG??? IMHO, no excuse for taking that long.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html


This communication is the property of Qwest and may contain confidential or
privileged information. Unauthorized use of this communication is strictly 
prohibited and may be unlawful.  If you have received this communication 
in error, please immediately notify the sender by reply e-mail and destroy 
all copies of the communication and any attachments.



Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread Steven M. Christey

On Tue, 26 Jun 2007, Kenneth Van Wyk wrote:

> Mind you, the overrun can only be exploited when specific characters
> are used as input to the loop in the code.  Thus, I'm inclined to
> think that this is an interesting example of a bug that would have
> been extraordinarily difficult to find using black box testing, even
> fuzzing.

I would assume that "smart" fuzzing could have lots of manipulations of
the HH:mm:ss.f format (the intended format mentioned in the advisory), so
this might be findable using black box testing, although I don't know how
many fuzzers actually know how to muck with time strings.  Because the
programmer told flawfinder to ignore the strncpy() that it had flagged, it
also shows a limitation of manual testing.

In CVE anyway, I've seen a number of overflows involving strncpy, and
they're not all off-by-one errors.  They're hard to enumerate because we
don't usually track which function was used, but here are some:

CVE-2007-2489 - negative length

CVE-2006-4431 - empty input causes crash involving strncpy

CVE-2006-0720 - "incorrect" strncpy call

CVE-2004-0500 - another bad strncpy

CVE-2003-0465 - interesting API interaction


- Steve


[SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread Kenneth Van Wyk

SC-L

I'm not quite so sure why this one (below) caught my eye -- we _all_  
get tons of product advisories -- but it did.  In particular, two  
things jump out at me:


1) the original author of the defect thought that s/he was doing  
things correctly in using strncpy (vs. strcpy).
2) the original author had apparently been doing static source  
analysis using David Wheeler's Flawfinder tool, as we can tell from  
the comments.


Yet, a simple coding mistake was made in calculating the length of a  
buffer and passing that incorrect length to strncpy.  The result was  
a buffer overrun on the stack, just like the millions that we've all  
seen.


Mind you, the overrun can only be exploited when specific characters  
are used as input to the loop in the code.  Thus, I'm inclined to  
think that this is an interesting example of a bug that would have  
been extraordinarily difficult to find using black box testing, even  
fuzzing.  The iDefense team doesn't say how the (anonymous) person  
who reported it found it, but I for one would be really curious to  
hear that story.


Just some random thoughts this afternoon...  Perhaps I'm still  
getting over the jet lag after returning from the FIRST conference in  
Seville.


Cheers,

Ken van Wyk
SC-L Moderator


Begin forwarded message:


From: iDefense Labs <[EMAIL PROTECTED]>
Date: June 26, 2007 3:53:46 PM EDT
To: [EMAIL PROTECTED], [EMAIL PROTECTED],  
[EMAIL PROTECTED]
Subject: iDefense Security Advisory 06.26.07: RealNetworks  
RealPlayer/HelixPlayer SMIL wallclock Stack Overflow Vulnerability


RealNetworks RealPlayer/HelixPlayer SMIL wallclock Stack Overflow
Vulnerability

iDefense Security Advisory 06.26.07
http://labs.idefense.com/intelligence/vulnerabilities/
Jun 26, 2007

I. BACKGROUND

RealPlayer is an application for playing various media formats,
developed by RealNetworks Inc. HelixPlayer is the open source version
of RealPlayer. More information can be found at the URLs shown below.

http://www.real.com/realplayer.html
http://helixcommunity.org/

Synchronized Multimedia Integration Language (SMIL) is a markup language
used to specify the use of several multi-media concepts when rendering
media. Some such concepts are timing, transitions, and embedding. More
information is available from WikiPedia at the following URL.

http://en.wikipedia.org/wiki/Synchronized_Multimedia_Integration_Language


II. DESCRIPTION

Remote exploitation of a buffer overflow within RealNetworks' RealPlayer
and HelixPlayer allows attackers to execute arbitrary code in the context
of the user.

The issue specifically exists in the handling of HH:mm:ss.f time formats
by the 'wallclock' functionality within the code supporting SMIL2. An
excerpt from the code follows.

   924  HX_RESULT
   925  SmilTimeValue::parseWallClockValue(REF(const char*) pCh)
   926  {
   ...
   957      char buf[10]; /* Flawfinder: ignore */
   ...
   962      while (*pCh)
   963      {
   ...
   972          else if (isspace(*pCh) || *pCh == '+' || *pCh == '-' || *pCh == 'Z')
   973          {
   974              // this will find the last +, - or Z... which is what we want.
   975              pTimeZone = pCh;
   976          }
   ...
   982          ++pCh;
   983      }
   ...
  1101      if (pTimePos)
  1102      {
  1103          //HH:MM...
  ...
  1133          if (*(pos-1) == ':')
  1134          {
  ...
  1148              if (*(pos-1) == '.')
  1149              {
  1150                  // find end.
  1151                  UINT32 len = 0;
  1152                  if (pTimeZone)
  1153                  {
  1154                      len = pTimeZone - pos;
  1155                  }
  1156                  else
  1157                  {
  1158                      len = end - pos;
  1159                  }
  1160                  strncpy(buf, pos, len); /* Flawfinder: ignore */

The stack buffer is declared to be 10 bytes on line 957. You can see
that it has a comment which will cause the FlawFinder program to ignore
this buffer.

The loop, which begins on line 962, runs through the parameter to the
function looking for characters that denote different sections of the
time format. When it encounters white space, or the +, -, or Z
characters it will record the location for later use. If a time was
located and it contains both a colon and a period the vulnerable code
will be reached.

The length of data to copy into the stack buffer is calculated either on
line 1154 or line 1158, depending on whether or not a timezone is present.
Neither calculation takes into consideration the constant length of the
'buf' buffer, and therefore a stack-based buffer overflow can occur on
line 1160. Again, notice that this unsafe use of strncpy() is also
marked with a FlawFinder ignore comment.
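
For context, a minimal sketch (hypothetical helper, not RealNetworks'
actual fix) of bounding such a copy by the destination size rather than
by pointer arithmetic on the input:

    #include <string.h>

    /* Hypothetical helper: clamp the copy length to the destination size
       and always NUL-terminate, instead of trusting a length derived from
       the attacker-influenced input.  Assumes dstsz >= 1 and pos <= stop. */
    static void copy_time_field(char *dst, size_t dstsz,
                                const char *pos, const char *stop)
    {
        size_t len = (size_t)(stop - pos);
        if (len >= dstsz)
            len = dstsz - 1;            /* never exceed the fixed buffer */
        memcpy(dst, pos, len);
        dst[len] = '\0';
    }

The vulnerable copy would then look something like
copy_time_field(buf, sizeof(buf), pos, pTimeZone ? pTimeZone : end);
instead of the unchecked strncpy() on line 1160.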

III. ANALYSIS

Exploitation requires that an attacker persuade a user to supply
RealPlayer or HelixPlayer with a maliciously crafted SMIL file. For
example, this can be accomplished by convincing them to visit a
malicious web pag