| The adolescent minds that engage in "exploits" wouldn't know COBOL if
| a printout fell out a window and onto their heads.  
I would have thought we were beyond this kind of ignorance by now.

Sure, there are still script kiddies out there.  But these days the
attackers are sophisticated, educated, have a well developed community
that exchanges information - and they're in it for the money, not for
the kicks.

With COBOL handling so much of the world's finance, someone is bound
to look at it for vulnerabilities.

|                                                     I'm sure you can
| write COBOL programs that crash, but it must be hard to make them take
| control of the operating system.  COBOL programs are heavy into unit
| record equipment (cards, line printers), tape files, disk files,
| sorts, merges, report writing -- all the stuff that came down to
| 1959-model mainframes from tabulating equipment.  They don't do
| Internet.  What they could do and have done is incorporate malicious
| code that exploits rounding error such that many fractional pennies
| end up in a conniving programmer's bank account.
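That fractional-pennies scheme is the classic "salami slicing" attack.  A
minimal sketch of it - in Python rather than COBOL, with invented account
names and an invented interest rate - might look like:

```python
# Hypothetical sketch of the "salami slicing" rounding exploit described
# above: interest is truncated to whole cents, and a dishonest programmer
# sweeps the truncated fractions into an account of his own.
from decimal import Decimal, ROUND_DOWN

def post_interest(balances, rate, attacker="conniving-programmer"):
    """Credit interest to each account, truncating to whole cents."""
    skimmed = Decimal("0")
    for acct in list(balances):
        exact = balances[acct] * rate
        credited = exact.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
        balances[acct] += credited
        skimmed += exact - credited      # fractions of a cent, per account
    # The malicious step: the rounding residue goes somewhere, quietly.
    balances[attacker] = balances.get(attacker, Decimal("0")) + skimmed
    return balances

accounts = {"alice": Decimal("1000.00"), "bob": Decimal("2537.41")}
post_interest(accounts, Decimal("0.0512"))
```

Each individual account looks correct to the penny; only an audit that
totals the exact, unrounded interest would notice the leak.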
COBOL programs were written in an era when deliberate attack wasn't on
anyone's radar.  "Defensive programming" meant surviving random bugs -
and even that was often omitted, because memory was too scarce and the
CPU too slow to spare anything beyond getting the actual work done.

COBOL, at least as it existed when I last looked at it (but didn't
actually program in it) many years ago, did indeed have no pointers, no
dynamic memory allocation, no stack-allocated variables.  So most of the
attacks characteristic of *C* programs were impossible in COBOL.

But consider:  We've had a large number of attacks against programs that
read known file formats and fail to check that the input is actually
properly formatted.  Just about every video, audio and compression
format has been attacked this way at least once.  Well, COBOL programs
worked with files of known formats.  In fact, the format was rigidly
specified within the program itself.  What if one of those programs
were fed a file that's been deliberately modified to trigger a bug?
I wouldn't place any particularly big bets on the code actually
checking that the files handed to it really conform to their alleged
specifications.  No one would have seen the point in doing so, since
the files were assumed to be *inside* of the security perimeter.  In
fact, it's only in the last couple of years that we've had to admit
that in the age of the Internet, *all* programs that read files must
carefully validate them - the old idea that "the worst that can happen
is that some CLI program crashes on the bad data" just won't do any more.
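To make the point concrete, here's a sketch - in Python, against a
hypothetical 30-byte fixed-width record layout of the kind a COBOL
record description would nail down - of the difference between trusting
the input and checking it:

```python
# Hypothetical 30-byte fixed-width record, loosely modeled on a COBOL
# record layout: account number (cols 1-8), name (9-24), amount (25-30).
def parse_record_naive(rec: bytes):
    # Trusts the input completely, as an inside-the-perimeter
    # 1970s program would have.
    return rec[0:8].decode(), rec[8:24].decode().rstrip(), int(rec[24:30])

def parse_record_checked(rec: bytes):
    # Validates the alleged format before believing any field.
    if len(rec) != 30:
        raise ValueError("record is not 30 bytes")
    acct, name, amount = rec[0:8], rec[8:24], rec[24:30]
    if not acct.isdigit() or not amount.isdigit():
        raise ValueError("non-numeric field")
    return acct.decode(), name.decode().rstrip(), int(amount)

good = b"00012345DOE JOHN        000150"
bad = b"0001"                       # truncated, deliberately malformed
parse_record_checked(good)          # parses cleanly
try:
    parse_record_checked(bad)
except ValueError:
    pass                            # rejected, rather than misparsed
```

The naive version simply assumes the file matches its declared layout;
a deliberately malformed record makes it crash or, worse, silently
misparse.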

FORTRAN in the same era rarely did array bounds checking.  At best,
a FORTRAN compiler might have a bounds-checking option that you could
use during debugging, but it's not likely it would have been used in
production code.  In fact, at least one site I know of made a custom
modification to the linker so that it would act as if there were always
an array (well, really a common block containing an array) at location
0.  By referencing this array, you could access any memory location.
This was considered a feature, not a bug.

I remember debugging some very interesting failures due to out-of-bounds
array indices in FORTRAN code.

COBOL, as I recall it, wasn't big on arrays as such, and I have no idea
if it bounds-checked what array references it had; but I wouldn't want
to put big bets on it.

Of course, COBOL got an SQL sublanguage many years ago.  In those days,
the notion of an SQL injection attack made no sense - the way the
programs were run gave an attacker no opportunity to inject anything.
Put the same COBOL program as the back end of some Web application
and, who knows, maybe you've just opened yourself up to exactly such an
attack.
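The hazard is the usual one: SQL built by pasting strings together from
unvalidated input.  A sketch in Python with sqlite3 (invented table and
account numbers) shows the shape of it:

```python
# Sketch of SQL injection: string-pasted query vs. parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (acct TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('12345', 100), ('99999', 5000)")

def lookup_unsafe(acct):
    # Query built by concatenation - harmless behind a card reader,
    # fatal behind a Web form.
    return conn.execute(
        "SELECT balance FROM accounts WHERE acct = '%s'" % acct).fetchall()

def lookup_safe(acct):
    # Parameterized query: the input can only ever be data.
    return conn.execute(
        "SELECT balance FROM accounts WHERE acct = ?", (acct,)).fetchall()

hostile = "' OR '1'='1"
lookup_unsafe(hostile)   # the WHERE clause now matches every row
lookup_safe(hostile)     # matches nothing
```

An old batch COBOL program never saw input an operator hadn't prepared;
put a Web form in front of it and every field becomes `hostile`.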

I remember receiving electric bills with an 80-column punch card that
you returned with your check.  Your account number was punched into
the card.  I believe the way this was used was that someone took the
card, fed it into a card punch machine, and keyed in the amount on
your check.  The cards would then be read into a program to update
the accounts.

Most of these things ran on OS/360.  I always joked about taking the
punch card and replacing the first couple of columns with "//EOD"
(or something very much like that):  The marker for the end of a
deck of data in standard OS/360 JCL.  The result would be that my
card, and all cards after it, would end up being discarded in the update
run.  (There were ways to make that much harder - you could choose the
form of the terminator arbitrarily - but in an innocent age, that was
commonly not done.)  Looking at this with today's nastier eyes, with a
little cooperation from others, it would be possible to slip in cards
containing other JCL commands.  Should they be encountered *after* the
//EOD, they could manipulate files, run programs - do all kinds of nasty
stuff.  "JCL injection attack", circa the early 1970's....

This stuff was (a) secure in its limited, controlled environment;
(b) probably not nearly as secure as people believed, because it really
wasn't subjected to the attack techniques we've learned in the last 50
years.
                                                        -- Jerry
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
