[SC-L] How Can You Tell It Is Written Securely?

2008-11-27 Thread Mark Rockman
OK.  So you decide to outsource your programming assignment to Asia and demand 
that they deliver code that is so locked down that it cannot misbehave.  How 
can you tell that what they deliver is truly locked down?  Will you wait until 
it gets hacked?  What simple yet thorough inspection process is there that'll 
do the job?  Doesn't exist, does it?


MARK ROCKMAN
MDRSESCO LLC  ___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


[SC-L] Software Assist to Find Least Privilege

2008-11-25 Thread Mark Rockman
It would be difficult to determine a priori the settings for all the access control
lists and other security parameters that one must establish for CAS to work.  
Perhaps a software assist would work according to the following scenario.  Run 
the program in the environment in which it will actually be used.  Assume 
minimal permissions.  Each time the program would fail due to violation of some 
permission, notate the event and plow on.  Assuming this is repeated for every 
use case, the resulting reports would be a very good guide to how CAS settings 
should be established for production.  Of course, every time the program is 
changed in any way, the process would have to be repeated.
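
The scenario can be sketched in a few lines.  This is a hedged illustration, not a real CAS tool: the permission names and use-case format below are invented, and the point is only the shape of the loop (assume nothing, run each use case, record every denial, emit the union as a starting policy).

```python
# Hypothetical sketch of the discovery loop described above.  Permission
# names and the use-case format are invented for illustration; a real tool
# would hook the runtime's actual permission checks (e.g. .NET CAS demands).

def run_use_case(use_case, granted):
    """Run one use case; return the set of permissions it was denied."""
    denied = set()
    for _step, needed in use_case:        # each step names the permission it needs
        if needed not in granted:
            denied.add(needed)            # notate the event ... and plow on
    return denied

def discover_policy(use_cases):
    granted = set()                       # start with minimal (no) permissions
    for uc in use_cases:
        granted |= run_use_case(uc, granted)   # grant what this run was denied
    return granted                        # a first cut at production settings

use_cases = [
    [("read config", "FileRead"), ("write log", "FileWrite")],
    [("open socket", "NetConnect"), ("write log", "FileWrite")],
]
print(sorted(discover_policy(use_cases)))   # → ['FileRead', 'FileWrite', 'NetConnect']
```

As the post says, the whole exercise has to be redone whenever the program changes in any way.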

MARK ROCKMAN
MDRSESCO LLC


[SC-L] Disable Bounds Checking?

2007-11-03 Thread Mark Rockman
Back around 1980, when Ada was new, it was common for compiler manufacturers to 
claim that it was best to disable bounds checking for performance reasons.  Getting 
your program to run slightly faster trumped knowing whether any of your buffers 
were overflowing.  Code that silently trashes memory can be expected to produce 
some truly creative results.   My practice is to code defensively, to ensure my 
program is operating according to policies that I set for it.  I want to know 
when it is misbehaving.  Should there be a performance hit, I instrument the 
program to find the hot spots and optimize those and only those.
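
The trade-off is easy to demonstrate with any bounds-checked runtime; here it is in Python, which checks every index (the buffer size and values are arbitrary):

```python
# A bounds-checked buffer fails loudly at the point of error instead of
# silently trashing neighboring memory.  Python lists check every access.

buf = [0] * 8                 # a "buffer" of 8 slots

def store(i, value):
    buf[i] = value            # raises IndexError rather than corrupting memory

store(7, 42)                  # last valid slot: fine
try:
    store(8, 99)              # one past the end: caught immediately
except IndexError as e:
    print("bounds check fired:", e)
```

Profiling then shows whether the checks actually cost anything: in Python, `python -m cProfile prog.py` finds the hot spots, which is the "optimize those and only those" step.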


[SC-L] COBOL Exploits

2007-11-02 Thread Mark Rockman
The adolescent minds that engage in "exploits" wouldn't know COBOL if a 
printout fell out of a window and onto their heads.  I'm sure you can write COBOL 
programs that crash, but it must be hard to make them take control of the 
operating system.  COBOL programs are heavy into unit record equipment (cards, 
line printers), tape files, disk files, sorts, merges, report writing -- all 
the stuff that came down to 1959-model mainframes from tabulating equipment.  
They don't do Internet.  What they could do and have done is incorporate 
malicious code that exploits rounding error such that many fractional pennies 
end up in a conniving programmer's bank account.
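
The rounding scheme alluded to, the classic "salami slicing" fraud, works like this; the balances and interest rate below are invented for illustration:

```python
# Sketch of the rounding-residue scheme: interest on each account is
# truncated to a whole cent, and the shaved fractions are accumulated
# somewhere else.  All figures are made up.
from decimal import Decimal, ROUND_DOWN

balances = [Decimal("123.45"), Decimal("987.65"), Decimal("500.10")]
rate = Decimal("0.0137")

skimmed = Decimal("0")
for b in balances:
    exact = b * rate                                        # exact interest
    paid = exact.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    skimmed += exact - paid                                 # the fractional pennies

print(skimmed)   # residue that "ends up in a conniving programmer's account"
```

Per account the residue is under a cent, which is why it went unnoticed; across millions of accounts per interest run it adds up.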


Re: [SC-L] temporary directories

2006-12-30 Thread Mark Rockman
The old Sperry operating system from Unisys for successors to the 1108 computer 
has temporary files that are accessible only to the process that creates them.  
Such files can be treated as "directories," even though the file system on such 
machines is not tree-structured.  Space allocated to temporary files is 
reclaimed when the file is no longer needed (as when the process terminates) 
and when the system unexpectedly reboots with processes running.  This is 
interesting as there is no record of such files in the file naming system 
except, perhaps, for a few orphaned "granule" tables that map file relative 
addresses to device relative addresses.  The security of temporary files on 
such systems is absolute as they cannot be accessed through the file naming 
system.  Yet they are named exactly as are files that are accessible through 
the file naming system.  The beauty of such temporary files is that they take 
care of themselves when abandoned:  they don't hang around as entries in a 
catalogue waiting for some process to dispose of them.
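
Unix never adopted this design, but the same "no entry in the file naming system" property can be approximated by unlinking a file while keeping it open: on POSIX platforms, Python's tempfile.TemporaryFile does exactly that, so the kernel reclaims the space when the last descriptor closes, even if the process dies.

```python
# An unnamed temporary file in the spirit described above: usable like any
# file, but with no name in any directory, so nothing is left to clean up.
import tempfile

with tempfile.TemporaryFile() as f:
    f.write(b"scratch data")   # usable like any ordinary file ...
    f.seek(0)
    data = f.read()
# ... but it was never reachable through the file naming system
print(data)
```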
  - Original Message - 
  From: Leichter, Jerry 
  To: ljknews 
  Cc: sc-l@securecoding.org 
  Sent: Friday, December 29, 2006 18:56
  Subject: Re: [SC-L] temporary directories


  | Not on Unix, but I tend to use temporary names based on the Process ID
  | that is executing.  And of course file protection prevents malevolent
  | access.
  | 
  | But for a temporary file, I will specify a file that is not in any
  | directory.  I presume there is such a capability in Unix.
  You presume incorrectly.  You're talking about VMS, where you can
  open a file by file id.  The Unix analogue of a file id is an
  inode number, but no user-land call exists to access a file that
  way.  You can only get to a file by following a path through the
  directory structure.

  In fact, all kinds of Unix code would become insecure if such a
  call were to be added:  It's a common - and reasonable - assumption
  that accessing a file requires access to the (well, a) directory in
  which that file appears (not that it isn't prudent to also control
  access to the file itself).

  One can argue this both ways, but on the specific matter of safe
  access to temporary files, VMS code that uses FID access is much
  easier to get right than Unix code that inherently has to walk
  through directory trees.  On the other hand, access by file id
  isn't - or wasn't; it's been years since I used VMS - supported
  directly by higher-level languages (though I vaguely recall that
  C had a mechanism for doing it).  A mechanism that requires
  specialized, highly system-specific low-level code to do something so
  straightforward is certainly much better than no mechanism at all,
  but it's not something that will ever be used by more than a
  small coterie of advanced programmers.
  -- Jerry



[SC-L] Buffer Overrun

2004-08-02 Thread Mark Rockman
If I allocate a buffer of n bytes, open the channel, and receive n+m bytes
where m>0, then where does the fault lie?  Some possibilities:

1) My choice for n is too small.
2) The software with which I open the channel does not permit me to specify
that my buffer is only n bytes in length, and it returns more than n bytes.
3) The software with which I open the channel permits me to specify that my
buffer is only n bytes in length, but I incorrectly inform it that the
buffer length is some number >= n+m bytes.

Modern techniques allow me to create an array object that cannot overflow
without causing an exception.  That is exactly the behavior a buffer should
have.  Lazy or forgetful programmers cannot write code that is able to
corrupt outside the limit of the buffer.  Malware writers are unable to
transfer control to malicious code by corrupting the stack.
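
Possibilities (2) and (3) disappear when the receive call is told the true buffer length and honors it.  A small sketch, with the channel simulated by an in-memory stream (socket.recv_into offers the same contract on a real socket):

```python
# The receive call is given the real buffer and can never overfill it:
# readinto() fills at most len(buf) bytes, however many bytes are waiting.
import io

n = 8
buf = bytearray(n)                          # buffer of n bytes
channel = io.BytesIO(b"0123456789abcdef")   # n+m bytes waiting (m = 8)

got = channel.readinto(buf)                 # fills at most len(buf) bytes
print(got, bytes(buf))                      # → 8 b'01234567'
```

The surplus m bytes stay in the channel for the next read; nothing outside the buffer is ever touched.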





Re: [SC-L] Programming languages -- the "third rail" of secure coding

2004-07-23 Thread Mark Rockman
Clearly, programming languages were intended to convey programmer intent to
a computer.  They were not designed to solve security issues.  Those issues
mostly exist at a higher level of abstraction than bit fiddling, where
programming languages excel.  This is to say that programming languages are
able to help automatically to implement certain aspects of security.  We
have already seen, for example, Microsoft deploy managed code that does not
permit the computer to overflow buffer boundaries.  Further, it does not
permit collection elements to be coerced away from their natural types.
There are no pointers hence there is no pointer arithmetic.  Similar
mechanisms exist in J2EE.  These are delightful.  I long ago asked to have
these capabilities provided, only to be told that performance was king and
no way would compiler writers add to path length (i.e. degrade performance)
to halt the program should such untoward events occur.  But, let's not get
carried away.  These things that compilers can do for us do not cover a vast
range of security issues such as closing off IP ports to any but
authenticated users and coding carefully to guarantee illicit user input
does not slide by unchallenged as a database command.

Empirically, it does seem that a whole class of security-related errors in
Microsoft software could be wiped away by porting the code to the managed
arena.  It has been simply too easy to write code that works correctly
except in the case of unanticipated (e.g. nonconformant) input.
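
The "database command" point deserves a concrete form.  A minimal sketch, using sqlite3 purely as a convenient in-memory stand-in: user input bound as a parameter is treated as a value and can never be parsed as SQL.

```python
# Parameterized query vs. string pasting.  Pasting the payload into the SQL
# text would turn it into a command matching every row; binding it with a
# placeholder makes it a mere value, matching nothing.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "x' OR '1'='1"   # classic injection payload

rows = conn.execute("SELECT name FROM users WHERE name = ?", (evil,)).fetchall()
print(rows)   # → []
```

This is exactly the "coding carefully" the compiler cannot do for you: the check lives in how the query is constructed, not in the language runtime.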

Mark Rockman
MDRSESCO LLC
- Original Message - 
From: "Michael S Hines" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, July 22, 2004 10:32
Subject: RE: [SC-L] Programming languages -- the "third rail" of secure
coding


> Concur this is a 'rabbit trail' not worth pursuing.
>
> For those who assisted with the list, thank you.
>
> Otherwise, I suggest we return to our regularly scheduled program at this
> time.
>
> Mike Hines
> ---
> Michael S Hines
> [EMAIL PROTECTED]
>
>




Re: [SC-L] Programming languages -- the "third rail" of secure coding

2004-07-21 Thread Mark Rockman
JOVIAL goes back to the 1960s as "Jules' Own Version of the International
Algebraic Language."
ALGOL and IAL are the same thing.  JOVIAL was used almost exclusively by the
United States Air Force.

- Original Message - 
From: "Dave Aronson" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Tuesday, July 20, 2004 11:05
Subject: Re: [SC-L] Programming languages -- the "third rail" of secure
coding


> "Michael S Hines" <[EMAIL PROTECTED]> wrote:
>
>  > I've been compiling a list of programming languages..
>
> You missed FORTRAN, ICON, REXX, SNOBOL, and the assorted OS-based shell
> scripting languages (bash/csh/ksh/etc., VMS DCL, DOS .bat, etc.).  I've
> heard of JOVIAL, which I *think* is a programming language used almost
> exclusively in the US military.  Since a few companies make things that
> translate it into code, you might consider UML as well.  Then there are
> a gazillion languages for particular commercial packages -- you got
> Oracle's PL/SQL, but there are also dBase/Clipper, FrEd (Framework
> Editor, from an old integrated office suite), Lotus 1-2-3 macros, and
> many more.
>
> Also, depending on your definition of "programming language" (versus
> "markup language" and a few other types), you might have a few extras as
> well.
>
> -- 
> David J. Aronson, Contract Software Engineer in Washington DC area
> Resume and other information online at: http://destined.to/program
>
>




Re: [SC-L] Education and security -- another perspective (was "ACM Queue - Content")

2004-07-06 Thread Mark Rockman
You are not nuts.  Your course outline  is a very substantial step in the
right direction.
- Original Message - 
From: "Dana Epp" <[EMAIL PROTECTED]>
To: "Fernando Schapachnik" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, July 06, 2004 16:42
Subject: Re: [SC-L] Education and security -- another perspective (was "ACM
Queue - Content")


> > I'd be interested to hear what people think of the two approaches
(separate
> > security courses vs. spreading security all over the curricula).
> >
> > Regards.
> >
> > Fernando.
>
> Well, I have been asked to teach a new fourth-year course at the British
> Columbia Institute of Technology (BCIT) this fall on Secure Programming
> (COMP4476). I have no problem sharing my course outline and breakdown,
> since a lot of this is adapted from the approaches many other structured
> secure programming courses and books are taking. The idea is that
> students need to build a strong foundation of learning that they can
> apply in whichever discipline they follow in the future. This shouldn't
> be a first-year course, but I think it's a bit late as a fourth-year
> course. You will note that the course I am teaching is somewhat language
> agnostic, and even platform agnostic, to ensure that the foundation isn't
> tainted with 'fad-of-the-day' techniques, technologies and tools. (Except
> for Web applications, on which the jury is still out.)
>
> Course Breakdown
> 
> 1. Essentials of Application Security
> * Types of attacks (hackers, DoS, Viruses, Trojans, Worms,
> organizational attacks etc)
> * Consequences of Poor Security (Data theft, lost productivity, damaged
> reputation, lost consumer confidence, lost revenues)
> * Challenges when Implementing Security (Security vs Usability,
> Attackers vs Defenders, The misinformation about the security cost)
>
> 2. Secure Application Development Practices
> * Implementing security at every stage of the development process
> * Designing clean error code paths, and fail securely
> * Planning on Failure through results checking
> * Code review
>
> 3. Threat Modeling
> * Attack Trees
> * STRIDE Threat Modeling
> * DREAD risk analysis
>
> 4. Using Security Technologies
> * Encryption
> * Hashing
> * Digital signatures
> * Digital certificates
> * Secure communications (Using IPSec/SSL)
> * Authentication
> * Authorization
>
> 5. Detecting and fixing Memory and Arithmetic Issues
> * Buffer overflows
> * Heap overflows
> * Integer overflows
>
> 6. Defending against Faulty Inputs and Tainted Data
> * User input validation techniques
> * Regular expressions
> * Parameter checking
> * Fault injection reflection
>
> 7. Design, Develop and Deploy software through least privilege
> * Running in least privilege
> * Developing and debugging in least privilege
> * Providing secure defaults using the Pareto Principle
> * Applying native OS security contexts to processes and files (ACL,
> perms etc)
>
> 8. Securing Web applications
> * C14N
> * SQL Injection
> * Cross-site scripting
> * Parameter checking
>
> As I complete the lesson plan this summer this outline will change. I
> think more study on understanding trusted and untrusted boundaries needs
> to be added, and some areas such as Threat Modeling will be fleshed out
> with more detail. Overall though, you can get an idea of areas of
> education that I feel make up a core foundation of learning in secure
> programming. I wish I could take credit for this thinking, as it's a
> strong foundation to build on. Alas I cannot; pick up any number of
> secure coding books and realize that this is all covered there in some
> degree:
>
> * Building Secure Code
> * Secure Coding Principles & Practices
> * Secure Programming Cookbook
> * Security Engineering
> * Building Secure Software
>
> I only wish I could make all these books be textbook requirements for
> the curriculum. It should be mandatory reading. Although you can teach
> some aspects in any course being provided, the reality is I think a
> dedicated course helps to build on the real foundation of learning. All
> other courses can reinforce these teachings to further drive them home
> for the student.
>
> Of course, I also think students should have to take at least one course
> in ASM to really understand how computer instructions work, so they can
> gain a foundation of learning for the heart of computer processing. And
> I think they should be taught the powers and failures of C. Since I know
> many of you think I'm nuts for that, you might want to look at this
> outline with the same amount of consideration.
>
> -- 
> Regards,
> Dana Epp
> [Blog: http://silverstr.ufies.org/blog/]
>
>




[SC-L] Origins of Security Problems

2004-06-17 Thread Mark Rockman
I had no idea I was promulgating a syllogism.  In fact, I did not intend to.
My point was that the world changed and the software didn't, nor did people
change their behaviors to compensate.  Remember, the Internet until 1992 was
a community of well-behaved techies:  netizens.  Software design was not
much required to consider bad behavior.  Bad behavior could be punished by
expulsion.  No longer.  Commerce demanded the old software be deployed on
"Al Gore's invention" (heh heh) despite its manifest problems.  Eventually
software will adapt and people will be taught how to take the fun out of
abusing the Internet.  Naturally, if one does not consider input validation
and defensive programming in one's methodology, one's stuff will break.



[SC-L] Origins of Security Problems

2004-06-15 Thread Mark Rockman
Before widespread use of the Internet, computers were isolated from
malicious attacks.  Many of them were not networked.  CPUs were slow.
Memory was small.  It was common practice to "trust the user," to minimize
the size of programs to speed up processing, and to make programs fit in
memory.  Non-typesafe languages permitted playing with the stack.  It
occurred to me repeatedly during that period that it would have been
extremely helpful if the compiler/runtime had detected buffer overflows.
Implementers always shot back that their prime concern was minimizing path
lengths (i.e. execution time) and that it was the programmer's
responsibility to guarantee buffer overflows would not occur.

With blunt instruments such as strcpy() and strcat() available to almost
guarantee occasional buffer overflows, and stacks arranged so that transfer
of control to malicious code could conveniently occur, it evidently doesn't
take a rocket scientist to figure out how to make a program misbehave by
providing invalid input that passes whatever passes for input validation.
Once code became mobile and access to vulnerable buffers became possible
over a wire, an epidemic of security breaches occurred.  Moreover, Internet
protocols were designed individually, each to provide a specific service.
Little consideration went into how the protocols could be abused.

Computers are now widespread, and many of them today reside on the Internet
with vulnerable ports wide open.  The average computer owner doesn't know
what a port is or that it represents a potential avenue for abuse.  Software
vendors remain unmotivated to instruct owners as to what vulnerabilities
exist and how to minimize them, because that would work against marketing
and convenience.  A small network desires file and printer sharing among the
member computers.  Does this mean everybody on the Internet should have
access to those files and printers?  Of course not.  Yet a standalone
computer has the sharing port wide open to the Internet because someday it
might become a member of a network.  Things have gotten better with
additional features (e.g. Internet Connection Firewall), default
configurations set to restrict rather than to favor convenience, and
anti-virus software.  The origin of security problems lies in widespread
Internet usage and a habitual lack of effort to ensure that programs don't
do things that owners don't want them to do.