Re: [SC-L] OWASP Publicity

2007-11-15 Thread Crispin Cowan
McGovern, James F (HTSC, IT) wrote:
 I have observed an interesting behavior in that the vast majority of IT
 executives still haven't heard about the principles behind secure
 coding. My take says that we are publishing information in all the wrong
 places. IT executives don't really read ACM, IEEE or other sporadic
 postings from bloggers, but they do read CIO, the Wall Street Journal and most
 importantly listen to each other.

 What do folks on this list think about asking the magazines and
 newspapers to publish? I am willing to gather contact information of
 news reporters and others within the media if others are willing to
 amplify the call to action in terms of contacting them. 
   
The vast majority of IT executives are unfamiliar with all of the
principles of security, firewalls, coding, whatever.

The important thing to understand is that such principles are below
their granularity; they are *right* to not care about such principles,
because they can't do anything about them. Their granularity of decision
making is which products to buy, which strategies to adopt, which
managers to hire and fire. Suppose they did understand the principles of
secure coding; how then would they use that to decide between firewalls?
Web servers? Application servers?

If anything, the idea that needs to be pitched to IT executives is to
pay more attention to quality than to shiny buttons & features. But
there's the rub: what is quality, and how can an IT executive measure it?

I have lots of informal metrics that I use to measure quality, but they
largely amount to synthesized reputation capital, derived from reading
bugtraq and the like and noting how many vulnerabilities I see in a
given product, e.g. Qmail and Postfix are extremely secure, Pidgin not
so much :)

But as soon as we formalize anything like this kind of metric, and get
executives to start buying according to it, then vendors start gaming
the system. They start developing aiming at getting the highest
whatever-metric score they can, rather than for actual quality. This
happens because metrics that approximate quality are always cheaper to
achieve than actual quality.

This is a very, very hard problem, and sad to say, but pitching
articles on principles to executives won't solve it.

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin
CEO, Mercenary Linux   http://mercenarylinux.com/
   Itanium. Vista. GPLv3. Complexity at work

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Insider threats and software

2007-08-28 Thread Crispin Cowan
Paco Hope wrote:
 On 8/16/07 7:44 PM, silky [EMAIL PROTECTED] wrote:

 how is this different than sending malformed packets to an rpc interface?
 ...
 Now I'll gently disagree with Gary, who is my boss, so you know I'll hear 
 about it in the hallways... I think this feels more like privilege 
 escalation than insider threat. The distinction being that these attacks 
 allow an authorized user who has limited privileges to escalate their 
 privileges and do things that they shouldn't be able to do. An insider (to 
 me) is a person who already had that privilege and status when they started 
 their attack. (Read Kevin Wall's follow-up on darkreading.com; he has good 
 things to say on who are insiders and outsiders).  Where we are prone to 
 confusion, I think, is that outsiders or limited authorized users can have 
 the same IMPACT as an insider, when the privilege escalation is sufficiently 
 bad.
   
Gary has an interesting but fairly obvious idea, that AJAX clients are
exceptionally vulnerable to the environment they run in. Said clients
are also part of a distributed computing system between the AJAX client,
the web front end, and whatever back-end systems are involved.

Is this an "insider threat"? Only if the people who coded the server
were dumb enough to treat the AJAX client as if it were an "insider"
component. Never do that.

This is web security 101: always always always check your input
parameters, and especially if they are coming from a web client.
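
(For what it's worth, a minimal sketch of that 101 rule in Java; the
parameter name and the allow-list pattern are invented for illustration:)

import java.util.regex.Pattern;

// Sketch: validate client-supplied parameters on the server against an
// allow-list, no matter what the AJAX client claims to have checked.
public class ParamCheck {
    // Allow-list: an order ID is 1-16 digits, nothing else.
    private static final Pattern ORDER_ID = Pattern.compile("[0-9]{1,16}");

    static String lookupOrder(String id) { // 'id' arrives from the client
        if (id == null || !ORDER_ID.matcher(id).matches()) {
            return "rejected"; // refuse; don't try to repair the input
        }
        // Only now is 'id' safe to pass along, e.g. into a parameterized query.
        return "order " + id;
    }

    public static void main(String[] args) {
        System.out.println(lookupOrder("12345"));           // order 12345
        System.out.println(lookupOrder("1; DROP TABLE x")); // rejected
    }
}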

There is a risk here that AJAX developers will get confused, lazy, or
sloppy about whether the AJAX client component is trusted or not. It is
not clear to me yet whether the AJAX dev tools that are emerging make
that mistake pervasive, or if it requires a special kind of stupid to
make that mistake.

Is this really an insider threat? I think that is stretching things, but
not a huge amount.

Gary also brings up references to his book on hacking games. Small-scale
distributed games are the same as web apps; never trust the client.
Large scale MMORPG games (everything from World of Warcraft to Second
Life) are economically mandated to shift as much computational burden
onto the client as possible, and that entails inevitably trusting the
clients more than security really can tolerate. Such games are
inherently insecure; look for more hacking to occur. Read more about it
in this Oakland 2007 paper, with an interesting solution to this problem:

/Enforcing Semantic Integrity on Untrusted Clients in Networked
Virtual Environments (Extended abstract)/
Somesh Jha, Stefan Katzenbeisser, Christian Schallhart, Helmut Veith
and Stephen Chenney

http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/proceedings/&toc=comp/proceedings/sp/2007/2848/00/2848toc.xml&DOI=10.1109/SP.2007.3

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Crispin Cowan
Gary McGraw wrote:
 Though I don't quite understand computer science theory in the same way that 
 Crispin does, I do think it is worth pointing out that there are two major 
 kinds of security defects in software: bugs at the implementation level, and 
 flaws at the design/spec level.  I think Crispin is driving at that point.
   
Kind of. I'm saying that specification and implementation are
relative to each other: at one level, a spec can say "put an iterative
loop here" and the implementation is a bunch of x86 instructions. At another
level, the specification says "initialize this array" and the implementation
says for (i=0; i<ARRAY_SIZE; i++) { ... At yet another level the
specification says "get a contractor to write an air traffic control
system" and the implementation is a contract :)

So when you advocate automating the implementation and focusing on
specification, you are just moving the game up. You *do* change
properties when you move the game up, some for the better, some for the
worse. Some examples:

* If you move up to type safe languages, then the compiler can prove
  some nice safety properties about your program for you. It does
  not prove total correctness, does not prove halting, just some
  nice safety properties.
* If you move further up to purely declarative languages (PROLOG,
  strict functional languages) you get a bunch more analyzability.
  But they are still Turing-complete (thanks to the Church-Turing
  thesis), so you still can't have total correctness.
* If you moved up to some specification form that was no longer
  Turing complete, e.g. something weaker like predicate logic, then
  you are asking the compiler to contrive algorithmic solutions to
  nominally NP-hard problems. Of course they mostly aren't NP-hard,
  because humans can create algorithms to solve them, but now you
  want the computer to do it. Which raises the question of the
  correctness of a compiler so powerful it can contrive
  general-purpose algorithms.


 If we assumed perfection at the implementation level (through better 
 languages, say), then we would end up solving roughly 50% of the software 
 security problem.
   
The 50% being rather squishy, but yes this is true. It's only vaguely
what I was talking about, really, but it is true.

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Crispin Cowan
Steven M. Christey wrote:
 On Mon, 11 Jun 2007, Crispin Cowan wrote:
   
 Kind of. I'm saying that specification and implementation are
 relative to each other: at one level, a spec can say "put an iterative
 loop here" and the implementation is a bunch of x86 instructions.
 
 I agree with this notion.  They can overlap at what I call design
 limitations: strcpy() being overflowable (and C itself being
 overflowable) is a design limitation that enables programmers to make
 implementation errors.  I suspect I'm just rephrasing a tautology, but
 I've theorized that all implementation errors require at least one design
 limitation.  No high-level language that I know of has a built-in
 mechanism for implicitly containing files to a limited directory (barring
 chroot-style jails), which is a design limitation that enables a wide
 variety of directory traversal attacks.
   
I thought that the Java 2 security container stuff let you specify file
accesses? Similarly, I thought that Microsoft .Net managed code could
have an access specification?

AppArmor provides exactly that kind of access specification, but it is
an OS feature rather than a high level language, unless you want to view
AA policies as high level specifications.
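
(For what it's worth, the Java 2 mechanism does work roughly that way; a
minimal sketch, with hypothetical paths and a hypothetical policy grant:)

import java.io.FilePermission;
import java.security.AccessController;

// Run under a security manager with a deployer-supplied policy, e.g.:
//   java -Djava.security.manager -Djava.security.policy=app.policy Contained
// where app.policy contains something like:
//   grant {
//     permission java.io.FilePermission "/srv/appdata/-", "read";
//   };
public class Contained {
    public static void main(String[] args) {
        // Throws java.security.AccessControlException unless the policy
        // grants it: the deployer's policy file, not the developer's code,
        // decides which parts of the filesystem are reachable.
        AccessController.checkPermission(
                new FilePermission("/srv/appdata/users.txt", "read"));
        System.out.println("read permitted by policy");
    }
}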

 If we assumed perfection at the implementation level (through better
 languages, say), then we would end up solving roughly 50% of the
 software security problem.
   
 The 50% being rather squishy, but yes this is true. It's only vaguely
 what I was talking about, really, but it is true.
 
 For whatever it's worth, I think I agree with this, with the caveat that I
 don't think we collectively have a solid understanding of design issues,
 so the 50% guess is quite squishy.  For example, the terminology for
 implementation issues is much more mature than terminology for design
 issues.
   
I don't agree with that. I think it is a community gap. The academic
security community has a very mature nomenclature for design issues. The
hax0r community has a mature nomenclature for implementation issues.
That these communities are barely aware of each other's existence, never
mind talking to each other, is a problem :)

 One sort-of side note: in our vulnerability type distributions paper
 [1], which we've updated to include all of 2006, I mention how major Open
 vs. Closed source vendor advisories have different types of
 vulnerabilities in their top 10 (see table 4 analysis in the paper).
 While this discrepancy could be due to researcher/tool bias, it's probably
 also at least partially due to development practices or language/IDE
 design.  Might be interesting for someone to pursue *why* such differences
 occur.
   
Do you suppose it is because of the different techniques researchers use
to detect vulnerabilities in source code vs. binary-only code? Or is
that a bad assumption because the hax0rs have Microsoft's source code
anyway? :-)

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Economics of Software Vulnerabilities

2007-03-19 Thread Crispin Cowan
Gary McGraw wrote:
 I'm not sure vista is bombing because of good quality.   That certainly would 
 be ironic.   

 Word on the way-down-in-the-guts street is that vista is too many things 
 cobbled together into one big kinda-functioning mess.
I.e. it is mis-featured, and lacking in integration. This is a
variation on not having desired features. And there certainly are big
features in Vista that were supposed to be there but aren't (most of
user-land being managed code, the relational file system).

It is also infamously late.

So if the resources that were put into the code quality in Vista had
instead been put into features and ship-date, would it do better in the
marketplace?

Sure, that's heretical :) but it just might be true :(

Crispin, now believes that users are fundamentally what holds back security

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Training at CanSec West   http://cansecwest.com/dojoapparmor.html

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Economics of Software Vulnerabilities

2007-03-19 Thread Crispin Cowan
Ed Reed wrote:
 Crispin Cowan wrote:
   
 Crispin, now believes that users are fundamentally what holds back security
   
 
 I was once berated on stage by Jamie Lewis for sounding like I was
 placing the blame for poor security on customers themselves.
   
Fight back harder. Jamie is wrong. The free market is full of product
offerings of every description. If users cared about security, they
would buy different products than they do, and deploy them differently
than they do. QED, lack of security is the users' fault.

 I have moved on, and believe, instead, that it is the economic
 inequities - the mis-allocation of true costs - that is really to blame.
   
Since many users are economically motivated, this may explain why users
don't care much about security :)

A competitive free-market economy is really a large optimization engine
for finding the most efficient way to do things, because the more
efficient enterprises crush the less efficient. As such, I have a fair
degree of faith that senior management is applying approximately the
right amount of security to mitigate the threat that they face. If they
are not doing so, they are at risk from competitors who do apply the
right amount of security.

What has made the security industry grow for the last decade has been
the huge growth in connectivity. That has grown the attack surface, and
hence the threat, that enterprises face. And that has caused enterprises
to grow the amount of security they deploy.

 Add the slowly-warmed pot phenomenon (apocryphal as it may be) -
 customers don't jump out of the boiling pot because they're too invested
 to walk away.

 Eventually I think they'll get fed up and there'll be a consumer uprising.
   
Why do you think it will be an uprising? Why not a gradual shift, with
the vendors just getting better exactly as fast as the users need them to?

 Until then let's encourage better coding practices and secure designs
 and deep thought about "what policy do I want enforced". 
   
Technologists figure out how to do stuff. Economists and strategists
figure out what to do. We can encourage all we want, but we are just
shouting into the wind until enterprise users demand better security.

Crispin
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Economics of Software Vulnerabilities

2007-03-12 Thread Crispin Cowan
 is not sustainable. Read Vernor Vinge's A Deepness in the Sky
for a fascinating vision on what 10,000 years of shoddy software
development could produce. And it's a damn fine book.

 What's most disappointing to me is the near-total lack of discussion
 about security policies and models in the whole computer security field,
 today.
   
I see the policy field growing, albeit slowly. SELinux and AppArmor are
getting traction now, and 5 years ago they were exotic toys for weirdos.

 If engineering is the practice of applying the logic and proofs provided
 by science to real world situations, software engineering and computer
 science seem simply to have closed their eyes to the question of system
 security and internal controls.

 Perhaps economics will reinvigorate the discussion in the coming decades.
   
I view this as completely ironic. It was economics that forced the
software industry to close its eyes to formalism and quality. The
industry won't change until economics makes quality matter more than
features, and I have yet to see any hint of that happening. For example,
Microsoft Vista is:

* Much better code quality: MS invested heavily in both automated
  and human code checking before shipping.
* Feature-poor: they pulled back on most of the interesting
  features, and as a result Vista is fundamentally XP++ with a
  pretty 3D GUI.
* A year or two late.
* Bombing in the market: the street chat I see is enterprises doing
  anything possible to avoid upgrading to Vista.

So it seems that even mighty Microsoft, when it tries for quality over
features, just gets punished in the marketplace.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hacking is exploiting the gap between intent and implementation

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


[SC-L] NDSS: Network and Distributed Systems Security

2007-02-13 Thread Crispin Cowan
This is the call for participation for the annual Network and
Distributed System Security conference, starting in two weeks, February
28th to March 2nd, in San Diego: http://www.isoc.org/isoc/conferences/ndss/07/

NDSS is a traditional scholarly academic security conference with a peer
reviewed track of papers
http://www.isoc.org/isoc/conferences/ndss/07/program.shtml

However, this year we have made a special effort to make NDSS more
relevant to security practitioners by adding an invited talks track
focused on security threats by some leading practitioners. Our invited
talks schedule is:

* Keynote: Vernor Vinge, professor emeritus of computer science at
  UCSD, founder of the science fiction cyberpunk genre, quadruple
  Hugo award winner for the novels "A Fire Upon the Deep" and "A
  Deepness in the Sky", and the stories "Fast Times at Fairmont
  High" and "The Cookie Monster", and notable futurologist for the
  notion of the technological singularity. Of particular interest to
  me as a security geek is that software security is a key element
  of "A Deepness in the Sky", and it is *correct* :)
* H1kari of ToorCon speaking on "Breaking Wireless and Mac OS-X
  Encryption with FPGAs"
* John Viega, McAfee Chief Security Architect, on "Malware in the
  Real World"
* Tom Liston, speaking on work with Ed Skoudis, on "Virtual Machine
  Security Issues"
* Jim Hoagland, speaking on work with Oliver Friedrichs, on "A
  Network Attack Surface Analysis of RTM Windows Vista"
* Panel "Red Teaming and Hacking Games: How Much Do They Really
  Help?", moderated by Crispin Cowan, with panelists:
  o John Viega, Kenshoto/Defcon CtF organizer
  o Rodney Thayer, member of a winning Kenshoto/Defcon CtF team
  o Giovanni Vigna, professor UCSB, leader of 2005 Defcon CtF
    winning team
  o Dennis W. Mattison, member of organizing team for ToorCon
    RootWars CtF game
  o Rizzo, member of the GhettoHackers, who dominated Defcon CtF
    for 4 years, and then revolutionized the game with a new set
    of rules & infrastructure in 2001

We hope for a lively exchange of views in the hall track between
academic security researchers and industrial security practitioners.
Come share your skills and frighten a professor :)

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hacking is exploiting the gap between intent and implementation

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Compilers

2006-12-26 Thread Crispin Cowan
ljknews wrote:
   2. The compiler market is so immature that some people are still
  using C, C++ and Java.
   
I'm with you on the C and C++ argument, but what is immature about Java?
I thought Java was a huge step forward, because for the first time, a
statically typesafe language was widely popular.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hacking is exploiting the gap between intent and implementation


___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Could I use Java or c#? [was: Re: re-writing college books]

2006-11-14 Thread Crispin Cowan
Robin Sheat wrote:
 On Tuesday 14 November 2006 13:28, Crispin Cowan wrote:
   
 It means that compromising performance 
 
 It's not necessarily a given that runtime performance is compromised. There 
 are situations where Java is faster than C (I've tested this on trivial 
 things).
Here the issue is bytecode vs. native code generation, not Java vs. C.
Remember, I advocated Java over C++ in the first place :)

Even in the bytecode vs. native code generator contest, there are cases
where each will win:

* bytecode interpreters always lose; they really are just a kludge
* JIT can win if it uses dynamic profiling effectively and the
  application is amenable to optimization for decisions that need to
  be evaluated at runtime
* JIT can be a loss because of the latency required to JIT the code
  instead of compiling ahead of time

So:

* JIT will win if your application is long-lived, and has a lot of
  dynamic decision making to do, e.g. making a lot of deep object
  member function calls that are virtual, or just a lot of
  conditional branches (see the toy sketch after this list)
* Native code will win if your applications are just short-lived,
  because they are dispatched as children from a dispatcher process
      o You pay the JIT cost each time it starts
      o The short lifespan doesn't give dynamic profiling time to do
        its thing
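
(The promised toy sketch of the JIT-friendly case, in Java; the class
names are invented:)

// A long-lived loop of virtual calls through an interface that, at
// runtime, nearly always dispatches to one concrete class. A profiling
// JIT can observe that the call site is monomorphic and inline it; an
// ahead-of-time compiler without profile data generally cannot.
interface Shape { double area(); }

final class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class JitFriendly {
    public static void main(String[] args) {
        Shape[] shapes = new Shape[1000000];
        for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle(i % 10);
        double total = 0;
        // Hot, effectively monomorphic call site: dynamic profiling territory.
        for (int pass = 0; pass < 100; pass++)
            for (Shape s : shapes) total += s.area();
        System.out.println(total);
    }
}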


 Personally, I find the programmer time to be much better used in Java too. 
   
No argument from me. I advocate Java, I just want a native code
generator instead of bytecode.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] p-code was created for PLATFORM PORTABILITY

2006-11-13 Thread Crispin Cowan
David A. Wheeler wrote:
 On 11/9/06, Crispin Cowan [EMAIL PROTECTED] wrote:
   
 Prior to Java, resorting to compiling to byte code (e.g. P-code back in
 the Pascal days) was considered a lame kludge because the language
 developers couldn't be bothered to write a real compiler.
   
 I believe that is completely and totally false.
 If you want to claim p-code itself was lame, fine.
 But let's keep the history accurate.

 The UCSD p-system was created in the late 1970's SPECIFICALLY for
 PORTABILITY of executable code: You could ship p-code to any
 machine, and it would run.
That is not inconsistent with my claim. The "P-code is a kludge to get
around writing a real compiler" point is just multiplied by the diversity
of architectures. Writing a native code generator is a cost you pay for
every supported architecture. So in more detail, P-code is a
performance-compromising kludge to avoid having to write a *lot* of real
code generators.

One major change between then and now is consolidation of CPUs. Then,
there really was a very broad diversity of CPU architectures (IBM
mainframe, IBM AS/400, DEC VAX, PDP, DEC10, DEC20, Data General,
Apollo, HP, Xerox Sigma, x86, 68000, NS32K, etc. etc.) and they all more
or less mattered. It is *very* different today: the list of CPU
architectures that matter is much shorter (x86, x86-64, SPARC, POWER,
Itanium): only 5 instead of a baker's dozen, and of those 5, a single
one (x86) is a huge majority of the market.

Pascal was a student language, not often used for commercial
development, so money for Pascal development was scarce. In contrast,
"real" languages for commercial purposes (PL/1, COBOL, FORTRAN, C) all
used native code generators. P-code was precisely a
performance-compromising kludge to allow Pascal to be portable with less
development effort.

Of course, there was one big exception: Turbo Pascal. Arguably the most
popular Pascal implementation ever. And it used a native code generator.

The need for portability, and the cost of portability (how many
platforms you really have to port to), has dropped dramatically. Bytecode
should be going away, but the architectural mistakes of Java and C#/.Net
are going to preserve it for some time to come :(

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Could I use Java or c#? [was: Re: re-writing college books]

2006-11-13 Thread Crispin Cowan
mikeiscool wrote:
 On 11/14/06, Leichter, Jerry [EMAIL PROTECTED] wrote:
   
 The joke we used to make was:  The promise of Java was "Write once,
 run everywhere."  What we found was "Write once, debug everywhere."
 Then came the Swing patches, which would cause old bugs to re-appear,
 or suddenly make old workarounds cause problems.  So the real message
 of Java is "Write once, debug everywhere - forever."

 Now, I'm exaggerating for effect.  There are Java programs, even quite
 substantial Java programs, that run on multiple platforms with no
 problems and no special porting efforts.  (Hell, there are C programs
 with the same property!)  But there are also Java programs that
 cause no end of porting grief.  It's certainly much more common to
 see porting problems with C than with Java, but don't kid yourself:
 Writing in Java doesn't guarantee you that there will be no platform
 issues.
 
 True, but that doesn't mean runtime portability isn't a good thing to aim for.
   
It means that compromising performance to obtain runtime portability
that does not actually exist is a poor bargain.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Could I use Java or c#? [was: Re: re-writing college books]

2006-11-12 Thread Crispin Cowan
Al Eridani wrote:
 On 11/9/06, Crispin Cowan [EMAIL PROTECTED] wrote:
   
 Prior to Java, resorting to compiling to byte code (e.g. P-code back in
 the Pascal days) was considered a lame kludge because the language
 developers couldn't be bothered to write a real compiler.
 
 Post-Java, resorting to compiling to machine code is considered a lame
 kludge because the language developers cannot be bothered to write a
 real optimizer.
   
I don't see what a bytecode intermediate stage has to do with a real
optimizer. Very sophisticated optimizers have existed for native code
generators for a very long time.

Bytecode interpreter performance blows goats, so I'm going to assume you
are referring to JIT. The first order effect of JIT is slow startup
time, but that's not an advantage either. So you must be claiming that
dynamic profiling (using runtime behavior to optimize code) is a major
advantage. It had better be, because the time constraints of doing your
optimization at JIT time restrict the amount of optimization you can do
vs. with a native code generator that gets to run off-line for as long
as it needs to.

But yes, dynamic profiling can be an advantage. However, its use is not
restricted to bytecode systems. VMware, the Transmeta CPU, and DEC's
FX!32 (virtual machine emulation to run x86 code on Alpha CPUs) use
dynamic translation to optimize performance. It works, in that those
systems all do gain performance from dynamic profiling, but note also
the reputation that they all have for speed: poor.

And then there's "write once, run anywhere". Yeah ... right. I've run
Java applets, and Javascript applets, and the latter are vastly superior
for performance; worse, all too often the Java applets are not run
anywhere, they only run on very specific JVM implementations.

There's the nice property that bytecode can be type safe. I really like
that. But the bytecode checker is slow; do people really run it
habitually? More important; is type safety a valuable property for
*untrusted code* that you are going to have to sandbox anyway?

So I give up; what is it that's so great about bytecode? It looks a
*lot* like the Emperor is not wearing clothes to me.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Could I use Java or c#? [was: Re: re-writing college books]

2006-11-09 Thread Crispin Cowan
ljknews wrote:
 At 4:18 PM +0100 11/9/06, SZALAY Attila wrote:
   
 Hi Al,

 On Thu, 2006-11-09 at 08:47 -0500, ljknews wrote:
 
 I think you are mixing the issue of Java vs. C* with the issue of
 interpreters vs compiled languages.
   
I agree with LJ: language issues aside, I detest bytecode interpreters.
Prior to Java, resorting to compiling to byte code (e.g. P-code back in
the Pascal days) was considered a lame kludge because the language
developers couldn't be bothered to write a real compiler. The innovation
of Java was to describe this as a feature instead of a bug :)

 Yes, you are totally right. Sorry.

 But I have not seen java or c# compiler.
 
For Java, look at GCJ http://www.gnu.org/software/gcc/java/. It can
compile Java source code to Java bytecode (class files) or directly to
native machine code, and Java bytecode to native machine code.

For C#, the Mono compiler release notes
http://www.mono-project.com/using/relnotes/1.0-features.html say that it
has an advanced native optimizing compiler available for x86, SPARC,
s390 and PowerPC, in both an ahead-of-time (AOT) compilation mode (to
reduce startup time and take advantage of all available optimizations)
and a Just-in-Time (JIT) compilation mode.

However, having native code generation is different from having good
support in GDB for your generated code :) Without GDB support, the
debugger will treat your binaries as if they were written in hand
assembly, and not be able to relate core dumps to high-level constructs
like variables and lines of source code. Current status:

* For Java: From the GCJ FAQ http://gcc.gnu.org/java/faq.html#1_6,
  gdb 5.0 ftp://ftp.gnu.org/pub/gnu/gdb/ includes support for
  debugging gcj-compiled Java programs. For more information please
  read Java Debugging with gdb http://gcc.gnu.org/java/gdb.html.
* For C#: There is a Mono Debugger
  http://www.mono-project.com/Debugging, but it is not complete.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] re-writing college books [was: Re: A banner year for software bugs | Tech News on ZDNet]

2006-11-03 Thread Crispin Cowan
David Crocker wrote:
 Unfortunately, there are at least two situations in which C++ is a more 
 suitable
 alternative to Java and C#:

 - Where performance is critical. Run time of C# code (using the faster .NET 
 2.0
 runtime) can be as much as double the run time of a C++ version of the same
 algorithm. Try telling a large company that it must double the size of its
 compute farms so you can switch to a better programming language!

 - In hard real-time applications where garbage collection pauses cannot be
 tolerated.
   
Except that in both of those cases, C++ is not appropriate either. That
is a case for C.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Why Shouldn't I use C++?

2006-11-02 Thread Crispin Cowan
Ben Corneau wrote:
 From time to time on this list, the recommendation is made to never use C++
 when given a choice (most recently by Crispin Cowan in the re-writing
 college books thread). This is a recommendation I do not understand. Now,
 I'm not an expert C++ programmer or Java or C# programmer and as you may
 have guessed based on the question, I'm not an expert on secure coding
 either. I'm also not disagreeing with the recommendation; I would just like
 a better understanding.

 I understand that C++ allows unsafe operations, like buffer overflows.
 However, if you are a halfway decent C++ programmer buffer overflows can
 easily be avoided, true? If you use the STL containers and follow basic good
 programming practices of C++ instead of using C-Arrays and pointer
 arithmetic then the unsafe C features are no longer an issue?

 C and C++ are very different. Using C++ like C is arguably unsafe, but when
 it's used as it was intended can't C++ too be considered for secure
 programming?
   
No, it cannot.

C++ is no more safe than C. C++ still supports many undefined
operations, which is what makes a language unsafe. No way can C++ be
considered a secure programming language.

If you need a lean, small language for doing embedded or kernel stuff,
then use C; you cannot afford the bloat of C++, so it is not appropriate.

If you need a powerful, abstract language for building complex
applications, then use C# or Java (or ML, or Haskell). They provide all
of the abstraction and programming convenience of C++, and they also
provide type safety. This means that there are no undefined operations,
which is what makes them secure programming languages.
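
(A trivial, hypothetical illustration of "no undefined operations": the
same off-by-one that silently corrupts memory in C/C++ has exactly one
defined outcome in Java:)

public class Bounds {
    public static void main(String[] args) {
        int[] buf = new int[4];
        try {
            for (int i = 0; i <= buf.length; i++) { // off-by-one bug
                buf[i] = i;
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            // Defined behavior: the error is contained and detectable,
            // rather than silent corruption of adjacent memory.
            System.err.println("bug caught: " + e);
        }
    }
}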

There is no excuse for *choosing* C++, ever. Always avoid it. The only
excuse for *using* C++ is that some doofus before you chose it and you
have to live with the legacy code :)

So why does C++ exist? Because technology has moved on. 25 years ago, when
C++ was invented, there was not a great supply of well-developed type
safe object-oriented programming languages. So C++ seemed like an
incremental improvement over C when it was introduced in the early
1980s. It did provide an improvement over C for developing large
applications, where development costs due to complexity were the big
problem, and bloat could be afforded.

But that lunch has now been eaten by the type safe OOP languages of Java
and C#. They are strictly better than C++ at complex applications, so
there really is no excuse for using C++ to write new application code.

And there never was an excuse for using C++ to write kernel or embedded
code. You cannot afford the bloat of C++ there, and if your kernel is so
complex that you need OOP to be able to program it, then your kernel
design is broken anyway.

I suppose there should be an IMHO in here somewhere in a rant like
this. Feel free to insert it anywhere you like :)

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] re-writing college books - erm.. ahm...

2006-10-29 Thread Crispin Cowan
Gadi Evron wrote:
 For argument sake, let's assume there are 100.

 How about campaigning for a secure coding chapter to be added to these
 semester, erm, world-wide?

 Nothing is ever easy, but we have to start somewhere. I don't see why this
 is a bad idea. Yes, it takes time. Yes, it will have a much bigger impact.
   
It is not a bad idea. But it clearly is not sufficient. Why are you
assuming that it is not already being tried? The problem is that it is
being tried with the usual degree of effectiveness, i.e. unevenly.
Saying "let's try it" is redundant, because that is already going on,
just not enough. To make it more, one would have to convince the people
who are currently not doing it, or doing it badly, to do better, and
they (by definition) are not listening.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


[SC-L] NDSS CFP Due September 10th

2006-09-06 Thread Crispin Cowan
Security researchers with new results may be interested to know that the
CFP deadline for NDSS is this Sunday, September 10th:
http://www.isoc.org/isoc/conferences/ndss/07/cfp.shtml

NDSS is a high quality academic peer reviewed conference in computer
security. Traditionally focused on network security, NDSS now covers all
aspects of computer security. This year we have a special interest in
practical security issues, and we will be interleaving the peer reviewed
technical papers with invited talk presentations from the hacker
community on the leading edge of security attacks. We expect the
blending of the (largely defense oriented) academic security community
with the (often attack oriented) hacker community to produce both
interesting presentations and interesting hall track conversations.

Please consider submitting your papers by this Sunday, and also consider
attending NDSS next February 28th - March 2nd in San Diego.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Bumper sticker definition of secure software

2006-07-17 Thread Crispin Cowan
mikeiscool wrote:
 On 7/17/06, Crispin Cowan [EMAIL PROTECTED] wrote:
   Goertzel Karen wrote:
  I've been struggling for a while to synthesise a definition of secure
  software that is short and sweet, yet accurate and comprehensive.

 My favorite is by Ivan Arce, CTO of Core Software, coming out of a
 discussion between him and me on a mailing list about 5 years ago:

 "Reliable software does what it is supposed to do. Secure software
 does what it is supposed to do, and nothing else."

 and what if it's supposed to take unsanitized input and send it into
 a sql database using the administrator's account?

 is that secure?
"Supposed to" goes to intent. If it is a bug that allows this, then it
was not intentional. If it was intended, then (from this description) it
was likely a Trojan Horse, and it is "secure" from the perspective of the
attacker who put it there.

IMHO, bumper sticker slogans are necessarily short and glib. There isn't
room to put in all the qualifications and caveats to make it a perfectly
precise statement. As such, mincing words over it is a futile exercise.

Or you could just print a technical paper on a bumper sticker, in really
small font :)

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Necessity is the mother of invention ... except for pure math

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Bumper sticker definition of secure software

2006-07-17 Thread Crispin Cowan
mikeiscool wrote:
 On 7/17/06, Crispin Cowan [EMAIL PROTECTED] wrote:
 "Supposed to" goes to intent.
 I don't know. I think there is a difference between "this does what
 it's supposed to do" and "this has no design faults". That's all I was
 trying to highlight.
The difference between "supposed to", "design flaw", and "implementation
flaw" is entirely dependent on your level of abstraction:

* Executive: "build a thingie that lets good guys in and keeps bad
  guys out."
* Director: "build an authentication engine that uses 2-factor
  tokens to authenticate users and only then lets them in."
* Manager: "use OpenSSL and this piece of glue to implement that
  2-factor thingie."
* Coder: main() { ... :)

Errors can occur at any level of translation. When the system does
something surprising, the guy at the top can claim that it wasn't
"supposed to" do that, and if you dig hard enough, you will discover
*some* layer of abstraction where the vulnerability violates the upper
intent, but not the lower intent. Hence the bug.

Some example bugs at each level:

* Executive: forgot to specify who is a good guy
* Director: forgot to provide complete mediation, so the attacker
  could bypass the authenticator.
* Manager: the glue thingie allowed proper authentication tokens,
  but also allowed tokens with a string value of "0" (a toy sketch
  of this one follows the list).
* Coder: gets(token); ...
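
(The manager-level glue bug, sketched in Java; the "0" sentinel
convention and all names are invented for illustration:)

public class TokenGlue {
    private static final String NO_TOKEN = "0"; // default: "no token issued"

    // Buggy glue: an unauthenticated session's stored token defaults to
    // "0", so an attacker presenting the literal token "0" compares equal.
    static boolean validBuggy(String presented, String stored) {
        return presented.equals(stored);
    }

    // Fixed: the "no token" sentinel can never satisfy the check.
    static boolean validFixed(String presented, String stored) {
        return !NO_TOKEN.equals(stored) && presented.equals(stored);
    }

    public static void main(String[] args) {
        String stored = NO_TOKEN; // session where no token was ever issued
        System.out.println(validBuggy("0", stored)); // true  -- the flaw
        System.out.println(validFixed("0", stored)); // false
    }
}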

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Necessity is the mother of invention ... except for pure math

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Bumper sticker definition of secure software

2006-07-16 Thread Crispin Cowan
Goertzel Karen wrote:
  I've been struggling for a while to synthesise a
definition of secure software that is short and sweet, yet accurate and
comprehensive.

My favorite is by Ivan Arce, CTO of Core Software, coming out of a
discussion between him and me on a mailing list about 5 years ago:
"Reliable software does what it is supposed to do. Secure
software does what it is supposed to do, and nothing else."

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Necessity is the mother of invention ... except for pure math



___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Dr. Dobb's | Quick-Kill Project Management | June 30, 2006

2006-07-15 Thread Crispin Cowan
Wall, Kevin wrote:
 4) Know your Brooks' _Mythical Man-Month_. Management almost certainly
    will offer to give you more developers/testers/etc. This is almost
    always a bad ROI since you will spend more time bringing those
    individuals up-to-speed on your project than you will get back
    in productivity.
One of the most interesting aspects of the Open Source phenomenon is that
open source projects, esp. the Linux kernel, seem to be able to violate
most of Brooks' laws with impunity. Linus has achieved absurd levels of
software development parallelism, using a very loosely knit team of
people with different cultures, languages, and social agendas, most of
whom have never met each other. Brooks says this should be an
unmitigated disaster, yet it succeeds. Go figure :)

How Linus does this is open to lively debate. That he achieves it is
pretty hard to dispute.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Necessity is the mother of invention ... except for pure math


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Ajax one panel

2006-05-24 Thread Crispin Cowan
Gary McGraw wrote:
 Btw, Bill also said they tried twice to build an OS on Java and failed both 
 times.  We both agree that a type safe OS will happen one day.
   
Did he ever articulate what happened to these OSes? I recall a
presentation at OSDI 1996 by a Sun executive talking about JavaOS and
the spiffy new thin clients that Sun was going to introduce. He talked
about implementing the TCP/IP stack in pure Java, even with the problems
of type safety in marshalling raw data.

I had the impression that JavaOS failed for marketing reasons, not
technical. But that impression was formed from hearing the OSDI
presentation that described implementing JavaOS in the past tense.

So what was the real reason?

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Re: [Owasp-dotnet] RE: 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-04-05 Thread Crispin Cowan
Pascal Meunier wrote:
 AppArmor sounds like an excellent alternative to creating a VMWare image for
 every application you want to run but distrust, although I can think of
 cases where a VMWare image would be safer.  For example, the
 installer/uninstaller may have vulnerabilities, may be dirty (it causes
 problems by modifying things that affect other applications, or doesn't
 cleanup correctly), or phones home, etc...  I guess you could make a profile
 for the installer as well (I'm not very enthusiastic about that idea
 though).  Also, I suspect that what you need to allow in some profiles is
 possibly sufficient to enable some level of malicious activity.  It's
 regrettable that it is only available for Suse Linux.
   
That is correct. AppArmor is not a virtualization layer, and cannot be
used to create virtual copies of files for maybe-good/maybe-bad software
to mess with. Moreover, the LSM interface in the kernel (which both
AppArmor and SELinux depend on) is also not capable of virtualization.
There were requests for virtualization features during the LSM design
phase, but we decided that we wanted to keep LSM as unintrusive as
possible so as to maximize the chance of LSM being accepted by the 
upstream kernel.

 Perhaps one of the AppArmor mailing lists would be more appropriate to ask
 this,
apparmor-dev cc'd

  but as you posted an example profile with capability setuid, I must
 admit I am curious as to why an email client needs that.
Well now that is a very good question, but it has nothing to do with
AppArmor. The AppArmor learning mode just records the actions that the
application performs. With or without AppArmor, the Thunderbird mail
client is using cap_setuid. AppArmor gives you the opportunity to *deny*
that capability, so you can try blocking it and find out. But for
documentation on why Thunderbird needs it, you would have to look at
mozilla.org not the AppArmor pages.

   I tried looking up
 relevant documentation on the Novell site, but it seems I was unlucky and
 tried during a maintenance period because pages were loading erratically.  I
 finally got to the 3.0 Building Novell AppArmor Profiles page but it was
 empty.  I would appreciate receiving more information about it.  I am also
 interested in the Linux Security Modules Interface.
   
For an overview, look here:

Linux Security Modules: General Security Support for the Linux
Kernel. Chris Wright, Crispin Cowan, Stephen Smalley, James Morris,
and Greg Kroah-Hartman. Presented at the 11^th USENIX Security
Symposium http://www.usenix.org/events/sec02/, San Francisco, CA,
August 2002. PDF http://crispincowan.com/%7Ecrispin/lsm-usenix02.pdf.

However, this paper is only a general overview, and is now far out of
date. For an accurate view, look at the kernel source code.

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


[SC-L] Segments, eh Smithers?

2006-04-04 Thread Crispin Cowan
 EPL compiler, which seems to
 have avoided some of the characteristic programming errors that are
 still common today.  No software was written until there was an approved
 specification, with well defined interfaces and exception conditions
 that were explicitly characterized in EPL.  And so on into a visionary
 sense of a future that has been largely lost for many perceived reasons,
 some of which are bogus, some of which are just seriously short-sighted.

 *** END SOAPBOX ***

 I'm sure this message may generate all sorts of Ifs and Ands and Buts.
 But the Butt we are kicking is our own.

 Cheers!  PGN
 ___
 Secure Coding mailing list (SC-L)
 SC-L@securecoding.org
 List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
 List charter available at - http://www.securecoding.org/list/charter.php
   

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Re: [Owasp-dotnet] RE: 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-04-03 Thread Crispin Cowan
Dinis Cruz wrote:
 Jeff Williams wrote:  
 I'm a huge fan of sandboxes, but Dinis is right, the market hasn't really
 gotten there yet. No question that it would help if it was possible to run
 complex software like a browser inside a sandbox that restricted its ability
 to do bad things, even if there are vulnerabilities (or worse -- malicious
 code) in them.  
 Absolutely, and do you see any other alternative? (or we should just
 continue to TRUST every bit of code that is executed in our computers?
 and TRUST every single developer/entity that had access to that code
 during its development and deployment?)
   
This is exactly what AppArmor http://en.opensuse.org/Apparmor was
designed for: conveniently confining applications to only be able to do
what they need to do. Least privilege, applied to applications.

I am running this mail client (Thunderbird) from within a sandbox (we
call it a "profile"). I have attached this policy, which should be
pretty self-explanatory.

 But, if you've ever tried to configure the Java security policy file, use
 JAAS, or implement the SecurityManager interface, you know that it's *way*
 too hard to implement a tight policy this way.
 And .Net has exactly the same problem. It is super complex to create a
 .Net application that can be executed in a secure Partially Trusted Sandbox.
   
This is where AppArmor really stands out. You can build an application
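(For readers who have not fought with one, a small hypothetical fragment
of a java.policy file illustrates the configuration burden being
described; paths and host are invented, and a real application needs one
permission line per resource it touches:)

// Hypothetical java.policy fragment, in the syntax being discussed.
grant codeBase "file:/opt/webapp/lib/-" {
  permission java.io.FilePermission "/var/webapp/data/-", "read,write";
  permission java.net.SocketPermission "db.internal:5432", "connect";
  permission java.util.PropertyPermission "user.timezone", "read";
};
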
profile in minutes. Here is a video
ftp://ftp.belnet.be/pub/mirror/FOSDEM/FOSDEM2006-apparmor.avi if me
demoing AppArmor in a presentation at FOSDEM 2006
http://www.fosdem.org/2006. The video is an hour-long lecture on
AppArmor, and for the impatient, the demo is from 16:30 through 26:00.

 And only the
 developer of the software could reasonably attempt it, which is backwards,
 because it's the *user* who really needs it right. 
 
 Yes, it is the user's responsibility (i.e. its IT Security and Server
 Admin staff) to define the secure environment (i.e the Sandbox) that 3rd
 party or internal-developed applications are allocated inside their data
 center,
   
It is very feasible for a user, not a developer, to build an AppArmor
profile. Prior requirements for using AppArmor are:

* know how to use bash
* know how to use chmod
* know how to run the application in question


 It's possible that sandboxes are going the way of multilevel security (MLS).
 A sort of ivory tower idea that's too complex to implement or use. 
 I don't agree that the problem is too complex. What we have today is
 very complex architectures / systems with too many interconnections.
   
"Too many interconnections" is a Windows problem. In the UNIX world,
where (nearly) everything is a file, it is much easier to build
effective application containment policies.

 Simplify the lot, get enough resources with the correct focus involved,
 are you will see that it is doable.
   
Indeed :)

 Basically, give the user data (as in information) that he can digest and
 understand, and you will see the user(s) making the correct decision(s).
   
Well, maybe. Users are notorious for not making the right decision.
AppArmor lets the site admin create the policy and distribute it to
users. Of course that assumes we are talking about Linux users :)

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com

# vim:syntax=subdomain
# Last Modified: Sun Apr  2 15:09:49 2006
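# Access modes used below: r = read, w = write, l = link;
# ix = execute the target program inheriting this profile,
# px = execute the target program under its own separate profile.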
/opt/MozillaThunderbird/lib/thunderbird-bin {
  #include abstractions/X
  #include abstractions/base
  #include abstractions/bash
  #include abstractions/consoles
  #include abstractions/fonts
  #include abstractions/gnome
  #include abstractions/kde
  #include abstractions/nameservice
  #include abstractions/perl
  #include abstractions/user-mail
  #include abstractions/user-tmp

  capability ipc_lock,
  capability setuid,

  /bin/basename px,
  /bin/bash ix,
  /bin/grep ixr,
  /bin/netstat px,
  /etc/mailcap r,
  /etc/mime.types r,
  /etc/opt/gnome/gnome-vfs-2.0/modules r,
  /etc/opt/gnome/gnome-vfs-2.0/modules/* r,
  /etc/opt/gnome/pango/pango.modules r,
  /home/** rw,
  /home/*/.gnupg/* lrw,
  /home/*/.thunderbird/** lrw,
  /opt/MozillaFirefox/bin/firefox.sh pxr,
  /opt/MozillaFirefox/lib/mozilla-xremote-client ixr,
  /opt/MozillaThunderbird/lib/** r,
  /opt/gnome/bin/file-roller ixr,
  /opt/gnome/bin/gedit ixr,
  /opt/gnome/bin/gimp-remote-2.2 ixr,
  /usr/X11R6/bin/OOo-wrapper px,
  /usr/X11R6/bin/acroread px,
  /usr/X11R6/bin/xv px,
  /usr/X11R6/lib/Acrobat7/Resource/Font/** r,
  /usr/bin/display px,
  /usr/bin/gpg ix,
  /usr/bin/mplayer px,
  /usr/bin/ooo-wrapper ixr,
  /usr/bin/perl ix,
  /usr/lib/firefox/firefox.sh px,
  /usr/lib/jvm/java-1.4.2-sun-1.4.2.06/jre/lib/fonts/** r,
  /usr/lib/ooo-2.0/program/soffice px,
  /usr/lib/ooo-2.0/share/fonts/** r,
}
___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions

Re: [SC-L] Bugs and flaws

2006-02-07 Thread Crispin Cowan
Thanks for the very detailed and informative explanation.

However, I still think it sounds like IE has too large of an attack
surface :) It still seems to be the case that IE can be persuaded to
execute any of a large amount of code based on its raw (web) input, with
(fairly) arbitrary parameters, and this large attack surface allows
attackers to find vulnerabilities in any of the code that IE calls out to.

Crispin

Dana Epp wrote:
 I think I would word that differently. The design defect was when
 Microsoft decided to allow meta data to call GDI functions.
  
 Around 1990 when this was introduced the threat profile was entirely
 different; the operating system could trust the metadata. Well,
 actually I would argue that they couldn't, but no one knew any better
 yet. At the time SetAbortProc() was an important function to allow for
 print cancellation in the co-operative multitasking environment that
 was Windows 3.0.
  
 To be clear, IE was NOT DIRECTLY vulnerable to the WMF attack vector
 everyone likes to use as a test case for this discussion. IE actually
 refuses to process any type of metadata that supported META_ESCAPE
 records (which SetAbortProc relies on). Hence it's not possible to
 exploit the vulnerability by simply calling a WMF image via HTML. So
 how is IE vulnerable then? It's not actually. The attack vector uses
 IE as a conduit to actually call out to secondary library code that
 will process it. In the case of the exploits that hit the Net,
 attackers used an IFRAME hack to call out to the shell to process it.
 The shell would look up the handler for WMF, which was the Windows
 Picture Viewer that did the processing in shimgvw.dll. When the dll
 processed the WMF, it would convert it to a printable EMF format, and
 bam... we ran into problems.
  
 With the design defect being the fact metadata can call arbitrary GDI
 code, the implementation flaw is the fact applications like IE rely so
 heavily on calling out to secondary libraries that just can't be
 trusted. Even if IE has had a strong code review, it is extremely
 probable that most of the secondary library code has not had the same
 audit scrutiny. This is a weakness to all applications, not just IE.
 When you call out to untrusted code that you don't control, you put
 the application at risk. No different than any other operating system.
 Only problem is Windows is riddled with these potential holes because
 it's sharing so much of the same codebase. And in the past the teams
 rarely talked to each other to figure this out.
  
 Code reuse is one thing, but some of the components in Windows are
 carry over from 15 years ago, and will continue to put us at risk due
 to the implementation flaws that haven't yet been found. But with such
 huge master sources to begin with, it's not something that will be
 fixed overnight.
  
 ---
 Regards,
 Dana Epp [Microsoft Security MVP]
 Blog: http://silverstr.ufies.org/blog/

 
 *From:* [EMAIL PROTECTED] on behalf of Crispin Cowan
 *Sent:* Fri 2/3/2006 12:12 PM
 *To:* Gary McGraw
 *Cc:* Kenneth R. van Wyk; Secure Coding Mailing List
 *Subject:* Re: [SC-L] Bugs and flaws

 Gary McGraw wrote:
  To cycle this all back around to the original posting, lets talk about
  the WMF flaw in particular.  Do we believe that the best way for
  Microsoft to find similar design problems is to do code review?  Or
  should they use a higher level approach?
 
  Were they correct in saying (officially) that flaws such as WMF are hard
  to anticipate?
   
 I have heard some very insightful security researchers from Microsoft
 pushing an abstract notion of attack surface, which is the amount of
 code/data/API/whatever that is exposed to the attacker. To design for
 security, among other things, reduce your attack surface.

 The WMF design defect seems to be that IE has too large of an attack
 surface. There are way too many ways for unauthenticated remote web
 servers to induce the client to run way too much code with parameters
 provided by the attacker. The implementation flaw is that the WMF API in
 particular is vulnerable to malicious content.

 None of which strikes me as surprising, but maybe that's just me :)

 Crispin
 --
 Crispin Cowan, Ph.D. 
 http://crispincowan.com/~crispin/
 Director of Software Engineering, Novell  http://novell.com
 Olympic Games: The Bi-Annual Festival of Corruption


 ___
 Secure Coding mailing list (SC-L)
 SC-L@securecoding.org
 List information, subscriptions, etc -
 http://krvw.com/mailman/listinfo/sc-l
 List charter available at - http://www.securecoding.org/list/charter.php


-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

Re: [SC-L] RE: The role static analysis tools play in uncovering elements of design

2006-02-07 Thread Crispin Cowan
Jeff Williams wrote:
 I think there's a lot more that static analysis can do than what you're
 describing. They're not (necessarily) just fancy pattern matchers.
 ...
 Today's static analysis tools are only starting to help here. Tools focused
 on dumping out a list of vulnerabilities don't work well for me. Too many
 false alarms.  Maybe that's what you meant by 'inhibit'.
   
In the general case, I think that any kind of analysis tool (static
analyzer, fuzzing tool, debugger, whatever) focuses the analyst's
attention on whatever aspects the tool author thought was important.
Whether this is a good or bad thing depends on whether you agree with
the author.

Using no tools at all just imposes a different bias filter, as humans
are (relatively) good at spotting some kinds of patterns, and not others.

Crispin

 --Jeff
  
 Jeff Williams, CEO
 Aspect Security
 http://www.aspectsecurity.com
 email: [EMAIL PROTECTED]
 phone: 410-707-1487
  
 
 From: John Steven [mailto:[EMAIL PROTECTED] 
 Sent: Friday, February 03, 2006 1:40 PM
 To: Jeff Williams; Secure Coding Mailing List
 Subject: The role static analysis tools play in uncovering elements of
 design 

 Jeff,

 An unpopular opinion I’ve held is that static analysis tools, while very
 helpful in finding problems, inhibit a reviewer’s ability to find & collect as
 much information about the structure, flow, and idiom of code’s design as
 the reviewer might find if he/she spelunks the code manually.

 I find it difficult to use tools other than source code navigators (Source
 Insight) and scripts to facilitate my code understanding (at the
 design-level). 

 Perhaps you can give some examples of static analysis library/tool use that
 overcomes my prejudice—or are you referring to the navigator tools as well?

 -
 John Steven   
 Principal, Software Security Group
 Technical Director, Office of the CTO
 703 404 5726 - Direct | 703 727 4034 - Cell
 Cigital Inc.  | [EMAIL PROTECTED]

 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

   
 snipped
 Static analysis tools can help a lot here. Used properly, they can provide
 design-level insight into a software baseline. The huge advantage is that
 it's correct.

 --Jeff 
 snipped
 


 ___
 Secure Coding mailing list (SC-L)
 SC-L@securecoding.org
 List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
 List charter available at - http://www.securecoding.org/list/charter.php
   

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Bugs and flaws

2006-02-03 Thread Crispin Cowan
Gary McGraw wrote:
 To cycle this all back around to the original posting, lets talk about
 the WMF flaw in particular.  Do we believe that the best way for
 Microsoft to find similar design problems is to do code review?  Or
 should they use a higher level approach?

 Were they correct in saying (officially) that flaws such as WMF are hard
 to anticipate? 
   
I have heard some very insightful security researchers from Microsoft
pushing an abstract notion of attack surface, which is the amount of
code/data/API/whatever that is exposed to the attacker. To design for
security, among other things, reduce your attack surface.

The WMF design defect seems to be that IE has too large of an attack
surface. There are way too many ways for unauthenticated remote web
servers to induce the client to run way too much code with parameters
provided by the attacker. The implementation flaw is that the WMF API in
particular is vulnerable to malicious content.

None of which strikes me as surprising, but maybe that's just me :)

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Bugs and flaws

2006-02-02 Thread Crispin Cowan
John Steven wrote:
 Re-reading my post, I realize that it came off as heavy support for
 additional terminology. Truth is, we've found that the easiest way to
 communicate this concept to our Consultants and Clients here at Cigital has
 been to build the two buckets (flaws and bugs).
   
My main problem with this terminology is that I have only ever seen it
coming from Cigital people. The rest of the world seems to treat flaw
and bug as synonyms.

The distinction here is between design flaw and implementation flaw.
There doesn't seem to be anything in these words that suggest one is
larger scale than the other.

From dictionary.com we have:

flaw (flô) n.

   1. An imperfection, often concealed, that impairs soundness: "a flaw
      in the crystal that caused it to shatter." See synonyms at blemish.
   2. A defect or shortcoming in something intangible: "They share the
      character flaw of arrogance."
   3. A defect in a legal document that can render it invalid.

"Bug" is a little more arcane, and the only relevant part is far down
the document where it discusses the history with Grace Hopper:

bug

    An unwanted and unintended property of a program or piece of
    hardware, esp. one that causes it to malfunction. Antonym of
    /feature/. Examples: “There's a bug in the editor: it writes things
    out backwards.” “The system crashed because of a hardware bug.”
    “Fred is a winner, but he has a few bugs” (i.e., Fred is a good guy,
    but he has a few personality problems).

    Historical note: Admiral Grace Hopper (an early computing pioneer
    better known for inventing /COBOL/) liked to tell a story in which a
    technician solved a /glitch/ in the Harvard Mark II machine by
    pulling an actual insect out from between the contacts of one of its
    relays, and she subsequently promulgated /bug/ in its hackish sense
    as a joke about the incident (though, as she was careful to admit,
    she was not there when it happened). For many years the logbook
    associated with the incident and the actual bug in question (a moth)
    sat in a display case at the Naval Surface Warfare Center (NSWC).
    The entire story, with a picture of the logbook and the moth taped
    into it, is recorded in the Annals of the History of Computing,
    Vol. 3, No. 3 (July 1981), pp. 285-286.


 What I was really trying to present was that Security people could stand to
 be a bit more thorough about how they synthesize the results of their
 analysis before they communicate the vulnerabilities they've found, and what
 mitigating strategies they suggest.
   
Definitely. I think there is a deep cultural problem that people who fix
bugs or flaws tend to over-focus on the micro issue, fixing the specific
coding vulnerability, and ignore the larger architectural error that
allows the coding defect to be exploitable and cause damage. In the case
at hand, the WMF bug would be much less dangerous if there were not so
many ways to induce IE to invoke WMF decoding without asking the user.

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Bugs and flaws

2006-02-01 Thread Crispin Cowan
Gary McGraw wrote:
 If the WMF vulnerability teaches us anything, it teaches us that we need
 to pay more attention to flaws.
The flaw in question seems to be validate inputs, i.e. don't just
trust network input (esp. from an untrusted source) to be well-formed.

Of special importance to the Windows family of platforms seems to be the
propensity to do security controls based on the file type extension (the
letters after the dot in the file name, such as .wmf) but to choose the
application to interpret the data based on some magic file typing based
on looking at the content.

My favorite ancient form of this flaw: .rtf files are much safer than
doc files, because the RTF standard does not allow you to attach
VBscript (where VB stands for Virus Broadcast :) while .doc files
do. Unfortunately, this safety feature is nearly useless, because if you
take an infected whatever.doc file, and just *rename* it to whatever.rtf
and send it, then MS Word will cheerfully open the file for you when you
double click on the attachment, ignore the mismatch between the file
extension and the actual file type, and run the fscking VB embedded within.

I am less familiar with the WMF flaw, but it smells like the same thing.

Validate your inputs.

There are automatic tools (taint and equivalent) that will check whether
you have validated your inputs. But they do *not* check the *quality* of
your validation of the input. Doing a consistency check on the file name
extension and the data interpreter type for the file is beyond (most?)
such checkers.
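
As a concrete illustration of such a consistency check, here is a
minimal sketch in C. It is my own code, not anything from Windows; the
only facts it leans on are the magic numbers (RTF text begins with
"{\rtf", and legacy .doc files are OLE compound documents beginning
with the bytes D0 CF 11 E0).

#include <stdio.h>
#include <string.h>

static const unsigned char OLE_MAGIC[4] = { 0xD0, 0xCF, 0x11, 0xE0 };

/* Returns 1 only if the extension and the leading bytes agree. */
int extension_matches_content(const char *path)
{
    const char *ext = strrchr(path, '.');
    unsigned char head[5] = { 0 };
    FILE *f;

    if (ext == NULL || (f = fopen(path, "rb")) == NULL)
        return 0;
    if (fread(head, 1, sizeof head, f) == 0) {
        fclose(f);
        return 0;
    }
    fclose(f);

    if (strcmp(ext, ".rtf") == 0)           /* RTF must start "{\rtf" */
        return memcmp(head, "{\\rtf", 5) == 0;
    if (strcmp(ext, ".doc") == 0)           /* OLE compound file magic */
        return memcmp(head, OLE_MAGIC, 4) == 0;
    return 0;                               /* unknown extension: fail closed */
}

A renamed whatever.doc would fail this test, and the interpreter would
never be chosen by content sniffing in the first place.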

   We spend lots of time talking about
 bugs in software security (witness the perpetual flogging of the buffer
 overflow), but architectural problems are just as important and deserve
 just as much airplay.
   
IMHO the difference between bugs and architecture is just a
continuous grey scale of degree.

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Bugs and flaws

2006-02-01 Thread Crispin Cowan
John Steven wrote:
 I'm not sure there's any value in discussing this minutia further, but here
 goes:
   
We'll let the moderator decide that :)

 1) Crispin, I think you've nailed one thing. The continuum from:

 Architecture -- Design -- Low-level Design -- (to) Implementation

 is a blurry one, and certainly slippery as you move from 'left' to 'right'.
   
Cool.

 But, we all should understand that there's commensurate blur in our analysis
 techniques (aka architecture and code review) to assure that as we sweep
 over software that we uncover both bugs and architectural flaws.
   
Also agreed.

 2) Flaws are different in important ways bugs when it comes to presentation,
 prioritization, and mitigation. Let's explore by physical analog first.
   
I disagree with the word usage. To me, bug and flaw are exactly
synonyms. The distinction being drawn here is between implementation
flaws vs. design flaws. You are just creating confusing jargon to
claim that flaw is somehow more abstract than bug. Flaw ::= defect
::= bug. A vulnerability is a special subset of flaws/defects/bugs that
has the property of being exploitable.

 I nearly fell through one of my consultant's tables as I leaned on it this
 morning. We explored: "Bug or flaw?"
The wording issue aside, at the implementation level you try to
code/implement to prevent flaws, by doing things such as using higher
quality steel (for bolts) and good coding practices (for software). At
the design level, you try to design so as to *mask* flaws by avoiding
single points of failure, doing things such as using 2 bolts (for
tables) and using access controls to limit privilege escalation (for
software).

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Intel turning to hardware for rootkit detection

2005-12-14 Thread Crispin Cowan
Smashguard, if I recall correctly, offers approximately the protection
of existing compiler methods, but with the added fun of requiring
modified (non-existent) hardware.

The referenced hardware in the IEEE article and the intel.com pages
appears to be some descendant of Palladium; it is a hardware integrity
checker/attestation mechanism. A small, hardware-enforced core performs
a chain of crypto-checks prior to boot strapping the BIOS, and then the
OS, and makes itself available to applications. Thus an application can
(more or less) prove to a remote machine that the BIOS, kernel, and
application are in fact the approved versions that the remote machine
wants to see. The closest published work would be Bill Arbaugh's
dissertation and associated papers.
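
For concreteness, here is a minimal sketch of such a chain of checks,
with a toy stand-in hash. Real attestation hardware uses a proper
cryptographic digest inside protected registers, so treat this only as
the shape of the idea, not the mechanism itself.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for a real cryptographic hash (e.g. SHA-1): an FNV-1a fold.
 * Illustrative only. */
static uint64_t extend(uint64_t m, const char *stage)
{
    for (const char *p = stage; *p != '\0'; p++)
        m = (m ^ (uint64_t)(unsigned char)*p) * 0x100000001b3ULL;
    return m;
}

int main(void)
{
    /* The measurement only ever extends: m' = H(m || stage).  Because
     * each stage is folded in order (BIOS, then kernel, then the
     * application), a remote verifier comparing the final value
     * against a known-good one detects any modified stage. */
    uint64_t m = 0xcbf29ce484222325ULL;   /* fixed initial value */
    m = extend(m, "BIOS image");
    m = extend(m, "kernel image");
    m = extend(m, "application image");
    printf("attested measurement: %016llx\n", (unsigned long long)m);
    return 0;
}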

Real security benefit: remote machine can detect that your box has not
been rootkit'd.

Hoarding benefit: remote machine can detect that you are running the
approved DRM-enforcing media player so that (for instance) it can
enforce that you only get to play that movie the specified number of
times and you don't get to copy it.

Malignant effect: Document master at an organization can make all
documents transient, so that whistle-blowers can no longer access the
documents they are trying to use to blow the whistle on such as, say,
Enron, WorldCom, or Abu Grab-ass.

Be very, very careful about tolerating strong-attestation hardware. The
implications are profound, for both good and evil.

Crispin

mudge wrote:

 There was a lady who went to Purdue, I believe her name was Carla
 Brodley. She is a professor at Tufts currently. One of her projects,
 I'm not sure whether it is ongoing or historic, was surrounding
 hardware-based stack protection. There wasn't any protection against
 heap / pointer overflows, and I don't know how it fares when stack
 trampoline activities occur (which can be valid, but are rare outside of
 older Objective-C code).

 www.smashguard.org and https://engineering.purdue.edu/
 ResearchGroups/SmashGuard/smash.html have more data.

 I'm not sure if this is a similar solution to what Intel might be
 pursuing. I believe the original smashguard work was based entirely
 on Alpha chips.

 cheers,

 .mudge


 On Dec 13, 2005, at 15:19, Michael S Hines wrote:

 Doesn't a hardware 'feature' such as this lock software into a
 two-state model
 (user/priv)?

 Who's to say that model is the best?  Will that be the model of the
 future? 

 Wouldn't a two-state software model that works be more effective?  

 It's easier to change (patch) software than to rewire hardware
 (figuratively speaking).

 Just wondering...

 Mike Hines
 ---
 Michael S Hines
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] 

 ___
 Secure Coding mailing list (SC-L)
 SC-L@securecoding.org mailto:SC-L@securecoding.org
 List information, subscriptions, etc -
 http://krvw.com/mailman/listinfo/sc-l
 List charter available at - http://www.securecoding.org/list/charter.php

 

 ___
 Secure Coding mailing list (SC-L)
 SC-L@securecoding.org
 List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
 List charter available at - http://www.securecoding.org/list/charter.php
   

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Why Software Will Continue to Be Vulnerable

2005-05-03 Thread Crispin Cowan
ljknews wrote:
At 8:05 AM -0400 5/2/05, Kenneth R. van Wyk wrote:
  
Yet, despite that pessimistic outlook -- and the survey that
forked this thread -- I do think that companies are demanding
more in software security, even though consumers are not.

Companies value time spent on cleanup more than consumers do.
  

And in this morning's mailbox, we see some evidence to support the claim
that business is considerably less impressed with software quality:
http://www.informationweek.com/story/showArticle.jhtml;jsessionid=IMYCZLJPHKPNMQSNDBCSKH0CJUMEKJVN?articleID=161601417

Crispin
-- 
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com




Re: [SC-L] Why Software Will Continue to Be Vulnerable

2005-05-01 Thread Crispin Cowan
Greenarrow 1 wrote:
But, the problem I see with this survey is they only polled 1,000 out of
what, over 5 million users in the USofA.
Political pollsters regularly sample 1000 Americans to get a prediction
of 100,000 voters that is accurate to 5% or so. 1000 people should be
sufficient to sample software users, unless there is something else
wrong with the sample or the questions.

  Just randomly suppose they accidentally picked everyone that
has superb software and hardware on their systems (unlikely but probable). 
  
Just what does "unlikely but probable" mean?

To suppose this, we have to think there is something wrong with the
sample or the questions. What is it you think is wrong with the sample
or the questions? Or is it just that you find the result to be improbable?

On repairing systems for my customers, I say only 1 out of 20 are
satisfied with their programs, so who is right, the Harris Poll or my
customers?
No, *there* is a skewed sample: the set of people currently experiencing
a problem so severe that they have to call in a professional to repair
it. Under just about any circumstance, I would expect this group to be
highly unsatisfied with vendors. It's like taking a survey of auto
quality in the waiting room of a garage.

What really mystifies me is the analogy to fire insurance. *Everyone*
keeps their fire insurance up to date, it costs money, and it protects
against a very rare event that most fire insurance customers have never
experienced. What is it that makes consumers exercise prudent good sense
for fire insurance, but not in selecting software?

The only factor I can think of is that mortgage carriers insist that
their customers maintain fire insurance. No fire insurance, no loan, and
most people cannot afford to pay cash for their home. So to impose a
prudence requirement on software consumers, perhaps some outside force
has to impose a pay to play requirement on them. Who could that be?

ISPs, perhaps? Similar to mortgage companies, ISPs pay a lot of the cost
of consumer software insecurity: vulnerable software leads to virus
epidemics, and to botnets of spam relays. Perhaps if ISPs recognized the
cost of consumer insecurity on their operations, they might start
imposing minimum standards on consumer connections, and cutting them off
if they fall below that standard. Larry Seltzer has advocated a form of
this, that ISPs should block port 25 for consumer broadband in most
cases (http://www.eweek.com/article2/0,1759,1784276,00.asp). There are
several other actions that ISPs could take:

* egress filtering on all outbound connections to block source IP
  spoofing
* deploy NIPS on outbound traffic and disconnect customers who are
  emitting attacks
* require customers to have some kind of personal firewall or host
  intrusion prevention

The catch: the above moves are all costly and, to some degree,
anti-competitive, in that they make the consumer's Internet connection
less convenient. So to be successful, ISPs would have to position these
moves as a security enhancement for the consumer, which AOL is doing
with bundled antivirus service as advertised on TV. ISPs could also
position a non-restricted account as an expert account and charge
extra for it.

Crispin
-- 
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com




Re: [SC-L] Theoretical question about vulnerabilities

2005-04-15 Thread Crispin Cowan
David Crocker wrote:
Well, that approach is certainly better than not guarding against buffer
overflows at all. However, I maintain it is grossly inferior to the approach we
use, which is to prove that all array accesses are within bounds.
Proving that all array accesses are within bounds would seem to be 
Turing undecidable. Either you are not proving what you say you are 
proving, or your programs are not full Turing machines.

Proof: insert diagonalization argument here :)
Issue: Some people may regard diagonalized programs as a contrivance, 
and are only interested in correctness proofs for "real" programs (for 
some value of "real").

Crispin's rebuttal: Suppose I want to prove that your program checker 
does not have any illegal array references ...

What exactly
is your program going to do when it detects an array bound violation at
run-time?
Hermes' kludge to address this was two-fold:
  1. There are no arrays. Rather, there are relational tables, and you
 can extract a row based on a field value. You can programatically
 get a table to act much like an array by having a field with a
 unique index number, 1, 2, 3, etc.
  2. If you try to extract a row from a table that does not have a
 matching value, then you get an exception. Exceptions are thrown
 and caught up the call chain the way most modern (Java etc.)
 languages do it.
Yes, this is a kludge because it ultimately means a run-time exception, 
which is just a pretty way of handling a seg fault.
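
A rough rendering of that kludge in C, purely illustrative (real Hermes
syntax is nothing like this): the only way to reach a row is lookup by
field value, and a failed lookup surfaces as an explicit out-of-band
result instead of a stray memory access.

#include <stddef.h>

struct row { int key; int value; };

/* Look up a row by field value; NULL plays the role of the thrown
 * exception when no row matches.  There is no raw indexing at all,
 * so there is nothing to overflow. */
const struct row *table_get(const struct row *t, size_t n, int key)
{
    for (size_t i = 0; i < n; i++)
        if (t[i].key == key)
            return &t[i];
    return NULL;   /* caller must handle this, as one would catch an exception */
}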

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Theoretical question about vulnerabilities

2005-04-13 Thread Crispin Cowan
der Mouse wrote:
[B]uffer overflows can always be avoided, because if there is ANY
input whatsoever that can produce a buffer overflow, the proofs will
fail and the problem will be identified.

Then either (a) there exist programs which never access out-of-bounds
but which the checker incorrectly flags as doing so, or (b) there exist
programs for which the checker never terminates (quite possibly both).
(This is simply the Halting Theorem rephrased.)

Precisely because statically proven array bounds checking is Turing
Hard, that is not how such languages work.
Rather, languages that guarantee array bounds insert dynamic checks on
every array reference, and then use static checking to remove all of the
dynamic checks that can be proven to be unnecessary. For instance, it is
often the case that a tight inner loop has hard-coded static bounds, and
so a static checker can prove that the dynamic checks can be removed
from the inner loop, hoisting them to the outer loop and saving a large
proportion of the execution cost of dynamic array checks.
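
To make that concrete, here is a minimal sketch of the two cases; this
is my own illustration, not any particular compiler's transformation.

#include <stdio.h>

#define N 100

int main(void)
{
    int a[N];
    size_t n;

    /* Hard-coded static bounds: a checker can prove 0 <= i < N on
     * every iteration, so no dynamic check is needed inside the loop. */
    for (size_t i = 0; i < N; i++)
        a[i] = (int)i;

    /* Bound comes from input, so the check cannot be removed; it can
     * be hoisted, though: one test outside the loop stands in for a
     * test on every iteration. */
    if (scanf("%zu", &n) != 1)
        return 1;
    if (n > N) {
        fprintf(stderr, "array bounds violation\n");
        return 1;
    }
    for (size_t i = 0; i < n; i++)
        a[i] = 0;

    return 0;
}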
How much of this optimization can be done is arguable:
   * The Jones & Kelly GCC enhancement that does full array bounds
     checking makes (nearly?) no attempt at this optimization, and
     suffers slowdowns of 10X to 30X on real applications.
   * The Bounded Pointers GCC enhancement that does full array bounds
 checking but with a funky incompatible implementation that makes
 pointers bigger than a machine word, does some of these
 optimizations and suffers a slowdown of 3X to 5X. Some have argued
 that it can be improved from there, but how much remains to be seen.
   * Java compilers get the advantage of a language that was actually
 designed for type safety, in contrast with C that aggressively
 makes static type checking difficult. The last data I remember on
 Java is that turning array bounds checking on and off makes a 30%
 difference in performance.
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Theoretical question about vulnerabilities

2005-04-13 Thread Crispin Cowan
David Crocker wrote:
Exactly. I'm not interested in trying to write a program prover that will prove
that an arbitrary program is correct, if indeed it is. I am only interested in
proving that well-structured programs are correct.
The Hermes programming language took this approach
(http://www.research.ibm.com/people/d/dfb/hermes.html).
Hermes proved a safety property called Type State Checking in the course
of compiling programs. Type State offers very nice safety properties for
correctness, including proving that no variable will be used before it
is initialized. But the Hermes Type State Checker was not formally
complete; there were valid programs that the checker could not *prove*
were correct, and so it would reject them. Here's an example of a case
it cannot prove:
if X then
   Y <- initial value
endif
...
if X then
   Z <- Y + 1
endif
The above code is correct in that Y's value is taken only when it has
been initialized. But to prove the code correct, an analyzer would have
to be flow sensitive, which is hard to do.
Here's where it gets interesting. The authors of Type State went and
analyzed a big pile of existing code that was in production but that the
Type State checker failed to prove correct. In (nearly?) every case,
they found a *latent bug* associated with the code that failed to pass
the Checker. We can infer from that result that code that depends on
flow sensitivity for its correctness is hard for humans to reason about,
and therefore likely to be wrong.
Disclaimer: I worked on Hermes as an intern at the IBM Watson lab waay
back in 1991 and 1992. Hermes is my favorite type safe programming
language, but given the dearth of implementations, applications, and
programmers, that is of little practical interest :)
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Theoretical question about vulnerabilities

2005-04-12 Thread Crispin Cowan
David Crocker wrote:
3. Cross-site scripting. This is a particular form of HTML injection and would
be caught by the proof process in a similar way to SQL injection, provided that
the specification included a notion of the generated HTML being well-formed. If
that was missing from the specification, then HTML injection would not be
caught.

XSS occurs where client A can feed input to Server B such that client C
will accept and trust the input. The correct specification is that
Server B should do a perfect job of preventing clients from uploading
content that is damaging to other clients. I submit that this is
infeasible without perfect knowledge of the vulnerabilities of all the
possible clients. This seems to be begging the definition of "prove
correct" pretty hard.
You can do a pretty good job of preventing XSS by stripping user posts
of all interesting features and permitting only basic HTML. But this
still does not completely eliminate XSS, as you cannot a priori know
about all the possible buffer overflows, etc., of every client that will
come to visit, and basic HTML still allows for some freaky stuff, e.g.
very long labels.
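
One related (and equally partial) measure is escaping on output rather
than stripping on input; a minimal sketch of my own follows, with the
caveat above that no such filter is complete.

#include <stdio.h>

/* Replace every character that could open a tag or attribute with its
 * HTML entity, so nothing a client receives can start executable markup. */
void emit_escaped(const char *s, FILE *out)
{
    for (; *s != '\0'; s++) {
        switch (*s) {
        case '<':  fputs("&lt;", out);   break;
        case '>':  fputs("&gt;", out);   break;
        case '&':  fputs("&amp;", out);  break;
        case '"':  fputs("&quot;", out); break;
        default:   fputc(*s, out);       break;
        }
    }
}
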
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com


Re: [SC-L] Theoretical question about vulnerabilities

2005-04-12 Thread Crispin Cowan
Nash wrote:
** It would be extremely interesting to know how many exploits could
be expected after a reasonable period of execution time. It seems that
as execution time went up we'd be less likely to have an exploit just
show up. My intuition could be completely wrong, though.

I would think that time is pretty much irrelevant, because it depends
on the intelligence used to order the inputs you try. For instance,
time-to-exploit will be very long if you feed inputs to (say) Microsoft
IIS starting with one byte of input and going up in ASCII order.
Time-to-exploit gets much shorter if you use a fuzzer program: an
input generator that can be configured with the known semantic inputs of
the victim program, and that focuses specifically on trying to find
buffer overflows and printf format string errors by generating long
strings and using strings containing %n.
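
A minimal sketch of the kinds of inputs such a fuzzer emits; the victim
program this would be piped into is left hypothetical.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Ever-longer runs of a single byte, to trip fixed-size buffers. */
    for (size_t len = 64; len <= 65536; len *= 2) {
        char *buf = malloc(len + 1);
        if (buf == NULL)
            return 1;
        memset(buf, 'A', len);
        buf[len] = '\0';
        puts(buf);              /* pipe this output into the victim */
        free(buf);
    }
    /* Format directives: if the victim passes input to printf() as its
     * format string, %x leaks stack words and %n writes to memory. */
    puts("%x %x %x %x");
    puts("%n%n%n%n");
    return 0;
}
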
Even among fuzzers, time-to-exploit depends on how intelligent the
fuzzer is in terms of aiming at the victim program's data structures.
There are many specialized fuzzers aimed at various kinds of
applications, aimed at network stacks, aimed at IDS software, etc.
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-11 Thread Crispin Cowan
I strongly disagree with this.
Rigorous professional standards for mechanical and structural 
engineering came about only *after* a well-defined cookbook of how to 
properly engineer things was agreed upon. Only after such standards are 
established and *proven effective* is there any utility in enforcing the 
standards upon the practitioners.

Software is *not* yet at that stage. There is no well-established cook 
book for reliably producing reliable software (both of those reliablys 
mean something :)  There are *kludges* like the SEI model, but they are 
not reliable. People can faithfully follow the SEI model and still 
produce crap. Other people can wholesale violate the SEI model and 
produce highly reliable software.

It is *grossly* premature to start imposing standards on software 
engineers. We have not a clue what those standards should be.

Crispin
Edward Rohwer wrote:
 In my humble opinion, the bridge example gets to the heart of the
matter. In the bridge example the bridge would have been designed and
engineered by licensed professionals, while we in the software business
sometimes call ourselves engineers but fall far short of the real,
professional, licensed engineers other professions depend upon.  Until
we as a profession are willing to put up with that sort of rigorous
examination and certification process, we will always fall short in many
areas and of many expectations.

Ed. Rohwer CISSP

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of [EMAIL PROTECTED]
Sent: Friday, April 08, 2005 10:54 PM
To: Margus Freudenthal
Cc: Secure Coding Mailing List
Subject: [SC-L] Re: Application Insecurity --- Who is at Fault?


Margus Freudenthal wrote:
Consider the bridge example brought up earlier. If your bridge builder
finished the job but said: "Ohh, the bridge isn't secure though. If
someone tries to push it at a certain angle, it will fall."
Ultimately it is a matter of economics. Sometimes releasing something
earlier
is worth more than the cost of later patches. And managers/customers are
aware
of it.
Unlike in the world of commercial software, I'm pretty sure you don't
see a whole lot of construction contracts which absolve the architect of
liability for design flaws.  I think that is at the root of our
problems.  We know how to write secure software; there's simply precious
little economic incentive to do so.
--
David Talkington
[EMAIL PROTECTED]

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Mobile phone OS security changing?

2005-04-06 Thread Crispin Cowan
Kenneth R. van Wyk wrote:
Greetings,
I noticed an interesting article about a mobile phone virus affecting 
Symbian-based phones out on Slashdot today.  It's an interesting read:

http://it.slashdot.org/it/05/04/06/0049209.shtml?tid=220&tid=100&tid=193&tid=137
What particularly caught my attention was the sentence, "Will mobile OS 
companies, like desktop OS makers, have to start an automatic update system, 
or will the OS creators have to start making their software secure?"  Apart 
from the author implying that this is an "or" situation,

I think it is definitely an "or" situation: automatic updates are 
expensive to provision and fugly for the user. They are just a kludge 
used when, for some reason, the software cannot be made secure.

That the desktop vendor (Microsoft) has not made their software secure 
is manifestly obvious. Whether the "can't" or "won't" is subject to 
rampant debate and speculation. The "can't" view says that legacy 
software and fundamentally broken architecture make securing it 
infeasible. The "won't" view says that it was not profitable for MS to 
spend the effort, and they are now changing.

That the alternate desktop vendors (all the UNIX and Linux vendors 
including Apple) have made secure desktops is also manifestly obvious 
(no viruses to speak of, and certainly no virus problem). Whether this 
is luck or design is subject to rampant debate and speculation. The 
"luck" view says that these minority desktops are not a big enough 
target to be interesting to the virus writers. The "design" view is that 
the virus problem is induced by: 1. running the mail client with 
root/administrator privilege, and 2. a mail client that eagerly trusts 
and executes attached code; and that until UNIX/Linux desktops have both 
of those properties in large numbers, there never will be a virus 
problem on UNIX/Linux desktops.

What the phone set people will do depends on which of the above factors 
you think apply to phone sets. Certainly the WinCE phones with Outlook 
are about to be virus-enabled. I don't know enough about Symbian to 
answer. The Linux hand sets could be designed either way; it would not 
surprise me to see phone set people architecting a phone so that the 
keyboard is root. It is not exactly intuitive to treat a hand set as a 
multi-user platform.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Top security papers

2004-08-09 Thread Crispin Cowan
Matt Setzer wrote:
It's been kind of quiet around here lately - hopefully just because everyone
is off enjoying a well deserved summer (or winter, for those of you in the
opposite hemisphere) break.  In an effort to stir things up a bit, I thought
I'd try to get some opinions about good foundational materials for security
professionals.  (I'm relatively new to the field, and would like to broaden
my background knowledge.)  Specifically, what are the top five or ten
security papers that you'd recommend to anyone wanting to learn more about
security?  What are the papers that you keep printed copies of and reread
every few years just to get a new perspective on them?  
 

Here's my top 5. Things to note:
  1. It is more like 1 + 4. The first paper (Saltzer and Schroeder)
 should be *required* reading for everyone who claims to have the
 slightest clue about security. Everything of significance in
 computer security is in this article in some form. The only
 significant technology missing is public key crypto, and that is
 because it had not been invented yet.
  2. The other 4 are a quick & dirty skim through my bibliographic
     database. I could easily have missed some papers that are more
     seminal than these, but these 4 are very good, readable, and
     important.
  3. I excluded my own papers from consideration, but if you want to
     see them ... :) http://immunix.com/~crispin/
Crispin
@article{salt75,
  author  = {Jerome H. Saltzer and Michael D. Schroeder},
  title   = {The Protection of Information in Computer Systems},
  journal = {Proceedings of the IEEE},
  volume  = {63},
  number  = {9},
  month   = {November},
  year    = {1975}
}

@article{one96,
  author  = {{Aleph One}},
  title   = {Smashing The Stack For Fun And Profit},
  journal = {Phrack},
  volume  = {7},
  number  = {49},
  month   = {November},
  year    = {1996}
}

@article{miller90,
  author  = {B. P. Miller and L. Fredriksen and B. So},
  title   = {An Empirical Study of the Reliability of {\sc Unix} Utilities},
  journal = {Communications of the ACM},
  volume  = {33},
  number  = {12},
  pages   = {33--44},
  month   = {December},
  year    = {1990}
}

@inproceedings{badger95,
  author    = {Lee Badger and Daniel F. Sterne and et al.},
  title     = {Practical Domain and Type Enforcement for {UNIX}},
  booktitle = {Proceedings of the IEEE Symposium on Security and Privacy},
  address   = {Oakland, CA},
  month     = {May},
  year      = {1995}
}

@article{land94,
  author  = {Carl E. Landwehr and Alan R. Bull and John P. McDermott and
             William S. Choi},
  title   = {A Taxonomy of Computer Program Security Flaws},
  journal = {ACM Computing Surveys},
  volume  = {26},
  number  = {3},
  pages   = {211--254},
  month   = {September},
  year    = {1994}
}



Re: [SC-L] Programming languages -- the third rail of secure coding

2004-07-21 Thread Crispin Cowan
I don't understand the purpose of this list. If it is to list all 
programming languages, that is hopeless, as there are thousands of 
programming languages. If it is to list all programming languages with 
security ambitions, then I'm confused, as clearly not all of the 
languages listed were intended to enhance security, and some of them 
(glaringly PHP) substantially *degrade* security vs. many languages that 
came before them.

Crispin
Michael S Hines wrote:
I've been compiling a list of programming languages..   Some of which were
developed to 'solve' the insecure programming problem.  I don't think we've
made it yet.
Perhaps it's a personnel problem, not a technology problem?
My list -- (feel free to add to it).
1.  Assembler
2.  C/C++
3.  Pascal
4.  Basic or Visual Basic
5.  Java / J#
6.  Perl
7.  Ruby
8.  PHP
9.  C#
10. COBOL
11. Perl
12. XSLT
13. Python
14. Forth
15. APL
16. Smalltalk
17. Eiffel
18. PL/1 
19. ADA
20. Hermes
21. Scheme
22. ML
23. Haskell
24. Simula 67
25. Prolog
26. OCCAM
27. Modula 2
28. PL/M or PL/X
29. PL/SQL
30. SQL
31. Jabber
32. Expect
33. Perl/Tk
34. Tcl/Tk
35. XML
36. HTML
37. AppleScript
38. JavaScript
39. VBScript
40. D
41. Algol

---
Michael S Hines
[EMAIL PROTECTED] 

 

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Programming languages used for security

2004-07-12 Thread Crispin Cowan
David Crocker wrote:
Crispin Cowan wrote:
The above is the art of programming language design. Programs written in
high-level languages are *precisely* specifications that result in the
system generating the program, thereby saving time and eliminating
coding error. You will find exactly those arguments in the preface to
the K&R C book.

Whilst I agree that the distinction between specification and programming
languages is not completely clear cut, there is nevertheless a fundamental
difference between specification and programming.
 

For years, I have been trying to get formal specification advocates to 
explain the difference between high level programming languages and 
specification languages. From my point of view, formal specifications 
can be divided into two categories:

   * Those that can be mechanically translated into code, otherwise
 known as programs
   * Those that cannot be mechanically translated, otherwise known as
 documentation :)

In a programming language, you tell the computer what you want it to do,
normally by way of sequential statements and loops. You do not tell the computer
...
In a specification language, you tell the computer what you are trying to
achieve, not how to achieve it. This is typically done by expressing the desired
relationship between the input state and the output state. The state itself is
normally modelled at a higher level of abstraction than in programming (e.g. you
wouldn't refer to a hash table, because that is implementation detail; you would
refer to a set or mapping instead).
 

I agree with the other posters: the above could describe a formal 
specification, but could also describe a declarative programming language.

However, I think I do see a gap between these extremes. You could have a 
formal specification that can be mechanically transformed into a 
*checker* program that verifies that a solution is correct, but cannot 
actually generate a correct solution. The assert() statements that David 
Crocker mentioned are an incomplete form of this; incomplete because they 
do not *completely* verify the program's behavior to be correct (because 
they are haphazardly placed by hand).

So there's another midpoint in the spectrum: a formal spec that is 
complete but can only verify correctness is, effectively, a program for 
non-deterministic machines (cf. NP-completeness theory). A spec that is 
incomplete (does not specify all outputs) is more of an approximation.
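
A trivial sketch of the checker idea, for sorting: this spec can verify
a proposed output in linear time, but it cannot generate one (and, being
incomplete, it does not even check that the output is a permutation of
the input).

#include <stdbool.h>
#include <stddef.h>

/* Verifies one output condition of a sort: elements are in
 * non-decreasing order.  A complete spec would also check that the
 * output is a permutation of the input. */
bool is_sorted(const int *a, size_t n)
{
    for (size_t i = 1; i < n; i++)
        if (a[i - 1] > a[i])
            return false;
    return true;
}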

All of which begs the question: are these formal specs that are somehow 
not programs any easier to verify than actual programs? Probably 
somewhat easier (they are necessarily simpler) but some would argue, not 
enough simpler to be worth the bother. E.g. suppose 100,000 lines of 
code reduces to 10,000 lines of formal specification in some logical 
notation. A hard problem, but solvable, is a mechanical proof that the 
10,000 line spec and the 100,000 lines of code actually conform. An 
unsolved problem is does the 10,000 line spec mean what the human 
*thinks* it means?

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Programming languages used for security

2004-07-10 Thread Crispin Cowan
Dana Epp wrote:
 My, what a week of interesting discussions. Let's end this week on a
 good and light-hearted note.
Insert various analogies between programming languages and automobiles 
here :)

   * $MY_FAVORITE_LANGUAGE is like a $REALLY_COOL_CAR, while
 $YOUR_FAVORITE_LANGUAGE is like a Yugo.
   * $C_OR_ASSEMBLER_ITS_REALLY_THE_SAME_THING is like a thermonuclear
 missile, in that it is fast and powerful, but if you are not
 careful, you can give yourself an ouchie :)
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Education and security -- another perspective (was ACM Queue - Content)

2004-07-09 Thread Crispin Cowan
Peter Amey wrote:
What is wrong with this picture ?
I see both of you willing to mandate the teaching of C and yet not
mandate the teaching of any of Ada, Pascal, PL/I etc.
 

Makes sense to me. What is the point of teaching dead languages like 
Ada, Pascal, and PL/I?  Teach C, Assembler, and Java/C# (for the 
mainstream), and some Lisp variant (Scheme, ML, Haskell) and a Prolog 
variant for variety. But Ada, Pascal, and PL/I are suitable 
only for a history of programming languages course :)
   

I do hope that is a sort of smiley at the end of your message.  Please.
 

It is a sort-of smiley. On one hand, I find the whole thing amusing. On 
the other hand, I find it patently absurd that someone would suggest 
that curriculum in 2004 would comprise Ada, Pascal, and PL/I, all of 
which are (for industrial purposes) dead languages.

On one hand, university should be about learning concepts rather than 
languages, because the concepts endure while the languages go in and out 
of fashion. Evidence: 20 years ago, when I was in college, Ada, Pascal, 
and PL/I only included one dead language :)  On the other hand, the 
students do need to get a job when they graduate, and we do them a 
disservice to not at least teach concepts using a language currently in 
use in industry.

There is also room for a lot of breadth in a college program. I was only 
overtly instructed in languages a few times; the rest were "read the 
book, then do this assignment". But in that approach, I learned COBOL, 
Pascal, PL/M, 68000 assembler, C, C++, FORTRAN, VAX assembler, Prolog, 
LISP, and Maple.  It's not like this list needs to be short.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Education and security -- another perspective (was ACM Queue - Content)

2004-07-09 Thread Crispin Cowan
Peter Amey wrote:
Firstly a tactical one: Ada is by no means a dead language.  There is a great tendency in our industry to 
regard whatever is in first place at any particular point in life's race to be the winner and 
everything else to be dead.
Ada was pushed hard enough by the DoD for a decade that it is to be 
expected that there is a lot of Ada code to be maintained. I'm also 
willing to believe that your business in Ada may be growing, but that is 
likely because others are exiting the business and leaving the 
remainders for you; I do not believe (unless you have evidence to the 
contrary) in significant growth in new project starts in Ada.

I focus on new project starts because that is the only case in which 
language selection is even an interesting question. For any kind of 
on-going work, using the same language that the project was started in 
is the obvious choice most of the time.

 In practice very substantial use may continue to be made of things which are not in 
the ultra-visible first place.  For example, OS/2 was killed by Windows yet most ATMs 
in the USA still run OS/2.
But no new OS2 ATMs are being built, and they are being phased out.
 We haven't discussed the dead languages COBOL and Prolog, but both are actually still 
in widespread use,
COBOL: same reason, legacy systems, and LOTS of them.
Prolog: not so sure. Prolog may still be a language of choice for expert 
systems projects. But I don't work in that field. I do have a completely 
un-used Prolog text book left over from grad school if someone wants to 
buy it :)

Secondly, in response to your suggestion that we teach concepts (which I wholly agree with), languages, including dead ones, encapsulate and illustrate concepts.  Pascal was designed to teach structured programming.  Occam provides a splendid introduction to concurrency.  Modula-2 and Ada are ideal for illustrating the vital concepts of abstraction, encapsulation and the separation of specification and implementation.  The languages are worth studying for these reasons alone.  Those exposed to them will be better programmers in any language and will find adoption of new ones much easier.  
 

In programming language terms, Ada is grossly primitive. Its object 
orientation mechanisms are crude at best. A *great* deal of progress in 
language technology has been made since Ada was developed. For just 
about any kind of concept or safety feature, students and developers 
would be better served to consider Java, C#, or ML instead of Ada.

As you say, languages come in and out of fashion; what I find sad is that so many of the new languages have failed to learn and build on the lessons of those that have gone before.  I think it highly probable that this is because their designers have casually dismissed those that went before as dead and therefore of no interest.  They would have done better to emulate Newton and stood on the shoulders of giants such as Wirth.
 

And that is what I meant by history of programming languages. Java, 
C#, and ML are strictly better than Pascal and Ada for almost 
everything. But they did not spring out of the earth, they were built on 
the progress of previous languages. Java in particular contains no novel 
features at all, but rather shows good taste in the features it borrows 
from others. What made Java interesting was the accident of history that 
caused it to become the first strongly typed polymorphic programming 
language to become widely popular.

You *can* teach object orientation with Simula 67 or SmallTalk, if you 
really want to. But teaching object orientation with Java is a lot more 
approachable in the contemporary context.

I would never recruit someone just because they knew Ada rather than C; however, I would be highly unlikely to recruit someone who had such a closed mind that they thought Ada had nothing to teach them and was only fit for snide mockery.
 

I don't mock Ada for what it is: a fairly good programming language from 
the 1970s, with obvious scars from having been designed by committee 
(too big, too many features). Ada's defects are artifacts of its age and 
its history, not of poor design.

I do mock the suggestion that a large, complex, and retrograde language 
with no industrial growth is a suitable subject for undergraduate education.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Programming languages used for security

2004-07-09 Thread Crispin Cowan
ljknews wrote:
Such typing should include specification by the programmer of the range
of values allowed in variables: -32767 to +32767, 0 to 100, 1 to 100,
Characters a-z only, characters A-Z only, -10.863 to +4.368, etc.
The language should also support exact specification of arithmetic
operations to be performed for various types (overflow semantics,
precision, decimal vs. binary arithmetic, etc.).  This is important
to ensure the desired behavior is obtained when one changes to a
new compiler/interpreter, if only to have a program rejected as
requiring behavior not supported on the new compiler or operating
system.
 

Check out the Hermes programming language 
http://www.research.ibm.com/people/d/dfb/hermes-publications.html, 
which not only does such checks, but in many cases can do the checks 
statically, and refuse to compile unsafe programs. This mechanism is 
called typestate checking 
http://www.google.com/search?hl=enlr=ie=UTF-8q=typestate+checkingbtnG=Search., 
which IMHO is one of the most interesting extensions of static type 
checking for both safety and performance.
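
For the flavor of it, here is the sort of defect typestate checking
rejects at compile time, sketched as a hypothetical C file-handle
example (this is not Hermes syntax; plain C only fails at runtime, if
at all):

    #include <stdio.h>

    int main(void)
    {
        FILE *f;        /* typestate: declared, but not yet "open" */
        char buf[64];

        /* fgets(buf, sizeof buf, f);  read-before-open: undefined
         * behavior in C, but a compile-time error under typestate */

        f = fopen("data.txt", "r");  /* state transition: -> open */
        if (f != NULL) {
            fgets(buf, sizeof buf, f);  /* legal only in the open state */
            fclose(f);                  /* state transition: -> closed */
        }
        return 0;
    }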

The bad news is that Hermes, while it has many great safety features, is 
another dead programming language. That's the problem with programming 
language design: there are LOTS of great programming languages out 
there, and approximately none of them have the critical mass of 
compilers, tools, and (most important) programmers to make them viable 
for most projects.

The good news is that Hermes is among the sources that Java looted; some 
of the typestate checking features ended up in the Java bytecode checker.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Programming languages used for security

2004-07-09 Thread Crispin Cowan
David Crocker wrote:
1. Is it appropriate to look for a single general purpose programming
language? Consider the following application areas:
a) Application packages
b) Operating systems, device drivers, network protocol stacks etc.
c) Real-time embedded software
The features you need for these applications are not the same. For example,
garbage collection is very helpful for (a) but is not acceptable in (b) and (c).
For (b) you may need to use some low-level tricks which you will not need for
(a) and probably not for (c).
 

I agree completely that one language does not fit all. But that does not 
completely obviate the question; it just requires some scoping.

2. Do we need programming languages at all? Why not write precise high-level
specifications and have the system generate the program, thereby saving time and
eliminating coding error? [This is not yet feasible for operating systems, but
it is feasible for many applications, including many classes of embedded
applications].
 

The above is the art of programming language design. Programs written in 
high-level languages are *precisely* specifications that result in the 
system generating the program, thereby saving time and eliminating 
coding error. You will find exactly those arguments in the preface to 
the K&R C book.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Education and security -- another perspective (was ACM Queue - Content)

2004-07-06 Thread Crispin Cowan

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Interesting article on the adoption of Software Security

2004-06-12 Thread Crispin Cowan
Andreas Saurwein wrote:
Crispin Cowan wrote:
However, wherever C made an arbitrary decision (either way is just 
as good) PL/M went the opposite direction from C, making it very 
annoying for a C programmer to use.
Does that mean it did not make any decision at all? What was the outcome?
No, just trivial decisions on syntax. It made my fingers hurt to use it, 
because I had to retrain a lot of habits. Unfortunately I no longer 
remember the specifics.

When you've been around for a while, you start to see the same features
converge.  UNIX had quotas; we got quotas with Win XP Server (well earlier,
when you include the third-party ISVs, as an add-on).  IBM had Language
Environment (LE) before .NET came along.
Crispin Cowan wrote:
I think .Net borrows most heavily from Java. Java in turn borrows 
from everyone. The managed code thing in particular leads back to 
the Pascal P-code interpreter, a kludge to make the Pascal compiler 
easier to implement and port. The innovation in Java was to take this 
ugly kludge and market it as a feature :)
Michael S Hines wrote:
I'm not sure that it can be blamed on Pascal. Microsoft was shipping 
Excel for the Mac in the early 80's as a P-Code application and has been 
selling P-Code generating compilers since about the same time. Ever 
since, MS has been strong on P-Code generating compilers.
The UCSD Pascal P-Code system was released in 1978 
http://www.informationheadquarters.com/History_of_computing/UCSD_p-System.shtml. 
MS Excel was released in 1984 
http://www.dssresources.com/history/sshistory.html. And if anything, 
the above claim that MS has been using P-code since the early days of 
Excel only supports the claim that Pascal P-Code is the origin of the 
idea at Microsoft.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] opinion, ACM Queue: Buffer Overrun Madness

2004-06-11 Thread Crispin Cowan
David Crocker wrote:
Apart from the obvious solution of choosing another language, there are at least
two ways to avoid these problems in C++:
1. Ban arrays (to quote Marshall Cline's C++ FAQ Lite, arrays are evil!). Use
...
2. If you really must have naked arrays, ban the use of indexing and arithmetic
on naked pointers to arrays (i.e. if p is a pointer, then p[x], p+x, p-x, ++p
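
That second discipline can be approximated even in plain C by routing
every element access through a checked accessor instead of a naked
p[x]. A minimal illustrative sketch (struct int_array and checked_get
are invented names, not from any library):

    #include <assert.h>
    #include <stddef.h>

    struct int_array {   /* carry the length with the data */
        int    *data;
        size_t  len;
    };

    static int checked_get(const struct int_array *a, size_t i)
    {
        assert(i < a->len);  /* out-of-bounds access traps here */
        return a->data[i];
    }

    int main(void)
    {
        int backing[4] = { 1, 2, 3, 4 };
        struct int_array a = { backing, 4 };

        int last = checked_get(&a, 3);  /* fine */
        /* checked_get(&a, 4);  aborts instead of corrupting memory */
        (void)last;
        return 0;
    }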
 

If you want safer C and you want the compiler to enforce it, and you 
don't mind having to rewrite some of your code, then use one of the safer 
C dialects (CCured http://manju.cs.berkeley.edu/ccured/ or Cyclone 
http://www.research.att.com/projects/cyclone/). These tools provide a 
nice mid-point in the amount of work you have to do to reach various 
levels of security in C/C++ (a minimal sketch of the cheapest rung 
follows the list):

   * low security, low effort
 o do nothing
 o code carefully
 o apply defensive compilers, e.g. StackGuard
 o apply code auditors, e.g. RATS, Flawfinder
 o port code to safer C dialects like CCured and Cyclone
 o re-write code in type safe languages like Java and C#
 o apply further code security techniques, e.g. formal theorem
   provers WRT a formal spec
   * high security, high effort
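
A minimal sketch of the cheapest rung, "code carefully": an invented
example (not taken from any of the tools above) showing the classic
unbounded copy and its bounded rewrite.

    #include <stdio.h>
    #include <string.h>

    static void greet(const char *name)
    {
        char buf[32];

        /* strcpy(buf, name);  unbounded: overflows buf for any
         * name of 32 bytes or more */

        strncpy(buf, name, sizeof buf - 1);  /* bounded copy ... */
        buf[sizeof buf - 1] = '\0';          /* ... always terminated */
        printf("hello, %s\n", buf);
    }

    int main(void)
    {
        greet("a name comfortably longer than thirty-two bytes");
        return 0;
    }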
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Interesting article on the adoption of Software Security

2004-06-11 Thread Crispin Cowan
Michael S Hines wrote:
Likewise for the IBM Mainframe operating systems MVS,OS/390,z/OS - much of
which is written in (I believe) PL/M - a dialect much like PL/1.
 

If PL/M is the language I am remembering from an embedded systems class 
back in the 1980s, then it is not at all like PL/1. Rather, it is a 
completely type-unsafe language. I would say similar to C, in that it 
has most of the same pitfalls. However, wherever C made an arbitrary 
decision (either way is just as good) PL/M went the opposite direction 
from C, making it very annoying for a C programmer to use.

Many of our Operating Systems seem to have evolved out of the old DEC RSTS
system.  For example, CP/M had a PIP command, later renamed to COPY in DOS.
 

True.
When you've been around for a while, you start to see the same features
converge.  UNIX had quotas; we got quotas with Win XP Server (well earlier,
when you include the third-party ISVs, as an add-on).  IBM had Language
Environment (LE) before .NET came along.
 

I think .Net borrows most heavily from Java. Java in turn borrows from 
everyone. The managed code thing in particular leads back to the 
Pascal P-code interpreter, a kludge to make the Pascal compiler easier 
to implement and port. The innovation in Java was to take this ugly 
kludge and market it as a feature :)

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Interesting article on the adoption of Software Security

2004-06-10 Thread Crispin Cowan
Damir Rajnovic wrote:
While it is true that only some of the bugs are fixed, that fixing can
have an unexpectedly high price tag attached. No matter how you look
at this, it _is_ cheaper to fix bugs as soon as possible in the process
(or not introduce them in the first place).
 

This is true in the isolation of looking at the cost of fixing any one 
individual bug, but it is not true in general. Fixing one bug early in 
the process is cheap and easy. Fixing the *last* bug in a system is 
astronomically expensive, because the cost of *finding* bugs rises 
exponentially as you further and further refine it. Worse, you 
eventually reach a point of equilibrium where your chances of inserting 
a new bug in the course of fixing a known bug are about even, and it 
becomes almost impossible to reduce the bug count further.

Personally, I do not see how this can be easily measured.
This entire area is rife with mushy psychological issues involving 
humans' ability to process information correctly. As a result, nearly all 
of the absolute statements are wrong, and they function only within 
certain ranges, e.g. fixing bugs early in development is cheaper than 
patching in the field, but only within the bounds of digging only so 
hard for bugs.

But even this statement is self-limiting. The above claim is not true 
(or at least less true) for safety-critical systems like fly-by-wire 
systems and nuclear reactor controllers, where the cost of failure due 
to a bug is so high that it is worth paying the extra $$$ to find the 
residual bugs in the development phase.

My reaction to the feuding over whether it is better to shore up C/C++ 
or to use newer safer languages like Java and C#: each has their place.

   * There are millions of lines of existing C/C++ code running the
 world. Holding your breath until they are all replaced with type
 safe code is not going to be effective, and therefore there is
 strong motive to deploy tools (e.g. StackGuard, RATS, etc.) to
 improve the safety of this code.
   * New code should be written in type safe languages unless there is
 a very strong reason to do otherwise.
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: Re : [SC-L] virtual server - use jail(8) on FreeBSD

2004-04-02 Thread Crispin Cowan
Serban Gh. Ghita wrote:

First of all, I did not express myself very clearly (for the ones who
replied): I said virtual shared environment, not virtual machine, so I am
not talking about VMware or other software like that.
My main concern is security on a server (e.g. a webhosting provider)
where multiple users are hosted, and everybody must be prevented from
getting out of his own home directory.
Immunix SubDomain http://immunix.org/subdomain.html does exactly what 
you want. You can write a profile per CGI script that describes the set 
of files the script may read, write, and execute. The profile is written 
using regular expressions, so you can add flexibility to it. The profile 
can be applied as a global default, or per script. It can even be 
applied when you are using mod_perl or mod_php, when there is no actual 
call to exec(). Here's a screen shot of what a profile looks like: 
http://immunix.org/subdomain.html

The jail(8) solution seems fair to me, because I use FreeBSD on all
servers,
That is unfortunate, as SubDomain is Linux-only.

To those complaining that this has nothing to do with secure coding: I 
disagree. This is a meta-language describing the permitted behavior of 
applications. It is secure coding in another form, with several 
attractive properties:

   * It is a meta-language, so it does not interfere with the structure
 of the base program.
   * It can be applied to closed-source binaries.
   * It is purely declarative, so it is easy to construct assurance
 arguments based on the content of the SubDomain profile.
Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com
Immunix 7.3   http://www.immunix.com/shop/


Re: [SC-L] Re: Application Sandboxing, communication limiting, etc.

2004-03-14 Thread Crispin Cowan
Jared W. Robinson wrote:

On Tue, Mar 09, 2004 at 07:12:35PM -0500, Bill Cheswick wrote:
 

One of the things I'd like to see in Linux and Windows is better sandboxing
of user-level programs, like Outlook and the browsers.  There have
been a number of approaches proposed over the years, and numerous papers, but
I haven't seen anything useful deployed widely on any of these platforms.
   

I agree with the sandboxing idea. We're seeing it used more on the
server side, but the desktop arena isn't as far along.
Seems to me that the average user application doesn't need to open
TCP/UDP ports for listening. Attack bots tend to do this kind of thing.
Perhaps SELinux could be used to define a rule set that would restrict
desktop applications' access to resources such as the filesystem,
network, etc. 

Note that I don't know what the scope of SELinux is, or how it works.
This is exactly what Immunix SubDomain does: define the files and
network activities that each program may access. We use regular
expressions to specify policy, so for instance, fingerd could be
permitted to read /home/*/.plan and not read anything else.
Below my sig (apparently an attachment with a name infix of .lib 
causes a lot of AV filters to freak out) is a sample SubDomain profile 
for Mozilla 1.4. It gives read and execute access to a long list of 
library and configuration files that Mozilla needs, and then home 
directory access to things like /home/*/tmp/** so that you can store 
whatever you want into your personal temp directory, but Mozilla gone 
mad does not have total write access to your entire home directory. The 
* notation means a single path element while ** means an arbitrary 
number of path elements, i.e. a tree.

Most OSS Software also doesn't phone home (unlike software in the
Windows world). Only pre-installed apps should be allowed network
communication under normal circumstances. So if your desktop noticed
an unknown app (one run from the user's home directory or from
/tmp) trying to communicate with a remote site, it would deny the action
by default -- or at least slow the application communication down so
that worms would spread more slowly, and could be contained.
SubDomain also has the ability to control network access, so you can
specify rules about what network connections an application should be
making. However, that is a bit challenging in a web browser: you want
the web browser to be able to make TCP connections to port 80 on just
about any server, so how can you prevent it from phoning home by just
quietly making some web connections? Even DNS requests are sufficient
for an effective phone home: e.g., a DNS lookup for
users-personal-information.evilbigcorp.com would report
users-personal-information to Evil Big Corp's DNS server.
Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com
Immunix 7.3   http://www.immunix.com/shop/
-
# Copyright(c) Immunix Inc., 2004
# $Id: usr.lib.mozilla-1.4.mozilla-bin,v 1.10 2003/12/11 21:03:33 sarnold Exp $
#
/usr/lib/mozilla-1.4/mozilla-bin  {
  /bin/netstat          rx,
  /bin/bash             rx,
  /dev/log              w,
  /dev/null             rw,
  /dev/mixer*           rw,
  /dev/dsp              rw,
  /dev/urandom          rw,
  /dev/random           rw,
  /dev/pts/*            rw,
  /dev/tty              rw,
  /etc/esd.conf         r,
  /etc/fstab            r,
  /etc/gtk/*            r,
  /etc/hosts            r,
  /etc/host.conf        r,
  /etc/ld.so.cache      r,
  /etc/ld.so.conf       r,
  /etc/localtime        r,
  /etc/mailcap          r,
  /etc/mime.types       r,
  /etc/mtab             r,
  /etc/resolv.conf      r,
  /etc/passwd           r,
  /etc/pluggerrc        r,
  /etc/nsswitch.conf    r,
  /etc/X11/fs/config    r,
  /home/*/.mozilla/**   rwl,
  /home/*/.Xauthority   r,
  /home/*/.Xdefaults    r,
  /home/*/.gtkrc        r,
  /home/*/.mailcap      r,
  /home/*/.mime.types   r,
  /home/*/tmp           r,
  /home/*/tmp/**        rwl,
  /lib/ld-*.so          rx,
  /lib/lib*.so*         r,
  /proc/net             r,
  /proc/net/appletalk   r,
  /proc/net/dev         r,
  /proc/net/ipx