Re: [SC-L] OWASP Publicity

2007-11-17 Thread Crispin Cowan
der Mouse wrote:
>> The vast majority of IT executives are unfamiliar with all of the
>> principles of security, firewalls, coding, whatever.
>> 
> ...
>   
>> The important thing to understand is that such principles are below
>> their granularity; the[y] are *right* to not care about such
>> principles, because they can't do anything about them.
>> 
> Perhaps - but then, they have to stop second-guessing the people who
> *do* know what they're talking about.  Trying to have it both ways -
> management that is inexpert but nevertheless imposes their opinions on
> design or buying decisions - is a recipe for disaster, and, while
> hardly universal, is all too common.
>   
I submit that really *good* managers do listen to the experts around
them. That is really basic to good management; surround yourself with
experts, and then listen to them.

Of course there's lots of bad managers, because managing is so
subjective that bad managers find it easy to survive. Measuring the
quality of management is about as difficult as measuring the quality of
software.

> I've never understood why it is that managers who would never dream of
> second-guessing an electrician about electrical wiring, a construction
> engineer about wall bracing, a mechanic about car repairs, will not
> hesitate to believe - or at least act as though they believe - they
> know better than their in-house experts when it comes to what computer,
> especially software, decisions are appropriate, and use their
> management position to dictate choices based on their inexpert,
> incompletely informed, and often totally incompetent opinions.  (Not
> just security decisions, either, though that's one of the cases with
> the most unfortunate consequences.)
>   
Because the kind of personality that seeks to become a manager is a
self-important arrogant snot, myself included :) It thus takes conscious
effort to listen to the opinions of others, and let them win when they
have a persuasive argument.

Even simpler: this trait of believing your own opinions more than
those of others is nearly universal in humans. Managers simply have the
power to indulge themselves, and only occasionally have the wisdom to
*not* indulge themselves.

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin
CEO, Mercenary Linux   http://mercenarylinux.com/
   Itanium. Vista. GPLv3. Complexity at work

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] OWASP Publicity

2007-11-15 Thread Crispin Cowan
McGovern, James F (HTSC, IT) wrote:
> I have observed an interesting behavior in that the vast majority of IT
> executives still haven't heard about the principles behind secure
> coding. My take says that we are publishing information in all the wrong
> places. IT executives don't really read ACM, IEEE or the sporadic
> postings from bloggers, but they do read CIO, Wall Street Journal and most
> importantly listen to each other.
>
> What do folks on this list think about asking the magazines and
> newspapers to publish? I am willing to gather contact information of
> news reporters and others within the media if others are willing to
> amplify the call to action in terms of contacting them. 
>   
The vast majority of IT executives are unfamiliar with all of the
principles of security, firewalls, coding, whatever.

The important thing to understand is that such principles are below
their granularity; they are *right* to not care about such principles,
because they can't do anything about them. Their granularity of decision
making is which products to buy, which strategies to adopt, which
managers to hire and fire. Suppose they did understand the principles of
secure coding; how then would they use that to decide between firewalls?
Web servers? Application servers?

If anything, the idea that needs to be pitched to IT executives is to
pay more attention to "quality" than to shiny buttons & features. But
there's the rub, what is "quality" and how can an IT executive measure it?

I have lots of informal metrics that I use to measure quality, but they
largely amount to synthesized reputation capital, derived from reading
bugtraq and the like and noting how many vulnerabilities I see for a
given product, e.g. Qmail and Postfix are extremely secure, Pidgin not
so much :)

But as soon as we formalize anything like this kind of metric, and get
executives to start buying according to it, then vendors start gaming
the system. They start developing with the aim of getting the highest
whatever-metric score they can, rather than aiming for actual quality. This
happens because metrics that approximate quality are always cheaper to
achieve than actual quality.

This is a very, very hard problem, and sad to say, but pitching
articles on principles to executives won't solve it.

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin
CEO, Mercenary Linux   http://mercenarylinux.com/
   Itanium. Vista. GPLv3. Complexity at work

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


[SC-L] NDSS 2008 CfP Papers Due September 21

2007-09-13 Thread Crispin Cowan
NDSS (Network and Distributed Systems Security) is a traditional
academic scholarly conference, with an emphasis on practical security
matters. If you have a result on how to make software more secure,
please consider submitting a paper.

Papers are due September 21st
http://www.isoc.org/isoc/conferences/ndss/08/cfp.shtml

The conference itself is February 10-13 in San Diego.

Thanks,
Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Insider threats and software

2007-08-28 Thread Crispin Cowan
Paco Hope wrote:
> On 8/16/07 7:44 PM, "silky" <[EMAIL PROTECTED]> wrote:
>
> how is this different than sending malformed packets to an rpc interface?
> ...
> Now I'll gently disagree with Gary, who is my boss, so you know I'll hear 
> about it in the hallways... I think this feels more like "privilege 
> escalation" than "insider threat." The distinction being that these attacks 
> allow an authorized user who has limited privileges to escalate their 
> privileges and do things that they shouldn't be able to do. An insider (to 
> me) is a person who already had that privilege and status when they started 
> their attack. (Read Kevin Wall's follow-up on darkreading.com he has good 
> things to say on who are insiders and outsiders).  Where we are prone to 
> confusion, I think, is that outsiders or limited authorized users can have 
> the same IMPACT as an insider, when the privilege escalation is sufficiently 
> bad.
>   
Gary has an interesting but fairly obvious idea, that AJAX clients are
exceptionally vulnerable to the environment they run in. Said clients
are also part of a distributed computing system between the AJAX client,
the web front end, and whatever back-end systems are involved.

Is this an "insider" threat? Only if the people who coded the server
were dumb enough to treat the AJAX client as if it were an insider
component. Never do that.

This is web security 101: always always always check your input
parameters, and especially if they are coming from a web client.
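
A minimal server-side sketch of that rule, in Java. The parameter name
and the whitelist pattern are illustrative assumptions of mine, not
anything from the original posts:

    import java.util.regex.Pattern;

    public class InputCheck {
        // Whitelist for a hypothetical "orderId" request parameter.
        private static final Pattern ORDER_ID =
            Pattern.compile("[A-Za-z0-9-]{1,32}");

        // Revalidate on the server no matter what the AJAX client already
        // checked; the client is untrusted input, not an insider component.
        public static String requireOrderId(String raw) {
            if (raw == null || !ORDER_ID.matcher(raw).matches()) {
                throw new IllegalArgumentException("rejecting malformed orderId");
            }
            return raw;
        }
    }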

There is a risk here that AJAX developers will get confused, lazy,
sloppy, about whether the AJAX client component is trusted or not. It is
not clear to me yet whether the AJAX dev tools that are emerging make
that mistake pervasive, or if it requires a special kind of stupid to
make that mistake.

Is this really an insider threat? I think that is stretching things, but
not a huge amount.

Gary also brings up references to his book on hacking games. Small-scale
distributed games are the same as web apps; never trust the client.
Large-scale MMORPGs (everything from World of Warcraft to Second
Life) are economically mandated to shift as much computational burden
onto the client as possible, and that entails inevitably trusting the
clients more than security really can tolerate. Such games are
inherently insecure; look for more hacking to occur. Read more about it
in this Oakland 2007 paper, with an interesting solution to this problem:

/Enforcing Semantic Integrity on Untrusted Clients in Networked
Virtual Environments (Extended abstract)/
Somesh Jha, Stefan Katzenbeisser, Christian Schallhart, Helmut Veith
and Stephen Chenney

http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/proceedings/&toc=comp/proceedings/sp/2007/2848/00/2848toc.xml&DOI=10.1109/SP.2007.3

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Crispin Cowan
Steven M. Christey wrote:
> On Mon, 11 Jun 2007, Crispin Cowan wrote:
>   
>> Kind of. I'm saying that "specification" and "implementation" are
>> relative to each other: at one level, a spec can say "put an iterative
>> loop here" and implementation of a bunch of x86 instructions.
>> 
> I agree with this notion.  They can overlap at what I call "design
> limitations": strcpy() being overflowable (and C itself being
> overflowable) is a design limitation that enables programmers to make
> implementation errors.  I suspect I'm just rephrasing a tautology, but
> I've theorized that all implementation errors require at least one design
> limitation.  No high-level language that I know of has a built-in
> mechanism for implicitly containing files to a limited directory (barring
> chroot-style jails), which is a design limitation that enables a wide
> variety of directory traversal attacks.
>   
I thought that the Java 2 security container stuff let you specify file
accesses? Similarly, I thought that Microsoft .Net managed code could
have an access specification?
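
For what it's worth, a sketch of the Java 2 mechanism I have in mind;
the file paths and the policy grant are hypothetical:

    import java.io.FilePermission;
    import java.security.AccessController;

    public class FileAccessCheck {
        public static void main(String[] args) {
            // Under a policy file containing, say:
            //   grant codeBase "file:/srv/app/-" {
            //       permission java.io.FilePermission "/srv/data/-", "read";
            //   };
            // this check passes, while a read outside /srv/data is refused
            // with an AccessControlException once a security manager is
            // installed.
            AccessController.checkPermission(
                new FilePermission("/srv/data/report.txt", "read"));
            System.out.println("read permitted by policy");
        }
    }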

AppArmor provides exactly that kind of access specification, but it is
an OS feature rather than a high level language, unless you want to view
AA policies as high level specifications.

>>> If we assumed perfection at the implementation level (through better
>>> languages, say), then we would end up solving roughly 50% of the
>>> software security problem.
>>>   
>> The 50% being rather squishy, but yes this is true. It's only vaguely
>> what I was talking about, really, but it is true.
>> 
> For whatever it's worth, I think I agree with this, with the caveat that I
> don't think we collectively have a solid understanding of design issues,
> so the 50% guess is quite "squishy."  For example, the terminology for
> implementation issues is much more mature than terminology for design
> issues.
>   
I don't agree with that. I think it is a community gap. The academic
security community has a very mature nomenclature for design issues. The
hax0r community has a mature nomenclature for implementation issues.
That these communities are barely aware of each other's existence, never
mind talking to each other, is a problem :)

> One sort-of side note: in our "vulnerability type distributions" paper
> [1], which we've updated to include all of 2006, I mention how major Open
> vs. Closed source vendor advisories have different types of
> vulnerabilities in their top 10 (see table 4 analysis in the paper).
> While this discrepancy could be due to researcher/tool bias, it's probably
> also at least partially due to development practices or language/IDE
> design.  Might be interesting for someone to pursue *why* such differences
> occur.
>   
Do you suppose it is because of the different techniques researchers use
to detect vulnerabilities in source code vs. binary-only code? Or is
that a bad assumption because the hax0rs have Microsoft's source code
anyway? :-)

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Crispin Cowan
Gary McGraw wrote:
> Though I don't quite understand computer science theory in the same way that 
> Crispin does, I do think it is worth pointing out that there are two major 
> kinds of security defects in software: bugs at the implementation level, and 
> flaws at the design/spec level.  I think Crispin is driving at that point.
>   
Kind of. I'm saying that "specification" and "implementation" are
relative to each other: at one level, a spec can say "put an iterative
loop here" and implementation of a bunch of x86 instructions. At another
level, specification says "initialize this array" and the implementation
says "for (i=0; i If we assumed perfection at the implementation level (through better 
> languages, say), then we would end up solving roughly 50% of the software 
> security problem.
>   
The 50% being rather squishy, but yes, this is true. It's only vaguely
what I was talking about, really, but it is true.
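
To make the relativity concrete, here is the same one-line
specification, "initialize this array", at two levels (a trivial
illustration of my own, in Java):

    public class SpecVsImpl {
        public static void main(String[] args) {
            int[] a = new int[100];
            // At one level, this loop *is* the implementation of the spec
            // "initialize this array" ...
            for (int i = 0; i < a.length; i++) {
                a[i] = 0;
            }
            // ... at another level, the loop is itself the specification,
            // and the implementation is whatever instructions the compiler
            // emits for it.
            System.out.println(a[0]);
        }
    }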

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread Crispin Cowan
IMHO, all this hand wringing is for naught. To get systems that never
fail requires total correctness. Turing tells us that total correctness
is not decidable, so you simply never will get it completely, you will
only get approximations at best.

Having humans write specifications and leaving programming to computers
is similarly a lost cause. At a sufficiently high level, that is asking
the computer to map NP to P, and that isn't going to happen. At a less
abstract level, you are just asking the human to code in a higher level
language. This will help, but will not eliminate the problem that you
just cannot have total correctness.

Programmable Turing machines are great, they do wonderful things, but
total correctness for software simply isn't feasible. People need to
understand that programs are vastly more complex than any other class of
man-made artifact ever, and therefore can never achieve the
reliability of, say, steam engines.

The complexity of software is beginning to approach that of living organisms.
People at least understand that living things are not totally
predictable or reliable, and s**t will happen, and so you cannot count
on a critter or a plant to do exactly what you want. When computer
complexity clearly exceeds organism complexity, perhaps people will come
to recognize software for what it is: beyond definitive analyzability.

We can never solve this problem. At best we can make it better.

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Economics of Software Vulnerabilities

2007-03-19 Thread Crispin Cowan
Ed Reed wrote:
> Crispin Cowan wrote:
>   
>> Crispin, now believes that users are fundamentally what holds back security
>>   
>> 
> I was once berated on stage by Jamie Lewis for sounding like I was
> placing the blame for poor security on customers themselves.
>   
Fight back harder. Jamie is wrong. The free market is full of product
offerings of every description. If users cared about security, they
would buy different products than they do, and deploy them differently
than they do. QED, the lack of security is the users' fault.

> I have moved on, and believe, instead, that it is the economic
> inequities - the mis-allocation of true costs - that is really to blame.
>   
Since many users are economically motivated, this may explain why users
don't care much about security :)

A competitive free-market economy is really a large optimization engine
for finding the most efficient way to do things, because the more
efficient enterprises crush the less efficient. As such, I have a fair
degree of faith that senior management is applying approximately the
right amount of security to mitigate the threat that they face. If they
are not doing so, they are at risk from competitors who do apply the
right amount of security.

What has made the security industry grow for the last decade has been
the huge growth in connectivity. That has grown the attack surface, and
hence the threat, that enterprises face. And that has caused enterprises
to grow the amount of security they deploy.

> Add the slowly-warmed pot phenomenon (apocryphal as it may be) -
> customers don't jump out of the boiling pot because they're too invested
> to walk away.
>
> Eventually I think they'll get fed up and there'll be a consumer uprising.
>   
Why do you think it will be an uprising? Why not a gradual shift, with
the vendors just getting better exactly as fast as the users need them to?

> Until then let's encourage better coding practices and secure designs
> and deep thought about "what policy do I want enforced". 
>   
Technologists figure out how to do stuff. Economists and strategists
figure out what to do. We can encourage all we want, but we are just
shouting into the wind until enterprise users demand better security.

Crispin
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Economics of Software Vulnerabilities

2007-03-19 Thread Crispin Cowan
Gary McGraw wrote:
> Very interesting.  Crispin is in the throes of big software.  Anybody want to 
> help me mount a rescue campaign from Jamaica?
>   
It is the art of managing upwards. To get my boss to do what I want him
to do, I have to encourage him, I can't just tell him. And his boss. And
his boss. And /his/ boss is the customer. So with a very long pole with
hinges in it, I have to try to get the customer to do what I want.

With that kind of interface to the customer, the only way to get the
customer to be more secure is to make being more secure the path of
least resistance. Make the secure way of doing things so easy that
anything else is just dumb, and the users will migrate to the secure way.

This is a highly unnatural thing to do. Security is the business of
saying "no" to access requests, and so is mostly viewed as being the
enemy of convenience.

However, it can be done. SSH did it; logging in to a remote host is
easier with SSH than with telnet or rlogin, because it lets you place
public keys (so you don't even have to type a password) and tunnels your
X11 stuff so that remote graphical stuff "just works".

All this is why ease of use was the #1 design goal of my AppArmor
product. Grey beards love to go around quoting the fable that you can't
add security to an existing system, you have to design it in. Well guess
what; you can't add ease of use to an existing system either, it has to
be designed in. And if you fail to provide for ease of use, then users
won't use it, at which point the security value of your solution drops
to zero.

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Training at CanSec West   http://cansecwest.com/dojoapparmor.html

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Economics of Software Vulnerabilities

2007-03-19 Thread Crispin Cowan
Gary McGraw wrote:
> I'm not sure vista is bombing because of good quality.   That certainly would 
> be ironic.   
>
> Word on the "way down in the guts" street is that vista is too many things 
> cobbled together into one big kinda functioning mess.
I.e. it is mis-featured, and lacks some integration. This is a
variation on not having desired features. And there certainly are big
features in Vista that were supposed to be there but aren't (most of
user-land being managed code, relational file system).

It is also infamously late.

So if the resources that were put into the code quality in Vista had
instead been put into features and ship-date, would it do better in the
marketplace?

Sure, that's heretical :) but it just might be true :(

Crispin, now believes that users are fundamentally what holds back security

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Training at CanSec West   http://cansecwest.com/dojoapparmor.html

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Economics of Software Vulnerabilities

2007-03-12 Thread Crispin Cowan
> ...[e]mbrace the next steps in sane
> programming (we HAVE largely stamped out self-modifying code, but
> strcpy() is still a problem...)
>   
I beg to differ. There is no evidence at all that the "good enough"
modality is not sustainable. Read Vernor Vinge's "A Deepness in the Sky"
for a fascinating vision on what 10,000 years of shoddy software
development could produce. And it's a damn fine book.

> What's most disappointing to me is the near-total lack of discussion
> about security policies and models in the whole computer security field,
> today.
>   
I see the policy field growing, albeit slowly. SELinux and AppArmor are
getting traction now, and 5 years ago they were exotic toys for weirdos.

> If engineering is the practice of applying the logic and proofs provided
> by science to real world situations, software engineering and computer
> science seem simply to have closed their eyes to the question of system
> security and internal controls.
>
> Perhaps economics will reinvigorate the discussion in the coming decades.
>   
I view this as completely ironic. It was economics that forced the
software industry to close its eyes to formalism and quality. The
industry won't change until economics make quality matter more than
features, and I have yet to see any hint of that happening. For example,
Microsoft Vista is:

* Much better code quality: MS invested heavily in both automated
  and human code checking before shipping.
* Feature-poor: they pulled back on most of the interesting
  features, and as a result Vista is fundamentally XP++ with a
  pretty 3D GUI.
* A year or two late.
* Bombing in the market: the street chat I see is enterprises doing
  anything possible to avoid upgrading to Vista.

So it seems that even mighty Microsoft, when it tries for quality over
features, just gets punished in the marketplace.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hacking is exploiting the gap between "intent" and "implementation"

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


[SC-L] NDSS: Network and Distributed Systems Security

2007-02-13 Thread Crispin Cowan
This is the call for participation for the annual Network and
Distributed System Security conference, starting in two weeks, February
28th to March 2nd, in San Diego http://www.isoc.org/isoc/conferences/ndss/07/

NDSS is a traditional scholarly academic security conference with a peer
reviewed track of papers
http://www.isoc.org/isoc/conferences/ndss/07/program.shtml

However, this year we have made a special effort to make NDSS more
relevant to security practitioners by adding an invited talks track
focused on security threats by some leading practitioners. Our invited
talks schedule is:

* Keynote: Vernor Vinge, professor emeritus of computer science at
  UCSD, founder of the science fiction cyberpunk genre, quadruple
  Hugo award winner for the novels "A Fire Upon the Deep" and "A
  Deepness in the Sky", and the stories "Fast Times at Fairmont
  High" and "The Cookie Monster", and notable futurologist for the
  notion of the technological singularity. Of particular interest to
  me as a security geek is that software security is a key element
  of "Deepness in the Sky", and it is *correct* :)
* H1kari of ToorCon speaking on "Breaking Wireless and Mac OS-X
  Encryption with FPGAs"
* John Viega, McAfee Chief Security Architect on "Malware in the
  Real World"
* Tom Liston, speaking on work with Ed Skoudis, on "Virtual Machine
  Security Issues"
* Jim Hoagland, speaking on work with Oliver Friedrichs on "A
  Network Attack Surface Analysis of RTM Windows Vista"
* Panel "Red Teaming and Hacking Games: How Much Do They Really
  Help?", moderated by Crispin Cowan, with panelists:
  o John Viega, Kenshoto/Defcon CtF organizer
  o Rodney Thayer, member of a winning Kenshoto/Defcon CtF team
  o Giovanni Vigna, professor UCSB, leader of 2005 Defcon CtF
winning team
  o Dennis W. Mattison, member of organizing team for ToorCon
RootWars CtF game
  o Rizzo, member of the GhettoHackers, who dominated Defcon CtF
for 4 years, and then revolutionized the game with a new set
of rules & infrastructure in 2001

We hope for a lively exchange of views in the "hall track" between
academic security researchers and industrial security practitioners.
Come share your skills and frighten a professor :)

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hacking is exploiting the gap between "intent" and "implementation"

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Dark Reading - Discovery and management - Security Startups Make Debut - Security News Analysis

2007-01-28 Thread Crispin Cowan
ljknews wrote:
> My guess is that if a company actually is capable of analyzing
> binary code they only do it for the highest volume instruction
> sets.
>   
They certainly will focus on larger markets first. If you want them to
focus on *your* market, make it worth their while :) SUSE Linux does a
lot for the Z series mainframe market because they are willing to pay
for it. The market for, say, Motorola 88000 CPUs is relatively sparse :)

> My guess is that attackers will go after machines they feel are
> less protected.
>   
I fully disagree with that. There are 2 kinds of attackers:

   1. Bottom feeders. These people troll for very common vulnerabilities
  with scanners and worms, trying to build botnets. There are
  *plenty* of people with unprotected x86 machines, so that is what
  they target, regardless of any optional technology add-ons people
  develop for that platform.
   2. Targeted attackers. These people are professionals, and they are
  going after a specific target. They don't select targets on the
  basis of vulnerability, they select the target for external
  reasons having nothing to do with the defenses deployed.

In between would be criminals of opportunity who seek targets that are
both valuable and soft. But that is really just a more sophisticated
variant of #1.

As a defender, you need to care about the strength of your defense in
proportion to the value of your assets. If your assets are not
particularly valuable, then only deploy the basic defenses to shed the
ankle biters in class 1. If your assets are more valuable, then deploy
more thorough/expensive defenses until the cost of the defenses exceeds
the calculated risk to your assets.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hacking is exploiting the gap between "intent" and "implementation"


___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Compilers

2007-01-04 Thread Crispin Cowan
Florian Weimer wrote:
> * Crispin Cowan:
>   
>> ljknews wrote:
>> 
>>> 2. The compiler market is so immature that some people are still
>>>using C, C++ and Java.
>>>   
>> I'm with you on the C and C++ argument, but what is immature about Java?
>> I thought Java was a huge step forward, because for the first time, a
>> statically typesafe language was widely popular.
>> 
> Java is not statically typesafe, see the beloved ArrayStoreException
> (and other cases, depending what you mean by "statically typesafe").
>   
So every language that supports arrays is not statically type safe? How
else can a language guarantee array bounds without resorting to runtime
bounds checks in cases where the indices are dynamically computed?
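
For the record, here is the case Florian means, as a sketch; Java's
covariant arrays are what force the check to happen at runtime:

    public class ArrayStoreDemo {
        public static void main(String[] args) {
            Object[] objs = new String[1]; // legal: Java arrays are covariant
            objs[0] = Integer.valueOf(42); // compiles fine, but the JVM's
                                           // runtime check throws
                                           // ArrayStoreException here
        }
    }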

What language does better on array bounds typing? Back in the day,
classic Pascal had very static array types: an array of a specific size
was a type, and you could not mix them. So if you wanted to create a
procedure that processed an array of things, the type of the procedure
was bound to the *fixed* size of the list of things. Statically type
safe, but not very useful. And then you discover the hard way that the
generated code most often didn't even enforce array bounds checking :(

The Hermes programming language (fairly arcane, back in the early 1990s)
dodged this bullet by *not* supporting arrays. Instead it had
"collections": a pile of tuples that could be indexed by value of any
field(s) in the tuple you want. Essentially a relational table. You ask
the collection for an item with a matching field value, and it either
gives it to you, or it throws an exception.

So it seems to me that "record not found" is the bottom line in array
bounds checking. It is pretty fundamentally a dynamic error condition.
Static type checking cannot prove it will never happen, and so it will
always involve a dynamic check of some kind.
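
A minimal illustration of that point (example mine): the index below
cannot be proven in range at compile time, so some dynamic check is
unavoidable:

    public class BoundsDemo {
        public static void main(String[] args) {
            int[] records = new int[10];
            int i = Integer.parseInt(args[0]); // value unknown until runtime
            // The JVM must check the bounds here; if i is out of range,
            // the "record not found" condition surfaces as an
            // ArrayIndexOutOfBoundsException.
            System.out.println(records[i]);
        }
    }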

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hacking is exploiting the gap between "intent" and "implementation"


___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Compilers

2006-12-26 Thread Crispin Cowan
ljknews wrote:
>   2. The compiler market is so immature that some people are still
>  using C, C++ and Java.
>   
I'm with you on the C and C++ argument, but what is immature about Java?
I thought Java was a huge step forward, because for the first time, a
statically typesafe language was widely popular.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hacking is exploiting the gap between "intent" and "implementation"


___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Could I use Java or c#? [was: Re: re-writing college books]

2006-11-14 Thread Crispin Cowan
Robin Sheat wrote:
> On Tuesday 14 November 2006 13:28, Crispin Cowan wrote:
>   
>> It means that compromising performance 
>> 
> It's not necessarily a given that runtime performance is compromised. There 
> are situations where Java is faster than C (I've tested this on trivial 
> things).
Here it is "bytecode vs. native code generator", not "Java vs. C."
Remember, I advocated Java over C++ in the first place :)

Even in the bytecode vs. native code generator contest, there are cases
where each will win:

* bytecode interpreters always lose; they really are just a kludge
* JIT can win if it uses dynamic profiling effectively and the
  application is amenable to optimization for decisions that need to
  be evaluated at runtime
* JIT can be a lose because of the latency required to JIT the code
  instead of compiling ahead of time

So:

* JIT will win if your application is long-lived, and has a lot of
  dynamic decision making to do, e.g. making a lot of deep object
  member function calls that are virtual, or just a lot of
  conditional branches (see the sketch below).
* Native code will win if your applications are just short-lived,
  because they are dispatched as children from a dispatcher process
  o You pay the JIT cost each time it starts
  o The short lifespan doesn't give dynamic profiling time to do
    its thing
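
Here is a sketch of the kind of hot virtual call site where dynamic
profiling pays off; the class names are mine. A JIT that observes only
one receiver type at this site can devirtualize and inline area(),
which an ahead-of-time code generator cannot safely do without
whole-program knowledge:

    interface Shape {
        double area();
    }

    final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    public class JitDemo {
        public static void main(String[] args) {
            Shape[] shapes = new Shape[100000];
            for (int i = 0; i < shapes.length; i++) {
                shapes[i] = new Circle(i);
            }
            double sum = 0;
            // Hot, long-lived loop: after profiling, the JIT can treat
            // this virtual call as a direct call to Circle.area() and
            // inline it.
            for (Shape s : shapes) {
                sum += s.area();
            }
            System.out.println(sum);
        }
    }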


> Personally, I find the programmer time to be much better used in Java too. 
>   
No argument from me. I advocate Java, I just want a native code
generator instead of bytecode.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Could I use Java or c#? [was: Re: re-writing college books]

2006-11-13 Thread Crispin Cowan
mikeiscool wrote:
> On 11/14/06, Leichter, Jerry <[EMAIL PROTECTED]> wrote:
>   
>> The joke we used to make was:  The promise of Java was "Write once,
>> run everywhere".  What we found was "Write once, debug everywhere".
>> Then came the Swing patches, which would cause old bugs to re-appear,
>> or suddenly make old workaround cause problems.  So the real message
>> of Java is "Write once, debug everywhere - forever".
>>
>> Now, I'm exaggerating for effect.  There are Java programs even quite
>> substantial Java programs, that run on multiple platforms with no
>> problems and no special porting efforts.  (Hell, there are C programs
>> with the same property!)  But there are also Java programs that
>> cause no end of porting grief.  It's certainly much more common to
>> see porting problems with C than with Java, but don't kid yourself:
>> Writing in Java doesn't guarantee you that there will be no platform
>> issues.
>> 
> True, but that doesn't mean runtime portability isn't a good thing to aim for.
>   
It means that compromising performance to obtain runtime portability
that does not actually exist is a poor bargain.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] p-code was created for PLATFORM PORTABILITY

2006-11-13 Thread Crispin Cowan
David A. Wheeler wrote:
> On 11/9/06, Crispin Cowan <[EMAIL PROTECTED]> wrote:
>   
>>> Prior to Java, resorting to compiling to byte code (e.g. P-code back in
>>> the Pascal days) was considered a lame kludge because the language
>>> developers couldn't be bothered to write a real compiler.
>>>   
> I believe that is completely and totally false.
> If you want to claim p-code itself was lame, fine.
> But let's keep the history accurate.
>
> The UCSD p-system was created in the late 1970's SPECIFICALLY for
> PORTABILITY of executable code: You could ship p-code to any
> machine, and it would run.
That is not inconsistent with my claim. The "P-code is a kludge to get
around writing a real compiler" is multiplied by the diversity of
architectures. Writing a native code generator is a cost you pay for
every supported architecture. So in more detail, P-code is a
performance-compromising kludge to avoid having to write a *lot* of real
code generators.

One major change between then and now is consolidation of CPUs. Then,
there really was a very broad diversity of CPU architectures (IBM
mainframe, IBM  AS/400, DEC VAX, PDP, DEC10, DEC20, Data General,
Apollo, HP, Xerox Sigma, x86, 68000, NS32K, etc. etc.) and they all more
or less mattered. It is *very* different today: the list of CPU
architectures that matter is much shorter (x86, x86-64, SPARC, POWER,
Itanium): only 4 instead of a baker's dozen, and of those 4, a single
one (x86) is a huge majority of the market.

Pascal was a student language, not often used for commercial
development, so money for Pascal development was scarce. In contrast,
real languages for commercial purposes (PL/1, COBOL, FORTRAN, C) all
used native code generators. P-code was precisely a
performance-compromising kludge to allow Pascal to be portable with less
development effort.

Of course, there was one big exception: Turbo Pascal. Arguably the most
popular Pascal implementation ever. And it used a native code generator.

The need for portability, and the cost of portability (how many
platforms you really have to port to), has dropped dramatically. Bytecode
should be going away, but the architectural mistake of Java and C#/.Net
is going to preserve it for some time to come :(

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Could I use Java or c#? [was: Re: re-writing college books]

2006-11-12 Thread Crispin Cowan
Al Eridani wrote:
> On 11/9/06, Crispin Cowan <[EMAIL PROTECTED]> wrote:
>   
>> Prior to Java, resorting to compiling to byte code (e.g. P-code back in
>> the Pascal days) was considered a lame kludge because the language
>> developers couldn't be bothered to write a real compiler.
>> 
> "Post-Java, resorting to compiling to machine code is considered a lame
> kludge because the language developers cannot be bothered to write a
> real optimizer."
>   
I don't see what a bytecode intermediate stage has to do with "real
optimizer". Very sophisticated optimizers have existed for native code
generators for a very long time.

Bytecode interpreter performance blows goats, so I'm going to assume you
are referring to JIT. The first order effect of JIT is slow startup
time, but that's not an advantage either. So you must be claiming that
dynamic profiling (using runtime behavior to optimize code) is a major
advantage. It had better be, because the time constraints of doing your
optimization at JIT time restrict the amount of optimization you can do
vs. with a native code generator that gets to run off-line for as long
as it needs to.

But yes, dynamic profiling can be an advantage. However, its use is not
restricted to bytecode systems. VMware, the Transmeta CPU, and DEC's
FX!32 (virtual machine emulation to run x86 code on Alpha CPUs) use
dynamic translation to optimize performance. It works, in that those
systems all do gain performance from dynamic profiling, but note also
the reputation that they all have for speed: poor.

And then there's "write once, run anywhere." Yeah ... right. I've run
Java applets, and Javascript applets, and the latter are vastly superior
for performance, and worse, all too often the Java applets are not "run
anywhere", they only run on very specific JVM implementations.

There's the nice property that bytecode can be type safe. I really like
that. But the bytecode checker is slow; do people really run it
habitually? More important; is type safety a valuable property for
*untrusted code* that you are going to have to sandbox anyway?

So I give up; what is it that's so great about bytecode? It looks a
*lot* like the Emperor is not wearing clothes to me.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Could I use Java or c#? [was: Re: re-writing college books]

2006-11-09 Thread Crispin Cowan
ljknews wrote:
> At 4:18 PM +0100 11/9/06, SZALAY Attila wrote:
>   
>> Hi Al,
>>
>> On Thu, 2006-11-09 at 08:47 -0500, ljknews wrote:
>> 
>>> I think you are mixing the issue of Java vs. C* with the issue of
>>> interpreters vs compiled languages.
>>>   
I agree with LJ: language issues aside, I detest bytecode interpreters.
Prior to Java, resorting to compiling to byte code (e.g. P-code back in
the Pascal days) was considered a lame kludge because the language
developers couldn't be bothered to write a real compiler. The innovation
of Java was to describe this as a feature instead of a bug :)

>> Yes, you are totally right. Sorry.
>>
>> But I have not seen a Java or C# compiler.
>> 
For Java, look at GCJ <http://www.gnu.org/software/gcc/java/>. "It can
compile Java source code to Java bytecode (class files) or directly to
native machine code, and Java bytecode to native machine code."

For C#, the Mono compiler says
<http://www.mono-project.com/using/relnotes/1.0-features.html> that it
has "an advanced native optimizing compiler is available for x86, SPARC,
s390 and PowerPC available in both an ahead-of-time (AOT) compilation
mode to reduce startup time and take advantage of all available
optimizations and a Just-in-Time (JIT) compilation mode."

However, having native code generation is different from having good
support in GDB for your generated code :) Without GDB support, the
debugger will treat your binaries like they were written in hand
assembly, and not be able to relate core dumps to high level constructs
like variables and lines of source code. Current status:

* For Java: From the GCJ FAQ <http://gcc.gnu.org/java/faq.html#1_6>,
  "gdb 5.0 <ftp://ftp.gnu.org/pub/gnu/gdb/> includes support for
  debugging gcj-compiled Java programs. For more information please
  read Java Debugging with gdb <http://gcc.gnu.org/java/gdb.html>."
* For C#: There is a Mono Debugger
  <http://www.mono-project.com/Debugging>, but it is not complete.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] re-writing college books [was: Re: A banner year for software bugs | Tech News on ZDNet]

2006-11-03 Thread Crispin Cowan
David Crocker wrote:
> Unfortunately, there are at least two situations in which C++ is a more 
> suitable
> alternative to Java and C#:
>
> - Where performance is critical. Run time of C# code (using the faster .NET 
> 2.0
> runtime) can be as much as double the run time of a C++ version of the same
> algorithm. Try telling a large company that it must double the size of its
> compute farms so you can switch to a "better" programming language!
>
> - In hard real-time applications where garbage collection pauses cannot be
> tolerated.
>   
Except that in both of those cases, C++ is not appropriate either. That
is a case for C.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Why Shouldn't I use C++?

2006-11-02 Thread Crispin Cowan
Ben Corneau wrote:
> From time to time on this list, the recommendation is made to never use C++
> when given a choice (most recently by Crispin Cowan in the "re-writing
> college books" thread). This is a recommendation I do not understand. Now,
> I'm not an expert C++ programmer or Java or C# programmer and as you may
> have guessed based on the question, I'm not an expert on secure coding
> either. I'm also not disagreeing with the recommendation; I would just like
> a better understanding.
>
> I understand that C++ allows unsafe operations, like buffer overflows.
> However, if you are a halfway decent C++ programmer buffer overflows can
> easily be avoided, true? If you use the STL containers and follow basic good
> programming practices of C++ instead of using C-Arrays and pointer
> arithmetic then the unsafe C features are no longer an issue?
>
> C and C++ are very different. Using C++ like C is arguably unsafe, but when
> it's used as it was intended can't C++ too be considered for secure
> programming?
>   
No, it cannot.

C++ is no more safe than C. C++ still supports many undefined
operations, which is what makes a language unsafe. No way can C++ be
considered a secure programming language.

If you need a lean, small language for doing embedded or kernel stuff,
then use C; you cannot afford the bloat of C++, so it is not appropriate.

If you need a powerful, abstract language for building complex
applications, then use C# or Java (or ML, or Haskell). They provide all
of the abstraction and programming convenience of C++, and they also
provide type safety. This means that there are no undefined operations,
which is what makes them secure programming languages.
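
To make "no undefined operations" concrete, a small sketch of my own:
the same out-of-bounds write that is undefined behavior in C or C++ has
exactly one defined, catchable outcome in a type safe language:

    public class DefinedBehavior {
        public static void main(String[] args) {
            int[] buf = new int[4];
            try {
                buf[8] = 1; // in C/C++ this overflow is undefined behavior;
                            // here it has one defined outcome: an exception
            } catch (ArrayIndexOutOfBoundsException e) {
                System.err.println("out-of-bounds write rejected: " + e);
            }
        }
    }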

There is no excuse for *choosing* C++, ever. Always avoid it. The only
excuse for *using* C++ is that some doofus before you chose it and you
have to live with the legacy code :)

So why does C++ exist? Because technology has moved on. 25 years ago, when
C++ was invented, there was not a great supply of well developed type
safe object oriented programming languages. So C++ seemed like an
incremental improvement over C when it was introduced in the early
1980s. It did provide an improvement over C for developing large
applications, where development costs due to complexity were the big
problem, and bloat could be afforded.

But that lunch has now been eaten by the type safe OOP languages of Java
and C#. They are strictly better than C++ at complex applications, so
there really is no excuse for using C++ to write new application code.

And there never was an excuse for using C++ to write kernel or embedded
code. You cannot afford the bloat of C++ there, and if your kernel is so
complex that you need OOP to be able to program it, then your kernel
design is broken anyway.

I suppose there should be an "IMHO" in here somewhere in a rant like
this. Feel free to insert it anywhere you like :)

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] re-writing college books - erm.. ahm...

2006-10-29 Thread Crispin Cowan
Robert C. Seacord wrote:
>> Seeking perfect correctness as an approach to security is a fool's
>> errand. Security is designing systems that can tolerate imperfect software.
>> 
> I could go along with "achieving perfect correctness as an approach to
> security is a fool's belief" but I believe the desire to achieve
> correctness is a prerequisite for security.
>
> More specifically, I have found that systematic schemes for providing
> software security (such as memory protection, canaries, etc.) are
> generally ineffective once a coding error (such as a buffer overflow)
> allows an attacker to penetrate the peripheral defense of code
> correctness.  Given the current state of software security, I don't
> think any security "best" practice can abandoned and that
> defense-in-depth is a practical necessity.
>   
I don't think we disagree. When I said that seeking correctness is a
fool's errand, I meant (more precisely) that *depending on achieving*
correctness is a fool's errand. You must always assume the presence of
imperfect software, and then design in defense in depth to tolerate
that. Using other software engineering techniques (secure coding, the
occasional topic of this mailing list :) certainly helps, but cannot be
the whole approach to security.

> Also, back on the book topic, I recently heard of an older but
> successful book that did nothing but take examples from other books and
> show in detail how they were incorrect.  Perhaps such a "supplemental"
> text could be developed for commonly used text books.
>   
I like it! Bugtraq for books :) My engineers are quite fond of The
*Daily WTF*, a web site that lampoons bad code.

Crispin
___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] re-writing college books - erm.. ahm...

2006-10-29 Thread Crispin Cowan
Gadi Evron wrote:
> For argument sake, let's assume there are 100.
>
> How about campaigning for a secure coding chapter to be added to these
> semester, erm, world-wide?
>
> Nothing is ever easy, but we have to start somewhere. I don't see why this
> is a bad idea. Yes, it takes time. Yes, it will have a much bigger impact.
>   
It is not a bad idea. But it clearly is not sufficient. Why are you
assuming that it is not already being tried? The problem is that it is
being tried with the usual degree of effectiveness, i.e. unevenly.
Saying "lets try it" is redundant, because that is already going on,
just not enough. To make it more, one would have to convince the people
who are currently not doing it, or doing it badly, to do better, and
they (by definition) are not listening.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] re-writing college books - erm.. ahm...

2006-10-28 Thread Crispin Cowan
Gadi Evron wrote:
> So, "dump C", "Use SML", "What secure coding classes are you doing?" and
> "we are already doing it!!" are the responses I got when I started this
> thread.
>   
What did you expect from whining about the generally poor quality of
software? :)

> Can someone mention again why re-writing the main often-used and probably
> less than 3 mostly-used basic programming books is a bad idea?
>   
Uh ... 'cause I question the assertion that there are 3 mostly-used
basic programming books. I suspect it is more like 78 mostly used books.
More importantly, if there are 3 mostly used books, then there are 78
more behind them vying for those 3 slots, and they all have the same
problems. If you write a new book, then you just join the pool of 78,
and you have the impact of a drop in the bucket.

Worse, we are talking about correctness here. Correctness is hard, and
correctness on a large scale is harder. I doubt that even a concerted
effort at a "correct" book on intro to programming would manage to
actually be correct any time before the 3rd edition, 10 years from now.

Seeking perfect correctness as an approach to security is a fool's
errand. Security is designing systems that can tolerate imperfect software.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] re-writing college books [was: Re: A banner year for software bugs | Tech News on ZDNet]

2006-10-27 Thread Crispin Cowan
Gergely Buday wrote:
> Larry Kilgallen wrote:
>   
>> Is there participation on this list from the (hopefully larger number of)
>> CMU instructors who are teaching people to use safer languages in the first
>> place ?
>> 
> May anybody not from CMU enter the discussion about safer languages? ;-)
>
> I'm in favor of SML, as it has a number of implementations (some of
> them comparable to C in speed)  and a formal definition ("well-typed
> programs do not go wrong") + a standard library.
>   
SML is a nice & clean type safe language, and I don't mean to criticize
it. However, if the goal is to be able to use industry-popular languages
that are safe, it seems to me that we have entered a bright new phase of
history. Python, Ruby, Java, and C# are all broadly popular in industry,
and are all type safe. Java and C# are statically type safe. So why not
use them?

For me, the enemy in the room is C++. It gives you the safety of C with
the performance of Smalltalk. There is no excuse at all to be writing
anything in C++, yet vastly too many applications are written in C++
anyway. Instead of trying to coax developers to switch from C++ to
something "weird" like SML, let's encourage them to switch to Java or C#,
which are closer to their experience.

Sure, there are likely to be ways in which SML is better than C# or
Java. However, in security, the perfect is all too often the enemy of
the good-enough. The big community hears security people talk about the
high-security approach that security geeks really want, considers the
costs, goes back to doing things the old way, and ignores the security
people. If security people instead pitch something that is feasible and
makes the situation better, instead of asking for the moon, we will make
more progress.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


[SC-L] NDSS CFP Due September 10th

2006-09-06 Thread Crispin Cowan
Security researchers with new results may be interested to know that the
CFP deadline for NDSS is this Sunday September 10th
http://www.isoc.org/isoc/conferences/ndss/07/cfp.shtml

NDSS is a high quality academic peer reviewed conference in computer
security. Traditionally focused on network security, NDSS now covers all
aspects of computer security. This year we have a special interest in
practical security issues, and we will be interleaving the peer reviewed
technical papers with invited talk presentations from the "hacker"
community on the leading edge of security attacks. We expect the
blending of the (largely defense oriented) academic security community
with the (often attack oriented) hacker community to produce both
interesting presentations and interesting "hall track" conversations.

Please consider submitting your papers by this Sunday, and also consider
attending NDSS next February 28th - March 2nd in San Diego.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Cost of provably-correct code

2006-07-24 Thread Crispin Cowan
David Crocker wrote:
> Crispin Cowan wrote on 21 July 2006 18:45:
>   
>> Yes, you can have provably correct code. Cost is approximately $20,000
>> per line of code. That is what the "procedures" required for correct
>> code cost. Oh, and they are kind of super-linear, so one program of
>> 200 lines costs more than 2 programs of 100 lines.
>
> To arrive at that cost, I can only assume that you are referring to a
> process in which all the proofs are done by hand, as was attempted for
> a few projects in the 1980s.
I did not arrive at it. It is (allegedly) the NSA's estimate of cost per
LOC for EAL7 provably correct assurance. It was quoted to me by a friend
at a company that has an A1 (Orange Book) secure microkernel.

>> We currently achieve automatic proof rates of 98% to 100% (using PD),
>> and I hear that Praxis also achieves automatic proof rates well over
>> 90% (using Spark) these days. This has brought down the cost of
>> producing provable code enormously.

Interesting. That could possibly bring down the cost of High Assurance
software enormously.

How would your prover work on (say) something like the Xen hypervisor?
Or the L4 microkernel?

Caveat: they are C code :(

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes



___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] "Bumper sticker" definition of secure software

2006-07-24 Thread Crispin Cowan
mikeiscool wrote:
> On 7/21/06, Florian Weimer <[EMAIL PROTECTED]> wrote:
>   
>> Secure software costs more, requires more user training, and fails in
>> hard-to-understand patterns.  If you really need it, you lose.
>> 
> Really secure software should require _less_ user training, not more.
>   
That depends.

If "really secure" means "free of defects", then yes, it should be
easier to use, because it will have fewer surprising quirks.

However, since there is so little defect-free software, most often a
"really secure" system is one with lots of belt-and-suspenders access
controls and authentication checks all over the place. "Security" is the
business of saying "no" to the bad guys, so it necessarily involves
saying "no" if you don't have all your ducks in a row.

As a result, really secure systems tend to require lots of user training
and are a hassle to use because they require permission all the time.
Imagine if every door in your house was spring loaded and closed itself
after you went through. And locked itself. And you had to use a key to
open it each time. And each door had a different key. That would be
really secure, but it would also not be very convenient.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] bumper sticker slogan for secure software

2006-07-21 Thread Crispin Cowan
mikeiscool wrote:
> On 7/21/06, Dana Epp <[EMAIL PROTECTED]> wrote:
>   
>>> yeah.
>>> but none of this changes the fact that it IS possible to write completely 
>>> secure code.
>>>   
>> And it IS possible that a man will walk on Mars someday. But its not
>> practical or realistic in the society we live in today. I'm sorry mic,
>> but I have to disagree with you here.
>>
>> It is EXTREMELY difficult to have code be 100% correct if an application
>> has any level of real use or complexity. There will be security defects.
>> 
> Why? Why accept this as a fact? It is not a fact. If you put
> procedures in place and appropriately review and test you can be
> confident.
>   
Sorry, but it is a fact. Yes, you can have provably correct code. Cost
is approximately $20,000 per line of code. That is what the "procedures"
required for correct code cost. Oh, and they are kind of super-linear,
so one program of 200 lines costs more than 2 programs of 100 lines.
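
To put a toy model on the super-linear claim (illustrative numbers only,
not the NSA's): if verification cost grows as n^k in program size n for
some k > 1, say k = 1.2, then

    \frac{C(200)}{2\,C(100)} = \frac{200^{1.2}}{2 \cdot 100^{1.2}} = 2^{0.2} \approx 1.15

so the single 200-line program already costs about 15% more than the two
100-line programs together, and the gap widens as k grows.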

>> More importantly, threats are constantly evolving and what you may
>> consider completely secure today may not be tomorrow when a new attack
>> vector is recognized that may attack your software.
>> 
> This isn't as true and as wide spread as you make it sound. Consider,
> for example, "SQL Injection". Assuming I do not upgrade my database,
> and do not change my code and server (i.e. do not change my
> environment at all), then if I have prevented this attack initially
> nothing new will come up to suddenly make it work.
>   
Indeed, consider SQL injection attacks. They didn't exist 5 years ago,
because no one had thought of them. Same with XSS bugs. Same with printf
format string attacks. All of them are examples of processing user input
without validation, but they are all really big classes of such, and
they were discovered to occur in very large numbers in common code.
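
For instance, the entire printf format-string class boils down to a
one-line difference. A minimal sketch (my toy example, not anyone's real
code):

    #include <stdio.h>

    void log_msg(const char *user_input)
    {
        /* Vulnerable: user input becomes the format string, so input
         * such as "%x %x %n" can read, or even write, memory. */
        printf(user_input);

        /* Safe: fixed format string; the user input is mere data. */
        printf("%s", user_input);
    }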

What Dana is trying to tell you is that some time in the next year or
so, someone is going to discover yet another of these major
vulnerability classes that no one has thought of before. At that point,
a lot of code that was thought to be reasonably secure suddenly is
vulnerable.

>> And unless you wrote
>> every single line of code yourself without calling out to ANY libraries,
>> you cannot rely on the security of other libraries or components that
>> may NOT have the same engineering discipline that you may have on your
>> own code base.
>> 
> Not true; you can call other libraries happily and with confidence if
> you handle the case of them going all kinds of wrong.
>   
This also is false. Consider the JPG bug that badly 0wned Microsoft
desktops a while back. It was a bug in an image processing library. You
try to view an image by processing it with the library, and the result
is that the attacker can execute arbitrary code in your process. That is
pretty difficult to defensively program against.
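
About the best the caller can do is containment rather than defense. A
hedged sketch (decode_image() is a hypothetical stand-in for the
untrusted library): run the decoder in a throwaway child process, so a
hijacked decoder dies with the child instead of 0wning the caller.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    extern int decode_image(const char *path);  /* untrusted library */

    int decode_in_child(const char *path)
    {
        int status;
        pid_t pid = fork();

        if (pid < 0)
            return -1;
        if (pid == 0) {
            /* Child: privilege dropping and chroot elided here; even
             * so, a compromised decoder cannot touch the parent's
             * memory, only its own short-lived process. */
            _exit(decode_image(path) == 0 ? 0 : 1);
        }
        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
    }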

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] "Bumper sticker" definition of secure software

2006-07-17 Thread Crispin Cowan
mikeiscool wrote:
> On 7/17/06, Crispin Cowan <[EMAIL PROTECTED]> wrote:
>> "supposed to" goes to intent.
> I don't know. I think there is a difference between "this does what
> it's supposed to do" and "this has no design faults". That's all I was
> trying to highlight.
The difference between "supposed to", "design flaw", and "implementation
flaw" is entirely dependent on your level of abstraction:

* Executive: "build a thingie that lets good guys in and keeps bad
  guys out."
* Director: "build an authentication engine that uses 2-factor
  tokens to authenticate users and only then lets them in."
* Manager: "use OpenSSL and this piece of glue to implement that
  2-factor thingie."
* Coder: "main() { ..." :)

Errors can occur at any level of translation. When the system does
something "surprising", the guy at the top can claim that it wasn't
"supposed" to do that, and if you dig hard enough, you will discover
*some* layer of abstraction where the vulnerability violates the upper
intent, but not the lower intent. Hence the bug.

Some example bugs at each level:

* Executive: forgot to specify who is a "good guy"
* Director: Forgot to provide complete mediation, so the attacker
  could bypass the authenticator.
* Manager: the glue thingie allowed proper authentication tokens,
  but also allowed tokens with a string value of 0.
* Coder: "gets(token); ..."

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Necessity is the mother of invention ... except for pure math

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] "Bumper sticker" definition of secure software

2006-07-17 Thread Crispin Cowan
mikeiscool wrote:
> On 7/17/06, Crispin Cowan <[EMAIL PROTECTED]> wrote:
>> >  Goertzel Karen wrote:
>> > I've been struggling for a while to synthesise a definition of secure
>> > software that is short and sweet, yet accurate and comprehensive.
>>
>> My favorite is by Ivan Arce, CTO of Core Software, coming out of a
>> discussion between him and me on a mailing list about 5 years ago.
>>
>> Reliable software does what it is supposed to do. Secure software
>> does what
>> it is supposed to do, and nothing else.
> and what if it's "supposed" to take unsanitzed input and send it into
> a sql database using the administrators account?
>
> is that secure?
"supposed to" goes to intent. If it is a bug that allows this, then it
was not intentional. If it was intended, then (from this description) it
was likely a Trojan Horse, and it is secure from the perspective of the
attacker who put it there.

IMHO, bumper sticker slogans are necessarily short and glib. There isn't
room to put in all the qualifications and caveats to make it a perfectly
precise statement. As such, mincing words over it is a futile exercise.

Or you could just print a technical paper on a bumper sticker, in really
small font :)

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Necessity is the mother of invention ... except for pure math

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] "Bumper sticker" definition of secure software

2006-07-16 Thread Crispin Cowan




Goertzel Karen wrote:
> I've been struggling for a while to synthesise a definition of secure
> software that is short and sweet, yet accurate and comprehensive.

My favorite is by Ivan Arce, CTO of Core Software, coming out of a
discussion between him and me on a mailing list about 5 years ago:

    Reliable software does what it is supposed to do. Secure software
    does what it is supposed to do, and nothing else.

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Necessity is the mother of invention ... except for pure math



___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Dr. Dobb's | Quick-Kill Project Management | June 30, 2006

2006-07-15 Thread Crispin Cowan
Wall, Kevin wrote:
> 4) Know your Brooks' _Mythical Man-Month_. Management almost certainly
>    will offer to give you more developers/testers/etc. This is almost
>    always a bad ROI since you will spend more time bringing those
>    individuals up-to-speed on your project than you will get back
>    in productivity.
One of the most interesting aspects of the Open Source phenomenon is
that open source projects, esp. the Linux kernel, seem to be able to
violate most of Brooks' laws with impunity. Linus has achieved absurd
levels of software development parallelism, using a very loosely knit
team of people with different cultures, languages, and social agendas,
most of whom have never met each other. Brooks says this should be an
unmitigated disaster, yet it succeeds. Go figure :)

How Linus does this is open to lively debate. That he achieves it is
pretty hard to dispute.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Necessity is the mother of invention ... except for pure math


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Re: Comparing Scanning Tools (false positives)

2006-06-13 Thread Crispin Cowan
David A. Wheeler wrote:
> Brian Chess (brian at fortifysoftware dot com) said:
>> False positives:
>> Nobody likes dealing with a pile of false positives, and we work hard to
>> reduce false positives without giving up potentially exploitable
>> vulnerabilities.
> I think everyone agrees that there are "way too many false positives"
> in the sense that "there are so many it's annoying and it costs money
> to check them out" in most of today's tools.
>
> But before you say "tools are useless" you have to ask, "compared to
> what?"
> Manual review can find all sorts of things, but manual review is likely
> to miss many serious problems too.  ESPECIALLY if there are only a
> few manual reviewers for a large codebase, an all-too-common situation.
I would like to introduce you to my new kick-ass scanning tool. You run
it over your source code, and it only produces a single false-positive
for you to check out. That false positive just happens to be the
complete source code listing for your entire program :)

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Ajax one panel

2006-05-24 Thread Crispin Cowan
Gary McGraw wrote:
> Btw, bill also said they tried twice to build an OS on java and failed both 
> times.  We both agree that a type safe OS will happen one day.
>   
Did he ever articulate what happened to these OS's? I recall a
presentation at OSDI 1996 by a Sun executive talking about JavaOS and
the spiffy new thin clients that Sun was going to introduce. He talked
about implementing the TCP/IP stack in pure Java, even with the problems
of type safety in marshalling raw data.

I had the impression that JavaOS failed for marketing reasons, not
technical. But that impression was formed from hearing the OSDI
presentation that described implementing JavaOS in the past tense.

So what was the real reason?

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Re: [Owasp-dotnet] RE: 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-04-05 Thread Crispin Cowan
Pascal Meunier wrote:
> AppArmor sounds like an excellent alternative to creating a VMWare image for
> every application you want to run but distrust, although I can think of
> cases where a VMWare image would be safer.  For example, the
> installer/uninstaller may have vulnerabilities, may be "dirty" (it causes
> problems by modifying things that affect other applications, or doesn't
> cleanup correctly), or phones home, etc...  I guess you could make a profile
> for the installer as well (I'm not very enthusiastic about that idea
> though).  Also, I suspect that what you need to allow in some profiles is
> possibly sufficient to enable "some level" of malicious activity.  It's
> regrettable that it is only available for Suse Linux.
>   
That is correct. AppArmor is not a virtualization layer, and cannot be
used to create virtual copies of files for maybe-good/maybe-bad software
to mess with. Moreover, the LSM interface in the kernel (which both
AppArmor and SELinux depend on) is also not capable of virtualization.
There were requests for virtualization features during the LSM design
phase, but we decided that we wanted to keep LSM as unintrusive as
possible so as to maximize the chance of LSM being accepted by the 
upstream kernel.

> Perhaps one of the AppArmor mailing lists would be more appropriate to ask
> this,
apparmor-dev cc'd

>  but as you posted an example profile with "capability setuid", I must
> admit I am curious as to why an email client needs that.
Well now that is a very good question, but it has nothing to do with
AppArmor. The AppArmor learning mode just records the actions that the
application performs. With or without AppArmor, the Thunderbird mail
client is using cap_setuid. AppArmor gives you the opportunity to *deny*
that capability, so you can try blocking it and find out. But for
documentation on why Thunderbird needs it, you would have to look at
mozilla.org not the AppArmor pages.

>   I tried looking up
> relevant documentation on the Novell site, but it seems I was unlucky and
> tried during a maintenance period because pages were loading erratically.  I
> finally got to the "3.0 Building Novell AppArmor Profiles" page but it was
> empty.  I would appreciate receiving more information about it.  I am also
> interested in the "Linux Security Modules Interface".
>   
For an overview, look here:

"Linux Security Modules: General Security Support for the Linux
Kernel". Chris Wright, Crispin Cowan, Stephen Smalley, James Morris,
and Greg Kroah-Hartman. Presented at the 11^th USENIX Security
Symposium <http://www.usenix.org/events/sec02/>, San Francisco, CA,
August 2002. PDF <http://crispincowan.com/%7Ecrispin/lsm-usenix02.pdf>.

However, this paper is only a general overview, and is now far out of
date. For an accurate view, look at the kernel source code.

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


[SC-L] Segments, eh Smithers?

2006-04-04 Thread Crispin Cowan
> The programming language was the stark EPL subset of
> PL/I and the corresponding McIlroy-Morris EPL compiler, which seems to
> have avoided some of the characteristic programming errors that are
> still common today.  No software was written until there was an approved
> specification, with well defined interfaces and exception conditions
> that were explicitly characterized in EPL.  And so on into a visionary
> sense of a future that has been largely lost for many perceived reasons,
> some of which are bogus, some of which are just seriously short-sighted.
>
> *** END SOAPBOX ***
>
> I'm sure this message may generate all sorts of Ifs and Ands and Buts.
> But the Butt we are kicking is our own.
>
> Cheers!  PGN
> ___
> Secure Coding mailing list (SC-L)
> SC-L@securecoding.org
> List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
> List charter available at - http://www.securecoding.org/list/charter.php
>   

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Re: [Owasp-dotnet] RE: 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-04-03 Thread Crispin Cowan
Dinis Cruz wrote:
> Jeff Williams wrote:  
>> I'm a huge fan of sandboxes, but Dinis is right, the market hasn't really
>> gotten there yet. No question that it would help if it was possible to run
>> complex software like a browser inside a sandbox that restricted its ability
>> to do bad things, even if there are vulnerabilities (or worse -- malicious
>> code) in them.  
> Absolutely, and do you see any other alternative? (or we should just
> continue to TRUST every bit of code that is executed in our computers?
> and TRUST every single developer/entity that had access to that code
> during its development and deployment?)
>   
This is exactly what AppArmor <http://en.opensuse.org/Apparmor> was
designed for: conveniently confining applications so that they can only
do what they need to do; least privilege, applied to applications.

I am running this mail client (Thunderbird) from within a "sandbox" (we
call it a "profile"). I have attached this policy, which should be
pretty self-explanatory.

>> But, if you've ever tried to configure the Java security policy file, use
>> JAAS, or implement the SecurityManager interface, you know that it's *way*
>> too hard to implement a tight policy this way.
> And .Net has exactly the same problem. It is super complex to create a
> .Net application that can be executed in a secure Partially Trusted Sandbox.
>   
This is where AppArmor really stands out. You can build an application
profile in minutes. Here is a video
<ftp://ftp.belnet.be/pub/mirror/FOSDEM/FOSDEM2006-apparmor.avi> of me
demoing AppArmor in a presentation at FOSDEM 2006
<http://www.fosdem.org/2006>. The video is an hour-long lecture on
AppArmor, and for the impatient, the demo is from 16:30 through 26:00.

>> And only the
>> developer of the software could reasonably attempt it, which is backwards,
>> because it's the *user* who really needs it right. 
>> 
> Yes, it is the user's responsibility (i.e. its IT Security and Server
> Admin staff) to define the secure environment (i.e the Sandbox) that 3rd
> party or internal-developed applications are allocated inside their data
> center,
>   
It is very feasible for a user, not a developer, to build an AppArmor
profile. Prior requirements for using AppArmor are:

* know how to use bash
* know how to use chmod
* know how to run the application in question


>> It's possible that sandboxes are going the way of multilevel security (MLS).
>> A sort of ivory tower idea that's too complex to implement or use. 
> I don't agree that the problem is too complex. What we have today is
> very complex architectures / systems with too many interconnections.
>   
"too many interconnections" is a Windows problem. In the UNIX world,
where (nearly) everything is a file, it is much easier to build
effective application containment policies.

> Simplify the lot, get enough resources with the correct focus involved,
> are you will see that it is doable.
>   
Indeed :)

> Basically, give the user data (as in information) that he can digest and
> understand, and you will see the user(s) making the correct decision(s).
>   
Well, maybe. Users are notorious for not making the right decision.
AppArmor lets the site admin create the policy and distribute it to
users. Of course that assumes we are talking about Linux users :)

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com

# vim:syntax=subdomain
# Last Modified: Sun Apr  2 15:09:49 2006
/opt/MozillaThunderbird/lib/thunderbird-bin {
  #include 
  #include 
  #include 
  #include 
  #include 
  #include 
  #include 
  #include 
  #include 
  #include 
  #include 

  capability ipc_lock,
  capability setuid,

  /bin/basename px,
  /bin/bash ix,
  /bin/grep ixr,
  /bin/netstat px,
  /etc/mailcap r,
  /etc/mime.types r,
  /etc/opt/gnome/gnome-vfs-2.0/modules r,
  /etc/opt/gnome/gnome-vfs-2.0/modules/* r,
  /etc/opt/gnome/pango/pango.modules r,
  /home/** rw,
  /home/*/.gnupg/* lrw,
  /home/*/.thunderbird/** lrw,
  /opt/MozillaFirefox/bin/firefox.sh pxr,
  /opt/MozillaFirefox/lib/mozilla-xremote-client ixr,
  /opt/MozillaThunderbird/lib/** r,
  /opt/gnome/bin/file-roller ixr,
  /opt/gnome/bin/gedit ixr,
  /opt/gnome/bin/gimp-remote-2.2 ixr,
  /usr/X11R6/bin/OOo-wrapper px,
  /usr/X11R6/bin/acroread px,
  /usr/X11R6/bin/xv px,
  /usr/X11R6/lib/Acrobat7/Resource/Font/** r,
  /usr/bin/display px,
  /usr/bin/gpg ix,
  /usr/bin/mplayer px,
  /usr/bin/ooo-wrapper ixr,
  /usr/bin/perl ix,
  /usr/lib/firefox/firefox.sh px,
  /usr/lib/jvm/java-1.4.2-sun-1.4.2.06/jre/lib/fonts/** r,
  /usr/lib/ooo-2.0/program/soffice px,
  /usr/lib/ooo-2.0/share/fonts/** r,
}
___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] ZDNET: LAMP lights the way in open-source security

2006-03-07 Thread Crispin Cowan
Gavin, Michael wrote:
> Yeah, statistics can allow you to say and "prove" just about anything.
>
> OK, showing my ignorance here, since I haven't checked out any of the
> LAMP source trees and reviewed the code: how much of the code making up
> those modules is written in scripting languages vs. how much of it is
> written in C, C++ (and how much, if any, is written in any other
> compiled languages)?
>   
That doesn't matter; what matters is what fraction of disclosed
vulnerabilities is in each segment of the code. If 90% of the
vulnerabilities come from the PHP part, then the fact that 90% of the
lines of code are in C doesn't help.

> If the LAMP source code itself is primarily C/C++, then arguably, the
> results are somewhat interesting, though I think they would be much more
> interesting if this DISA project was set up to test the open source code
> with a number of commercial scanners instead of just the Coverity
> scanner, then we could at least compare the merits of various scanning
> techniques and implementations.
The proprietary status of the Coverity scanner is a continuous pain.
That's why I tend to ignore it where possible :)

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] RE: The role static analysis tools play in uncovering elements of design

2006-02-07 Thread Crispin Cowan
Jeff Williams wrote:
> I think there's a lot more that static analysis can do than what you're
> describing. They're not (necessarily) just fancy pattern matchers.
> ...
> Today's static analysis tools are only starting to help here. Tools focused
> on dumping out a list of vulnerabilities don't work well for me. Too many
> false alarms.  Maybe that's what you meant by 'inhibit'.
>   
In the general case, I think that any kind of analysis tool (static
analyzer, fuzzing tool, debugger, whatever) focuses the analyst's
attention on whatever aspects the tool author thought was important.
Whether this is a good or bad thing depends on whether you agree with
the author.

Using no tools at all just imposes a different bias filter, as humans
are (relatively) good at spotting some kinds of patterns, and not others.

Crispin

> --Jeff
>  
> Jeff Williams, CEO
> Aspect Security
> http://www.aspectsecurity.com
> email: [EMAIL PROTECTED]
> phone: 410-707-1487
>  
> 
> From: John Steven [mailto:[EMAIL PROTECTED] 
> Sent: Friday, February 03, 2006 1:40 PM
> To: Jeff Williams; Secure Coding Mailing List
> Subject: The role static analysis tools play in uncovering elements of
> design 
>
> Jeff,
>
> An unpopular opinion I’ve held is that static analysis tools, while very
> helpful in finding problems, inhibit a reviewer’s ability to collect as
> much information about the structure, flow, and idiom of code’s design as
> the reviewer might find if he/she spelunks the code manually.
>
> I find it difficult to use tools other than source code navigators (source
> insight) and scripts to facilitate my code understanding (at the
> design-level). 
>
> Perhaps you can give some examples of static analysis library/tool use that
> overcomes my prejudice—or are you referring to the navigator tools as well?
>
> -
> John Steven   
> Principal, Software Security Group
> Technical Director, Office of the CTO
> 703 404 5726 - Direct | 703 727 4034 - Cell
> Cigital Inc.  | [EMAIL PROTECTED]
>
> 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908
>
>   
> snipped
> Static analysis tools can help a lot here. Used properly, they can provide
> design-level insight into a software baseline. The huge advantage is that
> it's correct.
>
> --Jeff 
> snipped
> 
>
>
> ___________
> Secure Coding mailing list (SC-L)
> SC-L@securecoding.org
> List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
> List charter available at - http://www.securecoding.org/list/charter.php
>   

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Bugs and flaws

2006-02-07 Thread Crispin Cowan
Thanks for the very detailed and informative explanation.

However, I still think it sounds like IE has too large of an attack
surface :) It still seems to be the case that IE can be persuaded to
execute any of a large amount of code based on its raw (web) input, with
(fairly) arbitrary parameters, and this large attack surface allows
attackers to find vulnerabilities in any of the code that IE calls out to.

Crispin

Dana Epp wrote:
> I think I would word that differently. The design defect was when
> Microsoft decided to allow meta data to call GDI functions.
>  
> Around 1990 when this was introduced the threat profile was entirely
> different; the operating system could trust the metadata. Well,
> actually I would argue that they couldn't, but no one knew any better
> yet. At the time SetAbortProc() was an important function to allow for
> print cancellation in the co-operative multitasking environment that
> was Windows 3.0.
>  
> To be clear, IE was NOT DIRECTLY vulnerable to the WMF attack vector
> everyone likes to use as a test case for this discussion. IE actually
> refuses to process any type of metadata that supported META_ESCAPE
> records (which SetAbortProc relies on). Hence why its not possible to
> exploit the vulnerability by simply calling a WMF image via HTML. So
> how is IE vulnerable then? It's not actually. The attack vector uses
> IE as a conduit to actually call out to secondary library code that
> will process it. In the case of the exploits that hit the Net,
> attackers used an IFRAME hack to call out to the shell to process it.
> The shell would look up the handler for WMF, which was the Windows
> Picture Viewer that did the processing in shimgvw.dll. When the dll
> processed the WMF, it would convert it to a printable EMF format, and
> bam... we ran into problems.
>  
> With the design defect being the fact metadata can call arbitrary GDI
> code, the implementation flaw is the fact applications like IE rely so
> heavily on calling out to secondary libraries that just can't be
> trusted. Even if IE has had a strong code review, it is extremely
> probable that most of the secondary library code has not had the same
> audit scrutiny. This is a weakness to all applications, not just IE.
> When you call out to untrusted code that you don't control, you put
> the application at risk. No different than any other operating system.
> Only problem is Windows is riddled with these potential holes because
> its sharing so much of the same codebase. And in the past the teams
> rarely talk to each other to figure this out.
>  
> Code reuse is one thing, but some of the components in Windows are
> carry over from 15 years ago, and will continue to put us at risk due
> to the implementation flaws that haven't yet been found. But with such
> a huge master sources to begin with, its not something that will be
> fixed over night.
>  
> ---
> Regards,
> Dana Epp [Microsoft Security MVP]
> Blog: http://silverstr.ufies.org/blog/
>
> 
> *From:* [EMAIL PROTECTED] on behalf of Crispin Cowan
> *Sent:* Fri 2/3/2006 12:12 PM
> *To:* Gary McGraw
> *Cc:* Kenneth R. van Wyk; Secure Coding Mailing List
> *Subject:* Re: [SC-L] Bugs and flaws
>
> Gary McGraw wrote:
> > To cycle this all back around to the original posting, lets talk about
> > the WMF flaw in particular.  Do we believe that the best way for
> > Microsoft to find similar design problems is to do code review?  Or
> > should they use a higher level approach?
> >
> > Were they correct in saying (officially) that flaws such as WMF are hard
> > to anticipate?
> >  
> I have heard some very insightful security researchers from Microsoft
> pushing an abstract notion of "attack surface", which is the amount of
> code/data/API/whatever that is exposed to the attacker. To design for
> security, among other things, reduce your attack surface.
>
> The WMF design defect seems to be that IE has too large of an attack
> surface. There are way too many ways for unauthenticated remote web
> servers to induce the client to run way too much code with parameters
> provided by the attacker. The implementation flaw is that the WMF API in
> particular is vulnerable to malicious content.
>
> None of which strikes me as surprising, but maybe that's just me :)
>
> Crispin
> --
> Crispin Cowan, Ph.D. 
> http://crispincowan.com/~crispin/
> Director of Software Engineering, Novell  http://novell.com
> Olympic Games: The Bi-Annual Festival of Corruption
>
>
> ___
> Secure Co

Re: [SC-L] Bugs and flaws

2006-02-03 Thread Crispin Cowan
Gary McGraw wrote:
> To cycle this all back around to the original posting, lets talk about
> the WMF flaw in particular.  Do we believe that the best way for
> Microsoft to find similar design problems is to do code review?  Or
> should they use a higher level approach?
>
> Were they correct in saying (officially) that flaws such as WMF are hard
> to anticipate? 
>   
I have heard some very insightful security researchers from Microsoft
pushing an abstract notion of "attack surface", which is the amount of
code/data/API/whatever that is exposed to the attacker. To design for
security, among other things, reduce your attack surface.

The WMF design defect seems to be that IE has too large of an attack
surface. There are way too many ways for unauthenticated remote web
servers to induce the client to run way too much code with parameters
provided by the attacker. The implementation flaw is that the WMF API in
particular is vulnerable to malicious content.

None of which strikes me as surprising, but maybe that's just me :)

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Bugs and flaws

2006-02-02 Thread Crispin Cowan
John Steven wrote:
> Re-reading my post, I realize that it came off as heavy support for
> additional terminology. Truth is, we've found that the easiest way to
> communicate this concept to our Consultants and Clients here at Cigital has
> been to build the two buckets (flaws and bugs).
>   
My main problem with this terminology is that I have only ever seen it
coming from Cigital people. The rest of the world seems to treat "flaw"
and "bug" as synonyms.

The distinction here is between "design flaw" and "implementation flaw".
There doesn't seem to be anything in these words that suggest one is
larger scale than the other.

From dictionary.com we have:

flaw (flô) n.

   1. An imperfection, often concealed, that impairs soundness: "a flaw
      in the crystal that caused it to shatter." See synonyms at blemish.
   2. A defect or shortcoming in something intangible: "They share the
      character flaw of arrogance."
   3. A defect in a legal document that can render it invalid.

"Bug" is a little more arcane, and the only relevant part is far down
the document where it discusses the history with Grace Hopper:

bug

    An unwanted and unintended property of a program or piece of
    hardware, esp. one that causes it to malfunction. Antonym of
    "feature". Examples: "There's a bug in the editor: it writes things
    out backwards." "The system crashed because of a hardware bug."
    "Fred is a winner, but he has a few bugs" (i.e., Fred is a good guy,
    but he has a few personality problems).

    Historical note: Admiral Grace Hopper (an early computing pioneer
    better known for inventing COBOL) liked to tell a story in which a
    technician solved a glitch in the Harvard Mark II machine by pulling
    an actual insect out from between the contacts of one of its relays,
    and she subsequently promulgated "bug" in its hackish sense as a
    joke about the incident (though, as she was careful to admit, she
    was not there when it happened). For many years the logbook
    associated with the incident and the actual bug in question (a moth)
    sat in a display case at the Naval Surface Warfare Center (NSWC).
    The entire story, with a picture of the logbook and the moth taped
    into it, is recorded in the Annals of the History of Computing,
    Vol. 3, No. 3 (July 1981), pp. 285-286.


> What I was really trying to present was that Security people could stand to
> be a bit more thorough about how they synthesize the results of their
> analysis before they communicate the vulnerabilities they've found, and what
> mitigating strategies they suggest.
>   
Definitely. I think there is a deep cultural problem that people who fix
bugs or flaws tend to over-focus on the micro issue, fixing the specific
coding vulnerability, and ignore the larger architectural error that
allows the coding defect to be exploitable and cause damage. In the case
at hand, the WMF bug would be much less dangerous if there were not so
many ways to induce IE to invoke WMF decoding without asking the user.

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Bugs and flaws

2006-02-01 Thread Crispin Cowan
John Steven wrote:
> I'm not sure there's any value in discussing this minutia further, but here
> goes:
>   
We'll let the moderator decide that :)

> 1) Crispin, I think you've nailed one thing. The continuum from:
>
> Architecture --> Design --> Low-level Design --> (to) Implementation
>
> is a blurry one, and certainly slippery as you move from 'left' to 'right'.
>   
Cool.

> But, we all should understand that there's commensurate blur in our analysis
> techniques (aka architecture and code review) to assure that as we sweep
> over software we uncover both bugs and architectural flaws.
>   
Also agreed.

> 2) Flaws are different in important ways from bugs when it comes to presentation,
> prioritization, and mitigation. Let's explore by physical analog first.
>   
I disagree with the word usage. To me, "bug" and "flaw" are exactly
synonyms. The distinction being drawn here is between "implementation
flaws" vs. "design flaws". You are just creating confusing jargon to
claim that "flaw" is somehow more abstract than "bug". Flaw ::= defect
::= bug. A vulnerability is a special subset of flaws/defects/bugs that
has the property of being exploitable.

> I nearly fell through one of my consultant's tables as I leaned on it this
> morning. We explored: "Bug or flaw?".
>   
The wording issue aside, at the implementation level you try to
code/implement to prevent flaws, by doing things such as using higher
quality steel (for bolts) and good coding practices (for software). At
the design level, you try to design so as to *mask* flaws by avoiding
single points of failure, doing things such as using 2 bolts (for
tables) and using access controls to limit privilege escalation (for
software).

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Bugs and flaws

2006-02-01 Thread Crispin Cowan
Gary McGraw wrote:
> If the WMF vulnerability teaches us anything, it teaches us that we need
> to pay more attention to flaws.
The "flaw" in question seems to be "validate inputs", i.e. don't just
trust network input (esp. from an untrusted source) to be well-formed.

Of special importance to the Windows family of platforms seems to be the
propensity to do security controls based on the file type extension (the
letters after the dot in the file name, such as .wmf) but to choose the
application to interpret the data based on some magic file typing based
on looking at the content.

My favorite ancient form of this flaw: .rtf files are much safer than
.doc files, because the RTF standard does not allow you to attach
VBscript (where "VB" stands for "Virus Broadcast" :) while .doc files
do. Unfortunately, this safety feature is nearly useless, because if you
take an infected whatever.doc file, and just *rename* it to whatever.rtf
and send it, then MS Word will cheerfully open the file for you when you
double click on the attachment, ignore the mismatch between the file
extension and the actual file type, and run the fscking VB embedded within.

I am less familiar with the WMF flaw, but it smells like the same thing.

Validate your inputs.

There are automatic tools (taint and equivalent) that will check whether
you have validated your inputs. But they do *not* check the *quality* of
your validation of the input. Doing a consistency check on the file name
extension and the data interpreter type for the file is beyond (most?)
such checkers.
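
By way of illustration, such a consistency check is not hard to write
once you decide to do it. A hedged sketch (the function and magic-byte
table are mine, not any real Windows API):

    #include <stdio.h>
    #include <string.h>

    /* Return nonzero iff the file's magic bytes agree with its name. */
    static int extension_matches_content(const char *name, FILE *f)
    {
        unsigned char magic[8] = {0};
        size_t n = fread(magic, 1, sizeof magic, f);
        const char *dot = strrchr(name, '.');

        if (dot == NULL)
            return 0;
        if (strcmp(dot, ".rtf") == 0)       /* RTF begins "{\rtf"   */
            return n >= 5 && memcmp(magic, "{\\rtf", 5) == 0;
        if (strcmp(dot, ".doc") == 0)       /* OLE2 container magic */
            return n >= 8 && memcmp(magic,
                "\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1", 8) == 0;
        return 0;                           /* unknown type: refuse */
    }

An interpreter that refuses on mismatch would have stopped the renamed
whatever.rtf trick cold.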

>   We spend lots of time talking about
> bugs in software security (witness the perpetual flogging of the buffer
> overflow), but architectural problems are just as important and deserve
> just as much airplay.
>   
IMHO the difference between "bugs" and "architecture" is just a
continuous grey scale of degree.

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] eWeek: AJAX Poses Security, Performance Risks

2006-02-01 Thread Crispin Cowan
ljknews wrote:
> I have been involved in a dialog with AJAX fans (which is different from
> experts) who say "you security folks just have to bow to the inevitable
> and figure out how to secure whatever mechanism we come up with."
>   
This attitude is not unique to AJAX advocates. I remember holding this
view myself, while wrestling with the problems of producing a truly
transparent distributed operating system in the late 1980s and early
1990s; security was a bother that made things hard(er).

Of course, this is just lifetime employment for security people :) I
have certainly made a career out of securing things that are inherently
insecure.

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] eWeek says "Apple's Switch to Intel Could Allow OS X Exploits"

2006-01-27 Thread Crispin Cowan
Kenneth R. van Wyk wrote:
> Interesting article, I suppose, but I'm not convinced of its conclusion:
>
> http://www.eweek.com/article2/0,1895,1915923,00.asp
>
> The article claims that Apple's use of Intel chips will result in more 
> software exploits because, "'Attackers have been focused on the [Intel] x86 
> for over a decade. Macintosh will have a lot more exposure than when it was 
> on PowerPC,' said Oliver Friedrichs, a senior manager at Symantec Corp. 
> Security Response."
> ...
> Am I missing something here?
>   
Security by obscurity. It is lame, but for fending off bulk infections,
it works well. I agree with the article that Macs will get more exposure
and attack now.

However, Mac OS X (and Linux and *BSD) still hold the major advantage
over Windows that it is uncommon to run the mail client as
root/administrator, so the infection rate will remain much lower than on
Windows. Only when attackers have an actual exploit for the Mac/*NIX can
they 0wn the machine. On Windows, they just need a good line and a user
dumb enough to click on the attachment.

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Intel turning to hardware for rootkit detection

2005-12-14 Thread Crispin Cowan
Smashguard, if I recall correctly, offers approximately the protection
of existing compiler methods, but with the added fun of requiring
modified (non-existent) hardware.

The referenced hardware in the IEEE article and the intel.com pages
appears to be some descendant of Palladium; it is a hardware integrity
checker/attestation mechanism. A small, hardware-enforced core performs
a chain of crypto-checks prior to boot strapping the BIOS, and then the
OS, and makes itself available to applications. Thus an application can
(more or less) "prove" to a remote machine that the BIOS, kernel, and
application are in fact the "approved" versions that the remote machine
wants to see. The closest published work would be Bill Arbaugh's
dissertation and associated papers.
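
A rough sketch of the idea (hypothetical code: hash() stands in for a
real cryptographic hash, and real hardware extends a register in the
security chip rather than a C variable): each stage measures the next
one before handing off control, so the final accumulator value attests
the entire boot path.

    #include <string.h>

    typedef unsigned char digest_t[20];

    extern void hash(const void *data, unsigned long len, digest_t out);

    /* acc = H(acc || H(image)); called once per boot stage. */
    void extend(digest_t acc, const void *image, unsigned long len)
    {
        unsigned char buf[sizeof(digest_t) * 2];
        digest_t measured;

        hash(image, len, measured);            /* measure next stage  */
        memcpy(buf, acc, sizeof(digest_t));
        memcpy(buf + sizeof(digest_t), measured, sizeof(digest_t));
        hash(buf, sizeof buf, acc);            /* fold into the chain */
    }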

Real security benefit: remote machine can detect that your box has not
been rootkit'd.

Hoarding "benefit": remote machine can detect that you are running the
approved DRM-enforcing media player so that (for instance) it can
enforce that you only get to play that movie the specified number of
times and you don't get to copy it.

Malignant effect: the document master at an organization can make all
documents transient, so that whistle-blowers can no longer access the
documents they need to blow the whistle on the likes of, say, Enron,
WorldCom, or Abu Grab-ass.

Be very, very careful about tolerating strong-attestation hardware. The
implications are profound, for both good and evil.

Crispin

mudge wrote:
>
> There was a lady who went to Purdue, I believe her name was Carla
> Brodley. She is a professor at Tufts currently. One of her projects,
> I'm not sure whether it is ongoing or historic, was surrounding
> hardware based stack protection. There wasn't any protection against
> heap / pointer overflows and I don't know how it fares with stack
> trampoline activities (which can be valid, but are rare outside of
> older objective-c code).
>
> www.smashguard.org and https://engineering.purdue.edu/
> ResearchGroups/SmashGuard/smash.html have more data.
>
> I'm not sure if this is a similar solution to what Intel might be
> pursuing. I believe the original "smashguard" work was based entirely
> on Alpha chips.
>
> cheers,
>
> .mudge
>
>
> On Dec 13, 2005, at 15:19, Michael S Hines wrote:
>
>> Doesn't a hardware 'feature' such as this lock software into a
>> two-state model
>> (user/priv)?
>>
>> Who's to say that model is the best?  Will that be the model of the
>> future? 
>>
>> Wouldn't a two-state software model that works be more effective?  
>>
>> It's easier to change (patch) software than to rewire hardware
>> (figuratively speaking).
>>
>> Just wondering...
>>
>> Mike Hines
>> ---
>> Michael S Hines
>> [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]> 
>>
>> ___
>> Secure Coding mailing list (SC-L)
>> SC-L@securecoding.org <mailto:SC-L@securecoding.org>
>> List information, subscriptions, etc -
>> http://krvw.com/mailman/listinfo/sc-l
>> List charter available at - http://www.securecoding.org/list/charter.php
>
> --------
>
> ___
> Secure Coding mailing list (SC-L)
> SC-L@securecoding.org
> List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
> List charter available at - http://www.securecoding.org/list/charter.php
>   

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Why Software Will Continue to Be Vulnerable

2005-05-03 Thread Crispin Cowan
ljknews wrote:
>At 8:05 AM -0400 5/2/05, Kenneth R. van Wyk wrote:
>  
>>Yet, despite that pessimistic outlook -- and the survey that
>>forked this thread -- I do think that companies are demanding
>>more in software security, even though consumers are not.
>>
>Companies value time spent on cleanup more than consumers do.
>  

And in this morning's mailbox, we see some evidence to support the claim
that business is considerably less impressed with software quality
http://www.informationweek.com/story/showArticle.jhtml;jsessionid=IMYCZLJPHKPNMQSNDBCSKH0CJUMEKJVN?articleID=161601417

Crispin
-- 
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com




Re: [SC-L] Why Software Will Continue to Be Vulnerable

2005-05-01 Thread Crispin Cowan
Greenarrow 1 wrote:
>But, the problem I see with this survey is they only polled 1,000 out of 
>what over 5 million users in the USofA.
Political pollsters regularly sample 1000 Americans to get a prediction
of 100,000 voters that is accurate to 5% or so. 1000 people should be
sufficient to sample software users, unless there is something else
wrong with the sample or the questions.
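
For the record, the usual approximation for the margin of error of a
simple random sample of size n (at 95% confidence, and independent of
the population size) is

    \text{MoE} \approx \frac{1}{\sqrt{n}} = \frac{1}{\sqrt{1000}} \approx 3.2\%

which is consistent with the "5% or so" figure above.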

>  Just randomly suppose they 
>accidentally picked everyone that
>has superb software and hardware on their systems (unlikely but probable). 
>  
Just what does "unlikely but probable" mean?

To "suppose" this, we have to think there is something wrong with the
sample or the questions. What is it you think is wrong with the sample
or the questions? Or is it just that you find the result to be improbable?

>On repairing systems for my customers I say 1 of 20 are only satisfied 
>with their programs so who is right Harris Poll or my customers?
Now *there* is a skewed sample: the set of people currently experiencing
a problem so severe that they have to call in a professional to repair
it. Under just about any circumstance, I would expect this group to be
highly unsatisfied with vendors. It's like taking a survey of auto
quality in the waiting room of a garage.

What really mystifies me is the analogy to fire insurance. *Everyone*
keeps their fire insurance up to date, it costs money, and it protects
against a very rare event that most fire insurance customers have never
experienced. What is it that makes consumers exercise prudent good sense
for fire insurance, but not in selecting software?

The only factor I can think of is that mortgage carriers insist that
their customers maintain fire insurance. No fire insurance, no loan, and
most people cannot afford to pay cash for their home. So to impose a
"prudence" requirement on software consumers, perhaps some outside force
has to impose a "pay to play" requirement on them. Who could that be?

ISPs, perhaps? Similar to mortgage companies, ISPs pay a lot of the cost
of consumer software insecurity: vulnerable software leads to virus
epidemics, and to botnets of spam relays. Perhaps if ISPs recognized the
cost of consumer insecurity on their operations, they might start
imposing minimum standards on consumer connections, and cutting them off
if they fall below that standard. Larry Seltzer has advocated a form of
this, that ISPs should block port 25 for consumer broadband in most
cases (http://www.eweek.com/article2/0,1759,1784276,00.asp). There are
several other actions that ISPs could take:

* egress filtering on all outbound connections to block source IP
  spoofing
* deploy NIPS on outbound traffic and disconnect customers who are
  emitting attacks
* require customers to have some kind of personal firewall or host
  intrusion prevention

The catch: the above moves are all costly and, to some degree,
anti-competitive, in that they make the consumer's Internet connection
less convenient. So to be successful, ISPs would have to position these
moves as a "security enhancement" for the consumer, which AOL is doing
with bundled antivirus service as advertised on TV. ISPs could also
position a non-restricted account as an "expert" account and charge
extra for it.

Crispin
-- 
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com




[SC-L] Why Software Will Continue to Be Vulnerable

2005-04-30 Thread Crispin Cowan
Here's a depressing survey 
http://www.internetweek.com/breakingNews/showArticle.jhtml?articleID=161601958

It reports a survey that asked adults whether various industries were
doing "a generally good job or a bad job of serving their customers." To
come up with a final score in the annual survey, Harris subtracted the
negative responses from the positive responses.

The sad result: software companies, as an industry, placed 4th in the 
top 10 of this survey. That means that consumers are generally pretty 
happy with the software they are buying.

This makes it highly unlikely that software companies are about to start 
dumping large quantities of $$ into improving software quality.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com


Re: [SC-L] Theoretical question about vulnerabilities

2005-04-15 Thread Crispin Cowan
David Crocker wrote:
Well, that approach is certainly better than not guarding against buffer
overflows at all. However, I maintain it is grossly inferior to the approach we
use, which is to prove that all array accesses are within bounds.
Proving that all array accesses are within bounds would seem to be 
Turing undecidable. Either you are not proving what you say you are 
proving, or your programs are not full Turing machines.

Proof: the usual diagonalization. Construct a program that runs the
checker on itself and makes an out-of-bounds access exactly when the
checker pronounces it safe; whatever the checker answers, it is either
wrong or it never terminates.
Issue: Some people may regard diagonalized programs as a contrivance, 
and are only interested in correctness proofs for real programs (for 
some value of "real").

Crispin's rebuttal: Suppose I want to prove that your program checker 
does not have any illegal array references ...

What exactly
is your program going to do when it detects an array bound violation at
run-time?
Hermes' kludge to address this was two-fold:
  1. There are no arrays. Rather, there are relational tables, and you
 can extract a row based on a field value. You can programmatically
 get a table to act much like an array by having a field with a
 unique index number, 1, 2, 3, etc.
  2. If you try to extract a row from a table that does not have a
 matching value, then you get an exception. Exceptions are thrown
 and caught up the call chain the way most modern (Java etc.)
 languages do it.
Yes, this is a kludge because it ultimately means a run-time exception, 
which is just a pretty way of handling a seg fault.
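
For concreteness, here is a rough C analogy of that pattern (my names,
not Hermes syntax): rows are extracted from a "table" by matching a
unique index field, and a failed extraction is an explicit error path
standing in for the thrown exception, rather than a silent
out-of-bounds read.

#include <stdio.h>
#include <stdlib.h>

struct row { int index; double value; };

/* Extract the row whose index field matches key; the NULL return
   plays the role of the Hermes exception. */
static const struct row *extract(const struct row *t, size_t n, int key)
{
    for (size_t i = 0; i < n; i++)
        if (t[i].index == key)
            return &t[i];
    return NULL;
}

int main(void)
{
    struct row table[] = { {1, 3.14}, {2, 2.72}, {3, 1.62} };
    const struct row *r = extract(table, 3, 7);
    if (r == NULL) {                /* "catch" the failed lookup */
        fprintf(stderr, "no row with index 7\n");
        return EXIT_FAILURE;
    }
    printf("%d -> %f\n", r->index, r->value);
    return 0;
}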

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Theoretical question about vulnerabilities

2005-04-13 Thread Crispin Cowan
David Crocker wrote:
Exactly. I'm not interested in trying to write a program prover that will prove
that an arbitrary program is correct, if indeed it is. I am only interested in
proving that well-structured programs are correct.
The Hermes programming language took this approach
http://www.research.ibm.com/people/d/dfb/hermes.html
Hermes proved a safety property called Type State Checking in the course
of compiling programs. Type State offers very nice safety properties for
correctness, including proving that no variable will be used before it
is initialized. But the Hermes Type State Checker was not formally
complete; there were valid programs that the checker could not *prove*
were correct, and so it would reject them. Here's an example of a case
it cannot prove:
if X then
   Y <- initial value
endif
...
if X then
   Z <- Y + 1
endif
The above code is "correct" in that Y's value is taken only when it has
been initialized. But to prove the code correct, an analyzer would have
to be "flow sensitive", which is hard to do.
Here's where it gets interesting. The authors of Type State went and
analyzed a big pile of existing code that was in production but that the
Type State checker failed to prove correct. In (nearly?) every case,
they found a *latent bug* associated with the code that failed to pass
the Checker. We can infer from that result that code that depends on
flow sensitivity for its correctness is hard for humans to reason about,
and therefore likely to be wrong.
Disclaimer: I worked on Hermes as an intern at the IBM Watson lab waay
back in 1991 and 1992. Hermes is my favorite type safe programming
language, but given the dearth of implementations, applications, and
programmers, that is of little practical interest :)
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Theoretical question about vulnerabilities

2005-04-13 Thread Crispin Cowan
der Mouse wrote:
[B]uffer overflows can always be avoided, because if there is ANY
input whatsoever that can produce a buffer overflow, the proofs will
fail and the problem will be identified.

Then either (a) there exist programs which never access out-of-bounds
but which the checker incorrectly flags as doing so, or (b) there exist
programs for which the checker never terminates (quite possibly both).
(This is simply the Halting Theorem rephrased.)

Precisely because statically proven array bounds checking is Turing
Hard, that is not how such languages work.
Rather, languages that guarantee array bounds insert dynamic checks on
every array reference, and then use static checking to remove all of the
dynamic checks that can be proven to be unnecessary. For instance, it is
often the case that a tight inner loop has hard-coded static bounds, and
so a static checker can prove that the dynamic checks can be removed
from the inner loop, hoisting them to the outer loop and saving a large
proportion of the execution cost of dynamic array checks.
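
Writing out by hand what such a compiler does internally makes the
saving obvious. A sketch, for the common case where the loop bound is
known before the loop runs:

#include <stdio.h>
#include <stdlib.h>

static void bounds_fault(void)
{
    fprintf(stderr, "array bounds violation\n");
    abort();
}

/* Naive translation: one dynamic check per access. */
double sum_checked(const double *a, size_t len, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i >= len)          /* executed n times */
            bounds_fault();
        s += a[i];
    }
    return s;
}

/* After hoisting: one check proves the whole loop safe. */
double sum_hoisted(const double *a, size_t len, size_t n)
{
    if (n > len)               /* executed once */
        bounds_fault();
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];             /* check elided: i < n <= len */
    return s;
}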
How much of this optimization can be done is arguable:
   * The Jones&Kelly GCC enhancement that does full array bounds
 checking makes (nearly?) no attempt at this optimization, and
 suffers slowdowns of 10X to 30X on real applications.
   * The Bounded Pointers GCC enhancement that does full array bounds
 checking but with a funky incompatible implementation that makes
 pointers bigger than a machine word, does some of these
 optimizations and suffers a slowdown of 3X to 5X. Some have argued
 that it can be improved from there, but "how much" remains to be seen.
   * Java compilers get the advantage of a language that was actually
 designed for type safety, in contrast with C that aggressively
 makes static type checking difficult. The last data I remember on
 Java is that turning array bounds checking on and off makes a 30%
 difference in performance.
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Theoretical question about vulnerabilities

2005-04-12 Thread Crispin Cowan
Nash wrote:
** It would be extremely interesting to know how many exploits could
be expected after a reasonable period of execution time. It seems that
as execution time went up we'd be less likely to have an exploit just
"show up". My intuition could be completely wrong, though.

I would think that "time" is pretty much irrelevant, because it depends
on the intelligence used to order the inputs you try. For instance,
time-to-exploit will be very long if you feed inputs to (say) Microsoft
IIS starting with one byte of input and going up in ASCII order.
Time-to-exploit gets much shorter if you use a "fuzzer" program: an
input generator that can be configured with the known semantic inputs of
the victim program, and that focuses specifically on trying to find
buffer overflows and printf format string errors by generating long
strings and using strings containing %n.
Even among fuzzers, time-to-exploit depends on how intelligent the
fuzzer is in terms of aiming at the victim program's data structures.
There are many specialized fuzzers aimed at various kinds of
applications, aimed at network stacks, aimed at IDS software, etc.
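
A toy illustration of those two input families: oversized strings to
probe fixed-size buffers, and %n-laden strings to probe printf format
string handling. A real fuzzer would aim these at the victim program's
actual protocol fields; this sketch just emits them on stdout:

#include <stdio.h>
#include <string.h>

int main(void)
{
    static char buf[4097];

    /* Family 1: ever-longer strings of 'A's. */
    for (size_t len = 64; len <= 4096; len *= 2) {
        memset(buf, 'A', len);
        buf[len] = '\0';
        printf("%s\n", buf);
    }

    /* Family 2: format-string probes. */
    const char *probes[] = { "%n%n%n%n", "%s%s%s%s", "%x.%x.%x.%x" };
    for (size_t i = 0; i < sizeof probes / sizeof *probes; i++)
        printf("%s\n", probes[i]);

    return 0;
}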
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Theoretical question about vulnerabilities

2005-04-12 Thread Crispin Cowan
David Crocker wrote:
3. Cross-site scripting. This is a particular form of "HTML injection" and would
be caught by the proof process in a similar way to SQL injection, provided that
the specification included a notion of the generated HTML being well-formed. If
that was missing from the specification, then HTML injection would not be
caught.

XSS occurs where client A can feed input to Server B such that client C
will accept and trust the input. The "correct" specification is that
Server B should do a perfect job of preventing clients from uploading
content that is damaging to other clients. I submit that this is
infeasible without perfect knowledge of the vulnerabilities of all the
possible clients. This seems to be begging the definition of "prove
correct" pretty hard.
You can do a pretty good job of preventing XSS by stripping user posts
of all "interesting" features and permitting only "basic" HTML. But this
still does not completely eliminate XSS, as you cannot a priori know
about all the possible buffer overflows & etc. of every client that will
come to visit, and "basic" HTML still allows for some freaky stuff, e.g.
very long labels.
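
The bluntest form of that defense is to escape rather than strip, so
user text is rendered as text and never as markup. A minimal sketch
(this does not implement an "allow basic HTML" policy, which is the
considerably harder part):

#include <stdio.h>

/* Escape the HTML metacharacters in user-supplied text. */
void html_escape(const char *s, FILE *out)
{
    for (; *s; s++) {
        switch (*s) {
        case '<':  fputs("&lt;", out);   break;
        case '>':  fputs("&gt;", out);   break;
        case '&':  fputs("&amp;", out);  break;
        case '"':  fputs("&quot;", out); break;
        case '\'': fputs("&#39;", out);  break;
        default:   fputc(*s, out);       break;
        }
    }
}

int main(void)
{
    html_escape("<script>alert('xss')</script>", stdout);
    fputc('\n', stdout);
    return 0;
}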
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com


Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-11 Thread Crispin Cowan
I strongly disagree with this.
Rigorous professional standards for mechanical and structural 
engineering came about only *after* a well-defined "cookbook" of how to 
properly engineer things was agreed upon. Only after such standards are 
established and *proven effective* is there any utility in enforcing the 
standards upon the practitioners.

Software is *not* yet at that stage. There is no well-established 
cookbook for reliably producing reliable software (both of those "reliably"s 
mean something :)  There are *kludges* like the SEI model, but they are 
not reliable. People can faithfully follow the SEI model and still 
produce crap. Other people can wholesale violate the SEI model and 
produce highly reliable software.

It is *grossly* premature to start imposing standards on software 
engineers. We have not a clue what those standards should be.

Crispin
Edward Rohwer wrote:
> In my humble opinion, the bridge example gets to the heart of the
>matter. In the bridge example the bridge would have been designed and
>engineered by licensed professionals, while we in the software business
>sometimes call ourselves "engineers" but fall far short of the real,
>professional, licensed engineers other professions depend upon.  Until 
we as
>a profession are willing to put up with that sort of rigorous examination
>and certification process, we will always fall short in many areas and of
>many expectations.
>
>Ed. Rohwer CISSP
>
>-Original Message-
>From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
>Behalf Of [EMAIL PROTECTED]
>Sent: Friday, April 08, 2005 10:54 PM
>To: Margus Freudenthal
>Cc: Secure Coding Mailing List
>Subject: [SC-L] Re: Application Insecurity --- Who is at Fault?
>

Margus Freudenthal wrote:
>>Consider the bridge example brought up earlier. If your bridge builder
>>finished the job but said: "ohh, the bridge isn't secure though. If
>>someone tries to push it at a certain angle, it will fall".
>Ultimately it is a matter of economics. Sometimes releasing something
earlier
>is worth more than the cost of later patches. And managers/customers are
aware
>of it.
Unlike in the world of commercial software, I'm pretty sure you don't
see a whole lot of construction contracts which absolve the architect of
liability for design flaws.  I think that is at the root of our
problems.  We know how to write secure software; there's simply precious
little economic incentive to do so.
--
David Talkington
[EMAIL PROTECTED]

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Theoretical question about vulnerabilities

2005-04-10 Thread Crispin Cowan
Pascal Meunier wrote:
Do you think it is possible to enumerate all the ways all vulnerabilities
can be created?  Is the set of all possible exploitable programming mistakes
bounded?
 

Yes and no.
Yes, if your enumeration is "1" and that is the set of all errors that 
allow an attacker to induce unexpected behavior from the program.

No is the serious answer: I do not believe it is possible to enumerate 
all of the ways to make a programming error. Sure, it is possible to 
enumerate all of the *commonly observed* errors that cause wide-spread 
problems. But enumerating all possible errors is impossible, because you 
cannot enumerate all programs.

I would think that what makes it possible to talk about design patterns and
attack patterns is that they reflect intentional actions towards "desirable"
(for the perpetrator) goals, and the set of desirable goals is bounded at
any given time (assuming infinite time then perhaps it is not bounded).
 

Nope, sorry, I disbelieve that the set of attacker goals is bounded.
However, once commonly repeated mistakes have been described and taken into
account, I have a feeling that attempting to enumerate all other possible
mistakes (leading to exploitable vulnerabilities), for example with the goal
of producing a complete taxonomy, classification and theory of
vulnerabilities, is not possible.
I agree that it is not possible. Consider that some time in the next 
decade, a new form of programming or technology will appear. It will 
introduce a new kind of pathology. We know that this will happen because 
it has already happened: Web forums that allow end-user content to be 
posted resulted in the phenomenon of Cross Site Scripting.

 All we can hope is to come reasonably
close and produce something useful, but not theoretically strong and closed.
 

Security is very simple. Only use perfect software :) For those who can 
afford it, perfect software is great. The rest of us will be fighting 
with insecure software forever.

This should have consequences for source code vulnerability analysis
software.  It should make it impossible to write software that detects all
of the mistakes themselves.
The impossibility of a perfect source code vulnerability detector is a 
corollary of Alan Turing's original Halting Problem.

 Is it enough to look for violations of some
invariants (rules) without knowing how they happened?
 

Looking for run-time invariant violations is a basic way of getting 
around static undecidability induced by Turing's theorem. Switching from 
static to dynamic analysis comes with a host of strengths and 
weaknesses. For instance, StackGuard (which is really just a run-time 
enforcement of a single invariant) can detect buffer overflow 
vulnerabilities that static analyzers cannot detect. However, StackGuard 
cannot detect such vulnerabilities until some attacker helpfully comes 
along and tries to exploit such a vulnerability.
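
For readers who have not seen it: the invariant is a "canary" word
placed between the local buffers and the saved control data, checked
before the function returns. StackGuard does this in the compiler, in
every function prologue and epilogue, with a canary an attacker cannot
predict. The hand-written sketch below is conceptual only; actual
stack layout is compiler-dependent, and the fixed canary value is just
for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CANARY 0xdeadbeefU      /* real canaries are randomized */

void copy_input(const char *input)
{
    unsigned canary = CANARY;   /* sits between buf and the saved
                                   control data (conceptually)   */
    char buf[16];

    strcpy(buf, input);         /* the unchecked copy being guarded */

    if (canary != CANARY) {     /* invariant violated: the copy ran
                                   past the end of buf             */
        fprintf(stderr, "stack smashing detected\n");
        abort();
    }
}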

Any thoughts on this?  Any references to relevant theories of failures and
errors, or to explorations of this or similar ideas, would be welcome.  Of
course, Albert Einstein's quote on the difference between genius and
stupidity comes to mind :).
 

"Reliable software does what it is supposed to do. Secure software does 
what it is supposed to do and nothing else." -- Ivan Arce

"Security is very simple. Only use perfect software :) For those who can 
afford it, perfect software is great. The rest of us will be fighting 
with insecure software forever." -- me :)

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Application Insecurity --- Who is at Fault?

2005-04-08 Thread Crispin Cowan
Julie JCH Ryan, D.Sc. wrote:
Other students chimed in on the argument positing that the programming 
challenge was an inaccurate measure of student programming capability 
because the contestant was not allowed to do research on the internet 
during the challenge.  Another said the problem was that the challenge 
was too long and required contestants to have memorized too much.
Formal contests are always inaccurate abstractions of the real world. As 
you raise the value of the contest, this inevitably pressures 
contestants to "game the system" and target the artificial artifacts of 
the game rules instead of the real world. Whether this has happened to 
the ACM Programming contest is a subjective opinion. IMHO, a closed-book 
contest is no longer very relevant to the real world, where Google is 
always just seconds away.

This is particularly interesting to me because I just had a doctoral 
student come to me with an idea for dissertation research that 
included an hypothesis that organizations at SEI 1 were better able to 
estimate software development time and costs than organizations at SEI 
5.  He didn't seem to grasp the implications to quality, security, 
life cycle maintenance, etc.
Or it could be that the student is positing that the methods mandated in 
the SEI are a grand waste of time, which would be an interesting 
hypothesis to test. Certainly the successes of open source development 
models make a mockery of some of the previously thought hard rules of 
Brooks' "Mythical Man Month", and I dare say that traditional software 
engineering methods deserve questioning.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Mobile phone OS security changing?

2005-04-06 Thread Crispin Cowan
Kenneth R. van Wyk wrote:
Greetings,
I noticed an interesting "article" about a mobile phone virus affecting 
Symbian-based phones out on Slashdot today.  It's an interesting read:

http://it.slashdot.org/it/05/04/06/0049209.shtml?tid=220&tid=100&tid=193&tid=137
What particularly caught my attention was the sentence, "Will mobile OS 
companies, like desktop OS makers, have to start an automatic update system, 
or will the OS creators have to start making their software secure?"  Apart 
from the author implying that this is an "or" situation,

I think it is definitely an "or" situation: automatic updates are 
expensive to provision and fugly for the user. They are just a kludge 
used when, for some reason, the software cannot be made secure. 

That the desktop vendor (Microsoft) has not made their software secure 
is manifestly obvious. Whether the "can't" or "won't" is subject to 
rampant debate and speculation. The "can't" view says that legacy 
software and fundamentally broken architecture make securing it 
infeasible. The "won't" view says that it was not profitable for MS to 
spend the effort, and they are now changing.

That the alternate desktop vendors (all the UNIX and Linux vendors 
including Apple) have made secure desktops is also manifestly obvious 
(no viruses to speak of, and certainly no virus problem). Whether this 
is "luck" or "design" is subect to rampant debate and speculation. The 
"luck" view says that these minority desktops are not a big enough 
target to be interesting to the virus writers. The "design" view is that 
the virus problem is induced by: 1. running the mail client with 
root/administrator privilege, and 2. a mail client that eagerly trusts 
and executes attached code, and that until UNIX/Linux desktops have both 
of those properties in large numbers, there never will be a virus 
problem on UNIX/Linux desktops.

What the phone set people will do depends on which of the above factors 
you think apply to phone sets. Certainly the WinCE phones with Outlook 
are about to be virus-enabled. I don't know enough about Symbian to 
answer. The Linux hand sets could be designed either way; it would not 
surprise me to see phone set people architecting a phone so that the 
keyboard is root. It is not exactly intuitive to treat a hand set as a 
multi-user platform.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] certification for engineers/developers?

2005-03-24 Thread Crispin Cowan
j eric townsend wrote:
The main reason I'm looking at certification is defensive -- I've been in one 
too many meetings where someone's opinion was given more weight because of 
industry certification or advanced degree.
Yeah, I give certifications weight; *negative* weight. The more 
"certifications" someone advertises, the *less* clueful I assume that 
they are. All other factors being equal, that is; I certainly know 
people who have both certs and clue, but I find that is the exception, 
not the rule.

Advanced degrees are another matter:
   * For practical matters, advanced degrees are orthogonal to clue:
 whether the person advertises an advanced degree seems to be
 independent of their practical knowledge.
   * For theoretical matters, advanced degrees do seem to actually
 predict someone's level of clue. Ask someone to explain how
 Turing's Halting Problem implies a major corollary to computer
 security. Those with an advanced degree often get it, while those
 who are self-educated often reply with "who is Turing?" or "I dunno".
 o Conclusion: learning theory is no fun, so self-educated
   people naturally avoid it unless forced into it.
Crispin
P.S. I am totally serious about the certificates, they go to the 
*bottom* of my resume pile.

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] ZDNnet: Securing data from the threat within [by buying products]

2005-01-17 Thread Crispin Cowan
Kenneth R. van Wyk wrote:
On Monday 17 January 2005 14:55, Crispin Cowan wrote:
 

I participated in a workshop on insider attacks several years ago. We
identified 2 kinds of insider attacks:
   

(Was this Mike Skroh's (DARPA) workshop out at RAND?  If so, I also 
participated in this.  In fact, it's where I met you, Crispin.)

Yes, that was it.
So we agree that more secure systems such as RBAC and Immunix do help to
address the problem of insider attackers. What they don't do is address
the problem of authorized insiders abusing their authority. That is
where this new class of products comes in: they track the movement of
sensitive organizational data by /content/ rather than by access
control, and complain when content crosses a barrier that it should not.
   

Understood, and at least much of this new class of products is based on 
statistical analysis of event logs.  Certainly, products simplify that 
scenario, but it can also be done without add-on products.
 

Some are more than just statistics, and are using signatures on phrases 
& passages of text. Obviously that is easy to bypass (just encrypt it, 
or even trivial transformations) but as with a lot of defenses, the 
attackers are often not too bright, and so simple defenses often work.

There is also a new class of products that do access control and logging at 
the PC client level, so that things like USB stick access can be (nominally) 
controlled and logged, FWIW.  I'll bet that a determined, authorized 
adversary can find ways of circumventing, though...
 

Boot from removable media, and you are running a different OS, and all 
access controls are shot. To prevent that, you have to get control over 
the machine's boot sequence. If you disable booting from removable 
media, then you also cripple auto-updates of the OS.

Because the end-game of covert channel prevention always leads to an
anal cavity search :)
   

ACKand ick!
So, where's the Software Security lesson in all of this?  IMHO, it's to ensure 
adequate application-level event logging and data access control 
capabilities.
 

I think the main lesson of the underwear attack is Marcus Ranum's rule 
that you cannot use technology to fix social problems. If an insider 
really wants to export your data, they are going to succeed. So be nice 
to your staff; it's not just the moral thing to do, it is the smart 
thing to do.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] ZDNnet: Securing data from the threat within [by buying products]

2005-01-17 Thread Crispin Cowan
Kenneth R. van Wyk wrote:
Crispin Cowan wrote:
I completely disagree. I find the article to be timely and informative.
What Kenneth suggests (use of RBAC) will not solve the problem. First 
of all, RBAC is not practical to deploy in most situations; companies 
are still trying to cope with AV and firewalls, and just beginning to 
think about host and application security. RBAC is completely beyond 
them.

Well, my main objection to the article was its advocacy for addressing 
the insider threat problem simply by buying security products.  I 
brought up RBAC simply as one example that people may consider as they 
seek solutions.
Whether it be role-based, or a plain old-fashioned, group/ACL sort of 
access control, coupled with good event logging and monitoring, I 
think that most sites would be better served by exploring the access 
control mechanisms that they currently have instead of just buying 
more security products.  That's not to say that there aren't products 
that may be highly useful, but it is to say that the solutions should 
start with well designed and implemented access  control and logging.  
I stand by that opinion.
I participated in a workshop on insider attacks several years ago. We 
identified 2 kinds of insider attacks:

   * authorized users: insiders who have access to sensitive data, and
 abuse their authority by leaking it outside the organization
   * non-authorized users: insiders who don't have explicit
 authorization to access sensitive data, but who take advantage of
 their "insider" status to exploit organizational security
 weaknesses. Such weaknesses would include both weak access
 controls (which Ken's RBAC suggestion would address) and otherwise
 weak system and application security (which HIPS products like
 Immunix would address).
So we agree that more secure systems such as RBAC and Immunix do help to 
address the problem of insider attackers. What they don't do is address 
the problem of authorized insiders abusing their authority. That is 
where this new class of products comes in: they track the movement of 
sensitive organizational data by /content/ rather than by access 
control, and complain when content crosses a barrier that it should not.

But as I wrote before, such products, especially network-based products, 
will fail to detect an authorized user accessing data and then dumping 
it to CDR or USB memory stick and walking it out of the building in 
their underwear.

Because the end-game of covert channel prevention always leads to an 
anal cavity search :)

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] ZDNnet: Securing data from the threat within [by buying products]

2005-01-17 Thread Crispin Cowan
I completely disagree. I find the article to be timely and informative.
What Kenneth suggests (use of RBAC) will not solve the problem. First of 
all, RBAC is not practical to deploy in most situations; companies are 
still trying to cope with AV and firewalls, and just beginning to think 
about host and application security. RBAC is completely beyond them.

But even more important, RBAC will not actually address the problem that 
this article describes. The organizational secrets that are being leaked 
are being leaked by people who actually have access to the data, and 
thus RBAC would just grant them the access. An access control solution 
to this problem would require something far stronger than RBAC, in the 
form of an MLS solution that does not allow a user to pass information 
from a "high" to a "low" security domain, and these MLS solutions are 
even less enterprise-friendly than MLS.

In light of all that, it does make sense for enterprises to consider 
network-level solutions like these.

On the other hand, enterprises should stay cognizant of the "sneakernet" 
hole: if you deploy all this stuff, it is still trivial for an insider 
to walk sensitive data out the front door on a USB memory stick, a CDR, 
a Bluetooth phone, etc., that the network-level products will never see.

Crispin
Kenneth R. van Wyk wrote:
Greetings all,
I saw a moderately interesting article this morning on ZDNet (see 
http://news.zdnet.com/2100-1009_22-5520016.html?tag=zdfd.newsfeed for the 
full text).  The premise of the article is about how companies have been 
building external perimeters for years and now they need to also protect 
themselves from insiders, because, "...now discontented, reckless and greedy 
employees, and disgruntled former workers, can all be bigger threats than the 
mysterious hacker."

The article goes on to list some new products, technologies, and methods for 
protecting data from the insiders.  It says, "a whole new class of products 
has sprung up aimed at keeping employees and other insiders from sending 
confidential information outside the company."  It describes network-level 
products as well as the need for client-level products for monitoring and 
controlling data flow.

IMHO, what's missing here is a discussion on writing better enterprise 
applications that make effective use of concepts like role-based access 
control, transaction/event logging and monitoring, etc.  In fact, the article 
would lead an IT security manager to think that the only solution to insider 
problems is to buy more security products.  Frustrating...

To find a fairly "mainstream" article like this that is (again, IMHO) so 
thoroughly off base really makes me wonder whether the Software Security 
community is making progress or not.  Opinions?

Cheers,
Ken van Wyk
 

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Re: DJB's students release 44 poorly-worded, overblown advisories

2004-12-22 Thread Crispin Cowan
Paco Hope wrote:
Then reconsider whether rtf2latex or abc2midi are really "remote exploits."
I think it is safe to say that no one will have their email program or web
browser set up to run 'abc2midi' as the default option when they click an
ABC file (even though they could). Is this really remotely exploitable? It
requires the user to save the file to a disk and run a special command on
it.
 

That depends on the configuration of "helper apps" in the mail and web 
clients. It is the modern default to automatically open MS Office .doc files 
when you click on them. On many systems, there are actually system-wide 
defaults set that say "Foo is the designated application for opening 
.foo files", and the mail and web browsers will automatically start up 
the application and open the file. It would not surprise me to see a 
helper app for handling MIDI files, and while I have never heard of an 
ABC file until today, it appears to be a music format 
http://abc.sourceforge.net/abcMIDI/ and it would not surprise me if a 
non-trivial number of users have an ABC helper application defined, even 
if they do not know it, just because they installed a music editing package.

I feel like your explanation backs out to a debate about what lengths we can
"reasonably expect" someone to go to infect themselves. If clicking on an
attachment and typing a password qualifies (which I think most of us will
accept as reasonable), does "save this file to disk and run this command on
it" also qualify?
 

You are right, these marginal examples do highlight the fact that 
"remotely exploitable" is not black and white, but actually describes a 
continuum.

Maybe it's just me, but I don't think filter programs like these x2y
programs (he cited "abc2midi" and "rtf2latex2e" among others) qualify.
 

If they are commonly configured as default helper apps, then they 
definitely do qualify. If they are only occasionally configured as 
default helper apps, then they marginally qualify.

There's no way someone will have their web or mail software set up to run
these converters as the default action.
Uh huh. And no one would ever have a helper app defined for .PIF files 
either; who ever heard of that? :)

The "ease" of exploit here doesn't come anywhere near the ease of
exploiting, say, xmms or some other software that is highly likely to be the
default application for a given content-type.
 

That just narrows the number of vulnerable systems. It remains remotely 
exploitable for the people who do configure these helpers.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Re: DJB's students release 44 poorly-worded, overblown advisories

2004-12-22 Thread Crispin Cowan
ljknews wrote:
On most important systems there is no need for the users to be able
to provide executable which they then run.  Executables are provided
by the system manager.
 

While I am sympathetic to this point of view, it is no longer relevant 
to the modern context, where many data formats end up being executable, 
e.g. Office documents with executable macros in them.

Securing a MAC system in which the users are hog-tied is easy. The trick 
is to provide reasonable security *and* reasonable usability.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] [Fwd: DJB's students release 44 *nix software vulnerability advisories]

2004-12-21 Thread Crispin Cowan
Shea, Brian A wrote:
Isn't the base problem residing in this essentially flawed statement:
"Widely deployed open source software is commonly believed to contain
fewer security vulnerabilities than similar closed source software due
to the possibility of unrestricted third party source code auditing."
To have fewer bugs due to an external audit, that external audit would
have to happen, not just be possible.  Assuming fewer bugs because an
Audit COULD happen is like saying we're all infected with Bird Flu
because it COULD happen.  
 

Not necessarily. Just the threat of public embarrassment ("lookit the 
crappy code that John Doe wrote!") could cause open source 
developers to be more disciplined in the first place. This hypothesis 
has been around for quite some time as part of the "open source is 
better" hype.

However, it is also unsubstantiated.
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com


Re: [SC-L] Re: DJB's students release 44 poorly-worded, overblown advisories

2004-12-20 Thread Crispin Cowan
Paco Hope wrote:
Bernstein has a history of being inflammatory, and in this case I think he
has done the whole security community a disservice. He has called everything
a "remotely exploitable security hole" even when exploiting it requires
explicit user actions. He's playing fast and loose with terminology, which
can't help anybody.
 

Hmmm ...
Somebody's gotta come up with a reasonable definition of "remotely
exploitable." Consider the following statement:
 

Limin Wang has discovered two remotely exploitable security holes in
abc2midi. http://tigger.uic.edu/~jlongs2/holes/abc2midi.txt
   

If you read the exploit description, he says:
 

You are at risk if you take an ABC file from an email message (or a web
page or any other source that could be controlled by an attacker) and
feed that file through abc2midi. Whoever provides the ABC file then has
complete control over your account: she can read and modify your files,
watch the programs you're running, etc.
   

IMHO, that is a perfectly reasonable use of the term "remotely 
exploitable". The attack is to e-mail malicious content to a naive user, 
hence the "remote" and "exploit". In my experience, this is also 
compliant with standard usage.

When IE has a buffer overflow that can be exploited by carefully crafted
HTML in an email or web page, do we call that "remotely exploitable"?
Yes, actually, we do.
How
about those viruses that spread as password-protected zip files attached to
emails?
They are also remote exploits.
The user has to click them and then enter the password before
they're activated? Aren't those "viruses" or "trojans"? 

The viruses are distinguished from the worms in that the viruses require 
naive user action to propagate (click this here, use the "password" 
there, etc.) while the worms propagate without user intervention. But 
both are remote exploits.

If they exploited
notepad.exe when they activated would we announce a "remote exploit" on
notepad.exe? They exploit a buffer overflow in local software, but they
require action by the user before they can activate.
 

The difference between a local and a remote exploit, in this context, is 
that a local exploit requires overt action on the part of the user, e.g. 
take these 7 steps to perform the local exploit. A remote exploit can 
include malicious content that you can e-mail to a naive user and 
reasonably expect them to do what is required to perform the exploit, 
such as "click on the attachment".

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



[SC-L] Choices

2004-11-16 Thread Crispin Cowan
Jeff Williams wrote:
Not to be crass, but what most consumers care about is what the vendors tell
them to. It's all about the market. Currently, the market is stuck where
vendors don't disclose anything about the security of their process and
product, and consumers don't ask.  Our job is to change the market so that
it works differently.
Now you can change a market with taxation, liability (see Bruce Schneier's
most recent cryptogram for yet another plea), incentives, regulation, etc...
One of the least intrusive models, in my view, is to ensure that everyone
has the same information, and let the market sort it out.
Meanwhile, the only people who are *effectively* changing the market are
the *attackers* :) Consumers spend more on security, care more about the
security of products, pay more attention, etc. etc. in direct response
to the level of threat that they perceive. Were it not for the
attackers, we could all run highly insecure code, and not give a
tinker's damn about it.
Remember that we are fundamentally in the business of solving a problem.
Security is the business of saying "no" to requests, and that is
fundamentally inconvenient at best, and so our "solution" has to be less
annoying than the problem we solve. Taxes & etc. are just ways to make
life even more annoying so that people will choose the pain of secure
software instead. IMHO, that is only justified when one person's lack of
security causes other people gross inconvenience, such as the case of
completely unfirewalled home Windows machines chronically infected with
zombies.
I think you're right that the information has to be appropriate for the
consumer, or at least enough so that a reasonable software architect could
consume it. So if that's the challenge, I'm up for it.
Good luck getting consumers to choose cod liver oil over pop tarts :)
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com


Re: [SC-L] Top security papers

2004-08-09 Thread Crispin Cowan
Matt Setzer wrote:
It's been kind of quiet around here lately - hopefully just because everyone
is off enjoying a well deserved summer (or winter, for those of you in the
opposite hemisphere) break.  In an effort to stir things up a bit, I thought
I'd try to get some opinions about good foundational materials for security
professionals.  (I'm relatively new to the field, and would like to broaden
my background knowledge.)  Specifically, what are the top five or ten
security papers that you'd recommend to anyone wanting to learn more about
security?  What are the papers that you keep printed copies of and reread
every few years just to get a new perspective on them?  
 

Here's my top 5. Things to note:
  1. It is more like 1 + 4. The first paper (Saltzer and Schroeder)
 should be *required* reading for everyone who claims to have the
 slightest clue about security. Everything of significance in
 computer security is in this article in some form. The only
 significant technology missing is public key crypto, and that is
 because it had not been invented yet.
  2. The other 4 are quick & dirty skim through my bibliographic
 database. I could easily have missed some papers that are more
 seminal than these, but these 4 are very good, readable, and
 important.
  3. I excluded my own papers from consideration, but if you want to
 see them  ... :) http://immunix.com/~crispin/
Crispin
@article
 (
   salt75,
   author = "Jerome H. Saltzer and Michael D. Schroeder",
   title = "{The Protection of Information in Computer Systems}",
   journal = "Proceedings of the IEEE",
   volume = 63,
   number = 9,
   month = "November",
   year = 1975
 )
@article
 (
   one96,
   author = "``Aleph One''",
   title = "{Smashing The Stack For Fun And Profit}",
   journal = "Phrack",
   volume = 7,
   number = 49,
   month = "November",
   year = 1996
 )
@article
 (
   miller90,
   author = "B.P. Miller and L. Fredrikson and B. So",
   title = "{An Empirical Study of the Reliability of {\sc Unix}
   Utilities}",
   journal = "Communications of the ACM",
   pages = "33-44",
   volume = 33,
   number = 12,
   month = "December",
   year = 1990,
   lcindex = "QA76.A772"
 )
@inproceedings{
   badger95,
   author = "Lee Badger and Daniel F. Sterne and et al",
   title = "{Practical Domain and Type Enforcement for UNIX}",
   booktitle = "Proceedings of the IEEE Symposium on Security and Privacy",
   address = "Oakland, CA",
   month = "May",
   year = 1995
}
@article
 (
   land94,
   author = "Carl E. Landwehr and Alan R. Bull and John P. McDermott
   and William S. Choi",
   title = "{A Taxonomy of Computer Program Security Flaws}",
   journal = "ACM Computing Surveys",
   volume = 26,
   number = 3,
   month = "September",
   pages = "211-254",
   year = 1994
 )



Re: [SC-L] Programming languages -- the "third rail" of secure

2004-08-06 Thread Crispin Cowan
Nick Lothian wrote:
Scripting Languages: Depends on the language. Lack of type safety can be a
problem, but on the other hand they are usually safe from buffer overflows
and the fact that you can do a lot more in fewer lines of code can make the
code safer by making errors more obvious.
 

Scripting languages are a mixed bag:
   * On one hand, the dynamic type system and dynamic memory management
 eliminate buffer overflows as a problem.
   * On the other hand, the baroque language design (that there should
 be *several* ways to do something, which is at least an explicit
 design goal of Larry Wall for PERL) means that the programs can
 actually be very hard to read. Terseness means fewer lines of code
 to read, but obscurity can make those lines *very* difficult to
 understand.
   * Many scripting languages (PERL and PHP) have an unfortunate
 tendency to try really hard to interpret data. This results in
 fast prototype implementations, and also security faults as data
 gets interpreted just a little bit surprisingly :(

Are there other languages in widespread use (ie, the language must be used
more than - say - Python) that are safer than those listed above? 
 

Ruby zealots claim it has substantial advantages over python. I would be 
interested in comparative data from people exposed to both.

Way, *way* back in the day, I dabbled with a shell scripting language 
called "rc" that came from the Plan9 community. It was a spartan 
language, which caters to my prejudice for parsimony. Is it still alive?

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Programming languages -- the "third rail" of secure coding

2004-07-21 Thread Crispin Cowan
I don't understand the purpose of this list. If it is to list all 
programming languages, that is hopeless, as there are thousands of 
programming languages. If it is to list all programming languages with 
security ambitions, then I'm confused, as clearly not all of the 
languages listed were intended to enhance security, and some of them 
(glaringly PHP) substantially *degrade* security vs. many languages that 
came before them.

Crispin
Michael S Hines wrote:
I've been compiling a list of programming languages, some of which were
developed to 'solve' the insecure programming problem.  I don't think we've
made it yet.
Perhaps it's a personnel problem, not a technology problem?
My list -- (feel free to add to it).
1.  Assembler
2.  C/C++
3.  Pascal
4.  Basic or Visual Basic
5.  Java / J#
6.  Perl
7.  Ruby
8.  PHP
9.  C#
10. COBOL
11. Perl
12. XSLT
13. Python
14. Forth
15. APL
16. Smalltalk
17. Eiffel
18. PL/1 
19. ADA
20. Hermes
21. Scheme
22. ML
23. Haskell
24. Simula 67
25. Prolog
26. OCCAM
27. Modula 2
28. PL/M or PL/X
29. PL/SQL
30. SQL
31. Jabber
32. Expect
33. Perl/Tk
34. Tcl/Tk
35. XML
36. HTML
37. AppleScript
38. JavaScript
39. VBScript
40. D
41. Algol

---
Michael S Hines
[EMAIL PROTECTED] 

 

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Programming languages used for security

2004-07-12 Thread Crispin Cowan
David Crocker wrote:
Crispin Cowan wrote:
The above is the art of programming language design. Programs written in
high-level languages are *precisely* specifications that result in the
system generating the program, thereby saving time and eliminating
coding error. You will find exactly those arguments in the preface to
the K&R C book.
<<
Whilst I agree that the distinction between specification and programming
languages is not completely clear cut, there is nevertheless a fundamental
difference between specification and programming.
 

For years, I have been trying to get formal specification advocates to 
explain the difference between high level programming languages and 
specification languages. From my point of view, "formal specifications" 
can be divided into two categories:

   * Those that can be mechanically translated into code, otherwise
 known as "programs"
   * Those that cannot be mechanically translated, otherwise known as
 "documentation" :)

In a programming language, you tell the computer what you want it to do,
normally by way of sequential statements and loops. You do not tell the computer
...
In a specification language, you tell the computer what you are trying to
achieve, not how to achieve it. This is typically done by expressing the desired
relationship between the input state and the output state. The state itself is
normally modelled at a higher level of abstraction than in programming (e.g. you
wouldn't refer to a hash table, because that is implementation detail; you would
refer to a set or mapping instead).
 

I agree with the other posters: the above could describe a formal 
specification, but could also describe a declarative programming language.

However, I think I do see a gap between these extremes. You could have a 
formal specification that can be mechanically transformed into a 
*checker* program that verifies that a solution is correct, but cannot 
actually generate a correct solution. The assert() statements that David 
Crocker mentioned are an incomplete form of this; incomplete because they 
do not *completely* verify the program's behavior to be correct (because 
they are haphazardly placed by hand).
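
The canonical small example of that gap: from the specification "the
output is a sorted permutation of the input" one can mechanically
derive a checker, but not the sorting algorithm itself. The sortedness
half of such a checker is a few lines of C:

#include <stdbool.h>
#include <stddef.h>

/* Derived checker: verifies the "sorted" half of the spec. The
   "permutation" half is similarly checkable; neither half tells
   you how to sort. */
bool is_sorted(const int *a, size_t n)
{
    for (size_t i = 1; i < n; i++)
        if (a[i - 1] > a[i])
            return false;
    return true;
}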

So there's another midpoint in the spectrum: a formal spec that can only 
verify correctness but is complete, effectively is a program for 
non-deterministic machines (cf: NP completeness theory). A spec that is 
incomplete (does not specify all outputs) is more of an approximation.

All of which begs the question: are these formal specs that are somehow 
not programs any easier to verify than actual programs? Probably 
somewhat easier (they are necessarily simpler) but some would argue, not 
enough simpler to be worth the bother. E.g. suppose 100,000 lines of 
code reduces to 10,000 lines of formal specification in some logical 
notation. A hard problem, but solvable, is a mechanical proof that the 
10,000 line spec and the 100,000 lines of code actually conform. An 
unsolved problem is "does the 10,000 line spec mean what the human 
*thinks* it means?"

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Programming languages used for security

2004-07-10 Thread Crispin Cowan
Dana Epp wrote:
My what a week of interesting discussions. Lets end this week on a 
good and light hearted note.
Insert various analogies between programming languages and automobiles 
here :)

   * $MY_FAVORITE_LANGUAGE is like a $REALLY_COOL_CAR, while
 $YOUR_FAVORITE_LANGUAGE is like a Yugo.
   * $C_OR_ASSEMBLER_ITS_REALLY_THE_SAME_THING is like a thermonuclear
 missile, in that it is fast and powerful, but if you are not
 careful, you can give yourself an ouchie :)
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Programming languages used for security

2004-07-09 Thread Crispin Cowan
David Crocker wrote:
1. Is it appropriate to look for a single "general purpose" programming
language? Consider the following application areas:
a) Application packages
b) Operating systems, device drivers, network protocol stacks etc.
c) Real-time embedded software
The features you need for these applications are not the same. For example,
garbage collection is very helpful for (a) but is not acceptable in (b) and (c).
For (b) you may need to use some low-level tricks which you will not need for
(a) and probably not for (c).
 

I agree completely that one language does not fit all. But that does not 
completely obviate the question, just requires some scoping.

2. Do we need programming languages at all? Why not write precise high-level
specifications and have the system generate the program, thereby saving time and
eliminating coding error? [This is not yet feasible for operating systems, but
it is feasible for many applications, including many classes of embedded
applications].
 

The above is the art of programming language design. Programs written in 
high-level languages are *precisely* specifications that result in the 
system generating the program, thereby saving time and eliminating 
coding error. You will find exactly those arguments in the preface to 
the K&R C book.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Programming languages used for security

2004-07-09 Thread Crispin Cowan
ljknews wrote:
Such typing should include specification by the programmer of the range
of values allowed in variables: -32767 to +32767, 0 to 100, 1 to 100,
Characters a-z only, characters A-Z only, -10.863 to +4.368, etc.
The language should also support exact specification of arithmetic
operations to be performed for various types (overflow semantics,
precision, decimal vs. binary arithmetic, etc.).  This is important
to ensure the desired behavior is obtained when one changes to a
new compiler/interpreter, if only to have a program rejected as
requiring behavior not supported on the new compiler or operating
system.
 

Check out the Hermes programming language 
<http://www.research.ibm.com/people/d/dfb/hermes-publications.html>, 
which not only does such checks, but in many cases can do the checks 
statically, and refuse to compile unsafe programs. This mechanism is 
called typestate checking 
<http://www.google.com/search?hl=en&lr=&ie=UTF-8&q=typestate+checking&btnG=Search>., 
which IMHO is one of the most interesting extensions of static type 
checking for both safety and performance.
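
As a point of comparison, a range-typed variable ("0 to 100") emulated
in plain C is a distinct type whose only constructor enforces the
invariant. Typestate checking discharges most such checks at compile
time; C can only fall back to a run-time check (a sketch, my names):

#include <assert.h>

typedef struct { int v; } percent;   /* invariant: 0 <= v <= 100 */

percent percent_make(int v)
{
    assert(v >= 0 && v <= 100);      /* run-time stand-in for a
                                        static proof              */
    return (percent){ v };
}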

The bad news is that Hermes, while it has many great safety features, is 
another dead programming language. That's the problem with programming 
language design: there are LOTS of great programming languages out 
there, and approximately none of them have the critical mass of 
compilers, tools, and (most important) programmers to make them viable 
for most projects.

The good news is that Hermes is among the sources that Java looted; some 
of the typestate checking features ended up in the Java bytecode checker.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Education and security -- another perspective (was "ACM Queue - Content")

2004-07-09 Thread Crispin Cowan
Peter Amey wrote:
Firstly a tactical one: Ada is by no means a dead language.  There is a great tendency in our industry to 
regard whatever is in first place at any particular point in life's race to be "the winner" and 
everything else to be "dead".
Ada was pushed hard enough by the DoD for a decade that it is to be 
expected that there is a lot of Ada code to be maintained. I'm also 
willing to believe that your business in Ada may be growing, but that is 
likely because others are exiting the business and leaving the 
remainders for you; I do not believe (unless you have evidence to the 
contrary) in significant growth in new project starts in Ada.

I focus on new project starts because that is the only case in which 
language selection is even an interesting question. For any kind of 
on-going work, using the same language that the project was started in 
is the obvious choice most of the time.

 In practice very substantial use may continue to be made of things which are not in 
the ultra-visible first place.  For example, OS/2 was killed by Windows yet most ATMs 
in the USA still run OS/2.
But no new OS/2 ATMs are being built, and they are being phased out.
 We haven't discussed the dead languages Cobol and Prolog but both are actually still 
in widespread use,
COBOL: same reason, legacy systems, and LOTS of them.
Prolog: not so sure. Prolog may still be a language of choice for expert 
systems projects. But I don't work in that field. I do have a completely 
un-used Prolog text book left over from grad school if someone wants to 
buy it :)

Secondly, in response to your suggestion that we teach concepts (which I wholly agree with), languages, including dead ones, encapsulate and illustrate concepts.  Pascal was designed to teach structured programming.  Occam provides a splendid introduction to concurrency.  Modula-2 and Ada are ideal for illustrating the vital concepts of abstraction, encapsulation and the separation of specification and implementation.  The languages are worth studying for these reasons alone.  Those exposed to them will be better programmers in any language and will find adoption of new ones much easier.  
 

In programming language terms, Ada is grossly primitive. Its object 
orientation mechanisms are crude at best. A *great* deal of progress in 
language technology has been made since Ada was developed. For just 
about any kind of concept or safety feature, students and developers 
would be better served to consider Java, C#, or ML instead of Ada.

> As you say, languages come in and out of fashion; what I find sad is
> that so many of the new languages have failed to learn and build on the
> lessons of those that have gone before.  I think it highly probable that
> this is because their designers have casually dismissed those that went
> before as dead and therefore of no interest.  They would have done
> better to emulate Newton and "stood on the shoulders of giants" such as
> Wirth.

And that is what I meant by "history of programming languages". Java, 
C#, and ML are strictly better than Pascal and Ada for almost 
everything. But they did not spring out of the earth, they were built on 
the progress of previous languages. Java in particular contains no novel 
features at all, but rather shows good taste in the features it borrows 
from others. What made Java interesting was the accident of history that 
caused it to become the first strongly typed polymorphic programming 
language to become widely popular.

You *can* teach object orientation with Simula 67 or Smalltalk, if you
really want to. But teaching object orientation with Java is a lot more 
approachable in the contemporary context.

> I would never recruit someone just because they knew Ada rather than C;
> however, I would be highly unlikely to recruit someone who had such a
> closed mind that they thought Ada had nothing to teach them and was only
> fit for snide mockery.

I don't mock Ada for what it is: a fairly good programming language from 
the 1970s, with obvious scars from having been designed by committee 
(too big, too many features). Ada's defects are artifacts of its age and 
its history, not of poor design.

I do mock the suggestion that a large, complex, and retrograde language 
with no industrial growth is a suitable subject for undergraduate education.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Education and security -- another perspective (was "ACM Queue - Content")

2004-07-09 Thread Crispin Cowan
Peter Amey wrote:
>>> What is wrong with this picture ?
>>> I see both of you willing to mandate the teaching of C and yet not
>>> mandate the teaching of any of Ada, Pascal, PL/I etc.
>>
>> Makes sense to me. What is the point of teaching dead languages like
>> Ada, Pascal, and PL/I?  Teach C, Assembler, and Java/C# (for the
>> mainstream), and some lisp variant (Scheme, ML, Haskell) and Prolog
>> variant for variety. But Ada, Pascal, and PL/I are suitable
>> only for a "history of programming languages" course :)
>
> I do hope that is a sort of smiley at the end of your message.  Please.

It is a sort-of smiley. On one hand, I find the whole thing amusing. On 
the other hand, I find it patently absurd that someone would suggest 
that curriculum in 2004 would comprise Ada, Pascal, and PL/I, all of 
which are (for industrial purposes) dead languages.

On one hand, university should be about learning concepts rather than 
languages, because the concepts endure while the languages go in and out 
of fashion. Evidence: 20 years ago, when I was in college, "Ada, Pascal, 
and PL/I" only included one dead language :)  On the other hand, the 
students do need to get a job when they graduate, and we do them a 
disservice to not at least teach concepts using a language currently in 
use in industry.

There is also room for a lot of breadth in a college program. I was only 
overtly instructed in languages a few times, the rest were "read the 
book then do this assignment." But in that approach, I learned COBOL, 
Pascal, PL/M, 68000 assembler, C, C++, FORTRAN, VAX assembler, Prolog, 
LISP, and Maple.  It's not like this list needs to be short.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Education and security -- another perspective (was "ACM Queue - Content")

2004-07-08 Thread Crispin Cowan
ljknews wrote:
> What is wrong with this picture ?
> I see both of you willing to mandate the teaching of C and yet not
> mandate the teaching of any of Ada, Pascal, PL/I etc.

Makes sense to me. What is the point of teaching dead languages like
Ada, Pascal, and PL/I?  Teach C, Assembler, and Java/C# (for the 
mainstream), and some lisp variant (Scheme, ML, Haskell) and Prolog 
variant for variety. But Ada, Pascal, and PL/I are suitable only for a 
"history of programming languages" course :)

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Education and security -- another perspective (was "ACM Queue - Content")

2004-07-06 Thread Crispin Cowan

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] SPI, Ounce Labs Target Poorly Written Code

2004-06-30 Thread Crispin Cowan
Blue Boar wrote:
> I seriously doubt that there is a programming language that can do
> anything useful that one can't do something stupid with.
Gödel's Incompleteness Theorem: no non-trivial logic system can be both 
consistent (all proven theorems are true) and complete (all true 
theorems are provable).

Blue Boar's Corollary: no non-trivial programming language can be both 
useful and safe :)

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Origins of Security Problems

2004-06-18 Thread Crispin Cowan
Mark Rockman wrote:
> I had no idea I was promulgating a syllogism.  In fact, I did not intend
> to.  My point was that the world changed and the software didn't, nor
> did people change their behaviors to compensate.
The threat-level changed when people hooked computers running critical
applications to the internet without taking additional precautions. The
insecurity was there in the first place, before the Internet was injected.
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com


Re: [SC-L] Interesting article on the adoption of Software Security

2004-06-12 Thread Crispin Cowan
Andreas Saurwein wrote:
> Crispin Cowan wrote:
>> However, wherever C made an arbitrary decision (either way is just
>> as good) PL/M went the opposite direction from C, making it very
>> annoying for a C programmer to use.
> Does that mean it did not make any decision at all? What was the outcome?

No, just trivial decisions on syntax. It made my fingers hurt to use it,
because I had to retrain a lot of habits. Unfortunately I no longer
remember the specifics.

Michael S Hines wrote:
> When you've been around for a while, you start to see the same features
> converge.  UNIX had quotas, we got Quotas with Win XP Server (well
> earlier, when you include the third party ISVs - as an add on).  IBM had
> Language Environment (LE) before .NET came along.

Crispin Cowan wrote:
> I think .Net borrows most heavily from Java. Java in turn borrows
> from everyone. The "managed code" thing in particular leads back to
> the Pascal P-code interpreter; a kludge to make the Pascal compiler
> easier to implement and port. The innovation in Java was to take this
> ugly kludge and market it as a feature :)

Michael S Hines wrote:
> I'm not sure that it can be blamed on Pascal. Microsoft was shipping
> Excel for the Mac in the early 80's as a P-Code application and has been
> selling P-Code generating compilers since about the same time. Ever
> since, MS was strong on P-Code generating compilers.
The UCSD Pascal P-Code system was released in 1978 
<http://www.informationheadquarters.com/History_of_computing/UCSD_p-System.shtml>. 
MS Excel was released in 1984 
<http://www.dssresources.com/history/sshistory.html>. And if anything, 
the above claim that MS has been using P-code since the early days of 
Excel only supports the claim that Pascal P-Code is the origin of the 
idea at Microsoft.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Interesting article on the adoption of Software Security

2004-06-11 Thread Crispin Cowan
Michael S Hines wrote:
> Likewise for the IBM Mainframe operating systems MVS, OS/390, z/OS -
> much of which is written in (I believe) PL/M - a dialect much like PL/1.

If PL/M is the language I am remembering from an embedded systems class 
back in the 1980s, then it is not at all like PL/1. Rather, it is a 
completely type-unsafe language. I would say "similar to C", in that it 
has most of the same pitfalls. However, wherever C made an arbitrary
decision (either way is just as good) PL/M went the opposite direction 
from C, making it very annoying for a C programmer to use.

> Many of our Operating Systems seem to have evolved out of the old DEC
> RSTS system.  For example, CP/M had a PIP command.  Later renamed to
> COPY in DOS.

True.
> When you've been around for a while, you start to see the same features
> converge.  UNIX had quotas, we got Quotas with Win XP Server (well
> earlier, when you include the third party ISVs - as an add on).  IBM had
> Language Environment (LE) before .NET came along.

I think .Net borrows most heavily from Java. Java in turn borrows from 
everyone. The "managed code" thing in particular leads back to the 
Pascal P-code interpreter; a kludge to make the Pascal compiler easier 
to implement and port. The innovation in Java was to take this ugly 
kludge and market it as a feature :)

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Interesting article on the adoption of Software Security

2004-06-11 Thread Crispin Cowan
ljknews wrote:
> At 2:00 PM -0700 6/10/04, Dana Epp wrote:
>> Ok, lets turn the tables a bit here. We talked about this a bit back
>> last December when I said that you need to use the right tool for the
>> right job, and to quit beating on C.
>> For those of us who write kernel mode / ring0 code, what language are
>> you suggesting we write in? Name a good typesafe language that you have
>> PRACTICALLY seen used to write kernel mode code. Especially on Windows
>> and the Linux platform. I am not trying to fuel the argument over which
>> language is better; it comes down to the right tool for the right job.
>> I know back in December ljknews suggested PL/I and Ada, but who has
>> actually seen production code in either Windows or Linux using them?
>
> Restricting your domain of inquiry to C-centric operating systems
> is not exactly a reasonable set of ground rules.  This is, after all,
> not a mailing list restricted to Windows and Linux.

I strongly disagree. While it is not reasonable to limit this *list* to
C-oriented operating systems, it is a perfectly reasonable question for
a developer of Windows and Linux kernel enhancements to ask what
programming language or programming techniques they should use to
improve the security of their development efforts. With Windows and
Linux collectively representing some huge plurality of all deployed
computer systems, it is a very practical question.

> Even this _topic_ is not restricted to Windows and Linux. As an advocate
> of strongly typed languages, I do not use either.

That's nice, but it does not help the person who has to enhance legacy C 
code, which is a very real problem.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] opinion, ACM Queue: Buffer Overrun Madness

2004-06-11 Thread Crispin Cowan
David Crocker wrote:
> Apart from the obvious solution of choosing another language, there are
> at least two ways to avoid these problems in C++:
> 1. Ban arrays (to quote Marshall Cline's "C++ FAQ Lite", arrays are
> evil!). Use
> ...
> 2. If you really must have naked arrays, ban the use of indexing and
> arithmetic on naked pointers to arrays (i.e. if p is a pointer, then
> p[x], p+x, p-x, ++p

If you want safer C and you want the compiler to enforce it, and you 
don't mind having to re-write your code some, then use one of the safer 
C dialects (CCured <http://manju.cs.berkeley.edu/ccured/> and Cyclone 
<http://www.research.att.com/projects/cyclone/>). These tools provide a 
nice mid-point in the amount of work you have to do to reach various 
levels of security in C/C++:

   * low security, low effort
         o do nothing
         o code carefully
         o apply defensive compilers, e.g. StackGuard
         o apply code auditors, e.g. RATS, Flawfinder
         o port code to safer C dialects like CCured and Cyclone (see
           the bounds-check sketch after this list)
         o re-write code in type safe languages like Java and C#
         o apply further code security techniques, e.g. formal theorem
           provers WRT a formal spec
   * high security, high effort
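
As a minimal sketch of what the CCured/Cyclone rung of that list buys
you (hand-written here for illustration; this is not either tool's
actual output, but both arrange for equivalent checks automatically by
carrying bounds in "fat" pointers):

    #include <stdio.h>
    #include <stdlib.h>

    /* Unsafe: nothing stops i from walking off the end of a. */
    int raw_get(const int *a, size_t i)
    {
        return a[i];   /* out-of-range i is undefined behavior */
    }

    /* Fail-stop: halt instead of reading out of bounds. */
    int checked_get(const int *a, size_t len, size_t i)
    {
        if (i >= len) {
            fprintf(stderr, "bounds violation: %zu >= %zu\n", i, len);
            abort();
        }
        return a[i];
    }

    int main(void)
    {
        int a[4] = {1, 2, 3, 4};
        printf("%d\n", raw_get(a, 9));         /* silently reads past a */
        printf("%d\n", checked_get(a, 4, 3));  /* ok: prints 4          */
        printf("%d\n", checked_get(a, 4, 9));  /* aborts, never reads   */
        return 0;
    }
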
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Interesting article on the adoption of Software Security

2004-06-10 Thread Crispin Cowan
Damir Rajnovic wrote:
> While it is true that only some of the bugs are fixed, that fixing can
> have an unexpectedly high price tag attached. No matter how you look
> at this, it _is_ cheaper to fix bugs as soon as possible in the process
> (or not introduce them in the first place).

This is true in the isolation of looking at the cost of fixing any one 
individual bug, but it is not true in general. Fixing one bug early in 
the process is cheap and easy. Fixing the *last* bug in a system is 
astronomically expensive, because the cost of *finding* bugs rises 
exponentially as you further and further refine it. Worse, you 
eventually reach a point of equilibrium where your chances of inserting 
a new bug in the course of fixing a known bug are about even, and it 
becomes almost impossible to reduce the bug count further.

> Personally, I do not see how this can be easily measured.

This entire area is rife with mushy psychological issues involving
humans' ability to process information correctly. As a result, nearly
all of the absolute statements are wrong, and they function only within
certain ranges, e.g. fixing bugs early in development is cheaper than
patching in the field, but only within the bounds of digging only so
hard for bugs.

But even this statement is self-limiting. The above claim is not true 
(or at least less true) for safety-critical systems like fly-by-wire 
systems and nuclear reactor controllers, where the cost of failure due 
to a bug is so high that it is worth paying the extra $$$ to find the 
residual bugs in the development phase.

My reaction to the feuding over whether it is better to shore up C/C++ 
or to use newer safer languages like Java and C#: each has their place.

   * There are millions of lines of existing C/C++ code running the
 world. Holding your breath until they are all replaced with type
 safe code is not going to be effective, and therefore there is
 strong motive to deploy tools (e.g. StackGuard, RATS, etc.) to
 improve the safety of this code.
   * New code should be written in type safe languages unless there is
 a very strong reason to do otherwise.
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] More host-based production security tools unveiled

2004-06-08 Thread Crispin Cowan
Kenneth R. van Wyk wrote:
> Although the Immunix suite was briefly described here earlier, the
> Determina product wasn't.  Has anyone here looked at these tools and
> care to share their experience with either or both?

I had never heard of Determina before today. Notably, Google has no
high-ranking links to "determina", so their web site must be very new. 
However, some digging reveals that their CTO is a co-author of the 
"program shepherding" paper of 2002, and that is consistent with the 
white paper posted on the determina.com site.

Since my product is discussed in the same article as the determina 
product, I will refrain from commenting on its merits.

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] auditing

2004-05-03 Thread Crispin Cowan
jnf wrote:

> Someone just suggested ctags; I've never heard of ctags or cscope - I
> will look at them. I don't really know what I was looking for,

ctags kind of turns C source code into hypertext: you put your cursor on a
function call in a source file, press the magic key, and vi[m] jumps to 
the appropriate line in the appropriate source file where that function 
is implemented. Press another magic key, and vi[m] jumps back to the 
call site. Makes it easy and convenient to do a calling-tree structured 
exploration of source code. I found it very valuable for understanding 
how a program is intended to function.

> I often find it
> quite frustrating trying to keep track of what's going on across XX
> global variables inside of XX internal functions, and so on - so really
> anything that would help me keep track of it, I suppose a debugger and
> a lot of

ctags does not track variables at all. It only does the above hypertext 
trick.

Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com/



Re: [SC-L] Re: White paper: "Many Eyes" - No Assurance Against Many Spies

2004-05-03 Thread Crispin Cowan
Tad Anhalt wrote:

> Jeremy Epstein wrote:
>> I agree with much of what he says about the potential for
>> infiltration of bad stuff into Linux, but he's comparing apples and
>> oranges.  He's comparing a large, complex open source product to a
>> small, simple closed source product.  I claim that if you ignore the
>> open/closed part, the difference in trustworthiness comes from the
>> difference between small and large.
>
> It's a lot deeper than that.  Here's the link to the original Ken
> Thompson speech for convenience sake:
> 	http://www.acm.org/classics/sep95

Ok, someone has mentioned Ken Thompson's Turing Award speech in a "my 
security is better than yours" flamewar^W discussion. This almost 
warrants a security-geek version of Godwin's law :)

But taking the remark seriously, it says that you must not trust 
anything that  you don't have source code for. The point of Thompson's 
paper is that this includes the compiler; having the source code for the 
applications and the OS is not enough, and even having the source for 
the compiler is not enough unless you bootstrap it yourself.
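
Reduced to a compilable sketch, the construction Thompson describes
looks roughly like this (every helper below is a hypothetical stub; the
real attack recognizes source patterns rather than file names, and lives
only in the compiler binary while all of the source stays clean):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool looks_like(const char *src, const char *pattern)
    {
        return strstr(src, pattern) != NULL;
    }

    static void emit_code_with_backdoor(void)
    { puts("emitting login with a hidden master password"); }

    static void emit_compiler_with_trojan(void)
    { puts("emitting a compiler that re-inserts this very test"); }

    static void emit_correct_code(const char *src)
    { printf("emitting honest code for %s\n", src); }

    void compile(const char *source)
    {
        if (looks_like(source, "login.c"))
            emit_code_with_backdoor();    /* backdoor, clean source   */
        else if (looks_like(source, "cc.c"))
            emit_compiler_with_trojan();  /* survives recompilation   */
        else
            emit_correct_code(source);    /* honest for everyone else */
    }

    int main(void)
    {
        compile("cc.c");     /* the trojan re-installs itself   */
        compile("login.c");  /* the backdoor gets planted       */
        compile("hello.c");  /* innocent code compiles honestly */
        return 0;
    }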

Extrapolating from Thompson's point, the same can be said for silicon:
how do we know that CPUs, chipsets, drive controllers, etc. don't have
Trojans in them? Just how hard would it be to insert a funny hook in an
IDE drive that did something "interesting" when the right block sequence
comes by?

For a really interesting long-term extrapolation of this point of view, 
I strongly recommend reading "A Deepness in the Sky" by Vernor Vinge 
http://www.tor.com/sampleDeepness.html

While it is a science fiction novel, Vinge is also a professor of
computer science at San Diego State University, and a noted visionary in
the future of computing, having won multiple Hugo awards. Vinge wrote
the first cyberpunk story "True Names" in 1981.

The horrible lesson from all this is that you cannot trust anything you 
do not control. And since you cannot build everything yourself, you 
cannot really trust anything. And thus you end up taking calculated 
guesses as to what you trust without verification. Reputation becomes a 
critical factor.

It also leads to the classic security analysis technique of amassing 
*all* the threats against your system, estimating the probability and 
severity of each threat, and putting most of your resources against the 
largest threats. IMHO if you do that, then you discover that "Trojans in 
the Linux code base" is a relatively minor threat compared to "crappy 
user passwords", "0-day buffer overflows", and "lousy Perl/PHP CGIs on 
the web server". This Ken Thompson gedanken experiment is fun for 
security theorists, but is of little practical consequence to most users.
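
As a toy illustration of that triage, rank threats by expected loss
(probability times severity); the threat names echo the list above, and
every number here is invented for the example:

    #include <stdio.h>
    #include <stdlib.h>

    struct threat {
        const char *name;
        double probability;  /* chance of occurrence per year */
        double severity;     /* cost in dollars if it happens */
    };

    /* Sort descending by expected loss = probability * severity. */
    static int by_expected_loss(const void *a, const void *b)
    {
        const struct threat *ta = a, *tb = b;
        double la = ta->probability * ta->severity;
        double lb = tb->probability * tb->severity;
        return (lb > la) - (lb < la);
    }

    int main(void)
    {
        struct threat threats[] = {
            { "crappy user passwords",   0.9,    50000 },
            { "0-day buffer overflow",   0.3,   200000 },
            { "lousy Perl/PHP CGIs",     0.5,    80000 },
            { "Trojan in the code base", 0.001, 500000 },
        };
        size_t n = sizeof threats / sizeof threats[0];

        qsort(threats, n, sizeof threats[0], by_expected_loss);
        for (size_t i = 0; i < n; i++)
            printf("%-26s expected loss: $%.0f\n", threats[i].name,
                   threats[i].probability * threats[i].severity);
        return 0;
    }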

Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com
Immunix 7.3   http://www.immunix.com/shop/



Re: [SC-L] Anyone looked at security features of D programming language compared to Spark?

2004-04-26 Thread Crispin Cowan
Blue Boar wrote:

> Crispin Cowan wrote:
>> Dynamic type checking (or any kind of run-time fail-stop checking)
>> enhances security (attacks are halted) but degrades reliability
>> (processes that might live with a harmlessly inconsistent state may
>> be halted).
>
> Degrades reliability of a "correct" program?  Or only degrades
> reliability of a program with bugs, harmless or not?

The latter. Run-time fault checks will never go off if the program does
not have faults.

> If it's the latter, I would assume QA would want to see the latter,
> so the bug could be squashed.  I'm assuming, of course, that one wants
> to also squash "harmless" bugs.
QA will want to squash the bugs it sees. Run-time fault checking helps 
find *some* of those bugs, if QA checks the code paths that expose those 
bugs. Static type checking, OTOH, finds latent bugs that no one thought 
to check for, at the cost of not finding some bugs that are statically 
undecidable. Using both is of course the safest.
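
A minimal sketch of that trade-off in plain C, with assert() standing in
for any run-time fail-stop check (active when NDEBUG is unset):

    #include <assert.h>

    int divide(int num, int denom)
    {
        /* Run-time fail-stop: only fires if QA happens to execute a
         * path where denom == 0; the process halts rather than running
         * on in an inconsistent state. */
        assert(denom != 0);
        return num / denom;
    }

    int main(void)
    {
        /* Static checking catches a whole class of bugs on *every*
         * path, without running the program at all:
         *
         *     int bad = divide("oops", 2);   <- compile-time error
         */
        int ok = divide(10, 2);   /* fine                              */
        return divide(ok, 0);     /* compiles, but asserts at run time */
    }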

Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com
Immunix 7.3   http://www.immunix.com/shop/



Re: [SC-L] Anyone looked at security features of D programming language compared to Spark?

2004-04-26 Thread Crispin Cowan
Jim & Mary Ronback wrote:

> I am hard put to find an example of a language feature which makes a
> system more secure but less safe or vice versa, in any context. Can
> anyone else think of one?
Dynamic type checking (or any kind of run-time fail-stop checking) 
enhances security (attacks are halted) but degrades reliability 
(processes that might live with a harmlessly inconsistent state may be 
halted).

Now, that is in isolation, considering only the language impact on an
individual process, in response to Jim/Mary's question. Of course you
can compose fail-stop mechanisms with redundancy techniques to achieve
strong availability in the presence of weak individual process
reliability. In fact, it is much easier to achieve high availability in
the presence of fail-stop failure modes than of Byzantine failure modes.

Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com
Immunix 7.3   http://www.immunix.com/shop/




Re: [SC-L] Missing the point?

2004-04-26 Thread Crispin Cowan
Michael A. Davis wrote:

> A Network World article,
> http://www.nwfusion.com/news/2004/0419codereview.html, discusses the
> various MS patches that came out last week. Ellen Messmer, the
> author, talks about the many companies and startups that are selling
> products to help with code auditing and testing to help automate the
> security review process.
> Isn't she missing the point? It is not the source code that is the
> problem -- it is the developer.

I completely disagree: it is the code that counts. The developer can get 
run over by a bus, and we will still be running the code.

Developer education is *one* path to higher code quality. Better tools 
is another. But better code quality is definitely the end-goal.

Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com
Immunix 7.3   http://www.immunix.com/shop/



Re: [SC-L] Anyone looked at security features of D programming language?

2004-04-23 Thread Crispin Cowan
Greenarrow 1 wrote:

> There is a comparison chart of different functions of D vs other
> languages at this site:
>
> http://www.digitalmars.com/d/comparison.html
This "comparison" appears to be an advocacy piece by the D developers, 
and thus may be biased.

The comparison leaves out three conspicuous factors that I would want to 
see:

   * Comparison with CCured <http://manju.cs.berkeley.edu/ccured/> (a
     safer C variant)
   * Comparison with Cyclone
     <http://www.research.att.com/projects/cyclone/> (another safer C
     variant)
   * Whether or not D is statically type safe. Here's a hint:
         o D: no (inferred from the fact that it includes inline
           assembler and "Direct access to C")
         o C: no
         o C++: no
         o C#: yes (not sure how well validated that is)
         o Java: yes (though some dispute it)
         o CCured: I think so, but I'm not sure
         o Cyclone: claims to be, but I don't know how well validated
           that is
Static type safety is the gold standard of "secure" programming 
languages. Any language that asks you to pay the price of porting to a 
new language and yet does not offer static type safety is IMHO worthless 
junk. So based on D's own marketing claims, I don't think it is even 
worth a second look. Instead, consider Java, C#, Cyclone, and CCured.
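
To see why C and C++ get a "no" there, consider that this compiles
without complaint; the read is undefined behavior, which is precisely
the kind of program a statically type safe language rejects at compile
time:

    #include <stdio.h>

    int main(void)
    {
        double d = 3.14;
        int *p = (int *)&d;  /* legal C: the cast defeats the type system */
        printf("%d\n", *p);  /* reinterprets part of a double as an int   */
        return 0;
    }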

Disclaimer: I have no interest in any of these products. My product is a 
safer C compiler (StackGuard) that also does not offer type safety, but 
also does not ask you to do any porting: it just compiles standard C/C++ 
programs and adds some safety checks, i.e. just a compiler enhancement, 
not a new language.

Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com
Immunix 7.3   http://www.immunix.com/shop/



Re: Re : [SC-L] virtual server - use jail(8) on FreeBSD

2004-04-02 Thread Crispin Cowan
Serban Gh. Ghita wrote:

> First of all I did not express myself very clearly (for the ones who
> replied): I said virtual shared environment, not virtual machine, so I
> am not talking about VMware or other software like that.
> My main concern is the security of a server (e.g. a webhosting
> provider), where multiple users are hosted, and everybody must be
> restricted from getting out of his own home.
Immunix SubDomain http://immunix.org/subdomain.html does exactly what 
you want. You can write a profile per CGI script that describes the set 
of files the script may read, write, and execute. The profile is written 
using regular expressions, so you can add flexibility to it. The profile 
can be applied as a global default, or per script. It can even be 
applied when you are using mod_perl or mod_php, when there is no actual 
call to exec(). Here's a screen shot of what a profile looks like 
http://immunix.org/subdomain.html

> The jail(8) solution seems fair to me, because I use FreeBSD on all
> servers,
That is unfortunate, as SubDomain is linux only.

To those complaining that this has nothing to do with "secure coding," I
disagree. This is a meta-language describing the permitted behavior of
applications. It is secure coding in another form, with several
attractive properties:

   * It is a meta-language, so it does not interfere with the structure
 of the base program.
   * It can be applied to closed-source binaries.
   * It is purely declarative, so it is easy to construct assurance
 arguments based on the content of the SubDomain profile.
Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com
Immunix 7.3   http://www.immunix.com/shop/





Re: [SC-L] Administrivia & Request: Aloha, the moderator is back

2004-03-29 Thread Crispin Cowan
Kenneth R. van Wyk wrote:

> And here's a bit of food for thought...  I've been invited to be on an
> upcoming TechTV segment on the topic of computer viruses.  I'm not sure
> how much leeway I'll have in steering the discussions, but if
> appropriate, I'd
God DAMN does TechTV suck. Their grasp of computer issues is feeble. 
IMHO, TechTV has net-negative impact on the general public's 
understanding of computer issues.

On the plus side, perhaps Kenneth's participation will improve the clue 
level of the show. On the minus side, there is vast potential for the 
clueless on TechTV to edit Kenneth's comments into nonsense. Good luck 
to you :)

Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com
Immunix 7.3   http://www.immunix.com/shop/


Re: [SC-L] Re: Application Sandboxing, communication limiting, etc.

2004-03-16 Thread Crispin Cowan
ss.

   * You can get the file access semantics of SubDomain with Systrace,
     but at the cost of a more complex specification (you have to
     consistently specify the files that can be accessed through open,
     creat, unlink, read, write, etc., while SubDomain just abstracts
     that to Read, Write, and Execute permissions, similar to UNIX RWX
     mode bits).
   * Systrace does not have the sub-process confinement (mod_perl and
     mod_php) capability.

> Does any distribution besides Immunix use SubDomain?

No, Immunix is proprietary. We are a technology company; our goal is to 
license Immunix technologies (including SubDomain) to server appliance 
vendors to enable them to enhance their product security and reduce 
their cost of achieving security in their products.

> What percentage of
> applications have SubDomain policies written for them?
"Percent" is not a meaningful question. On one hand, Immunix ships with 
a SubDomain profile all of the applications that install by default with 
an open network port. On the other hand, a very small percentage of the 
6000 or so applications in Debian-unstable have SubDomain profiles. The 
important question for a given machine is "are all the threatened 
applications profiled?" and that answer varies with the configuration. 
As I said, Immunix comes with utilities to help assess that.

> I imagine it's a
> lot of work to write these policies.
IMHO, it is a lot easier to write SubDomain application policies than in 
any other access control system that I have seen. Your opinions may vary 
:) but that's why I provided the Mozilla profile, so people could check 
it out and see how easy or hard it is.

Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com
Immunix 7.3   http://www.immunix.com/shop/





Re: [SC-L] Re: Application Sandboxing, communication limiting, etc.

2004-03-14 Thread Crispin Cowan
Jared W. Robinson wrote:

> On Tue, Mar 09, 2004 at 07:12:35PM -0500, Bill Cheswick wrote:
>> One of the things I'd like to see in Linux and Windows is better
>> sandboxing of user-level programs, like Outlook and the browsers.
>> There have been a number of approaches proposed over the years, and
>> numerous papers, but I haven't seen anything useful deployed widely on
>> any of these platforms.
>
> I agree with the sandboxing idea. We're seeing it used more on the
> server side, but the desktop arena isn't as far along.
> Seems to me that the average user application doesn't need to open
> TCP/UDP ports for listening. Attack bots tend to do this kind of thing.
> Perhaps SELinux could be used to define a rule set that would restrict
> desktop applications' access to resources such as the filesystem,
> network, etc.
>
> Note that I don't know what the scope of SELinux is, or how it works.
This is exactly what Immunix SubDomain does: define the files and
network activities that each program may access. We use regular
expressions to specify policy, so for instance, fingerd could be
permitted to read /home/*/.plan and not read anything else.
Below my sig (apparently an attachment with a name infix of ".lib" 
causes a lot of AV filters to freak out) is a sample SubDomain profile 
for Mozilla 1.4. It gives read and execute access to a long list of 
library and configuration files that Mozilla needs, and then home 
directory access to things like "/home/*/tmp/**" so that you can store 
whatever you want into your personal temp directory, but Mozilla gone 
mad does not have total write access to your entire home directory. The 
"*" notation means "a single path element" while "**" means an arbitrary 
number of path elements, i.e. a tree.
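
A toy matcher for just this "*" versus "**" distinction (illustrative
only; it is not SubDomain's actual implementation):

    #include <stdbool.h>
    #include <stdio.h>

    static bool match(const char *pat, const char *path);

    /* Let a wildcard absorb characters one at a time; "*" must stop
     * at '/', while "**" is allowed to cross it. */
    static bool star_match(const char *pat, const char *path,
                           bool cross_slash)
    {
        for (;;) {
            if (match(pat, path))
                return true;
            if (*path == '\0' || (!cross_slash && *path == '/'))
                return false;
            path++;
        }
    }

    static bool match(const char *pat, const char *path)
    {
        if (pat[0] == '*' && pat[1] == '*')
            return star_match(pat + 2, path, true);   /* "**" */
        if (pat[0] == '*')
            return star_match(pat + 1, path, false);  /* "*"  */
        if (pat[0] == '\0')
            return path[0] == '\0';
        if (pat[0] == path[0])
            return match(pat + 1, path + 1);
        return false;
    }

    int main(void)
    {
        /* prints 1: "**" spans the whole subtree under tmp */
        printf("%d\n", match("/home/*/tmp/**", "/home/alice/tmp/a/b"));
        /* prints 0: "*" refuses to cross the '/' in "a/b"  */
        printf("%d\n", match("/home/*/tmp/**", "/home/a/b/tmp/x"));
        return 0;
    }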

> Most OSS software also doesn't "phone home" (unlike software in the
> Windows world). Only pre-installed apps should be allowed network
> communication under normal circumstances. So if your desktop noticed
> that an unknown app (one run from the user's home directory or from
> /tmp) tries to communicate with a remote site, it would deny the action
> by default -- or at least slow the application communication down so
> that worms would spread more slowly, and could be contained.
SubDomain also has the ability to control network access, so you can
specify rules about what network connections an application should be
making. However, that is a bit challenging in a web browser: you want
the web browser to be able to make TCP connections to port 80 on just
about any server, so how can you prevent it from "phoning home" by just
quietly making some web connections? Even DNS requests are sufficient
for an effective "phone home": a DNS lookup for
"users-personal-information.evilbigcorp.com" would report
"users-personal-information" to Evil Big Corp's DNS server.
Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com
Immunix 7.3   http://www.immunix.com/shop/
-
# Copyright(c) Immunix Inc., 2004
# $Id: usr.lib.mozilla-1.4.mozilla-bin,v 1.10 2003/12/11 21:03:33 sarnold Exp $
#
/usr/lib/mozilla-1.4/mozilla-bin {
    /bin/netstat            rx,
    /bin/bash               rx,
    /dev/log                w,
    /dev/null               rw,
    /dev/mixer*             rw,
    /dev/dsp                rw,
    /dev/urandom            rw,
    /dev/random             rw,
    /dev/pts/*              rw,
    /dev/tty                rw,
    /etc/esd.conf           r,
    /etc/fstab              r,
    /etc/gtk/*              r,
    /etc/hosts              r,
    /etc/host.conf          r,
    /etc/ld.so.cache        r,
    /etc/ld.so.conf         r,
    /etc/localtime          r,
    /etc/mailcap            r,
    /etc/mime.types         r,
    /etc/mtab               r,
    /etc/resolv.conf        r,
    /etc/passwd             r,
    /etc/pluggerrc          r,
    /etc/nsswitch.conf      r,
    /etc/X11/fs/config      r,
    /home/*/.mozilla/**     rwl,
    /home/*/.Xauthority     r,
    /home/*/.Xdefaults      r,
    /home/*/.gtkrc          r,
    /home/*/.mailcap        r,
    /home/*/.mime.types     r,
    /home/*/tmp             r,
    /home/*/tmp/**          rwl,
    /lib/ld-*.so            rx,
    /lib/lib*.so*           r,
    /pr

Re: [SC-L] Re: Application Sandboxing, communication limiting, etc.

2004-03-13 Thread Crispin Cowan
Jose Nazario wrote:

SELinux. LIDS. systrace (Linux, BSD, MacOS X). a few things on FreeBSD i
can't recall.
SubDomain predates all of these except for SELinux (which has roots that 
go back nearly 20 years) and LIDS got design elements from SubDomain.

To be fair, similar designs pre-dating SubDomain include Janus 
(Goldberg, Wagner, et al, USENIX Security 1996) and TRON (Berman et al, 
USENIX Winter Conference 1995).

> I don't know what exists for the average user on Windows at the
> application level,
Long ago, Aladdin's "eSafe" product included a desktop component that 
controlled what resources a given application could access. More 
recently, "personal firewall" products like Zone Alarm have included 
that kind of functionality.

Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com
Immunix 7.3   http://www.immunix.com/shop/



