Re: [SC-L] "Bumper sticker" definition of secure software

2006-07-24 Thread Andrew van der Stock
NB: I am not speaking on behalf of my employer and this is my  
personal opinion.


Banks in general do not use smart cards because they suffer from the same
issue as two-factor, non-transaction-signing fobs: it is somewhat
trivial to trick users into giving up a credential. Connected keys
are the worst - they induce laziness in the user and imply security
which is not actually there.


Smart card integration over web apps is non-existent. The HTTP 1.1
protocol supports neither two-factor transaction signing nor smart
cards in general (unless you are just using SSL with a client-side
cert, which is just as vulnerable as a normal IB app today if the
attacker chooses a CSRF attack). Therefore, you need *something*
extra to make 2FA USB fob authentication work. RSA has an ActiveX
plugin (Keon WebPassport) which works well in an intranet
environment where you control all the resources. However, such
solutions have a support overhead, lock users into the Win32
platform, and lock out pretty much anyone whose organization blocks
ActiveX controls on their PCs.
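
To make the CSRF point concrete, here is a minimal sketch of the
*something* extra (Python purely for illustration; the function and
field names are made up, not from any product). The browser re-sends
the client cert or session cookie on every request, including a forged
cross-site one, so the server must also demand a secret that the
attacker's page cannot read:

import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Embedded in the bank's own form when the page is rendered.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session: dict, submitted: str) -> bool:
    # Checked on every state-changing request; a cross-site POST carries
    # the victim's credential automatically but cannot carry this token.
    expected = session.get("csrf_token")
    if expected is None or submitted is None:
        return False
    return hmac.compare_digest(expected, submitted)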


Here's why such devices will not fly:

*) costs money to ensure that the crypto is compliant with national  
and international standards
*) costs money to develop and deploy secure internal PKI and secure  
operational procedures to issue certificates for the devices. For the  
average institution, this is a lot of overhead.
*) costs money to deploy (need to send out software, instructions,  
device, smart card)
*) costs money to register users securely (is sending through the
mail acceptable?) <- this step was stuffed up in the UK's Chip and
PIN rollout, so we have an excellent data point already:


http://www.theregister.co.uk/2004/09/16/chip_pin_crime_wave/

*) costs money to train users to only insert their smart card when
your app is running and not just leave it in
*) costs money to support users when your software gets the blame for
their support woes (whether true or not)

*) doesn't improve security if the user can just say yes.

The typical dialog for these things is "Please press Submit to pay
Nice Person $100 using your token". If the app suffers from an XSS
flaw, why is this prompt safe? Can you trust "Nice Person" or the $100?


Disconnected transaction-signing devices are simple, cheap, and have
*fewer* costs. Note I do not say none of the costs, but they are
significantly lower, and at least we don't trust the user's browser;
the user's browser can be any platform (MacOS X, Linux, FreeBSD,
Win95, XP, Vista); we don't end up supporting the user's desktop; and
we don't need to train the users so much.
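
A sketch of how a disconnected device defeats the tampered-prompt
problem above (an illustrative HOTP-style scheme, not any particular
vendor's): the token and the bank share a key, the user keys the payee
and amount into the token, and the code it displays is bound to exactly
that transaction. An XSS that rewrites the page cannot forge a code
for a different payee or amount:

import hashlib
import hmac

def signing_code(shared_key: bytes, payee_account: str,
                 amount_cents: int, counter: int) -> str:
    # What the disconnected device computes and displays (8 digits).
    msg = f"{payee_account}|{amount_cents}|{counter}".encode()
    digest = hmac.new(shared_key, msg, hashlib.sha256).digest()
    return str(int.from_bytes(digest[:4], "big") % 10**8).zfill(8)

def bank_verifies(shared_key: bytes, payee_account: str,
                  amount_cents: int, counter: int, typed_code: str) -> bool:
    # The bank recomputes from the transaction it was actually asked to
    # make, so a code authorizes one payee, one amount, one time.
    expected = signing_code(shared_key, payee_account, amount_cents, counter)
    return hmac.compare_digest(expected, typed_code)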


That's why smart cards will not be used if the bank has done a proper
side-by-side comparison of the relative risk versus cost.
Smart cards (and anything which requires platform support) are less
secure, less trustworthy, take more effort, and cost more.


thanks,
Andrew

On 23/07/2006, at 3:42 PM, mikeiscool wrote:


> No I disagree still. Consider a smart card. Far easier to use than the
> silly bank logins that are available these days. Far easier than even
> bothering to check if the address bar is yellow, due to FF or some
> other useless addon.




___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] "Bumper sticker" definition of secure software

2006-07-24 Thread mikeiscool
On 7/25/06, Dana Epp <[EMAIL PROTECTED]> wrote:
> But secure software is not a technology problem,

Yes it is.


> it's a business one.
> Focused on people.

This is part of the issue, not the whole issue.


> If smartcards were so great, why isn't every single computer in the
> world equipped with a reader?

The answer isn't that smart cards aren't great; it's that universal
deployment isn't a practical possibility. Maybe one day it will be.


> There will always be technology safeguards
> we can put in place to mitigate particular problems. But technology is
> not a panacea here.

*sigh* I never said it was. No one said it was.


> It is no different than "network security professionals" that deploy
> $30,000 firewalls to protect digital assets worth less than the computer
they are on. (I once saw a huge Check Point firewall protecting an MP3
> server. Talk about waste.) Those guys should be shot for ever making
> that recommendation. As should secure software engineers who think they
> can solve all problems with technology without considering all risks and
> impacts to the business.

All this is interesting but useless for this discussion. Nobody said
you should try to solve all problems with technology without considering
the impacts to the business. Please go back and read the original
posts to find out what we were talking about before going off on a
boring, totally unoriginal rant that everyone here is already
intimately familiar with.


> Regards,
> Dana Epp

-- mic


Re: [SC-L] Cost of provably-correct code

2006-07-24 Thread Ed Reed (Aesec)
Crispin Cowan wrote:
> David Crocker wrote:
>> Crispin Cowan wrote on 21 July 2006 18:45:
>>> Yes, you can have provably correct code. Cost is approximately $20,000 per line
>>> of code. That is what the "procedures" required for correct code cost. Oh, and
>>> they are kind of super-linear, so one program of 200 lines costs more than 2
>>> programs of 100 lines.
>>
>> To arrive at that cost, I can only assume that you are referring to a process in
>> which all the proofs are done by hand, as was attempted for a few projects in
>> the 1980s.
>
> I did not arrive at it. It is (allegedly) the NSA's estimate of cost per
> LOC for EAL7 provably correct assurance. This was quoted to me from a
> friend at a company who has an A1 (orange book) secure microkernel.

Well, not specifically for an EAL7 program, perhaps, but rather for a
Class A1 system like ours, which is arguably a superset of an EAL7
protection profile corresponding to the Mandatory Component (only)
functional requirements for a Class A1 system under the Trusted Network
Interpretation of the TCSEC. That is, for a reference monitor
verifiably enforcing the Mandatory Access Control (both integrity and
secrecy) security policy for Multi-Level Security, including a formal
top-level specification and correspondence mapping to the
implementation, and including trusted distribution and RAMP (Ratings
Assurance Maintenance Phase, which allows changes to the system to be
evaluated incrementally rather than requiring re-examination of the
whole system each time).

The system has a formally layered architecture (without loops) and can
be configured with no covert storage channels - the only general-purpose
system we know of that can make such a claim.

Proving a software system is correct is one thing. Proving it is
correct as part of the hardware/software system of which it is a part
is a second thing. And proving you can securely deliver it and
securely revise it is yet something else, I suppose.

I expect that such a high cost estimate would include everything from
clean-sheet design through evaluated configuration delivery of the
product.  It's not the cost to prove something.  It's the cost to
design and develop something you can prove, and then do so.

>>> We currently achieve automatic proof rates of 98% to 100% (using PD),
>>> and I hear that Praxis also achieves automatic proof rates well over 90% (using
>>> Spark) these days. This has brought down the cost of producing provable code
>>> enormously.
>
> Interesting. That could possibly bring down the cost of High Assurance
> software enormously.
>
> How would your prover work on (say) something like the Xen hypervisor?
> Or the L4 microkernel?
>
> Caveat: they are C code :(
>
> Crispin

The Class A1 system referenced uses Pascal for the reference monitor /
security kernel, with a small amount of assembler to handle
hardware-interface things that couldn't be done any other way.

Pascal was chosen because it seemed a better fit than PL/1, I
suppose, for the 286-architecture environment for which it was
originally developed. It presently runs on Pentium-class processors.

A type-safe language was deemed necessary to support the assurance
evaluation effort, as I understand it.

The formal internal model and top-level specification are written in Ina
Jo, the specification language of the Unisys Formal Development
Methodology. See section 7.6, "Design Specification and
Verification", of the Final Evaluation Report for further details, and
section 8.18, "Design Specification and Verification", for the
evaluators' comments. The Class A1 certificate is available at
http://www.radium.ncsc.mil/tpep/epl/entries/CSC-EPL-94-008.html or our
copy at http://www.aesec.com/eval/CSC-EPL-94-008.html (alas, the
Evaluated Products List is no longer generally accessible, outside the
.mil domain at least - I'll be happy to provide a PostScript or
Acrobat PDF copy of the FER to anyone who asks me off-list; please
don't blast the list with such requests, though).

Ed




Re: [SC-L] "Bumper sticker" definition of secure software

2006-07-24 Thread Dana Epp
But secure software is not a technology problem, it's a business one.
Focused on people.

If smartcards were so great, why isn't every single computer in the
world equipped with a reader? There will always be technology safeguards
we can put in place to mitigate particular problems. But technology is
not a panacea here. 

There will always be trade-offs that trump secure design and
deployment of safeguards. It's not about putting ABSOLUTE security in...
It's about putting in just enough security to mitigate risks to
acceptable levels for the business scenario at hand, and at a cost that
is justifiable. Smartcard readers aren't deployed everywhere because
they are simply too costly to deploy against particular PERCEIVED
threats that may or may not be part of an application's threat profile.

I agree that we can significantly lessen the technology integration
problem with computers. We are, after all, supposed to be competent
developers who can bend the IT infrastructure to our bidding. The
problem is when we keep our heads in the technology bubble without
thinking about the business impacts and costs, wasting resources in the
wrong areas.

It is no different than "network security professionals" that deploy
$30,000 firewalls to protect digital assets worth less than the computer
they are on. (I once saw a huge Check Point firewall protecting an MP3
server. Talk about waste.) Those guys should be shot for ever making
that recommendation. As should secure software engineers who think they
can solve all problems with technology without considering all risks and
impacts to the business.


Regards,
Dana Epp 
[Microsoft Security MVP]
http://silverstr.ufies.org/blog/

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of mikeiscool
Sent: Sunday, July 23, 2006 3:42 PM
To: Crispin Cowan
Cc: Secure Coding Mailing List
Subject: Re: [SC-L] "Bumper sticker" definition of secure software

> As a result, really secure systems tend to require lots of user
> training and are a hassle to use because they require permission all
> the time.

No I disagree still. Consider a smart card. Far easier to use than the
silly bank logins that are available these days. Far easier than even
bothering to check if the address bar is yellow, due to FF or some
other useless addon.

You just plug it in, and away you go, pretty much.

And requiring user permission does not make a system harder to use (per
se). It can be implemented well or it can be implemented badly.


> Imagine if every door in your house was spring loaded and closed
> itself after you went through. And locked itself. And you had to use a
> key to open it each time. And each door had a different key. That
> would be really secure, but it would also not be very convenient.

We're talking computers here. Technology lets you automate things.


> Crispin

-- mic


Re: [SC-L] bumper sticker slogan for secure software

2006-07-24 Thread mikeiscool
> Sorry, but it is a fact. Yes, you can have provably correct code. Cost
> is approximately $20,000 per line of code. That is what the "procedures"
> required for correct code cost. Oh, and they are kind of super-linear,
> so one program of 200 lines costs more than 2 programs of 100 lines.

Someone already pointed this out, but your numbers here have no basis.
Provide references or something; otherwise they are meaningless.


> > This isn't as true and as wide spread as you make it sound. Consider,
> > for example, "SQL Injection". Assuming I do not upgrade my database,
> > and do not change my code and server (i.e. do not change my
> > environment at all), then if I have prevented this attack initially
> > nothing new will come up to suddenly make it work.
>
> Indeed, consider SQL injection attacks. They didn't exist 5 years ago,

Prove it.


> because no one had thought of them. Same with XSS bugs.

Again prove it.

I might say that they didn't exist at a given time because the apps
that were affected weren't widely deployed. Online BBSs are relatively
new, and they were, to my memory, the first place XSS bugs appeared.


> What Dana is trying to tell you is that some time in the next year or
> so, someone is going to discover yet another of these major
> vulnerability classes that no one has thought of before. At that point,
> a lot of code that was thought to be reasonably secure suddenly is
> vulnerable.

Right, but if your environment is unchanged and you've looked at all
angles, then you will not be affected. Note that I'm not saying it's
easy, but...
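
To make "prevented this attack initially" concrete, here is a minimal
illustration (Python's sqlite3 standing in for any database; the table
is made up): with parameterized queries the input is passed as data,
never spliced into the SQL text, so no quoting trick discovered later
changes that, as long as the environment really is frozen.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

def find_user(conn: sqlite3.Connection, username: str):
    # The ? placeholder keeps username out of the SQL grammar entirely.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?",
                       (username,))
    return cur.fetchone()

print(find_user(conn, "alice"))         # (1, 'alice')
print(find_user(conn, "x' OR '1'='1"))  # None - treated as a literal name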


> > Not true; you can call other libraries happily and with confidence if
> > you handle the case of them going all kinds of wrong.
>
> This also is false. Consider the JPG bug that badly 0wned Microsoft
> desktops a while back. It was a bug in an image processing library. You
> try to view an image by processing it with the library, and the result
> is that the attacker can execute arbitrary code in your process. That is
> pretty difficult to defensively program against.

Why?
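
One answer to why it is hard, and one way to handle a library "going
all kinds of wrong": don't call the risky decoder in-process. A
hypothetical sketch (the worker script name is made up); the child
process, not your app, absorbs the crash or the hijack:

import subprocess
import sys

def decode_untrusted_image(path: str) -> bytes:
    # 'decode_worker.py' is a hypothetical helper that parses the image
    # and writes raw pixels to stdout; it holds no secrets, no sockets.
    result = subprocess.run(
        [sys.executable, "decode_worker.py", path],
        capture_output=True,  # collect the decoded output
        timeout=5,            # a decompression bomb can't hang the caller
        check=True,           # a decoder crash surfaces as an exception
    )
    return result.stdout

Real deployments would go further (dropped privileges, chroot, seccomp);
the point is only that the blast radius of the library bug is contained.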


> Crispin

-- mic


Re: [SC-L] Cost of provably-correct code

2006-07-24 Thread Crispin Cowan
David Crocker wrote:
> Crispin Cowan wrote on 21 July 2006 18:45:
>   
>> Yes, you can have provably correct code. Cost is approximately $20,000 per line
>> of code. That is what the "procedures" required for correct code cost. Oh, and
>> they are kind of super-linear, so one program of 200 lines costs more than 2
>> programs of 100 lines.
>
> To arrive at that cost, I can only assume that you are referring to a process in
> which all the proofs are done by hand, as was attempted for a few projects in
> the 1980s.
I did not arrive at it. It is (allegedly) the NSA's estimate of cost per
LOC for EAL7 provably correct assurance. This was quoted to me from a
friend at a company who has an A1 (orange book) secure microkernel.

>> We currently achieve automatic proof rates of 98% to 100% (using PD),
>> and I hear that Praxis also achieves automatic proof rates well over 90% (using
>> Spark) these days. This has brought down the cost of producing provable code
>> enormously.

Interesting. That could possibly bring down the cost of High Assurance
software enormously.
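
To give a flavor of what such provers discharge automatically (a loose
illustration only - Python asserts standing in for the verification
conditions that SPARK or PD would prove statically, for all inputs):

def clamped_add(x: int, y: int, lo: int, hi: int) -> int:
    assert lo <= hi                # precondition
    r = min(max(x + y, lo), hi)
    assert lo <= r <= hi           # postcondition a prover must establish
    return r

The quoted 98% to 100% figure is about how many such conditions the
tool proves with no human help.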

How would your prover work on (say) something like the Xen hypervisor?
Or the L4 microkernel?

Caveat: they are C code :(

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes





Re: [SC-L] "Bumper sticker" definition of secure software

2006-07-24 Thread mikeiscool
> As a result, really secure systems tend to require lots of user training
> and are a hassle to use because they require permission all the time.

No I disagree still. Consider a smart card. Far easier to use than the
silly bank logins that are available these days. Far easier than even
bothering to check if the address bar is yellow, due to FF or some
other useless addon.

You just plug it in, and away you go, pretty much.

And requiring user permission does not make a system harder to use
(per se). It can be implemented well or it can be implemented badly.


> Imagine if every door in your house was spring loaded and closed itself
> after you went through. And locked itself. And you had to use a key to
> open it each time. And each door had a different key. That would be
> really secure, but it would also not be very convenient.

We're talking computers here. Technology lets you automate things.


> Crispin

-- mic


Re: [SC-L] "Bumper sticker" definition of secure software

2006-07-24 Thread Crispin Cowan
mikeiscool wrote:
> On 7/21/06, Florian Weimer <[EMAIL PROTECTED]> wrote:
>   
>> Secure software costs more, requires more user training, and fails in
>> hard-to-understand patterns.  If you really need it, you lose.
>> 
> Really secure software should require _less_ user training, not more.
>   
That depends.

If "really secure" means "free of defects", then yes, it should be
easier to use, because it will have fewer surprising quirks.

However, since there is so little defect-free software, most often a
"really secure" system is one with lots of belt-and-suspenders access
controls and authentication checks all over the place. "Security" is the
business of saying "no" to the bad guys, so it necessarily involves
saying "no" if you don't have all your ducks in a row.

As a result, really secure systems tend to require lots of user training
and are a hassle to use because they require permission all the time.
Imagine if every door in your house was spring loaded and closed itself
after you went through. And locked itself. And you had to use a key to
open it each time. And each door had a different key. That would be
really secure, but it would also not be very convenient.

Crispin

-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
 Hack: adroit engineering solution to an unanticipated problem
 Hacker: one who is adroit at pounding round pegs into square holes
