Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-06 Thread James A. Donald

Perry E. Metzger wrote:

 What you can't do, full stop, is
 know that there are no unexpected security related behaviors in the
 hardware or software. That's just not possible.


Ben Laurie wrote:
Rice's theorem says you can't _always_ solve this problem. It says 
nothing about figuring out special cases.


True, but the propensity of large teams of experts to issue horribly 
flawed protocols, and for the flaws in those protocols to go 
undiscovered for many years, despite the fact that once discovered they 
look glaringly obvious in retrospect, indicates that this problem, 
though not provably always hard, is in practice quite hard.


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-06 Thread Jon Callas


On May 6, 2008, at 1:14 AM, James A. Donald wrote:


Perry E. Metzger wrote:

 What you can't do, full stop, is
 know that there are no unexpected security related behaviors in the
 hardware or software. That's just not possible.


Ben Laurie wrote:
Rice's theorem says you can't _always_ solve this problem. It says  
nothing about figuring out special cases.


True, but the propensity of large teams of experts to issue horribly  
flawed protocols, and for the flaws in those protocols to go  
undiscovered for many years, despite the fact that once discovered  
they look glaringly obvious in retrospect, indicates that this  
problem, though not provably always hard, is in practice quite hard.


Yes, but.

I tend to agree with Marcos, Ben, and others.

It is certainly true that detecting an evil actor is ultimately  
impossible because it's equivalent to a non-computable function. It  
doesn't matter whether that actor is a virus, an evil vm, evil  
hardware, or whatever.


That doesn't mean that you can't be successful at virus scanning or  
other forms of evil detection. People do that all the time.


Ben perhaps over-simplified by noting that a single gate isn't  
applicable to Rice's Theorem, but he pointed the way out. The way out  
is that you simply declare that if a problem doesn't halt before time  
T, or can't find a decision before T, you make an arbitrary decision.  
If you're optimistic, you just decide it's good. If you're  
pessimistic, you decide it's bad. You can even flip a coin.
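
The time-T policy above can be sketched directly. A minimal Python sketch (all names are hypothetical; the "analyzer" stands in for any checker that may fail to reach a verdict):

```python
import time

def bounded_verdict(analyzer, artifact, budget_s, policy="pessimistic"):
    """Run an analyzer with a time budget; if it reaches no verdict by the
    deadline, fall back to a fixed policy: "optimistic" accepts,
    "pessimistic" rejects. The lack of a decision is itself treated as data."""
    deadline = time.monotonic() + budget_s
    verdict = analyzer(artifact, deadline)  # may return None = "don't know"
    if verdict is not None:
        return verdict
    return policy == "optimistic"           # the arbitrary decision at time T

# A toy analyzer that gives up (returns None) on inputs it finds too hairy.
def toy_analyzer(artifact, deadline):
    if len(artifact) > 100:                 # too hairy to figure out
        return None
    return "backdoor" not in artifact

print(bounded_verdict(toy_analyzer, "x" * 1000, 1.0))    # pessimistic: rejected
print(bounded_verdict(toy_analyzer, "clean code", 1.0))  # analyzed: accepted
```

Whether the fallback accepts or rejects is pure policy, chosen in advance rather than computed.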


These correspond to the adage I last heard from Dan Geer that you can  
make a secure system either by making it so simple you know it's  
secure, or so complex that no one can find an exploit.


So it is perfectly reasonable to turn a smart analyzer like Marcos on  
a system, and check in with him a week later. If he says, "Man, this
thing is so hairy that I can't figure out which end is up," then
perhaps it is a reasonable decision to just assume it's flawed.  
Perhaps you give him more time, but by observing the lack of a halt or  
the lack of a decision, you know something, and that feeds into your  
pessimism or optimism. Those are policies driven by the data. You just  
have to decide that no data is data.


The history of secure systems has plenty of examples of things that  
were so secure they were not useful, or so useful they were not  
secure. You can, for example, create a policy system that is not  
Turing-complete, and thus decidably secure. The problem
is that people will want to do cooler things with your system than it
supports, so they will extend it. It's possible they'll extend it so
it is more-or-less secure, but usable. It's likely they'll make it
insecure, and decidably so.


Jon



RE: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Scott Guthery
 
 but also a proof that the source code one has is the source of the
implementation.

This is an unsolved problem for code in tamper-resistant devices.  There are
precious few procedures to, for example, determine that the CAC card that
was issued to Pfc. Sally Green this morning bears any relationship
whatsoever to the code that went through FIPS certification. (A hash of the
code is meaningless since the card will simply burp up the right answer.)  I
have seen one such procedure but I have never seen any such procedure
implemented in real cards.
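
The parenthetical above, that a hash is meaningless because the card will simply burp up the right answer, is easy to see in a sketch (Python; the class names and firmware strings are hypothetical):

```python
import hashlib

GENUINE_CODE = b"certified firmware v1.0"
EXPECTED = hashlib.sha256(GENUINE_CODE).hexdigest()

class HonestCard:
    code = GENUINE_CODE
    def attest(self):
        return hashlib.sha256(self.code).hexdigest()

class BackdooredCard:
    code = b"certified firmware v1.0 plus backdoor"
    def attest(self):
        # Never hashes its real code; replays the answer the verifier expects.
        return EXPECTED

# The verifier cannot tell the two cards apart from attestation alone.
print(HonestCard().attest() == EXPECTED)      # True
print(BackdooredCard().attest() == EXPECTED)  # True: the check proves nothing
```

Any attestation computed by the card over data the card controls has this problem; a useful procedure needs something the malicious firmware cannot predict or imitate.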

And to Marcos' point, not only do certification labs not look for backdoors
but I once had an employee of such a lab tell me that even if they found one
they are not obliged to enter this in their report unless, of course, they
had been explicitly requested to test for the absence of backdoors.  In that
regard, I have never seen a security profile that contained a claim of no
backdoors.  And I guess you know who is paying big bucks for the
certification report. 

Smart cards from F.  TPMs from C.  Asleep at the wheel.

Cheers, Scott



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Eric Rescorla
At Sun, 04 May 2008 20:14:42 -0400,
Perry E. Metzger wrote:
 
 
 Marcos el Ruptor [EMAIL PROTECTED] writes:
  All this open-source promotion is a huge waste of time. Us crackers
  know exactly how all the executables we care about (especially all
  the crypto and security related programs) work.
 
 With respect, no, you don't. If you did, then all the flaws in Windows
 would have been found at once, instead of trickling out over the
 course of decades as people slowly figure out new unintended
 behaviors. Anything sufficiently complicated to be interesting simply
 cannot be fully understood by inspection, end of story.

Without taking a position on the security of open source vs. closed
source (which strikes me as an open question), I agree with Perry
that deciding whether a given piece of software has back doors is
not really possible for a nontrivial piece of software. Note that
this is a very different problem from finding a single vulnerability
or answering specific (small) questions about the code [0].

-Ekr

[0] That said, I don't think that determining whether a nontrivial
piece of software has security vulnerabilities is difficult. The
answer is yes.



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Florian Weimer
* Perry E. Metzger:

 Marcos el Ruptor [EMAIL PROTECTED] writes:

 Nonsense. Total nonsense. A half-decent reverse engineer does not
 need the source code and can easily determine the exact operation of
 all the security-related components from the compiled executables,
 extracted ROM/EPROM code or reversed FPGA/ASIC layout

 I'm glad to know that you have managed to disprove Rice's
 Theorem.

Call me a speciesist, but it's not clear if Rice's Theorem applies to
humans.

While Marcos' approach is somewhat off the mark (source-code
equivalent that works for me vs. conformance of potentially
malicious code to a harmless spec), keep in mind that object code
validation has been performed for safety-critical code for quite a
while.  The idea is that code for which some soundness property cannot
be shown simply fails validation.  It doesn't matter if the validator
is not clever enough, or if the code is actually bogus.
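
The fail-closed validation described above can be sketched as a tiny checker that accepts only what it can prove harmless (a Python sketch over expression syntax; the "soundness property" here is the stand-in "uses nothing but literals and arithmetic"):

```python
import ast

# Accept a Python expression only if the validator can *prove* it uses
# nothing but literals and arithmetic. Anything it cannot prove, whether
# genuinely malicious or merely beyond the validator's power, fails.
ALLOWED = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
           ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub)

def validates(src):
    try:
        tree = ast.parse(src, mode="eval")
    except SyntaxError:
        return False
    return all(isinstance(node, ALLOWED) for node in ast.walk(tree))

print(validates("1 + 2 * 3"))         # True: property shown
print(validates("__import__('os')"))  # False: clearly out of bounds
print(validates("abs(-3)"))           # False: harmless, but not provably so here
```

The last case is the point: the validator rejects code it cannot reason about, and it does not matter whether the code or the validator is at fault.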

(And for most (all?) non-trivial software, source code acquisition
costs are way below validation costs, so public availability of
source code is indeed a red herring.)

-- 
Florian Weimer[EMAIL PROTECTED]
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Ben Laurie

Perry E. Metzger wrote:

Marcos el Ruptor [EMAIL PROTECTED] writes:

To be sure that implementation does not contain back-doors, one needs
not only some source code but also a proof that the source code one
has is the source of the implementation.

Nonsense. Total nonsense. A half-decent reverse engineer does not
need the source code and can easily determine the exact operation of
all the security-related components from the compiled executables,
extracted ROM/EPROM code or reversed FPGA/ASIC layout


I'm glad to know that you have managed to disprove Rice's
Theorem. Could you explain to us how you did it? I suspect there's an
ACM Turing Award awaiting you.

Being slightly less sarcastic for the moment, I'm sure that a good
reverse engineer can figure out approximately what a program does by
looking at the binaries and approximately what an ASIC does given
good equipment to get the layout. What you can't do, full stop, is
know that there are no unexpected security related behaviors in the
hardware or software. That's just not possible.


I think that's blatantly untrue. For example, if I look at an AND gate, 
I can be absolutely sure about its security properties.


Rice's theorem says you can't _always_ solve this problem. It says 
nothing about figuring out special cases.
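
The AND-gate case above is exactly the special case Rice's theorem leaves open: a finite circuit can be checked over its entire input space, so the verdict is a proof rather than a heuristic. A minimal Python sketch:

```python
from itertools import product

def and_gate(a, b):
    return a & b

# Exhaustively verify a finite circuit against its spec: every input is
# tried, so passing is a proof. This works only because the input space
# is finite; programs in general have no such property.
def verify_gate(gate, spec, width):
    return all(gate(*bits) == spec(*bits)
               for bits in product((0, 1), repeat=width))

print(verify_gate(and_gate, lambda a, b: int(a == 1 and b == 1), 2))  # True
```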


Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Perry E. Metzger

Ben Laurie [EMAIL PROTECTED] writes:
 I think that's blatantly untrue. For example, if I look at an AND
 gate, I can be absolutely sure about its security properties.

An AND gate isn't Turing Equivalent.

 Rice's theorem says you can't _always_ solve this problem. It says
 nothing about figuring out special cases.

Any modern processor is so much larger than an AND gate that analysis
is no longer tractable. It isn't even possible to describe the
security properties one would need to (formally) prove.

Perry



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Perry E. Metzger

Florian Weimer [EMAIL PROTECTED] writes:
 * Perry E. Metzger:

 Marcos el Ruptor [EMAIL PROTECTED] writes:

 Nonsense. Total nonsense. A half-decent reverse engineer does not
 need the source code and can easily determine the exact operation of
 all the security-related components from the compiled executables,
 extracted ROM/EPROM code or reversed FPGA/ASIC layout

 I'm glad to know that you have managed to disprove Rice's
 Theorem.

 Call me a speciesist, but it's not clear if Rice's Theorem applies to
 humans.

If it doesn't apply to humans, that implies that humans are somehow
able to do computations that Turing Machines can't. I am sufficiently
skeptical of that to say, flat out, I don't believe it. If anything,
Turing Machines are more capable -- humans are only equivalent to
(large) finite state machines.

 While Marcos' approach is somewhat off the mark (source-code
 equivalent that works for me vs. conformance of potentially
 malicious code to a harmless spec), keep in mind that object code
 validation has been performed for safety-critical code for quite a
 while.

Certainly. You can use formal methods to prove the properties of
certain specially created systems -- the systems have to be produced
specially so that the proofs are possible. What you can't do in
general is take an existing system and prove security properties after
the fact.

Perry



Re: [mm] OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Ben Laurie

Perry E. Metzger wrote:

Ben Laurie [EMAIL PROTECTED] writes:

I think that's blatantly untrue. For example, if I look at an AND
gate, I can be absolutely sure about its security properties.


An AND gate isn't Turing Equivalent.


Nor are most algorithms.


Rice's theorem says you can't _always_ solve this problem. It says
nothing about figuring out special cases.


Any modern processor is so much larger than an AND gate that analysis
is no longer tractable. It isn't even possible to describe the
security properties one would need to (formally) prove.


I won't debate that, but it's not a consequence of Rice's Theorem.

--
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-05 Thread Matt Blaze

Nonsense. Total nonsense. A half-decent reverse engineer does not
need the source code and can easily determine the exact operation of
all the security-related components from the compiled executables,
extracted ROM/EPROM code or reversed FPGA/ASIC layout


I'm glad to know that you have managed to disprove Rice's
Theorem. Could you explain to us how you did it? I suspect there's an
ACM Turing Award awaiting you.

Being slightly less sarcastic for the moment, I'm sure that a good
reverse engineer can figure out approximately what a program does by
looking at the binaries and approximately what an ASIC does given
good equipment to get the layout. What you can't do, full stop, is
know that there are no unexpected security related behaviors in the
hardware or software. That's just not possible.




In particular, while it's certainly true that an expert can often
discover unexpected security-related behavior by careful examination
of source (or object) code, the absence of such a discovery, no matter
how expert the examination, is no guarantee of anything, for general
software and hardware designs.

And on a slight tangent, this is why it was only with great reluctance
that I agreed to participate in the top-to-bottom voting system
reviews conducted last year by California and Ohio.  If flaws were
found (as they were), that would tell us that there were flaws.  But
if no flaws had been found, that would tell us nothing about whether
any such flaws were present.  It might just have been that we were bad
at our job, that the flaws were subtle, or that something prevented us
from noticing them.  Or maybe there really are no flaws.  There'd be
no way to know for sure.

I ultimately decided to participate because I suspected that it was
likely, based on the immaturity of the software and the apparent lack
of security engineering in the design process for these systems, that
we would find vulnerabilities.  But what happens when those are fixed?
Should we then conclude that the system is now secure?  Or should we
ask another set of experts to take another look?

After some number of iterations of this cycle, the experts might stop
finding vulnerabilities.  What can we conclude at that point?

It's a difficult question, but the word "guarantee" almost certainly
does not belong in the answer (unless preceded by the word "no").

-matt




Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-04 Thread Alexander Klimov
On Thu, 1 May 2008, zooko wrote:
 I would think that it also helps if a company publishes the source
 code and complete verification tools for their chips, such as Sun has
 done with the Ultrasparc T2 under the GPL.

To be sure that implementation does not contain back-doors, one needs
not only some source code but also a proof that the source code one
has is the source of the implementation. With open-source software one
can get such a proof by compiling the source oneself (as far as one
trusts the compiler toolchain), but I don't see any way to get such a
proof for non-FPGA hardware.
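
For the software side of the argument above, the compile-it-yourself proof amounts to a reproducible-build check: rebuild from source with the same toolchain and compare the result byte-for-byte with the shipped binary. A sketch (Python; assumes a deterministic build, and the paths are hypothetical):

```python
import hashlib

def sha256_file(path):
    """Hash a build artifact byte-for-byte."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def same_build(rebuilt_path, shipped_path):
    # Byte-identical or nothing: a near-match proves nothing about backdoors.
    return sha256_file(rebuilt_path) == sha256_file(shipped_path)
```

As noted, this still assumes the compiler toolchain is trusted, and it has no obvious analogue for non-FPGA silicon.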

-- 
Regards,
ASK



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-04 Thread Marcos el Ruptor

To be sure that implementation does not contain back-doors, one needs
not only some source code but also a proof that the source code one
has is the source of the implementation.


Nonsense. Total nonsense. A half-decent reverse engineer does not  
need the source code and can easily determine the exact operation of  
all the security-related components from the compiled executables,  
extracted ROM/EPROM code or reversed FPGA/ASIC layout (see the recent  
Karsten Nohl's extraction of Crypto-1 code for example).


All this open-source promotion is a huge waste of time. Us crackers  
know exactly how all the executables we care about (especially all  
the crypto and security related programs) work. We do not always  
publish our results, but look, somehow RC4, SecurID, DST40, KeeLoq,  
Crypto1, Hitag2, etc. all got reverse engineered and published when  
people actually cared to do it. A lot more other closed-code ciphers,  
random number generators and other components have been
reverse-engineered and thoroughly analysed without publishing the results
just because those results were not interesting, could do more harm  
than good if published or if keeping them secret could benefit the  
cracker.


As a reverse engineer with over 20 years of experience, I can  
guarantee everyone on this list who is not familiar with this process  
that from the security evaluation point of view there is ABSOLUTELY  
NO BENEFIT in the open-source concept. It is actually much much  
easier to hide a backdoor in the C or especially C++ code from anyone  
reading it than it is in the compiled assembly code from a reverse  
engineer, even if it is highly obfuscated like Skype. High-level  
languages offer enough opportunities to hide and cover up some sneaky  
behind-the-scenes magic that no one will notice for years or ever at  
all unless they know exactly what to look for and where. I always  
compile the open-source code, then reverse engineer it and see what  
it is actually doing.


If you want a guarantee or a proof, better ask all the reverse  
engineers you know to take a closer look at the program and tell you  
if there is a backdoor, anything malicious or anything sneaky or  
suspicious. Don't trust your own eyes. I've seen too many open-source  
applications with well-concealed backdoors or unnoticeable security  
holes. Linux's endless exploitable vulnerabilities should be enough  
of a proof of that.


Best regards,
Marcos el Ruptor
http://www.enrupt.com/ - Raising the bar.



Re: OpenSparc -- the open source chip (except for the crypto parts)

2008-05-04 Thread Perry E. Metzger

Marcos el Ruptor [EMAIL PROTECTED] writes:
 To be sure that implementation does not contain back-doors, one needs
 not only some source code but also a proof that the source code one
 has is the source of the implementation.

 Nonsense. Total nonsense. A half-decent reverse engineer does not
 need the source code and can easily determine the exact operation of
 all the security-related components from the compiled executables,
 extracted ROM/EPROM code or reversed FPGA/ASIC layout

I'm glad to know that you have managed to disprove Rice's
Theorem. Could you explain to us how you did it? I suspect there's an
ACM Turing Award awaiting you.

Being slightly less sarcastic for the moment, I'm sure that a good
reverse engineer can figure out approximately what a program does by
looking at the binaries and approximately what an ASIC does given
good equipment to get the layout. What you can't do, full stop, is
know that there are no unexpected security related behaviors in the
hardware or software. That's just not possible.

 All this open-source promotion is a huge waste of time. Us crackers
 know exactly how all the executables we care about (especially all
 the crypto and security related programs) work.

With respect, no, you don't. If you did, then all the flaws in Windows
would have been found at once, instead of trickling out over the
course of decades as people slowly figure out new unintended
behaviors. Anything sufficiently complicated to be interesting simply
cannot be fully understood by inspection, end of story.

Now, the original poster was speaking about knowing that a piece of
hardware does exactly what it was originally spec'ed to do. Some of
that involves (among other things) knowing that the validation
information (which a reverse engineer has no access to) applies to the
resulting chip by virtue of knowing that what was compiled was
precisely what was originally validated. There is a valid concern
there.


Perry



OpenSparc -- the open source chip (except for the crypto parts)

2008-05-01 Thread zooko

On Apr 24, 2008, at 7:58 PM, Jacob Appelbaum wrote:


If we could convince (this is the hard part) companies to publish what
they think their chips should look like, we'd have a starting point.


I would think that it also helps if a company publishes the source  
code and complete verification tools for their chips, such as Sun has  
done with the Ultrasparc T2 under the GPL.


I was excited about this, and also about the fact that the T2 came  
with extremely efficient crypto implementations, until I read this  
bizarre comment in the news:


When the UltraSPARC T2 specifications are released Tuesday, Mehta  
said the company plans on releasing most of the source code,  
including the designs for the logic gate circuitry and the test  
suites. The one part of the source code that Sun can not release are  
the algorithms approved by the National Security Agency as part of  
the chip's cryptographic accelerations units.


http://www.eweek.com/c/a/Linux-and-Open-Source/Sun-Brings-Niagara-2-Chip-to-Open-Source/


I investigated and sure enough the crypto parts of the T2 have all  
been stubbed out of the source (all of them, not just algorithms  
approved by the NSA, whatever that means).


I sent e-mails inquiring about this to two journalists (the author of  
that article -- Scott Ferguson -- and noted crypto/security/libertarian
gadfly Declan McCullagh) and three Sun employees, including Shrenik  
Mehta (quoted above), the open sparc community support e-mail  
address, and the Sun open source ombudsman, Simon Phipps.  None of  
them ever wrote back.


This experience rather dampened my enthusiasm about relying on T2  
hardware as a higher-assurance, but still pretty commodified, crypto  
implementation.


Regards,

Zooko
