Re: Dell to Add Security Chip to PCs

2005-02-05 Thread Anne Lynn Wheeler
Peter Gutmann wrote:
Neither.  Currently they've typically been smart-card cores glued to the 
MB and accessed via I2C/SMB.
and chips that have typically had EAL4+ or EAL5+ evaluations. This was a hot topic in 2000 and 2001 at the Intel Developer Forum and the RSA Conference.



Re: Using TCPA

2005-02-05 Thread Sean Smith
On Feb 4, 2005, at 6:58 AM, Eric Murray wrote:
So a question for the TCPA proponents (or opponents):
how would I do that using TCPA?

check out
enforcer.sourceforge.net
We also had a paper at ACSAC 2004 with some of the apps we've built on 
it.

Two things we've built that haven't made it yet to the sourceforge site:
- an SELinux version
- an OpenSSL engine
--Sean

Sean W. Smith  [EMAIL PROTECTED]  www.cs.dartmouth.edu/~sws/
Hanover, NH USA  (Where the Appalachian Trail crosses the VT-NH border)


Re: Dell to Add Security Chip to PCs

2005-02-05 Thread Anne Lynn Wheeler
Erwann ABALEA wrote:
  I've read your objections. Maybe I wasn't clear. What's wrong with installing a cryptographic device by default on PC motherboards? I work for a PKI 'vendor', and to me software private keys are nonsense. How will you convince Mr Smith (or Mme Michu) to buy an expensive CC EAL4+ evaluated token, install the drivers, and sort out the inevitable conflicts, simply to store his private key? You first have to be very persuasive to justify the extra expense.
If a standard secure hardware cryptographic device is installed by default on PCs, the problem goes away. You could obviously point out that Mr Smith won't be able to move his certificates from machine A to machine B, but more than 98% of the time Mr Smith doesn't need to do that.
Installing a TCPA chip is not a bad idea. It is as 'trustable' as any other cryptographic device, internal or external. What is bad is agreeing to buy software that you won't be able to use if you decide to claim ownership of your machine... Palladium is bad; TCPA is not. Don't confuse the two.
The cost of EAL evaluation has typically already been amortized across a large number of chips in the smartcard market. The manufacturing cost of such a chip is roughly proportional to die size, and the thing that drives die size tends to be the amount of EEPROM.
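
As a back-of-the-envelope illustration of that proportionality (all numbers below are invented purely for the sake of the arithmetic, not taken from the post):

```python
# Hypothetical cost model: die area = fixed logic + EEPROM, and per-chip
# cost scales with area. Every constant here is a made-up placeholder.
def chip_cost(eeprom_kb, logic_mm2=1.0, mm2_per_kb=0.05, usd_per_mm2=0.10):
    return (logic_mm2 + eeprom_kb * mm2_per_kb) * usd_per_mm2

print(chip_cost(64))   # a feature-rich part carrying lots of EEPROM state
print(chip_cost(16))   # a stripped-down part keeping only what security needs
```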

In the TCPA track at the Intel Developer Forum a couple of years ago, I gave a talk claiming that I had designed, and significantly cost-reduced, such a chip by throwing out every feature that wasn't absolutely necessary for security. I also mentioned that, two years after I had finished that design, TCPA was starting to converge on something similar. The head of TCPA, who was in the audience, quipped that I didn't have a committee of 200 helping me with the design.



Re: Dell to Add Security Chip to PCs

2005-02-05 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], Dan Kaminsky writes:

Uh, you *really* have no idea how much the black hat community is
looking forward to TCPA.  For example, Office is going to have core
components running inside a protected environment totally immune to
antivirus.



How? TCPA is only a cryptographic device, and some BIOS code, nothing
else. Does the coming of TCPA chips eliminate the bugs, buffer overflows,
stack overflows, or any other way to execute arbitrary code? If yes, isn't
that a wonderful thing? Obviously it doesn't (eliminate bugs and so on).

  

TCPA eliminates external checks and balances, such as antivirus.  As the 
user, I'm not trusted to audit operations within a TCPA-established 
sandbox.  Antivirus is essentially a user system auditing tool, and 
TCPA-based systems have these big black boxes AV isn't allowed to analyze.

Imagine a sandbox that parses input code signed to an API-derivable 
public key.  Imagine an exploit encrypted to that.  Can AV decrypt the 
payload and prevent execution?  No, of course not.  Only the TCPA 
sandbox can.  But since AV can't get inside of the TCPA sandbox, 
whatever content is protected in there is quite conspicuously unprotected.
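
A toy sketch of the scenario may help (Python, using the pyca/cryptography package; the key and payload names are stand-ins I've invented, and under TCPA the private half would be sealed in the chip rather than living in memory):

```python
# Model: a payload encrypted to a key that only the "sandbox" holds.
# Host-side AV only ever sees the ciphertext; the decrypt step below is
# exactly the step AV cannot perform when the key is sealed in hardware.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

sandbox_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
payload = b"pretend this is attacker-supplied code"   # hypothetical payload

# What the attacker ships, and all that host-side AV gets to scan:
ciphertext = sandbox_key.public_key().encrypt(payload, oaep)

# What happens inside the sandbox, out of AV's reach:
recovered = sandbox_key.decrypt(ciphertext, oaep)
assert recovered == payload
```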

It's a little like having a serial killer in San Quentin.  You feel 
really safe until you realize...uh, he's your cellmate.

I don't know how much more clearly I can say this: your threat model is broken, and the bad guys can't stop laughing about it.


I have no idea whether or not the bad guys are laughing about it, but 
if they are, I agree with them -- I'm very afraid that this chip will 
make matters worse, not better.  With one exception -- preventing the 
theft of very sensitive user-owned private keys -- I don't think that 
the TCPA chip is solving the right problems.  *Maybe* it will solve the 
problems of a future operating system architecture; on today's systems, 
it doesn't help, and probably makes matters worse.

TCPA is a way to raise the walls between programs executing in 
different protection spaces.  So far, so good.  Now -- when did you last see an attack that directly exploited a flaw in conventional memory protection or process isolation?  They're *very* rare.

The problems we see are code bugs and architectural failures.  A buffer 
overflow in a Web browser still compromises the browser; if the 
now-evil browser is capable of writing files, registry entries, etc., 
the user's machine is still capable of being turned into a spam engine, 
etc.  Sure, in some new OS there might be restrictions on what such an 
application can do, but you can implement those restrictions with 
today's hardware.  Again, the problem is in the OS architecture, not in 
the limitations of its hardware isolation.

I can certainly imagine an operating system that does a much better job 
of isolating processes.  (In fact, I've worked on such things; if 
you're interested, see my papers on sub-operating systems and separate 
IP addresses per process group.)  But I don't see that TCPA chips add 
much over today's memory management architectures.  Furthermore, as Dan 
points out, it may make things worse -- the safety of the OS depends on 
the userland/kernel interface, which in turn is heavily dependent on 
the complexity of the privileged kernel modules.  If you put too much 
complex code in your kernel -- and from the talks I've heard this is 
exactly what Microsoft is planning -- it's not going to help the 
situation at all.  Indeed, as Dan points out, it may make matters worse.

Microsoft's current secure coding initiative is a good idea, and from 
what I've seen they're doing a good job of it.  In 5 years, I wouldn't 
be at all surprised if the rate of simple bugs -- the buffer overflows, 
format string errors, race conditions, etc. -- was much lower in 
Windows and Office than in competing open source products.  (I would 
add that this gain has come at a *very* high monetary cost -- training, 
code reviews, etc., aren't cheap.)  The remaining danger -- and it's a 
big one -- is the architecture flaws, where ease of use and 
functionality often lead to danger.  Getting this right -- getting it 
easy to use *and* secure -- is the real challenge.  Nor are competing products immune; the drive to make KDE and Gnome (and for that matter MacOS X) as easy to use as (well, easier to use than) Windows is likely to lead to the same downward security spiral.

I'm ranting, and this is going off-topic.  My bottom line: does this 
chip solve real problems that aren't solvable with today's technology?  
Other than protecting keys -- and, of course, DRM -- I'm very far from 
convinced of it.  The fault, dear Brutus, is not in our stars but in 
ourselves.

--Prof. Steven M. Bellovin, http://www.cs.columbia.edu/~smb





Re: Dell to Add Security Chip to PCs

2005-02-05 Thread Mark Allen Earnest
Trei, Peter wrote:
It could easily be leveraged to make motherboards
which will only run 'authorized' OSs, and OSs
which will run only 'authorized' software.
And you, the owner of the computer, will NOT
necessarily be the authority which gets to decide
what OS and software the machine can run.
If you 'take ownership' as you put it, the internal
keys and certs change, and all of a sudden you
might not have a bootable computer anymore.
Goodbye Linux.
Goodbye Freeware.
Goodbye independent software development.
It would be a very sad world if this comes
to pass.
Yes it would. Many governments are turning to Linux and other freeware, and many huge companies make heavy use of Linux and freeware; suddenly losing them would have a massive effect on their bottom line, possibly enough to impact the economy as a whole. Independent software 
developers are a significant part of the economy as well, and most 
politicians do not want to associate themselves with the concept of 
hurting small business. Universities and other educational 
institutions will fight anything that resembles what you have described 
tooth and nail.

To think that this kind of technology would be mandated by a government is laughable. Nor do I believe there will be any conspiracy on the part of ISPs to require it in order to get on the Internet. As it stands now, most people are running 5+ year old computers and Windows 98/ME, and I doubt this is going to change much, because for most people that does what they want (minus all the security vulnerabilities, but with NAT appliances those are not even that big a deal). There is no customer demand for this technology to be mandated, and there is no reason why an ISP or vendor would want to piss off a significant percentage of their clients in this way. The software world is becoming MORE open. Firefox and OpenOffice are becoming legitimate in the eyes of governments and businesses, Linux is huge these days, and the open source development method is being talked about in business mags, board rooms, and universities everywhere.

The government was not able to get the Clipper chip adopted, and that was backed with horror stories of rampant pedophilia, terrorism, and organized crime. Do you honestly believe they will be able to destroy open source, Linux, independent software development, and the like with just the fear of movie piracy, MP3 sharing, and such? Do you really 
think they are willing to piss off large sections of the voting 
population, the tech segment of the economy, universities, small 
businesses, and the rest of the world just because the MPAA and RIAA 
don't like customers owning devices they do not control?

It is entirely possible that a machine like the one you described will be built; I wish them luck, because they will need it. It has been attempted quite often, and yet history shows us that there is really no widespread demand for iOpeners, WebTV, and their ilk. I don't see customers demanding this, therefore there will probably not be much of a supply. Either way, there is currently a HUGE market for general-use PCs that the end user controls, so I imagine there will always be companies willing to supply them.

My primary fear regarding TCPA is the remote attestation component. I can easily picture Microsoft deciding that they do not like Samba and making it so that Windows boxes simply cannot communicate with it for domain, filesystem, or authentication purposes. All they need do is require that the piece on the other end be signed by Microsoft. Heck, they could render HTTP user-agent spoofing useless if they decided to make it so that only IE could connect to IIS. Again though, doing so would piss off a great many of their customers, some of whom are slowly jumping ship to other solutions anyway.
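
To make the worry concrete, here is a deliberately simplified sketch (Python; the names and measurement strings are hypothetical, and real remote attestation uses TPM-signed PCR quotes rather than a bare hash):

```python
# Toy model of attestation-based lock-in: the server only talks to peers
# whose measured software stack appears on a vendor-controlled allow-list.
import hashlib

def measure(software_image: bytes) -> str:
    # stand-in for a TPM quote: just a hash of the peer's software stack
    return hashlib.sha256(software_image).hexdigest()

VENDOR_ALLOW_LIST = {measure(b"Windows SMB client, vendor-signed build")}

def accept_peer(software_image: bytes) -> bool:
    return measure(software_image) in VENDOR_ALLOW_LIST

print(accept_peer(b"Windows SMB client, vendor-signed build"))  # True
print(accept_peer(b"Samba 3.x"))  # False -- locked out by policy, not by a bug
```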

--
Mark Allen Earnest
Lead Systems Programmer
Emerging Technologies
The Pennsylvania State University




Re: Is 3DES Broken?

2005-02-05 Thread Ian G
John Kelsey wrote:
From: Steven M. Bellovin [EMAIL PROTECTED]
No, I meant CBC -- there's a birthday paradox attack to watch out for.
   

Yep.  In fact, there's a birthday paradox problem for all the standard chaining modes at around 2^{n/2}.  

For CBC and CFB, this ends up leaking information about the XOR of a couple plaintext blocks at a time; for OFB and counter mode, it ends up making the keystream distinguishable from random.  Also, most of the security proofs for block cipher constructions (like the secure CBC-MAC schemes) limit the number of blocks to some constant factor times 2^{n/2}.
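
To put numbers on that bound: for a 64-bit block cipher such as DES or 3DES, 2^{32} blocks is only about 32 GiB of data under one key, whereas a 128-bit block pushes the same bound out to roughly 256 EiB. A quick sanity check:

```python
# Birthday-bound arithmetic for the 2^{n/2} limit quoted above.
from math import exp

def collision_prob(q_blocks: float, block_bits: int) -> float:
    # standard approximation: p ~= 1 - exp(-q^2 / 2^(n+1))
    return 1 - exp(-(q_blocks ** 2) / 2 ** (block_bits + 1))

q64 = 2.0 ** 32                      # the 2^{n/2} point for a 64-bit block
print(q64 * 8 / 2 ** 30, "GiB")      # ~32 GiB of ciphertext under one key
print(collision_prob(q64, 64))       # ~0.39 -- collisions are already likely

q128 = 2.0 ** 64                     # the same point for a 128-bit block
print(q128 * 16 / 2 ** 60, "EiB")    # ~256 EiB
print(collision_prob(q128, 128))     # ~0.39 again, but at an absurd data volume
```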
 

It seems that the block size of an algorithm is then a severe limiting factor. Is there any way to expand the effective block size of an (old 8-byte) algorithm, in a manner akin to the TDES trick, and get an updated 16-byte composite that neuters the birthday trick? Hypothetically, say, by having two keys and running two instances in parallel to generate a 2x block size.
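
A minimal sketch of that naive two-lane idea, just to make the question concrete (Python with pyca/cryptography assumed available; note that simply running two independently keyed 8-byte ciphers side by side is not a true 16-byte block cipher, since each lane still hits its own 2^{32}-block birthday bound):

```python
# Two independently keyed 3DES instances, each handling one 8-byte half of
# a 16-byte "wide block". Illustrative only: the halves never interact, so
# this does not behave like (or gain the security of) a real 128-bit PRP.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

k1, k2 = os.urandom(24), os.urandom(24)            # two independent 3DES keys
lane1 = Cipher(algorithms.TripleDES(k1), modes.ECB()).encryptor()
lane2 = Cipher(algorithms.TripleDES(k2), modes.ECB()).encryptor()

def wide_encrypt(block16: bytes) -> bytes:
    assert len(block16) == 16
    return lane1.update(block16[:8]) + lane2.update(block16[8:])

print(wide_encrypt(b"sixteen  bytes!!").hex())
```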
(I'm just thinking of this as a sort of mental challenge,
although over on the OpenPGP group we were toying
with the idea of adding GOST, but faced the difficulty
of its apparent age/weakness.)
iang
--
News and views on what matters in finance+crypto:
   http://financialcryptography.com/