Re: Crypto dongles to secure online transactions

2009-11-17 Thread Victor Duchovni
On Tue, Nov 17, 2009 at 01:35:12AM -, John Levine wrote:

> > So should or should not an embedded system have a remote management
> > interface?
> 
> In this case, heck, no.  The whole point of this thing is that it is
> NOT remotely programmable to keep malware out.

Which is perhaps why it is not a good idea to embed an SSL engine in such
a device. Its external interface should be as simple as possible, which
suggests a message-signing device, rather than a device that participates
in complex, evolving, interactive protocols with remote network services.

The integration of the message-signing device with a complete system
(computer with browser + device) should include most of the complex
and likely-to-change software. The device itself is just a display +
encrypt-then-sign black box for messages simple enough to display
unambiguously, and the transmission of the signed message to the
appropriate destination can be left to the untrusted PC.

Such a device does, however, need to be able to support multiple
mutually distrusting verifiers, so the destination public key is managed
by the untrusted PC + browser; only the device signing key is inside
the trust boundary. A user should be able to enroll the same device
with another "bank", ...

The proliferation of multiple SecurID tokens per user in B2B financial
services has led to a search for something more usable than a
"drawer full of SecurID cards (with the PIN glued to the back of
each)". The alternatives are not always very strong, but a would-be
more-secure solution needs to keep usability in mind for the case where
the user needs to conduct secure transactions with multiple parties.

-- 
Viktor.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Crypto dongles to secure online transactions

2009-11-17 Thread Jeremy Stanley
On Mon, Nov 16, 2009 at 11:20:27PM -0500, Jerry Leichter wrote:
> I'm not sure that's the right lesson to learn.

I might have, perhaps, phrased it a little better. Regardless of
initial planning, TI continued selling devices relying on this
particular code signing implementation well past what the original
design engineers hopefully expected would be its maximum lifespan.

> A system has to be designed to work with available technology. The
> TI83 dates back to 1996, and used technology that was old even at
> the time: The CPU is a 6MHz Z80. A 512-bit RSA was probably near
> the outer limits of what one could expect to use in practice on such
> a machine, and at the time, that was quite secure.

If this is true, then it makes an interesting case study for the
topic of this thread...

> Nothing lasts forever, though, and an effective 13 year lifetime
> for cryptography in such a low-end product is pretty good.
[...]

Not such a low-end product, when compared to the bank transaction
authenticating crypto we're discussing (I had a TI-83 back when they
first came out, and it was far from cheap on a starving student
budget). Assume what TI had built was one of these banking crypto
devices... they implemented a code signing mechanism so it could be
updated in a secure fashion, since they didn't want it to be so
disposable... the best code signing mechanism the processor could
handle... in 13 years a hobbyist with a few months and basically no
budget is able to trojan these devices.

This speaks to an inherent lifespan for "low-end" devices anyway,
since a time will come when they need better code signing than their
processors can handle. If a hobbyist can do it 13 years later for
a relatively low-value target (programmable calculators), what about
something with far more potential for profit? A decade ago I
was working on (relatively) low-budget Beowulf distributed compute
clusters which easily rivalled the speed of the machine used to
crack TI's code signing keys. This was well within the budget of a
criminal organization--probably a tiny fraction of what they could
have made selling the code signing keys for widely-deployed bank
transaction authenticator devices.
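
One way an updateable device could buy some of that lifespan back is to
refuse update signatures made with undersized keys.  A rough sketch of
such a verifier-side floor (the 2048-bit threshold and the use of the
pyca "cryptography" package are assumptions for illustration, not
anything TI or a bank actually shipped):

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding, rsa

  MIN_MODULUS_BITS = 2048   # assumed policy; TI's 512-bit keys would fail

  def verify_firmware(pub: rsa.RSAPublicKey, firmware: bytes, sig: bytes) -> bool:
      # Reject keys below the floor regardless of signature validity.
      if pub.key_size < MIN_MODULUS_BITS:
          return False
      try:
          pub.verify(sig, firmware, padding.PKCS1v15(), hashes.SHA256())
          return True
      except InvalidSignature:
          return False

  # Example: a freshly generated 2048-bit key passes the size check.
  priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  sig = priv.sign(b"firmware image", padding.PKCS1v15(), hashes.SHA256())
  print(verify_firmware(priv.public_key(), b"firmware image", sig))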

Maybe calculators are a bad example, but if 3-4 years is all it
takes to put the code signing key for an inexpensive device in the
hands of criminals, then is it worth the risk (or even expense) to
make dedicated banking crypto hardware updateable?
-- 
{ IRL(Jeremy_Stanley); PGP(9E8DFF2E4F5995F8FEADDC5829ABF7441FB84657);
SMTP(fu...@yuggoth.org); IRC(fu...@irc.yuggoth.org#ccl); ICQ(114362511);
AIM(dreadazathoth); YAHOO(crawlingchaoslabs); FINGER(fu...@yuggoth.org);
MUD(fu...@katarsis.mudpy.org:6669); WWW(http://fungi.yuggoth.org/); }

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: TLS break

2009-11-17 Thread Stefan Kelm

Jonathan,

> Anyone care to give a "layman's" explanation of the attack? The [...]

I find this paper to be useful:

  http://www.g-sec.lu/practicaltls.pdf

Cheers,

Stefan.

--
Stefan Kelm   
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstrasse 100 Tel: +49-721-96201-1
D-76133 Karlsruhe Fax: +49-721-96201-99

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: hedging our bets -- in case SHA-256 turns out to be insecure

2009-11-17 Thread Sandy Harris
On 11/12/09, David-Sarah Hopwood wrote:
> Sandy Harris wrote:
>> On 11/8/09, Zooko Wilcox-O'Hearn wrote:
>>
>>> Therefore I've been thinking about how to make Tahoe-LAFS robust against
>>> the possibility that SHA-256 will turn out to be insecure.
>
> [...]
>
>> Since you are encrypting the files anyway, I wonder if you could
>> use one of the modes developed for IPsec where a single pass
>> with a block cipher gives both encrypted text and a hash-like
>> authentication output.  That gives you a "free" value to use as
>> H3 in my scheme or H2 in yours, and its security depends on
>> the block cipher, not on any hash.
>
> Tahoe is intended to provide resistance to collision attacks by the
> creator of an immutable file: the creator should not be able to generate
> files with different contents, that can be read and verified by the same
> read capability.
>
> An authenticated encryption mode won't provide that -- unless, perhaps,
> it relies on a collision-resistant hash.

I was suggesting using the authentication data in the construction:

  C(x) = H1(H2(x) || A(x))

where H1 is a hash with the required output size, H2 is a hash with
a large block size, and A is the authentication data from your
encryption.

This is likely a very bad idea if you already use that data in some
other way, e.g. for authenticating stored data. However, if C is
going to be your authentication mechanism, then this might be
a cheap way to get one input to it.
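
For concreteness, a small sketch of the combiner with SHA-256 standing
in for H1, SHA-512 (large block size) for H2, and an AES-GCM tag for
A(x); these concrete choices are illustrative, not part of the proposal:

  import hashlib
  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  def combined_digest(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
      # A(x): AESGCM.encrypt() returns ciphertext with the 16-byte tag
      # appended; that tag is the authentication data reused here.
      tag = AESGCM(key).encrypt(nonce, plaintext, None)[-16:]
      # H2(x): hash with a large (1024-bit) block size.
      h2 = hashlib.sha512(plaintext).digest()
      # C(x) = H1(H2(x) || A(x)): outer hash with the required output size.
      return hashlib.sha256(h2 + tag).digest()

  key = AESGCM.generate_key(bit_length=256)
  nonce = os.urandom(12)
  print(combined_digest(key, nonce, b"example file contents").hex())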

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Crypto dongles to secure online transactions

2009-11-17 Thread Jerry Leichter

On Nov 16, 2009, at 12:30 PM, Jeremy Stanley wrote:

>> If one organization distributes the dongles, they could accept
>> only updates signed by that organization. We have pretty good
>> methods for keeping private keys secret at the enterprise level,
>> so the risks should be manageable.
>
> But even then, poor planning for things like key size (a la the
> recent Texas Instruments signing key brute-forcing) are going to be
> an issue.

I'm not sure that's the right lesson to learn.

A system has to be designed to work with available technology.  The  
TI83 dates back to 1996, and used technology that was old even at the  
time:  The CPU is a 6MHz Z80.  A 512-bit RSA was probably near the  
outer limits of what one could expect to use in practice on such a  
machine, and at the time, that was quite secure.


Nothing lasts forever, though, and an effective 13 year lifetime for  
cryptography in such a low-end product is pretty good.  (The  
*official* lifetime of DES was about 28 years, though it was seriously  
compromised well before it was officially withdrawn in 2005.)


-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: TLS break

2009-11-17 Thread Ben Laurie
On Mon, Nov 16, 2009 at 11:30 AM, Bernie Cosell  wrote:

> As I understand it, this is only really a vulnerability in situations
> where a command to do something *precedes* the authentication to enable
> the command.  The obvious place where this happens, of course, is with
> HTTPS where the command [GET or POST] comes first and the authentication
> [be it a cookie or form vbls] comes later.

This last part is not really accurate - piggybacking the evil command
onto authentication that is later presented is certainly one possible
attack, but there are others, such as the Twitter POST attack and the
SMTP attack outlined by Wietse Venema (which doesn't work because of
implementation details, but _could_ work with a different
implementation).
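
For readers who want the HTTPS variant spelled out, a toy illustration
(not a working exploit; the header name, paths and amounts are made up)
of how a command can precede the authentication that later authorizes
it: the renegotiation splice means the server's HTTP parser sees the
attacker's pre-renegotiation bytes and the victim's authenticated
request as one stream.

  # Attacker sends this before renegotiation; the unterminated header
  # swallows the victim's request line, so only the attacker's URL runs.
  attacker_prefix = (
      b"GET /account/transfer?to=attacker&amount=1000 HTTP/1.1\r\n"
      b"X-Ignore-Rest: "
  )

  # Victim's request, sent over the renegotiated (authenticated) session.
  victim_request = (
      b"GET /account/home HTTP/1.1\r\n"
      b"Cookie: session=victim-auth-token\r\n"
      b"\r\n"
  )

  # What the application layer sees once TLS splices the two together:
  print((attacker_prefix + victim_request).decode())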

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Crypto dongles to secure online transactions

2009-11-17 Thread John Levine
> So should or should not an embedded system have a remote management
> interface?

In this case, heck, no.  The whole point of this thing is that it is
NOT remotely programmable to keep malware out.

If you have a modest and well-defined spec, it is well within our
abilities to produce reliable code.  People write software for medical
devices and vehicle control which is not remotely updated, and both
our pacemakers and our cars are adequately reliable.  If you define
the spec carefully enough that you can expect to make a million
devices, the cost of even very expensive software is lost in the
noise.

R's,
John

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com