Re: The bank fraud blame game

2007-07-03 Thread Stefan Lucks



[EMAIL PROTECTED] (Peter Gutmann) writes:

(The usage model is that you do the UI portion on the PC, but perform the
actual transaction on the external device, which has a two-line LCD display
for source and destination of transaction, amount, and purpose of the
transaction.  All communications enter and leave the device encrypted, with
the PC acting only as a proxy. [...]


On Sun, 1 Jul 2007, Hal Finney wrote:
In theory the TPM was supposed to allow this kind of thing. [...] 
This was one of the main goals of the TPM as I understood the concept.

Unfortunately everyone got focused on the DRM aspect and that largely
torpedoed the whole idea.


There is a big difference between a TPM providing this kind of service,
and Peter's device. The TPM is supposed to be hard-wired into a PC -- so
if you are using it to secure your banking applications, you can do
banking at one single PC only. Peter's device, on the other hand, is
portable: you can use it to do safe banking from your PC at home, or in
the office (only during lunch breaks, with the employer's permission, of
course), or even at a public internet cafe. In this respect, Peter's
device would be much more useful for the customer than a TPM ever could be.


BTW, Peter, are you aware that your device looks similar to the one 
proposed in the context of the CAFE project? See

  http://citeseer.ist.psu.edu/48859.html

That was a more ambitious project: it did not just support secure banking
applications on an insecure host PC, but provided a full digital wallet.


Nevertheless, it may be interesting to study why the project failed (or
ended without follow-on projects). I have no quick answer to this
question, but as far as I understand, the banks were just not interested
in deploying such a device. I guess it was much too expensive at the
time. Instead, in Germany we got the "Geldkarte", a simple and very cheap
smartcard for payment purposes with neither a display nor a keyboard. The
"Geldkarte" has been around for about ten years, and, as far as I can
tell, hardly any customer is interested in using it.


So long


--
Stefan Lucks  (moved to Bauhaus-University Weimar, Germany)
   
--  I  love  the  taste  of  Cryptanalysis  in  the  morning!  --




Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread Stefan Lucks
> Does anyone have a good idea on how to OWF passphrases without
> reducing them to lower entropy counts?  That is, I've seen systems
> which hash the passphrase then use a PRF to expand the result --- I
> don't want to do that.  I want to have more than 160 bits of entropy
> involved.

What kind of application are you running, for which more than 160 bits
of entropy are insufficient?

> I was thinking that one could hash the first block, copy the
> intermediate state, finalize it, then continue the intermediate result
> with the next block, and finalize that.  Is this safe?  Is there a
> better alternative?

As I understand your proposal, you split the passphrase P into L blocks
P_1, ..., P_L (padding the last block P_L as necessary), and then you
output L*160 bits like this:

F(P) = ( H(P_1), H(P_1,P_2), ..., H(P_1, P_2, ..., P_L) ).
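
For concreteness, here is a minimal sketch of this construction in
Python, assuming SHA-1 as H, 512-bit blocks, and zero-padding (the
padding rule was not specified in your posting):

   import hashlib

   BLOCK = 64  # SHA-1 processes 512-bit (64-byte) input blocks

   def F(passphrase: bytes) -> bytes:
       # zero-pad to a multiple of the block size (one possible padding)
       if len(passphrase) % BLOCK:
           passphrase += b"\x00" * (BLOCK - len(passphrase) % BLOCK)
       out = []
       h = hashlib.sha1()
       for i in range(0, len(passphrase), BLOCK):
           h.update(passphrase[i:i + BLOCK])  # absorb block P_i
           out.append(h.copy().digest())      # finalize a copy, keep going
       return b"".join(out)                   # H(P_1), H(P_1,P_2), ...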

This does not look like a good idea to me:

   1.   If the size of the P_i is the internal block size of the hash
function (e.g., 512 bits for SHA-1, SHA-256, or RIPEMD-160) and
your message P = P_1 is just one block long, you definitely end up
with (at most) 160 bits of entropy, however large the entropy in
P_1 is (it could be 512 bits).

   2.   If the local entropy in each block P_i (or rather the conditional
entropy of P_i, given P_1, ..., P_{i-1}) is low, then you can find
P step by step, block by block. This function F(P) is thus *much*
*worse* than its simple and straightforward counterpart H(P).

   3.   In fact, to estimate the security of F(P), you can take these
conditional entropies. The effective entropy of F(P) is close to
the maximum of these conditional entropies -- not to their sum.
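
To illustrate point 2: a hedged sketch of the step-by-step attack,
assuming the adversary has, for each block, a manageable list of
candidate values (all names below are illustrative):

   import hashlib

   def step_by_step_attack(outputs, candidates):
       # outputs[i]    = the 160-bit value H(P_1, ..., P_{i+1}) from F(P)
       # candidates[i] = the adversary's guesses for block P_{i+1}
       recovered = b""
       for target, guesses in zip(outputs, candidates):
           for cand in guesses:
               if hashlib.sha1(recovered + cand).digest() == target:
                   recovered += cand  # block confirmed, move to the next
                   break
       return recovered
   # The total cost is the *sum* of the per-block search costs,
   # not their product -- hence "much worse than H(P)".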


Any better solution I can think of will be significantly less performant
than just applying H(P). One idea of mine would be the function G:

   * Let <j> be a counter of some fixed size, say 32 bits.

   * Let J+1 be the number of 160-bit values you need (e.g., J = 4*L).

   * G(P) = ( H(P_1,<0>,P_1, P_2,<0>,P_2, ..., P_L,<0>,P_L),
              H(P_2,<1>,P_2, ..., P_L,<1>,P_L, P_1,<1>,P_1),
              ...
              H(P_L,<J>,P_L, P_1,<J>,P_1, ..., P_{L-1},<J>,P_{L-1})
            )
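
A minimal sketch of G, again assuming SHA-1, with the counter <j>
encoded as a 32-bit big-endian integer (the encoding was left open
above, so this is an assumption):

   import hashlib
   import struct

   def G(blocks, J):
       # blocks = [P_1, ..., P_L]; returns J+1 values of 160 bits each
       L = len(blocks)
       out = []
       for j in range(J + 1):
           ctr = struct.pack(">I", j)   # the counter <j>
           h = hashlib.sha1()
           for k in range(L):
               b = blocks[(j + k) % L]  # rotate the starting block
               h.update(b + ctr + b)    # absorb P_i, <j>, P_i
           out.append(h.digest())
       return out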

Would that be OK for your application?

In any case, I think that using a 160-bit hash function as a building
block for a universal one-way function with (potentially) much more than
160 bits of entropy is a bit shaky.




-- 
Stefan Lucks  Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
e-mail: [EMAIL PROTECTED]
home: http://th.informatik.uni-mannheim.de/people/lucks/
--  I  love  the  taste  of  Cryptanalysis  in  the  morning!  --




Collisions for hash functions: how to explain them to your boss

2005-06-02 Thread Stefan Lucks

Magnus Daum and I have generated MD5 collisions for PostScript files:

  http://th.informatik.uni-mannheim.de/people/lucks/HashCollisions/

This work is somewhat similar to the work of Mikle and Kaminsky, except
that our colliding files are not executables, but real documents.
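
The trick depends on the iterated (Merkle-Damgard) structure of MD5:
once two equal-length messages collide, appending any common suffix
preserves the collision. A small Python check of this property (m1 and
m2 are placeholders for a real colliding pair, e.g. from Wang et al.'s
attack):

   import hashlib

   def still_collides(m1: bytes, m2: bytes, suffix: bytes) -> bool:
       # given md5(m1) == md5(m2) and len(m1) == len(m2), this
       # returns True for *every* common suffix
       return (hashlib.md5(m1 + suffix).digest()
               == hashlib.md5(m2 + suffix).digest())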

We hope to demonstrate how seriously hash function collisions should be
taken -- even by people without much technical background. And to help
you explain these issues

  - to your boss or your management,
  - to your customers,
  - to your children ...


Have fun

Stefan

-- 
Stefan Lucks  Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
e-mail: [EMAIL PROTECTED]
home: http://th.informatik.uni-mannheim.de/people/lucks/
--  I  love  the  taste  of  Cryptanalysis  in  the  morning!  --



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-18 Thread Stefan Lucks
[...] much much simpler than ever providing such an implementation for
the TCPA hardware.

Independent implementations of the observer's soft- and hardware are
simpler than in the case of TCPA as well, but this is a minor issue. You
don't need to trust the observer, so you don't care about independent and
verified implementations.

With a Chaum/Pedersen style scheme, the chances of ever getting
independent (and possibly verified) implementations are much better than
in the case of TCPA.

As I wrote in my reply to Carl Ellison, one of the main advantages of
the Chaum/Pedersen style approach is a clear separation of duties. TCPA
lacks this separation, and this is a sign of bad security design.


-- 
Stefan Lucks  Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
e-mail: [EMAIL PROTECTED]
home: http://th.informatik.uni-mannheim.de/people/lucks/
--  I  love  the  smell  of  Cryptanalysis  in  the  morning!  --



RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-18 Thread Stefan Lucks
On Mon, 15 Dec 2003, Carl Ellison wrote:

[I wrote]
> > The first difference is obvious. You can plug in and later
> > remove a smart
> > card at your will, at the point of your choice. Thus, for
> > home banking with
> > bank X, you may use a smart card, for home banking with bank Y you
> > disconnect the smart card for X and use another one, and before online
> > gambling you make sure that none of your banking smart cards
> > is connected
> > to your PC. With TCPA, you have much less control over the
> > kind of stuff
> > you are using.
> >
> > This is quite an advantage of smart cards.
>
> It is an advantage for a TCPA-equipped platform, IMHO.  Smart cards cost
> money. Therefore, I am likely to have at most 1.

Strange! Currently, I have three smart cards in my wallet which I did
not want to own and never paid for. I have never used any of them. They
came packaged with some ATM cards (using conventional magnetic-stripe
technology) and implement a "Geldkarte". (For a couple of years now,
German banks have been trying to push their customers into using the
"Geldkarte" for electronic money by packaging the smart cards together
with ATM cards. For me, there are still too few dealers accepting the
"Geldkarte", so I never use it.)

OK, the banks are paying for the smart cards they give to their
customers for free. But they would not do so if these cards were
expensive.

BTW, even if you have only one, a smart card has the advantage that you
can physically remove it.

> TCPA acts like my hardware crypto module and in that one hardware
> module, I am able to create and maintain as many private keys as I want.
> (The keys are wrapped by the TPM but stored on the disk - even backed up
> - so there's no storage limit.)

A smart card can do the same.

> Granted, you have to make sure that the S/W that switches among (selects)
> private keys for particular uses does so in a way you can trust.  The
> smartcard has the advantage of being a physical object.

Exactly!

> However, if you can't trust your system S/W, then how do you know that
> the system S/W was using a private key on the smart card you so happily
> plugged in rather than one of its own (the same one all the time)?

The point is that Your system is not supposed to prevent You from doing
anything I want you not to do! TCPA is supposed to lock You out of some
parts of Your system.


[...]
> If it were my machine, I would never do remote attestation.  With that
> one choice, I get to reap the personal advantages of the TPM while
> disabling its behaviors that you find objectionable (serving the outside
> master).

I am not sure whether I fully understand you. If you mean that TCPA
comes with the option to run a secure kernel where you (as the owner and
physical holder of the machine) have full control over what the system
is and isn't doing -- OK, that is a nice thing. On the other hand, we
would not need a monster such as TCPA for this.


> Of course, you're throwing out a significant baby with that bath water.
> What if it's your secrets you want protected on my machine?  It doesn't
> have to be the RIAA's secrets.  Do you object just as much when it's
> your secrets?

Feel free to buy an overpriced second-hand car from me. I promise, I won't
object! ;-)

But seriously, with or without remote attestation, I would not consider
my secrets safe on your machine. If you can read (or play, in the case
of multimedia stuff) my secrets on your machine, I can't prevent you
from copying them. TCPA (and remote attestation) can make copying less
convenient for you (and the RIAA is probably pleased with this), but it
can't stop a determined adversary.

In other words, I don't think it will be too difficult to tamper with
the TCPA hardware and to circumvent remote attestation.

Winning against the laws of information theory is not simpler than winning
against the laws of thermodynamics -- both are impossible!

> > Chaum and Pedersen  [...]

> > TCPA mixes "Your PC" and the "observer" into one "trusted kernel" and
> > is thus open to abuse.

Let me stress:

  -- Good security design means to separate duties.

  -- TCPA does exactly the opposite: It deliberately mixes duties.

> I haven't read that paper - will have to.  Thanks for the reference.
> However, when I do read it, what I will look for is the non-network
> channel between the observer and the PC.  Somehow, the observer needs to
> know that the PC has not been tampered with and needs to know, securely
> (through physical security) the state of that PC and its S/W.

It doesn't need to know, and it can't know anyway. All it needs to know
is whether it itself has been tampered with. Well, ... hm ... [...]

Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-15 Thread Stefan Lucks
On Sun, 14 Dec 2003, Jerrold Leichter wrote:

> Which brings up the interesting question:  Just why are the reactions to
> TCPA so strong?  Is it because MS - who no one wants to trust - is
> involved?  Is it just the pervasiveness:  Not everyone has a smart card,
> but if TCPA wins out, everyone will have this lump inside of their
> machine.

There are two differences between TCPA-hardware and a smart card.

The first difference is obvious. You can plug in and later remove a smart
card at will, at the point of your choice. Thus, for home banking with
bank X you may use one smart card, for home banking with bank Y you
disconnect the smart card for X and use another one, and before online
gambling you make sure that none of your banking smart cards is connected
to your PC. With TCPA, you have much less control over the kind of stuff
you are using.

This is quite an advantage of smart cards.

The second point is perhaps less obvious, but may be more important.
Usually, *your* PC hard- and software is supposed to protect *your*
assets and satisfy *your* security requirements. The "trusted" hardware
add-on in TCPA is supposed to protect an *outsider's* assets and satisfy
the *outsider's* security needs -- from you.

A TCPA-"enhanced" PC is thus the servant of two masters -- your servant
and the outsider's. Since your hardware connects to the outsider directly,
you can never be sure whether it works *against* you by giving the
outsider more information about you than it should (from your point of
view).

There is nothing wrong with the idea of a trusted kernel, but "trusted"
means that some entity is supposed to "trust" the kernel (what else?). If
two entities, who do not completely trust each other, are supposed to both
"trust" such a kernel, something very very fishy is going on.


Can we do better?

More than ten years ago, Chaum and Pedersen presented a great idea for
how to do such things without potentially compromising your security.
Translated into the context of TCPA, things should look like the
following picture:

   +---------------+     +---------+     +---------------+
   | Outside World | <-> | Your PC | <-> | TCPA-Observer |
   +---------------+     +---------+     +---------------+

So you can trust "your PC" (possibly with a trusted kernel ... trusted by
you). And an outsider can trust the observer.

The point is, the outside world does not directly talk to the observer!

Chaum and Pedersen (and some more recent authors) defined protocols to
satisfy the outsider's security needs without giving the outsider any
chance to learn more about you and the data stored in your PC than you
want her to learn.

TCPA mixes "Your PC" and the "observer" into one "trusted kernel" and is
thus open to abuse.
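
A toy sketch of that trust topology in Python (all names are mine, not
from the paper; the real Chaum/Pedersen protocols are cryptographic, and
this only illustrates who talks to whom):

   class YourPC:
       """Mediates all traffic between the outside world and the
       observer; the outsider never reaches the observer directly."""
       def __init__(self, observer, policy):
           self.observer = observer  # tamper-resistant, trusted by the outsider
           self.policy = policy      # *your* rules on what may pass

       def relay(self, msg_from_outside):
           if not self.policy(msg_from_outside):
               raise PermissionError("blocked by the wallet owner")
           return self.observer.handle(msg_from_outside)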

Reference:

  D. Chaum and T. Pedersen. Wallet databases with observers.
  In Crypto '92, LNCS 740, pp. 89-105.



-- 
Stefan Lucks  Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
e-mail: [EMAIL PROTECTED]
home: http://th.informatik.uni-mannheim.de/people/lucks/
--  I  love  the  smell  of  Cryptanalysis  in  the  morning!  --



Re: example: secure computing kernel needed

2003-12-14 Thread Stefan Lucks
On Wed, 10 Dec 2003, John S. Denker wrote:

> Scenario:  You are teaching chemistry in a non-anglophone
> country.  You are giving an exam to see how well the
> students know the periodic table.
>   -- You want to allow students to use their TI-83 calculators
>      for *calculating* things.
>   -- You want to allow the language-localization package.
>   -- You want to disallow the app that stores the entire
>      periodic table, and all other apps not explicitly
>      approved.

First "Solution": Erase and load by hand
========================================

What would be wrong with
  1. erasing the memories of the students' calculators, and
  2. loading the approved apps and data
immediately before the exam? (I assume the students can't load
unapproved applications during the exam.)

(This is what some of our teachers actually did when I went to school.
 Since there were no approved apps and data, step 2 was trivial. ;-)


> The hardware manufacturer (TI) offers a little program
> that purports to address this problem
>http://education.ti.com/us/product/apps/83p/testguard.html
> but it appears to be entirely non-cryptologic and therefore
> easily spoofed.

Why?


Second "Solution": TestGuard and the like
=========================================

If you
  1. load, and
  2. run
a trusted application with full access to all resources (including
storage for applications and data, and CPU time, thus blocking all the
other stuff which might be running in parallel), nothing can prevent
this application from deleting all non-approved applications and data.

I am not sure what TestGuard actually does, but the above is what it
*should* do.
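
In Python-flavored pseudocode, and with an entirely hypothetical device
API (none of these names come from TI -- this is only a sketch of the
procedure described above):

   APPROVED = {"language-localization"}  # hypothetical whitelist

   def test_guard(device):
       device.suspend_all_tasks()        # own the CPU exclusively
       for app in device.list_apps():
           if app.name not in APPROVED:
               device.delete(app)        # wipe everything not approved
       for item in device.list_user_data():
           device.delete(item)           # wipe stored user data, too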

The existence of a trusted kernel would only complicate things, not
simplify them. (You would have to make sure that your application is
running in the highest privilege mode ...)


I think both of my proposed "solutions" would actually solve your
problem. If not, please describe your threat model!

Without understanding your problem, no cryptographer can provide any
solution. And if (given a proper definition of the problem) it turns out
that there is a non-cryptographic solution which works -- so what?


-- 
Stefan Lucks  Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
e-mail: [EMAIL PROTECTED]
home: http://th.informatik.uni-mannheim.de/people/lucks/
--  I  love  the  smell  of  Cryptanalysis  in  the  morning!  --
