Re: [Clips] Escaping Password Purgatory

2005-08-04 Thread Ian Grigg
On Thursday 04 August 2005 04:31, Bill Frantz wrote:

 Try Site Password, http://www.hpl.hp.com/personal/Alan_Karp/site_password/. 
  It takes a good master password, and a site name, and hashes them together 
 to produce a site-specific password.

I think PwdHash also does this for browsers (probably Firefox):

http://crypto.stanford.edu/PwdHash/
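
Both tools follow the same recipe, so a minimal sketch may help (hypothetical, and not the actual Site Password or PwdHash construction): hash the master password together with the site name and encode the digest as the per-site password.

```python
import base64
import hashlib

def site_password(master: str, site: str) -> str:
    # Hash master password and site name together; the digest becomes
    # the site-specific password.  (Sketch only; the real tools use
    # their own constructions and encodings.)
    digest = hashlib.sha256(f"{master}:{site}".encode()).digest()
    # Truncate and base64-encode to get something typeable.
    return base64.b64encode(digest)[:12].decode()
```

A phished or leaked per-site password then gives the attacker nothing usable at any other site, and nothing directly about the master password.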

iang
-- 
Advances in Financial Cryptography, Issue 2:
   https://www.financialcryptography.com/mt/archives/000498.html
Mark Stiegler, An Introduction to Petname Systems
Nick Szabo, Scarce Objects
Ian Grigg, Triple Entry Accounting

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Ostiary

2005-08-02 Thread Ian Grigg
On Tuesday 02 August 2005 13:26, Udhay Shankar N wrote:
 Sounds interesting. Has anybody used this, and are there any comments?
 
 Udhay
 
 http://ingles.homeunix.org/software/ost/

 ... 
 Perhaps you only really need to remotely initiate a limited set of 
 operations. In this case, you don't need a shell prompt, just a way to 
 securely kick off scripts from elsewhere.
 
 Enter 'Ostiary'. It is designed to allow you to run a fixed set of commands 
 remotely, without giving everyone else access to the same commands. It is 
 designed to do exactly and only what is necessary for this, and no more. 

I recently wrote this as a login program that was
hard-coded to run the commands concerned.
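
A similar effect is available from stock SSH via a forced command (a sketch; the script path, key material, and host names here are invented):

```shell
# Server side: one line in ~/.ssh/authorized_keys binds this key to a
# single script.  Whatever command the client requests, only the fixed
# script runs; forwarding and PTY allocation are disabled too.
command="/usr/local/bin/run-task.sh",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA... task-key

# Client side: any invocation with that key just kicks off the script.
ssh -i ~/.ssh/task_key user@server.example.org
```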

The reason for doing this instead of the Ostiary
approach is that SSH had to be running anyway,
and SSH provides the key management regime.
Without that, I'd have to invent my own, which
in Ostiary's case is the hashing mechanism.
So on this point it comes down to whether
we care enough to replace SSH's authentication
regime, which I'd think would be rare (perhaps
in the embedded market, where Unix doesn't need
maintaining?).

Also, efficiency of command sending was not
an issue - each send took about 10 seconds in
my tests.


 * Keep things simple. I'm no crypto expert; I know I'm not capable of 
 coming up with an ssh replacement. So I need to keep things so utterly 
 simple that I can be sure I'm not missing anything important.

I think it is smart to keep things simple regardless
of one's expertise :)  Also, I wouldn't overdo the
hackability argument.  If flaws are found, you'll
find time to fix them, and for the cost of a few
hacked boxes, you'll have the benefit of a lot
more secured boxes.

iang


Re: ID theft -- so what?

2005-07-15 Thread Ian Grigg
On Thursday 14 July 2005 15:45, Aram Perez wrote:
 RANT-PET_PEEVE: Why do cryptography folks equate PKI with
 certificates and CAs?

Because it's the major example of what most would
agree is PKI, I'd guess.  When we talked to people
in the certs-and-CAs world, they called it PKI.  They
referred to lots of documents, which call it the PKI.
The business model of PKI vendors used to be at least
partly based on selling certs.  It's an assumption
they make or made.

(John Kelsey answered this very well.)

 This fallacy is a major root cause of the
 problem, IMHO. Why was the term PKI invented in the late 70s/early
 80s (Kohnfelder's thesis?)? Before the invention of asymmetric
 cryptography, didn't those people who used symmetric cryptography  
 need an SKI (secret key infrastructure) to manage keys? But no one  
 uses the term SKI or talks about how to manage secret keys (a very  
 hard problem).

Exactly.

 Anytime you use any type of cryptography, you need an   
 infrastructure (http://en.wikipedia.org/wiki/Infrastructure) to  
 manage your keys, whether secret or public. There are at least two  
 public key infrastructures that do NOT require CAs: PGP and SPKI. But


There is a sort of doublethink here - when people
look down their nose at PKI from the PGP side,
the PKI side is sometimes at pains to say that PGP's
WoT is a PKI.  Yet when the converse happens
and PGP pundits suggest using the WoT with (e.g.)
X.509 certs, the PKI people say the WoT is not PKI.

Personally, I call what PGP does a Web of Trust.
And I call what browsers do a PKI.  The fact that
there is trust in PKI and there is infrastructure
in WoT is an issue, yes, but we have to have some
sense of differentiation;  and those terms are what
the people in those fields tend to be comfortable
with.

iang


Re: ID theft -- so what?

2005-07-14 Thread Ian Grigg
On Wednesday 13 July 2005 23:31, Dan Kaminsky wrote:
 
 This is yet more reason why I propose that you authorize transactions
 with public keys and not with the use of identity information. The
 identity information is widely available and passes through too many
 hands to be considered secret in any way, but a key on a token never
 will pass through anyone's hands under ordinary circumstances.
 
   
 
 It's 2005, PKI doesn't work, the horse is dead.

He's not proposing PKI, but nymous accounts.  The
account is the asset, the key is the owner;  at the
simplest conceptual level it is the difference between
Paypal and e-gold.

But, thank the heavens that we now have reached
the point where people can honestly say that PKI
is the root cause of the problem.  Can you now tell
the browser people?

 The credit-card sized  
 number dispensers under development are likely to be what comes next.

Right, alongside nyms on the spectrum are big
random-number tokens.  If you want to get sexy, go
for the blinded ones.  It's all the same infrastructure;
we call it FC.

 Amusingly, your face is an asymmetric authenticator -- easy to 
 recognize, hard to spoof.

True, but also easy to copy and can be stolen.  For
some value, you don't want to go there.

https://www.financialcryptography.com/mt/archives/000440.html

iang


Re: ID theft -- so what?

2005-07-14 Thread Ian Grigg
(Dan, in answer to your question on certs, below.)


On Thursday 14 July 2005 14:19, Perry E. Metzger wrote:
 
 Ian Grigg [EMAIL PROTECTED] writes:
  It's 2005, PKI doesn't work, the horse is dead.
 
  He's not proposing PKI, but nymous accounts.  The
  account is the asset, the key is the owner;
 
 Actually, I wasn't proposing that. I was just proposing that a private
 key be the authenticator for payment card transactions, instead of the
 [name, card number, expiration date, CVV2] tuple -- hardly a
 revolutionary idea. You are right, though, that I do not propose that
 any PK_I_ be involved here -- no need for certs at all for this
 application.
 
 I don't claim this is a remotely original idea, by the way. I'm just
 flogging it again.

Well, that's helpful.  Having built one or two of
these things (and I know of 3 others on the list
that have done the same thing) it helps to know
we aren't starting from scratch.

  But, thank the heavens that we now have reached
  the point where people can honestly say that PKI
  is the root cause of the problem.
 
 Root Cause of the Problem isn't correct either. It is better to say 
 that PKI doesn't solve many of the hard problems we have, or, in some
 cases, any problems -- it doesn't per se cause any problems, or at
 least not many.
 
 This is not a new realization -- this goes back a long way.


OK, so maybe this part is the new realisation:

The browser security model includes PKI for two
purposes - MITM protection and spoofing protection.
Ignoring MITM (today), the spoofing protection is
supposed to alert the user that the cert and the
site don't match.

Phishing is a spoof - the wrong site is used.  So
SSL+PKI should pick that up.  It isn't.  Why?
Simply put, because the browser too easily lets
SSL's anti-spoofing protection go unseen.  It's
not being done properly.

Why is that?  Because the browser people are
under severe constraints - your words - and
nobody is correcting their misunderstandings.
No security folk, no security companies, no CAs,
just a few researchers (some lurking here...).

Too many words?  OK, here's the short version
of why phishing occurs:

Browsers implement SSL+PKI and SSL+PKI is
secure so we don't need to worry about it.

PKI+SSL *is* the root cause of the problem.  It's
just not the certificate level but the business and
architecture level.  The *people* equation.

 People were saying PKI was a bad idea a decade ago or more. A number
 of the people here, including me, gave talks on that subject years
 ago. I spoke against PKI during the debate I was invited to at the
 Usenix Electronic Commerce Workshop in 1998 or so, and at many
 opportunities before and since. Dan Geer has a pretty famous screed on
 the subject. Peter Gutmann talks about the follies of X.509 so often
 it is hard to keep up. I don't mean to single us out as visionaries --
 we were just saying things lots of other people were also saying.
 
 Honestly, where have you been?

I've been over at Mozilla trying to tell them the PKI
isn't doing its job.  Peter Gutmann and Amir Herzberg
have been there supporting this push.  They're not
visionaries either but at least they put their money
where their mouths are - trying to get Mozo people
to touch up the PKI + SSL code to deal with spoofing.

(Demos and code available on request.)

We recently set up a new
group for all anti-phishing researchers so they could
congregate and cross-fertilise ideas in a scientific
fashion.  I'm proud to say that in less than one month
our understanding of phishing and the browser
security model has significantly advanced.

We've talked to dozens of programmers over in the
Mozilla camp, sadly without success, and I think that's
because the crypto community has been relatively
silent on this issue.  Most in the browser
community remain simply unaware and uneducated
about the reasoning behind the security model, and how
out of date it is.

So, where have you been, Perry?  If you wish to
patronize me (on a public list, with no right of reply!)
do so from a position of strength.

  Can you now tell the browser people?
 
 I can smell the rest of this discussion right now, Ian. You'll
 misunderstand the constraints the browser people are under, and start
 claiming SSL is bad (or unnecessary) about 20 seconds after that. I'm
 not playing the game.

Perry, for the last few months or so the game
you have been playing is "disagree with Ian,
rag him in public, drop his posts".

I don't mind .. but as I showed above, you are
100% diametrically wrong about what it is I am
saying or likely to say.  Just so you're aware
that you're inventing the rest of the discussion
and disagreeing with your own invention...

iang

Re: the limits of crypto and authentication

2005-07-11 Thread Ian Grigg
On Saturday 09 July 2005 23:31, [EMAIL PROTECTED] wrote:
 
 Nick Owen writes:
  | I think that the cost of two-factor authentication will plummet in the
  | face of the volumes offered by e-banking.
 
 Would you or anyone here care to analyze
 what I am presuming is the market failure
 of Amex Blue in the sense of its chipcard
 and reader combo?

There was no market failure - Amex Blue was
an outstanding success that sent waves of
astonishment through the credit card industry.
Everyone was talking about how stunningly
successful it was - how it had broken the laws
of account creation by actually acquiring new
accounts in the millions instead of cannibalising
existing accounts.  (I recall a number of 4 million?)

You may be thinking that the total and complete flop
of the smart card's usage was in some way correlated
with the market success, but it was quite the reverse -
the smart card usage was a complete and utter failure
for the obvious reasons, but the program itself was
fantastically successful.

iang


Re: the limits of crypto and authentication

2005-07-09 Thread Ian Grigg
FTR, e-gold was aware of the general makeup of this
threat as early as 1998 and asked someone to look at it.
The long and the short of it was that it was more difficult
to solve than at first claimed, so the project was scrapped.  This
was a good risk-based decision.  The first trojans that I
know of for e-gold weren't spotted until 12-18 months
ago, so it was also a profitable decision.  What they are
doing now I don't know.

In the payments world we've known how to solve all
this for some time, since the early 90s to my knowledge.
The only question really is, have you got a business
model that will pay for it, because any form of token is
very expensive, and the form of token that is needed -
a trusted device to put the application, display, keypad
and net connection on - is even more expensive than
the stop-gap two-factor authentication units commonly
sold.

iang


Re: AES timing attacks, why not whiten the implementation?

2005-06-24 Thread Ian Grigg
On Friday 24 June 2005 04:36, Beryllium Sphere LLC wrote:
 1) How do you generate this in a way that does not leak information about
 the permutation generated?
 
 2) How many times can you re-use a single indirection array?
 
 3) How quickly can you generate new indirection arrays?
 
 Good questions, which probably require empirical answers. 
 
 The added cost of this particular whitening approach (question 3) is
 the cost of shuffling an array plus, I'd expect more important, the
 cost of replacing 44+ bits of randomness (log2(16!)).  
 
 How often you can afford to rearrange table access is a question much like 
 how often you can afford rekeying. The attacker still gets information from 
 timing, of course (question 1), but it's information about the pair {key, 
 table access permutation}. 

The rearrangement cost should be fairly low compared to
the cost of doing the decrypt in the first place?  And rekeying
involves network interchange which is expensive and
complicated.

 If you had unlimited entropy available you could re-permute
 on every encryption or decryption. The minimum frequency
 would depend on how many trials your attacker needs in
 order to nail down a key.   

You don't need entropy, do you?  All you need to do is generate
an unrelatable time signature for a particular decryption, and for
that you just need a stream that is unrelated in its timing effects.

What I'm not sure about is whether the stream needs to be
secret.  If the listener knows how you permute, can that
then be factored into the timing statistics?  If not, then
simply use the last decrypt in the mode as a seed to create
the next table.  If it has to be kept secret, then generate
a new XOR chain keyed from the original secret key
(including an enlarged key).  Either way, it seems a PRNG
of past decrypts would solve the table-keying need.

Further, as no coordination is required in the table keying,
the decryptor has wide flexibility in strategies, such as
hashing the last ciphertext with the secret key to key the table.
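
A sketch of that strategy (the function shape is mine, purely illustrative): derive a deterministic seed from the secret key and the last ciphertext block, and shuffle a 16-entry indirection array with it, so the table permutation changes per decrypt without consuming fresh entropy.

```python
import hashlib
import random

def next_table(secret_key: bytes, prev_ciphertext: bytes) -> list:
    # Seed a deterministic generator from the secret key and the last
    # ciphertext block; no coordination or extra entropy is required,
    # and both ends could derive the same permutation if they needed to.
    seed = hashlib.sha256(secret_key + prev_ciphertext).digest()
    rng = random.Random(seed)
    table = list(range(16))
    rng.shuffle(table)  # Fisher-Yates shuffle of the indirection array
    return table
```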

 The questions I don't know how to answer are
 (a) How many bits worth of confusion does this really add for an attacker? 
 There must be symmetries such that some subsets of permutations will have 
 identical cache timings. The answer to (a) determines the answer to (2).
 (b) Is there a better way to scramble the timing of an AES operation without 
 going to the last resort of padding everyting to worst-case timing?

There are two distinct classes of problems here - fixes that
would work on AES, and fixes that would work on any block
cipher.  Your neat idea falls into the former.

iang
-- 
Advances in Financial Cryptography, Issue 1:
   https://www.financialcryptography.com/mt/archives/000458.html
Daniel Nagy, On Secure Knowledge-Based Authentication
Adam Shostack, Avoiding Liability: An Alternative Route to More Secure Products
Ian Grigg, Pareto-Secure


WYTM - but what if it was true?

2005-06-22 Thread Ian Grigg
A highly aspirated but otherwise normal watcher of black helicopters asked:

 Any idea if this is true?
  (WockerWocker, Wed Jun 22 12:07:31 2005)
 http://c0x2.de/lol/lol.html

Beats me.  But what if it were true?  What's your advice to
clients?

iang


Re: AES cache timing attack

2005-06-21 Thread Ian Grigg
On Tuesday 21 June 2005 23:00, Jerrold Leichter wrote:
  It's much 
 harder to see how one could attack a session key in a properly implemented 
 system the same way.  You would have to inject a message into the ongoing 
 session.  However, if the protocol authenticates its messages, you'll never 
 get any response to an injected message.  At best, you might be able to 
 observe some kind of reaction to the injected message.  But that's a channel 
 that can be made very noisy, since it shouldn't occur often.  (BTW, if you 
 use 
 encrypt-then-authenticate, you're completely immune to this attack, since the 
 implementation won't ever decrypt the injected message.  Of course, there may 
 be a timing attack against the *authentication*!)

I agree with your comments about the injection, but I
don't see why the attack doesn't work on the session
passively.  Are you assuming that because it is a
session, it's in some way not plausible to match the
inbound packets with outbound packets?  I would
have thought that was possible with things like
keepalives and so forth.  The only drawback I can see is
that there might not be enough data (hence the desire to
tickle things along with an injection).

When I was thinking about using a mode, I was thinking
more about how a mode could be the cover needed
to hide the decrypt time.  A straight CBC mode would
probably make matters worse, because it is a known
length and the key doesn't change, so plausibly the
longer the total packet, the better the time estimate.

But if the key were to change for each block in a
decrypt-dependent fashion, this would presumably
render the total time as an average over many decrypts
of many block keys.  The longer the packets, the more
the cover, and no key gets used more than once anyway.

So, hypothetically a mode that XOR'd the previous output
with the key before encryption (heaven knows whether
that would be cryptographically sound, but something
along those lines, anyway).
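
As a toy illustration of that key-evolution idea (the "block cipher" here is a hash-based stand-in, encrypt-only, and no claim of cryptographic soundness is made, per the caveat above):

```python
import hashlib

BLOCK = 16  # bytes

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: hash(key || block), truncated.
    # Not invertible, so this only illustrates the key schedule.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_chain(key: bytes, blocks):
    out = []
    for block in blocks:
        c = toy_encrypt_block(key, block)
        out.append(c)
        # Evolve the key: XOR the previous output into it, so every
        # block is processed under a different key and the observed
        # time averages over many per-block keys.
        key = xor_bytes(key, c)
    return out
```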

Alternatively, if one is in the unfortunate position of being
an oracle for a single block encryption then the packet
could be augmented with a cleartext random block to be
xor'd with the key each request.

iang


Re: Cryptography Research wants piracy speed bump on HD DVDs

2004-12-22 Thread Ian Grigg
 What CR does instead is much simpler and more direct. It tries to cut off
 any player that has been used for mass piracy.

Let me get this right. ...

 When a pirate makes a copy of a film encoded as SPDC, the output file is
 cryptographically bound to a set of player decryption keys. So it is easy
 when looking at a pirated work on a peer to peer network, or any copies
 found on copied DVDs, to identify which player made those copies, said
 Laren When the content owner sends out any further content it can contain
 on it a revocation of just the player that was used to make a pirated copy.

A blockbuster worth $100m gets cracked ... and
the crack gets watermarked with the ID of the
$100 machine that played it.

 We picture a message popping up on a screen saying something like 'Disney
 movies won't play on your player any more please call this number for
 further information.' Or perhaps 'To fix this please call Disney with your
 credit card,' something like that anyway.

So the solution is to punish the $100 machine's owner
by asking them to call Disney with a CC in hand?

As described, this looks like snake oil.  Is this
for real?

iang



Re: 3DES performance

2004-12-08 Thread Ian Grigg
 Hi,
 I'm working on a project for a company that involves the use of 3DES. They
 have
 asked me to find out what the overheads are for encrypting a binary file.
 There
 will be quite a lot of traffic coming in (in the region of hundreds of
 thousands of files per hour). Has anyone got any figures for 3DES performance?
 I've tried bdes on OpenBSD which has given me some useful results.

Try typing:

openssl speed

on any Unix platform (until you find one with OpenSSL installed).
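
To turn the benchmark figure into a capacity estimate (a back-of-envelope sketch; the throughput and file-size numbers below are assumed placeholders, substitute the ones your own `openssl speed des-ede3` run reports):

```python
# Assumed placeholders: replace with your own measured numbers.
throughput_bps = 20 * 1024 * 1024   # 3DES throughput, bytes/second
avg_file_size = 64 * 1024           # average file size, bytes

# Files per hour the cipher alone can sustain (ignoring I/O, key
# setup per file, and protocol overhead, which all reduce this).
files_per_hour = throughput_bps * 3600 / avg_file_size
print(f"{files_per_hour:,.0f} files/hour")
```

At these example numbers that comes to over a million files an hour, so hundreds of thousands of files per hour is within reach of the cipher itself, before overheads.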

iang



Re: SSL/TLS passive sniffing

2004-11-30 Thread Ian Grigg
Ben raises an interesting thought:

 There was some question about whether this is possible for connections that
 use client-certs, since it looks to me from the spec that those connections
 should be using one of the Diffie Hellman cipher suites, which is obviously
 not vulnerable to a passive sniffing 'attack'. Active 'attacks' will
 obviously still work. Bear in mind that we're talking about deliberate
 undermining of the SSL connection by organisations, usually against their
 website users (without talking about the goodness, badness or legality of
 that), so how do they get the private keys isn't relevant.

We have the dichotomy that DH protects against all passive
attacks, while a signed cert protects against most active
attacks and most passive attacks, but not passive attacks
where the key is leaked, nor active attacks where the key is
forged (as a cert).

But we do not use both DH and certificates at the same time,
we generally pick one or the other.

Could we however do both?

In the act of a public-key-protected key exchange, Alice
generally creates a random key and encrypts it to Bob's
public key.  That random key then gets used for further traffic.

However, could one do a Diffie-Hellman key exchange and do this
under the protection of the public key?  In that case we are
now protected from Bob aggressively leaking the private key.
(Or, to put it more precisely, Bob would now have to record
and leak all his traffic as well, which is a substantially
more expensive thing to engage in.)

(This still leaves us with the active attack of a forged
key, but that is dealt with by public key (fingerprint)
caching.)
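
A toy sketch of the ephemeral-exponent part (toy-sized group, not secure; in practice the two public halves would travel inside the existing public-key-protected exchange):

```python
import secrets

# Toy DH group for illustration only -- far too small for real use.
P = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, a prime
G = 5

# Each side picks a fresh secret exponent per session...
a = secrets.randbelow(P - 2) + 1  # Alice's ephemeral exponent
b = secrets.randbelow(P - 2) + 1  # Bob's ephemeral exponent

# ...and sends only g^x mod p.  Carried inside the cert-protected
# exchange, these halves are hidden from a passive attacker who
# later obtains Bob's long-term private key but not his traffic.
A = pow(G, a, P)
B = pow(G, b, P)

# Both ends derive the same per-session secret.
shared_alice = pow(B, a, P)
shared_bob = pow(A, b, P)
```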

Does that make sense?  The reason I ask is that I've just
written a new key exchange protocol element, and I thought
I was being clever by having both Bob and Alice provide
half the key each, so as to protect against either party
being non-robust with secret key generation.  (As a programmer
I'm more worried about the RNG clagging than the key leaking,
but let's leave that aside for now...)

Now I'm wondering whether the key exchange should do a DH
within the standard public key protected key exchange?
Hmmm, this sounds like I am trying to do PFS  (perfect
forward secrecy).  Any thoughts?
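
The half-each idea itself is a one-liner (hypothetical sketch; the hash choice and framing are mine): hash both contributions together, so one party's weak randomness can't drag the key below the other party's contribution.

```python
import hashlib
import secrets

def combine_halves(alice_half: bytes, bob_half: bytes) -> bytes:
    # The session key depends on both contributions; either party's
    # randomness alone is enough to make the result unpredictable.
    # (A real protocol would also length-prefix or domain-separate
    # the two halves before hashing.)
    return hashlib.sha256(alice_half + bob_half).digest()

alice = secrets.token_bytes(16)
bob = b"\x00" * 16  # even a totally clagged peer RNG...
key = combine_halves(alice, bob)  # ...cannot weaken the session key
```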

iang




Re: SSL/TLS passive sniffing

2004-11-30 Thread Ian Grigg
 Ian Grigg writes:
I note that distinction well!  Certificate-based systems
are totally vulnerable to a passive sniffing attack if the
attacker can get the key.  Whereas Diffie-Hellman is not,
on the face of it.  Very curious...

 No, that is not accurate.  Diffie-Hellman is also insecure if the private
 key is revealed to the adversary.  The private key for Diffie-Hellman
 is the private exponent.  If you learn the private exponent that one
 endpoint used for a given connection, and if you have intercepted that
 connection, you can derive the session key and decrypt the intercepted
 traffic.

I wasn't aware that one could think in those terms.  Reading
here, http://www.rsasecurity.com/rsalabs/node.asp?id=2248, it
says:

In recent years, the original Diffie-Hellman protocol
has been understood to be an example of a much more
general cryptographic technique, the common element
being the derivation of a shared secret value (that
is, key) from one party's public key and another
party's private key. The parties' key pairs may be
generated anew at each run of the protocol, as in
the original Diffie-Hellman protocol.

It seems the compromise of *either* exponent would lead to
a solution.
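
That is easy to check with toy numbers: the session secret g^(ab) falls out of either private exponent combined with the other side's intercepted public value.

```python
# Toy parameters, for illustration only.
P, G = 2087, 5    # small prime modulus and base
a, b = 123, 456   # the two private exponents

A = pow(G, a, P)  # Alice's public value, visible on the wire
B = pow(G, b, P)  # Bob's public value, visible on the wire

shared = pow(G, a * b, P)  # the session secret g^(ab) mod p

# Learning *either* exponent, plus the intercepted public values,
# recovers the session secret:
assert pow(B, a, P) == shared  # attacker holding Alice's exponent
assert pow(A, b, P) == shared  # attacker holding Bob's exponent
```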

 Perhaps the distinction you had in mind is forward secrecy.  If you use
 a different private key for every connection, then compromise of one
 connection's private key won't affect other connections.  This is
 true whether you use RSA or Diffie-Hellman.  The main difference is
 that in Diffie-Hellman, key generation is cheap and easy (just an
 exponentiation), while in RSA key generation is more expensive.

Yes.  So if a crypto system used the technique of
Diffie-Hellman key exchange (with unique exponents for each
session), there would be no lazy passive attack, where I
am defining the lazy attack as a once-off compromise of a
private key.  That is, the attacker would still have to
learn the individual exponent for that session, which
(assuming the attacker has to ask one party for it)
would be equivalent in difficulty to learning the secret
key that resulted and was used for the secret-key cipher.

iang



Re: Your source code, for sale

2004-11-18 Thread Ian Grigg
 Enzo Michelangeli writes:
 In the world of international trade, where mutual distrust between buyer
 and seller is often the rule and there is no central authority to
 enforce
 the law, this is traditionally achieved by interposing not less than
 three
 trusted third parties: the shipping line, the opening bank and the
 negotiating bank.

 Interesting.  In the e-gold case, both parties have the same bank,
 e-gold ltd.  The corresponding protocol would be for the buyer to instruct
 e-gold to set aside some money which would go to the seller once the
 seller supplied a certain receipt.  That receipt would be an email return
 receipt showing that the seller had sent the buyer the content with hash
 so-and-so, using a cryptographic email return-receipt protocol.

This mixes up banking and payment systems.  Enzo's
description shows banks doing banking - lending money
on paper that eventually pays a rate of return.  In
contrast, in the DGC or digital gold currency world,
the issuers of gold like e-gold are payment systems and
not banks.  The distinction is that a payment system
does not issue credit.

So, in the e-gold scenario, there would need to be
similar third parties independent of the payment system
to provide the credit moving in the reverse direction to
the goods.  In the end it would be much like Enzo's
example, with a third party with the seller, a third
party with the buyer, and one or two third parties who
are dealing the physical goods.  There have been some
thoughts in the direction of credit creation in the
gold community, but nothing of any sustainability has
occurred as yet.

iang



RE: Your source code, for sale

2004-11-18 Thread Ian Grigg


 Yes, I'm looking at ideas like this for ecash gambling, but you have
 a who-goes-first problem.  One side or the other has to rip their
 own cash first, and then the other side can just go away and leave the
 first side screwed.  The act of ripping cash is relatively atomic and
 involves a transaction with the ecash mint, so they can't both do it at
 the same time.

 I guess the best fix is for each side to rip a little bit of cash at a
 time, so that the guy who goes first only loses a trivial amount if the
 other side aborts.  Then after a few rounds both sides are sunk pretty
 deep and both have a strong incentive to complete the transaction.
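
That incremental scheme can be sketched as a loop (a hypothetical simulation, not any real mint's protocol): each round, one side rips a small amount first and the other matches it, so an abort costs the honest side at most one step.

```python
def incremental_exchange(total, step, abort_after=None):
    # Simulate two parties ripping cash in small alternating steps.
    a_paid = b_paid = 0
    rounds = 0
    while a_paid < total:
        a_paid += step  # side A goes first with a small amount...
        rounds += 1
        if abort_after is not None and rounds > abort_after:
            break       # ...and side B walks away without matching
        b_paid += step  # otherwise side B matches the step
    return a_paid, b_paid
```

After a few rounds both sides are deep enough in that completing is cheaper than walking away.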

What is wrong with having a TTP, generally called a
casino?

iang



Re: Financial identity is *dangerous*? (was re: Fake companies, real money)

2004-11-01 Thread Ian Grigg
Ben,

 Ian Grigg wrote:
 It should be obvious.  But it's not.  A few billions
 of investment in smart cards says that it is anything
 but obvious.

 That assumes that the goal of smartcards is to increase security instead
 of to decrease liability.

On whether the goal of smart cards is to reduce
liability:

a)  Not with any systems I was familiar with:  the major
Dutch systems were defensive, oriented to filling the space
that was potentially threatened by other parties.  The
trials aimed to increase security, which they did
not by using smart cards but by eliminating cash, which
had created an unacceptable risk of serious theft in
unattended petrol stations.  The same happened with UK
phone cards...  I'm unfamiliar with the Mondex or Belgian
Proton-based motives, but their structures indicate that
liability was not a question uppermost on their minds.

b)  Liability reduction cannot be a goal.  If it were, then
one could achieve the goal completely - eliminate liability -
by not doing the project.  Instead, liability and/or
reduction of same is a _limitation_ on the goal of the
system.

c)  Whether liability reduction entered into any smart
card system as a limitation on their goals is a little
uncertain.  I would say no, as all the systems were
early stage in the institutional model;  in which case
there was little or no liability.  Instead, the only
drivers in that vague area would have been future
running costs reduction, which would have included well
considered security models, and partially considered
user support models, to reduce over all costs.  Including
all forms of risks, of course.

d)  Liability reduction generally comes into play when a
system is mature and/or regulatory issues come into play.
That is, liability reduction is something often seen when
the desire is to avoid surprises, and to avoid any costs
cropping up that weren't well built into the costs model.
I.e., the risk models used by credit card operators are
one example, and the customer agreement models (or whatever
they are called) used by CAs are another example of liability
reduction.

e) Perversely, banks practice liability increase as well as
reduction.  In fact, a pure banking model is about the risk
of a loan, and they specialise in measuring and managing
the risk of that loan.  But, as we are talking about payment
systems, and loans are banking, and banking is not payment
systems, that would be a change in business, so out of
scope of the original topic.

f)  And, of course, all institutions will practice liability
increase if they can turn it into a barrier to entry, that
is, cartelise the industry so as to block new entrants.  See
the eMoney directive for the European barrier to entry, which
was effectively coordinated by the Bundesbank on behalf of
the banks, and resulted in the like a bank, but not a bank,
and as costly as a bank approach to digital cash.

All of which might or might not hit the target of liability
as you wrote it?

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Financial identity is *dangerous*? (was re: Fake companies, real money)

2004-10-28 Thread Ian Grigg

Alan Barrett wrote:
On Sat, 23 Oct 2004, Aaron Whitehouse wrote:
Oh, and make it small enough to fit in the pocket,
put a display *and* a keypad on it, and tell the
user not to lose it.
How much difference is there, practically, between this and using a 
smartcard credit card in an external reader with a keypad? Aside from 
the weight of the 'computer' in your pocket...

The risks of using *somebody else's keypad* to type passwords or
instructions to your smartcard, or using *somebody else's display* to
view output that is intended to be private, should be obvious.
:-)
It should be obvious.  But it's not.  A few billions
of investment in smart cards says that it is anything
but obvious.
To be fair, the smart card investments I've been
familiar with have been at least very well aware of
the problem.  It didn't stop them proceeding with
papering over the symptoms, when they should have
gone for the underlying causes.
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Printers betray document secrets

2004-10-28 Thread Ian Grigg

Ben Laurie wrote:
This only works if the marks are not such that the identity of the 
printer is linked to the marks (as opposed to being able to test whether 
a particular document was produced by a particular printer).

To be really safe, I'd suggest going somewhere without surveillance 
cameras, buying a printer for cash, using it and then destroying it.

Don't forget not to use your car and leave your mobile phone behind. Oh, 
and take the RFID tags out of your clothes.
It's actually quite an amusing problem.  When put
in those terms, it might be cheaper and more secure
to go find some druggie down back of central station,
and pay them a tenner to write out the ransom demand.
Or buy a newspaper and start cutting and pasting the
letters...
In more scientific terms, is there a way to efficiently
print an anonymous paper document?  (By anonymous,
I mean a document that leaves no easy clues back to
the author.)  When creating one's anonymous political
pamphlets revealing the latest government scandal,
one might need the help of RFC 666, "How to print
anonymous pamphlets with modern printers."
E.g., something like:  acquire an HP inkjet and a
Brother laser.  Disengage the ink drying fan in the
Brother.  Print the page through the Brother then
print the same page (wet!) through the HP within 5
seconds.  For paper, use fish & chip wrap, cleaned
with Sarson's and dried for 30 mins under a tanning
lamp with the UV filter removed...
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Financial identity is *dangerous*? (was re: Fake companies, real money)

2004-10-25 Thread Ian Grigg
http://www.financialcryptography.com/mt/archives/000219.html
[EMAIL PROTECTED] wrote:
... to break the conundrum Ballmer finds himself
in where the road forks towards (1) fix the security
problem but lose backward compatibility, or (2) keep
the backward compatibility but never fix the problem.
I think the recent decision by Microsoft to not upgrade
browsers indicates that they are plumping for your choice
(1).  Backwards compatibility takes a back seat.  I wrote
more about it here:
http://www.financialcryptography.com/mt/archives/000219.html
His Board would prefer (2), the annuity of locked-in
users, but it forces a bet that software liability
never happens.  Fixing the problem, for which the
calls grow more strident daily, puts the desktop
platform into play even more than it is now as
it asks the users (who, having lost compatibility,
thus have nothing to lose) to marry Redmond a
second time.  A VM-cures-all strategy is then
an attempt to avoid having to choose between (1)
and (2) by breaking backward compatibility for
new things but bridging the old things with a
magic box that both preserves the annuity revenue
stream from locked-in users while it keeps the
liability bar at bay.
I have two questions:  Does he have a board?  I
never heard of anyone but Bill Gates telling Ballmer
what to do.  Just curious!
Secondly, is a VM strategy likely to work?  Assuming
that Microsoft can make it work nicely, it also opens
the door for other OSs to be added into the mix, something
that Microsoft wouldn't be that keen to promote.
(I don't disagree with your comments, though!)
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


How to store the car-valued bearer bond? (was Financial identity...)

2004-10-23 Thread Ian Grigg

Aaron Whitehouse wrote:
None. But a machine that had one purpose in life:
to manage the bearer bond, that could be trusted
to a reasonable degree. The trick is to stop
thinking of the machine as a general purpose
computer and think of it as a platform for one
single application. Then secure that machine/OS/
stack/application combination.
Oh, and make it small enough to fit in the pocket,
put a display *and* a keypad on it, and tell the
user not to lose it.
iang

How much difference is there, practically, between this and using a 
smartcard credit card in an external reader with a keypad? Aside from 
the weight of the 'computer' in your pocket...
Theoretically, there may not be much difference, depending
on where the theory starts...
Practically there are a bunch of differences, which are
more or less issues, depending.
1.  The data store (a.k.a. the smart card) is separated
from the IO package.  Is this an advantage or a disadvantage?
For the most part it gives the user 2 tokens to worry about,
the expense of an additional interface, and more mass, as you
point out.  I can't quite see any offsetting advantage myself
in all that over one box that does the lot.  So that's a minus.
2.  The data store is in some sense secure.  If it's got
a car-valued bearer bond on it, that's probably not
secure enough.  It might give some security in the event
of loss, but so would a combined package with some other
password on it.  It is a marginal security improvement
over a single purpose non-smart package, and thus would
have a primary benefit in marketing (see Blue).  It's a
plus, but a small plus, as a single-purpose package could
just build in a smart card if it so desired.
3.  The smart card interface is not good.  It has to be
taken out of your trusted reader and put in someone else's
trusted reader.  Bad news.  So someone else's trusted
reader tells you it is paying you dividends on your bond,
when in fact it is replacing the bond with a mickey mouse
loyalty coupon.  Getting around that disadvantage costs
systems operators a bundle of money and restrictions.
This makes for a huge minus.
4.  The smart card interface, part 2.  In practice, smart
card readers are an example of historical detritus.  We
all said "next year is the year of the smartcard" in 1995,
and it still is.  In practice, the interfaces we want on our
bearer bond hardware token are these:  802.11x, ethernet,
bluetooth, IR, ... in that approximate order, all with IP
layered over and our real hot bearer transfer protocol, and
not some hokey old telco thing.  The smart card interface is
another huge minus, because it means that the infrastructure
is all specialised, the protocols are all closed, and the
system is all controlled at some level or other, which means
some big fella has to dig deep in the pockets to finance it.
Score card so far:  2 big minuses, one small minus, and
a small plus.
That would seem to me a more realistic expectation on consumers who are 
going to have, before too long, credit cards that fit that description 
and quite possibly the readers to go with them.
Next year is the year of the smart card!  In practice,
that advantage is just a rationalisation.  We can't use
any of those tokens to store your bearer bond.  If we
are going to ask someone to store a bearer bond, we
have to give that person the token.  Which means we can
start with a blank sheet of paper, we don't need to use
any smart card patriotism to justify our choices.
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Are new passports [an] identity-theft risk?

2004-10-22 Thread Ian Grigg

R.A. Hettinga wrote:
http://worldnetdaily.com/news/printer-friendly.asp?ARTICLE_ID=41030

 An engineer and RFID expert with Intel claims there is little danger of
unauthorized people reading the new passports. Roy Want told the newssite:
"It is actually quite hard to read RFID at a distance," saying a person's
keys, bag and body interfere with the radio waves.
Who was it that pointed out that radio waves don't
interfere, rather, receivers can't discriminate?
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Printers betray document secrets

2004-10-21 Thread Ian Grigg

R.A. Hettinga wrote:
http://news.bbc.co.uk/2/low/technology/3753886.stm

 US scientists have discovered that every desktop printer has a signature
style that it invisibly leaves on all the documents it produces.
I don't think this is new - I'm pretty sure it was
published about 6 or 7 years back as a technique.
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Financial identity is *dangerous*? (was re: Fake companies, real money)

2004-10-21 Thread Ian Grigg
Hi John,
John Kelsey wrote:
Today, most of what I'm trying to defend myself from online is done as either a kind of hobby (most viruses), or as fairly low-end scams that probably net the criminals reasonable amounts of money, but probably don't make them rich.  Imagine a world where there are a few hundred million dollars in untraceable assets waiting to be stolen, but only on Windows XP boxes with the latest patches, firewalls and scanners installed, and reasonable security settings.  IMO, that's a world where every day is day zero.  All bugs are shallow, given enough qualified eyeballs, and with that kind of money on the table, there would be plenty of eyeballs looking.  
We are way way past that point in security,
phishing is happening on an industrial scale, and
the virus, phish and spam people are united, or
at least working together.  Internet payment
systems are being DDOS/extorted on a regular
basis, and hack attempts are routine.
We literally already have that world.
And once it's done, several thousand early adopters are out thousands of dollars each.  This isn't much of an advertisement for the payment system.  It's anonymous and based on bearer instruments, so there's no way to run the fraudulent transactions back.  The money's gone, and the attackers are richer, and the next, more demanding round of attacks has been capitalized.  
Again, we're well past that point.  There have been
hundreds and hundreds of payment systems out there,
and maybe on the order of a thousand have failed by now,
mostly due to business reasons.  Some simply due
to hacks and attacks, but it is rare, because:
What happens is that beyond a certain threshold, the
payment system delivers valuable payments.  At that
point, it starts getting attacked.  If those attacks
are survived, then it moves on to the next phase.
Which would be more attacks of a different nature...
(In fact, one seems to have failed in the last few
days - EvoCash -  and another is on the watch list
for failure - DMT/Alta.  Both of them suffered from
business style attacks it seemed, rather than what
we would call security hacks.)
The notion that suddenly it's all over isn't what
happens.  It's a trickle, then it builds up to a
flood.  Some small hacks come in, and people either
look at them or they don't.  Those that are diligent
and keep an eye on these things respond.  Those that
don't go out of business.  There are more dead
payment systems than people on this list, I'd guess;
we do have plenty of experience in this.
In practice, we've also seen what happens when
money that gets stolen can't be traced or stopped.
Even though not bearer, systems like e-gold are
plenty anon enough, and they don't easily reverse.
I doubt bearer systems would necessarily face a
problem because of users losing their bearer tokens
(but there are plenty of other problems out there
like the rather hard insider theft problem).
They also have to be able to do something about it.  What would you tell a reasonably bright computer programmer with no particular expertise in security about how to keep a bearer asset as valuable as his car stored securely on a networked computer?  If you can't give him an answer that will really work in a world where these bearer assets are common, you're just not going to get a widespread bearer payment system working, for the same reason that there's probably nobody jogging with an iPod through random streets of Sadr City, no matter how careful they're being.
When we get to that point, we will have an answer
for him.  I can assert that with a fair degree of
confidence, because a) we can't ever get to that
point until we have an answer, and b) we already
have the answer, and have had it for a decade:
store it on a trusted machine.  Just say no to
Windows XP.  It's easy, especially when he's
storing a bearer bond worth a car.
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Financial identity is *dangerous*? (was re: Fake companies, real money)

2004-10-21 Thread Ian Grigg
James A. Donald wrote:
we already have the answer, and have had it for a decade: 
store it on a trusted machine.  Just say no to Windows XP. 
It's easy, especially when he's storing a bearer bond worth a 
car.

What machine, attached to a network, using a web browser, and 
sending and receiving mail, would you trust? 

None.  But a machine that had one purpose in life:
to manage the bearer bond, that could be trusted
to a reasonable degree.  The trick is to stop
thinking of the machine as a general purpose
computer and think of it as a platform for one
single application.  Then secure that machine/OS/
stack/application combination.
Oh, and make it small enough to fit in the pocket,
put a display *and* a keypad on it, and tell the
user not to lose it.
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: AES Modes

2004-10-12 Thread Ian Grigg
Jack Lloyd also passed along lots of good comments I'd
like to forward (having gained permission) FTR.  I've
edited them for brevity and pertinence.
Jack Lloyd wrote:
 If it's small messages, CCM would probably work pretty well. Personally I think
 CCM is really poorly designed (in terms of easy implementation/usage), but take
 a look. There is also EAX, which is IMO significantly nicer. There are a ton of
 others (most of the ones on the page you link to support encrypt+MAC), but it
 seems like EAX and CCM are the only two that are going anywhere (many of the
 others are patented and/or rather painful to implement).

 CCM and EAX are both going to be slower than AES+HMAC because they use AES in
 some variant of CBC-MAC. Some of the others have faster MACs, mostly ones based
 on universal hash functions, but the best of them (OCB in particular) have been
 patented.
I'm obviously being naive here ... I had thought that
the combined mode would be faster, as it would run through
the data once only, and that AES seems to clip along
faster than SHA1.
Are you saying that as far as speed goes, I may as well
do AES (using CBC) and add an HMAC on the end?
Or are you saying that only the patented ones manage to
deliver the savings we all expect?  Hmm, reading about
OCB on Phil Rogaway's site does clarify this somewhat.
http://www.cs.ucdavis.edu/~rogaway/ocb/ocb-back.htm
iang
== To which jack replied:
I'm obviously being naive here ... I had thought that the combined mode would
 be faster, as it would run through the data once only, and that AES seems to
 clip along faster than SHA1.
AFAIK all of the modes that use only one block cipher invocation per block of
input are patented. EAX+CCM both use two AES operations per block, and
byte-for-byte SHA-1 is 2-5x faster than AES (at least in the implementations
I've seen/used/written), so using AES+HMAC is probably going to be faster than
AES/EAX or AES/CCM. The obvious exception being boxes with hardware AES chips
and slow CPUs (eg, an ARM7 with an AES coprocessor), where AES will of course
be much faster than SHA-1.
 Are you saying that as far as speed goes, I may as well do AES (using CBC)
 and add an HMAC on the end?
At least on general purpose CPUs, yes.
 Or are you saying that only the patented ones manage to deliver the savings
 we all expect?  Hmm, reading about OCB on Phil Rogaway's site does clarify
 this somewhat.  http://www.cs.ucdavis.edu/~rogaway/ocb/ocb-back.htm
Pretty much. Though I just remembered that CWC has not been patented by its
creators, but I wouldn't be at all surprised if it was covered by one of the
others. Even CWC is probably slower than AES+HMAC in software, though
apparently it's pretty fast in hardware.
-Jack
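The AES+HMAC composition Jack compares against is the generic encrypt-then-MAC construction: encrypt first, then MAC the ciphertext, using separate keys for each.  A minimal Python sketch of that composition, with a toy hash-based counter-mode keystream standing in for the block cipher (an assumption made purely so the sketch is self-contained; a real system would use an actual AES implementation):

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream built from SHA-1, standing in for
    # AES-CTR so the sketch needs no external crypto library.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha1(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # Encrypt-then-MAC: encrypt first, then authenticate the ciphertext
    # (and the nonce) under a *separate* MAC key.
    nonce = os.urandom(16)
    ks = keystream(enc_key, nonce, len(plaintext))
    ct = bytes(p ^ k for p, k in zip(plaintext, ks))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha1).digest()
    return nonce + ct + tag

def open_sealed(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    # Verify the MAC before decrypting; reject on any mismatch.
    nonce, ct, tag = blob[:16], blob[16:-20], blob[-20:]
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha1).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("MAC check failed")
    ks = keystream(enc_key, nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))
```

Note the two design points the thread turns on: the MAC covers the ciphertext (not the plaintext), and the two passes over the data are exactly why a combined mode was hoped to be faster.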
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: AES Modes

2004-10-11 Thread Ian Grigg
Zooko provided a bunch of useful comments in private mail,
which I've edited and forward for list consumption.
Zooko Wilcox-O'Hearn wrote:
EAX is in the same class as CCM.  I think it's slightly better.  Also 
there is GCM mode, which is perhaps a tiny bit faster, although maybe 
not if you have to re-key every datagram.  Not sure about the 
key-agility of these.

... I guess the IPv6 sec project has already specified such a thing in 
detail.  I'm not familiar with their solution.

If you really want interop and wide adoption, then the obvious thing to 
do is backport IPsec to IPv4.  Nobody can resist the authority of IETF!

Alternately, if you don't use a combined mode like EAX, then you 
should follow the generic composition cookbook from Bellare and 
Rogaway [1, 2].

Next time I do something like this for fun, I'll abandon AES entirely 
(whee!  how exciting) and try Helix [3].  Also, I printed out this 
intriguing document yesterday [4].  Haven't read it yet.  It focusses on 
higher-layer stuff -- freshness and sequencing.

Feel free to post to metzcrypt and give me credit for bringing the 
following four URLs to your attention.

[1] http://www.cs.ucdavis.edu/~rogaway/ocb/ocb-back.htm#alternatives
[2] http://www.cs.ucsd.edu/users/mihir/papers/oem.html
[3] http://citeseer.ist.psu.edu/561058.html
[4] http://citeseer.ist.psu.edu/661955.html

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: [anonsec] Re: potential new IETF WG on anonymous IPSec (fwd from [EMAIL PROTECTED]) (fwd from [EMAIL PROTECTED])

2004-09-19 Thread Ian Grigg
Hadmut Danisch wrote:
On Thu, Sep 16, 2004 at 12:41:41AM +0100, Ian Grigg wrote:
It occurs to me that a number of these ideas could
be written up over time ... a wiki, anyone?  I think
it is high time to start documenting crypto
patterns.

Wikis are not that good for discussions, and I do believe
that this requires some discussion.
I'd propose a separate mailing list for that.
It possibly requires both.  A mailing list by itself
tends to generate great thoughts that don't get finished
by being turned into summaries.  Also, those in charge
tend to slow the process, just through being too busy.
(I'm not talking about just this list, I've noticed
the effect on RFC lists where the editor wakes up after
a week and skips all the debate and starts again.)
A wiki working with a mailing list might address both
those issues.
(It's just a guess, I've never really worked with a
Wiki, just read some entries over at wikipedia.)
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: public-key: the wrong model for email?

2004-09-17 Thread Ian Grigg
lrk wrote:
Perhaps it is time to define an e-mail definition of crypto to keep the
postman from reading the postcards. That should be easy enough to
implement for the average user and provide some degree of privacy for
their mail. Call it envelopes rather than crypto. Real security 
requires more than a Windoz program.
Oh, that's really easy.  Each mailer (MUA) should (on
install) generate a self-signed cert.  Stick the fingerprint
in the headers of every mail going out.  An MUA that sees
the fingerprint in an incoming mail can send a request email
to acquire the full key.  Or stick the entire cert in there,
it's not as if anyone would care.
Then each MUA can start encrypting to that key opportunistically.
Lots of variations.  But the key thing is that the MUA
should simply generate the key, sign it, and send it out
on demand, or more frequently.  There's really no reason
why this can't all be automated.  After all, the existing
email system is automated, and trusted well enough to
deliver email, so why can't it deliver self-signed certs?
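The mechanics above are nothing more than a hash and a mail header.  A hypothetical sketch (the header name and the idea of fingerprinting the DER bytes are my assumptions for illustration, not a proposal from the thread):

```python
import hashlib

def cert_fingerprint(cert_der: bytes) -> str:
    # Colon-separated SHA-1 fingerprint, in the style commonly
    # displayed for X.509 certificates.
    digest = hashlib.sha1(cert_der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def fingerprint_header(cert_der: bytes) -> str:
    # Hypothetical header an MUA could stamp on every outgoing mail.
    # A receiving MUA that sees it can request the full cert and
    # begin encrypting to that key opportunistically.
    return "X-Cert-Fingerprint: " + cert_fingerprint(cert_der)
```

The point of the scheme is that nothing here needs a CA or user interaction: the fingerprint is stable for a given cert, so a correspondent can cache the key and notice if it changes.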
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: public-key: the wrong model for email?

2004-09-16 Thread Ian Grigg
Adam Shostack wrote:
Given our failure to deploy PKC in any meaningful way*, I think that
systems like Voltage, and the new PGP Universal are great.
I think the consensus from debate back last year on
this group when Voltage first surfaced was that it
didn't do anything that couldn't be done with PGP,
and added more risks to boot.  So, yet another biz
idea with some hand wavey crypto, which is great if
it works, but it's not necessarily security.
* I don't see Verisign's web server tax as meaningful; they accept no
liability, and numerous companies foist you off to unrelated domains.
We could get roughly the same security level from fully opportunistic
or memory-opportunistic models.
Yes, or worse;  it turns out that Verisign may very
well be the threat as well as the solution.  As I
wrote here:
http://www.financialcryptography.com/mt/archives/000206.html
Verisign are in the eavesdropping business, which
not only calls into doubt their own certs, but also
all other CAs, and the notion of a trusted third
party as a workable concept.
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: dual-use digital signature [EMAIL PROTECTED]

2004-07-28 Thread Ian Grigg
Peter Gutmann wrote:
A depressing number of CAs generate the private key themselves and mail out to
the client.  This is another type of PoP, the CA knows the client has the
private key because they've generated it for them.
It's also cost-effective.  The CA model as presented
is too expensive.  If a group makes the decision to
utilise the infrastructure for signing or encryption,
then it can significantly reduce costs by rolling out
from the centre.
I see this choice as smart.  They either don't do it
at all, or they do it cheaply.  This way they have a
benefit.
(Then, there is still the option for upgrading to self-
created keys later on, if the project proves successful,
and the need can be shown.)
As a landmark, I received my first ever correctly
signed x.509 message the other day.  I've yet to find
the button on my mailer to generate a cert, so I could
not send a signed reply.  Another landmark for the
future, of course.
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Identity theft case could be largest so far

2004-07-22 Thread Ian Grigg
R. A. Hettinga wrote:
http://www.cnn.com/2004/LAW/07/21/cyber.theft/index.html

Identity theft case could be largest so far
From other reports, the indictment alleges that Levine
gained access ... by misusing a legitimate password and user
name while working for a company doing business with Acxiom.
I.e., not even a hack, but an insider theft.
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Using crypto against Phishing, Spoofing and Spamming...

2004-07-21 Thread Ian Grigg
Steve,
thanks for addressing the issues with some actual
anecdotal evidence.  The conclusions still don't
hold, IMHO.
Steven M. Bellovin wrote:
In message [EMAIL PROTECTED], Ian Grigg writes:

Right...  It's easy to claim that it went away
because we protected against it.  Unfortunately,
that's just a claim - there is no evidence of
that.
This is why I ask whether there has been any
evidence of MITMs, and listening attacks.  We
know for example that there were password
sniffing attacks back in the old days, by
hackers.  Hence SSH.  Costs - Solution.
But, there is precious little to suggest that
credit cards would be sniffed - I've heard one
isolated and unconfirmable case.  And, there is
similar levels of MITM evidence - anecdotes and
some experiences in other fields, as reported
here on this list.

I think that Eric is 100% correct here: it doesn't happen because it's 
a low-probability attack, because most sites do use SSL.
The trick is to show cause and effect.  We know the
effect and we know the cause(s).  The question is, how
are they related?  The reason it is important is that
we may misapply one cause if the effect results from
some other cause.
I think that people are forgetting just how serious the password 
capture attacks were in 1993-94.  The eavesdropping machines were on 
backbones of major ISPs; a *lot* of passwords were captured. 
Which led to SSH, presumably, and was pre-credit card
days, so can only be used as a prediction of eavesdropping.
Question - are we facing a situation today whereby it is
easy to eavesdrop from the backbone of a major ISP and
capture a lot of traffic?  As far as I can see, that's
not likely to happen, but it could happen.
Secondly, who were the people doing those attacks?  Back
in 93-94, I'd postulate they weren't criminal types, but
hacker types.  That is, they were hackers looking for
machines.  Those people are still around - defeated by
SSH in large measure - and use other techniques now.
(Hackers had no liability in those days.  Criminals do
have liability, and are more concerned to cover their
tracks.  This makes active attacks less useful to them.
Criminals are getting braver though.)
Thirdly, why aren't we seeing more reports of this on
802.11b networks?  I've seen a few, but in each case,
the attack has been to hack into some machine.  I've
yet to see a case where listeners have scarfed up some
free email account passwords, although I suppose that
this must happen.
The point of all this is that we need to establish how
frequent and risky these things are.  Back in the pre-
commerce days, a certain amount of FUD was to be expected.
Now however, it's been a decade - whether that FUD was
warranted then is an issue for the historians, but now
we should be able to scientifically make a case that
the posture matches the threats.  Because it's been a
decade (almost).
As far as I can see, there is *some* justification for
expecting eavesdropping attacks to credit cards.  There
is a lot more justification with unprotected non-commerce.
And in contrast, there is little justification for
expecting active attacks for purposes of theft.

What this leads to is not whether SSL should have been
deployed or changed in its current form (it is fruitless
to debate that, IMHO, except in order to lay down the
facts) but a discussion of certificates.
There seems some justification in suggesting that SSL be
(continued to be) deployed in any form.  Mostly, IMHO,
in areas outside commerce, and mostly, in the future,
not now.
There seems a lot of justification for utilising certs as
they enable relationship-protection.  There seems quite a
bit of justification for utilising CA-signed certs because
they permit more advanced relationship protection such as
Amir's logo ideas and my branding ideas, and more so every
day.
What there doesn't appear to be any justification for is
the effective or defacto mandating of CA-signed certs.
And there appears to be a quite serious cost involved in
that mandating - the loss of protection from the resultant
*very* low levels of SSL deployment.
This all hangs on the MITM - hence the question of frequency.
It seems to be very low, an extraordinarily desperate attack
for a criminal, especially in the light of experience.  He
does phishing and hacking with ease, but he doesn't like
leaving tracks in the infrastructure that point back to him.
If the MITM cannot be justified as an ever-present danger,
then there is no justification for the defacto mandating of
CA-signed certs.  Permitting and encouraging self-signed
certs would then make deployment of SSL much easier, and
thus increase use of SSL - in my view, dramatically -
which would lead to much better protection.  (Primarily
by relationship management on the client side, and also
by branding/logo management with the CAs, but that needs
to be enabled in code first at the browsers.)
(It has to be said that encouraging anon-diffie-hellman
SSL would also lead to dramatically improved levels of
SSL

Re: Using crypto against Phishing, Spoofing and Spamming...

2004-07-18 Thread Ian Grigg
Eric Rescorla wrote:
Ian Grigg [EMAIL PROTECTED] writes:
Notwithstanding that, I would suggest that the money
already lost is in excess of the amount paid out to
Certificate Authorities for secure ecommerce certificates
(somewhere around $100 million I guess) to date.  As
predicted, the CA-signed certificate missed the mark,
secure browsing is not secure, and the continued
resistance against revision of the browser's useless
padlock display is the barrier to addressing phishing.

I don't accept this argument at all.
There are at least three potential kinds of attack here:
(1) Completely passive capture attacks.
(2) Semi-active attacks that don't involve screwing with
the network infrastructure (standard phishing attacks)
By (2) I guess you mean a bypass MITM?
(3) Active attacks on the network infrastructure.
By (3) I guess you mean a protocol level MITM.
Then, there is:
(4) Active attacks against the client.  By this I mean
hacking the client, installing a virus, malware,
spyware or whathaveyou.  (This is now real, folks.)
(5) Active attacks against the server.  Basically,
hacking the server and stealing all the good stuff.
(This has always been real, ever since there have
been servers.)
(6), (7) Insider attacks against client, server.
Just read off the data and misuse it.  (This has
been real since the dawn of time...)
Of course, SSL/SB doesn't protect against any of these,
and many people therefore assume the thinking stops
there.  Sadly, no.  Even though SSL doesn't protect
against these attacks, the frequency  cost of these
attacks directly impacts on the design choices of
secure browsing.
SSL does a fine job of protecting against (1) and a fairly adequate
job of protecting against (3). Certainly you could do a better job
against (3) if either:
(a) You could directly connect to sites with SSL a la
https://www.expedia.com/
(b) The identities were more user-friendly as we anticipated back in
the days of S-HTTP rather than being domain names, as required by
SSL. 

It does a lousy job of protecting against (3).
Sorry, I'm having trouble parsing fairly adequate
versus lousy job for threat (3)...  Both (a) and (b)
seem to deserve some examples?  I can connect directly
to expedia, and https://www.paypal.com/ is friendly
enough?
(Hmmm... I tell a lie, there is no https://www.expedia.com/
as it redirects.)
Now, my threat model mostly includes (1),  does not really include
(3), and I'm careful not to do things that leave me susceptible
to (2), so SSL does in fact protect against the attacks in my
threat model. I know a number of other people with similar threat
models. Accordingly, I think the claim that secure browsing
is not secure rather overstates the case.
(1) OK.  Now, granted, SSL protects against (1), fairly
finely.  It does so in all its guises, although the
CA-signed variant in secure browsing does so at some
additional unneeded expense, as it eliminates certain
secure options, being SSCs and ADH.  OTOH, this is a
really rare attack - actual damage from sniffing HTTP
traffic doesn't seem to be recorded anywhere as a real
attack on people, so forgive me if I downgrade this one
as almost not a threat.
(2) Then we come to (2), what i'd call a bypass MITM.  Or
a phish or a spoof.  (I'm not sure what "semi-active"
and "infrastructure" have to do with it.)  This one is
certainly a threat.
When the browser is presented with a URL which happens
to purport only to be some secure site, without really
being that site, this is a spoof.  Your defence is to
be careful against this attack.  So, your defence is
nothing to do with SSL or secure browsing or anything really,
literally, (2) is unprotected against by SSL and secure
browsing in all their guises.  You yourself provide the
protection, because SSL / secure browsing does not.  Of
course.
That is my point - secure browsing does not protect
against any real  present threat.
(3)  I don't understand at all.  But you suggest that
it's not your threat and it isn't protected well against.

In summary - we are left with one attack that is well
protected against, but isn't really seen that much,
and could be done with ADH.  Then, another attack that
you deal with yourself, so that's not really relevant
coz you're smart and experienced, and those using
browsers on the average are not, and they are hit by
the attack.  Then there is (3).
(And we haven't even begun on (4) thru (7).  What then,
is a threat model that only includes some threats?)
So in sum, I think my argument remains unchallenged:
secure browsing fails to secure.
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: On `SSL considered harmful`, correct use of condoms and SSL abuse

2004-07-18 Thread Ian Grigg
Amir Herzberg wrote:
(Amir, I replied to your other comments over on the
Mozilla security forum, which is presumably where they
will be more useful.  That just leaves this:)
So while `SSL is harmful` sounds sexy, I think it is misleading. Maybe 
`Stop SSL-Abuse!`
Ha!  I wondered when someone would take me to task over
that title :-)
Here's the thing:  the title comes from a seminal paper
called "Gotos considered harmful" [1].  This was a highly
controversial paper in the 70s or so that in no small
part helped the development of structured programming.
What the author of that paper was trying to say was not
that the Goto was bad, but its use was substantially
related to poor programming practice.
And that's the point I'm making.  The Goto is just a
tool like any other.  But, the Goto became a tool over-
deployed and widely abused, as its early and liberal
use by a programmer took no account of later maintenance
costs that were incurred by the owner of the code.  So
the Goto became synonymous with bad programming and
excessive costs.
The same situation exists with SSL/TLS.  As a protocol,
it's a fine tool.  It's strong, it's well reviewed, and
it has corrected its deficiencies over time.
But, it also comes with a wider security model.  For
starters, the CA-signed regime.  As well as that, it
comes with a variety of other baggage, which basically
amounts to use SSL/TLS as it is recommended and you
will be secure.
Unfortunately, this is wrong, and the result is bad
security practice.  Yet, we do have a generation of
people out there believing that because they have put
huge amounts of effort into implementing SSL with
its certs regime that they are secure.
We can see this ludicrous situation with the email
and chat variants of SSL / cert protected traffic.
In those cases the result is the same:  If one
suggests that the correct approach is for them to
use SSCs (self signed certs) or equivalent, people
go all weak and wobbly at the knees and start ranting
on about how those are insecure.
Yet these same systems are totally open to attacks
at the nodes and often to the intermediate hops,
which of course is where 99% of the attacks are [2].
These programmers truly believe that in order to
get security, they must deploy SSL.  As the manual
tells them to.  They are truly wrong.  In this,
SSL has harmed them, because it has blinded them
to the real risks that they are facing.
It's not the tool that has hurt them, but as you
suggest the abuse of the tool.  Edsger Dijkstra
called for the abolition of Gotos as the way to
address the harm he saw being done.  That solution
may offend, as the tool itself cannot have harmed.
But, how else can we stop people deploying the tool
so abusively?
iang
[1] Edsger W. Dijkstra, Go To Statement Considered Harmful,
http://www.acm.org/classics/oct95/
[2] Jabber's use of SSL seems to mirror STARTTLS.
They both protect the traffic on the wire, but not
at rest on the hops.  The certificate system built
into mailers (name?) at least organises an end-to-end
packet protection, thus leaving the two end nodes
as the places at most risk, still by far the most
likely place to be attacked.


Re: Using crypto against Phishing, Spoofing and Spamming...

2004-07-18 Thread Ian Grigg
Enzo Michelangeli wrote:
Can someone explain me how the phishermen escape identification and
prosecution? Gaining online access to someone's account allows, at most,
to execute wire transfers to other bank accounts: but in these days
anonymous accounts are not exactly easy to get in any country, and anyway
any bank large enough to be part of the SWIFT network would cooperate in
the resolution of obviously criminal cases.
In practice something like this:  Most of the
money is wired through to some stolen account,
and then moved out of the system to another system.
This might be a foreign account, or it might be a non-
bank such as a broker/dealer (E*Trade is being hit at
the moment, it seems) or it might be a digital gold
currency.  From there, it is moved once or twice more,
then back to the country where the phisher is.  This
might be the US or Russia, or anywhere else, but those
two countries seem to be quite big on this (maybe we
should blame Reagan :-) )
A couple of things:  it is very hard, but not impossible
to reverse a SWIFT style international wire.  I've seen
it done once, so I know it is not impossible.  If the
cash has gone, then reversing it doesn't make sense.
Also, phishing
isn't exactly a recognised and obvious criminal case.
Any particular instance might be, but getting to that
determination might take months.  Further, opening
accounts for anonymous purposes is still rather easy
in many countries, the chief perpetrator of this being
the USA.  Finally, every attempt to make money less like
money (by closing off easy accounts, for example) results
in what some call unintended consequences - the money
goes elsewhere.
iang


Re: Using crypto against Phishing, Spoofing and Spamming...

2004-07-17 Thread Ian Grigg
At 10:46 AM 7/10/2004, Florian Weimer wrote:
But is it so harmful?  How much money is lost in a typical phishing
attack against a large US bank, or PayPal?  (I mean direct losses due
to partially rolled back transactions, not indirect losses because of
bad press or customer feeling insecure.)
I estimated phishing losses about a month ago at about
a GigaBuck.
http://www.financialcryptography.com/mt/archives/000159.html
You'll also see two other numbers in that blog entry,
being $5 billion and $400 million (the latter taken
from Lynn's posted articles).
Of course these figures are very delicate, so we need
to wait a bit to get the real damage with any degree
of reliability.  Scientific skepticism should abound.
Notwithstanding that, I would suggest that the money
already lost is in excess of the amount paid out to
Certificate Authorities for secure ecommerce certificates
(somewhere around $100 million I guess) to date.  As
predicted, the CA-signed certificate missed the mark,
secure browsing is not secure, and the continued
resistance against revision of the browser's useless
padlock display is the barrier to addressing phishing.
iang


Re: New Attack on Secure Browsing

2004-07-16 Thread Ian Grigg
Aram,
It's now pretty clear that PGP had no clue what this was
all about.  Apologies to all, that was my mistake.  Also,
to clarify, there was no SSL involved.
What we are looking at is a case of being able to put a
padlock on the browser in a place that *could* be confused
by a user.  This is an unintended consequence of the
favicon design by Microsoft.
Now, another thing becomes clearer, from your report and
others:  Microsoft implemented the display of the favicon
only as accepted / chosen by the user.  You have to add
this site as a favourite.
Other browsers - the competitors - went further and
displayed the favicon on arrival at the site.  I guess
they felt that it could be more useful than Microsoft
had intended.  But, in this case, it seems that they
may have stumbled on something that goes too far.
What will save them in this case is that the numbers of
users of such non-Microsoft browsers are relatively small.
If the tables were turned, and it was Microsoft that was
vulnerable, I'd confidently predict that we would see
some attempted exploits of this in the next month's
phishing traffic.
iang
Aram Perez wrote:
Hi Ian,

Congratulations go to PGP Inc - who was it, guys, don't be shy this
time? - for discovering a new way to futz with secure browsing.
Click on http://www.pgp.com/ and you will see an SSL-protected page
with that cute little padlock next to domain name.  And they managed
that over HTTP, as well!  (This may not be seen in IE version 5 which
doesn't load the padlock unless you add it to favourites, or some
such.)

Here what I saw when going to the PGP site:
Windows XP Pro:
IE 6.x: No padlock
Firefox 0.9.2:  Padlock on address bar and tab
Mac OS 10.2.8:
IE 5.2: No padlock
Safari 1.0.2:   Padlock on address bar but not on tab
Firefox 0.8:    Padlock on address bar and tab
Camino 0.7: Padlock on address bar and tab
You stated that http://www.pgp.com is an SSL-protected page, but did you
mean https://www.pgp.com? On my Powerbook, with all the browsers I get an
error that the certificate is wrong and they end up at http://www.pgp.com.
I'm not sure if PGP deliberately set out to confuse naïve users since their
logo has been the padlock for a while. Many web sites have their logo
displayed on the address bar (and tab) when you go to their site, see
http://www.yahoo.com or http://www.google.com. Maybe Jon can answer the
question.


Re: New Attack on Secure Browsing

2004-07-16 Thread Ian Grigg
Anton Stiglic wrote:
You stated that http://www.pgp.com is an SSL-protected page, but did you
mean https://www.pgp.com? On my Powerbook, with all the browsers I get an
error that the certificate is wrong and they end up at http://www.pgp.com.

What I get is a bad certificate, and this is due to the fact that the
certificate is issued to store.pgp.com and not www.pgp.com.
Interestingly (maybe?), when you go and browse on their on-line store, and
check something out to buy, the session is secured but with another
certificate, one issued to secure.pgpstore.com.

Just to clarify, there is no SSL cert involved - or
there shouldn't be?!  My original post was pointing
out that it is possible to fool users by putting a
favicon padlock in place.  This seems to work only
on non-IE browsers, as these are the ones that went
further and display the favicon without further
user intervention.
If users can be so fooled, then they can be encouraged
to enter their details as if they are logging into the
site (not PGP but say e*Trade).  Hey presto, stolen
authentication, and stolen money.
I didn't expect so much confusion on this point, but
if indeed that wasn't obvious so much the better:
that was the issue, that people could be easily
confused!
iang


New Attack on Secure Browsing

2004-07-15 Thread Ian Grigg
(( Financial Cryptography Update: New Attack on Secure Browsing ))
 July 15, 2004

http://www.financialcryptography.com/mt/archives/000179.html


Congratulations go to PGP Inc - who was it, guys, don't be shy this
time? - for discovering a new way to futz with secure browsing.
Click on http://www.pgp.com/ and you will see an SSL-protected page
with that cute little padlock next to domain name.  And they managed
that over HTTP, as well!  (This may not be seen in IE version 5 which
doesn't load the padlock unless you add it to favourites, or some
such.)
Whoops!  That padlock is in the wrong place, but who's going to notice?
 It looks pretty bona fide to me, and you know, for half the browsers I
use, I often can't find the darn thing anyway.  This is so good, I just
had to add one to my SSL page (http://iang.org/ssl/ ).  I feel so much
safer now, and it's cheaper than the ones that those snake oil vendors
sell :-)
What does this mean?  It's a bit of a laugh, is all, maybe.  But it
could fool some users, and as Mozilla Foundation recently stated, the
goal is to protect those that don't know how to protect themselves.  Us
techies may laugh, but we'll be laughing on the other side when some
phisher tricks users with the little favicon.
It all puts more pressure on the oh-so-long overdue project to bring
the secure back into secure browsing.  Microsoft have befuddled the
already next-to-invisible security model even further with their
favicon invention, and getting it back under control should really be a
priority.
Putting the CA logo on the chrome now seems inspired - clearly the
padlock is useless.  See countless rants [1] listing the 4 steps needed
and also a new draft paper from Amir Herzberg and Ahmad Gbara [2]
exploring the use of logos on the chrome.
[1] SSL considered harmful
http://iang.org/ssl/
[2]  Protecting (even) Naïve Web Users,
or: Preventing Spoofing and Establishing Credentials of Web Sites
http://www.cs.biu.ac.il/~herzbea/Papers/ecommerce/spoofing.htm
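The trick above needs nothing more than a padlock image declared as the page's favicon; a minimal sketch of what such a page might look like follows (file names and page content are illustrative, not PGP's actual setup):

```python
# Sketch of the favicon trick described above: a plain-HTTP page
# declares a padlock image as its "shortcut icon", and browsers that
# display favicons next to the URL will render a padlock with no SSL
# involved at all.  File names here are illustrative.
def padlock_page(title: str, icon: str = "/favicon.ico") -> str:
    """Return minimal HTML asking the browser to show `icon`
    next to the URL -- exactly where users look for the padlock."""
    return (
        "<html><head>"
        f"<title>{title}</title>"
        f'<link rel="shortcut icon" href="{icon}">'
        "</head><body>Log in here...</body></html>"
    )

page = padlock_page("Your Bank", icon="/padlock.ico")
print(page)
```

No server-side cleverness is required, which is what makes the spoof cheap: any phisher can serve the same link tag.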


Re: Humorous anti-SSL PR

2004-07-15 Thread Ian Grigg
J Harper wrote:
This barely deserves mention, but is worth it for the humor:
Information Security Expert says SSL (Secure Socket Layer) is Nothing More
Than a Condom that Just Protects the Pipe
http://www.prweb.com/releases/2004/7/prweb141248.htm
I guess the intention was to provide more end-to-end
security for transaction data.  After a reasonable start,
if a bit scattered, it breaks down with this:
What we can be certain of is that it is not possible
to have a man-in-the-middle attack with FormsAssurity ...
encryption ensures that the form has really come from
the claimed web site, the form has not been altered,
and the only person that can read the information
filled in on the form is the authorized site.
Which is quite inconsistent - so much so that it seems
that the press release writer got confused over which
system he or she was talking about.
iang


Jabber does Simple Crypto - Yoo Hoo!

2004-07-12 Thread Ian Grigg
(( Financial Cryptography Update: Jabber does Simple Crypto - Yoo Hoo! ))
 July 12, 2004

http://www.financialcryptography.com/mt/archives/000176.html

Over in the Jabber community, the long-awaited arrival of opportunistic,
ad hoc cryptography has spawned a really simple protocol to use OpenPGP
messages over chat.  It's so simple, you can see everything you want in
this piece of XML (click above).
http://www.jabber.org/jeps/jep-0027.html
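The shape of the stanza is about as simple as it gets: an ordinary message whose extension element carries the ASCII-armored OpenPGP data. A hedged sketch of constructing one (the namespace follows my reading of JEP-0027; the PGP payload below is a placeholder, not real ciphertext):

```python
# Sketch of the JEP-0027 message shape: a normal Jabber <message>
# whose <x xmlns='jabber:x:encrypted'> child carries the ASCII-armored
# OpenPGP ciphertext of the real body.  The payload here is a
# placeholder, not real PGP output.
import xml.etree.ElementTree as ET

def encrypted_message(to: str, cleartext_hint: str, pgp_armored: str) -> str:
    msg = ET.Element("message", attrib={"to": to, "type": "chat"})
    body = ET.SubElement(msg, "body")
    body.text = cleartext_hint  # fallback for clients without OpenPGP
    x = ET.SubElement(msg, "x", attrib={"xmlns": "jabber:x:encrypted"})
    x.text = pgp_armored
    return ET.tostring(msg, encoding="unicode")

stanza = encrypted_message(
    "romeo@montague.net",
    "This message is encrypted.",
    "hQEMA...placeholder...==",
)
print(stanza)
```

That the whole protocol fits in one such element is exactly the point: simple, opportunistic crypto, no certificate machinery.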


Re: Using crypto against Phishing, Spoofing and Spamming...

2004-07-11 Thread Ian Grigg
Florian Weimer wrote:
There are simply too many of them, and not all of them implement
checks for conflicts.  I'm pretty sure I could legally register
Metzdowd in Germany for say, restaurant service.
This indeed is the crux of the weakness of the
SSL/secure browsing/CA system.  The concept
called for all CAs are equal which is an
assumption that is easily shown to be nonsense.
Until that assumption is reversed, the secure
browsing application is ... insecure.  (I of
course include no CA and self-signed certs
within the set of all CAs.)
The essence of any fixes in the browsers should
be to address the (rather fruitful) diversity
amongst CAs, and help the user to make choices
amongst the brands of same.
Some CAs are more equal than others... and the
sooner a browser recognises this, the better.
These bodies could issue logo certificates.

These certificates would only have value if there is extensive
verification.  We probably lack the technology to do that cheaply
right now, and the necessary level of international cooperation.
I'm not sure I understand how logo certs would
work, as there is still the possibility of same
being issued by CA-Nigeria and having remarkable
similarity to those issued by USPTO.
Until the CA is surfaced and thrust at the face
of the user, each browser's 100 or so root CAs
will be a fundamental weakness.  Including of
course the absence of CA, which is something
that is nicely hidden from the user.
iang


Re: EZ Pass and the fast lane ....

2004-07-10 Thread Ian Grigg
John Gilmore wrote:
[By the way, [EMAIL PROTECTED] is being left out of this conversation,
 by his own configuration, because his site censors all emails from me.  --gnu]
Sourceforge was doing that to me today!
Well, I am presuming that ... the EZ Pass does have an account
number, right?  And then, the car does have a licence plate?  So,
just correlate the account numbers with the licence plates as they
go through the gates.

If they could read the license plates reliably, then they wouldn't
need the EZ Pass at all.  They can't.  It takes human effort, which is
in short supply.
No, that is to confuse the collecting of tolls
with the catching of defrauders.  Consider one
to be the automatic turnstile and the other to
be the ticket inspector.  One records the tolls,
the other looks for error conditions.
The thing about phones is that they have no licence plates and no
toll gates.  Oh, and no cars.

Actually, cellphones DO have other identifying information in them,
akin to license plates.  And their toll gates are cell sites.
Yes, but so ineffective.  I can pass through the
toll gate - the cell site - and nobody can see
where I am.  I can make a call, and nobody can read
my location without doing complicated tracking stuff
with many cells.  The day that the cops get their
dream of cell phones being able to signal location,
that might change, but in the meantime, a cell phone
is for most purposes unlocatable.
Another factor is that the reward is very different,
one can save a lot more on a cellphone than a toll
way trip.
It's not clear what your remark about phones having no cars has to do
with the issue of whether EZ Pass is likely to be widely spoofed.
Sorry, yes:  if I catch a fraudster with a cell
phone, I can haul him down the station and seize
his phone.  BFD, it was probably stolen anyway.
If I catch a EZ Passter I can seize his car.
What incentive does a miscreant have to reprogram hundreds or
thousands of other cars???

(1) Same one they have for releasing viruses or breaking into
thousands of networked systems.  Because they can; it's a fun way to
learn.  Like John Draper calling the adjacent phone booth via
operators in seven countries.  (2) The miscreant gets a cheap toll
along with hundreds of other people who get altered tolls.
OK, so run this past me again.  I get to send a
virus or whatever that causes EZ Pass to go down
or mis-bill thousands of their customers, and I
also have to drive down the free way and drive
through their toll gates, in order to collect my
prize of ... a free ride on the toll way?
[Cory Doctorow's latest novel (Eastern Standard Tribe, available free
online, or in bookstores) hypothesizes MP3-trading networks among
moving cars, swapping automatically with whoever they pass near enough
for a short range WiFi connection.  Sounds plausible to me; there are
already MP3 players with built-in short range FM transmitters, so
nearby cars can hear your current selection.  Extending that to faster
WiFi transfers based on listening preferences would just require a
simple matter of software.  An iPod built by a non-DRM company might
well offer such a firmware option -- at least in countries where
networking is not a crime.  Much of the music I have is freely
tradeable.]
All of which is irrelevant.  The MP3s you are trading
do not generate a transaction request, being fraudulent
or otherwise, do not hit a server that has details on
who you are, and are probably encrypted so nobody can
tell what it is you are doing, thus forcing the cops
(IP terrorists being your #3 priority) to pull the car
to a halt and search for contraband music.
The only questions here are:  do the EZ Pass people have
your licence plate and your EZ Pass account number?  Do
they have the budget to employ some students with cameras?
Do they have the ability to target people who should be
travelling A - D but keep getting billed from B - C?
And, do the drivers who decide to defraud the EZ Pass
system have the ability to avoid 2 points, being any 2
of A, B, C, D?
iang


Re: EZ Pass and the fast lane ....

2004-07-09 Thread Ian Grigg
Date: Fri, 2 Jul 2004 21:34:20 -0400
From: Dave Emery [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: EZ Pass and the fast lane 

No mention is made of encryption or challenge response
authentication but I guess that may or may not be part of the design
(one would think it had better be, as picking off the ESN should be duck
soup with suitable gear if not encrypted).
From a business perspective, it makes no
sense to spend any money on crypto for this
application.  If it is free, sure use it,
but if not, then worry about the 0.01% of
users who fiddle the system later on.
It would be relatively easy to catch someone
doing this - just cross-correlate with other
information (address of home and work) and
then photograph the car at the on-ramp.
If the end result isn't as shown through
other means, then you have the evidence.
One high profile court case later, and the
chances of anyone copying this to escape
a toll fare shrink into the ignorable.
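The cross-correlation itself is trivial to automate; a toy sketch (accounts and gate reads are made up for illustration):

```python
# Toy sketch of the fraud check described above: for each gate read,
# compare the plate the camera photographed against the plate
# registered to the transponder's account, and flag mismatches for
# human follow-up.  All data here is illustrative.
accounts = {"EZ1001": "ABC-123", "EZ1002": "XYZ-999"}  # tag -> plate

gate_reads = [
    {"tag": "EZ1001", "plate": "ABC-123", "gate": "A"},  # honest
    {"tag": "EZ1002", "plate": "QQQ-777", "gate": "C"},  # cloned tag?
]

def suspicious(reads, accounts):
    """Return reads where the photographed plate isn't the account's."""
    return [r for r in reads if accounts.get(r["tag"]) != r["plate"]]

for r in suspicious(gate_reads, accounts):
    print(r["tag"], "at gate", r["gate"], "seen on plate", r["plate"])
```

Only the flagged handful ever needs a human looking at a photograph, which is why the crypto-free design can still be economic.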
iang


Mark Shuttleworth On Open Source

2004-07-09 Thread Ian Grigg
Security Theatre:  From the man who made hundreds of
millions selling signatures on your keys:
--
It is your data, why do you have to pay a licence
fee for the application needed to access the data?
-- Mark Shuttleworth
http://www.tectonic.co.za/default.php?action=view&id=309&topic=Open%20Source
http://www.go-opensource.org/



Re: EZ Pass and the fast lane ....

2004-07-09 Thread Ian Grigg
John Gilmore wrote:
It would be relatively easy to catch someone
doing this - just cross-correlate with other
information (address of home and work) and
then photograph the car at the on-ramp.

Am I missing something?
It seems to me that EZ Pass spoofing should become as popular as
cellphone cloning, until they change the protocol.  You pick up a
tracking number by listening to other peoples' transmissions, then
impersonate them once so that their account gets charged for your toll
(or so that it looks like their car is traveling down a monitored
stretch of road).  It should be easy to automate picking up dozens or
hundreds of tracking numbers while just driving around; and this can
foil both track-the-whole-populace surveillance, AND toll collection.
Miscreants would appear to be other cars; tracking them would not
be feasible.
Well, I am presuming that ... the EZ Pass
does have an account number, right?  And
then, the car does have a licence plate?
So, just correlate the account numbers
with the licence plates as they go through
the gates.
The thing about phones is that they have
no licence plates and no toll gates.  Oh,
and no cars.
The rewriteable parts of the chip (for recording the entry gate to
charge variable tolls) would also allow one miscreant to reprogram the
transponders on hundreds or thousands of cars, mischarging them when
they exit.  Of course, the miscreant's misprogrammed transponder would
just look like one of the innocents who got munged.
What incentive does a miscreant have to
reprogram hundreds or thousands of other
cars???
[I believe, by the way, that the EZ Pass system works just like many
other chip-sized RFID systems.  It seems like a good student project
to build some totally reprogrammable RFID chips that will respond to a
ping with any info statically or dynamically programmed into them by
the owner.  That would allow these hypotheses to be experimentally tested.]
Phones are great for spoofing because the
value can be high.  And, the risk of being
physically apprehended is low.  Cars and
toll ways are a different matter.
iang


The Ricardian Contract - using mundane cryptography to achieve powerful governance

2004-07-08 Thread Ian Grigg

 Original Message 
Subject: Financial Cryptography Update: The Ricardian Contract
Date: Wed, 7 Jul 2004 11:17:46 +0100
From: [EMAIL PROTECTED]
( Financial Cryptography Update: The Ricardian Contract )
 July 07, 2004

http://www.financialcryptography.com/mt/archives/000175.html


Presented yesterday at the IEEE's first Workshop on Electronic
Contracting, a new paper entitled The Ricardian Contract covers the
background and essential structure of Systemics' innovation in digital
contracts.  It is with much sadness that I am writing this blog instead
of presenting, but also with much gladness that Mark Miller, of E and
capabilities fame, was able to step in at only a few hours notice.
http://iang.org/papers/ricardian_contract.html
That which I invented (with help from Gary Howland, my co-architect of
the Ricardo system for secure assets transfer) was a fairly mundane
document, digitised mundanely, and wrapped in some equally mundane
crypto.  If anything, it's a wonderful example of how to use very basic
crypto and software tools in a very basic fashion to achieve something
much bigger than its parts.
In fact, we thought it so basic that we ignored it, thinking that
people will just copy it.  But, no-one else did, so nearly a decade
after the fact, I've finally admitted defeat and gone back to
documenting why the concept was so important.
The Ricardian Contract worked to the extent that when people got it,
they got it big.  In a religious sense, which meant that its audience
was those who'd already issued, and intuitively felt the need.  Hasan
coined the phrase that the contract is the keystone of issuance, and
now Mark points out that a major element of the innovation was in the
bringing together of the requirements from the real business across to
the tech.
They are both right.  Much stuff didn't make it into the paper - it had
hit 20 pages by the time I was told I was allowed 8.  Slashing
mercilessly reduced it, but I had to drop the requirements section,
something I now regret.
Mark's comment on business requirements matches the central message of
FC7 - that financial cryptography is a cross-discipline game.  Hide
yourself in your small box, at your peril.  But, no person can
appreciate all the apposite components within FC7, so we are forced to
build powerful, cross-discipline tools that ease that burden.  The
Ricardian Contract is one such - a tool for bringing the technical
world and the legal world together in issuance of robust financial
value.
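The "mundane crypto" really is mundane: the contract is a human-readable document, and every transaction names it by its message digest, binding payments to the legal terms. A sketch under my own assumptions (field names and the particular hash are illustrative, not the Ricardo wire format):

```python
# Sketch of the core Ricardian mechanism described above: a readable
# contract document is identified by its message digest, and every
# transaction carries that digest, binding payments to the contract's
# terms.  Field names and the hash choice here are illustrative.
import hashlib

contract = b"""Name: Example Dollar
Issuer: Example Issuer Inc.
Terms: One unit redeemable as described herein.
"""

# The digest becomes the contract's identifier throughout the system.
contract_id = hashlib.sha1(contract).hexdigest()

def make_payment(amount: int, payee: str) -> dict:
    """Every transaction names the contract by hash, not by title."""
    return {"contract": contract_id, "amount": amount, "payee": payee}

tx = make_payment(100, "alice")
print(tx["contract"][:12], tx["amount"], tx["payee"])
```

Change one character of the contract and the identifier changes, so a payment can never silently drift away from the terms it was issued under.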


Re: authentication and authorization

2004-07-07 Thread Ian Grigg
John Denker wrote:
[identity theft v. phishing?]
That's true but unhelpful.  In a typical dictionary you will
find that words such as
Identity theft is a fairly well established
definition / crime.  Last I heard it was the
number one complaint at the US FTC.
Leaving that aside, the reason that phishing
is lumped in there is that it is *like* id
theft, rather than being id theft.  Just like
as many have pointed out that phishing is
*like* spam, and now we are dealing with the
fact that it is not spam.
...
But I don't approve of the rest of his paragraph:
  So the reality of it is, the predeliction with
  identity being the root key to all power is the
  way society is heading. I don't like it, but
  I'm not in a position to stop the world turning.

First of all, not everything is heading the wrong way.
The Apache server has for eons had privilege separation
features.  The openssh daemon acquired such features
recently.  As far as I can see, the trend (in the open
software world at least) is in the right direction.
You are quoting a couple of obscure Internet
systems as evidence that society isn't moving
in the direction I indicated?
Yet, every day the papers are filled with the
progress the government is making on moving to
an identity-based system of control and commerce.
National drivers licences, foreigners being hit
with biometrics, etc etc.  Next time I cross the
borders, I probably have to be fingerprinted.
How many banks are introducing these obscure
features?  How many know what a capability is?
How to do a transactional security system, rather
than an identity system?
My claim seems unweakened as yet...

I don't know whether to laugh or cry when I think about how
phishing works, e.g.
http://www.esmartcorp.com/Hacker%20Articles/ar_Watch%20a%20hacker%20work%20the%20system.htm 

The so-called ID is doing all sorts of things it shouldn't
and not doing the things it should.  The attacker has to
prove he knows my home address, but does not have to prove
he is physically at that address (or any other physical place)
... so he doesn't risk arrest.
Curious - now that's a different phishing, but I
suppose it is close enough.  Need to think about
that one, I wouldn't call it phishing, just yet.
I'd call it invoice fraud, at first blush.
What I'd call phishing is this - mass mailings
to people about their bank accounts, collection
of the data, and then using the account details
to wire money out.
I guess we need some phishing experts to tell us
the real full definition.
Earlier Ian G. wrote:
  the security experts have shot their wad.

It doesn't even take a security expert to figure out easy
ways of making the current system less ridiculous.
It's not at issue whether you can or you can't -
what I was asserting is that no-one is asking you
(or me or anyone else).  Instead, cartels are being
formed, solutions being sold, congressmen lobbied,
etc, etc, and the real issues are being unaddressed.
...
which is consistent with what I've been saying.  I don't
think people have tried and failed to solve the phishing
problem --- au contraire, I think they've hardly tried.
I agree with that.
[1Gbux]
If the industry devoted even a fraction of that sum to
anti-scam activities, they could greatly reduce the losses.
Yes, but it won't.  This is the question - why not?
Here's the question:
http://www.financialcryptography.com/mt/archives/000169.html
And here's *an* answer:
http://www.financialcryptography.com/mt/archives/000174.html
I've been to the Anti-Phishing Working Group site, e.g.
  http://www.antiphishing.org/resources.html
They have nice charts on the amount of phishing observed
as a function of time.  But I haven't been able to find
any hard information about what they are actually doing
to address the problem.  The email forwarded by Dan Geer
was similarly vaporous.
I'm afraid I agree.  The purpose seems to be to
create a cartel, suck in some fees, and ... do
some stuff.  As the fees base ensures that only
corporations join, only those with solutions to
sell have an incentive to join.  So in a while
you'll see that they have a list of preferred
solutions.  None of which will address the
problem, but they'll sure make you feel safe
from the size of the price tag.
Here's an interesting link, describing the application of
actual cryptology to the problem:
  http://news.zdnet.co.uk/0,39020330,39159671,00.htm
IMHO it's at a remarkable place in the price/performance
space:  neither the cheapest quickdirty solution, nor the
ultimate high performance solution.  At least it refutes
the assertion about security experts' wads having been
shot.  This is one of the first signs I've seen that real
security experts have even set foot in this theater of
operations, let alone shot anything.
That's a standard solution in mainland Europe
for accessing online accounts.
I'm not sure how it addresses phishing (of the
sort that I know) as the MITM just sits in the
middle and passes the query and response back
and forth, no?
Those tokens 

Re: Question on the state of the security industry

2004-07-04 Thread Ian Grigg
[EMAIL PROTECTED] wrote:
I shared the gist of the question with a leader
of the Anti-Phishing Working Group, Peter Cassidy.
Thanks Dan, and thanks Peter,
...
I think we have that situation.  For the first
time we are facing a real, difficult security
problem.  And the security experts have shot
their wad. 
--- Part One
(just addressing Part one in this email)
I think the reason that, to date, the security community has
been largely silent on phishing is that this sort of attack was
considered a confidence scheme that was only potent against
dim-wits - and we all know how sympathetic the IT
security/cryptography community is to those with less than
powerful intellects.

OK.  It could well be that the community has an
inbuilt bias against protecting those that aren't
able to protect themselves.  If so, this would be
cognitive dissonance on a community scale:  in this
case, SSL, CAs, browsers are all set up to meet
the goal of totally secure by default.
Yet, we know there aren't any secure systems, this
is Adi Shamir's 1st law.
http://www.financialcryptography.com/mt/archives/000147.html
Ignoring attacks on dimwits is one way to meet that
goal, comfortably.
But, let's go back to the goal.  Why has it been
set?  Because it's been widely recognised and assumed
that the user is not capable of dealing with their own
security.  In fact, over the last decade, browsers
have migrated from a ternary security rating
presented to the user (to wit, the old 40-bit crypto
security) to a binary security rating, confirming
the basic principle that users don't know and don't
care, and thus the secure browsing model has to do
all the security for the user.  Further, they've been
protected from the infamous half-way house of self-
signed certs, presumably because they are too dim-
witted to recognise when they need less or more
security against the evil and pervasive MITM.
http://www.iang.org/ssl/mallory_wolf.html
Who is thus a dimwit.  And, in order to bring it
together with Adi's 1st law, we ignore attacks
on dimwits (or in more technical terms, we assume
that those attacks are outside the security model).
(A further piece of evidence for this is a recent
policy debate conducted by Frank Hecker of Mozilla,
which confirmed that the default build and root
list for distribution of Mozilla is designed for
users who could not make security choices for
themselves.)
So, I think you're right.
 Also, it is true, it was considered a
 sub-set of SPAM.
And?  If we characterise phishing as a sub-set
of spam, does this mean we simply pass the buck
to anti-spam vendors?  Or is this just another
way of cataloging the problem in a convenient
box so we can ignore it?
(Not that I'm disagreeing with the observation,
just curious as to where it leads...)

The reliance on broadcast spam as a vehicle for consumer data
recruitment is remaining but the payload is changing and, I
think, in that advance is room for important contributions by
the IT security/cryptography community. In a classic phishing
scenario, the mark gets a bogus e-mail, believes it and
surrenders his consumer data and then gets a big surprise on his
next bank statement. What is emerging is the use of spam to
spread trojans to plant key-loggers to intercept consumer data
or, in the future, to silently mine it from the consumer's PC.
Some of this malware is surprisingly clever. One of the APWG
committeemen has been watching the development of trojans that
arrive as seemingly random blobs of ASCII that decrypt
themselves with a one-time key embedded in the message - they
all go singing straight past anti-virus.
This is actually much more serious, and I've
noticed that the media has picked up on this,
but the security community remains
characteristically silent.
What is happening now is that we are getting
much more complex attacks - and viruses are
being deployed for commercial theft rather
than spyware - information theft - or ego
proofs.  This feels like the nightmare
scenario, but I suppose it's ok because it
only happens to dimwits?
(On another note, as this is a cryptography
list, I'd encourage Peter and Dan to report
on the nature of the crypto used in the
trojans!)
Since phishing, when successful, can return real money the
approaches will become ever more sophisticated, relying far less
on deception and more on subterfuge.
I agree this is to be expected.  Once a
revenue stream is earnt, we can expect that
money to be invested back into areas that
are fruitful.  So we can expect much more
and more complex and difficult attacks.
I.e., it's only just starting.

--- Part Two

iang


Re: authentication and authorization

2004-07-03 Thread Ian Grigg
Hi John,
thanks for your reply!
John Denker wrote:
The object of phishing is to perpetrate so-called identity
theft, so I must begin by objecting to that concept on two
different grounds.
1) For starters, identity theft is a misnomer.  My identity
is my identity, and cannot be stolen.
I think I'd echo Lynn's comments - it's the label
in use, so we might as well get used to it.  In
fact, the more I think of it, the more I realise
that a desire to get the right terms in place
might be part of the answer to the original question!
You are right that it's important to separate out
the two cases: the theft of the immediate account
(and money therein) which is more what phishing is,
from the acquisition of identity data in order to
open new places to steal from (credit ... see my
comments on why this is an American issue and
hence may have escaped the rest of the world's attention:
http://www.financialcryptography.com/mt/archives/000146.html )
2) Even more importantly, the whole focus on _identity_ is
pernicious.  For the vast majority of cases in which people
claim to want ID, the purpose would be better served by
something else, such as _authorization_.  For example,
when I walk into a seedy bar in a foreign country, they can
reasonably ask for proof that I am authorized to do so,
which in most cases boils down to proof of age.  They do
*not* need proof of my car-driving privileges, they do not
need my real name, they do not need my home address, and
they really, really, don't need some ID number that some
foolish bank might mistake for sufficient authorization to
withdraw large sums of money from my account.  They really,
really, reeeally don't need other information such as what
SCI clearances I hold, what third-country visas I hold, my
medical history, et cetera.  I could cite many additional
colorful examples, but you get the idea:  The more info is
linked to my ID (either by writing it on the ID card or
by linking databases via ID number) the _less_ secure
everything becomes.  Power-hungry governments and power-
hungry corporations desire such linkage, because it makes
me easier to exploit ... but any claim that such linkable
ID is needed for _security_ is diametrically untrue.
Again, I see here an answer to why it is the
security industry is being ignored - all that
above is well and good in theory, but it doesn't
translate as easily to practice.  I mean, as a
hypothetical test - just how do you deliver some
form of privileges system that allows one person
to know my age, and another to know my sex, and
another to know my drinking problems?
That's not really a solved *cheap* problem, is it?
So the reality of it is, the predilection with
identity being the root key to all power is the
way society is heading.  I don't like it, but
I'm not in a position to stop the world turning.
===
Returning to:
   For the first
  time we are facing a real, difficult security
  problem.  And the security experts have shot
  their wad.
I think a better description is that banks long ago
deployed a system that was laughably insecure.  (They got
away with it for years ... but that's irrelevant.)  Now
that there is widespread breakage, they act surprised, but
none of this should have come as a surprise to anybody,
expert or otherwise.
I think the security industry must at least
acknowledge their part in this.  For a decade
now we as a field have been telling everyone
that secure browsing with SSL and CA-signed
certs and all that stuff is ... secure.
What was that quote?  The Netscape and Microsoft
Secure E-Commerce System ??
In fact, we're still saying it, and mentally,
about half the field refuses to believe that
the secure browsing security model has been
breached.  The issue runs very deep, and a
lot of sacred cows have to be slaughtered
before this one will be resolved.
I mean, we could just go on ignoring it, but
that might explain why we are being ignored?
Now banks and their customers are paying the price.  As
soon as the price to the banks gets a little higher, they
will deploy a more-secure payment authorization scheme,
and the problem will go away.
Well, it is true, in a sense, that as the problem
gets more expensive, there is more incentive to
fix it.  So far the banks have fiddled at the
edges with server based stuff.  But that can't
help them much.  About the only thing that can
help them directly is if they lock out other IP
numbers but that's a difficult one.
The issue is one for the client side to solve.
The user is the one who is being enticed with
the dodgy link.  So it's one of these three
agents:  user, mailer, browser.
(Note that I didn't say ID scheme.  I don't care who
knows my SSN and other ID numbers ... so long as they
cannot use them to steal stuff.  And as soon as there
is no value in knowing ID numbers, people will stop
phishing for them.)
I think if we re-characterise phishing as the
part of identity theft where accounts are stolen
directly, we might have more of an acceptable
compromise on 

Question on the state of the security industry

2004-06-30 Thread Ian Grigg
The phishing thing has now reached the mainstream,
epidemic proportions that were feared and predicted
in this list over the last year or two.  Many of
the solution providers are bailing in with ill-
thought out tools, presumably in the hope of cashing
in on a buying splurge, and hoping to turn the
result into lucrative cash flows.
In other news, Verisign just bailed in with a
service offering [1].  This is quite cunning,
as they have offered the service primarily as
a spam protection service, with a nod to phishing.
In this way they have something, a toe in the
water, but they avoid the embarrassing questions
about whatever happened to the last security
solution they sold.
Meanwhile, the security field has been deathly
silent.  (I recently had someone from the security
industry authoritatively tell me phishing wasn't
a problem  ... because the local plod said he
couldn't find any!)
Here's my question - is anyone in the security
field of any sort of repute being asked about
phishing, consulted about solutions, contracted
to build?  Anything?
Or, are security professionals as a body being
totally ignored in the first major financial
attack that belongs totally to the Internet?
What I'm thinking of here is Scott's warning of
last year:
  Subject: Re: Maybe It's Snake Oil All the Way Down
  At 08:32 PM 5/31/03 -0400, Scott wrote:
  ...
  When I drill down on the many pontifications made by computer
  security and cryptography experts all I find is given wisdom.  Maybe
  the reason that folks roll their own is because as far as they can see
  that's what everyone does.  Roll your own then whip out your dick and
  start swinging around just like the experts.
I think we have that situation.  For the first
time we are facing a real, difficult security
problem.  And the security experts have shot
their wad.
Comments?
iang
[1] Lynn Wheeler's links below if anyone is interested:
VeriSign Joins The Fight Against Online Fraud
http://www.informationweek.com/story/showArticle.jhtml;jsessionid=25FLNINV0L5DCQSNDBCCKHQ?articleID=22102218
http://www.infoworld.com/article/04/06/28/HNverisignantiphishing_1.html
http://zdnet.com.com/2100-1105_2-5250010.html
http://news.com.com/VeriSign+unveils+e-mail+protection+service/2100-7355_3-5250010.html?part=rsstag=5250010subj=news.7355.5
[2] sorry, the original email I couldn't
find, but here's the snippet, routed at:
http://www.mail-archive.com/[EMAIL PROTECTED]/msg01435.html


threat modelling tool by Microsoft?

2004-06-09 Thread Ian Grigg
Has anyone tried out the threat modelling tool
mentioned in the link below, or reviewed the
book out this month:
http://aeble.dyndns.org/blogs/Security/archives/000419.php
The Threat Modeling Tool allows users to create threat
model documents for applications. It organizes relevant
data points, such as entry points, assets, trust levels,
data flow diagrams, threats, threat trees, and vulnerabilities
into an easy-to-use tree-based view. The tool saves the
document as XML, and will export to HTML and MHT using
the included XSLTs, or a custom transform supplied by
the user.
The Threat Modeling Tool was built by Microsoft Security
Software Engineer Frank Swiderski, the author of Threat
Modeling (Microsoft Press, June 2004).
--
iang


Re: Yahoo releases internet standard draft for using DNS as public key server

2004-06-01 Thread Ian Grigg
Dave Howe wrote:
Peter Gutmann wrote:
It *is* happening, only it's now called STARTTLS (and if certain vendors
(Micromumblemumble) didn't make it such a pain to set up certs for 
their MTAs
but simply generated self-signed certs on install and turned it on by 
default,
it'd be happening even more).
TLS for SMTP is a nice, efficient way to encrypt the channel. However, 
it offers little or no assurance that your mail will *stay* encrypted 
all the way to the recipients.

That's correct.  But, the goal is not to secure
email to the extent that there is no risk, that's
impossible, and arguing that the existence of a
weakness means you shouldn't do it just means that
we should never use crypto at all.
See those slides that Adi Shamir put up, I collected
the 3 useful ones in a recent blog:
http://www.financialcryptography.com/mt/archives/000147.html
I'd print these three out and post them on the wall,
if I had a printer!
The goal is to make it more difficult, within a
tight budget.  Using TLS for SMTP is free.  Why
not do it?
(Well, it's free if self-signed certs are used.
If CA-signed certs are used, I agree, that exceeds
the likely benefit.)
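(To show how cheap the channel upgrade is, here's a minimal Python sketch using the standard library's smtplib.  The host and address names are hypothetical, and a real deployment has to decide how to verify - or deliberately not verify - self-signed certs:)

```python
import smtplib
import ssl

def send_via_starttls(host, sender, recipient, message):
    """Opportunistic TLS for SMTP: connect in the clear, then
    upgrade the channel with STARTTLS before handing over the
    message.  This protects the first hop only - onward relaying
    is up to the next MTA.  (Hypothetical helper, illustration only.)"""
    ctx = ssl.create_default_context()
    # To accept the self-signed certs discussed above, verification
    # would have to be relaxed or keyed to a cached fingerprint, e.g.:
    #   ctx.check_hostname = False; ctx.verify_mode = ssl.CERT_NONE
    with smtplib.SMTP(host, 25, timeout=10) as smtp:
        smtp.starttls(context=ctx)  # channel is encrypted from here on
        smtp.sendmail(sender, recipient, message)
```

One call, no certs to buy - which is the whole point of the "free" argument above.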

Most of us (including me most of the time) are in the position of using 
their ISPs or Employer's smarthost to relay email to its final 
destination; in fact, most employers (and many ISPs) actually enforce 
this, redirecting or blocking port 25 traffic.
If my employer or isp accept TLS traffic from me, but then turn around 
and send that completely unprotected to my final recipient, I have no 
way of preventing or even knowing that.
Sendmail's documentation certainly used to warn this was the case - 
probably still does :)

a) Once a bunch of people send mail via TLS/SMTP,
the ISP is incentivised to look at onward forwarding
it that way.
b) It may be that your local threat is the biggest,
if for example you are using 802.11b to send your
mail.  The threat of listening from the ISP onwards
is relatively small compared to what goes on closer
to the end nodes.
c) every node that starts protecting traffic this
way helps - because it boxes the attacker into
narrower and narrower attacks.  It may be that the
emails are totally open over the backbone, but who
cares if the attacker can't easily get there?
iang


Re: Yahoo releases internet standard draft for using DNS as public key server

2004-06-01 Thread Ian Grigg
Dave Howe wrote:
Ian Grigg wrote:
 Dave Howe wrote:
 TLS for SMTP is a nice, efficient way to encrypt the channel.
 However, it offers little or no assurance that your mail will
 *stay* encrypted all the way to the recipients.
 That's correct. But, the goal is not to secure email to the extent
 that there is no risk, that's impossible, and arguing that the
 existence of a weakness means you shouldn't do it just means that we
 should never use crypto at all.
No - it means you might want to consider a system that guarantees 
end-to-end encryption - not just first link, then maybe if it feels 
like it
That doesn't mean TLS is worthless - on the contrary, it adds an 
additional layer of both user authentication and session encryption that 
are both beneficial - but that *relying* on it to protect your messages 
is overoptimistic at best, dangerous at worst.

This I believe is a bad way to start looking
at cryptography.  There is no system that you
can put in place that you can *rely* upon to
protect your message.
(Adi Shamir again: #1 there are no secure systems,
ergo, it is not possible to rely on them, and
to think about relying will take one down false
paths.)
In general terms, most ordinary users cannot
rely on their platform to be secure.  Even in
specific terms, those of us running BSD systems
on laptops that we have with us all the time
still have to sleep and shower...  There are
people out there who have the technology to
defeat my house alarm, install a custom
key logger designed for my model of laptop,
and get out before the hot water runs out.
For that reason, I and just about everyone
else do not *rely* on tech to keep my message
safe.  If I need to really rely on it, I do what
Adolf Hitler did in November of 1944 - deliver
all the orders for the great breakout by secure
courier, because he suspected the enigma codes
were being read.  (He was right.)
Otherwise, we adopt what military people call
tactical security:  strong enough to keep
the message secure enough so that most of the
time it does the job.
The principle which needs to be hammered time
and time again is that cryptography, like all
other security systems, should be about risk
and return - do what you can and put up with
the things you can't.
Applying the specifics to things like TLS and
mail delivery - yes, it looks very ropey.  Why
for example people think that they need CA-signed
certs for such a thing when (as you point out)
the mail is probably totally unprotected for half
the journey is just totally mysterious.
iang


Re: The future of security

2004-05-26 Thread Ian Grigg
Ben Laurie wrote:
Steven M. Bellovin wrote:

The spammers are playing with other people's money, cycles, etc.  They 
don't care.

We took that into account in the paper. Perhaps you should read it?
http://www.dtc.umn.edu/weis2004/clayton.pdf

(Most of the people on this list are far too
professional and busy to fall for that.  If
the argument has merit, please summarise it.
If it really has merit, the summary might
tease people into reading the full paper.)
I for one don't see it.  I like hashcash as
an idea, but fundamentally, as Steve suggests,
we expect email from anyone, and it's free.
We have to change one of those basic features
to stop spam.  Either make it non-free, or
make it non-authorised.  Hashcash doesn't
achieve either of those, although a similar
system such as a payment based system might
achieve it.
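(For readers who haven't seen it: hashcash is a partial hash-collision proof of work - the sender burns CPU time per message, the receiver verifies with one hash.  A toy Python sketch, simplified from the real stamp format, which also carries a version, date and random salt:)

```python
import hashlib
from itertools import count

def leading_zero_bits(stamp: str) -> int:
    """Number of leading zero bits in SHA-1(stamp)."""
    digest = hashlib.sha1(stamp.encode()).digest()
    n = int.from_bytes(digest, "big")
    return 160 - n.bit_length()

def mint(resource: str, bits: int = 12) -> str:
    """Search for a counter such that SHA-1(resource:counter) has
    `bits` leading zero bits - the sender's cost per message."""
    for c in count():
        stamp = f"{resource}:{c}"
        if leading_zero_bits(stamp) >= bits:
            return stamp

def valid(stamp: str, bits: int = 12) -> bool:
    """Verification is a single hash - cheap for the receiver."""
    return leading_zero_bits(stamp) >= bits
```

Minting cost doubles with each extra bit while verification stays constant, which is exactly the asymmetry the anti-spam argument depends on.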
Mind you, I would claim that if we change either
of the two fundamental characteristics of email,
then it is no longer email.  For this reason,
I predict that email will die out (ever so
slowly and painfully) to be replaced by better
and more appropriate forms of chat/IM.
iang


Mutual Funds - Timestamping

2004-05-25 Thread Ian Grigg
 Original Message 
http://www.financialcryptography.com/mt/archives/000141.html

In a rare instance of a useful use of cryptography in real life, the
mutual funds industry is looking to digital timestamping to save its
bacon [1].  Timestamping is one of those oh-so-simple applications of
cryptography that most observers dismiss for its triviality.
Timestamping is simply where an institution offers to construct a hash
or message digest over your document and the current time.  By this,
evidence is created that your document was seen at that time.  There
are a few details as to how to show that the time in one's receipt is
the right one, but this is trivial (meaning we know how to do it, not
that it is cheap to code up...) by interlinking a timestamp with the
preceding and following ones.  So without even relying on the
integrity of the institution, we can make strong statements such as
"after this other one and before this next one".
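(To make the interlinking concrete, here's a minimal Python sketch - hypothetical, ignoring the signatures and trusted time sources a real service would add:)

```python
import hashlib
import time

def _h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class TimestampService:
    """Minimal linked-timestamp sketch: each receipt hashes the
    document digest, the claimed time, and the previous receipt's
    link, so receipts are totally ordered without trusting the
    server's clock or honesty."""
    def __init__(self):
        self.receipts = []          # receipt i links back to receipt i-1

    def stamp(self, document: bytes, now: float = None) -> dict:
        now = time.time() if now is None else now
        prev = self.receipts[-1]["link"] if self.receipts else ""
        doc_digest = _h(document)
        link = _h(f"{doc_digest}|{now}|{prev}".encode())
        receipt = {"digest": doc_digest, "time": now, "prev": prev, "link": link}
        self.receipts.append(receipt)
        return receipt

def verify_chain(receipts) -> bool:
    """Recompute every link; tampering with any receipt's order,
    time or content breaks all subsequent links."""
    prev = ""
    for r in receipts:
        expected = _h(f"{r['digest']}|{r['time']}|{prev}".encode())
        if expected != r["link"]:
            return False
        prev = expected
    return True
```

A server like this really can do 100,000 documents a day without noticing - each stamp is two hashes.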
The SEC is proposing rule changes to make the 4pm deadline more serious
and proposes USPS timestamping as one way to manage this [2].  There
are several things wrong with the USPS and SEC going into this venture.
 But there are several things right with timestamping in general, to
balance this.  On the whole, given the complicated panoply of
strategic issues outlined earlier, timestamping could be a useful
addition to the mutual funds situation [3].
First what's wrong:  timestamping doesn't need to be regulated or
charged for, as it could easily be offered as a loss leader by any
institution.  A server can run a timestamping service and do 100,000
documents a day without noticing.  If there is any feeling that a
service might not be reliable, use two!  And, handing this commercial
service over to the USPS makes no regulatory sense in a competitive
market, especially when there are many others out there already [4].
Further, timestamping is just a small technical solution.  It shouldn't
need to be regulated at all, as it should be treated in any forum as
evidence.  Either the mutual fund accepts orders with timestamps, or it
doesn't.  If it doesn't, then it is taking a risk of being gamed, and
not having anything to cover it.  An action will now be possible
against it.  If it does only accept timestamped orders, then it's
covered.  Timestamping is better seen as best practices not as
Regulation XXX.
Especially, there are better ways of doing it.  A proper RTGS
transactional system has better protections built in of its nature than
timestamping can ever provide, and in fact a regulation requiring
timestamping will interfere with the implementation of proper solutions
(see for example the NSCC solution in [1]).  It will become just
another useless reg that has to be complied with, at cost to all and no
benefit to anyone.
Further, it should be appreciated that timestamping does not solve the
problem (but neither does the NSCC option).  What it allows for is
evidence that orders were received by a certain time.  As explained
elsewhere, putting a late order in is simply one way of gaming the fund
[5].  There are plenty of other ways.
Coming back to where we are now, though, timestamping will allow the
many small pension traders to identify when they got their order in.
One existing gaping loophole is that small operators are manual
processors and can take a long time about what they do.  Hence 4pm was
something that could occur the next day, as agreed by the SEC!  With
timestamping, 4pm could still be permitted to occur tomorrow, as long
as the pension trader has timestamped some key piece of info that
signals the intent.
For this reason, timestamping helps, and it won't hinder if chosen.
The SEC is to be applauded for pushing this forward with a white paper.
 Just as long as they hold short of regulation, and encourage mutual
funds to adopt this on an open, flexible basis as we really don't want
to slow down the real solutions, later on.


US intelligence exposed as student decodes Iraq memo

2004-05-25 Thread Ian Grigg

 Original Message 
Subject: Financial Cryptography Update: US intelligence exposed as student decodes 
Iraq memo
http://www.financialcryptography.com/mt/archives/000137.html

13 May 2004 DECLAN BUTLER
[http://www.nature.com/nature/].
http://www.nature.com/cgi-taf/DynaPage.taf?file=/nature/journal/v429/n6988/full/429116b_fs.html
(subscription required)
It took less than a week to decipher the blotted-out words.
Armed with little more than an electronic dictionary and text-analysis
software, Claire Whelan, a graduate student in computer science at
Dublin City University in Ireland, has managed to decrypt words that
had been blotted out from declassified documents to protect
intelligence sources.
She and one of her PhD supervisors, David Naccache, a cryptographer
with Gemplus, which manufactures banking and security cards, tackled
two high-profile documents. One was a memo to US President George Bush
that had been declassified in April for an inquiry into the 11
September 2001 terrorist attacks. The other was a US Department of
Defense memo about who helped Iraq to 'militarize' civilian Hughes
helicopters.
It all started when Naccache saw the Bush memo on television over
Easter. "I was bored, and I was looking for challenges for Claire to
solve. She's a wild problem solver, so I thought that with this one I'd
get peace for a week," Naccache says. Whelan produced a solution in
slightly less than that.
"Demasking blotted out words was easy," Naccache told Nature. "Optical
recognition easily identified the font type - in this case Arial - and
its size," he says. "Knowing this, you can estimate the size of the
word behind the blot. Then you just take every word in the dictionary
and calculate whether or not, in that font, it is the right size to fit
in the space, plus or minus 3 pixels."
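(The width-matching step is easy to sketch.  The per-character widths below are invented for illustration - a real attack would use the actual metrics of the identified font, here Arial:)

```python
# Toy sketch of the width-matching attack: estimate the rendered
# width of each candidate word and keep those that fit the blot.
# CHAR_WIDTH is made up for illustration, not real Arial metrics.
CHAR_WIDTH = {c: 10 for c in "abcdefghijklmnopqrstuvwxyz"}
CHAR_WIDTH.update({"i": 4, "l": 4, "j": 5, "t": 6, "f": 6, "r": 7,
                   "m": 15, "w": 14})

def word_width(word: str) -> int:
    """Approximate rendered width of a word, in pixels."""
    return sum(CHAR_WIDTH.get(c, 10) for c in word.lower())

def candidates(dictionary, blot_width: int, tolerance: int = 3):
    """Dictionary words whose width matches the blot to within
    +/- tolerance pixels (the 'plus or minus 3 pixels' above)."""
    return [w for w in dictionary
            if abs(word_width(w) - blot_width) <= tolerance]

# Tiny stand-in dictionary for the example:
words = ["Egyptian", "Ugandan", "Ukrainian", "French", "acetose"]
```

Running the filter against a blot the width of "Egyptian" already discards most of the toy dictionary; grammar and context then prune the survivors, as in the article.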

A computerized dictionary search yielded 1,530 candidates for a blotted
out word in this sentence of the Bush memo: "An Egyptian Islamic Jihad
(EIJ) operative told an  service at the same time that Bin
Ladin was planning to exploit the operative's access to the US to mount
a terrorist strike." A grammatical analyser yielded just 346 of these
that would make sense in English.
A cursory human scan of the 346 removed unlikely contenders such as
acetose, leaving just seven possibilities: Ugandan, Ukrainian,
Egyptian, uninvited, incursive, indebted and unofficial. "Egyptian seems
most likely," says Naccache. A similar analysis of the defence
department's memo identified South Korea as the most likely anonymous
supplier of helicopter knowledge to Iraq.
Intelligence experts say the technique is cause for concern, and that
they may think about changing procedures. One expert adds that
rumour-mongering on probable fits might engender as much confusion and
damage as just releasing the full, unadulterated text.
Naccache accepts the criticism that although the technique works
reasonably well on single words, the number of candidates for more than
two or three consecutively blotted out words would severely limit it.
Many declassified documents contain whole paragraphs blotted out.
"That's impossible to tackle," he says, adding that "the most
important conclusion of this work is that censoring text by blotting
out words and re-scanning is not a secure practice."
Naccache and Whelan presented their results at Eurocrypt 2004, a
meeting of security researchers held in Interlaken, Switzerland, in
early May. They did not present at the formal sessions, but at a
Tuesday evening informal 'rump session', where participants discuss
work in progress. "We came away with the prize for the best
rump-session talk - a huge cow-bell," says Naccache.
(c) Nature News Service / Macmillan Magazines Ltd 2004


SSL secure browsing - attack tree Mindmap

2004-05-25 Thread Ian Grigg
 Original Message 
Subject: Financial Cryptography Update: SSL secure browsing - attack tree Mindmap
http://www.financialcryptography.com/mt/archives/000136.html

Here is a /work in progress/ Mindmap on the threats to the secure
browsing process.
http://iang.org/maps/browser_attack_tree.html
The mindmap purports to be an attack tree, which is a technique to
include and categorise all possible threats to a process.  An attack
tree is one possible aid to constructing a threat model, which latter
is a required step to constructing a security model.  The mindmap
supports another /work in progress/ on threat modelling for secure
browsing at http://iang.org/ssl/browser_threat_model.html for the
Mozilla project.
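(For the curious, an attack tree is just a goal tree whose leaves are concrete attacks; a toy Python sketch follows.  The example goals are illustrative only, not the actual contents of the mindmap:)

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One goal in an attack tree; children are sub-goals that
    achieve it, and a node with no children is a concrete attack."""
    goal: str
    children: list = field(default_factory=list)

def leaves(node):
    """Enumerate the concrete attacks under a goal."""
    if not node.children:
        return [node.goal]
    out = []
    for child in node.children:
        out.extend(leaves(child))
    return out

# Illustrative fragment of a secure-browsing attack tree:
tree = Node("read the user's 'secure' traffic", [
    Node("attack the protocol", [Node("downgrade the cipher suite")]),
    Node("attack the CA model", [
        Node("obtain a certificate for a look-alike domain"),
        Node("trick the user into ignoring the cert warning"),
    ]),
])
```

Walking the leaves gives the exhaustive threat list that the threat model then has to rank and answer.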
(The secure browsing security model uses SSL as a protocol and the
Certificate Authority model as the public key authentication regime,
all wrapped up in HTTPS within the browser.  Technically, the protocol
and key regime are separate, but in practice they are joined at the
hip, so any security modelling needs to consider them both.  SSL - the
protocol part - has been widely scrutinised and has evolved to what is
considered a secure form.  In contrast the CA model has been widely
criticised, and has not really evolved since its inception.  It remains
the weak link in security.
As part of a debate on how to address the security issues in secure
browsing and other applications that use SSL/CA such as S/MIME, the
threat model is required before we can improve the security model.
Unfortunately, the original one is not much use, as it was a
theoretical prediction of the MITM that did not come to pass.)


Re: The future of security

2004-05-08 Thread Ian Grigg
Graeme Burnett wrote:
Hello folks,
I am doing a presentation on the future of security,
which of course includes a component on cryptography.
That will be given at this conference on payments
systems and security: http://www.enhyper.com/paysec/
Would anyone there have any good predictions on how
cryptography is going to unfold in the next few years
or so?  I have my own ideas, but I would love
to see what others see in the crystal ball.

I would see these things, in no particular
order, and no huge thought process applied.
a.  a hype cycle in QC that will peak in a year
or two, then disappear as purchasers realise that
the boxes aren't any different to ones that are
half the price.
b.  much more use of opportunistic cryptography,
whereby crypto systems align their costs against
the risks being faced.  E.g., self-signed certs
and cert caching in SSL systems, caching and
application integration in other systems.
c.  much less emphasis on deductive no-risk
systems (PKIs like x.509 with SSL) due to the
poor security and market results of the CA
model.
d.  more systems being built with basic, simple
home-grown techniques, including ones that are
only mildly secure.  These would be built by
programmers, not cryptoplumbers.  They would
require refits of proper crypto as/if they migrate
into successful user bases.  In project terms,
this is the same as b. above - more use of
opportunistic tactics to secure stuff basically
and quickly.
e.  greater and more costs to browser users
from phishing [1] will eventually result in
mods to security model to protect users.  In
the meantime, lots of snakeoil security solutions
will be sold to banks.  The day Microsoft decides
to fix the browser security model, phishing will
reduce to a just another risk.
f.  the rise of mass crypto in the chat field,
and the slow, painful demise of email.  This is
because the chat protocols can be updated
within the power of small teams, including
adding simple crypto.  Email will continue to
defy the mass employment of crypto, although
if someone were to add a "create self-signed
cert now" button, things might improve.
g.  much interest in simple crypto in the p2p
field, especially file sharing, as the need
for protection and privacy increases due to
IP attacks.  All of the techniques will flow
across to other applications that need it less.
h.  almost all press will be in areas where
crypto is sure to make a difference.  Voting,
QC, startups with sexy crypto algorithms, etc.
i.  Cryptographers will continue to be pressed
into service as security architects, because it
sounds like the same thing.  Security architects
will continue to do most of their work with
little or no crypto.
j.  a cryptographic solution for spam and
viruses won't be found.  Nor for DRM.
iang
[1] one phisher took $75,000 from 400 victims:
http://www.financialcryptography.com/mt/archives/000129.html
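A minimal sketch of the opportunistic cert caching mentioned in (b), in the trust-on-first-use style: remember the fingerprint of the cert a site first presents, and raise the alarm only when it changes. The structure and names here are illustrative assumptions, not any browser's actual code.

```python
# Trust-on-first-use cert cache sketch.  The cost of the mechanism is
# aligned with the risk: no CA is consulted, but a silently swapped key
# (the classic MITM symptom) is surfaced to the user.

import hashlib

cache = {}  # site -> hex fingerprint seen on first contact

def fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

def check_site(site: str, cert_der: bytes) -> str:
    fp = fingerprint(cert_der)
    if site not in cache:
        cache[site] = fp            # first use: trust and remember
        return "new"
    return "ok" if cache[site] == fp else "CHANGED"

print(check_site("example.com", b"cert-one"))   # first contact
print(check_site("example.com", b"cert-one"))   # same cert as before
print(check_site("example.com", b"cert-two"))   # key changed: warn the user
```

The design choice is the point: this protects against a key that changes between visits, which is the cheap, common attack, rather than against a perfect first-contact MITM.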


Re: Bank transfer via quantum crypto

2004-04-28 Thread Ian Grigg
Ivan Krstic wrote:
I have to agree with Perry on this one: I simply can't see a compelling 
reason for the push currently being given to ridiculously overpriced 
implementations of what started off as a lab toy, and what offers - in 
all seriousness - almost no practical benefits over the proper use of 
conventional techniques.

You are looking at QC from a scientific perspective.
What is happening is not scientific, but business.
There are a few background issues that need to be
brought into focus.
1) The QC business is concentrated in the finance
industry, not national security.  Most of the
fiber runs are within range.  10 miles not 100.
2) Within the finance industry, the security
of links is done majorly by using private lines.
Put in a private line, and call it secure because
only the operator can listen in to it.
3) This model has broken down somewhat due to the
rise of open-market net carriers, open colos, etc.
So, even though the mindset of private telco line
is secure is still prevalent, the access to those
lines is much wider than thought.
4) there is eavesdropping going on.  This is clear,
although it is difficult to find confirmable
evidence on it or any stats:
  "Security forces in the US discovered an illegally installed fiber
  eavesdropping device in Verizon's optical network. It was placed at a
  mutual fund company ... shortly before the release of their quarterly
  numbers."   Wolf Report, March 2003
(some PDF that google knows about.)  These things
are known as vampire taps.  Anecdotal evidence
suggests that it is widespread, if not exactly
rampant.  That is, there are dozens or maybe hundreds
of people capable of setting up vampire taps.  And,
this would suggest maybe dozens or hundreds of taps
in place.  The vampires are not exactly cooperating
with hard information, of course.
5) What's in it for them?  That part is all too
clear.
The vampire taps are placed on funds managers to
see what they are up to.  When the vulnerabilities
are revealed over the fibre, the attacker can put
in trades that take advantage.  In such a case,
the profit from each single trade might be in the
order of a million (plus or minus a wide range).
6) I have not as yet seen any suggestion that an
*active* attack is taking place on the fibres,
so far, this is simply a listening attack.  The
use of the information happens elsewhere, some
batch of trades gets initiated over other means.
7) Finally, another thing to bear in mind is that
the mutual funds industry is going through what
is likely to be the biggest scandal ever.  Fines
to date are at 1.7bn, and it's only just started.
This is bigger than the S&L crisis and LTCM, but as the
press does not understand it, they have not
presented it as such.  The suggested assumption
to draw from this is that the mutual funds are
*easy* to game, and are being gamed in very many
and various fashions.  A vampire tap is just one
way amongst many that are going on.

So, in the presence of quite open use of open
lines, and in the presence of quite frequent
attacking on mutual funds and the like in order
to game their systems (endemic), the question
has arisen how to secure the lines.
Hence, quantum cryptography.  Cryptographers and
engineers will recognise that this is a pure FUD
play.  But, QC is cool, and only cool sells.  The
business circumstances are ripe for a big cool
play that eases the fears of funds that their
info is being collected with impunity.  It shows
them doing something.
Where we are now is the start of a new hype
cycle.  This is to be expected, as the prior
hype cycle(s) have passed.  PKI has flopped and
is now known in the customer base (finance
industry and government) as a disaster.  But,
these same customers are desperate for solutions,
and as always are vulnerable to a sales pitch.
QC is a technology whose time has come.  Expect
it to get bigger and bigger for several years,
before companies work it out, and it becomes the
same disputed, angry white elephant that PKI is
now.
If anyone is interested in a business idea, now
is the time to start building boxes that do just
like QC but in software at half the price.  And
wait for the bubble to burst.
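For the curious, the "just like QC but in software" box amounts to ordinary classical key agreement. A toy Diffie-Hellman sketch follows; these are demonstration parameters only, with none of the authentication or vetted group parameters a real product would need.

```python
# Toy Diffie-Hellman key agreement: both ends derive the same shared
# secret over a tapped line, without any quantum hardware.
# Parameters are illustrative, not production-grade.

import secrets

p = 2**127 - 1     # a Mersenne prime; fine for illustration
g = 3

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)   # sent in the clear over the (tapped) fibre
B = pow(g, b, p)

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # eavesdropper sees only g, p, A, B
```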
iang
PS:  Points 1-7 are correct AFAIK.  Conclusions,
beyond those points, are just how I see it, IMHO.


Financial Cryptography Update: El Qaeda substitution ciphers

2004-04-19 Thread Ian Grigg


( Financial Cryptography Update: El Qaeda substitution ciphers )

April 19, 2004

http://www.financialcryptography.com/mt/archives/000119.html

The Smoking Gun has an alleged British translation of an El Qaeda
training manual entitled
http://www.thesmokinggun.com/archive/jihadmanual.html _Military Studies
in the Jihad Against the Tyrants_
Lesson 13, http://www.thesmokinggun.com/archive/jihad13chap1.html
_Secret Writing And Ciphers And Codes_ shows the basic coding
techniques that they use.  In short, substitution ciphers, with some
home-grown wrinkles to make it harder for the enemy.
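As a rough illustration of the level of technique involved, a plain monoalphabetic substitution cipher (without the manual's home-grown wrinkles) is only a few lines of code:

```python
# Monoalphabetic substitution cipher sketch: each letter maps to a
# fixed substitute.  The key is simply a permutation of the alphabet.
# Trivially breakable by frequency analysis - which is the point above
# about tactical, short-lived field codes.

import string

ALPHABET = string.ascii_uppercase
KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"   # an example permutation

ENC = str.maketrans(ALPHABET, KEY)
DEC = str.maketrans(KEY, ALPHABET)

def encipher(msg: str) -> str:
    return msg.upper().translate(ENC)

def decipher(ct: str) -> str:
    return ct.translate(DEC)

ct = encipher("ATTACK AT DAWN")
print(ct)
assert decipher(ct) == "ATTACK AT DAWN"
```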
If this were as good as it got, then claims that the terrorists use
advanced cryptography would seem to be exaggerated.  However, it's
difficult to know for sure.  How valid was the book?  Who is given the
book?
This is a basic soldier's manual, and thus includes a basic code that
could be employed in the field, under stress.  From my own military
experience, working out simple encoded messages under battle conditions
(in the dark, with freezing fingers, lying in a foxhole, and under
fire, are all various impediments to careful coding) can be quite a
fragile process, so not too much should be made of the lack of
sophistication.
Also, bear in mind that your basic soldier has a lot of other things to
worry about and one of the perennial problems is getting them to bother
with letting the command structure know what they are up to.  No
soldier cares what happens at headquarters.  Another factor that might
shock the 90's generation of Internet cryptographers is that your basic
soldiers' codes are often tactical, which means they are only secure
for a day or so.  They are not meant to hide information that would be
stale and known by tomorrow, anyway.
How far this code is employed up the chain of command is the
interesting question.  My guess would be, not far, but, there is no
reason for this being accurate.  When I was a young soldier struggling
with codes, the entire forces used a single basic code with key changes
4 times a day, presumably so that an army grunt could call in support
from a ship off shore or a circling aircraft.  If that grunt lost the
codes, the whole forces structure was compromised, until the codes
rotated outside the lost window (48 hours worth of codes might be
carried at one time).


Re: Firm invites experts to punch holes in ballot software

2004-04-09 Thread Ian Grigg
Brian McGroarty wrote:
On Wed, Apr 07, 2004 at 03:42:47PM -0400, Ian Grigg wrote:

It seems to me that the requirement for after-the-vote
verification (to prove your vote was counted) clashes
rather directly with the requirement to protect voters
from coercion (I can't prove I voted in a particular
way.) or other incentives-based attacks.
You can have one, or the other, but not both, right?


Suppose individual ballots weren't usable to verify a vote, but
instead confirming data was distributed across 2-3 future ballot
receipts such that all of them were needed to reconstruct another
ballot's vote.
It would then be possible to verify an election with reasonable
confidence if a large number of ballot receipts were collected, but
individual ballot receipts would be worthless.


If I'm happy to pervert the electoral
process, then I'm quite happy to do it
in busloads.  In fact, this is a common
approach, buses are paid for by a party
candidate, the 1st stop is the polling
booth, the 2nd stop is the party booth.
In the west, this is done with old people's
homes, so I hear.
Now, one could say that we'd distribute
the verifiability over a random set of
pollees, but that would make the verification
impractically expensive.
iang



Re: Firm invites experts to punch holes in ballot software

2004-04-07 Thread Ian Grigg
Trei, Peter wrote:
Frankly, the whole online-verification step seems like an
unneccesary complication.


It seems to me that the requirement for after-the-vote
verification (to prove your vote was counted) clashes
rather directly with the requirement to protect voters
from coercion (I can't prove I voted in a particular
way.) or other incentives-based attacks.
You can have one, or the other, but not both, right?

It would seem that the former must give way to the latter,
at least in political voting.  I.e., no verification after
the vote.
iang



Re: Do Cryptographers burn?

2004-04-03 Thread Ian Grigg
Hadmut Danisch wrote:
Hi,

this is not a technical question, but a rather
academic or abstract one: 

Do Cryptographers burn?

Cryptography is a lot about math, information theory, 
proofs, etc. But there's a certain level where all this
is too complicated and time-consuming to follow all those
theories and claims. At a certain point cryptography is based
on trusting the experts. Is anyone here on this list who can 
claim to have read and understood all those publications 
about cryptography? Is anyone here who can definitely tell
whether the factorization and discrete logarithm problems 
are hard or not? Today's cryptography is to a certain degree
based on trusting a handful of experts, maybe the world's top 100 
(300? 1000?) in cryptography.


On a related note, this was one of the core premises
behind my paper on Financial Cryptography in 7 Layers.
The notion was that building systems involving the
two key words, finance and crypto, had almost always
failed due to great gaping holes, that amounted
to the designers ignoring one or more disciplines.
In that paper I attempt to map out all the core
areas that are must dos.  I don't think it's
possible to cover *all* the fields to a professional
level, one would likely need 3 or 4 degrees to do
it.  E.g., within crypto and software, two of the
disciplines that are common on this group, there are
very few people who can crossover and seriously
contribute to the other discipline.  I know of a
handful (and wouldn't include me, as my crypto
knowledge is very basic).
Yet the challenge remains that all these things need
to be considered in an FC application.

Does this require those people to be trustworthy?


No, it requires their contribution to be simple
and verifiable.  If the crypto goes beyond the
half dozen basics (Hashes, PK, SK, ...), then its
viability reduces rapidly, as the programmers
and others in higher layers will have trouble
dealing with it.

What if a cryptographer is found to intentionally have given a false
expertise in cryptography and security just to do a colleague a favor,
when he erroneously assumed the expertise would be kept secret? Would
such a cryptographer be considered as burned? Wouldn't he give more
false expertises once he's getting paid for or asked by his government?


It's much more likely that when a perfect
crypto algorithm is mated to a perfect protocol
and then mated to a perfect application, the result
is swiss cheese.  That is, errors are more likely
at the borders of disciplines.
Security is a top-to-bottom
requirement, and integration is key.  That's why
a complex system is not a good idea, because you
can't mate it into any usable app without breaking
the complex and hidden assumptions.
iang

PS: http://iang.org/papers/fc7.html
_Financial Cryptography in 7 Layers_,
Conference in Financial Cryptography, Feb 2000,
Proceedings are in


All Internet voting is insecure: report

2004-04-01 Thread Ian Grigg
http://www.theregister.co.uk/content/6/35078.html
http://www.eetimes.com/at/news/OEG20040123S0036

=
All Internet voting is insecure: report
By electricnews.net
Posted: 23/01/2004 at 11:37 GMT
Get The Reg wherever you are, with The Mobile Register


Online voting is fundamentally insecure due to the architecture of the
Internet, according to leading cyber-security experts.

Using a voting system based upon the Internet poses a serious and
unacceptable risk for election fraud and is not secure enough for
something as serious as the election of government officials, according to
the four members of the Security Peer Review Group, an advisory group
formed by the US Department of Defense to evaluate a new on-line voting
system.

The review group's members, and the authors of the damning report, include
David Wagner, Avi Rubin and David Jefferson from the University of
California, Berkeley, Johns Hopkins University and the Lawrence Livermore
National Laboratory, respectively, and Barbara Simons, a computer
scientist and technology policy consultant.

The federally-funded Secure Electronic Registration and Voting Experiment
(SERVE) system is currently slated for use in the US in this year's
primary and general elections. It will allow eligible voters to register
to vote at home and then to vote via the Internet from anywhere in the
world. The first tryout of SERVE is early in February for South Carolina's
presidential primary and its eventual goal is to provide voting services
to all eligible US citizens overseas and to US military personnel and
their dependents, a population estimated at six million.

After studying the prototype system the four researchers said that from
anywhere in the world a hacker could disrupt an election or influence its
outcome by employing any of several common types of cyber-attacks.
"Attacks could occur on a large scale and could be launched by anyone from
a disaffected lone individual to a well-financed enemy agency outside the
reach of US law," state the three computer science professors and a former
IBM researcher in the report.

A denial-of-service attack would delay or prevent a voter from casting a
ballot through a Web site. A "man in the middle" or "spoofing" attack
would involve the insertion of a phoney Web page between the voter and the
authentic server to prevent the vote from being counted or to alter the
voter's choice. What is particularly problematic, the authors say, is that
victims of spoofing may never know that their votes were not counted.

A third type of attack involves the use of a virus or other malicious
software on the voter's computer to allow an outside party to monitor or
modify a voter's choices. The malicious software might then erase itself
and never be detected, according to the report.

While acknowledging the difficulties facing absentee voters, the authors
of the security analysis conclude that Internet voting presents far too
many opportunities fo



Re: [Fwd: Re: Non-repudiation (was RE: The PAIN mnemonic)]

2004-01-09 Thread Ian Grigg
Ed Gerck wrote:


 Likewise, in a communication process, when repudiation of an act by a party is
 anticipated, some system security designers find it useful to define 
 non-repudiation
 as a service that prevents the effective denial of an act. Thus, lawyers should
 not squirm when we feel the same need they feel -- to provide for processes
 that *can be* conclusive.

The problem with this is that the squirms happen at
many levels.  It seems unlikely that we can provide
for conclusive processes when it comes to mixing
humans and tech and law.  If we try, we end up with
the Ross Anderson scenario - our work being trashed
in front of the courts.

Hence the need for a new framework.  Talk of non-
repudiation has gone to the extent of permitting
law makers to create new presumptions which - I
suggest - aren't going to help anyone.  For example,
the law that Pelle posted recently said one thing to
me:  no sane person wants to be caught dead using
these things:

   Pelle wrote:
   The real meat of the matter is handled in Article 31 (Page 10). Guarantees 
   derived from the acceptance of a Certificate:

The subscriber, at the time of accepting a certificate, guarantees all the
 
people of good faith to be free of fault, and his information contained 
within is correct, and that: 

1. The authenticated electronic company/signature verified by means of this 
certificate, was created under his exclusive control.

2. No person has had access to the procedure of generation of the electronic 
signature.

3. The information contained in the certificate is true and corresponds to 
the provided one by this one to the certification organization.


Is that for real?  Would you recommend that to
your mother?  I wouldn't be embarrassed to predict
that there will be no certificate systems in
Panama that rely upon that law.



I think aiming at conclusivity might be a noble
goal for protocol designers and others lower
down in the stack.  When humans are involved,
the emphasis should switch to reduction in costs:
strength of evidence, fast surfacing of problems,
sharing of information, crafting humans' part in
the protocol.

When I design financial systems, I generally think
in these terms:  what can I do to reduce the cost
and frequency of disputes?  I don't aim for any
sort of conclusivity at any cost, because that
can only be done by setting up assumptions
that are later easily broken by real life.

Instead, I tend to examine the disputes that
might occur and examine their highest costs.
One of the easiest ways to deal with them is
to cause them to occur frequently, and thus
absorb them into the protocol.  For example,
a TCP connection breaks - did the packet get
there or not?  Conclusion: connections cannot
be relied upon.  Protocol response:  use a
datagram + request-reply + replay paradigm,
and lose a lot of connections, deliberately.
Conclusivity is achieved, at the cost of some
efficiency.
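The paradigm can be sketched as follows; the request-ID cache is what absorbs the broken-connection dispute, because a resent request is applied exactly once. This is an illustrative structure, not any particular protocol.

```python
# Datagram + request-reply + replay sketch.  The client resends until it
# hears a reply; the server remembers replies by request ID, so a replay
# returns the cached answer instead of applying the operation twice.

import uuid

replies = {}   # server side: request-id -> reply already issued

def server_handle(request_id, amount, state):
    if request_id in replies:          # replay: answer again, change nothing
        return replies[request_id]
    state["balance"] += amount         # apply the operation exactly once
    replies[request_id] = ("ok", state["balance"])
    return replies[request_id]

state = {"balance": 100}
rid = str(uuid.uuid4())

first = server_handle(rid, 25, state)    # original datagram
second = server_handle(rid, 25, state)   # client timed out and resent
assert first == second == ("ok", 125)    # applied once despite the replay
```

Losing the connection then costs a resend, not a dispute about whether the deposit happened.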

Another example - did the user sign the message?
We can't show what the user did with the key.
So, make the private key the agent, and give it
the legal standing.  Remove the human from the
loop.  Make lots of keys, and make the system
pseudonymous.  We can conclusively show that
the private key signed the message, and that
agent is to whom our contractual obligations
are directed.
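One way to make the "lots of keys" idea concrete is a one-time signature scheme built from nothing but a hash function - a Lamport signature, sketched below. Each key signs exactly one message, which suits a pseudonymous, many-keys design. This is an illustration, not a production scheme.

```python
# Lamport one-time signature sketch.  The private key is 256 pairs of
# random secrets; the public key is their hashes.  Signing a message
# reveals one secret per bit of the message digest, so each key must be
# used only once - hence "make lots of keys".

import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen(bits=256):
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32))
          for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits_of(msg, n=256):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(n)]

def sign(sk, msg):
    return [pair[bit] for pair, bit in zip(sk, bits_of(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pair[bit]
               for s, pair, bit in zip(sig, pk, bits_of(msg)))

sk, pk = keygen()
sig = sign(sk, b"pay 10 units to Bob")
assert verify(pk, b"pay 10 units to Bob", sig)
assert not verify(pk, b"pay 99 units to Bob", sig)
```

Here the public key is the pseudonymous agent: obligations run to whatever it verifies, with no claim about which human pressed the button.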

Technical conclusivity is achieved, at the
expense of removing humans.  The dispute that
occurs then is when humans enter the loop
without fully understanding how they have
delegated their rights to their software
agent (a.k.a. private key).  We don't deny
his repudiating, we simply don't accept his
standing - only the key has standing.

Which brings us full circle to Panama :-)
Except, we've done it on our own contract
terms, not on the terms of the legislature,
so we can craft it with appropriate limits
rather than their irrebuttable presumptions.

From this pov, the mistake that CAs make
is to presume one key and one irrebuttable
presumption.  It's a capabilities thing;
there should be a squillion keys, each with
tightly controlled and surfaced rights.


iang



Re: digsig - when a MAC or MD is good enough?

2004-01-03 Thread Ian Grigg
John Gilmore wrote:
 
  Sarbanes-Oxley Act in the US.  Section 1102 of that act:
  Whoever corruptly--
 (1) alters, destroys, mutilates, or conceals a
 record, document, or other object, or attempts to
 do so, with the intent to impair the object's
 integrity or availability for use in an official
 proceeding; ...
  shall be fined under this title or imprisoned not
  more than 20 years, or both..
 
 The flaw in this ointment is the intent requirement.  Corporate
 lawyers regularly advise their client companies to shred all
 non-essential records older than, e.g. two years.  The big reason to
 do so is to impair their availability in case of future litigation.
 But if that intent becomes illegal, then the advice will be to shred
 them to reduce clutter or to save storage space.


Battles like that will go on, although you raise an
interesting point - most docs have legal shelf life
limits.

The main observation here is that signatures, once
made, in whatever form, have a power well beyond the
bits that they consume or the paper they cover. This
law and others like it add more power, which in some
imprecise sense stacks up against the MD's recalculability.

Where it becomes interesting is if two parties in a
dispute both retain records.  If this is the case,
then it reduces the chance that someone might fiddle
with them or destroy them, as the other party has the
copies.

I suspect this makes more sense within corporates, or
for b2b scenarios.  For retail and other areas, there
are more complications.


  Can we surmise that a digital record with an MD attached and
 logged would fall within "object"?
 
 What's the point of keeping a message digest of a logged item?  If the
 log can be altered, then the message digest can be altered to match.
 (Imagine a sendmail log file, where each line is the same as now, but
 ends with the MD of the line in some gibberish characters...)


The message digest and the record so digested can
travel different paths.  The MDs can be logged, and
the messages can be lost or disposed of.  Or some
such.  As long as the message digests are no longer
in control of a single party, they may be sufficient,
given the weight of the above, to strongly limit any
temptation to tamper with the records.

When it comes to auditing or validating of any
records, searching on message digests is very easy.
If the message digest is with the record it covers,
it is a simple matter to quickly grep through mountains
of logs to find the entries.  It allows a positive
comparison to be done very quickly, which means those
that fail are the ones to pay attention to.
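A sketch of the MD-with-the-record layout and the quick bulk check (the log format and names are illustrative):

```python
# Each log line carries the hex digest of its own content, so a single
# pass over mountains of logs can flag only the entries worth attention.

import hashlib

def tag(line: str) -> str:
    return line + " md=" + hashlib.sha256(line.encode()).hexdigest()[:16]

def suspicious(tagged: str) -> bool:
    line, _, md = tagged.rpartition(" md=")
    return hashlib.sha256(line.encode()).hexdigest()[:16] != md

log = [tag("mail from alice"), tag("mail from bob")]
log[1] = log[1].replace("bob", "eve")        # someone edits one record...
print([suspicious(entry) for entry in log])  # ...and only that line fails
```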

Another technique is to include a cookie in each
record which relates to the state of the log, being
a chained message digest.  If any attempt is made to
adjust a record, it throws out the following cookies.
Still, this is getting us further and further from
the original question - under what grounds could
an MD be considered a sufficient signature for
accuracy purposes?
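The chained-cookie technique might look like this in miniature: each cookie hashes the previous cookie together with the record, so editing any record throws out every later cookie. An illustrative sketch:

```python
# Hash-chained log cookies.  Tampering with one record changes its
# cookie and, transitively, every cookie that follows it.

import hashlib

def chain(records, seed=b"log-start"):
    cookie, out = seed, []
    for rec in records:
        cookie = hashlib.sha256(cookie + rec.encode()).digest()
        out.append((rec, cookie.hex()[:16]))
    return out

honest = chain(["open account", "deposit 50", "withdraw 20"])
tampered = chain(["open account", "deposit 500", "withdraw 20"])

# Cookies match up to the edit, then diverge for every later record.
assert honest[0][1] == tampered[0][1]
assert honest[1][1] != tampered[1][1]
assert honest[2][1] != tampered[2][1]
```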


iang



Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-28 Thread Ian Grigg
Carl Ellison wrote:

  From where I sit, it is better to term these
  as legal non-repudiability or cryptographic
  non-repudiability so as to reduce confusion.
 
 To me, repudiation is the action only of a human being (not of a key) and
 therefore there is no such thing as cryptographic non-repudiability.


Ah.  Now I understand.  The verb is wrong, as it
necessarily implies the act of the human who is
accused of the act.  (And, thus, my claim that it
is possible, was also wrong.)

Whereas the cryptographic property implies no such
thing, and a cryptographic actor can only affirm
or not, not repudiate.  I.e., it's a meaningless
term.


 We
 need a different, more precise term for that -


Would irrefutable be a better term?  Or non-
refutability, if one desires to preserve the N?

The advantage of this verb is that it has no
actor involved, and evidence can be refuted on
its own merits, as it were.

As a test, if one were to replace repudiate
with refute in the ISO definition, would it
then stand?


 and we need to rid our
 literature and conversation of any reference to the former - except to
 strongly discredit it if/when it ever appears again.

I think more is needed.  A better definition is
required, as absence is too easy to ignore.  People
and courts will use what they have available, so it
is necessary to do more; indeed it is necessary to
actively replace that term with another.

Generally, the way the legal people work is to
create simple tests.  Such as:

  A Document was signed by a private key if:

  1. The signature is verifiable by the public key,
  2. the public key is paired with the private key,
  3. the signature is over a cryptographically strong
 message digest,
  4. the Message Digest was over the Document.

Now, this would lead to a definition of irrefutable
evidence.  How such evidence would be used would be
of course dependent on the circumstances;  it then
becomes a further challenge to tie a human's action
to that act / event.



iang


PS: Doing a bit of googling, I found the ISO definition
to be something like:

http://lists.w3.org/Archives/Public/w3c-ietf-xmldsig/1999OctDec/0149.html
 ... The ISO
 10181-4 document (called non repudiation Framework) starts with:
 The goal of the non-repudiation service is to collect, maintain,
 make available and validate irrefutable evidence concerning a
 claimed event or action in order to solve disputes about the
 occurrence of the event or action.

But, the actual standard costs money (!?) so it is
not surprising that it is the subject of much
controversy :)



Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-28 Thread Ian Grigg
Ben Laurie wrote:
 
 Ian Grigg wrote:
  Carl and Ben have rubbished non-repudiation
  without defining what they mean, making it
  rather difficult to respond.
 
 I define it quite carefully in my paper, which I pointed to.


Ah.  I did read your paper, but deferred any comment
on it, in part because I didn't understand what its
draft/publication status was.


Ben Laurie said:
 Probably because non-repudiation is a stupid idea:
 http://www.apache-ssl.org/tech-legal.pdf.


You didn't state which of the two definitions
you were rubbishing, so I shall respond to both!



Let's take the first definition - your technical
definition (2.7):

  Non-repudiation, in its technical sense, is a property of a communications
  system such that the system attributes the sending of a message to a person
  if, but only if, he did in fact send it, and records a person as having received
  a message if, but only if, he did in fact receive it. If such systems exist at all,
  they are very rare.

  Non-repudiability is often claimed to be a property of electronic signatures of
  the kind described above. This claim is unintelligible if non-repudiation is
  used in its correct technical sense, and in fact represents an attempt to confer a
bogus technical respectability on the purely commercial assertion that the owners
  of private keys should be made responsible for their use, whoever in fact uses
  them.

Some comments.

1. This definition seems to be only one of the many
out there [1].  The use of the term correct technical
sense then would be meaningless as well as brave
without some support of references.  Although it does
suffice to ground the use within the paper.

2. The definition is muddied by including the attack
inside the definition.  The attack on the definition would
fit better in section 6. Is \non-repudiation a useful
concept?

3. Nothing in either the definition 2.7 or the proper
section of 6. tells us why the claim is unintelligible.

To find this, we have to go back to Carl's comment
which gets to the nub of the legal and literal meaning
of the term:

To me, repudiation is the action only of a human being (not of a key)...

Repudiate can only be done by a human [2].  A key cannot
repudiate, nor can a system of technical capabilities [3].
(Imagine here, a debate on how to tie the human to the
key.)

That is, it is an agency problem, and unless clearly
cast in those terms, for which there exists a strong
literature, no strong foundation can be made of any
conclusions [4].



4. The discussion resigns itself to being somewhat
dismissive, by leaving open the possibility of
alternatives.  There is a name for this fallacy -
stating the general while showing only the specific -
but I forget it.

In the first para of 2.7, it states that "If such
systems exist at all, they are very rare."  Thus,
allowing for existence.  Yet in the second para, one
context is left as unintelligible.  In section 6,
again, "most discussions ... are more confusing than
helpful."

This hole is created, IMHO, by the absence of Carl's
killer argument in 3. above.  Only once it is possible
to move on from the fallacy embodied in the term
repudiation itself, is it possible to start considering
what is good and useful about the irrefutability (or
otherwise) of a digital signature [5].

I.e., throwing out the bathwater is a fine and regular
thing to do.  Let's now start looking for the baby.



  But, whilst challenging, it is possible to
  achieve legal non-repudiability, depending
  on your careful use of assumptions.  Whether
  that is a sensible thing or not depends
  on the circumstances ... (e.g., the game that
  banks play with pin codes).
 
 Actually, it's very easy to achieve legal non-repudiability. You pass a
 law saying that whatever-it-is is non-repudiable. I also cite an example
 of this in my paper (electronic VAT returns are non-repudiable, IIRC).

Which brings us to your second definition, again,
in 2.7:

To lawyers, non-repudiation was not a technical legal term before techies gave
it to them. Legally it refers to a rule which defines circumstances in which a
person is treated for legal purposes as having sent a message, whether in fact
he did or not, or is treated as having received a message, whether in fact he
did or not. Its legal meaning is thus almost exactly the opposite of its technical
meaning.


I am not sure that I'd agree that the legal
fraternity thinks in the terms outlined in the
second sentence.  I'd be surprised if the legal
fraternity said any more than what you are
trying to say is perhaps best seen by these
sorts of rules...

Much of law already duplicates what is implied
above, anyway, which makes one wonder (a) what
is the difference between the above and the
rules of evidence and presumption, etc, etc
and (b) why did the legal fraternity adopt
the techies' term with such abandon that they
didn't bother to define it?

In practice, the process

CIA - the cryptographer's intelligent aid?

2003-12-28 Thread Ian Grigg
Richard Johnson wrote:
 
 On Sun, Dec 21, 2003 at 09:45:54AM -0700, Anne & Lynn Wheeler wrote:
  note, however, when I did reference PAIN as (one possible) security
  taxonomy  i tended to skip over the term non-repudiation and primarily
  made references to privacy, authentication, and integrity.
 
 In my experience, the terminology has more often been confidentiality,
 integrity, and authentication.  Call it CIA if you need an acronym easy
 to memorize, if only due to its ironic similarity with that for the name of
 a certain US government agency. :-)


I would agree that CIA reigns supreme.  It's easy to
remember, and easy to teach.  It covers the basic
crypto techniques, those that we are sure about and
can be crafted simply with primitives.

CIA doesn't overreach itself.  CAIN, by introducing
non-repudiation, brings in a complex multilayer
function that leads people down the wrong track.

PAIN is worse, as it introduces Privacy instead of
Confidentiality.  The former is a higher level term
that implies application requirements, arguably, not
a crypto term at all.  At least with Confidentiality
it is possible to focus on packets and connections
and events as being confidential at some point in
time; but with Privacy, we are launched out of basic
crypto and protocols into the realm of applications.

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Repudiating non-repudiation

2003-12-28 Thread Ian Grigg
In response to Ed and Amir,

I have to agree with Carl here and stress that the
issue is not that the definition is bad or whatever,
but the word is simply out of place.  Repudiation is
an act of a human being.  So is the denial of that
or any other act, to take a word from Ed's 1st definition.

We can actually learn a lot more from the legal world
here, in how they solve this dilemma.  Apologies in
advance, as what follows is my untrained understanding,
derived from a legal case I was involved with in
recent years [1].  It is an attempt to show why the
use of the word repudiation will never help us and
will always hinder us.



The (civil) courts resolve disputes.  They do *not*
make contracts right, or tell wrong-doers to do the
right thing, as is commonly thought.

Dispute resolution by definition starts out with a
dispute, of course.  That dispute, for sake of argument,
is generally grounded in a denial, or a repudiation.

One party - a person - repudiates a contract or a
bill or a something.

So, one might think that it would be in the courts'
interest to reduce the number of repudiations.  Quite
the reverse - the courts bend over backwards, sideways,
and tie themselves in knots to permit and encourage
repudiations.  In general, the rule is that anyone
can file *anything* into a court.

The notion of non-repudiation is thus anathema to
the courts.  From a legal point of view, we, the
crypto community, will never make headway if we use
this term [2].  What terms we should use, I suggest
below, but to see that, we need to get the whole
process of the courts in focus.



Courts encourage repudiations so as to encourage
all the claims to get placed in front of the forum
[3].  The full process that is then used to resolve
the dispute is:

   1. filing of claims, a.k.a. pleadings.
   2. presentation of evidence
   3. application of law to the evidence
   4. a reasoned ruling on 1 is delivered based on 2,3

Now, here's where cryptographers have made the
mistake that has led us astray.  In the mind of a
cryptographer, a statement is useless if it cannot
be proven beyond a shadow of a doubt.

The courts don't operate that way - and neither does
real life.  In this, it is the cryptographers that
are the outsiders [4].

What the courts do is to encourage the presentation
of all evidence, even the bad stuff.  (That's what
hearings are, the presentation of evidence.)

Then, the law is applied - and this means that each
piece of evidence is measured and filtered and
rated.  It is mulled over, tested, probed, and
brought into relationship with all the other pieces
of evidence.

Unlike no-risk cryptography, there isn't such a
thing as bad evidence.  There is, instead, strong
evidence and weak evidence.  There is stuff that
is hard to ignore, and stuff that doesn't add
much. But, even the stuff that adds little is not
discriminated against, at least in the early phases.



And this is where the cryptography field can help:
a digital signature, prima facie, is just another
piece of evidence.  In the initial presentation of
evidence, it is neither weak nor strong.

It is certainly not non-repudiable.  What it is
is another input to be processed.  The digsig is
as good as all the others, first off.  Later on,
it might become stronger or weaker, depending.

We, cryptographers, help by assisting in the
process of determining the strength of the
evidence.  We can do it in, I think, three ways:



Firstly, the emphasis should switch from the notion
of non-repudiation to the strength of evidence.  A
digital signature is evidence - our job as crypto
guys is to improve the strength of that evidence,
with an eye to the economic cost of that strength,
of course.

Secondly, any piece of evidence will, we know, be
scrutinised by the courts, and assessed for its
strength.  So, we can help the process of dispute
resolution by clearly laying out the assumptions
and tests that can be applied.  In advance.  In
as accessible a form as we know how.

For example, a simple test might be that a
receipt is signed validly if:

   a. the receipt has a valid hash,
   b. that hash is signed by a private key,
   c. the signature is verified by a public
      key, paired with that private key

Now, as cryptographers, we can see problems,
which we can present as caveats, beyond the
strict statement that the receipt has a valid
signature from the signing key:

   d. the public key has been presented by
      the signing party (person) as valid
      for the purpose of receipts
   e. the signing party has not lost the
      private key
   f. the signature was made based on best
      and honest intents...

That's where it gets murky.  But, the proper
place to deal with these murky issues is in
the courts.  We can't solve those issues in
the code, and we shouldn't try.  What we should
do is instead surface all the assumptions we
make, and list out the areas where further
care is needed.
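By way of illustration - a sketch, not part of the
original argument - here is the a., b., c. test in
Python, using a toy textbook-RSA key.  The receipt
text, the function names, and the tiny primes are
all assumptions for demonstration only, and offer
no real security:

```python
import hashlib

# Toy RSA-style signature over a receipt -- illustrates tests (a)-(c) only.
# The tiny primes are hopelessly insecure; they are illustrative assumptions.
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def digest(receipt: bytes) -> int:
    # (a) the receipt has a valid hash (reduced mod n for the toy key)
    return int.from_bytes(hashlib.sha256(receipt).digest(), "big") % n

def sign(receipt: bytes) -> int:
    # (b) that hash is signed by the private key
    return pow(digest(receipt), d, n)

def verify(receipt: bytes, sig: int) -> bool:
    # (c) the signature is verified by the paired public key
    return pow(sig, e, n) == digest(receipt)

receipt = b"receipt #42: 100 units, Alice to Bob"
sig = sign(receipt)
print(verify(receipt, sig))      # True: (a)-(c) all hold
print(verify(b"tampered", sig))  # almost certainly False (toy modulus,
                                 # so there is a ~1/3233 collision chance)
```

Note that code can only ever speak to a.-c.; the
caveats d.-f. are about people and intents, which
is exactly the point - the murky issues live
outside the mathematics.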

Thirdly, we can create protocols that bear
in mind the concept of 

Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-26 Thread Ian Grigg
Amir Herzberg wrote:
 
 Ben, Carl and others,
 
 At 18:23 21/12/2003, Carl Ellison wrote:
 
   and it included non-repudiation which is an unachievable,
   nonsense concept.
 
 Any alternative definition or concept to cover what protocol designers
 usually refer to as non-repudiation specifications? For example
 non-repudiation of origin, i.e. the ability of recipient to convince a
 third party that a message was sent (to him) by a particular sender (at
 certain time)?
 
 Or - do you think this is not an important requirement?
 Or what?


I would second this call for some definition!

FWIW, I understand there are two meanings:

   some form of legal inability to deny
   responsibility for an event, and

   cryptographically strong and repeatable
   evidence that a certain piece of data
   was in the presence of a private key at
   some point.

Carl and Ben have rubbished non-repudiation
without defining what they mean, making it
rather difficult to respond.

Now, presumably, they mean the first, in
that it is a rather hard problem to take the
cryptographic property of public keys and
then bootstrap that into some form of property
that reliably stands in court.

But, whilst challenging, it is possible to
achieve legal non-repudiability, depending
on your careful use of assumptions.  Whether
that is a sensible thing or not depends
on the circumstances ... (e.g., the game that
banks play with pin codes).

So, as a point of clarification, are we saying
that non-repudiability is ONLY the first of
the above meanings?  And if so, what do we call
the second?  Or, what is the definition here?

From where I sit, it is better to term these
as legal non-repudiability or cryptographic
non-repudiability so as to reduce confusion.

iang



Re: Outsourced Trust (was Re: Difference between TCPA-Hardware and a smart card and something else before

2003-12-23 Thread Ian Grigg
Ed Reed wrote:
 
  Ian Grigg [EMAIL PROTECTED] 12/20/2003 12:15:51 PM 
 
 One of the (many) reasons that PKI failed is
 that businesses simply don't outsource trust.
 
 Of course they do.  Examples:
 
 DB and other credit reporting agencies.
 SEC for fair reporting of financial results.
 International Banking Letters of Credit when no shared root of trust
 exists.
 Errors and Omissions Professional Liability insurance for consultants
 you don't know.
 Workman's Compensation insurance for independent contractors you don't
 know.


Of course they don't.  What they do is they
outsource the collection of certain bases of
information, from which to make trust decisions.
The trust is still in house.  The reports are
acquired from elsewhere.

That's the case for DB and credit reporting.
For the SEC, I don't understand why it's on
that list.  All they do is offer to store the
filings, they don't analyse them or promise
that they are true.  They are like a library.

International Banking Letters of Credit - that's
money, not trust.  What happens there is that
the receiver gets a letter, and then takes it
to his bank.  If his bank accepts it, it is
acceptable.  The only difference between using
that and a credit card, at a grand level, is
that you are relying on a single custom piece
of paper, with manual checks at every point,
rather than a big automated system that mechanises
the letter of credit into a piece of plastic.
(Actually, I'm totally unsure on these points,
as I've never examined in detail how they work :-)

Insurance - is not the outsourcing of trust,
but the sharing of risks.



Unfortunately, most of the suppliers of these
small factors in the overall trust process of
a company, PKI included, like to tell the
companies that they can, and are, outsourcing
trust.  That works well, because, if the victim
believes it (regardless of whether he is doing
it) then it is easier to sell some other part
of the services.  It's basically a technique
to lull the customer into handing over more
cash without thinking.

But, make no mistake!  Trust itself - the way
it marshals its information and makes its
decisions - is part of the company's core
business.  Any business that outsources its
core specialties goes broke eventually.

And, bringing this back to PKI, the people
who pushed PKI fell for the notion that
trust could be outsourced.  They thus didn't
understand what trust was, and consequently
confused the labelling of PKI as trust with
the efficacy of PKI as a useful component
in any trust model (see Lynn's post).


 The point is that the real world has monetized risk.  But the
 crytpo-elite have concentrated too hard on eliminating environmental
 factors from proofs of correctness of algorithms, protocols, and most
 importantly, business processes.


I agree with this, and all the rest.  The no-
risk computing school is fascinated with the
possibility of eliminating entire classes of
risk, so much so that they often introduce
excessive business costs, which results in
general failures of the whole crypto process.

In theory, it's a really good thing to
eliminate classes of attack.  But it can
carry a heavy cost, in any practical
implementation.

We are seeing a lot more attention to
opportunistic cryptography, which is a good
thing.  The 90s was the decade of the no-risk
school, and the result was pathetically low
levels of adoption.  In the future, we'll see
a lot more bad designs, and a lot more corners
cut.  This is partly because serious crypto
people - those you call the crypto-elite - have
burnt out their credibility and are rarely
consulted, and partly because it simply costs
too much for projects to put in a complete
and full crypto infrastructure in the early
stages.


 Crypto is not business-critical.  It's the processes its supposed to be
 protecting that are, and those are the ones that are insured.
 
 Legal and regulatory frameworks define how and where liability can be
 assigned, and that allows insurance companies to factor in stop-loss
 estimates for their exposure.  Without that, everything is a crap
 shoot.
 
 Watching how regulation is evolving right now, we may not see explicit
 liability assignments to software vendors for their vulnerabilities,
 whether for operating systems or for S/MIME email clients.  Those are
 all far too limited in what they could offer, anyway.
 
 What's happening, instead, is that consumers of those products are
 themselves facing regulatory pressure to assure their customers and
 regulators that they're providing adequate systematic security through
 technology as well as business policies, procedures and (ultimately)
 controls (ie, auditable tests for control failures and adequacy).  When
 customers can no longer say gee, we collected all this information, and
 who knew our web server wouldn't keep it from being published on the
 NYTimes classified pages?, then vendors will be compelled to deliver
 pieces of the solution that allow THE CUSTOMER (product

Re: IP2Location.com Releases Database to Identify IP's Geography

2003-12-23 Thread Ian Grigg
Rich Salz wrote:
 
  The IP2Location(TM) database contains more than 2.5 million records for all
  IP addresses. It has over 95 percent matching accuracy at the country
  level. Available at only US$499 per year, the database is available via
  download with free twelve monthly updates.
 
 And since the charge is per-server, not per-query, you could easily
 set up an international free service on a big piece of iron.


These have existed for some time.  Google knows
where they are, although they were a little tough
to find.

iang



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-22 Thread Ian Grigg
Anne & Lynn Wheeler wrote:
 At issue in business continuity are business requirements for things like
 no single point of failure,  offsite storage of backups, etc. The threat
 model is 1) data in business files can be one of its most valuable assets,
 2) it can't afford to have unauthorized access to the data, 3) it can't
 afford to lose access to data, 4) encryption is used to help prevent
 unauthorized access to the data, 5) if the encryption keys are protected by
 a TCPA chip, are the encryption keys recoverable if the TCPA chip fails?

You may have hit upon something there, Lynn.

One of the (many) reasons that PKI failed is
that businesses simply don't outsource trust.

If the use of TCPA is such that the business
must trust in its workings, then it can fairly
easily be predicted that it won't happen.  For
business, at least (that still leaves retail
and software sales based on IP considerations).

It is curious that in the IT trust business,
there seems to be a continuing supply of
charlatan ventures.  Even as news of PKI
slinking out of town reaches us, people are
lining up to buy tickets for the quantum
cryptography miracle cure show and bottles
of the new wonder TCPA elixir.

iang



Re: Difference between TCPA-Hardware and other forms of trust

2003-12-22 Thread Ian Grigg
Bill Frantz wrote:

 [I always considered the biggest contribution from Mondex was the idea of
 deposit-only purses, which might reduce the incentive to rob late-night
 business.]

This was more than just a side effect, it was also
the genesis of the earliest successes with smart
card money.

The first smart card money system in the Netherlands
was a service-station system for selling fuel to
truck drivers.  As security costs kept on rising,
due to constant hold-ups, the smart card system
was put in to create stations that had no money
on hand, so no need for guards or even tellers.

This absence of night time staff created a great
cost saving, and the programme was a big success.
Unfortunately, the early lessons were lost as time
went on, and attention switched from single-purpose
to multi-purpose applications.

iang



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-22 Thread Ian Grigg
Bill Stewart wrote:
 
 At 09:38 AM 12/16/2003 -0500, Ian Grigg wrote:
 
 In the late nineties, the smart card world
 worked out that each smart card was so expensive,
 it would only work if the issuer could do multiple
 apps on each card.  That is, if they could share
 the cost with different uses (or users).
 
 Of course, at this point the assertion that a smart card
 (that doesn't also have independent user I/O)
 costs enough to care about is pretty bogus.
 Dumb smartcards are cost-effective enough to use them
 to carry $5 in telephone minutes.


Sorry, yes, each actual smart card is, at
the margin, cheap.  But, as a project, the
smart card is expensive.  There's a big
difference between project costs and the
marginal cost, and that generally makes
*the* difference.

I suppose the confusion is endemic; everyone
thinks about the project costs in terms of per
person, and assumes that means one smart card
per person, but the cost per person is far more
than the 50c marginal cost of the card itself.

Smart cards are a lot like Christmas: it's
not the gift, but the act of giving that
makes it special.

 The real constraint is that you're unlikely to have
 more than one card reader in a machine,
 so multifunction cards provide the opportunity to
 run multiple applications without switching cards in and out,
 but that only works if the application vendors cooperate.
 
 For instance, you may have some encrypted session application
 that needs to have your card stay in the machine during the session
 (e.g. VOIP, or secure login, SSH-like things, remote file system access),
 and you may want to pay for something using your bank smartcard
 during the session.  That's not likely to work out,
 because the secure session software vendors are
 unlikely to have a relationship with your bank that lets
 both of them trust each other with their information,
 compared to the simplicity of having multiple cards.


For example, yes.  So it all comes down to
whether you can afford to roll out the hardware
to all the vendors, and all the associated
nodes.  At this point, the penny drops, and
smart cards start looking very expensive.

Hence, to date, only single-purpose projects
have succeeded - ones where the economics
were clearly based on narrowly focused,
single activities:  phones, transit systems,
etc, and they justified themselves on those
activities, alone, without relying on the
economics of unmeasurable and unmeetable
hyperbole.

iang

PS: all those Europeans with all those
smart cards in their pockets - ask them
how many times they use the smart card
features!



Ross Anderson's Trusted Computing FAQ

2003-12-20 Thread Ian Grigg
Ross Anderson's Trusted Computing FAQ has a lot
to say about recent threads:

http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


I don't know PAIN...

2003-12-20 Thread Ian Grigg
What is the source of the acronym PAIN?


Lynn said:

 ... A security taxonomy, PAIN:
 * privacy (aka things like encryption)
 * authentication (origin)
 * integrity (contents)
 * non-repudiation


I.e., its provenance?

Google shows only a few hits, indicating
it is not widespread.

iang



Re: Open Source Embedded SSL - (License and Memory)

2003-11-28 Thread Ian Grigg
J Harper wrote:
 
  1) Not GPL or LPGL, please.  I'm a fan of the GPL for most things, but
 
  for embedded software, especially in the security domain, it's a
  killer.  I'm supposed to allow users to modify the software that runs
  on their secure token?  And on a small platform where there won't be
  such things as loadable modules, or even process separation, the
  (L)GPL really does become viral.  This is, I think, why Red Hat
  releases eCos under a non-GPL (but still open source) license.
 
 We're aware of these issues.  How do other people on the group feel?

I think this applies more generally, but especially
for crypto software, because of the legal environment
and the complicated usage to which it is often put.

Placing any burdens of a non-technical nature on the
user is generally a downer.  Crypto-newbies are often
unsure and under rather intense pressure to get
something out.  If uncertainties of code licensing
issues are added, it can have a marked effect on the
results.

The general result is a choice between no crypto and
poorly done crypto.  (Rarely is good crypto done in
the first instance.)  Opinions differ on this point,
but I generally err on the side of recommending less
than perfect crypto, which can be repaired later on
at a lower cost.  It's a lot easier to sell a manager
on replacing poor crypto when it becomes needed
than on "we need to add a crypto layer".

For that reason, we (Cryptix) have always placed all
our code under a BSD style licence, except a few cases
where it has been placed under public domain (AES).  Our
view has always been: with crypto, the fewer the
barriers, the better.

In essence, "get it out there" is the mantra.

iang



Cryptophone locks out snoopers

2003-11-25 Thread Ian Grigg
(link is very slow:)
http://theregister.co.uk/content/68/34096.html


Cryptophone locks out snoopers 
By electricnews.net
Posted: 20/11/2003 at 10:16 GMT


A German firm has launched a GSM mobile phone that
promises strong end-to-end encryption on calls,
preventing the possibility of anybody listening in. 

If you think that you'll soon be seeing this on the shelves
of your local mobile phone shop though, think again. For
a start, the Cryptophone sells for EUR1,799 per handset,
which puts it out of the reach of most buyers. Second,
the phone's maker, Berlin-based GSMK, say the phone
will not be sold off the shelf because of the measures
needed to ensure that the product received by the
customer is untampered with and secure. Buyers must
buy the phone direct from GSMK. 

According to GSMK, the new phone is designed to
counteract known measures used to intercept mobile
phone calls. While GSM networks are far more secure
than their analogue predecessors, there are ways and
means to circumvent security measures. 

The encryption in GSM is only used to protect the call
while it is in the air between the GSM base station and
the phone. During its entire route through the telephone
network, which may include other wireless links, the call
is not protected by encryption. Encryption on the GSM
network can also be broken. The equipment needed to do
this is extremely expensive and is said to be only
available to law enforcement agencies, but it has been
known to fall into the hands of criminal organisations. 

The Cryptophone is a very familiar-looking device, since
it is based around the same HTC smartphone that O2
used as its original XDA platform. The phone runs on a
heavily modified version of Microsoft Pocket PC 2002. 

GSMK says it is the only manufacturer of such devices
that has its source code publicly available for review. It
says this will prove that there are no back-doors in the
software, thus allaying the fears of the
security-conscious. Publication of the source code
doesn't compromise the phone's security, according to
GSMK. The Cryptophone is engineered in such a way
that the encryption key is only stored in the phone for the
duration of the call and securely erased immediately
afterwards. 

One drawback of the device is that it requires the
recipient of calls to also use a Cryptophone to ensure
security. GSMK does sell the device in pairs, but also
offers a free software download that allows any PC with
a modem to be used as a Cryptophone. 

GSMK says that the Cryptophone complies with German
and EU export law. This means the device can be sold
freely within the EU and a number of other states such
as the US, Japan and Australia. It cannot be sold to
customers within Afghanistan, Syria, Iraq, Iran, Libya
and North Korea. A number of other states are subject
to tight export controls and a special licence will have to
be obtained. 

© ElectricNews.Net



Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Ian Grigg
Tom Weinstein wrote:

 The economic view might be a reasonable view for an end-user to take,
 but it's not a good one for a protocol designer. The protocol designer
 doesn't have an economic model for how end-users will end up using the
 protocol, and it's dangerous to assume one. This is especially true for
 a protocol like TLS that is intended to be used as a general solution
 for a wide range of applications.


I agree with this.  Especially, I think we are
all coming to the view that TLS/SSL is in fact
a general purpose channel security protocol,
and should not be viewed as being designed to
protect credit cards or e-commerce especially.

Given this, it is unreasonable to talk about
threat models at all, when discussing just the
protocol.  I'm coming to the view that protocols
don't have threat models, they only have
characteristics.  They meet requirements, and
they get deployed according to the demands of
higher layers.

Applications have threat models, and in this is
seen the mistake that was made with the ITM.
Each application has to develop its own threat
model, and from there, its security model.

Once so developed, a set of requirements can
be passed on to the protocol.  Does SSL/TLS
meet the requirements passed on from on high?
That of course depends on the application and
what requirements are set.

So, yes, it is not really fair for a protocol
designer to have to undertake an economic
analysis, as much as they don't get involved
in threat models and security models.  It's
up to the application team to do that.

Where we get into trouble a lot in the crypto
world is that crypto has an exaggerated
importance, an almost magical property of
appearing to make everything safe.  Designers
expect a lot from cryptographers for these
reasons.  Too much, really.  Managers demand
some special sprinkling of crypto fairy dust
because it seems to make the brochure look
good.

This will always be a problem.  Which is why
it's important for the crypto guy to ask the
question - what's *your* threat model?  Stick
to his scientific guns, as it were.


 In some ways, I think this is something that all standards face. For any
 particular application, the standard might be less cost effective than a
 custom solution. But it's much cheaper to design something once that
 works for everyone off the shelf than it would be to custom design a new
 one each and every time.


Right.  It is however the case that secure
browsing is facing a bit of a crisis in
security.  So, there may have to be some
changes, one way or another.

iang



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Tom Otvos wrote:

 As far as I can glean, the general consensus in WYTM is that MITM attacks
 are very low (read: inconsequential) probability.  Is this *really* true?


The frequency of MITM attacks is very low, in the sense
that there are few or no reported occurrences.  This
makes it a challenge to respond to in any measured way.


 I came across this paper last year, at the
 SANS reading room:
 
 http://rr.sans.org/threats/man_in_the_middle.php
 
 I found it both fascinating and disturbing, and I have since confirmed
 much of what it was describing.  This leads me to think that an MITM
 attack is not merely of academic interest but one that can occur in
 practice.


Nobody doubts that it can occur, and that it *can*
occur in practice.  It is whether it *does* occur
that is where the problem lies.

The question is one of costs and benefits - how much
should we spend to defend against this attack?  How
much do we save if we do defend?

[ Mind you, the issues that are raised by the paper
are to do with MITM attacks, when SSL/TLS is employed
in an anti-MITM role.  (I only skimmed it briefly, so I
could be wrong.)  We in the SSL/TLS/secure browsing
debate have always assumed that SSL/TLS when fully
employed covers that attack - although it's not the
first time I've seen evidence that the assumption
is unwarranted. ]


 Having said that then, I would like to suggest that one of the really big flaws in 
 the way SSL is
 used for HTTP is that the server rarely, if ever, requires client certs.  We all 
 seem to agree that
 convincing server certs can be crafted with ease so that a significant portion of 
 the Web population
 can be fooled into communicating with a MITM, especially when one takes into account 
 Bruce
 Schneier's observations of legitimate uses of server certs (as quoted by Bryce 
 O'Whielacronx).  But
 as long as servers do *no* authentication on client certs (to the point of not even 
 asking for
 them), then the essential handshaking built into SSL is wasted.
 
 I can think of numerous online examples where requiring client certs would be a good 
 thing: online
 banking and stock trading are two examples that immediately leap to mind.  So the 
 question is, why
 are client certs not more prevalent?  Is is simply an ease of use thing?


I think the failure of client certs has the same
root cause as the failure of SSL/TLS to branch
beyond its mandated role of protecting e-
commerce.  Literally, the requirement that
the cert be supplied (signed) by a third party
killed it dead.  If there had been a button on
every browser that said generate self-signed
client cert now then the whole world would be
using them.
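The mechanics behind such a button would be trivial. A one-command sketch using the stock openssl CLI (the subject name and filenames are illustrative):

```shell
# Generate a fresh RSA key and a self-signed cert in one step --
# roughly what a "generate self-signed client cert now" button would do.
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 365 -nodes -subj "/CN=my-client-cert"

# Inspect the result: subject and issuer are the same party.
openssl x509 -in cert.pem -noout -subject
```

No third party is involved at any point, which is the whole argument: the cost of producing the credential itself is effectively zero.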

Mind you, the whole client cert thing was a bit
of an afterthought, wasn't it?  The orientation
that it was at server discretion also didn't help.


 Since the Internet threat
 model upon which SSL is based makes the assumption that the channel is *not* 
 secure, why is MITM
 not taken more seriously?


People often say that there are no successful MITM
attacks because of the presence of SSL/TLS !

The existance of the bugs in Microsoft browsers
puts the lie to this - literally, nobody has bothered
with MITM attacks, simply because they are way way
down on the average crook's list of sensible things
to do.

Hence, that rant was in part intended to separate
out 1994's view of threat models to today's view
of threat models.  MITM is simply not anywhere in
sight - but a whole heap of other stuff is!

So, why bother with something that isn't a threat?
Why can't we spend more time on something that *is*
a threat, one that occurs daily, even hourly, some
times?


 Why, if SSL is designed to solve a problem that can be solved, namely
 securing the channel (and people are content with just that), are not more people 
 jumping up and
 down yelling that it is being used incorrectly?


Because it's not necessary.  Nobody loses anything
much over the wire, that we know of.  There are
isolated cases of MITMs in other areas, and in
hacker conferences for example.  But, if 10 bit
crypto and ADH was used all the time, it would
still be the least of all risks.


iang



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Tom Weinstein wrote:
 
 Ian Grigg wrote:
 
  Nobody doubts that it can occur, and that it *can* occur in practice.
  It is whether it *does* occur that is where the problem lies.
 
 This sort of statement bothers me.
 
 In threat analysis, you have to base your assessment on capabilities,
 not intentions. If an attack is possible, then you must guard against
 it. It doesn't matter if you think potential attackers don't intend to
 attack you that way, because you really don't know if that's true or not
 and they can always change their minds without telling you.

In threat analysis, you base your assessment on
economics of what is reasonable to protect.  It
is perfectly valid to decline to protect against
a possible threat, if the cost thereof is too high,
as compared against the benefits.

This is the reason that we cannot simply accept
the possible as a basis for engineering of any
form, let alone cryptography.  And this is the
reason why, if we can't measure it, then we are
probably justified in assuming it's not a threat
we need to worry about.

(Of course, anecdotal evidence helps in that
respect, hence there is a lot of discussion
about MITMs in other forums.)

iang

Here's Eric Rescorla's words on this:

http://www.iang.org/ssl/rescorla_1.html

The first thing that we need to do is define our threat model.
A threat model describes resources we expect the attacker to
have available and what attacks the attacker can be expected
to mount.  Nearly every security system is vulnerable to some
threat or another.  To see this, imagine that you keep your
papers in a completely unbreakable safe.  That's all well and
good, but if someone has planted a video camera in your office
they can see your confidential information whenever you take it
out to use it, so the safe hasn't bought you that much.

Therefore, when we define a threat model, we're concerned
not only with defining what attacks we are going to worry
about but also those we're not going to worry about.
Failure to take this important step typically leads to
complete deadlock as designers try to figure out how to
counter every possible threat.  What's important is to
figure out which threats are realistic and which ones we
can hope to counter with the tools available.



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Perry E. Metzger wrote:
 
 Ian Grigg [EMAIL PROTECTED] writes:
  In threat analysis, you base your assessment on
  economics of what is reasonable to protect.  It
  is perfectly valid to decline to protect against
  a possible threat, if the cost thereof is too high,
  as compared against the benefits.
 
 The cost of MITM protection is, in practice, zero.


Not true!  The cost is from 10 million dollars to
100 million dollars per annum.  Those certs cost
money, Perry!  All that sysadmin time costs money,
too!  And all that managerial time trying to figure
out why the servers don't just work.  All those
consultants that come in and look after all those
secure servers and secure key storage and all that.

In fact, it costs so much money that nobody bothers
to do it *unless* they are forced to do it by people
telling them that they are being irresponsibly
vulnerable to the MITM!  Whatever that means.

Literally, nobody - 1% of everyone - runs an SSL
server, and even only a quarter of those do it
properly.  Which should be indisputable evidence
that there is huge resistance to spending money
on MITM.


 Indeed, if you
 wanted to produce an alternative to TLS without MITM protection, you
 would have to spend lots of time and money crafting and evaluating a
 new protocol that is still reasonably secure without that
 protection. One might therefore call the cost of using TLS, which may
 be used for free, to be substantially lower than that of an
 alternative.


I'm not sure how you come to that conclusion.  Simply
use TLS with self-signed certs.  Save the cost of the
cert, and save the cost of the re-evaluation.

If we could do that on a widespread basis, then it
would be worth going to the next step, which is caching
the self-signed certs, and we'd get our MITM protection
back!  Albeit with a bootstrap weakness, but at real
zero cost.
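That caching step is exactly SSH's known_hosts trick, sometimes called key continuity or trust-on-first-use. A minimal in-memory sketch (the function and cache names are my own invention, not any browser API):

```python
import hashlib

# Key-continuity ("cache the self-signed cert") sketch.  A real client
# would persist the cache on disk, as SSH's known_hosts file does; here
# it is an in-memory dict keyed by hostname.
cert_cache = {}

def check_continuity(host, cert_der):
    """Accept any cert on first contact; thereafter insist it never changes."""
    fp = hashlib.sha256(cert_der).hexdigest()
    cached = cert_cache.get(host)
    if cached is None:
        cert_cache[host] = fp      # trust on first use
        return "accepted-first-use"
    if cached == fp:
        return "match"             # same cert as last visit: no MITM since then
    raise ValueError(f"certificate for {host} changed - possible MITM")

check_continuity("example.com", b"fake-der-bytes")
print(check_continuity("example.com", b"fake-der-bytes"))  # match
```

The bootstrap weakness mentioned above is visible in the first branch: an attacker already interposed at first contact is accepted, but any later interposition is caught.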

Any merchant who wants more, well, there *will* be
ten offers in his mailbox to upgrade the self-signed
cert to a better one.  Vendors of certs may not be
the smartest cookies in the jar, but they aren't so
dumb that they'll miss the financial benefit of self-
signed certs once it's been explained to them.

(If you mean, use TLS without certs - yes, I agree,
that's a no-win.)


 How low does the risk have to get before you will be willing not just
 to pay NOT to protect against it? Because that is, in practice, what
 you would have to do. You would actually have to burn money to get
 lower protection. The cost burden is on doing less, not on doing
 more.


This is a well known metric.  Half is a good rule of
thumb.  People will happily spend X to protect themselves
from X/2.  Not all the people all the time, but it's
enough to make a business model out of.  So if you
were able to show that certs protected us from 5-50
million dollars of damage every year, then you'd be
there.

(Mind you, where you would be is, proposing that certs
would be good to make available.  Not compulsory for
applications.)


 There is, of course, also the cost of what happens when someone MITM's
 you.


So I should spend the money.  Sure.  My choice.


 You keep claiming we have to do a cost benefit analysis, but what is
 the actual measurable financial benefit of paying more for less
 protection?


Can you take that to the specific case?

iang



Re: WYTM?

2003-10-16 Thread Ian Grigg
Jon Snader wrote:
 
 On Mon, Oct 13, 2003 at 06:49:30PM -0400, Ian Grigg wrote:
  Yet others say to be sure we are talking
  to the merchant.  Sorry, that's not a good
  answer either because in my email box today
  there are about 10 different attacks on the
  secure sites that I care about.  And mostly,
  they don't care about ... certs.  But they
  care enough to keep doing it.  Why is that?
 
 
 I don't understand this.  Let's suppose, for the
 sake of argument, that MitM is impossible.  It's
 still trivially easy to make a fake site and harvest
 sensitive information.


Yes.  This is the attack that is going on.  This
is today's threat.  (In that it is a new threat.
The old threat still exists - hack the node.)


 If we assume (perhaps erroneously)
 that all but the most naive user will check that they
 are talking to a ``secure site'' before they type in
 that credit card number, doesn't the cert provide assurance
 that you're talking to whom you think you are?


Nope.  It would seem that only the more sophisticated
users can be relied upon to correctly check that they
are at the correct secure site.  In practice almost
all of these attacks bypass any cert altogether and
do not use an SSL protected HTTPS site.

They use a variety of techniques to distract the
attention of the user, some highly imaginative.

For example, if you target the right browser, then it
is possible to popup a box that covers the appropriate
parts.  Or to put a display inside the window that
duplicates the browser display.  Or the URL is one
of those with strange features in there or funny
letters that look like something else.
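The funny-letters trick is easy to demonstrate: Unicode contains characters that render almost identically to ASCII letters but compare unequal, so a spoofed hostname can look exactly like the real one. A small sketch (the hostname is a made-up example):

```python
import unicodedata

real  = "paypal.com"
# The same-looking hostname with the ASCII "a"s swapped for
# Cyrillic U+0430 -- visually near-identical in most fonts.
spoof = "p\u0430yp\u0430l.com"

print(real == spoof)               # False: different code points
print(unicodedata.name(spoof[1]))  # CYRILLIC SMALL LETTER A
```

A cert does nothing against this, because the attacker can hold a perfectly valid cert for the lookalike name.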

In practice, these attacks are all statistical,
they look close enough, and they fool some of the
people some of the time.

Finally, just in the last month, they have also
started doing actual cert spoofs.  This was quite
exciting to me to see a spoof site using a cert,
so I went in and followed it.  Hey presto, it
showed me the cert, as it said it was wrong!  So
I clicked on the links and tried to see what was
wrong.

Here's the interesting thing:  I couldn't easily
tell, and my first diagnosis was wrong.  So then
I realised that *even* if the spoof is using a
cert, the victim falls to a confusion attack (see
Tom Weinstein's comments on bad GUIs).

(But, for the most part, 95% or so ignore the cert,
and the user may or may not notice.)

Now, we have no statistics on how many of these
attacks work, other than the following:  they keep
happening, and with increasing frequency over time.

From this I conclude they are working, enough to
justify the cost of the attack at least.

I guess the best thing to say is that the raw
claim that the cert ensures that you are talking
to the merchant is not 100% true.  It will help
a sophisticated user.  An attack will bypass some
of the users a lot.  It might fool many of the
users only occasionally.


 If the argument is that Verisign and the others don't do
 enough checking before issuing the cert, I don't see
 how that somehow means that SSL is flawed.


SSL isn't flawed, per se.  It's just not appropriately
being used in the secure browser application.  It's
fair to say that its use is misaligned to requirements,
and a lot of things could be done to improve matters.

But, one of the perceptions that exist in the browser
world is that SSL secures ecommerce.  Until that view
is rectified, we can't really build the consensus to
have efforts like Ye and Smith, and Close, and others,
be treated as serious and desirable.

(In practice, I don't think it matters how Verisign
and others check the cert.  This is shown by the
fact that almost all of these attacks have bypassed
the cert altogether.)

iang

http://www.iang.org/ssl/maginot_web.html



Re: WYTM?

2003-10-15 Thread Ian Grigg
Eric Rescorla wrote:
 
 Ian Grigg [EMAIL PROTECTED] writes:
  I'm sorry, but, yes, I do find great difficulty
  in not dismissing it.  Indeed being other than
  dismissive about it!
 
  Cryptography is a special product, it may
  appear to be working, but that isn't really
  good enough.  Coincidence would lead us to
  believe that clear text or ROT13 were good
  enough, in the absence of any attackers.
 
  For this reason, we have a process.  If the
  process is not followed, then coincidence
  doesn't help to save our bacon.

 Disagree. Once again, SSL meets the consensus threat
 model. It was designed that way partly unconsciously,
 partly due to inertia, and partly due to bullying by
 people who did have the consensus threat model in mind.


(If you mean that the ITM is consenus, I grant
you that two less successful protocols follow
it - S/MIME and IPSec (partly) but I don't
think that makes it consensus.  I know there
are a lot of people who don't think in any other
terms than this model, and that is the issue!
There are also a lot of people who think in
terms completely opposed to ITM.

So to say that ITM is consensus is something
that is going to have to be established.

If that's not what you mean, can you please
define?)


 That's not the design process I would have liked,
 but it's silly to say that a protocol that matches
 the threat model is somehow automatically the wrong
 thing just because the designers weren't as conscious
 as one would have liked.


I'm not sure I ever said that the protocol
doesn't match the threat model - did I?  What
I should have said and hoped to say was that
the protocol doesn't match the application.

I don't think I said automatically, either.
I did hold out hope in that rant of mine that
the designers could have accidentally got it
right.  But, they didn't.

Now, SSL, by itself, within the bounds of the
ITM is actually probably pretty good.  By all
reports, if you want ITM, then SSL is your
best choice.

But, we have to be very careful to understand
that any protocol has a given set of characteristics,
and its applicability to an application is an
uncertain thing;  hence the process of the threat
model and the security model.  In SSL's case, one
needs to say use SSL, but only if your threat
model is close to ITM.  Or similar.  Hence the
title of this rant.

The error of the past has been that too many
people have said something like Use SSL, because
we already got it right.  Which, unfortunately,
skips the whole issue of what threat model one
is dealing with.  Just like happened with secure
browsing.

In this case, the ITM was a) agreed upon after
the fact to fill in the hole, and b) not the right
one for the application.


   And on the client side the user can, of course, click ok to the do
   you want to accept this cert dialog. Really, Ian, I don't understand
   what it is you want to do. Is all you're asking for to have that
   dialog worded differently?
 
 
  There should be no dialogue at all.  Going from
  HTTP to HTTPS/self signed is a mammoth increase
  in security.  Why does the browser say it is
  less/not secure?
 Because it's giving you a chance to accept the certificate,
 and letting you know in case you expected a real cert that
 you're not getting one.


My interpretation - which you won't like - is that
it is telling me that this certificate is bad, and
asking me whether I am sure I want to do this.

A popup is synonymous with bad news.  It shouldn't be
used for good news.  As a general theme, that is,
although this is the reason I cited that paper:  others
have done work on this and they are a long way ahead
in their thinking, far beyond me.


   It's not THAT different from what
   SSH pops up.
 
 
  (Actually, I'm not sure what SSH pops up, it's
  never popped up anything to me?  Are you talking
  about a windows version?)
 SSH in terminal mode says:
 
 The authenticity of host 'hacker.stanford.edu (171.64.78.90)' can't be established.
 RSA key fingerprint is d3:a8:90:6a:e8:ef:fa:43:18:47:4c:02:ab:06:04:7f.
 Are you sure you want to continue connecting (yes/no)? 
 
 I actually find the Firebird popup vastly more understandable
 and helpful.
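(For reference, that fingerprint line is just the MD5 digest of the base64-decoded host-key blob, printed as colon-separated hex pairs. A minimal sketch, using a made-up key rather than a real one:)

```python
import base64, hashlib

# Old-style OpenSSH fingerprint: MD5 over the decoded public-key blob,
# displayed as 16 colon-separated hex pairs.  The key material below is
# a dummy stand-in, not a real host key.
pubkey_b64 = base64.b64encode(b"ssh-rsa dummy key material").decode()

blob = base64.b64decode(pubkey_b64)
fingerprint = ":".join(f"{b:02x}" for b in hashlib.md5(blob).digest())
print(fingerprint)  # 16 colon-separated hex pairs
```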


I'm not sure I can make much of your point,
as I've never heard of nor seen a Firebird?


iang



Re: WYTM?

2003-10-15 Thread Ian Grigg
Tim Dierks wrote:
 
 At 12:28 AM 10/13/2003, Ian Grigg wrote:
 Problem is, it's also wrong.  The end systems
 are not secure, and the comms in the middle is
 actually remarkably safe.
 
 I think this is an interesting, insightful analysis, but I also think it's
 drawing a stronger contrast between the real world and the Internet threat
 model than is warranted.
 
 It's true that a large number of machines are compromised, but they were
 generally compromised by malicious communications that came over the
 network. If correctly implemented systems had protected these machines from
 untrustworthy Internet data, they wouldn't have been compromised.

The point is, any compromise of any system is more
likely to come from a node compromise than a wire
compromise.

How much more likely?  We don't know for sure, but
I'd say it is in the many thousand times as much.  E.g.,
look at those statistics.  Basically, the wire threat
is unmeasurable - there are no stats that I've ever
seen, and the node compromise is subject of some
great scrutiny, not to mention 13,000 odd Linux
reinstalls every month.

Does it mean that we should ignore the wire threat?
No, but it does mean that we are foolish to let any
protection of the wire threat cause us any grief.

Protecting against any wire attack is fun, but no
more than that - if it costs us a dime, it needs to
be justified, and that is really hard given that we
are thousands of times more likely to see a compromise
on the node.

If we spend 10c protecting against the wire attack,
should we then spend $1,300 spending against the
node attack?

The situation is so ludicrously unbalanced, that if
one really wanted to be serious about this issue,
instead of dismissing certs out of hand (which would
be the engineering approach c.f., SSH), one would
run ADH across the net and wait to see what happened.

Or, spit credit cards in open HTTP, and check how
many were tried by credit card snafflers.  You might
be waiting a long time :-)  But, that would be a
serious way for credit card companies to measure
whether they care one iota about certs or even
crypto at all.

 Similarly, the statement is true at large (many systems are compromised),
 but not necessarily true in the small (I'm fairly confident that my SSL
 endpoints are not compromised). This means that the threat model is valid
 for individuals who take care to make sure that they comply with its
 assumptions, even if it may be less valid for the Internet at large.


If the threat model is valid for individuals who
happen to understand what all this means, then  
by all means they should use the resultant
security model.  I don't think that anyone is
saying that people can't use SSL in its current
recommended form.  Just that more people would
use SSL if software didn't push them in the
direction of using overly fraught security
levels.


 And it's true that we define the threat model to be as large as the problem
 we know how to solve: we protect against the things we know how to protect
 against, and don't address problems at this level that we don't know how to
 protect against at this level.


(See my first reply to Erik, where I quoted two
sections, earlier today.)

We protect against things which are cost-effective
to protect against.  That is, we use risk analysis
to work out the costs v. the benefits.

We know how to protect against an awful lot.  We
simply don't, unless the cost is less than the
benefit, in general.

And, this is the point:  SSL protected against
the MITM because it could.  Not because it was
present as a threat, and not because it was cost-
effective.  It was infamously and deplorably
weak security logic;  what it should do is
protect against things that are a threat, and
for a cost that matches the threat.


 So, I disagree: I don't think that the SSL model is wrong: it's the right
 model for the component of the full problem it looks to address. And I
 don't think that the Internet threat model has failed to address the
 problem of host compromise: the fact is that these host compromises
 resulted, in part, from the failure of operating systems and other software
 to adequately protect against threats described in the Internet threat
 model: namely, that data coming in over the network cannot be trusted.
 
 That doesn't change the fact that we should worry about the risk in
 practice that those assumptions of endpoint security will not hold.

It's about relative risks - I'm not saying that
SSL should protect the node.  What I'm saying is
that it is ludicrous to worry overly much the
risk that SSL deals with - the ITM, supposedly -
in most practical environments, because that's
not where the trouble lies.

Another Analogy:  Soldiers don't carry umbrellas
into battle.   But it does rain!

The reasoning is simple - unless the umbrella
is *free* it's ludicrous to worry about water
when someone is shooting bullets at you.

We do a risk-analysis on the umbrella, and we
discover that it has a cost of making

WYTM?

2003-10-13 Thread Ian Grigg
As many have decried in recent threads, it all
comes down the WYTM - What's Your Threat Model.

It's hard to come up with anything more important
in crypto.  It's the starting point for ... every-
thing.  This seems increasingly evident because we
haven't successfully reverse-engineered the threat
model for the Quantum crypto stuff, for the Linux
VPN game, and for Tom's qd channel security.

Which results in, at best, a sinking feeling, or
at worst, endless arguments as to whether we are
dealing with yet another a hype cycle, yet another
practically worthless crypto protocol, yet another
newbie leading users on to disaster through belief
in simple, hidden, insecure factors, or...

WYTM?

It's the first question, and I've thought it about
a lot in the context of SSL.  This rant is about
what I've found.  Please excuse the weak cross over!



For $40, you can pick up SSL and TLS by Eric
Rescorla [1].  It's about as close as I could
get to finding serious commentary on the threat
model for SSL [2].

The threat model is in Section 1.2, and the reader
might like to run through that, in the flesh, here:

  http://www.iang.org/ssl/rescorla_1.html

perhaps for the benefit of at least one unbiased
reading.  Please, read it.  I typed it in by hand,
and my fingers want to know it was worth it [3].

The rest of this rant is about what the Threat
model says, in totally biased, opinionated terms
[4].  My commentary rails on the left, the book
composes centermost.



  1.2  The Internet Threat Model

  Designers of Internet security protocols
  typically share a more or less common
  threat model.  

Eric doesn't say so explicitly, but this is pretty
much the SSL threat model.  Here comes the first
key point:

  First, it's assumed that the actual end
  systems that the protocol is being
  executed on are secure

(And then some testing of that claim.  To round
this out, let's skip to the next paragraph:)

  ... we assume that the attacker has more or
  less complete control of the communications
  channel between any two machines. 



Ladies and Gentlemen, there you have it.  The
Internet Threat Model (ITM), in a nutshell, or,
two nutshells, if we are using those earlier two
sentence models.

It's a strong model:  the end nodes are secure and
the middle is not.  It's clean, it's simple, and
we just happen to have a solution for it.



Problem is, it's also wrong.  The end systems
are not secure, and the comms in the middle is
actually remarkably safe.

(Whoa!  Did he say that?)  Yep, I surely did: the
systems are insecure, and, the wire is safe.

Let's quantify that:  Windows.  Is most of the
end systems (and we don't need to belabour that
point).  Are infected with viruses, hacks, macros,
configuration tools, passwords, Norton recovery
tools, my kid sister...

And then there's Linux.  13,000 boxen hacked per
month... [5].  In fact, Linux beats Windows 4 to 1
and it hasn't even challenged the user's desktop
market yet!

It shows in the statistics, it shows in experience;
pretty much all of us have seen a cracked box at
close quarters at one point or another [6].

Windows systems are perverted in their millions by
worms, viruses, and other upgrades to the social
networking infrastructure.  Linux systems aren't
much more trust-inspiring, on the face of it.

Pretty much all of us present in this forum would
feel fairly confident about downloading some sort
of crack disc, walking into a public library and
taking over one of their machines.

Mind you... in that same library, could we walk
in and start listening to each other's comms?

Nope.  Probably not.

On the one hand, we'd have trouble on the cables,
without being spotted by that pesky librarian.
And those darn $100 switches, they so ruin the
party these days.

Admittedly, OTOH, we do have that wonderful 802.11b
stuff and there we can really listen in [7].

But, in practice, we can conclude, nobody much
listens to our traffic.  Really, so close to nobody
that nobody in reality worries about it [8].

But, every sumbitch is trying to hack into our
machine, everyone has a virus scanner, a firewall,
etc etc.  I'm sure we've all shared that weird
feeling when we install a new firewall that
notifies when your machine is being port scanned?
A new machine can be put on a totally new IP, and
almost immediately, ports are being scanned

How do they do that so fast?



Hence the point:  the comms is pretty darn safe.
And the node is in trouble.  We might have trouble
measuring it, but we can assert this fact:

the node is way more insecure than the comms.

That's a good enough assumption for now;  which
takes us back to the so-called Internet Threat
Model and by extension and assumption, the SSL
threat model:

the actual end systems ... are secure.
  the attacker has more or less complete
 control of the communications channel between
 any two machines.

Quite the reverse pertains [5].  So where does that

Re: WYTM?

2003-10-13 Thread Ian Grigg
Minor errata:

Eric Rescorla wrote:
  I totally agree that the systems are
 insecure (obligatory pitch for my Internet is Too
 Secure Already) http://www.rtfm.com/TooSecure.pdf,

I found this link had moved to here;

http://www.rtfm.com/TooSecure-usenix.pdf

 which makes some of the same points you're making,
 though not all.

iang



Re: WYTM?

2003-10-13 Thread Ian Grigg
Eric,

thanks for your reply!

My point is strictly limited to something
approximating there was no threat model
for SSL / secure browsing.  And, as you
say, you don't really disagree with that
100% :-)

With that in mind, I think we agree on this:


  [9] I'd love to hear the inside scoop, but all I
  have is Eric's book.  Oh, and for the record,
  Eric wasn't anywhere near this game when it was
  all being cast out in concrete.  He's just the
  historian on this one.  Or, that's the way I
  understand it.
 
 Actually, I was there, though I was an outsider to the
 process. Netscape was doing the design and not taking much
 input. However, they did send copies to a few people and one
 of them was my colleague Allan Schiffman, so I saw it.

OK!

 It's really a mistake to think of SSL as being designed
 with an explicit threat model. That just wasn't how the
 designers at Netscape thought, as far as I can tell.


Well, that's the sort of confirmation I'm looking
for.  From the documents and everything, it seems
as though the threat model wasn't analysed, it was
just picked out of a book somewhere.  Or, as you
say, even that is too kind, they simply didn't
think that way.

But, this is a very important point.  It means that
when we talk about secure browsing, it is wrong to
defend it on the basis of the threat model.  There
was no threat model.  What we have is an accident
of the past.

Which is great.  This means there is no real objection
to building a real threat model.  One more appropriate
to the times, the people, the applications, the needs.

And the today-threats.  Not the bogeyman threats.


 Incidentally, Ian, I'd like to propose a counterargument
 to your argument. It's true that most web traffic
 could be encrypted if we had a more opportunistic key
 exchange system. But if there isn't any substantial
 sniffing (i.e. the wire is secure) then who cares?


Exactly.  Why do I care?  Why do you care?

It is mantra in the SSL community and in the
browsing world that we do care.  That's why
the software is arranged in a double lock-
in, between the server and the browser, to
force use of a CA cert.

So, if we don't care, why do we care?  What
is the reason for doing this?  Why are we
paying to use free software?  What paycheck
does Ben draw from all our money being spent
on this i don't care thing called a cert?

Some people say because of the threat model.

And that's what this thread is about:  we
agree that there is no threat model, in any
proper sense.  So this is a null and void
answer.

Other people say to protect against MITM.
But, as we've discussed at length, there is
little or no real or measurable threat of MITM.

Yet others say to be sure we are talking
to the merchant.  Sorry, that's not a good
answer either because in my email box today
there are about 10 different attacks on the
secure sites that I care about.  And mostly,
they don't care about ... certs.  But they
care enough to keep doing it.  Why is that?



Someone made a judgement call, 9 or so years
ago, and we're still paying for that person
caring on our behalf, erroneously.

Let's not care anymore.  Let's stop paying.

I don't care who it was, even.  I just want
to stop paying for his person, caring for me.

Let's start making our own security choices?

Let crypto run free!

iang



Re: WYTM?

2003-10-13 Thread Ian Grigg
Eric Rescorla wrote:
 
 Ian Grigg [EMAIL PROTECTED] writes:
   It's really a mistake to think of SSL as being designed
   with an explicit threat model. That just wasn't how the
   designers at Netscape thought, as far as I can tell.
 
 
  Well, that's the sort of confirmation I'm looking
  for.  From the documents and everything, it seems
  as though the threat model wasn't analysed, it was
  just picked out of a book somewhere.  Or, as you
  say, even that is too kind, they simply didn't
  think that way.
 
  But, this is a very important point.  It means that
  when we talk about secure browsing, it is wrong to
  defend it on the basis of the threat model.  There
  was no threat model.  What we have is an accident
  of the past.
 
 Maybe so, but it coincides relatively well with the
 common Internet threat model, so I think you can't
 just dismiss that out of hand as if it were pulled
 out of the air.


I'm sorry, but, yes, I do find great difficulty
in not dismissing it.  Indeed being other than
dismissive about it!

Cryptography is a special product, it may
appear to be working, but that isn't really
good enough.  Coincidence would lead us to
believe that clear text or ROT13 were good
enough, in the absence of any attackers.

For this reason, we have a process.  If the
process is not followed, then coincidence
doesn't help to save our bacon.

It has to follow, for it to be valuable.  If
it doesn't follow, to treat it as anything
other than a mere coincidence to be dismissed
out of hand is leading us on to make other
errors.

I think that Matt Blaze said it fairly well.
There are some security practices from the
recent past that are now considered appalling.

It's time to be a little bit appalled, and
to recognise SSL for what it is - a job that
survived not on its cryptographic merits, but
through market and structural conditions at
the time.


   Incidentally, Ian, I'd like to propose a counterargument
   to your argument. It's true that most web traffic
   could be encrypted if we had a more opportunistic key
   exchange system. But if there isn't any substantial
   sniffing (i.e. the wire is secure) then who cares?
 
 
  Exactly.  Why do I care?  Why do you care?
 
  It is mantra in the SSL community and in the
  browsing world that we do care.  That's why
 the software is arranged in a double lock-
  in, between the server and the browser, to
  force use of a CA cert.
 
 You keep talking about the server locking you in, but it doesn't.


(No, it's a double-lock-in, or maybe more.  It's
a complex interrelated scenario.)

Here's specifically what the server does:  When
it is installed, it doesn't also install and
start up the SSL server.  You know that page
that has the feather on?  It should also start
up on the SSL side as well, perhaps with a
different colour.

Specifically, when you install the server, it
should create a self-signed certificate and use
it.  Straight away.  No questions asked.

Then, it becomes an administrator issue to
replace that with a custom signed one, if the
admin guy cares.
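That install-time step is easy to sketch.  The fragment below is an
illustration only, assuming the openssl command-line tool is on the
path; the filenames and subject are made up:

```python
import os
import subprocess

def ensure_self_signed_cert(key_file="server.key", cert_file="server.crt",
                            host="localhost"):
    """Create a self-signed certificate pair if none exists yet.

    No questions asked: an install script calls this once and the
    SSL side can start straight away.  The admin can later replace
    the pair with a custom-signed one, if he cares.
    """
    if os.path.exists(key_file) and os.path.exists(cert_file):
        return  # a previous run (or the admin) already provided a pair
    subprocess.run(
        ["openssl", "req", "-x509", "-newkey", "rsa:2048",
         "-keyout", key_file, "-out", cert_file,
         "-days", "365", "-nodes",          # no passphrase: unattended install
         "-subj", "/CN=" + host],
        check=True, capture_output=True)

ensure_self_signed_cert()
```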


 The world is full of people who run SSL servers with self-signed
 certs.


Right.  I'm looking to improve those numbers,
my guess would be 10-fold is not unreasonable.


 And on the client side the user can, of course, click ok to the do
 you want to accept this cert dialog. Really, Ian, I don't understand
 what it is you want to do. Is all you're asking for to have that
 dialog worded differently?


There should be no dialogue at all.  Going from
HTTP to HTTPS/self-signed is a mammoth increase
in security.  Why does the browser say it is
less/not secure?

Further, the popups are a bad way to tell the
user what the security level is.  The user can't
grok them and easily mucks up on any complex
questions.  There needs to be a security display
on the secured area that is more prominent and
also more graded (caching numbers) than the
current binary lock symbol.

There has been some research on this area, I
think it was Sean Smith (Dartmouth College)
that posted on this subject.  Yes, here it is:

  From: Sean Smith [EMAIL PROTECTED]
   Or, if we should bother to secure it, shouldn't
   we mandate the security model as applying to the
   browser as well?

  Exactly.

  That was the whole point of our Usenix paper last year

  E. Ye, S.W. Smith.
  ``Trusted Paths for Browsers.''
  11th Usenix Security Symposium. August 2002
  http://www.cs.dartmouth.edu/~sws/papers/usenix02.pdf

Oh, and:

  Advertisement: we also built this into Mozilla, for Linux and Windows.
  http://www.cs.dartmouth.edu/~pkilab/demos/countermeasures/



 It's not THAT different from what
 SSH pops up.


(Actually, I'm not sure what SSH pops up, it's
never popped up anything to me?  Are you talking
about a windows version?)


iang



Re: NCipher Takes Hardware Security To Network Level

2003-10-11 Thread Ian Grigg
Anton Stiglic wrote:
 
 - Original Message -
 From: Peter Gutmann [EMAIL PROTECTED]
  [...]
 
  The problem is
  that what we really need to be able to evaluate is how committed a vendor
 is
  to creating a truly secure product.
  [...]
 
 I agree 100% with what you said.  Your 3 group classification seems
 accurate.
 But the problem is how can people who know nothing about security evaluate
 which vendor is most committed to security?


(I am guessing you mean, in some sort of objective sense.)

Is there any reason to believe that people who
know nothing about security can actually evaluate
questions about security?

It's often been said that security is an inverted
product.  (I'm scratching to think of the proper
economic term here.)

That is, with security, you can measure easily when
it is letting the good stuff through, but you don't
know when and if and how well it is stopping the bad
stuff *.

The classical answer to difficult-to-evaluate
products is to concentrate on brand, or independent
assessors.  But, brands are based on revenues, not
on the underlying product.  Hence widespread confusion
as to whether Microsoft delivers secure product - the
brand gets in the way of any objective assessment.

And, independent assessors are generally subvertible
by special interests (mostly, the large incumbents
encourage independent assessors to raise barriers
to keep out low cost providers).  Hence, Peter's
points.  This is a very normal economic pattern, in
fact, it is the expected result.

So, right now, I'd say the answer to that question
is that there is no way for someone who knows nothing
about security to objectively evaluate a security
product.

iang

* In contrast, someone who knows little about cars,
can objectively evaluate a car.  They can take it
for a test drive and see if it feels right.  Using
it is proving it.



Re: Easy VPNs?

2003-10-11 Thread Ian Grigg
Dave Howe wrote:

 so as I say - think of vpn as two components - intercept (the virtual
 network functionality) and transport (a secure, authenticated,
 encapsulated communications standard) and how vpn over *anything* becomes
 more clear.


Thanks.  That's the key!  Then, the answer
might really be that a good system would
do the transport over UDP if it could, or
it would fall back to a connection in the
worst case.
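A sketch of that preference order, with a made-up probe format (a real
VPN would use its own handshake here):

```python
import socket

def open_transport(host, port, timeout=1.0):
    """Try a datagram transport first; fall back to a stream connection.

    Sends a single UDP probe and waits briefly for an echo.  If the
    peer answers, keep the (cheaper) UDP socket; otherwise fall back
    to TCP in the worst case.  The probe format is invented for this
    sketch only.
    """
    probe = b"\x00probe"
    u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    u.settimeout(timeout)
    try:
        u.sendto(probe, (host, port))
        data, _ = u.recvfrom(64)
        if data == probe:
            return "udp", u
    except OSError:
        pass  # no echo, port unreachable, or timeout: no UDP path
    u.close()
    t = socket.create_connection((host, port), timeout=timeout)
    return "tcp", t
```

Preferring datagrams also sidesteps the well-known TCP-over-TCP stall
problem when tunnelling.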

You know, when placed in that context, the
discussion of whether the transport is done
over SSL, IPSec, or carrier pigeons is a
storm in a teacup.  If someone is concerned,
buy the upgrade that gets you the better
transport.


iang



credit card threat model

2003-10-08 Thread Ian Grigg
Anne  Lynn Wheeler wrote:

 what i said was that it was specifying a simplified SSL/TLS based on the
 business requirements for the primary use of SSL/TLS  as opposed to a
 simplified SSL/TLS based on the existing technical specifications and
 existing implementations.


I totally agree that the business requirements
for protecting credit cards have scant
relationship to the security model of SSL/TLS!


 I don't say it was technical TLS  I claimed it met the business
 requirements of the primary use of SSL/TLS.
 
 I didn't preclude that there could simplified SSL/TLS based on existing
 technical specifications as opposed to implementation based on business
 requirements for the primary use.
 
 I thot I was very fair to distinguish between the business requirements use
 of SSL/TLS as opposed to technical specifications for SSL/TLS.


I think the key here is that SSL/TLS is a channel
security protocol.  But, to harken back to its days
of origin, where Netscape asked for something to
protect credit cards, is going to confuse the
issue for a lot of people.

In preference, if we want something to protect
credit cards, then the threat models should be
established, and the protocol should be created.

Yes, SSL/TLS protects credit cards a little bit
in one part of their flight, but SSL/TLS is much
bigger and grander than that small part.  It's
fair to say, I think, that its whole security
model pays little attention to credit cards; it's
oriented to creating a good channel over which
any developer/implementor can pass *any* data.

Hence, for example, the emphasis on replay
prevention - which is at a higher layer in a
financial protocol, and was AFAIK in place in
credit cards since whenever.  But if one is
doing a channel security product, it has to
be there, as the overlaying application won't
consider it.
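The kind of replay prevention meant here can be illustrated with a
sequence number bound under the MAC.  This is the idea only, not TLS's
actual record format; the key and framing are made up:

```python
import hmac, hashlib

KEY = b"shared channel key (illustrative)"

def seal(seq, payload):
    """Bind a monotonically increasing sequence number under the MAC."""
    header = seq.to_bytes(8, "big")
    tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    return header + payload + tag

class Receiver:
    """Rejects anything replayed or out of order."""
    def __init__(self):
        self.next_seq = 0
    def open(self, record):
        header, payload, tag = record[:8], record[8:-32], record[-32:]
        expect = hmac.new(KEY, header + payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("bad MAC")
        if int.from_bytes(header, "big") != self.next_seq:
            raise ValueError("replayed or reordered record")
        self.next_seq += 1
        return payload

r = Receiver()
msg = seal(0, b"pay $10")
assert r.open(msg) == b"pay $10"
try:
    r.open(msg)          # the very same record again: a replay
    replayed = True
except ValueError:
    replayed = False     # the channel layer caught it
```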

 There are lots of really great implementations in this world  many of
 which have absolutely no relationship at all with a good business reason to
 exist.
 
 The real observation was that in the early deployments of SSL  it was
 thot it would be used for something entirely different ... and therefor had
 a bunch of stuff that would meet those business requirements. However, we
 come to find out that it was actually being used for something quite a bit
 different with different business requirements.


This history of how the business requirements
led to the SSL model are possibly closed to us
at this point...  I wasn't there, and I'm a
bit scared to ask :)


 So a possible conjecture is that if there had been better foreknowledge as
 to how SSL was going to be actually be used  one might conjecture that
 it would have looked more like something I suggest (since that is a better
 match to the business requirements) ... as opposed to matching some
 business requirements for which it turned out not to be used.


My own view - in conjecture - is that it comes
back to that old chestnut, what's your threat
model.  It would appear that this was one missing
phase in the early development of SSL.  Or, if
it was asked, it certainly wasn't validated, it
was predicted only.

But, in terms of useful posture today, 9 years
down the track, I personally think it is time to
give up the ghost and not ever mention credit
cards again.  Others will no doubt differ ...
cards again.  Others will  do differ ...
but I don't think it is possible nor helpful to
mix and match the credit card mission and the
SSL result as if they are strongly related.


 I've repeatedly claimed that the credit card number in flight has never
 been the major threat/vulnerability  the major threat (even before the
 internet) has always been the harvesting of the merchant files with
 hundreds, thousands, tens of thousands, even millions of numbers  all
 neatly arranged.


Yep.  This was obvious in 94.  In fact it was
obvious in 84 - the Internet has always been a
very safe place as far as eavesdroppers go, it
ranks up there with telcos and well above
physical mail as far as reliability and privacy
goes.

Yes, of course, eavesdropping is possible, and
of course there have been many incidents.  But,
in terms of the amount of traffic, the risk is
miniscule, and probably well below the credit
card companies' real thresholds.

And, even in the presence of widespread delivery
of credit card numbers in the clear, it's easy
to show that the prime threat is and was and will
always be the hacking into some easy Linux box
and scarfing up the millions from the database.

Why they didn't see that in '94 I don't know.


 The issue that we were asked to do in the X9A10 working group was to
 preserve the integrity of the financial infrastructure for all electronic
 retail payments.  A major problem is that in the existing infrastructure,
 the account number is effectively a shared-secret and therefor has to be
 hidden. Given that there is a dozen of business processes that require it
 to be in the clear and potentially millions of locations  there is no
 practical way of addressing 

Re: anonymity +- credentials

2003-10-08 Thread Ian Grigg
Anton Stiglic wrote:
 
 - Original Message -
 From: Ian Grigg [EMAIL PROTECTED]
 
  [...]
  In terms of actual practical systems, ones
  that implement to Brands' level don't exist,
  as far as I know?
 
 There were however several projects that implemented
 and tested the credentials system.  There was CAFE, an
 ESPRIT project.


CAFE now has a published report on it, so it
might actually be accessible.  I'm not sure
if any of the tech is available.


 At Zeroknowledge there was working implementation written
 in Java, with a client that ran on a blackberry.
 
 There was also the implementation at ZKS of a library in C
 that implemented Brands's stuff, of which I participated in.
 The library implemented issuing and showing of credentials,
 with a limit on the number of possible showing (if you passed
 the limit, identity was revealed, thus allowing for off-line
 verification of payments for example.  If you did not pass the
 limit, no information about your identity was revealed).
 The underlying math was modular, you could work in a
 subgroup of Z*p for prime p, or use Elliptic curves, or
 base it on the RSA problem.  We plugged in OpenSSL
 library to test all of these cases.
 Basically we implemented the protocols described in
 [1], with some of the extensions mentioned in the conclusion.
 
 The library was presented by Ulf Moller at some coding
 conference which I don't recall the name of...


Is any of this published?  I'd assumed not,
ZKS were another company obscuring their
obvious projects with secrecy.

 It was to be used in Freedom, for payment of services,
 but you know what happended to that projet.


Reality caught up to them, I heard :)  As
Eric R recently commented, there are no
shortage of encrypted comms projects being
funded and .. collapsing when they discover
that selling secure comms is not a demand-
driven business model.


 Somebody had suggested that to build an ecash system
 for example, you could start out by implementing David
 Wagner's suggestion as described in Lucre [2], and then
 if you sell and want extra features and flexibility get the
 patents and implement Brands stuff.


Back in '98 or so, I got involved with a project
to do bearer stuff.  I even went so far as to
commission a review of all the bearer protocols
(Cavendish, Chaum, Brands, Wagner, Mariott, etc
etc).  Brands came out as the best (please don't
ask me why), so Stefan and I spent many a pleasurable
negotiating session in Dutch bars trying to hammer
out a licence.  Unfortunately we didn't move fast
enough to lock up the terms, and he went off to
bigger and better things - ZKS.

Since then, we toyed around adding tokens to WebFunds.
We started out thinking about Wagner, but what
transpired was that it was just as easy to make
the whole lot available at once.  Now we have a
framework.  (It's an incomplete project, but we
recently picked it up again after a long period
of inactivity, as there is a group that has figured
out how to use it for a cool project.)  The protocol
only covers single phase withdrawals, not two
phase, so far.
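For reference, the algebra at the heart of Wagner's blinding (the
method Lucre implements) fits in a few lines.  The parameters below
are toys, far too small for real use, and a full coin protocol needs
much more around this:

```python
import secrets

# Toy group: a Mersenne prime, illustration only.
p = (1 << 127) - 1
g = 5

k = secrets.randbelow(p - 2) + 1   # mint's private key
h = pow(g, k, p)                   # mint's public key, g^k

# --- client: blind the coin ---
y = pow(g, secrets.randbelow(p - 2) + 1, p)   # the coin, a group element
b = secrets.randbelow(p - 2) + 1              # blinding factor
m = (y * pow(g, b, p)) % p                    # what the mint actually sees

# --- mint: sign blindly ---
s_blind = pow(m, k, p)                        # m^k = y^k * g^(b*k)

# --- client: unblind ---
s = (s_blind * pow(pow(h, b, p), -1, p)) % p  # divide out h^b, leaving y^k

# The client now holds y^k without the mint ever having seen y.
assert s == pow(y, k, p)
```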


 Similar strategy
 would seem to apply for digital credentials in general.


Perhaps!  I don't understand the model for credentials,
but if they can all be put into a block-level protocol,
then sharing the code base is a mighty fine idea.


  There is an alternate approach, the E/capabilities
  world.  Capabilities probably easily support the
   development of pseudonyms and credentials, probably
  more easily than any other system.   But, it would
  seem that the E development is still a research
  project, showing lots of promise, not yet breaking
  out into the wider applications space.
 
  A further alternate is what could be called the
   hard-coded pseudonym approach as characterised
  by SOX.  (That's the protocol that my company
  wrote, so normal biases expected.)  This approach
   builds pseudonyms from the ground up, which results
  in a capabilities model like E, but every separate
  use of the capability must be then re-coded in hard
  lines by hardened coders.
 
 Do you have any references on this?


The capabilities guys hang around here:

http://erights.org/
http://www.eros-os.org/

SOX protocol is described here:

http://webfunds.org/guide/sox.html


iang



Re: [dgc.chat] EU directive could spark patent war

2003-10-08 Thread Ian Grigg
Steve Schear wrote:
 
 [I wonder what if any effect this might have on crypto patents, e.g.,
 Chaumian blinding?]


My guess is, nix, nada.  Patents are a red herring
in the blinding skirmishes, they became a convenient
excuse and a point to place the flag when rallying
the troops.  The battle was elsewhere, but it was
good to have something to keep the press distracted.

You can see this in, for example, the long available
Wagner variation, and the availability of a bunch of
other variations.  Even when people started doing
demo code of the various alternates (Magic Money,
Ben Laurie's Lucre, etc) there was little to no
amounts of interest.  (There is one guy working
to turn BLL into a system, and then there is our
WebFunds project, originally started from on an
old port of MM back in 1999 or so.  That's it as
far as I know, what is clear is that there is no
inundation of monetary offers for the tech.  I
know a couple of people who put or promised some
money, but it was all pocket change.)

Any one with any business experience realises that
the patents were a huge risk factor, so the obvious
thing was to de-risk it.  Hence, use Wagner first
and shop for another method later (we figured this
out in 2001 after the first coder's Chaum code was
replaced by the second's Wagner efforts...  Or was
it Brands).

Hence, there are no business analysies being done,
and therefore, no business.

Here we remain within sight of the expiry of the
first of Chaum's patents, and still lukewarm
interest in blinding.  I predict the date will
pass and nothing will change.

The real barriers to token money systems are these:

   1. lack of a viable application
   2. tokens require downloaded clients
   3. bearer is a dirty word
   4. full implementation requires too many
  skills

(not authoritative)

As against approximations (DGCs, Paypals, nymous)
blinded token money systems don't attract enough
real business zing to make them attractive enough
to overcome the barriers.

(I personally am somewhat agnostic on blinding,
to the annoyance of many high priests of the
order.  I think the bank robbery problem is a
bit of a devil, but OTOH, I just spent today
working on getting token withdrawals going
again.  That's because I know of a group that
wants it for a very interesting application
to do vaguely with the 3rd world :-)


 The European Parliament's decision to limit patents... risks creating a
 patent war with a fallout that could make it illegal to access some
 European e-commerce sites from the United States...
 
 Pure software should not be patentable, the parliament argued, and
 software makers should not be required to license patented technology for
 the purposes of interoperability--for example, creating a device that can
 play a patented media format, or allowing a computer program to read and
 write a competitor's patented file formats. 
 
 The amendments also sought to ban the patenting of business methods such
 as Amazon.com's patent on one-click purchasing. 
 
 Full story at http://news.com.com/2100-1014_3-5086062.html?tag=nefd_top


Another factor is that Europe has effectively
emasculated the entrepreneurial digital money
field with the E-money directive.  It's been
a while since I read it, but it basically forces
the small guy to be just like a bank or to be
so small as to not have a future.  Empirically,
I know two people - entrepreneurs - who've tried
to get into it, then read the directive, and said
it can't be done (both from different countries
that actually claim to promote the field).

(The USA, under the quiet guidance of certain
very smart people, went the other way and
deliberately held off from doing or saying
anything.  They realised that they could do
nothing but harm... so they declined to get
involved.  Also, in the US, there is very
much more of a spirit of doing something if
it is not explicitly banned.  In Europe, there
is much more of a spirit of getting permission
if it is not explicitly permitted, on the
assumption that the government knows what it
is talking about.)

The only ones who are interested in reducing
transaction costs (in the blinding fashion) are
new outsiders looking to set up new payment
systems.  Hence, the arisal of the digital
gold currencies was centered around the US, and
the smart card efforts of the Europeans were
centered around the national banking structures.

Smart card schemes cost O($100,000,000)
whereas these days a DGC costs O($100,000).

Go figure.


iang



Re: anonymous DH MITM

2003-10-06 Thread Ian Grigg
Taral wrote:
 
 On Mon, Oct 06, 2003 at 11:43:21AM -0400, Anton Stiglic wrote:
  You started by talking about anonymous communication, but ended up
  suggesting a scheme for pseudonymous communication.
 
  Anonymous != pseudonymous.
 
  Let us be clear on that!
  It is an important difference.
 
 Yes it is. An anonymous system can be constructed from a pseudonymous
 system by never reusing a pseudonym.

True, I think!  Is there a practical application for this?

( I can think of one trivial example: a message system is
pseudonymous, but I want to send an anonymous message! )

I'm asking myself whether anonymous DH is confusingly named.
Perhaps it should be called pseudonymous DH because it creates
pseudonyms for the life of the session?  Or, we need a name
that describes the creation of pseudonyms, de novo, from
an anonymous starting position?
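One way to picture that: each run of anonymous DH mints a fresh
keypair, and the public half acts as a pseudonym that lives exactly as
long as the session.  A toy modular sketch (parameters far too small
for real use):

```python
import secrets

# Toy parameters: a Mersenne prime, illustration only.
P = (1 << 127) - 1
G = 5

def new_session_identity():
    """A fresh keypair per session: the public half is, in effect,
    a pseudonym that is never reused."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Two parties, starting from nothing.
a_priv, a_pub = new_session_identity()
b_priv, b_pub = new_session_identity()

# Both sides derive the same shared secret, yet neither learns a
# durable identity for the other; next session, new pseudonyms.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b
```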

iang



Re: Simple SSL/TLS - Some Questions

2003-10-06 Thread Ian Grigg
Jill Ramonsky wrote:

 First, the primary design goal is simple to use.


This is the highest goal of all.  If it is not simple
to use, it misses out on a lot of opportunities.  And
missing out results in less crypto being deployed.

If you have to choose between simple-but-incomplete,
versus complex-but-complete, then choose the former
every time.  Later on, you can always upgrade - or
the programmer using the system can upgrade - to the
full needs if they have shown themselves the need
for the complete solution that you optimised away.

On these lines, I'd suggest something like:



1.  Select one cipher suite only, and reject the
rest.  Select the strongest cipher suite, such as
large RSA keys / Rijndael / SHA-256 or somesuch,
so there are no discussions about security.

1b.  This basically means: do TLS only.  Don't offer
any fallback.  If someone is using your protocol,
they can select it to talk to their own apps.
If someone has to talk to another app using TLS
or SSL, then they almost certainly have to talk
all suites, so they are more or less forced to
do OpenSSL already.  Hence, almost by definition,
you are forced into the market where people don't
want to negotiate cipher suites, they want the
channel product between their own apps up and
running with no fuss.

2.  Notwithstanding 1. above, leave the hooks in
to add another cipher suite.  You should really
only plan on one or two more.  One for embedded
purposes, for example, where you really push the
envelope of security for slower devices.  And
another because someone pays you to do it :-)

3.  Ditch Anon-DH as a separate suite.  Concentrate
on pure certificate comms.  Never deviate more than
briefly from the true flavour of the tools you are
working with.

4.  Ignore all complex certificate operations such
as CA work, etc.  If someone wants that order of
complexity, then they want OpenSSL, which includes
most of those tools.

5.  To meet the dilemma posed by 3, 4, generate
self-signed certificates on the fly.  Then, the
protocol should bootstrap up and get running
easily.  SSH model.  Anyone who wants more, can
replace the certs with alternately named and
signed certs, as created with more specialised
tools.  Or they can help you to write those
parts.

Good protocols divide into two parts, the second
part of which starts "trust this key totally".
Ignore the first part for now, being, how you
got the key.

6.  Pick a good X.509 / ASN1 tool.  Don't do
that part yourself.  See all the writings on how
hard this is to do.  If you want to join the
guild of people who've done an ASN1 tool and can
therefore call it easy, do so, but tell your
family you won't be home for Christmas :-)

7.  Produce a complete working channel security
product before going for first release.  Nothing
slows down progress more than a bunch of people trying
to help build something that they can't agree on.
Build it, then ask for help to round it out.

8.  What ever you do ... try and work on the code
that is most beneficial for other reasons.  Don't
plan on completing the project.  In the event
that you don't complete, make sure that what you
did do was worthwhile for other reasons!

9.  Take all expert advice, including the above,
with some skepticism.  You will have much more
intuition because you will be deep in the issues.
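Point 1 maps directly onto what later toolkits expose.  The sketch
below uses today's Python ssl module API (which obviously didn't exist
when this was written) to pin a client context to TLS 1.3, whose
suites are all strong AEAD constructions:

```python
import ssl

# One protocol version, no fallback: every TLS 1.3 suite is a
# strong AEAD construction, so there is nothing to argue about.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.check_hostname = True            # the defaults for a client context,
ctx.verify_mode = ssl.CERT_REQUIRED  # restated here for clarity
```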



  With that in mind, I believe I could do a safer
 implementation in C++ than I could in C.


Go with it, then.  Being right means you win,
being wrong is an opportunity to learn :-)


 ... /Of course/ one should be
 able to communicate with standard TLS implementations, otherwise the
 toolkit would be worthless.


Is that the case?  A wide variety of uses for
any protocol are application to same application.
The notion of client-to-server is an alternate,
but it's only an alternate.  It is not a given
that apps builders want to talk to other TLS libs.

TLS is there to be used, as is all other software
and standards.  It is at your option whether you
wish to join the group of people that can express
comms in *standard* TLS, talking heterogeneously.


 (1) THE LICENCE
 
 I confess ignorance in matters concerning licensing. The basic rules
 which I want, and which I believe are appropriate are:
 (i) Anyone can use it, royalty free. Even commercial applications.
 (ii) Anyone can get the source code, and should be able to compile it to
 executable from this.
 (iii) Copyright notices must be distributed along with the toolkit.
 (iv) Anyone can modify the code (this is important for fixing bugs and
 adding new features) and redistribute the modified version. (Not sure
 what happens to the copyright notices if this happens though).

Sounds like BSD 2-clause or one of the equivalents.

The only question I wasn't quite sure of
was whether, if I take your code, and modify it,
can I distribute a binary only version, and keep
the source changes proprietary?

If so, that's BSD.  If not, you need some sort
of restriction like Mozilla (heading towards GPL).

My own philosophy 
