Re: Maybe It's Snake Oil All the Way Down

2003-06-04 Thread Ian Grigg
Tim Dierks wrote:
 
 At 09:11 AM 6/3/2003, Peter Gutmann wrote:
 Lucky Green [EMAIL PROTECTED] writes:
  Given that SSL use is orders of magnitude higher than that of SSH, with no
  change in sight, primarily due to SSL's ease-of-use, I am a bit puzzled by
  your assertion that ssh, not SSL, is the only really successful net crypto
  system.
 
 I think the assertion was that SSH is used in places where it matters, while
 SSL is used where no-one really cares (or even knows) about it.  Joe Sixpack
 will trust any site with a padlock GIF on the page.  Most techies won't access
 a Unix box without SSH.  Quantity != quality.
 
 I have my own opinion on what this assertion means. :-) I believe it
 intends to state that ssh is more successful because it is the only
 Internet crypto system which has captured a large share of its user base.
 This is probably true: I think the ratio of ssh to telnet is much higher
 than the ratio of https to http, pgp to unencrypted e-mail, or what have you.


Certainly, in measurable terms, Tim's description
is spot on.  I agree with Peter's comments, but
that's another issue indeed.


 However, I think SSL has been much more successful in general than SSH, if
 only because it's actually used as a transport layer building block rather
 than as a component of an application protocol. SSL is used for more
 Internet protocols than HTTP: it's the standardized way to secure POP,
 IMAP, SMTP, etc. It's also used by many databases and other application
 protocols. In addition, a large number of proprietary protocols and custom
 systems use SSL for security: I know that Certicom's SSL Plus product
 (which I originally wrote) is (or was) used to secure everything from
 submitting your taxes with TurboTax to slot machine jackpot notification
 protocols, to the tune of hundreds of customers. I'm sure that when you add
 in RSA's customers, those of other companies, and people using
 OpenSSL/SSLeay, you'll find that SSL is much more broadly used than ssh.


Design wins!  Yes, indeed, another way of measuring
the success is to measure the design wins.  Using
this measure, SSL is indeed ahead.  This probably
also correlates with the wider support that SSL
garners in the cryptography field.


 I'd guess that SSL is more broadly used, in a dollars-secured or
 data-secure metric, than any other Internet protocol. Most of these uses
 are not particularly visible to the consumer, or happen inside of
 enterprises. Of course, the big winners in the $-secured and data-secured
 categories are certainly systems inside of the financial industry and
 governmental systems.


That would depend an awful lot on what was meant
by dollars-secured and data-secured?  Sysadmins
move some pretty hefty backups by SSH on a routine
basis.

-- 
iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Session Fixation Vulnerability in Web Based Apps

2003-06-17 Thread Ian Grigg
Ben Laurie wrote:
 
 James A. Donald wrote:

  I do not see how this flaw can be avoided unless one
  consciously takes special measures that the development
  environment is not designed or intended to support.
 
 The obvious answer is you always switch to a new session after login.
 Nothing cleverer is required, surely?

Having read all these discussions, and having looked
at my own PHP code and the PHP documentation, I have
to agree with James D.  This cleverness is harder
than it looks!

I knew how to start and maintain a session, I think.

(That was no easy task.  The PHP documentation is
a mess, and over the last several versions different
ways started and stopped working...  I'm sure the
obvious answer is to use a better tool, but I'm a bit
stuck with a huge dose of reality at the moment, being
one of the million or so PHP developers, and can't junk
the man-years of habit just this month :-)

I just spent an hour or so skimming the doco for PHP,
and apparently, there is an ability to set another
session id with a call called session_id(), oddly
enough :-)

Which only leaves the problems of a) inventing a new
session id, b) rewriting the code so that it carefully
implements the unclever notion of setting this at the
new login, c) deleting this at logout, and finally d)
praying that this works as expected.

On the face of it, PHP doesn't appear to have much
support for this.  It will require each developer to
(re-)implement their own solution.  I'd love to be
wrong in this:  does anyone know the easy way to
secure a PHP website against session fixation?  Or is
it another case of you gotta write it all yourself
again?
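For what it's worth, the unclever notion itself is
tiny.  A sketch in Python rather than PHP (the session
store here is a stand-in for illustration, not any real
framework's API) of steps a) through c):

```python
import secrets

# Minimal sketch of the anti-fixation rule discussed above: whatever
# session id arrived with the login request is discarded, and a fresh,
# unguessable id is issued once the credentials check out.
sessions = {}  # session id -> session data (stand-in for PHP's store)

def login(old_session_id, username, password_ok):
    """Authenticate, then rotate the session id (steps a through c)."""
    if not password_ok:
        return None
    data = sessions.pop(old_session_id, {})  # retire the attacker-visible id
    data["user"] = username
    new_id = secrets.token_hex(16)           # a) invent a new, random id
    sessions[new_id] = data                  # b) bind the session to it
    return new_id

def logout(session_id):
    sessions.pop(session_id, None)           # c) delete at logout
```

The point is only that the id the attacker fixed in
advance never survives a successful login.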

Rich Salz wrote:
 From
 http://www.modssl.org/docs/2.8/ssl_reference.html#ToC25
 
 The following environment variables are exported into SSI files
 and CGI scripts:
 SSL_SESSION_ID The hex-encoded SSL session id
 
 Care to try again?

Please.  How does one get access to that in PHP?  That
would be a wonderful answer to a) above.  Which would
only leave me with b) thru d)   :-(
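(If mod_ssl is exporting the variable at all - it
requires `SSLOptions +StdEnvVars` in the server config -
a CGI script should see it in its environment; in PHP
that would be $_SERVER['SSL_SESSION_ID'].  A Python CGI
equivalent, purely for illustration:)

```python
import os

def ssl_session_id():
    # mod_ssl exports SSL_* variables into the CGI environment only
    # when the server config enables them (SSLOptions +StdEnvVars);
    # absent that, this returns None.
    return os.environ.get("SSL_SESSION_ID")
```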

PS:  Steve, thanks for the aviso!  Very interesting
attack!

-- 
iang



Mozilla tool to self-verify HTTPS site

2003-06-24 Thread Ian Grigg
http://sslbar.metropipe.net/

Fantastic news:  coders are starting to work
on the failed security model of secure browsing
and improve it where it matters, in the browser.

This plugin for Mozilla shows the SSL certificate's
fingerprint on the web browser's toolbar.

It's a small step for the user, but a giant leap
for userland security.  It means that someone is
thinking about solving the hacks against secure
browsing.  Caching and distributing techniques
for certificates can't be that far off...
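A fingerprint of this kind is just a hash over the
certificate's DER bytes.  Which hash SSLbar itself uses
I haven't checked, so this sketch (Python, for
illustration) takes the algorithm as a parameter:

```python
import hashlib
import ssl

def fingerprint(der_bytes, algo="sha1"):
    """Format a certificate digest the way browser toolbars show it."""
    digest = hashlib.new(algo, der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def server_fingerprint(host, port=443, algo="sha1"):
    """Fetch a server's certificate and compute its fingerprint."""
    pem = ssl.get_server_certificate((host, port))
    return fingerprint(ssl.PEM_cert_to_DER_cert(pem), algo)
```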

-- 
iang



Re: Mozilla tool to self-verify HTTPS site

2003-06-24 Thread Ian Grigg
[EMAIL PROTECTED] wrote:

 How many users can remember MD5 checksums??? If they were rendered into
 something pronounceable via S/Key like dictionaries it might be more
 useful...

You forgot this bit:

 It's a small step for the user, but a giant leap
 for userland security.  It means that someone is
 thinking about solving the hacks against secure
 browsing.  Caching and distributing techniques
 for certificates can't be that far off...
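(For what it's worth, the pronounceable rendering is
easy enough to sketch.  This toy uses a stand-in word
list rather than the real 2048-word S/Key dictionary of
RFC 2289, but the mechanics - carving the digest into
11-bit indices - are the same:)

```python
import hashlib

# A toy word list stands in for the 2048-word S/Key dictionary;
# the real scheme maps 64 bits to six words via 11-bit indices.
WORDS = [f"w{i:04d}" for i in range(2048)]

def pronounceable(data, n_words=6):
    """Render the first 11*n_words bits of an MD5 digest as words."""
    digest = hashlib.md5(data).digest()
    bits = int.from_bytes(digest, "big")
    total = len(digest) * 8
    out = []
    for k in range(n_words):
        shift = total - 11 * (k + 1)
        out.append(WORDS[(bits >> shift) & 0x7FF])
    return " ".join(out)
```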

-- 
iang



Re: New toy: SSLbar

2003-06-25 Thread Ian Grigg
Steven M. Bellovin wrote:

 Please don't take this personally...

None taken here, and I doubt that the author
of the tool (who has just joined this list
it seems) would take any!

 From a security point of view, why should anyone download any plug-in
 from an unknown party?  In this very specific case, why should someone
 download a plug-in that by its own description is playing around in
 the crypto arena.  How do we know it's not going to steal keys?  Is the
 Mozilla API strong enough that it can't possibly do that?  Is it
 implemented well enough that we trust it?  (I see that in this case,
 the guts of the plug-in are in Javascript.  Given how often Javascript
 has played a starring role in assorted security flaws, that doesn't
 reassure me.  But I do appreciate open source.)

It's an issue.  I think the answer requires the same
analysis as always:  someone would download this
plug-in if the result were likely more security in
the overall browsing experience.

So, the question then arises, could this plug-in
give more security than the exposure to an
untrustworthy party warrants?



On the one hand, the plug-in isn't likely to be
terribly effective, as is fairly obvious, as has
been pointed out.



OTOH, one might be downloading a trojan.  Well,
that's possible.  Is it likely?  I don't think
so, and here's why:

If this were an attack, it would be unlikely to
be effective.  There is a known site (albeit
with a masked identity) with a webpage, etc.
So there are tracks, and angry emails to the
owner of the site will incur a cost for the
attacker.

Few people use keys, making this an obscure
approach.  I suppose if the target really *was*
keys, then the challenge would be to target
those key users ... against which, the users
of keys are likely to be more security conscious
than other victims.

If the person was indeed a crook, why would he
use open source?  And, even though Javascript
may have a poor security record, that's to do
with bugs in its model and implementation, and
with potential security breaches, not with crooks
actually inserting code to steal value.  I.e.,
theoretical breaches of security, not actual
breaches of security.

Also, to impugn the plug-in arrangement is to
impugn all plug-ins, and to impugn the download
from an unknown is to impugn all downloads from
unknowns.  What is the risk of downloads being
trojaned, and the risk of plug-ins being aggressive?

These are unknowable risks, a priori, so we
have to resort to statistics and cost-benefit
to work out the probability.  And here,
statistics is on our side.  In practice, an
attack is rarely initiated via a download,
or via a plug-in.

I.e., "download this fantastic tool, which
just so annoyingly includes a trojan from the
person who manages the site" doesn't seem to
occur as a real attack with any frequency.

(Partly because it takes a long time to find
the right victim, and partly because it
leaves the attacker static and vulnerable,
I'm guessing.  In comparison, it seems that
attackers get much better results by using
targeted mass-mailing tools to deliver
their EMD.)



So on balance, I won't download the tool,
because its effectiveness is low.  But so
is its risk.  Other people might come to
other conclusions, but I personally don't
buy the argument that just because I don't
know the site, it shouldn't be touched.

Life is full of risks.  Only by taking
risks do we understand what works and what
doesn't.  Real-life security is like that:
in practice, we know that not everything can
be covered, as it is simply too expensive
to be 100% safe.  So we have to take some
risks in some areas.



EMD - emails of mass destruction?

-- 
iang



Re: Mozilla tool to self-verify HTTPS site

2003-07-02 Thread Ian Grigg
Marc Branchaud wrote:
 
 Ian Grigg wrote:
 
  Tying the certificate into the core crypto protocol seems to be a
  poor design choice;  outsourcing any certification to a higher layer
  seems to work much better out in the field.
 
 I'll reserve judgement about the significance of SSLBar, but I couldn't
 agree more with the above point.  The only way to use non-X.509 certs
 with TLS 1.0 is by rather clunkily extending the ciphersuites to also
 identify some kind of certificate type.

I'm currently reading Eric Rescorla's SSL and TLS book,
and a significant proportion of the problems within
the SSL/TLS protocol seem to come from the assumption
that the cert should be supplied *within* the core
protocol, and not outsourced to a higher layer.

I.e., if SSL/TLS was re-written around this simple
separation into two separate sub-protocols:

1. get the/a/all certificate(s)
2. use the key within

a lot of the complexity would disappear.

(I understand the argument that SSL/TLS does not
require a cert, but to all intents and purposes,
everything and everyone assumes it, AFAICS.  As a
practical issue, as it affects the implementations
out there, I'm not sure it makes sense to even
consider SSL/TLS without certs.)

It seems to me to be a developing principle.

We are all agreed that the delivery of (any/the)
cert is a very hard problem.  We are mostly agreed
that it is an unsolved problem.

So, as a corollary to the hard problem, the key
to use as the starting point for any crypto protocol
should be provided to it, not bootstrapped within.

(I wonder if there is a pithy way of stating this
principle?  Good crypto divorces bad PKIs?  Cost
effective crypto starts with an assumed key?)

 IMO, this fact has significantly contributed to the lack of adoption of
 PGP, SPKI, and alternative PKIs on the Internet.

(I'm not quite sure what the issue here is with
PGP ... it works fine without any certification,
and it works slightly better when 3rd party sigs
(certs?) are added by the user?  Although I
grant you that the key structure is ... costly
to code, to the point of being impenetrable to
new implementations.)

 TLS's new extension mechanism can help address this (see
 draft-ietf-tls-openpgp-keys), but it'll be a while before extension
 support is common.

Yes, my company's protocol (SOX) extends the
certificate layer by using OpenPGP.  Configuring
the issuance of a new monetary contract is a bit
of a bear, in no small part due to the chain of
signatures in the OpenPGP PKI that we use.  But,
it works, and it doesn't feel as though the big
costly PKI process built out of OpenPGP slows
down adoption any.

[ We tried X.509 for a while, but it was a
mistake;  it lacked cleartext signing (minor
point, we hacked our own) and its fixed PKI
didn't map to financial relationships, which
are based on WoT, not centralised permissions. ]

SSH chooses the simplest solution - opportunistic
crypto - creating the certs on demand and caching
them for future checking.  That is the best success
formula I have seen so far.
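That formula reduces to a few lines.  A sketch in
Python, with a dict standing in for SSH's known_hosts
file:

```python
# Sketch of the SSH success formula: accept and cache an unknown key
# on first contact, then complain loudly if the cached key ever changes.
known_hosts = {}  # host -> key fingerprint

def check_host_key(host, fingerprint):
    cached = known_hosts.get(host)
    if cached is None:
        known_hosts[host] = fingerprint   # opportunistic: trust on first use
        return "new"
    if cached == fingerprint:
        return "ok"
    return "MISMATCH"                     # possible man-in-the-middle
```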

-- 
iang



Re: Fwd: [IP] A Simpler, More Personal Key to Protect OnlineMessages

2003-07-09 Thread Ian Grigg
Tim Dierks wrote:
...
 the fact that the private key is, in essence, escrowed by the trusted
 third party, causes me to believe that this system doesn't fill an
 important unmet need.

I'm not sure that's the case!

There are some markets out there where there are some
contradictory rules.  By this I mean, all messages must
be private, and all messages must be readable.

Now, the challenges that these markets must meet point
them in the direction of having a central server doing
key escrow.  But, the central server is not allowed to
escrow the messages or be able to read the messages.

A further challenge is that these markets are full of
leakages, and so what is needed is a way of taking the
crypto capability away from users.

This solution seems to do this latter part, in that it
achieves the contradictory requirements of making every
message unreadable, but crackable, and it - in theory -
does not give users any ability to do their own crypto
and thus bypass the system.



A (purely hypothetical) example, to clarify what this
market looks like:  Imagine the NSA had to outsource
its encrypted comms.  They want all messages to be secret
because .. that's kind of their mission.  But, they are
worried about moles in the organisation, so they want
to be able to open up the whole shebang somehow and go
trolling for data.

So how do we rationalise all this?  Simple - the people
who use the system are not the people who buy the system.
The market for this system is not users but corporates
with special needs.  In fact if we look at the website,
it's oriented to selling into 4 markets:  corporates,
financial, health, and government.  If we ignore the
first as a catchall phrase, the remaining three all have
special needs when it comes to privacy.  And those needs
aren't so much to do with the user as with the organisation.

It was for these markets that companies like PGP Inc put
in their fabled alternate decryption key, and companies
like Hushmail sell corporate packages.

-- 
iang



replay integrity

2003-07-09 Thread Ian Grigg
Eric Rescorla wrote:

 You keep harping on certs, but that's fundamentally not relevant to
 the point I was trying to make,

OK!

 which is whether or not one provides
 proper message integrity and anti-replay. As far as I'm concerned,
 there are almost no situations in which not providing those services
 is appropriate. That kind of infrastructure is already built into
 SSL and shouldn't be reinvented.

Welcome to the applications world!

Integrity:  Financial protocols that use crypto
(as opposed to ones abused by crypto) generally
include signed messages.  The signature provides
for its own integrity, as well as a few other
things.

Replay:  One of the commonest problems in HTTPS
sites is replay failure.  The solution is well
known out in the real world - you have to have
replay prevention at the higher layers.

(Credit card processors have replay prevention
too!)

So, some protocols don't need replay prevention
from lower layers because they have sufficient
checks built in.  This would apply to any protocols
that have financial significance;  in general, no
protocol should be without its own unique Ids.
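To make the higher-layers point concrete, here is a
minimal sketch (in Python, assuming nothing about any
particular payment protocol) of unique-Id replay
prevention:

```python
# Higher-layer replay prevention: every message carries its own
# unique id, and the receiver refuses any id it has seen before.
seen_ids = set()

def accept(message_id, payload, handler):
    if message_id in seen_ids:
        return False          # replay: drop it silently
    seen_ids.add(message_id)
    handler(payload)
    return True
```

A replayed message is then harmless regardless of what
the transport below did or didn't guarantee.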

I wouldn't say that this is a good reason to take
these features out of SSL.  But assuming they are
needed is a cautious assumption, and assuming
that SSL meets the needs for replay & integrity
makes even less sense when we are dealing with a
serious top-to-bottom security model.

It's simply the case that a serious financial
protocol would have to have its own replay &
integrity, because its threat model and failure
model is so much broader than SSL's.  For example,
a serious payments scenario works end-to-end,
and assumes that nodes on both end-points
can be compromised and/or faulty.  And, it's not
only just faults, many higher layers actively
replay as part of the protocol.

SSL just doesn't address the security needs of
protocols as well as all that.  Where I've seen
it used, the core need for it is privacy of the
data stream, not anything else.

(As a sort of oxymoron, a payments or similar
protocol that didn't have its own replay & integrity
would not work.  Ideally, a good test of a payments
protocol is to see if it would work over unprotected
UDP or email.  Some do and some don't.)

-- 
iang



Re: Announcing httpsy://, a YURL scheme

2003-07-16 Thread Ian Grigg
[EMAIL PROTECTED] wrote:

 A YURL aware search engine may find multiple independent references to a
 YURL, thus giving you parallel reporting channels, and increasing trust.
 Of course, this method differs from the YURL method for trust. The
 parallel channel method assigns a trust value to a site by querying the
 YURL aware search engine.

That's an extraordinarily good idea!  It reminds
me of the technique for determining banks' SWIFT
codes.  It seems that the banks often don't really
know themselves, so if you do a google search on
the bank name and the word 'SWIFT' you will find
lots of merchants that already quote it on the net!

Now, one thing that could be done against such a
situation is to poison the search engine with false
URLs in advance of some mailing.  This is relatively
easy, although it will result in a lot of trails which
might give indicators to the perp, so I'd count that
as an expensive technique, and thus, the utility
of the URL searching still remains high.

YURLs are meant to be cached by the browser;  I found
that somewhere in the documents but do not recall
where.  The same obviously goes for Simon Josefsson's
crypto-URLs, as mentioned by Trevor Perrin.

This is
the really neat part, in that when we start to think
of server authentication as a volume & correlation
problem - as expounded on by Mark Miller - rather
than a one-supreme-quality problem, not only do we
achieve sufficient security for most purposes, we
do it with no more than the free net resources.

And, it has the additional benefits of matching
real life, and returning our Internet back to a no
permission needed society.

-- 
iang



Re: invoicing with PKI

2003-09-03 Thread Ian Grigg
Peter Gutmann wrote:
 
 Hadmut Danisch [EMAIL PROTECTED] writes:
 
 There was an interesting speech held on the Usenix conference by Eric
 Rescorla (http://www.rtfm.com/TooSecure-usenix.pdf, unfortunately I did not
 have the time to visit the conference) about cryptographic (real world)
 protocols and why they failed to improve security.
 
 It was definitely a must hear talk.  If you haven't at least read the slides
 (were the invited talks recorded this year?  Any MPEGs available?), do so now.
 I'll wait here.

I read them twice the other day.  Recordings
would be nice.

 [Pause]
 
 The main point he made was that designers are resorting to fixing mostly
 irrelevant theoretical problems in protocols because they've run out of other
 things to do, while ignoring addressing how to make the stuff they're building
 usable, or do what customers want.  My favourite example of this (coming from
 the PKI world, not in the talk) is an RFC on adding animations and theme music
 to certificates, because that's obviously what's holding PKI deployment back.

:-) Was this an April 1st RFC?  Or a stealth DRM
effort?

 From the logfiles I've visited I'd estimate that more than 97% of SMTP relays
 do not use TLS at all, not even the opportunistic mode without PKI.
 
 I did a talk last year at Usenix Security where I said that all SSL really
 needed was anon-DH, because in most deployments that's how certificates are
 being used (self-signed, expired, snake-oil CAs, even Verisign's handed-out-
 like-confetti certs).

It's worth looking at these figures from 1st Sep 2003:

  Description          Count

  Valid                35709
  Self Signed           9769
  Unknown Signer       27507
  Cert-Host Mismatch   40276
  Expired              54578

http://www.securityspace.com/s_survey/sdata/200308/certca.html

I used the total in my calculation to get 1.24% server
penetration, but the true story is way worse - only
about a quarter are supposedly PKI-valid.  The rest
are deviant in some form.
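The arithmetic behind "only a quarter", using the
September 2003 SecuritySpace counts quoted above:

```python
# Category labels and counts are from the survey page cited above.
counts = {
    "Valid": 35709,
    "Self Signed": 9769,
    "Unknown Signer": 27507,
    "Cert-Host Mismatch": 40276,
    "Expired": 54578,
}
total = sum(counts.values())           # 167,839 certs surveyed
valid_share = counts["Valid"] / total  # about 0.21, roughly a quarter
```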

 It's no less secure than what's being done now, and
 since you can make it completely invisible to the user at least it'll get
 used.  If all new MTA releases automatically generated a self-signed cert and
 enabled STARTTLS, we'd see opportunistic email encryption adopted at a rate
 that tracks MTA software upgrades.

I've thought about this a lot, and I've come to the
conclusion that trying to bootstrap using ADH is not
worth the effort.  I think the best thing is if the
web servers were to automatically generate self-signed
certs on install, and present them by default.

Then, at least, we offer the opportunity for browsers
to do SSH-style time-trust analysis.

The forces of crypto-conservatism are so strong that
I suspect we only get one shot at saving the HTTPS
protocol.  Trying to get browsers and servers to agree
to like ADH seems too much of a challenge.  Using self-
signed certs seems to promise more bang for buck.
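Generating such a cert at install time amounts to a
single command.  A sketch (the paths, key size,
lifetime and CN are arbitrary choices for illustration,
not anything any web server actually ships):

```shell
# Generate a self-signed certificate and key in one step, no CA involved.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/selfsigned-key.pem \
    -out /tmp/selfsigned-cert.pem \
    -days 365 -subj "/CN=$(hostname)"
```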

For new applications, using ADH is definitely a good
way to go.

iang



SSL's threat model

2003-09-06 Thread Ian Grigg
Does anyone have any pointers to the SSL threat model?

I have Eric Rescorla's book and slides talking about the
Internet threat model.

The TLS RFC (http://www.faqs.org/rfcs/rfc2246.html) says
nothing about threat models, as far as I can find.

iang



Re: Is cryptography where security took the wrong branch?

2003-09-07 Thread Ian Grigg
Eric Rescorla wrote:
 
 Ian Grigg [EMAIL PROTECTED] writes:
 
  Eric Rescorla wrote:
  ...
The other thing to be aware of is that ecommerce itself
is being stinted badly by the server and browser limits.
There's little doubt that because servers and browsers
made poorly contrived decisions on certificates, they
increased the overall risks to the net by reducing the
deployment, and probably reduced the revenue flow for
certificate providers by a factor of 2-5.
   I doubt that. Do you have any data to support this claim?
 
  Sure.  SSH.
 That's not data, it's an anecdote--and not a very supportive one
 at that. As far as I know, there isn't actually more total
 SSH deployment than SSL, so you've got to do some kind of
 adjustment for the total potential size of the market, which
 is a notoriously tricky calculation.

It's more than an anecdote.  If I quote from your
slides, SSH has achieved an almost total domination
of where it can be deployed.  Wherever there are Unix
servers, we suspect the domination of SSH.

(I haven't got a good figure on that.  Some stats
have been done by Niels Provos and Peter Honeyman in
a paper, but I can't interpret the results sufficiently
to show SSH server distribution, nor penetration [1].
It's now a hot topic, so I believe the figures will
become available in time.)

 Do you have any actual
 data or did you just pull 2-5 out of the air?


There is a middle ground between data and the air,
which is analysis.  I've been meaning to write it
up, but I'm working on the SSL threat model right
now.


  It's about take up models.  HTTPS'
  model of take-up is almost deliberately designed
  to reduce take-up.  It uses a double interlocking
  enforcement on purchase of a certificate.  Because
  both the browser and server insist on the cert
  being correct and CA-signed and present, it places
  a barrier of size X in front of users.
 I don't know where you got the idea that the server insists on cert
 correctness. Neither ApacheSSL nor mod_SSL does.


I take the following approach here.  I think that
for Apache to promote the interests of the users,
it should configure automatically to run SSL, and
automatically generate a self-signed cert on install
(unless there is one there already).  I admit I
haven't looked to see whether that is reasonable
or possible, but I gather it does neither of those
things, and it certainly doesn't make doing self-
signed certs so easy.

Oh, and Apache does lead one astray by calling the
self-signed cert a snake-oil cert.  This misleads
the users into thinking there is something wrong
with a self-signed cert.  I'm not sure how easy
that is to correct.


  Instead, if there were two barriers, each of half-X,
  being the setup of the SSL server (a properly set
  up browser would have no barrier to using crypto),
  and the upgrade to a CA-signed cert, then many more
  users would clear the hurdles, one after the other.
 Maybe, maybe not. You've never heard of price inelasticity?
 
 The fact of the matter is that we have no real idea how
 elastic the demand for certs is, and we won't until someone
 does some real econometrics on the topic. Unless you've
 done that, you're just speculating.

The reason we have no idea how elastic the demand
for certs is, is because a) we've never tried it,
and b) we've not looked at the data that exists.

(Yes, those reasons are contradictory.  That's part
of the world that we want to change.)

It's nothing to do with whether the ivory tower
brigade does some econowhatsists on their models
and then speculates as to what this all means.

Have a look at the data that is available [2].  You
will see elasticity.  Have a look at the history
of a little company called Thawte.  There, you will
see how elasticity contributed to several hundred
millions of buyout money.

Mark S prays to the god of elasticity every night.

Check out the Utah digsig model.  If you can see
a better proof of cert elasticity, I'd like to know
about it.

iang

[1] http://www.citi.umich.edu/u/provos/ssh/
http://www.citi.umich.edu/techreports/reports/citi-tr-01-13.pdf

[2] http://www.securityspace.com/
http://www.securityspace.com/s_survey/sdata/200308/certca.html



Re: Is cryptography where security took the wrong branch?

2003-09-07 Thread Ian Grigg
Ed,

I've left your entire email here, because it needs to
be re-read several times.  Understanding it is key to
developing protocols for security.

Ed Gerck wrote:
 
 Arguments such as we don't want to reduce the fraud level because
 it would cost more to reduce the fraud than the fraud costs are just a
 marketing way to say that a fraud has become a sale. Because fraud
 is a hemorrhage that adds up, while efforts to fix it -- if done correctly
 -- are mostly an up front cost that is incurred only once.  So, to accept
 fraud debits is to accept that there is also a credit that continuously
 compensates the debit. Which credit ultimately flows from the customer
 -- just like in car theft.

What you are talking about there is a misalignment
of interests.  That is, the car manufacturer has no
incentive to reduce the theft (by better locks,
e.g.) if each theft results in a replacement sale.

Conventionally, this is dealt with by another interested
party, the insurer.  He arranges for the owner to have
more incentive to look after her car.  He also publishes
ratings and costs for different cars.  Eventually, the
car maker works out that there is a demand for a car
that doesn't incur so many follow-on costs for the owner.

This is what we call a free market solution to a
problem.  The alternative would be some form of
intervention into the marketplace, by some well-
meaning authority.

The problem with the intervention is that it generally
fails to arise and align according to the underlying
problem.  That is, the authority is no such thing, and
puts in place some crock according to his own interests.

E.g., ordering all car manufacturers to fit NIST
standard locks (as lobbied for by NIST-standard
lock makers).  Or giving every car owner a free
steering lock.

And, that's more or less what we have with HTTPS.  A
security decision by the authority - the early designers
- that rides on a specious logical chain with no bearing
on the marketplace, and the result being a double block
against deployment.

(It's interesting to study these twin lock-ins, where
two parties are dependent on the other for their
mutual protocol.  For those interested, the longest
running commercial double cartel is about to come
crashing down:  DeBeers is now threatened by the
advent of gem quality stones for throwaway prices,
its grip on the mines and retailers won't last out
the decade.  Understanding how DeBeers created its
twin interlocking cartels is perhaps the best single
path to understanding how cartels work.)

 Some 10 years ago I was officially discussing a national
 security system to help prevent car theft. A lawyer representing
 a large car manufacturer told me that a car stolen is a car sold
 -- and that's why they did not have much incentive to reduce
 car theft. Having the car stolen was an acceptable risk for
 the consumer and a sure revenue for the manufacturer. In fact, a
 car stolen will need replacement that will be provided by insurance
 or by the customer working again to buy another car.  While the
 stolen car continues to generate revenue for the manufacturer in
 service and parts.
 
 The acceptable risk concept is a euphemism for that business
 model that shifts the burden of fraud to the customer, and eventually
 penalizes us all with its costs.
 
 Today, IT security hears the same argument over and over again.
 For example, the dirty little secret of the credit card industry is that
 they are very happy with +10% of credit card fraud over the Internet.
 In fact, if they would reduce fraud to zero today, their revenue
 would decrease as well as their profits.

Correct!  You've revealed it.  IMHO, not understanding
that fact has been at the root cause of more crypto biz
failures than almost any other issue.  My seat of the
pants view is that over a billion was lost in the late
eighties on payments ventures alone (I worked for a
project that lost about 250 million before it gave up
and let itself be swallowed up...).

In reality, the finance industry cares little about
reducing fraud.  This is easy to show, as you've done.

 There is really no incentive to reduce fraud. On the contrary, keeping
 the status quo is just fine.
 
 This is so mostly because of a slanted use of insurance. Up to a certain
 level,  which is well within the operational boundaries, a fraudulent
 transaction does not go unpaid through VISA,  American Express or
 Mastercard servers.  The transaction is fully paid, with its insurance cost
 paid by the merchant and, ultimately, by the customer.
 
 Thus, the credit card industry has successfully turned fraud into
 a sale.  This is the same attitude reported to me by that car manufacturer
 representative who said: A car stolen is a car sold.
 
 The important lesson here is that whenever we see continued fraud, we can
 be certain the defrauded party is profiting from it, because no company
 will accept a continued loss without doing anything to reduce it.

It's perverse, because as you say, the 

Re: Code breakers crack GSM cellphone encryption

2003-09-08 Thread Ian Grigg
Trei, Peter wrote:

 Why the heck would a government agency have to break the GSM encryption
 at all?

Once upon a time, it used to be the favourite
sport of spy agencies to listen in on the
activities of other countries.  In that case,
access to the radio waves was much more juicy
than access to the POTS.

I've not heard anything explicitly on this,
but I'd expect satellites to be able to pick
up GSM calls.  (One of the things I have heard
is that the Chinese sold fibre networking to
Iraq, and the Russians sold special phones
with better crypto.  Don't know how true any
of that is.)

Also, the patent issue will work very well in
countries where there are laws against hacking
and cracking and so forth.  Rather than have
such laws subject to challenge in the supreme
court, a perp can be hit with both patent
infringement and illegal digital entry.  The
chances that anyone can defeat both of those
are slim.

(OTOH, I wonder if it is possible to patent or
licence something that depends on an illegal
act?)


iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


x9.59

2003-09-09 Thread Ian Grigg
Anne  Lynn Wheeler wrote:
 
  The result is X9.59 which addresses all the major
 exploits at both POS as well as internet (and not just credit, but debit,
 stored-value, ACH, etc ... as well).
 http://www.garlic.com/~lynn/index.html#x959


Lynn,

Whatever happened to x9.59?

Also, is there a single short summary description of what
x9.59 does?  I don't mean a bucket full of links to plough
through, I mean some sort of technical overview that wasn't
approved by the marketing department.

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: PGP makes email encryption easier

2003-09-16 Thread Ian Grigg
Eric Murray wrote:

  For the record, AFAIK, this approach was invented and
  deployed by Dr. Ian Brown as his undergraduate thesis,
  back in 1996 or so.
 
 Not to take anything away from Dr Brown, but I wrote something very
 similar to what PGP's selling for internal use at SUN in 1995 (to secure
 communications between some eastern european offices).   I'd thought
 about it a couple years before that as I needed something to secure
 communications between the company I worked for and their law firm,
 and teaching executives and chip designers to use PGP wasn't working
 very well.

Thanks for the correction!  Was this project ever released
or documented?  I never heard of it before.

 I don't beleive that I was the first to think of it or the first to
 do it; it's a pretty obvious solution.

:-)  Many inventions are obvious once well understood.

Although I would agree that such an invention should not
deserve to be patented.  Whether that's because it is too
obvious, or too useful, depends on one's point of view...

  It's a good approach.  It trades some sysadmin complexity
  for the key admin complexity, but it also raises some
  interesting challenges for deciding when to encrypt,
  when not to encrypt, and also, when to block outgoing
  mail that should be encrypted...
 
 Yep.
 
 Eric

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


The Code Book - in CD form

2003-09-21 Thread Ian Grigg
Has anyone reviewed Simon Singh's CD version
of The Code Book ?


=
http://www.simonsingh.net/The_CDROM.html

After 12 months of intense development, the interactive
CD-ROM version of The Code Book is now available. I might
be biased, but I think that it is brilliant. Don't be
confused by the ridiculously low price, because this
CD-ROM contains tons of fascinating and dynamic material,
including:
 
  1. Encryption tools,
  2. Code breaking tools,
  3. Dozens of video clips,
  4. Coded messages to crack,
  5. Material for teachers, e.g., worksheets,
  6. A realistic, virtual Enigma cipher machine,
  7. A beginner's cryptography tutorial,
  8. A history of codes from 1000BC to 2000AD,
  9. Material for junior codebreakers,
10. Interviews with Whit Diffie and Clifford Cocks, 
11. Sections on public key crypto  RSA,
12. An animated section on quantum cryptography.
 
 
The CD-ROM is ideal for teenagers, parents who want to
encourage an interest in science and mathematics in their
children, grown-ups interested in the history of cryptography,
amateur codebreakers and anybody who wants to know about
encryption in the Information Age.
snip
=

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Who is this Mallory guy anyway?

2003-09-22 Thread Ian Grigg
 someone wrote:
 
 Hiya.
 
 Dumb question. Why is the bad guy called Mallory in
 this thread? I always thought that traditionally the
 two correspondents were called Alice and Bob and that
 the bad guy was called Eve. (As in, short for eavesdropper?).
 Intercepting the bits and sending them is precisely
 the sort of thing that Eve does all the time.


Mallory is the Man-in-the-Middle.  He is the one
that inserts himself into a connection, in an
active attack, and sends packets to both Alice
and Bob.  He can send one thing to Alice, and
send another thing to Bob.  In this way, he
can insert himself into a Diffie-Hellman key
exchange, and send completely separate numbers
to both parties.

Eve is indeed the eavesdropper.  She can only
listen.
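The active attack can be sketched with toy numbers (an
insecure, tiny prime chosen purely for illustration; real
DH uses large primes):

```python
# Mallory's attack on unauthenticated Diffie-Hellman: he runs
# a separate exchange with each side, so Alice and Bob each
# derive a key shared with Mallory, not with each other.
p, g = 23, 5                 # toy public parameters

a, b, m = 6, 15, 13          # secrets of Alice, Bob, Mallory
A = pow(g, a, p)             # Alice sends A ...
B = pow(g, b, p)             # ... Bob sends B ...
M = pow(g, m, p)             # ... Mallory substitutes M to both

k_alice = pow(M, a, p)       # Alice's key, shared with Mallory
k_bob = pow(M, b, p)         # Bob's key, also shared with Mallory

assert k_alice == pow(A, m, p)   # Mallory can compute both keys
assert k_bob == pow(B, m, p)
assert k_alice != k_bob          # Alice and Bob share no key at all
```

Eve, by contrast, sees only A, B and M on the wire and can
compute neither key.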

(As a further point, there are other personas,
being Trent, the trusted third party.  Also,
Victor, a verifier.  In financial cryptography
we use Ivan as an Issuer and sometimes Matilda
as a merchant.  Carol and Dave can assist
Alice and Bob in more complex protocols.)


 I would have said Mallory is acting as Eve, not
 Eve is acting as Mallory. But then, I'm surprisingly
 ignorant about all sorts of obvious things, Maybe
 you could clear this up for me?

Well, that's the question - is Eve allowed to
forward packets, in the act of listening, or
is that Mallory's job?  I don't know.

Given the silence on the issue, and the differing
usages, I'd say we've reached an uncertainty in
the definition.

The question revolves around whether Eve's name
derives from her eavesdropping, or whether she
is passive, and can only do stuff that can be
done by observation.  If she is allowed to resend
because she is eavesdropping then that's ok.  But,
if she must only passively listen - measure - and
cannot resend, then what this Quantum stuff does
is eliminate her from consideration because she
will always give herself away.  Hence, only
Mallory, the MITM, can do the job.  In effect,
it is very close to Anon-DH - in that Eve cannot
crack the crypto, but Mallory can.

It's a minor point, it doesn't really change the
crypto at all, but it can evoke different images
in different people if they don't agree on which
it is.  So one has to be careful, as the essence
of naming is, after all, efficient communication.

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: why are CAs charging so much for certs anyway? (Re: End of the line for Ireland's dotcom star)

2003-09-24 Thread Ian Grigg
Adam Back wrote:

 You'd have thought there would be plenty of scope for certs to be sold
 for a couple of $ / year.

Excuse me?  Why are they being sold per year in the
first place?

It's not as if there are any root servers to run!

Outrageous!

:-)

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Reliance on Microsoft called risk to U.S. security

2003-09-25 Thread Ian Grigg
R. A. Hettinga wrote:
 
 http://channels.netscape.com/ns/news/story.jsp?id=200309241951000228064dt=20030924195100w=RTRcoview=
 
 Reliance on Microsoft called risk to U.S. security

 But the security experts said the issue of computer security
 had more to do with the ubiquity of Microsoft's software than
 any flaws in the software.

 I wouldn't put all of the blame on Microsoft, Schneier said,
 the problem is the monoculture.

On the face of it, this is being too kind and not
striking at the core of Microsoft's insecure OS.  For
example, viruses are almost totally a Microsoft game,
simply because most other systems aren't that vulnerable.

But, it is also possible to secure M$ OSs, so maybe there
is some merit to not putting all the blame on Microsoft.

Either way, it can be tested.  There is one market where
M$ has not dominated, and that is the server platform.

I haven't looked for a while, but last I looked, the
#1,2,3 players were Linux, Microsoft, FreeBSD, and only
a percentage point or two separated them.  (I'm unsure
of the relative orders.  And this relates to testable
web server platforms, rather than all servers.)

So, in the market for server platform OSs, is there
any view as to which are more secure, and whether that
insecurity can be traced to the OS?  Or external factors
such as a culture of laziness in installing patches, or
derivative vulnerability from being part of the monoculture?

(I raise this as a research question, not expecting any
answers!)

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-28 Thread Ian Grigg
M Taylor wrote:

 Oh, and they fixed their flaws. SSHv1 is not recommended for use at all,
 and most systems use SSHv2 now which is based upon a draft IETF standard.
 SSL went through SSLv1, SSLv2, SSLv3, TLSv1.0, and TLSv1.1 is a draft IETF
 standard.


It is curious, is it not, that there has been no well
written protocol that became successful on its first
attempt?  And, contrariwise, all successful systems
started out with crypto that slept shamefully with
ROT13.


 If Guus Sliepen and Ivo Timmermans are willing to seriously rethink their
 high tolerance for unnecessary weakness, I think tinc 2.0 could end up being
 a secure piece of software. I hope Guus and Ivo circulate their version 2.0
 protocol before they do any coding, so that any remaining flaws can be easily
 fixed in the paper design without changing a single line of code, saving time
 and effort.


This is the best thing written so far.  Even if Guus
and Ivo were not to distribute their designs for 2.0,
I would salute their efforts so far.

It is clear that they have users.  Hoorah! I say.  It
is clear that they have successfully enabled millions
of VPN connections.  There art we happy!  It is fair
to say that through their efforts, many hundreds or
thousands of Linux boxen have escaped becoming part
of the lamented and hacked 43,000.  A pack of blessings
light upon the backs of cryptographers!

The notion that Guus and Ivo have done anything in the
slightest sense, wrong, is mysterious to me.  It defies
explanation.  They built a product.  They protected users.

Now, later on, after *proving* the product meets the
needs of the market place, is the time to clean up the
stopgap home-brewed crypto.  It's not the most urgent
thing.  Only if the product is under sustained and
unavoidable attack by the bad guys - like HTTPS - is
it urgent to get in there and fix the security.

And from the absence of any commentary on actual attacks,
there seems all the time in Mantua to prepare a killer 2.0
crypto layer.

Or am I missing something?

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Monoculture

2003-10-01 Thread Ian Grigg
Matt Blaze wrote:

  I imagine the Plumbers  Electricians Union must have used similar
  arguments to enclose the business to themselves, and keep out unlicensed
  newcomers.  No longer acceptable indeed.  Too much competition boys?
 

 Rich,

 Oh come on.  Are you willfully misinterpreting what I wrote, or
 did you honestly believe that that was my intent?


Sadly, there is a shared culture amongst cryptography   
professionals that presses a certain logical, scientific 
viewpoint.

What is written in these posts (not just the present one)
does derive from that viewpoint and although one can   
quibble about the details, it does look very much from
the outside that there is an informal Cryptographers  
Guild in place [1].

I don't think the jury has reached an opinion on why
the cryptography group looks like a guild as yet,
and it may never do so.  A guild, of course, is either
a group of well-meaning skilled people serving the
community, or a cartel for raising prices, depending
on who is doing the answering.

But, even if a surprise to some, I think it is a fact
that the crypto community looks like, and acts like,
a guild.


 I'd encourage the designer of the protocol who asked the original question
 to learn the field.  Unfortunately, he's going about it a sub-optimally.
 Instead of hoping to design a just protocol and getting others to throw
 darts at it (or bless it), he might have better luck (and learn far
 more) by looking at the recent literature of protocol design and analysis
 and trying to emulate the analysis and design process of other protocols
 when designing his own.  Then when he throws it over the wall to the rest
 of the world, the question would be not is my protocol any good but
 rather are my arguments convincing and sufficient?


This is where maybe the guild and the outside world part
ways.

The guild would like the application builder to learn the
field.  They would like him to read up on all the literature,
the analyses.  To emulate the successes and avoid the
pitfalls of those protocols that went before them.  The  
guild would like the builder to present his protocol and  
hope it be taken seriously.  The guild would like the
builder of applications to reach acceptable standards.

And, the guild would like the builder to take the guild
seriously, in recognition of the large amounts of time
guildmembers invest in their knowledge.



None of that is likely to happen.  The barrier to entry
into serious cryptographic protocol design is too high
for the average builder of new applications [2].  He has,
after all, an application to build.

What *is* going to happen is this:  builders will continue
to ignore the guild.  They will build their application,
and throw any old shonk crypto in there.  Then, they will
deploy their application, in the marketplace, and they will
prove it, in the marketplace.

The builder will find users, again, in the marketplace.   

At some point along this evolution, certain truths will   
become evident:  the app is successful (or not).  The code
is good enough (or not).  People get benefit (or not).
Companies with value start depending on the app (or not).
Security is adequate (or is not).  Someone comes along and
finds some easy breaches (or not).  That embarrasses (or
not).

And, maybe someone nasty comes along and starts doing
damage (or not).

What may not be clear is that the investment of the security
protocol does not earn its effort until well down the track.
And, as an unfortunate but inescapable corollary, if the app
never gets to travel the full distance of its evolutionary
path, then any effort spent up front on high-end security
is wasted.

Crypto is high up-front cost, and long term payoff.  In
such a scenario, standard finance theory would say that
if the project is risky, do not add expensive, heavy duty
crypto in up front.

This tradeoff is so strong that when we look about the
security field, we find very few applications that
succeeded when also built with security in mind from
the initial stages.

And, almost all successful apps had little or bad security
in them up front.  If they needed it later, they required
expensive add-ons.  Later on.

There are no successful systems that started with perfect
crypto, to my knowledge.  There are only perfect protocols
and successful systems.  A successful system can evolve
to enjoy a great crypto protocol, but it would seem that
a great protocol can only spoil the success of a system
in the first instance.



The best we can hope for, therefore, in the initial phase,
is a compromise: maybe the builder can be encouraged to
think about security as an add-on in the future?

Maybe some cheap and nasty crypto can be stuck in there
as a placeholder?  The equivalent of TEA or 40-bit RC4,
but in a protocol sense.

Or, maybe he can encourage a journeyman of the guild to
add the stuff in, on the side, as a fun project.

Maybe, just maybe, someone can create Bob's Simple Crypto
Library.  As a stopgap 

Re: Monoculture

2003-10-01 Thread Ian Grigg
Don Davis wrote:
 
 EKR writes:
  I'm trying to figure out why you want to invent a new authentication
  protocol rather than just going back to the literature ...

 note that customers aren't usually dissatisfied with
 the crypto protocols per se;  they just want the
 protocol's implementation to meet their needs exactly,
 without extra baggage of flexibility, configuration
 complexity, and bulk.  they want their crypto clothing
 to fit well, but what's available off-the-rack is
 a choice between frumpy one-size-fits-all, and a
 difficult sew-your-own kit, complete with pattern,
 fabric, and sewing machine.  so, they often opt for
 tailor-made crypto clothing.


This is also security-minded thinking on the part
of the customer.

Including extra functionality means that they have
to understand it, they have to agree with its choices,
they have to follow the rules in using it, and have
to pay the costs.  If they can ditch the stuff they
don't want, that means they are generally much safer
in making simple statements about the security model
that they have left.

So, coming up with a tailor-made solution has the
security advantage of reducing complexity.  If one
is striving to develop the whole security model on
one's own, without the benefit of formal methods,
that approach is a big advantage.

(None of which goes to say that they won't ditch a
critical component, of course.  I'm just trying to
get into their heads here when they act like this.)


iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Monoculture

2003-10-01 Thread Ian Grigg
Perry E. Metzger wrote:

...

Dumb cryptography kills people.


What's your threat model?  Or, that's your threat
model?

Applying the above threat model as written up in
The Codebreakers to, for example, SSL and its
original credit card needs would seem to be a
mismatch.

On the face of it, that is.  Correct me if I'm
wrong, but I don't recall anyone ever mentioning
that anyone was ever killed over a sniffed credit
card.

And, I'm not sure it is wise to draw threat models
from military and national security history and
apply it to commercial and individual life.

There are scenarios where people may get killed
and there was crypto in the story.  But they are
few and far between [1].  And in general, those
parties gradually find themselves taking the crypto
seriously enough to match their own threat model
to an appropriate security model.

But, for the rest of us, that's not a good threat
model, IMHO.

  Well, the opposition to the guild is one of pro-market
  people who get out there and build applications.
 
 I don't see any truth to that. You can build applications just as
 easily using things like TLS -- and perhaps even more easily. The
 alternatives aren't any simpler or easier, and are almost always
 dangerous.


OK, that's a statement.  What is clear is that,
regardless of the truth of the that statement,
developers time and time again look at the crypto
that is there and conclude that it is too much.

The issue is that the gulf is there, not whether
it is a fair gulf.


 There isn't a guild.

BTW, just to clarify.  The intent of my post was not to
claim that there is a guild.  Just to claim that there
is an environment that is guild-like.

 People just finally realize what is needed in
 order to make critical -- and I do mean critical -- pieces of
 infrastructure safe enough for use.


I find this mysterious.  When I send encrypted email
to my girlfriend with saucy chat in there, is that
what you mean by critical ?  Or perhaps, when I send
a credit card number that is limited to $50 losses, is
verified directly by the merchant, and has a home
delivery address, do you mean, that's critical ?  Or,
if I implement a VPN between my customers and suppliers,
do you mean that this is critical ?

I think not.  For most purposes, I'm looking to reduce
the statistical occurrences of breaches.  I'll take
elimination of breaches if it is free, but in the
absence of a perfect world, for most comms needs, near
enough is fine by me, and anyone that tells me that the
crypto is 100% secure is more than likely selling snake
oil.

For those applications that *are* critical, surely the
people best placed to understand and deal with that
criticality are the people who run the application
themselves?  Surely it's their call as to whether they
take their responsibilities fully, or not?


iang


[1] the human rights activities of http://www.cryptorights.org/
do in fact present a case where people can get killed, and their
safety may depend to a lesser or greater extent on crypto.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: anonymous DH MITM

2003-10-01 Thread Ian Grigg
M Taylor wrote:
 
 Stupid question I'm sure, but does TLS's anonymous DH protect against
 man-in-the-middle attacks? If so, how? I cannot figure out how it would,


Ah, there's the rub.  ADH does not protect against
MITM, as far as I am aware.


 and it would seem TLS would be wide open to abuse without MITM protection so
 I cannot imagine it would be acceptable practice without some form of
 security.

View A:

MITM is extremely rare.  It's quite a valid threat
model to say that MITM is a possibility that won't
need to be defended against, 100%.

E.g.1, SSH which successfully defends most online
Unix servers, by assuming the first contact is a
good contact.  E.g.2, PGP, which bounces MITM
protection up to a higher layer.

Or, what's your threat model?  Why does it include
MITM and how much do you want to pay?

View B:

MITM is a real and valid threat, and should be
considered.  By this motive, ADH is not a recommended
mode in TLS, and is also deprecated.

Ergo, your threat model must include MITM, and you
will pay the cost.

(Presumably this logic is behind the decision by the
TLS RFC writers to deprecate ADH.  Hence, talking
about ADH in TLS is a waste of time, which is why I
have stopped suggesting that ADH be used to secure
browsing, and am concentrating on self-signed certs.
Anybody care to comment from the TLS team as to what
the posture is?)

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: VeriSign tapped to secure Internet voting

2003-10-01 Thread Ian Grigg
Roy M. Silvernail wrote:
 
 On Wednesday 01 October 2003 17:33, R. A. Hettinga forwarded:
 
  VeriSign tapped to secure Internet voting
 
  The solution we are building will enable absentee voters to exercise
  their right to vote, said George Schu, a vice president at VeriSign. The
  sanctity of the vote can't be compromised nor can the integrity of the
  system be compromised--it's security at all levels.
 
 One would wish that were a design constraint.  Sadly, I'm afraid it's just a
 bullet point from the brochure.

It's actually quite cunning.  The reason that this
is going to work is because the voters are service
men  women, and if they attack the system, they'll
get their backsides tanned.  Basically, it should
be relatively easy to put together a secure voting
application under the limitations, control structures
and security infrastructure found within the US military.

It would be a mistake to apply the solution to wider
circumstances, and indeed another mistake to assume
that Verisign had anything to do with any purported
success in solving the voting problem.

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: anonymous DH MITM

2003-10-02 Thread Ian Grigg
Steven M. Bellovin wrote:
 
 In message [EMAIL PROTECTED], Ian Grigg writes:
 M Taylor wrote:
 
 
 MITM is a real and valid threat, and should be
 considered.  By this motive, ADH is not a recommended
 mode in TLS, and is also deprecated.
 
 Ergo, your threat model must include MITM, and you
 will pay the cost.
 
 (Presumably this logic is behind the decision by the
 TLS RFC writers to deprecate ADH.  Hence, talking
 about ADH in TLS is a waste of time, which is why I
 have stopped suggesting that ADH be used to secure
 browsing, and am concentrating on self-signed certs.
 Anybody care to comment from the TLS team as to what
 the posture is?)
 
 What's your threat model?  Self-signed certs are no better than ADH
 against MITM attacks.

I agree.  As a side note, I think it is probably
a good idea for TLS to deprecate ADH, simply
because self-signed certs are more or less
equivalent, and by unifying the protocol around
certificates, it reduces some amount of complexity
without major loss of functionality.

(AFAIK, self-signed certs in every way dominate
ADH in functional terms.)

 Until you understand your threat model, you don't
 have any grounds to make that decision.

I think we are in agreement on that!?

 MITM is certainly possible -- I've seen it happen.  The dsniff package
 includes a MITM tool, as do many other packages; at the Usenix Security
 conference a few years ago, someone intercepted all web-bound traffic
 and displayed a page All your packets are belong to us.


An appropriate security model for a security conference
might be to put a sign up at the door saying

All your assumptions are belong to us

At least that way everyone would be in tune with the
nature of the conference.

Anything that happens at the Usenix Security Conference
is, in my book, ruled out of ones regular, commercially
relevant threat model.  Same goes for demos in a Uni
student lab.

We all know it's possible.  The question is, should we
worry about it?  And, following on from Perry's method,
should we impose our own fears on others?

A threat must occur sufficiently in real use, and incur
sufficient costs in excess of protecting against it, in
order to be included in the threat model on its merits.
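That rule of thumb reduces to back-of-envelope
arithmetic: defend only when expected annual loss
exceeds the cost of the countermeasure.  The numbers
below are invented purely for illustration:

```python
# Include a defence in the threat model only if the expected
# loss from the threat exceeds the cost of protecting against it.
def worth_defending(p_attack_per_year: float,
                    loss_per_attack: float,
                    defence_cost_per_year: float) -> bool:
    expected_loss = p_attack_per_year * loss_per_attack
    return expected_loss > defence_cost_per_year

# A threat seen once per thousand years does not justify the spend:
print(worth_defending(0.001, 50_000, 500))   # -> False
# One seen once per decade does:
print(worth_defending(0.10, 50_000, 500))    # -> True
```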


 Anyone on
 the same LAN (switched or unswitched) could have done the same.  If
 you're not on the same LAN, a routing attack or a DNS attack could
 result in the same thing, and those are happening, too, in the wild.


I know of a couple of instances that were posted
maybe six months back.  What we really need is some
sort of repository of MITM attacks in the wild.
Costs would be very useful too.

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


crypto licence

2003-10-02 Thread Ian Grigg
Guus Sliepen wrote:

  Some advice on licensing wouldn't go amiss either. (GPL? ... LGPL? ...
  something else?)
 
 I'd say LGPL or BSD, without any funny clauses.

With crypto code, we have taken the view that it
should BSD 2 clause.  The reason for this is that
crypto code has enough other baggage, and corporates
are often the prime users.  These users are often
scared very easily by complex licences.

We've tended to vacillate somewhat with applications,
between various Mozilla/Sun community models, but
with the underlying crypto, always as free as possible.

If you wanted to be in the GPL community, then LGPL.
GPL itself will infect any apps, so unless you have
a really great belief that you want those users and
no others, stick to LGPL.

Mind you, those have been our experiences.  It's
quite plausible that we'd have attracted a bigger
developer base simply by going GPL.

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


using SMS challenge/response to secure web sites

2003-10-03 Thread Ian Grigg
Merchants who *really* rely on their web site being
secure are those that take instructions for the
delivery of value over them.  It's a given that they
have to work very hard to secure their websites, and
it is instructive to watch their efforts.

The cutting edge in making web sites secure is occurring
in the gold community and presumably the PayPal community (I
don't really follow the latter).  AFAIK, this has been
the case since the late 90's, before that, some of the
European banks were doing heavy duty stuff with expensive
tokens.

e-gold have a sort of graphical number that displays
and has to be entered in by hand [1].  This works against
bots, but of course, the bot writers have conquered
it somehow.  e-gold are of course the recurrent victim
of the spoofers, and it is not clear why they have not
taken serious steps to protect themselves against
attacks on their system.

eBullion sell an expensive hardware token that I have
heard stops attacks cold, but suffers from poor take
up because of its cost [2].

Goldmoney relies on client certs, which also seems
to be poor in takeup.  Probably more to do with the
clumsiness of them, due to the early uncertain support
in the browser and in the protocol.  Also, goldmoney
has structured themselves to be an unattractive target
for attackers, using governance and marketing techniques,
so I expect them to be the last to experience real tests
of their security.

Another small player called Pecunix allows you to integrate
your PGP key into your account, and confirm your nymity
using PGP signatures.  At least one other player had
decided to try smart cards.

Now a company called NetPay.TV - I have no idea about
them, really - have started a service that sends out
a 6 digit pin over the SMS messaging features of the
GSM network for the user to type in to the website [4].

It's highly innovative and great security to use a
completely different network to communicate with the
user and confirm their nymity.  On the face of it,
it would seem to pretty much knock a hole into the
incessant, boring and mind-bogglingly simple attacks
against the recommended SSL web site approach.
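The shape of such an out-of-band PIN scheme can be
sketched as below.  The send_sms stub, the 5-minute
expiry and the single-use policy are assumptions for
the sketch, not NetPay.TV's actual design:

```python
# Out-of-band challenge: the site sends a short-lived random PIN
# over a second channel (SMS) and accepts a login only if the
# user echoes it back.  A password thief without the phone fails.
import secrets
import time

PENDING = {}  # user -> (pin, expiry timestamp)

def send_sms(phone: str, text: str) -> None:
    print(f"SMS to {phone}: {text}")  # stand-in for a real gateway

def start_login(user: str, phone: str) -> None:
    pin = f"{secrets.randbelow(10**6):06d}"   # random 6-digit PIN
    PENDING[user] = (pin, time.time() + 300)  # valid for 5 minutes
    send_sms(phone, f"Your login PIN is {pin}")

def verify(user: str, pin: str) -> bool:
    stored = PENDING.pop(user, None)          # single use only
    if stored is None:
        return False
    expected, expiry = stored
    return time.time() < expiry and secrets.compare_digest(pin, expected)
```

Note the attack surface that remains: whoever controls
the SMS channel, or the phone itself, controls the login.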

What remains to be seen is if users are prepared to
pay 15c each time for the SMS message.  In Europe,
SMS messaging is the rage, so there won't be much
of a problem there, I suspect.

What's interesting here is that we are seeing the
market for security evolve and bypass the rather
broken model that was invented by Netscape back in
'94 or so.  In the absence of structured, institutional,
or mandated approaches, we now have half a dozen distinct
approaches to web site application security [4].

As each of the programmes are voluntary, we have a
fair and honest market test of the security results [5].

iang



[1]  here's one if it can be seen:
https://www.e-gold.com/acct/gen3.asp?x=3061y=62744C0EB1324BD58D24CA4389877672
Hopefully that doesn't let you into my account!
It's curious, if you change the numbers in the above
URL, you get a similar drawing, but it is wrong...

[2] All companies are .com, unless otherwise noted.

[3] As well as the activity on the gold side, there
are the adventures of PayPal with its pairs of tiny
payments made to users' conventional bank accounts.


[4]  Below is their announcement, for the record.

[5]  I just thought of an attack against NetPay.TV,
but I'll keep quiet so as not to enjoy anyone else's
fun :-)

== 
N E T P A Y. T V N E W S L E T T E R 
October 3rd, 2003 
Sent to NetPay members only, removal instructions at the
end of the message 
==
1. SMS entry - Unique Patent pending entry system -
World first! 
==

http://www.netpay.tv/news.htm 


What is this new form of entry? 


Do you own a mobile phone? Can you receive SMS
messages? Would you like to have your own personal
NetPay security officer contact you when entry to your
account is required? Netpay would like to introduce a world
first in account security. This new feature is so simple, yet
so effective - we believe every member will utilize it. 


If you answered yes to the above, then your SMS capable
mobile is a powerful security device, which will stop any
unforced attempts of entry into your Netpay account. No
need to purchase expensive security token hardware, no
need to be utterly confused on how to use the security
device. If you know how to use your mobile, then you know
how to totally protect your Netpay account from any
possible unlawful entry. 


This new system sends you an automated 6 digit secure
random PIN direct to your phone whenever you try to
access your account. Without this PIN, it is impossible to
login. The PIN arrives direct to your mobile within seconds!
It is as good as having your own personal security officer
calling you whenever someone is trying to access your
account! 


SMS AUTHENTICATED 

threat modelling strategies

2003-10-03 Thread Ian Grigg
Arnold G. Reinhold wrote:
 
 At 11:50 PM -0400 10/1/03, Ian Grigg wrote:
 ...
 A threat must occur sufficiently in real use, and incur
 sufficient costs in excess of protecting against it, in
 order to be included in the threat model on its merits.
 
 
 I think that is an excellent summation of the history-based approach
 to threat modeling. There is another approach, however,
 capability-based threat modeling. What attacks will adversaries whom
 I reasonably expect to encounter mount once the system I am
 developing is deployed? Military planners call this the responsive
 threat.  There are many famous failures of history-based threat
 modeling: tanks vs. cavalry, bombers vs. battleships, vacuum tubes
 vs. electromechanical cipher machines, box cutters vs skyscrapers,
 etc.


A very nice distinction.  The problem with this approach
is that it depends heavily on the notion of reasonably
expect, which is only obvious after the fact.

In each of those cases, it was possible to trace the
development of the attack through history, again,
after the fact [1], [2], [3].

In each case, the history was mostly readable - just
like security today.  In each case, it was very
difficult to predict the future.  The lucky few who
predicted correctly were ignored, and for each of
them, many score more predicted the wrong thing.

Military affairs are fairly typecast.  You are stuck
with the weapons of the past, chasing an infinite
number of possibilities in the future.  In all that,
you have to fight the current war.  Prepare for some
unlikely future at your peril.  If you pick the wrong
one, you'll be accused of being a dreamer, or of
fighting the last war.  Pick a future that actually
happens, and you'll be called a genius.

Crypto systems get pretty much deployed like that
as well.  Reasonable threat models are built up,
a point in the future is aimed for, and the system
gets deployed.  Then, you hope that attacks like
that of Adi Shamir's student don't happen until
the very end of life.  You watch, and you hope.


 In the world of the Internet the time available to put in place
 counteract new threats once they are publicized appears to be
 shrinking rapidly. And we are only seeing one class of adversaries:
 the informal network of hackers. For the most part, they have not
 tried to maximize the damage they cause. There is another class,
 hostile governments and terrorists, who have so far not shown their
 hands but are presumably following developments closely.  I don't
 think we can restrict ourselves to threats already proven in the wild.


The alternative is to prepare for every possible
threat.  That's hard.  It may be that you can
justify this level of expenditure, but for most
ordinary missions, it is simply too expensive.

Mind you, I'm not sure of your first claim there:
can you explain why the security field has not
moved quickly to counter the threat of web site
spoofing?  It's been around for yonks, and it's
resulting in losses.

 Then there is the matter of costs and who pays them. Industry is
 often willing to absorb small costs, or, better, fob them off onto
 consumers. Moderate costs can be insured against or written off as
 extraordinary expenses. Stockholders are shielded from the full
 impact of catastrophic costs by the bankruptcy laws and can sometimes
 even get governments to subsidize such losses.
 
 Perhaps guilds are the right model for cryptography. At their best,
 guilds preserve knowledge and uphold standards that would otherwise
 be ignored by market forces. Anyone out there willing to have open
 heart surgery performed by someone other than a member of the
 surgeon's guild?

Anyone out there willing to send a chat message
that is protected by ROT13?
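For the avoidance of doubt, here is why not - ROT13 is keyless and self-inverse, so anyone can undo it without any secret at all:

```python
import codecs

message = "meet me at the bank at noon"
scrambled = codecs.encode(message, "rot13")
# ROT13 is its own inverse and has no key: applying it again recovers the text.
recovered = codecs.encode(scrambled, "rot13")
assert recovered == message
```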

As we have defined our mission, we can set our
requirements, and build our threat model.

I don't see that the presence of huge costs in
some exotic industries means the rest of us have
to pay for heart surgery every time we want to
send a chat message.  Or face death threats every
time we pay for flowers with a credit card.

But, I grant you that FUD will play a part in
the ongoing evolution of the Cryptologists'
Guild, just as it has in the past.  It's too
powerful a card to ignore, just because it is
unscientific.

YMMV :-)

iang

[1] Although Guderian's development of Blitzkrieg was
kept a secret, as was all German war planning, it wasn't
totally unemulated by the Allies, just not played up
as well as it might have been.  C.f., Patton, who
famously read Rommel's book, and de Gaulle, who
parlayed a presidency out of his success at holding
back the Guderian advances, albeit briefly.

In fact, the French tanks outnumbered, outgunned, and
out-armoured the Germans.  The Versailles Treaty
banned Germany from having *any* armoured vehicles.

That's preparation!

 _Panzer Leader_, General Heinz Guderian, 1952.


[2] box cutters v. skyscrapers - I have a collection of
films that predict

Re: Strong-Enough Pseudonymity as Functional Anonymity

2003-10-04 Thread Ian Grigg
Zooko O'Whielacronx wrote:

 I imagine it might be nice to have Goal B achievable in a certain setting
 where Goal A remains unachievable.

In a strictly theoretical sense, isn't this essentially
the job of the (perfect) TTP?  At least that's the way
many protocols seem to brush away the difficulty.

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Simple SSL/TLS - Some Questions

2003-10-06 Thread Ian Grigg
Jill Ramonsky wrote:

 First, the primary design goal is simple to use.


This is the highest goal of all.  If it is not simple
to use, it misses out on a lot of opportunities.  And
missing out results in less crypto being deployed.

If you have to choose between simple-but-incomplete
and complex-but-complete, then choose the former
every time.  Later on, you - or the programmer using
the system - can always upgrade to the complete
solution that you optimised away, once the need for
it has been demonstrated.

On these lines, I'd suggest something like:



1.  Select one cipher suite only, and reject the
rest.  Select the strongest cipher suite, such as
large RSA keys / Rijndael / SHA-256 or somesuch,
so there are no discussions about security.

1.b.  This basically means do TLS only.  Don't offer
any fallback.  If someone is using your protocol,
they can select it to talk to their own apps.
If someone has to talk to another app using TLS
or SSL, then they almost certainly have to talk
all suites, so they are more or less forced to
use OpenSSL already.  Hence, almost by definition,
you are in the market where people don't want
to negotiate cipher suites; they want the
channel product between their own apps up and
running with no fuss.

2.  Notwithstanding 1. above, leave the hooks in
to add another cipher suite.  You should really
only plan on one or two more.  One for embedded
purposes, for example, where you really push the
envelope of security for slower devices.  And
another because someone pays you to do it :-)

3.  Ditch Anon-DH as a separate suite.  Concentrate
on pure certificate comms.  Never deviate more than
briefly from the true flavour of the tools you are
working with.

4.  Ignore all complex certificate operations such
as CA work, etc.  If someone wants that order of
complexity, then they want OpenSSL, which includes
most of those tools.

5.  To meet the dilemma posed by 3, 4, generate
self-signed certificates on the fly.  Then, the
protocol should bootstrap up and get running
easily.  SSH model.  Anyone who wants more, can
replace the certs with alternately named and
signed certs, as created with more specialised
tools.  Or they can help you to write those
parts.

Good protocols divide into two parts, the second
of which starts trust this key totally.
Ignore the first part for now - that is, how you
got the key.

6.  Pick a good X.509 / ASN.1 tool.  Don't do
that part yourself.  See all the writings on how
hard this is to do.  If you want to join the
guild of people who've done an ASN.1 tool and can
therefore call it easy, do so, but tell your
family you won't be home for Christmas :-)

7.  Produce a complete working channel security
product before going for first release.  Nothing
slows down progress more than a bunch of people
trying to help build something that they can't
agree on.  Build it, then ask for help to round
it out.

8.  What ever you do ... try and work on the code
that is most beneficial for other reasons.  Don't
plan on completing the project.  In the event
that you don't complete, make sure that what you
did do was worthwhile for other reasons!

9.  Take all expert advice, including the above,
with some skepticism.  You will have much more
intuition because you will be deep in the issues.
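Suggestions 1 and 1.b can be sketched with Python's standard ssl module (the suite name is illustrative only; note that OpenSSL configures TLS 1.3 suites separately, so they remain enabled alongside the pinned one):

```python
import ssl

def make_single_suite_context() -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # Suggestion 1.b: TLS only -- refuse any SSLv3/early-TLS fallback.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Suggestion 1: offer one strong suite and reject the rest.
    # (TLS 1.3 suites are configured separately by OpenSSL and stay enabled.)
    ctx.set_ciphers("ECDHE-RSA-AES256-GCM-SHA384")
    return ctx

ctx = make_single_suite_context()
names = {c["name"] for c in ctx.get_ciphers()}
assert "ECDHE-RSA-AES256-GCM-SHA384" in names
```

A peer that cannot speak the pinned suite (or a TLS 1.3 equivalent) simply fails the handshake, which is exactly the no-negotiation behaviour suggested above.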



  With that in mind, I believe I could do a safer
 implementation in C++ than I could in C.


Go with it, then.  Being right means you win,
being wrong is an opportunity to learn :-)


 ... /Of course/ one should be
 able to communicate with standard TLS implementations, otherwise the
 toolkit would be worthless.


Is that the case?  A wide variety of uses for
any protocol are application-to-same-application.
The notion of client-to-server is an alternative,
but it's only an alternative.  It is not a given
that app builders want to talk to other TLS libs.

TLS is there to be used, as is all other software
and standards.  It is at your option whether you
wish to join the group of people that can express
comms in *standard* TLS, talking heterogeneously.


 (1) THE LICENCE
 
 I confess ignorance in matters concerning licensing. The basic rules
 which I want, and which I believe are appropriate are:
 (i) Anyone can use it, royalty free. Even commercial applications.
 (ii) Anyone can get the source code, and should be able to compile it to
 executable from this.
 (iii) Copyright notices must be distributed along with the toolkit.
 (iv) Anyone can modify the code (this is important for fixing bugs and
 adding new features) and redistribute the modified version. (Not sure
 what happens to the copyright notices if this happens though).

Sounds like 2-clause BSD or one of the equivalents.

The only question I wasn't quite sure of
was whether, if I take your code and modify it,
I can distribute a binary-only version and keep
the source changes proprietary.

If so, that's BSD.  If not, you need some sort
of restriction like Mozilla's (heading towards the GPL).

My own philosophy 

Re: anonymity +- credentials

2003-10-06 Thread Ian Grigg
Anton Stiglic wrote:

  We need a practical system for anonymous/pseudonymous
  credentials.  Can somebody tell us, what's the state of
  the art?  What's currently deployed?  What's on the
  drawing boards?
 
  The state of the art, AFAIK, is Chaum's credential system.
 
 The state of the art is Brands' credentials.


Thanks for clearing up the record there - it was
also my understanding that Brands' work was the
current theoretical state of the art!

In terms of actual practical systems, ones
that implement to Brands' level don't exist,
as far as I know?  Also, the use of Brands' work
would need to consider that he holds a swag of
patents over it all (as also applies to all of
the Chaum concepts).

There is an alternate approach, the E/capabilities
world.  Capabilities support the development of
pseudonyms and credentials probably more easily
than any other system.  But, it would seem that
the E development is still a research project,
showing lots of promise, not yet breaking out
into the wider applications space.

A further alternative is what could be called the
hard-coded pseudonym approach as characterised
by SOX.  (That's the protocol that my company
wrote, so normal biases expected.)  This approach
builds pseudonyms from the ground up, which results
in a capabilities model like E, but every separate
use of the capability must then be re-coded in hard
lines by hardened coders.

Which means, for example, that whilst the E crowd
can knock up a new capability over lunchtime, it
takes us about a year of hard work to get a new
capability in place (we've done several - payments,
messaging, trading, projects, ...).  The plus side
is that these capabilities are far more suited to
purpose than something built over a high level
platform.

In summary, the state of the art would seem to be
just that, an art in a state.  There is no clear
view as to how this will pan out in the future,
to my mind.


iang



credit card threat model

2003-10-08 Thread Ian Grigg
Anne & Lynn Wheeler wrote:

 what i said was that it was specifying a simplified SSL/TLS based on the
 business requirements for the primary use of SSL/TLS  as opposed to a
 simplified SSL/TLS based on the existing technical specifications and
 existing implementations.


I totally agree that the business requirements
for protecting credit cards have scant
relationship to the security model of SSL/TLS!


 I don't say it was technical TLS  I claimed it met the business
 requirements of the primary use of SSL/TLS.
 
 I didn't preclude that there could simplified SSL/TLS based on existing
 technical specifications as opposed to implementation based on business
 requirements for the primary use.
 
 I thot I was very fair to distinguish between the business requirements use
 of SSL/TLS as opposed to technical specifications for SSL/TLS.


I think the key here is that SSL/TLS is a channel
security protocol.  But harking back to its days
of origin, when Netscape asked for something to
protect credit cards, is going to confuse the
issue for a lot of people.

In preference, if we want something to protect
credit cards, then the threat models should be
established, and the protocol should be created.

Yes, SSL/TLS protects credit cards a little bit
in one part of their flight, but SSL/TLS is much
bigger and grander than that small part.  It's
fair to say, I think, that its whole security
model pays little attention to credit cards; it's
oriented to creating a good channel over which
any developer/implementor can pass *any* data.

Hence, for example, the emphasis on replay
prevention - which sits at a higher layer in a
financial protocol, and has, AFAIK, been in place
in credit cards since forever.  But if one is
doing a channel security product, it has to
be there, as the overlaying application won't
consider it.
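What that channel layer has to supply can be sketched as a MAC over a monotonically increasing sequence number (the key and message below are illustrative stand-ins; a real financial protocol would carry its own equivalents):

```python
import hashlib
import hmac

KEY = b"shared-secret-between-the-two-apps"  # illustrative only

def seal(seq: int, payload: bytes) -> tuple[int, bytes, bytes]:
    # Bind the sequence number into the MAC so it cannot be altered.
    tag = hmac.new(KEY, seq.to_bytes(8, "big") + payload, hashlib.sha256).digest()
    return seq, payload, tag

class Receiver:
    def __init__(self) -> None:
        self.highest_seen = -1

    def accept(self, seq: int, payload: bytes, tag: bytes) -> bool:
        good = hmac.new(KEY, seq.to_bytes(8, "big") + payload,
                        hashlib.sha256).digest()
        if not hmac.compare_digest(tag, good):
            return False                # forged or corrupted
        if seq <= self.highest_seen:
            return False                # a replay: this number was already used
        self.highest_seen = seq
        return True

rx = Receiver()
msg = seal(1, b"pay $10")
assert rx.accept(*msg)                  # first delivery goes through
assert not rx.accept(*msg)              # replaying the identical message fails
```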

 There are lots of really great implementations in this world  many of
 which have absolutely no relationship at all with a good business reason to
 exist.
 
 The real observation was that in the early deployments of SSL  it was
 thot it would be used for something entirely different ... and therefor had
 a bunch of stuff that would meet those business requirements. However, we
 come to find out that it was actually being used for something quite a bit
 different with different business requirements.


The history of how the business requirements
led to the SSL model is possibly closed to us
at this point...  I wasn't there, and I'm a
bit scared to ask :)


 So a possible conjecture is that if there had been better foreknowledge as
 to how SSL was going to be actually be used  one might conjecture that
 it would have looked more like something I suggest (since that is a better
 match to the business requirements) ... as opposed to matching some
 business requirements for which it turned out not to be used.


My own view - in conjecture - is that it comes
back to that old chestnut, what's your threat
model.  It would appear that this was one missing
phase in the early development of SSL.  Or, if
it was asked, it certainly wasn't validated, it
was predicted only.

But, in terms of useful posture today, 9 years
down the track, I personally think it is time to
give up the ghost and never mention credit
cards again.  Others will no doubt differ...
but I don't think it is possible or helpful to
mix and match the credit card mission and the
SSL result as if they were strongly related.


 I've repeatedly claimed that the credit card number in flight has never
 been the major threat/vulnerability  the major threat (even before the
 internet) has always been the harvesting of the merchant files with
 hundreds, thousands, tens of thousands, even millions of numbers  all
 neatly arranged.


Yep.  This was obvious in '94.  In fact it was
obvious in '84 - the Internet has always been a
very safe place as far as eavesdroppers go; it
ranks up there with telcos and well above
physical mail as far as reliability and privacy
go.

Yes, of course, eavesdropping is possible, and
of course there have been many incidents.  But,
in terms of the amount of traffic, the risk is
miniscule, and probably well below the credit
card companies' real thresholds.

And, even in the presence of widespread delivery
of credit card numbers in the clear, it's easy
to show that the prime threat is and was and will
always be the hacking into some easy Linux box
and scarfing up the millions from the database.

Why they didn't see that in '94 I don't know.


 The issue that we were asked to do in the X9A10 working group was to
 preserve the integrity of the financial infrastructure for all electronic
 retail payments.  A major problem is that in the existing infrastructure,
 the account number is effectively a shared-secret and therefor has to be
 hidden. Given that there is a dozen of business processes that require it
 to be in the clear and potentially millions of locations  there is no
 practical way of addressing 

Re: anonymity +- credentials

2003-10-08 Thread Ian Grigg
Anton Stiglic wrote:
 
 - Original Message -
 From: Ian Grigg [EMAIL PROTECTED]
 
  [...]
  In terms of actual practical systems, ones
  that implement to Brands' level don't exist,
  as far as I know?
 
 There were however several projects that implemented
 and tested the credentials system.  There was CAFE, an
 ESPRIT project.


CAFE now has a published report on it, so it
might actually be accessible.  I'm not sure
if any of the tech is available.


 At Zeroknowledge there was working implementation written
 in Java, with a client that ran on a blackberry.
 
 There was also the implementation at ZKS of a library in C
 that implemented Brands's stuff, of which I participated in.
 The library implemented issuing and showing of credentials,
 with a limit on the number of possible showing (if you passed
 the limit, identity was revealed, thus allowing for off-line
 verification of payments for example.  If you did not pass the
 limit, no information about your identity was revealed).
 The underlying math was modular, you could work in a
 subgroup of Z*p for prime p, or use Elliptic curves, or
 base it on the RSA problem.  We plugged in OpenSSL
 library to test all of these cases.
 Basically we implemented the protocols described in
 [1], with some of the extensions mentioned in the conclusion.
 
 The library was presented by Ulf Moller at some coding
 conference which I don't recall the name of...


Is any of this published?  I'd assumed not -
ZKS were another company obscuring their
obvious projects with secrecy.

 It was to be used in Freedom, for payment of services,
 but you know what happended to that projet.


Reality caught up with them, I heard :)  As
Eric R recently commented, there is no
shortage of encrypted comms projects being
funded and ... collapsing when they discover
that selling secure comms is not a
demand-driven business model.


 Somebody had suggested that to build an ecash system
 for example, you could start out by implementing David
 Wagner's suggestion as described in Lucre [2], and then
 if you sell and want extra features and flexibility get the
 patents and implement Brands stuff.


Back in '98 or so, I got involved with a project
to do bearer stuff.  I even went so far as to
commission a review of all the bearer protocols
(Cavendish, Chaum, Brands, Wagner, Mariott, etc
etc).  Brands came out as the best (please don't
ask me why), so Stefan and I spent many a pleasurable
negotiating session in Dutch bars trying to hammer
out a licence.  Unfortunately we didn't move fast
enough to lock up the terms, and he went off to
bigger and better things - ZKS.

Since then, we have toyed with adding tokens to WebFunds.
We started out thinking about Wagner, but what
transpired was that it was just as easy to make
the whole lot available at once.  Now we have a
framework.  (It's an incomplete project, but we
recently picked it up again after a long period
of inactivity, as there is a group that has figured
out how to use it for a cool project.)  The protocol
only covers single phase withdrawals, not two
phase, so far.
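For the record, the core of Chaum's RSA blinding (as distinct from the Wagner/Lucre and Brands variants mentioned above) is a few lines of modular arithmetic; the toy key below is hopelessly small and purely illustrative:

```python
from math import gcd

# Toy RSA key for the mint -- illustrative only, far too small for real money.
p, q = 1009, 1013
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

m = 424242          # the coin's serial number, m < n
r = 12345           # blinding factor, chosen secretly by the withdrawer
assert gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n        # user blinds the coin before sending
blind_sig = pow(blinded, d, n)          # mint signs without ever seeing m
sig = (blind_sig * pow(r, -1, n)) % n   # user strips the blinding factor

assert pow(sig, e, n) == m              # a valid mint signature on m, yet the
                                        # mint cannot link it to this withdrawal
```

The unblinding works because (m * r^e)^d = m^d * r mod n, so multiplying by r's inverse leaves a plain signature m^d that the mint never saw.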


 Similar strategy
 would seem to apply for digital credentials in general.


Perhaps!  I don't understand the model for credentials,
but if they can all be put into a block-level protocol,
then sharing the code base is a mighty fine idea.


  There is an alternate approach, the E/capabilities
  world.  Capabilities probably easily support the
  development of psuedonyms and credentials, probably
  more easily than any other system.   But, it would
  seem that the E development is still a research
  project, showing lots of promise, not yet breaking
  out into the wider applications space.
 
  A further alternate is what could be called the
  hard-coded psuedonym approach as characterised
  by SOX.  (That's the protocol that my company
  wrote, so normal biases expected.)  This approach
  builds psuedonyms from the ground up, which results
  in a capabilities model like E, but every separate
  use of the capability must be then re-coded in hard
  lines by hardened coders.
 
 Do you have any references on this?


The capabilities guys hang around here:

http://erights.org/
http://www.eros-os.org/

SOX protocol is described here:

http://webfunds.org/guide/sox.html


iang



Re: [dgc.chat] EU directive could spark patent war

2003-10-08 Thread Ian Grigg
Steve Schear wrote:
 
 [I wonder what if any effect this might have on crypto patents, e.g.,
 Chaumian blinding?]


My guess is, nix, nada.  Patents are a red herring
in the blinding skirmishes, they became a convenient
excuse and a point to place the flag when rallying
the troops.  The battle was elsewhere, but it was
good to have something to keep the press distracted.

You can see this in, for example, the long available
Wagner variation, and the availability of a bunch of
other variations.  Even when people started doing
demo code of the various alternates (Magic Money,
Ben Laurie's Lucre, etc) there was little to no
amounts of interest.  (There is one guy working
to turn BLL into a system, and then there is our
WebFunds project, originally started from an
old port of MM back in 1999 or so.  That's it as
far as I know, what is clear is that there is no
inundation of monetary offers for the tech.  I
know a couple of people who put or promised some
money, but it was all pocket change.)

Anyone with any business experience realises that
the patents were a huge risk factor, so the obvious
thing was to de-risk it.  Hence, use Wagner first
and shop for another method later (we figured this
out in 2001 after the first coder's Chaum code was
replaced by the second's Wagner efforts...  Or was
it Brands).

Hence, there are no business analyses being done,
and therefore, no business.

Here we remain within sight of the expiry of the
first of Chaum's patents, and still lukewarm
interest in blinding.  I predict the date will
pass and nothing will change.

The real barriers to token money systems are these:

   1. lack of a viable application
   2. tokens require downloaded clients
   3. bearer is a dirty word
   4. full implementation requires too many
  skills

(not authoritative)

As against approximations (DGCs, Paypals, nymous)
blinded token money systems don't attract enough
real business zing to make them attractive enough
to overcome the barriers.

(I personally am somewhat agnostic on blinding,
to the annoyance of many high priests of the
order.  I think the bank robbery problem is a
bit of a devil, but OTOH, I just spent today
working on getting token withdrawals going
again.  That's because I know of a group that
wants it for a very interesting application
to do vaguely with the 3rd world :-)


 The European Parliament's decision to limit patents... risks creating a
 patent war with a fallout that could make it illegal to access some
 European e-commerce sites from the United States...
 
 Pure software should not be patentable, the parliament argued, and
 software makers should not be required to license patented technology for
 the purposes of interoperability--for example, creating a device that can
 play a patented media format, or allowing a computer program to read and
 write a competitor's patented file formats. 
 
 The amendments also sought to ban the patenting of business methods such
 as Amazon.com's patent on one-click purchasing. 
 
 Full story at http://news.com.com/2100-1014_3-5086062.html?tag=nefd_top


Another factor is that Europe has effectively
emasculated the entrepreneurial digital money
field with the E-money directive.  It's been
a while since I read it, but it basically forces
the small guy to be just like a bank or to be
so small as to not have a future.  Empirically,
I know two people - entrepreneurs - who've tried
to get into it, then read the directive, and said
it can't be done (both from different countries
that actually claim to promote the field).

(The USA, under the quiet guidance of certain
very smart people, went the other way and
deliberately held off from doing or saying
anything.  They realised that they could do
nothing but harm... so they declined to get
involved.  Also, in the US, there is very
much more of a spirit of doing something if
it is not explicitly banned.  In Europe, there
is much more of a spirit of getting permission
if it is not explicitly permitted, on the
assumption that the government knows what it
is talking about.)

The only ones who are interested in reducing
transaction costs (in the blinding fashion) are
new outsiders looking to set up new payment
systems.  Hence, the rise of the digital
gold currencies was centered around the US, and
the smart card efforts of the Europeans were
centered around the national banking structures.

Smart card schemes cost O($100,000,000)
whereas these days a DGC costs O($100,000).

Go figure.


iang



Re: NCipher Takes Hardware Security To Network Level

2003-10-11 Thread Ian Grigg
Anton Stiglic wrote:
 
 - Original Message -
 From: Peter Gutmann [EMAIL PROTECTED]
  [...]
 
  The problem is
  that what we really need to be able to evaluate is how committed a vendor
 is
  to creating a truly secure product.
  [...]
 
 I agree 100% with what you said.  Your 3 group classification seems
 accurate.
 But the problem is how can people who know nothing about security evaluate
 which vendor is most committed to security?


(I am guessing you mean, in some sort of objective sense.)

Is there any reason to believe that people who
know nothing about security can actually evaluate
questions about security?

It's often been said that security is an inverted
product.  (I'm scratching my head for the proper
economic term here.)

That is, with security, you can measure easily when
it is letting the good stuff through, but you don't
know when and if and how well it is stopping the bad
stuff *.

The classical answer to difficult-to-evaluate
products is to concentrate on brand, or independent
assessors.  But, brands are based on revenues, not
on the underlying product.  Hence widespread confusion
as to whether Microsoft delivers secure product - the
brand gets in the way of any objective assessment.

And, independent assessors are generally subvertable
by special interests (mostly, the large incumbents
encourage independent assessors to raise barriers
to keep out low cost providers).  Hence, Peter's
points.  This is a very normal economic pattern; in
fact, it is the expected result.

So, right now, I'd say the answer to that question
is that there is no way for someone who knows nothing
about security to objectively evaluate a security
product.

iang

* In contrast, someone who knows little about cars
can objectively evaluate a car.  They can take it
for a test drive and see if it feels right.  Using
it is proving it.



WYTM?

2003-10-13 Thread Ian Grigg
As many have decried in recent threads, it all
comes down to WYTM - What's Your Threat Model.

It's hard to come up with anything more important
in crypto.  It's the starting point for ...
everything.  This seems increasingly evident because
we haven't successfully reverse-engineered the threat
model for the Quantum crypto stuff, for the Linux
VPN game, and for Tom's qd channel security.

Which results in, at best, a sinking feeling, or
at worst, endless arguments as to whether we are
dealing with yet another a hype cycle, yet another
practically worthless crypto protocol, yet another
newbie leading users on to disaster through belief
in simple, hidden, insecure factors, or...

WYTM?

It's the first question, and I've thought about it
a lot in the context of SSL.  This rant is about
what I've found.  Please excuse the weak cross-over!



For $40, you can pick up SSL & TLS by Eric
Rescorla [1].  It is about as close as I could
get to finding serious commentary on the threat
model for SSL [2].

The threat model is in Section 1.2, and the reader
might like to run through that, in the flesh, here:

  http://www.iang.org/ssl/rescorla_1.html

perhaps for the benefit of at least one unbiased
reading.  Please, read it.  I typed it in by hand,
and my fingers want to know it was worth it [3].

The rest of this rant is about what the Threat
model says, in totally biased, opinionated terms
[4].  My commentary rails on the left, the book
composes centermost.



  1.2  The Internet Threat Model

  Designers of Internet security protocols
  typically share a more or less common
  threat model.  

Eric doesn't say so explicitly, but this is pretty
much the SSL threat model.  Here comes the first
key point:

  First, it's assumed that the actual end
  systems that the protocol is being
  executed on are secure

(And then some testing of that claim.  To round
this out, let's skip to the next paragraph:)

  ... we assume that the attacker has more or
  less complete control of the communications
  channel between any two machines. 



Ladies and Gentlemen, there you have it.  The
Internet Threat Model (ITM), in a nutshell, or,
two nutshells, if we are using those earlier two
sentence models.

It's a strong model:  the end nodes are secure and
the middle is not.  It's clean, it's simple, and
we just happen to have a solution for it.



Problem is, it's also wrong.  The end systems
are not secure, and the comms in the middle is
actually remarkably safe.

(Whoa!  Did he say that?)  Yep, I surely did: the
systems are insecure, and, the wire is safe.

Let's quantify that:  Windows.  Is most of the
end systems (and we don't need to belabour that
point).  Are infected with viruses, hacks, macros,
configuration tools, passwords, Norton recovery
tools, my kid sister...

And then there's Linux.  13,000 boxen hacked per
month... [5].  In fact, Linux beats Windows 4 to 1
and it hasn't even challenged the user's desktop
market yet!

It shows in the statistics, it shows in experience;
pretty much all of us have seen a cracked box at
close quarters at one point or another [6].

Windows systems are perverted in their millions by
worms, viruses, and other upgrades to the social
networking infrastructure.  Linux systems aren't
much more trust-inspiring, on the face of it.

Pretty much all of us present in this forum would
feel fairly confident about downloading some sort
of crack disc, walking into a public library and
taking over one of their machines.

Mind you... in that same library, could we walk
in and start listening to each other's comms?

Nope.  Probably not.

On the one hand, we'd have trouble on the cables,
without being spotted by that pesky librarian.
And those darn $100 switches, they so ruin the
party these days.

Admittedly, OTOH, we do have that wonderful 802.11b
stuff and there we can really listen in [7].

But, in practice, we can conclude, nobody much
listens to our traffic.  Really, so close to nobody
that nobody in reality worries about it [8].

But, every sumbitch is trying to hack into our
machine, everyone has a virus scanner, a firewall,
etc etc.  I'm sure we've all shared that weird
feeling when we install a new firewall that
notifies us when our machine is being port scanned.
A new machine can be put on a totally new IP, and
almost immediately, ports are being scanned.

How do they do that so fast?



Hence the point:  the comms is pretty darn safe.
And the node is in trouble.  We might have trouble
measuring it, but we can assert this fact:

the node is way more insecure than the comms.

That's a good enough assumption for now;  which
takes us back to the so-called Internet Threat
Model and by extension and assumption, the SSL
threat model:

the actual end systems ... are secure.
  the attacker has more or less complete
 control of the communications channel between
 any two machines.

Quite the reverse pertains [5].  So where does that

Re: WYTM?

2003-10-13 Thread Ian Grigg
Minor errata:

Eric Rescorla wrote:
  I totally agree that the systems are
 insecure (obligatory pitch for my "Internet is Too
 Secure Already") http://www.rtfm.com/TooSecure.pdf,

I found this link had moved to here;

http://www.rtfm.com/TooSecure-usenix.pdf

 which makes some of the same points you're making,
 though not all.

iang



Re: WYTM?

2003-10-13 Thread Ian Grigg
Eric,

thanks for your reply!

My point is strictly limited to something
approximating there was no threat model
for SSL / secure browsing.  And, as you
say, you don't really disagree with that
100% :-)

With that in mind, I think we agree on this:


  [9] I'd love to hear the inside scoop, but all I
  have is Eric's book.  Oh, and for the record,
  Eric wasn't anywhere near this game when it was
  all being cast out in concrete.  He's just the
  historian on this one.  Or, that's the way I
  understand it.
 
 Actually, I was there, though I was an outsider to the
 process. Netscape was doing the design and not taking much
 input. However, they did send copies to a few people and one
 of them was my colleague Allan Schiffman, so I saw it.

OK!

 It's really a mistake to think of SSL as being designed
 with an explicit threat model. That just wasn't how the
 designers at Netscape thought, as far as I can tell.


Well, that's the sort of confirmation I'm looking
for.  From the documents and everything, it seems
as though the threat model wasn't analysed, it was
just picked out of a book somewhere.  Or, as you
say, even that is too kind, they simply didn't
think that way.

But, this is a very important point.  It means that
when we talk about secure browsing, it is wrong to
defend it on the basis of the threat model.  There
was no threat model.  What we have is an accident
of the past.

Which is great.  This means there is no real objection
to building a real threat model.  One more appropriate
to the times, the people, the applications, the needs.

And the today-threats.  Not the bogeyman threats.


 Incidentally, Ian, I'd like to propose a counterargument
 to your argument. It's true that most web traffic
 could be encrypted if we had a more opportunistic key
 exchange system. But if there isn't any substantial
 sniffing (i.e. the wire is secure) then who cares?


Exactly.  Why do I care?  Why do you care?

It is mantra in the SSL community and in the
browsing world that we do care.  That's why
the software is arranged in a double lock-in,
between the server and the browser, to
force use of a CA cert.

So, if we don't care, why do we care?  What
is the reason for doing this?  Why are we
paying to use free software?  What paycheck
does Ben draw from all our money being spent
on this "I don't care" thing called a cert?

Some people say because of the threat model.

And that's what this thread is about:  we
agree that there is no threat model, in any
proper sense.  So this is a null and void
answer.

Other people say to protect against MITM.
But, as we've discussed at length, there is
little or no real or measurable threat of MITM.

Yet others say to be sure we are talking
to the merchant.  Sorry, that's not a good
answer either because in my email box today
there are about 10 different attacks on the
secure sites that I care about.  And mostly,
they don't care about ... certs.  But they
care enough to keep doing it.  Why is that?



Someone made a judgement call, 9 or so years
ago, and we're still paying for that person
caring on our behalf, erroneously.

Let's not care anymore.  Let's stop paying.

I don't care who it was, even.  I just want
to stop paying for his person, caring for me.

Let's start making our own security choices?

Let crypto run free!

iang



Re: WYTM?

2003-10-15 Thread Ian Grigg
Eric Rescorla wrote:
 
 Ian Grigg [EMAIL PROTECTED] writes:
  I'm sorry, but, yes, I do find great difficulty
  in not dismissing it.  Indeed being other than
  dismissive about it!
 
  Cryptography is a special product, it may
  appear to be working, but that isn't really
  good enough.  Coincidence would lead us to
  believe that clear text or ROT13 were good
  enough, in the absence of any attackers.
 
  For this reason, we have a process.  If the
  process is not followed, then coincidence
  doesn't help to save our bacon.

 Disagree. Once again, SSL meets the consensus threat
 model. It was designed that way partly unconsciously,
 partly due to inertia, and partly due to bullying by
 people who did have the consensus threat model in mind.


(If you mean that the ITM is consensus, I grant
you that two less successful protocols follow
it - S/MIME and IPSec (partly) - but I don't
think that makes it consensus.  I know there
are a lot of people who don't think in any other
terms than this model, and that is the issue!
There are also a lot of people who think in
terms completely opposed to ITM.

So to say that ITM is consensus is something
that is going to have to be established.

If that's not what you mean, can you please
define?)


 That's not the design process I would have liked,
 but it's silly to say that a protocol that matches
 the threat model is somehow automatically the wrong
 thing just because the designers weren't as conscious
 as one would have liked.


I'm not sure I ever said that the protocol
doesn't match the threat model - did I?  What
I should have said and hoped to say was that
the protocol doesn't match the application.

I don't think I said automatically, either.
I did hold out hope in that rant of mine that
the designers could have accidentally got it
right.  But, they didn't.

Now, SSL, by itself, within the bounds of the
ITM is actually probably pretty good.  By all
reports, if you want ITM, then SSL is your
best choice.

But, we have to be very careful to understand
that any protocol has a given set of characteristics,
and its applicability to an application is an
uncertain thing;  hence the process of the threat
model and the security model.  In SSL's case, one
needs to say "use SSL, but only if your threat
model is close to ITM".  Or similar.  Hence the
title of this rant.

The error of the past has been that too many
people have said something like "Use SSL, because
we already got it right".  Which, unfortunately,
skips the whole issue of what threat model one
is dealing with.  Just like happened with secure
browsing.

In this case, the ITM was a) agreed upon after
the fact to fill in the hole, and b) not the right
one for the application.


   And on the client side the user can, of course, click ok to the "do
   you want to accept this cert" dialog. Really, Ian, I don't understand
   what it is you want to do. Is all you're asking for to have that
   dialog worded differently?
 
 
  There should be no dialogue at all.  Going from
  HTTP to HTTPS/self signed is a mammoth increase
  in security.  Why does the browser say it is
  less/not secure?
 Because it's giving you a chance to accept the certificate,
 and letting you know in case you expected a real cert that
 you're not getting one.


My interpretation - which you won't like - is that
it is telling me that this certificate is bad, and
asking me whether I am sure I want to do this.

A popup is synonymous with bad news.  It shouldn't be
used for good news.  As a general theme, that is,
although this is the reason I cited that paper:  others
have done work on this and they are a long way ahead
in their thinking, far beyond me.


   It's not THAT different from what
   SSH pops up.
 
 
  (Actually, I'm not sure what SSH pops up, it's
  never popped up anything to me?  Are you talking
  about a windows version?)
 SSH in terminal mode says:
 
 The authenticity of host 'hacker.stanford.edu (171.64.78.90)' can't be established.
 RSA key fingerprint is d3:a8:90:6a:e8:ef:fa:43:18:47:4c:02:ab:06:04:7f.
 Are you sure you want to continue connecting (yes/no)? 
 
 I actually find the Firebird popup vastly more understandable
 and helpful.
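
For what it's worth, the fingerprint in that SSH prompt is nothing
exotic: just a hash of the host's public key blob, rendered in
colon-hex.  A rough sketch (the key bytes here are purely
illustrative):

```python
import hashlib

def ssh_md5_fingerprint(key_blob: bytes) -> str:
    # OpenSSH's classic fingerprint: an MD5 digest of the raw
    # public-key blob, printed as colon-separated hex pairs
    digest = hashlib.md5(key_blob).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```

What the user is really being asked to do is compare that string,
out of band, with one the server operator published.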


I'm not sure I can make much of your point,
as I've never heard of nor seen a Firebird?


iang



Re: WYTM?

2003-10-16 Thread Ian Grigg
Jon Snader wrote:
 
 On Mon, Oct 13, 2003 at 06:49:30PM -0400, Ian Grigg wrote:
  Yet others say to be sure we are talking
  to the merchant.  Sorry, that's not a good
  answer either because in my email box today
  there are about 10 different attacks on the
  secure sites that I care about.  And mostly,
  they don't care about ... certs.  But they
  care enough to keep doing it.  Why is that?
 
 
 I don't understand this.  Let's suppose, for the
 sake of argument, that MitM is impossible.  It's
 still trivially easy to make a fake site and harvest
 sensitive information.


Yes.  This is the attack that is going on.  This
is today's threat.  (In that it is a new threat.
The old threat still exists - hack the node.)


 If we assume (perhaps erroneously)
 that all but the most naive user will check that they
 are talking to a ``secure site'' before they type in
 that credit card number, doesn't the cert provide assurance
 that you're talking to whom you think you are?


Nope.  It would seem that only the more sophisticated
users can be relied upon to correctly check that they
are at the correct secure site.  In practice almost
all of these attacks bypass any cert altogether and
do not use an SSL protected HTTPS site.

They use a variety of techniques to distract the
attention of the user, some highly imaginative.

For example, if you target the right browser, then it
is possible to popup a box that covers the appropriate
parts.  Or to put a display inside the window that
duplicates the browser display.  Or the URL is one
of those with strange features in there or funny
letters that look like something else.

In practice, these attacks are all statistical,
they look close enough, and they fool some of the
people some of the time.

Finally, just in the last month, they have also
started doing actual cert spoofs.  This was quite
exciting to me to see a spoof site using a cert,
so I went in and followed it.  Hey presto, it
showed me the cert, as it said it was wrong!  So
I clicked on the links and tried to see what was
wrong.

Here's the interesting thing:  I couldn't easily
tell, and my first diagnosis was wrong.  So then
I realised that *even* if the spoof is using a
cert, the victim falls to a confusion attack (see
Tom Weinstein's comments on bad GUIs).

(But, for the most part, 95% or so ignore the cert,
and the user may or may not notice.)

Now, we have no statistics on how many of these
attacks work, other than the following:  they keep
happening, and with increasing frequency over time.

From this I conclude they are working, enough to
justify the cost of the attack at least.

I guess the best thing to say is that the raw
claim that the cert ensures that you are talking
to the merchant is not 100% true.  It will help
a sophisticated user.  An attack will bypass some
of the users a lot.  It might fool many of the
users only occasionally.


 If the argument is that Verisign and the others don't do
 enough checking before issuing the cert, I don't see
 how that somehow means that SSL is flawed.


SSL isn't flawed, per se.  It's just not appropriately
being used in the secure browser application.  It's
fair to say that its use is misaligned to requirements,
and a lot of things could be done to improve matters.

But, one of the perceptions that exist in the browser
world is that SSL secures ecommerce.  Until that view
is rectified, we can't really build the consensus to
have efforts like Ye & Smith, and Close, and others,
be treated as serious and desirable.

(In practice, I don't think it matters how Verisign
and others check the cert.  This is shown by the
fact that almost all of these attacks have bypassed
the cert altogether.)

iang

http://www.iang.org/ssl/maginot_web.html



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Tom Otvos wrote:

 As far as I can glean, the general consensus in WYTM is that MITM attacks are very low
 (read: inconsequential) probability.  Is this *really* true?


The frequency of MITM attacks is very low, in the sense
that there are few or no reported occurrences.  This
makes it a challenge to respond to in any measured way.


 I came across this paper last year, at the
 SANS reading room:
 
 http://rr.sans.org/threats/man_in_the_middle.php
 
 I found it both fascinating and disturbing, and I have since confirmed much of what
 it was describing.  This leads me to think that an MITM attack is not merely of
 academic interest but one that can occur in practice.


Nobody doubts that it can occur, and that it *can*
occur in practice.  It is whether it *does* occur
that is where the problem lies.

The question is one of costs and benefits - how much
should we spend to defend against this attack?  How
much do we save if we do defend?

[ Mind you, the issues that are raised by the paper
are to do with MITM attacks, when SSL/TLS is employed
in an anti-MITM role.  (I only skimmed it briefly;
I could be wrong.)  We in the SSL/TLS/secure browsing
debate have always assumed that SSL/TLS when fully
employed covers that attack - although it's not the
first time I've seen evidence that the assumption
is unwarranted. ]


 Having said that then, I would like to suggest that one of the really big flaws in
 the way SSL is used for HTTP is that the server rarely, if ever, requires client
 certs.  We all seem to agree that convincing server certs can be crafted with ease
 so that a significant portion of the Web population can be fooled into communicating
 with a MITM, especially when one takes into account Bruce Schneier's observations of
 legitimate uses of server certs (as quoted by Bryce O'Whielacronx).  But as long as
 servers do *no* authentication on client certs (to the point of not even asking for
 them), then the essential handshaking built into SSL is wasted.
 
 I can think of numerous online examples where requiring client certs would be a good
 thing: online banking and stock trading are two examples that immediately leap to
 mind.  So the question is, why are client certs not more prevalent?  Is it simply an
 ease-of-use thing?


I think the failure of client certs has the same
root cause as the failure of SSL/TLS to branch
beyond its mandated role of protecting e-
commerce.  Literally, the requirement that
the cert be supplied (signed) by a third party
killed it dead.  If there had been a button on
every browser that said "generate self-signed
client cert now" then the whole world would be
using them.

Mind you, the whole client cert thing was a bit
of an afterthought, wasn't it?  The orientation
that it was at server discretion also didn't help.


 Since the Internet threat model upon which SSL is based makes the assumption that
 the channel is *not* secure, why is MITM not taken more seriously?


People often say that there are no successful MITM
attacks because of the presence of SSL/TLS !

The existence of the bugs in Microsoft browsers
puts the lie to this - literally, nobody has bothered
with MITM attacks, simply because they are way way
down on the average crook's list of sensible things
to do.

Hence, that rant was in part intended to separate
out 1994's view of threat models to today's view
of threat models.  MITM is simply not anywhere in
sight - but a whole heap of other stuff is!

So, why bother with something that isn't a threat?
Why can't we spend more time on something that *is*
a threat, one that occurs daily, even hourly, some
times?


 Why, if SSL is designed to solve a problem that can be solved, namely securing the
 channel (and people are content with just that), are not more people jumping up and
 down yelling that it is being used incorrectly?


Because it's not necessary.  Nobody loses anything
much over the wire, that we know of.  There are
isolated cases of MITMs in other areas, and in
hacker conferences for example.  But, if 10 bit
crypto and ADH was used all the time, it would
still be the least of all risks.


iang



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Tom Weinstein wrote:
 
 Ian Grigg wrote:
 
  Nobody doubts that it can occur, and that it *can* occur in practice.
  It is whether it *does* occur that is where the problem lies.
 
 This sort of statement bothers me.
 
 In threat analysis, you have to base your assessment on capabilities,
 not intentions. If an attack is possible, then you must guard against
 it. It doesn't matter if you think potential attackers don't intend to
 attack you that way, because you really don't know if that's true or not
 and they can always change their minds without telling you.

In threat analysis, you base your assessment on
economics of what is reasonable to protect.  It
is perfectly valid to decline to protect against
a possible threat, if the cost thereof is too high,
as compared against the benefits.
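
That economic test can be written down in one line; the numbers below
are purely illustrative, not a claim about real attack rates:

```python
def worth_defending(attack_probability: float,
                    loss_if_attacked: float,
                    defence_cost: float) -> bool:
    # Defend only when the expected loss exceeds the cost of defending.
    return attack_probability * loss_if_attacked > defence_cost

# A frequent, costly attack justifies the spend...
assert worth_defending(0.5, 1000.0, 10.0)
# ...a vanishingly rare one may not, however scary it sounds.
assert not worth_defending(0.000001, 1000.0, 10.0)
```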

This is the reason that we cannot simply accept
the possible as a basis for engineering of any
form, let alone cryptography.  And this is the
reason why, if we can't measure it, then we are
probably justified in assuming it's not a threat
we need to worry about.

(Of course, anecdotal evidence helps in that
respect, hence there is a lot of discussion
about MITMs in other forums.)

iang

Here's Eric Rescorla's words on this:

http://www.iang.org/ssl/rescorla_1.html

The first thing that we need to do is define our *threat model*.
A threat model describes resources we expect the attacker to
have available and what attacks the attacker can be expected
to mount.  Nearly every security system is vulnerable to some
threat or another.  To see this, imagine that you keep your
papers in a completely unbreakable safe.  That's all well and
good, but if someone has planted a video camera in your office
they can see your confidential information whenever you take it
out to use it, so the safe hasn't bought you that much.

Therefore, when we define a threat model, we're concerned
not only with defining what attacks we are going to worry
about but also those we're not going to worry about.
Failure to take this important step typically leads to
complete deadlock as designers try to figure out how to
counter every possible threat.  What's important is to
figure out which threats are realistic and which ones we
can hope to counter with the tools available.



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Perry E. Metzger wrote:
 
 Ian Grigg [EMAIL PROTECTED] writes:
  In threat analysis, you base your assessment on
  economics of what is reasonable to protect.  It
  is perfectly valid to decline to protect against
  a possible threat, if the cost thereof is too high,
  as compared against the benefits.
 
 The cost of MITM protection is, in practice, zero.


Not true!  The cost is from 10 million dollars to
100 million dollars per annum.  Those certs cost
money, Perry!  All that sysadmin time costs money,
too!  And all that managerial time trying to figure
out why the servers don't just work.  All those
consultants that come in and look after all those
secure servers and secure key storage and all that.

In fact, it costs so much money that nobody bothers
to do it *unless* they are forced to do it by people
telling them that they are being irresponsibly
vulnerable to the MITM!  Whatever that means.

Literally, nobody - 1% of everyone - runs an SSL
server, and even only a quarter of those do it
properly.  Which should be indisputable evidence
that there is huge resistance to spending money
on MITM.


 Indeed, if you
 wanted to produce an alternative to TLS without MITM protection, you
 would have to spend lots of time and money crafting and evaluating a
 new protocol that is still reasonably secure without that
 protection. One might therefore call the cost of using TLS, which may
 be used for free, to be substantially lower than that of an
 alternative.


I'm not sure how you come to that conclusion.  Simply
use TLS with self-signed certs.  Save the cost of the
cert, and save the cost of the re-evaluation.

If we could do that on a widespread basis, then it
would be worth going to the next step, which is caching
the self-signed certs, and we'd get our MITM protection
back!  Albeit with a bootstrap weakness, but at real
zero cost.
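
The caching idea is just SSH's trust-on-first-use model applied to
certs; a minimal sketch (a hypothetical helper, not any browser's
actual code):

```python
import hashlib

def check_certificate(cache: dict, host: str, cert_der: bytes) -> str:
    """Trust-on-first-use: remember each host's cert fingerprint and
    complain only if it later changes (the SSH known-hosts model)."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    known = cache.get(host)
    if known is None:
        cache[host] = fingerprint   # first contact: trust and record
        return "first-use"
    return "ok" if known == fingerprint else "MISMATCH"
```

The bootstrap weakness is visible right there: the first contact is
trusted blindly, and only a later key change is caught.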

Any merchant who wants more, well, there *will* be
ten offers in his mailbox to upgrade the self-signed
cert to a better one.  Vendors of certs may not be
the smartest cookies in the jar, but they aren't so
dumb that they'll miss the financial benefit of self-
signed certs once it's been explained to them.

(If you mean, use TLS without certs - yes, I agree,
that's a no-win.)


 How low does the risk have to get before you will be willing not just
 to pay NOT to protect against it? Because that is, in practice, what
 you would have to do. You would actually have to burn money to get
 lower protection. The cost burden is on doing less, not on doing
 more.


This is a well known metric.  Half is a good rule of
thumb.  People will happily spend X to protect themselves
from X/2.  Not all the people all the time, but it's
enough to make a business model out of.  So if you
were able to show that certs protected us from 5-50
million dollars of damage every year, then you'd be
there.

(Mind you, where you would be is, proposing that certs
would be good to make available.  Not compulsory for
applications.)


 There is, of course, also the cost of what happens when someone MITM's
 you.


So I should spend the money.  Sure.  My choice.


 You keep claiming we have to do a cost benefit analysis, but what is
 the actual measurable financial benefit of paying more for less
 protection?


Can you take that to the specific case?

iang



Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Ian Grigg
Tom Weinstein wrote:

 The economic view might be a reasonable view for an end-user to take,
 but it's not a good one for a protocol designer. The protocol designer
 doesn't have an economic model for how end-users will end up using the
 protocol, and it's dangerous to assume one. This is especially true for
 a protocol like TLS that is intended to be used as a general solution
 for a wide range of applications.


I agree with this.  Especially, I think we are
all coming to the view that TLS/SSL is in fact
a general purpose channel security protocol,
and should not be viewed as being designed to
protect credit cards or e-commerce especially.

Given this, it is unreasonable to talk about
threat models at all, when discussing just the
protocol.  I'm coming to the view that protocols
don't have threat models, they only have
characteristics.  They meet requirements, and
they get deployed according to the demands of
higher layers.

Applications have threat models, and in this is
seen the mistake that was made with the ITM.
Each application has to develop its own threat
model, and from there, its security model.

Once so developed, a set of requirements can
be passed on to the protocol.  Does SSL/TLS
meet the requirements passed on from on high?
That of course depends on the application and
what requirements are set.

So, yes, it is not really fair for a protocol
designer to have to undertake an economic
analysis, as much as they don't get involved
in threat models and security models.  It's
up to the application team to do that.

Where we get into trouble a lot in the crypto
world is that crypto has an exaggerated
importance, an almost magical property of
appearing to make everything safe.  Designers
expect a lot from cryptographers for these
reasons.  Too much, really.  Managers demand
some special sprinkling of crypto fairy dust
because it seems to make the brochure look
good.

This will always be a problem.  Which is why
it's important for the crypto guy to ask the
question - what's *your* threat model?  Stick
to his scientific guns, as it were.


 In some ways, I think this is something that all standards face. For any
 particular application, the standard might be less cost effective than a
 custom solution. But it's much cheaper to design something once that
 works for everyone off the shelf than it would be to custom design a new
 one each and every time.


Right.  It is however the case that secure
browsing is facing a bit of a crisis in
security.  So, there may have to be some
changes, one way or another.

iang



Cryptophone locks out snoopers

2003-11-25 Thread Ian Grigg
(link is very slow:)
http://theregister.co.uk/content/68/34096.html


Cryptophone locks out snoopers 
By electricnews.net
Posted: 20/11/2003 at 10:16 GMT


A German firm has launched a GSM mobile phone that
promises strong end-to-end encryption on calls,
preventing the possibility of anybody listening in. 

If you think that you'll soon be seeing this on the shelves
of your local mobile phone shop though, think again. For
a start, the Cryptophone sells for EUR1,799 per handset,
which puts it out of the reach of most buyers. Second,
the phone's maker, Berlin-based GSMK, says the phone
will not be sold off the shelf because of the measures
needed to ensure that the product received by the
customer is untampered with and secure. Buyers must
buy the phone direct from GSMK. 

According to GSMK, the new phone is designed to
counteract known measures used to intercept mobile
phone calls. While GSM networks are far more secure
than their analogue predecessors, there are ways and
means to circumvent security measures. 

The encryption in GSM is only used to protect the call
while it is in the air between the GSM base station and
the phone. During its entire route through the telephone
network, which may include other wireless links, the call
is not protected by encryption. Encryption on the GSM
network can also be broken. The equipment needed to do
this is extremely expensive and is said to be only
available to law enforcement agencies, but it has been
known to fall into the hands of criminal organisations. 

The Cryptophone is a very familiar-looking device, since
it is based around the same HTC smartphone that O2
used as its original XDA platform. The phone runs on a
heavily modified version of Microsoft Pocket PC 2002. 

GSMK says it is the only manufacturer of such devices
that has its source code publicly available for review. It
says this will prove that there are no back-doors in the
software, thus allaying the fears of the
security-conscious. Publication of the source code
doesn't compromise the phone's security, according to
GSMK. The Cryptophone is engineered in such a way
that the encryption key is only stored in the phone for the
duration of the call and securely erased immediately
afterwards. 

One drawback of the device is that it requires the
recipient of calls to also use a Cryptophone to ensure
security. GSMK does sell the device in pairs, but also
offers a free software download that allows any PC with
a modem to be used as a Cryptophone. 

GSMK says that the Cryptophone complies with German
and EU export law. This means the device can be sold
freely within the EU and a number of other states such
as the US, Japan and Australia. It cannot be sold to
customers within Afghanistan, Syria, Iraq, Iran, Libya
and North Korea. A number of other states are subject
to tight export controls and a special licence will have to
be obtained. 

© ElectricNews.Net



Re: Open Source Embedded SSL - (License and Memory)

2003-11-28 Thread Ian Grigg
J Harper wrote:
 
 1) Not GPL or LGPL, please.  I'm a fan of the GPL for most things, but
 
  for embedded software, especially in the security domain, it's a
  killer.  I'm supposed to allow users to modify the software that runs
  on their secure token?  And on a small platform where there won't be
  such things as loadable modules, or even process separation, the
  (L)GPL really does become viral.  This is, I think, why Red Hat
  releases eCos under a non-GPL (but still open source) license.
 
 We're aware of these issues.  How do other people on the group feel?

I think this applies more generally, but especially
for crypto software, because of the legal environment
and the complicated usage to which it is often put.

Placing any burdens of a non-technical nature on the
user is generally a downer.  Crypto-newbies are often
unsure and under rather intense pressure to get
something out.  If uncertainties of code licensing
issue are added, it can have a marked effect on the
results.

The general result is a choice between no crypto and
poorly done crypto.  (Rarely is good crypto done in
the first instance.)  Opinions differ on this point,
but I generally err on the side of recommending less
than perfect crypto, which can be repaired later on
at a lower cost.  It's a lot easier to sell a manager
on replacing poor crypto when it becomes needed
than on we need to add a crypto layer.

For that reason, we (Cryptix) have always placed all
our code under a BSD style licence, except a few cases
where it has been placed under public domain (AES).  Our
view has always been, with crypto, the least barriers
the better.

In essence, get it out there is the mantra.

iang



Ross Anderson's Trusted Computing FAQ

2003-12-20 Thread Ian Grigg
Ross Anderson's Trusted Computing FAQ has a lot
to say about recent threads:

http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html

iang



I don't know PAIN...

2003-12-20 Thread Ian Grigg
What is the source of the acronym PAIN?


Lynn said:

 ... A security taxonomy, PAIN:
 * privacy (aka things like encryption)
 * authentication (origin)
 * integrity (contents)
 * non-repudiation


I.e., its provenance?

Google shows only a few hits, indicating
it is not widespread.

iang



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-22 Thread Ian Grigg
Anne  Lynn Wheeler wrote:
 At issue in business continuity are business requirements for things like
 no single point of failure,  offsite storage of backups, etc. The threat
 model is 1) data in business files can be one of its most valuable assets,
 2) it can't afford to have unauthorized access to the data, 3) it can't
 afford to lose access to data, 4) encryption is used to help prevent
 unauthorized access to the data, 5) if the encryption keys are protected by
 a TCPA chip, are the encryption keys recoverable if the TCPA chip fails?

You may have hit upon something there, Lynn.

One of the (many) reasons that PKI failed is
that businesses simply don't outsource trust.

If the use of TCPA is such that the business
must trust in its workings, then it can fairly
easily be predicted that it won't happen.  For
business, at least (that still leaves retail
and software sales based on IP considerations).

It is curious that in the IT trust business,
there seems to be a continuing supply of
charlatan ventures.  Even as news of PKI
slinking out of town reaches us, people are
lining up to buy tickets for the quantum
cryptography miracle cure show and bottles
of the new wonder TCPA elixir.

iang



Re: Difference between TCPA-Hardware and other forms of trust

2003-12-22 Thread Ian Grigg
Bill Frantz wrote:

 [I always considered the biggest contribution from Mondex was the idea of
 deposit-only purses, which might reduce the incentive to rob late-night
 business.]

This was more than just a side effect, it was also
the genesis of the earliest successes with smart
card money.

The first smart card money system in the Netherlands
was a service-station system for selling fuel to
truck drivers.  As security costs kept on rising,
due to constant hold-ups, the smart card system
was put in to create stations that had no money
on hand, so no need for guards or even tellers.

This absence of night time staff created a great
cost saving, and the programme was a big success.
Unfortunately, the early lessons were lost as time
went on, and attention switched from single-purpose
to multi-purpose applications.

iang



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-22 Thread Ian Grigg
Bill Stewart wrote:
 
 At 09:38 AM 12/16/2003 -0500, Ian Grigg wrote:
 
 In the late nineties, the smart card world
 worked out that each smart card was so expensive,
 it would only work if the issuer could do multiple
 apps on each card.  That is, if they could share
 the cost with different uses (or users).
 
 Of course, at this point the assertion that a smart card
 (that doesn't also have independent user I/O)
 costs enough to care about is pretty bogus.
 Dumb smartcards are cost-effective enough to use them
 to carry $5 in telephone minutes.


Sorry, yes, each actual smart card is, at
the margin, cheap.  But, as a project, the
smart card is expensive.  There's a big
difference between project costs and the
marginal cost, and that generally makes
*the* difference.

I suppose the confusion is endemic:  everyone
thinks about the project costs per person, and
assumes that means one smart card per person,
but the cost per person is far more than the
50c marginal cost of each actual smart card.

Smart cards are a lot like Christmas, it's
not the gift, but the act of giving that
makes it special.

 The real constraint is that you're unlikely to have
 more than one card reader in a machine,
 so multifunction cards provide the opportunity to
 run multiple applications without switching cards in and out,
 but that only works if the application vendors cooperate.
 
 For instance, you may have some encrypted session application
 that needs to have your card stay in the machine during the session
 (e.g. VOIP, or secure login, SSH-like things, remote file system access),
 and you may want to pay for something using your bank smartcard
 during the session.  That's not likely to work out,
 because the secure session software vendors are
 unlikely to have a relationship with your bank that lets
 both of them trust each other with their information,
 compared to the simplicity of having multiple cards.


For example, yes.  So it all comes down to
whether you can afford to roll out the hardware
to all the vendors, and all the associated
nodes.  At this point, the penny drops, and
smart cards start looking very expensive.

Hence, to date, only single-purpose projects
have succeeded - ones where the economics
were clearly based on narrowly focused,
single activities:  phones, transit systems,
etc, and they justified themselves on those
activities, alone, without relying on the
economics of unmeasurable and unmeetable
hyperbole.

iang

PS: all those Europeans with all those
smart cards in their pockets - ask them
how many times they use the smart card
features!



Re: Outsourced Trust (was Re: Difference between TCPA-Hardware and a smart card and something else before

2003-12-23 Thread Ian Grigg
Ed Reed wrote:
 
  Ian Grigg [EMAIL PROTECTED] 12/20/2003 12:15:51 PM 
 
 One of the (many) reasons that PKI failed is
 that businesses simply don't outsource trust.
 
 Of course they do.  Examples:
 
 DB and other credit reporting agencies.
 SEC for fair reporting of financial results.
 International Banking Letters of Credit when no shared root of trust
 exists.
 Errors and Omissions Professional Liability insurance for consultants
 you don't know.
 Workman's Compensation insurance for independent contractors you don't
 know.


Of course they don't.  What they do is they
outsource the collection of certain bases of
information, from which to make trust decisions.
The trust is still in house.  The reports are
acquired from elsewhere.

That's the case for DB and credit reporting.
For the SEC, I don't understand why it's on
that list.  All they do is offer to store the
filings, they don't analyse them or promise
that they are true.  They are like a library.

International Banking Letters of Credit - that's
money, not trust.  What happens there is that
the receiver gets a letter, and then takes it
to his bank.  If his bank accepts it, it is
acceptable.  The only difference between using
that and a credit card, at a grand level, is
that you are relying on a single custom piece
of paper, with manual checks at every point,
rather than a big automated system that mechanises
the letter of credit into a piece of plastic.
(Actually, I'm totally unsure on these points,
as I've never examined in detail how they work :-)

Insurance - is not the outsourcing of trust,
but the sharing of risks.



Unfortunately, most of the suppliers of these
small factors in the overall trust process of
a company, PKI included, like to tell the
companies that they can, and are, outsourcing
trust.  That works well, because, if the victim
believes it (regardless of whether he is doing
it) then it is easier to sell some other part
of the services.  It's basically a technique
to lull the customer into handing over more
cash without thinking.

But, make no mistake!  Trust itself - the way
it marshals its information and makes its
decisions - is part of the company's core
business.  Any business that outsources its
core specialties goes broke eventually.

And, bringing this back to PKI, the people
who pushed PKI fell for the notion that
trust could be outsourced.  They thus didn't
understand what trust was, and consequently
confused the labelling of PKI as trust with
the efficacy of PKI as a useful component
in any trust model (see Lynn's post).


 The point is that the real world has monetized risk.  But the
 crytpo-elite have concentrated too hard on eliminating environmental
 factors from proofs of correctness of algorithms, protocols, and most
 importantly, business processes.


I agree with this, and all the rest.  The no-
risk computing school is fascinated with the
possibility of eliminating entire classes of
risk, so much so that they often introduce
excessive business costs, which results in
general failures of the whole crypto process.

In theory, it's a really good thing to
eliminate classes of attack.  But it can
carry a heavy cost, in any practical
implementation.

We are seeing a lot more attention to
opportunistic cryptography, which is a good
thing.  The 90s was the decade of the no-risk
school, and the result was pathetically low
levels of adoption.  In the future, we'll see
a lot more bad designs, and a lot more corners
cut.  This is partly because serious crypto
people - those you call the crypto-elite - have
burnt out their credibility and are rarely
consulted, and partly because it simply costs
too much for projects to put in a complete
and full crypto infrastructure in the early
stages.


 Crypto is not business-critical.  It's the processes it's supposed to be
 protecting that are, and those are the ones that are insured.
 
 Legal and regulatory frameworks define how and where liability can be
 assigned, and that allows insurance companies to factor in stop-loss
 estimates for their exposure.  Without that, everything is a crap
 shoot.
 
 Watching how regulation is evolving right now, we may not see explicit
 liability assignments to software vendors for their vulnerabilities,
 whether for operating systems or for S/MIME email clients.  Those are
 all far too limited in what they could offer, anyway.
 
 What's happening, instead, is that consumers of those products are
 themselves facing regulatory pressure to assure their customers and
 regulators that they're providing adequate systematic security through
 technology as well as business policies, procedures and (ultimately)
 controls (ie, auditable tests for control failures and adequacy).  When
 customers can no longer say gee, we collected all this information, and
 who knew our web server wouldn't keep it from being published on the
 NYTimes classified pages?, then vendors will be compelled to deliver
 pieces of the solution that allow THE CUSTOMER (product

Re: IP2Location.com Releases Database to Identify IP's Geography

2003-12-23 Thread Ian Grigg
Rich Salz wrote:
 
  The IP2Location(TM) database contains more than 2.5 million records for all
  IP addresses. It has over 95 percent matching accuracy at the country
  level. Available at only US$499 per year, the database is available via
  download with free twelve monthly updates.
 
 And since the charge is per-server, not per-query, you could easily
 set up an international free service on a big piece of iron.


These have existed for some time.  Google knows
where they are, although they were a little tough
to find.

iang



Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-26 Thread Ian Grigg
Amir Herzberg wrote:
 
 Ben, Carl and others,
 
 At 18:23 21/12/2003, Carl Ellison wrote:
 
   and it included non-repudiation which is an unachievable,
   nonsense concept.
 
 Any alternative definition or concept to cover what protocol designers
 usually refer to as non-repudiation specifications? For example
 non-repudiation of origin, i.e. the ability of recipient to convince a
 third party that a message was sent (to him) by a particular sender (at
 certain time)?
 
 Or - do you think this is not an important requirement?
 Or what?


I would second this call for some definition!

FWIW, I understand there are two meanings:

   some form of legal inability to deny
   responsibility for an event, and

   cryptographically strong and repeatable
   evidence that a certain piece of data
   was in the presence of a private key at
   some point.

Carl and Ben have rubbished non-repudiation
without defining what they mean, making it
rather difficult to respond.

Now, presumably, they mean the first, in
that it is a rather hard problem to take the
cryptographic property of public keys and
then bootstrap that into some form of property
that reliably stands in court.

But, whilst challenging, it is possible to
achieve legal non-repudiability, depending
on your careful use of assumptions.  Whether
that is a sensible thing or a nice one depends
on the circumstances ... (e.g., the game that
banks play with pin codes).

So, as a point of clarification, are we saying
that non-repudiability is ONLY the first of
the above meanings?  And if so, what do we call
the second?  Or, what is the definition here?

From where I sit, it is better to term these
as legal non-repudiability or cryptographic
non-repudiability so as to reduce confusion.

iang



Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-28 Thread Ian Grigg
Carl Ellison wrote:

  From where I sit, it is better to term these
  as legal non-repudiability or cryptographic
  non-repudiability so as to reduce confusion.
 
 To me, repudiation is the action only of a human being (not of a key) and
 therefore there is no such thing as cryptographic non-repudiability.


Ah.  Now I understand.  The verb is wrong, as it
necessarily implies the act of the human who is
accused of the act.  (And, thus, my claim that it
is possible, was also wrong.)

Whereas the cryptographic property implies no such
thing, and a cryptographic actor can only affirm
or not, not repudiate.  I.e., it's a meaningless
term.


 We
 need a different, more precise term for that -


Would irrefutable be a better term?  Or non-
refutability, if one desires to preserve the N?

The advantage of this verb is that it has no
actor involved, and evidence can be refuted on
its own merits, as it were.

As a test, if one were to replace repudiate
with refute in the ISO definition, would it
then stand?


 and we need to rid our
 literature and conversation of any reference to the former - except to
 strongly discredit it if/when it ever appears again.

I think more is needed.  A better definition is
required, as absence is too easy to ignore.  People
and courts will use what they have available, so it
is necessary to do more; indeed it is necessary to
actively replace that term with another.

Generally, the way the legal people work is to
create simple tests.  Such as:

  A Document was signed by a private key if:

  1. The signature is verifiable by the public key,
  2. the public key is paired with the private key,
  3. the signature is over a cryptographically strong
 message digest,
  4. the Message Digest was over the Document.

Now, this would lead to a definition of irrefutable
evidence.  How such evidence would be used would be
of course dependent on the circumstances;  it then
becomes a further challenge to tie a human's action
to that act / event.
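As an illustration only, the four-step test above might be exercised in code. This is a toy sketch using textbook RSA with tiny, insecure parameters; all numbers, names and the document text are hypothetical, chosen for exposition, and a real system would use a vetted library with padded signatures:

```python
import hashlib

# Toy textbook-RSA key pair (hopelessly small; illustration only).
p, q = 61, 53
n = p * q                 # public modulus, part of the public key
e = 17                    # public exponent
d = pow(e, -1, 780)       # private exponent; 780 = lcm(p-1, q-1)
# Test 2: the public key (e, n) is paired with the private key (d, n)
# by construction above.

def digest(document: bytes) -> int:
    # Tests 3 and 4: a cryptographically strong message digest,
    # computed over the Document (reduced mod n only because n is tiny).
    return int.from_bytes(hashlib.sha256(document).digest(), "big") % n

def sign(document: bytes) -> int:
    return pow(digest(document), d, n)

def verify(document: bytes, signature: int) -> bool:
    # Test 1: the signature is verifiable by the public key.
    return pow(signature, e, n) == digest(document)

doc = b"I agree to pay 100 euros"
sig = sign(doc)
ok = verify(doc, sig)                          # True: all four tests pass
bad = verify(b"I agree to pay 1 euro", sig)    # almost surely False: digest differs
```

Under such a sketch the court-style checklist reduces to mechanical steps; the murkier questions (who controlled the key, with what intent) remain outside the code, which is exactly the point made above.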



iang


PS: Doing a bit of googling, I found the ISO definition
to be something like:

http://lists.w3.org/Archives/Public/w3c-ietf-xmldsig/1999OctDec/0149.html
 ... The ISO
 10181-4 document (called non repudiation Framework) starts with:
 The goal of the non-repudiation service is to collect, maintain,
 make available and validate irrefutable evidence concerning a
 claimed event or action in order to solve disputes about the
 occurrence of the event or action.

But, the actual standard costs money (!?) so it is
not surprising that it is the subject of much
controversy :)



Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-28 Thread Ian Grigg
Ben Laurie wrote:
 
 Ian Grigg wrote:
  Carl and Ben have rubbished non-repudiation
  without defining what they mean, making it
  rather difficult to respond.
 
 I define it quite carefully in my paper, which I pointed to.


Ah.  I did read your paper, but deferred any comment
on it, in part because I didn't understand what its
draft/publication status was.


Ben Laurie said:
 Probably because non-repudiation is a stupid idea:
 http://www.apache-ssl.org/tech-legal.pdf.


You didn't state which of the two definitions
you were rubbishing, so I shall respond to both!



Let's take the first definition - your technical
definition (2.7):

  Non-repudiation, in its technical sense, is a property of a communications
  system such that the system attributes the sending of a message to a person
  if, but only if, he did in fact send it, and records a person as having received
  a message if, but only if, he did in fact receive it. If such systems exist at all,
  they are very rare.

  Non-repudiability is often claimed to be a property of electronic signatures of
  the kind described above. This claim is unintelligible if non-repudiation is
  used in its correct technical sense, and in fact represents an attempt to confer a
  bogus technical respectability on the purely commercial assertion that the owners
  of private keys should be made responsible for their use, whoever in fact uses
  them.

Some comments.

1. This definition seems to be only one of the many
out there [1].  The use of the term correct technical
sense then would be meaningless as well as brave
without some support of references.  Although it does
suffice to ground the use within the paper.

2. The definition is muddied by including the attack
inside the definition.  The attack on the definition would
fit better in section 6, Is non-repudiation a useful
concept?

3. Nothing in either definition 2.7 or the relevant
part of section 6 tells us why the claim is unintelligible.

To find this, we have to go back to Carl's comment
which gets to the nub of the legal and literal meaning
of the term:

To me, repudiation is the action only of a human being (not of a key)...

Repudiate can only be done by a human [2].  A key cannot
repudiate, nor can a system of technical capabilities [3].
(Imagine here, a debate on how to tie the human to the
key.)

That is, it is an agency problem, and unless clearly
cast in those terms, for which there exists a strong
literature, no strong foundation can be made of any
conclusions [4].



4. The discussion resigns itself to being somewhat
dismissive, by leaving open the possibility that
there are alternative possibilities.  There is
a name for this fallacy, stating the general while
showing only the specific, but it escapes me.

In the first para, 2.7, it states that If such systems
exist at all, they are very rare.  Thus, allowing
for existence.  Yet in the second para, one context
is left as unintelligible.  In section 6, again,
most discussions ... are more confusing than helpful.

This hole is created, IMHO, by the absence of Carl's
killer argument in 3. above.  Only once it is possible
to move on from the fallacy embodied in the term
repudiation itself, is it possible to start considering
what is good and useful about the irrefutability (or
otherwise) of a digital signature [5].

I.e., throwing out the bathwater is a fine and regular
thing to do.  Let's now start looking for the baby.



  But, whilst challenging, it is possible to
  achieve legal non-repudiability, depending
  on your careful use of assumptions.  Whether
  that is a sensible thing or a nice depends
  on the circumstances ... (e.g., the game that
  banks play with pin codes).
 
 Actually, it's very easy to achieve legal non-repudiability. You pass a
 law saying that whatever-it-is is non-repudiable. I also cite an example
 of this in my paper (electronic VAT returns are non-repudiable, IIRC).

Which brings us to your second definition, again,
in 2.7:

To lawyers, non-repudiation was not a technical legal term before techies gave
it to them. Legally it refers to a rule which defines circumstances in which a
person is treated for legal purposes as having sent a message, whether in fact
he did or not, or is treated as having received a message, whether in fact he
did or not. Its legal meaning is thus almost exactly the opposite of its technical
meaning.


I am not sure that I'd agree that the legal
fraternity thinks in the terms outlined in the
second sentence.  I'd be surprised if the legal
fraternity said any more than what you are
trying to say is perhaps best seen by these
sorts of rules...

Much of law already duplicates what is implied
above, anyway, which makes one wonder (a) what
is the difference between the above and the
rules of evidence and presumption, etc, etc
and (b) why did the legal fraternity adopt
the techies' term with such abandon that they
didn't bother to define it?

In practice, the process

CIA - the cryptographer's intelligent aid?

2003-12-28 Thread Ian Grigg
Richard Johnson wrote:
 
 On Sun, Dec 21, 2003 at 09:45:54AM -0700, Anne  Lynn Wheeler wrote:
  note, however, when I did reference PAIN as (one possible) security
  taxonomy  i tended to skip over the term non-repudiation and primarily
  made references to privacy, authentication, and integrity.
 
 In my experience, the terminology has more often been confidentiality,
 integrity, and authentication.  Call it CIA if you need an acronym easy
 to memorize, if only due to its ironic similarity with that for the name of
 a certain US government agency. :-)


I would agree that CIA reigns supreme.  It's easy to
remember, and easy to teach.  It covers the basic
crypto techniques, those that we are sure about and
can be crafted simply with primitives.

CIA doesn't overreach itself.  CAIN, by introducing
non-repudiation, brings in a complex multilayer
function that leads people down the wrong track.

PAIN is worse, as it introduces Privacy instead of
Confidentiality.  The former is a higher level term
that implies application requirements, arguably, not
a crypto term at all.  At least with Confidentiality
it is possible to focus on packets and connections
and events as being confidential at some point in
time; but with Privacy, we are launched out of basic
crypto and protocols into the realm of applications.

iang



Repudiating non-repudiation

2003-12-28 Thread Ian Grigg
In response to Ed and Amir,

I have to agree with Carl here and stress that the
issue is not that the definition is bad or whatever,
but the word is simply out of place.  Repudiation is
an act of a human being.  So is the denial of that
or any other act, to take a word from Ed's 1st definition.

We can actually learn a lot more from the legal world
here, in how they solve this dilemma.  Apologies in
advance, as what follows is my untrained understanding,
derived from a legal case I was involved with in
recent years [1].  It is an attempt to show why the
use of the word repudiation will never help us and
will always hinder us.



The (civil) courts resolve disputes.  They do *not*
make contracts right, or tell wrong-doers to do the
right thing, as is commonly thought.

Dispute resolution by definition starts out with a
dispute, of course.  That dispute, for sake of argument,
is generally grounded in a denial, or a repudiation.

One party - a person - repudiates a contract or a
bill or a something.

So, one might think that it would be in the courts'
interest to reduce the number of repudiations.  Quite
the reverse - the courts bend over backwards, sideways,
and tie themselves in knots to permit and encourage
repudiations.  In general, the rule is that anyone
can file *anything* into a court.

The notion of non-repudiation is thus anathema to
the courts.  From a legal point of view, we, the
crypto community, will never make headway if we use
this term [2].  What terms we should use, I suggest
below, but to see that, we need to get the whole
process of the courts in focus.



Courts encourage repudiations so as to encourage
all the claims to get placed in front of the forum
[3].  The full process that is then used to resolve
the dispute is:

   1. filing of claims, a.k.a. pleadings.
   2. presentation of evidence
   3. application of law to the evidence
   4. a reasoned ruling on 1 is delivered based on 2,3

Now, here's where cryptographers have made
mistake that has led us astray.  In the mind of a
cryptographer, a statement is useless if it cannot
be proven beyond a shred of doubt.

The courts don't operate that way - and neither does
real life.  In this, it is the cryptographers that
are the outsiders [4].

What the courts do is to encourage the presentation
of all evidence, even the bad stuff.  (That's what
hearings are, the presentation of evidence.)

Then, the law is applied - and this means that each
piece of evidence is measured and filtered and
rated.  It is mulled over, tested, probed, and
brought into relationship with all the other pieces
of evidence.

Unlike no-risk cryptography, there isn't such a
thing as bad evidence.  There is, instead, strong
evidence and weak evidence.  There is stuff that
is hard to ignore, and stuff that doesn't add
much. But, even the stuff that adds little is not
discriminated against, at least in the early phases.



And this is where the cryptography field can help:
a digital signature, prima facie, is just another
piece of evidence.  In the initial presentation of
evidence, it is neither weak nor strong.

It is certainly not non-repudiable.  What it is
is another input to be processed.  The digsig is
as good as all the others, first off.  Later on,
it might become stronger or weaker, depending.

We, cryptographers, help by assisting in the
process of determining the strength of the
evidence.  We can do it in, I think, three ways:



Firstly, the emphasis should switch from the notion
of non-repudiation to the strength of evidence.  A
digital signature is evidence - our job as crypto
guys is to improve the strength of that evidence,
with an eye to the economic cost of that strength,
of course.

Secondly, any piece of evidence will, we know, be
scrutinised by the courts, and assessed for its
strength.  So, we can help the process of dispute
resolution by clearly laying out the assumptions
and tests that can be applied.  In advance.  In
as accessible a form as we know how.

For example, a simple test might be that a
receipt is signed validly if:

   a. the receipt has a valid hash,
   b. that hash is signed by a private key,
   c. the signature is verified by a public
  key, paired with that private key

Now, as cryptographers, we can see problems,
which we can present as caveats, beyond the
strict statement that the receipt has a valid
signature from the signing key:

   d. the public key has been presented by
  the signing party (person) as valid
  for the purpose of receipts
   e. the signing party has not lost the
  private key
   f. the signature was made based on best
  and honest intents...

That's where it gets murky.  But, the proper
place to deal with these murky issues is in
the courts.  We can't solve those issues in
the code, and we shouldn't try.  What we should
do is instead surface all the assumptions we
make, and list out the areas where further
care is needed.

Thirdly, we can create protocols that bear
in mind the concept of 

Re: digsig - when a MAC or MD is good enough?

2004-01-03 Thread Ian Grigg
John Gilmore wrote:
 
  Sarbanes-Oxley Act in the US.  Section 1102 of that act:
  Whoever corruptly--
 (1) alters, destroys, mutilates, or conceals a
 record, document, or other object, or attempts to
 do so, with the intent to impair the object's
 integrity or availability for use in an official
 proceeding; ...
  shall be fined under this title or imprisoned not
  more than 20 years, or both..
 
 The flaw in this ointment is the intent requirement.  Corporate
 lawyers regularly advise their client companies to shred all
 non-essential records older than, e.g. two years.  The big reason to
 do so is to impair their availability in case of future litigation.
 But if that intent becomes illegal, then the advice will be to shred
 them to reduce clutter or to save storage space.


Battles like that will go on, although you raise an
interesting point - most docs have legal shelf life
limits.

The main observation here is that signatures, once
made, in whatever form, have a power well beyond the
bits that they consume or the paper they cover. This
law and others like it add more power, which in some
imprecise sense stacks up against the MD's recalculability.

Where it becomes interesting is if two parties in a
dispute both retain records.  If this is the case,
then it reduces the chance that someone might fiddle
with them or destroy them, as the other party has the
copies.

I suspect this makes more sense within corporates, or
for b2b scenarios.  For retail and other areas, there
are more complications.


  Can we surmise that a digital record with an MD attached and
  logged would fall within object ?
 
 What's the point of keeping a message digest of a logged item?  If the
 log can be altered, then the message digest can be altered to match.
 (Imagine a sendmail log file, where each line is the same as now, but
 ends with the MD of the line in some gibberish characters...)


The message digest and the record so digested can
travel different paths.  The MDs can be logged, and
the messages can be lost or disposed of.  Or some
such.  As long as the message digests are no longer
in control of a single party, they may be sufficient,
given the weight of the above, to strongly limit any
temptation to tamper with the records.

When it comes to auditing or validating any
records, searching on message digests is very easy.
If the message digest is with the record it covers,
it is a simple matter to quickly grep through mountains
of logs to find the entries.  It allows a positive
comparison to be done very quickly, which means those
that fail are the ones to pay attention to.
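The grep-style check above can be sketched in a few lines of Python. The record contents and names here are hypothetical; any stable digest such as SHA-256 will do:

```python
import hashlib

def digest(record: bytes) -> str:
    # The hex digest is what would be logged separately, outside
    # the control of the party holding the records.
    return hashlib.sha256(record).hexdigest()

def find_mismatches(records, logged_digests):
    # A positive comparison is quick; the entries that fail the
    # comparison are the ones to pay attention to.
    return [i for i, (rec, md) in enumerate(zip(records, logged_digests))
            if digest(rec) != md]

records = [b"alice pays bob 10", b"bob pays carol 5"]
mds = [digest(r) for r in records]
records[1] = b"bob pays carol 500"    # tampered after the digests were logged
print(find_mismatches(records, mds))  # -> [1]
```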

Another technique is to include a cookie in each
record which relates to the state of the log, being
a chained message digest.  If any attempt is made to
adjust a record, it throws out the following cookies.
Still, this is getting us further and further from
the original question - under what grounds could
an MD be considered a sufficient signature for
accuracy purposes?
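The chained-cookie technique can be sketched as follows. This is a minimal illustration, assuming SHA-256 and an arbitrary "genesis" seed, not any particular deployed scheme:

```python
import hashlib

def chain(records):
    # Each record gets a cookie: the digest of the record together
    # with the previous cookie.  Adjusting any record throws out
    # every cookie that follows it.
    cookie = b"genesis"  # arbitrary starting value for the sketch
    out = []
    for rec in records:
        cookie = hashlib.sha256(cookie + rec).digest()
        out.append((rec, cookie.hex()))
    return out

honest   = chain([b"rec-1", b"rec-2", b"rec-3"])
tampered = chain([b"rec-1", b"rec-X", b"rec-3"])
# Cookies diverge at the altered record and never re-converge.
print([h[1] == t[1] for h, t in zip(honest, tampered)])  # -> [True, False, False]
```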


iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: [Fwd: Re: Non-repudiation (was RE: The PAIN mnemonic)]

2004-01-09 Thread Ian Grigg
Ed Gerck wrote:


 Likewise, in a communication process, when repudiation of an act by a party is
 anticipated, some system security designers find it useful to define 
 non-repudiation
 as a service that prevents the effective denial of an act. Thus, lawyers should
 not squirm when we feel the same need they feel -- to provide for processes
 that *can be* conclusive.

The problem with this is that the squirms happen at
many levels.  It seems unlikely that we can provide
for conclusive processes when it comes to mixing
humans and tech and law.  If we try, we end up with
the Ross Anderson scenario - our work being trashed
in front of the courts.

Hence the need for a new framework.  Talk of non-
repudiation has gone to the extent of permitting
law makers to create new presumptions which - I
suggest - aren't going to help anyone.  For example,
the law that Pelle posted recently said one thing to
me:  no sane person wants to be caught dead using
these things:

   Pelle wrote:
   The real meat of the matter is handled in Article 31 (Page 10). Guarantees 
   derived from the acceptance of a Certificate:

The subscriber, at the time of accepting a certificate, guarantees all the
 
people of good faith to be free of fault, and his information contained 
within is correct, and that: 

1. The authenticated electronic company/signature verified by means of this 
certificate, was created under his exclusive control.

2. No person has had access to the procedure of generation of the electronic 
signature.

3. The information contained in the certificate is true and corresponds to 
the provided one by this one to the certification organization.


Is that for real?  Would you recommend that to
your mother?  I wouldn't be embarrassed to predict
that there will be no certificate systems in
Panama that rely upon that law.



I think aiming at conclusivity might be a noble
goal for protocol designers and others lower
down in the stack.  When humans are involved,
the emphasis should switch to reduction in costs:
strength of evidence, fast surfacing of problems,
sharing of information, crafting humans' part in
the protocol.

When I design financial systems, I generally think
in these terms:  what can I do to reduce the cost
and frequency of disputes?  I don't aim for any
sort of conclusivity at any costs, because that
can only be done by setting up assumptions
that are later easily broken by real life.

Instead, I tend to examine the disputes that
might occur and examine their highest costs.
One of the easiest ways to deal with them is
to cause them to occur frequently, and thus
absorb them into the protocol.  For example,
a TCP connection breaks - did the packet get
there or not?  Conclusion: connections cannot
be relied upon.  Protocol response:  use a
datagram + request-reply + replay paradigm,
and lose a lot of connections, deliberately.
Conclusivity is achieved, at the cost of some
efficiency.
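The datagram + request-reply + replay paradigm can be sketched like this. All names are hypothetical; the point is only that replays are harmless when requests carry an id and the server is idempotent:

```python
class Server:
    # Replies are cached by request id, so a replayed request is
    # answered identically instead of being executed twice.
    def __init__(self):
        self.replies = {}

    def handle(self, req_id, payload):
        if req_id not in self.replies:
            self.replies[req_id] = b"done:" + payload  # do the work once
        return self.replies[req_id]

def send_with_retries(server, req_id, payload, drop_first_reply=True):
    # The client simply replays the datagram until a reply survives
    # the (deliberately lossy) connection.
    for attempt in range(3):
        reply = server.handle(req_id, payload)
        if drop_first_reply and attempt == 0:
            continue  # reply lost on the wire; resend
        return reply

s = Server()
print(send_with_retries(s, "tx-42", b"pay"))  # -> b'done:pay'
```

Losing a reply costs a retransmission, not a dispute: the lost-connection case is absorbed into the protocol.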

Another example - did the user sign the message?
We can't show what the user did with the key.
So, make the private key the agent, and give it
the legal standing.  Remove the human from the
loop.  Make lots of keys, and make the system
pseudonymous.  We can conclusively show that
the private key signed the message, and that
agent is to whom our contractual obligations
are directed.

Technical conclusivity is achieved, at the
expense of removing humans.  The dispute that
occurs then is when humans enter the loop
without fully understanding how they have
delegated their rights to their software
agent (a.k.a. private key).  We don't deny
his repudiation; we simply don't accept his
standing - only the key has standing.

Which brings us full circle to Panama :-)
Except, we've done it on our own contract
terms, not on the terms of the legislature,
so we can craft it with appropriate limits
rather than their irrebuttable presumptions.

From this pov, the mistake that CAs make
is to presume one key and one irrebuttable
presumption.  It's a capabilities thing;
there should be a squillion keys, each with
tightly controlled and surfaced rights.


iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


All Internet voting is insecure: report

2004-04-01 Thread Ian Grigg
http://www.theregister.co.uk/content/6/35078.html
http://www.eetimes.com/at/news/OEG20040123S0036

=
All Internet voting is insecure: report
By electricnews.net
Posted: 23/01/2004 at 11:37 GMT


Online voting is fundamentally insecure due to the architecture of the
Internet, according to leading cyber-security experts.

Using a voting system based upon the Internet poses a serious and
unacceptable risk for election fraud and is not secure enough for
something as serious as the election of government officials, according to
the four members of the Security Peer Review Group, an advisory group
formed by the US Department of Defense to evaluate a new on-line voting
system.

The review group's members, and the authors of the damning report, include
David Wagner, Avi Rubin and David Jefferson from the University of
California, Berkeley, Johns Hopkins University and the Lawrence Livermore
National Laboratory, respectively, and Barbara Simons, a computer
scientist and technology policy consultant.

The federally-funded Secure Electronic Registration and Voting Experiment
(SERVE) system is currently slated for use in the US in this year's
primary and general elections. It will allow eligible voters to register
to vote at home and then to vote via the Internet from anywhere in the
world. The first tryout of SERVE is early in February for South Carolina's
presidential primary and its eventual goal is to provide voting services
to all eligible US citizens overseas and to US military personnel and
their dependents, a population estimated at six million.

After studying the prototype system the four researchers said that from
anywhere in the world a hacker could disrupt an election or influence its
outcome by employing any of several common types of cyber-attacks.
Attacks could occur on a large scale and could be launched by anyone from
a disaffected lone individual to a well-financed enemy agency outside the
reach of US law, state the three computer science professors and a former
IBM researcher in the report.

A denial-of-service attack would delay or prevent a voter from casting a
ballot through a Web site. A "man in the middle" or "spoofing" attack
would involve the insertion of a phoney Web page between the voter and the
authentic server to prevent the vote from being counted or to alter the
voter's choice. What is particularly problematic, the authors say, is that
victims of spoofing may never know that their votes were not counted.

A third type of attack involves the use of a virus or other malicious
software on the voter's computer to allow an outside party to monitor or
modify a voter's choices. The malicious software might then erase itself
and never be detected, according to the report.

While acknowledging the difficulties facing absentee voters, the authors
of the security analysis conclude that Internet voting presents far too
many opportunities fo

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Firm invites experts to punch holes in ballot software

2004-04-07 Thread Ian Grigg
Trei, Peter wrote:
Frankly, the whole online-verification step seems like an
unnecessary complication.


It seems to me that the requirement for after-the-vote
verification (to prove your vote was counted) clashes
rather directly with the requirement to protect voters
from coercion (I can't prove I voted in a particular
way.) or other incentives-based attacks.
You can have one, or the other, but not both, right?

It would seem that the former must give way to the latter,
at least in political voting.  I.e., no verification after
the vote.
iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Firm invites experts to punch holes in ballot software

2004-04-09 Thread Ian Grigg
Brian McGroarty wrote:
On Wed, Apr 07, 2004 at 03:42:47PM -0400, Ian Grigg wrote:

It seems to me that the requirement for after-the-vote
verification (to prove your vote was counted) clashes
rather directly with the requirement to protect voters
from coercion (I can't prove I voted in a particular
way.) or other incentives-based attacks.
You can have one, or the other, but not both, right?


Suppose individual ballots weren't usable to verify a vote, but
instead confirming data was distributed across 2-3 future ballot
receipts such that all of them were needed to reconstruct another
ballot's vote.
It would then be possible to verify an election with reasonable
confidence if a large number of ballot receipts were collected, but
individual ballot receipts would be worthless.


If I'm happy to pervert the electoral
process, then I'm quite happy to do it
in busloads.  In fact, this is a common
approach: buses are paid for by a party
candidate, the 1st stop is the polling
booth, the 2nd stop is the party booth.
In the west, this is done with old people's
homes, so I hear.
Now, one could say that we'd distribute
the verifiability over a random set of
pollees, but that would make the verification
impractically expensive.
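For what it's worth, Brian's suggestion resembles n-of-n secret sharing. A minimal XOR-based sketch, purely illustrative and not his actual distribution-across-future-receipts scheme:

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(vote: bytes, n: int = 3):
    # n-of-n XOR sharing: n-1 shares are uniformly random, and the
    # last is chosen so that all n XOR back to the vote.  Any single
    # share (receipt) on its own is indistinguishable from noise.
    shares = [secrets.token_bytes(len(vote)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, vote))
    return shares

def reconstruct(shares):
    return reduce(xor, shares)

receipts = split(b"candidate-A")
assert reconstruct(receipts) == b"candidate-A"
```

The coercion problem above remains: a coercer who collects receipts in busloads reconstructs votes just as well as an auditor.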
iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Financial Cryptography Update: El Qaeda substitution ciphers

2004-04-19 Thread Ian Grigg


( Financial Cryptography Update: El Qaeda substitution ciphers )

 April 19, 2004



http://www.financialcryptography.com/mt/archives/000119.html





The Smoking Gun has an alleged British translation of an El Qaeda
training manual entitled
http://www.thesmokinggun.com/archive/jihadmanual.html _Military Studies
in the Jihad Against the Tyrants_
Lesson 13, http://www.thesmokinggun.com/archive/jihad13chap1.html
_Secret Writing And Ciphers And Codes_ shows the basic coding
techniques that they use.  In short, substitution ciphers, with some
home-grown wrinkles to make it harder for the enemy.
If this were as good as it got, then claims that the terrorists use
advanced cryptography would seem to be exaggerated.  However, it's
difficult to know for sure.  How valid was the book?  Who is given the
book?
This is a basic soldier's manual, and thus includes a basic code that
could be employed in the field, under stress.  From my own military
experience, working out simple encoded messages under battle conditions
(the dark, freezing fingers, a foxhole, and enemy
fire are all impediments to careful coding) can be quite a
fragile process, so not too much should be made of the lack of
sophistication.
Also, bear in mind that your basic soldier has a lot of other things to
worry about and one of the perennial problems is getting them to bother
with letting the command structure know what they are up to.  No
soldier cares what happens at headquarters.  Another factor that might
shock the 90's generation of Internet cryptographers is that your basic
soldiers' codes are often tactical, which means they are only secure
for a day or so.  They are not meant to hide information that would be
stale and known by tomorrow, anyway.
How far this code is employed up the chain of command is the
interesting question.  My guess would be, not far, but there is no
reason for this to be accurate.  When I was a young soldier struggling
with codes, the entire forces used a single basic code with key changes
4 times a day, presumably so that an army grunt could call in support
from a ship off shore or a circling aircraft.  If that grunt lost the
codes, the whole forces structure was compromised, until the codes
rotated outside the lost window (48 hours worth of codes might be
carried at one time).
--
Powered by Movable Type
Version 2.64
http://www.movabletype.org/
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: The future of security

2004-05-08 Thread Ian Grigg
Graeme Burnett wrote:
Hello folks,
I am doing a presentation on the future of security,
which of course includes a component on cryptography.
That will be given at this conference on payments
systems and security: http://www.enhyper.com/paysec/
Would anyone there have any good predictions on how
cryptography is going to unfold in the next few years
or so?  I have my own ideas, but I would love
to see what others see in the crystal ball.

I would see these things, in no particular
order, and no huge thought process applied.
a.  a hype cycle in QC that will peak in a year
or two, then disappear as purchasers realise that
the boxes aren't any different to ones that are
half the price.
b.  much more use of opportunistic cryptography,
whereby crypto systems align their costs against
the risks being faced.  E.g., self-signed certs
and cert caching in SSL systems, caching and
application integration in other systems.
c.  much less emphasis on deductive no-risk
systems (PKIs like x.509 with SSL) due to the
poor security and market results of the CA
model.
d.  more systems being built with basic, simple
home-grown techniques, including ones that are
only mildly secure.  These would be built by
programmers, not cryptoplumbers.  They would
require refits of proper crypto as/if they migrate
into successful user bases.  In project terms,
this is the same as b. above - more use of
opportunistic tactics to secure stuff basically
and quickly.
e.  greater and more frequent costs to browser users
from phishing [1] will eventually result in
mods to the security model to protect users.  In
the meantime, lots of snakeoil security solutions
will be sold to banks.  The day Microsoft decides
to fix the browser security model, phishing will
reduce to just another risk.
f.  the rise of mass crypto in the chat field,
and slow painful demise of email.  This is
because the chat protocols can be updated
within the power of small teams, including
adding simple crypto.  Email will continue to
defy the mass employment of crypto, although
if someone were to add a create self-signed
cert now button, things might improve.
g.  much interest in simple crypto in the p2p
field, especially file sharing, as the need
for protection and privacy increases due to
IP attacks.  All of the techniques will flow
across to other applications that need it less.
h.  almost all press will be in areas where
crypto is sure to make a difference.  Voting,
QC, startups with sexy crypto algorithms, etc.
i.  Cryptographers will continue to be pressed
into service as security architects, because it
sounds like the same thing.  Security architects
will continue to do most of their work with
little or no crypto.
j.  a cryptographic solution for spam and
viruses won't be found.  Nor for DRM.
iang
[1] one phisher took $75,000 from 400 victims:
http://www.financialcryptography.com/mt/archives/000129.html
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Bank transfer via quantum crypto

2004-04-28 Thread Ian Grigg
Ivan Krstic wrote:
I have to agree with Perry on this one: I simply can't see a compelling 
reason for the push currently being given to ridiculously overpriced 
implementations of what started off as a lab toy, and what offers - in 
all seriousness - almost no practical benefits over the proper use of 
conventional techniques.

You are looking at QC from a scientific perspective.
What is happening is not scientific, but business.
There are a few background issues that need to be
brought into focus.
1) The QC business is concentrated in the finance
industry, not national security.  Most of the
fiber runs are within range.  10 miles not 100.
2) Within the finance industry, the security
of links is done largely by using private lines.
Put in a private line, and call it secure because
only the operator can listen in to it.
3) This model has broken down somewhat due to the
rise of open-market net carriers, open colos, etc.
So, even though the mindset of private telco line
is secure is still prevalent, the access to those
lines is much wider than thought.
4) there is eavesdropping going on.  This is clear,
although it is difficult to find confirmable
evidence on it or any stats:
  "Security forces in the US discovered an illegally installed fiber
  eavesdropping device in Verizon's optical network. It was placed at a
  mutual fund company..shortly before the release of their quarterly
  numbers." - Wolf Report, March 2003
(some PDF that google knows about.)  These things
are known as vampire taps.  Anecdotal evidence
suggests that it is widespread, if not exactly
rampant.  That is, there are dozens or maybe hundreds
of people capable of setting up vampire taps.  And,
this would suggest maybe dozens or hundreds of taps
in place.  The vampires are not exactly cooperating
with hard information, of course.
5) What's in it for them?  That part is all too
clear.
The vampire taps are placed on funds managers to
see what they are up to.  When the vulnerabilities
are revealed over the fibre, the attacker can put
in trades that take advantage.  In such a case,
the profit from each single trade might be in the
order of a million (plus or minus a wide range).
6) I have not as yet seen any suggestion that an
*active* attack is taking place on the fibres,
so far, this is simply a listening attack.  The
use of the information happens elsewhere, some
batch of trades gets initiated over other means.
7) Finally, another thing to bear in mind is that
the mutual funds industry is going through what
is likely to be the biggest scandal ever.  Fines
to date are at 1.7bn, and it's only just started.
This is bigger than S&L, and LTCM, but as the
press does not understand it, they have not
presented it as such.  The suggested assumption
to draw from this is that the mutual funds are
*easy* to game, and are being gamed in very many
and various fashions.  A vampire tap is just one
way amongst many that are going on.

So, in the presence of quite open use of open
lines, and in the presence of quite frequent
attacking on mutual funds and the like in order
to game their systems (endemic), the question
has arisen how to secure the lines.
Hence, quantum cryptography.  Cryptographers and
engineers will recognise that this is a pure FUD
play.  But, QC is cool, and only cool sells.  The
business circumstances are ripe for a big cool
play that eases the fears of funds that their
info is being collected with impunity.  It shows
them doing something.
Where we are now is the start of a new hype
cycle.  This is to be expected, as the prior
hype cycle(s) have passed.  PKI has flopped and
is now known in the customer base (finance
industry and government) as a disaster.  But,
these same customers are desperate for solutions,
and as always are vulnerable to a sales pitch.
QC is a technology whose time has come.  Expect
it to get bigger and bigger for several years,
before companies work it out, and it becomes the
same disputed, angry white elephant that PKI is
now.
If anyone is interested in a business idea, now
is the time to start building boxes that do just
like QC but in software at half the price.  And
wait for the bubble to burst.
iang
PS:  Points 1-7 are correct AFAIK.  Conclusions,
beyond those points, are just how I see it, IMHO.
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Mutual Funds - Timestamping

2004-05-25 Thread Ian Grigg
 Original Message 
http://www.financialcryptography.com/mt/archives/000141.html

In a rare example of a useful application of cryptography in real life, the
mutual funds industry is looking to digital timestamping to save its
bacon [1].  Timestamping is one of those oh-so-simple applications of
cryptography that most observers dismiss for its triviality.
Timestamping is simply where an institution offers to construct a hash
or message digest over your document and the current time.  By this,
evidence is created that your document was seen at that time.  There
are a few details as to how to show that the time in one's receipt is
the right one, but this is trivial (meaning we know how to do it, not
that it is cheap to code up) by interlinking a timestamp with the
preceding and following ones.  So without even relying on the
integrity of the institution, we can make strong statements such as
after this other one and before this next one.
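The interlinked receipt can be sketched in a few lines. This assumes SHA-256 and a plain hash; real services add signed receipts and published anchor values, and the names here are hypothetical:

```python
import hashlib
import time

def receipt(doc: bytes, prev: bytes) -> bytes:
    # A receipt commits to the previous receipt, the current time,
    # and the document's digest, so each receipt is provably "after
    # this other one and before this next one".
    t = str(time.time()).encode()
    return hashlib.sha256(prev + t + hashlib.sha256(doc).digest()).digest()

r1 = receipt(b"order: buy 100 units", b"genesis")
r2 = receipt(b"order: sell 50 units", r1)
# r2 depends on r1, so r1's document was demonstrably seen no later
# than r2's - without relying on the integrity of the institution.
```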
The SEC is proposing rule changes to make the 4pm deadline more serious
and proposes USPS timestamping as one way to manage this [2].  There
are several things wrong with the USPS and SEC going into this venture.
 But there are several things right with timestamping in general, to
balance this.  On the whole, given the complicated panoply of
strategic issues outlined earlier, timestamping could be a useful
addition to the mutual funds situation [3].
First what's wrong:  timestamping doesn't need to be regulated or
charged for, as it could easily be offered as a loss leader by any
institution.  A server can run a timestamping service and do 100,000
documents a day without noticing.  If there is any feeling that a
service might not be reliable, use two!  And, handing this commercial
service over to the USPS makes no regulatory sense in a competitive
market, especially when there are many others out there already [4].
Further, timestamping is just a small technical solution.  It shouldn't
need to be regulated at all, as it should be treated in any forum as
evidence.  Either the mutual fund accepts orders with timestamps, or it
doesn't.  If it doesn't, then it is taking a risk of being gamed, and
not having anything to cover it.  An action will now be possible
against it.  If it does only accept timestamped orders, then it's
covered.  Timestamping is better seen as best practices not as
Regulation XXX.
Especially, there are better ways of doing it.  A proper RTGS
(real-time gross settlement) transactional system has better
protections built in by its nature than
timestamping can ever provide, and in fact a regulation requiring
timestamping will interfere with the implementation of proper solutions
(see for example the NSCC solution in [1]).  It will become just
another useless reg that has to be complied with, at cost to all and no
benefit to anyone.
Further, it should be appreciated that timestamping does not solve the
problem (but neither does the NSCC option).  What it allows for is
evidence that orders were received by a certain time.  As explained
elsewhere, putting a late order in is simply one way of gaming the fund
[5].  There are plenty of other ways.
Coming back to where we are now, though, timestamping will allow the
many small pension traders to identify when they got their order in.
One existing gaping loophole is that small operators are manual
processors and can take a long time about what they do.  Hence 4pm was
something that could occur the next day, as agreed by the SEC!  With
timestamping, 4pm could still be permitted to occur tomorrow, as long
as the pension trader has timestamped some key piece of info that
signals the intent.
For this reason, timestamping helps, and it won't hinder if chosen.
The SEC is to be applauded for pushing this forward with a white paper.
 Just as long as they hold short of regulation, and encourage mutual
funds to adopt this on an open, flexible basis as we really don't want
to slow down the real solutions, later on.
--
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


US intelligence exposed as student decodes Iraq memo

2004-05-25 Thread Ian Grigg

 Original Message 
Subject: Financial Cryptography Update: US intelligence exposed as student decodes 
Iraq memo
http://www.financialcryptography.com/mt/archives/000137.html

13 May 2004 DECLAN BUTLER
[http://www.nature.com/nature/].
http://www.nature.com/cgi-taf/DynaPage.taf?file=/nature/journal/v429/n6988/full/429116b_fs.html
(subscription required)
It took less than a week to decipher the blotted-out words.
Armed with little more than an electronic dictionary and text-analysis
software, Claire Whelan, a graduate student in computer science at
Dublin City University in Ireland, has managed to decrypt words that
had been blotted out from declassified documents to protect
intelligence sources.
She and one of her PhD supervisors, David Naccache, a cryptographer
with Gemplus, which manufactures banking and security cards, tackled
two high-profile documents. One was a memo to US President George Bush
that had been declassified in April for an inquiry into the 11
September 2001 terrorist attacks. The other was a US Department of
Defense memo about who helped Iraq to 'militarize' civilian Hughes
helicopters.
It all started when Naccache saw the Bush memo on television over
Easter. "I was bored, and I was looking for challenges for Claire to
solve. She's a wild problem solver, so I thought that with this one I'd
get peace for a week," Naccache says. Whelan produced a solution in
slightly less than that.
"Demasking blotted out words was easy," Naccache told Nature. Optical
recognition easily identified the font type - in this case Arial - and
its size, he says. "Knowing this, you can estimate the size of the
word behind the blot. Then you just take every word in the dictionary
and calculate whether or not, in that font, it is the right size to fit
in the space, plus or minus 3 pixels."
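The dictionary-filtering step can be sketched as below. The per-character pixel widths here are made up for illustration; the real attack measured actual Arial metrics from the scanned document:

```python
# Hypothetical per-character pixel widths standing in for real
# Arial font metrics (narrow and wide letters differ).
WIDTHS = {c: 6 for c in "abcdefghijklmnopqrstuvwxyz"}
WIDTHS.update({"i": 3, "j": 3, "l": 3, "m": 9, "w": 9})

def width(word: str) -> int:
    # Estimated rendered width of the word in pixels.
    return sum(WIDTHS[c] for c in word.lower())

def candidates(dictionary, blot_px: int, tol: int = 3):
    # Keep every word whose rendered width fits the blot +/- tol px.
    return [w for w in dictionary if abs(width(w) - blot_px) <= tol]

words = ["egyptian", "ugandan", "ukrainian", "cat"]
print(candidates(words, width("egyptian")))  # -> ['egyptian', 'ugandan', 'ukrainian']
```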

A computerized dictionary search yielded 1,530 candidates for a blotted
out word in this sentence of the Bush memo: An Egyptian Islamic Jihad
(EIJ) operative told an  service at the same time that Bin
Ladin was planning to exploit the operative's access to the US to mount
a terrorist strike. A grammatical analyser yielded just 346 of these
that would make sense in English.
A cursory human scan of the 346 removed unlikely contenders such as
acetose, leaving just seven possibilities: Ugandan, Ukrainian,
Egyptian, uninvited, incursive, indebted and unofficial. "Egyptian seems
most likely," says Naccache. A similar analysis of the defence
department's memo identified South Korea as the most likely anonymous
supplier of helicopter knowledge to Iraq.
Intelligence experts say the technique is cause for concern, and that
they may think about changing procedures. One expert adds that
rumour-mongering on probable fits might engender as much confusion and
damage as just releasing the full, unadulterated text.
Naccache accepts the criticism that although the technique works
reasonably well on single words, the number of candidates for more than
two or three consecutively blotted out words would severely limit it.
Many declassified documents contain whole paragraphs blotted out.
"That's impossible to tackle," he says, adding that "the most
important conclusion of this work is that censoring text by blotting
out words and re-scanning is not a secure practice."
Naccache and Whelan presented their results at Eurocrypt 2004, a
meeting of security researchers held in Interlaken, Switzerland, in
early May. They did not present at the formal sessions, but at a
Tuesday evening informal 'rump session', where participants discuss
work in progress. "We came away with the prize for the best
rump-session talk - a huge cow-bell," says Naccache.
(c) Nature News Service / Macmillan Magazines Ltd 2004
--
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


SSL secure browsing - attack tree Mindmap

2004-05-25 Thread Ian Grigg
 Original Message 
Subject: Financial Cryptography Update: SSL secure browsing - attack tree Mindmap
http://www.financialcryptography.com/mt/archives/000136.html

Here is a /work in progress/ Mindmap on the threats to the secure
browsing process.
http://iang.org/maps/browser_attack_tree.html
The mindmap purports to be an attack tree, which is a technique to
include and categorise all possible threats to a process.  An attack
tree is one possible aid to constructing a threat model, which latter
is a required step to constructing a security model.  The mindmap
supports another /work in progress/ on threat modelling for secure
browsing at http://iang.org/ssl/browser_threat_model.html for the
Mozilla project.
(The secure browsing security model uses SSL as a protocol and the
Certificate Authority model as the public key authentication regime,
all wrapped up in HTTPS within the browser.  Technically, the protocol
and key regime are separate, but in practice they are joined at the
hip, so any security modelling needs to consider them both.  SSL - the
protocol part - has been widely scrutinised and has evolved to what is
considered a secure form.  In contrast the CA model has been widely
criticised, and has not really evolved since its inception.  It remains
the weak link in security.
As part of a debate on how to address the security issues in secure
browsing and other applications that use SSL/CA such as S/MIME, the
threat model is required before we can improve the security model.
Unfortunately, the original one is not much use, as it was a
theoretical prediction of the MITM that did not come to pass.)
--
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: The future of security

2004-05-26 Thread Ian Grigg
Ben Laurie wrote:
Steven M. Bellovin wrote:

The spammers are playing with other people's money, cycles, etc.  They 
don't care.

We took that into account in the paper. Perhaps you should read it?
http://www.dtc.umn.edu/weis2004/clayton.pdf

(Most of the people on this list are far too
professional and busy to fall for that.  If
the argument has merit, please summarise it.
If it really has merit, the summary might
tease people into reading the full paper.)
I for one don't see it.  I like hashcash as
an idea, but fundamentally, as Steve suggests,
we expect email from anyone, and it's free.
We have to change one of those basic features
to stop spam.  Either make it non-free, or
make it non-authorised.  Hashcash doesn't
achieve either of those, although a similar
system such as a payment based system might
achieve it.
Mind you, I would claim that if we change either
of the two fundamental characteristics of email,
then it is no longer email.  For this reason,
I predict that email will die out (ever so
slowly and painfully) to be replaced by better
and more appropriate forms of chat/IM.
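For reference, the hashcash mechanism under discussion is a partial hash preimage: the sender burns CPU finding a stamp the receiver can check instantly. A minimal sketch, not Adam Back's exact stamp format:

```python
import hashlib
from itertools import count

def mint(resource: str, bits: int = 12) -> str:
    # Search for a counter making the SHA-1 of the stamp start with
    # `bits` zero bits.  Cheap for one message, expensive at spam
    # volumes - which is exactly the disputed claim.
    for c in count():
        stamp = f"1:{bits}:{resource}:{c}"
        h = int.from_bytes(hashlib.sha1(stamp.encode()).digest(), "big")
        if h >> (160 - bits) == 0:
            return stamp

def valid(stamp: str, bits: int = 12) -> bool:
    # Verification is a single hash, regardless of minting cost.
    h = int.from_bytes(hashlib.sha1(stamp.encode()).digest(), "big")
    return h >> (160 - bits) == 0

stamp = mint("recipient@example.com")
assert valid(stamp)
```

Note the asymmetry: minting cost doubles per extra bit, while verification stays constant, so the scheme prices sending without pricing receiving.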
iang
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Yahoo releases internet standard draft for using DNS as public key server

2004-06-01 Thread Ian Grigg
Dave Howe wrote:
Peter Gutmann wrote:
It *is* happening, only it's now called STARTTLS (and if certain vendors
(Micromumblemumble) didn't make it such a pain to set up certs for their MTAs
but simply generated self-signed certs on install and turned it on by default,
it'd be happening even more).
TLS for SMTP is a nice, efficient way to encrypt the channel. However, 
it offers little or no assurance that your mail will *stay* encrypted 
all the way to the recipients.

That's correct.  But, the goal is not to secure
email to the extent that there is no risk, that's
impossible, and arguing that the existence of a
weakness means you shouldn't do it just means that
we should never use crypto at all.
See those slides that Adi Shamir put up, I collected
the 3 useful ones in a recent blog:
http://www.financialcryptography.com/mt/archives/000147.html
I'd print these three out and post them on the wall,
if I had a printer!
The goal is to make it more difficult, within a
tight budget.  Using TLS for SMTP is free.  Why
not do it?
(Well, it's free if self-signed certs are used.
If CA-signed certs are used, I agree, that exceeds
the likely benefit.)
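The "free if self-signed" point is concrete in code: opportunistic STARTTLS needs no CA at all, only a TLS context that encrypts without authenticating the peer. A sketch in Python (host and addresses hypothetical):

```python
import smtplib
import ssl

def opportunistic_context() -> ssl.SSLContext:
    """TLS context that encrypts but does not authenticate the peer,
    so self-signed MTA certs cost nothing extra."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # must be cleared before verify_mode
    ctx.verify_mode = ssl.CERT_NONE  # encryption without authentication
    return ctx

def send(host: str, sender: str, rcpt: str, msg: str, port: int = 25) -> None:
    with smtplib.SMTP(host, port) as smtp:
        smtp.ehlo()
        if smtp.has_extn("starttls"):    # encrypt when offered...
            smtp.starttls(context=opportunistic_context())
            smtp.ehlo()                  # re-EHLO after the upgrade
        smtp.sendmail(sender, [rcpt], msg)   # ...else plaintext fallback
```

This protects only the hop to the smarthost, which is exactly the limitation discussed below.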

Most of us (including me most of the time) are in the position of using 
their ISPs or Employer's smarthost to relay email to its final 
destination; in fact, most employers (and many ISPs) actually enforce 
this, redirecting or blocking port 25 traffic.
If my employer or isp accept TLS traffic from me, but then turn around 
and send that completely unprotected to my final recipient, I have no 
way of preventing or even knowing that.
Sendmail's documentation certainly used to warn this was the case - 
probably still does :)

a) Once a bunch of people send mail via TLS/SMTP,
the ISP is incentivised to look at onward forwarding
it that way.
b) It may be that your local threat is the biggest,
if for example you are using 802.11b to send your
mail.  The threat of listening from the ISP onwards
is relatively small compared to what goes on closer
to the end nodes.
c) every node that starts protecting traffic this
way helps - because it boxes the attacker into
narrower and narrower attacks.  It may be that the
emails are totally open over the backbone, but who
cares if the attacker can't easily get there?
iang


Re: Yahoo releases internet standard draft for using DNS as public key server

2004-06-01 Thread Ian Grigg
Dave Howe wrote:
Ian Grigg wrote:
 Dave Howe wrote:
 TLS for SMTP is a nice, efficient way to encrypt the channel.
 However, it offers little or no assurance that your mail will
 *stay* encrypted all the way to the recipients.
 That's correct. But, the goal is not to secure email to the extent
 that there is no risk, that's impossible, and arguing that the
 existence of a weakness means you shouldn't do it just means that we
 should never use crypto at all.
No - it means you might want to consider a system that guarantees 
end-to-end encryption - not just first link, then maybe if it feels 
like it
That doesn't mean TLS is worthless - on the contrary, it adds an 
additional layer of both user authentication and session encryption that 
are both beneficial - but that *relying* on it to protect your messages 
is overoptimistic at best, dangerous at worst.

This I believe is a bad way to start looking
at cryptography.  There is no system that you
can put in place that you can *rely* upon to
protect your message.
(Adi Shamir again: #1 there are no secure systems,
ergo, it is not possible to rely on them, and
to think about relying will take one down false
paths.)
In general terms, most ordinary users cannot
rely on their platform to be secure.  Even in
specific terms, those of us running BSD systems
on laptops that we have with us all the time
still have to sleep and shower...  There are
people out there who have the technology to
defeat my house alarm, install a custom
key logger designed for my model of laptop,
and get out before the hot water runs out.
For that reason, I and just about everyone
else do not *rely* on tech to keep my message
safe.  If I need to really rely on it, I do what
Adolf Hitler did in November of 1944 - deliver
all the orders for the great breakout by secure
courier, because he suspected the enigma codes
were being read.  (He was right.)
Otherwise, we adopt what military people call
tactical security:  strong enough to keep
the message secure enough so that most of the
time it does the job.
The principle which needs to be hammered time
and time again is that cryptography, like all
other security systems, should be about risk
and return - do what you can and put up with
the things you can't.
Applying the specifics to things like TLS and
mail delivery - yes, it looks very ropey.  Why
for example people think that they need CA-signed
certs for such a thing when (as you point out)
the mail is probably totally unprotected for half
the journey is just totally mysterious.
iang


threat modelling tool by Microsoft?

2004-06-09 Thread Ian Grigg
Has anyone tried out the threat modelling tool
mentioned in the link below, or reviewed the
book out this month:
http://aeble.dyndns.org/blogs/Security/archives/000419.php
The Threat Modeling Tool allows users to create threat
model documents for applications. It organizes relevant
data points, such as entry points, assets, trust levels,
data flow diagrams, threats, threat trees, and vulnerabilities
into an easy-to-use tree-based view. The tool saves the
document as XML, and will export to HTML and MHT using
the included XSLTs, or a custom transform supplied by
the user.
The Threat Modeling Tool was built by Microsoft Security
Software Engineer Frank Swiderski, the author of Threat
Modeling (Microsoft Press, June 2004).
--
iang


Question on the state of the security industry

2004-06-30 Thread Ian Grigg
The phishing thing has now reached the mainstream,
epidemic proportions that were feared and predicted
in this list over the last year or two.  Many of
the solution providers are bailing in with ill-thought-out
tools, presumably in the hope of cashing
in on a buying splurge, and hoping to turn the
result into lucrative cash flows.
In other news, Verisign just bailed in with a
service offering [1].  This is quite cunning,
as they have offered the service primarily as
a spam protection service, with a nod to phishing.
In this way they have something, a toe in the
water, but they avoid the embarrassing questions
about whatever happened to the last security
solution they sold.
Meanwhile, the security field has been deathly
silent.  (I recently had someone from the security
industry authoritatively tell me phishing wasn't
a problem  ... because the local plod said he
couldn't find any!)
Here's my question - is anyone in the security
field of any sort of repute being asked about
phishing, consulted about solutions, contracted
to build?  Anything?
Or, are security professionals as a body being
totally ignored in the first major financial
attack that belongs totally to the Internet?
What I'm thinking of here is Scott's warning of
last year:
  Subject: Re: Maybe It's Snake Oil All the Way Down
  At 08:32 PM 5/31/03 -0400, Scott wrote:
  ...
  When I drill down on the many pontifications made by computer
  security and cryptography experts all I find is given wisdom.  Maybe
  the reason that folks roll their own is because as far as they can see
  that's what everyone does.  Roll your own then whip out your dick and
  start swinging around just like the experts.
I think we have that situation.  For the first
time we are facing a real, difficult security
problem.  And the security experts have shot
their wad.
Comments?
iang
[1] Lynn Wheeler's links below if anyone is interested:
VeriSign Joins The Fight Against Online Fraud
http://www.informationweek.com/story/showArticle.jhtml;jsessionid=25FLNINV0L5DCQSNDBCCKHQ?articleID=22102218
http://www.infoworld.com/article/04/06/28/HNverisignantiphishing_1.html
http://zdnet.com.com/2100-1105_2-5250010.html
http://news.com.com/VeriSign+unveils+e-mail+protection+service/2100-7355_3-5250010.html?part=rsstag=5250010subj=news.7355.5
[2] sorry, the original email I couldn't
find, but here's the snippet, located at:
http://www.mail-archive.com/[EMAIL PROTECTED]/msg01435.html


Re: authentication and authorization

2004-07-03 Thread Ian Grigg
Hi John,
thanks for your reply!
John Denker wrote:
The object of phishing is to perpetrate so-called identity
theft, so I must begin by objecting to that concept on two
different grounds.
1) For starters, identity theft is a misnomer.  My identity
is my identity, and cannot be stolen.
I think I'd echo Lynn's comments - it's the label
in use, so we might as well get used to it.  In
fact, the more I think of it, the more I realise
that a desire to get the right terms in place
might be part of the answer to the original question!
You are right that it's important to separate out
the two cases: the theft of the immediate account
(and money therein) which is more what phishing is,
from the acquisition of identity data in order to
open new places to steal from (credit ... see my
rant on why this is an American issue and
hence may have escaped the rest of the world's attention:
http://www.financialcryptography.com/mt/archives/000146.html
2) Even more importantly, the whole focus on _identity_ is
pernicious.  For the vast majority of cases in which people
claim to want ID, the purpose would be better served by
something else, such as _authorization_.  For example,
when I walk into a seedy bar in a foreign country, they can
reasonably ask for proof that I am authorized to do so,
which in most cases boils down to proof of age.  They do
*not* need proof of my car-driving privileges, they do not
need my real name, they do not need my home address, and
they really, really, don't need some ID number that some
foolish bank might mistake for sufficient authorization to
withdraw large sums of money from my account.  They really,
really, reeeally don't need other information such as what
SCI clearances I hold, what third-country visas I hold, my
medical history, et cetera.  I could cite many additional
colorful examples, but you get the idea:  The more info is
linked to my ID (either by writing it on the ID card or
by linking databases via ID number) the _less_ secure
everything becomes.  Power-hungry governments and power-
hungry corporations desire such linkage, because it makes
me easier to exploit ... but any claim that such linkable
ID is needed for _security_ is diametrically untrue.
Again, I see here an answer to why it is the
security industry is being ignored - all that
above is well and good in theory, but it doesn't
translate as easily to practice.  I mean, as a
hypothetical test - just how do you deliver some
form of privileges system that allows one person
to know my age, and another to know my sex, and
another to know my drinking problems?
That's not really a solved *cheap* problem, is it?
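One (not cheap enough, as argued) way to approach it: the issuer signs each attribute as a separate claim, so the holder can show the bar an age claim without revealing anything else. A toy sketch using an HMAC as a stand-in for a real issuer signature (all names and the key hypothetical):

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"   # stands in for the issuer's signing key

def issue(attributes: dict) -> dict:
    """Sign each attribute separately so the holder can disclose
    one claim without revealing the rest."""
    creds = {}
    for name, value in attributes.items():
        claim = json.dumps({name: value}, sort_keys=True).encode()
        tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
        creds[name] = {"claim": {name: value}, "sig": tag}
    return creds

def check(cred: dict) -> bool:
    """Verifier recomputes the tag over exactly the disclosed claim."""
    claim = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cred["sig"], expected)

wallet = issue({"over_18": True, "sex": "M"})
# The bar sees only the age claim; nothing else leaves the wallet.
print(check(wallet["over_18"]))
```

Even this toy shows the deployment cost: every verifier needs keys, every issuer needs per-attribute signing, which is why identity-as-root-key keeps winning on price.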
So the reality of it is, the predilection for
identity being the root key to all power is the
way society is heading.  I don't like it, but
I'm not in a position to stop the world turning.
===
Returning to:
   For the first
  time we are facing a real, difficult security
  problem.  And the security experts have shot
  their wad.
I think a better description is that banks long ago
deployed a system that was laughably insecure.  (They got
away with it for years ... but that's irrelevant.)  Now
that there is widespread breakage, they act surprised, but
none of this should have come as a surprise to anybody,
expert or otherwise.
I think the security industry must at least
acknowledge their part in this.  For a decade
now we as a field have been telling everyone
that secure browsing with SSL and CA-signed
certs and all that stuff is ... secure.
What was that quote?  The Netscape and Microsoft
Secure E-Commerce System ??
In fact, we're still saying it, and mentally,
about half the field refuses to believe that
the secure browsing security model has been
breached.  The issue runs very deep, and a
lot of sacred cows have to be slaughtered
before this one will be resolved.
I mean, we could just go on ignoring it, but
that might explain why we are being ignored?
Now banks and their customers are paying the price.  As
soon as the price to the banks gets a little higher, they
will deploy a more-secure payment authorization scheme,
and the problem will go away.
Well, it is true, in a sense, that as the problem
gets more expensive, there is more incentive to
fix it.  So far the banks have fiddled at the
edges with server based stuff.  But that can't
help them much.  About the only thing that can
help them directly is if they lock out other IP
numbers but that's a difficult one.
The issue is one for the client side to solve.
The user is the one who is being enticed with
the dodgy link.  So it's one of these three
agents:  user, mailer, browser.
(Note that I didn't say ID scheme.  I don't care who
knows my SSN and other ID numbers ... so long as they
cannot use them to steal stuff.  And as soon as there
is no value in knowing ID numbers, people will stop
phishing for them.)
I think if we re-characterise phishing as the
part of identity theft where accounts are stolen
directly, we might have more of an acceptable
compromise on 

Re: Question on the state of the security industry

2004-07-04 Thread Ian Grigg
[EMAIL PROTECTED] wrote:
I shared the gist of the question with a leader
of the Anti-Phishing Working Group, Peter Cassidy.
Thanks Dan, and thanks Peter,
...
I think we have that situation.  For the first
time we are facing a real, difficult security
problem.  And the security experts have shot
their wad. 
--- Part One
(just addressing Part one in this email)
I think the reason that, to date, the security community has
been largely silent on phishing is that this sort of attack was
considered a confidence scheme that was only potent against
dim-wits - and we all know how sympathetic the IT
security/cryptography community is to those with less than
powerful intellects.

OK.  It could well be that the community has an
inbuilt bias against protecting those that aren't
able to protect themselves.  If so, this would be
cognitive dissonance on a community scale:  in this
case, SSL, CAs, browsers are all set up to meet
the goal of totally secure by default.
Yet, we know there aren't any secure systems, this
is Adi Shamir's 1st law.
http://www.financialcryptography.com/mt/archives/000147.html
Ignoring attacks on dimwits is one way to meet that
goal, comfortably.
But, let's go back to the goal.  Why has it been
set?  Because it's been widely recognised and assumed
that the user is not capable of dealing with their own
security.  In fact, in its lifetime over the last decade,
browsers have migrated from a ternary security rating
presented to the user (to wit, the old 40-bit crypto
security) to a binary security rating, confirming
the basic principle that users don't know and don't
care, and thus the secure browsing model has to do
all the security for the user.  Further, they've been
protected from the infamous half-way house of self-
signed certs, presumably because they are too dim-
witted to recognise when they need less or more
security against the evil and pervasive MITM.
http://www.iang.org/ssl/mallory_wolf.html
Who is thus a dimwit.  And, in order to bring it
together with Adi's 1st law, we ignore attacks
on dimwits (or in more technical terms, we assume
that those attacks are outside the security model).
(A further piece of evidence for this is a recent
policy debate conducted by Frank Hecker of Mozilla,
which confirmed that the default build and root
list for distribution of Mozilla is designed for
users who could not make security choices for
themselves.)
So, I think you're right.
 Also, it is true, it was considered a
 sub-set of SPAM.
And?  If we characterise phishing as a sub-set
of spam, does this mean we simply pass the buck
to anti-spam vendors?  Or is this just another
way of cataloging the problem in a convenient
box so we can ignore it?
(Not that I'm disagreeing with the observation,
just curious as to where it leads...)

The reliance on broadcast spam as a vehicle for consumer data
recruitment is remaining but the payload is changing and, I
think, in that advance is room for important contributions by
the IT security/cryptography community. In a classic phishing
scenario, the mark gets a bogus e-mail, believes it and
surrenders his consumer data and then gets a big surprise on his
next bank statement. What is emerging is the use of spam to
spread trojans to plant key-loggers to intercept consumer data
or, in the future, to silently mine it from the consumer's PC.
Some of this malware is surprisingly clever. One of the APWG
committeemen has been watching the development of trojans that
arrive as seemingly random blobs of ASCII that decrypt
themselves with a one-time key embedded in the message - they
all go singing straight past anti-virus.
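Why such blobs sail past signature-based anti-virus can be demonstrated benignly: with a fresh one-time key per message, the same payload never produces the same bytes twice, so there is no stable byte-signature to scan for. A harmless illustration (hypothetical XOR wrapping, not the actual trojan scheme):

```python
import base64
import os

def encode(payload: bytes) -> bytes:
    """Wrap the payload with a fresh one-time XOR key; every message
    comes out as different 'seemingly random ASCII'."""
    key = os.urandom(len(payload))
    body = bytes(p ^ k for p, k in zip(payload, key))
    return base64.b64encode(key + body)

def decode(blob: bytes) -> bytes:
    """The message decodes itself with the key embedded alongside it."""
    raw = base64.b64decode(blob)
    key, body = raw[: len(raw) // 2], raw[len(raw) // 2 :]
    return bytes(b ^ k for b, k in zip(body, key))

msg = b"harmless demo payload"
a, b = encode(msg), encode(msg)
print(a != b, decode(a) == msg)   # different bytes, same payload
```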
This is actually much more serious, and I've
noticed that the media has picked up on this,
but the security community remains
characteristically silent.
What is happening now is that we are getting
much more complex attacks - and viruses are
being deployed for commercial theft rather
than spyware - information theft - or ego
proofs.  This feels like the nightmare
scenario, but I suppose it's ok because it
only happens to dimwits?
(On another note, as this is a cryptography
list, I'd encourage Peter and Dan to report
on the nature of the crypto used in the
trojans!)
Since phishing, when successful, can return real money the
approaches will become ever more sophisticated, relying far less
on deception and more on subterfuge.
I agree this is to be expected.  Once a
revenue stream is earned, we can expect that
money to be invested back into areas that
are fruitful.  So we can expect much more
and more complex and difficult attacks.
I.e., it's only just starting.

--- Part Two

iang


Re: authentication and authorization

2004-07-07 Thread Ian Grigg
John Denker wrote:
[identity theft v. phishing?]
That's true but unhelpful.  In a typical dictionary you will
find that words such as
Identity theft is a fairly well established
definition / crime.  Last I heard it was the
number one complaint at the US FTC.
Leaving that aside, the reason that phishing
is lumped in there is that it is *like* id
theft, rather than being id theft.  Just like
as many have pointed out that phishing is
*like* spam, and now we are dealing with the
fact that it is not spam.
...
But I don't approve of the rest of his paragraph:
  So the reality of it is, the predilection for
  identity being the root key to all power is the
  way society is heading. I don't like it, but
  I'm not in a position to stop the world turning.

First of all, not everything is heading the wrong way.
The Apache server has for eons had privilege separation
features.  The openssh daemon acquired such features
recently.  As far as I can see, the trend (in the open
software world at least) is in the right direction.
You are quoting a couple of obscure Internet
systems as evidence that society isn't moving
in the direction I indicated?
Yet, every day the papers are filled with the
progress the government is making on moving to
an identity-based system of control and commerce.
National drivers licences, foreigners being hit
with biometrics, etc etc.  Next time I cross the
borders, I probably have to be fingerprinted.
How many banks are introducing these obscure
features?  How many know what a capability is?
How to do a transactional security system, rather
than an identity system?
My claim seems unweakened as yet...

I don't know whether to laugh or cry when I think about how
phishing works, e.g.
http://www.esmartcorp.com/Hacker%20Articles/ar_Watch%20a%20hacker%20work%20the%20system.htm 

The so-called ID is doing all sorts of things it shouldn't
and not doing the things it should.  The attacker has to
prove he knows my home address, but does not have to prove
he is physically at that address (or any other physical place)
... so he doesn't risk arrest.
Curious - now that's a different phishing, but I
suppose it is close enough.  Need to think about
that one, I wouldn't call it phishing, just yet.
I'd call it invoice fraud, at first blush.
What I'd call phishing is this - mass mailings
to people about their bank accounts, collection
of the data, and then using the account details
to wire money out.
I guess we need some phishing experts to tell us
the real full definition.
Earlier Ian G. wrote:
  the security experts have shot their wad.

It doesn't even take a security expert to figure out easy
ways of making the current system less ridiculous.
It's not at issue whether you can or you can't -
what I was asserting is that no-one is asking you
(or me or anyone else).  Instead, cartels are being
formed, solutions being sold, congressmen lobbied,
etc, etc, and the real issues are being unaddressed.
...
which is consistent with what I've been saying.  I don't
think people have tried and failed to solve the phishing
problem --- au contraire, I think they've hardly tried.
I agree with that.
[1Gbux]
If the industry devoted even a fraction of that sum to
anti-scam activities, they could greatly reduce the losses.
Yes, but it won't.  This is the question - why not?
Here's the question:
http://www.financialcryptography.com/mt/archives/000169.html
And here's *an* answer:
http://www.financialcryptography.com/mt/archives/000174.html
I've been to the Anti-Phishing Working Group site, e.g.
  http://www.antiphishing.org/resources.html
They have nice charts on the amount of phishing observed
as a function of time.  But I haven't been able to find
any hard information about what they are actually doing
to address the problem.  The email forwarded by Dan Geer
was similarly vaporous.
I'm afraid I agree.  The purpose seems to be to
create a cartel, suck in some fees, and ... do
some stuff.  As the fees base ensures that only
corporations join, only those with solutions to
sell have an incentive to join.  So in a while
you'll see that they have a list of preferred
solutions.  None of which will address the
problem, but they'll sure make you feel safe
from the size of the price tag.
Here's an interesting link, describing the application of
actual cryptology to the problem:
  http://news.zdnet.co.uk/0,39020330,39159671,00.htm
IMHO it's at a remarkable place in the price/performance
space:  neither the cheapest quickdirty solution, nor the
ultimate high performance solution.  At least it refutes
the assertion about security experts' wads having been
shot.  This is one of the first signs I've seen that real
security experts have even set foot in this theater of
operations, let alone shot anything.
That's a standard solution in mainland Europe
for accessing online accounts.
I'm not sure how it addresses phishing (of the
sort that I know) as the MITM just sits in the
middle and passes the query and response back
and forth, no?
Those tokens 

The Ricardian Contract - using mundane cryptography to achieve powerful governance

2004-07-08 Thread Ian Grigg

 Original Message 
Subject: Financial Cryptography Update: The Ricardian Contract
Date: Wed, 7 Jul 2004 11:17:46 +0100
From: [EMAIL PROTECTED]
( Financial Cryptography Update: The Ricardian Contract )
 July 07, 2004

http://www.financialcryptography.com/mt/archives/000175.html


Presented yesterday at the IEEE's first Workshop on Electronic
Contracting, a new paper entitled The Ricardian Contract covers the
background and essential structure of Systemics' innovation in digital
contracts.  It is with much sadness that I am writing this blog instead
of presenting, but also with much gladness that Mark Miller, of E and
capabilities fame, was able to step in at only a few hours notice.
http://iang.org/papers/ricardian_contract.html
That which I invented (with help from Gary Howland, my co-architect of
the Ricardo system for secure assets transfer) was a fairly mundane
document, digitised mundanely, and wrapped in some equally mundane
crypto.  If anything, it's a wonderful example of how to use very basic
crypto and software tools in a very basic fashion to achieve something
much bigger than its parts.
In fact, we thought it so basic that we ignored it, thinking that
people will just copy it.  But, no-one else did, so nearly a decade
after the fact, I've finally admitted defeat and gone back to
documenting why the concept was so important.
The Ricardian Contract worked to the extent that when people got it,
they got it big.  In a religious sense, which meant that its audience
was those who'd already issued, and intuitively felt the need.  Hasan
coined the phrase that the contract is the keystone of issuance, and
now Mark points out that a major element of the innovation was in the
bringing together of the requirements from the real business across to
the tech.
They are both right.  Much stuff didn't make it into the paper - it had
hit 20 pages by the time I was told I was allowed 8.  Slashing
mercilessly reduced it, but I had to drop the requirements section,
something I now regret.
Mark's comment on business requirements matches the central message of
FC7 - that financial cryptography is a cross-discipline game.  Hide
yourself in your small box, at your peril.  But, no person can
appreciate all the apposite components within FC7, so we are forced to
build powerful, cross-discipline tools that ease that burden.  The
Ricardian Contract is one such - a tool for bringing the technical
world and the legal world together in issuance of robust financial
value.


Re: EZ Pass and the fast lane ....

2004-07-09 Thread Ian Grigg
Date: Fri, 2 Jul 2004 21:34:20 -0400
From: Dave Emery [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: EZ Pass and the fast lane 

No mention is made of encryption or challenge response
authentication but I guess that may or may not be part of the design
(one would think it had better be, as picking off the ESN should be duck
soup with suitable gear if not encrypted).
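The challenge-response authentication Dave mentions would close exactly that hole: the reader sends a fresh nonce and the transponder answers with a MAC over it, so a sniffed exchange is useless later. A sketch, assuming a per-transponder shared key (all names hypothetical):

```python
import hashlib
import hmac
import os

SHARED_KEY = b"per-transponder-secret"   # provisioned at issue time

def tag_respond(challenge: bytes) -> bytes:
    """The transponder never transmits its secret, only a MAC of the
    reader's fresh challenge."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def reader_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = os.urandom(16)            # fresh per read, so replay fails
resp = tag_respond(nonce)
print(reader_verify(nonce, resp))
```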
From a business perspective, it makes no
sense to spend any money on crypto for this
application.  If it is free, sure use it,
but if not, then worry about the 0.01% of
users who fiddle the system later on.
It would be relatively easy to catch someone
doing this - just cross-correlate with other
information (address of home and work) and
then photograph the car at the on-ramp.
If the end result isn't as shown through
other means, then you have the evidence.
One high profile court case later, and the
chances of anyone copying this to escape
a toll fare shrink into the ignorable.
iang


Mark Shuttleworth On Open Source

2004-07-09 Thread Ian Grigg
Security Theatre:  From the man who made hundreds of
millions selling signatures on your keys:
--
It is your data, why do you have to pay a licence
fee for the application needed to access the data?
-- Mark Shuttleworth
http://www.tectonic.co.za/default.php?action=viewid=309topic=Open%20Source
http://www.go-opensource.org/



Re: EZ Pass and the fast lane ....

2004-07-09 Thread Ian Grigg
John Gilmore wrote:
It would be relatively easy to catch someone
doing this - just cross-correlate with other
information (address of home and work) and
then photograph the car at the on-ramp.

Am I missing something?
It seems to me that EZ Pass spoofing should become as popular as
cellphone cloning, until they change the protocol.  You pick up a
tracking number by listening to other peoples' transmissions, then
impersonate them once so that their account gets charged for your toll
(or so that it looks like their car is traveling down a monitored
stretch of road).  It should be easy to automate picking up dozens or
hundreds of tracking numbers while just driving around; and this can
foil both track-the-whole-populace surveillance, AND toll collection.
Miscreants would appear to be other cars; tracking them would not
be feasible.
Well, I am presuming that ... the EZ Pass
does have an account number, right?  And
then, the car does have a licence plate?
So, just correlate the account numbers
with the licence plates as they go through
the gates.
The thing about phones is that they have
no licence plates and no toll gates.  Oh,
and no cars.
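The cross-correlation described above is a one-liner over the gate logs: flag any account whose transponder shows up attached to more than one plate. A sketch with hypothetical log data:

```python
from collections import defaultdict

# Hypothetical gate log: one (transponder_account, licence_plate)
# pair per read at the toll gates.
reads = [
    ("acct-42", "ABC123"),
    ("acct-42", "ABC123"),
    ("acct-42", "ZZZ999"),   # cloned transponder on a different car
    ("acct-07", "DEF456"),
]

def suspicious_accounts(reads):
    """Flag accounts whose transponder ID appears on more than one
    licence plate - candidates for the photograph-and-prosecute step."""
    plates = defaultdict(set)
    for acct, plate in reads:
        plates[acct].add(plate)
    return {acct for acct, seen in plates.items() if len(seen) > 1}

print(suspicious_accounts(reads))   # {'acct-42'}
```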
The rewriteable parts of the chip (for recording the entry gate to
charge variable tolls) would also allow one miscreant to reprogram the
transponders on hundreds or thousands of cars, mischarging them when
they exit.  Of course, the miscreant's misprogrammed transponder would
just look like one of the innocents who got munged.
What incentive does a miscreant have to
reprogram hundreds or thousands of other
cars???
[I believe, by the way, that the EZ Pass system works just like many
other chip-sized RFID systems.  It seems like a good student project
to build some totally reprogrammable RFID chips that will respond to a
ping with any info statically or dynamically programmed into them by
the owner.  That would allow these hypotheses to be experimentally tested.]
Phones are great for spoofing because the
value can be high.  And, the risk of being
physically apprehended is low.  Cars and
toll ways are a different matter.
iang


Re: EZ Pass and the fast lane ....

2004-07-10 Thread Ian Grigg
John Gilmore wrote:
[By the way, [EMAIL PROTECTED] is being left out of this conversation,
 by his own configuration, because his site censors all emails from me.  --gnu]
Sourceforge was doing that to me today!
Well, I am presuming that ... the EZ Pass does have an account
number, right?  And then, the car does have a licence plate?  So,
just correlate the account numbers with the licence plates as they
go through the gates.

If they could read the license plates reliably, then they wouldn't
need the EZ Pass at all.  They can't.  It takes human effort, which is
in short supply.
No, that is to confuse the collecting of tolls
with the catching of defrauders.  Consider one
to be the automatic turnstile and the other to
be the ticket inspector.  One records the tolls,
the other looks for error conditions.
The thing about phones is that they have no licence plates and no
toll gates.  Oh, and no cars.

Actually, cellphones DO have other identifying information in them,
akin to license plates.  And their toll gates are cell sites.
Yes, but so ineffective.  I can pass through the
toll gate - the cell site - and nobody can see
where I am.  I can make a call, and nobody can read
my location without doing complicated tracking stuff
with many cells.  The day that the cops get their
dream of cell phones being able to signal location,
that might change, but in the meantime, a cell phone
is for most purposes unlocatable.
Another factor is that the reward is very different,
one can save a lot more on a cellphone than a toll
way trip.
It's not clear what your remark about phones having no cars has to do
with the issue of whether EZ Pass is likely to be widely spoofed.
Sorry, yes:  if I catch a fraudster with a cell
phone, I can haul him down the station and seize
his phone.  BFD, it was probably stolen anyway.
If I catch a EZ Passter I can seize his car.
What incentive does a miscreant have to reprogram hundreds or
thousands of other cars???

(1) Same one they have for releasing viruses or breaking into
thousands of networked systems.  Because they can; it's a fun way to
learn.  Like John Draper calling the adjacent phone booth via
operators in seven countries.  (2) The miscreant gets a cheap toll
along with hundreds of other people who get altered tolls.
OK, so run this past me again.  I get to send a
virus or whatever that causes EZ Pass to go down
or mis-bill thousands of their customers, and I
also have to drive down the freeway and drive
through their toll gates, in order to collect my
prize of ... a free ride on the toll way?
[Cory Doctorow's latest novel (Eastern Standard Tribe, available free
online, or in bookstores) hypothesizes MP3-trading networks among
moving cars, swapping automatically with whoever they pass near enough
for a short range WiFi connection.  Sounds plausible to me; there are
already MP3 players with built-in short range FM transmitters, so
nearby cars can hear your current selection.  Extending that to faster
WiFi transfers based on listening preferences would just require a
simple matter of software.  An iPod built by a non-DRM company might
well offer such a firmware option -- at least in countries where
networking is not a crime.  Much of the music I have is freely
tradeable.]
All of which is irrelevant.  The MP3s you are trading
do not generate a transaction request, being fraudulent
or otherwise, do not hit a server that has details on
who you are, and are probably encrypted so nobody can
tell what it is you are doing, thus forcing the cops
(IP terrorists being your #3 priority) to pull the car
to a halt and search for contraband music.
The only questions here are:  do the EZ Pass people have
your licence plate and your EZ Pass account number?  Do
they have the budget to employ some students with cameras?
Do they have the ability to target people who should be
travelling A - D but keep getting billed from B - C?
And, do the drivers who decide to defraud the EZ Pass
system have the ability to avoid 2 points, being any 2
of A, B, C, D?
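The billing-vs-camera cross-check sketched in those questions is simple to express. The data layout and names below are invented for illustration; a real system would work from gate logs and plate-recognition records:

```python
# Hypothetical sketch: compare the gates where an account was billed
# against the gates where cameras photographed the plate registered
# to that account.  All data here is invented.

def flag_mismatches(billing, sightings):
    """billing:   {account: set of gates where tolls were billed}
       sightings: {account: set of gates where the plate was photographed}
       Returns accounts whose plate was seen at gates that never billed."""
    suspects = {}
    for account, seen_at in sightings.items():
        billed_at = billing.get(account, set())
        unexplained = seen_at - billed_at
        if unexplained:
            suspects[account] = unexplained
    return suspects

billing = {"acct-1": {"A", "D"}, "acct-2": {"B", "C"}}
sightings = {"acct-1": {"A", "D"}, "acct-2": {"A", "B", "C", "D"}}

# acct-2 was photographed at A and D but only ever billed for B - C:
# exactly the "travelling A - D but billed B - C" pattern above.
print(flag_mismatches(billing, sightings))
```

A handful of students with cameras at two gates would feed the `sightings` side; the toll records already supply the `billing` side.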
iang


Re: Using crypto against Phishing, Spoofing and Spamming...

2004-07-11 Thread Ian Grigg
Florian Weimer wrote:
There are simply too many of them, and not all of them implement
checks for conflicts.  I'm pretty sure I could legally register
Metzdowd in Germany for, say, restaurant service.
This indeed is the crux of the weakness of the
SSL/secure browsing/CA system.  The concept
called for "all CAs are equal", which is an
assumption that is easily shown to be nonsense.
Until that assumption is reversed, the secure
browsing application is ... insecure.  (I of
course include no CA and self-signed certs
within the set of all CAs.)
The essence of any fixes in the browsers should
be to address the (rather fruitful) diversity
amongst CAs, and help the user to make choices
amongst the brands of same.
Some CAs are more equal than others... and the
sooner a browser recognises this, the better.
These bodies could issue logo certificates.

These certificates would only have value if there is extensive
verification.  We probably lack the technology to do that cheaply
right now, and the necessary level of international cooperation.
I'm not sure I understand how logo certs would
work, as there is still the possibility of same
being issued by CA-Nigeria and having remarkable
similarity to those issued by USPTO.
Until the CA is surfaced and thrust in the face
of the user, each browser's 100 or so root CAs
will be a fundamental weakness.  Including of
course the absence of CA, which is something
that is nicely hidden from the user.
iang


Jabber does Simple Crypto - Yoo Hoo!

2004-07-12 Thread Ian Grigg
(( Financial Cryptography Update: Jabber does Simple Crypto - Yoo Hoo! ))
 July 12, 2004

http://www.financialcryptography.com/mt/archives/000176.html

Over in the Jabber community, the long-awaited arrival of opportunistic,
ad hoc cryptography has spawned a really simple protocol to use OpenPGP
messages over chat.  It's so simple, you can see everything you want in
this piece of XML (click above).
http://www.jabber.org/jeps/jep-0027.html


New Attack on Secure Browsing

2004-07-15 Thread Ian Grigg
(( Financial Cryptography Update: New Attack on Secure Browsing ))
 July 15, 2004

http://www.financialcryptography.com/mt/archives/000179.html


Congratulations go to PGP Inc - who was it, guys, don't be shy this
time? - for discovering a new way to futz with secure browsing.
Click on http://www.pgp.com/ and you will see an SSL-protected page
with that cute little padlock next to domain name.  And they managed
that over HTTP, as well!  (This may not be seen in IE version 5 which
doesn't load the padlock unless you add it to favourites, or some
such.)
Whoops!  That padlock is in the wrong place, but who's going to notice?
 It looks pretty bona fide to me, and you know, for half the browsers I
use, I often can't find the darn thing anyway.  This is so good, I just
had to add one to my SSL page (http://iang.org/ssl/ ).  I feel so much
safer now, and it's cheaper than the ones that those snake oil vendors
sell :-)
What does this mean?  It's a bit of a laugh, is all, maybe.  But it
could fool some users, and as Mozilla Foundation recently stated, the
goal is to protect those that don't know how to protect themselves.  Us
techies may laugh, but we'll be laughing on the other side when some
phisher tricks users with the little favicon.
It all puts more pressure on the oh-so-long overdue project to bring
the secure back into secure browsing.  Microsoft have befuddled the
already next-to-invisible security model even further with their
favicon invention, and getting it back under control should really be a
priority.
Putting the CA logo on the chrome now seems inspired - clearly the
padlock is useless.  See countless rants [1] listing the 4 steps needed
and also a new draft paper from Amir Herzberg and Ahmad Gbara [2]
exploring the use of logos on the chrome.
[1] SSL considered harmful
http://iang.org/ssl/
[2]  Protecting (even) Naïve Web Users,
or: Preventing Spoofing and Establishing Credentials of Web Sites
http://www.cs.biu.ac.il/~herzbea/Papers/ecommerce/spoofing.htm


Re: Humorous anti-SSL PR

2004-07-15 Thread Ian Grigg
J Harper wrote:
This barely deserves mention, but is worth it for the humor:
Information Security Expert says SSL (Secure Socket Layer) is Nothing More
Than a Condom that Just Protects the Pipe
http://www.prweb.com/releases/2004/7/prweb141248.htm
I guess the intention was to provide more end-to-end
security for transaction data.  After a reasonable start,
if a bit scattered, it breaks down with this:
What we can be certain of is that it is not possible
to have a man-in-the-middle attack with FormsAssurity ...
encryption ensures that the form has really come from
the claimed web site, the form has not been altered,
and the only person that can read the information
filled in on the form is the authorized site.
Which is quite inconsistent - so much so that it seems
that the press release writer got confused over which
system he or she was talking about.
iang


Re: New Attack on Secure Browsing

2004-07-16 Thread Ian Grigg
Aram,
It's now pretty clear that PGP had no clue what this was
all about.  Apologies to all, that was my mistake.  Also,
to clarify, there was no SSL involved.
What we are looking at is a case of being able to put a
padlock on the browser in a place that *could* be confused
by a user.  This is an unintended consequence of the
favicon design by Microsoft.
Now, another thing becomes clearer, from your report and
others:  Microsoft implemented the display of the favicon
only as accepted / chosen by the user.  You have to add
this site as a favourite.
Other browsers - the competitors - went further and
displayed the favicon on arrival at the site.  I guess
they felt that it could be more useful than Microsoft
had intended.  But, in this case, it seems that they
may have stumbled on something that goes too far.
What will save them in this case is that the numbers of
users of such non-Microsoft browsers are relatively small.
If the tables were turned, and it was Microsoft that was
vulnerable, I'd confidently predict that we would see
some attempted exploits of this in the next month's
phishing traffic.
iang
Aram Perez wrote:
Hi Ian,

Congratulations go to PGP Inc - who was it, guys, don't be shy this
time? - for discovering a new way to futz with secure browsing.
Click on http://www.pgp.com/ and you will see an SSL-protected page
with that cute little padlock next to domain name.  And they managed
that over HTTP, as well!  (This may not be seen in IE version 5 which
doesn't load the padlock unless you add it to favourites, or some
such.)

Here what I saw when going to the PGP site:
Windows XP Pro:
IE 6.x: No padlock
Firefox 0.9.2:  Padlock on address bar and tab
Mac OS 10.2.8:
IE 5.2: No padlock
Safari 1.0.2:   Padlock on address bar but not on tab
Firefox 0.8: Padlock on address bar and tab
Camino 0.7: Padlock on address bar and tab
You stated that http://www.pgp.com is an SSL-protected page, but did you
mean https://www.pgp.com? On my Powerbook, with all the browsers I get an
error that the certificate is wrong and they end up at http://www.pgp.com.
I'm not sure if PGP deliberately set out to confuse naïve users since their
logo has been the padlock for a while. Many web sites have their logo
displayed on the address bar (and tab) when you go to their site, see
http://www.yahoo.com or http://www.google.com. Maybe Jon can answer the
question.


Re: New Attack on Secure Browsing

2004-07-16 Thread Ian Grigg
Anton Stiglic wrote:
You stated that http://www.pgp.com is an SSL-protected page, but did you
mean https://www.pgp.com? On my Powerbook, with all the browsers I get an
error that the certificate is wrong and they end up at http://www.pgp.com.

What I get is a bad certificate, and this is due to the fact that the
certificate is issued to store.pgp.com and not www.pgp.com.
Interestingly (maybe?), when you go and browse on their on-line store, and
check something out to buy, the session is secured but with another
certificate, one issued to secure.pgpstore.com.
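The mismatch Anton describes is mechanical: the browser compares the name in the certificate against the host the user asked for, and warns on any difference. A simplified sketch of that check follows; it is not a full RFC 2818 implementation, and the wildcard rule is deliberately naive:

```python
# Simplified server-name check, as a browser would apply it.
# store.pgp.com != www.pgp.com, hence the warning dialogs reported.

def name_matches(cert_name: str, host: str) -> bool:
    if cert_name.startswith("*."):
        # A wildcard covers exactly one left-most label (simplified).
        return "." in host and host.split(".", 1)[1] == cert_name[2:]
    return cert_name == host

print(name_matches("store.pgp.com", "www.pgp.com"))  # False -> warning
print(name_matches("*.pgp.com", "www.pgp.com"))      # True
```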

Just to clarify, there is no SSL cert involved - or
there shouldn't be?!  My original post was pointing
out that it is possible to fool users by putting a
favicon padlock in place.  This seems to work only
on non-IE browsers, as these are the ones that went
further and display the favicon without further
user intervention.
If users can be so fooled, then they can be encouraged
to enter their details as if they are logging into the
site (not PGP but say e*Trade).  Hey presto, stolen
authentication, and stolen money.
I didn't expect so much confusion on this point, but
if indeed that wasn't obvious so much the better:
that was the issue, that people could be easily
confused!
iang


Re: Using crypto against Phishing, Spoofing and Spamming...

2004-07-17 Thread Ian Grigg
At 10:46 AM 7/10/2004, Florian Weimer wrote:
But is it so harmful?  How much money is lost in a typical phishing
attack against a large US bank, or PayPal?  (I mean direct losses due
to partially rolled back transactions, not indirect losses because of
bad press or customer feeling insecure.)
I estimated phishing losses about a month ago at about
a GigaBuck.
http://www.financialcryptography.com/mt/archives/000159.html
You'll also see two other numbers in that blog entry,
being $5 billion and $400 million (the latter taken
from Lynn's posted articles).
Of course these figures are very delicate, so we need
to wait a bit to get the real damage with any degree
of reliability.  Scientific skepticism should abound.
Notwithstanding that, I would suggest that the money
already lost is in excess of the amount paid out to
Certificate Authorities for secure ecommerce certificates
(somewhere around $100 million I guess) to date.  As
predicted, the CA-signed certificate missed the mark,
secure browsing is not secure, and the continued
resistance against revision of the browser's useless
padlock display is the barrier to addressing phishing.
iang


Re: Using crypto against Phishing, Spoofing and Spamming...

2004-07-18 Thread Ian Grigg
Eric Rescorla wrote:
Ian Grigg [EMAIL PROTECTED] writes:
Notwithstanding that, I would suggest that the money
already lost is in excess of the amount paid out to
Certificate Authorities for secure ecommerce certificates
(somewhere around $100 million I guess) to date.  As
predicted, the CA-signed certificate missed the mark,
secure browsing is not secure, and the continued
resistance against revision of the browser's useless
padlock display is the barrier to addressing phishing.

I don't accept this argument at all.
There are at least three potential kinds of attack here:
(1) Completely passive capture attacks.
(2) Semi-active attacks that don't involve screwing with
the network infrastructure (standard phishing attacks)
By (2) I guess you mean a bypass MITM?
(3) Active attacks on the network infrastructure.
By (3) I guess you mean a protocol level MITM.
Then, there is:
(4) Active attacks against the client.  By this I mean
hacking the client, installing a virus, malware,
spyware or whathaveyou.  (This is now real, folks.)
(5) Active attacks against the server.  Basically,
hacking the server and stealing all the good stuff.
(This has always been real, ever since there have
been servers.)
(6), (7) Insider attacks against client, server.
Just read off the data and misuse it.  (This has
been real since the dawn of time...)
Of course, SSL/SB doesn't protect against any of these,
and many people therefore assume the thinking stops
there.  Sadly, no.  Even though SSL doesn't protect
against these attacks, the frequency and cost of these
attacks directly impacts on the design choices of
secure browsing.
SSL does a fine job of protecting against (1) and a fairly adequate
job of protecting against (3). Certainly you could do a better job
against (3) if either:
(a) You could directly connect to sites with SSL a la
https://www.expedia.com/
(b) The identities were more user-friendly as we anticipated back in
the days of S-HTTP rather than being domain names, as required by
SSL. 

It does a lousy job of protecting against (3).
Sorry, I'm having trouble parsing "fairly adequate"
versus "lousy job" for threat (3)...  Both (a) and (b)
seem to deserve some examples?  I can connect directly
to expedia, and https://www.paypal.com/ is friendly
enough?
(Hmmm... I tell a lie, there is no https://www.expedia.com/
as it redirects.)
Now, my threat model mostly includes (1),  does not really include
(3), and I'm careful not to do things that leave me susceptible
to (2), so SSL does in fact protect against the attacks in my
threat model. I know a number of other people with similar threat
models. Accordingly, I think the claim that secure browsing
is not secure rather overstates the case.
(1) OK.  Now, granted, SSL protects against (1), fairly
finely.  It does so in all its guises, although the
CA-signed variant in secure browsing does so at some
additional unneeded expense, as it eliminates certain
secure options, namely self-signed certs (SSCs) and
anonymous Diffie-Hellman (ADH).  OTOH, this is a
really rare attack - actual damage from sniffing HTTP
traffic doesn't seem to be recorded anywhere as a real
attack on people, so forgive me if I downgrade this one
as almost not a threat.
(2) Then we come to (2), what I'd call a bypass MITM.  Or
a phish or a spoof.   (I'm not sure what semi active
and infrastructure have to do with it.)  This one is
certainly a threat.
When the browser is presented with a URL which happens
to purport only to be some secure site, without really
being that site, this is a spoof.  Your defence is to
be careful against this attack.  So, your defence is
nothing to do with SSL or secure browsing or anything really,
literally, (2) is unprotected against by SSL and secure
browsing in all their guises.  You yourself provide the
protection, because SSL / secure browsing does not.  Of
course.
That is my point - secure browsing does not protect
against any real  present threat.
(3)  I don't understand at all.  But you suggest that
it's not your threat and it isn't protected well against.

In summary - we are left with one attack that is well
protected against, but isn't really seen that much,
and could be done with ADH.  Then, another attack that
you deal with yourself, so that's not really relevant
because you're smart and experienced, and those using
browsers on the average are not, and they are hit by
the attack.  Then there is (3).
(And we haven't even begun on (4) thru (7).  What then,
is a threat model that only includes some threats?)
So in sum, I think my argument remains unchallenged:
secure browsing fails to secure.
iang


Re: On `SSL considered harmful`, correct use of condoms and SSL abuse

2004-07-18 Thread Ian Grigg
Amir Herzberg wrote:
(Amir, I replied to your other comments over on the
Mozilla security forum, which is presumably where they
will be more useful.  That just leaves this:)
So while `SSL is harmful` sounds sexy, I think it is misleading. Maybe 
`Stop SSL-Abuse!`
Ha!  I wondered when someone would take me to task over
that title :-)
Here's the thing:  the title comes from a seminal paper
called Gotos considered harmful [1]  This was a highly
controversial paper in the 70s or so that in no small
part helped the development of structured programming.
What the author of that paper was trying to say was not
that the Goto was bad, but its use was substantially
related to poor programming practice.
And that's the point I'm making.  The Goto is just a
tool like any other.  But, the Goto became a tool over-
deployed and widely abused, as its early and liberal
use by a programmer took no account of later maintenance
costs that were incurred by the owner of the code.  So
the Goto became synonymous with bad programming and
excessive costs.
The same situation exists with SSL/TLS.  As a protocol,
it's a fine tool.  It's strong, it's well reviewed, and
it has corrected its deficiencies over time.
But, it also comes with a wider security model.  For
starters, the CA-signed regime.  As well as that, it
comes with a variety of other baggage, which basically
amounts to use SSL/TLS as it is recommended and you
will be secure.
Unfortunately, this is wrong, and the result is bad
security practice.  Yet, we do have a generation of
people out there believing that because they have put
huge amounts of effort into implementing SSL with
its certs regime that they are secure.
We can see this ludicrous situation with the email
and chat variants of SSL / cert protected traffic.
In those cases the result is the same:  If one
suggests that the correct approach is for them to
use SSCs (self signed certs) or equivalent, people
go all weak and wobbly at the knees and start ranting
on about how those are insecure.
Yet these same systems are totally open to attacks
at the nodes and often to the intermediate hops,
which of course is where 99% of the attacks are [2].
These programmers truly believe that in order to
get security, they must deploy SSL.  As the manual
tells them to.  They are truly wrong.  In this,
SSL has harmed them, because it has blinded them
to the real risks that they are facing.
It's not the tool that has hurt them, but as you
suggest the abuse of the tool.  Edsger Dijkstra
called for the abolition of Gotos as the way to
address the harm he saw being done.  That solution
may offend, as the tool itself cannot have harmed.
But, how else can we stop people deploying the tool
so abusively?
iang
[1] Edsger W. Dijkstra, Go To Statement Considered Harmful,
http://www.acm.org/classics/oct95/
[2] Jabber's use of SSL seems to mirror STARTTLS.
They both protect the traffic on the wire, but not
at rest on the hops.  The certificate system built
into mailers (name?) at least organises an end-to-end
packet protection, thus leaving the two end nodes
as the places at most risk, still by far the most
likely place to be attacked.


Re: Using crypto against Phishing, Spoofing and Spamming...

2004-07-18 Thread Ian Grigg
Enzo Michelangeli wrote:
Can someone explain me how the phishermen escape identification and
prosecution? Gaining online access to someone's account allows, at most,
to execute wire transfers to other bank accounts: but in these days
anonymous accounts are not exactly easy to get in any country, and anyway
any bank large enough to be part of the SWIFT network would cooperate in
the resolution of obviously criminal cases.
In practice something like this:  Most of the
money is wired through to some stolen account,
and then moved out of the system to another system.
This might be a foreign account, or it might be a non-
bank such as a broker/dealer (E*Trade is being hit at
the moment, it seems) or it might be a digital gold
currency.  From there, it is moved once or twice more,
then back to the country where the phisher is.  This
might be the US or Russia, or anywhere else, but those
two countries seem to be quite big on this (maybe we
should blame Reagan :-) )
A couple of things:  it is very hard, but not impossible
to reverse a SWIFT style international wire.  I've seen
it done once, so I know it is not impossible.  If the
cash has gone, then reversing it doesn't make sense.
Also, phishing
isn't exactly a recognised and obvious criminal case.
Any particular instance might be, but getting to that
determination might take months.  Further, opening
accounts for anonymous purposes is still rather easy
in many countries, the chief perpetrator of this being
the USA.  Finally, every attempt to make money less like
money (by closing off easy accounts, for example) results
in what some call unintended consequences - the money
goes elsewhere.
iang


Re: Using crypto against Phishing, Spoofing and Spamming...

2004-07-21 Thread Ian Grigg
Steve,
thanks for addressing the issues with some actual
anecdotal evidence.  The conclusions still don't
hold, IMHO.
Steven M. Bellovin wrote:
In message [EMAIL PROTECTED], Ian Grigg writes:

Right...  It's easy to claim that it went away
because we protected against it.  Unfortunately,
that's just a claim - there is no evidence of
that.
This is why I ask whether there has been any
evidence of MITMs, and listening attacks.  We
know for example that there were password
sniffing attacks back in the old days, by
hackers.  Hence SSH.  Costs - Solution.
But, there is precious little to suggest that
credit cards would be sniffed - I've heard one
isolated and unconfirmable case.  And, there is
similar levels of MITM evidence - anecdotes and
some experiences in other fields, as reported
here on this list.

I think that Eric is 100% correct here: it doesn't happen because it's 
a low-probability attack, because most sites do use SSL.
The trick is to show cause and effect.  We know the
effect and we know the cause(s).  The question is, how
are they related?  The reason it is important is that
we may misapply one cause if the effect results from
some other cause.
I think that people are forgetting just how serious the password 
capture attacks were in 1993-94.  The eavesdropping machines were on 
backbones of major ISPs; a *lot* of passwords were captured. 
Which led to SSH, presumably, and was pre-credit card
days, so can only be used as a prediction of eavesdropping.
Question - are we facing a situation today whereby it is
easy to eavesdrop from the backbone of a major ISP and
capture a lot of traffic?  As far as I can see, that's
not likely to happen, but it could happen.
Secondly, who were the people doing those attacks?  Back
in 93-94, I'd postulate they weren't criminal types, but
hacker types.  That is, they were hackers looking for
machines.  Those people are still around - defeated by
SSH in large measure - and use other techniques now.
(Hackers had no liability in those days.  Criminals do
have liability, and are more concerned to cover their
tracks.  This makes active attacks less useful to them.
Criminals are getting braver though.)
Thirdly, why aren't we seeing more reports of this on
802.11b networks?  I've seen a few, but in each case,
the attack has been to hack into some machine.  I've
yet to see a case where listeners have scarfed up some
free email account passwords, although I suppose that
this must happen.
The point of all this is that we need to establish how
frequent and risky these things are.  Back in the pre-
commerce days, a certain amount of FUD was to be expected.
Now however, it's been a decade - whether that FUD was
warranted then is an issue for the historians, but now
we should be able to scientifically make a case that
the posture matches the threats.  Because it's been a
decade (almost).
As far as I can see, there is *some* justification for
expecting eavesdropping attacks to credit cards.  There
is a lot more justification with unprotected non-commerce.
And in contrast, there is little justification for
expecting active attacks for purposes of theft.

What this leads to is not whether SSL should have been
deployed or changed in its current form (it is fruitless
to debate that, IMHO, except in order to lay down the
facts) but a discussion of certificates.
There seems some justification for suggesting that SSL
be (and continue to be) deployed in any form.  Mostly, IMHO,
in areas outside commerce, and mostly, in the future,
not now.
There seems a lot of justification for utilising certs as
they enable relationship-protection.  There seems quite a
bit of justification for utilising CA-signed certs because
they permit more advanced relationship protection such as
Amir's logo ideas and my branding ideas, and more so every
day.
What there doesn't appear to be any justification for is
the effective or defacto mandating of CA-signed certs.
And there appears to be a quite serious cost involved in
that mandating - the loss of protection from the resultant
*very* low levels of SSL deployment.
This all hangs on the MITM - hence the question of frequency.
It seems to be very low, an extraordinarily desperate attack
for a criminal, especially in the light of experience.  He
does phishing and hacking with ease, but he doesn't like
leaving tracks in the infrastructure that point back to him.
If the MITM cannot be justified as an ever-present danger,
then there is no justification for the defacto mandating of
CA-signed certs.  Permitting and encouraging self-signed
certs would then make deployment of SSL much easier, and
thus increase use of SSL - in my view, dramatically -
which would lead to much better protection.  (Primarily
by relationship management on the client side, and also
by branding/logo management with the CAs, but that needs
to be enabled in code first at the browsers.)
(It has to be said that encouraging anon-diffie-hellman
SSL would also lead to dramatically improved levels of
SSL

Re: dual-use digital signature [EMAIL PROTECTED]

2004-07-28 Thread Ian Grigg
Peter Gutmann wrote:
A depressing number of CAs generate the private key themselves and mail out to
the client.  This is another type of PoP, the CA knows the client has the
private key because they've generated it for them.
It's also cost-effective.  The CA model as presented
is too expensive.  If a group makes the decision to
utilise the infrastructure for signing or encryption,
then it can significantly reduce costs by rolling out
from the centre.
I see this choice as smart.  They either don't do it
at all, or they do it cheaply.  This way they have a
benefit.
(Then, there is still the option for upgrading to self-
created keys later on, if the project proves successful,
and the need can be shown.)
As a landmark, I received my first ever correctly
signed x.509 message the other day.  I've yet to find
the button on my mailer to generate a cert, so I could
not send a signed reply.  Another landmark for the
future, of course.
iang


Re: public-key: the wrong model for email?

2004-09-16 Thread Ian Grigg
Adam Shostack wrote:
Given our failure to deploy PKC in any meaningful way*, I think that
systems like Voltage, and the new PGP Universal are great.
I think the consensus from debate back last year on
this group when Voltage first surfaced was that it
didn't do anything that couldn't be done with PGP,
and added more risks to boot.  So, yet another biz
idea with some hand-wavy crypto, which is great if
it works, but it's not necessarily security.
* I don't see Verisign's web server tax as meaningful; they accept no
liability, and numerous companies foist you off to unrelated domains.
We could get roughly the same security level from fully opportunistic
or memory-opportunistic models.
Yes, or worse;  it turns out that Verisign may very
well be the threat as well as the solution.  As I
wrote here:
http://www.financialcryptography.com/mt/archives/000206.html
Verisign are in the eavesdropping business, which
not only calls into doubt their own certs, but also
all other CAs, and the notion of a trusted third
party as a workable concept.
iang


Re: public-key: the wrong model for email?

2004-09-17 Thread Ian Grigg
lrk wrote:
Perhaps it is time to define an e-mail definition of crypto to keep the
postman from reading the postcards. That should be easy enough to
implement for the average user and provide some degree of privacy for
their mail. Call it envelopes rather than crypto. Real security 
requires more than a Windoz program.
Oh, that's really easy.  Each mailer (MUA) should (on
install) generate a self-signed cert.  Stick the fingerprint
in the headers of every mail going out.  An MUA that sees
the fingerprint in an incoming mail can send a request email
to acquire the full key.  Or stick the entire cert in there,
it's not as if anyone would care.
Then each MUA can start encrypting to that key opportunistically.
Lots of variations.  But the key thing is that the MUA
should simply generate the key, sign it, and send it out
on demand, or more frequently.  There's really no reason
why this can't all be automated.  After all, the existing
email system is automated, and trusted well enough to
deliver email, so why can't it deliver self-signed certs?
iang


Re: [anonsec] Re: potential new IETF WG on anonymous IPSec (fwd from [EMAIL PROTECTED]) (fwd from [EMAIL PROTECTED])

2004-09-19 Thread Ian Grigg
Hadmut Danisch wrote:
On Thu, Sep 16, 2004 at 12:41:41AM +0100, Ian Grigg wrote:
It occurs to me that a number of these ideas could
be written up over time ... a wiki, anyone?  I think
it is high time to start documenting crypto
patterns.

Wikis are not that good for discussions, and I do believe
that this requires some discussion.
I'd propose a separate mailing list for that.
It possibly requires both.  A mailing list by itself
tends to generate great thoughts that never get
finished up and turned into summaries.  Also, those in charge
tend to slow the process, just through being too busy.
(I'm not talking about just this list, I've noticed
the effect on RFC lists where the editor wakes up after
a week and skips all the debate and starts again.)
A wiki working with a mailing list might address both
those issues.
(It's just a guess, I've never really worked with a
Wiki, just read some entries over at wikipedia.)
iang


Re: AES Modes

2004-10-11 Thread Ian Grigg
Zooko provided a bunch of useful comments in private mail,
which I've edited and forwarded for list consumption.
Zooko Wilcox-O'Hearn wrote:
EAX is in the same class as CCM.  I think it's slightly better.  Also 
there is GCM mode, which is perhaps a tiny bit faster, although maybe 
not if you have to re-key every datagram.  Not sure about the 
key-agility of these.

... I guess the IPv6 sec project has already specified such a thing in 
detail.  I'm not familiar with their solution.

If you really want interop and wide adoption, then the obvious thing to 
do is backport IPsec to IPv4.  Nobody can resist the authority of IETF!

Alternately, if you don't use a combined mode like EAX, then you 
should follow the generic composition cookbook from Bellare and 
Rogaway [1, 2].
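For readers who haven't seen the cookbook, the composition it recommends, encrypt-then-MAC, can be sketched stdlib-only; the SHA-256 counter-mode keystream below is a toy stand-in for a real cipher, purely to show the structure:

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream from SHA-256; stands in for a real
    # block cipher purely to illustrate the composition.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # Encrypt-then-MAC: encrypt first, then MAC the nonce and ciphertext.
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct + tag

def unseal(enc_key: bytes, mac_key: bytes, nonce: bytes, sealed: bytes) -> bytes:
    ct, tag = sealed[:-32], sealed[-32:]
    # Verify the MAC before touching the ciphertext.
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("bad tag")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

enc_key, mac_key = secrets.token_bytes(32), secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
boxed = seal(enc_key, mac_key, nonce, b"attack at dawn")
assert unseal(enc_key, mac_key, nonce, boxed) == b"attack at dawn"
```

Note the two independent keys: reusing one key for both encryption and MAC is exactly the kind of mistake the generic composition papers warn against.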

Next time I do something like this for fun, I'll abandon AES entirely 
(whee!  how exciting) and try Helix [3].  Also, I printed out this 
intriguing document yesterday [4].  Haven't read it yet.  It focusses on 
higher-layer stuff -- freshness and sequencing.

Feel free to post to metzcrypt and give me credit for bringing the 
following four URLs to your attention.

[1] http://www.cs.ucdavis.edu/~rogaway/ocb/ocb-back.htm#alternatives
[2] http://www.cs.ucsd.edu/users/mihir/papers/oem.html
[3] http://citeseer.ist.psu.edu/561058.html
[4] http://citeseer.ist.psu.edu/661955.html


