Re: Hashing algorithm needed

2010-09-14 Thread Ian G

On 14/09/10 2:26 PM, Marsh Ray wrote:

On 09/13/2010 07:24 PM, Ian G wrote:



1. In your initial account creation / login, trigger a creation of a
client certificate in the browser.


There may be a way to get a browser to generate a cert or CSR, but I
don't know it. But you can simply generate it at the server side.


Just to be frank, I'm also not sure what the implementation details 
are here.  I've somewhat avoided the implementation until it becomes 
useful.


Marsh's notes +1 from me.

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

2010-08-26 Thread Ian G

On 25/08/10 11:04 PM, Richard Salz wrote:

A really knowledgeable net-head told me the other day that the problem
with SSL/TLS is that it has too many round-trips.  In fact, the RTT costs
are now more prohibitive than the crypto costs.  I was quite surprised to
hear this; he was stunned to find it out.


Yes, it is inherent in the design assumptions of the early 1990s.  At 
the time, the idea was to secure HTTP, which was (is) a request-response 
protocol layered over TCP.  Now, some of the design features that the 
designers settled on were:


+ ignore HTTP and secure TCP
+ make SSL look just like TCP
+ third-party authority authentication
+ no client-side caching of certs

And those features they delivered reasonably well.

However, if they had dug a bit deeper at the time (unlikely, really 
unlikely) they would have discovered that the core HTTP protocol is 
request-response, which means it is two packets, one for request and one 
for response.


Layering HTTP over TCP was a simplification, because just about everyone 
does that, and still does it for whatever reason.  However it was a 
simplification that ultimately caused a lot more cost than they 
realised, because it led to further layering, and further unreliability.


The original assumptions can be challenged.  If one goes to pure 
request-response, then the whole lot can be done over datagrams (UDP). 
Once that is done properly, the protocol can move to a 4-packet startup, 
then a cached 2-packet mode.  The improvement in reliability is a gift.
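The startup arithmetic can be sketched as a toy in-memory simulation 
(nothing here is from any real protocol; the message names and the 
"key" handling are invented for illustration):

```python
# Count datagrams for first-contact vs cached startup in a toy
# request-response protocol.  A first contact needs a key-setup
# round plus the request-response pair (4 datagrams); a cached
# session needs only the pair (2 datagrams).

def exchange(client_cache: dict, server_keys: dict) -> int:
    """Run one request-response; return the number of datagrams sent."""
    packets = 0
    if "session_key" not in client_cache:
        packets += 1                       # -> hello, client's key half
        server_keys["session_key"] = "k"   # server derives the session key
        packets += 1                       # <- hello-ack, server's key half
        client_cache["session_key"] = "k"  # client caches it for next time
    packets += 1                           # -> encrypted request
    packets += 1                           # <- encrypted response
    return packets

cache, server = {}, {}
first = exchange(cache, server)   # first contact: 4 packets
second = exchange(cache, server)  # cached key: 2 packets
```

The point is only that the cached path carries no setup overhead at 
all, which is where the reliability and latency win comes from.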


This is possible, but you have to think outside the box, discard the 
obsession of layering and the mindtrap of reliable TCP.  I've done it, 
so I know it's possible.  Fast, and reliable, too.  Lynn as well, it 
seems.  James points out the architectural secret, that security has to 
be baked into the app, any security below the app is unreliable.





Look at the tlsnextprotoneg IETF draft, the Google involvement in SPDY,


SPDY only takes the low-hanging fruit, IIRC.  Very cautious, very 
conservative, hardly seems worth the effort to me.



and perhaps this message as a jumping-off point for both:
http://web.archiveorange.com/archive/v/c2Jaqz6aELyC8Ec4SrLY

I was happy to see that the interest is in piggy-backing, not in changing
SSL/TLS.



If you're content with slow, stick with TLS :)  Fast starts with a clean 
sheet of paper.  It is of course a complete rewrite, but IMHO the work 
effort is less than working with layered mistakes of the past.




iang



Re: Five Theses on Security Protocols

2010-08-02 Thread Ian G

On 1/08/10 9:08 PM, Peter Gutmann wrote:

John Levinejo...@iecc.com  writes:


Geotrust, to pick the one I use, has a warranty of $10K on their cheap certs
and $150K on their green bar certs.  Scroll down to the bottom of this page
where it says Protection Plan:

http://www.geotrust.com/resources/repository/legal/

It's not clear to me how much this is worth, since it seems to warrant mostly
that they won't screw up, e.g., leak your private key, and they'll only pay
to the party that bought the certificate, not third parties that might have
relied on it.


A number of CAs provide (very limited) warranty cover, but as you say it's
unclear that this provides any value because it's so locked down that it's
almost impossible to claim on it.


Although distasteful, this is more or less essential.  The problem is 
best seen like this:  take all the potential relying parties for a large 
site / large CA, and multiply that by the damages in a (hypothetical) 
fat-ass class action suit.  Think phishing, or an MD5 crunch, or a 
random Debian code downsizing.


What results is a Very Large Number (tm).

By fairly standard business processes one ends up at the sad but 
inevitable principle:


   the CA sets expected liabilities to zero

And must do so.  Note that there is a difference between expected 
liabilities and liabilities stated in some document.  I use the term 
"expected" in the finance sense (cf. Net Present Value calculations).


In practice, this is what could be called best practices, to the extent 
that I've seen it.


http://www.iang.org/papers/open_audit_lisa.html#rlo says the same thing 
in many many pages, and shows how CAcert does it.




Does anyone know of someone actually
collecting on this?


I've never heard of anyone collecting, but I wish I had (heard).


Could an affected third party sue the cert owner


In theory, yes.  This is expected.  In some sense, the certificate's 
name might be interpreted as suggesting that because the name is 
validated, then you can sue that person.


However, I'd stress that's a theory.  See the above paper for my 
trashing of that, "What's in a Name?", at an individual level.  I'd 
speculate that the problem will be some class action suit, because of 
the enormous costs involved.




who can
then claim against the CA to recover the loss?


If the cause of loss is listed in the documentation . . .


Is there any way that a
relying party can actually make this work, or is the warranty cover more or
less just for show?


We are facing Dan Geer's disambiguation problem:

 The design goal for any security system is that the
 number of failures is small but non-zero, i.e., N > 0.
 If the number of failures is zero, there is no way
 to disambiguate good luck from spending too much.
 Calibration requires differing outcomes.


Maybe money can buy luck ;)



iang



Re: Fwd: Introduction, plus: Open Transactions -- digital cash library

2010-07-29 Thread Ian G

Hi Bob,

On 28/07/10 9:08 PM, R.A. Hettinga wrote:

Anyone out there with a coding.clue wanna poke inside this thing and see if 
it's an actual bearer certificate -- and not yet another book-entry --  
transaction system?


Sorry to get your hopes up ... just reading the words below, not the 
code:  it is basically modelled on the SOX/Ricardo concepts, AFAICS.


As you know, the SOX concept used (PGP) keys to make an account with the 
server/issuer Ivan, a long-term persistent relationship; call the 
account holders Alice and Bob.  DigiCash had something like this too; 
it's essential for application robustness.


The simplest payments metaphor then is a signed instruction to transfer 
from Alice to Bob, which Ivan follows by issuing a signed receipt.  What 
you'd call double entry, but in Ricardo it is distinct enough to deserve 
the moniker triple-entry (not triple-signed, which is something 
different, another possible innovation).
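As a rough illustration of that flow (this is not the SOX/Ricardo or 
Open Transactions code; HMAC stands in for real public-key signatures, 
and every name, key, and amount below is invented):

```python
import hashlib
import hmac
import json

def sign(key: bytes, msg: dict) -> str:
    """Stand-in 'signature': an HMAC over canonical JSON.  A real
    system would use public-key digital signatures."""
    blob = json.dumps(msg, sort_keys=True).encode()
    return hmac.new(key, blob, hashlib.sha1).hexdigest()

# Hypothetical long-term keys established at account creation.
ALICE_KEY, IVAN_KEY = b"alice-secret", b"ivan-secret"

def make_payment() -> dict:
    # Alice signs an instruction to transfer 100 units to Bob.
    instr = {"from": "alice", "to": "bob", "amount": 100}
    return {"instruction": instr, "alice_sig": sign(ALICE_KEY, instr)}

def issue_receipt(payment: dict) -> dict:
    """Ivan verifies Alice's signature, then issues a signed receipt.
    Held by Alice, Bob and Ivan alike, the receipt becomes the shared
    record of the transfer: the 'triple entry'."""
    if payment["alice_sig"] != sign(ALICE_KEY, payment["instruction"]):
        raise ValueError("bad instruction signature")
    return dict(payment, ivan_sig=sign(IVAN_KEY, payment["instruction"]))
```

The receipt dominates the underlying instruction as evidence: all 
three parties hold the same signed record.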


Then, the blinding formula/transaction is simply a replacement for the 
standard payments transaction above:  Alice withdraws a coin from Ivan, 
sends it to Bob, who deposits it with Ivan.


(Ricardo had Wagner too from around 2001, and like this author, had a 
path to add Chaum, with future extension to Brands.  The code for Chaum 
was mostly written, but wasn't factored correctly...)


Another possible clue:  the author has obviously taken on board the 
lessons of the Ricardian Contract form, and put that in there (albeit in 
XML).  I find that very encouraging; even the guys from DigiCash never 
understood that one!  So I'm guessing the author has studied their stuff.


BTW, FTR, I do not know who this is.


Cheers,
RAH
Who sees lucre down there in the mousetype and takes heart...



Lucre was 1-2k lines.  One's heart beats blood into thin air until there 
are another 1-2 orders of magnitude of body parts built on...  This is 
looking much more like that 1-2 orders of magnitude down the track.




iang



Re: Trusted timestamping

2009-10-05 Thread Ian G

On 04/10/2009 23:42, Alex Pankratov wrote:


I guess my main confusion at the moment is why large CAs of
Verisign's size are not offering any standalone timestamping
services.



My view is that there is no demand for this as a service.  The apparent 
need for it is more a paper requirement that came out of the PKI world's 
search for a perfect product than any business need.


E.g., if you think you want it, you might be better rewarded by 
re-examining your assumptions as to why it is needed than by building it...



iang



Re: FileVault on other than home directories on MacOS?

2009-09-23 Thread Ian G

On 22/09/2009 14:57, Darren J Moffat wrote:


There is also a sleep mode issue identified by the NSA:


An extremely minor point, that looks like Jacob and Ralf-Philipp perhaps 
aka nsa.org, rather than the NSA.gov.


Still useful.

iang



Re: Detecting attempts to decrypt with incorrect secret key in OWASP ESAPI

2009-09-18 Thread Ian G

On 17/09/2009 21:42, David Wagner wrote:

Kevin W. Wall wrote:

So given these limited choices, what are the best options to the
questions I posed in my original post yesterday?


Given these choices, I'd suggest that you first encrypt with AES-CBC mode.
Then apply a message authentication code (MAC) to the whole ciphertext
(including the IV).  You then send the ciphertext followed by the MAC digest.

SHA1-HMAC would be a reasonable choice of algorithm for message
authentication.



I have to add a vote of +1 on this selection.  For various reasons, 
today's safe choice seems to be:


  * CBC
  * AES-128
  * HMAC-SHA-1 on the outside of the ciphertext

What is left is padding, so that the message is clearly delimited.  I 
suggest you treat this as a software engineering thing, not a crypto 
thing, and make sure that you have a length in your packet layout so 
that it is totally clear what is the packet and what is not.
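A minimal sketch of such a layout, assuming a 16-byte IV and with the 
AES-CBC step itself elided (the ciphertext input is whatever your 
cipher produced); the names `seal` and `open_packet` are invented:

```python
import hashlib
import hmac
import os
import struct

MAC_LEN = hashlib.sha1().digest_size  # 20 bytes for HMAC-SHA1

def seal(mac_key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    """Build a packet: length || IV || ciphertext || HMAC-SHA1 tag.
    The MAC covers the IV and ciphertext, and sits on the outside."""
    body = iv + ciphertext
    tag = hmac.new(mac_key, body, hashlib.sha1).digest()
    return struct.pack(">I", len(body)) + body + tag

def open_packet(mac_key: bytes, packet: bytes) -> bytes:
    """Check the length field, verify the MAC in constant time,
    then return the ciphertext (still to be decrypted)."""
    (body_len,) = struct.unpack(">I", packet[:4])
    body, tag = packet[4:4 + body_len], packet[4 + body_len:]
    if len(body) != body_len or len(tag) != MAC_LEN:
        raise ValueError("malformed packet")
    expected = hmac.new(mac_key, body, hashlib.sha1).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed")
    return body[16:]  # strip the 16-byte IV
```

The explicit length field means the receiver never has to guess where 
the packet ends, and rejecting a bad MAC before any decryption keeps 
the error handling simple.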


If you want to see such a design exercise, following Dave's 
prescription, have a look at SDP1 which Zooko and I did a few years back.


http://www.webfunds.org/guide/sdp/sdp1.html
http://www.webfunds.org/guide/sdp/

It's a straightforward secret-key encrypted packet layout.  It has one 
novelty in it, which is how it solves the padding / IV issues.  Other 
than that, it should be boring.



iang


PS: you are on the right track in trying to avoid any sensitivity to 
JCE.  As long as you can design your layout without any dependency on 
JCE, it should work.  JCE is basically a schlock design that was put in 
place for market- and crypto-control reasons; it has no place in 
software engineering.  I speak from experience: I managed the Cryptix 
project, which was the first Java crypto engine.




PPS: you haven't said enough about the application (or I missed it) to 
be able to comment on keys.  Generally, try to separate the protocol 
around the key:  every good protocol divides into two parts, the first 
of which says to the second, "trust this key completely".  Software 
engineering ...


http://iang.org/ssl/h2_divide_and_conquer.html



Re: The password-reset paradox

2009-02-23 Thread Ian G

On 19/2/09 14:36, Peter Gutmann wrote:

There are a variety of password cost-estimation surveys floating around that
put the cost of password resets at $100-200 per user per year, depending on
which survey you use (Gartner says so, it must be true).

You can get OTP tokens as little as $5.  Barely anyone uses them.



The two numbers are not comparable.  One is the business cost to a 
company including all the internal, absorbed costs (see Steve's email), 
while the other is the pricelist of the supplier, without internal 
user-company costs.


If we compared each method using the other's methodology, passwords 
would list at $0 per reset, and token recoveries would estimate at 
$105 to $205, plus shipping.




Can anyone explain why, if the cost of password resets is so high, banks and
the like don't want to spend $5 (plus one-off background infrastructure costs
and whatnot) on a token like this?



It is a typical claim of the smart card & token industry that the 
bulk unit cost of their product is an important number.  This is 
possibly because the sellers of such product cannot offer the real 
project work, because they are too product-oriented and/or too small.  So 
they have to sell on something, and push the number.  It is for this 
reason that IBM once ruled the world: they bypassed the whole 
list-price/commodity issue.


As a humorous aside, here's another deceptive sales approach available 
to the token world: the end of something we know, "security as we know 
it" :)


http://www.technologyreview.com/computing/22201/?a=f



iang



Re: SHA-3 Round 1: Buffer Overflows

2009-02-23 Thread Ian G

On 22/2/09 23:09, R.A. Hettinga wrote:

http://blog.fortify.com/blog/fortify/2009/02/20/SHA-3-Round-1



This just emphasizes what we already knew about C, even the most
careful, security conscious developer messes up memory management.



No controversy there.


Some
of you are saying, so what? These are reference implementations and this
is only Round 1. There are a few problems with that thought.
Reference implementations don't disappear, they serve as a starting
point for future implementations or are used directly. A bug in the RSA
reference implementation was responsible for vulnerabilities in OpenSSL
and two separate SSH implementations. They can also be used to design
hardware implementations, using buffer sizes to decide how much silicon
should be used.



It is certainly appreciated that work is put in to improve the 
implementations during the competition (my group did something similar 
for the Java parts of AES, so I know how much work it can be).


However, I think it is not really efficient at this stage to insist on 
secure programming for submission implementations, for the simple 
reason that there are 42 submissions, and 41 of those will be thrown 
away, more or less.  There isn't much point in making the 41 secure; 
better to save the energy until the one is found.  Then 
concentrate the energy, no?




iang



Re: User interface, security, and simplicity

2008-05-04 Thread Ian G

Perry E. Metzger wrote:


It is obvious to anyone using modern IPSec implementations that their
configuration files are a major source of pain. In spite of this, the
designers don't seem to see any problem. The result has been that
people see IPSec as unpleasant and write things like OpenVPN when the
underlying IPSec protocol is just fine and it is the implementations
that are unpleasant.



Kerckhoffs' 6th, providing great entertainment for the 
security world since 1883.


=
6. Finally, it is necessary, given the circumstances that 
command its application, that the system be easy to use, 
requiring neither mental strain nor the knowledge of a long 
series of rules to observe.

=



iang


PS:  Although his 6th is arguably the most important, his 
others are well worth considering:


https://www.financialcryptography.com/mt/archives/000195.html



Re: Cruising the stacks and finding stuff

2008-04-24 Thread Ian G

Allen wrote:

Add Moore's Law, a bigger budget and a more efficient machine, how long 
before AES-128 can be decoded in less than a day?


It does make one ponder.



Wander over to http://keylength.com/ and poke at their 
models.  They have 6 or so to choose from, and they have it 
coded up in the web application so you can get the 
appropriate comparisons.


Each model is reasonably well-founded (some work was put in 
by somebody who knows something) and they've been doing it 
for a few years now.
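For a flavour of what such models do, here is a back-of-envelope on 
Allen's question, with loudly invented inputs (these numbers are not 
from keylength.com or the thread): assume an attacker can try 2^60 AES 
keys per second today, and that capability doubles every 18 months:

```python
import math

# ASSUMED inputs: 2**60 keys/sec today, capability doubling every
# 18 months, brute force only (no algorithmic advances).
keys_per_second_today = 2 ** 60
doubling_period_years = 1.5

target = 2 ** 128                                   # exhaust AES-128
keys_per_day_today = keys_per_second_today * 86_400
doublings_needed = math.log2(target / keys_per_day_today)
years = doublings_needed * doubling_period_years
# -> roughly 77 years under these assumptions
```

Change the starting capability or the doubling period and the answer 
moves by decades, which is exactly why the site offers several models 
to compare.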


iang



Re: TLS-SRP TLS-PSK support in browsers (Re: Dutch Transport Card Broken)

2008-02-10 Thread Ian G

Peter Gutmann wrote:

Victor Duchovni [EMAIL PROTECTED] writes:


While Firefox should ideally be developing and testing PSK now, without
stable libraries to use in servers and browsers, we can't yet expect anything
to be released.


Is that the FF developers' reason for holding back?  Just wondering... why not
release it with TLS-PSK/SRP anyway (particularly with 3.0 being in the beta
stage, it'd be the perfect time to test new features), tested against existing
implementations, then at least it's ready for when server support appears.  At
the moment we seem to be in a catch-22, servers don't support it because
browsers don't, and browsers don't support it because servers don't.



I would say that this would not hold the FF developers back, 
as they were definitely capable of implementing the TLS/SNI 
extension a year or two back, without any support from 
stable libraries in Apache httpd, Microsoft IIS, etc (still 
waiting...).


I'd also suggest that the TLS/SNI (which will apparently 
turn up one day in Apache) will have a much more dramatic 
effect on phishing than TLS-PSK/SRP ... because of the 
economics of course.  Lowering the barriers on all TLS use 
is far more important than making existing TLS use easier.


Of course, this is not a competition, as the effect adds, 
not competes.  The good thing is that we may actually get to 
see the effects of both fixes to TLS rollout at similar 
times.  In economics, it is a truism that we can't run the 
experiment, we have to watch real life, Heisenberg style, 
and this may give us a chance to do that.


Also, we can observe another significant factor in the mix: 
the rollout of virtual machine platforms (Xen and the like) 
has dramatically changed the economics of IP#s, these now 
becoming more of a limiting factor than they were, which 
might also put more pressure on Apache ... to release 
earlier and more often.


iang



Re: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-02-01 Thread Ian G

Eric Rescorla wrote:

(as if anyone uses client certificates anyway)? 

Guess why so few people are using it ...
If it were secure, more people would be able to use it.


No, if it were *convenient* people would use it. I know of absolutely
zero evidence (nor have you presented any) that people choose not
to use certs because of this kind of privacy issue--but I know
of plenty that they find getting certs way too inconvenient.



In a CA I have something to do with, I'm observing a site 
that just started experimenting with client certs (100 
users, will reach 1000, maybe more).


When we discovered that the certificate includes PII 
(personally identifying information) and the website stores 
additional PII, the service was directed to drop all 
additional PII, and some thought was put into the in-cert PII.


Current view is that the service must engage the user in a 
contract to accept the storing of that in-cert PII, 
otherwise it must not store the info in the cert (which 
means no identity, no persistence, and no point to the 
client certs).


Writing contracts and securing agreement of course is a 
barrier, a burden.  If this were a general requirement, then 
this would be enough (imho) to not recommend client certs, 
because contracts need lawyers, they cost real money, they 
don't solve the problem, and not recommending them is 
likewise unacceptable.


(Then, as you say, there are convenience issues.)

This is an experiment to force client certs to be used, so 
they are plugging on.  It's a CA so it is trying to prove 
that there is value in these things.


So... there are two slight variations that could be 
employed.  Firstly, all data placed in the cert could be 
declared public in advance, and then no contract is required 
to use it in a context that is compatible with public data. 
 That is, the question of the contract is pushed to the CA/CPS.


(You mentioned that the premise is that it is all public 
data...)


Another variation is to switch to username + password, of 
course, in which case the username is freely given and 
expected to be stored (certs being more or less invisible to 
users, so we can presume no such expectation).


(definitely open to other ideas...)

The PII equation is particularly daunting, echoing Lynn's 
early '90s experiences.  I am told (but haven't really 
verified) that the certificate serial number is PII and 
therefore falls under the full weight of privacy law & regs 
... this may sound ludicrous, but privacy and security are 
different fields with different logics.  If that is true, 
the liability is far too high for something that should be 
private, but is already public by dint of its exposure in 
certificates.  Privacy liabilities are sky-high in some 
places, and not only that, they are incalculable, 
unknowable, and vary with the person you are talking to.


So a superficial conclusion would be "don't use client 
certificates because of the privacy issues", although the 
issues are somewhat more complex than PII revealed in SSL 
key exchange.


As I say, they'll plug on, as they need to prove that the 
cert is worth issuing.  It's a data point, no more, and it 
doesn't exactly answer your spec above.  But I'm having fun 
observing them trying to prove that client certs are worth 
any amount of effort.


iang

PS: normal disclosures of interest + conflicts, included.



Re: Gutmann Soundwave Therapy

2008-02-01 Thread Ian G

James A. Donald wrote:

I have been considering the problem of encrypted channels over UDP or 
IP.  TLS will not work for this, since it assumes and provides a 
reliable, and therefore non timely channel, whereas what one wishes to 
provide is a channel where timeliness may be required at the expense of 
reliability.



This is what Guus was getting at:


- We needed to tunnel data over UDP, with UDP semantics.
  SSL requires a reliable stream. Therefore, we had to
  use something other that SSL to tunnel data.


To put it in more fundamental terms, TLS assumes that what 
you want is a stream.  If you want packets, then TLS is a 
millstone around your neck.  It's not that it can't deliver 
packets, but that it forces all your application to think in 
stream-mode, which results in messes up and down the stack 
(including the human).


The vast majority of applications are not pure stream.  The 
vast majority are not pure packet, either ... so they are 
all somewhere in between.


The selection of where your app is on the spectrum and what 
tools you need is the job of the protocol architect; 
unfortunately, the prevailing wisdom is that as we only have 
a widely deployed stream protocol (TLS), it should be 
used for everything.  This has resulted in some easy wins, 
and some intractable messes as well, like the current thread 
(repeated into the past, and to be repeated into the future).


Advising TLS for a packet delivery requirement is simply 
wrong.  You might be wise to give that advice, if you 
can show some other factors, but that requires ... more 
subtlety than simply repeating that TLS has to be used for 
everything.




I have figured out a solution, which I may post here if you are interested.



I'm interested.  FTR, zooko and I worked on part of the 
problem, documented briefly here: 
http://www.webfunds.org/guide/sdp/index.html


I've successfully got that going in 3 UDP transport 
scenarios, with different key exchange scenarios and 
languages.  (I was never able to deploy it tho, for business 
reasons.)  For the most part, the requirements include no 
relationship between packets, but an expectation of a return 
path  ... a.k.a. connections, but without the streaming 
assumption ... which means having to relearn how to do 
context over UDP.


One can compare that approach to the DTLS, which has the 
benefit of leveraging SSL technology and history.  My 
impression was that it assumed too much of the nature of SSL 
at the core, so it didn't cover enough of the territory to 
satisfy me.  But if it becomes widely deployed, that may be 
the better bet than designing another one or a home-brew. 
Deployment counts over elegance, most times.


iang



Re: TLS-SRP TLS-PSK support in browsers (Re: Dutch Transport Card Broken)

2008-02-01 Thread Ian G

Frank Siebenlist wrote:


Why do the browser companies not care?


I spent a few years trying to interest (at least) one 
browser vendor in looking at new security problems 
(phishing) and using the knowledge that we had to solve them 
(opportunistic cryptography).  No luck whatsoever.  My view 
of why it is impractical / impossible to interest the 
browser vendors in new ideas and new security might be 
summed up as this:


* Browser vendors operate a closed security shop.  I think 
this is because of a combination of things.  Mostly, all 
security shops are closed, and there aren't any good 
examples of open security shops (at least that I can think 
of).  We see some outreach in the last few years (blogs or 
lists by some) but they are very ... protected, the moat is 
still there.


* Browser vendors are influenced heavily by companies, which 
have strong agendas.  Security programmers at the open 
browsers are often employed by big companies who want their 
security in.  They are not interested in user security. 
Security programmers need jobs, they don't do this stuff for 
fun.  So it is not as if you can blame them.


* Browser vendors don't employ security people as we know 
them on this mailgroup, they employ cryptoplumbers. 
Completely different layer.  These people are mostly good 
(and often very good) at fixing security bugs.  We thank 
them for that!  But they are completely at sea when it comes 
to systemic security failings or designing new systems.


* Which also means it is rather difficult to have a 
conversation with them.  For example, programmers don't know 
what governance is, so they don't know how to deal with PKI 
(which is governance with some certificate sugar), and they 
can't readily map a multi-party failure.  OTOH, they know 
what code is, so if you code it up you can have a 
conversation.  But if your conversation needs non-code 
elements ... glug glug...


* Browser vendors work to a limited subset of the old PKI 
book.  Unfortunately, the book itself isn't written, with 
consequent problems.  So certain myths (like "all CAs must 
be the same") have arisen which are out of sync with the 
original PKI thinking ... and out of sync with reality ... 
but there is no easy way to deal with this, because of the 
previous points.


* Browser vendors may be on the hook for phishing.  When you 
start to talk in terms like that, legal considerations make 
people go gooey and vague.  Nobody in a browser vendor can 
have that conversation.


Which is all to say ... it's not the people!  It's the 
assumptions and history and finance and all other structural 
issues.  That won't change until they are ready to change, 
and there are only limited things that outsiders can do.


Just a personal opinion.

iang



Re: two-person login?

2008-01-29 Thread Ian G

John Denker wrote:

We need to talk about threat models:
  a) The purveyors of the system in question don't have any clue
   as to what their threat model is.  I conjecture that they might
   be motivated by the non-apt analogies itemized above.
  b) In the system in question, there are myriad reasons why Joe
   would need to log in.  If Joe wanted to do something nefarious,
   it would take him less than a second to come up with a seemingly
   non-nefarious pretext.  When the approver approves Joe's login,
   the approver has no idea what the consequences of that approval
   will be.  The two-person login requires the approver to be
   present at login time, but does not require the approver to
   remain present, let alone take responsibility for what Joe does 
   after login.

  c) The only threat model I can come up with is the case where
   Joe's password has been compromised, and nobody else's has.
   Two-person login would provide an extra layer of security
   in this case.  This threat is real, but there are other ways
   of dealing with this threat (e.g. two-factor authentication)
   ... so this seems like quite a lame justification for the
   two-person login.
  d) Or have I overlooked something?



OK, putting on the devil's advocate hat & cape here...

Consider the latest case with SocGen where a trader goes 
rogue (so the news has it at least).  One might argue that 
the system you are talking about provides a control over that.




From the foregoing, you might conclude that the two-person login
system is worthless security theater ... but harmless security
theater, and therefore not worth worrying about either way.



There is the possibility of compliance controls.  In audits 
and Sarbanes-Oxley and other things there is frequent talk 
of "dual control" and the "4 eyes" principle.  Now, it could be that 
these points can be easily covered by employing a system 
that enforces this.  Often, auditors will be convinced if 
they can see something in place, and not feel the need to 
audit the system itself.  The auditor's job is done when he 
can safely say "management has put in place procedures...", 
and the system you mention meets that protocol, in words at 
least.




But the plot thickens.  The purveyors have implemented two-person
login in a way that manifestly /reduces/ security.  Details 
available on request.


So now I throw it open for discussion.  Is there any significant
value in two-person login?  That is, can you identify any threat 
that is alleviated by two-person login, that is not more wisely 
alleviated in some other way?



It might be useful for management to decree that all juniors 
must work with a senior watching over.  Also, e.g., critical 
systems where two systems administrators work together.  On 
Linux there is a program called screen(1) that allows two 
sysadmins to share the same screen and type together.  This 
has a lot of value when two minds are better than one. 
But, yes, this is not quite what you are describing.


Also, it might be a control to enforce other procedures.  If 
the sysadm is given the controls to some departmental 
system, then instead of just waltzing in and playing with 
it, he has to ask the non-techie boss, who then asks what 
the story is.  This way she can know that the appropriate 
procedures are in place, such as notification to users.


It's far easier to figure out what the sysadm is up to if he 
is forced to have a conversation every time he wants to log 
in...  This addresses your point b above, in that it now 
clearly labels any disaster as something the sysadm should 
have told the boss about beforehand, instead of leaving it in 
the murky area of "of course I intended to scrub the disks, 
that's my job!"



If so, is there any advice you can give on how to do this right?  
Any DOs and DONTs?



I'd expect a proper physical token to be the manager's login 
mechanism.  If it was a password he typed in there would be 
too much incentive to share the password.


iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Lack of fraud reporting paths considered harmful.

2008-01-27 Thread Ian G

John Ioannidis wrote:

Perry E. Metzger wrote:


That's not practical. If you're a large online merchant, and your
automated systems are picking up lots of fraud, you want an automated
system for reporting it. Having a team of people on the phone 24x7
talking to your acquirer and reading them credit card numbers over the
phone, and then expecting the acquirer to do something with them when
they don't have an automated system either, is just not reasonable.




But how can the issuer know that the merchant's fraud detection systems 
work, for any value of work? This could just become one more avenue 
for denial of service, where a hacked online merchant suddenly reports 
millions of cards as compromised.  I'm sure there is some interesting 
work to be done here.



There is an interesting analogue in the area of SAR 
(suspicious activity report) filings through financial 
services.  This has been in place with various providers for 
maybe a decade or so.  I'm not aware of any serious economic 
analysis that would suggest copying the lessons, though.


There is a philosophical problem with suggesting an 
automated protocol method for reporting fraud, in that one 
might be better off ... fixing the underlying fraud.


iang



Re: forward-secrecy for email? (Re: Hushmail in U.S. v. Tyler Stumbo)

2007-11-08 Thread Ian G

Adam Back wrote:

On Fri, Nov 02, 2007 at 06:23:30PM +0100, Ian G wrote:

I was involved in one case where super-secret stuff was shared
through hushmail, and was also dual encrypted with non-hushmail-PGP
for added security.  In the end, the lawyers came in and scarfed up
the lot with subpoenas ... all the secrets were revealed to everyone
they should never have been revealed to.  We don't have a crypto
tool for embarrassing secrets to fade away.


What about deleting the private key periodically?

Like issue one pgp sub-key per month, make sure it has expiry date etc
appropriately, and the sending client will be smart enough to not use
expired keys.

Need support for that kind of thing in the PGP clients.

And hope your months key expires before the lawyers get to it.

Companies have document retention policies for stuff like
this... dictating that data with no current use be deleted within some
time-period to avoid subpoenas reaching back too far.



Hi Adam,

many people have suggested that.  On paper, it looks like a 
solution to the problem, at least to us.


I think however it is going to require quite significant 
support from the user tools to do this.  That is, the user 
application is going to have to manage the sense of lifetime 
over the message.
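As a thought experiment, the rule Adam describes can be sketched client-side: one encryption subkey per month, and the sender refuses any subkey past its expiry.  A minimal sketch in Python, with a hypothetical key store and made-up subkey ids standing in for a real PGP keyring:

```python
# Sketch of "one subkey per month, never use an expired one".
# SUBKEYS is a hypothetical key store, not a real PGP keyring API.
from datetime import date

# subkey id -> (valid from, expires)
SUBKEYS = {
    "sub-2007-10": (date(2007, 10, 1), date(2007, 11, 1)),
    "sub-2007-11": (date(2007, 11, 1), date(2007, 12, 1)),
}

def usable_subkey(today: date) -> str:
    """Return the newest subkey live today; refuse expired ones."""
    live = [k for k, (start, expiry) in SUBKEYS.items()
            if start <= today < expiry]
    if not live:
        raise LookupError("no unexpired encryption subkey; generate this month's")
    return max(live)

assert usable_subkey(date(2007, 11, 8)) == "sub-2007-11"
```

Once a month's subkey expires and its private half is deleted, mail encrypted to it is unreadable to the lawyers and to you alike -- which is both the point and the problem.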


One tool that does approach this issue at least 
superficially is Skype.  It can be configured to save chat 
messages for different periods of time, I have mine set to 
around 2 weeks currently.


But, then we run slap-bang into the problem that the *other* 
client also keeps messages.  How long are they kept for? 
I'm not told, and of course even if I was told, we can all 
imagine the limitations of that.


I hypothesise that it might be possible to use contracts to 
address this issue, at least for a civil-not-criminal scope. 
 That is, client software could arrange a contractual 
exchange between Alice and Bob where they both agree to keep 
messages for X weeks, and if not, then commitments and 
penalties might apply.  Judges will look at contracts like 
that and might rule the evidence out of court, in a civil 
dispute.


OK, so we need a lawyer to work that out, and I'm definitely 
whiteboarding here; I'm not sure if the solution is worth 
the effort.


Which is why I am skeptical of schemes like "delete the 
private key periodically".  Unless we solve or address the 
counterparty problem, it just isn't worth the effort to be 
totally secure on our own node.


We know how to do invisible ink in cryptography.  How do we 
do its converse, fading ink?


iang



Re: Full Disk Encryption solutions selected for US Government use

2007-10-08 Thread Ian G

Peter Gutmann wrote:

Ben Laurie [EMAIL PROTECTED] writes:

Peter Gutmann wrote:



Given that it's for USG use, I imagine the FIPS 140 entry barrier for the
government gravy train would be fairly effective in keeping any OSS products
out.

? OpenSSL has FIPS 140.


But if you build a FDE product with it you've got to get the entire product
certified, not just the crypto component.

(Actually given the vagueness of what's being certified you might be able to
get away with getting just one corner certified, but then if you have to use a
SISWG mode you'd need to modify OpenSSL, which in turn means getting another
certification.  Or the changes you'd need to make to get it to work as a
kernel driver would require recertification, because you can't just link in
libssl for that.  Or...).



A slightly off-topic question:  if we accept that current 
processes (FIPS-140, CC, etc) are inadequate indicators of 
quality for OSS products, is there something that can be 
done about it?  Is there a reasonable criteria / process 
that can be built that is more suitable?


iang



Re: open source digital cash packages

2007-09-23 Thread Ian G

Steven M. Bellovin wrote:

Are there any open source digital cash packages available?  I need one
as part of another research project.



I can think of a few ways to answer this question.

1.  blinded money demo programs:  there is magic money, in C 
and in Java.  Also I think Ben Laurie wrote another one 
demo'd at EFCE.  These demos are generally around 1-4kloc.


2.  hard money systems:  These allow you to actually issue 
money and survive aggressive communities.  epointsystem is 
GPL I think, Ricardo is something or other but I haven't the 
energy to support the server side as an open source project. 
 Ricardo is 100-150kloc, epointsystem is much smaller (and 
lighter in features and scope).


3.  soft community money systems:  Cyclos and similar (one 
from South Africa, another from NZ, from memory).  These 
products are designed for small communities where trust is 
implicit; they have no internal governance capabilities and 
only limited external security exposures.  But you can use 
them to issue money.


4.  then there are other variants like barcode money.  A lot 
of interest is being put into mobile phone money atm.


iang



Re: Scare tactic?

2007-09-23 Thread Ian G

Ivan Krstić wrote:

On Sep 19, 2007, at 5:01 PM, Nash Foster wrote:

Any actual cryptographers care to comment on this? I don't feel
qualified to judge.


If the affected software is doing DH with a malicious/compromised peer, 
the peer can make it arrive at a predictable secret -- which would be 
known to some passive listener. But hey, if the peer is malicious or 
compromised to begin with, it could just as well do DH normally and 
explicitly send the secret to the listener when it's done. Not much to 
see here.



I agree that this is minutia, but there is a difference.  If 
the peer can arrange the key to be some predictable secret, 
it can do so without revealing itself.  Eve is happy.  If 
however it has to leak the key some other way, it needs some 
covert channel.  This channel is the sort of thing that 
security reviews might more easily stumble over.  E.g., IDS 
guy asking why these strange packets emanate from the crypto 
server...


Which is to say, it's worth closing off this particular form 
of attack if it can be done without undue cost.  When I did 
a key exchange last in a protocol design, I attempted to 
address it by inserting some hashing steps.


iang



Re: Another Snake Oil Candidate

2007-09-13 Thread Ian G

Hagai Bar-El wrote:

Hi,

On 12/09/07 08:56, Aram Perez wrote:

The IronKey appears to provide decent security while it is NOT plugged
into a PC. But as soon as you plug it in and you have to enter a
password to unlock it, the security level quickly drops. This would be
the case even if they supported Mac OS or *nix.

As I stated in my response to Jerry Leichter, in my opinion, their
marketing department is selling snake oil.


I think there is a difference between a product that is susceptible to
an attack and the pure distilled 100% natural snake oil, as we usually
define it.



So, is snake oil:

   * a crap product?
   * a fine product with weaknesses?
   * a marketing campaign that goes OTT?
   * a term used to slander the opposing security model?
   * an adjective that applies to any of the above?

iang

OTT == over-the-top, excessive and dangerous.  Derives from 
WW1 trench warfare.




Re: New article on root certificate problems with Windows

2007-07-19 Thread Ian G

[EMAIL PROTECTED] wrote:
From a security point of view, this is really bad.  From a 
usability point of view, it's necessary.



I agree with all the above, including deleted.


The solution is to let the HCI people into the 
design
process, something that's very rarely, if ever, done in the security 
field [0].



To jump up and down ... if that was the solution, it would 
have been done by now :)


I would instead state that the solution was whatever Skype 
and SSH did.  And the opposite of whatever IPSec, SSL, 
Clipper, S/MIME, DRM, and all the other failures did.


HCI was one of the things, but others were as important: 
lack of open critique, service-before-security, 
crypto-for-free, total solution, narrow problem, etc.


iang



Re: The bank fraud blame game

2007-07-01 Thread Ian G

Florian Weimer wrote:

* Jerry Leichter:


OK, I could live with that as stated.  But:

The code also adds: We reserve the right to request access to
your computer or device in order to verify that you have taken
all reasonable steps to protect your computer or device and
safeguard your secure information in accordance with this code.

If you refuse our request for access then we may refuse your
claim.



The delay between when you were defrauded and when they request
access is unspecified.


But if you don't do this, customers can repudiate *any* transaction,
even those they have actually issued.  In other words, you run into
tons of secondary fraud, where people claim they are victims, but they
actually aren't.

Customers need to provide some evidence that they are actually
victims.  Just claiming the virus did it can't be sufficient.



Banks are the larger and more informed party.  They need to 
provide systems that are reasonable given the situation 
(anglo courts generally take this line, when pushed; I'm 
unsure what continental courts would do with that logic). 
Customers aren't in any position to dictate security 
requirements to banks.


Unfortunately for the banks, there is a vast body of 
evidence that we knew and they knew or should have known 
that the PC was insecure [1].  So, by fielding a system -- 
online commerce -- with a known weakness, they took 
responsibility for the fraud (from all places).


Now they are in a dilemma.  The customer can't provide 
evidence of the fraud, because the system fielded doesn't 
support it (it's login authentication, not transaction 
authorisation).  The NZ response above is simply not facing 
up to the facts; it is trying to create an easy way out that 
(again) shifts the liability to the customer.


They now face the question of whether to roll-back online 
access or to upgrade with some form of dual-channel 
authorisation [2].


iang

[1] To my knowledge, continental banks knew of the risks and 
acted in the 90s, then scaled it down because the risks 
proved overstated.  Brit banks knew of the risks and didn't 
care.  American banks didn't care.


[2] Again, continental banks are shifting to SMS 
authorisation (dual-channel) ... Brit banks are unsure what 
to do ... American banks apparently don't care.




Re: Blackberries insecure?

2007-06-21 Thread Ian G

Steven M. Bellovin wrote:

According to the AP (which is quoting Le Monde), French government
defense experts have advised officials in France's corridors of power
to stop using BlackBerry, reportedly to avoid snooping by U.S.
intelligence agencies.

That's a bit puzzling.  My understanding is that email is encrypted
from the organization's (Exchange?) server to the receiving Blackberry,
and that it's not in the clear while in transit or on RIM's servers.


(quick reply) they specifically mentioned the servers:

"The ban has been prompted by SGDN concerns that the 
BlackBerry system is based on servers located in the US and 
the UK, ..."


https://financialcryptography.com/mt/archives/000856.html
http://www.ft.com/cms/s/dde45086-1e97-11dc-bc22-000b5df10621.html

iang



Re: A crazy thought?

2007-06-09 Thread Ian G

Allen wrote:

Which lead me to the thought that if it is possible, what could be done 
to reduce the risk of it happening?


It occurred to me that perhaps some variation of separation of duties 
like two CAs located in different political environments might be used 
to accomplish this by having each cross-signing the certificate so that 
the compromise of one CA would trigger an invalid certificate. This 
might work if the compromise of the CA happened *after* the original 
certificate was issued, but what if the compromise was long standing? Is 
there any way to accomplish this?



What you are suggesting is called Web of Trust (WoT). 
That's what the PGP world does, more or less, and I gather 
that the SPKI concept includes it, too.


However, x.509 does not support it.  There is no easy way to 
add multiple signatures to an x.509 certificate without 
running into support problems (that is, of course you can 
hack it in, but browsers won't understand it, and developers 
won't support you).
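Outside of x.509 tooling, the acceptance rule you suggest is easy to state in code.  A toy sketch (HMACs stand in for real public-key signatures, and the CA names are made up): a cert is accepted only if every one of the independent CAs' signatures verifies, so compromising a single CA is not enough to forge an acceptable cert.

```python
# Sketch of the two-CA cross-signing rule that x.509 cannot express:
# a cert is valid only if ALL independent CAs' signatures verify.
# HMACs are stand-ins for real signatures; CA names are hypothetical.
import hashlib
import hmac

CA_KEYS = {"ca-east": b"secret-one", "ca-west": b"secret-two"}

def sign(ca: str, cert: bytes) -> bytes:
    return hmac.new(CA_KEYS[ca], cert, hashlib.sha256).digest()

def accept(cert: bytes, sigs: dict) -> bool:
    # Every CA must have countersigned; one compromised CA cannot
    # forge an acceptable cert on its own.
    return all(hmac.compare_digest(sign(ca, cert), sigs.get(ca, b""))
               for ca in CA_KEYS)

cert = b"CN=example.org"
sigs = {ca: sign(ca, cert) for ca in CA_KEYS}
assert accept(cert, sigs)
assert not accept(cert, {"ca-east": sigs["ca-east"]})  # one CA alone fails
```

The sketch says nothing about the harder case you raise, a compromise that predates issuance; it only shows that the verification rule itself is simple once the format allows multiple signatures.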


(Anecdote 1:  I pushed all of the Ricardo financial 
transaction stuff over to x.509 for a time in 1998, but when 
I discovered the lack of multiple sigs, and a few other 
things, I was forced to go back to PGP.  Unfortunately, 
finance is fundamentally web of trust, and hierarchical PKI 
concepts such as coded into x.509, etc, will not work in 
that environment.)


(Anecdote 2: over at CAcert they attempt to graft a web of 
trust on to the PKI, and they sort of succeed.  But the 
result is not truly WoT, it is a hybrid, in that there is 
still only one sig on the cert, and we are back to the 
scenario that you suggest.  Disclosure:  I have something to 
do with CAcert...)


So as a practical matter, that which is known as x.509 PKI 
cannot do this.  For this reason, some critics have 
relabeled the CAs as Centralised Vulnerability Parties 
(CVPs) instead of the more familiar Trusted Third Parties 
(TTPs).


As a side note, outside the cryptography layer, there are 
legal, contractual, customary defences against the attacks 
that you outline.


iang



Re: no surprise - Sun fails to open source the crypto part of Java

2007-05-15 Thread Ian G

Nicolas Williams wrote:

On Mon, May 14, 2007 at 11:06:47AM -0600, [EMAIL PROTECTED] wrote:

 Ian G wrote:
* Being dependent on PKI style certificates for signing, 

...

The most important motivation at the time was to avoid the risk of Java being
export-controlled as crypto.  The theory within Sun was that crypto with a
hole would be free from export controls but also be useful for programmers.


crypto with a hole (i.e., a framework where anyone can plug anyone
else's crypto) is what was seen as bad.



But that's what they've got.  If the theory was that they 
needed to provide crypto without a hole, then they shouldn't 
have provided the crypto.  *The framework is the hole*, and 
pretending to stop other holes from being added is a fool's 
game.


Which isn't to say that this is the end of the story.  The 
story was no doubt very complex.


Some topic drift (as Lynn would say):  I did a fair bit of 
investigation on the SSL v1 - v2 transition and discovered 
10 different forces working at the time.  As a historical 
comparison, we can suggest that Sun's Java group faced the 
same messy cauldron of forces at the time of the JCA being 
designed.  In that historical investigation I concluded that 
Netscape could not avoid the forces, and quite possibly (no 
surprise) the Sun group cannot avoid the forces either.




The requirement for having providers signed by a vendor's key certified
by Sun was to make sure that only providers from suppliers not from,
say, North Korea etc., can be loaded by the pluggable frameworks.



OK, but can we agree that this is a motive outside normal 
engineering practices?  And it is definately nothing to do 
with security as understood at the language and application 
levels?


The point is that once we agree that this is an outside 
requirement, then we can see that as it starts to impact the 
security architecture, it can only worsen the security.




As
far as I know the process for getting a certificate for this is no more
burdensome to any third parties, whether open source communities or
otherwise, than is needed to meet the legal requirements then, and
since, in force.



From what the guys in Cryptix have told me, this is true. 
Getting the certificate is simply a bureaucratic hurdle, at 
the current time.  This part is good.  But, in the big picture:


J1.0:  no crypto
J1.1:  crypto with no barriers
J1.2:  JCA with no encryption, but replaceable
J1.4:  JCA with low encryption, stuck, but providers are easy
J1.5:  JCA, low encryption, signed providers, easy to get a 
key for your provider

J1.6:  ??

(The java version numbers are descriptive, not accurate.)

The really lucky part here is that (due to circumstances 
outside control) the entire language or implementation has 
gone open source.


No more games are possible ==  outside requirements are 
neutered.  This may save crypto security in Java.




Of course, IANAL and I don't represent Sun, and you are free not to
believe me and try getting a certificate as described in Chapter 8 of
the Solaris Security Developers Guide for Solaris 10, which you can find
at:



Sure.  There are two issues here, one backwards-looking and 
one forwards-looking.


1.  What is the way this should be done?  the Java story is 
a good case study of how the software engineering department 
put in place a heavyweight structure that drifted away from 
security.  We can learn from that.


2.  What is needed now?  Florian says the provider is 
missing and the root list is empty.  What to do?  Is it 
time to reinvigorate the open source Java crypto scene?


iang



Re: no surprise - Sun fails to open source the crypto part of Java

2007-05-14 Thread Ian G

Nicolas Williams wrote:

Subject: Re: no surprise - Sun fails to open source the crypto part of Java


Were you not surprised because you knew that said source is encumbered,
or because you think Sun has some nefarious motive to not open source
that code?



Third option:  the architecture of Sun's Java crypto 
framework is based on motives that should have been avoided, 
and have come back to bite (again).


The crypto framework in Java as designed by Sun was built on 
 motives (nefarious, warped or just plain stupid, I don't 
know) such as


* the need or desire to separate out encryption from 
authentication, and deliver two compatible but varying 
implementations in one variable body of code.  With a 
switch.  Somewhere.
* some notion that crypto code should be (must be) a 
competitive market, one that is created by Sun, and is 
controlled by Sun.
* circular dependency where we have to install a signed 
provider which means we need signing which means we need 
crypto ...
* Being dependent on PKI style certificates for signing, so 
for example, if your machine doesn't have a properly 
configured domain name, touching the crypto caused DNS 
timeouts ... (1.5 from memory, might be fixed).


Hence, the framework is clumsy in practice, and trying to 
change it (in any way) was likely to run into roadblocks at 
the legal, policy and other areas like rights ...


As an aside, security is the baby that got thrown out with 
the bathwater.




If the latter then keep in mind that you can find plenty of crypto code
in OpenSolaris, which, unless you think the CDDL does not qualify as
open source, is open source.  I've no first hand knowledge, but I
suspect that the news story you quoted from is correct: the code is
encumbered and Sun couldn't get the copyright holders to permit release
under the GPL in time for the release of Java source under the GPL.



The real interest was whether there was any difficulty in 
modifying the source code to add in the parts needed.  As 
Florian points out (thanks!), it is Sun's Provider that has 
not been delivered.


This is good, that is the part that is intended to be 
replaceable, so any of the Cryptix or Bouncy Castle or IAIK 
providers can be easy alternatives.


My worry was that they hadn't open sourced the architecture 
component, the part that wasn't meant to be replaceable. 
However even if open sourced, Sun may still wield a stick 
over the providers by insisting that they manage the signing 
process for the providers.


(This is in effect what open source organisations like 
Mozilla do with their source.  There is a tiny hook in there 
that stops people from changing the root list.)



iang



no surprise - Sun fails to open source the crypto part of Java

2007-05-12 Thread Ian G
Does anyone know what Sun failed to opensource in the crypto 
part of Java?


http://news.com.com/Open-source+Java-except+for+the+exceptions/2100-7344_3-6182416.html

"They also involve some elements of sound and cryptography," 
said Tom Marble, Sun's OpenJDK ambassador.  "We have already 
contacted the copyright holders.  We were unable to negotiate 
release under an open-source license," Marble said.


To sidestep the issue, Sun for now includes the proprietary 
software as prebuilt binary modules that programmers can 
attach to the versions of Java built from source code.




Re: Was a mistake made in the design of AACS?

2007-05-02 Thread Ian G

Hal Finney wrote:

Perry Metzger writes:
Once the release window has passed,
the attacker will use the compromise aggressively and the authority
will then blacklist the compromised player, which essentially starts
the game over. The studio collects revenue during the release window,
and sometimes beyond the release window when the attacker gets unlucky
and takes a long time to find another compromise.



This seems to assume that when a crack is announced, all 
revenue stops.  This would appear to be false.  When cracks 
are announced in such systems, revenues normally aren't 
strongly affected.  Cf. DVDs.


iang



Re: Cryptome cut off by NTT/Verio

2007-04-29 Thread Ian G

Perry E. Metzger wrote:

Slightly off topic, but not deeply. Many of you are familiar with
John Young's Cryptome web site. Apparently NTT/Verio has suddenly
(after many years) decided that Cryptome violates the ISP's AUP,
though they haven't made it particularly clear why.

The following link will work for at least a few days I imagine:

http://cryptome.org/cryptome-shut.htm



Quintessenz seem to be maintaining a mirror:

http://cryptome.quintessenz.org/mirror/

http://cryptome.quintessenz.org/mirror/cryptome-shut.htm

iang



Re: crypto component services - is there a market?

2007-04-19 Thread Ian G

Stefan Kelm wrote:

Same with digital timestamping.


Here in Europe, e-invoicing very slowly seems to be
becoming a (or should I say the?) long-awaited
application for (qualified) electronic signatures.



Hmmm... last I heard, qualified certificates can only be 
issued to individuals, and invoicing (of the e-form that the 
regulations speak of) can only be done by VAT-registered companies.


Is that not the case?  How is Germany resolving the 
contradictions?




Since electronic invoices need to be archived in
most countries some vendors apply time-stamps and
recommend to re-apply time-stamps from time to time.



Easier to invoice with paper!

iang



Re: Failure of PKI in messaging

2007-02-13 Thread Ian G

Steven M. Bellovin wrote:

On Mon, 12 Feb 2007 17:03:32 -0500
Matt Blaze [EMAIL PROTECTED] wrote:


I'm all for email encryption and signatures, but I don't see
how this would help against today's phishing attacks very much,
at least not without a much better trust management interface on
email clients (of a kind much better than currently exists
in web browsers).

Otherwise the phishers could just sign their email messages with
valid, certified email keys (that don't belong to the bank)
the same way their decoy web traffic is sometimes signed with
valid, certified SSL keys (that don't belong to the bank).

And even if this problem were solved, most customers still
wouldn't know not to trust unsigned messages purporting
to be from their bank.



Precisely.  The real problem is the human interface, where we're asking
people to suddenly notice the absence of something they're not used to
seeing in the first place.



Actually, there are many problems.  If you ask the low-level 
crypto guys, they say that the HI is the problem.  If you 
ask the HI guys, they say that the PKI concept is the 
problem.  If you ask the PKI people, they say the users are 
not playing the game, and if you ask the users they say the 
deployment is broken ...  Everyone has got someone else to 
blame.


They are all right, in some sense.  The PKI concepts need 
loosening up, emails should be digsig'd for authentication 
(**), and the HI should start to look at what those digsigs 
could be used for.


But, until someone breaks the deadly embrace, nothing is 
going to happen.  That's what James is alluding to:  what 
part can we fix, and will it help the others to move?


iang

** I didn't say digital signing ... that's another problem 
that needs fixing before it is safe to use, from the "ask 
the lawyers" basket.




Re: NPR : E-Mail Encryption Rare in Everyday Use

2006-02-26 Thread Ian G

Peter Saint-Andre wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Ian G wrote:



To get people to do something they will say no
to, we have to give them a freebie, and tie it
to the unpleasantry.  E.g., in SSH, we get a better
telnet, and there is only the encrypted version.



We could just as well say that encryption of remote server sessions is
rare in everyday use. It's just that only geeks even do remote server
sessions, so they use SSH instead of telnet.

The thing is that email is in wide use (unlike remote server sessions).


Well!  Within the context of any given application,
we can learn lessons.  Just because SSH is only used
by geeks is meaningless, really; we need to ground
that criticism in something that relates it to other
areas.  The fact is that SSH came in with a solution
and beat the other guy - Telnet secured over SSL.  It
wasn't the crypto that did this, it was the key management,
plain and simple.

Telnet was in widespread use - but was incapable of
making the jump to secure.  Just like email.  So if
the SSH example were illuminating, we would predict
that some completely different *non-compatible* app
would replace email.

Hence, IM/chat, Skype, TLS experiments at Jabber, as
well as the OpenPGP attempts.

There are important lessons to be learnt in the rise of
IM over email.  Email is held back by its standardisation;
chat seems to overcome spam quite nicely.  Email is hard
to get encrypted, but that didn't stop Skype from doing
encrypted IMs easily.  Phishing is possible over chat,
but has also been relatively easy to address - because
the system owners have incentives and can adjust.

The competition between the IM systems is what is driving
the security forward.  As there is no competition in the
email world, at least at the level of the basic protocol
and standard, there is no way for the security to move
forward.

iang



Re: long-term GPG signing key

2006-01-13 Thread Ian G

Alexander Klimov wrote:

On Wed, 11 Jan 2006, Ian G wrote:



Even though triple-DES is still considered to have avoided that
trap, its relatively small block size means you can now put the
entire decrypt table on a dvd (or somesuch, I forget the maths).



This would need 8 x 2^{64} bytes of storage which is approximately
2,000,000,000 DVD's (~ 4 x 2^{32} bytes on each).

Probably, you are referring to the fact that during encryption of a
whole DVD, say, in CBC mode two blocks are likely to be the same
since there are an order of 2^{32} x 2^{32} pairs.


Thanks for the correction, yes, so obviously I
muffed that one.  I saw it mentioned on this list
about a year ago, but didn't pay enough attention
to recall the precise difficulty that the small
block size of 8 bytes now has.

A few calculations here - even Blu-ray disks won't
help me out.
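The calculation itself is short.  A sketch of the CBC birthday bound for a 64-bit block cipher such as 3DES, using the standard approximation 1 - exp(-n^2 / 2^65) for n ciphertext blocks (disk sizes below are nominal):

```python
# Birthday bound for 64-bit block ciphers (e.g. 3DES) in CBC mode:
# with n ciphertext blocks, a colliding pair -- which leaks the XOR
# of two plaintext blocks -- appears with probability roughly
# 1 - exp(-n^2 / 2^65).
import math

BLOCK_BYTES = 8  # 64-bit blocks

def collision_probability(data_bytes: int) -> float:
    n = data_bytes // BLOCK_BYTES
    return 1 - math.exp(-(n * n) / 2.0 ** 65)

for label, size in [("DVD, 4.7 GB", 4.7e9),
                    ("Blu-ray, 25 GB", 25e9),
                    ("32 GiB", 2.0 ** 35)]:
    print(f"{label}: {collision_probability(int(size)):.3f}")
```

At DVD scale the chance of a colliding block pair is under 1%; at 32 GiB it is already about 39%.  So the practical worry with the small block size is this collision leakage, not storing any kind of decrypt table.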

iang





Re: long-term GPG signing key

2006-01-11 Thread Ian G

Amir Herzberg wrote:

Ian G wrote:


Travis H. wrote:


I'd like to make a long-term key for signing communication keys using
GPG and I'm wondering what the current recommendation is for such.  I
remember a problem with Elgamal signing keys and I'm under the
impression that the 1024 bit strength provided by p in the DSA is not
sufficiently strong when compared to my encryption keys, which are
typically at least 4096-bit D/H, which I typically use for a year.



1. Signing keys face a different set of
non-crypto threats than to encryption
keys.  


Agreed.


In practice, the attack envelope
is much smaller, less likely.  


Huh? It depends on the application. Many applications care only (or 
mostly) on authentication, e.g. secure DNS (or CA). Or secure payment 
protocols (not based on sending `secrets` such as credit card numbers...).


Well, yes, depends on the application of course!

With this particular application - signing
people's keys for WoT - that's generally true.
If I was to crack your signing key for example,
then wander around impersonating you, this is
unlikely to do anything useful except confuse
people a lot until you all figure it out.

If we limit our discussion to actual extant
and popular protocols, it is easier to see.
Take for example this *extreme* case of the CA
application.  If I was to publish Verisign's
private key on usenet, what difference would
that make?

Other than a lot
of red faces, not as much as one would think;
they would simply roll another key, then re-sign
everyone's certs and post them out with a free
year for the nuisance factor.  Then a CERT
advisory would tell every merchant to roll
over their certs, and browsers would ship new
roots.

(Actually it's probably worse than that.  We
stand at the cusp of SSL attacks, 450 seen
last year, so this would spur a bunch of forged
cert attacks.  Compare this to a couple of years
back when someone noticed that IE had a cert
bug in it, and nobody noticed.  And nobody ever
bothered to attack it.)

But that's the *extreme* case, more or less like
Microsoft faces every month.

For the regular case of say Amazon's private key,
well, Amazon would have a lot of nuisance to
deal with, but in practice it would just be in
up-tick in normal phishing against them for a
few months.

Various random posts:
Netcraft - 450 phishing cases using SSL / HTTPS certs
https://www.financialcryptography.com/mt/archives/000624.html
RSA comes clean: MITM on the rise, Hardware Tokens don't cut it, Certificate 
Model to be Replaced!
https://www.financialcryptography.com/mt/archives/000633.html
GP4.3 - Growth and Fraud - Case #3 - Phishing
https://www.financialcryptography.com/mt/archives/000609.html


3. The RSA patent expired, which means that
RSA no longer has everyone over a barrel.
For various reasons, many projects are
drifting back to RSA for signing and for
encryption.


Yes, but depending on how many years you need, the length of key can 
become substantial/a concern. In which case, you may consider some of 
the EC signatures or other short signatures. Be careful regarding the 
hashing, though.


I don't think EC is available for OpenPGP although
GPG may have some experimental product in it?

On the whole - another complete generalisation -
open projects tend to shy away from EC as there
is no clear patent situation, and putting all
the work in only to discover some claim later on
is not effective use of time.  Our Cryptix project
to do EC in Java (Paulo in Brazil) stalled when he
discovered that the so-called unencumbered set
was actually quite slow...

iang





Re: long-term GPG signing key

2006-01-11 Thread Ian G

Travis H. wrote:

On 1/10/06, Ian G [EMAIL PROTECTED] wrote:


2. DSA has a problem, it relies on a 160
bit hash, which is for most purposes the
SHA-1 hash.  Upgrading the crypto to cope
with current hash circumstances is not
worthwhile;  we currently are waiting on
NIST to lead review in hashes so as to
craft a new generation.



What's wrong with SHA-256 and SHA-512?

http://csrc.nist.gov/cryptval/shs/sha256-384-512.pdf

I agree though that hashes (I hate the term, hashing has little to do
with creating OWFs) are not as advanced as block cipher design, and
160 bits seems rather small, but surely SHA-256 would be better than
throwing one's hands up, claiming it's unsolvable, and sticking with
SHA-1, right?


Well, it's a pragmatic situation:

  * all SHA algorithms are under a cloud
  * anything 160 bits or less is under a dark-ish cloud
  * the bigger ones won't break, but maybe
the engineering will all change anyway
  * DSA has to be upgraded anyway
  * what's wrong with RSA in this role?
  * where's the threat to the DSA algorithm given that
the attack is the birthday attack?
  * where's the threat to any extant usage of DSA
(within its application profile)?

Pragmatically, wait and see is a good choice here,
IMO, but others disagree.


If the problem is size, the answer is there.  If the problem is
structural, a temporary answer is there.


DSA is fixed to a 160 bit hash (or is it DSS?).
So, it's possible to do RIPEMD-160 or a chopped off
version of SHA-256.  The question is, what does
that gain you?  Not that much, and probably not
as much as the pain of rolling out a new digsig
algorithm.


Using two structurally different hashes seems like a grand idea for
collision resistance, but bad for one-wayness.  One-wayness seems to
matter for message encryption, but doesn't seem to matter for signing
public keys - or am I missing something?


Well, using two different MDs to cover one
failing is a plausible idea - but at a logical
and cryptographic level, all you are doing is
inventing your own hash algorithm, constructed
from some prior work.

So, we can look at for example cipher chaining
like triple-DES.  There are strange artifacts
such as groups where non-obvious things come
in and trip you up.  Even though triple-DES
is still considered to have avoided that trap,
its relatively small block size means you can
now put the entire decrypt table on a dvd (or
somesuch, I forget the maths).

So in general, it's not a good idea to just
invent your own algorithms;  if you could do
better so easily, so could the professional
cryptographers, and they would have by now.
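To make the point concrete, here is a minimal sketch (mine, not anything proposed in the thread) of the "two structurally different hashes" idea; the result is simply a new, home-made digest that has to be analysed on its own terms:

```python
import hashlib

def double_digest(data: bytes) -> bytes:
    """Concatenate two structurally different hashes: SHA-256 || SHA-1.

    This is exactly "inventing your own hash algorithm, constructed
    from some prior work": the 52-byte output is a new, unanalysed
    construction, and Joux-style multicollisions suggest its collision
    resistance is little better than the stronger component alone.
    """
    return hashlib.sha256(data).digest() + hashlib.sha1(data).digest()
```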

iang



Re: long-term GPG signing key

2006-01-11 Thread Ian G

Perry E. Metzger wrote:

Ian G [EMAIL PROTECTED] writes:


Travis H. wrote:


I'd like to make a long-term key for signing communication keys using
GPG and I'm wondering what the current recommendation is for such.  I
remember a problem with Elgamal signing keys and I'm under the
impression that the 1024 bit strength provided by p in the DSA is not
sufficiently strong when compared to my encryption keys, which are
typically at least 4096-bit D/H, which I typically use for a year.


1. Signing keys face a different set of
non-crypto threats than to encryption
keys.  In practice, the attack envelope
is much smaller, less likely.



I call bull.

You have no idea what his usage pattern is like, and you have no idea
what the consequences for him of a forged signature key might be. It
is therefore unreasonable -- indeed, unprofessional -- to make such
claims off the cuff.


You seem to have missed the next sentence:

   Unless you have
   particular circumstances, it's not
   as important to have massive strength in
   signing keys as it is in encryption keys.

As he asked what the current recommendation
is it seems reasonable to assume the general
case, not the particular, and invite him to
elaborate if so needed.  Etc etc.

Errata - if you (Travis) are using 4096-bit D/H
as your encryption keys, you might want something
a bit beefier for signing keys.  Check out
the key length calculator:

http://www.keylength.com/

and click on NIST 2005 Recommendations and
also ECRYPT 2005 Report for comparison.
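For reference, the comparable-strength figures behind those recommendations are commonly tabulated as follows (a sketch of the usual NIST SP 800-57 numbers; check keylength.com for the authoritative values):

```python
# Symmetric strength (bits) -> (RSA/DH modulus bits, ECC key bits),
# per the commonly cited NIST comparable-strength table.
comparable_strength = {
    80:  (1024, 160),
    112: (2048, 224),
    128: (3072, 256),
    192: (7680, 384),
    256: (15360, 512),
}
# A 4096-bit RSA/DH key sits between the 128- and 192-bit rows, so a
# "beefier" signing key would be chosen from the same table.
```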

iang



Re: long-term GPG signing key

2006-01-10 Thread Ian G

Travis H. wrote:

I'd like to make a long-term key for signing communication keys using
GPG and I'm wondering what the current recommendation is for such.  I
remember a problem with Elgamal signing keys and I'm under the
impression that the 1024 bit strength provided by p in the DSA is not
sufficiently strong when compared to my encryption keys, which are
typically at least 4096-bit D/H, which I typically use for a year.


1. Signing keys face a different set of
non-crypto threats than to encryption
keys.  In practice, the attack envelope
is much smaller, less likely.  Unless you
have particular circumstances, it's not
as important to have massive strength in
signing keys as it is in encryption keys.

2. DSA has a problem, it relies on a 160
bit hash, which is for most purposes the
SHA-1 hash.  Upgrading the crypto to cope
with current hash circumstances is not
worthwhile;  we currently are waiting on
NIST to lead review in hashes so as to
craft a new generation.  Only after that
is it possible to start on a new DSA.
So any replacement / fix for DSA is years
away, IMO.  The OpenPGP group has wrestled
with this and more or less decided to defer
it.

3. The RSA patent expired, which means that
RSA no longer has everyone over a barrel.
For various reasons, many projects are
drifting back to RSA for signing and for
encryption.



Does anyone have any suggestions on how to do this, or suggestions to
the effect that I should be doing something else?


If you want something stronger, then I'd
suggest you just use a big RSA key for
signing.

iang



Re: browser vendors and CAs agreeing on high-assurance certificat es

2005-12-27 Thread Ian G

Ben Laurie wrote:

Ian G wrote:

...

http://wiki.cacert.org/wiki/VhostTaskForce



(The big problem of course is that you can use
one cert to describe many domains only if they
are the same administrative entity.)



If they share an IP address (which they must, otherwise there's no
problem), then they must share a webserver, which means they can share a
cert, surely?


Certainly they *can* share a cert.  But a cert
speaks to identity - at the human level the cert
is supposed to (by some readings) indicate who
the site is purporting to be and in some scenarios,
there are people who think the cert actually
proves that the site is who it claims to be.

So regardless of the technical details of the
underlying software (complex, I grant), websites
SHOULD NOT share a cert.

(by capitals I mean the RFC sense, not the shouting
sense.)



What we really need is for the webservers to
implement the TLS extension which I think is
called server name indication.

And we need SSL v2 to die so it doesn't interfere
with the above.



Actually, you just disable it in the server. I don't see why we need
anything more than that.


If browsers don't know what is available on the
server, they send a Hello message that asks
which protocol versions and ciphersuites to use.
That Hello goes out in the SSL v2 format, just in
case the server speaks nothing newer.  To rectify
this situation we need to get all the browsers
distro'd with SSL v2 turned off by default.  The
shorthand for this is SSL v2 must die...
Thankfully, they did decide to do just that at
last month's browser pow-wow.
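The server-side fix is the same idea in reverse: refuse the old protocol outright. In today's terms (a modern Python sketch; SSL v2 has long since been removed from the libraries, so the illustration sets an explicit protocol floor instead):

```python
import ssl

# Build a server-side context that refuses anything older than TLS 1.2.
# SSL v2/v3 are already disabled in create_default_context(); setting
# minimum_version makes the floor explicit.
ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```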

iang



Re: browser vendors and CAs agreeing on high-assurance certificat es

2005-12-27 Thread Ian G

Ben Laurie wrote:

Ian G wrote:



http://wiki.cacert.org/wiki/VhostTaskForce



(The big problem of course is that you can use
one cert to describe many domains only if they
are the same administrative entity.)



If they share an IP address (which they must, otherwise there's no
problem), then they must share a webserver, which means they can share a
cert, surely?


Certainly they *can* share a cert.  But a cert
speaks to identity - at the human level the cert
is supposed to (by some readings) indicate who
the site is purporting to be and in some scenarios,
there are people who think the cert actually
proves that the site is who it claims to be.

So regardless of the technical details of the
underlying software (complex, I grant), websites
SHOULD NOT share a cert.



I don't see why not - the technical details actually matter. Since the
servers will all share a socket, on any normal architecture, they'll all
have access to everyone's private keys. So, what is gained by having
separate certs?


1. Because the activity is being done in the name
of the site.  When a business signs or otherwise
represents a site as purporting to be in the name of
some business, we still want to do it in a way that
separates out that business from every other.

2. The system operator has access to the private
keys, yes, but he's just the agent, and this does
not mean that anyone else has access.  We have
systems in place to separate out the protection
of the keys from the rest of the business.

Most small businesses have some level of cooperation
where they share techies, systems, and other services,
so it is probably more seen and more useful in the
SOHO (small office home office) world.  Of course,
this is less interesting to the security world,
because there isn't the money to pay for consultants
there...

All the more reason why the software should provide
the best it can for free!


I do agree that the process by which the additional names get added to
the main cert needs to reflect ownership of the name, but that's a
different matter.

And I'm not claiming, btw, that this mechanism is better than the server
name extension. However, I don't believe it's as broken as some are claiming.


Well, sure.  For many uses it will be a useful
stopgap measure, until SNI is deployed.  It's
only broken if you like a binary world, and you
happen to fall on the zero side of the question.

iang



Re: browser vendors and CAs agreeing on high-assurance certificat es

2005-12-24 Thread Ian G

Ben Laurie wrote:
...

Hopefully over the next year, the webserver (Apache)
will be capable of doing the TLS extension for sharing
certs so then it will be reasonable to upgrade.



In fact, I'm told (I'll dig up the reference) that there's an X509v3
extension that allows you to specify alternate names in the certificate.
I'm also told that pretty much every browser supports it.


The best info I know of on the subject is here:

http://wiki.cacert.org/wiki/VhostTaskForce

Philipp has a script which he claims automates
the best method(s) described within to create
the alt-names cert.

(The big problem of course is that you can use
one cert to describe many domains only if they
are the same administrative entity.)
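The alt-names approach boils down to a subjectAltName extension in the certificate request; a minimal OpenSSL config fragment (the domain names here are placeholders, and the exact stanza layout varies by OpenSSL version) looks something like:

```
[ req ]
distinguished_name = req_dn
req_extensions     = v3_req

[ req_dn ]

[ v3_req ]
# All the names served from this IP / webserver, one cert covering them:
subjectAltName = DNS:example.com, DNS:www.example.com, DNS:blog.example.com
```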

What we really need is for the webservers to
implement the TLS extension which I think is
called server name indication.

And we need SSL v2 to die so it doesn't interfere
with the above.

iang



Re: browser vendors and CAs agreeing on high-assurance certificat es

2005-12-23 Thread Ian G



BTW, illustrating points made here, the cert is for
financialcryptography.com
but your link was to www.financialcryptography.com.  So of course Firefox
generated a warning


Indeed and even if that gets fixed we still have
to contend with:

  * the blog software can't handle the nature of a
TLS site (internal problems like non-working
trackbacks, internal links, posts, ...)
  * the cert has to be shared with 3 other sites
  * Firefox will still warn about it being a CAcert
signed certificate
  * ...  I'm sure there's more.

Hopefully over the next year, the webserver (Apache)
will be capable of doing the TLS extension for sharing
certs so then it will be reasonable to upgrade.

iang

PS:  SSL v2 must die!  Wot, you mean you haven't
turned it off in your browser yet?



Re: [Clips] Banks Seek Better Online-Security Tools

2005-12-06 Thread Ian G

[EMAIL PROTECTED] wrote:

okay, i read this story from 7/2005 reporting an incident in 5/2005.  the short 
form of it is:


Not a bad summary.  I'd say that when one is
dealing with any such crime, there are always
unanswered questions, and issues of confusion
(probably as much for the attacker as the victim).


even more off-topic:
i'm surprised that the people on this list don't feel as if they have
enough personal connections that at least they could figure out what
happened to them at *some* financial institution.  doesn't anyone else
ask, as a basis for imputing trust, exactly who did that {protocol,
architecture, code} review?  maybe i'm delusional, but i give fidelity
some residual credit for having adam shostack there, even some years
ago, and there are some firms i'd use because i've been there enough
to see their level of care.


Well, even though phishing has been discussed
on this list for about 2 years, it is only in
the last 6 months or so that there has been a
wider acceptance in the subject.  I think your
specific question has been asked so many times
that people's eyes glaze over.

Only in the last few *weeks* did two of the browser
manufacturers acknowledge it publicly.  So I
wouldn't expect too much from the banks, who have
to receive authoritative press, institutional & regulatory
input before they will shift on matters of security.

iang



Re: [Clips] Banks Seek Better Online-Security Tools

2005-12-05 Thread Ian G

[EMAIL PROTECTED] wrote:

dan, maybe you should just keep less money in the bank.

i use online banking and financial services of almost every kind
(except bill presentment, because i like paper bills).  i cannot do
without it.

it seems to me the question is how much liability do i expose myself to by
doing this, in return for what savings and convenience.  


That part I agree with, but this part:


i don't keep a lot of money in banks (why would anyone?)  -- most of
the assets are in (e.g.)  brokerage accounts.  at most  i'm exposing
a month of payroll check to an attacker briefly until it pays some
bill or is transferred to another asset account.  


George's story - watching my Ameritrade account get phished out in 3 minutes
https://www.financialcryptography.com/mt/archives/000515.html

Seems like a hopeful categorisation!

iang



Re: [Clips] Banks Seek Better Online-Security Tools

2005-12-04 Thread Ian G

[EMAIL PROTECTED] wrote:

You know, I'd wonder how many people on this
list use or have used online banking.  


To start the ball rolling, I have not and won't.


I have not!  I declined the chance when my
bank told me that I had to download their
special client that only runs on windows...

However, I have used and/or written many
online DGC tools (which is for the sake of
this discussion, gold-denominated online
payments) which are honed through experience,
incentive and willingness to deal with the
issues.

( As an aside, e-gold was generally the first
to be hit by these problems as well as all the
other problems that have only affected banks
in passing.  Generally the DGC sector is much
more savvy about threats, through repetitive
losses, at least. )

iang



Re: Session Key Negotiation

2005-12-03 Thread Ian G

Will Morton wrote:

I am designing a transport-layer encryption protocol, and obviously wish
to use as much existing knowledge as possible, in particular TLS, which
AFAICT seems to be the state of the art.

In TLS/SSL, the client and the server negotiate a 'master secret' value
which is passed through a PRNG and used to create session keys.

My question is: why does this secret need to be negotiated?  Why can one
side or another (preference for client) not just pick a secret key and
use that?

I guess that one reason would be to give both sides some degree of
confidence over the security in the key.  Is this true, and if so is it
the only reason?


One reason is that one side or the other might have
a screwed implementation.  For example, an RNG that
spits out zeroes.

Another reason is that one side or other might have
reasons for screwing the key deliberately;  a server
might for example fix its key so that it can be
listened to outside.  If a simple XOR is negotiated,
then the server could always choose its part to
XOR to a certain value.  This is plausible if a
server operator has done a deal to reveal to an
eavesdropper, but doesn't want to reveal its private
key.  (I suspect the newer ciphersuites in TLS may
have been motivated by this.)

Hence, slop in lots of random from both sides, and
hash the result, so you have at least the key space
of the one side that is behaving well.
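A sketch of that combining step (my illustration, not the actual TLS PRF): hash both parties' contributions, so the session key inherits the entropy of whichever side behaved well, and neither side can steer the output to a chosen value the way a plain XOR combiner would allow.

```python
import hashlib
import os

def derive_master(client_random: bytes, server_random: bytes) -> bytes:
    # Hashing the concatenation means a party choosing its share last
    # cannot force a predetermined output (unlike XOR, where it could
    # pick share = target XOR other_share), and one good RNG suffices.
    return hashlib.sha256(client_random + server_random).digest()

# Even if the server's RNG "spits out zeroes", the key is still as
# unpredictable as the client's contribution:
client = os.urandom(32)
broken_server = b"\x00" * 32
key = derive_master(client, broken_server)
```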

iang



Haskell crypto

2005-11-19 Thread Ian G

Someone mailed me with this question, anyone know
anything about Haskell?

 Original Message 

I just recently stepped into open source cryptography directly, rather
than just as a user.  I'm writing a SHA-2 library completely in
Haskell, which I recently got a thing for in a bad way.  Seems to me
that nearly all of the message digest implementations out there are
written in C/C++, or maybe Java or in hw as an ASIC, but I can't find
any in a purely functional programming language, let alone in one that
can have properties of programs proved.  Haskell can, and also has a
very good optimizing compiler.  I'm not sure where to submit for
publication when I'm done and have it all written up, though!



Re: ISAKMP flaws?

2005-11-18 Thread Ian G

Florian Weimer wrote:


Photuris uses a baroque variable-length integer encoding similar to
that of OpenPGP, a clear warning sign. 8-/


Actually, if one variable-length integer
encoding is used instead of 5 other formats
in all sorts of strange places, I'd say this
is a good sign.  Although I didn't originally
like the variable-length integer I've seen
used, I've come to appreciate how much simpler
and thus much more secure it makes the code.
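For the curious, the OpenPGP new-format length encoding being alluded to fits in a few lines (a sketch following RFC 4880 section 4.2.2; partial-body lengths are omitted). Baroque or not, one shared routine beats five ad-hoc formats:

```python
def encode_length(n: int) -> bytes:
    """OpenPGP new-format packet length (RFC 4880, section 4.2.2)."""
    if n < 0:
        raise ValueError("length must be non-negative")
    if n < 192:                       # one-octet lengths: 0..191
        return bytes([n])
    if n < 8384:                      # two-octet lengths: 192..8383
        n -= 192
        return bytes([192 + (n >> 8), n & 0xFF])
    return b"\xff" + n.to_bytes(4, "big")   # five-octet lengths

def decode_length(buf: bytes) -> int:
    first = buf[0]
    if first < 192:
        return first
    if first < 224:
        return ((first - 192) << 8) + buf[1] + 192
    if first == 255:
        return int.from_bytes(buf[1:5], "big")
    raise ValueError("partial body length; not handled in this sketch")
```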


The protocol also contains
nested containers which may specify conflicting lengths.  This is one
common source of parser bugs.


Containers for things are inevitable.  I've
found they should be encapsulated in their
own protected container, so that bugs do not
cross boundaries.  Yes, this makes for redundancy
and possibly conflict, but wasn't it said that
in security programming, we should be precise
in what we write out and precise in what we
accept?  Any conflict - reject it.

iang

PS: I think it was Dan Bernstein who said that,
in opposition to the aphorism be liberal in what
you accept.



Re: Some thoughts on high-assurance certificates

2005-11-02 Thread Ian G

Ed Reed wrote:


Getting PKI baked into the every day representations people routinely
manage seems desirable and necessary to me.  The pricing model that has
precluded that in the past (you need a separate PKI certificate for each
INSURANCE policy?) is finally melting away.  We may be ready to watch
the maturation of the industry.


In your long and interesting email you outlined
some issues with the tool known as PKI.  What I'm
curious about is why, given these issues and maybe
100 more documented elsewhere **, you propose that:

   Getting PKI baked into the every day representations
   people routinely manage seems desirable and necessary to me.

We have this tool.  It has many and huge issues.
What I don't understand is why the desire is so
strong to put this tool into play, when it has
singularly failed to prove itself?

Where does the bottom-up drive come from?  Why is
it that what people do routinely isn't driven
top-down, so that the tools they need are application
driven, but is instead subjugated to the tools-first
approach, even against such negative experience and
theory?

iang

** some here: http://iang.org/ssl/pki_considered_harmful.html



Re: [fc-discuss] Financial Cryptography Update: On Digital Cash-like Payment Systems

2005-10-22 Thread Ian G

R. Hirschfeld wrote:

Date: Thu, 20 Oct 2005 11:31:39 -0700
From: cyphrpunk [EMAIL PROTECTED]




2. Cash payments are final. After the fact, the paying party has no
means to reverse the payment. We call this property of cash
transactions _irreversibility_.


Certainly Chaum ecash has this property. Because deposits are
unlinkable to withdrawals, there is no way even in principle to
reverse a transaction.



This is not strictly correct.  The payer can reveal the blinding
factor, making the payment traceable.  I believe Chaum deliberately
chose for one-way untraceability (untraceable by the payee but not by
the payer) in order to address concerns such as blackmailing,
extortion, etc.  The protocol can be modified to make it fully
untraceable, but that's not how it is designed.



Huh - first I've heard of that, would be
encouraging if that worked.  How does it
handle an intermediary fall guy?   Say
Bad Guy Bob extorts Alice, and organises
the payoff to Freddy Fall Guy.  This would
mean that Alice can strip her blinding
factors and reveal that she paid to Freddy,
but as Freddy is not to be found, he can't
be encouraged to reveal his blinding factors
so as to reveal that Bob bolted with the
dosh.
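The mechanism being discussed is easiest to see with toy RSA blind-signature numbers (an illustration of Chaum's scheme with insecure toy parameters and no padding):

```python
# Toy Chaum blind signature over RSA (insecure toy parameters).
p, q = 61, 53
n = p * q                       # bank's modulus
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

m = 99                          # the coin (a hash of it, in a real system)
r = 19                          # Alice's blinding factor, coprime to n

blinded = (m * pow(r, e, n)) % n        # what the bank actually sees
blind_sig = pow(blinded, d, n)          # bank signs blindly
sig = (blind_sig * pow(r, -1, n)) % n   # Alice unblinds

assert sig == pow(m, d, n)              # a valid ordinary signature on m

# Traceability: if Alice later reveals r, anyone can recompute `blinded`
# from (m, r) and link this coin to her withdrawal record -- but only
# for coins whose blinding factors are actually revealed, which is why
# the fall-guy hop above defeats it.
assert blinded == (m * pow(r, e, n)) % n
```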

iang



Re: NSA Suite B Cryptography

2005-10-14 Thread Ian G

Sidney Markowitz wrote:

Excerpt from


Fact Sheet on NSA Suite B Cryptography
http://www.nsa.gov/ia/industry/crypto_suite_b.cfm



NSA has determined that beyond the 1024-bit public key cryptography in
common use today, rather than increase key sizes beyond 1024-bits, a
switch to elliptic curve technology is warranted. In order to facilitate
adoption of Suite B by industry, NSA has licensed the rights to 26
patents held by Certicom Inc. covering a variety of elliptic curve
technology. Under the license, NSA has a right to sublicense vendors
building equipment or components in support of US national security
interests.

Does this prevent free software interoperability with Suite B standards?
It potentially could be used to block non-US vendors, certainly anyone
who is in the US Government's disfavor, but it seems to me that even
with no further intentional action by the NSA it would preclude software
under the GPL and maybe FOSS in general in countries in which the
patents are valid.


I didn't read it that way at all.  AFAICS,
the NSA has acquired the licences it needs
to deliver (have delivered) software to its
government customers.  As all the government
customers will need to use approved software
anyway, it will be acquired on some approved
list, and the licences will be automatically
extended.

Anyone outside the national security market
will need to negotiate separately with Certicom
if they need to use it.  This represents a big
subsidy to Certicom, but as they are a Canadian
company it is harder to argue against on purely
statist grounds.

Which is to say, NSA solved its problem and it
is nothing to do with FOSS.

The big question (to me perhaps) is where and
how far the Certicom patents are granted.  If
they are widely granted across the world then
the software standards won't spread as there
won't be enough of an initial free market to
make it bloom (like happened to RSA).  But if
for example they are not granted in Europe
then Europeans will get a free ride on the NSA's
due diligence and this will cause the package to become
widespread, which will create the market in
the US.  Of course predicting the future is
tough...

iang



Re: [Anti-fraud] simple (secure??) PW-based web login (was Re: Another entry in theinternet security hall of shame....)

2005-09-14 Thread Ian G

Amir Herzberg wrote:
For a stationary user, the extension compares _Iterations_ and confirms 
it is at most one less than the previous value of _Iterations_ used with 
this site.


(Minor point - if relying on incrementing
Iterations, this may impact password sharing
scenarios.  Whether that's a good thing or a
bad thing depends...)

Also, the extension keeps a table r(PK) mapping the public 
key PK of each login site to an independant random value (we need this 
as real random value and can't derive them from the PW, to prevent 
dictionary attacks by fake sites).


I suspect this would not work so well in the
(common enough?) cases where a site uses a farm
of SSL boxes and certs;  a couple of sites I've
come across provide different certs every time
(although admittedly I saw this with IMAP TLS not
with browsing).

Now, using a single PW, extension computes for a site with given PK the 
value H(0)=h(PK, h(r(PK), PW).


What is the reason for hashing twice?  Instead of
the more obvious H(0)=h(PK, r(PK), PW) ?

(Also, you are missing a closing parenthesis there
so maybe your intent was other.)



(Somewhat challenging your assumptions here) your
design does not seem to cope with MITM.  But, it
may do if you are assuming there is an extension
that is handling the client side, and the exchange
is the setup for later transactions, not the
transaction itself:  the server can send back its
token X which needs to be further hashed in order
to gain the useful token for later:

   Y = h(..., X, PW);

where Y could be the session identifier or cookie
or something.  In this way both sides have proven
their possession of the password and hopefully
eliminated other parties from further comms.
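As a sketch of my reading of the exchange (the function shapes are my guesses at the construction, with SHA-256 standing in for h; a real design would length-prefix the hash inputs):

```python
import hashlib

def h(*parts: bytes) -> bytes:
    # Naive concatenating hash standing in for the scheme's h().
    d = hashlib.sha256()
    for part in parts:
        d.update(part)
    return d.digest()

def site_secret(pk: bytes, r_pk: bytes, pw: bytes) -> bytes:
    # H(0) = h(PK, h(r(PK), PW)): the inner hash mixes in the stored
    # random value r(PK), so a fake site knowing only PK cannot run an
    # offline dictionary attack on PW from H(0) alone.
    return h(pk, h(r_pk, pw))

def session_token(h0: bytes, x: bytes, pw: bytes) -> bytes:
    # Y = h(H(0), X, PW): the server's token X is folded in, so both
    # sides demonstrate possession of the password-derived material.
    return h(h0, x, pw)
```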

(But I may have misunderstood something...)

iang



Re: [Anti-fraud] Re: Another entry in the internet security hall of shame....

2005-09-07 Thread Ian G

Alaric Dailey wrote:

Thus ATMs and the weak 2-factor authentication system they use are
untrustworthy, I knew that already, but as I said, it's better than not
having the multifactor authentication.  The fact that many cards may be
used as credit cards, thus bypassing the second factor, is a HUMAN
failing; the entity accepting the cards is supposed to check the
signature against a photo ID, ESPECIALLY if the card says See ID.

But this overabundance of text doesn't solve my problems with the
assertion that PSKs are somehow more secure than Public/Private key
encryption with a trusted third party as verifier, and specifically the
X.509 Certificate Authority System that is the backbone for SSL.


This statement is only plausible if you consider
the paper cryptography domain.  When applied to
the business / user world, the statement fails
due to the way that real life breaks the assumptions.


No one is touching on the key issues:
 sharing of keys securely and how to validate that they haven't been
compromised.


Generally, for most apps, there is already a way
to share stuff.  Just to look at one particular
application such as online banking, the bank and
the user generally communicate through post and
other means such as email at a minimum so as to
set up a relationship.  These methods may not be
secure (according to paper crypto metrics) but
they are multi-factor and are uncorrelated with
the threats.  So they work;  so the keys can be
shared securely, according to some risk measure.


 how a user is supposed to keep track of the secure keys (kind of a side
point)


Software?


 how the validation of identity of these systems would work


Shared keys validate in that they are shared; the
keys themselves aren't sent over the wire in the
setup, and if the other party doesn't have the key,
then the setup fails.  This amounts to validation
of identity being measured by has a copy of my key
too.

Now, you'll probably think this is woefully insecure
because it falls to MITM.  True, but so does most every
other system.  Online browsing fails to MITM by means
of phishing in such an evidently puerile fashion that
heads should be hung with shame ... if not lopped off.
Even if the browser were to do more here, the MITM is
still possible within the ivory tower of the CAs,
which haven't exactly inspired confidence of late, given that
they now sell lots and lots of domain-certs for
bargain basement prices.  ($7 was the latest I saw!)

So, in a business context, PSK does identity validation
more or less as well as anything else, at least on
paper (coz it hasn't been tried yet!).



Unless the issues I pointed out are addressed, PSK is a much WORSE
solution than trusting a third party for verification of the other
entity's identity.  Especially since you profess that certs are a
redundant and superfluous stand-in for the real information: how am I
to verify that a given website in Timbuktu is owned and operated by
the entity making the claim without going there myself?  With SSL we
have SOME assurance that someone verified it.



Not really.  With SSL in the browser you have
approximately zero assurance that anyone verified
it.  If you look at the browser, and find a padlock
that gives you maybe 5% of what you need.  If you
go searching into the cert then you might be able
to establish the CA which would perhaps give you
5-20% of what you need, but to actually work out
whether a website is really the right one, you
are going to have to go elsewhere for assurance.


It's no different than trusting that the people at the DMV did their jobs
when a driver's license was issued, but even driver's licenses aren't
acceptable authoritative proof that someone is who they profess to be.
Here in Nebraska we have one of the most difficult driver's licenses to
forge, so what did the criminals do?  They stole the machines, so they
could make perfect forgeries.


Sure.  When it matters, expect phishers to set up
SSL sites, to steal domains, to steal email confirmations,
to do all sorts of things.  Right now, they are dealing
with low hanging fruit.


Trust must lie somewhere; if you have a
structure that gives some assurance that the entities are who they
say they are, that is a world better than lacking those assurances.


No, I'd challenge your underlying assumption here that
the intention is to deliver trust.  Trust cannot be
delivered, it can't be sent, it can't lie anywhere.

Trust is something that only each individual can find
for themselves on their own checks.  Trust never leaves
a person.

What the system can do is make statements and present
evidence.  It's up to the user to decide whether to
trust those statements and whether to seek further
evidence or risk it with what she has.

The difference in these two approaches is immense.  In
your view you have to get it right;  except you have no
way to establish trust that actually makes sense and
hence you're trapped into an ever increasing quality
cycle, while the businesses selling that 

Re: Another entry in the internet security hall of shame....

2005-08-31 Thread Ian G

James A. Donald wrote:

--
From:   [EMAIL PROTECTED] (Peter
Gutmann)


TLS-PSK fixes this problem by providing mutual
authentication of client and server as part of the key
exchange.  Both sides demonstrate proof-of- possession
of the password (without actually communicating the
password), if either side fails to do this then the
TLS handshake fails.  Its only downside is that it
isn't widely supported yet, it's only just been added
to OpenSSL, and who knows when it'll appear in
Windows/MSIE, Mozilla, Konqueror, Safari,



This will take out 90% of phishing spam, when widely
adopted.


Having read this now [1] I wonder if it is too hopeful
to expect TLS-PSK to be widely adopted in browsing.

( I'm guessing that you mean that the user's password
and username will be used to bootstrap the secure TLS
session - notwithstanding the comment in section 8 that
this is not the intention [2]. )

The issue I see here is that while the browser may have
access to this data, the server doesn't necessarily
have access to it.  In these days and times, major
websites are constructed with a plethora of methods
to do authentication, and they use a lot of frameworks
to handle all that.  In any given framework, the
distance (in code and layers and backends) between
the TLS code and the password code can be quite
large.  One artifact of this is the use of plain
forms to deliver the password rather than the
inbuilt, underlying unix-style password mechanism;
it is far too popular to implement the password
authentication of a user over the top of the
framework - in the application code - because
the framework never quite does what is needed.

Not only is there this distance, it is duplicated
across all languages and all the different auth
regimes and also for homegrown password auth,
over every application!  I wonder whether, given these
barriers, it will ever be possible to get change to
happen.

Or have I misunderstood something here?

(Note that this shouldn't be interpreted as saying
anything about the general utility of TLS-PSK in
other environments as per [2]...)

iang


[1] Pre-Shared Key Ciphersuites for Transport Layer Security (TLS)
http://www.ietf.org/internet-drafts/draft-ietf-tls-psk-09.txt
[2] "However, this draft is not intended for web password
authentication, but rather for other uses of TLS."

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Another entry in the internet security hall of shame....

2005-08-29 Thread Ian G

Anne & Lynn Wheeler wrote:

the major ISPs are already starting to provide a lot of security
software to their customers.

a very straight forward one would be if they provided public key
software ... to (generate if necessary) and register a public key in
lieu of password ... and also support the PPP  radius option of having
digital signature authentication in lieu of password checking
http://www.garlic.com/~lynn/subpubkey.html#radius


Right.  And do the primary authentication of the key
using some other mechanism that is outside the strict
crypto.

(IOW, Dave, your plan will work, as long as it is
built from ground up with no prior baggage!  IMHO!)

This is such a no-brainer that when I first came
across the solution over a decade ago now, I never
gave a thought as to how it could be anything but
the one way to do things.  It just works, and very
little else works anywhere as well.

Yet, we are still grubbing around like cavemen in
the mud.  And then there is this:

http://www.business2.com/b2/web/articles/print/0,17925,1096807,00.html

$5M  Mobile ID for Credit Card Purchases
WHO: John Occhipinti, Woodside Fund, Redwood Shores, Calif.
WHO HE IS: A former executive at Oracle and Netscape, Occhipinti is a managing 
director and security specialist, leading investments in BorderWare and Tacit.
WHAT HE WANTS: Fraudproof credit card authorization via cell phones and PDAs.
WHY IT'S SMART: Credit card fraud is more rampant than ever, and consumers aren't the only ones 
feeling the pain. Last year banks and merchants lost more than $2 billion to fraud. Most of that 
could be eliminated if they offered two-part authentication with credit and debit purchases -- 
something akin to using a SecureID code as well as a password to access e-mail. Occhipinti thinks 
the cell phone, packaged with the right software, presents an ideal solution. Imagine getting a 
text message on your phone from a merchant, prompting you for a password or code to approve the 
$100 purchase you just made on your home PC or at the mall. It's an extra step, but one that most 
consumers would be happy to take to safeguard their privacy. More important, Occhipinti says, big 
banks would pay dearly to be able to offer the service. "It's a killer app no one's touched
yet," Occhipinti says, "but the technology's within reach."
WHAT HE WANTS FROM YOU: A finished prototype application within eight months. "I'm
looking for the best technologists in security and wireless, the top 2 percent in their
industry," Occhipinti says. The team would need to be working with a handful of
banks and merchants ready to start trials, in hopes of licensing the technology or 
selling the company.
SEND YOUR PLAN TO: [EMAIL PROTECTED]

The funniest part of all is that even though we
know how to do it in our sleep, Paypal actually
built one as their original offering and threw
it away...


at that point your public key is now registered with your ISP ... and
possibly could be used for other things as well ... and scaffolding for
a certificateless trust infrastructure.


Yup.  But this will only work if you go back to
basics and build the structure naturally around
the keys.  IOW, not using anything from PKI.


lots & lots of past postings on SSL landscape
http://www.garlic.com/~lynn/subpubkey.html#sslcert


Watching security thinking advance is like watching
primates evolve from close distance.  Either we die
of old age before anything happens, or we get clubbed
to death...

iang



Re: e2e all the way (Re: Another entry in the internet security hall of shame....)

2005-08-27 Thread Ian G

Steven M. Bellovin wrote:

Do I support e2e crypto?  Of course I do!  But the cost -- not the
computational cost; the management cost -- is quite high; you need
to get authentic public keys for all of your correspondents.  That's
beyond the ability of most people.


I don't think it is that hard to do e2e security.  Skype does it.
Fully transparently.



Really?  You know that the public key you're talking to corresponds to 
a private key held by the person to whom you're talking?  Or is there a 
MITM at Skype which uses a per-user key of its own?


yes, this is the optimisation that makes Skype work,
it is (probably) vulnerable to an MITM at the center.

This is a tradeoff.  What it means is that the center
can do an active attack.  But it can't do a passive
attack (this is speculation but it seems reasonable
or at least achievable).

That's a good deal for users, when you consider their
alternative.  Fantastic value for money, really, it's
really very hard to criticise...


Another option: I would prefer ssh style cached keys and warnings if
keys later change (opportunistic encryption) to a secure channel to
the UTP (MITM as part of the protocol!).

Ssh-style is definitely not hard.  I mean nothing is easier no doubt
than slapping an SSL tunnel over the server mediated IM protocol,


The evidence suggests that if you just slap an SSL
tunnel in place, you end up with an ongoing mess of
key management - ref - what this thread started with
from google.  If you do the thing properly, and
build it opportunistically, with the option of
upgrading to signed certs for those that really
want that, you can avoid all that.  But few do, for
some reason, or maybe those successful cases we just
never hear about because they work without fuss...

When SSL is your hammer, everything gets nailed as
a server.
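The ssh-style cache-and-warn behaviour discussed above takes only a few lines to state; in this sketch (the in-memory dict stands in for a known_hosts file, and all names are invented for illustration) a SHA-256 fingerprint is pinned per peer on first sight:

```python
import hashlib

known_hosts: dict[str, str] = {}   # host -> hex fingerprint (would be a file on disk)

def check_key(host: str, public_key: bytes) -> str:
    # Trust-on-first-use: pin the key the first time we see a host,
    # accept silently if it matches later, warn loudly if it changes.
    fp = hashlib.sha256(public_key).hexdigest()
    pinned = known_hosts.get(host)
    if pinned is None:
        known_hosts[host] = fp         # first contact: remember the key
        return "new key pinned"
    if pinned == fp:
        return "key matches"           # same key as before: proceed
    return "WARNING: key changed"      # possible MITM: make the user decide

key1, key2 = b"server-key-v1", b"server-key-v2"
print(check_key("chat.example.com", key1))  # new key pinned
print(check_key("chat.example.com", key1))  # key matches
print(check_key("chat.example.com", key2))  # WARNING: key changed
```

No third party is consulted at any point; the only assurance is continuity, which is exactly the tradeoff argued over in this thread.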


Here's the problem for a protocol designer.  You want to design a 
protocol that will work as securely as possible, on a wide range of 
platforms, over a reasonably long period of time.


On this I think we'd all agree.  Although I'd also
add that it should be economic - if it doesn't deploy
then it does no good.

What do you do?  If 
you engineer only for e2e security, you end up in a serious human 
factors trap (cue Why Johnny Can't Encrypt and Simson Garfinkel's 
dissertation).  Instead, I recommend engineering for a hybrid scenario 
-- add easy-to-use client-to-server security, which solves at least 75% 
of most people's threats (I suspect it's really more like 90-95%), 
while leaving room in the protocol for e2e security for people who need 
it and/or can use it, especially as operating environments change.  
This is precisely what Jabber has done.


It's fascinating that you see this and I wish you'd
share the threats you see.  I see only node threats,
you see only wire threats.  Why is this?

(I can quote reams and reams of news articles that
point to merchant data losses and PC malware and virus
attacks... but it would be boring.  Just ask Lynn for
his feed ...)

My view of the p2p threat model:

  other party - 70%
  own node- 20%
  center  - 10%

To an accuracy of +/- X%.  Obviously, the wire
threats - that are protected by Jabber's SSL and the
like - are in the noise somewhere there (but I expect
them to get much more aggressive in the future).

Another way of looking at this is to ask what the damage
is.  If your chat traffic is breached by some random
threatening outsider, what does he gain?  Nothing, so
it doesn't take a PhD to realise nobody's interested.
But if your listener is a *related* other party and
has your messages, then that's a whole other story...
This is why for example the most popular IM security
system is the discarded nym.


iang



Re: Another entry in the internet security hall of shame....

2005-08-27 Thread Ian G

Steven M. Bellovin wrote:

But this underscores one of my points: communications security is fine, 
but the real problem is *information* security, which includes the 
endpoint.  (Insert here Gene Spafford's comment about the Internet, 
park benches, cardboard shacks, and armored cars.)


*That* makes much more sense, ignore my earlier email.

http://homes.cerias.purdue.edu/~spaf/quotes.html

  Secure web servers are the equivalent of heavy armored cars.
  The problem is, they are being used to transfer rolls of
  coins and checks written in crayon by people on park benches
  to merchants doing business in cardboard boxes from beneath
  highway bridges. Further, the roads are subject to random
  detours, anyone with a screwdriver can control the traffic
  lights, and there are no police.

iang



Re: Another entry in the internet security hall of shame....

2005-08-25 Thread Ian G

Trei, Peter wrote:


Self-signed certs are only useful for showing that a given
set of messages are from the same source - they don't provide
any trustworthy information as to the binding of that source
to anything.


Perfectly acceptable over chat, no?  That is,
who else would you ask to confirm that you're
chatting to your buddy?

iang



Re: Another entry in the internet security hall of shame....

2005-08-25 Thread Ian G

Tim Dierks wrote:

[resending due to e-mail address / cryptography list membership issue]

On 8/24/05, Ian G [EMAIL PROTECTED] wrote:


Once you've configured iChat to connect to the Google Talk service, you may
receive a warning message that states your username and password will be
transferred insecurely. This error message is incorrect; your username and
password will be safely transferred.



iChat pops up the warning dialog whenever the password is sent to the
server, rather than used in a hash-based authentication protocol.
However, it warns even if the password is transmitted over an
authenticated SSL connection.

I'll leave it to you to decide if this is:
 - an iChat bug
 - a Google security problem
 - in need of better documentation
 - all of the above
 - none of the above


none of the above.  Using SSL is the wrong tool
for the job.  It's a chat message - it should be
encrypted end to end, using either OpenPGP or
something like OTR.  And even then, you've only
covered about 10% of the threat model - the
server.

But, if people do use the wrong tool for the
job, they will strike these issues...

iang



Re: Another entry in the internet security hall of shame....

2005-08-24 Thread Ian G

In another routine event in the adventure known as
getting security to work in spite of the security,
I just received this ...

[fwd]

When creating a google talk compatible IM personality in Apple's iChat you
get the following warning on the Google Help pages:
-=-=-
12. Check the boxes next to 'Connect using SSL' and 'Allow self-signed
certificates.' You don't need to check the box next to 'Warn before
password is sent insecurely' -- your password is always secure with Google
Talk.

Congratulations! You are now ready to connect to the Google Talk service
using iChat.

Once you've configured iChat to connect to the Google Talk service, you may
receive a warning message that states your username and password will be
transferred insecurely. This error message is incorrect; your username and
password will be safely transferred.
-=-=-

hmm



Re: ID theft -- so what?

2005-08-14 Thread Ian G

Ben Laurie wrote:

Ian Grigg wrote:


Too many words?  OK, here's the short version
of why phishing occurs:

"Browsers implement SSL+PKI, and SSL+PKI is
secure, so we don't need to worry about it."

PKI+SSL *is* the root cause of the problem.  It's
just not the certificate level but the business and
architecture level.  The *people* equation.



PKI+SSL does not _cause_ the problem, it merely fails to solve it. You 
may as well blame HTTP - in fact, it would be fairer.


Well, blaming a protocol which is an inanimate
invention of man is always unfair, but so is
avoiding the issues by quibbling on the meanings.

Blaming HTTP is totally unfair as it never ever
promised to protect against spoofs.

PKI+SSL promised to detect and cover spoofs.  In
fact, the original point of PKI was to close out
the MITM or spoof, and was then enlarged somewhat
confusingly to provide some sort of commerce
guarantee on the stated identity (c.f, Lynn's
amusing stories of CAs gone mad with dollarlust.)

Originally, Netscape's browser implemented the
complete anti-spoofing UI and included more info
on the screen.  This was then dropped in the
screen wars, against the advice of security
engineers at Netscape.  (Ref:  comments by Bob
R.)

So, to repeat:It's not the certificate
level but the business and architecture level.
The *people* equation.  It's the people who
implement the PKI+SSL model and don't do it
properly that are the root cause of phishing.

Petnames, Trustbar, DSS are some of the solutions
that work *positively* and *constructively* to
close the loopholes in the browser's implementation
of PKI+SSL.

iang



The encrypt everything problem

2005-06-08 Thread Ian G
On Wednesday 08 June 2005 18:33, [EMAIL PROTECTED] wrote:
 Ken Buchanan wrote:

 Another area where I predict vendors will (should) offer built-in
 solutions is with database encryption.  A lot of laws require need-to-know
 based access control, and with DBAs being able to see all entries that is
 a problem.  Also backups of db data can be a risk.
 Oracle, for example, provides encryption functions, but the real problem
 is the key handling (how to make sure the DBA can't get the key, cannot
 call functions that decrypt the data, key not copied with the backup,
 etc.).
 There are several solutions for the key management, but the vendors should
 start offering them.


Yes, this is a perfect example of where we need tools
that can make this use of crypto more transparent.

Of course, anyone who's worked on big database
projects must have realised that they've drifted somewhat
away from the idealistic vision of the relational story
(as told by Codd? Date?  some other guys no doubt)
and adding encryption and key handling to that is just
like throwing sand into the machine.

I'd suspect most of us here could have a fair stab at
the encrypted tapes problem.  But we'd not get nearly
as far with the encrypted database problem.

I think this is one area where databases are going to
continue to create more noise than value, and things
like capabilities are more likely to advance, simply as
they are looking more clearly at the underlying data
and the connections and authorisations that need to
be protected.

iang
-- 
Advances in Financial Cryptography:
   https://www.financialcryptography.com/mt/archives/000458.html



Re: Papers about Algorithm hiding ?

2005-06-07 Thread Ian G
On Tuesday 07 June 2005 14:52, John Kelsey wrote:
 From: Ian G [EMAIL PROTECTED]
 Sent: Jun 7, 2005 7:43 AM
 To: John Kelsey [EMAIL PROTECTED]
 Cc: Steve Furlong [EMAIL PROTECTED], cryptography@metzdowd.com
 Subject: Re: Papers about Algorithm hiding ?

 [My comment was that better crypto would never have prevented the
 Choicepoint data leakage. --JMK]

OK, yes, you are right, we are talking about two
different things.

The difficulty here is that there is what we might call
the Choicepoint syndrome, and then there are the
specific facts about the actual Choicepoint heist.
When I say Choicepoint I mean the former, and the
great long list of similar failures as posted last week.
I.e., it is a syndrome that might be characterised as
"companies are not protecting data" or, in other words,
"the threat is on the node, not the wire."

Whereas in the specific Choicepoint heist, there is
the precise issue that they are selling their data to
someone.  That's much more complex, and crypto
won't change that issue, easily.


 Sure it would.  The reason they are not using the tools is because
 they are too hard to use.  If the tools were so easy to use that it
 was harder to not use them, then they'd be used.  Consider Citigroup
 posted today by Bob.  They didn't encrypt the tapes because the tools
 don't work easily enough for them.

 So, this argument might make sense for some small business, but
 Citigroup uses a *lot* of advanced technology in lots of areas, right?
 I agree crypto programs could be made simpler, but this is really not
 rocket science.  Here's my guess: encrypting the data would have
 required that someone make a policy decision that the data be
 encrypted, and would have required some coordination with the credit
 agency that was receiving the tapes.  After that, there would have
 been some implementation costs, but not all *that* many costs.
 Someone has to think through key management for the tapes, and
 that's potentially a pain, but it's not intractible.  Is this really
 more complicated than, say, maintaining security on their publically
 accessible servers, or on their internal network?

No it's not rocket science - it's economic science.
It makes no difference in whether the business is
small or large - it is simply a question of costs.  If
it costs money to do it then it has to deliver a
reward.

In the case of the backup tapes there was no reward
to be enjoyed.  So they could never justify encrypting
them if it were to cost any money.  Now, in an unusual
exception to the rule that laws cause costs without
delivering useful rewards, the California law SB
changed all that by adding a new cost:  disclosure.
(Considering that banks probably lose a set of backups
every year and have been doing so since whenever,
it's not the cost of the tapes or the potential for ID theft
that we care about...)

Now consider what happens when we change the
cost structure of crypto such that it is easier to do it
than not.  This is a *hypothetical* discussion of course.

Take tar(1) and change it such that every archive is
created as an encrypted archive to many public keys.
Remove the mode where it puts the data in the clear.
Then encrypt to a big set of public keys such that
anyone who can remotely want the data can decrypt
it (this covers the biggest headache which is when
you want the data it is no longer readable).

So, now it becomes trivial to make an encrypted
backup.  In fact, it is harder to make an unencrypted
backup.  What are companies going to do?  Encrypt,
of course.  Because it costs to do anything else.
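The "encrypt to a big set of public keys" step is the usual envelope construction: encrypt the archive once under a random data key, then wrap that one key separately for each recipient. The sketch below shows only the shape, with a deliberately toy wrap (XOR against a hash of a per-recipient secret, NOT real cryptography); actual code would wrap with each recipient's public key, e.g. via OpenPGP:

```python
import hashlib
import os

def wrap(recipient_secret: bytes, data_key: bytes) -> bytes:
    # Toy key-wrap: XOR the 32-byte data key with a hash of the
    # recipient's secret.  XOR is its own inverse, so the same call
    # unwraps.  A real implementation would use the recipient's
    # PUBLIC key (RSA / ECIES / OpenPGP); this only shows the shape.
    pad = hashlib.sha256(recipient_secret).digest()
    return bytes(a ^ b for a, b in zip(data_key, pad))

def make_envelope(recipients: dict[str, bytes], data_key: bytes) -> dict[str, bytes]:
    # One wrapped copy of the same data key per recipient: anyone on
    # the list can recover it, but the archive itself is encrypted
    # only once.
    return {name: wrap(secret, data_key) for name, secret in recipients.items()}

data_key = os.urandom(32)   # this key would encrypt the actual tar archive
recipients = {"ops": os.urandom(32), "audit": os.urandom(32)}
envelope = make_envelope(recipients, data_key)

# Any listed recipient unwraps with their own secret:
recovered = wrap(recipients["audit"], envelope["audit"])
print(recovered == data_key)   # True
```

The point of encrypting to many recipients is exactly the one above: the biggest operational headache is wanting the data back and finding it unreadable, and a wide recipient list defuses that.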



 The other way of looking at Choicepoint - change the incentives - is
 a disaster.  It will make for a compliance trap.  Compliance *may*
 protect the data or it may have completely the opposite effect, the
 situation with 'unintended consequences' in such a case is likely to
 be completely unpredictable.  The only thing we can guarantee is that
 costs will go up.

 Well, Choicepoint is a bit different, right?  I mean, as I understand
 it the big disclosure happened because they sold peoples' data to
 criminals, but they were in the business of selling peoples' data.
 They just intended to sell it only to people of good intention, as far
 as I can tell.  (Perhaps they should have demanded X.509 certificates
 from the businesses buying the data and checked the evil bit.)  I
 just can't see how cryptography could have helped prevent that attack,
 other than by making the data that Choicepoint depends on harder to
 get in the first place.

Yes, you are right, I was thinking Choicepoint syndrome
here.  In order to address Choicepoint-actual with crypto
we'd have to look at Rights systems: nyms, caps and Brands,
or address it at the business level.

 It's much cheaper and much more secure to simply
 improve the tools.

 But this does no good whatsoever if there's not some reason for the
 people holding the data to use those tools.

Yes, that's why I'm saying that the tools should actually
make

Re: Papers about Algorithm hiding ?

2005-06-04 Thread Ian G
On Thursday 02 June 2005 13:50, Steve Furlong wrote:
 On 5/31/05, Ian G [EMAIL PROTECTED] wrote:
  I don't agree with your conclusion that hiding algorithms
  is a requirement.  I think there is a much better direction:
  spread more algorithms.  If everyone is using crypto then
  how can that be relevant to the case?

 This is so, in the ideal. But if everyone would only... never seems
 to work out in practice. Better to rely on what you can on your own or
 with a small group.

The number of people who are involved is actually quite
small if you think it through.  It's more a shift in attitude that
is the barrier, not a large number of people who have to
be sold.

GPG is an application that could be delivered by default
in all free OSs.  BSD is more or less installed automatically
with SSH installed.  Linux machines that are set up are
also generally set up with SSH.

From there it isn't a large step conceptually to install GPG
in the base installs.  Start with the BSDs (because they
understand security) and Linux (because they understand
cool).

It's also not a large step to add a special hook into SSH
and browsers to add a simple file encryption utility.  Just
like OpenPGP's secret key mode.  It doesn't have to be
good, it just has to be there.  A lot of machines have OpenSSL
in them (this is how we get easy access to SHA1).  Can we
add a simple file encrypt to that?

Once all the Unixen have these, the next step is to encourage
a little usage...  All you need to do is have one person that
you communicate with like your brother or sister for the fun
of doing some crypto chat, and it now becomes a regular
*non-relevant* issue.  All we need to do is to encrypt and
protect one file and encryption becomes easy.

 In response to Hadmut's question, for instance, I'd hide the crypto
 app by renaming the executable. This wouldn't work for a complex app
 like PGP Suite but would suffice for a simple app. Rename the
 encrypted files as well and you're fairly safe. (I've consulted with
 firms that do disk drive analysis. From what I've seen, unless the
 application name or the data file extensions are in a known list, they
 won't be seen. But my work has been in the realm of civil suits,
 contract disputes, SEC claims, and the like; the investigators might
 be more thorough when trying to nail someone for kiddie porn.)

Right.  If they find any evidence of information hiding
other than a boring OpenPGP install that is as common
as crazy frog mp3s, then that's what I'd call highly relevant
evidence.  That would make matters worse for the particular
case at hand.

Information hiding is real sexy.  I wouldn't recommend it
for anyone who isn't really sure of their situation, and is
willing to understand that if he gets caught with it, he's
dead.

 Or use another app which by the way has crypto. Winzip apparently has
 some implementation flaws
 (http://www.cse.ucsd.edu/users/tkohno/papers/WinZip/ ) but a quick
 google doesn't show anything but brute force and dictionary attacks
 against WinRar.

Certainly using another app is fine.  What would be more
relevant to the direct issue is that it becomes routine to
encrypt and to have encryption installed.  See the recent
threads on where all the data is being lost - user data is
being lost simply because the companies don't protect
it.  Why aren't they protecting it?  Because there are no
easy tools that are built in to automatically and easily
protect it.

The picture here is becoming overwhelmingly clear - in order
to protect users we should be employing as much crypto as
we can openly, opportunistically, and easily.  Anything that
holds back from users protecting their data is a bad, and
anything that moves them forward in protecting their data
is a good.

iang
-- 
Advances in Financial Cryptography:
   https://www.financialcryptography.com/mt/archives/000458.html



Re: [Clips] Storm Brews Over Encryption 'Safe Harbor' in Data Breach Bills

2005-06-03 Thread Ian G
On Friday 03 June 2005 14:38, Greg Rose wrote:
 At 00:48 2005-06-03 +0100, Ian G wrote:
 Just to make it more interesting, the AG of New York, Eliot Spitzer
 has introduced a package of legislation intended to rein in identity
  theft including:
 
Facilitating prosecutions against computer hackers by creating
specific criminal penalties for the use of encryption to conceal
a crime, to conceal the identity of another person who commits
a crime, or to disrupt the normal operation of a computer;

 Ah, imagine the beautiful circularity of the Justice Department using
 encryption to protect their criminal identity database from disclosure...
 or not.

They might have a problem with meeting the legal requirements
for disclosure if the alleged criminals were not as yet behind bars... 
I wonder if bin Laden would have an action against the Justice
Department if his file was stolen?

Anyway...

FBI Probes Theft of Justice Dept. Data
http://www.washingtonpost.com/wp-dyn/content/article/2005/05/31/AR2005053101379.html


The FBI is investigating the theft of a laptop computer containing travel 
account information for as many as 80,000 Justice Department employees, but 
it is unclear how much personal data are at risk of falling into the wrong 
hands.
Authorities think the computer was stolen between May 7 and May 9 from Omega 
World Travel of Fairfax, which is one of the largest travel companies in the 
Washington area and does extensive business with government agencies.

 
  Justice Department spokeswoman Gina Talamona said the data included names 
and account numbers from travel account credit cards issued to government 
employees by J.P. Morgan Chase & Co. and its subsidiary Bank One Corp.
She said the information did not include Social Security numbers or home 
addresses that often are used by identity thieves to establish credit or to 
purchase goods in other people's names.
In addition, she said the account information was protected by passwords, 
although sophisticated hackers often can break into stored databases.
Omega World Travel officials declined to comment on how the laptop was stolen 
or other elements of the case, as did the FBI, which is investigating.
The theft is one of a spate of incidents over the past several months that 
have resulted in sensitive data on millions of U.S. consumers being stolen or 
exposed.
In December, Bank of America Corp. lost computer tapes containing records on 
1.2 million federal workers, including several U.S. senators.
Talamona said that no Justice Department worker has reported suspicious 
activity on his or her financial accounts since the incident.
The banks issuing the travel cards have placed alerts on the workers' 
accounts, Talamona said.
She added that Omega World Travel has agreed to several changes to its 
security practices, including beefing up physical security at its offices, 
conducting a computer security review and ensuring that the stolen computer 
cannot be reconnected to the firm's network.
The travel cards have not been canceled, Talamona said.

-- 
Advances in Financial Cryptography:
   https://www.financialcryptography.com/mt/archives/000458.html

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Digital signatures have a big problem with meaning

2005-06-02 Thread Ian G
On Wednesday 01 June 2005 15:07, [EMAIL PROTECTED] wrote:
 Ian G writes:
  | In the end, the digital signature was just crypto
  | candy...

 On the one hand a digital signature should matter more
 the bigger the transaction that it protects.  On the
 other hand, the bigger the transaction the lower the
 probability that it is between strangers who have no
 other leverage for recourse.

Yes, indeed!  The thing about a signature is that
*it* itself - the mark on paper or the digital result
of some formula - isn't the essence of signing.

The essence of the process is something that
lawyers call "intent" (I'm definitely not clear on
these words so if there are any real lawyers in
the house...).  And, when the dispute comes to
court, the process is not one of proving the
signature but of showing intent.

And as the transaction gets bigger, the process
of making and showing intent gets more involved,
more complex.  So it is naturally ramped up to the
transaction, in a way that digsigs just totally miss
out on.

Which means that the digital signature school
got it completely wrong.  A digital signature is
only just one more element in a process that
is quite complex, involved, and goes back into
history more years than we can count.  It is
therefore completely unlikely that a digsig will
ever replace all that;  however it is quite possible
that a digsig could comfortably add a new element
to that process.

(Speaking here of common law, which is not
universally applicable...)

 And, of course, proving anything by way of dueling
 experts doesn't provide much predictability in a jury
 system, e.g., OJ Simpson.

And this is where we found for example the OpenPGP
cleartext digital signature to be the only one that
has any merit.  Because it can be printed on paper,
and that piece of paper can be presented to the
jury of an O.J. Simpson style case, or even a Homer
Simpson style case, this carries weight.

An OpenPGP clear text signature carries weight
because it is there, in black and white, and no
side would dare to deny that because they know
it would be a simple matter to go to the next level.

But any other form of non-printable digital signature
is not presentable to a jury.  What are you going
to do? Throw a number in front of a jury and say it's a
signature on another number?  It's a mental leap of
orders of magnitude more effort, and there are many
ways the other side could sidestep that.
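To make the printability point concrete: an OpenPGP cleartext signature (the clear-signed format of RFC 2440) keeps the message human-readable and wraps it in printable armour, so the whole artifact goes onto paper as-is. Roughly (message and signature data here are shortened and purely illustrative):

```text
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

I agree to pay ten gold units to Bob for goods received.
-----BEGIN PGP SIGNATURE-----

iQEcBAEBAgAGBQJ... (base64-encoded signature, shortened here)
-----END PGP SIGNATURE-----
```

Everything a jury needs -- the agreed words and the mark that binds them -- sits in black and white on one page.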

iang

PS: To get this in x.509, we coded up cleartext
sigs into the x.509 format.


Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-02 Thread Ian G
Ahh-oops!  That particular reply was scrappily written
late at night and wasn't meant to be sent!  Apologies
belatedly, I'd since actually come to the conclusion
that Steve's statement was strictly correct, in that
we won't ever *see* sniffing because SSL is in place,
whereas I interpreted this incorrectly perhaps as
SSL *stopped* sniffing.  Subtle distinctions can
sometimes matter.

So please ignore the previous email, unless a cruel
and unusual punishment is demanded...

iang


On Wednesday 01 June 2005 16:24, Ian G wrote:
 On Tuesday 31 May 2005 19:38, Steven M. Bellovin wrote:
  In message [EMAIL PROTECTED], Ian G writes:
  On Tuesday 31 May 2005 02:17, Steven M. Bellovin wrote:
   In message [EMAIL PROTECTED], James A. Donald 
writes:
   --
   PKI was designed to defeat man in the middle attacks
   based on network sniffing, or DNS hijacking, which
   turned out to be less of a threat than expected.
  
   First, you mean the Web PKI, not PKI in general.
  
   The next part of this is circular reasoning.  We don't see network
   sniffing for credit card numbers *because* we have SSL.
  
  I think you meant to write that James' reasoning is
  circular, but strangely, your reasoning is at least as
  unfounded - correlation not causality.  And I think
  the evidence is pretty much against any causality,
  although this will be something that is hard to show,
  in the absence.
 
  Given the prevalance of password sniffers as early as 1993, and given
  that credit card number sniffing is technically easier -- credit card
  numbers will tend to be in a single packet, and comprise a
  self-checking string, I stand by my statement.
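Steve's "self-checking string" point refers to the Luhn checksum that card numbers carry, which would let a sniffer validate candidate numbers offline. A minimal sketch in Python (the sample is the standard Visa test number, not a live account):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn self-check."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    # walk from the rightmost digit; double every second digit,
    # folding two-digit results back into one digit (18 -> 9)
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 4111 1111 1111 1111 is the standard Visa test number
assert luhn_valid("4111111111111111")
assert not luhn_valid("4111111111111112")
```

So a captured packet can be sifted for plausible card numbers with almost no work, which is what makes the string self-checking.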

 Well, I'm not arguing it is technically hard.  It's just
 un-economic.  In the same sense that it is not technically
 difficult for us to get in a car and go run someone
 over;  but we still don't do it.  And we don't ban the
 roads nor insist on our butlers walking with a red
 flag in front of the car, either.  Well, not any more.

 So I stand by my statement - correlation is not causality.

   * AFAICS, a non-trivial proportion of credit
  card traffic occurs over totally unprotected
  traffic, and that has never been sniffed as far as
  anyone has ever reported.  (By this I mean lots of
  small merchants with MOTO accounts that don't
  bother to set up proper SSL servers.)
 
  Given what a small percentage of ecommerce goes to those sites, I don't
  think it's really noticeable.

 Exactly my point.  Sniffing isn't noticeable.  Neither
 in the cases we know it could happen, nor in the
 areas.  The one place where it has been noticed is
 with passwords and what we know from that experience
 is that even the slightest security works to overcome
 that threat.  SSH is overkill, compared to the password
 mailouts that successfully protect online password sites.

   * We know that from our experiences
  of the wireless 802.11 crypto - even though we've
  got repeated breaks and the FBI even demonstrating
  how to break it, and the majority of people don't even
  bother to turn on the crypto, there remains practically
  zero evidence that anyone is listening.
  
FBI tells you how to do it:
https://www.financialcryptography.com/mt/archives/000476.
 
  Sure -- but setting up WEP is a nuisance.  SSL (mostly) just works.

 SSH just works - and it worked directly against the
 threat you listed above (password sniffing).  But it
 has no PKI to speak of, and this discussion is about
 whether PKI protects people, because it is PKI that is
 supposed to protect against spoofing - a.k.a. phishing.

 And it is PKI that stops SSL from just working.
 Anyone who's ever had to set up an Apache web
 server for SSL has to have asked themselves the
 question ... why doesn't this just work?

  As
  for your assertion that no one is listening, I'm not sure what kind of
  evidence you'd seek.  There's plenty of evidence that people abuse
  unprotected access points to gain connectivity.

 Simply, evidence that people are listening.  Sniffing
 by means of the wire.

 Evidence that people abuse to gain unprotected
 access is nothing to do with sniffing traffic to steal
 information.  That's theft of access, which is a fairly
 minor issue, especially as it doesn't have any
 economic damages worth speaking of.  In fact,
 many cases seem to be more accidental access
 where neighbours end up using each other's access
 points because the software doesn't know where the
 property lines are.

   Since many of
   the worm-spread pieces of spyware incorporate sniffers, I'd say that
   part of the threat model is correct.
  
  But this is totally incorrect!  The spyware installs on the
  users' machines, and thus does not need to sniff the
  wire.  The assumption of SSL is (as written up in Eric's
  fine book) that the wire is insecure and the node is
  secure, and if the node is insecure then we are sunk.
 
  I meant precisely what I said and I stand by my statement.  I'm quite
  well aware

Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-02 Thread Ian G
On Thursday 02 June 2005 11:33, Birger Tödtmann wrote:
 Am Mittwoch, den 01.06.2005, 15:23 +0100 schrieb Ian G:
 [...]

  For an example of the latter, look at Netcraft.  This is
  quite serious - they are putting out a tool that totally
  bypasses PKI/SSL in securing browsing.  Is it insecure?
  Yes of course, and it leaks my data like a seive as
  one PKI guy said.

 [...]

 What I currently fail see is the link to SSL.  Or, to its PKI model.

That's the point.  There is no link to SSL or PKI.
The only thing in common is the objective - to
protect the user when browsing.  Secure browsing
is now being offered by centralised database sans
crypto.

 Netcraft bypasses it, but I won't use Netcraft exclusively because I'm
 happy to use the crypto in SSL.  Netcraft and Trustbar are really nice
 add-ons to improve my security *with SSL*.  So where is the point?

Sure, I think it is a piece of junk, myself.  But I
am not important, I'm not an average user.
The only thing that is important is what the user
thinks and does.

When Netcraft announced their plugin had been
ported from IE to Firefox last week, they also
revealed that they had 60,000 downloads in
hours.  That tells us a few things.

Firstly, users want protection from phishing.

Secondly, Netcraft have succeeded enough
in the IE world in creating a user base for their
solution that it easily jumped across to the
Firefox userbase and scored impressive numbers
straight away.  Which tells us that it actually
delivers something useful (which may or may
not be security).  So we cannot discount that
the centralised database concept works well
enough by some measure or other.

So now we wait to see which model wins in
protecting the user from spoofing.

iang


Re: Citibank discloses private information to improve security

2005-06-02 Thread Ian G
On Wednesday 01 June 2005 23:38, Anne & Lynn Wheeler wrote:
 in theory, the KISS part of SSL's countermeasure for MITM-attack ... is:
 does the URL you entered match the URL in the provided certificate? An
 attack is inducing a fraudulent URL to be entered for which the
 attackers have a valid certificate.

Firefox have added a cert domain into the status bar
on the bottom of the browser.  This is part way to what
you suggest and a very welcome improvement to
browser security.

It falls short for (IMHO) 3 reasons:  1. the domain that
is shown isn't the certificate domain, but is something
amalgamated from the URL and the cert;  which then
breaks the independent check you are hoping for above.

2. The CA should be listed so as to complete the
security statement.  Something like "ThisCA signed the
This.Domain.Com cert."  This is done in the Mouseover,
but not displayed all the time, and it is possible to get a
Mouseover that shows a statement that is strictly false
because of 1. above.  (Bugs filed and all the rest...)

3. Another issue is that it is not big enough nor loud enough
in the Trustbar sense to break through the current user
teachings that they can ignore everything as it's all safe.
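Lynn's KISS countermeasure -- compare the name in the served certificate against the URL the user entered -- can be sketched with Python's stdlib ssl module. The matching helper below works on the dict shape returned by SSLSocket.getpeercert(); real browsers also consult subjectAltName entries and wildcard rules, which this sketch omits.

```python
import socket
import ssl

def cert_common_name(cert: dict) -> str:
    """Extract the subject CN from a getpeercert()-style dict."""
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return ""

def matches_entered_host(cert: dict, host: str) -> bool:
    """The independent check: cert name vs. the host the user typed.
    (Browsers also check subjectAltName entries and wildcards.)"""
    return cert_common_name(cert).lower() == host.lower()

def fetch_cert(host: str, port: int = 443) -> dict:
    """Handshake with a live server and return its verified certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# offline demonstration with a hand-built certificate dict
sample = {"subject": ((("commonName", "www.example.com"),),)}
assert matches_entered_host(sample, "www.example.com")
assert not matches_entered_host(sample, "evil.example.net")
```

Displaying the result of exactly this comparison, independently of the amalgamated URL, is the check point 1 above says the status bar fails to provide.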

 Rather than complex defense in depth ... all with cracks and
 vulnerabilities that attackers can wiggle around ... a better approach
 would be KISS solution that had integrated approach to existing systemic
 vulnerabilities. For instance, some sort of clear, un-obfuscated
 indications integrated with URL selection that can leverage the existing
 SSL MITM-attack countermeasures.

Yes, this would be a much better way forward.  Now,
bear in mind that the people writing the plugins would
give their left legs to get the attention and respect of
the browser manufacturers so as to create this
integrated solution.  See various other rants...

 The downside of a KISS integrated solution that eliminates existing
 systemic problems (and avoids creating complex layers, each with their
 individual cracks that the attackers can wiggle thru) ... is that the
 only current special interest for such a solution seems to be the
 victims. Some sort of fix that allows naive users to relate and enter
 specific trusted URLs associated with specific tasks could fix many of
 the existing infrastructure vulnerabilities. The issue is what
 institutions have financial interest in designing, implementing, and
 marketing such a likely free add-on to existing mostly free based
 infrastructure. It appears to be much easier justify the design,
 implementation and marketing of a totally new feature that can be
 separately charged for.

This will change.  I predict that the banks will end up
with the liability for phishing, for good or for bad, and
they will then find it in their hearts to finance the add-ons,
which will battle it out, thus leading to the 'best practices'
which will be incorporated into the browsers.

(Seeing as this is prediction time, I'll stick my neck
out another several kms and say it will be in about 6
months that the banks are asked to take on the liability.)

iang


Re: [Clips] Storm Brews Over Encryption 'Safe Harbor' in Data Breach Bills

2005-06-02 Thread Ian G
On Thursday 02 June 2005 19:28, R.A. Hettinga wrote:
 http://www.eweek.com/print_article2/0,2533,a=153008,00.asp
 Storm Brews Over Encryption 'Safe Harbor' in Data Breach Bills
 May 31, 2005

Just to make it more interesting, the AG of New York, Elliot Spitzer
has introduced a package of legislation intended to rein in identity theft,
including:

  Facilitating prosecutions against computer hackers by creating
  specific criminal penalties for the use of encryption to conceal
  a crime, to conceal the identity of another person who commits
  a crime, or to disrupt the normal operation of a computer;

Full PR is here:
https://www.financialcryptography.com/mt/archives/000449.html

I'm hoping this was a trial balloon.

iang


Cell phone crypto aims to baffle eavesdroppers

2005-06-02 Thread Ian G

Cell phone crypto aims to baffle eavesdroppers
By Munir Kotadia, ZDNet Australia

Published on ZDNet News: May 31, 2005, 4:10 PM PT

An Australian company last week launched a security tool for GSM mobile
 phones that encrypts transmissions to avoid eavesdroppers.

GSM, or Global System for Mobile Communications, is one of the most popular
 mobile phone standards and is built to provide a basic level of security.
 However, for more than five years the security has been cracked, and
 commercial scanners that can emulate GSM base stations are becoming more
 common. That prompted Melbourne-based SecureGSM to launch its encryption
 tool at the CeBit exhibition in Sydney last week.

Roman Korolik, managing director of SecureGSM, said that because GSM security
 was cracked so long ago, there was a lot of information and equipment
 available that could be used for intercepting GSM calls.

"There are devices available for interception and decoding (GSM calls) in
 real time... Although they are, strictly speaking, illegal in most countries,
 you can buy them," said Korolik, who believes that these scanners are
 already being used to intercept sensitive calls. "You can imagine that in
 places like the stock exchange, where the traders are on their mobile
 phones... there could be a few scanners there."

As far back as 1999, the security used by GSM has been questioned. In a paper
 published by Lauri Pesonen from the Department of Computer Science and
 Engineering at Helsinki University of Technology, the GSM model was said to
 have been broken on many levels.

"The GSM security model is broken on many levels and is thus vulnerable to
 numerous attacks targeted at different parts of an operator's network... If
 somebody wants to intercept a GSM call, he can do so. It cannot be assumed
 that the GSM security model provides any kind of security against a
 dedicated attacker," Pesonen wrote in the paper.

However, additional GSM security is unlikely to be used by the masses,
 according to Neil Campbell, national security manager of IT services company
 Dimension Data, who said companies are likely to have higher priorities.

"This is a security control like any other control--like a firewall or a
 policy. An organization needs to believe it is appropriate for their risks
 to implement this control. Obviously the military is one that you would
 expect to have a need for secure communications, but I wouldn't expect there
 to be too many organizations in this country that would think it necessary
 to encrypt their mobile phone conversations," said Campbell.

SecureGSM requires Windows Mobile Phone Edition
http://news.zdnet.com/2100-1040_22-5697127.html?tag=nl with an ARM or
 compatible processor running at 200MHz or better. It also requires 6MB of
 RAM (random access memory) and 2MB of storage space.

The SecureGSM application uses 256-bit, triple cipher, layered encryption
 based on AES, Twofish and Serpent ciphers. According to SecureGSM, all of
 these algorithms are considered unbreakable and the triple layer ensures
 that encrypted data is future proof. The product costs $188 (AU$249) for a
 single-user license, and each secure device requires a license.
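As a toy illustration of the layering idea -- each layer encrypts the previous layer's output under an independent key, so a break of one cipher still leaves the others standing -- here is a stdlib-only Python sketch. It is purely structural: none of AES, Twofish or Serpent ship with the Python stdlib, so a SHA-256 counter-mode keystream stands in for each cipher, and the key names below are illustrative, not the product's actual cryptography.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Expand a key into a pseudo-random stream (SHA-256 in counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def layer(data: bytes, key: bytes) -> bytes:
    """One cipher layer: XOR with the key's stream (self-inverse)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def cascade(data: bytes, keys: list) -> bytes:
    """Apply every layer in turn, one independent key per layer."""
    for k in keys:
        data = layer(data, k)
    return data

keys = [b"first-layer-key", b"second-layer-key", b"third-layer-key"]
ciphertext = cascade(b"attack at dawn", keys)
# XOR layers are self-inverse, so running the cascade again decrypts
assert cascade(ciphertext, keys) == b"attack at dawn"
```

A real cascade would use the actual block ciphers in a proper mode, where decryption must peel the layers in reverse order; the XOR layers here happen to commute, which keeps the sketch short.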

Dimension Data's Campbell said that companies thinking about implementing
 such a solution will need to calculate how much they could lose if their
 communications were intercepted.

"Share traders may need it, but this is for an organization that communicates
 by mobile telephone and understands that the risk of interception is
 generally extremely low, but that risk is completely unacceptable," Campbell
 said.

Munir Kotadia of ZDNet Australia reported from Sydney

Copyright ©2005 CNET Networks, Inc. All Rights Reserved.
http://news.zdnet.com/2100-1009_22-5726814.html




Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-01 Thread Ian G
On Wednesday 01 June 2005 10:35, Birger Tödtmann wrote:
 Am Dienstag, den 31.05.2005, 18:31 +0100 schrieb Ian G:
 [...]

  As an alternate hypothesis, credit cards are not
  sniffed and never will be sniffed simply because
  that is not economic.  If you can hack a database
  and lift 10,000++ credit card numbers, or simply
  buy the info from some insider, why would an
  attacker ever bother to try and sniff the wire to
  pick up one credit card number at a time?

 [...]

 And never will be...?  Not being economic today does not mean it
 couldn't be economic tomorrow.  Today it's far more economic to lift
 data-in-rest because it's fairly easy to get on an insider or break into
 the database itself.

Right, so we are agreed that listening to credit cards
is not an economic attack - regardless of the presence
of SSL.

Now, the point of this is somewhat subtle.  It is not
that you should turn off SSL.

The point is this:  you *could*
turn off SSL and it wouldn't make much difference
to actual security in the short term at least, and maybe
not even in the long term depending on the economic
shifts.

OK, so, are we agreed on that:  we *could* turn off
SSL, but that isn't the same thing as *should*?

If we've got that far we can go to the next step.

If we *could* turn off SSL then we have some breathing
space, some room to manoeuvre.  Some wiggle room.

Which means we could modify the model.  Which
means we could change the model, we could tune
the crypto or the PKI.  And in the short term, that
would not be a problem for security because there
isn't an economic attack anyway.  Right now, at
least.

OK so far?

This means that we could improve or decrease
its strength ... as our objectives suggest ... or we
could *re-purpose* SSL if this were so desired.

So we could for example use SSL and PKI to
protect from something else.  If that were an issue.

Let's assume phishing is an issue (1.2 billion
dollars of American money is the favourite number).

If we could figure out a way to change the usage
of SSL and PKI to protect against phishing, would
that be a good idea?

It wouldn't be a bad idea, would it?  How could it
be a bad idea when the infrastructure is in place,
and is not currently being used to defeat any
attack?

So, even in a stupidly aggressive worst case
scenario, if we were to turn off SSL/PKI in the process
and turn its benefit over to phishing, and discover
that it no longer protects against listening attacks
at all - remember I'm being ridiculously hypothetical
here - then as long as it did *some* benefit in
stopping phishing, that would still be a net good.

That is, there would be some phishing victims
who would thank you for saving them, and there
would *not* be any Visa merchants who would
necessarily damn your grandmother for losing
credit cards.  Not in the short term at least.

And if listening were to erupt in a frenzy in the
future it would likely be possible to turn off the
anti-phishing tasking and turn SSL/PKI back to
protecting against eavesdropping.  Perhaps as
a tradeoff between the credit card victim and
the phishing victim.

But that's just stupidly hypothetical.  The main
thing is that we can fiddle with SSL/PKI if we want
to and we can even afford to make some mistakes.

So the question then results in - could it be used
to benefit phishing?  I can point at some stuff that
says it will be.

But every time this good stuff is suggested, the
developers, cryptographers, security experts and
what have you suck air between their teeth in and
say you can't change SSL or PKI because of this
crypto blah blah reason.

My point is you can change it.  Of course you
can change it - and here's why:  it's not being
economically used over here (listening), and
right over there (phishing), there is an economic
loss waiting attention.


 However, when companies finally find some 
 countermeasures against both attack vectors, adversaries will adapt and
 recalculate the economics.  And they may very well fall back to sniffing
 for data-in-flight, just as they did (and still do sometimes now) to get
 hold of logins and passwords inside corporate networks in the 80s and
 90s.  If it's more difficult to hack into the database itself than to
 break into a small, not-so-protected system at a large network provider
 and install a sniffer there that silently collects 10,000++ credit card
 numbers over some weeks - then sniffing *is* an issue.  We have seen it,
 and we will see it again.  SSL is a very good countermeasure against
 passive eavesdropping of this kind, and a lot of data suggests that
 active attacks like MITM are seen much less frequently.


All that is absolutely true, in that we can conjecture
that if we close everything else off, then sniffing will
become economic.  That's a fair statement.

But, go and work in one of these places for a while,
or see what Perry said yesterday:

 The day to day problem of security at real financial institutions is
 the fact that humans are very poor

Digital signatures have a big problem with meaning

2005-06-01 Thread Ian G
On Tuesday 31 May 2005 23:43, Anne & Lynn Wheeler wrote:
 in most business scenarios ... the relying party has previous knowledge
 and contact with the entity that they are dealing with (making the
 introduction of PKI digital certificates redundant and superfluous).

Yes, this is directly what we found with the signed
contracts for digital instruments (aka ecash).  We did
all the normal digital signature infrastructure (using PGP
WoT and even x.509 PKI for a while) but the digsig
never actually made or delivered any meaningful biz
results.  In contrast, it was all the other steps that
we considered from the biz environment that made
the difference:  a readable contract, a guarantee
that it wouldn't change, a solid linkage to every
transaction, and so forth and so on.

In the end, the digital signature was just crypto
candy.  We preserve it still because we want to
experiment with WoT between issuers and governance
roles, and because we need a signing process of
some form.  In any small scenario (< 1000 users)
that sort of linkage is better done outside the tech
and for large scenarios it is simply unproven whether
it can deliver.

http://iang.org/papers/ricardian_contract.html

iang

PS: must look up the exec summary of aads one day!


Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-01 Thread Ian G
On Tuesday 31 May 2005 23:43, Perry E. Metzger wrote:
 Ian G [EMAIL PROTECTED] writes:

Just on the narrow issue of data - I hope I've
addressed the other substantial points in the
other posts.

  The only way we can overcome this issue is data.

 You aren't going to get it. The companies that get victimized have a
 very strong incentive not to share incident information very
 widely.

On the issue of sharing data by victims, I'd strongly
recommend the paper by Schechter and Smith, FC03.
 "How Much Security is Enough to Stop a Thief?"
http://www.eecs.harvard.edu/~stuart/papers/fc03.pdf
I've also got a draft paper that argues the same thing
and speaks directly and contrarily to your statement:

Sharing data is part of the way towards better security.

(But I argue it from a different perspective to SS.)


 1) You have one anecdote. You really have no idea how
frequently this happens, etc.

The world for security in the USA changed dramatically
when Choicepoint hit.  Check out the data at:

http://pipeda.blogspot.com/2005/02/summaries-of-incidents-cataloged-on.html
http://www.strongauth.com/regulations/sb1386/sb1386Disclosures.html

Also, check out Adam's blog at

http://www.emergentchaos.com/

He has a whole category entitled Choicepoint for
background reading:

http://www.emergentchaos.com/archives/cat_choicepoint.html

Finally we have our data in the internal governance
and hacking breaches.  As someone said today, "Amen
to that."  No more arguments, just say "Choicepoint".

 2) It doesn't matter how frequently it happens, because no two
companies are identical. You can't run 100 choicepoints and see
what percentage have problems.

We all know that the attacker is active and can
change tactics.  But locksmiths still recommend
that you put a lock on your door that is a) a bit
stronger than the door and b) a bit better than your
neighbours.  Just because there are interesting
quirks and edge cases in these sciences doesn't
mean we should wipe out other aspects of our
knowledge of scientific method.

 3) If you're deciding on how to set up your firm's security, you can't
    say "95% of the time no one attacks you so we won't bother," for
    the same reason that you can't say "if I drive my car while
    slightly drunk 95% of the time I'll arrive safe," because the 95%
of the time that nothing happens doesn't matter if the cost of the
5% is so painful (like, say, death) that you can't recover from
it.

Which is true regardless of whether you are
slightly drunk or not at all or whether a few
pills had been taken or tiredness hits.

Literally, like driving when not 100% fit, the
decision maker makes a quick decision based
on what they know.  The more they know, the
better off they are.  The more data they have,
the better informed their decision.

 In particular, you don't want to be someone on whose watch a
 major breach happens. Your career is over even if it never happens
 to anyone else in the industry.

Sure.  Life's a bitch.  One can only do ones
best and hope it doesn't hit.  But have a read
of SS' paper, and if you still have the appetite,
try my draft:

http://iang.org/papers/market_for_silver_bullets.html

 Statistics and the sort of economic analysis you speak of depends on
 assumptions like statistical independence and the ability to do
 calculations. If you have no basis for calculation and statistical
 independence doesn't hold because your actors are not random processes
 but intelligent actors, the method is worthless.

No, that's way beyond what I was saying.

I was simply asserting one thing:  without data, we do
not know if an issue exists.  Without even a vaguely
measured sense of seeing it in enough cases to know
it is not an anomaly, we simply can't differentiate it
from all the other conspiracy theories, FUD sales,
government agendas, regulatory hobby horses,
history lessons written by victors, or what-have-you.

Ask any manager.  Go to him or her with a new
threat.  He or she will ask "who has this happened
to?"

If the answer is "it used to happen all the time in
1994 ..." then a manager could be forgiven for
deciding the data was stale.  If the answer is
"no-one," then no matter how risky, the likely
answer is "get out!"  If the answer is "these X
companies in the last month" then you've got
some mileage.

Data is everything.

iang


Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-01 Thread Ian G
Hi Birger,

Nice debate!


On Wednesday 01 June 2005 13:52, Birger Tödtmann wrote:
 Am Mittwoch, den 01.06.2005, 12:16 +0100 schrieb Ian G:
 [...]

  The point is this:  you *could*
  turn off SSL and it wouldn't make much difference
  to actual security in the short term at least, and maybe
  not even in the long term depending on the economic
  shifts.

 Which depends a bit on the scale of your "could switch off".  If some
 researchers start switching it off / inventing / testing something new,
 then your favourite phisher would not care, that's right.

Right.  That's the point.  It is not a universal
and inescapable bad to fiddle with SSL/PKI.

 [...]

  But every time this good stuff is suggested, the
  developers, cryptographers, security experts and
  what have you suck air between their teeth in and
  say you can't change SSL or PKI because of this
  crypto blah blah reason.
 
  My point is you can change it.  Of course you
  can change it - and here's why:  it's not being
  economically used over here (listening), and
  right over there (phishing), there is an economic
  loss waiting attention.

 Maybe.  But there's a flip-side to that coin.  SSL and correlated
 technology helped to shift the common attack methods from sniffing (it
 was widely popular back then to install a sniffer whereever a hacker got
 his foot inside a network) towards advanced, in some sense social
 engineering attacks like phishing *because* it shifted the economics
 for the adversaries as it was more and more used to protect sensitive
 data-in-flight (and sniffing wasn't going to get him a lot of credit
 card data anymore).


OK, and that's where we get into poor use of
data.  Yes, sniffing of passwords existed back
then.  So we know that sniffing is quite possible
and on reasonable scale, plausible technically.

But the motive of sniffing back then was different.
It was for attacking boxes.  Access attack.  Not
for the purpose of theft of commercial data.  It
was a postulation that those that attacked boxes
for access would also sniff for credit cards.  But,
we think that to have been a stretch (hence the
outrageous title of this post) at least up until
recently.

Before 2004, these forces and
attackers were disconnected.  In 2004 they joined
forces.  In which case, you do now have quite a
good case that the installation of sniffers could be
used if there was nothing else worth picking up.
So at least we now have the motive cleared up,
if not the economic attack.

(Darn ... I seem to have argued your case for you ;-) )

 That this behaviour (sniffing) is a thing of the past does not mean it's
 not coming back to you if things are turned around: adversaries are
 strategically thinking people that adapt very fast to new circum-
 stances.

Indeed.  It also doesn't mean that they will come
and attack.  Maybe it is a choice between the
attack that is happening right now and the attack
that will come back.  Or maybe the choice is
not really there, maybe we can cover both if
we put our thinking caps on?

 The discussion reminds me a bit of other popular economic issues: Many
 politicians and some economists all over the world, every year, are
 coming back to asking Can't we loosen the control on inflation a bit?
 Look, inflation is a thing of the past, we never got over 3% the last
 umteenth years, lets trigger some employment by relaxing monetary
 discipline now.  The point is: it might work - but if not, your economy
 may end up in tiny little pieces.  It's quite a risk, because you cannot
 test it.  So the stance of many people is to be very conservative on
 things like that - and security folks are no exception.  Maybe fiddling
 with SSL is really a nice idea.  But if it fails at some point and we
 don't have a fallback infrastructure that's going to protect us from the
 sniffer-collector of the 90s, adversaries will be quite happy to bring
 them to new interesting uses then

Nice analogy!  Like all analogies it should be taken
for descriptive power, not prescription.

The point being that one should not slavishly stick
to an argument, one needs to establish principles.
One principle is that we protect where money is being
lost, over and above somewhere where someone
says it was once lost in the past.  And at least then
we'll learn the appropriate balance when we get it
wrong, which can't be much worse than now, coz
we are getting it really wrong at the moment.

(On the monetary economics analogy, if you said your
principle was to eliminate inflation, I'd say fine!  There
is an easy way to do just that, just use gold as money,
which has maintained its value throughout recorded
history, not just the last century!  The targets debate
has been echoing on for decades, and there is no
real end in sight.)

  So I would suggest that listening for credit cards will
  never ever be an economic attack.  Sniffing for random
  credit cards at the doorsteps of amazon will never ever
  be an economic attack, not because it isn't possible

Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-01 Thread Ian G
On Tuesday 31 May 2005 19:38, Steven M. Bellovin wrote:
 In message [EMAIL PROTECTED], Ian G writes:
 On Tuesday 31 May 2005 02:17, Steven M. Bellovin wrote:
  In message [EMAIL PROTECTED], James A. Donald writes:
  --
  PKI was designed to defeat man in the middle attacks
  based on network sniffing, or DNS hijacking, which
  turned out to be less of a threat than expected.
 
  First, you mean the Web PKI, not PKI in general.
 
  The next part of this is circular reasoning.  We don't see network
  sniffing for credit card numbers *because* we have SSL.
 
 I think you meant to write that James' reasoning is
 circular, but strangely, your reasoning is at least as
 unfounded - correlation not causality.  And I think
 the evidence is pretty much against any causality,
 although this will be something that is hard to show,
 in the absence.

 Given the prevalance of password sniffers as early as 1993, and given
 that credit card number sniffing is technically easier -- credit card
 numbers will tend to be in a single packet, and comprise a
 self-checking string, I stand by my statement.
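(The "self-checking string" property mentioned above refers to the Luhn
checksum that card numbers carry; a minimal sketch, purely for
illustration:)

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn check --
    the 'self-checking' property of credit card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    # Double every second digit from the right; subtract 9 if the result > 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

# A well-known Visa test number:
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("4111111111111112"))  # False
```

This is why a sniffer can cheaply recognise a card number in a lone
packet: the string validates itself.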


Well, I'm not arguing it is technically hard.  It's just
un-economic.  In the same sense that it is not technically
difficult for us to get in a car and go run someone
over;  but we still don't do it.  And we don't ban the
roads nor insist on our butlers walking with a red
flag in front of the car, either.  Well, not any more.

So I stand by my statement - correlation is not causality.

  * AFAICS, a non-trivial proportion of credit
 card traffic occurs over totally unprotected
 traffic, and that has never been sniffed as far as
 anyone has ever reported.  (By this I mean lots of
 small merchants with MOTO accounts that don't
 bother to set up proper SSL servers.)

 Given what a small percentage of ecommerce goes to those sites, I don't
 think it's really noticeable.


Exactly my point.  Sniffing isn't noticeable.  Neither
in the cases we know it could happen, nor in the
areas.  The one place where it has been noticed is
with passwords and what we know from that experience
is that even the slightest security works to overcome
that threat.  SSH is overkill, compared to the passwords
mailouts that successfully protect online password sites.

  * We know that from our experiences
 of the wireless 802.11 crypto - even though we've
 got repeated breaks and the FBI even demonstrating
 how to break it, and the majority of people don't even
 bother to turn on the crypto, there remains practically
 zero evidence that anyone is listening.
 
   FBI tells you how to do it:
   https://www.financialcryptography.com/mt/archives/000476.

 Sure -- but setting up WEP is a nuisance.  SSL (mostly) just works.

SSH just works - and it worked directly against the
threat you listed above (password sniffing).  But it
has no PKI to speak of, and this discussion is about
whether PKI protects people, because it is PKI that is
supposed to protect against spoofing - a.k.a. phishing.

And it is PKI that keeps SSL from just working.
Anyone who's ever had to set up an Apache web
server for SSL has to have asked themselves the
question ... why doesn't this just work ?

 As 
 for your assertion that no one is listening, I'm not sure what kind of
 evidence you'd seek.  There's plenty of evidence that people abuse
 unprotected access points to gain connectivity.

Simply, evidence that people are listening.  Sniffing
by means of the wire.

Evidence that people abuse to gain unprotected
access is nothing to do with sniffing traffic to steal
information.  That's theft of access, which is a fairly
minor issue, especially as it doesn't have any
economic damages worth speaking of.  In fact,
many cases seem to be more accidental access
where neighbours end up using each other's access
points because the software doesn't know where the
property lines are.


  Since many of
  the worm-spread pieces of spyware incorporate sniffers, I'd say that
  part of the threat model is correct.
 
 But this is totally incorrect!  The spyware installs on the
 users' machines, and thus does not need to sniff the
 wire.  The assumption of SSL is (as written up in Eric's
 fine book) that the wire is insecure and the node is
 secure, and if the node is insecure then we are sunk.

 I meant precisely what I said and I stand by my statement.  I'm quite
 well aware of the difference between network sniffers and keystroke
 loggers.


OK, so maybe I am incorrectly reading this - are you
saying that spyware is being delivered that incorporates
wire sniffers?  Sniffers that listen to the ethernet traffic?

If that's the case, that is the first I've heard of it.  What
is it that these sniffers are listening for?

   Eric's book and 1.2 The Internet Threat Model
   http://iang.org/ssl/rescorla_1.html
 
 Presence of keyboard sniffing does not give us any
 evidence at all towards wire sniffing and only serves
 to further embarrass the SSL threat model.
 
  As for DNS hijacking -- that's what's

Re: Citibank discloses private information to improve security

2005-05-31 Thread Ian G
On Saturday 28 May 2005 18:47, James A. Donald wrote:

 Do we have any comparable experience on SSH logins?
 Existing SSH uses tend to be geek oriented, and do not
 secure stuff that is under heavy attack.  Does anyone
 have any examples of SSH securing something that was
 valuable to the user, under attack, and then the key
 changed without warning?  How then did the users react?

I've heard an anecdote on 2 out of 3 of those criteria:

In a bank that makes heavy use of SSH, the users have
to phone the help desk to get the key reset when the
warning pops up.  The users of course blame the tool.

I suspect in time the addition of certificate based
checking into SSH or the centralised management
of keys will overcome this.
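(The warning in the anecdote above comes from SSH's trust-on-first-use
key pinning; a minimal sketch of that mechanism, with a made-up store
format rather than OpenSSH's actual known_hosts syntax:)

```python
import hashlib

# Sketch of SSH-style trust-on-first-use (TOFU) host-key pinning --
# the mechanism behind the key-changed warning described above.
known_hosts: dict = {}

def fingerprint(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()

def check_host_key(host: str, pubkey: bytes) -> str:
    fp = fingerprint(pubkey)
    pinned = known_hosts.get(host)
    if pinned is None:
        known_hosts[host] = fp          # first contact: pin the key
        return "new key pinned"
    if pinned == fp:
        return "ok"
    # Key changed: either the server re-keyed, or a man in the middle.
    return "WARNING: host key changed -- call the help desk"

print(check_host_key("bank.example", b"key-A"))  # new key pinned
print(check_host_key("bank.example", b"key-A"))  # ok
print(check_host_key("bank.example", b"key-B"))  # WARNING: ...
```

The users' frustration follows directly: the tool cannot distinguish a
legitimate re-key from an attack, so the warning fires either way.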

iang
-- 
Advances in Financial Cryptography:
   https://www.financialcryptography.com/mt/archives/000458.html


Re: Papers about Algorithm hiding ?

2005-05-31 Thread Ian G
On Thursday 26 May 2005 22:51, Hadmut Danisch wrote:
 Hi,

 you most probably have heard about the court case where the presence
 of encryption software on a computer was viewed as evidence of
 criminal intent.

 http://www.lawlibrary.state.mn.us/archive/ctappub/0505/opa040381-0503.htm
 http://news.com.com/Minnesota+court+takes+dim+view+of+encryption/2100-1030_3-5718978.html



 Plenty of research has been done about information hiding.
 But this special court case requires algorithm hiding as a kind of
 response. Do you know where to look for papers about this subject?

 What about designing an algorithm good for encryption which someone
 can not prove to be an encryption algorithm?

I don't agree with your conclusion that hiding algorithms
is a requirement.  I think there is a much better direction:
spread more algorithms.  If everyone is using crypto then
how can that be relevant to the case?

I would suggest that the best way to overcome this
flawed view of cryptography by the judges is to have
the operating systems install with GPG installed by
default.  Some of the better ones already install SSH
by default.

(In fact the thrust of the argument was flawed, as the
user's PC almost certainly had a browser with SSL
installed.  As HTTPS can be used to access webmail
privately, and as we have seen this was an al-Qaeda
means of secret communication, treating the presence
of one more crypto tool as relevant is a stretch.)

iang
-- 
Advances in Financial Cryptography:
   https://www.financialcryptography.com/mt/archives/000458.html


SSL stops credit card sniffing is a correlation/causality myth

2005-05-31 Thread Ian G
On Tuesday 31 May 2005 02:17, Steven M. Bellovin wrote:
 In message [EMAIL PROTECTED], James A. Donald writes:
 --
 PKI was designed to defeat man in the middle attacks
 based on network sniffing, or DNS hijacking, which
 turned out to be less of a threat than expected.

 First, you mean the Web PKI, not PKI in general.

 The next part of this is circular reasoning.  We don't see network
 sniffing for credit card numbers *because* we have SSL.

I think you meant to write that James' reasoning is
circular, but strangely, your reasoning is at least as
unfounded - correlation not causality.  And I think
the evidence is pretty much against any causality,
although this will be something that is hard to show,
in the absence.

 * AFAICS, a non-trivial proportion of credit
card traffic occurs over totally unprotected
traffic, and that has never been sniffed as far as
anyone has ever reported.  (By this I mean lots of
small merchants with MOTO accounts that don't
bother to set up proper SSL servers.)

 * We know that from our experiences
of the wireless 802.11 crypto - even though we've
got repeated breaks and the FBI even demonstrating
how to break it, and the majority of people don't even
bother to turn on the crypto, there remains practically
zero evidence that anyone is listening.

  FBI tells you how to do it:
  https://www.financialcryptography.com/mt/archives/000476.html

As an alternate hypothesis, credit cards are not
sniffed and never will be sniffed simply because
that is not economic.  If you can hack a database
and lift 10,000++ credit card numbers, or simply
buy the info from some insider, why would an
attacker ever bother to try and sniff the wire to
pick up one credit card number at a time?

And if they did, why would we care?  Better to
let a stupid thief find a way to remove himself from
a life of crime than to channel him into a really
dangerous and expensive crime like phishing,
box cracking, and purchasing identity info from
insiders.

 Since many of 
 the worm-spread pieces of spyware incorporate sniffers, I'd say that
 part of the threat model is correct.

But this is totally incorrect!  The spyware installs on the
users' machines, and thus does not need to sniff the
wire.  The assumption of SSL is (as written up in Eric's
fine book) that the wire is insecure and the node is
secure, and if the node is insecure then we are sunk.

  Eric's book and 1.2 The Internet Threat Model
  http://iang.org/ssl/rescorla_1.html

Presence of keyboard sniffing does not give us any
evidence at all towards wire sniffing and only serves
to further embarrass the SSL threat model.

 As for DNS hijacking -- that's what's behind pharming attacks.  In
 other words, it's a real threat, too.

Yes, that's being tried now too.  This is I suspect the
one area where the SSL model correctly predicted
a minor threat.  But from what I can tell, server-based
DNS hijacking isn't that successful for the obvious
reasons (attacking the ISP to get to the user is a
higher risk strategy than makes sense in phishing).

User node-based hijacking might be more successful.
Again, that's on the node, so it can totally bypass any
PKI based protections anyway.

I say minor threat because you have to look at the big
picture:  attackers have figured out a way to breach the
secure browsing model so well and so economically
that they now have lots and lots of investment money,
and are gradually working their way through the various
lesser ways of attacking secure browsing.

As perhaps further evidence of the black mark against
so-called secure browsing, phishers still have not
bothered to acquire control-of-domain certs for $30
and use them to spoof websites over SSL.

Now, that's either evidence that $30 is too much to
pay, or that users just ignore the certs and padlocks
so it is no big deal anyway.  Either way, a model
that is bypassed so disparagingly without even a
direct attack on the PKI is not exactly recommending
itself.

iang
-- 
Advances in Financial Cryptography:
   https://www.financialcryptography.com/mt/archives/000458.html



Re: SSL stops credit card sniffing is a correlation/causality myth

2005-05-31 Thread Ian G
On Tuesday 31 May 2005 21:03, Perry E. Metzger wrote:
 Ian G [EMAIL PROTECTED] writes:
  On Tuesday 31 May 2005 02:17, Steven M. Bellovin wrote:
  The next part of this is circular reasoning.  We don't see network
  sniffing for credit card numbers *because* we have SSL.
 
  I think you meant to write that James' reasoning is
  circular, but strangely, your reasoning is at least as
  unfounded - correlation not causality.  And I think
  the evidence is pretty much against any causality,
  although this will be something that is hard to show,
  in the absence.
 
   * AFAICS, a non-trivial proportion of credit
  card traffic occurs over totally unprotected
  traffic, and that has never been sniffed as far as
  anyone has ever reported.

 Perhaps you are unaware of it because no one has chosen to make you
 aware of it. However, sniffing is used quite frequently in cases where
 information is not properly protected. I've personally dealt with
 several such situations.


This leads to a big issue.  If there are no reliable reports,
what are we to believe in?  Are we to believe that the
problem doesn't exist because there is no scientific data,
or are we to believe those that say I assure you it is a
big problem?

It can't be the latter;  not because I don't believe you in
particular, but because the industry as a whole has not
the credibility to make such a statement.  Everyone who
makes such a statement is likely to be selling some
service designed to benefit from that statement, which
makes it very difficult to simply believe on the face of it.

The only way we can overcome this issue is data.  If
you have seen such situations, document them and
report them - on forums like these.  Anonymise them
suitably if you have to.

Another way of looking at this is to look at Choicepoint.
For years, we all suspected that the real problem was
the insider / node problem.  The company was where
the leaks occurred, traditionally.

But nobody had any data.  Until Choicepoint.  Now we
have data.  We know how big a problem the node is.
We now know that the problem inside the company is
massive.

So we need to see a Choicepoint for listening and
sniffing and so forth.  And we need that before we can
consider the listening threat to be economically validated.


 Bluntly, it is obvious that SSL has been very successful in thwarting
 certain kinds of interception attacks. I would expect that without it,
 we'd see mass harvesting of credit card numbers at particularly
 vulnerable parts of the network, such as in front of important
 merchants. The fact that phishing and other attacks designed to force
 people to disgorge authentication information has become popular is a
 tribute to the fact that sniffing is not practical.

And by now I'd expect to see massive scanning of,
say, lawyers' email at ISPs.  But no, very
little has occurred.

 The bogus PKI infrastructure that SSL generally plugs in to is, of
 course, a serious problem. Phishing attacks, pharming attacks and
 other such stuff would be much harder if SSL weren't mostly used with
 an unworkable fake PKI. (Indeed, I'd argue that PKI as envisioned is
 unworkable.)  However, that doesn't make SSL any sort of failure -- it
 has been an amazing success.

In this we agree.  Indeed, my thrust all along in
attacking PKI has been to get people to realise
that the PKI doesn't do nearly as much as people
think, and therefore it is OK to consider improving
it.  Especially, where it is weak and where attackers
are attacking.

Unfortunately, PKI and SSL are considered to be
sacrosanct and perfect by the community.  As these
two things working together are what protects people
from phishing (site spoofing) fixing them requires
people to recognise that the PKI isn't doing the job.

The cryptography community especially should get
out there and tell developers and browser implementors
that the reason phishing is taking place is that the
browser security model is being bypassed, and that
some tweaks are needed.

   * We know that from our experiences
  of the wireless 802.11 crypto - even though we've
  got repeated breaks and the FBI even demonstrating
  how to break it, and the majority of people don't even
  bother to turn on the crypto, there remains practically
  zero evidence that anyone is listening.

 Where do you get that idea? Break-ins to firms over their unprotected
 802.11 networks are not infrequent occurrences. Perhaps you're unaware
 of whether anyone is listening in to your home network, but I suspect
 there is very little that is interesting to listen in to on your home
 network, so there is little incentive for anyone to break it.

Can you distinguish between break-ins and sniffing
and listening attacks?  Break-ins, sure, I've seen a
few cases of that.  In each case the hackers tried to
break into an unprotected site that was accessible
over an unprotected 802.11.

My point though is that this attack is not listening.
It's an access attack.  So one must be careful

Garfinkel analysis on Skype withdrawn?

2005-05-20 Thread Ian G
Has anyone got a copy of the Skype analysis done by Simson
Garfinkel?  It seems to have disappeared.
 Original Message 
Subject: Simson Garfinkel analyses Skype - Open Society Institute
Date: Sun, 10 Apr 2005 10:32:44 +0200
From: Vito Catozzo
Hi
I am Italian, so forgive any possible error or whatever regards the
English language. I read your article on mail-archive.com
(http://www.mail-archive.com/cryptography@metzdowd.com/msg03305.html)
and I am so interested in reading what Simson Garfinkel has written
about skype.
Unfortunately the link you posted in the message is now broken
(http://www.soros.org/initiatives/information/articles_publications/articles/security_20050107/OSI_Skype5.pdf).
If you have this article saved on your hard disk could you please send it to me?
Best regards
Vito Catozzo
--
News and views on what matters in finance+crypto:
http://financialcryptography.com/


calling all French-reading cryptologers - Kerckhoff's 6 principles needs a translation

2005-05-20 Thread Ian G
It's been a year or so since this was raised, perhaps there are
some French reading cryptologers around now?

--  Forwarded Message  --

 Financial Cryptography Update: HCI/security - start with Kerckhoff's 6
 principles

  May 01, 2005

http://www.financialcryptography.com/mt/archives/000454.html


It seems that interest in the nexus at HCI (human computer interface)
and security continues to grow.  For my money I'd say we should start
at Kerckhoff's 6 principles.

http://www.financialcryptography.com/mt/archives/000195.html

Now, unfortunately we have only the original paper in French, so we can
only guess at how he derived his 6 principles:

http://www.petitcolas.net/fabien/kerckhoffs/index.html

Are there any French crypto readers out there who could have a go at
translating this?  Kerckhoffs was a Dutchman, and perhaps this means we
need to find Dutch cryptographers who can understand all his nuances...
 Nudge, nudge...

(Ideally the way to start this, I suspect, is to open up a translation
in a Wiki.  Then, people can debate the various interpretations over an
evolving document.  Just a guess - but are there any infosec wikis out
there?)

--
Powered by Movable Type
Version 2.64
http://www.movabletype.org/

---

-- 
http://iang.org/



[Fwd] Advances in Financial Cryptography - First Issue

2005-05-20 Thread Ian G

Advances in Financial Cryptography - First Issue

  May 11, 2005

https://www.financialcryptography.com/mt/archives/000458.html


I'm proud to announce our first issue of Advances in Financial
Cryptography!  These three draft papers are presented, representing a
wide range of potential additions to the literature:


   Daniel Nagy, On Secure Knowledge-Based Authentication
   Adam Shostack, Avoiding Liability:
An Alternative Route to More Secure Products
   Ian Grigg, Pareto-Secure


[snip]...  Click on:


https://www.financialcryptography.com/mt/archives/000458.html


to see the full story.  (You'll have to battle the cert or
drop the https == http as I am trying to get SSL going
for the blog).

iang
--
http://iang.org/



Re: Malaysia car thieves steal finger

2005-05-20 Thread Ian G
On Friday 20 May 2005 19:22, Ben Laurie wrote:
 R.A. Hettinga wrote:
  Police in Malaysia are hunting for members of a violent gang who chopped
  off a car owner's finger to get round the vehicle's hi-tech security
  system.

 Good to know that my amputationware meme was not just paranoia.

https://www.financialcryptography.com/mt/archives/000440.html

Photo of an advert that ran in Germany.  You need
German for the words but that's not necessary.

iang
-- 
Advances in Financial Cryptography:
   https://www.financialcryptography.com/mt/archives/000458.html



Re: how email encryption should work

2005-03-29 Thread Ian G
Hi James,
I read that last night, and was still musing on it...
James A. Donald wrote:
--
In my blog http://blog.jim.com/ I post how email 
encryption should work

I would appreciate some analysis of this proposal, which 
I think summarizes a great deal of discussion that I 
have read.

* The user should automagically get his certified 
key when he sets up the email account, without 
having to do anything extra. We should allow him the 
option of doing extra stuff, but the default should 
be do nothing, and the option to do something should 

For clarity reasons, I think you mean that the
default should be to not invoke the 'extra stuff'
on automagic creation, rather than do nothing
which is in fact what users get today - nothing.

be labelled with something intimidating like 
Advanced custom cryptographic key management so 
that 99% of users never touch it.

Concur.  The notion that a user needs a cert
from anyone else for the purpose of email is
wrong;  this doesn't mean denying them so they
can take part in corporate nets for example,
but that ordinary users in ordinary email will
not get much benefit from certs signed by other
agents.

* In the default case, the mail client, if there are 
no keys present, logs in to a keyserver using a 
protocol analogous to SPEKE, using by default the 
same password as is used to download mail. That 
server then sends the key for that password and 
email address, and emails a certificate asserting 
that holder of that key can be reached at that email 
address. Each email address, not each user, has a 
unique key, which changes only when and if the user 
changes the password or email address. Unless the 
user wants to deal with advanced custom options, 
his from address must be the address that the 
client downloads mail from  as it normally is.

I would put this in the extra stuff category,
and not in the default category.
The reason is that it creates a dependency on a
server that might not exist and even if a good
idea, will take a while to prove itself.
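(The "same password yields the same key" behaviour the proposal wants
can be illustrated with a local KDF.  This is only a sketch of the
property: a real deployment would run a PAKE such as SPEKE against the
keyserver, and the salt scheme here is an assumption.)

```python
import hashlib

def derive_key_seed(email: str, password: str) -> bytes:
    """Illustrative only: derive a stable per-address key seed from the
    mail password, so the key 'changes only when and if the user changes
    the password or email address'."""
    salt = b"mailkey:" + email.lower().encode()
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

seed1 = derive_key_seed("alice@example.com", "hunter2")
seed2 = derive_key_seed("alice@example.com", "hunter2")
seed3 = derive_key_seed("alice@example.com", "new-password")
print(seed1 == seed2)  # True: same inputs, same key seed
print(seed1 == seed3)  # False: a password change rolls the key
```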

* The email client learns correspondent's public 
keys by receiving signed email.

The problem I've discovered with this is that the
signing of mail is (I suggest) not a good idea
unless you have a good idea what the signature
means.  I've not seen anywhere where it sets out
what a signature means for S/MIME.  For OpenPGP
the signature is strictly undefined by the code,
so that's a better situation - it means whatever
you want it to mean.
Which means that most people under most circumstances
should not send most emails out signed.  Which sort
of makes signed emails a poor carrier pigeon for a
key exchange.
(I don't have a solution to this - just pointing
out what I see as a difficulty.  The workaround is
that the user turns off signing and has to send
an explicit blank signed email as a key exchange
message.  Clumsy.)
(One possibility is to put the cert in headers.)
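(The cert-in-headers workaround might look like this.  The header name
and key bytes are made up for illustration; the point is only that
plain, unsigned mail can still seed the recipient's key cache.)

```python
import base64
from email.message import EmailMessage

# Sketch: carry the sender's public key in a custom header so key
# exchange does not depend on sending signed mail.
pubkey = b"\x01\x02\x03\x04 example key bytes"

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "hello"
msg["X-Public-Key"] = base64.b64encode(pubkey).decode()
msg.set_content("No signature needed just to advertise a key.")

# Receiving side: harvest the key opportunistically.
recovered = base64.b64decode(msg["X-Public-Key"])
print(recovered == pubkey)  # True
```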

It assigns petnames 
on a per-key basis. A petname is also shorthand for 
entering a destination address (Well it is shorthand 
if the user modified it. The default petname is the 
actual address optionally followed by a count.)

Yes this would help a lot.  Any petname set should
be displayed distinctly from the default name.
(Oh, as a nitpick, a default address is not a petname,
it's just a default name.  A petname has to be set by
the user to exist.)

* The email client presents two checkboxes, sign and 
encrypt, both of which default to whatever was last 
used for this email address. If several addresses 
are used, it defaults to the strongest that was used 
for any one of them. If the destination address has 
never been used before, then encrypt is checked if 
the keys are known, greyed out if they are unknown. 
Sign is checked by default.

Right, the UI could do a lot to show what is possible
by shading the various email addresses or adding little
icons to indicate their encryptability state.

* The signature is in the mail headers, not the 
body, and signs the body, the time sent, the 
sender's address, and the intended recipient's 
address. If the email is encrypted, the signature 
can only be checked by someone who possesses the 
decryption key.

I had an entertaining read of the paper on Naive
Sign  Encrypt last night.  There are a lot of
issues in how signatures are combined with encryption,
I don't think this is a solved issue by any means
when it comes to email.
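(The header-signature idea above, signing body, time, sender and
recipient as one canonical string, can be sketched like this.  HMAC
stands in for a real public-key signature purely to show the
canonicalization; the field order and separator are assumptions.)

```python
import hashlib, hmac

def canonical(sender: str, recipient: str, sent: str, body: str) -> bytes:
    # One canonical byte string covering everything the signature binds.
    return "\n".join([sender, recipient, sent, body]).encode()

key = b"stand-in signing key"  # a real scheme would use the sender's private key
sig = hmac.new(key, canonical("alice@example.com", "bob@example.com",
                              "2005-03-29T10:00Z", "hi bob"), hashlib.sha256)
print(sig.hexdigest()[:16])

# Binding the intended recipient into the signed data is what defeats
# the surreptitious-forwarding problem from the naive sign-&-encrypt
# paper: the same message re-sent to carol no longer verifies.
```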

* If the user is completely oblivious to encryption 
and completely ignores those aspects of the program, 
and those he communicates with do likewise, he sends 
his public key all over the place in the headers, 
signs everything he sends, and encrypts any messages 

See caveat about signing above.  I certainly agree
that any message that can be encrypted should be
encrypted.
If you thought that 

Re: Secure Science issues preview of their upcoming block cipher

2005-03-29 Thread Ian G
Dan Kaminsky wrote:
Have you looked at their scheme?
http://www.securescience.net/ciphers/csc2/

Secure Science is basically publishing a cipher suite implemented by
Tom St. Denis, author of Libtomcrypt.

Aha!  I seem to recall on this very list about
2 years back, Tom got crucified for trying to
invent his own simple connection protocol.  He
withdrew from doing useful work in creating a
new crypto protocol because of criticism here,
and the world is a poorer place for it.
I'd be interested to hear why he wants to
improve on AES.  The issue with doing that
is that any marginal improvements he makes
will have trouble overcoming the costs
involved with others analysing his work.
Using AES is simply efficient: it allows us all
to say "right, OK, next question" in two seconds
and then easily recommend his product.
Still, even if he hasn't got any good reasons,
I'd still support his right to try.
iang
--
News and views on what matters in finance+crypto:
http://financialcryptography.com/


What is to be said about pre-image resistance?

2005-03-25 Thread Ian G
Collision resistance of message digests is affected by the birthday
paradox, but that does not affect pre-image resistance.  (Correct?)
So can we suggest that for pre-image resistance, the strength of
the SHA-1 algorithm may have been reduced from 160 to 149?  Or can
we make some statement like reduced by some number of bits that may
be related to 11?
Or is there no statement we can make?
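(The generic work factors behind the question can at least be written
down, as back-of-envelope bits of work.  The 2^69 figure is the
reported cost of the Shandong collision attack; note it shaves bits
off the collision bound only, and a collision shortcut does not by
itself yield a preimage shortcut.)

```python
# Generic work factors for an n-bit hash, in bits of work:
n = 160                      # SHA-1 output size
collision_generic = n / 2    # birthday bound: 2^80 for SHA-1
preimage_generic = n         # brute-force preimage: 2^160 for SHA-1

collision_attack = 69        # reported Shandong collision cost, 2^69
print(collision_generic - collision_attack)  # 11.0 bits off *collisions*
# Nothing above moves preimage_generic, so "160 reduced to 149" does
# not follow from the collision result alone.
```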
iang
PS: There is a nice description (with a bad title) here for the
amateurs like myself:
http://www.k2crypt.com/sha1.html
--
News and views on what matters in finance+crypto:
http://financialcryptography.com/


Re: how to phase in new hash algorithms?

2005-03-20 Thread Ian G
Steven M. Bellovin wrote:
So -- what should we as a community be doing now?  There's no emergency 
on SHA1, but we do need to start, and soon.
The wider question is how to get moving on new hash
algorithms.  That's a bit tricky.
Normally we'd look to see NIST or the NESSIE guys
lead a competition.  But NESSIE just finished a
comp, and may not have the appetite for another.
NIST likewise just came out with SHA256 et al, and
they seem to have a full work load as it is trying
to get DSS-2 out.
How about the IACR?  Would they be up to leading
a competition?  I don't know them at all myself,
but if the Shandong results are heard at IACR
conferences, then maybe it's time to take on a
larger role.
Most of the effort could be volunteer, and it would
also be easy enough to schedule everything aligned
with the conference circuit.
Just a thought.  Anyone know anyone at the IACR?
iang


Re: Encryption plugins for gaim

2005-03-15 Thread Ian G
Adam Fields wrote:
Given what may or may not be recent ToS changes to the AIM service,
I've recently been looking into encryption plugins for gaim. 

Specifically, I note gaim-otr, authored by Ian G, who's on this list.
Just a quick note of clarification, there is a collision
in the name Ian G.  4 letters does not a message digest
make.
Gaim-otr as I understand it is authored by Nikita Borisov
and Ian Goldberg [EMAIL PROTECTED].  It can be acquired
here:
  http://www.xelerance.com/mirror/otr/
and here are some other links:
  http://www.emergentchaos.com/archives/000715.html
Just to confuse the issue I also am working on a private
instant messaging service which is markedly different, in
that I am taking a payment system and reworking it into an
IM system:
  http://www.financialcryptography.com/mt/archives/000379.html
But I haven't got around to a download yet.  And it's not
AIM compatible, as it works through its host payment system.

Ian - would you care to share some insights on this? Is it ready for
prime time or just a proof-of-concept? Any known issues?
Over to Ian G.
iang


$90 for high assurance _versus_ $349 for low assurance

2005-03-13 Thread Ian G
In the below, John posted a handy dandy table of cert prices, and
Nelson postulated that we need to separate high assurance from low
assurance.  Leaving aside the technical question of how the user
gets to see that for now, note how godaddy charges $90 for their
high assurance and Verisign charges $349 for their low assurance.
Does anyone have a view on what low and high means in this
context?  Indeed, what does assurance mean?
iang
John Gilmore wrote:
For the privilege of being able to communicate securely using SSL and a
popular web browser, you can pay anything from $10 to $1500.  Clif
Cox researched cert prices from various vendors:
  http://neo.opn.org/~clif/SSL_CA_Notes.html
Nelson B wrote:
 https://www.godaddy.com/gdshop/ssl/ssl.asp shows that this CA runs
 two classes, high assurance and low assurance.

 Do they have two roots that correspond to these two classes?
 If not, how can users choose to trust high assurance separately
 from (perhaps instead of) low assurance certs?

 I think mozilla's policy should require separate roots for separate
 classes of assurance.  Alternatively, we could require separate
 intermediate CAs for each class, issued from a common root, but
 then the intermediates would have to be shipped with mozilla so
 that they can be marked with explicit trust.


Re: SHA-1 cracked

2005-02-22 Thread Ian G
John Kelsey wrote:
Anyone know where we could find the paper?  It'd be kind of convenient when trying to assess the impact of the attack if we knew at least a few details.
 

The *words* part I typed in here:
http://www.financialcryptography.com/mt/archives/000357.html
I skipped the examples.  It is very brief.
If it's really the case that the attack requires colliding messages of different sizes (that's what this comment implies), then maybe the attack won't be applicable in the real world, but it's hard to be sure of that.  Suppose I can find collisions of the form (X,X*) where X is three blocks long, and X* is four blocks long.  Now, that won't work as a full collision,  because the length padding at the end will change for X and X*.  But I can find two such collisions, and still get a working attack by concatenating them.  
 

This is the relevant para:
Table 2: A collision of SHA1 reduced to 58 steps. The two messages that 
collide are M0 and M'0. Note that padding rules were not applied to the 
messages.
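The structural fact behind Kelsey's concatenation point is that in a Merkle-Damgård hash, equal chaining values extend under any common suffix. A toy demonstration (design and names mine, not from the paper):

```python
import hashlib

def toy_md(data: bytes, iv: bytes = b"\x00" * 4) -> bytes:
    """Toy Merkle-Damgard hash: 32-bit chaining value, 4-byte
    blocks, and deliberately *no* length padding."""
    h = iv
    for i in range(0, len(data), 4):
        h = hashlib.sha256(h + data[i:i + 4].ljust(4, b"\x00")).digest()[:4]
    return h

def find_collision():
    """Birthday search over the 32-bit output: ~2^16 trials expected."""
    seen, i = {}, 0
    while True:
        msg = i.to_bytes(4, "big")
        d = toy_md(msg)
        if d in seen:
            return seen[d], msg
        seen[d] = msg
        i += 1

x, y = find_collision()
assert x != y and toy_md(x) == toy_md(y)
# Equal chaining values extend under any common suffix.  That is how
# two collisions of *different* lengths can be concatenated into one
# of equal length, defeating the length padding at the end.
assert toy_md(x + b"any suffix") == toy_md(y + b"any suffix")
```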


iang


Many Wireless Security Breaches Reported At (RSA) Security Conference

2005-02-22 Thread Ian G
(As I've said many times, security breaches reported at
conferences full of security people don't count as a
predictor of what's out in the real world as a threat.
But, it makes for interesting reading and establishes
some metric on the ease of the attack.  iang)

http://www.mobilepipeline.com/showArticle.jhtml?articleID=60401970
February 18, 2005
Many Wireless Security Breaches Reported At Security Conference 

By Mobile Pipeline Staff
There were 32 Evil Twin attacks and many other types of security 
breaches aimed at Wi-Fi users of the recently-concluded RSA security 
conference, wireless security vendor AirDefense claimed Thursday.

In an Evil Twin attack, hackers set up bogus access points and try to 
get nearby wireless users to log on to them. Then, they can steal 
information that the user transmits. The use of this method of attack 
marks a significant shift in how eavesdroppers and hackers are trying to 
steal information from wireless LAN users, according to the company.

"Rather than simply scanning for and identifying access points, people 
are now imitating access points," Richard Rushing, AirDefense's chief 
security officer, said in a statement. "The same holds true for identity 
theft -- hackers have realized the value is in trying to become the 
access point or station, not merely finding one."

AirDefense regularly monitors the airwaves at industry conferences and 
reports the results afterwards. The company noted that the conference 
organizers made extraordinary efforts to provide secure wireless access, 
including issuing digital credentials for accessing the wireless 
network used at the conference.

AirDefense acknowledged that the efforts made the conference's wireless 
network secure, but that didn't mean individual users were secure. 
That's because hackers were probing individual users' wireless profiles 
on their laptops, which list previously-used wireless networks. The 
hackers could then use the names of those networks to launch Evil Twin 
attacks.

"We cannot stress how important it is for wireless users to clear their 
profile of access points on a regular basis," Rushing said. "Wireless, 
by design, will always connect with the strongest signal, even if that 
means abandoning a secure connection."

The Evil Twin attacks mimicked networks such as T-Mobile's and Wayport's 
networks of public Wi-Fi hotspots. That meant that some users who 
previously had accessed those networks were automatically logged on to 
the bogus versions of those networks.

In addition, AirDefense noted that it detected other types of attacks at 
the conference. Specifically, it said it found 116 attempts to spoof MAC 
addresses and 45 denial-of-service attacks against access points. It 
also found 28 unauthorized access points connected to the conference's 
wireless LAN. The unauthorized access points drew a lot of traffic, the 
company said.



Re: SHA-1 cracked

2005-02-17 Thread Ian G
Steven M. Bellovin wrote:
According to Bruce Schneier's blog 
(http://www.schneier.com/blog/archives/2005/02/sha1_broken.html), a 
team has found collisions in full SHA-1.  It's probably not a practical 
threat today, since it takes 2^69 operations to do it and we haven't 
heard claims that NSA et al. have built massively parallel hash 
function collision finders, but it's an impressive achievement 
nevertheless -- especially since it comes just a week after NIST stated 
that there were no successful attacks on SHA-1.
 

Stefan Brands just posted on my blog (and I saw
reference to this in other blogs, posted anon)
saying that it seems that Schneier forgot to
mention that the paper has a footnote which
says that the attack on full SHA-1 only works
if some padding (which SHA-1 requires) is not
done.
http://www.financialcryptography.com/mt/archives/000355.html
I think this might be an opportune time to introduce a
new way of looking at algorithms.  I've written it up
in draft (excuse the postit notes) :
http://iang.org/papers/pareto_secure.html
In short, what I do is apply the concepts of the econ
theory of Pareto efficiency to the metric of security.
This allows a definition of what we mean by secure
which is quite close to colloquial usage;  in the
language so introduced, I'd suggest that SHA-1 used
to be Pareto-complete, and is now Pareto-secure for
certain applications.  I have a little table down
the end that now needs to be updated!
Comments welcome, it is not a long nor mathematical
paper!  Some small consolation for those not at the
RSA conference.
iang


critical bits in certs

2005-02-16 Thread Ian G
Has anyone got any experience or tips on critical
bits in certificates?  These are bits that can be
set in optional records that a certificate creator
puts in there to do a particular job.  The critical
bit says don't interpret this entire certificate
if you don't understand this record.
x.509 certs have them, they are mentioned in RFCs
http://www.faqs.org/rfcs/rfc3039.html
http://www.faqs.org/rfcs/rfc2459.html
Also, OpenPGP may have them (I recall arguing against
them a while back, never checked where it all ended).
The reason I ask is that a CA has started issuing
certs with an optional critical section.  It has a
good reason to do this ... but the results aren't
pretty, and the CA is now asking browser manufacturers
to accept its certs and/or comply with the crit.
Many issues are swirling around, so it seems useful
to ask around.
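For reference, the processing rule the RFCs impose is simple to state in code. A minimal sketch of the RFC 2459 semantics (names mine): reject the whole certificate if any critical extension is not understood; ignore unrecognised non-critical ones.

```python
def accept_certificate(extensions, understood):
    """RFC 2459 rule, sketched: `extensions` is a list of
    (oid, critical) pairs; `understood` is the set of extension
    OIDs this verifier implements.  A single unrecognised
    critical extension forces rejection of the entire cert."""
    return all(oid in understood or not critical
               for oid, critical in extensions)
```

So a CA shipping a new critical extension is, in effect, asking every relying party to either implement it or refuse its certs outright.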
iang


Re: A cool demo of how to spoof sites (also shows how TrustBar preventsthis...)

2005-02-09 Thread Ian G
Adam Shostack wrote:
Have you run end-user testing to demonstrate the user-acceptability of
Trustbar?
 

Yes, this was asked over on the cap-talk list.
Below is what I posted there.  I'm somewhat
sympathetic as doing a real field trial which
involves testing real responses to a browser
attack raises all sorts of Heisenberg uncertainty /
experimental method issues.  Off the top of
my head, I think this is a really tricky problem,
and if anyone knows how to test security
breaches on ordinary users, shout!
Ka-Ping Yee wrote:
1. TrustBar: Protecting (even Naive) Web Users from Spoofing and
Phishing Attacks, Amir Herzberg and Ahmad Gbara
http://www.cs.biu.ac.il/~herzbea//Papers/ecommerce/spoofing.htm
  

I've read that paper.  What they did is not a user study at all;
it was merely a questionnaire.  It's certainly better than nothing,
but it is not a user study.  For the results to be applicable, the
tests should take place while users are actually interacting with
a browser normally.
 

I agree it wasn't much.  But it was a bit more than
just a multiple choice:
 The second goal of the third question was to evaluate whether the use 
of TrustBar is likely to improve the ability of users to discern between 
unprotected sites, protected sites and spoofed (fake) sites. For this 
purpose, we gave users a very brief explanation on the TrustBar security 
indicators, and then presented three additional screen shots, this time 
using a browser equipped with TrustBar. Again, the screen shots are 
presented in Appendix B, and each was presented for 10 to 15 seconds, 
taken using Mozilla in the Amazon web site. We leave it as a simple 
exercise to the reader to identify the protected, unprotected and 
spoofed (fake) among these three screen shots.

 The results provide positive indication supporting our belief that 
the use of TrustBar improves the ability of (naïve) web users to discern 
between protected, unprotected and fake sites. Specifically, the number 
of users that correctly identified each of the three sites essentially 
doubled (to 21, 22 and 29).

That would rate as a simulation rather than
a field trial, I guess.
--
iang


Re: A cool demo of how to spoof sites (also shows how TrustBar preventsthis...)

2005-02-09 Thread Ian G
Taral wrote:
On Wed, Feb 09, 2005 at 07:41:36PM +0200, Amir Herzberg wrote:
 

Why should I trust you? Filtering xn--* domains works for me, and
doesn't require that I turn my browser over to unreviewed, possibly
buggy code.
 

I understand this is a theoretical question, but
here is an answer:
The plugin is downloadable from a MozDev site,
and presumably if enough attention warrants it,
Amir can go to the extent of signing it with a
cert in Mozilla's code signing regime.
Also, as Amir is a relatively well known name in
the world of crypto I suppose you could consider
his incentives to be more aligned with delivering
good code than code that would do you damage.
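As for the quoted xn--* filtering, that defence needs no plugin at all; a sketch (function name mine) of flagging any punycode-encoded IDN label, which is what the homograph spoofs ride on:

```python
def looks_like_idn(hostname: str) -> bool:
    """Flag any DNS label carrying the ACE prefix 'xn--',
    i.e. a punycode-encoded internationalised domain label."""
    return any(label.lower().startswith("xn--")
               for label in hostname.split("."))

assert looks_like_idn("www.xn--pypal-4ve.com")   # a homograph of 'paypal'
assert not looks_like_idn("www.paypal.com")
```

The cost, of course, is that every legitimate internationalised domain is flagged too.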
iang


Re: Is 3DES Broken?

2005-02-05 Thread Ian G
John Kelsey wrote:
From: Steven M. Bellovin [EMAIL PROTECTED]
No, I meant CBC -- there's a birthday paradox attack to watch out for.
   

Yep.  In fact, there's a birthday paradox problem for all the standard chaining modes at around 2^{n/2}.  

For CBC and CFB, this ends up leaking information about the XOR of a couple plaintext blocks at a time; for OFB and counter mode, it ends up making the keystream distinguishable from random.  Also, most of the security proofs for block cipher constructions (like the secure CBC-MAC schemes) limit the number of blocks to some constant factor times 2^{n/2}.
 

It seems that the block size of an algorithm then
is a severe limiting factor.  Is there anyway to
expand the effective block size of an (old 8byte)
algorithm, in a manner akin to the TDES trick,
and get an updated 16byte composite that neuters
the birthday trick?
Hypothetically, by say having 2 keys and running
2 machines in parallel to generate a 2x blocksize.
(I'm just thinking of this as a sort of mental challenge,
although over on the OpenPGP group we were toying
with the idea of adding GOST, but faced the difficulty
of its apparent age/weakness.)
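The CBC leak John describes is easy to exhibit at a toy scale, where the 2^{n/2} bound is actually reachable. A sketch with a 16-bit toy permutation standing in for the cipher (all names mine; the argument only needs E to be a permutation):

```python
import hashlib
import random

BLOCK_BITS = 16                      # toy block: birthday bound is only 2^8

def toy_cipher(key: bytes, x: int) -> int:
    """A keyed 16-bit permutation (4-round Feistel with a SHA-256
    round function); any Feistel is invertible, so this really is
    a permutation, which is all the argument below needs."""
    l, r = x >> 8, x & 0xFF
    for rnd in range(4):
        f = hashlib.sha256(key + bytes([rnd, r])).digest()[0]
        l, r = r, l ^ f
    return (l << 8) | r

def cbc_encrypt(key: bytes, iv: int, blocks):
    out, prev = [], iv
    for p in blocks:
        c = toy_cipher(key, p ^ prev)
        out.append(c)
        prev = c
    return out

random.seed(1)
key, iv = b"k", random.getrandbits(BLOCK_BITS)
pt = [random.getrandbits(BLOCK_BITS) for _ in range(4000)]
ct = cbc_encrypt(key, iv, pt)

# After ~2^(n/2) blocks, two ciphertext blocks collide; since E is a
# permutation, C_i == C_j forces P_i ^ P_j == C_{i-1} ^ C_{j-1},
# and the right-hand side is visible to any eavesdropper.
collision, seen, prev_ct = None, {}, [iv] + ct
for j, c in enumerate(ct):
    if c in seen:
        i = seen[c]
        collision = (i, j, prev_ct[i] ^ prev_ct[j])
        break
    seen[c] = j

i, j, leak = collision
assert pt[i] ^ pt[j] == leak         # plaintext XOR leaked to an observer
```

Which is why a 2x-blocksize composite is attractive: it pushes the birthday bound from 2^32 blocks back out to 2^64.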
iang


Re: Can you help develop crypto anti-spoofing/phishing tool ?

2005-02-04 Thread Ian G
Michael H. Warfield wrote
What Amir and Ahmad are looking at is
showing the CA as part of the trust equation
when the user hits a site.  Some CAs will
enter the user's consciousness via normal
branding methods, and new ones will
trigger care  caution.  Which is what
we want - if something strange pops up,
the user should take more care.
   

	How do you make it strange enough for them to give a flip when a
modal dialog box won't even do it?
 

I'd suggest you have a quick browse through
their paper, skip the words and look for the
graphics.  It will show it faster than these 1000
words.
http://www.cs.biu.ac.il/~herzbea//Papers/ecommerce/spoofing.htm
In one word, it is 'branding.'  In many words,
it goes like this:  TrustBar allows the user to
'sign off' on her favourite banking sites, which
means when that cert is seen it shows a logo
that the user is familiar with.  It also shows
the logo of the CA, which is something that
the user is familiar with.
http://trustbar.mozdev.org/
Note that this is not a popup with techie
messages in it, but an 'advert' that appears
on the chrome.  On the basis of the recognition
of the cert, which belongs to that site, the
browser shows the bright coloured advert
for the bank and for the CA.
Now, a phisher, to attack that, would have to
acquire a cert from the same CA, and get the
user to also sign off on that cert as being her
bank.  Which is hard to do because she already
has signed off on her bank.
So what happens under attack is that the brand
adverts change, and the user should notice that.
This is in effect what branding is, it is a message
to you to notice when you are not drinking your
favourite cola brand, and to make you feel guilty
or something.
So, to use a little handwaving, we do know how
to make the user notice that she is in a different
place - by using the brand concepts that marketing
as a science and art has used for many a century.
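The sign-off mechanism itself is little more than a petname table. A minimal sketch, assuming nothing about TrustBar's actual internals (class and method names are mine):

```python
import hashlib

class PetnameStore:
    """TrustBar-style sign-off, sketched: the user binds a
    certificate to a brand label once; thereafter the chrome shows
    that label, and any other cert -- even one validly issued by
    some CA -- draws a caution instead of a brand."""

    def __init__(self):
        self._labels = {}

    @staticmethod
    def _fingerprint(cert_der: bytes) -> str:
        return hashlib.sha1(cert_der).hexdigest()

    def sign_off(self, cert_der: bytes, label: str) -> None:
        self._labels[self._fingerprint(cert_der)] = label

    def chrome_banner(self, cert_der: bytes) -> str:
        label = self._labels.get(self._fingerprint(cert_der))
        return f"[{label}]" if label else "!! unrecognised site -- take care !!"
```

The phisher's cert hashes to a different fingerprint, so no amount of CA-issued validity makes the familiar brand appear.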
iang


Re: Dell to Add Security Chip to PCs

2005-02-02 Thread Ian G
Erwann ABALEA wrote:
On Wed, 2 Feb 2005, Trei, Peter wrote:
 

Seeing as it comes out of the TCG, this is almost certainly
the enabling hardware for Palladium/NGSCB. It's a part of
your computer which you may not have full control over.
   

Please stop relaying FUD. You have full control over your PC, even if this
one is equipped with a TCPA chip. See the TCPA chip as a hardware security
module integrated into your PC. An API exists to use it, and one of the
functions of this API is 'take ownership', which has the effect of
erasing it and regenerating new internal keys.
 

So .. the way this works is that Dell & Microsoft
ship you a computer with lots of nice multimedia
stuff on it.  You take control of your chip by erasing
it and regenerating keys, and then the multimedia
software that you paid for no longer works?
I'm just curious on this point.  I haven't seen much
to indicate that Microsoft and others are ready
for a nymous, tradeable software assets world.
iang

