Re: Crypto dongles to secure online transactions

2009-11-25 Thread John Levine
we claimed we do something like two orders magnitude reduction in
fully-loaded costs by going to no personalization (and other things)
...

My concern with that would be that if everyone uses the same
signature scheme and token, the security of the entire industry
becomes dependent on the least competent bank in the country not
leaking the verification secret.

For something like a chip+pin system it is my understanding that the
signature algorithm is in the chip and different chips can use
different secrets and different algorithms, so a breach at one bank
need not compromise all the others.

R's,
John

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Crypto dongles to secure online transactions

2009-11-25 Thread Jerry Leichter

On Nov 18, 2009, at 6:16 PM, Anne & Lynn Wheeler wrote:
... we could move to a person-centric paradigm ... where a person  
could use the same token for potentially all their interactions ...
we claimed we do something like two orders magnitude reduction in  
fully-loaded costs by going to no personalization (and other  
things) ... and then another two orders magnitude reduction in  
number of tokens by transitioning from institutional-centric  
paradigm to person-centric paradigm (compared to proposed smartcard/ 
dongle replacing every pin/password).


we then ran up against the fact that the bank marketing departments
have taken advantage of the requirement for institutional
personalization ... to put their brand and other stuff on every
token
It goes deeper than that.  Oh, sure, marketing loves having a presence  
- but their desire fits into corporate cultural biases.


When I go to work, I have to carry two key cards - one for the  
building, one for my employer.  They use the same technology - if you  
use the wrong one, the reader beeps in recognition but of course won't  
unlock the door.  In fact, they interfere with each other - you have  
to make sure to keep the wrong one a couple of inches away from the  
reader or it will usually be confused.  It's a pain, actually.


Now, it's certainly possible that there's something proprietary on one  
card or the other - though as we've discussed here before, that's only  
true on badly designed systems:  It's no big deal to read these cards,  
and from many times the inch or so that the standard readers require.   
So all that should be on the cards is an essentially random number  
which acts as a key into the lock system's database.  It's just that  
the owners of each system insist on assigning that random number  
themselves.  Does it give them any additional security?  Hardly.  If  
you think through the scenarios, you confirm that quickly - a direct  
consequence of the lack of any inherent value in the card or its  
contained number in and of themselves:  The real value is in the  
database entry, and both institutions retain control of their own  
databases.
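The design described above - the card carries only a meaningless random identifier, while all authority lives in each institution's own database - can be sketched as follows. This is an illustrative sketch, not any real access-control product; all names are hypothetical.

```java
import java.util.*;

// Sketch: the card number has no inherent value; each system keeps its
// own mapping from card number to rights, so one card can serve two
// independent systems without either trusting the other.
public class LockSystem {
    private final Map<String, Set<String>> rightsByCard = new HashMap<>();

    void enroll(String cardId, String... doors) {
        rightsByCard.put(cardId, new HashSet<>(Arrays.asList(doors)));
    }

    boolean mayOpen(String cardId, String door) {
        return rightsByCard.getOrDefault(cardId, Collections.emptySet())
                           .contains(door);
    }

    public static void main(String[] args) {
        String card = "8f3a9c12";            // essentially random, no secret
        LockSystem building = new LockSystem();
        LockSystem employer = new LockSystem();
        building.enroll(card, "lobby");
        employer.enroll(card, "office-3F");

        System.out.println(building.mayOpen(card, "lobby"));      // true
        System.out.println(employer.mayOpen(card, "lobby"));      // false
        System.out.println(employer.mayOpen(card, "office-3F"));  // true
    }
}
```

Note that compromising the card identifier gains nothing beyond what cloning the card already gives; the real value sits in the database entries each institution controls.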


What's needed is some simple cooperation and agreement on how to  
assign unique numbers to each card.  There already has to be  
cooperation on the issuance and invalidation of building cards.  But  
institutions insist on their sense of control and independence, even  
when it has no real payoffs for them (and, in fact, raises their costs).

-- Jerry



Re: Crypto dongles to secure online transactions

2009-11-25 Thread Anne Lynn Wheeler

On 11/21/2009 04:56 PM, John Levine wrote:

we claimed we do something like two orders magnitude reduction in
fully-loaded costs by going to no personalization (and other things)
...


My concern with that would be that if everyone uses the same
signature scheme and token, the security of the entire industry
becomes dependent on the least competent bank in the country not
leaking the verification secret.

For something like a chip+pin system it is my understanding that the
signature algorithm is in the chip and different chips can use
different secrets and different algorithms, so a breach at one bank
need not compromise all the others.

R's,
John



there is no shared secret ... there is a unique chip private/public key pair
generated at power-on/test, and the public key is included/transmitted with the
test result data as part of the initial power-on/test cycle (this is a process
that occurs while the chips are still in the wafer ... before being sliced &
diced). the silicon is designed to never (voluntarily) divulge the private key
(modulo some extremely heavy-duty physical attacks).
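The per-device key model described above can be sketched in software. This is only an illustrative sketch of the keying arrangement, not the actual chip design (which fixed the algorithm in silicon); the class name and messages are hypothetical, and the JDK's standard EC/ECDSA providers stand in for the chip's hardware implementation.

```java
import java.security.*;

// Sketch of the per-chip keying model: the private key is generated
// on-device, never exported, and only signatures (plus the one-time
// public-key export at "power-on/test") cross the boundary.
public class DeviceChip {
    private final KeyPair keyPair;   // private half never leaves this object

    public DeviceChip() throws GeneralSecurityException {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256);         // e.g. P-256; the real chips fixed this in the mask
        this.keyPair = kpg.generateKeyPair();
    }

    // Analogous to transmitting the public key with the power-on/test data.
    public PublicKey exportPublicKey() { return keyPair.getPublic(); }

    // The only other operation: sign a transaction/challenge.
    public byte[] sign(byte[] message) throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initSign(keyPair.getPrivate());
        s.update(message);
        return s.sign();
    }

    public static void main(String[] args) throws Exception {
        DeviceChip chip = new DeviceChip();
        byte[] msg = "transfer $10 to account 42".getBytes("UTF-8");
        byte[] sig = chip.sign(msg);

        // Relying party holds only the public key recorded for the account.
        Signature v = Signature.getInstance("SHA256withECDSA");
        v.initVerify(chip.exportPublicKey());
        v.update(msg);
        System.out.println(v.verify(sig) ? "valid" : "invalid");
    }
}
```

Because every device holds a distinct key pair, compromising one device reveals nothing about any other - the property the post contrasts with a shared verification secret.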

the patent stuff was all done for the employer as assigned patents quite a while 
ago (we've been gone for several yrs and the patent stuff keeps going on).

initially there was a large number of claims, which had gotten packaged as over 
60 patents and looked to be 100 before we were done. about that point, the 
employer looks at filing costs in the US and internationally ... and directs that 
all the claims be packaged as nine patents. Later, the patent office comes back 
and makes some comment about getting tired of huge patents where the filing fee 
doesn't even cover the cost of reading all the claims ... and directs that the 
claims be repackaged as a larger number of patents.
http://www.garlic.com/~lynn/aadssummary.htm

while there are claims related to unique devices with unique digital signatures 
in other applications ... there was a patent application (in our name ... years 
after we are gone) this year
http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=1&f=G&l=50&s1=%2220090158029%22.PGNR.&OS=DN/20090158029&RS=DN/20090158029

all the initial chips were ec/dsa (each chip with its own unique public/private 
key) ... all done in a fab that had security certified by US, EU & other gov. 
institutions and also financial institutions (no compromised chips substituted for 
real ones) ... I even got to walk the fab in a bunny suit doing my own certification.

if you want different algorithms (or key lengths) ... you have to cut a new 
mask and make different wafer runs. if the number of wafers in a wafer run is 
too small ... you would start to drive the cost/chip above a few cents. There 
is no single-point-of-compromise. Compromising a single chip is equivalent to 
skimming a single magstripe ... you can do fraudulent transactions against the 
accounts for that chip/token (and chip compromise is significantly more difficult 
than magstripe skimming).

In theory a weakness might be found in a specific chip or specific algorithm 
... but the design allows for a large number of different chips and algorithms to 
interoperate in the same environment. For the initial chips ... I got an EAL4+ 
common criteria certification (by an accredited lab in Germany). I wanted a higher 
certification ... but had the problem that the EC/DSA verification suite had been 
withdrawn. There were some higher certifications on similar chips by others 
... but their design involved loading the crypto after the certification (they 
got certification done on the chip before any software was loaded). My chip had 
everything in silicon (all features/functions) ... and so the certification was 
done on everything that would be in actual use.

in the person-centric scenario ... each chip's private key becomes somewhat akin to a 
fingerprint or iris pattern ... a unique something you have ... as opposed to a unique 
something you are (and much easier to replace/change if there is a specific compromise).

some of the patents cover not only recording the public key for each account the 
corresponding token is authorized for (and multiple different tokens might be 
authorized for the same account) ... but also knowledge about the assurance level 
of the related chip. Real-time updates are then available about chip assurance 
level ... and real-time authorizations can take into account not only whether 
the transaction is within the account balance ... but potentially also whether the 
assurance level of the chip is high enough for authorizing the transaction.

X9.59 financial standard transaction protocol also allows for the environment 
in which the transaction is performed to sign the transaction (in addition 
to the person's chip). Real-time authorization then may take into account both 
the assurance level (potentially updated in real-time) of the user's chip as 
well as the assurance level of the transaction environment (in determining if 
there is sufficient 

Re: Crypto dongles to secure online transactions

2009-11-25 Thread Anne Lynn Wheeler

On 11/21/2009 05:56 PM, Jerry Leichter wrote:

On Nov 18, 2009, at 6:16 PM, Anne & Lynn Wheeler wrote:

... we could move to a person-centric paradigm ... where a person
could use the same token for potentially all their interactions ...
we claimed we do something like two orders magnitude reduction in
fully-loaded costs by going to no personalization (and other things)
... and then another two orders magnitude reduction in number of
tokens by transitioning from institutional-centric paradigm to
person-centric paradigm (compared to proposed smartcard/dongle
replacing every pin/password).

we then ran up against the fact that the bank marketing departments have
taken advantage of the requirement for institutional personalization ... to
put their brand and other stuff on every token

It goes deeper than that. Oh, sure, marketing loves having a presence -
but their desire fits into corporate cultural biases.

When I go to work, I have to carry two key cards - one for the building,
one for my employer. They use the same technology - if you use the wrong
one, the reader beeps in recognition but of course won't unlock the
door. In fact, they interfere with each other - you have to make sure to
keep the wrong one a couple of inches away from the reader or it will
usually be confused. It's a pain, actually.

Now, it's certainly possible that there's something proprietary on one
card or the other - though as we've discussed here before, that's only
true on badly designed systems: It's no big deal to read these cards,
and from many times the inch or so that the standard readers require. So
all that should be on the cards is an essentially random number which
acts as a key into the lock system's database. It's just that the owners
of each system insist on assigning that random number themselves. Does
it give them any additional security? Hardly. If you think through the
scenarios, you confirm that quickly - a direct consequence of the lack
of any inherent value in the card or its contained number in and of
themselves: The real value is in the database entry, and both
institutions retain control of their own databases.

What's needed is some simple cooperation and agreement on how to assign
unique numbers to each card. There already has to be cooperation on the
issuance and invalidation of building cards. But institutions insist on
their sense of control and independence, even when it has no real
payoffs for them (and, in fact, raises their costs).
-- Jerry


We went thru all the scenarios and the objections about why they wanted an 
institutional-centric paradigm ... part of the scenario was putting the 
assurance level of the chip on a level with the assurance level of your fingerprint 
or iris pattern ... and asking when institutions were going to start issuing 
individual, institution-specific fingers for people to use.

there are various person-centric claims here and there (assigned and still 
having activity after we've been gone for yrs)
http://www.garlic.com/~lynn/aadssummary.htm

there is specific granted patent here:
http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&S1=6978369.PN.&OS=PN/6978369&RS=PN/6978369

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970



Re: Crypto dongles to secure online transactions

2009-11-25 Thread Bill Frantz
leich...@lrw.com (Jerry Leichter) on Saturday, November 21, 2009 wrote:

It's no big deal to read these cards,  
and from many times the inch or so that the standard readers require. 

So surely someone has built a portable reader for counterfeiting the cards
they read in restaurants near big target companies...

Cheers - Bill

---
Bill Frantz|After all, if the conventional wisdom was working, the
408-356-8506   | rate of systems being compromised would be going down,
www.periwinkle.com | wouldn't it? -- Marcus Ranum



Re: Crypto dongles to secure online transactions

2009-11-25 Thread Jerry Leichter

On Nov 21, 2009, at 6:12 PM, Bill Frantz wrote:

leich...@lrw.com (Jerry Leichter) on Saturday, November 21, 2009  
wrote:



It's no big deal to read these cards,
and from many times the inch or so that the standard readers require.


So surely someone has built a portable reader for counterfeiting the
cards they read in restaurants near big target companies...
Well, my building card is plain white.  If anyone duplicated it,
there'd be nothing stopping them from going in.  But then the actual
security offered by those cards - and the building controls - is more
for show (and, I suppose, to keep the riffraff out) than anything else.


My work card has my photo and name on it, but there's nothing to  
correlate name with underlying ID in normal operation.  Snap a photo  
of the card while you clone it, make up a reasonable simulacrum with  
your own picture and name, and walk right in.


Not really more or less secure than the old days when you flashed your
(easily copied) badge to a guard who probably only noticed that it was
about the right size and had roughly the right color.  But it's higher
tech, so an improvement.  :-)


Physical security for most institutions has never been very good, and
fortunately has never *needed* to be very good.  Convenience wins out,
and technology gives a nice warm feeling.  A favorite example:  My
wife's parents live in a secured retirement community.  The main
entrance has a guard who checks if you're on a list of known visitors,
or calls the people you're visiting if not.  Residents used to have a
magnetic card, but that's a bit of a pain to use.  So it was replaced by
a system probably adapted from railroad freight car ID systems:  You
stick a big barcode in your passenger-side window, and a laser scanner
on a post reads it and opens the gate.


Of course, it's trivial to duplicate the sticker using a simple photo,  
and since the system has to work from varying distances, at varying  
angles, on moving cars, in all light and weather conditions, it can't  
possibly be highly discriminating - almost certainly just a simple  
Manchester-style decoder.
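Manchester coding of the sort such a scanner might decode is indeed simple: each data bit becomes a pair of opposite levels, i.e. a guaranteed transition, which is what makes it tolerant of varying speeds and distances. A toy sketch (not the actual sticker format, which isn't specified here):

```java
// Toy Manchester coder: each bit maps to a two-"chip" pattern with a
// transition in the middle (IEEE convention: 0 -> "10", 1 -> "01").
public class Manchester {
    static String encode(String bits) {
        StringBuilder out = new StringBuilder();
        for (char b : bits.toCharArray()) out.append(b == '0' ? "10" : "01");
        return out.toString();
    }

    static String decode(String chips) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < chips.length(); i += 2) {
            // Every valid pair contains a transition; read its direction.
            out.append(chips.startsWith("10", i) ? '0' : '1');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String id = "101101";                        // hypothetical sticker ID
        String wire = encode(id);
        System.out.println(wire);                    // prints 011001011001
        System.out.println(decode(wire).equals(id)); // true
    }
}
```

The guaranteed mid-bit transition gives the decoder a self-clocking signal, but nothing here authenticates the barcode - exactly why a photographed copy works.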


-- Jerry






Re: Why the onus should be on banks to improve online banking security

2009-11-25 Thread Damien Miller
On Fri, 20 Nov 2009, Peter Gutmann wrote:

 There's been a near-neverending debate about who should be responsible for
 improving online banking security measures: the users, the banks, the
 government, the OS vendor, ... .  Here's an interesting perspective from Peter
 Benson peter.ben...@codescan.com, reposted with permission, on why the onus
 should be on banks to provide appropriate security measures:
 
   One of the main reasons to target the banks with accountability is because
   you can. There is a lot of historical regulation and controls around
   banking, which makes it *relatively* easy to hold them to account. The
   bigger problem, and the next logical step, is how the banks hold suppliers /
   vendors of software accountable for flaws in their systems and software that
   enable the problems to occur in the first place.
 
   Anyone recognise the following?
 
   This software is provided as is, and any expressed or implied warranties,
   including, but not limited to, the implied warranties of merchantability and
   fitness for a particular purpose are disclaimed. In no event shall the
   contributors be liable for any direct, indirect, incidental, special,
   exemplary, or consequential damages (including, but not limited to,
   procurement of substitute goods or services; loss of use, data, or profits;
   or business interruption) however caused and on any theory of liability,
   whether in contract, strict liability , or tort (including negligence or
   otherwise) arising in any way out of the use of this software, even if
   advised of the possibility of such damage.
 
   Accountability is great, and I fully support it, and would like to somehow
   find the way to push a level of accountability back to various software
   developers / manufacturers. Unfortunately in the current state of Contract
   and Tort law, there is so much protection(ism) of the software industry,
   that its still going to be time consuming and expensive to get a couple of
   decent case studies out there or to change anything. So from a public good
   perspective, unfortunately (realistically), it is the banks that should
   carry the onus.

It is a lazy argument that the banks should be held responsible just
because it is easy to regulate them. Moreover, it seems like magical
thinking that they would then suddenly start demanding liability
warranties from their software vendors. For a start, it is probable that
the majority of the problems are in their clients' software and not the
banks', so demanding liability warranties from banking software
vendors would have little effect. Furthermore, the cost of liability
warranties may well exceed the cost of fraud.

Also, exactly how is the second paragraph of the BSD license evidence of
protection(ism) of the software industry? It seems like no-liability is
an equilibrium that the software market has settled into without assistance
from government - is there evidence to the contrary?

A much better argument for the banks being responsible for the security
of customers' money is that this is exactly what we pay them for.

-d



RE: Crypto dongles to secure online transactions

2009-11-25 Thread Scott Guthery
The FINREAD smart card reader was a European run at moving trust-bearing
transactions to an outboard device. It was a full Java VM in a
tamper-resistant box with a modest GUI, biometrics, lots of security on the
I/O ports and much attention to application isolation. FINREAD readers were
produced and an attempt was made to make its specifications into an ISO/IEC
standard. I don't know why it didn't get any traction but suspect that it
was more on business grounds than on technical grounds.  Telling folks they
had to buy a $100 card reader that was controlled and monetized by one
particular bank wasn't exactly a compelling offer.  

Recently GlobalPlatform has reinvigorated the STIP reader effort which is
from 35K feet the same thing.  GP took over STIP in 2004.  Google or Bing
for details.

As Dan Geer has observed over the years, reducing bank risk is not a consumer
benefit.



Re: Crypto dongles to secure online transactions

2009-11-25 Thread Ray Dillinger
On Fri, 2009-11-20 at 20:13 +1300, Peter Gutmann wrote:

 Because (apart from the reasons given above) with business use specifically
 you run into insurmountable PC - device communications problems.  Many
 companies who handle large financial transactions are also ones who, due to
 concern over legal liability, block all access to USB ports to prevent
 external data from finding its way onto their corporate networks (they are
 really, *really* concerned about this).  If you wanted this to work, you'd
 need to build a device with a small CMOS video sensor to read data from the
 browser via QR codes and return little more than a 4-6 digit code that the
 user can type in (a MAC of the transaction details or something).  It's
 feasible, but not quite what you were thinking of.

So the model of interaction is: 
   1. Software displays the transaction on the screen (in some
      predetermined form).
   2. Device reads the screen, MACs the transaction, and
      displays the MAC to the user.
   3. User enters the MAC to confirm the transaction.
   4. Transaction, with MAC, is submitted to the user's bank.
   5. Bank checks the MAC to make sure it matches the transaction
      and performs the transaction if there's a match.
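The MAC step in that model can be sketched concretely. The thread leaves the actual scheme unspecified, so everything here is an assumption for illustration: HMAC-SHA256 over a canonical transaction string, truncated to a 6-digit decimal code, with a shared per-account key.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

// Sketch: device and bank share a per-account key; both compute a short
// decimal code over the canonical transaction string, and the bank
// accepts only if the user-typed code matches its own computation.
public class TxnMac {
    static String shortCode(byte[] key, String canonicalTxn) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] tag = mac.doFinal(canonicalTxn.getBytes(StandardCharsets.UTF_8));
        // Take 4 tag bytes as an unsigned int, reduce mod 10^6, zero-pad:
        // a 6-digit code the user can type.
        long v = ((tag[0] & 0xFFL) << 24) | ((tag[1] & 0xFFL) << 16)
               | ((tag[2] & 0xFFL) << 8)  |  (tag[3] & 0xFFL);
        return String.format("%06d", v % 1_000_000);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "per-account-shared-key".getBytes(StandardCharsets.UTF_8);
        String txn = "pay;payee=ACME;amount=120.00;currency=USD;nonce=7";
        String code = shortCode(key, txn);              // device side
        System.out.println(code.length() == 6
                           && code.chars().allMatch(Character::isDigit));
        System.out.println(shortCode(key, txn).equals(code)); // bank side recomputes
    }
}
```

The canonical string must pin down every field the user is confirming (payee, amount, a nonce against replay); any field left out is a field malware can alter without changing the code.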

Malware that finds the user account details lying around 
on the hard drive cannot form valid MACs for the transactions
it wants to use those details for, so the user and the bank 
are protected from credit card harvesting by botnets. 
Malware that attempts to get a user authorization by displaying
a different transaction on the screen is foiled by not being 
able to MAC the transaction it's really trying to do.  etc.

But a four or six digit MAC isn't nearly enough. 

You see, there's still the problem of how you handle fraudulent 
transactions.  If the black hats start submitting transactions
with random MACs in the certain knowledge that one out of ten 
thousand four-digit MACs will be right, all that happens is 
that they have to invest some bandwidth when they want to drain
your account.  They will do it, because it's more profitable 
than sending spam.  

If there is some reasonable control like freezing an account 
after a thousand attempts to make fraudulent transactions or 
not accepting transaction requests within twenty seconds after 
an attempt to make a fraudulent transaction on the same account, 
then you have created an easy denial-of-service attack that can 
be used to deny a particular person access to his or her bank 
account at a particular time of the attacker's choosing.  

Denying someone access to their money makes them vulnerable at 
an unexpected time - they can't get a hotel room, they can't get 
a cab, they can't get a plane home, they can't buy gas for their 
car and get stranded somewhere, and they become easy pickings 
for physical crimes like assault, rape, theft of vehicle, etc. 
That's not acceptable.

In order to be effective the MAC has to make success so unlikely
that submitting a fraudulent transaction has a higher cost than 
its amortized benefit.  Since the botnets are stealing their 
electricity and bandwidth anyway, the absolute cost to black 
hats of submitting a fraudulent transaction is very very close 
to zero.  What we have to look at then is their opportunity cost.  

Consider the things a botted machine could do with a couple 
kilobytes of bandwidth and a couple milliseconds of compute time.  
It could send a spam or it could send a fraudulent transaction 
to a bank with a random MAC.  It will do whichever is considered 
most profitable by the operator of the botnet.  

Note: with spamming there's almost no chance of arrest.  Receiving
money via a fraudulent transaction submitted to a bank *MIGHT* be 
made more risky, so if that actually happens then there's an 
additional risk or cost associated with successful fraud attempts, 
which I don't account for here. But ignoring that because I don't 
know how to quantify it: 

In late 2008, ITwire estimated that 7.8 billion spam emails were 
generated per hour. (http://www.itwire.com/content/view/19992/53/)
Consumer Reports estimates consumer losses due to phishing at a 
quarter-billion dollars per year. 
(http://www.consumerreports.org/cro/magazine-archive/june-2009/
electronics-computers/state-of-the-net/phishing-costs-millions/
state-of-the-net-phishing-costs-millions.htm)

Check my math, but if we believe those sources, then that puts 
the return on sending one spam email at 1/546624 of a dollar, 
or about one point eight ten-thousandths of a penny. If we can 
make submitting a fraudulent transaction return less than that, 
then the botnets go on sending spams instead of submitting 
fraudulent transactions and our banking infrastructure is 
relatively safe. For now.  (Just don't think about our email 
infrastructure - it's depressing). 
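The opportunity-cost comparison above can be made concrete. A sketch, taking the author's figure of roughly 1/546624 of a dollar per spam and a hypothetical $2000 drain attempt; with a d-digit decimal MAC, a random guess succeeds with probability 10^-d:

```java
// Expected return per fraudulent attempt vs. per spam, per the
// (rough, author-supplied) figures in the text above.
public class FraudVsSpam {
    public static void main(String[] args) {
        double spamReturn = 1.0 / 546_624;     // dollars per spam (author's estimate)
        double amount = 2000.0;                // hypothetical maximum drain attempt
        for (int digits : new int[]{4, 6, 8}) {
            double pGuess = Math.pow(10, -digits); // random-MAC success probability
            double expected = amount * pGuess;     // expected dollars per attempt
            System.out.printf("%d digits: $%.6f per attempt (spam: $%.6f)%n",
                              digits, expected, spamReturn);
        }
    }
}
```

By this crude measure even an 8-digit code returns more per attempt than sending a spam, which is consistent with the author's point that a 4-6 digit MAC is far too short once attackers can submit guesses at near-zero cost.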

If a fraud attack seeks to drain the account, it'll go 
for about the maximum amount it expects the bank to honor, 
which means, maybe, a couple thousand dollars (most checking
accounts have overdraft 

Re: Crypto dongles to secure online transactions

2009-11-25 Thread Darren J Moffat

Peter Gutmann wrote:

external data from finding its way onto their corporate networks (they are
really, *really* concerned about this).  If you wanted this to work, you'd
need to build a device with a small CMOS video sensor to read data from the
browser via QR codes and return little more than a 4-6 digit code that the
user can type in (a MAC of the transaction details or something).  It's
feasible, but not quite what you were thinking of.


That reminds me of the Lenslok copy protection device on Elite (and 
other) games from the '80s [1].


[1] http://www.birdsanctuary.co.uk/sanct/s_lenslok.php


--
Darren J Moffat



[fc-announce] FC 2010: Call for Posters. Accepted Papers.

2009-11-25 Thread R.A. Hettinga



Begin forwarded message:

From: Radu Sion s...@cs.sunysb.edu
Date: November 23, 2009 8:42:06 AM GMT-04:00
To: fc-annou...@ifca.ai
Subject: [fc-announce] FC 2010: Call for Posters. Accepted Papers.


Financial Cryptography and Data Security
Tenerife, Canary Islands, Spain
25-28 January 2010

http://fc10.ifca.ai

Dear Colleagues,

We would like to invite you to submit a poster (deadline extended to
December 3rd) and participate in the 2010 Financial Cryptography and
Data Security Conference, January 25-28, 2010 in Tenerife, Canary
Islands, Spain, a boat-ride away from Morocco.

We had an extremely competitive review process this year. Out of 130
submissions we accepted 19 as FULL papers (acceptance rate: 14.6%) and
15 as SHORT papers (acceptance rate: 26.1%) for a total of 34 presentations.
Additionally we are glad to have 3 workshops co-located with FC this year,
with an additional 19-20 papers.

FC 2010 will thus feature close to 55 high quality paper
presentations, 2-3 panels, 3 workshops, 3 distinguished lectures in
the main conference (and several additional talks in the workshops),
as well as a great social and networking program, all hosted in a 5
star hotel in a most beautiful location.

The following is a preliminary (several papers are accepted
conditional on successful shepherding) list of all the 54 papers
accepted at the main FC conference as well as at the workshops.

---

FC 2010 FULL PAPERS (19 papers, 14.6% acceptance rate)

+ Dan Kaminsky, IOActive, Len Sassaman, Meredith Patterson, KU
  Leuven, PKI Layer Cake: New Collision Attacks Against the Global
  X.509 Infrastructure

+ Frank Stajano, University of Cambridge, Ford-Long Wong, Bruce
  Christianson, Multichannel protocols to prevent relay attacks

+ Tom Chothia, University of Birmingham, Vitaliy Smirnov, A
  Traceability Attack Against e-Passports

+ Octavian Catrina, Amitabh Saxena, International University in
  Germany, Secure Computation With Fixed-Point Numbers

+ Paul Karger, IBM TJ Watson Research Center, David  Toll, IBM, TJ
  Watson Research Center, Elaine Palmer, IBM, TJ Watson Research
  Center, Suzanne McIntosh, IBM, TJ Watson Research Center, Samuel
  Weber, Implementing a High-Assurance Smart-Card OS

+ Jan Camenisch, IBM Research - Zurich, Maria Dubovitskaya, IBM
  Russian Systems and Technology Laboratory Moscow Engineering
  Physics Institute, Gregory Neven, IBM Research - Zurich, Unlinkable
  Priced Oblivious Transfer with Rechargeable Wallets

+ Aline Gouget, Gemalto, Sebastien Canard, Orange, Multiple
  Denominations in E-cash with Compact Transaction Data

+ Joseph Bonneau, University of Cambridge, Mike Just, Greg Matthews,
  What's in a Name? Evaluating Statistical Attacks on Personal
  Knowledge Questions

+ Benedikt Westermann, Q2S - NTNU, Rolf Wendolsky, Jondos GmbH, Lexi
  Pimenidis, iDev GmbH, Dogan Kesdogan, University of Siegen,
  Cryptographic Protocol Analysis of AN.ON

+ Sven Schäge, Ruhr-Universität Bochum, Jörg Schwenk, A CDH-Based Ring
  Signature Scheme with Short Signatures and Public Keys

+ Emiliano De Cristofaro, UCI, Gene Tsudik, UCI, Practical Private
  Set Intersection Protocols with Linear Complexity

+ Mathias Björkqvist, Christian Cachin, IBM Research - Zurich, Robert
  Haas, Xiao-Yu Hu, Anil Kurmus, René Pawlitzek, Marko Vukolić, Design
  and Implementation of a Key-Lifecycle Management System

+ Tyler Moore, Harvard University, Benjamin Edelman, Harvard Business
  School, Measuring the Perpetrators and Funders of Typosquatting

+ Adam Barth, UC Berkeley, Ben Rubinstein, UC Berkeley, Mukund
  Sundararajan, Stanford, John Mitchell, Stanford, Dawn Song, UC
  Berkeley, Peter Bartlett, UC Berkeley, A Learning-Based Approach to
  Reactive Security

+ Kimmo Järvinen, Helsinki University of Technology, Vladimir
  Kolesnikov, Bell Laboratories, Ahmad-Reza Sadeghi, Ruhr-University
  Bochum, Thomas Schneider, Ruhr-University Bochum, Embedded SFE:
  Offloading Server and Network using Hardware Tokens

+ Tal Moran, Harvard University, Tyler Moore, Harvard University, The
  Phish Market Protocol: Securely Sharing Attack Data Between
  Competitors

+ Roger Dingledine,  The Tor Project, Tsuen-Wan Ngan, Dan Wallach,
  Building Incentives into Tor

+ Moti Yung, Columbia University, Aggelos Kiayias, Uconn,
  Tree-Homomorphic Encryption and Scalable Hierarchical Secret-Ballot
  Elections

+ Prithvi Bisht, University of Illinois, Chicago, A. Sistla,
  University of Illinois, Chicago, V.N. Venkatakrishnan, University
  of Illinois, Chicago, Automatically Preparing Safe SQL Queries

---

FC 2010 SHORT PAPERS (15 papers, 26.1% acceptance rate)

+ Xiaofeng Chen, Xidian University, Fangguo  Zhang, Haibo  Tian, Yi
  Mu, Kwangjo Kim, Three-round Abuse-free Optimistic Contract Signing
  With Everlasting Secrecy

+ Ryan Gardner, Johns Hopkins University, Sujata Garera, Johns
  Hopkins University, Aviel Rubin, Johns Hopkins University,
  Designing for Audit: A Voting Machine with a Tiny TCB

+ Felix Gröbert, Ruhr 

Proper way to check for JCE Unlimited Strength Jurisdiction Policy files

2009-11-25 Thread Kevin W. Wall
Hi list...hope there are some Java developers out there and that this is not
too off topic for this list's charter.

Does anyone know the *proper* (and portable) way to check if a Java VM is
using the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction
Policy files (e.g., for JDK 6, see
https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_Developer-Site/en_US/-/USD/viewproductdetail-start?productref=jce_policy-6-oth-...@cds-cds_developer.)

I would like something that works with at least Java 5 and later and that does
not have any false positives or negatives. I also would _prefer_ some test
that does not require locating and parsing the policy files within the JVM's
installed JCE local_policy.jar and US_export_policy.jar files as that seems
kludgey and might not work with future JDKs.

My first thought was just to try a naive dummy encryption of a test string
using a 256-bit AES key. However, I realized that this might still succeed
without having the JCE Unlimited Strength Jurisdiction Policy files installed
if the JCE provider being used implemented some exemption mechanism (i.e., see
javax.crypto.ExemptionMechanism) such as key recovery, key weakening, or
key escrow.

Then I saw that javax.crypto.Cipher class has a getExemptionMechanism() method
that returns either an ExemptionMechanism object associated with the Cipher
object OR null if there is no exemption mechanism being used.  So I figured
I could then do the naive encryption of some dummy string using 256-bit
AES/CBC/NoPadding and if that succeeded AND cipher.getExemptionMechanism()
returned null, THEN I could assume that the JCE Unlimited Strength Jurisdiction
Policy files were installed. (When the default strong JCE jurisdiction
policy files are installed, the max allowed AES key size is 128-bits.)
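An alternative probe, offered here as a sketch rather than a vetted answer:
javax.crypto.Cipher.getMaxAllowedKeyLength() (present since Java 5) reports the
maximum key length permitted by the installed jurisdiction policy for a given
transformation, without parsing the policy jars. Assuming it returns 128 for AES
under the default strong policy and a much larger value under the unlimited
policy, the check reduces to:

```java
import javax.crypto.Cipher;

public class JcePolicyProbe {
    // True if the installed jurisdiction policy permits AES keys > 128 bits,
    // which is the case only with the Unlimited Strength policy files.
    public static boolean isUnlimitedStrength() {
        try {
            return Cipher.getMaxAllowedKeyLength("AES") > 128;
        } catch (java.security.NoSuchAlgorithmException e) {
            return false; // no AES at all: certainly not unlimited strength
        }
    }

    public static void main(String[] args) {
        System.out.println("Unlimited strength policy: " + isUnlimitedStrength());
    }
}
```

This sidesteps the exemption-mechanism question entirely, since it consults the
policy directly rather than attempting a trial encryption.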

Does that seem like a sound plan or is there more that I need to check? If
not, please explain what else I will need to do.

Thanks in advance,

-kevin wall
-- 
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: TLS break

2009-11-25 Thread Nicolas Williams
On Wed, Nov 11, 2009 at 10:57:04AM -0500, Jonathan Katz wrote:
 Anyone care to give a layman's explanation of the attack? The 
 explanations I have seen assume a detailed knowledge of the way TLS/SSL 
 handle re-negotiation, which is not something that is easy to come by 
 without reading the RFC. (As opposed to the main protocol, where one can 
 find textbook descriptions.)

Not to sound like a broken record, and not to plug work I've done[*],
but IMO the best tool to apply to this situation, both to understand
the problem and produce solutions, and to analyze proposed solutions, is
channel binding [0].

Channel binding should be considered whenever one combines two (or more)
two-peer end-to-end security protocols.

In this case two instances of the same protocol are combined, with an
outer/old TLS connection and an inner/new connection negotiated with the
protection of the outer one.  That last part, "negotiated with the
protection of the outer one", may have led people to believe that the
combination technique was safe.  However, applying channel binding as an
analysis technique would have made it clear that the technique was
vulnerable to MITM attacks.

What channel binding does not give you as an analysis technique is
exploit ideas beyond "try being an MITM".

The nice thing about channel binding is that it allows you to avoid
having to analyze the combined protocols in order to understand whether
the combination is safe.  As a design technique all you need to do is
this: a) design a cryptographically secure name for an established
channel of the outer protocol, b) design a cryptographically secure
facility in the inner protocol for verifying that the applications at
both ends observe the same outer channel, and c) feed (a) to (b). If the
two protocols are secure and (a) and (b) are secure, then you'll have a
secure combination.
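A toy illustration of steps (a)-(c) above, sketched under the assumption that
the outer channel's name is a hash over its handshake (in the spirit of
tls-unique) and that the inner protocol holds a shared session key; all class
and method names here are hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class ChannelBindingSketch {
    // (c) Bind inner to outer: MAC the outer channel's unique name
    // under the inner protocol's session key; both ends compare tokens.
    static byte[] bindingToken(byte[] outerChannelId, byte[] innerKey)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(innerKey, "HmacSHA256"));
        mac.update("cb:".getBytes(StandardCharsets.US_ASCII));
        return mac.doFinal(outerChannelId);
    }

    public static void main(String[] args) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        // (a) The outer channel's cryptographically secure name.
        byte[] outer = sha.digest(
                "outer finished messages".getBytes(StandardCharsets.US_ASCII));
        byte[] innerKey = new byte[32]; // inner session key shared by both ends

        // (b) Same outer channel observed at both ends: tokens match.
        System.out.println(Arrays.equals(
                bindingToken(outer, innerKey), bindingToken(outer, innerKey)));

        // An MITM terminates the outer channel itself, so its channel name
        // differs and the comparison fails at the legitimate peer.
        byte[] mitm = sha.digest(
                "MITM's outer channel".getBytes(StandardCharsets.US_ASCII));
        System.out.println(Arrays.equals(
                bindingToken(mitm, innerKey), bindingToken(outer, innerKey)));
    }
}
```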

[*] I've written an RFC on the topic, but the idea isn't mine -- it
goes as far back as 1992 in IETF RFCs.  I'm not promoting channel
binding because I had anything to do with it, but because it's a
useful technique in combining certain cryptographic protocols that I
think should be more widely understood and applied.

[0] On the Use of Channel Bindings to Secure Channels, RFC5056.

Nico
-- 



Re: Proper way to check for JCE Unlimited Strength Jurisdiction Policy files

2009-11-25 Thread Kevin W. Wall
FWIW, my implementation of this for OWASP ESAPI is at:
http://code.google.com/p/owasp-esapi-java/source/browse/trunk/src/test/java/org/owasp/esapi/reference/CryptoPolicy.java

The main() is there just for stand-alone testing. From the ESAPI JUnit tests,
I call:
  if ( keySize > 128 && !CryptoPolicy.isUnlimitedStrengthCryptoAvailable() )
  {
    System.out.println("Skipping test for " + cipherXform + " where key size " +
                       "is " + keySize + "; install JCE Unlimited Strength " +
                       "Jurisdiction Policy files to run this test.");
    return;
  }

Would appreciate it if someone could take 5 min to look at this CryptoPolicy
source to see if it looks correct. It's only 90 lines including comments and
white space. I tried to check the exemption mechanism but am not sure I
am understanding it correctly.

Thanks,
-kevin


-- 
Kevin W. Wall
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents.-- Nathaniel Borenstein, co-creator of MIME



Re: Crypto dongles to secure online transactions

2009-11-25 Thread Anne Lynn Wheeler

On 11/21/2009 06:31 PM, Jerry Leichter wrote:

Well, my building card is plain white. If anyone duplicated it, there'd be nothing 
stopping them from going in. But then the actual security offered by those cards - and 
the building controls - is more for show (and I suppose to keep the riffraff 
out) than anything else.

My work card has my photo and name on it, but there's nothing to correlate name 
with underlying ID in normal operation. Snap a photo of the card while you 
clone it, make up a reasonable simulacrum with your own picture and name, and 
walk right in.

Not really more or less secure than the old days when you flashed your (easily 
copied) badge to a guard who probably only noticed that it was about the right 
size and had roughly the right color. But it's higher tech, so an improvement. 
:-)

Physical security for most institutions has never been very good, and 
fortunately has never *needed* to be very good. Convenience wins out, and 
technology gives a nice warm feeling. A favorite example: My wife's parents 
live in a secured retirement community. The main entrance has a guard who 
checks if you're on a list of known visitors, or calls the people you're 
visiting if not. Residents used to have a magnetic card, but that's a bit of 
a pain to use. So it was replaced by a system probably adapted from railroad 
freight car ID systems: You stick a big barcode in your passenger-side window, 
and a laser scanner on a post reads it and opens the door.


The simplest card/token is basically (single-factor) "something you have" 
authentication.

The cheapest RFID proximity card is just some static data that can be trivially 
copied and reproduced ... think of it somewhat akin to a wireless magstripe. It also 
has the "YES CARD" point-of-sale contact-card vulnerability: a compromised POS 
terminal records the static data from a card transaction, and that data is trivially 
used to produce a counterfeit card (little or no difference from a compromised POS 
terminal that records magstripe data). What made it worse than magstripe was that POS 
terminals were programmed to ask a validated chip three questions: 1) was the entered 
PIN correct, 2) should the transaction be done offline, and 3) is the transaction 
within the account credit limit. A counterfeit YES CARD would answer YES to all three 
questions (it wasn't even necessary to know the correct PIN, and deactivating the 
account, as with magstripe, wasn't sufficient to stop the fraud). A counterfeit YES 
CARD also defeated some other countermeasures that had been built into the 
infrastructure:
http://web.archive.org/web/20030417083810/http://www.smartcard.co.uk/resources/articles/cartes2002.html
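The offline-approval logic described above can be sketched as follows (with
hypothetical names, not any real terminal's code); the point is that the
terminal delegates the checks to the chip itself, so a counterfeit that always
answers yes defeats all three:

```java
// Hypothetical model of the three-question offline POS decision.
interface ChipCard {
    boolean isPinCorrect(String enteredPin);   // question 1
    boolean wantsOfflineTransaction();         // question 2
    boolean isWithinCreditLimit(long cents);   // question 3
}

// A counterfeit "YES CARD": cloned static data plus a chip that answers
// YES to every question, so the entered PIN is irrelevant and
// deactivating the account doesn't stop offline approvals.
class YesCard implements ChipCard {
    public boolean isPinCorrect(String p) { return true; }
    public boolean wantsOfflineTransaction() { return true; }
    public boolean isWithinCreditLimit(long c) { return true; }
}

public class PosTerminal {
    static boolean approveOffline(ChipCard card, String pin, long cents) {
        // The terminal trusts the chip's answers to all three questions.
        return card.isPinCorrect(pin)
            && card.wantsOfflineTransaction()
            && card.isWithinCreditLimit(cents);
    }

    public static void main(String[] args) {
        // Wrong PIN, arbitrary amount: still approved. Prints true.
        System.out.println(approveOffline(new YesCard(), "0000", 99_900));
    }
}
```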

A little more secure is a two-factor token that requires both the token and 
something you know. However, the assumption that two-factor authentication is more 
secure than single-factor authentication is based on the different factors having 
independent compromises. In the case of the YES CARD (supposedly two-factor) ... it 
was only necessary to compromise the token's static data ... and it wasn't even 
necessary to know the correct PIN. In the case of pin-debit cards ... skimming 
compromises of ATMs or point-of-sale terminals can collect both the PIN and the 
magstripe data at the same time (invalidating the assumption about independent 
compromises).

we had somewhat been asked in the mid-90s to participate in the x9a10 financial standard 
group (which had been given the requirement to preserve the integrity of the financial 
infrastructure for all retail payments) because of having worked on this stuff now 
frequently called "electronic commerce". This was *ALL* as in debit, credit, 
ACH, internet, point-of-sale, low-value, high-value, face-to-face, unattended, and/or 
transit. Transit turnstiles have similar requirements to building access ... although the 
contactless power limitations and contactless elapsed-time requirements can be more 
stringent than building access.

Somewhat as a result ... the related work on the AADS chip strawman had all sorts of 
requirements ... form-factor agnostic, very-very fast, very-very low-power, contactless 
capable ... but for high-value ... had to have *NO* static data and be very 
difficult to counterfeit ... while at the same time ... for low-value ... having as 
close to zero cost as possible.

Most of the alternatives from the period ... tended to only consider a very small subset of those 
requirements ... and therefore created solutions that served a single, specific operation and were 
ill-suited for general-purpose use. A simple issue was having the same token be multi-factor-authentication 
agile ... operating with a single factor (something you have) at a transit turnstile (no time to enter a PIN) ... 
but working the same way at a high-security building-access turnstile that requires multi-factor authentication 
(a something-you-have token in conjunction with a PIN something-you-know or palm 
finger