Re: A note on vendor reaction speed to the e=3 problem

2006-09-17 Thread Anne Lynn Wheeler

Taral wrote:

*That* is the Right Way To Do It. If there are variable parts (like
hash OID, perhaps), parse them out, then regenerate the signature data
and compare it byte-for-byte with the decrypted signature. Anything
you don't understand/control that might be variable (e.g. options) is
eliminated by this process.


FSTC originally created FSML for digitally signed xml-encoded data ... which 
was then donated to w3c and became part of the xml digital signature specification.

the issue for FSTC was e-checks ... where the originator took fields from the ACH 
transaction, encoded them in XML, digitally signed the XML encoding, and then 
appended the signature to the original ACH transaction. the recipient received the 
ACH transaction, duplicated the original XML encoding process, computed the hash ... 
and then compared it to the decoded signature (from the ACH transaction append field).

the original issue for FSML was that XML didn't have a bit-deterministic 
encoding process ... which could result in the originator and the recipient 
getting different results doing XML encoding of ACH transaction fields.
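
a rough sketch of the recipient-side flow (purely illustrative ... the transaction 
type and helper functions here are hypothetical, not from the FSTC/FSML or x9.59 
specifications):

/* illustrative sketch of verify-by-regeneration; the transaction type
 * and helper functions are hypothetical, not from any actual spec */
#include <stddef.h>

struct ach_txn;   /* the received transaction, append/addenda field included */

size_t encode_fields_deterministic(const struct ach_txn *txn,
                                   unsigned char *out, size_t out_len);
void   compute_hash(const unsigned char *data, size_t len,
                    unsigned char hash[20]);
int    hash_matches_decoded_signature(const struct ach_txn *txn,
                                      const unsigned char hash[20]);

int verify_appended_signature(const struct ach_txn *txn)
{
    unsigned char encoding[4096];
    unsigned char hash[20];

    /* re-create, deterministically, the exact encoding the originator signed */
    size_t n = encode_fields_deterministic(txn, encoding, sizeof(encoding));

    /* hash the regenerated encoding and compare it against the hash
     * recovered from the signature carried in the append/addenda field */
    compute_hash(encoding, n, hash);
    return hash_matches_decoded_signature(txn, hash);
}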

the x9.59 financial transaction standard specified something similar
http://www.garlic.com/~lynn/x9.59.html#x959

which allowed originator and recipient to perform deterministic encoding of 
standard financial transaction (in manner similar to FSTC e-check process) ... 
where the signature was carried in standard electronic transaction append 
field. the base standard specified ASN.1 encoding ... but the fully constituted 
x9.59 fields included a version field ... the purpose of which included being 
able to specify an x9.59 version that used XML encoding (rather than ASN.1 
encoding).

the standard just specified all the fields and ordering for the encoding.

there were sample mappings between the fields in the standard and fields in 
various existing financial transactions. if x9.59 called for fields that weren't 
part of a specific financial transaction ... then those fields needed to be carried 
in the transaction append/addenda field, along with the digital signature (i.e. the 
digital signature was appended to the standard transaction in unencoded form; it 
wasn't required that the encoded form be transmitted ... just that the encoded form 
could be reproduced in a deterministic manner).


old write-up giving correspondence between x9.59 fields and some fields from some 
common financial transaction formats (includes a proposed xml tagged encoding)
http://www.garlic.com/~lynn/8583flow.htm

part of the issue for the x9.59 specification was the requirement for a 
standard that preserved the integrity of the financial infrastructure for all 
retail payments (ALL, including point-of-sale).

A typical point-of-sale payment card transaction averages 60-80 bytes. By 
comparison, some of the PKI digital signature based specifications from the 
period had enormous payload bloat, resulting in 4k-12k bytes ... i.e. increasing 
the transaction payload size by two orders of magnitude (100 times).
http://www.garlic.com/~lynn/subpubkey.html#x959
http://www.garlic.com/~lynn/subpubkey.html#certless

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: A note on vendor reaction speed to the e=3 problem

2006-09-17 Thread Jon Callas

This amounts to *not* using ASN.1 - treating the ASN.1
data as mere arbitrary padding bits, devoid of
information content.


That is correct; it has the advantage of being merely a byte string
that denotes a given hash.


Jon


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: [cryptography] Re: Why the exponent 3 error happened:

2006-09-17 Thread Eric Young

James A. Donald wrote:

--
James A. Donald wrote:
 Code is going wrong because ASN.1 can contain
 complicated malicious information to cause code to go
 wrong.  If we do not have that information, or simply
 ignore it, no problem.

Ben Laurie wrote:
 This is incorrect. The simple form of the attack is
 exactly as described above - implementations ignore
 extraneous data after the hash. This extraneous data
 is _not_ part of the ASN.1 data.

But it is only extraneous because ASN.1 *says* it is
extraneous.

If you ignore the ASN.1 stuff, treat it as just
arbitrary padding, you will not get this problem.  You
will look at the rightmost part of the data, the low
order part of the data, for the hash, and lo, the hash
will be wrong!

This is a question I would not mind having answered; while the exponent 
3 attack works when there are low bits to 'modify', there has been talk 
of an attack where the ASN.1 is correctly right-justified (the hash occupies 
the least significant bytes), but incorrect ASN.1 encoding is used to add 
'arbitrary' bytes before the hash.  So in this case some of the most 
significant bytes are fixed, the least significant bytes are fixed, but 
some in the middle can be modified.  Does the exponent 3 attack work in 
this case?  My personal feeling is that this would be much harder, but is 
such an attack infeasible?
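
For comparison, the low-bits form of the attack that is known to work goes 
roughly like this with a 1024-bit modulus and SHA-1 (numbers approximate, 
for illustration only).  The fixed part 00 01 FF 00 || DigestInfo is about 
39 bytes, i.e. roughly the top 312 bits, leaving roughly 712 low-order bits 
that a sloppy verifier never looks at:

    T   = 00 01 FF 00 || DigestInfo(hash) || 00 ... 00   (low ~712 bits zero)
    s   = ceil(cuberoot(T))                               (s is roughly 2^336)
    s^3 = T + delta,  with delta <= 3s^2 + 3s + 1, far below 2^712

so s^3 still fits below the modulus, agrees with T in the entire fixed top 
part, and only the ignored low bytes differ -- the verifier accepts s.  In 
the middle-bytes case the free region is bounded on both sides, so this 
simple rounding trick doesn't apply directly.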


This issue about ASN.1 parameters being an evil concept goes away if the 
attack only works when the least significant bytes are modifiable.


eric

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: A note on vendor reaction speed to the e=3 problem

2006-09-17 Thread David Shaw
On Sat, Sep 16, 2006 at 12:35:08PM +1000, James A. Donald wrote:
 --
 Peter Gutmann wrote:
    How does [GPG] handle the NULL vs. optional
   parameters ambiguity?
 
 David Shaw:
  GPG generates a new structure for each comparison, so
  just doesn't include any extra parameters on it.  Any
  optional parameters on a signature would cause that
  signature to fail validation.
 
  RFC-2440 actually gives the exact bytes to use for the
  ASN.1 stuff, which nicely cuts down on ambiguity.
 
 This amounts to *not* using ASN.1 - treating the ASN.1
 data as mere arbitrary padding bits, devoid of
 information content.

That is correct.  OpenPGP passes the hash identification in the
OpenPGP data as well as encoded in ASN.1 for the PKCS-1 structure.
Since there is another source for the information, it is unnecessary
to generate or parse ASN.1.  In the case of GPG specifically (other
implementations may do the same, but I can't say for sure), all ASN.1
data is hardcoded as opaque strings.
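
As an illustration of that construct-and-compare approach (a sketch only, 
not GPG's actual code), using the SHA-1 prefix bytes that RFC-2440 lists:

/* sketch of construct-and-compare verification for a SHA-1 signature;
 * an illustration of the approach, not code from any implementation */
#include <string.h>
#include <stddef.h>

/* the exact ASN.1/DigestInfo prefix for SHA-1 as given in RFC-2440,
 * treated as an opaque string */
static const unsigned char sha1_prefix[15] = {
    0x30, 0x21, 0x30, 0x09, 0x06, 0x05, 0x2b, 0x0e,
    0x03, 0x02, 0x1a, 0x05, 0x00, 0x04, 0x14
};

/* em = result of the RSA public-key operation (em_len = modulus length),
 * hash = the 20-byte SHA-1 value the verifier computed itself */
int check_sha1_sig(const unsigned char *em, size_t em_len,
                   const unsigned char hash[20])
{
    unsigned char expected[1024];   /* big enough for common modulus sizes */
    size_t i, pad_len;

    if (em_len > sizeof(expected) || em_len < 3 + sizeof(sha1_prefix) + 20 + 8)
        return 0;
    pad_len = em_len - 3 - sizeof(sha1_prefix) - 20;

    /* rebuild the entire expected block: 00 01 FF..FF 00 prefix hash */
    expected[0] = 0x00;
    expected[1] = 0x01;
    for (i = 0; i < pad_len; i++)
        expected[2 + i] = 0xff;
    expected[2 + pad_len] = 0x00;
    memcpy(expected + 3 + pad_len, sha1_prefix, sizeof(sha1_prefix));
    memcpy(expected + 3 + pad_len + sizeof(sha1_prefix), hash, 20);

    /* one byte-for-byte comparison -- nothing in em is ever parsed */
    return memcmp(expected, em, em_len) == 0;
}

The decrypted signature block is never interpreted; it either matches the 
block the verifier built, byte for byte, or the signature is rejected.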

David

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Why the exponent 3 error happened:

2006-09-17 Thread Whyte, William
 This is incorrect. The simple form of the attack
 is exactly as described above - implementations
 ignore extraneous data after the hash. This
 extraneous data is _not_ part of the ASN.1 data.
 
 James A. Donald wrote:
But it is only extraneous because ASN.1 *says* it is
extraneous.

No. It's not the ASN.1 that says it's extraneous, it's the
PKCS#1 standard. The problem is that the PKCS#1 standard
didn't require that the implementation check for the
correct number of ff bytes that precede the BER-encoded
hash. The attack would still be possible if the hash
wasn't preceded by the BER-encoded header.
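
A strict check of that form looks something like this (a sketch only, not 
from any particular implementation); the point is that the number of 0xff 
bytes, and therefore the position of everything after them, is completely 
determined by the modulus length:

/* sketch of a strict PKCS#1 v1.5 check: buffer holds the result of the
 * public-key operation (modulus_len bytes); digestinfo is the expected
 * BER header plus hash (di_len bytes) that the verifier built itself */
#include <string.h>
#include <stddef.h>

int strict_pkcs1_check(const unsigned char *buffer, size_t modulus_len,
                       const unsigned char *digestinfo, size_t di_len)
{
    size_t i, pad_end;

    if (modulus_len < di_len + 11)          /* 00 01, >= 8 x 0xff, 00 */
        return 0;
    pad_end = modulus_len - di_len - 1;

    if (buffer[0] != 0x00 || buffer[1] != 0x01)
        return 0;
    for (i = 2; i < pad_end; i++)           /* every padding byte is 0xff */
        if (buffer[i] != 0xff)
            return 0;
    if (buffer[pad_end] != 0x00)            /* separator exactly here */
        return 0;
    /* digestinfo must fill the remainder -- no room for extra bytes */
    return memcmp(buffer + pad_end + 1, digestinfo, di_len) == 0;
}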

William

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: A note on vendor reaction speed to the e=3 problem

2006-09-17 Thread Whyte, William
   RFC-2440 actually gives the exact bytes to use for the
   ASN.1 stuff, which nicely cuts down on ambiguity.
 
 This amounts to *not* using ASN.1 - treating the ASN.1
 data as mere arbitrary padding bits, devoid of
 information content.

Again, not quite right. You have to do a memcmp() and
make sure you've got the right arbitrary padding bits.

Anyway, the attack applies even if you throw away the
ASN.1 data. 

William

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Exponent 3 damage spreads...

2006-09-17 Thread David Wagner
James A. Donald [EMAIL PROTECTED] writes:
Parameters should not be expressed in the relevant part
of the signature.  The only data that should be
encrypted with the RSA private key and decrypted with
the public key is the hash result itself, and the
padding.  If the standard specifies that additional
material should be encrypted, the standard is in error
and no one should follow it.

I agree with this comment, and with many of the other sensible
comments you have made in this thread.

I would modify what you said slightly: it may be reasonable to include a
field identifying the hash algorithm alongside the hash digest.  But apart
from that, throwing in additional optional parameters strikes me as just
asking for trouble.

It seems to me that e=3 is a distraction.  I think that these security
holes have revealed some more fundamental issues here that are independent
of the value of e you use.

It seems to me that the problems can be attributed to two problems:
(a) implementation bugs (failures to implement the spec faithfully); and
(b) ad hoc signature schemes that have never been adequately validated.
In more detail:

  (a) Any implementation that doesn't check whether there is extra
  junk left over after the hash digest isn't implementing the PKCS#1.5
  standard correctly.  That's a bug in the implementation.  Of course,
  as we know, if you use buggy implementations that fail to implement
  the specification faithfully, all bets are off.

  (b) The discussion of parameter fields in PKCS#1.5 signatures
  illustrates a second, orthogonal problem.  If your implementation
  supports appending additional parameter fields of some general
  structure, then you have not implemented conventional PKCS#1.5
  signatures as they are usually understood; instead, you have implemented
  some extension.  That raises a natural question: Why should we think
  that the extended scheme is still secure?  I see no reason to think
  that throwing in additional parameters after the hash digest is a safe
  thing to do.  I suggest that part of the problem here is that people
  are using signature padding schemes that have not been validated and
  have not been proven secure.  These PKCS#1.5 variants that allow you to
  include various optional ASN.1 crud alongside the hash digest have never
  been proven secure.  These days, using an ad hoc padding scheme that
  has not been proven secure is asking for trouble.  Why are people still
  deploying cryptographic schemes that haven't been properly validated?

I would suggest that there are two lessons we can learn from this
experience: (a) maybe more attention needs to be paid to verifying
that our implementations correctly implement the specification; and,
(b) maybe more attention needs to be paid to validating that the spec
defines a cryptographic mode of operation that is sensible and secure --
and provable security might be a good starting point for this.

Consequently, I think the focus on e=3 is misguided.  I think we should
be more concerned by the fact that our crypto implementations have
implementation bugs, and that our specs were never adequately validated.
This time, the impact of those failures may have been worse for signatures
using e=3, but it seems to me that this is more an accident than anything
particularly fundamental.  The latest problems with e=3 are just the symptom,
not the root cause.  I think it's worth putting some effort into treating
the root cause, not just the symptom.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: RSA SecurID SID800 Token vulnerable by design

2006-09-17 Thread Paul Zuefeldt
I wouldn't dispute any of the arguments made in the original or subsequent 
posts on this topic pointing out that the programmatic interface to the 
device opens a security hole. But I think it needs to be said that this is 
only an issue in environments where trojans, etc., can infiltrate the machine. 
Acknowledged... that is probably the case in 99.99% of applications.


But in defense of the product, there are server-to-server type applications 
that don't involve a human and that wouldn't be able to provide this style of 
two-factor authentication without a programmatic interface. And without 
hardware-based security solutions, these types of systems are vulnerable to 
compromise of keys and secrets by administrators. With a little physical 
security and isolation from the types of use that put them at risk for 
trojans, etc., the security hole under fire doesn't really exist. These 
systems do gain more security... by providing a device that doesn't allow an 
administrator to walk away with the secrets.


Maybe server-to-server applications weren't really the intended market for 
this particular product, but the point is that you need to be careful with 
blanket criticisms.


Regards,
Paul Zufeldt 



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: A note on vendor reaction speed to the e=3 problem

2006-09-17 Thread James A. Donald

--
On 9/15/06, David Shaw [EMAIL PROTECTED] wrote:
 GPG was not vulnerable, so no fix was issued.
 Incidentally, GPG does not attempt to parse the
 PKCS/ASN.1 data at all.  Instead, it generates a new
 structure during signature verification and compares
 it to the original.

Taral wrote:
 *That* is the Right Way To Do It. If there are
 variable parts (like hash OID, perhaps), parse them
 out, then regenerate the signature data and compare it
 byte-for-byte with the decrypted signature. Anything
 you don't understand/control that might be variable
 (e.g. options) is eliminated by this process.

 I don't think there's anything inherently wrong with
 ASN.1 DER in crypto applications.

If there are no options, you are not using ASN.1 DER.
You are using some random padding bytes that happen to
be equal to ASN.1 DER.

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 mMZpx7gaL6S/5STlYWv0A0ZM+HqCZSD2m0ClWjxL
 4UR16e+x3Uv/VW8C0Swxx9XMPtH99PEBNIc6BzpkQ

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: A note on vendor reaction speed to the e=3 problem

2006-09-17 Thread James A. Donald

--
Whyte, William wrote:
 Anyway, the attack applies even if you throw away the
 ASN.1 data.

If you ignore the ASN.1 data you expect the hash to be
in a fixed byte position, so the attack does not apply.

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 qF2+GCfNPchHe4vzSkkYoOEjOI5i/kZtLIlyTUbX
 45tXJAuT/Tj9w0qpg0VFij8GrtY2JXG05fj6YE6M2

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Fw: [Cfrg] Invitation to review Bluetooth Simple Pairing draft specification

2006-09-17 Thread Steven M. Bellovin
Forwarded with permission.  



Begin forwarded message:

Date: Fri, 15 Sep 2006 17:17:55 -0700
From: Robert Hulvey [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: [Cfrg] Invitation to review Bluetooth Simple Pairing draft
specification


Hello,
 
My name is Robert Hulvey and I am a Systems Engineer with Broadcom Corp.
working on Bluetooth products.  I participate in several groups within the
Bluetooth Special Interest Group (SIG) including the Core Specification
Working Group (CSWG), the Human Interface Device (HID) Working Group, and
the Bluetooth Architecture Review Board (BARB).  Within the CSWG, we have
been developing a feature called Simple Pairing to address the weaknesses
which were part of the original Bluetooth specification's pairing
mechanism. Our hope is that the new pairing method will be FIPS compliant,
and as such we would appreciate your review and feedback on whether we are
on track to achieve this goal.  Pairing refers to the method of
associating 2 devices so that they can communicate via the Bluetooth
wireless protocol. 
Note that Simple Pairing is just a first step, and does nothing to change
the Bluetooth encryption mechanism (the Massey-Rueppel stream cipher, also
known within the specification as E0).  We anticipate changing to AES in
counter-mode, similar to what WiFi currently uses, in a future version of
the specification.
 
The following is a link to a whitepaper which has been made publicly
available for the express purpose of encouraging outside review of the
draft specification.  Please feel free to forward this to any other
interested parties.
 
See:
http://www.bluetooth.com/Bluetooth/Apply/Technology/Research/Simple_Pairing.htm
 
Please send any feedback to the address shown in the document
([EMAIL PROTECTED]), but please also copy me at [EMAIL PROTECTED]
 
Thank you for your time.
 
Best Regards,
-Rob
 






Robert W. Hulvey
Principal Systems Engineer, Mobile & Wireless Group
Broadcom Corporation
16215 Alton Parkway
Irvine, CA 92618
[EMAIL PROTECTED]
http://www.broadcom.com/
tel: 310-384-0996
mobile: 949-926-6239


 



--Steven M. Bellovin, http://www.cs.columbia.edu/~smb


Re: Why the exponent 3 error happened:

2006-09-17 Thread Hal Finney
For another example of just how badly this kind of thing can be done,
look at this code excerpt from Firefox version 1.5.0.7, which is the
fixed version.  There are two PKCS-1 parsing functions, one which returns
the hash and its prefix, the other of which is given the hash and asked
whether it matches the RSA-signed value.  This is from the latter one:

/*
 * check the padding that was used
 */
if (buffer[0] != 0 || buffer[1] != 1)
goto loser;
for (i = 2; i < modulus_len - hash_len - 1; i++) {
if (buffer[i] == 0)
break;
if (buffer[i] != 0xff)
goto loser;
}

/*
 * make sure we get the same results
 */
if (PORT_Memcmp(buffer + modulus_len - hash_len, hash, hash_len) != 0)
goto loser;

PORT_Free(buffer);
return SECSuccess;

Here, buffer holds the result of the RSA exponentiation, of size
modulus_len, and we are passed hash of size hash_len to compare.

I don't think this code is used, fortunately.  It will accept anything
of the form 0, 1, 0, garbage, hash.  Just goes to show how easy it is
to get this kind of parsing wrong.
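
A minimal sketch of the missing check (illustrative only, not the fix that 
was actually shipped): after the for loop, require the 0x00 separator to 
sit immediately before the hash, e.g.

    /* all bytes between the 01 and the hash must have been 0xff, and the
     * byte just before the hash must be the 0x00 separator */
    if (i != modulus_len - hash_len - 1 || buffer[i] != 0)
        goto loser;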

(Note, this is from 
mozilla/security/nss/lib/softoken/rsawrapr.c:RSA_CheckSign())

Hal Finney

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: RSA SecurID SID800 Token vulnerable by design

2006-09-17 Thread Travis H.

On 9/15/06, Daniel Carosone [EMAIL PROTECTED] wrote:

But let's not also forget that these criticisms apply approximately
equally to smart card deployments with readers that lack a dedicated
pinpad and signing display.


This looks mildly interesting:
http://www.projectblackdog.com/product.html
I guess it uses an autorun file on Windows; I wonder whether most systems
allow you to effectively launch X.  The docs say it connects via ethernet
over USB, so you're effectively a thin X client.  Nice that it's open-source.

Good idea, still vulnerable to software surveillance and host OS.
No display.

This looks more interesting:

http://fingergear.com/bio_computer_on_a_stick.php

This has a display, a fingerprint reader, runs Linux, has many common apps
(office-compatible suite), IM, etc.  More relevant to the list, it has an OTP
generator, so this is effectively a security token.

See:
http://fingergear.com/faq1.php#4

Unfortunately, it looks like you can't reimage it without wiping
everything, and then you lose the OS.  I hope you can get a modifiable
OS image and install it just as one would save data to the USB drive,
but it could be impossible.


The worst cost for these more advanced methods may be in user
acceptance: having to type one or more things into the token, and then
the response into the computer.  A USB connected token could improve
on this by transporting the challenge and response, displaying the
challenge while leaving the pinpad for authentication and approval.


I wonder if the ubiquitous fingerprint reader could replace the need
for lots of buttons; controls tend to be the most expensive and fragile
part of electronic devices.

I wonder why nobody has an open-source cell phone that does voice
recognition yet.  That would seem to be the ideal solution, wouldn't
it?  You're already carrying one around, and it has a keypad for
dialing (which can double as a PIN pad) and an LCD panel for output; add a
fingerprint reader, enough juice to perform some crypto, and a USB or
bluetooth connector (for storage and communication), and it'd be perfect.
--
On the Internet noone knows you're a dog - except Bruce Schneier.
Unix guru for rent or hire -- http://www.lightconsulting.com/~travis/
GPG fingerprint: 9D3F 395A DAC5 5CCC 9066  151D 0A6B 4098 0C55 1484

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]