Re: RSA SecurID SID800 Token vulnerable by design

2006-09-14 Thread Vin McLellan


On Cryptography, and in several other online forums, Hadmut Danisch 
[EMAIL PROTECTED], a respected German information security analyst, 
recently published a harsh critique of one optional feature in the 
SID800, one of the newest of the six SecurID authentication tokens -- 
some with slightly different form-factors, others with additional 
security functions -- sold by RSA.  It's raised quite a stir, and I'd 
like to respond.


A personal authentication token, by classical definition, must be 
physical, personal, and difficult to counterfeit.  The most popular 
implementations in computer security move the calculation of a 
pseudo-random authentication code -- a so-called One-Time Password, 
or OTP-- off an employee's PC and into a hand-held hardware fob, 
small enough to be attached to a personal key chain.


RSA's mainstay token, the SID700 SecurID -- millions of which are 
used in over 20,000 enterprise installations worldwide, including 
many government agencies and financial institutions -- uses AES (the 
US cryptographic standard) to process Current Time and a 128-bit 
token-specific secret to generate and continuously display a series 
of 6-8 digit (or alphanumeric) OTP token-codes which change every 
60 seconds, and remain valid only for a couple of minutes.
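The time-synchronization at the heart of this scheme can be sketched in a few lines. The sketch is illustrative only: the real SID700 runs AES over Current Time and its 128-bit seed, whereas here SHA-256 (and a made-up seed) stand in so the example needs nothing beyond the standard library.

```python
import hashlib
import time

def token_code(seed: bytes, t=None, interval=60, digits=6):
    """OTP for the 60-second interval containing time t.

    Illustrative only: the real SID700 runs AES over Current Time and a
    128-bit token-specific seed; SHA-256 stands in here so the sketch
    needs nothing beyond the standard library.
    """
    step = int((time.time() if t is None else t) // interval)
    digest = hashlib.sha256(seed + step.to_bytes(8, "big")).digest()
    return str(int.from_bytes(digest[:8], "big") % 10**digits).zfill(digits)

seed = bytes(16)   # made-up 128-bit secret; each real token has its own
# Token and server derive the same code from the shared seed and clock:
assert token_code(seed, t=960.0) == token_code(seed, t=1019.0)  # same window
```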


In practice, an RSA authentication server can then independently 
calculate the token-code that is appearing on a specific SecurID at 
this particular moment; compare that against an OTP submitted by a 
pre-registered user, and validate a match.  RSA, which first 
introduced the SecurID in 1987, has always insisted on the necessity 
of two-factor authentication (2FA), where a remote RSA 
authentication server must validate both a SecurID token-code 
(evidence of something held) and a user-memorized PIN or password 
(something known).
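The server-side check described above can be sketched as follows, under the same illustrative assumptions (SHA-256 standing in for AES, hypothetical helper names): the server recomputes candidate codes for a small window of 60-second steps to tolerate token clock drift, and requires the PIN as the second factor.

```python
import hashlib
import hmac

def expected_code(seed: bytes, step: int, digits=6):
    # Stand-in derivation (the real server runs AES over time and seed).
    d = hashlib.sha256(seed + step.to_bytes(8, "big")).digest()
    return str(int.from_bytes(d[:8], "big") % 10**digits).zfill(digits)

def validate(seed: bytes, pin_ok: bool, submitted: str, now: int,
             interval=60, drift=1):
    """Accept the OTP if it matches the current window or a neighbour.

    Checking a few adjacent steps is how a server can tolerate token
    clock drift; drift=1 keeps a code valid for a couple of minutes.
    Both factors -- PIN and token-code -- must pass.
    """
    if not pin_ok:
        return False
    step = now // interval
    return any(hmac.compare_digest(submitted, expected_code(seed, s))
               for s in range(step - drift, step + drift + 1))

seed = bytes(16)                      # placeholder shared secret
otp = expected_code(seed, 1_000_000_000 // 60)
assert validate(seed, True, otp, now=1_000_000_000)
assert not validate(seed, False, otp, now=1_000_000_000)  # no second factor
```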


A stolen password can be reused indefinitely to masquerade as the 
legitimate user, often with the victim wholly unaware. A 
token-generated OTP, valid only briefly, is a far more robust 
authenticator.  With 2FA, if a SecurID is stolen or lost, it still 
can't be used to obtain illicit access to protected resources without 
the second secret: the user's memorized PIN or password.


The elegant simplicity of the traditional SecurID -- and patents on 
the mechanism by which the drift in each individual SecurID's 
internal clock is tracked by the RSA authentication server -- have 
allowed RSA's time-synched SecurID to dominate the market niche for 
hand-held OTP authentication devices for 20 years.


In a typical installation, anyone seeking to log on to a protected PC 
or network, or to access restricted online resources, must manually 
type in the OTP currently displayed on the SecurID -- as well as his 
memorized PIN or password -- to have his identity and access 
privileges validated. Network applications handle the combined 
SecurID pass-code like any long traditional password. The link 
between the user and the RSA infrastructure is often, but not always, 
an encrypted VPN channel. That's a local decision. Data exchanges 
between the RSA agent  and RSA authentication server -- which 
typically means between one of the 350-odd SecurID-aware network 
applications and the RSA Authentication Manager, using RSA's own 
protocol -- are always fully encrypted.


Mr. Danisch is an admirer of the classic SecurID (SID700), RSA's 
traditional hand-held token. His ire is directed at one of the two 
new hybrid SecurID designs that RSA has recently offered in an 
attempt to respond to new requirements in the boisterous and 
rapidly-evolving market for what's called strong authentication.


With the nascent prospect of a new billion-dollar market in consumer 
authentication for financial services boosted by US federal 
regulatory initiatives, RSA announced the SecurID Signing Token, the 
SID900. The SecurID Signing Token still has a time-synched OTP, but 
RSA added a keypad and a challenge/response function which 
successively authenticates the user, the remote server, and a 
specific financial transaction, before the transaction (e.g., a funds 
transfer) is executed.


On the other side of the market -- where again US laws and federal 
regulatory initiatives have boosted demand for internal controls and 
more accountability measures in enterprise IT -- RSA has introduced 
the SID800, another hybrid SecurID, to meet the requirements of 
organizations that want to move into a full public key infrastructure (PKI).


The SID800 SecurID is a multi-function authentication and 
cryptographic device that combines, in a single DPA-resistant token, 
the mobility and availability of the classic hand-held SecurID with 
a smart chip implementing Java Card v2.1.1 technology (essentially 
a virtual smart card) in a USB format. It looks like a slightly 
smaller version of the classic SecurID key fob, with a USB plug 
jutting out at one end. It can carry up to seven X.509 digital 
certificates for PKI, as well as account information and 

Re: Exponent 3 damage spreads...

2006-09-14 Thread James A. Donald

Simon Josefsson wrote:

Jostein Tveit [EMAIL PROTECTED] writes:


Anyone got a test key with a real and a forged signature to test
other implementations than OpenSSL?


There are actually two problems to consider...

First, there is the situation by Bleichenbacher at Crypto 06 and
explained in:

http://www.imc.org/ietf-openpgp/mail-archive/msg14307.html

That uses the fact that some implementations don't check for data beyond
the end of the ASN.1 structure.  OpenSSL was vulnerable to this,
GnuTLS was not, see my analysis for GnuTLS on this at:

http://lists.gnupg.org/pipermail/gnutls-dev/2006-September/001202.html

Eric already posted test vectors that trigger this problem.
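The shape of Bleichenbacher's forgery is easy to demonstrate. A sketch, assuming a 3072-bit modulus with e=3: put the padding, DigestInfo, and hash at the top of the block, leave the bottom as don't-care garbage, and take an integer cube root; any verifier that stops checking at the end of the ASN.1 structure accepts the cube.

```python
import hashlib

def icbrt(n: int) -> int:
    """Floor of the integer cube root, by Newton iteration."""
    x = 1 << -(-n.bit_length() // 3)   # initial guess >= cbrt(n)
    while True:
        y = (2 * x + n // (x * x)) // 3
        if y >= x:
            return x
        x = y

# Forge a block for a hypothetical 3072-bit e=3 key: padding + DigestInfo
# + SHA-1 hash up front, 345 bytes of "don't care" garbage behind.
mod_bytes = 384
digest_info = bytes.fromhex("3021300906052b0e03021a05000414")  # SHA-1 prefix
prefix = b"\x00\x01\xff\x00" + digest_info + hashlib.sha1(b"owned").digest()
garbage_bits = (mod_bytes - len(prefix)) * 8

lo = int.from_bytes(prefix, "big") << garbage_bits
sig = icbrt(lo + (1 << garbage_bits) - 1)   # cube root, rounded down

# sig**3 never wraps past the modulus, so "verifying" is just cubing.
forged = sig ** 3
assert forged >> garbage_bits == int.from_bytes(prefix, "big")
# A sloppy verifier that parses the padding and DigestInfo but ignores
# what follows the hash accepts sig as a signature on b"owned" under
# *any* 3072-bit key with e=3 -- no private key involved.
```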

The second problem is that the parameters field can ALSO be used to
store data that may be used to manipulate the signature value into
being a cube.  To my knowledge, this was discovered by Yutaka Oiwa,
Kazukuni Kobara, Hajime Watanabe.  I didn't attend Crypto 06, but as
far as I understand from Hal's post, this aspect was not discussed.
Their analysis isn't public yet, as far as I know.


It seems to me that the evil here is ASN.1, or perhaps standards that 
use ASN.1 carelessly and badly.


It is difficult to write code that conforms to ASN.1, easy to get it 
wrong, and difficult to say what in fact constitutes conforming to ASN.1, 
or at least to say what constitutes conforming to a 
standard written in ASN.1.


ASN.1 does the same job as XML, but whereas XML is painfully verbose and 
redundant, ASN.1 is cryptically concise.


People do not seem to get XML wrong all that often, while they endlessly 
get ASN.1 wrong, and endlessly disagree over what constitutes being right.


Obviously we do need a standard for describing structured data, and we 
need a standard that leads to that structured data being expressed 
concisely and compactly, but it seems to me that ASN.1 is causing a lot of 
grief.


What is wrong with it, what alternatives are there to it, or how can it 
be fixed?


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Exponent 3 damage spreads...

2006-09-14 Thread Peter Gutmann
Simon Josefsson [EMAIL PROTECTED] writes:

The second problem is that the parameters field can ALSO be used to store
data that may be used to manipulate the signature value into being a cube.
To my knowledge, this was discovered by Yutaka Oiwa, Kazukuni Kobara, Hajime
Watanabe.  I didn't attend Crypto 06, but as far as I understand from Hal's
post, this aspect was not discussed. Their analysis isn't public yet, as far
as I know.

Can you make a guess at what it is?  Is it the fact that you can have NULL
parameters for algorithms or optionally non-NULL parameters?  Changing this
could be tricky because there are all sorts of inconsistencies both in
standards and implementations; the standard practice has been to skip the
parameters field because if you don't, things break.

Peter.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Exponent 3 damage spreads...

2006-09-14 Thread Simon Josefsson
[EMAIL PROTECTED] (Peter Gutmann) writes:

 Simon Josefsson [EMAIL PROTECTED] writes:

The second problem is that the parameters field can ALSO be used to store
data that may be used to manipulate the signature value into being a cube.
To my knowledge, this was discovered by Yutaka Oiwa, Kazukuni Kobara, Hajime
Watanabe.  I didn't attend Crypto 06, but as far as I understand from Hal's
post, this aspect was not discussed. Their analysis isn't public yet, as far
as I know.

 Can you make a guess at what it is?  Is it the fact that you can have NULL
 parameters for algorithms or optionally non-NULL parameters?

Yes.  Implementations that didn't validate the parameters field are
potentially vulnerable; the attacker can put garbage in the parameters
field to make the signature value a cube.  Look at the certificates I
posted.

 Changing this could be tricky because there are all sorts of
 inconsistencies both in standards and implementations, the standard
 practice has been to skip the parameters field because if you don't,
 things break.

I don't think so.  The contents of the parameters field depends on the
hash algorithm.  As far as I know (but I didn't read the scriptures),
for normal hashes like SHA-1 the parameters field should not be used.
Checking that it is empty shouldn't be a problem.

Or do you know of real certificates with a non-NULL parameters field
in the signature?

It is important to keep in mind that this only applies to incorrect
implementations that handle keys with e=3.  Using Debian's
/etc/ssl/certs/ CA list, which on my system contains around 100 CAs, I
extracted the issuer name of the CAs with e=3:

Issuer: C=US,O=Digital Signature Trust Co.,OU=DSTCA E1
Issuer: C=US,O=Digital Signature Trust Co.,OU=DSTCA E2
Issuer: C=US,O=Entrust.net,OU=www.entrust.net/Client_CA_Info/CPS incorp. by 
ref. limits liab.,OU=(c) 1999 Entrust.net Limited,CN=Entrust.net Client 
Certification Authority
Issuer: C=US,O=Entrust.net,OU=www.entrust.net/CPS incorp. by ref. (limits 
liab.),OU=(c) 1999 Entrust.net Limited,CN=Entrust.net Secure Server 
Certification Authority

I'm not familiar with DST, so I wonder whether those two are widely
used.  https://secure.digsigtrust.com/ doesn't use it.

That leaves two Entrust certificates.  At least
https://www.entrust.com/ is protected by the second certificate above,
so it may be in wide use.

/Simon

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Why the exponent 3 error happened:

2006-09-14 Thread James A. Donald

Why the exponent 3 error happened:

The signature consists of a number that when cubed, is
equal modulo N to the padded hash of the quantity to be
signed.

Part of the padding is the ASN.1 encoding of the hash.

Now suppose we had not ASN.1 encoded the hash.

Suppose the padding was simply

010101010101010 ... 1010101010101 hash

with all leading zeros in the hash omitted, and four
zero bits showing where the actual hash begins.

Then the error would never have been possible.

ASN.1 provided additional redundant information, making
possible unexpected data layouts that should not
normally happen.  It had too much expressive power, too
much flexibility.  It could express cases that one does
not expect to deal with, could flex in more ways than
one's software is likely to be written for.

XML has even greater redundancy and flexibility, but has
the advantage that one can constrain it with a DTD that
guarantees that it will have the form, and only the
form, that one's software expects.

Still, usually the simplest way of dealing with data
that may be hostile and malicious is to use a data
format that is entirely free from redundancy, thus can
*only* have the expected form.  If there is no
redundancy in one's data, then one never has to deal with
cases where the data is inconsistent.

One is not always able to eliminate all redundancy, but
one always has to check that the data is of the expected
form, and the fewer forms it can take the easier that is.
ASN.1's crowning virtue is its flexibility, and we do
not want flexibility in possibly hostile data.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Exponent 3 damage spreads...

2006-09-14 Thread Peter Gutmann
Simon Josefsson [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] (Peter Gutmann) writes:
Simon Josefsson [EMAIL PROTECTED] writes:
The second problem is that the parameters field can ALSO be used to store
data that may be used to manipulate the signature value into being a cube.
To my knowledge, this was discovered by Yutaka Oiwa, Kazukuni Kobara, Hajime
Watanabe.  I didn't attend Crypto 06, but as far as I understand from Hal's
post, this aspect was not discussed. Their analysis isn't public yet, as far
as I know.

Can you make a guess at what it is?  Is it the fact that you can have NULL
parameters for algorithms or optionally non-NULL parameters?

Yes.  Implementations that didn't validate the parameters field are
potentially vulnerable; the attacker can put garbage in the parameters field
to make the signature value a cube.

In that case (and because of something else I thought of after I posted, I was
just heading out for dinner at the time) I think it's game over for RSA e=3
(see below).

Changing this could be tricky because there are all sorts of
inconsistencies both in standards and implementations, the standard
practice has been to skip the parameters field because if you don't,
things break.

I don't think so.  The contents of the parameters field depends on the hash
algorithm.  As far as I know (but I didn't read the scriptures), for normal
hashes like SHA-1 the parameters field should not be used.

It may or may not be used, depending on which standard you follow.  First of
all, even for the simple case of SHA-1, the parameters can be present or not.
See the note in RFC 3274:

  There are two possible encodings for the [...] parameters field which arise
  from the fact that when the 1988 syntax for AlgorithmIdentifier was
  translated into the 1997 syntax, the OPTIONAL associated with the
  AlgorithmIdentifier parameters got lost.  Later it was recovered via a
  defect report, but by then, everyone thought that algorithm parameters were
  mandatory.  Because of this, some implementations will encode null
  parameters as an ASN.1 NULL element and some will omit them entirely (see
  for example section 12 of CMS [RFC2630]).

So for both standards and implementations it's pretty much a coin-toss
(crap-shoot if you're in the US) as to what you'll find there.  Because of this,
standard practice in implementations has been to skip any parameters.
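Concretely, the two encodings RFC 3274 describes differ like this for a SHA-1 AlgorithmIdentifier (raw DER, OID 1.3.14.3.2.26):

```python
# AlgorithmIdentifier for SHA-1 (OID 1.3.14.3.2.26), DER-encoded both ways.
oid_sha1 = bytes.fromhex("06052b0e03021a")          # OBJECT IDENTIFIER

with_null = bytes.fromhex("3009") + oid_sha1 + bytes.fromhex("0500")
absent    = bytes.fromhex("3007") + oid_sha1        # parameters omitted

# Same algorithm, two byte-for-byte different encodings -- which is why
# implementations fell back to skipping the parameters field entirely.
assert with_null == bytes.fromhex("300906052b0e03021a0500")
assert absent    == bytes.fromhex("300706052b0e03021a")
```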

But wait, there's more!  Some AlgorithmIdentifiers have parameters that you
can set to either a large range of fixed values or even arbitrary values.  For
example the Ascom Tech IDEA AlgoID (which admittedly isn't a hash algorithm,
but bear with me) has three optional parameters for CFB mode which can take
various values and may or may not be present (AES almost took this path as
well, luckily NIST decided against it at the last minute and went with simple,
streamlined parameters).

OK, so you don't sign IDEA values so this isn't a problem.  However, a related
AlgoID is the DES one, which for CBC mode has as parameter an IV, which is
64 bits of arbitrary user-chosen data.  I've seen a banking security system from
either Belgium or the Netherlands that signs CBC-MACs (I'm using a borrowed
machine to send this so I can't provide names and numbers at the moment, it's
from memory), and knowing the organisation that produced it it wouldn't
surprise me if they also used keys with e=3.

So let's extend this further.  There have been a pile of designs for hash
algorithms with user-definable parameters.  If these ever get used in
standards then no doubt there'll be AlgoIDs defined that allow an attacker to
set arbitrary values in the AlgoID through them.  So the arms-race of trying
to track invalid data now becomes a problem of proving a negative, i.e.
proving that there isn't some AlgoID out there somewhere that allows you to
set one or two parameter bytes to arbitrary values.

But wait, there's more!  From what I understand of the attack, all you need
for it to work is for the sig.value to be a perfect cube.  To do this, all you
need to do is vary a few of the bytes of the hash value, which you can do via
a simple brute-force search.  So even with a perfect implementation that does
a memcmp() of a fixed binary string for all the data present but the hash, the
attack still works.

In either of these cases, RSA e=3 is dead.  Obesa cantavit.

So the fix isn't an ongoing penetrate-and-patch arms race to try and filter
out more and more hard-to-find possibilities, it's to immediately deprecate
e=3.  Grab a pile of IETF boilerplate, add a single sentence Don't use RSA
with e = 3 (I'd actually say e >= 17 since there's no good reason not to), and
apply it as a BCP by reference to SSL/TLS, IPsec, S/MIME, PGP, DNSSEC, and so
on.  There'll always be broken standards out there that require e=3 (I know of
at least one that uses e=2, and another that uses raw, unpadded RSA, and
another that... well you get the idea), but the only quick, sure fix is to
kill e=3, not to try and anticipate every potential way of trying to use it,
because you'll never be secure that way.

Re: Exponent 3 damage spreads...

2006-09-14 Thread Tim Dierks

On 9/14/06, James A. Donald [EMAIL PROTECTED] wrote:

It seems to me that the evil here is ASN.1, or perhaps standards that
use ASN.1 carelessly and badly.

It is difficult to write code that conforms to ASN.1, easy to get it
wrong, and difficult to say what in fact constitutes conforming to ASN.1,
or at least to say what constitutes conforming to a
standard written in ASN.1.

ASN.1 does the same job as XML, but whereas XML is painfully verbose and
redundant, ASN.1 is cryptically concise.

People do not seem to get XML wrong all that often, while they endlessly
get ASN.1 wrong, and endlessly disagree over what constitutes being right.


This problem is just as likely or more likely if we were using XML to
encode the hash inside the RSA-encrypted blob (signature). The
equivalents would be:

Appended garbage:
 <signed-hash>Valid-looking-hash</signed-hash>Garbage here
Or
 <signed-hash>Valid-looking-hash</signed-hash>[null byte]Garbage here

Interior garbage:
 <signed-hash legal-but-unparsed-attribute="Garbage
here">Valid-looking-hash</signed-hash>

or similar attacks. The problem is not XML or ASN.1: the problem is
that it's very demanding and tricky to write parsers that are
invulnerable to all the different kinds of malicious attack that are
out there.

If anything, I think XML is more vulnerable to such attacks because
its less-structured format makes it harder to write very strict
parsers. The actual way to design a system that was less vulnerable to
this attack would have been to use a much simpler data structure:
e.g., one could have said that the hashing algorithm must already be
known, so the size of the hash is known to be n bytes, and that the
data block should be a byte of value 0, followed by bytes of value FF,
with the last n bytes equal to the hash. Then it would have been a
no-brainer for anyone to write a precisely accurate parser and
validator, and we'd be less vulnerable to such oversights.

- Tim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: RSA SecurID SID800 Token vulnerable by design

2006-09-14 Thread Thor Lancelot Simon
On Wed, Sep 13, 2006 at 10:23:53PM -0400, Vin McLellan wrote:
 
[... a long message including much of what I can only regard as
 outright advertising for RSA, irrelevant to the actual technical
 weakness in the SID800 USB token that Hadmut described, and which
 Vin's message purportedly disputes.  It would be nice if, when confronted
 with such a response in the future, the moderator of this list would
 return it to its author with the requirement that the marketeering be
 stripped out before the actual content be forwarded to this list!  I
 have snipped everything irrelevant to my own response. ... ]

 None of these features -- none of the SID800's cryptographic 
 resources -- were of apparent interest to Mr. Danisch. He ignored 
 them all when he denounced the SID800 as vulnerable by design.

As well he should have, because they are utterly irrelevant to the
genuine design flaw which he pointed out, and which Vin seeks to
minimize (by burying it in irrelevancies?) here.

 What particularly disturbs Mr. D is one option among the SID800 
 SecurID's features which allows RSA's local client software to poll and 
 retrieve a single OTP from the token when the SID800 is plugged into 
 the PC's USB port.  Given the potential for malware to 
 invade and take control of any Windows PC -- malware that can seize 
 and misuse both the user's PIN and an OTP fresh from the USB bus -- 
 it was irresponsible, Danisch suggests, for RSA to make such an option 
 available to its customers.

And so it was.  Vin simply handwaves away the fact that if RSA's client
software can poll the token and retrieve the current OTP, so can any
malicious software running on the host to which the token is attached.

It is not correct to suggest that perhaps this could be done only once,
when the token is first plugged in to the host system's USB port,
because USB *by design* allows the host to cut and restore power to
devices under software control.  So even if the SID800 is somehow
intended to allow token retrieval only once upon plug-in (something
Vin seems to imply, but does not directly state), it can simply be
tricked, repeatedly, into thinking that it has just been plugged in.

 In the second version of the SID800 -- an option selectable by local 
 management pre-purchase, and burnt into the token's USB firmware by 
 RSA -- the user can select a menu in which he instructs the SecurID 
 to load one OTP token-code directly into the paste buffer, presumably 
 for immediate use. Since internal access to the SecurID's OTP via the 
 USB bus makes it a potential target for malware or intruders on the 
 computer, claimed Mr. Danisch, This is weak by design.  I beg to 
 differ. Effective IT security is about an appropriate balance, not 
 walls of iron or stone.

Good cooking is about full-bodied flavor, not wire rope or Persian
kittens; but let's leave the irrelevant analogy aside and stick to the
facts that seem to be discussed here, I suppose.

Vin claims that the user instructs the SecureID to load one OTP
token-code directly into the paste buffer.  This is a very, very odd
claim, because it implies that the user communicates directly with a
USB peripheral and instructs _the peripheral_ to autonomously load
a token-code -- some bytes -- into an area in host memory that is used
by the host operating system's user interface.  We should note that,
unlike Firewire, USB *does not include a mechanism by which a
peripheral on the bus may initiate a DMA transfer into the memory of
the host system that is the master of the bus* so clearly what Vin
claims cannot be strictly true.  What, then, should we think that it
likely means?  I think he must mean something like this:

  The user instructs the RSA-supplied application code running on the
   host system to retrieve one token code from the SecureID across the
   USB bus, and place that retrieved token code into the paste buffer.

If that is not what Vin means, I think that he should respond and say
exactly what he does mean, in a way that does not make reference to
mythical properties that USB peripherals do not have.

Now, consider what it means that it is even _possible_ for the RSA-
supplied application to retrieve a token code from the SID800 in this
way.  It means that by, at worst, cutting and restoring power to the
USB port in question, malicious software can retrieve *a new, current
token code* *any time it wants to do so*.  In other words, while, with
traditional SecureID tokens, it is possible for malicious software to
steal token codes typed by the user into a compromised host system _when
the user types them_, and by engaging in a man-in-the-middle scheme
impersonate the intended target system to the user _that once_ (since
SecureID token codes can not be used twice), this new system does, in
fact, open up the gaping new security hole Danisch claims it does:

   With this USB-connected token, malicious software on the host can poll
   the token and retrieve 

Re: Exponent 3 damage spreads...

2006-09-14 Thread Hal Finney
Peter Gutmann writes:
 But wait, there's more!  From what I understand of the attack, all you need
 for it to work is for the sig.value to be a perfect cube.  To do this, all you
 need to do is vary a few of the bytes of the hash value, which you can do via
 a simple brute-force search.  So even with a perfect implementation that does
 a memcmp() of a fixed binary string for all the data present but the hash, the
 attack still works.

I don't think this works. I tried with a 1024 bit key.  We want a cube root of
something between:

0x1FFFF...FF003021300906052B0E03021A0500041400...00

and

0x1FFFF...FF003021300906052B0E03021A05000414FF...FF

But actually the nearest cube root is:

0x1428A2F98D728AE223DDAB715BE250D0C288F10291631FBC061800CC36FA2DD3A60B7D03DA26F0840F25C

Cubing this gives:

0x1FFFC66E7388AFD22947A600FB19230A3162AB4A53B003B80F979B8E97D7DB74891A5769312C88639E645DD3DB79E32561BD7FF661977573AF888EF025ED0608245DE7048210C94AC32731DD6B19B2F25722E951F41C0

and cubing the next higher value gives:

0x200012A06F78681CDECFB70DC81AEE9F1B2FF7CBB6140D9A07D97209E81A4A2D957560CB04CF8F504EF90797FEBD799E9A64841F1168C13EC938E0D109610B8CC43EF3FDA8B374E3AD57AF2A0E084B15E8BB328384C05

So no variation on the hash value will produce something that is a
perfect cube.  Now, this is specific to 1024 bit keys, but larger keys
should be even more unfavorable.  As a general rule we can only force
the top 1/3 of the bits to be 1s as required, and the chances of getting
lucky will be worse for larger keys.

We could start adding in multiples of the modulus and look for perfect
cubes again, but basically the odds against are 1 in N^(2/3) so there
is no point.
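Hal's argument is easy to check mechanically: build a fully padded block, with no slack bytes for an attacker to vary, and look for a cube. A sketch, assuming the standard 1024-bit EMSA-PKCS1-v1_5 layout for SHA-1:

```python
import hashlib

def icbrt(n: int) -> int:
    """Floor of the integer cube root, by Newton iteration."""
    x = 1 << -(-n.bit_length() // 3)   # initial guess >= cbrt(n)
    while True:
        y = (2 * x + n // (x * x)) // 3
        if y >= x:
            return x
        x = y

# A correctly padded 1024-bit EMSA-PKCS1-v1_5 block for SHA-1: no room
# for garbage, so the attacker cannot adjust low-order bits.
digest_info = bytes.fromhex("3021300906052b0e03021a05000414")
block = (b"\x00\x01" + b"\xff" * 90 + b"\x00" + digest_info
         + hashlib.sha1(b"some message").digest())
m = int.from_bytes(block, "big")

r = icbrt(m)
# Neither the floor root nor its neighbour cubes back to m; the gap
# between consecutive cubes near 2^1008 is far larger than 2^670, so a
# properly padded value is essentially never a perfect cube.
assert r ** 3 != m and (r + 1) ** 3 != m
assert (r + 1) ** 3 - r ** 3 > 2 ** 670
```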

Hal Finney
PGP Corporation

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Exponent 3 damage spreads...

2006-09-14 Thread Greg Rose
So, there is at least one top-level CA installed in some common 
browsers (I checked Firefox) that uses exponent-3. It is Starfield 
Technologies Inc. Starfield Class 2 CA. There may well be 
others... I only looked far enough to determine that that was a 
problem.


So the next question becomes, what browsers used OpenSSL and/or their 
own broken code, and need to be patched? I have no idea.


Thanks to Alex Gantman for asking the question...

Greg.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Why the exponent 3 error happened:

2006-09-14 Thread Greg Rose

At 19:02  +1000 2006/09/14, James A. Donald wrote:

Suppose the padding was simply

010101010101010 ... 1010101010101 hash

with all leading zeros in the hash omitted, and four
zero bits showing where the actual hash begins.

Then the error would never have been possible.


I beg to differ. A programmer who didn't understand the significance 
of crypto primitives would (as many did) just search for the end of 
the padding to locate the beginning of the hash, and check that the 
next set of bytes were identical to the hash, then return true. So


01010101 ... 1010101010101 hash crappetycrap

would still be considered valid. There's a lot of code out there that 
ignored the fact that after the FFs was specific ASN.1 stuff, and 
just treated it as a defined part of the padding.
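The sloppy check Greg describes can be made concrete. A sketch, with a made-up 0x55 (binary 01010101) padding byte and a stand-in hash value:

```python
def sloppy_verify(block: bytes, digest: bytes) -> bool:
    # The mistake Greg describes: skip over the padding to find the hash,
    # check the next bytes match it, and never look at what follows.
    i = 0
    while i < len(block) and block[i] == 0x55:   # 01010101 padding bytes
        i += 1
    return block[i:i + len(digest)] == digest

digest = b"\x2a" * 20                            # stand-in hash value
assert sloppy_verify(b"\x55" * 16 + digest, digest)
# Trailing junk after the hash is silently accepted anyway:
assert sloppy_verify(b"\x55" * 16 + digest + b"crappetycrap", digest)
```

So even without ASN.1, a verifier that scans for the end of the padding instead of comparing the entire block still lets "crappetycrap" ride along, which is exactly the room the exponent-3 attack needs.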


Greg.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Exponent 3 damage spreads...

2006-09-14 Thread Greg Rose

At 23:40  +1200 2006/09/14, Peter Gutmann wrote:

But wait, there's more!  From what I understand of the attack, all you need
for it to work is for the sig.value to be a perfect cube.  To do this, all you
need to do is vary a few of the bytes of the hash value, which you can do via
a simple brute-force search.  So even with a perfect implementation that does
a memcmp() of a fixed binary string for all the data present but the hash, the
attack still works.


I thought this for a while, but no, it isn't true. Take a number k, 
which is of the order of 2^1008 (which is what a properly padded 
1024-bit RSA signature will look like numerically). So the cube root 
of k is a real number of the order of 2^336... call this k'. Now on 
average it will be within +/- 0.25 of the nearest integer, so for 
sake of argument let i = k' + 0.25 be an integer.


i^3 - k = (k' + 0.25)^3 - k
= k + 0.75*k'^2 + 0.1875*k' + 1/64 - k

which is of order 0.75*k^(2/3), ie, about 2^671. Unless you are using very 
large hashes indeed, the chance of a properly padded RSA signature 
being a perfect cube is vanishingly small.




In either of these cases, RSA e=3 is dead.  Obesa cantavit.


I don't yet agree with this conclusion.



  There'll always be broken standards out there that require e=3 (I know of
at least one that uses e=2, and another that uses raw, unpadded RSA, and
another that... well you get the idea), but the only quick, sure fix is to
kill e=3, not to try and anticipate every potential way of trying to use it,
because you'll never be secure that way.


I just have to mention that e=2 is Rabin signatures, and they have 
different and very stringent requirements for signatures. Maybe the 
same problem exists, maybe it doesn't, I don't know.


Greg.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: RSA SecurID SID800 Token vulnerable by design

2006-09-14 Thread Leichter, Jerry
| The problem is that _because there is an interface to poll the token for
| a code across the USB bus_, malicious software can *repeatedly* steal new
| token codes *any time it wants to*.  This means that it can steal codes
| when the user is not even attempting to authenticate
I think this summarizes things nicely.  Moving to a higher level of
abstraction:  With a traditional token, if the correct value has been
entered, we can reasonably assume intent on the part of the human
being in possession of the token to identify himself, and thus take
responsibility for some set of actions.  With the additional something
you know password associated with the token, we can further reasonably
assume that the person in possession of the token is in fact the person
who has the right to possess that token.*

In the case of a software-readable USB token, *neither* assumption is
reasonable.  The resulting authentication is very different in kind.

| It is noteworthy that a token that requires *any* kind of intervention
| by the user -- even something as simple as a press of a button on the
| token which illuminates when the host presents the token-code request --
| is invulnerable to such an attack...
Pressing the button supplies exactly the confirmation of intent that
was lost.  (However, it can't get you to the assumptions about the
right person having possession of the token.  The fingerprint scanning
technologies that one sees in some USB drives today would probably be
reasonable for that purpose - not that one has much information about
their false positive rates or how hard they really are to attack.
I don't know what the costs are, however - low enough for ~$100 drives,
maybe not low enough for an ID token.)

-- Jerry

* Yes, there are attacks that render these assumptions invalid.
Nothing is perfect.




Re: Why the exponent 3 error happened:

2006-09-14 Thread Victor Duchovni
On Thu, Sep 14, 2006 at 11:25:11AM -0400, [EMAIL PROTECTED] wrote:

 
 James A. Donald writes:
 -+---
  | snip
  |
  | ASN.1 provided additional redundant information, making
  | possible unexpected data layouts that should not
  | normally happen.  It had too much expressive power, too
  | much flexibility.  It could express cases that one does
  | not expect to deal with, could flex in more ways than
  | one's software is likely to be written for.
  |
  | snip
 
 Sir, there is a lesson here as important as
 Fred Brooks's "adding people to a late project
 makes it later," and I urge you to put this in
 some form of print at your earliest opportunity.
 No, not urge, but rather beg.

If so, I fear we are learning the wrong lesson, which, while valid in
other contexts, is not pertinent here. TLS must be flexible enough to
accommodate new algorithms, which means the data structures being
exchanged are malleable. Implementations must validate strict
adherence to the specifically defined form for the agreed algorithm,
but the ability to express other forms cannot be designed out.

This, in my view, has little to do with ASN.1, XML, or other encoding
frameworks. Thorough input validation is not yet routinely and
consistently practiced by most software developers. Software is almost
invariably written to parse formats observed in practice correctly, and is
then promptly declared to work. The skepticism necessary to continually
question the implicit assumption that the input is well-formed is perhaps
not compatible with being a well-socialized human. The attackers who ask
the right questions to break systems and the few developers who write
truly defensive code are definitely well off the middle of the bell-curve.

It is not just PKCS#1 or X.509v3 that presents opportunities for crafting
interesting messages. MIME, HTTP, HTML, XML, ... all exhibit similar
pitfalls. Loosely speaking, this looks like a variant of Goedel's theorem,
if the protocol is expressive enough it can express problematic assertions.

We can fine-tune some protocols to remove stupid needless complexity, but
enough complexity will remain to put the required implementation discipline
beyond the reach of most software developers (at least as trained today,
and it is likely not possible to design a training program that will
produce a preponderance of strong defensive programmers).
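Duchovni's point about strict validation can be made concrete. For a PKCS#1 v1.5 signature there is exactly one valid encoding for a given message and modulus length, so a verifier never needs to parse the decrypted block at all: it can reconstruct the single expected encoding and compare byte-for-byte. A minimal sketch in Python (the SHA-1 DigestInfo prefix is the fixed ASN.1 constant from PKCS#1; the function name and interface are mine, for illustration only):

```python
import hashlib

# Fixed ASN.1 DigestInfo prefix for SHA-1, as specified in PKCS#1
SHA1_DIGESTINFO = bytes.fromhex("3021300906052b0e03021a05000414")

def verify_pkcs1_v15_sha1(signature_block: bytes, message: bytes, k: int) -> bool:
    """Strict check: signature_block is s^e mod n as a k-byte big-endian
    string (k = modulus length in bytes).  Instead of parsing ASN.1 out of
    the block, rebuild the one well-formed EMSA-PKCS1-v1_5 encoding and
    require an exact match -- there is nothing for malleable input to flex."""
    t = SHA1_DIGESTINFO + hashlib.sha1(message).digest()
    ps_len = k - 3 - len(t)          # length of the FF padding string
    if ps_len < 8:                   # PKCS#1 requires at least 8 FF bytes
        return False
    expected = b"\x00\x01" + b"\xff" * ps_len + b"\x00" + t
    return signature_block == expected
```

Because the comparison covers every byte of the block, trailing garbage after the hash (the exact hole exploited by the exponent-3 forgeries) is rejected automatically.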

-- 

 /\  ASCII RIBBON                       NOTICE: If received in error,
 \ / CAMPAIGN      Victor Duchovni      please destroy and notify
  X  AGAINST       IT Security,         sender. Sender does not waive
 / \ HTML MAIL     Morgan Stanley       confidentiality or privilege,
                                        and use is prohibited.



Re: Why the exponent 3 error happened:

2006-09-14 Thread James A. Donald

--
Greg Rose wrote:
 At 19:02  +1000 2006/09/14, James A. Donald wrote:
 Suppose the padding was simply

 010101010101010 ... 1010101010101 hash

 with all leading zeros in the hash omitted, and four
 zero bits showing where the actual hash begins.

 Then the error would never have been possible.

James A. Donald:
 I beg to differ. A programmer who didn't understand
 the significance of crypto primitives would (as many
 did) just search for the end of the padding to locate
 the beginning of the hash, and check that the next set
 of bytes were identical to the hash, then return
 true. So

The hash is of known size, and occurs in a known
position.  He does not search the padding to locate
the hash, but examines it for correct format.


 01010101 ... 1010101010101 hash crappetycrap

 would still be considered valid. There's a lot of code
 out there that ignored the fact that after the FFs was
 specific ASN.1 stuff, and just treated it as a defined
 part of the padding.

And that code is correct, and does not have the problem
that we discuss.  Paying attention to ASN.1 stuff is
what is causing this problem.

Code goes wrong because ASN.1 can contain
complicated malicious information that causes code to
go wrong.  If we do not have that information, or
simply ignore it, there is no problem.
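The fixed-layout check Donald argues for can be sketched briefly (this is a toy illustration of the principle, not his exact bit format): the hash occupies a fixed position at the end of the block, everything before it must equal the expected padding exactly, and there is no structure to parse.

```python
import hashlib

def verify_fixed_layout(block: bytes, message: bytes) -> bool:
    """The last 20 bytes are the SHA-1 hash of the message; every byte
    before it must be exactly 0x55 (binary 01010101).  Nothing is parsed
    or searched for, so there is no place to smuggle extra structure."""
    digest = hashlib.sha1(message).digest()
    return block[-20:] == digest and all(b == 0x55 for b in block[:-20])
```

A verifier like this cannot be fooled by trailing garbage: a hash placed anywhere other than the fixed position, with filler after it, simply fails the comparison.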

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 8Jickn3nr3AE+2RW3jUC7DaHw6yD1gLpSTISH0F6
 4Bjf3VmASP+HQ4q0CYdRKgWFZxd/QnFOiartuob5Q



Real World Exploit for Bleichenbachers Attack on SSL from Crypto'06 working

2006-09-14 Thread Erik Tews
Hi

I had an idea very similar to the one Peter Gutmann had this morning. I
managed to write a real-world exploit which takes as input:

  * a CA certificate using 1024-bit RSA and exponent 3 (ca-in)
  * a public key, using an algorithm and size of your choice
(key-in)

and generates a CA certificate signed by ca-in, using public key key-in.

At least 3 major web browsers on the market currently ship by default
with CA certificates which have signed other intermediate CAs that use
1024-bit RSA with exponent 3. With this exploit, you can now sign
arbitrary server certificates for any website of your choice, which are
accepted by all 3 web browsers without any kind of SSL warning message.

I used the following method:

I first generated a certificate with BasicConstraints set to True, the
public key set to one of my keys, and the issuer set to the DN of a CA
using 1024-bit RSA with exponent 3. I used usual values for all the
other fields. When I signed a certificate I shifted all my data to the
left. I had 46 bytes of fixed value (this can perhaps be reduced to 45
bytes, I have not checked yet, but even with 46 this attack works). They
had the form 00 01 FF FF FF FF FF FF FF FF ASN1DataWithHash. This gives
me 82 bytes I can fill with arbitrary values (at least; if the
implementation skips some part of the ASN.1 data, I can choose some
bytes there too).

If you now set all the bytes right of your ASN1DataWithHash to 00, and
interpret that as a number n, and compute:

   y = (ceil(cubeRoot(n)))^3

   where ceil means rounding up to the next natural number, and cubeRoot
   computes the cube root over the reals.

y will be a perfect cube and have the form:

00 01 FF FF FF FF FF FF FF FF ASN1DataWithHash' Garbage

and ASN1DataWithHash' looks quite similar to your original
ASN1DataWithHash, with perhaps 2-3 rightmost bytes changed. These bytes
are part of the certificate hash value.

This signature is useless, because every certificate has a fixed hash
value. But you don't need to sign one fixed certificate. So I started
adding some seconds to the notAfter value of the certificate and
computed the hash again. I brute-forced until I had a certificate where
the computation of y did not alter any bytes of the ASN1DataWithHash.

I had to try 275992 different values, which took 2-3 minutes on my 1.7
GHz Pentium using an unoptimized Java implementation.
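The arithmetic of the forgery is easy to reproduce. The sketch below is my own illustration, not Erik's code: it forges an exponent-3 "signature" against a lax verifier that checks the header and hash but ignores everything after them. To keep the demo deterministic it uses a 3072-bit block, where the garbage region is so large that rounding the cube root up never disturbs the hash bytes; at 1024 bits the margin is tight, which is exactly why the notAfter brute force above was needed.

```python
import hashlib

SHA1_DIGESTINFO = bytes.fromhex("3021300906052b0e03021a05000414")

def icbrt(n: int) -> int:
    """Floor of the integer cube root, by binary search."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def lax_verify(sig: int, message: bytes, k: int) -> bool:
    """A BROKEN verifier: cubes the signature (e = 3), checks the
    00 01 FF..FF 00 header and the DigestInfo, and ignores trailing bytes."""
    block = (sig ** 3).to_bytes(k, "big")
    t = SHA1_DIGESTINFO + hashlib.sha1(message).digest()
    return block.startswith(b"\x00\x01" + b"\xff" * 8 + b"\x00" + t)

def forge(message: bytes, k: int) -> int:
    """Build 00 01 FF*8 00 DigestInfo followed by zero bytes, take the
    cube root over the integers, and round up.  Cubing the result
    reproduces the prefix exactly; only the low-order garbage changes."""
    t = SHA1_DIGESTINFO + hashlib.sha1(message).digest()
    prefix = b"\x00\x01" + b"\xff" * 8 + b"\x00" + t
    n = int.from_bytes(prefix + b"\x00" * (k - len(prefix)), "big")
    s = icbrt(n)
    if s ** 3 < n:
        s += 1  # ceil: s^3 >= n, and the increase stays in the garbage area
    return s
```

No private key is involved anywhere: the forged value is simply a number whose cube happens to start with a well-formed padding block and hash, which is all a lax verifier looks at.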

I used this cert and my key to sign an end-entity certificate, which I
used to set up a web server.

I have to check some legal aspects before publishing, in a few hours,
the names of the browsers which accepted this certificate and the names
of the exponent-3 CA certificates I used, if nobody tells me not to do
that. Depending on the advice I get, I will release the source code of
the exploit too.

Thanks go to Alexander May and Ralf-Philipp Weinmann who helped me.

