RE: Private Key Generation from Passwords/phrases

2007-02-03 Thread Anton Stiglic
Bill Stewart wrote:
>Salt is designed to address a couple of threats
>- Pre-computing password dictionaries for attacking wimpy passwords
>...

Yes indeed.  The rainbow-tables style attacks are important to protect
against, and a salt does the trick.  This is why you can find rainbow tables
for LanMan and NTLMv1 hashed passwords, but not for NTLMv2.
This to me is the most important property achieved with a salt, and the salt
doesn't have to be that big to be effective.
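As a toy illustration of the point (a sketch only; the PBKDF2 parameters here
are illustrative, not a recommendation), the stored record would look
something like this in Python:

import hashlib, os

password = b"correct horse"
salt = os.urandom(16)                      # per-user random salt
digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100000)
stored = salt.hex() + "$" + digest.hex()   # the salt is stored next to the hash
# A table precomputed without knowing the salt is useless here: the attacker
# has to redo the work for every salt value, i.e. for every user.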

--Anton




-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Intuitive cryptography that's also practical and secure.

2007-02-03 Thread Anton Stiglic
I am not convinced that we need intuitive cryptography.  
Many things in life are not understood by the general public.
How does a car really work?  Most people don't know, but they still drive one.
How does a microwave oven work?

People don't need to understand the details, but the high-level concept
should be simple.  If that is what you are trying to convey, I agree with
you.

I guess we could very well do with some cryptographic simplifications.  Hash
functions are one example.  We have security against arbitrary collisions,
2nd pre-image resistance, preimage resistance.  Most of our hash functions
today don't satisfy all of these properties:  "Oh, SHA-1 is vulnerable to
arbitrary collision attacks, but it is still safe against 2nd pre-image
attacks, so don't worry!"
Why do we need all of these properties?  In most cases, we don't.
Mathematical masturbation might be to blame?
Block cipher encryption.  How many modes of operation exist?  Some use a
counter, others need a random, unpredictable IV, others just need a
non-repeating IV.  Do we need all of this?
I often find myself explaining these concepts to non-cryptographers.  I'm
often taken for a crazy mathematician.

What is the length of a private key?  In 1024-bit RSA, your d is about 1024
bits.  But is d your private key, or is it (d, N), in which case there are
more than 1024 bits?  No, N is public, the known modulus, but you need it to
decrypt, you can't just use d by itself.  Oh, in DSA the private key is much
shorter.  You actually also need a random k, which you can think of as part
of your key, but it's just a one-time value.  Are we talking about key
lengths, or modulus lengths, really?
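A toy Python sketch of that last point (made-up tiny numbers, obviously not a
real key; pow(e, -1, ...) needs Python 3.8+):

p, q = 61, 53                        # toy primes
N = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent
c = pow(42, e, N)                    # "encrypt" the message 42
assert pow(c, d, N) == 42            # decrypting needs both d and N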

When you encrypt with RSA, you need padding.  With ElGamal, you don't need
any; it's a complicated story.  And don't use just any padding.  You would be
foolish to use PKCS#1 v1.5 padding, everybody knows that, right?  Use OAEP.
Its original security proof turned out to be flawed, but it works like a
charm when you encrypt with RSA!

Going back to the million-dollar paranormal challenges:  something like a
Windows SAM file containing the NTLMv2 hash of a passphrase consisting of
the answer might be something to consider?  Not perfect, but...
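For what it's worth, a salted-hash commitment along those lines could be as
simple as the following sketch (illustrative only; as Matt points out below, a
real proposal should use a published, scrutinized bit-commitment scheme):

import hashlib, hmac, os

def commit(answer: bytes):
    nonce = os.urandom(32)                       # high-entropy blinding value
    c = hashlib.sha256(nonce + answer).hexdigest()
    return c, nonce                              # publish c, keep nonce secret

def reveal_ok(c: str, nonce: bytes, answer: bytes) -> bool:
    return hmac.compare_digest(c, hashlib.sha256(nonce + answer).hexdigest())

c, nonce = commit(b"the sealed prediction")
assert reveal_ok(c, nonce, b"the sealed prediction")
# Without the 256-bit nonce, dictionary attacks on the committed text get you
# nowhere, which is exactly what the weak dictionary-based scheme lacked.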

--Anton




-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Matt Blaze
Sent: January 26, 2007 5:58 PM
To: Cryptography
Subject: Intuitive cryptography that's also practical and secure.

I was surprised to discover that one of James Randi's "million dollar
paranormal challenges" is protected by a surprisingly weak (dictionary-
based) commitment scheme that is easily reversed and that suffers from
collisions. For details, see my blog entry about it:
http://www.crypto.com/blog/psychic_cryptanalysis/

I had hoped to be able to suggest a better scheme to Randi (e.g., one
based on a published, scrutinized bit commitment protocol).   
Unfortunately
I don't know of any that meets all his requirements, the most important
(aside from security) being that his audience (non-cryptographers
who believe in magic) be able to understand and have confidence in it.

It occurs to me that the lack of secure, practical crypto primitives and
protocols that are intuitively clear to ordinary people may be why
cryptography has had so little impact on an even more important problem
than psychic debunking, namely electronic voting. I think "intuitive
cryptography" is a very important open problem for our field.

-matt

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Traffic Analysis References

2006-10-21 Thread Anton Stiglic
You will find a couple of references on traffic analysis applied to
anonymous networks here
http://freehaven.net/anonbib/

--Anton

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Leandro Meiners
Sent: October 19, 2006 2:09 PM
To: Cryptography
Subject: Traffic Analysis References

Dear list,

Can anybody point me to any good references regarding traffic analysis?

regards,

Leandro.

--
Leandro Federico Meiners

GnuPG key fingerprint:
7B98 C0F5 42A3 2BEE 44AF
9D19 936F 5957 27DF AE74

GnuPG-Key:
http://pgp.mit.edu:11371/pks/lookup?op=index&search=lmeiners%40gmail.com&fin
gerprint=on&exact=on
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: interesting HMAC attack results

2006-09-25 Thread Anton Stiglic
Very interesting, I wonder how this integrates with the following paper

http://citeseer.ist.psu.edu/bellare06new.html

which basically says:
Abstract: HMAC was proved in [2] to be a PRF assuming that (1) the
underlying compression function is a PRF, and (2) the iterated hash
function is weakly collision-resistant. However, recent attacks show that
assumption (2) is false for MD5 and SHA-1, removing the proof-based
support for HMAC in these cases. This paper proves that HMAC is a PRF
under the sole assumption that the compression function is a PRF. This
recovers a proof based guarantee since no known attacks compromise the
pseudorandomness of the compression function, and it also helps explain
the resistance-to-attack that HMAC has shown even when implemented with
hash functions whose (weak) collision resistance is compromised.

--Anton



Perry E. Metzger
Sat, 23 Sep 2006 05:52:04 -0700

  http://eprint.iacr.org/2006/319

Cryptology ePrint Archive: Report 2006/319

Forgery and Partial Key-Recovery Attacks on HMAC and NMAC Using Hash
Collisions

Scott Contini and Yiqun Lisa Yin

  Abstract. In this paper, we analyze the security of HMAC and NMAC,
  both of which are hash-based message authentication codes. We present
  distinguishing, forgery, and partial key recovery attacks on HMAC and
  NMAC using collisions of MD4, MD5, SHA-0, and reduced SHA-1. Our
  results demonstrate that the strength of a cryptographic scheme can be
  greatly weakened by the insecurity of the underlying hash function.

[I heard about this paper from ekr's blog.]


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Exponent 3 damage spreads...

2006-09-22 Thread Anton Stiglic
O.k., thanks to Hal Finney for pointing out to me in a private email that my
modulus wasn't in fact the right size.  I have had some problems with the
openssl key generation (doesn't always seem to generate the exact modulus
size I ask for).  

Attached is the forged signature opensslB-fake-bin.sig on
messageBunicode.txt, which can be validated using the new key I generated
below.  I took the same s that I computed beforehand, without reducing it
this time.  The value s is independent of the value of the modulus of the
public key (the only dependency is its size).

So here are the instructions, once again (but a bit simplified):

I followed the instructions of Hal Finney's excellent post:
http://www.mail-archive.com/cryptography@metzdowd.com/msg06537.html

I started out by generating a 3072-bit RSA key pair, with public exponent e = 3.

openssl genrsa -des3 -3 -out my.key 3072
(make sure that the modulus is really 3072 bits, no less, no more).
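(One way to double-check, though the exact output varies between OpenSSL
versions, is openssl rsa -in my.key -noout -text, which prints the key size
and the modulus.)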

The resulting key can be found at the end of this post; the password is
"test" if you ever want to use it.  I also included the public key
certificate by itself.  All in PEM format.

I created a plaintext message messageBunicode.txt on which I want to forge a
signature.  The file can be found in the attachment; it is in Unicode because
I wanted to also try this out with a VBScript implementing a signature
function using CAPICOM, in which all plaintext is transformed into Unicode
(pretty annoying!).

The hash of this file is

openssl dgst -sha1 messageBunicode.txt
SHA1(messageBunicode.txt)= 5d89b46034e0f41a920b2fa964e230ebb2d040b0

Now, let's create a valid signature over messageBunicode.txt using the
private key, just to see what the output looks like:

openssl dgst -hex -sha1 -sign my.key messageBunicode.txt
Enter pass phrase for my.key:
SHA1(..\messageBunicode.txt)=
bb029c611d34e98188eb23bd1a836ec7305e94fc726577a59c8dab927fb6c5b3fce549be389f
6b15d8608c15a256f4209c7e76ed1186d8382242d53ee36e47b2d8ccb09be0440dcdcb373e37
967cfbe9d38b12a16dc9aa78e20e1303f8033f27b4c679dc215ac56b4bf33edc2c50f73d3da9
424bc072de81674b1905020546a9e57e3f96ceef7932751b94883f0409577ed7c0aa8cbed272
8dabf2a6df334357df2cabf7f5282107b6ec99e2bf0f65cd992a97f2f3e004e35739a368823c
4c0cebe4fdff53a9f039c2d85e57feff81284c7db97823359d9982da23707c8a48923e16e09e
13803cb5da3d87b94d32b5241198d91d490a3d1b343448b528e853d2add80679054e39e2e21c
1096ba66100ea73559c283c4f281fbf82fa292e6bc1dc5df5773affc3ea2425ce95ac779e7d7
90f59d11e3527d5645d9580bbf68909c2a1c4672204859cf46caa247186d917241a79224c9b6
95499644aeea43daefdc5438e9b96952771f1fbc809655ef5a5a5a148ffd47197369f4a85498
3220596a

Now, let's do some bignumber math.
I wanted to look at the value obtained when you verify a signature (RSA
encrypt the signature with exponent 3).  I use BC, a bignumber calculator I
like a lot:
http://www.gnu.org/software/bc/
A version that can be installed on Windows:
http://gnuwin32.sourceforge.net/packages/bc.htm

I use a fast modexp function I implemented for BC, available at
http://crypto.cs.mcgill.ca/~stiglic/Programming/modgroup.bc

You can load it by simply calling bc like this:

bc modgroup.bc
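As an aside, if you don't have BC handy, Python's built-in bignums can do the
same modular exponentiation; a rough equivalent of modexp, with the hex values
pasted in as strings, would be:

def modexp_hex(base_hex, exp, mod_hex):
    # same as modexp(base, exp, mod) in the BC session below, output in hex
    return format(pow(int(base_hex, 16), exp, int(mod_hex, 16)), "X")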

I did all the calculations in hex, so I typed the following in bc:

obase=16
ibase=16

Now, denote by sp the signature given above, e = 3 and m is the modulus in
the public key certificate I generated.  When pasting the values into BC,
the hex digits need to be in capital letters.  You can get the value of the
modulus by using an ASN.1 interpreter, such as ASN.1 Editor or Peter
Gutmann's dumpASN1 program.  Here are the BC calculations:

sp
BB029C611D34E98188EB23BD1A836EC7305E94FC726577A59C8DAB927FB6C5B3FCE5\
49BE389F6B15D8608C15A256F4209C7E76ED1186D8382242D53EE36E47B2D8CCB09B\
E0440DCDCB373E37967CFBE9D38B12A16DC9AA78E20E1303F8033F27B4C679DC215A\
C56B4BF33EDC2C50F73D3DA9424BC072DE81674B1905020546A9E57E3F96CEEF7932\
751B94883F0409577ED7C0AA8CBED2728DABF2A6DF334357DF2CABF7F5282107B6EC\
99E2BF0F65CD992A97F2F3E004E35739A368823C4C0CEBE4FDFF53A9F039C2D85E57\
FEFF81284C7DB97823359D9982DA23707C8A48923E16E09E13803CB5DA3D87B94D32\
B5241198D91D490A3D1B343448B528E853D2ADD80679054E39E2E21C1096BA66100E\
A73559C283C4F281FBF82FA292E6BC1DC5DF5773AFFC3EA2425CE95AC779E7D790F5\
9D11E3527D5645D9580BBF68909C2A1C4672204859CF46CAA247186D917241A79224\
C9B695499644AEEA43DAEFDC5438E9B96952771F1FBC809655EF5A5A5A148FFD4719\
7369F4A854983220596A
m
C68C6FEA9A08822C1DA79D4D975F95BF9EB56A906A66F7B63AF7CFC5B7A43012DDB4\
2779002AFE4109922FA7B98CECDED22CADDBF1B1F116C79E9068D62A3B371318EA1C\
56CA2106F196D0E6E8DE2D95D300935F5942638220073E311EB59FE9E2D7F84E\
F753A415E47E42604175F4B7F1AF52838B0BBA5769120B8DAFA1FFB5C96D961BA1E5\
20717202DE09379F34960F03FE196F0A97704BFE32A5DF0D81445E8DDBAF750B5213\
3EBF07D4859032AE392F161544BE5860579ABA6A6F34395D44754DC24C5C37D90F7F\
017F7225F4EA20D4EADD7BEAFA9025D6D58B49DB151FB6C5C5F2CF6BE9175DCFE96D\
5B752C00D71F032C6E89DEE85514816C50AC70D53A6417B30E93EA640E0E274BE926\
4E94C9D1BE835EE252EB9550AFB7E26DCD3E42407F0797332562C2CA202A5A7C200A\
F499C21CFB1367AE8

Re: Why the exponent 3 error happened:

2006-09-21 Thread Anton Stiglic
As others have mentioned, I don't believe the small RSA exponent (e = 3)
is to blame in Bleichenbacher's attack.
Indeed, the mathematical problem of computing the cubic root of m modulo
an RSA modulus n, for a *fixed*, arbitrary m, is still considered to be
hard (no one has shown the opposite).
What Bleichenbacher demonstrated is that computing the cubic root of m' ||
G, where G can be any value (garbage) and is sufficiently large, is easy.

These are two different problems, and the vulnerability is due to the fact
that these libraries allow for the variable G part.
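To make the distinction concrete, here is a rough Python sketch of the easy
problem (an illustration of the principle only, not Bleichenbacher's exact
pencil-and-paper construction; the prefix contents are placeholders): choose
the block you want in the most significant bytes, leave the low end as
garbage, and take an integer cube root; cubing the result reproduces the
chosen prefix because the rounding error is absorbed by the garbage part.

def icbrt(n):
    # floor of the integer cube root, by binary search
    lo, hi = 0, 1 << (n.bit_length() // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

# Hypothetical to-be-signed block for a 3072-bit, e = 3 key: the desired
# PKCS#1-style prefix at the top, the low-order bytes left as garbage G.
prefix = bytes.fromhex("0001" + "ff" * 8 + "00") + b"<DigestInfo and hash go here>"
target = int.from_bytes(prefix.ljust(384, b"\x00"), "big")

r = icbrt(target)
s = r + 1 if r ** 3 < target else r          # ceiling cube root
forged = s ** 3                              # s^3 is still smaller than a 3072-bit modulus
assert forged.to_bytes(384, "big")[:len(prefix)] == prefix   # chosen prefix survives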

I don't see ASN.1 as being faulty either.  The ASN.1 simply acts as a
value that allows you to determine what hash algorithm to use.  If the
encrypted signature were of the form:
  What-ever-padding, hash, header
and implementations went directly to the least significant bits in
order to retrieve the header (which should be of fixed size), and then
retrieved the hash, we wouldn't have this problem.

I believe you should put the most sensitive information in the least
significant bytes, which are harder to manipulate (Bleichenbacher's attack
plays with the most significant bytes, the least significant bytes are
basically random in his calculations, he doesn't have control over them).

This reminds me of the RSA LSB hardness theorem
http://www.wisdom.weizmann.ac.il/~oded/annot/node17.html
I have notes explaining it right here, section 8.4.1:
http://crypto.cs.mcgill.ca/~stiglic/Papers/crypto2.ps
The theorem basically says that if you can predict the least significant
bit of the plaintext given the corresponding RSA ciphertext, then you can
compute the whole plaintext.
The theorem doesn't directly apply here (RSA signature verification
uses the encryption operation, not decryption), but it may offer some
insight.

The problem is that we (crypto community) still don't have a good way of
writing specs.  This is in fact a hard problem.  And the problem doesn't
get easier with the increasing complexity of the specs.  We need simple
algorithms and protocols, which allow just enough flexibility, and we need
a good and precise way to write specs for these.

On one side you have theoretical cryptographers / mathematicians who work
at an abstract level, develop algorithms and protocols, but don't have
much interest in implementing these other than possibly in a prototype
form.  On the other end, you have developers who excel in coding and
system integration but don't necessarily understand the theoretical
background in all its details.  Specifications act as a bridge between
these two worlds, but this bridge is not very solid today.  We need to put
a lot more effort into building stronger bridges.

--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Exponent 3 damage spreads...

2006-09-21 Thread Anton Stiglic
Thanks for taking the time to look at this.

But I recounted, and I count 765 hex digits (with the formatting I get in my
mail, 11 lines of 68 hex + 17 hex at the end), which gives 3060 bits.
Considering that the first hex digit is 1 and can be represented in 1 bit,
not four, that would give 3060 - 3 = 3057 bits.

The modulus is the same size, but starts with 1D instead of 1F (the
beginning of s^3), so s^3 is bigger.  My bc library has a function called
bits which returns the number of bits; I get 3057 in both cases, see below
(also look at the value of m - s^3, which is negative, and modexp(s, 3, m),
which doesn't have the form we want, but modexp(s/100, 3, m) does).

But I seem to remember now that in openssl, mod(x, y) doesn't always return
a value which is between 0 and y, so maybe it would accept my s.  Will try it.

--Anton

m
1D851D5148345606F586935D227CD5CF7F04F890AC5024178BA5F4EE85D7796918C3\
DC7A5951C985539CB240E28BA4AC3AFBE0F6EB3151A0DBAFD686C234A30D07D590D6\
1A5474491BF0D68E1AC7F94CDC989C19C2E25B12511A29FFAF5F11E0B994E19C5C3D\
C298F9E584FFF3C7DBB8F703A0EAD97167F88C7229BBFA55B449CDE4C91B409D5B9A\
CF0134CB61352E9CE6CB3D847C7F3D9AFA74E8E19DD1ED7923270E310A5D91E97EF1\
98694465950715AA066ACB06FAEC0BA64FCCCA155104852EFD41346F75D1ACB8574B\
BE3C7C8D6D1B501C1163AD2058506DF1B64059A6932C0672FB9D094364EA4D7FA044\
42B8E643B74B8746B594866C7CBDAB8FEA954FDEE7C44B9C5D6B9E19B49082D65B51\
7EA7DBFEF5CA1EEA39AB2283CDB854C8B246F2B8EFE51895349640248A3248EC65F6\
4A89CA5AB194B444DF676B015AFBCACE13697CEEB5268F5E9AA674A83DD1B0CE4DC8\
3603CFFB801DB669216FC647CD7A6A84831E421D9676C7AAC44411B2AB3E901A7139\
B3519B58EBAEEC20B
s
7FFF\
\
FFEAAEAD6EAB6B2B18EBD595822B1555\
AC5D20CF08046814578C2B994E1DBD8413A43C05640
bits(m)
BF1
bits(s^3)
BF1
m - s^3
-27AE2AEB7CBA9F90A796CA2DD832A3080FB076F53AFDBE8745A0B117A288696E73C\
2385A6AE367AAC634DBF1D745B53C5041F0914CEAE5F245029793DCB5CF2F82A6F29\
E5AB8BB6E40F2971E53806B3236763E63D1DA4DDB1E7E900E10140D0269B4003F3C2\
7EACDEB5C1035A4765F029AF59AB74B1A6C2A091E14405AA4BB6321B36E4BF62A465\
30FECB349ECAD1631934C27B8380C265058B171E622E1286DCD8F1CEF5A26E16810E\
6796BB9A6DA29467C54B41AC61C95E3785A9F85D4578F21C056D03ECF9128580717D\
563B5F437FEB9CDCAAE7E01D6C84F908AF5336EC3D710D6DF1F282A270E150F32438\
31826C7843300F514A6B799383425470156AB021183BB4637CBCB87B1902C4D519D7\
52B7C10EC94263DF2E26A5466F17150A2E4E2745BD967E5CC8352D58EE7A5237E637\
3B36C59357EE1C3BCBBAE5A3EBFA812347B2D71FF03ECAD84A80E22419EA004D6C73\
4BD35CE67B61094E6292B8E8BA5005F3D0F436A6C0EE1F47EDAFD37F94C16FE58EC6\
4CAE64A7145113DF5

modexp(s, 3, m)
27AE2AEB7CBA9F90A796CA2DD832A3080FB076F53AFDBE8745A0B117A288696E73C2\
385A6AE367AAC634DBF1D745B53C5041F0914CEAE5F245029793DCB5CF2F82A6F29E\
5AB8BB6E40F2971E53806B3236763E63D1DA4DDB1E7E900E10140D0269B4003F3C27\
EACDEB5C1035A4765F029AF59AB74B1A6C2A091E14405AA4BB6321B36E4BF62A4653\
0FECB349ECAD1631934C27B8380C265058B171E622E1286DCD8F1CEF5A26E16810E6\
796BB9A6DA29467C54B41AC61C95E3785A9F85D4578F21C056D03ECF9128580717D5\
63B5F437FEB9CDCAAE7E01D6C84F908AF5336EC3D710D6DF1F282A270E150F324383\
1826C7843300F514A6B799383425470156AB021183BB4637CBCB87B1902C4D519D75\
2B7C10EC94263DF2E26A5466F17150A2E4E2745BD967E5CC8352D58EE7A5237E6373\
B36C59357EE1C3BCBBAE5A3EBFA812347B2D71FF03ECAD84A80E22419EA004D6C734\
BD35CE67B61094E6292B8E8BA5005F3D0F436A6C0EE1F47EDAFD37F94C16FE58EC64\
CAE64A7145113DF5
modexp(s/100, 3, m)
1FFF\
\
FFF003021300906052B0E03021A05000\
4145D89B46034E0F41A920B2FA964E230EBB2D040B00\
\
02A9AA11CBB60CB35CB569DDD576C272967D774B02AE385C6EE43238C8C9\
1477DBD0ED06ECF8BC4B8D3DC4D566FA65939092D09D13E0ED8F8BE5D5CB9E72C47C\
743B52BBFA7B9697DA285694CD9347AB7528\
D15F9D0DBF0C82C967D1C7CA3CCF69D2E09519FEAD7B96F1FCCB6D7D78AC9B244C2D\
85C08FEE0982D080AB2250A546F64BF15B1C540EA5655A36E52756CC57BBB11BBA3B\
81D72CE1FB7EBFB784027F3087CA7078541278C45764E6F2B1F3E5324000\
000



-Original Message-
From: "Hal Finney" [mailto:[EMAIL PROTECTED] 
Sent: September 20, 2006 6:21 PM
To: [EMAIL PROTECTED]; cryptography@metzdowd.com
Subject: RE: Exponent 3 damage spreads...

Anton Stiglic writes:
> I tried coming up with my own forged signature that could be validated
with
> OpenSSL (which I intended to use to test other libraries). ...

> Now let's look at s^3
> 1FFF\
> \
> FFF003021300906052B0E03021A05000\
> 4145D89B46034E0F41A920

RE: Exponent 3 damage spreads...

2006-09-20 Thread Anton Stiglic

I tried coming up with my own forged signature that could be validated with
OpenSSL (which I intended to use to test other libraries).  I haven't
succeeded, either because in the particular example I came up with OpenSSL
does something that catches the invalid signature, or I messed up somewhere
(the likelihood of which is far from negligible).  Unfortunately, I don't
have much more time to play with this.  I decided to share the methodology I
used with those of you who are interested in case the info is helpful to
anyone, or someone can tell me why the signature I produced doesn't get
validated by OpenSSL.

I followed the instructions of Hal Finney's excellent post:
http://www.mail-archive.com/cryptography@metzdowd.com/msg06537.html

I started out by generating a 3072-bit RSA key pair, with public exponent e = 3.

openssl genrsa -des3 -3 -out my.key 3072

The resulting key can be found below; the password is "test" if you ever
want to use it.

Then I created the corresponding public key certificate:

openssl req -new -x509 -days 1001 -key my.key -out my.cer

The public key certificate can be found below as well.  You can import it
into the Windows key store, for example.

I then created a plaintext file, messageAunicode.txt, for which I computed a
(valid) signature.  The idea was then to forge a signature on
an alternate messageBunicode.txt, without using the private key of course.
The two files can be found in the attachment; they are in Unicode because I
wanted to also try this out with a VBScript implementing a signature
function using CAPICOM.  (You can get a file in Unicode by opening it with a
simple text editor that allows you to save as Unicode, such as Notepad, and
erasing any extra header bytes with a hex editor such as XVI32.)

The hashes of these files are

openssl dgst -sha1 messageAunicode.txt
SHA1(messageAunicode.txt)= eb8302606217ae549fe6ab1345f0b4c804195367

openssl dgst -sha1 messageBunicode.txt
SHA1(messageBunicode.txt)= 5d89b46034e0f41a920b2fa964e230ebb2d040b0

Now, create the valid signature over messageAunicode.txt to see what the
output looks like:

openssl dgst -hex -sha1 -sign my.key messageAunicode.txt
Enter pass phrase for my.key:
SHA1(messageAunicode.txt)=00d3cda91b578b6df29aeb140272bd9198759f79fa10dc410b
5d10362048ac7abe5df7fe0d94a6646e791c5b95b29f2c6384a570769dc888ed0b7ad510ccd3
c758cebeb648511620490e0fd54162badb1ed05411acc853509b62a4c1b242e1e2f737a1e7e4
340f5a79b05ec3475d7ba6fc73b3302f1258abac1079f11e8dfb9fc09d42716ba4054add460b
b12fc1b0b8d5d32db50395374aeb3c215c2bc566328d2f03bf043068c5c9abc649ba1767e97d
f32b6aa734594ee22fffe7fb06ea3b77030e79bd6fe7683ab7ffce462abfba5777b3914de466
5b86c1ec203feb6fccb3dadb8ba51fd87a7457c62385418e65d17809c4256e3d27dc2017d7a0
93c8bd193a09168f34d522dd7d3afb95fc61c9f4339091cf25d78bf461b4ea5620eed722ab7d
3eff99cea4a4f546bff6ce338d7763aff20a9b61452da07179590d3316bbce63b06b43d996d7
75d6843f46633ff107a3c866e3b0a8aaaea31f4a2048c9fcb448958287f8e961c9f3393e18fc
9a05460d51a286737aec14a1a7b27a51

Now, let's do some bignumber math.
I wanted to look at the value obtained when you verify a signature (RSA
encrypt the signature with exponent 3).  I use BC, a bignumber calculator I
like a lot:
http://www.gnu.org/software/bc/
A version that can be installed on Windows:
http://gnuwin32.sourceforge.net/packages/bc.htm

I use a fast modexp function I implemented for BC, available at
http://crypto.cs.mcgill.ca/~stiglic/Programming/modgroup.bc

You can load it by simply calling bc like this:

bc modgroup.bc

I did the calculations all in hex, so I typed the following in bc:

obase=16
ibase=16

Now, denote by s the signature given above, e = 3, and m the modulus in
the public key certificate I generated.  When pasting the values into BC,
the hex digits need to be in capital letters.  You can get the value of the
modulus by using an ASN.1 viewer, such as ASN.1 Editor or Peter Gutmann's
dumpasn1.  Here are the BC calculations:

s=00D3CDA91B578B6DF29AEB140272BD9198759F79FA10DC410B5D10362048AC7ABE5DF7FE0D
94A6646E791C5B95B29F2C6384A570769DC888ED0B7AD510CCD3C758CEBEB648511620490E0F
D54162BADB1ED05411ACC853509B62A4C1B242E1E2F737A1E7E4340F5A79B05EC3475D7BA6FC
7B3302F1258ABAC1079F11E8DFB9FC09D42716BA4054ADD460BB12FC1B0B8D5D32DB50395374
AEB3C215C2BC566328D2F03BF043068C5C9ABC649BA1767E97DF32B6AA734594EE22FFFE7FB0
6EA3B77030E79BD6FE7683AB7FFCE462ABFBA5777B3914DE4665B86C1EC203FEB6FCCB3DADB8
BA51FD87A7457C62385418E65D17809C4256E3D27DC2017D7A093C8BD193A09168F34D522DD7
D3AFB95FC61C9F4339091CF25D78BF461B4EA5620EED722AB7D3EFF99CEA4A4F546BFF6CE338
D7763AFF20A9B61452DA07179590D3316BBCE63B06B43D996D775D6843F46633FF107A3C866E
3B0A8AAAEA31F4A2048C9FCB448958287F8E961C9F3393E18FC9A05460D51A286737AEC14A1A
7B27A51
m=01D851D5148345606F586935D227CD5CF7F04F890AC5024178BA5F4EE85D7796918C3DC7A5
951C985539CB240E28BA4AC3AFBE0F6EB3151A0DBAFD686C234A30D07D590D61A5474491BF0D
68E1AC7F94CDC989C19C2E25B12511A29FFAF5F11E0B994E19C5C3DC298F9E584FFF3C7DBB8F
703A0EAD97167F88C7229BBFA55B449CDE4C91B409D5B9ACF013

Re: Interesting bit of a quote

2006-07-12 Thread Anton Stiglic
> David Wagner writes:
> SB1386 says that if a company conducts business in California and
> has a system that includes personal information stored in unencrypted form
> and if that company discovers or is notified of a breach of the security of
> that system, then the company must notify any California resident whose
> unencrypted personal information was, or is reasonably believed to have
> been, acquired by an unauthorized person. [*]
>

> [*] This is pretty close to a direct quote from Section 1798.82(a)
> of California law.  See for yourself:
>   
> http://info.sen.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html

Does that mean that you (the company) are safe if all of the personal
information in the database is simply encrypted, with the decryption key
lying right there alongside the data?  A lot of solutions do this; some go
to different lengths in trying to obfuscate the key.

--Anton




-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: NPR : E-Mail Encryption Rare in Everyday Use

2006-03-08 Thread Anton Stiglic

>More strongly, if we've never met, and you are not in the habit of
>routinely signing email, thereby tying a key to your e-persona, it
>makes no sense to speak of *secure* communication to *you*. 

Regularly signing email is not necessarily a good idea.  I like to be able
to repudiate most emails I send...

 --Anton



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: general defensive crypto coding principles

2006-02-14 Thread Anton Stiglic
I don't believe MtE is good advice, and I have yet to see a decent reason
why one would want to use that instead of EtM.
Of course, when we talk about EtM, the MAC should be applied over the
ciphertext and all plaintext headers and trailers (including the IV used for
encryption, algorithm identifier, protocol version, whatever).
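As a minimal illustration of that advice (a sketch assuming the
pyca/cryptography package; key management and framing are simplified),
encrypt-then-MAC with the MAC computed over the header, the IV and the
ciphertext looks like this:

import hashlib, hmac, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def seal(enc_key, mac_key, header, plaintext):
    iv = os.urandom(16)                                  # CTR initial counter block
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).encryptor()
    ct = enc.update(plaintext) + enc.finalize()
    tag = hmac.new(mac_key, header + iv + ct, hashlib.sha256).digest()
    return header + iv + ct + tag

def open_(enc_key, mac_key, blob, hdr_len):
    header, iv, ct, tag = blob[:hdr_len], blob[hdr_len:hdr_len + 16], blob[hdr_len + 16:-32], blob[-32:]
    expected = hmac.new(mac_key, header + iv + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed")             # reject before decrypting anything
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).decryptor()
    return dec.update(ct) + dec.finalize()

Note that enc_key and mac_key are independent keys, in line with the advice
further down.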

A lot of attacks could have been prevented with EtM, including the Vaudenay
padding attack and the chosen-ciphertext attacks against PGP and other email
encryption protocols described by Schneier, Katz and Jallad
http://www.schneier.com/paper-pgp.pdf
as well as the attacks on Host Security Module key blocks (well, in this
case the problem was simply that there were no integrity checks: 2-key
Triple-DES keys were protected by a master Triple-DES key by encrypting the
left part and right part independently) and other such attacks as described
by Clulow and others
http://www.cl.cam.ac.uk/~jc407/Chap3.pdf

Ferguson gave an explanation of why, in his book with Schneier, they
recommend MtE
http://groups.google.ca/group/sci.crypt/msg/1a0e0165c48e4fe4?q=g:thl19936885
73d&dq=&hl=en&lr=&ie=UTF-8&oe=UTF-8
But the arguments he gives pertain to other problems; see for example the
comments given by Wagner which I agree with
http://groups.google.ca/group/sci.crypt/msg/532fdfb5edca19a8?q=g:thl24955674
08d&dq=&hl=en&lr=&ie=UTF-8&oe=UTF-8

I had come up with a list of advice for crypto implementations some time ago
myself.  It included (from memory):

- Use good RNGs, even for things other than the generation of keys (such as
for generating IVs, challenges, etc.)
- Use standard algorithms, and use them in secure ways (choose a good mode
of encryption, adequate key sizes, and pick the IVs the way you are supposed
to: usually randomly, or for counters make sure you have no repeats)
- Use standard protocols (don't try to re-invent TLS or IPSec)
- Encrypt then authenticate over ciphertext and all plaintext headers and
trailers.
- Use independent keys for different functionalities.  If needed, derive
independent keys based on a single secret using a good key derivation
function.
- Limit the amount of time you handle secrets (zeroize after use...)
- Don't let yourself be used as a random oracle (I think Ross Anderson said
it this way first); this includes limiting information that is leaked about
errors, avoiding timing attacks and such (this is hard to do in practice).

--Anton






-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: a crypto wiki

2006-01-28 Thread Anton Stiglic
I agree.  The Cryptodox page looks nice, but I would rather see the content
go into Wikipedia, which is worked on, and looked at, by many more people; a
really beautiful piece of community work.

--anton

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Whyte, William
Sent: January 26, 2006 10:07 AM
To: Travis H.; cryptography@metzdowd.com
Subject: RE: a crypto wiki

There's also a crypto portal in Wikipedia itself:
http://en.wikipedia.org/wiki/Portal:Cryptography

FWIW, I'd rather see energy focused on the Wikipedia
version, which more people are likely to use.

William 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Travis H.
> Sent: Thursday, January 26, 2006 12:02 PM
> To: cryptography@metzdowd.com
> Subject: a crypto wiki
> 
> http://www.cryptodox.com/Main_Page
> 
> --
> "The generation of random numbers is too important to be left 
> to chance."
>   -- Robert Coveyou -><- http://www.lightconsulting.com/~travis/
> GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B
> 
> -
> The Cryptography Mailing List
> Unsubscribe by sending "unsubscribe cryptography" to 
> [EMAIL PROTECTED]
> 
> 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: another feature RNGs could provide

2005-12-22 Thread Anton Stiglic
>Actually, by definition, a cipher should be a permutation from the set
>of plaintexts to the set of ciphertexts. It has to be 1 to 1 bijective
>or it isn't an encryption algorithm.
>
>Therefore, if you want an ergodic sequence of size 2^N, a counter
>encrypted under an N bit block cipher will do it.
>
>Perry

Yes, and the set of keys defines a subset of all of the possible permutations
(working on the same size input as the block cipher).  The set of all
permutations is a group, but a subset of that is not necessarily a subgroup.

Most security proofs of modes of operation, and others, model a block
cipher as a random permutation.
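A small sketch of Perry's point (assuming the pyca/cryptography package):
since AES under a fixed key is a permutation of 128-bit blocks, encrypting a
counter never produces a repeated output before the counter itself wraps.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
outputs = {enc.update(i.to_bytes(16, "big")) for i in range(10000)}
assert len(outputs) == 10000    # distinct counter values give distinct blocks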

--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Fermat's primality test vs. Miller-Rabin

2005-12-05 Thread Anton Stiglic

>Ok after making that change, and a few others. Selecting only odd numbers 
>(which acts as a small sieve) I'm not getting much useful information. It 
>appears to be such that at 512 bits if it passes once it passes 128 times, 
>and it appears to fail on average about 120-130 times, so the sieve 
>amplifies the values more than expected. Granted this is only a test of the

>generation of 128 numbers, but I got 128 primes (based on 128 MR rounds). 


O.k., so if I read this right, your new results agree with the analysis of
Pomerance et al.  That would make much more sense.

When you say "on average about 120-130 times the test fails", out of how
many is that?


--Anton





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Encryption using password-derived keys

2005-12-02 Thread Anton Stiglic
It can be useful to derive a key-encryption key from the password, and not
use the key derived from the password to directly encrypt the data you want
to protect, when the resulting ciphertext can be found in different places
where your encrypted key won't necessarily also be found.  For example, when
encrypting files, the encrypted files may end up on a backup disk while the
key is stored somewhere else (encrypted with a password-based key).

This can prevent someone who has access to the ciphertext from executing a
brute force attack.

If, however, your ciphertext always travels with your encrypted key, you
don't gain much of an advantage (the weak point is the password-based key,
which can be brute-forced or dictionary-attacked).

I don't recommend just XORing for the protection of the key.  If ever your
key derivation function doesn't really act like a good pseudo-random
function, or if you use the same password and salt to derive the same key to
protect two different keys, you will be screwed.  I would rather recommend
encrypting with something like AES, and I also recommend computing a MAC
over the ciphertext to turn it into authenticated encryption, and avoid
attacks such as those that have been found against HSMs and the way they
store keys outside the HSM.  For further details on that point, see for
example section 4.3 of the following paper (and follow the references given
there):
http://crypto.cs.mcgill.ca/~stiglic/Papers/tripleDES.pdf
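A rough sketch of what I mean (assuming the pyca/cryptography package;
parameters are illustrative only): a random data key encrypts the files, and
a password-derived KEK wraps that data key with encrypt-then-MAC rather than
a bare XOR.

import hashlib, hmac, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def wrap_data_key(password, data_key):
    salt = os.urandom(16)
    okm = hashlib.pbkdf2_hmac("sha256", password, salt, 200000, dklen=64)
    kek, mac_key = okm[:32], okm[32:]                  # independent keys from one derivation
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(kek), modes.CTR(iv)).encryptor()
    ct = enc.update(data_key) + enc.finalize()
    tag = hmac.new(mac_key, salt + iv + ct, hashlib.sha256).digest()
    return salt + iv + ct + tag                        # can be stored away from the data

data_key = os.urandom(32)                              # this key encrypts the actual files
wrapped = wrap_data_key(b"correct horse battery staple", data_key)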

--Anton


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jack Lloyd
Sent: November 29, 2005 11:09 AM
To: cryptography@metzdowd.com
Subject: Encryption using password-derived keys


The basic scenario I'm looking at is encrypting some data using a
password-derived key (using PBKDF2 with sane salt sizes and iteration
counts). I am not sure if what I'm doing is sound practice or just pointless
overengineering and wanted to get a sanity check.

My inclination is to use the PBKDF2 output as a key encryption key, rather
than
using it to directly key the cipher (with the key used for the cipher itself
being created by a good PRNG). For some reason the idea of using it directly
makes me nervous, but not in a way I can articulate, leading me to suspect
I'm
worried over nothing.

So, assuming using it as a KEK makes sense: At first I thought to use XOR to
combine the two keys, but realized that could lead to related key attacks
(by
just flipping bits in the field containing the encrypted key). That is
probably
not a problem with good algorithms, but, then again, why take the chance; so
I
was thinking instead using NIST's AES-wrap (or perhaps a less weirdly
designed
variant of it that uses HMAC for integrity checking and AES in CBC mode for
confidentiality).

Am I thinking about this far harder than I should?

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Fermat's primality test vs. Miller-Rabin

2005-11-30 Thread Anton Stiglic


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Joseph Ashwood
Sent: November 18, 2005 3:18 AM
To: cryptography@metzdowd.com
Subject: Re: Fermat's primality test vs. Miller-Rabin

>> Look at table 4.3 of the Handbook of
>> applied cryptography: for t = 1 (one iteration) and for a 500-bit 
>> candidate,
>> we have probability p(X | Y_1) <= 2^-56, which is better than what you
>> concluded.  (X representing the event that the candidate n is composite, 
>> Y_t
>> representing the event that Miller-Rabin(n, t) declares n to be prime).
>>
>> The results in table 4.3 and 4.4 of HAC are for randomly (uniform) chosen
>> candidates, and I think you need to do a basic sieving (don't remember if
>> that is necessary, but I think it is).  The result is due to the fact 
>> that under these conditions, the strong pseudoprime test does in fact 
>> much  better than 1/4 probability of error ( value of P(Y_t | X) is very
>> low ), this result is due to Damgard, Landrock and Pomerance, based on 
>> earlier work of Erdos and Pomerance.

>I think much of the problem is the way the number is being applied. Giving
>a stream of random numbers that have passed a single round of MR you will
>find that very close to 50% of them are not prime, this does not mean that
>it passes 50% of the numbers (the 2^-80 probability given above is of this 
>type). 

Do you do an initial sieving to get rid of the more obvious composites?  I'm
guessing you don't, since you seem to have a result contradictory to what has
been proven by Damgard, Landrock and Pomerance.  If you look at table 4.3 of
HAC (which comes from Damgard & al.'s paper), it says that if your candidates
come from a uniform random distribution, then for a 500-bit candidate, the
probability that a candidate n is composite when one round of Miller-Rabin
said it was prime is <= (1/2)^56.  You are finding that the probability is
about 1/2, which seems very wrong (unless you are not doing the sieving,
which is very important).  Am I misunderstanding something?


>In fact it appears that integers fall on a continuum of difficulty 
>for MR, where some numbers will always fail (easy composites), and other 
>numbers will always pass (primes). The problem comes when trying to denote 
>which type of probability you are discussing. 

Well, I think I explained it pretty clearly.  I can try to reiterate.  Let X
represent the event that a candidate n is composite, and let Y_t denote the
event that Miller-Rabin(n,t) declares n to be prime, where Miller-Rabin(n,t)
means you apply t iterations of Miller-Rabin on n.
Now the basic theorem that we all know is that P(Y_t | X) <= (1/4)^t (this
is a problem in one of Koblitz's basic textbooks on cryptography, for example).
But this is not the probability that we are interested in, we are (at least
I am) more interested in P(X | Y_t).  In other words, what is the
probability that n is in fact composite when Miller-Rabin(n, t) declared n
to be prime?  Do we agree that this is the probability that we are
interested in?


>What are the odds that a 
>random 512-bit composite will be detected as composite by MR in one round?
>I don't think anyone has dependably answered that question, but the answer
>is very different from 1-(probability that MR-* says it's a prime)^-k. Any 
>discussion needs to be more accurately phrased.

You are looking for P( Comp Y_t | X), where Comp Z is the complementary
event of Z. In our case, Comp Y_t is the event that Miller-Rabin(n,t) proves
n to be composite. Is that what you are looking for?


>For example, my phrasing is that in the tests that I performed 50% (+/- 
>experimental noise) of those numbers that passed a single round of MR also 
>passed 128 rounds, leading me to conclude that 50% of the numbers that 
>passed a single round of MR are in fact prime. Since each number that
>passed a single round was subjected to 127 additional rounds, a number of 
>additional statistics can be drawn, in particular that of those that failed

>at least one round none failed less than 40 rounds, and that few passed
>less than 40 rounds. Due to the fact that this was only iterated 65536
>times there is still substantial experimental error available. These pieces
>of information combined indicate that for 512-bits it is necessary to have
>80 rounds of MR to verify a prime.
 
I don't understand what you are trying to point out.  If you choose your
candidates uniformly at random and do the sieving before applying the
Miller-Rabin tests, then for a 512-bit number it is sufficient to apply 5
rounds to get a probability of error lower than (1/2)^80.
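For reference, the sieving step is nothing fancy; a minimal sketch (the bound
is illustrative) that throws away candidates with small prime factors before
any Miller-Rabin rounds are spent on them:

def survives_sieve(n, bound=2000):
    # trial division by small odd numbers up to the bound
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n and d < bound:
        if n % d == 0:
            return False
        d += 2
    return True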

You should take a look at Damgard & al.'s paper; they did a very good
analysis.

--Anton
  



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Fermat's primality test vs. Miller-Rabin

2005-11-16 Thread Anton Stiglic

>The general consensus is that for 500-bit numbers one needs only 6 MR
>tests for 2^{-80} error probability [1]:

>... 

> and thus a single test gives ~2^{-13}.

If you just took the exponent 80 and divided it by 6 to get ~13, I don't
think that is the right reasoning.  Look at table 4.3 of the Handbook of
applied cryptography: for t = 1 (one iteration) and for a 500-bit candidate,
we have probability p(X | Y_1) <= 2^-56, which is better than what you
concluded.  (X representing the event that the candidate n is composite, Y_t
representing the event that Miller-Rabin(n, t) declares n to be prime).

The results in tables 4.3 and 4.4 of HAC are for randomly (uniformly) chosen
candidates, and I think you need to do a basic sieving (I don't remember if
that is necessary, but I think it is).  The result is due to the fact that
under these conditions, the strong pseudoprime test does in fact much better
than a 1/4 probability of error (the value of P(Y_t | X) is very low); this
result is due to Damgard, Landrock and Pomerance, based on earlier work of
Erdos and Pomerance.

--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Fermat's primality test vs. Miller-Rabin

2005-11-10 Thread Anton Stiglic

>> Although the Carmichael numbers fool the Fermat test
>> (that is, $a^{n-1} = 1 (n)$) for *all* a, there are no such things for
>> the Miller-Rabin test:  for any odd composite n at least 3/4 of a's
>> fail the test, that is if you made m MR tests with random a's then you
>> are mistaken with probability at most (1/4)^m.
>
> Yes I guess the difference is that with MR you are trying to find a
> number that is *likely* a prime, whereas with Fermat you are testing
> primality. But MR will still fail when given a Carmichael number,
> since elsewhere MR is defined as iterated application of the Fermat
> test [1].

That is not true, on several counts.
Firstly, the Miller-Rabin probabilistic primality test doesn't generate a
number, it verifies a number for primality.
Secondly, the Miller-Rabin probabilistic primality test is not based on
Fermat's little theorem (the so-called pseudoprime test), but rather on the
strong pseudoprime test, which derives from a theorem that says that if n
is an odd prime, n-1 = 2^s * r with r odd, then for any a such that
gcd(a,n) = 1 either a^r == 1 (mod n)  or  a^(r*2^j) == -1 (mod n) for some
j, 0 <= j <= s-1.   See Handbook of Applied Cryptography, Fact 4.20.
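In code, the strong pseudoprime test derived from that fact looks roughly
like this (a minimal Python sketch, not production code):

import random

def miller_rabin(n, t):
    # returns False if n is proven composite, True if it passes t rounds
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    r, s = n - 1, 0
    while r % 2 == 0:                # write n - 1 = 2^s * r with r odd
        r //= 2
        s += 1
    for _ in range(t):
        a = random.randrange(2, n - 1)
        x = pow(a, r, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):       # look for a^(r*2^j) == -1 (mod n)
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a is a witness that n is composite
    return True                      # declared (probably) prime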

I'm afraid the reference you gave is incorrect.

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Fermat's primality test vs. Miller-Rabin

2005-11-10 Thread Anton Stiglic
>> I guess the small increase in efficiency would not be worth additional
>> program code.
>
> That depends on the size of the numbers you're working with...
> Considering the research that goes into fast implementations of
> PowerMod I don't think the required computation is trivial.
>
>> Although the Carmichael numbers fool the Fermat test
>> (that is, $a^{n-1} = 1 (n)$) for *all* a, there are no such things for
>> the Miller-Rabin test:  for any odd composite n at least 3/4 of a's
>> fail the test, that is if you made m MR tests with random a's then you
>> are mistaken with probability at most (1/4)^m.

That is true, but it is not a direct conclusion.  Let X
represent the event that n is composite, and Y_t the event that
MILLER-RABIN(n,t) declares n to be prime.  Because for a composite n at
least 3/4 of the a's fail the test, we can conclude that Pr(Y_t |
X) <= (1/4)^t.
But the probability I think you are referring to (the one that is usually
considered the most interesting) is P(X | Y_t).  It happens to be the case
that P(X | Y_t) is in fact <= (1/4)^t when using uniform random
candidates, but to come to that conclusion you need to consider the fact
that the error probability of Miller-Rabin is usually far smaller than
(1/4)^t (and apply Bayes' theorem and a theorem on the distribution of
prime numbers).  See Note 4.47 in the Handbook of Applied Cryptography, or
the following paper:
http://www.cs.mcgill.ca/~crepeau/PS/BBC+87.ps
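To spell out the Bayes step in the same notation (with X^c the event that n
is prime, and p = P(X^c) the prior probability that a uniformly chosen odd
candidate of the right size is prime, roughly 2/ln n by the prime number
theorem):

  P(X | Y_t) = P(Y_t | X) P(X) / [ P(Y_t | X) P(X) + P(Y_t | X^c) P(X^c) ]
             = P(Y_t | X) (1 - p) / [ P(Y_t | X) (1 - p) + p ]

since a prime always passes the test, i.e. P(Y_t | X^c) = 1.  Plugging in
only the worst-case bound P(Y_t | X) <= (1/4)^t gives something on the order
of ((ln n)/2) * (1/4)^t, which is *weaker* than (1/4)^t; the claim
P(X | Y_t) <= (1/4)^t needs the average-case result that P(Y_t | X) is far
smaller than (1/4)^t for random candidates.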

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: The Pointlessness of the MD5 "attacks"

2005-01-04 Thread Anton Stiglic
>David Wagner wrote:
>> Ben Laurie writes:
>
>
>> Or, even more contrived, imagine that img1.jpg looks
>> like a completely normal JPG file, but img2.jpg exploits some buffer
>> overrun in the startup screen's JPG decoder to overwrite the program's
>> image with some other malicious code.
>> 
>> Sure, these scenarios are contrived and unlikely.  But how do you
>> know that there is not some other (possibly more complex but less
>> contrived) scenario that you would consider more troubling?
>
>They do not relate to the known MD5 collisions - these are general 
>collisions, which we do not know how to create, not the restricted ones 
>we do know how to create.

I disagree; I think it might be possible with the current cryptanalysis on
MD5.  The collisions that can currently be produced only flip a couple of
bits, and you can add what you want before and after the 1024-bit block.

Imagine some code that reads the image (or whatever bit-string) as a textual
string: in one case it doesn't read the whole bit array because there is a
null string-terminating character; in the other case (the colliding one) the
character is not present and causes a buffer overflow.  I think something
like that can be done today.

--Anton




-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: SSL/TLS passive sniffing

2004-12-05 Thread Anton Stiglic
>This sounds very confused.  Certs are public.  How would knowing a copy
>of the server cert help me to decrypt SSL traffic that I have intercepted?

I find a lot of people mistakenly use the term certificate to mean
something like a PKCS#12 file containing a public key certificate and private
key.  Maybe it comes from crypto software salespeople who oversimplify or
don't really understand the technology.  I don't know, but it's a rant I
have.

>Now if I had a copy of the server's private key, that would help, but such
>private keys are supposed to be closely held.

>Or are you perhaps talking about some kind of active man-in-the-middle
>attack, perhaps exploiting DNS spoofing?  It doesn't sound like it, since
>you mentioned passive sniffing.

I guess the threat would be something like an adversary getting access to a
web server, getting a hold of the private key (which in most cases is just
stored in a file; a lot of servers need to be bootable without intervention
as well, so there is a password somewhere in the clear that allows one to
unlock the private key), and then using it from a distance, say on a router
near the server where the adversary can sniff the connections.  A malicious
ISP admin could pull off something like that, or a law-enforcement authority
that wants to read your messages, etc.

Is that a threat worth mentioning?  Well, it might be.  In any case,
forward-secrecy is what can protect us here.  Half-certified (or fully
certified) ephemeral Diffie-Hellman provides us with that property.
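To illustrate what that buys you, here is a sketch of half-certified
ephemeral Diffie-Hellman (assuming the pyca/cryptography package and modern
curve primitives, purely for illustration): the server signs a fresh DH
share with its long-term certified key, the session secret comes from the
ephemeral exchange only, and the ephemeral private keys are thrown away
afterwards, so a later theft of the signing key does not let a passive
sniffer recover past session keys.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

server_signing_key = Ed25519PrivateKey.generate()        # long-term, certified key

# Per session: fresh ephemeral DH shares on both sides
server_eph = X25519PrivateKey.generate()
client_eph = X25519PrivateKey.generate()

server_share = server_eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
signature = server_signing_key.sign(server_share)        # "half-certified": only the server signs

# The client checks the signature against the server's certificate, then both
# sides derive the same session secret from the ephemeral keys alone.
server_signing_key.public_key().verify(signature, server_share)
secret_client = client_eph.exchange(X25519PublicKey.from_public_bytes(server_share))
secret_server = server_eph.exchange(client_eph.public_key())
assert secret_client == secret_server
# Deleting server_eph and client_eph after the session is what gives forward secrecy.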

Of course, if someone could get the private signature key, he could then do
a man-in-the-middle attack and decrypt all messages as well.  It wouldn't
really be that much harder to pull off.

--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


New IBM Thinkpad includes biometrics

2004-10-19 Thread Anton Stiglic

http://www.theregister.co.uk/2004/10/05/biometric_thinkpad_t42/

I wonder how well it can counter the attacks discussed by researchers in the
last few years.  Like reactivating a fingerprint authentication by breathing
on the sensor's surface, which contains residual fat traces of the finger, or
by placing a bag of water on it.  Or the gelatin ("gummy") finger trick.
The biometric authentication might very well make the laptop less secure
than password-based authentication.

--Anton




-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Maths holy grail could bring disaster for internet

2004-09-08 Thread Anton Stiglic
>Mathematicians could be on the verge of solving two separate million dollar
>problems. If they are right - still a big if - and somebody really has
>cracked the so-called Riemann hypothesis, financial disaster might follow.
>Suddenly all cryptic codes could be breakable. No internet transaction
>would be safe.

Looks like they are saying that if one can disprove the Riemann hypothesis,
then one could break (presumably) public key crypto, (presumably) by
factoring or computing DL.  But I am not aware of any factoring or DL
algorithm that can be drastically sped up if the Riemann hypothesis is proven
to be false.

Here the author quotes the mathematician:


 "The whole of e-commerce depends on prime numbers. I have described the
primes as atoms: what mathematicians are missing is a kind of mathematical
prime spectrometer. Chemists have a machine that, if you give it a
molecule, will tell you the atoms that it is built from. Mathematicians
haven't invented a mathematical version of this. That is what we are after.
If the Riemann hypothesis is true, it won't produce a prime number
spectrometer. But the proof should give us more understanding of how the
primes work, and therefore the proof *might* be translated into something
that *might* produce this prime spectrometer. If it does, it will bring the
whole of e-commerce to its knees, overnight. So there are very big
implications."


This wording, with the word *might*, is more accurate, and not at all
equivalent to the assertion the author makes at the beginning.

Another bad article.

--Anton

 


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Microsoft .NET PRNG (fwd)

2004-08-12 Thread Anton Stiglic


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ed Gerck
Sent: August 10, 2004 13:42
To: [EMAIL PROTECTED]
Subject: Re: Microsoft .NET PRNG (fwd)

>The PRNG should be the least concern when using MSFT's cryptographic
>provider. The MSFT report 140sp238.pdf says:
>
>   RSAENH stores keys in the file system, but relies upon Microsoft
>   Windows XP for the encryption of the keys prior to storage.


Yes, that's true.  The security policy explains that the safeguarding of
private keys is done outside the crypto boundary.  (As someone mentioned to
me in a personal email, you need to have a look at the fine print of such
accreditations; this is an example of such fine print.)
Note however that the OS uses the crypto provider to encrypt the private key
using a secret that is generated based on (or protected by a key generated
based on, I don't remember off the top of my head) the user's password.

The strength of the system is based on the user's Windows password, which I
think is reasonable (anyone who can log in as the user can use his private
keys, stored in his container, anyway)...

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Microsoft .NET PRNG (fwd)

2004-08-10 Thread Anton Stiglic
There is some detail in the FIPS 140 security policy of Microsoft's
cryptographic provider, for Windows XP and Windows 2000.  See for example
http://csrc.nist.gov/cryptval/140-1/140sp/140sp238.pdf

where they say the RNG is based on the FIPS 186 RNG using SHS.  The seed is
based on the collection of a lot of data, enumerated in the security policy.

I would guess that what is written is true, or else NIST would look very bad
if someone reverse-engineered the code and showed that what they certified
was wrong.

So based on that, it would seem that the PRNG in recent Microsoft
cryptographic providers is o.k.

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: dual-use digital signature vulnerability

2004-07-21 Thread Anton Stiglic

About using a signature key to only sign contents presented in a meaningful
way that the user supposedly read, and not random challenges:

The X.509 PoP (proof-of-possession) doesn't help things out, since a public
key certificate is given to a user by the CA only after the user has
demonstrated to the CA possession of the corresponding private key by
signing a challenge.  I suspect most implementations use a random challenge.
For things to be clean, the challenge would need to be content that is
readable, and that is clearly only used for proving possession of the
private key in order to obtain the corresponding public key certificate.

X.509 PoP gets even more twisted when you want to certify encryption keys (I
don't know what ietf-pkix finally decided upon for this...; the best solution
seems to be to encrypt the public key certificate and send that to the user,
so the private key is only ever used to decrypt messages...)


--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: New Attack on Secure Browsing

2004-07-16 Thread Anton Stiglic

>You stated that http://www.pgp.com is an SSL-protected page, but did you
>mean https://www.pgp.com? On my Powerbook, with all the browsers I get an
>error that the certificate is wrong and they end up at http://www.pgp.com.

What I get is a bad certificate, and this is due to the fact that the
certificate is issued to store.pgp.com and not www.pgp.com.
Interestingly (maybe?), when you go and browse on their on-line store, and
check something out to buy, the session is secured but with another
certificate, one issued to secure.pgpstore.com.

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Verifying Anonymity

2004-07-16 Thread Anton Stiglic

>> [...] I find it hard to imagine how you
>> can even know whether it "seems to work", let alone has some subtle
>> problem.
>
>That's clearly a much harder problem--and indeed I suspect it's behind
>the general lack of interest that the public has shown in anonymous
>systems.
>
>-Ekr

The lack of understanding of how a solution works applies to most security
products and in general to all computer products.  Most people don't have a
clue how an SSL-encrypted session really protects their credit card number in
transit, but a lot of people are starting to realize that they should use it
(they understand to some extent the problem SSL attempts to solve).

With anonymity systems, I don't think understanding how a solution works is
an obstacle to its widespread use; the problem is more that of understanding
the *problem the solution attempts to solve*.  People still don't understand
the consequences of privacy invasion on the Internet (the problem).  Once
they do, they will be willing to pay for a solution from any trusted
company, without needing to understand how the solution actually works.
IMHO...

--Anton



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Humorous anti-SSL PR

2004-07-15 Thread Anton Stiglic

>This barely deserves mention, but is worth it for the humor:
>"Information Security Expert says SSL (Secure Socket Layer) is Nothing More
>Than a Condom that Just Protects the Pipe"
>http://www.prweb.com/releases/2004/7/prweb141248.htm

The article says
"The weaknesses of SSL implementations have been well known amongst security
professionals, but their argument has been that SSL is the best tool
currently on offer. The fact that it can be spoofed and is open to man in
the middle attacks is played down."

O.k., so if there is a vulnerability in a particular implementation there
might be a possible MITM attack.  It is also possible to do a MITM attack if
the user doesn't do proper verification.  But I wouldn't say that SSL
implementations in general are susceptible to MITM attacks.
Later in the article it is written:

"What we can be certain of is that it is not possible to have a
man-in-the-middle attack with FormsAssurity - encryption ensures that the
form has really come from the claimed web site, the form has not been
altered, and the only person that can read the information filled in on the
form is the authorized site."

O.k., so how do they achieve such assurances?

Eric's comment about condoms being effective is right, so it's a bad analogy
as well!

--Anton



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: EZ Pass and the fast lane ....

2004-07-13 Thread Anton Stiglic

My 2 cents on the subject...

The automatic toll fee system I am most familiar with is that of Kapsch (it
used to be Combitech).  They have implemented automatic toll fee collection
in many countries around the world (in Europe, Asia, Australia, South
America)...

http://www.kapsch.se/

I think they usually implement a combination of 
1) a system that queries a device in the car, which identifies the car
owner, and then charges the owner in a central database (incrementing the
amount that is due)
2) license plate scanning for accountability purposes.

When you do crypto to authenticate the communication between the toll device
and the device in the car, you need to do fast crypto.  Where I work, we
used to be in the hardware arena and had a project designing an HSM for a
toll fee system.  The requirements were that it had to be based on DES/3DES
and you had to be able to do DES/3DES operations on single, small-length
messages, rapidly.  This last part is a bit tricky; it's not the same as
getting good average speed on longer messages, since you need to take into
account the communication between the PC and the HSM, which accounts for a
lot of overhead on a single, small-length message.  IO memory mapping is a
good way to go; preparing keys in RAM can also help just a bit, but for us IO
memory mapping gave the most significant speed-up.  There is a paper from IBM
on this subject (can't find the reference now), with the same conclusions.
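
As an editorial illustration of that last point, here is a minimal Python
sketch (using the PyCryptodome package; the package, key and message values
are assumptions made for the example) contrasting many single small 3DES
messages, each with its own setup, against one long message.  In a real toll
system the per-message cost would be dominated by the PC-to-HSM round trip,
which a software toy cannot show; the point is only that per-message
throughput and bulk throughput are different metrics.

    import time
    from Crypto.Cipher import DES3

    key = DES3.adjust_key_parity(bytes(range(24)))   # toy 3DES key
    small = b"8bytes!!"                               # one small record
    bulk = small * 100_000                            # same data, one long message

    t0 = time.perf_counter()
    for _ in range(100_000):
        DES3.new(key, DES3.MODE_ECB).encrypt(small)   # per-message setup + encrypt
    t1 = time.perf_counter()
    DES3.new(key, DES3.MODE_ECB).encrypt(bulk)        # one long encryption
    t2 = time.perf_counter()

    print(f"100,000 single 8-byte messages: {t1 - t0:.2f} s")
    print(f"one 800,000-byte message:       {t2 - t1:.2f} s")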

License plate scanners seem to be effective these days.  A story related to
the toll fee license plate scanning: Toronto police are using a license
plate recognition device to scan parked cars in order to attempt to identify
stolen cars:
http://www.ipc.on.ca/scripts/index_.asp?action=31&P_ID=14285&N_ID=1&PT_ID=1001&U_ID=0

They were able to recover 153 stolen cars in a 3-month test period.
They say they can scan 1000 license plates an hour, but this includes the
time to send the information to a central point and do a search in a
repository.

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: identification + Re: authentication and authorization

2004-07-09 Thread Anton Stiglic


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ed Gerck
Sent: 7 juillet 2004 14:46
To: [EMAIL PROTECTED]
Subject: identification + Re: authentication and authorization

>I believe that a significant part of the problems discussed here is that
>the three concepts named in the subject line are not well-defined. This
>is not a question of semantics, it's a question of logical conditions
>that are at present overlapping and inconsistent.
>
>For example, much of what is called "identity theft" is actually
>"authentication theft" -- the stolen credentials (SSN, driver's
>license number, address, etc) are used to falsely *authenticate* a
>fraudster (much like a stolen password), not to identify. 

Yes and no.  The problem is that most authentication and authorisation
schemes today are actually identification and authentication and
authorisation schemes.  Even when you read CISSP study guides, they always
describe it in 3 steps, identification, authentication and authorisation.
The thing is that we can do without identification.  Identification is not
necessary, even if you want accountability.  In
Identification-authentication-authorisation schemes, identification is the
process of pin-pointing an exact individual from a set of individuals (e.g.
an SSN allows you to define a unique United States citizen), and
authentication is the process of verifying that the individual claiming to be
who he identified himself as really is that individual.   But most systems
don't really need identification; all they need is a proof that the
individual possesses a certain attribute.  It is possible to do
authentication and authorisation without doing the identification part!   For
example, it is possible to prove that you are a United States citizen that
has a valid SSN, without actually giving out information about the SSN.

Why is identity theft a bad thing?  Usually, you don't want your identity to
be stolen because you could be accused of something due to accountability
that is associated with your identity.  The problem is not that someone can
authenticate himself to a system he is not supposed to have access to; the
problem is that a thief can identify himself as you and authenticate himself
as you, and then do bad things (like transfer your money).

The problem is not really authentication theft, it's identity theft, or if
you want to put it even more precisely, it's "identity theft and
authenticating as the individual to whom the identity belongs".  But the
latter doesn't make for a good buzzword :)

Here is another way of seeing it.  Consider a system where you need to
authenticate yourself as a citizen, of some region, that is 18 years of age
or older, in order to participate in some gambling thing, say.  One way to
implement the authentication and authorisation in the system is to have each
individual identify themselves, and then authenticate themselves.  If the
individual is part of a set of individuals that are known to be over 18,
then the individual is given access.  Another way to implement it is to have
each individual prove that they are over 18 without identifying themselves,
using Stefan Brands' digital credentials, say.  If the authentication is
successful, the unidentified individual is given access.  In the latter
case, you don't really care about authentication theft unless there is some
sort of accountability (with Stefan's digital credentials, you can embed the
identity in the tokens that are presented for authentication; the identity
can only be revealed under certain circumstances, for example excessive use
or if required by law, and it could be revealed by a third party).

I do agree that stronger authentication does help, preferably authentication
based on zero-knowledge protocols, since these reveal less information about
the individual's identity that can be used to impersonate the individual.
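
For readers unfamiliar with the kind of zero-knowledge protocol meant here,
the following is a minimal Python sketch of one round of Schnorr's
identification protocol with toy parameters (the group p, q, g and all
variable names are illustrative assumptions, not a production choice): the
prover convinces the verifier that she knows the discrete log x of
y = g^x mod p without revealing x.

    import secrets

    p, q, g = 1019, 509, 4            # toy group: g has prime order q mod p

    x = secrets.randbelow(q)          # prover's long-term secret
    y = pow(g, x, p)                  # prover's public key

    r = secrets.randbelow(q)          # prover commits
    t = pow(g, r, p)

    c = secrets.randbelow(q)          # verifier's random challenge

    s = (r + c * x) % q               # prover's response; reveals nothing about x

    assert pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier's check
    print("proof accepted")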

--Anton





Once we
understand this, a solution, thus, to what is called  "identity theft"
is to improve the *authentication mechanisms*, for example by using
two-factor authentication. Which has nothing to do with identification,
impersonation, or even the security of identification data.

In further clarifying the issue, it seems that what we need first is
a non-circular definition for identity. And, of course, we need a
definition that can be applied on the Internet.  Another important
goal is to permit a safe automatic processing of identification,
authentication and authorization [1].

Let me share with you my conclusion on this, in revisiting the
concept of identification some time ago. I found it useful to ask
the meta question -- what is identification, that we can identify it?
In short, a useful definition of identification should also work
reflexively and self-consistently [2].

In this context, what is "to identify"? I think that "to identify"
is to look for connections. Thus, in identification we should look
for logical and/or natural connections. For exam

RE: authentication and authorization (was: Question on the state of the security industry)

2004-07-08 Thread Anton Stiglic

>However, in some scenarios
>http://www.garlic.com/~lynn/2001h.html#61
>the common use of static data is so pervasive that an individual's
>information
>is found at thousands of institutions. The value of the information to the
>criminal is that the same information can be used to perpetrate fraud
>across all institutions and so the criminal value is enormous. However
>the value to each individual institution may be minimal. As a result
>there can be situations where an individual institution hasn't the
>infrastructure or the funding to provide the countermeasures necessary
>to keep the criminals away from the information (they simply don't
>have the resources to provide security proportional to the risk).
>
>The value of the static data authentication information to a criminal
>is far greater than the value of the information to the institution ...
>or the cost to the criminal to acquire the information is possibly
>orders of magnitude less than the value of the information (for
>criminal purposes).

Agreed.  This is where federated identity management becomes a tricky
problem to solve.  It is important to get something like the Liberty
Alliance right.

A solution that I like can be found here (there is also a ppt presentation
that can be found on the site):

http://middleware.internet2.edu/pki04/proceedings/cross_domain_identity.pdf


>Given such a situation  the infrastructures simply don't have
>the resources to provide the countermeasures adequate to meet
>the attacks they are going to experience (there is such a huge
>mismatch between the value of the information to the individual
>institutions and the value of the information to the criminal).

>Which results in my assertion that there has to be a drastic
>move away from the existing "static data" authentication paradigm
> because there is such a mismatch between the value
>to secure the information verses the value of attacks to
>obtain the information.

>It isn't that theory can't provide  mechanisms to protect
>the information  it that the information is spread far and
>wide and is in constant use by thousands of business processes,
>and that protection problem is analogous to the problem of
>having people  memorize a hundred different 8+character
>passwords that  change every month (which is also a shortcoming
>of the static data authenticaton paradigm).

Yes, theory is far more advanced than what is used in practice.
We have zero-knowledge proofs and attribute authentication, based on
secrets stored on smart cards held by the proper owners, with the possibility
of delegating part of the computation to a server (so clients can
authenticate on low-powered devices) without revealing information
about the secret, etc...

I agree that what you call the "static data" authentication paradigm
is the cause of many problems, including identity theft.  It is
one reason why Identity Management is a hot topic these days; businesses
are losing control of all this "static data" associated with the various
systems they have, and when an employee leaves a company he often has an
active account on some system even months after his departure.
This is the de-provisioning problem.

I'm not too sure about the wording, however: if you take a zero-knowledge
proof to authenticate possession of an attribute, the prover will hold
some static data (some sort of secret); the only difference is that
the verifier doesn't need to know the secret, and in fact you can't
learn anything from looking at the communication link when the proof
is executed.  You can't learn anything either by deviating from the protocol
on the verifier's side (malicious verifier).  But if you can steal
the secret that the prover possesses, then you can impersonate her.


--Anton 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: authentication and authorization

2004-07-07 Thread Anton Stiglic


>-Original Message-
>From: John Denker [mailto:[EMAIL PROTECTED] 
>Sent: 5 juillet 2004 18:28
>To: Anton Stiglic
>Cc: [EMAIL PROTECTED]; 'Ian Grigg'
>Subject: Re: authentication and authorization

>[...]
>We should assume that the participants on this list have a
>goodly amount of technical expertise.  We should use the
>established technical definitions, unless there is a good
>reason not to.

Well, there is no established technical definition for "digital identity",
but most definitions seem to focus on what I defined it as.

> [...]
>> A digital identity is usually composed of a set of identifiers (e.g. 
>> Unix ID, email address, X.500 DN, etc.) and other information 
>> associated to an entity (an entity can be an individual, computer 
>> machine, service, etc.). "Other information" may include usage 
>> profiles, employee profiles, security profiles, cryptographic keys, 
>> passwords, etc.

>That is very unhelpful, because it lumps together two types
>of things that really ought to be treated differently.
>  -- I want my email address to be widely known.  I want my
>   public keys to be widely known.
>  -- I want my password to be secret.  I want my private keys
>   to be secret.

The term "digital identity" is not intended to help you solve the problem.
In a digital identity there are parts that an individual wants to keep
private, other parts can be public (others should be divulged to only
certain individuals, possibly via a zero-knowledge proof that will convince
the verifier, without giving him enough information to be able to prove the
property to someone else).  You can refer to the different parts of a
digital identity using different terms if you want, but the term "digital
identity" usually includes all of those parts.  Relating to the real world,
you might have a fetish for high-healed pink leather boots, which is part of
your identity (something that characterizes you), but not want others to
know about that.  But its still part of your identity, just as your SSN
number is.


>> Identity can be stolen in the sense that this information can be 
>> copied, revealed to someone, and that someone can use it in order to 
>> identify and authenticate himself to a system and get authorization 
>> to access resources he wouldn't normally be allowed to.
>> 
>> The following document has a nice diagram on the first page of 
>> appendix A: http://www.ec3.org/Downloads/2002/id_management.pdf

>Again that (including the reference) misses the point and
>blurs things that really need to be kept distinct.


You are mixing up two problems: that of defining digital identity, and that
of preventing unauthorized individuals from accessing resources that they are
not supposed to (via identity theft for example), as well as privacy.


>The focus _must_ be on the transaction, not on the ID.
>Suppose I carry out a transaction with the jewellery
>store.  Did I authorize a $3.00 payment for a new watch
>battery, or a $30,000.00 payment for diamond necklace?

You are talking about the problem of non-repudiation here...

>[...]
>Collecting more and more ID information about me is at
>best marginally helpful to the relying party;  "ID" might
>tell the RP whether I *could* have authorized a particular
>transaction (was it within my account limit?) but "ID"
>cannot possibly tell the RP whether I *did* authorize a
>particular transaction.  And (!!) don't forget the
>converse:  If the transaction is legit, there is no
>reason why my ID needs to be involved.  Cash transactions
>are still legal!

I agree with that last part.  It relates to the whole thing about attribute
vs. identity vs. individual authentication that I mentioned.  I favour
attribute authentication in most cases.  And with stuff like Digital
Credentials you can also have accountability even with attribute
authentication (for example if forced by law).

>The proper use of _identification_ is obvious:  In some
>exceptional circumstances it is important to be able to
>connect a real meat-space _identity_ with a particular
>event.  For instance, if there is a hit-and-run accident,
>it really helps if a witness notes the license number of
>the car.  (Been there, done that.)

Again, this relates exactly to my discussion about attribute, identity and
individual authentication.  Things like Digital Credentials are what is going
to help you out, not re-defining the term "digital identity".

--Anton




-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: authentication and authorization (was: Question on the state of the security industry)

2004-07-07 Thread Anton Stiglic
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of John Denker
Sent: 1 juillet 2004 14:27
To: [EMAIL PROTECTED]
Cc: Ian Grigg
Subject: Re: authentication and authorization (was: Question on the state of
the security industry)

>1) For starters, "identity theft" is a misnomer.  My identity
>is my identity, and cannot be stolen.  The current epidemic
>involves something else, namely theft of an authenticator ...

Identity has many meanings.   In a typical dictionary you will find several
definitions for the word identity.  When we are talking about information
systems, we usually talk about a digital identity, which has other meanings
as well. If you are in the field of psychology, philosophy, or computer
science, identity won't mean the same thing. One definition that relates to
computer science that I like is the following:
"the individual characteristics by which a thing or person is recognized or
known".

A digital identity is usually composed of a set of identifiers (e.g. Unix
ID, email address, X.500 DN, etc.) and other information associated to an
entity (an entity can be an individual, computer machine, service, etc.).  
"Other information" may include usage profiles, employee profiles, security
profiles, cryptographic keys, passwords, etc.

Identity can be stolen in the sense that this information can be copied,
revealed to someone, and that someone can use it in order to identify and
authenticate himself to a system and get authorization to access resources
he wouldn't normally be allowed to.

The following document has a nice diagram on the first page of appendix A:
http://www.ec3.org/Downloads/2002/id_management.pdf

I came up with a similar diagram for a presentation I recently gave, but
instead of talking about primary and secondary identifying documents I
mention primary and secondary identifying information in general, and I also
have an "identifiers" circle situated beside the bigger circle, containing
identifiers that belong to an entity but are not linkable to the entity
(talking about nyms and pseudonyms).  Recall that there are basically 3
types of authentication:  individual authentication (such as via biometrics,
where you use primary identifying information to authenticate someone),
identity authentication (where the identity may or may not be linkable to an
individual), and attribute authentication (where you need to reveal nothing
more than the possession of a certain attribute, such as can be done with
Stefan Brands' digital credentials).

--Anton
 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: recommendations/evaluations of free / low-cost crypto libraries

2004-06-30 Thread Anton Stiglic


>-Original Message-
>From: [EMAIL PROTECTED]
>[mailto:owner-[EMAIL PROTECTED]] On Behalf Of Peter Gutmann
>Sent: 29 juin 2004 09:49
>To: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
>Subject: RE: recommendations/evaluations of free / low-cost crypto
>libraries
>
>"Anton Stiglic" <[EMAIL PROTECTED]> writes:
>
>>A list can be found here
>>
>>http://www.homeport.org/~adam/crypto/
>
>Hmm, that list is somewhat out of date (several years in some cases).

Indeed.  Adam started that list in 1996, but I don't think he has put a lot
of time into updating it in recent years.  Still, I think it's a good list
for someone who is starting to look for crypto libraries.

It would be nice gift to the community if someone came up with a similar,
updated list.

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: recommendations/evaluations of free / low-cost crypto libraries

2004-06-22 Thread Anton Stiglic

A list can be found here

http://www.homeport.org/~adam/crypto/

There are several things that you might want to consider, other than the
language in which the library was written of course.

You might want to consider the cryptographic algorithms that are supported,
and support for standards such as various PKCS standards.  For example,
although JCE is a standard framework, not all JCE providers implement the
same functionality.  Some may allow you to create a PKCS#12 object or not,
some will only let you read one.  And creating a PKCS#12 object can be done
in various ways, you might need to provide all of the keys that go in the
PKCS#12 object at once, or you can add them incrementally (we actually
changed a JCE provider because of this point).  Some allow you to talk to a
cryptographic hardware via PKCS#11 and some not.

You also might be interested in software performance, and if operations can
be accelerated by hardware you might have in hand.

You might also be interested in FIPS accreditation.  For example, Crypto++
and the NSS library are FIPS 140 accredited.  OpenSSL is in the process of
being certified (there was a discussion about that in this list around
September 2003).

Architecture and OS platform compatibility is another important issue.

You might also be interested in the size of the compiled executable once it
is statically linked with the library (some libraries do much better than
others on this point).  

Regards,

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: SSL accel cards

2004-05-26 Thread Anton Stiglic
>
> Does anyone know of an SSL acceleration card that actually works under
> Linux/*BSD?

I successfully used a Broadcom PCI card on a Linux (don't remember
what Linux and kernel version, this was close to 2 years ago).
If I remember correctly it was the BCM5820 processor I used
http://www.broadcom.com/collateral/pb/5820-PB04-R.pdf
(the product sheet mentions support for Linux, Win98, Win2000,
FreeBSD, VxWorks, Solaris).

I was able to use it on a Linux and on a Windows (where I offloaded
modexp operation from MSCAPI crypto provider).

The Linux drivers were available from Broadcom upon request; there was
also a crypto library that called the card via the drivers, but at the time
I looked at it the code wasn't very stable (e.g. I had to debug the RSA
key generation and send patches since it did not work at all; later versions
had the key generation part working properly).
The library might be stable by now.

I also made the Broadcom chip work with OpenCryptoki on a Linux,
I submitted the code for supporting Broadcom in OpenCryptoki.

http://www-124.ibm.com/developerworks/oss/cvs/opencryptoki/

> []
> and certainly
> they don't (gasp) make updated versions available for download. Because
someone
> might... what, steal the driver? Anyway...
> []

No, but they might find out how poorly written they are??? Don't know the
reason...

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: The future of security

2004-05-26 Thread Anton Stiglic

- Original Message - 
From: "Steven M. Bellovin" <[EMAIL PROTECTED]>
To: "Ian Grigg" <[EMAIL PROTECTED]>
Cc: "Graeme Burnett" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Tuesday, May 11, 2004 11:36 AM
Subject: Re: The future of security 


> In message <[EMAIL PROTECTED]>, Ian Grigg writes:
> > Security architects
> >will continue to do most of their work with
> >little or no crypto.
> 
> And rightly so, since most security problems have nothing to do with 
> the absence of crypto.
> >
> >j.  a cryptographic solution for spam and
> >viruses won't be found.
> 
> This ties into the same thing:  spam is *unwanted* email, but it's not 
> *unauthorized*.  Crypto can help with the latter, but only if you can 
> define who is in the authorized set of senders.  That's not feasible 
> for most people.


Something like hashcash / client puzzles / Penny Black defines a set
of authorized email (emails that come with a proof-of-work), and then
provides a cryptographic solution.   This is not a foolproof solution (as
described in the paper "Proof-of-Work Proves Not to Work"),
but a good partial solution that is probably best used in combination
with other techniques such as white-lists, Bayesian spam filters, etc...

I think cryptography techniques can provide a partial solution to spam.
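
As an illustration of the proof-of-work idea, here is a hedged Python sketch
of a hashcash-style scheme (the encoding, difficulty and names are
assumptions made for the example, not the actual hashcash format): the
sender searches for a nonce whose hash has a number of leading zero bits,
and the receiver verifies the stamp with a single hash.

    import hashlib
    from itertools import count

    def leading_zero_bits(digest: bytes) -> int:
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
            else:
                return bits + (8 - byte.bit_length())
        return bits

    def mint(message: bytes, difficulty: int = 20) -> int:
        """Sender's cost: roughly 2^difficulty hash evaluations."""
        for nonce in count():
            digest = hashlib.sha1(message + str(nonce).encode()).digest()
            if leading_zero_bits(digest) >= difficulty:
                return nonce

    def verify(message: bytes, nonce: int, difficulty: int = 20) -> bool:
        """Receiver's cost: a single hash evaluation."""
        digest = hashlib.sha1(message + str(nonce).encode()).digest()
        return leading_zero_bits(digest) >= difficulty

    msg = b"To: bob, Date: 2004-05-26"
    n = mint(msg)
    assert verify(msg, n)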

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Is there a Brands certificate reference implementation?

2004-05-08 Thread Anton Stiglic

Stefan Brands started his own company,
http://www.credentica.com/

There isn't much on the web site yet, but if you click on the image you get
the info
email address.

The code that was developed for Brands credentials at ZKS was never
released.  There was also code written during the ESPRIT project called
CAFE.

A description of protocols for Brands credentials can be found here
http://crypto.cs.mcgill.ca/~stiglic/Papers/brands.pdf

A more elaborate reference is the technical paper that can be found here
http://www.credentica.com/technology/technology.html

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: [Mac_crypto] Apple should use SHA! (or stronger) to authenticate software releases

2004-04-06 Thread Anton Stiglic

> > But if you are given the choice between using MD5 and SHA1, I'd prefer
> > SHA1, but I wouldn't be concerned with someone using MD5 isntead of SHA1
> > for the time being. In other words, if I were to do a risk analysis, I
would
> > identify
> > the use of MD5 instead of SHA1 as one of the major risks.
> >
>
> "were" or "were not"?

I wanted to write "I would *not* identify the use of MD5 instead of SHA1 as
one of the major risks".  In other words, using MD5 instead of SHA1 would be
low risk compared to the other threats that exist.
Sorry, the mistake changes the whole sense of the sentence.

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: [Mac_crypto] Apple should use SHA! (or stronger) to authenticate software releases

2004-04-05 Thread Anton Stiglic

The attacks by Dobbertin on MD5 only allow one to find collisions in the
compression function, not the whole MD5 hash.

But it is a sign that something might be fishy about MD5.

MD5 output is 128 bits.  There are two types of collision-finding
attacks that can be applied.  In the first you are given a hash value
y = H(x), for some x, and try to find a different input x' that hashes
to the same output:  H(x) = H(x') = y.  This relates to 2nd-preimage
resistance.  This can be done on MD5 with a 2^128 work factor.
The other attack is to find two arbitrary inputs x, x' such that
H(x) = H(x').  This relates to collision resistance.  This can be done
with good probability with a 2^64 work factor.  Now, the problem
of having malicious source code hash to the same value as good/valid
source code seems to be related more to the former; that is, you have
some code that is checked in, that gives some hash value Y, and you
want to find a different (malicious) code that hashes to the same value.
You might be able to play with the valid code as well, giving you more
flexibility for the search of a collision, but you can't play too much
without having this noticed by other developers.
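
To make the gap between those two work factors concrete, here is a small
Python sketch (the truncation to 32 bits is only to make the demo run
quickly; MD5's real output is 128 bits): a birthday search on an n-bit hash
finds an arbitrary collision after roughly 2^(n/2) tries, whereas a 2nd
preimage would need on the order of 2^n.

    import hashlib

    def h32(data: bytes) -> bytes:
        return hashlib.md5(data).digest()[:4]     # toy 32-bit hash

    seen = {}
    i = 0
    while True:
        d = h32(str(i).encode())
        if d in seen:
            print(f"collision after {i} tries: inputs {seen[d]} and {i}")
            break
        seen[d] = i
        i += 1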

I think that there are many other problems that are more of a concern.  For
example, hacking a web site (or mirror site) that contains code for download
and changing the code along with the hash value of the code, or preventing
a developer from inserting some kind of trap door or Trojan.

But if you are given the choice between using MD5 and SHA1, I'd prefer
SHA1, but I wouldn't be concerned with someone using MD5 isntead of SHA1
for the time being. In other words, if I were to do a risk analysis, I would
identify
the use of MD5 instead of SHA1 as one of the major risks.

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: [Fwd: Re: Non-repudiation (was RE: The PAIN mnemonic)]

2004-01-08 Thread Anton Stiglic

- Original Message - 
From: "Jerrold Leichter" <[EMAIL PROTECTED]>
Cc: "Cryptography" <[EMAIL PROTECTED]>
Sent: Wednesday, January 07, 2004 7:14 AM
Subject: Re: [Fwd: Re: Non-repudiation (was RE: The PAIN mnemonic)]


> Now that we've trashed non-repudiation ... just how is it different from
> authentication?

I don't think the word "authentication" has the same problem as
"non-repudiation", but you do need to be careful how you define it.

So here we are talking about entity authentication (as opposed to data
authentication; the latter really has an unambiguous definition, at least I
hope it does!).

The way you should define entity authentication is by stating that it is a
process of verifying that an entity possesses the authentication credentials
associated with the user that entity claims to be.  This entity might be the
rightful user, or it might be someone who stole the credentials from the
rightful user.   If someone stole my ATM card and my PIN, he/she can
successfully authenticate him/herself to an ATM and withdraw money.  The word
"authenticate" is appropriate in this last phrase.

But I see that most definitions that have been collected here:
http://www.garlic.com/~lynn/secgloss.htm#t523
are not careful about this.

The thing about non-repudiation is that it is something that even most laws
do not permit.  See for example:
http://www.firstmonday.dk/issues/issue5_8/mccullagh/

Non-repudiation applied to digital signatures implies that the definition
states that only one person could possibly have had possession of the private
signing key and was conscious of the fact that it was used to sign something.

In most jurisdictions a person has the right to repudiate a signature
(hand-written or electronic), and thus non-repudiation does not work.  People
have the right to repudiate signatures since a signature might be the result
of forgery or fraud, or the signer might have been drunk at the time of
signing or forced to sign (like with a gun to his head).  Repudiation is
possible but non-repudiation is not.

I know some people who use the term "accountability" instead of
"non-repudiation" to express the property needed in certain systems
(commercial infrastructures where users log in and need to be accountable for
their acts).  This seems like a better term to be used in certain contexts,
but I'm still thinking about it...

--Anton







-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Any good books or URLs for WinXP crypto & security?

2004-01-07 Thread Anton Stiglic
NSA Windows hardening guides:
http://nsa2.www.conxion.com/

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: CIA - the cryptographer's intelligent aid?

2004-01-07 Thread Anton Stiglic
The thing about CIA is that it is commonly used in security (not
cryptography) courses to mean Confidentiality, Integrity (of systems) and
Availability (instead of Authentication): availability of systems, services
and information.

For crypto I always talked about CAIN or PAIN (as in no PAIN no gain, or
cryptography is a lot of PAIN).  Note that I also prefer the word
Confidentiality over Privacy, the latter being too high-level; I usually
reserve it to mean the hiding of who is communicating with whom (anonymity
systems...).

When introducing digital signatures I always state that they provide
integrity
(as do MACs, which I introduce beforehand) but also the possibility of
non-repudiation.  And then I go on stating that it is very hard, if not
impossible,
to in fact implement non-repudiation.

--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: PKI root signing ceremony, etc.

2003-12-15 Thread Anton Stiglic
> Some folks here might be interested in
>http://webservices.xml.com/pub/a/ws/2003/12/09/salz.html
> which walks through a secure, auditable root keygen and signing ceremony.

We had something similar going on at Zero-Knowledge Systems for the PKI
of the Freedom servers.  But the password that protected the private key
was under split knowledge, and the observer wrote down what he observed
in a log journal...

You also want to make sure that the computer you are doing this on is 
not connected to another machine or network.


--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Postgraduate programs

2003-12-14 Thread Anton Stiglic
>Good day,
> I wonder if you could suggest some of the best postgraduate programs
>focusing on crypto related themes in the world?
>I am making research that will relate schools, security advances and
>government policies on several countries and knowing your suggestions on
>good schools is a key component for my paper.

You can start by looking here:
http://hcs.harvard.edu/~dmolnar/gradschools.html

 --Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: "Zero Knowledge Authentication"? (was Cryptolog Unicity Software-Only Digital Certificates)

2003-12-14 Thread Anton Stiglic
> Previously used primarily in scientific/academic applications, "zero
> knowledge" authentication is a method of proving a user's identity without
> revealing his password to the verifier.

So does anybody know exactly what this zero-knowledge authentication is
that they use?

> Using this technology, Unicity
> allows companies to issue digital certificates securely on a software-only
> basis, eliminating the need to supply employees, partners and clients with
> special hardware, or to require them to locally store certificates on
their
> computers. The private data is never stored on the user's hard drive, and
> is erased from the RAM as soon as the user no longer needs it.

This part about storing private keys on a server is not novel.  The company
that I work for has a similar solution with respect to this, it's called
HotSign:

http://www.okiok.com/index.jsp?page=Hot+Sign

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: yahoo to use public key technology for anti-spam

2003-12-09 Thread Anton Stiglic

- Original Message - 
From: "Steven M. Bellovin" <[EMAIL PROTECTED]>

> I use a variety of email addresses, for various reasons.  I have my 
> usual work account, some university accounts, a few personal accounts, 
> one I reserve for EBay use, etc.  I also use several different SMTP 
> servers to send my email.  I *always* have a secure tunnel set up; in 
> fact, Postfix on my laptop is hard-wired to send to port 20025 on 
> 127.0.0.1.  Of course, where that ends up will vary, but it's not in a 
> one-to-one correspondence with the sending address I use.  The Yahoo 
> scheme would apparently require that each email I send be routed via 
> the domain owner's SMTP server.  

So I'm guessing you have all your emails forwarded to one mail account
and fetch them all from there, and when you reply or send a new email
you just use one of your SMTP servers, which doesn't necessarily
correspond to the incoming (POP or IMAP or whatever) server you
received the mail from.
Is that correct?
In that case I guess it becomes problematic.
If you just receive your mail from one incoming server, I don't see a
problem with having your mail sent via the SMTP server on the same machine
where your incoming mail server resides.

If the signature just certified that the mail was relayed via an SMTP
server where the user authenticated himself, I think that would be a
good idea (an SMTP server that is necessarily on the same machine as
the incoming mail server).  Then at least you would know that the
email you received was sent by someone who authenticated himself
to some SMTP server, and not just someone that sent the email via
an open relay.

If you want something better, it seems that it requires the sender to
have possession of his private signature key and sign the emails
he sends, but that's not a user-friendly solution and I think we all
agree that it won't work in practice (not transparent enough...).

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: yahoo to use public key technology for anti-spam

2003-12-07 Thread Anton Stiglic

- Original Message - 
From: "Carl Ellison" <[EMAIL PROTECTED]>
To: "'Will Rodger'" <[EMAIL PROTECTED]>; "'Steve Bellovin'"
<[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Sunday, December 07, 2003 8:44 AM
Subject: RE: yahoo to use public key technology for anti-spam


> I, for one, hate the idea.  My From address should be [EMAIL PROTECTED]  That's
> my remailer where I receive all my incoming e-mail.  However, my outgoing
> SMTP server depends on which cable modem provider or hot spot I happen to
be
> at the moment.  It would be that SMTP machine that signs my outgoing mail,
> not acm.org who never sees my outgoing mail.

But you should be sending mails via *your* SMTP server, and should be
connecting to that SMTP server using SSL and authentication.  Open relays
encourage spam.  People shouldn't be relaying mail via just any SMTP server.

--Anton



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: safety of Pohlig-Hellman with a common modulus?

2003-12-07 Thread Anton Stiglic

- Original Message - 
From: "Peter Fairbrother" <[EMAIL PROTECTED]>
To: "David Wagner" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Saturday, December 06, 2003 7:58 PM
Subject: Re: safety of Pohlig-Hellman with a common modulus?


> David Wagner wrote:
>
> > Steve Bellovin  wrote:
> >> Is it safe to use Pohlig-Hellman encryption with a common modulus?
> >> That is, I want various parties to have their own exponents, but share
> >> the same prime modulus.  In my application, a chosen plaintext attack
> >> will be possible.  (I know that RSA with common modulus is not safe.)
> >
> > Yes, I believe so.  The security of Pohlig-Hellman rests on the
difficulty
> > of the discrete log problem.
>
> Nope. In P-H there is no g. A ciphertext is M^k mod p. An attacker won't
> know k, and usually won't know M, but see below. I don't know what the
> problem is called, but it isn't DLP. Anyone?

If you don't know M and k, there are several values M', k' such that
M'^k' mod p == M^k mod p.   For example, if M is a generator of the
group mod p, then all other generators M' will have a corresponding k'
that will give you this value.
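
A tiny numerical illustration of this point (toy prime and brute force,
purely for the example): for a ciphertext C = M^k mod p, many other pairs
(M', k') yield the same C, so C alone does not pin down M and k.

    p = 23
    M, k = 5, 9                        # 5 is a generator mod 23
    C = pow(M, k, p)

    pairs = [(m, e) for m in range(2, p) for e in range(1, p - 1)
             if pow(m, e, p) == C]
    print(f"C = {C}; {len(pairs)} pairs (M', k') with M'^k' mod {p} == C")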

Think about known plaintext attack or chosen plaintext attack.  A symmetric
cipher should be secure against these attacks and much more...
In these attacks you know the bases of several values...

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Problems with GPG El Gamal signing keys?

2003-12-01 Thread Anton Stiglic

- Original Message - 
From: "Ralf Senderek" <[EMAIL PROTECTED]>
To: "Werner Koch" <[EMAIL PROTECTED]>; "cryptography" <[EMAIL PROTECTED]>
Sent: Thursday, November 27, 2003 11:23 AM
Subject: Re: Problems with GPG El Gamal signing keys?


> On Thu, 27 Nov 2003, Werner Koch wrote:
>
> > Yes, yes, I should have removed ElGamal signing key support back in
> > 1998 when there was no more need for it.  I recall that some folks
> > begged me not to do that and I took the wrong decision.
>
> I think no-one will blame you for this, you couldn't have known the
> effects. But what are we going to learn? Heading for far less complexity
> is the future!

Maybe we can learn that code re-use is tricky in cryptography:  indeed, if
the signing function and encryption function did not use the same gen_k
function, the author of the code would not have done the optimization that
causes the vulnerability in the signing function, because this has never been
recommended for signing (while for encryption it is a well known
recommendation).
Maybe we can learn that using the same key for two different things is
really, really not a good idea!  If the vulnerability were restricted to
signatures it would have been less severe; being able to decrypt all
confidential messages that were created in the past is much more severe.  A
lot of applications use one single key for both signing and encryption; while
this doesn't seem to be immediately dangerous, I don't think it's a good
idea.  For example, when I receive a signed email from someone, Outlook will
save the public signature key that comes with the message and use it to
encrypt if I decide to send an encrypted message to that person.
I never understood why having separate keys for signing and encrypting was
so complicated to implement.  Also, in the PoP (proof-of-possession) protocol
of X.509, a signature using the private key is used to prove possession of
the private key corresponding to a public encryption key.  While the
different padding used in signature and encryption schemes makes it difficult
to find an obvious vulnerability with this, I don't think it's a good idea.

You have to be very careful when using the same key pair for encrypting and
signing.  The subtle error found in GnuPG about using small k is a good
example.  Another thing to consider is that ElGamal encryption with base
g = 2 is safe but insecure for signatures...  It's just simpler to have two
distinct pairs of keys.

By the way, is the paper by Phong Q. Nguyen describing the vulnerability
available somewhere?  Or maybe someone could describe the cryptanalysis
steps for retrieving the private key from the signature when using a smaller
random k; I would appreciate it.  ElGamal with smaller k looks a lot like
DSA, except in DSA you work with a generator of a smaller subgroup and
your k is chosen in this smaller subgroup...

Thanks.

--Anton



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Problems with GPG El Gamal signing keys?

2003-11-27 Thread Anton Stiglic

- Original Message - 
From: "Perry E.Metzger" <[EMAIL PROTECTED]>

> Some notes have been floating around claiming that there are bugs in
> GPG's use of El Gamal keys. For example, see:
>
http://groups.google.com/groups?selm=E1AOvTM-0001nY-00%40alberti.g10code.de&oe=UTF-8&output=gplain
>
> Can anyone confirm these reports?

The note talks about GPG Elgamal encryption and signature schemes
using small value of k (where k is the random value that you pick for
each signature, each encryption).  For encryption choosing a small k
is o.k. (by small I mean something like 160 bits when you have a 1024
bit prime), but this was never recommended for the signature scheme,
and the note states that this would in fact be a security
vulnerability.  The note says that with one signature using a certain
private key x, generated using a small random k, you can compute the
private key x.  So if you are also using this key for decryption, the
private key found could also be used to decrypt everything that was
encrypted to you.

I haven't put any thought yet into how you would retrieve the private key
given the signature (I just read this email), but it sounds plausible.
One thing I can confirm however is that in GnuPG 1.2.3 (the latest
version available from the GnuPG web site), both the
encryption and signature schemes indeed use a small k.

If you have the source code, just take a look at cipher/elgamal.c,
there is a function gen_k( MPI p ) that is called by both
do_encrypt(...) and sign(...)  functions.  In the function gen_k, k is
chosen to be of size nbits, where nbits is smaller than the size of
the prime.  Look at the comment in the code:
 /* IMO using a k much lesser than p is sufficient and it greatly
 * improves the encryption performance.  We use Wiener's table
 * and add a large safety margin.
 */
nbits = wiener_map( orig_nbits ) * 3 / 2;

wiener_map maps sizes of primes to sizes for k and q.  For example,
for a 1024 bit prime, the function will return 165, so in this case
nbits would be 165*3/2 = 247.

I give credit to Phong Nguyen, who the note says was the person who
observed this and came up with the attack.
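
For readers who want to see why the secrecy and size of k are so delicate,
here is a hedged Python sketch with toy parameters (the prime, generator,
keys and k are all made-up illustrative values): with the standard ElGamal
signature equations r = g^k mod p and s = k^{-1}(H(m) - x*r) mod (p-1),
anyone who learns k recovers the private key x in one line of algebra.  The
actual attack on a short-but-unknown k, attributed above to Nguyen, is
subtler than this; the sketch only shows how little slack there is around k.

    import hashlib
    from math import gcd

    p, g = 467, 2                      # toy prime and generator (not secure sizes)
    x = 127                            # private key
    y = pow(g, x, p)                   # public key

    def H(m: bytes) -> int:
        return int.from_bytes(hashlib.sha1(m).digest(), "big") % (p - 1)

    m = b"hello"
    k = 9                              # deliberately small, invertible mod p-1
    assert gcd(k, p - 1) == 1
    r = pow(g, k, p)
    s = (pow(k, -1, p - 1) * (H(m) - x * r)) % (p - 1)

    # the standard verification equation holds:
    assert (pow(y, r, p) * pow(r, s, p)) % p == pow(g, H(m), p)

    # an attacker who learns k recovers x directly (here gcd(r, p-1) == 1):
    x_recovered = ((H(m) - s * k) * pow(r, -1, p - 1)) % (p - 1)
    assert x_recovered == x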

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Are there...one-way encryption algorithms

2003-11-19 Thread Anton Stiglic

"David Wagner" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> martin f krafft  wrote:
> >it came up lately in a discussion, and I couldn't put a name to it:
> >a means to use symmetric crypto without exchanging keys:
> >
> >  - Alice encrypts M with key A and sends it to Bob
> >  - Bob encrypts A(M) with key B and sends it to Alice
> >  - Alice decrypts B(A(M)) with key A, leaving B(M), sends it to Bob
> >  - Bob decrypts B(M) with key B leaving him with M.
> >
> >Are there algorithms for this already? What's the scheme called?
>
> It's called Pollig-Hellman.

If I'm not mistaken you are wrong.  Pohlig and Hellman proposed an encryption
scheme based on discrete log; the description of the OP was for a
key transport protocol.
In Pohlig-Hellman, what you do is have Alice and Bob share secret
keys k and d such that k*d == 1 mod (p-1), where p is some prime.
To encrypt a message M Alice computes M^k mod p, and Bob
can decrypt by computing (M^k)^d mod p == M mod p.

This is commonly referred to as the Pohlig-Hellman symmetric-key
exponentiation cipher.
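
A minimal Python sketch of the cipher just described, with a toy prime
(real use would need a large prime and careful parameter handling):

    p = 2579                      # toy prime
    k = 7                         # secret encryption exponent
    d = pow(k, -1, p - 1)         # secret decryption exponent, k*d == 1 mod (p-1)

    M = 1234
    C = pow(M, k, p)              # encrypt: C = M^k mod p
    assert pow(C, d, p) == M      # decrypt: C^d mod p == M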

It is described in patent 4,424,414 which you can find here
http://patft.uspto.gov/netahtml/search-bool.html

Also mentioned in HAC, chapter 15, section 15.2.3, (iii).

The algorithm that was described by the OP is really Shamir's
three-pass algorithm, also known as Shamir's no-key protocol.

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: A-B-a-b encryption

2003-11-19 Thread Anton Stiglic
- Original Message - 
From: "Jeremiah Rogers" <[EMAIL PROTECTED]>
To: "crypto list" <[EMAIL PROTECTED]>
Sent: Sunday, November 16, 2003 12:50 PM
Subject: Re: A-B-a-b encryption


> This is Shamir's Three-Pass Protocol, described in section 22.3 of
> Schneier. It requires a commutative cryptosystem.
>
> - Jeremiah Rogers

Also described in HAC, protocol 12.22.
It's like basic DH, except it provides key transport instead of key
agreement.
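
For illustration, here is a minimal Python sketch of the three-pass idea,
using the Pohlig-Hellman exponentiation cipher discussed in the other thread
as the commutative cipher (toy prime and exponents, chosen only for the
example; as sketched there is no authentication, so it is open to
man-in-the-middle attacks):

    p = 2579                                  # public toy prime

    def keypair(e):
        return e, pow(e, -1, p - 1)           # (encrypt, decrypt) exponents

    a, a_inv = keypair(7)                     # Alice's secret exponents
    b, b_inv = keypair(11)                    # Bob's secret exponents

    M = 1234                                  # Alice's message

    t1 = pow(M, a, p)                         # pass 1, Alice -> Bob:  A(M)
    t2 = pow(t1, b, p)                        # pass 2, Bob -> Alice:  B(A(M))
    t3 = pow(t2, a_inv, p)                    # pass 3, Alice -> Bob:  B(M)
    assert pow(t3, b_inv, p) == M             # Bob recovers M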

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Anton Stiglic
> I'm not sure how you come to that conclusion.  Simply
> use TLS with self-signed certs.  Save the cost of the
> cert, and save the cost of the re-evaluation.
> 
> If we could do that on a widespread basis, then it
> would be worth going to the next step, which is caching
> the self-signed certs, and we'd get our MITM protection
> back!  Albeit with a bootstrap weakness, but at real
> zero cost.

I know of some environments where this is done.  For example
to protect the connection to a corporate mail server, so that 
employees can read their mail from outside of work.  The caching 
problem is easily solved in this case by having the administrator 
distribute the self-signed cert to all employees and having them 
import it and trust it.  This costs no more than 1 man day per year.

This is near 0 cost however, and gives some weight to Perry's
argument.

> Any merchant who wants more, well, there *will* be
> ten offers in his mailbox to upgrade the self-signed
> cert to a better one.  Vendors of certs may not be
> the smartest cookies in the jar, but they aren't so
> dumb that they'll miss the financial benefit of self-
> signed certs once it's been explained to them.

I have a hard time believing that a merchant (who plans
to make $ by providing the possibility to purchase on-line)
cannot spend something like 1000$ [1] a year for an SSL 
certificate, and that the administrator is not capable of 
properly installing it within 1-2 man days.  If he can't install
it, just get a consultant to do it, you can probably get one
that does it within a day and charges no more than 1000$.

So that would make the total around 2000$ a year; let's
generously round it up to 10K$ per annum.
I think your 10-100 million $ per annum estimate is a bit
exaggerated...


[1] this is the price I saw at Verisign
http://www.verisign.com/products/site/commerce/index.html
I'm sure you can get it for cheaper. This was already 
discussed on this list I think...

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Anton Stiglic

- Original Message - 
From: "Tom Otvos" <[EMAIL PROTECTED]>

> As far as I can glean, the general consensus in WYTM is that MITM attacks
are very low (read:
> inconsequential) probability.

I'm not certain this was the consensus.

We should look at the scenarios in which this is possible, and the tools
that are available to accomplish the attack.  I would say that the attack is
more easily done inside a local network (outside the network you have to get
control of the ISP or some node, and this is more for the "elite").
But statistics show that most exploits are accomplished because of employees
within a company (either because they are not aware of basic security
principles, or because the malicious person was an employee), so I find this
scenario (attack from inside the network) to be plausible.

Take for example a large corporation of 100 or more employees; there have
got to be a couple of people that do on-line purchasing from work, on-line
banking, etc...  I would say that it is possible that an employee (just
curious, or really malicious) would want to intercept these communications.

So how difficult is it to launch an MITM attack on https?  Very simple it
seems.  My hacker friends pointed out to me two software tools, ettercap and
Cain:
http://ettercap.sourceforge.net/
http://www.oxid.it/cain.html

Cain is the newest I think, and remarkably simple to use.  It has a very
nice
GUI and it doesn't take much hacking ability to use it.  I've been using it
recently for educational purposes and find it very easy to use, and I don't
consider myself a hacker.

Cain allows you to do MITM (in HTTPS, DNS and SSHv1) on a local
network.  It can generate certificates in real time with the same common
name as the original.  The only thing is that the certificate will probably
not be signed by a trusted CA, but most users are not security aware and
will just continue despite the warning.
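
To see how trivial the certificate-generation step is, here is a hedged
sketch using the Python "cryptography" package (the package, the common name
and the validity period are assumptions made for the example): anyone can
mint a certificate bearing an arbitrary common name; only the missing
signature from a CA the browser trusts triggers the warning that users tend
to click through.

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"www.example-bank.com")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)                   # the impersonated common name
        .issuer_name(name)                    # self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=1))
        .sign(key, hashes.SHA256())
    )
    print(cert.subject.rfc4514_string())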

So given this information, I think MITM threats are real.  Are these attacks
being done in practice?  I don't know, but I don't think they would easily
be reported if they were, so you  can guess what my conclusion is...

--Anton



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: NCipher Takes Hardware Security To Network Level

2003-10-15 Thread Anton Stiglic

- Original Message - 
From: "Ian Grigg" <[EMAIL PROTECTED]>

> * In contrast, someone who knows little about cars,
> can objectively evaluate a car.  They can take it
> for a test drive and see if it feels right.  Using
> it is proving it.

I'm not totally convinced of this...  Someone with little knowledge about
cars might see the difference between a KIA and a Mercedes in one test
drive, but I would think that most affordable cars seem to drive the same
in a simple test drive (at least from my experience).  But what
a person will do is talk to his friends and get feedback; he'll learn that
some types of cars have a bad reputation and others seem to be good.
This is also done in security: take for example host security modules used
by banks; most banks make their choice based on the vendor's reputation.
Unfortunately this choice is often influenced by publicity (and the more a
certain company sells, the more money it makes, the more publicity it can
afford, the more it will sell, even if its product is not the best).

There is a marketing rule that states that there is one product that
dominates its field in every category and gets about 80% of all sales; then
there are 1-3 other products that battle for second place, and all others get
almost nothing (for example, in cola, Coke is number 1, with Pepsi coming
second).  I don't think security products are an exception to this.

Another way people choose products is if they are recommended.  For
example, I buy a certain toothpaste because it is recognized by the
Canadian dental association.  This is a sort of certification.  There are
certainly other examples of products in everyday life that get this type
of certification that influences people's choices.  Of course, publicity
has some degree of influence here as well.

There are no official security associations recognized by the government
that include most of the security experts we know; rather, what exists is
certain standards that the government itself decides upon and that are used
(FIPS 140, CC).  Maybe this lack of an independent security association of
which any security expert can become a member is the root
of the problem?

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Internal format of RSA private keys in microsoft keystore.

2003-10-15 Thread Anton Stiglic

- Original Message - 
From: "R.Sriram" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, October 10, 2003 1:20 AM
Subject: Internal format of RSA private keys in microsoft keystore.


> Greetings,
> 
> In the process of trying to work around some of the limitations
> of the m$-CAPI API, I'm trying to decipher the internal representation
> of private keys in the default m$ key store, in order to extract
> the private key out.

If you could acquire a context, you could export the private key into 
a blob and then read it from that, but you can't acquire a context.
As Tom mentioned, the keys are encrypted in the container.
The FIPS 140 security policies for M$'s CSPs say that the task 
of protecting the keys in the system is delegated to Data Protection 
API (DPAPI).  There is a brief explanation in the security policies, 
see for example
http://csrc.nist.gov/cryptval/140-1/140sp/140sp241.pdf
section "Key Storage".
You might be able to find more detailed information somewhere else...

Good luck!

--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: NCipher Takes Hardware Security To Network Level

2003-10-11 Thread Anton Stiglic

- Original Message - 
From: "Peter Gutmann" <[EMAIL PROTECTED]>
> [...]
>
> The problem is
> that what we really need to be able to evaluate is how committed a vendor
is
> to creating a truly secure product.
> [...]

I agree 100% with what you said.  Your three-group classification seems
accurate.
But the problem is: how can people who know nothing about security evaluate
which vendor is most committed to security?
For the moment, FIPS 140 and CC type certifications seem to be the only means
for these people...  Unfortunately these are still too general and don't
always give you an accurate measurement of how dedicated to security the
vendor was...
This seems to be a big open problem in practical security!

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: NCipher Takes Hardware Security To Network Level

2003-10-07 Thread Anton Stiglic

- Original Message - 
From: "Peter Gutmann" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Tuesday, October 07, 2003 11:07 AM
Subject: Re: NCipher Takes Hardware Security To Network Level


> "Anton Stiglic" <[EMAIL PROTECTED]> writes:
>
> >This is why you get requirements of the type that it should run on
Windows in
> >single-user mode, which I take to mean have only an admin account.  This
> >prevents privilege escalation attacks (regular user to root) that are
easily
> >done.
> >
> >I think this is reasonable, since you really are relying on the OS and
the PC
> >for the security of the module.
>
> Uhh, so you're avoiding privilege escalation attacks by having everyone
run as
> root, from which you couldn't escalate if you wanted to.  This doesn't
strike
> me as a very secure way to do things (and it would still get MSDOS
certified,
> because you've now turned your machine into a DOS box protection-wise).

Did you read the security policy of the Netscape Security Module?  Basically,
if you want to get the configuration that is FIPS 140 certified, you need
to install the module on a PC and add tamper-resistant seals over appropriate
interfaces, junctions and fasteners of all doors and covers in the enclosure
of the PC, so that you can't open the cover without the fact being physically
noticeable.  I suggest adding some duct tape in strategic positions for
additional security :).

By reasonable I mean in the framework of having a general-purpose software
cryptographic library be certified FIPS.  I'm not saying I find this secure.
When I see a software library being certified FIPS 140, I say to myself it
must implement the cryptographic algorithms in a decent way, have a decent
random number generator, and stuff like that.  I don't care much about the
physical boundary that they artificially determine.

If I want high security, I will go with hardware.  At the end of the day, what
you want to protect is your secret keys, and without tamper-resistant hardware
(that zeroizes your secrets when someone tries to poke at it) protecting them
is difficult if not impossible.

--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: NCipher Takes Hardware Security To Network Level

2003-10-07 Thread Anton Stiglic

- Original Message - 
From: "Peter Gutmann" <[EMAIL PROTECTED]>
> [...]
> If you think that's scary, look at Microsoft's CryptoAPI for Windows XP
FIPS
> 140 certification.  As with physical security certifications like BS 7799,
you
> start by defining your security perimeter, defining everything inside it
to be
> SECURE, and ignoring everything outside it.  Microsoft defined their
perimeter
> as "the case of the PC".  Everything inside the PC is defined to be
SECURE.
> Everything outside is ignored.

I believe that is typical of most software crypto modules that are FIPS 140
certified, isn't it?
It classifies the module as multi-chip standalone.

This is why you get requirements of the type that it should run on Windows in
single-user mode, which I take to mean have only an admin account.  This
prevents privilege escalation attacks (regular user to root) that are easily
done.

I think this is reasonable, since you really are relying on the OS and the PC
for the security of the module.

More scary to me is stuff like
"DSSENH does not provide persistent storage of keys.  While it is possible
to
store keys in the file system, this functionality is outside the scope of
this validation."

This is where Microsoft's CSPs do the dirty work, and use what is called
the Data Protection API (DPAPI) to somehow safeguard keys somewhere
in your system.

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymity +- credentials

2003-10-07 Thread Anton Stiglic

- Original Message - 
From: "Ian Grigg" <[EMAIL PROTECTED]>

> [...]
> In terms of actual "practical" systems, ones
> that implement to Brands' level don't exist,
> as far as I know?  

There were however several projects that implemented 
and tested the credentials system.  There was CAFE, an 
ESPRIT project.

At Zeroknowledge there was a working implementation written 
in Java, with a client that ran on a BlackBerry.

There was also the implementation at ZKS of a library in C 
that implemented Brands's stuff, in which I participated.
The library implemented issuing and showing of credentials,
with a limit on the number of possible showings (if you passed
the limit, your identity was revealed, thus allowing for off-line
verification of payments for example.  If you did not pass the
limit, no information about your identity was revealed).  
The underlying math was modular, you could work in a 
subgroup of Z*p for prime p, or use Elliptic curves, or 
base it on the RSA problem.  We plugged in OpenSSL 
library to test all of these cases.
Basically we implemented the protocols described in 
[1], with some of the extensions mentioned in the conclusion.

The library was presented by Ulf Moller at some coding
conference whose name I don't recall...

It was to be used in Freedom, for payment of services, 
but you know what happened to that project.

> Also, the use of Brands work
> would need to consider that he holds a swag of
> patents over it all (as also applies to all of
> the Chaum concepts).

Yes, most of the stuff is patented, as is Chaum's stuff.
Somebody had suggested that to build an ecash system,
for example, you could start out by implementing David
Wagner's suggestion as described in Lucre [2], and then,
if you sell and want extra features and flexibility, get the
patents and implement Brands' stuff.  A similar strategy 
would seem to apply to digital credentials in general.

> There is an alternate approach, the E/capabilities
> world.  Capabilities probably easily support the
> development of psuedonyms and credentials, probably
> more easily than any other system.   But, it would
> seem that the E development is still a research
> project, showing lots of promise, not yet breaking
> out into the wider applications space.
> 
> A further alternate is what could be called the
> hard-coded psuedonym approach as characterised
> by SOX.  (That's the protocol that my company
> wrote, so normal biases expected.)  This approach
> builds psuedonyms from the ground up, which results
> in a capabilities model like E, but every separate
> use of the capability must be then re-coded in hard
> lines by hardened coders.

Do you have any references on this?

Thanks.

--Anton

[1] http://crypto.cs.mcgill.ca/~stiglic/Papers/brands.pdf
[2] http://anoncvs.aldigital.co.uk/lucre/theory2.pdf

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-06 Thread Anton Stiglic

- Original Message - 
From: "Jerrold Leichter" <[EMAIL PROTECTED]>
To: "Tim Dierks" <[EMAIL PROTECTED]>
Cc: "Jerrold Leichter" <[EMAIL PROTECTED]>; "Cryptography list"
<[EMAIL PROTECTED]>
Sent: Friday, October 03, 2003 8:19 PM
Subject: Re: anonymous DH & MITM


> | From: Tim Dierks <[EMAIL PROTECTED]>
> |
> | I'm lost in a twisty page of MITM passages, all alike.
> |
> | My point was that in an anonymous protocol, for Alice to communicate
with
> | Mallet is equivalent to communicating with Bob, since the protocol is
> | anonymous: there is no distinction. All the concept of MITM is intended
to
> | convey is that in an anonymous protocol, you don't know who you're
talking
> | to, period. Mallet having two conversations with Alice & Bob is
equivalent
> | to Mallet intermediating himself into a conversation between Alice &
Bob.
> |
> | If you have some unintermediated channel to speak with a known someone
> | once, you can exchange a value or values which will allow you to
> | authenticate each other forevermore and detect any intermediations in
the
> | past. But the fundamental truth is that there's no way to bootstrap a
> | secure communication between two authenticated parties if all direct &
> | indirect communications between those parties may be intermediated.
(Call
> | this the 'brain in a jar' hypothesis.)
> OK, let's set up two different scenarios:
>
> 1.  Non-anonymous communication.  Alice talks to Bob.  Alice knows
> Bob is on the other end, Bob knows Alice is on the other
> end.  They share some secret data; Alice wishes it to be
> known only to her and Bob.  Mallet has a bug in Bob's home
> and copies the data.
>
> Can Alice or Bob detect that Mallet is there?  Clearly not if
> Mallet never uses the data in a detectable way.  No matter how
> many times Alice and Bob communicate, whether or not Mallet
> continues to bug Bob, neither Alice nor Bob can never learn of
> Mallet's presence.
>
> 2.  Anonymous communication.  Alice and Bob have a conversation.
> Mallet plays MITM.  Alice and Bob don't know who their
> corresponding partner is, but they each tell the other
> that they will not reveal the secrets they exchange, and
> each believes the other - and indeed neither ever reveals
> those secrets.  They wish to know if anyone else had a
> chance to learn their secret.
>
> On the face of it, there's no difference between these two
> cases.  In each case, someone receives a copy of the secrets
> exchanged between Alice and Bob, but doesn't *do* anything
> with them that either Alice or Bob can see.
>
> However, in this case, unlike 1, if Alice and Bob continue to
> communicate - using private pseudonyms for each other to
> make "continue to communicate" a meaningful phrase - then,
> assuming Mallet cannot *always* interpose himself, they will
> eventually discover that someone has played a MITM game on
> them.

You started by talking about anonymous communication, but ended up
suggesting a scheme for pseudonymous communication.

Anonymous != pseudonymous.

Let us be clear on that!
It is an important difference.

For example, if you take Stefan Brands' digital credentials, and issue
a multi-show credential, the showings of the credential can be linked:
it is not anonymous but pseudonymous in some sense (even though the
showings cannot be linked to the issuing).  An open problem would be to
have something similar (something as efficient) which allows you to
issue a single credential which can be shown multiple times in an
unlinkable way (completely anonymous).

Camenisch and Lysyanskaya came up with a scheme that allows
you to demonstrate possession of a credential multiple times in a way
that these showings are unlinkable; however, their solution is far from being
efficient in practice.  You are much better off using Brands' credentials and
just having multiple credentials issued, which when shown will be unlinkable.

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-06 Thread Anton Stiglic

- Original Message - 
From: "Jerrold Leichter" <[EMAIL PROTECTED]>
To: "Anton Stiglic" <[EMAIL PROTECTED]>
Cc: "Jerrold Leichter" <[EMAIL PROTECTED]>; "Cryptography list"
<[EMAIL PROTECTED]>; "Tim Dierks" <[EMAIL PROTECTED]>
Sent: Friday, October 03, 2003 4:51 PM
Subject: Re: anonymous DH & MITM


> | From: Anton Stiglic <[EMAIL PROTECTED]>
> | From: "Jerrold Leichter" <[EMAIL PROTECTED]>
> | > No; it's false.  If Alice and Bob can create a secure channel between
> | > themselves, it's reasonable to say that they are protected from MITM
> | > attacks if they can be sure that no third party can read their
messages.
> |
> | How do they create the secure channel in the first place?  We are
talking
> | about MITM that takes place during the key agreement protocol.
> I didn't say I had a protocol that would accomplish this - I said that the
> notion was such a protocol was not inherently self-contradictory.

Seems to be an important part, especially in an anonymous network...
My point was that you can't do that, thus making the rest of your proposal
infeasible.

>
> | > That is: If Alice and Bob are anonymous, they can't say *who* can read
the
> | > messages they are sending, but they might be able to say that,
assuming
> | > that their peer is following the protocol exactly (and in particular
is
> | > not releasing the shared secret) *exactly one other party* can read
the
> | > message.
> |
> | That's false.  Alice and Bob can follow the basic DH protocol, exactly,
but
> | Mallory is in the middle, and what you end up with is a shared key
between
> | Alice and Bob and Mallory.
> There's nothing to be true or false:  It's a definition!  (And yes, DH
does
> not provide a system that meets the definition.)

I didn't see this as being a definition; I saw it as a suggestion for a
protocol which I believe cannot be achieved (again, assuming both parties
want to remain anonymous).

The best you could probably do is have a system where users are anonymous
and obtain anonymous credentials when they register, and have users use these
credentials to demonstrate that they registered, without having them reveal
exactly who they are.  This way, you can probably prevent a MITM who did
not register...

>
> | The property you are talking about, concerning the *exactly one other
party*
> | can read the message is related to the *key authentication* property,
> | discussed in [1] (among other places), which enables you to construct
> | authenticated key agreements.
> The reference was missing; I'd be interested in seeing it.

Sorry I forgot, here it is:

[1]  Authenticated Diffie-Hellman Key Agreement Protocols.  Simon
Blake-Wilson, Alfred Menezes.
http://citeseer.nj.nec.com/blake-wilson98authenticated.html

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymity +- credentials

2003-10-06 Thread Anton Stiglic

- Original Message - 
From: "bear" <[EMAIL PROTECTED]>
To: "John S. Denker" <[EMAIL PROTECTED]>
Cc: "R. A. Hettinga" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Friday, October 03, 2003 6:05 PM
Subject: Re: anonymity +- credentials


>
>
> On Fri, 3 Oct 2003, John S. Denker wrote:
>
> >We need a practical system for anonymous/pseudonymous
> >credentials.  Can somebody tell us, what's the state of
> >the art?  What's currently deployed?  What's on the
> >drawing boards?
>
> The state of the art, AFAIK, is Chaum's credential system.

The state of the art is Brands' credentials.

A technical overview of digital credentials by Stefan Brands:
http://citeseer.nj.nec.com/brands02technical.html

A white paper from Zeroknowledge systems during the time
Stefan visited ZKS:
http://osiris.978.org/~brianr/crypto-research/anon/www.freedom.net/products/whitepapers/credsnew.pdf

Description of example protocols that use this stuff in practice,
which I wrote with Ariel Glenn, Ian Goldberg and Frédéric
Légaré during the time we implemented the protocols at
ZKS:
http://crypto.cs.mcgill.ca/~stiglic/Papers/brands.pdf

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: how to defeat MITM using plain DH, Re: anonymous DH & MITM

2003-10-06 Thread Anton Stiglic

- Original Message - 
From: "Ed Gerck" <[EMAIL PROTECTED]>
To: "Anton Stiglic" <[EMAIL PROTECTED]>
Cc: "Jerrold Leichter" <[EMAIL PROTECTED]>; "Cryptography list"
<[EMAIL PROTECTED]>; "Tim Dierks" <[EMAIL PROTECTED]>
Sent: Friday, October 03, 2003 6:44 PM
Subject: how to defeat MITM using plain DH, Re: anonymous DH & MITM


> Anton Stiglic wrote:
>
> > That's false.  Alice and Bob can follow the basic DH protocol, exactly,
but
> > Mallory is in the middle, and what you end up with is a shared key
between
> > Alice and Bob and Mallory.
>
> No. What you get is a shared key between Bob and Mallory and *another*
shared
> key between Alice and Mallory. This is important for many reasons.

You are correct on that point.
>
> First, it provides a way to detect that a MITM attack has occurred. For
example,
> if the MITM is not there at any time forth after key agreement, the
DH-based encryption/decryption will not work since Alice and Bob did NOT
share a
> secret key when under the MITM attack. As another example, if Alice and
Bob can
> communicate using another channel even an ongoing MITM attack can be
likewise
> discovered.

That is true, but it doesn't apply in practice when one party wants to remain
anonymous.  Most protocols have Alice and Bob verify that they share the same
key once, and then let them go on with their lives.
If you do some kind of continuous verification, the MITM can just disrupt the
communication between Alice and Bob, and Alice and Bob will then restart a DH
agreement from scratch.  You can't reuse a previous secret, since that would
break anonymity (it could be done for pseudonymity however, or when both
parties reveal their identity...); Alice and Bob will never have realized that
there was a MITM.

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread Anton Stiglic

- Original Message - 
From: "Jerrold Leichter" <[EMAIL PROTECTED]>

> [...]
> | > I think it's a tautology: there's no such thing as MITM if there's no
such
> | > thing as identity. You're talking to the person you're talking to, and
> | > that's all you know.
> |
> | That seems to make sense
> No; it's false.  If Alice and Bob can create a secure channel between
them-
> selves, it's reasonable to say that they are protected from MITM attacks
if
> they can be sure that no third party can read their messages.

How do they create the secure channel in the first place?  We are talking
about a MITM that takes place during the key agreement protocol.

> That is:
> If Alice and Bob are anonymous, they can't say *who* can read the messages
> they are sending, but they might be able to say that, assuming that their
> peer is following the protocol exactly (and in particular is not releasing
the
> shared secret) *exactly one other party* can read the message.

That's false.  Alice and Bob can follow the basic DH protocol, exactly, but
Mallory is in the middle, and what you end up with is a shared key between
Alice and Bob and Mallory.
The property you are talking about, that *exactly one other party* can read
the message, is related to the *key authentication* property, discussed in [1]
(among other places), which enables you to construct authenticated key
agreements.

>
> Note that if you have this, you can readily bootstrap pseudonymity:  Alice
> and Bob simply use their secure channel to agree on a shared secret, or on
> pseudonyms they will henceforth use between themselves.  If there were a
> MITM, he could of course impersonate each to the other ever afterward.

But how do they share the initial secret?  And with true anonymity you don't
want linkability.  Pseudonymity is a different thing: with pseudonymity you
have linkability.

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: DH with shared secret

2003-10-03 Thread Anton Stiglic

- Original Message - 
From: "Jack Lloyd" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, October 03, 2003 5:13 AM
Subject: DH with shared secret


> This was just something that popped into my head a while back, and I was
> wondering if this works like I think it does. And who came up with it
> before me, because it's was too obvious. It's just that I've never heard
of
> something alone these lines before.
>
> Basically, you share some secret with someone else (call it S).  Then you
> do a standard issue DH exchange, but instead of the shared key being
> g^(xy), it's g^(xyS)

Not exactly the same thing, but you get the same properties:  SKEME.
See section 3.3.2, Pre-shared key and PFS, of
SKEME:  A Versatile Secure Key Exchange Mechanism for internet,
Hugo Krawczyk.
http://citeseer.nj.nec.com/krawczyk96skeme.html
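
If it helps, here is a minimal sketch (toy group, Python, my own illustration
rather than the exact SKEME construction) of the general idea: instead of
putting S in the exponent, you hash it together with g^(xy) when deriving the
session key.

import hashlib
import secrets

# Toy textbook group only; a real deployment uses a large standardized prime group.
p, g = 23, 5

S = b"pre-shared secret between Alice and Bob"   # assumed shared out of band

x = secrets.randbelow(p - 2) + 1                 # Alice's ephemeral exponent
y = secrets.randbelow(p - 2) + 1                 # Bob's ephemeral exponent
gx, gy = pow(g, x, p), pow(g, y, p)              # values actually exchanged

def session_key(dh_shared: int, psk: bytes) -> bytes:
    # Hash the DH result together with the pre-shared key; a MITM who controls
    # gx and gy but does not know S cannot compute the same session key.
    width = (p.bit_length() + 7) // 8
    return hashlib.sha256(dh_shared.to_bytes(width, "big") + psk).digest()

k_alice = session_key(pow(gy, x, p), S)
k_bob = session_key(pow(gx, y, p), S)
assert k_alice == k_bob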


--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread Anton Stiglic

- Original Message - 
From: "Tim Dierks" <[EMAIL PROTECTED]>

>
> I think it's a tautology: there's no such thing as MITM if there's no such
> thing as identity. You're talking to the person you're talking to, and
> that's all you know.

That seems to make sense.   In anonymity-providing systems often you
want one side to be anonymous, and the other to identify itself (like in
anonymous web surfing).  In this case, if you are using DH to exchange
keys, what you want is something like half-certified DH (see for example
section 2.3 of [1]), where the web server authenticates itself.  With
half-certified DH, Alice (the user that is browsing in my example) can be
assured that she is really talking to Bob (the web server she wanted to
communicate with), and not a MITM.


[1] http://crypto.cs.mcgill.ca/~stiglic/Papers/dhfull.pdf
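
Roughly, the half-certified idea looks like this (a sketch only; X25519 and
Ed25519 from the Python "cryptography" package are my stand-ins here, not what
the manuscript itself uses).  Only the server's DH contribution is signed, so
the anonymous client can check she shares a key with the real server while
revealing nothing about herself.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ed25519, x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Long-term server signing key; its public half is what Alice must already
# trust (e.g. via a certificate).  That is the "half" that is certified.
server_sig = ed25519.Ed25519PrivateKey.generate()

# Ephemeral DH keys for one run of the protocol.
alice_dh = x25519.X25519PrivateKey.generate()
bob_dh = x25519.X25519PrivateKey.generate()
bob_pub = bob_dh.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# Bob signs only his own DH value; Alice stays anonymous (hers is unsigned).
signature = server_sig.sign(bob_pub)

try:
    server_sig.public_key().verify(signature, bob_pub)   # Alice's check
except InvalidSignature:
    raise SystemExit("server's DH value was tampered with (possible MITM)")

shared = alice_dh.exchange(x25519.X25519PublicKey.from_public_bytes(bob_pub))
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"half-certified DH").derive(shared)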

--Anton



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: VeriSign tapped to secure Internet voting

2003-10-02 Thread Anton Stiglic
> Schu stressed that several layers of security will prevent hackers from
> accessing the system. VeriSign will house the security servers in its own
> hosting centers. The company will ask military personnel to use their
> Common Access Cards--the latest form of ID for the military--to access
> the system and cast a vote. Civilians will use digital signatures.

So how will these civilians get a certified public key, and how will the
private key be protected?  Is there a special policy for the issuance of this
kind of certificate?

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: End of the line for Ireland's dotcom star

2003-09-24 Thread Anton Stiglic

> Why is it that none of those 100-odd companies with keys in the browsers
> are doing anything with them?  Verisign has such a central role in
> the infrastructure, but any one of those other companies could compete.
> Why isn't anyone undercutting Verisign's prices?  Look what happened with
> Thawte when it adopted this strategy: Mark Shuttleworth got to visit Mir!

And Thawte got bought by Verisign, so no more competition...
Interestingly, last time I checked, it was cheaper to buy from Thawte than 
it was from Verisign directly.

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Code breakers crack GSM cellphone encryption

2003-09-08 Thread Anton Stiglic

- Original Message - 
From: "Greg Rose" <[EMAIL PROTECTED]>
To: "Anton Stiglic" <[EMAIL PROTECTED]>
Cc: "John Doe Number Two" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Monday, September 08, 2003 1:39 PM
Subject: Re: Code breakers crack GSM cellphone encryption


> At 11:43 AM 9/8/2003 -0400, Anton Stiglic wrote:
> >I think this is different however.  The recent attack focused on the A5/3
> >encryption algorithm, while the work of Lucky, Briceno, Goldberg, Wagner,
> >Biryukov, Shamir (and others?) was on A5/1 and A5/2 (and other crypto
> >algorithms of GSM, such as COMP128, ...).
>
> No, that's not right. The attack *avoids* A5/3, by making the terminal end
> of the call fall back to A5/2, solving for the key in real time, then
> continuing to use the same key with A5/3.

That's what I meant to say, but I did not use the right words.
The attack does however seem novel.
I haven't seen the paper on the web yet (all I know is that it was
presented at Crypto 03, which I did not attend); I'm anxious to get my hands
on it.

--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Code breakers crack GSM cellphone encryption

2003-09-08 Thread Anton Stiglic

>- Original Message - 
>From: "John Doe Number Two" <[EMAIL PROTECTED]>
>To: "R. A. Hettinga" <[EMAIL PROTECTED]>; "Clippable"
<[EMAIL PROTECTED]>
>Cc: <[EMAIL PROTECTED]>
>Sent: Sunday, September 07, 2003 6:45 PM
>Subject: Re: Code breakers crack GSM cellphone encryption
>

>It's nice to see someone 'discovering' what Lucky Green already figured-out
>years ago.  I wonder if they'll cut him a check.

I think this is different however.  The recent attack focused on the A5/3
encryption algorithm, while the work of Lucky, Briceno, Goldberg, Wagner,
Biryukov, Shamir (and others?) was on A5/1 and A5/2 (and other crypto
algorithms of GSM, such as COMP128, ...).

--Anton



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: cryptographic ergodic sequence generators?

2003-09-08 Thread Anton Stiglic
> [...]
> The Yarrow RNG uses counter-mode as a PRNG.  However in the paper they
> describe some effects you may want to avoid by re-keying depending on
> your application as the stream becomes distinguishable from random
> output.
>
> Adam

This is essentially because if your output sequence of n-bit blocks were
really random, you would expect to see a collision between two n-bit blocks
after seeing about 2^(n/2) block outputs (birthday paradox), but using a block
cipher with a counter gives you no collision before 2^n block outputs.  This
is indeed why in the Yarrow design they suggest re-keying after 2^(n/3) block
outputs.
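
A toy illustration of why (my own sketch, with 16-bit "blocks" so the
collision actually shows up): a block cipher is a permutation, so its
counter-mode outputs never collide before the counter wraps, while truly
random blocks collide around the birthday bound.

import secrets

n = 16
domain = list(range(2 ** n))
secrets.SystemRandom().shuffle(domain)   # stand-in for a keyed n-bit permutation

def first_collision(blocks):
    seen = set()
    for i, b in enumerate(blocks):
        if b in seen:
            return i
        seen.add(b)
    return None

ctr_outputs = (domain[counter] for counter in range(2 ** n))
rand_outputs = (secrets.randbelow(2 ** n) for _ in range(2 ** n))

print("counter mode, first collision:", first_collision(ctr_outputs))    # None
print("random blocks, first collision:", first_collision(rand_outputs))  # around 2^(n/2)
print("Yarrow-style re-key point, 2^(n/3) blocks:", 2 ** (n // 3))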

--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: OpenSSL *source* to get FIPS 140-2 Level 1 certification

2003-09-05 Thread Anton Stiglic
> On Fri, Sep 05, 2003 at 01:32:21PM -0400, Anton Stiglic wrote:
> > If I'm not mistaken, this would be the first free,
> > open-source, crypto library that has FIPS 140 module certification!  
> 
> I believe that this is incorrect.  
> 
> The two open-source projects that I'm aware of that have FIPS 140 certs
> are The Crypto++ Library, (cert 343, issued today) and The Mozilla
> project's NSS, which was certified by SUN under FIPS 140-1, levels 1
> and 2.  (certs 247 and 248).

You are correct, I just saw Crypto++ in the list of FIPS 140 validated 
modules:
http://csrc.nist.gov/cryptval/140-1/140val-all.htm
It is the latest entry, added today.
Congratulations to Wei Dai!

I was not aware of NSS before; there might be others as well that I 
am not aware of.

OpenSSL's *source code* being evaluated remains exciting.

Thanks for the information Joshua and Rich!


--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: OpenSSL *source* to get FIPS 140-2 Level 1 certification

2003-09-05 Thread Anton Stiglic
Really exciting news.  If I'm not mistaken, this would be the first free,
open-source crypto library that has FIPS 140 module certification!  Other free
open-source libraries have algorithms that have been FIPS 140 certified, but
the whole module hasn't been certified (for example Cryptlib and Crypto++).

And the OpenSSL crypto module runs on all kinds of platforms.  Really nice!

--Anton


- Original Message - 
From: "Rich Salz" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, September 05, 2003 10:50 AM
Subject: OpenSSL *source* to get FIPS 140-2 Level 1 certification


> This is termendously exciting.  For the first time ever, NIST will be
> certifying a FIPS 140 implementation based on the source code.  As long
> as the "pedigree" of the source is tracked, and checked at run-time,
> then applications can claim FIPS certification.
>
> For details:
>
http://groups.google.com/groups?dq=&hl=en&lr=&ie=UTF-8&threadm=bj9mos%242tbt%241%40FreeBSD.csie.NCTU.edu.tw&prev=/groups%3Fgroup%3Dmailing.openssl.users
>
> /r$
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: PRNG design document?

2003-09-02 Thread Anton Stiglic
> Allow me to clarify my problem a little. I'm commonly engaged to review
> source code for a security audit, some such programs include a random
> number generator, many of which are of ad-hoc design. The nature of such
> audits is that it's much more appealing to be able to say "here are three
> accepted guidelines that your generator violates" rather than "I haven't
> seen that before and I don't like it, you should replace it with something
> else".

Here are two references that might also be helpful:

http://www.cryptography.com/resources/whitepapers/VIA_rng.pdf
http://www.cryptography.com/resources/whitepapers/IntelRNG.pdf

These are reports on the analysis of two RNGs; I found them well written.

--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: PRNG design document?

2003-09-02 Thread Anton Stiglic

- Original Message - 
From: "Thor Lancelot Simon" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, August 29, 2003 3:45 PM
Subject: Re: PRNG design document?


> On Fri, Aug 29, 2003 at 11:27:41AM +0100, Ben Laurie wrote:
> > >
> > > As you mentioned, the FIPS-140-2 approved PRNG
> > > are deterministic, they take a random seed and extend it
> > > to more random bytes.  But FIPS-140-2 has no
> > > provision for generating the seed in the first place,
> > > this is where something like Yarrow or the cryptlib
> > > RNG come in handy.
> >
> > Actually, FIPS-140 _does_ have provision for seeding, at least for X9.17
> > (you use the time :-), but not for keying.
>
> I think there's some confusion of terminology here.  A "time", Ti for each
> iteration of the algorithm, is one of the inputs to the X9.17 generator
> (otherwise, you might as well just use DES/3DES in any chaining or
feedback
> mode, for all practical purposes).

Right.  So I don't actually have the original ANSI X9.17 document (and it is
no longer available in the ANSI X9 catalogue).  My references are
HAC section 5.3.1
http://www.cacr.math.uwaterloo.ca/hac/about/chap5.pdf
and Kelsey, Schneier, Wagner and Hall's paper
http://www.counterpane.com/pseudorandom_number.pdf

In both of the above references, the ANSI X9.17 PRNG is described as taking
a 64-bit seed s along with a DES E-D-E encryption key k.
The encrypted time is XORed with the seed and this result is encrypted to
obtain the output; the seed is updated by encrypting the last output XORed
with the encrypted time.
So there is the possibility of re-keying (the key that is used for the
encryption) and re-seeding (explicitly, not relying on the self-re-seeding...).

It is important to choose both a random seed and a random key, and FIPS 140
has no provision for this.
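
For concreteness, here is a minimal sketch of that generator (Python, using
PyCryptodome's DES3 as my stand-in for the E-D-E primitive; not a conforming
implementation).  The point of the last remark shows up in the constructor:
key and seed still have to come from a real entropy source, and that step is
exactly what FIPS 140 leaves unspecified.

import os
import struct
import time
from Crypto.Cipher import DES3   # PyCryptodome

class AnsiX917PRNG:
    """Sketch of the ANSI X9.17 generator as described in the references above."""

    def __init__(self, key: bytes, seed: bytes):
        # Both the 3DES key and the 64-bit seed must be chosen at random.
        assert len(seed) == 8
        self._ede = DES3.new(key, DES3.MODE_ECB)
        self._seed = seed

    @staticmethod
    def _xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def next_block(self) -> bytes:
        t = self._ede.encrypt(struct.pack(">Q", time.time_ns()))   # encrypted time
        out = self._ede.encrypt(self._xor(t, self._seed))          # output block
        self._seed = self._ede.encrypt(self._xor(t, out))          # updated seed
        return out

prng = AnsiX917PRNG(key=DES3.adjust_key_parity(os.urandom(24)), seed=os.urandom(8))
print(prng.next_block().hex())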

--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: PRNG design document?

2003-08-27 Thread Anton Stiglic

- Original Message - 
From: "Bob Baldwin PlusFive" <[EMAIL PROTECTED]>
To: "Tim Dierks" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, August 22, 2003 1:00 PM
Subject: Re: PRNG design document?


> Tim,
>  One issue to consider is whether the system
> that includes the PRNG will ever need a FIPS-140-2
> rating.  
> [...]

As you mentioned, the FIPS-140-2 approved PRNGs 
are deterministic: they take a random seed and extend it
to more random bytes.  But FIPS-140-2 has no 
provision for generating the seed in the first place; 
this is where something like Yarrow or the cryptlib
RNG comes in handy.

So if you want FIPS-140-2 compliance, generate a 
seed using something based on Yarrow or the cryptlib RNG
(or if you have a good hardware RNG, use that to 
generate the seed), and then apply a FIPS-approved
PRNG to the seed.
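
Something like this two-stage pattern is what I mean (a sketch only:
os.urandom stands in for the Yarrow/cryptlib-style collector or a hardware
RNG, and SHA-256 in counter mode stands in for whichever FIPS-approved
deterministic PRNG you actually pick).

import hashlib
import os

# Stage 1: gather an unpredictable seed (entropy collection, outside FIPS-140-2).
seed = os.urandom(32)

# Stage 2: deterministically extend the seed (this is the part a FIPS-approved
# PRNG would do; the hash-counter construction below is only a placeholder).
def extend(seed: bytes, n_blocks: int):
    for counter in range(n_blocks):
        yield hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()

random_bytes = b"".join(extend(seed, 4))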

NIST should really approve something like Yarrow 
or Peter Gutmann's design...

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: PRNG design document?

2003-08-25 Thread Anton Stiglic
> "Software Generation of Practically Strong Random Numbers" by Peter
> Gutmann
> 
> http://www.cs.auckland.ac.nz/~pgut001/pubs/usenix98.pdf
> 
> and the followup:
> 
> http://www.cypherpunks.to/~peter/06_random.pdf
> 
> David

That's a good reference on PRNGs.  There is also the work on Yarrow, 
http://www.counterpane.com/yarrow-notes.html

The best implementation of Yarrow that I know of is
http://opensource.zeroknowledge.com/yarrow/
the implementation available on Counterpane's site is of an old version of
Yarrow, and had some security bugs (I don't know if they were fixed...).

Also interesting is David Wagner's collection of links to stuff about
randomness:

http://www.cs.berkeley.edu/~daw/rnd/

--Anton



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Looking for an N -out-of-M split algorithm

2003-07-16 Thread Anton Stiglic

> Does anyone have any idea where I might learn about this algorithm - or
> indeed any algorithm which does the job.

Just as Perry mentioned, look into Shamir Secret Sharing.
There are also implementations of this; see for example
http://www.astro.gla.ac.uk/users/norman/distrib/tontine.html
(I'm not certain if I ever used that one in particular, so I don't know if
it's good, but I'll let you do the research...).
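
The scheme itself is simple enough to sketch (toy Python, my illustration; the
field prime is just one I picked): the secret is the constant term of a random
degree k-1 polynomial over a prime field, each share is a point on the
polynomial, and any k shares recover the secret by Lagrange interpolation at
zero.

import secrets

P = 2 ** 127 - 1   # a Mersenne prime, large enough for a 16-byte secret

def split(secret: int, k: int, m: int):
    # f(0) = secret; the other k-1 coefficients are random.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, m + 1)]

def recover(shares):
    # Lagrange interpolation evaluated at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P   # modular inverse (Python 3.8+)
    return secret

shares = split(secret=123456789, k=3, m=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice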

--Anton


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: replay & integrity

2003-07-09 Thread Anton Stiglic
> Integrity:  Financial protocols that use crypto
> (as opposed to ones abused by crypto) generally
> include signed messages.  The signature provides
> for its own integrity, as well as a few other
> things.

I don't believe that is enough.  Take for example
the SSL 2.0 ciphersuite rollback vulnerability or the 
SSL 3.0 key-exchange algorithm vulnerability.  Any kind
of rollback attack is serious, and won't be protected
by signatures in the bulk data (and those signatures might
be weakened by forcing a rollback to a possibly weaker
version/implementation).

> 
> Replay:  One of the commonest problems in HTTPS
> sites is replay failure.  The solution is well
> known out in the real world - you have to have
> replay prevention at the higher layers.
> 
> (Credit card processors have replay prevention
> too!)
> 
> So, some protocols don't need replay prevention
> from lower layers because they have sufficient
> checks built in.  This would apply to any protocols
> that have financial significance;  in general, no
> protocol should be without its own unique Ids.

So maybe I can't replay a complete financial transaction, 
because at some higher layer there is replay prevention,
but what about replaying some protocol initialization request?
Is that not annoying?  Would a bank not care that 
its ATMs are not working for a day because someone
is executing a DoS attack on the lower layers of the 
protocols of its system?  I think not; you need replay 
protection at both levels. 

How can a secure socket be dubbed secure if it doesn't
protect against these basic attacks?

To quote from Wagner and Schneier's paper, Analysis
of the SSL 3.0 protocol:

"Replay attacks are a legitimate concern, and as they are
so easy to protect against, it would be irresponsible to fail
to address these threats."

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Fwd: [IP] A Simpler, More Personal Key to Protect Online Messages

2003-07-09 Thread Anton Stiglic

- Original Message - 
From: "Whyte, William" <[EMAIL PROTECTED]>

[...]
> But you don't have to contact the CA to get someone's certificate.
> A standard way is to send them an email saying "can you send me
> a signed message?"

Yes, that works.  When I want someone to send me confidential
email, and that someone doesn't know much or anything about crypto, 
I usually just send an S/MIME signed email, and let his MUA (usually
Outlook or Outlook Express) do the work of saving the certificate
and all.

> This also ensures you have the right public key. I haven't
> studied the details of IBE, but I assume that (a) there may
> be multiple IBE-based "CA"s, with different parameters, and

The way I see IBE being useful is as a corporate solution for
encrypting messages.  Inside a corporation everyone will use 
the same public parameters (which could probably come with 
the software installation).  And in most corporate crypto solutions
you want key escrow, which IBE gives you as well for free :)
The benefit is that you don't need to deal with users' public keys:
you don't need to get them from some repository or ask the
person to send them to you by email and such.  So say that you are
away with your laptop, and don't have the person's public key
certificate, you can still send him/her email directly (without asking
anyone to send you his/her public key).  I admit the feature is
of limited value however.

> (b) the identity that's used to encrypt will be not just a 
> name, but a name and a date (to ensure that some revocation-like
> capability exists). In either case, you can't simply pick the
> email address and use it as the public key; you need to establish
> some additional information first. This seems to put us back 
> in the same place as with standard PKI, usability-wise. (Or,
> rather, there may be a usability delta for IBE, but it's very
> small).

In the Boneh-Franklin paper one suggestion is to use
[EMAIL PROTECTED] || current-year
which would make public keys good for one year (which sounds
reasonable, especially within a corporation).  Of course, the software 
will include the year when creating the public key; the users wouldn't 
need to do it explicitly.  If you really want to be able to revoke
public keys, you need more granularity and use something like
[EMAIL PROTECTED] || current-date, and that does become
annoying for the users (you need to fetch your private key every day).

One interesting thing about IBE is that you can transform any such 
scheme into a Non-interactive forward-secret cryptosystem as
Adam Back pointed out:
http://www.cypherspace.org/~adam/nifs/
(his web server might be down, but you can look at the cached version
on Google...).

--Anton


 


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: pubkeys for p and g

2003-06-26 Thread Anton Stiglic
I'm not certain I understand your questions, but here are some answers (I
think).
In the DH protocol you have what we call public parameters, p and g.
p is a large prime integer, which defines the group Z*p; g is a generator
which defines a subgroup of Z*p.
You can use fixed values for p and g.
Now, participants will choose private and public keys.  The private key
is simply chosen as a random number x, whose value is between 1 and
p-1.  The public key associated to x will be y = g^x mod p.
Participants keep x secret and y is public.
You can say that (y, g, p) is the public key, or simply say that y is the
public key if g and p (the public parameters) are implicitly known.
Participants can choose a different x and associated y on each execution
of the protocol, or have long-term private/public key pairs.
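
In toy numbers (my illustration; real p and g would come from one of the
standardized DH groups you mention):

import secrets

p, g = 23, 5                          # toy public parameters, not secure

x = secrets.randbelow(p - 2) + 1      # private key: random in [1, p-1]
y = pow(g, x, p)                      # public key: y = g^x mod p

# The other participant does the same with their own x2, y2; both sides then
# arrive at the same shared value g^(x*x2) mod p.
x2 = secrets.randbelow(p - 2) + 1
y2 = pow(g, x2, p)
assert pow(y2, x, p) == pow(y, x2, p)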

--Anton


>The Check Point Firewall-1 Docs insist, that the public keys be used
>for p and g for the Oakley key exchange. I ask you: is this
>possible?
>
>  - which of the two pubkeys will be p, which g?
>  - are they both always primes?
>  - are they both always suitable generators mod p?
>
>It just seems to me that Check Point isn't entirely sure themselves
>here. I'd appreciate a short cleanup...
>
>To my knowledge, g and p are globally defined, either in DH Groups
>(which are nothing but pre-defined g's and p's, right?), or
>otherwise set constant. Am I wrong about this?

Thanks.


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Security of DH key exchange

2003-06-20 Thread Anton Stiglic

- Original Message - 
From: "Jaap-Henk Hoepman" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, June 20, 2003 5:02 AM
Subject: Security of DH key exchange


>
> In practice the following method of exchanging keys using DH is used, to
ensure
> bit security of the resulting session key. If alice and bob exchange g^a
and
> g^b, the session key is defined as h(g^{ab}). This is mentioned in many
> textbooks, but i can't find a reference to a paper discussing the security
of
> this in the following sense. If g^a etc. are computed over a field F of
order
> p, and h hashes F to {0,1}^n, under which conditions is h(g^{ab}) given
g^a and
> g^b indistinguishable from a randomly selected session key k? (where
> indistinguishable would mean that the advantage of the adversary of
> distinguishing h(g^{ab}) from k is negligible in _n_).

I don't know of any references that explain this explicitly, but the
reasoning is simple: you model h as a random oracle, which would imply that
if the min-entropy of g^(ab) is at least n bits, then h(g^{ab}) will be
indistinguishable from a value chosen randomly from the set of n-bit strings.

For information on general about DH, you can look at the following
manuscript:
http://crypto.cs.mcgill.ca/~stiglic/Papers/dhfull.pdf
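
In sketch form (toy parameters, SHA-256 standing in for h; my illustration
only), the construction under discussion is just:

import hashlib
import secrets

p, g = 23, 5                                     # toy public parameters
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
ga, gb = pow(g, a, p), pow(g, b, p)              # the exchanged values

def h(shared: int, n_bytes: int = 16) -> bytes:
    # h modelled as a random oracle: the session key is the hash of g^(ab).
    return hashlib.sha256(shared.to_bytes(32, "big")).digest()[:n_bytes]

assert h(pow(gb, a, p)) == h(pow(ga, b, p))      # both sides derive the same key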

--Anton



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: "PGP Encryption Proves Powerful"

2003-05-31 Thread Anton Stiglic

So what happened to passphrase guessing?  That's got to be
one of the weakest links.  Unless their private key wasn't
stored on the device?

--Anton

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]