Re: Against Rekeying

2010-03-25 Thread Joseph Ashwood

--
From: Perry E. Metzger pe...@piermont.com
Subject: Against Rekeying


I'd be interested in hearing what people think on the topic. I'm a bit
skeptical of his position, partially because I think we have too little
experience with real world attacks on cryptographic protocols, but I'm
fairly open-minded at this point.


Typically, rekeying is unnecessary, but sooner or later you'll find a 
situation where it is critical. The claim everything hinges on is that 2^68 
bytes cannot be processed under one key in a useful period of time; that is 
not always correct.


Cisco recently announced the CRS-3 router, a single router that does 322 
Tbits/sec, that's 40.25 TBytes/sec: only about 7 million seconds to exhaust 
the entire 2^68 bytes. That is still fairly large, but stay around the 
industry long enough and you'll see a big cluster where the nodes communicate 
as fast as possible, all with the same key. I've seen clusters of up to 100 
servers at a time, so in theory it could be just 70,000 seconds, not even a 
full day. It's also worth keeping in mind that the bandwidth drain of an 
organization as compute-intensive as Google or Facebook will easily exceed 
even these limits.
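
A quick back-of-envelope check of those figures (the 322 Tbit/s and 100-node 
numbers are the ones stated above; the class name is just illustrative):

public final class RekeyMath {
    public static void main(String[] args) {
        double bytesPerSec = 322e12 / 8;                 // CRS-3: 322 Tbit/s = 40.25 TByte/s
        double limit = Math.pow(2, 68);                  // 2^68 bytes under a single key
        System.out.printf("one router:   %.2e seconds%n", limit / bytesPerSec);          // ~7.3e6 s
        System.out.printf("100-node run: %.2e seconds%n", limit / (100 * bytesPerSec));  // ~7.3e4 s
    }
}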


Certainly an argument can be made that the protocol used is wrong, but this 
kind of protocol gets used all too frequently, and since it is usually used 
for high availability (one of the primary reasons for clustering), the need 
to rekey becomes all too real.


So there are times when rekeying is a very necessary requirement. I prefer 
a protocol reboot process instead of an in-protocol rekey, but sometimes you 
have to do what you have to do. Rekeying probably should never have been 
implemented in something like SSL/TLS or SSH, and even for IPsec it is 
arguable, but extreme environments require extreme solutions.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Detecting attempts to decrypt with incorrect secret key in OWASP ESAPI

2009-09-18 Thread Joseph Ashwood

--
From: Kevin W. Wall kevin.w.w...@gmail.com
Subject: Re: Detecting attempts to decrypt with incorrect secret key in 
OWASP ESAPI



So given these limited choices, what are the best options to the
questions I posed in my original post yesterday? As Peter mentioned, we
want to give web app developers something that will work out-of-the-box.


It isn't difficult to implement CMAC and CTR modes in pure Java. The NIST 
specs for CMAC and CTR are plenty clear. You'll be looking for the 
AES/ECB/NoPadding option; from there use update(), which returns a byte[]. 
I've used the standard JCE implementation in this way to implement 
unsupported modes before, and it works.
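
A minimal sketch of the CTR half of that, assuming the stock SunJCE provider; 
doFinal is used per block for clarity (update works the same way for 
block-aligned input, as noted above), and the class and method names are just 
illustrative:

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public final class CtrSketch {
    // CTR mode built on the AES/ECB/NoPadding primitive: out = data XOR E_K(counter).
    public static byte[] ctr(byte[] key, byte[] nonce8, byte[] data) throws Exception {
        Cipher ecb = Cipher.getInstance("AES/ECB/NoPadding");
        ecb.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));

        byte[] out = new byte[data.length];
        byte[] counter = new byte[16];
        System.arraycopy(nonce8, 0, counter, 0, 8);        // nonce in the high half, counter in the low half

        for (int off = 0; off < data.length; off += 16) {
            byte[] keystream = ecb.doFinal(counter);       // E_K(counter block)
            int n = Math.min(16, data.length - off);
            for (int i = 0; i < n; i++) out[off + i] = (byte) (data[off + i] ^ keystream[i]);
            for (int i = 15; i >= 8 && ++counter[i] == 0; i--) { }   // increment low 64 bits
        }
        return out;
    }
}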
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Detecting attempts to decrypt with incorrect secret key in OWASP ESAPI

2009-09-16 Thread Joseph Ashwood

--
From: Kevin W. Wall kevin.w.w...@gmail.com
Subject: Detecting attempts to decrypt with incorrect secret key in OWASP 
ESAPI




The new default for the new encryption / decryption methods is to be
128-bit AES/CBC/PKCS5Padding and use of a random IV.


That's a good solution, except for the missing MAC. Considering the 
environment, I would suggest hard-coding it; there's really no reason to 
offer options.




MIC = HMAC-SHA1( nonce, IV + secretKey )
or
MIC = HMAC-SHA1( nonce, IV + plaintext )
MIC = HMAC-SHA1( IV, plaintext )
So, having provided all of that background, in summary, here are my
three questions:
   1) Is using either of these MIC calculations cryptographically secure?
   2) If the answer to #1 is 'yes', which one is safer / more secure?
   3) If the answer to #1 is 'no', do you have any suggestions less
  computationally expensive than digital signatures that would
  allow us to detect attempts to decrypt with the incorrect secret
  key and/or an adversary attempting to alter the IV prior to
  decryption?


You are looking at the correct type of construct; the solution for your 
problem is known as a Message Authentication Code, or MAC. The biggest 
concerns are what you are MACing, and that it must use a second key. There 
are a number of solutions.


The first part of the solution is to store an additional key: you're already 
storing 128 bits (hopefully securely), so store 256 and split it. Trust me, 
you'll thank yourself when the security issues are investigated. The second 
key is for the MAC algorithm.


Since you already have CBC available, my first suggestion would be CBC-MAC 
(IV = all zeros, PKCS5 padding works fine, MAC = final block of ciphertext); 
it has good, strong security proofs behind it, and is fast. Apply the MAC 
algorithm, prepend the MAC value to the plaintext (just because indexing is 
easier this way), then encrypt to ciphertext and store. To decrypt: retrieve, 
decrypt the ciphertext, parse out the MAC value and the plaintext, run the 
MAC algorithm on the plaintext to find MAC2, and check MAC == MAC2. This will 
give you a false-positive rate of 1/2^128, somewhere on the order of 
3x10^-37 %; you are more likely to have the system burst into flame than see 
a false positive on this. Overall, this will reduce your speed to 50% of the 
current.
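
In JCE terms that construction looks roughly like the sketch below (a sketch 
only, names illustrative; note also the caveat later in this thread that 
plain CBC-MAC needs care with variable-length messages, with CMAC being the 
cleaner choice):

import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public final class CbcMacSketch {
    // CBC-MAC as described above: zero IV, PKCS5 padding, MAC = final ciphertext block.
    public static byte[] cbcMac(byte[] macKey, byte[] message) throws Exception {
        Cipher cbc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cbc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(macKey, "AES"),
                 new IvParameterSpec(new byte[16]));                   // IV = all zeros
        byte[] ct = cbc.doFinal(message);
        return Arrays.copyOfRange(ct, ct.length - 16, ct.length);      // last block is the MAC
    }
}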


You might also want to take a look at Poly1305; it is faster.

As you noted, HMAC is also available, but I would recommend against using 
SHA-1 for anything; it simply raises too many questions that will take too 
long to answer. The more secure hashes available are simply not as fast 
unless you have bare-metal coding ability, which Java really doesn't give 
you.


The other, and arguably better, option would be a combined mode. The good 
combined modes are a legal minefield, so using them for an open project is 
difficult. It is claimed that GCM and EAX are public domain, but I'm more 
inclined to recommend the conservative approach and avoid them.


While cryptanalysts have raised no concerns about CBC with CBC-MAC, to many 
laypeople it doesn't look good. So, for purely political reasons, I'm 
recommending the shift to CCM mode. At the same time I recommend moving to a 
counter IV instead of a random IV; in CTR mode what matters is that an IV is 
never reused, and randomness is irrelevant. This will give you a thoroughly 
analyzed standard mode of operation.
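
A minimal sketch of the counter IV idea, assuming a per-key 64-bit counter 
(the 16-byte layout and field split here are illustrative assumptions):

import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

public final class CounterIv {
    private final AtomicLong counter = new AtomicLong();   // one counter per key

    public byte[] next() {
        ByteBuffer iv = ByteBuffer.allocate(16);
        iv.putLong(0L);                                     // fixed field, e.g. a sender/context id
        iv.putLong(counter.incrementAndGet());              // strictly increasing, never repeats under this key
        return iv.array();
    }
}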
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Detecting attempts to decrypt with incorrect secret key in OWASP ESAPI

2009-09-16 Thread Joseph Ashwood



--
From: David Wagner d...@cs.berkeley.edu
Sent: Wednesday, September 16, 2009 5:19 PM
To: cryptography@metzdowd.com
Subject: Re: Detecting attempts to decrypt with incorrect secret key in 
OWASP ESAPI



I don't exactly follow the argument for using CCM mode instead
AES-CBC encryption followed by AES-CMAC, and I'm not familiar with
the political/perception arguments (who complains about the latter?),
but whatever.


I've actually had a few clients ask for a more detailed explanation of why 
it is OK, so some people do get confused.



It's hardly worth arguing over.  The cryptographic mode
of operation is unlikely to be the weakest link in your system, and the
security differences between CCM mode vs AES-CBC + AES-CMAC seem minor,
so it doesn't seem worth worrying too much about it: CCM mode seems good
enough.  I'm not sure I'm familiar with the arguments against EAX mode
(full disclosure: I'm a co-author on the EAX paper and hence probably
biased), but again, whatever.


Actually I think EAX is great, and if I had known you were replying while I 
was writing mine I wouldn't have replied at all. My problem is that I haven't 
taken the time to look over the patents on bordering technologies to see 
whether I believe it is patent-safe. Lately I've been dealing with a lot of 
patent weirdness, so I'm more aware than usual of patent issues.



ObNitpick:

Joseph Ashwood wrote:
Since you already have CBC available, my first suggestion would be CBC-MAC 
(IV = all zeros, PKCS5 padding works fine, MAC = final block of ciphertext); 
it has good, strong security proofs behind it, and is fast. [...]


Are you sure?  For vanilla CBC-MAC, the security proofs don't apply to
variable-length messages, and I recall that there are known attacks on
vanilla CBC-MAC when message lengths can vary (I'm not claiming those
attacks are necessarily realistic in all applications, but they may be).
AES-CMAC is a nice design that addresses this problem.  CMAC is based
upon CBC-MAC, but addresses the imperfections of vanilla CBC-MAC.


I could try to justify my position, but honestly, CMAC really doesn't have 
any real downsides, and the proof is tighter.
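
For what it's worth, AES-CMAC is only a few lines if the Bouncy Castle 
lightweight API is acceptable to the project (an assumption on my part; the 
class names below are Bouncy Castle's, the rest is illustrative):

import org.bouncycastle.crypto.engines.AESEngine;
import org.bouncycastle.crypto.macs.CMac;
import org.bouncycastle.crypto.params.KeyParameter;

public final class CmacSketch {
    public static byte[] aesCmac(byte[] key, byte[] message) {
        CMac mac = new CMac(new AESEngine());        // CMAC over AES
        mac.init(new KeyParameter(key));
        mac.update(message, 0, message.length);
        byte[] out = new byte[mac.getMacSize()];     // 16 bytes for AES
        mac.doFinal(out, 0);
        return out;
    }
}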


(I moved this down here)


These three choices are all good enough and
the security differences between them seem minor.  In my view, choosing
any of the three would be a reasonable choice.  Just my personal opinion.


As opinions go, it's hard to find a better source than David Wagner.
   Joe

BTW: Anyone looking to make a venture capital investment? 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Attacks against GOST? Was: Protocol Construction

2009-08-06 Thread Joseph Ashwood

My apologies for the delay, I had forgotten the draft message.

--
From: Alexander Klimov alser...@inbox.ru
Subject: Attacks against GOST? Was: Protocol Construction


On Sun, 2 Aug 2009, Joseph Ashwood wrote:

 So far, evidence supports the idea that the stereotypical Soviet
 tendency to overdesign might have been a better plan after all,
 because the paranoia about future discoveries and breaks that
 motivated that overdesign is being regularly proven out.

And that is why Kelsey found an attack on GOST


Do you want to say that the GOST (28147-89) block cipher is broken? I
have never heard of an attack against it that is faster than the
exhaustive search.


I just said there are attacks; the situation is open to interpretation 
because of the nature of the attacks and the unknown S-box. Kelsey and 
Schneier published the first related-key attack in 1996, and in 1997 Kelsey 
enhanced the attack. My point was that the proposed method of boosting 
security (increased key size and rounds) does not necessarily correlate with 
increased security, and since GOST was given as an example of how to do it 
right, the attacks by Kelsey et al. mattered.



By the way, it was not overdesign (IMO it is simpler even than DES),
nor it was an example of the stereotypical Soviet... According to an
informed source [1], it was specifically made to be not like military
ciphers:  its only purpose was to make something for non-military
cryptography that would not betray any Soviet cryptographic know-how
(this is why they chose to do something very similar to DES).


Good to know, I didn't remember that part.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: AES, RC4

2009-08-02 Thread Joseph Ashwood

-
From: PETER SCHWEITZER pe...@infosecsys.com
Subject: AES, RC4

Referring to your note of August 1: I haven't found anything about 
breaking RC4 if used with a newly randomly generated key (unrelated to 
any others) for every communication session. I would appreciate being 
enlightened!


If a completely unrelated new key is used, and the key has sufficient 
entropy, and it isn't used for too long, and the entropy of the key is 
fairly smoothly distributed, and the first several bytes of output are 
discarded, and I'm probably missing a couple of requirements, then RC4 is 
reasonably secure. On the other hand, to use AES-128 in CTR mode the key 
merely requires sufficient entropy. That is the difference; in particular, 
making sure the RC4 keys are truly unrelated is continually difficult.
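
For illustration only (not an endorsement), here is roughly what that list of 
preconditions means in practice with the stock JCE ARCFOUR cipher; the 
16-byte key and 768-byte drop count are assumptions, not standards:

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public final class Rc4DropN {
    public static Cipher freshRc4Session() throws Exception {
        byte[] key = new byte[16];
        new SecureRandom().nextBytes(key);                    // fresh, unrelated key per session
        Cipher rc4 = Cipher.getInstance("ARCFOUR");
        rc4.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "ARCFOUR"));
        rc4.update(new byte[768]);                            // discard the first keystream bytes
        return rc4;                                           // use for a limited amount of traffic only
    }
}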


Is your partly negative recommendation for AES' ...for most new  protocol 
purposes to do with the recent related-key attack? Which I  would 
certainly agree is very disquieting, even though, as you say, it  has no 
current negative consequences.


The last few weeks have not been kind to AES-256: a couple of new attacks, 
the related-key attack on the full structure, and the more recent significant 
erosion in other areas. Like I said, it is not enough to force an immediate 
retirement, and AES-256 remains functionally secure, but the argument for 
using it is getting more difficult; AES-256 seems to be no more secure than 
AES-128, and it is slower.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-08-02 Thread Joseph Ashwood

--
From: James A. Donald jam...@echeque.com
Subject: Re: Fast MAC algorithms?


Joseph Ashwood wrote:

RC-4 is broken when used as intended.

...

If you take these into consideration, can it be used correctly?


James A. Donald:

Hence tricky


Joseph Ashwood wrote:

By the same argument a Vigenere cipher is tricky to use securely, and the
same goes for monoalphabetic ciphers and even Caesar. Not that RC4 is
anywhere near the brokenness of Vigenere, etc., but the same argument can be
applied, so the argument is flawed.


You cannot use a Viginere cipher securely. You can use an RC4 cipher
securely:  To use RC4 securely discard the first hundred bytes of output,
and renegotiate the key every gigabyte.


The way to use a Vigenere securely is to apply an All-Or-Nothing Transform 
to the plaintext and then encrypt; this gives the system an attacker-visible 
entropy in excess of the message size, and therefore effectively a OTP. There 
are other ways, but this method is not significantly more complex than the 
efforts necessary to secure RC4, and it results in provable secrecy. It is 
just tricky to use a Vigenere securely.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Protocol Construction WAS Re: Fast MAC algorithms?

2009-08-02 Thread Joseph Ashwood

--
From: Ray Dillinger b...@sonic.net
Subject: Re: Fast MAC algorithms?


I mean, I get it that crypto is rarely the weakest link in a secured
application.  Still, why are folk always designing and adopting
cryptographic tools for the next decade or so instead of for the
next few centuries?


Because we have no idea how to do that. If you had asked 6 months ago we 
would've said AES-256 will last at least a decade, probably 50 years. A few 
years before that we were saying that SHA-1 is a great cryptographic hash. 
Running the math a few years ago, I determined that given the trajectory of 
cryptographic research it would be necessary to create a well over 1024-bit 
hash, with behaviors that are perfect by today's knowledge, just to last a 
human lifetime. Since then the trajectory has changed significantly, and the 
same exercise today would probably result in 2000+ bits; extrapolating the 
trajectory of the trajectory, the size would be entirely unacceptable. So, in 
short, collectively we have no idea how to make something secure for that 
long.



So far, evidence supports the idea that the stereotypical Soviet
tendency to overdesign might have been a better plan after all,
because the paranoia about future discoveries and breaks that motivated
that overdesign is being regularly proven out.


And that is why Kelsey found an attack on GOST, and why there is a class of 
weak keys. That is the problem: all future attacks are, rather by definition, 
a surprise.



This is fundamental infrastructure now!  Crypto decisions now
support the very roots of the world's data, and the cost of altering
and reversing them grows ever larger.


By scheduling likely times for upgrades, the costs can be assessed better and 
scheduled better, and it works far better for business than the "OH ..., OUR 
... IS BROKEN" experience that always results from trying to plan for longer 
than a few years at a time. It is far cheaper to build within the available 
knowledge and design for a few years.




If you can deploy something once, even something that uses three
times as many rounds or key bits as you think now that you need,


Neither of those is a strong indicator of security. AES makes a great 
example: AES-256 has more rounds than AES-128, AES-256 has twice as many key 
bits as AES-128, and yet AES-256 has more attacks against it than AES-128. An 
increasing number of attack types are insensitive to the number of rounds, 
and key bits have rarely been the real issue.


There is no way to predict the far future of cryptography; it is hard enough 
to predict the reasonably near future.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-08-01 Thread Joseph Ashwood

--
From: James A. Donald jam...@echeque.com
Subject: Re: Fast MAC algorithms?


james hughes wrote:


On Jul 27, 2009, at 4:50 AM, James A. Donald wrote:
No one can break arcfour used correctly - unfortunately, it is tricky to 
use it correctly.


RC-4 is broken when used as intended.

...

If you take these into consideration, can it be used correctly?


Hence tricky


By the same argument a Vigenere cipher is tricky to use securely, and the 
same goes for monoalphabetic ciphers and even Caesar. Not that RC4 is 
anywhere near the brokenness of Vigenere, etc., but the same argument can be 
applied, so the argument is flawed.


The question is: what level of heroic effort is acceptable before a cipher 
is considered broken? Is AES-256 still secure? 3DES? Right now, to me, 
AES-256 seems to be about at the line: it doesn't take significant effort to 
use it securely, and the impact on the security of modern protocols is 
effectively zero, so it doesn't need to be retired, but I wouldn't recommend 
it for most new protocol purposes. RC4 takes excessive heroic efforts to 
avoid its problems, and even teams with highly skilled members have gotten 
it horribly wrong. Generally, using RC4 is foolish at best.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-07-23 Thread Joseph Ashwood

--
From: Nicolas Williams nicolas.willi...@sun.com
Sent: Tuesday, July 21, 2009 10:43 PM
Subject: Re: Fast MAC algorithms?


But that's not what I'm looking for here.  I'm looking for the fastest
MACs, with extreme security considerations (e.g., warning, warning!
must rekey every 10 minutes)


There's a reason everyone is ignoring that requirement: rekeying in any 
modern system is more or less trivial. As an example take AES: rekeying 
every 10 minutes will leave throughput at 99.999% of the original; there 
will be bigger differences depending on whether or not you move the mouse.
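
The arithmetic behind that claim, with an assumed (and very generous) 
per-rekey cost of one millisecond:

public final class RekeyOverhead {
    public static void main(String[] args) {
        double rekeyCost = 1e-3;     // assumed seconds per rekey (key schedule + bookkeeping), generous
        double interval  = 600;      // rekey every 10 minutes
        System.out.printf("throughput retained: %.5f%%%n", 100 * (1 - rekeyCost / interval)); // ~99.99983%
    }
}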



being possibly OK, depending on just how
extreme -- the sort of algorithm that one would not make REQUIRED to
implement, but which nonetheless one might use in some environments
simply because it's fast.


I would NEVER recommend it, and let me repeat that I would NEVER recommend 
it, but Panama is a higher-performing design, IIRC about 8x the speed of the 
good recommendations. But DON'T USE PANAMA. You wanted a bad recommendation; 
Panama is a bad recommendation.


If you want a good recommendation that is faster, use Poly1305-AES. You'll 
get some extra speed without compromising security.



For example, many people use arcfour in SSHv2 over AES because arcfour
is faster than AES.


I would argue that they use it because they are stupid. ARCFOUR should have 
been retired well over a decade ago: it is weak, it meets no reasonable 
security requirements, and in most situations it is not actually faster, due 
to the cache thrashing its large key expansion frequently induces.



In the crypto world one never designs weak-but-fast algorithms on
purpose, only strong-and-preferably-fast ones.  And when an algorithm is
successfully attacked it's usually deprecated,


The general preference is to permanently retire them. The better algorithms 
are generally at least as fast; that's part of the problem you seem to be 
having. You're treating secure as if it meant slow, when in fact everyone 
has worked very hard to make the secure options at least as fast as the 
insecure ones.



new
ones tend to be slower because resistance against new attacks tends to
require more computation.


New ones tend to be faster than the old.
New ones are designed with more recent CPUs in mind.
New ones are designed with the best available knowledge of how to build security.
New ones are simpler by design.
New ones make use of everything that has been learned.


I realized this would make my question seem a
bit pointless, but hoped I might get a surprising answer :(


I think the answer surprised you more than you expected. You had hoped for 
some long-forgotten, extremely fast algorithm; what you've instead learned is 
that the long-forgotten algorithms were not only abandoned for security 
reasons, but that they have been eclipsed on speed as well.


I've moved this to the end to finish on the point

The SSHv2 AES-based ciphers ought to be RTI and
default choice, IMO, but that doesn't mean arcfour should not be
available.


I very strongly disagree. One of the fundamental assumptions of creating 
secure protocols is that sooner or later someone will bet their life on your 
work. This isn't an idle overstatement; it is an observation.
How many people bet their life and lost because Twitter couldn't protect 
their information in Iran?

How many people bet their life's savings on SSL/TLS?
How many people trusted various options with their complete medical history?
How many people bet their life or freedom on the ability of PGP to protect 
them?


People bet their life on security all the time; it is part of the job to 
make sure that bet is safe.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-07-22 Thread Joseph Ashwood

--
From: Nicolas Williams nicolas.willi...@sun.com
Subject: Fast MAC algorithms?


Which MAC algorithms would you recommend?


I didn't see the primary requirement: you never give a speed target. 
OMAC-AES-128 should run at around 100 MB/sec, HMAC-SHA-512 about the same, 
HMAC-SHA1 about 150 MB/sec, and HMAC-MD5 about 250 MB/sec. I wouldn't 
recommend MD5, but in many situations it can be acceptable, and none of 
these make use of parallelism to achieve those speeds.
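
Those figures are easy enough to sanity-check for the HMACs with the stock 
JCE Mac class (OMAC isn't in the stock provider, so it isn't measured here); 
a rough sketch, not a proper benchmark harness:

import java.security.SecureRandom;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class MacSpeed {
    public static void main(String[] args) throws Exception {
        byte[] key = new byte[32];
        byte[] buf = new byte[1 << 20];                     // 1 MB per doFinal
        new SecureRandom().nextBytes(key);
        for (String alg : new String[] {"HmacMD5", "HmacSHA1", "HmacSHA512"}) {
            Mac mac = Mac.getInstance(alg);
            mac.init(new SecretKeySpec(key, alg));
            int reps = 256;                                 // 256 MB total per algorithm
            long start = System.nanoTime();
            for (int i = 0; i < reps; i++) mac.doFinal(buf);
            double secs = (System.nanoTime() - start) / 1e9;
            System.out.printf("%-11s %.0f MB/sec%n", alg, reps / secs);
        }
    }
}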
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: MD6 withdrawn from SHA-3 competition

2009-07-03 Thread Joseph Ashwood

--
Sent: Wednesday, July 01, 2009 4:05 PM
Subject: MD6 withdrawn from SHA-3 competition


Also from Bruce Schneier, a report that MD6 was withdrawn from the SHA-3
competition because of performance considerations.


I find this disappointing. Given the rate of destruction of primitives in any 
such competition, I would have liked to see them let it stay until it was 
either broken or at least through the second round. Take a quick glance at 
the SHA-3 zoo and you won't see much left with no attacks. It would be 
different if it were yet another Merkle-Damgard design using AES as a 
foundation, blah, blah, blah, but MD6 is a truly unique and interesting 
design.


I hope the report is wrong, and keeping that hope alive, the MD6 page has 
no statement about the withdrawal.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: RSA modulus record

2008-09-17 Thread Joseph Ashwood
- Original Message - 
From: Victor Duchovni [EMAIL PROTECTED]

To: cryptography@metzdowd.com
Sent: Tuesday, September 16, 2008 2:08 PM
Subject: Re: RSA modulus record



On Tue, Sep 16, 2008 at 09:01:51PM +0200, Weger, B.M.M. de wrote:


There's a new biggest known RSA modulus.
It is (in hexadecimal notation):

FF...(total of 9289166 F's)...FFDFF...(total of 1488985
F's)...FF800...(total of 9289165 0's)...001

It is guaranteed to be the product of two different large primes,


Are the primes actually known, or just guaranteed to exist?


and it has more than 80 million bits. Impressive security...


In what sense is this impressive security?



I have to agree that it is impressive, in the same way that having the 
nickname Tripod is impressive. It doesn't actually mean much in terms of 
reality; neither is all that useful, or realistic for actual use.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: questions on RFC2631 and DH key agreement

2008-02-13 Thread Joseph Ashwood
- Original Message - 
From: Hal Finney [EMAIL PROTECTED]

To: [EMAIL PROTECTED]; cryptography@metzdowd.com
Sent: Sunday, February 10, 2008 9:27 AM
Subject: Re: questions on RFC2631 and DH key agreement



Joseph Ashwood writes:

From: Hal Finney [EMAIL PROTECTED]
 Joseph Ashwood writes, regarding unauthenticated DH:
 if b uses the same private key
 to generate multiple yb the value of b will slowly leak.

 I'm not familiar with this last claim, that the value of b's private 
 key
 (presuming that is what you mean) would slowly leak if it were reused 
 for

 many DH exchanges. Can you explain what you mean? Are you talking about
 LimLee style attacks where the recipient does not check the parameters
 for validity? In that case I would say the private exponent would leak
 quickly rather than slowly. But if the parameters are checked, I don't
 see how that would leak a reused exponent.

I am not immediately aware of any known attacks that have been published
about it, but it is fairly obvious that Eve has more information about 
the

private key by having a second key set with the same unknown. With only a
single pair Eve's information set is:
g_1,p_1,q_1,y_1 where y_1 = g_1^x mod p_1

By adding the second key set Eve now has
g_1,p_1,q_1,y_1 where y_1 = g_1^x mod p_1
g_2,p_2,q_2,y_2 where y_2 = g_2^x mod p_2

This is obviously additional information, and with each additional key set
_i, Eve eventually has the information to guess x with improved probability.


That's hardly grounds for saying that the value of the secret will
slowly leak. You have given no reason to believe that this information
will be of any practical value to Eve.


We obviously disagree. Security is always about information control, and 
disclosing additional information for no gain is always a bad idea.


Expressing the equations differently:
Y_i = g_i^X - k_i*p_i
is equivalent to
Y_i = g_i^X mod p_i

Since Y_i, g_i, and p_i are known, k_i is irrelevant, and g_i and p_i can 
even be chosen by the attacker. Simple algebra shows that not all values of 
X can be discovered from a given set, but it also says that sets of possible 
X can be determined from each triple, and by choosing g and p the overlap of 
the sets can be reduced. Creating an oracle for (Y, g, p) triples out of the 
client is begging for an adaptive attack.



After all, exactly the same observation might be made about a digital
signature, that each signature gives additional information about the
private exponent.


Actually, there is an additional random variable in the signature, and 3 
additional k_i values, so the algebra says that the sets will overlap simply 
too much for a similar set-based attack to work.


This is a largely fuzzy-logic-based attack, and while I obviously haven't 
sorted it through that far, it should allow for a probability sorting of 
values for X.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: questions on RFC2631 and DH key agreement

2008-02-09 Thread Joseph Ashwood
- Original Message - 
From: Hal Finney [EMAIL PROTECTED]

To: [EMAIL PROTECTED]; cryptography@metzdowd.com
Sent: Wednesday, February 06, 2008 8:54 AM
Subject: Re: questions on RFC2631 and DH key agreement



Joseph Ashwood writes, regarding unauthenticated DH:
I would actually recommend sending all the public data. This does not 
take
significant additional space and allows more verification to be 
performed. I

would also suggest looking at what exactly the goal is. As written this
provides no authentication just privacy, and if b uses the same private 
key

to generate multiple yb the value of b will slowly leak.


I'm not familiar with this last claim, that the value of b's private key
(presuming that is what you mean) would slowly leak if it were reused for
many DH exchanges. Can you explain what you mean? Are you talking about
LimLee style attacks where the recipient does not check the parameters
for validity? In that case I would say the private exponent would leak
quickly rather than slowly. But if the parameters are checked, I don't
see how that would leak a reused exponent.


I am not immediately aware of any known attacks that have been published 
about it, but it is fairly obvious that Eve has more information about the 
private key by having a second key set with the same unknown. With only a 
single pair Eve's information set is:

g_1,p_1,q_1,y_1 where y_1 = g_1^x mod p_1

By adding the second key set Eve now has
g_1,p_1,q_1,y_1 where y_1 = g_1^x mod p_1
g_2,p_2,q_2,y_2 where y_2 = g_2^x mod p_2

This is obviously additional information, and with each additional key set 
_i, Eve eventually has the information to guess x with improved probability.





You can then use the gpb trio for DSA, leveraging the key set for more
capabilities.


Presuming here you mean (g,p,q) as suitable for reuse. This raises the
question, is the same set of (g,p,q) parameters suitable for use in both
DH exchange and DSA signatures?

From the security engineering perspective, I'd suggest that the goals and
threat models for encryption vs signatures are different enough that one
would prefer different parameters for the two.


I agree with that. Presuming that the private key values are different, 
there is at least no harm in using different parameters, and it avoids some 
possible avenues of attack.



For DSA signatures, we'd
like small subgroups, since the subgroup size determines the signature
size. This constraint is not present with DH encryption, where a large
subgroup will work as well as a small one. Large subgroups can then
support larger private exponents in the DH exchange.


Actually there is nothing stopping parameters for DSA from being prime(160 
bit)*prime(5 bit)*2+1 which would have a large enough subgroup as to be 
effectively unbreakable. Now obviously 5 bits is excessive, but my point 
is that finding p with a moderately sized subgroup q and a large additional 
subgroup is entirely possible, even though it is arguably unnecessary.





Now it may be argued that large subgroups do not actually increase
security in the DH exchange, because index calculus methods are
independent of subgroup size. In fact, parameters for DSA signatures
are typically chosen so that subgroup based methods such as Shanks that
take sqrt(q) cost are balanced against estimates of index calculus
work to break p. However, this balancing is inherently uncertain and
it's possible that p-based attacks will turn out to be harder than ones
based on q. Hence one would prefer to use a larger q to provide a margin
of safety if the costs are not too high.


I would consider that, except for (semi-)ephemeral parameters, the cost of 
finding an appropriate prime is minor relative to the other considerations. 
This is especially true with signature parameters, where a signing pair can 
be worth more than all the data authenticated by it.



While there is a computational
cost to using a larger subgroup for DH exchange, there is no data cost,
while for DSA there are both computational and data costs. Therefore the
tradeoffs for DH would tend to be different than for DSA, and a larger
q would be preferred for DH, all else equal. In fact it is rather common
in DH parameter sets to use Sophie-Germain primes for q.


I don't know if they are common but they are definitely a good idea, or at 
the very least using parameters with very large factors of p-1. Primes of 
the form q*k+1 for small k are certainly a good idea.



We may also consider that breaking encryption keys is a passive
attack which can be mounted over a larger period of time (potentially
providing useful information even years after the keys were retired)
and is largely undetectable; while breaking signatures, to be useful,
must be performed actively, carries risks of detection, and must be
completed within a limited time frame. All these considerations motivate
using larger parameter sets for DH encryption than for DSA signatures.


I'm not as certain about that last

Re: questions on RFC2631 and DH key agreement

2008-02-09 Thread Joseph Ashwood

[to and CC trimmed]
- Original Message - 
From: ' =JeffH ' [EMAIL PROTECTED]
To: Hal Finney [EMAIL PROTECTED]; Eric Rescorla 
[EMAIL PROTECTED]; [EMAIL PROTECTED]; Joseph Ashwood 
[EMAIL PROTECTED]

Cc: [EMAIL PROTECTED]; cryptography@metzdowd.com
Sent: Thursday, February 07, 2008 2:17 PM
Subject: Re: questions on RFC2631 and DH key agreement


I think I already know the answer to this question, but I just want to test 
my

understanding...

How wise (in a real-world sense) is it, in a protocol specification, to
specify that one simply obtain an ostensibly random value, and then use 
that
value directly as the signature key in, say, an HMAC-based signature, 
without
any further stipulated checking and/or massaging of the value before such 
use?


With any authentication, the biggest consideration is to examine what the 
intention is. Using a single-use, one-time key for a symmetric MAC works for 
local authenticity, but not for remote authenticity. That is to say, the 
local process knows that it didn't generate the MAC, and the MAC key is 
shared with only one other party, so the authenticity is known locally; but 
any external party can only say that some entity knowing the key generated 
it. This may or may not be the intended condition, and auditing this is 
very, very difficult.




E.g., here's such a specfication excerpt and is absolutely everything said 
in

the spec wrt obtaining said signature keys:

 When generating MAC keys, the recommendations in [RFC1750] SHOULD be
followed.
 ...
 The quality of the protection provided by the MAC depends on the 
randomness


This should be entropy.


of
 the shared MAC key, so it is important that an unguessable value be used.

How (un)wise is this, in a real-world sense?


It all depends on the intended meaning. If this is intended to authenticate 
to a third party, it fails completely. If it is specifically intended NOT to 
authenticate to a third party it may be exactly what is needed.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: questions on RFC2631 and DH key agreement

2008-02-06 Thread Joseph Ashwood
- Original Message - 
From: ' =JeffH ' [EMAIL PROTECTED]

Sent: Saturday, February 02, 2008 12:56 PM
Subject: Re: questions on RFC2631 and DH key agreement

If a purportedly secure protocol employing a nominal DH exchange in 
order to

establish a shared secret key between a requester and responder, employs
widely known published (on the web) fixed values for g (2) and p (a
purportedly prime 1040 bit number) for many of it's implementations and
runtime invocations, what are the risks its designers are assuming with
respect to the resultant properties of ZZ?


It is assuming that the total value of the data protected by those 
parameters never exceeds the cost to break them within the information's 
value lifetime. For 1040 bits this is highly questionable for any data with 
a lifetime longer than 6 months.


I suspect that many implementations will simply use the equivalent of 
whatever
rand() function is available to get the bits for their private keys 
directly,


Very bad idea, for two reasons: the rand() function does not have sufficient 
internal state, and the rand() function is far from cryptographically 
strong.
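
For contrast, a sketch of doing it properly in Java, with the private value 
drawn from a cryptographic RNG (the p and g arguments are whatever parameters 
the protocol fixes; the class name is illustrative):

import java.math.BigInteger;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import javax.crypto.spec.DHParameterSpec;

public final class DhKeygenSketch {
    public static KeyPair generate(BigInteger p, BigInteger g) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DiffieHellman");
        kpg.initialize(new DHParameterSpec(p, g), new SecureRandom());  // CSPRNG, not rand()
        return kpg.generateKeyPair();
    }
}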



and will likely not reallocate private keys unless the implementation or
machine are restarted. So if the random number generator has known flaws, 
then

there may be some predictability in both the public keys and in ZZ, yes?


All flaws in the private key generator will show in the public key 
selection, do yes.



Additionally there's the previously noted issue with the values of static
private keys slowly leaking.


Only if the value of p changes; if the value of p remains static, then the 
private key doesn't leak. A simple proof: Eve can easily and trivially 
generate any number of public/private key pairs herself, and thereby generate 
any number of viewable sets with which to attack the unknown private key.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: questions on RFC2631 and DH key agreement

2008-02-06 Thread Joseph Ashwood
- Original Message - 
From: ' =JeffH ' [EMAIL PROTECTED]

To: Joseph Ashwood [EMAIL PROTECTED]
Cc: cryptography@metzdowd.com
Sent: Monday, February 04, 2008 5:18 PM
Subject: Re: questions on RFC2631 and DH key agreement



I'd scrawled:

 If a purportedly secure protocol employing a nominal DH exchange in
 order to
 establish a shared secret key between a requester and responder, 
 employs

 widely known published (on the web) fixed values for g (2) and p (a
 purportedly prime 1040 bit number) for many of it's implementations and
 runtime invocations, what are the risks its designers are assuming with
 respect to the resultant properties of ZZ?


Joseph Ashwood graciously replied:


It is assuming that the total value of the data protected by those
parameters never exceeds the cost to break them within the information's
value lifetime.


yes.



 I suspect that many implementations will simply use the equivalent of
 whatever
 rand() function is available to get the bits for their private keys
 directly,

Very bad idea, for two reasons: the rand() function does not have sufficient
internal state, and the rand() function is far from cryptographically
strong.


what about just using bytes from /dev/urandom on *nix?


*nix /dev/urandom should work well; the entropy harvesting is reasonably 
good, and the mixing/generation is sufficient to keep it from being the 
weak link.





 and will likely not reallocate private keys unless the implementation 
 or
 machine are restarted. So if the random number generator has known 
 flaws,

 then
 there may be some predictability in both the public keys and in ZZ, 
 yes?


All flaws in the private key generator will show in the public key
selection, do yes.

^^
so?


Yep, my typos show I'm far from perfect. I meant so.






 Additionally there's the previously noted issue with the values of 
 static

 private keys slowly leaking.

Only if the value of p changes, if the value of p remains static, then 
the

private key doesn't leak.


Ok, I can see that from ya = g ^ xa mod p  ...  ya doesn't change if g, 
xa,

and p don't change.



A simple proof: Eve can easily and trivially generate any number of
public/private key pairs herself, and thereby generate any number of
viewable sets with which to attack the unknown private key.


Are you saying here that if p (and g) are static, then one has some
opportunity to brute-force guess the private key that some long-running
instance is using, if it doesn't otherwise re-allocate said private key 
from

time to time?


Actually I'm saying that if p and g do not change, then there is no 
additional leakage of the private key beyond what Eve can compute anyway.


There are many, many factors involved in any deep security examination, 
making it truly a business decision with all the complexities involved in 
that, and very easy to get wrong.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: questions on RFC2631 and DH key agreement

2008-02-02 Thread Joseph Ashwood
- Original Message - 
From: ' =JeffH ' [EMAIL PROTECTED]

To: cryptography@metzdowd.com
Cc: ' =JeffH ' [EMAIL PROTECTED]
Sent: Friday, February 01, 2008 1:53 PM
Subject: questions on RFC2631 and DH key agreement



(ya and yb) if { p, q, g, j } are known to both parties.


So if p, q, g are not static, then a simplistic, nominally valid, DH 
profile

would be to..
   a                          b
   --                         --
   g, p, ya   ------------>
              <------------   yb
..yes?


I would actually recommend sending all the public data. This does not take 
significant additional space and allows more verification to be performed. I 
would also suggest looking at what exactly the goal is. As written this 
provides no authentication just privacy, and if b uses the same private key 
to generate multiple yb the value of b will slowly leak.


Other than for b perhaps wanting to verify the correctness of { p, q, g, 
j }

(group parameter validation), is there any reason to send q ?


You can then use the gpb trio for DSA, leveraging the key set for more 
capabilities.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Password hashing

2007-10-18 Thread Joseph Ashwood
- Original Message - 
From: Tero Kivinen [EMAIL PROTECTED]

Sent: Monday, October 15, 2007 5:47 AM
Subject: Re: Password hashing



Joseph Ashwood writes:

On NetBSD HMAC-SHA1:
There is a shortcut in the design as listed: using the non-changing password
as the key allows the optimization that a single HMAC can be keyed, then
copied and reused with each seed. This shortcut actually speeds up the
attack by a factor of 3. The fix is to use the salt as the HMAC key; this
assumes much less of the hash function.


When you are trying to crack password, you do know the SALT and
iteration count. You do not know the password. You need to try all
possible passwords with different salts. As we use the password we are
trying as an input to our test function we need to initialize
hmac_sha1 again for each password we are guessing. Or did I understand
something wrong.

With your fix I could take the SALT from the passwd string and
initialize first level of hmac with it and then feed different
password to it.


It is true that the first two iterations of the compression function in my 
supplied solution are computationally irrelevant, while in the current 
design the first two are computationally relevant. But the second time 
through the HMAC the situation reverses: the password-keyed HMAC has exactly 
the same pre-salt state as in the first HMAC iteration, so in the second and 
subsequent HMAC iterations the first two applications of the compression 
function are computationally irrelevant. In my solution there is no prior 
knowledge of the key for the second and subsequent HMAC iterations, so the 
first two applications of the compression function remain computationally 
relevant. My solution therefore trades the computation in the first two 
compression-function calls for the millions of subsequent compression-function 
calls. Asymptotically this is a 3-fold improvement, and so it is a very good 
change.


It is also worth noting that most passwords, even so-called good passwords, 
have only a small amount of entropy, and a 50,000-word list will contain a 
significant number of all the passwords on a system. There are far more 
salts, so storing the precomputations over the passwords versus the 
precomputations over even a 32-bit salt is radically different.





On USERID || SALT || PASSWORD:


Adding USERID to the calculations will firstly break API
compatibility, as the crypt function do not know the userid.


There is a choice: do it right, or keep the API. I am firmly on the side of 
doing it right. While the USERID is irrelevant if the SALT can be made to 
never repeat, that is a very hard thing to truly accomplish, especially 
across multiple disconnected systems.



It will
also break the ability to copy the encrypted passwords from one
machine to other,


So it prevents people from doing something that is poor security.


even when you would need to change user id in the
progress (If I need to set up account for someone on my machines, I
usually either ask them to send me already encrypted password I can
put in to my /etc/password, or ask them to send me ssh public key...


While the design is being changed (as you noted, making this change would 
necessitate other changes), it is worthwhile to eliminate other 
security-poor decisions as well.
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Password hashing

2007-10-13 Thread Joseph Ashwood

Just combining several of my thoughts into a single email.

On the Red Hat proposal:
Why does every undereducated person believe that complexity == security? It 
is far better to rely on little things called proofs. There are several 
proofs out there with significant impact on this, in particular the really 
nice HMAC proof. The absurd complexity of the Red Hat proposal makes it 
highly likely that there is at least some shortcut in it that hasn't been 
seen yet.


On SALT || PASSWORD:
In doing that you are assuming collision resistance and no shortcuts in 
computation. It is better than the Red Hat proposal, but not optimal.


On NetBSD HMAC-SHA1:
There is a shortcut in the design as listed: using the non-changing password 
as the key allows the optimization that a single HMAC can be keyed, then 
copied and reused with each seed. This shortcut actually speeds up the 
attack by a factor of 3. The fix is to use the salt as the HMAC key; this 
assumes much less of the hash function.


On PBKDF2:
It also appears to suffer from the same precomputation flaw, possibly more; 
I haven't looked at it too closely for this purpose.


On USERID || SALT || PASSWORD:
Close: anything that is fixed (USERID and PASSWORD) should be put at the 
end, so that there is round-to-round variation before it, preventing 
precomputation. It also assumes the same collision resistance and no 
shortcuts.



The better solution, with aspects borrowed from the others:
IV[0] = Salt
IV[i] = HMAC(key=IV[i-1], data=USERID||PASSWORD)
PasswordHash=IV[k]

Of course, unambiguous formatting for USERID||PASSWORD is necessary to avoid 
any shortcuts or precomputations, but any unambiguous method is sufficient, 
including a fixed-length USERID.


Using an HMAC instead of just a hash function allows the scheme to make use 
of most of the HMAC proof, reducing the assumptions on the underlying hash 
to the effective minimum. Placing the SALT (and later each prior result) in 
the HMAC key position prevents any precomputation, under the assumption that 
there is no way of computing the hash in fewer than 3 compression-function 
iterations, which is quite a small window of opportunity; and any such 
result would likely benefit the rightful computation of the PasswordHash as 
well, resulting in a simple increase in the value of k.
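
A sketch of that scheme in Java, using a 4-byte length prefix on USERID as 
one possible unambiguous formatting (the prefix choice and the names are 
mine, not part of the scheme):

import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class IteratedHmacHash {
    // IV[0] = salt; IV[i] = HMAC(key = IV[i-1], data = USERID || PASSWORD); output = IV[k].
    public static byte[] hash(byte[] salt, String userId, String password, int k) throws Exception {
        byte[] user = userId.getBytes(StandardCharsets.UTF_8);
        byte[] pass = password.getBytes(StandardCharsets.UTF_8);
        byte[] data = new byte[4 + user.length + pass.length];
        data[0] = (byte) (user.length >>> 24);               // length-prefix the USERID so the
        data[1] = (byte) (user.length >>> 16);               // USERID||PASSWORD encoding is unambiguous
        data[2] = (byte) (user.length >>> 8);
        data[3] = (byte) user.length;
        System.arraycopy(user, 0, data, 4, user.length);
        System.arraycopy(pass, 0, data, 4 + user.length, pass.length);

        byte[] iv = salt;
        Mac mac = Mac.getInstance("HmacSHA1");
        for (int i = 0; i < k; i++) {
            mac.init(new SecretKeySpec(iv, "HmacSHA1"));     // the salt, then each prior result, is the key
            iv = mac.doFinal(data);
        }
        return iv;
    }
}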
   Joe 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Password hashing

2007-10-13 Thread Joseph Ashwood
- Original Message - 
From: Jim Gellman [EMAIL PROTECTED]

To: Joseph Ashwood [EMAIL PROTECTED]
Cc: Cryptography cryptography@metzdowd.com
Sent: Saturday, October 13, 2007 1:25 PM
Subject: Re: Password hashing



I'm not sure I follow your notation.  Are you saying that IV[n] is the
n'th application of the compression function?


No, each application of the HMAC is separate; this is to incur the 
finalization penalty in the computation. If you want it closer to 
implementation:

IV = SALT
for(n iterations)
   IV = HMAC(key=IV, data=USERID||PASSWORD)
PasswordHash=IV

Why put each field in its own block?


It really is to incur as many necessary performance penalties as possible. 
The HMAC keying requires 2 compressions, and the USERID||PASSWORD formatting 
can be arranged to consistently cost 2 more compressions, plus a 
finalization per round.


More inflation is of course possible, but I don't think it is reasonable; 
there is too much possibility of stretching too far and giving too much 
leverage for an attack on the compression function (i.e., the more times you 
use the compression function, the more likely a shortcut exists, but by 
resetting the state such attacks become much less likely).

   Joe


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Private Key Generation from Passwords/phrases

2007-01-13 Thread Joseph Ashwood
- Original Message - 
From: Matthias Bruestle [EMAIL PROTECTED]

Subject: Private Key Generation from Passwords/phrases



What do you think about this?


I think you need some serious help in learning the difference between 2^112 
and 112; you really don't seem to have much grasp of the entire concept. 112 
bits of entropy is 112 bits of entropy, not 76 bits of entropy, 27 bits of 
bull, 7 bits of cocaine, and a little bit of alcohol; and the 224 bits of 
ECC is approximate anyway, since, as you noted, the time units are 
inconsistent. Basically, stop fiddling around trying to convince yourself 
you need less than you do, and locate 112 bits of apparent entropy. Anything 
else and you're into the world of trying to prove equivalence between 
entropy and work, which works in physics but doesn't work in computation, 
because next year the work level will be different and you'll have to redo 
all your figures.
   Joe 



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Status of SRP

2006-06-01 Thread Joseph Ashwood
- Original Message - 
From: James A. Donald [EMAIL PROTECTED]

Subject: Status of SRP


The obvious solution to the phishing crisis is the widespread deployment 
of SRP, but this does not seem to happening.  SASL-SRP was recently 
dropped.  What is the problem?


The problem is that you're attempting to treat the wrong aspect. Yes, SRP 
verifies the server, but requiring even more work on the part of the client 
will not solve the problem. Attempting to use SRP to solve this problem is 
basically saying "You must be this smart to be worth protecting."
   Joe 



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Re: Is AES better than RC4

2006-05-25 Thread Joseph Ashwood
- Original Message - 
From: Ed Gerck [EMAIL PROTECTED]

Subject: [!! SPAM] Re: Is AES better than RC4



Joseph Ashwood wrote:



SOP: discard first 100's of bytes


This is part of the lack of key agility.


Using it securely requires so much in the way of heroic efforts


SOP: hash the key


There is far more to using RC4 securely than simply hashing the key. Hashing 
the key only prevents recovering the original key (to the limits of the hash 
used); it does not provide anything close to all the heroic efforts 
required. If you look at the design of SSL/TLS, a very significant portion 
of the effort that has gone into the design of the frame/cell/whatever they 
call them is specifically there to address issues like those seen in RC4.


[Slow rekeying speed makes RC4] unusable for any system that requires 
rekeying.


Code RC4 in a way that makes it easy.


You simply cannot code around the fact that the RC4 key processing is dog 
slow, and that even after the original keying there is the necessity to 
discard the first several bytes of output. So just in the keying you have to 
deviate substantially from the original design.




Its only redeeming factors are that the cipher itself is simple to 
write, and once keyed it is fast.


simple to code/verify  is good for security too. This is a major
point.


A Vigenere cipher is easier to code, and we don't recommend it. Just as with 
a Vigenere cipher, building a secure protocol (even for storage) with RC4 
quickly becomes an arms race, requiring heroic efforts on the design side 
along with huge amounts of compute cycles on the execution side, just to 
stay ahead of a PFY with a laptop. The same amount of design effort with AES 
leads to a simpler, more compact design of approximately the same speed. And 
exactly as Ed noted: simple to ... verify is good for security too.


The truth is that because AES is so much simpler to build a secure protocol 
around, the end result is actually easier to analyse.
   Joe 



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Re: Is AES better than RC4

2006-05-25 Thread Joseph Ashwood
- Original Message - 
From: Ed Gerck [EMAIL PROTECTED]

Subject: [!! SPAM] Re: Is AES better than RC4



Please note that my email was way different in scope. My opening
sentence, where I basically said that it does not make much sense
to compare RC4 with AES, was cut in your quote -- but here it is:

AES has more uses and use modes than RC4, in addition to the fact that
it encrypts more than one byte at once. Having said that, it is curious
to note the following misconceptions:


Yes I did snip that out. I figured everything we agreed on could be left out 
easily enough. I apologize for removing something you considered core to 
your view.



BTW, discarding the first 100's of bytes in RC4 is easy, fast, and
has nothing to with lack of key agility. And, if you do it, you don't
even have to hash the key (ie, you must EITHER hash the key OR discard the
first bytes).


From my view it does. Every extra clock cycle has an impact on key agility; 
even 1 byte of RC4 discard slows the rekeying process, and as a result it 
does affect the effective key agility. That only 256 discards are necessary 
does not mean that those extra 256*(clock cycles per pull) clock cycles 
don't affect key agility. At what point do we say this affects key agility: 
when it increases the time by 1%? 10%? 100%? If we don't consider every 
cycle to reduce key agility, it's all just a matter of scale. This does mean 
that different implementations will have different key agilities, but 
looking historically, RC2 makes a great example of a cipher where the 
attacker has substantially more key agility than the legitimate user, so it 
is not without precedent.

   Joe 



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Is AES better than RC4

2006-05-24 Thread Joseph Ashwood


RC4 should have been retired a decade ago; that it has not been is due solely 
to the undereducated going with whatever's fastest. It's time we allowed RC4 
to stay dead.
   Joe 



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: [!! SPAM] Re: Is AES better than RC4

2006-05-24 Thread Joseph Ashwood
- Original Message - 
From: James A. Donald [EMAIL PROTECTED]

Subject: [!! SPAM] Re: Is AES better than RC4



--
Joseph Ashwood wrote:
 RC4 should have been retired a decade ago,

Why?


It is in general distinguishable from random, actually quite quickly.
The first few bytes are so biased that any security is imaginary.
Using it securely requires so much in the way of heroic efforts that the 
overall system slows down into the same speed class as a much simpler, more 
secure design based on AES (or 3DES, or a dozen other ciphers).
The key anti-agility slows it down to the point of being functionally 
unusable for any system that requires rekeying.
Its only redeeming factors are that the cipher itself is simple to write, 
and once keyed it is fast. Neither of these is of any substantial use after 
considering the previous major issues.
   Joe 



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Creativity and security

2006-03-27 Thread Joseph Ashwood
- Original Message - 
From: J. Bruce Fields [EMAIL PROTECTED]

Subject: Re: Creativity and security



On Fri, Mar 24, 2006 at 06:47:07PM -, Dave Korn wrote:

  IOW, unless we're talking about a corrupt employee with a photographic
memory and telescopic eyes,


Tiny cameras are pretty cheap these days, aren't they?  The employee
would be taking more of a risk at that point though, I guess.


The one I find scarier is the US restaurant method of handling cards. For 
those of you unfamiliar with it: I hand my card to the waiter/waitress, the 
card disappears behind a wall for a couple of minutes, and my receipt comes 
back for me to sign along with my card. Just to see if anyone would notice, 
I actually did this experiment with a (trusted) friend who works at a small 
upscale restaurant. I ate, she took my card in the back, and without hiding 
anything or saying what she was doing she took out her cellphone, snapped a 
picture, then processed everything as usual. The transaction did not take 
noticeably longer than usual, and the picture was very clear; in short, if I 
hadn't known she was doing this back there I would never have known. Even at 
a high-end restaurant where there are more employees than clients, no one 
paid enough attention in the back to notice this. If it hadn't been a 
trusted friend doing this I would've been very worried.
   Joe 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread Joseph Ashwood
- Original Message - 
From: Travis H. [EMAIL PROTECTED]

Subject: passphrases with more than 160 bits of entropy



I was thinking that one could hash the first block, copy the
intermediate state, finalize it, then continue the intermediate result
with the next block, and finalize that.  Is this safe?  Is there a
better alternative?


Use a bigger hash; SHA-512 should be good to basically 512 bits, same for 
Whirlpool. If those aren't enough for you, use the chaining mode from 
Whirlpool with Rijndael at an appropriate size.
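
If it helps, the wider hash is a one-liner in the stock JCA; a minimal 
sketch (the class and method names are mine, purely illustrative):

import java.security.MessageDigest;

public class WideHashSketch {
    // SHA-512 gives a 64-byte digest, so up to ~512 bits of input
    // entropy can be carried through without truncation.
    public static byte[] sha512(byte[] passphrase) throws Exception {
        return MessageDigest.getInstance("SHA-512").digest(passphrase);
    }
}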
   Joe 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: quantum chip built

2006-01-19 Thread Joseph Ashwood
- Original Message - 
From: Michael Cordover [EMAIL PROTECTED]

Subject: Re: quantum chip built



John Denker wrote:

My understanding is that quantum computers cannot easily do anything.


Probably one of the best statements so far, certainly QC and easy don't go 
together very well at this time.



Alex said

Is ECC at risk too?  And are we at risk in 10, 20 or 30 years from now?


At this time pretty much everything is potentially at risk from QC, mostly 
because we know so little about how they really behave. Will ECC-160 fall to 
QC within 20 years? Probably not, but I wouldn't offer insurance against it. 
Right now we can safely assume that for our lifetime QC will be less of a 
threat than classical computation, but my standard recommendation of 
checking your security in depth at least every 6 months (decrease this 
interval if you have less of a safety buffer), along with some continual 
critical point examination (e.g. check every paper on cryptanalysis of AES 
if you use AES), should be more than sufficient.
   Joe 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Countries that ban the use of crypto?

2005-12-08 Thread Joseph Ashwood
- Original Message - 
From: Jörn Schmidt [EMAIL PROTECTED]

Subject: Re: Countries that ban the use of crypto?



[China bans cryptography]


I'm not going to out anyone on this, but even a quick search of Skype finds 
quite a few individuals who make use of cryptography in China. So I strongly 
suspect that there is a great deal of lenience on that front. In fact, I 
have it on dependable authority that there are a number of places in China 
where the only IM system that functions is Skype.


I have no doubt that China does place controls on it, and it has been 
published in a few places that their telecom industry has a particular 
distaste for Skype, but it appears that there is more to it.
   Joe 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Fermat's primality test vs. Miller-Rabin

2005-12-06 Thread Joseph Ashwood
- Original Message - 
From: Anton Stiglic [EMAIL PROTECTED]

Subject: RE: Fermat's primality test vs. Miller-Rabin





Ok after making that change, and a few others. Selecting only odd numbers
(which acts as a small sieve) I'm not getting much useful information. It
appears to be such that at 512 bits if it passes once it passes 128 times,
and it appears to fail on average about 120-130 times, so the sieve
amplifies the values more than expected. Granted this is only a test of the
generation of 128 numbers, but I got 128 primes (based on 128 MR rounds).



O.k., so if I read this right, your new results concord with the analysis of
Pomerance et al.  That would make much more sense.

When you say on average about 120-130 times the test fails, out of how
many is that?


I should've said that the quantity of numbers that failed the first test 
between each success was about 120-130. Apparently, even sieving based 
solely on "is it odd" is enough to substantially influence the outcome.
   Joe 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Fermat's primality test vs. Miller-Rabin

2005-12-05 Thread Joseph Ashwood
- Original Message - 
From: Sidney Markowitz [EMAIL PROTECTED]

Subject: Re: Fermat's primality test vs. Miller-Rabin



Joseph Ashwood wrote:

Granted this is only a test of the
generation of 128 numbers, but I got 128 primes (based on 128 MR rounds).


That doesn't make sense, unless I'm misinterpreting what you are saying. 
Primes aren't that common, are they?


Apparently they are. I ran a sample, and even with the added second 
sanity check, every one of them that passes a single round comes up prime.


I then proceeded to move it to 2048-bit numbers. It takes longer and the 
gaps between primes are averaging around 700 right now, but once again if it 
passes a single test it passes all 128+128. This sample is currently 
statistically completely insignificant, but even after the current 8 tries 
I'd expect something different.
   Joe 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Fermat's primality test vs. Miller-Rabin

2005-12-04 Thread Joseph Ashwood
- Original Message - 
From: Sidney Markowitz [EMAIL PROTECTED]

Subject: Re: Fermat's primality test vs. Miller-Rabin



Joseph Ashwood wrote:

  byte [] rawBytes = new byte[lenNum/8];
  rand.nextBytes(rawBytes);
  curNum = new BigInteger(rawBytes);



curNum = BigInteger.ONE.or(new BigInteger(512, rand));


Ok after making that change, and a few others. Selecting only odd numbers 
(which acts as a small sieve) I'm not getting much useful information. It 
appears to be such that at 512 bits if it passes once it passes 128 times, 
and it appears to fail on average about 120-130 times, so the sieve 
amplifies the values more than expected. Granted this is only a test of the 
generation of 128 numbers, but I got 128 primes (based on 128 MR rounds). 
For the sake of full code review and duplication I've appended the entire 64 
lines of code. You'll see I made a few optimizations, and removed writing 
the data to a csv. I developed, compiled and ran it only through Eclipse.

   Joe




import java.math.BigInteger;
import java.security.SecureRandom;
import java.io.IOException;

public class millerMain {
 static int numTests = 128;
 static int lenNum = 512;
 static SecureRandom rand = new SecureRandom();
 static BigInteger two = BigInteger.valueOf(2);

 public static void main(String[] args) throws IOException {
  BigInteger curNum = null;
  int totalPrimes = 0;
  int [] successes = new int[numTests];
  int failuresBetween = 0;
  for(int i = 0; i < numTests; i++)
  {
   failuresBetween = -1;
   //choose starting number: 512 random bits, forced odd
   do
   {
    curNum = BigInteger.ONE.or(new BigInteger(lenNum, rand));
    failuresBetween++;
   }
   while(testOnce(curNum) == false);
   System.out.println("Failed " + failuresBetween + " times");
   //passed once

   //run 127 more tests
   for(int j = 0; j < 127; j++)
   {
    if(testOnce(curNum))
    {
     successes[i]++;
    }
   }
   if(successes[i] == 127) totalPrimes++;
   System.out.println("Test Number " + i + " successes " + (successes[i]+1) +
     " Total Prime so far " + totalPrimes);

   BigInteger temp = BigInteger.valueOf(successes[i]);
   String num = temp.toString();
  }
 }

 static boolean testOnce(BigInteger N){
  BigInteger A = null;

  // 0 < A < N
  A = new BigInteger(lenNum, rand).mod(N);
  if(A.compareTo(BigInteger.ZERO) == 0) return testOnce(N);

  BigInteger Nminus1 = N.subtract(BigInteger.ONE);

  //shouldBeOne = A^(N-1) mod N
  BigInteger shouldBeOne = A.modPow(Nminus1, N);
  if(shouldBeOne.compareTo(BigInteger.ONE) != 0) return false;
  return true;
 }
}
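
For reference, a single strong-pseudoprime (Miller-Rabin) round can be 
sketched as below (class and method names are mine, purely illustrative); 
the testOnce above checks only the Fermat condition A^(N-1) mod N == 1, 
while a full round also examines the square-root chain:

import java.math.BigInteger;
import java.security.SecureRandom;

public class MrRoundSketch {
 static final SecureRandom rand = new SecureRandom();

 // One strong-pseudoprime round on an odd N > 3.
 static boolean millerRabinOnce(BigInteger N) {
  BigInteger Nminus1 = N.subtract(BigInteger.ONE);
  // Write N-1 = 2^s * d with d odd.
  int s = Nminus1.getLowestSetBit();
  BigInteger d = Nminus1.shiftRight(s);

  // Random witness A in [2, N-2].
  BigInteger A;
  do {
   A = new BigInteger(N.bitLength(), rand);
  } while (A.compareTo(BigInteger.valueOf(2)) < 0 || A.compareTo(Nminus1) >= 0);

  BigInteger x = A.modPow(d, N);
  if (x.equals(BigInteger.ONE) || x.equals(Nminus1)) return true;
  for (int r = 1; r < s; r++) {
   x = x.modPow(BigInteger.valueOf(2), N);
   if (x.equals(Nminus1)) return true;
  }
  return false; // N is definitely composite
 }
}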



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Fermat's primality test vs. Miller-Rabin

2005-12-02 Thread Joseph Ashwood
- Original Message - 
From: Nicolas Rachinsky [EMAIL PROTECTED]

Subject: Re: Fermat's primality test vs. Miller-Rabin



* Joseph Ashwood [EMAIL PROTECTED] [2005-11-22 02:50 -0800]:

16384 times
..



If I remember the proof of MR correctly it assumes an odd number. Were
all your questions odd?


The random numbers tested were almost certainly not all odd, no filtering 
was done on random.



If not, please try again with odd numbers only.


I'm running an abbreviated test right now, and it's looking less impressive; 
I have to assume I'm hitting a bad streak somehow. Real bad: 30 numbers 
tested, no primes at all so far, and I see one that has passed 79 tests. I 
have to assume I'm missing something really stupid at this point in my new 
number chooser that I don't have the time to find right now. So I'm asking 
for anyone's help in pointing out to me why, after I let it go the full 128 
runs (that is, 128 numbers that passed a single round of MR), I didn't get a 
single number to pass more than 79. Did I just hit a really, really bad 
streak?


The exact code for the function and the support variables :

static int lenNum = 512;
static SecureRandom rand = new SecureRandom();
static BigInteger two = BigInteger.valueOf(2);

static BigInteger chooseValue()
{
 //pick a random integer
 BigInteger curNum = null;
 byte [] rawBytes = new byte[lenNum/8];
 rand.nextBytes(rawBytes);
 curNum = new BigInteger(rawBytes);

 //make sure it's odd
 if(curNum.mod(BigInteger.valueOf(2)).compareTo(BigInteger.ZERO) == 0)
 {
  curNum = curNum.add(BigInteger.ONE);
 }

 //if it's 0 or negative, try again
 if(curNum.compareTo(BigInteger.ZERO) <= 0)
 {
  return chooseValue();
 }
 return curNum;

}

This should choose a 512-bit random odd positive number, unless I'm missing 
something horribly, horribly braindead.


Anyway, back to trying to design a cool user interface (anyone who knows 
me knows that's the cue to begin laughing, I can't design a UI for sh*t).
   Joe 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Fermat's primality test vs. Miller-Rabin

2005-11-30 Thread Joseph Ashwood
- Original Message - 
From: Anton Stiglic [EMAIL PROTECTED]

Subject: RE: Fermat's primality test vs. Miller-Rabin



-Original Message-
From: [Joseph Ashwood]
Subject: Re: Fermat's primality test vs. Miller-Rabin

I think much of the problem is the way the number is being applied. Given
a stream of random numbers that have passed a single round of MR you will
find that very close to 50% of them are not prime, this does not mean that
it passes 50% of the numbers (the 2^-80 probability given above is of this
type).


Do you do an initial sieving to get rid of the more obvious primes?


No I did not, since this was specifically to test the effectiveness of MR I 
determined that it would be better to test purely based on MR, and not use 
any sieving. The actual algorithm was:



16384 times
{
   question = random 512-bit number
   //this is not the most efficient, but it should remove bias making this 
just MR
   while(question does not pass a single round of MR) question = random 
512-bit number

   127 times
   {
   perform an MR round
   log MR round result
   }
}

Then I performed analysis based on the log generated. I will gladly disclose 
the source code to anyone who asks (it's in Java).



I'm guessing you don't since you seem to have a result contradictory to what 
has been proven by Damgard, Landrock and Pomerance.  If you look at table 4.3 
of HAC (which comes from the Damgard et al. paper), it says that if your 
candidates come from a uniform random distribution, then for a 500-bit 
candidate, the probability that a candidate n is composite when one round of 
Miller-Rabin said it was prime is <= (1/2)^56.  You are finding that the 
probability is about 1/2, which seems very wrong (unless you are not doing 
the sieving, which is very important).  Am I misunderstanding something?


No you're not. The sieving is important from a speed standpoint, in that the 
odds improve substantially based on it, however it is not, strictly 
speaking, necessary; MR will return a valid result either way.



In fact it appears that integers fall on a continuum of difficulty
for MR, where some numbers will always fail (easy composites), and other
numbers will always pass (primes). The problem comes when trying to denote
which type of probability you are discussing.


Well I think I explained it pretty clearly.  I can try to re-iterate.  Let X
represent the event that a candidate n is composite, and let Y_t denote the
event that Miller-Rabin(n,t) declares n to be prime, where Miller-Rabin(n,t)
means you apply t iterations of Miller-Rabin on n.
Now the basic theorem that we all know is that P(Y_t | X) <= (1/4)^t (this
is a problem in one of Koblitz's basic textbooks on cryptography, for
example).
But this is not the probability that we are interested in, we are (at least
I am) more interested in P(X | Y_t).  In other words, what is the
probability that n is in fact composite when Miller-Rabin(n, t) declared n
to be prime?  Do we agree that this is the probability that we are
interested in?


If we are discussing that aspect, then yes we can agree to it. That is the 
probability I gave, at exactly a single round (i.e. no sieving involved), 
approaching 1/2 (my sample was too small to narrow it beyond about 2 
significant digits). I know this result is different from the standard 
number, but the experiment was performed, and the results are what I 
reported. This is where the additional question below becomes important 
(since it gives how quickly the odds of being incorrect will fall).




What are the odds that a
random 512-bit composite will be detected as composite by MR in one round?
I don't think anyone has dependably answered that question, but the answer
is very different from 1-(probability that MR-* says it's a prime)^-k. Any
discussion needs to be more accurately phrased.


You are looking for P( Comp Y_t | X), where Comp Z is the complementary
event of Z. In our case, Comp Y_t is the event that Miller-Rabin(n,t) proves
n to be composite. Is that what you are looking for?


Actually I'm not, the probability is a subtly different one and the key 
difference is in Y. Instead it is: given a random composite RC, what is 
P(MR(RC, k) | Comp X)? This appears to me to be a complex probability based 
on the size of the composite. But this is the core probability that governs 
the probability of composites remaining in the set of numbers that pass MR-k. 
Fortunately, while it is a core probability, it is not necessary for MR's 
main usefulness. Performing log_2(N)/4 rounds of MR appears to be a solid 
upper bound on the requirements, and as this is the probability given by 
Koblitz, and the most common assumption on usage, it is a functional 
substitute.



For example, my phrasing is that in the tests that I performed 50% (+/-
experimental noise) of those numbers that passed a single round of MR also
passed 128 rounds, leading me to conclude that 50% of the numbers that
passed a single round of MR

Re: Fermat's primality test vs. Miller-Rabin

2005-11-18 Thread Joseph Ashwood
- Original Message - 
From: Anton Stiglic [EMAIL PROTECTED]

Subject: RE: Fermat's primality test vs. Miller-Rabin



The general consensus is that for 500-bit numbers one needs only 6 MR
tests for 2^{-80} error probability [1]:


My own tests disagreed with this; 512 bits seemed to have a threshold 
around 70 passes for random candidates. I'm thinking you forgot a sieving 
step there (which would change the number substantially).



and thus a single test gives ~2^{-13}.


If you just took the exponent 80 and divided it by 6 to get ~13, I don't
think that is the right reasoning.  Look at table 4.3 of the Handbook of
Applied Cryptography: for t = 1 (one iteration) and for a 500-bit candidate,
we have probability p(X | Y_1) = 2^-56, which is better than what you
concluded.  (X representing the event that the candidate n is composite, Y_t
representing the event that Miller-Rabin(n, t) declares n to be prime).

The results in tables 4.3 and 4.4 of HAC are for randomly (uniformly) chosen
candidates, and I think you need to do a basic sieving (I don't remember if
that is necessary, but I think it is).  The result is due to the fact that
under these conditions, the strong pseudoprime test does in fact much better
than 1/4 probability of error (the value of P(Y_t | X) is very low); this
result is due to Damgard, Landrock and Pomerance, based on earlier work of
Erdos and Pomerance.


I think much of the problem is the way the number is being applied. Given a 
stream of random numbers that have passed a single round of MR you will find 
that very close to 50% of them are not prime, this does not mean that it 
passes 50% of the numbers (the 2^-80 probability given above is of this 
type). In fact it appears that integers fall on a continuum of difficulty 
for MR, where some numbers will always fail (easy composites), and other 
numbers will always pass (primes). The problem comes when trying to denote 
which type of probability you are discussing. What are the odds that a 
random 512-bit composite will be detected as composite by MR in one round? I 
don't think anyone has dependably answered that question, but the answer is 
very different from 1-(probability that MR-* says it's a prime)^-k. Any 
discussion needs to be more accurately phrased.


For example, my phrasing is that in the tests that I performed 50% (+/- 
experimental noise) of those numbers that passed a single round of MR also 
passed 128 rounds, leading me to conclude that 50% of the numbers that 
passed a single round of MR are in fact prime. Since each number that passed 
a single round was subjected to 127 additional rounds, a number of 
additional statistics can be drawn, in particular that of those that failed 
at least one round none failed less than 40 rounds, and that few passed less 
than 40 rounds. Due to the fact that this was only iterated 65536 times 
there is still substantial experimental error available. These pieces of 
information combined indicate that for 512-bits it is necessary to have 80 
rounds of MR to verify a prime.
   Joe 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Fermat's primality test vs. Miller-Rabin

2005-11-13 Thread Joseph Ashwood
- Original Message - 
From: Charlie Kaufman [EMAIL PROTECTED]

Subject: FW: Fermat's primality test vs. Miller-Rabin
In practice, the probability of randomly choosing a Carmichael number of 
size 250 bits is vanishingly small.


I would say that the chance of finding any Carmichael number without 
deliberately looking for it is vanishingly small.


The probability of a single run of Miller-Rabin or Fermat not detecting 
that a randomly chosen number is composite is almost vanishingly small.


I've heard but not confirmed a figure of one failure in 20 million. I've 
never heard an estimate of the probability that two runs would fail to 
detect the composite. It couldn't be better than one failure is 20 million 
squared or worse than one in 80 million.


I can confirm that that number is completely wrong. I just implemented a 
small Java program to test exactly that. Each number was vetted by a single 
pass of Miller-Rabin (iterations = 1). With 512-bit numbers the first 52 
random guesses that passed the first test resulted in 26 numbers that failed 
to pass 128 iterations. I find it rather odd that this is exactly half, and 
I also notice that of those that failed, almost all seem to have failed 
at least half of the iterations.


It appears that the minimum estimate of 1/2 probability is necessary, but 
that 1/4 is more likely.
   Joe 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: semi-preditcable OTPs

2005-10-25 Thread Joseph Ashwood
- Original Message - 
From: Travis H. [EMAIL PROTECTED]

Subject: semi-preditcable OTPs

Despite [flawed OTPs], the NSA wasn't able to crack any messages.



My question is, why?   I think I know the reason, and that is that any
predictability in a symbol of the OTP correlated to a predictability
in only one plaintext symbol.  In other words, there was no leverage
whereby that plaintext could then be used to derive other symbols.
Can anyone explain this better (or more accurately)?  Is this lack of
diffusion?  Or does it have something to do with the unicity distance?


You've pretty much got it. In order for an OTP to work you simply need what I 
commonly refer to as an overflow of entropy. The source of this entropy 
doesn't matter, and it can come from the plaintext as much as from 
the key. This extends the unicity distance (as you noted) and can render it 
impossible to decrypt.
   Joe 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: [EMAIL PROTECTED]: Skype security evaluation]

2005-10-23 Thread Joseph Ashwood
- Original Message - 
Subject: [Tom Berson Skype Security Evaluation]


Tom Berson's conclusion is incorrect. One needs only to take a look at the
publicly available information. I couldn't find an immediate reference
directly from the Skype website, but it uses 1024-bit RSA keys, and the 
coverage of the breaking of 1024-bit RSA has been substantial. In the end, 
the security is flawed. Of course I told them this years ago, when I told 
them that 1024-bit RSA should be retired in favor of larger keys, and 
several other people told them as well.

   Joe



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SecurID and garage door openers

2005-10-18 Thread Joseph Ashwood
- Original Message - 
From: Travis H. [EMAIL PROTECTED]

Subject: SecurID and garage door openers



Similarly, how do those garage door openers with rolling codes work,
given that the user may have pressed the button many times
accidentally while out of range of the receiver?


My understanding is that since it is a purely monotonic counter it is plenty 
possible to do one of two things:
send {counter, data} instead of {data}; the receiver stores the last counter 
seen to avoid replays (a rough sketch of this option follows below)
have the receiver just keep counting forward for a while (not a good idea)

Is there any interest in reviewing the security of consumer-level
devices?


I'd be willing to take a look at the protocol, but dissection is not my 
specialty.



PS: How many cypherpunks does it take to open a garage door?
http://www.cap-lore.com/Garage/


Currently, not very many. Even with proper designs openly published it would 
still be not very many, because not very many companies will use them. 
However, it could be a useful way for some cypherpunks to make some extra 
money. Anyone else up for it? And how about the car alarm?
   Joe 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Possibly new result on truncating hashes

2005-08-02 Thread Joseph Ashwood
- Original Message - 
From: John Kelsey [EMAIL PROTECTED]

Subject: Possibly new result on truncating hashes



How could this work?  Suppose we have an algorithm like the
Wang attacks on MD5, SHA0, or SHA1 for finding a single
collision pair.  The algorithm returns a single collision
pair on the first 160 bits of SHA256 for (say) 2^{64} work.
(Remember that this is just an example--I don't have any
such algorithm!)  Each time the algorithm is run, it gives a
new, unrelated collision pair, and the remaining 96 bits are
completely randomized by the collision pair.

Now, this is an attack on SHA256 truncated to 160 bits.
Does it lead to an attack on SHA256 as a whole?


Actually it does. Such an attack would reduce the difficulty of producing a 
collision in SHA-256 to 2^(64+(96/2)) or 2^112. The math for this is fairly 
easy, the remaining 96 bits will collide in on average 2^(96/2) tries, since 
it takes 2^64 work for each of these tries, we get 2^112 work, hence an 
attack on the original hash has been found.



Let's do the counting argument:  Each time we call the
160-bit collision algorithm, we get a new pair which has the
same first 160 bits of SHA256 output, and random unrelated
last 96 bits of SHA256 output.  Each pair has a probability
of 2^{-96} of colliding in the remaining bits.  So, to get a
collision on the whole SHA256 using this 160-bit collision
algorithm, we expect to have to try about 2^{96} collision
pairs


There's the mistake. To find a collision in the remaining bits requires 
2^(96/2) work, not 2^96 work. For a chosen initial value you will of course 
have the 2^96 work, but there you'll only have 2^(64+96) work instead of 
2^256, the attack still works.
   Joe 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: EMV [was: Re: Why Blockbuster looks at your ID.]

2005-07-15 Thread Joseph Ashwood
- Original Message - 
From: Victor Duchovni [EMAIL PROTECTED]

Subject: Re: EMV [was: Re: Why Blockbuster looks at your ID.]



Whose losses do these numbers measure?

- Issuer Bank?

- Merchant?

- Consumer?

- Total?


I'd say that you've fairly well hit the nail on the head. I've actually been 
meaning to reply to this for about a week now. The truth is that each credit 
card transaction actually has either 3 or 4 parties: User U, Merchant M, 
Credit Card Issuer CCI, and Merchant Insurer MI (this is simplified; there 
are generally multiple parties under CCI).


Under legitimate circumstances the process is fairly simple; Legitimate User 
LU agrees to pay CCI, CCI already has an agreement to pay M, and M supplies 
the product/service to LU. During billing LU pays CCI, CCI pays M, everyone 
is happy.


Things are different in the case of False User FU. FU goes to M, FU agrees 
for LU to pay CCI, CCI (believing FU is LU) agrees to pay M, M supplies the 
product/service to FU. During billing is where things get strange. LU 
reports the bad transaction to CCI. CCI informs M and does not pay M. FU 
gets the product, M accepts the loss. In the normal case MI and M are the 
same entity so the buck stops there; if MI is separate from M, then MI 
reimburses M for some portion.


It's important to understand exactly who loses what when FU is in the 
picture. CCI loses the commission, generally a small flat fee on the order of 
$0.35 plus a percentage, generally 2%; this is not a large amount to lose, 
and the phone call to report the problem, followed by the filing and tracking 
of the correct paperwork, actually costs more than is lost. That is the 
ACTUAL loss for CCI. MI loses the cost of the product/service reimbursed. LU 
loses basically nothing except time. FU obviously gains.


The point being that expecting CCI to foot a multi-billion dollar bill to 
change the process so that MI doesn't lose the money doesn't make sense. CCI 
will only work to increase CCI's profits. It is up to MI to pay for the 
upgraded systems by working with CCI towards CCI's goals (fewer losses for MI 
also means fewer reports to CCI, so fewer losses). LU may be willing to foot 
part of the bill for the perceived improvements, CCI will only foot the 
portion that is in CCI's favor, and MI will have to foot the majority of the 
bill and will only do so when it is in MI's favor. With credit card fraud 
decreasing, it is not in MI's favor to examine it at this time.
   Joe 




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: how to phase in new hash algorithms?

2005-03-21 Thread Joseph Ashwood
- Original Message - 
From: Steven M. Bellovin [EMAIL PROTECTED]
Subject: how to phase in new hash algorithms?


We all understand the need to move to better hash algorithms than SHA1.
At a minimum, people should be switching to SHA256/384/512; arguably,
Whirlpool is the right way to go.  The problem is how to get there from
here.
...
So -- what should we as a community be doing now?  There's no emergency
on SHA1, but we do need to start, and soon.
Phase 1 is to change the hash function choice from implicit to explicit. 
Specifically, instead of having hash = "457253W4568MM48AWA2346", move to 
hash = "SHA-1:lq23rbp8yaw4tilutqtipyu.."

Then over time ratchet down the default.
There is also an easy argument that it may be beneficial to skip SHA-256 
entirely. The argument put succinctly is:
64-bit computing is arriving
on 64-bit systems SHA-512 is nearly twice as fast as SHA-256 (crypto++ 
benchmarks).
SHA-512 is at least as strong, and faster.
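
A rough sketch of the phase-1 format (the tag layout, names, and hex 
encoding are my choices for illustration, not from any standard): the 
stored value carries the algorithm name, so the default can be ratcheted 
down later without breaking old values.

import java.security.MessageDigest;

public class TaggedHashSketch {
    // Produce "SHA-512:<hex>" style values; 'algorithm' is a JCA name.
    static String tag(String algorithm, byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance(algorithm).digest(data);
        StringBuilder out = new StringBuilder(algorithm).append(':');
        for (byte b : digest) out.append(String.format("%02x", b));
        return out.toString();
    }

    // Verification reads the tag back out, so old SHA-1 values keep
    // verifying while new values are written with the current default.
    // Assumes a well-formed "name:hex" value.
    static boolean matches(String tagged, byte[] data) throws Exception {
        int colon = tagged.indexOf(':');
        return tag(tagged.substring(0, colon), data).equals(tagged);
    }
}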
   Joe 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: comments wanted on gbde

2005-03-06 Thread Joseph Ashwood
- Original Message - 
From: Steven M. Bellovin [EMAIL PROTECTED]
Subject: comments wanted on gbde

I'll just deal with it piece by piece.
Page 3: "decrypting and re-encrypting an entire disk would likely take more 
than a day with currently available hardware" is wrong. Assuming 256-bit 
AES, using a two-pass encryption system, on a 2.1 GHz Pentium 4 (currently 
below low end) this would equal a disk of over 1000GB (numbers taken from 
Crypto++ benchmarks); personally I don't have a laptop with that kind of 
space. I have a rackmount server with that space, and my desktop is getting 
close, but nowhere near full. Even by page 3 I'm becoming doubtful about 
the abilities and there isn't any real information yet.

Page 3 again: "system supports multiple access paths". Clearly this means 
that either the entire disk, or each block or sector or whatever, has an 
encrypted key; this is how they will meet the false criteria above, and the 
multiple access paths. Of course this is a standard technique, and has been 
around for ages, nothing new here.

Page 4: "It has been said that there is only one really hard problem in 
cryptography: the problem of key management. GBDE does not try to address 
that." They're already admitting to heavy flaws, downplaying the abilities 
of the design, not a good sign.

Page 5 finally begins the actual information.
Page 5: "plaintext sector data should be encrypted with one-time-use 
(pseudo-)random keys" serves no purpose if a strong mode is used.  The only 
purpose this serves is to slow the system down as additional searches have 
to be made. This is claimed to provide protection for when AES is broken. 
It offers nothing except wasted cryptographic and disk overhead.

Page 5: "RSA2/512" as a strong one-way hash. Just in case I was wrong and 
this does exist, I ran a quick Google search for it, and found that the only 
references to it are in reference to this paper. I suppose they meant 
SHA-512, which further raises the question: why are they using a hash 
function at all? Obviously this is where the decrypt/encrypt taking more 
than a day came from; SHA-512 is slow. Using 256-bit AES in a properly 
secure mode with a MAC would have been better cryptographically, better for 
storage, computationally easier, and would have substantially reduced the 
window of exposure (a break of AES-256 would have been required instead of 
a break of either AES-128 or SHA-512). Further, the choice of MD5 as a "bit 
blender" is extremely questionable, and again it would have created a much 
smaller window of exposure to use an AES CBC-MAC with a known key and IV. 
Obviously this was not built by someone with anything more than a 
rudimentary knowledge of cryptography.

Still on page 5, (apparently unkeyed) MD5 is used as a MAC of the lock 
sector. Very, very poor security decision.

Using variable formatting. How many times do we have to kill this before it 
stays dead? Obviously more, probably many more. Variable formatting gains 
you nothing, consumes entropy, and consumes CPU time; it is also often the 
foundation for the break.

Page 6 covers the paranoia of "plausible deniability". Strangely, although 
they seem to imply that the current GBDE does this, they apparently have 
chosen not to even begin covering it, so most likely they either make no 
real attempt at it, it doesn't work, or both.

Page 6, section 7.4 covers using MD5 to make sure the performance is as bad 
as possible by making it impossible to precompute where data will be.

The footnote on page 6 should read "In both cases the encoded sector address 
was placed in the middle to ensure the worst possible performance; 
cryptographically it makes no difference."

In the Known Weaknesses section they forget something important: THERE IS NO 
MAC ON THE DATA. This means that there is no possibility of detecting an 
attack, no possibility of determining what has been tampered with. In short 
the real level of security fails miserably.
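
For scale, adding a MAC over each sector is essentially one JCE call; a 
minimal sketch (HMAC-SHA-256 and all names here are my choices for 
illustration, not anything taken from the GBDE paper):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SectorMacSketch {
    // Tag = HMAC(key, sectorNumber || sectorData); verify the tag before
    // trusting a decrypted sector, so tampering is at least detectable.
    static byte[] sectorTag(byte[] macKey, long sectorNumber, byte[] sectorData)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(macKey, "HmacSHA256"));
        for (int i = 7; i >= 0; i--) mac.update((byte) (sectorNumber >>> (8 * i)));
        return mac.doFinal(sectorData);
    }
}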


Section 16 likes to hint that David Wagner and Lucky Green peer reviewed 
this paper. Assuming this is true, my best guess is that the authors mistook 
a statement of "This should be secure" for a statement of "This is good", 
that is unless there are enormous holes in the paper.


GBDE is built on concepts that are nonsensically put together, the design 
itself is out of date by at least a decade, and makes a wide number of 
extremely amateur decisions.


The most important point to make is that this paper shows rather well that 
while developing something secure is difficult, something that is good 
and secure is substantially more difficult.

   Joe
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: ATM machine security

2005-02-22 Thread Joseph Ashwood
- Original Message - 
From: Lee Parkes [EMAIL PROTECTED]
Subject: ATM machine security


Hi,
I'm working on a project that requires a benchmark against which to judge
various suppliers. The closest that has similar requirements is the ATM
industry. To this end I'm looking for any papers, specifications or 
published
attacks against ATM machines and their infrastructure. I'm also looking 
for what
type of networks they use and the crypto they use to protect comms.
Also any standards would be good that the ATM industry has to adhere to.
Not directly what you are looking for, but the BITS standard is applied by 
banks so it should probably fit well within your needs; it is available at 
http://www.bitsinfo.org/ . Also, if that doesn't work, I've been considering 
writing a security requirements document for Trust Laboratories, and a solid 
project would give me an incentive.
   Joe 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SHA1 broken?

2005-02-22 Thread Joseph Ashwood
- Original Message - 
From: Joseph Ashwood [EMAIL PROTECTED]
Sent: Friday, February 18, 2005 3:11 AM

[the attack is reasonable]
Reading through the summary I found a bit of information that means my 
estimates of workload have to be re-evaluated. Page 1: "Based on our 
estimation, we expect that real collisions of SHA1 reduced to 70-steps can 
be found using today's supercomputers." This is a very important statement 
for estimating the real workload. Assuming there is an implicit "in one 
year" in there, and assuming BlueGene (Top 500 list slot 1), this represents 
22937.6 GHz*years, or slightly over 2^69 clock cycles; I am obviously still 
using gigahertz because the information gives us nothing better to work 
from. This clearly indicates that the operations used for the workload span 
multiple processor clocks. Performing a gross estimation based on pure 
guesswork, I'm guessing that my numbers are actually off by a factor of 
between 50 and 500; this factor will likely work cleanly in either adjusting 
the timeframe or production cost.

My suggestion though, to make a switch away from SHA-1 as soon as reasonable 
and to prepare to switch hashes very quickly in the future, remains the same. 
The march of processor progress is not going to halt, and the advance of 
cryptographic attacks will not halt, which will inevitably squeeze SHA-1 to 
broken. I would actually argue that the 2^80 strength it should have is 
enough to begin its retirement; 2^80 has been "strong enough" for a decade 
in spite of the march of technology. Under the processor speed enhancements 
that have happened over the last decade we should have increased the 
keylength already, to accommodate dual core chips running at 20 times the 
speed for a total of 40 times the prior speed (I was going to use SPEC data 
for a better calculation but I couldn't immediately find specs for a Pentium 
Pro 200), by adding at least 5 bits, preferably 8, to our necessary 
protection profile.
   Joe 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SHA1 broken?

2005-02-22 Thread Joseph Ashwood
- Original Message - 
From: Dave Howe [EMAIL PROTECTED]
Subject: Re: SHA1 broken?


  Indeed so. However, the argument "in 1998, an FPGA machine broke a DES 
key in 72 hours, therefore TODAY..." assumes that (a) the problems are 
comparable, and (b) that Moore's law has been applied to FPGAs as well as 
CPUs.
That is only misreading my statements and missing a very large portion where 
I specifically stated that the new machine would need to be custom instead 
of semi-custom. The proposed system was not based on FPGAs; instead it would 
need to be based on ASICs engineered using modern technology, much more 
along the lines of a DSP. The primary gains available are actually from the 
larger wafers in use now, along with the transistor shrinkage. Combined 
these have approximately kept the cost in line with Moore's law, and the 
benefits of custom engineering account for the rest. So for exact details 
about how I did the calculations: I assumed Moore's law for speed, and an 
additional 4x improvement from custom chips instead of off the shelf. In 
order to verify the calculations I also redid them assuming DSPs which 
should be capable of processing the data (specifically from TI), and I came 
to a cost within a couple orders of magnitude, although the power 
consumption would be substantially higher.
   Joe 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SHA-1 cracked

2005-02-17 Thread Joseph Ashwood
- Original Message - 
From: Steven M. Bellovin [EMAIL PROTECTED]
Subject: SHA-1 cracked

It's probably not a practical
threat today, since it takes 2^69 operations to do it
I will argue that the threat is realizable today, and highly practical. It 
is well documented that in 1998 RSA Security's DES Challenge II was broken 
in 72 hours by $250,000 worth of custom machinery. Scale this forward to 
today, and with $500,000 worth of custom equipment 2^69 is not out of reach 
for 3 days worth of work. So assuming that your attackers are smallish 
businesses, you have 3 days of security, and against large businesses with a 
vested interest in breaking your security you are looking at minutes if not 
seconds before a break.

While most uses of SHA-1 actually end up searching for collisions against 
fixed outputs (e.g. given A find B such that A != B and SHA1(A) == SHA1(B)), 
this attack does not immediately cause the collapse of all e-commerce.

This attack means that we need to begin the process for a quick and painless 
retirement of SHA-1 in favor of SHA-256/384/512 in the immediate future and 
begin further preparations to move to Whirlpool and other hashes in the near 
future. I say this because with MD5 completely broken, SHA-0 effectively 
completely broken, and SHA-1 showing big cracks, the entire SHA series is in 
doubt, and needs to be heavily reconsidered, otherwise we're looking at a 
continuing failure of hash functions apparently in a yearly fashion until we 
run out of the SHA series.
   Joe

Trust Laboratories
Changing Software Development
http://www.trustlaboratories.com 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Simson Garfinkel analyses Skype - Open Society Institute

2005-01-26 Thread Joseph Ashwood
- Original Message - 
From: David Wagner [EMAIL PROTECTED]
Subject: Simson Garfinkel analyses Skype - Open Society Institute


In article [EMAIL PROTECTED] you write:
Is Skype secure?
The answer appears to be, no one knows.  The report accurately reports
that because the security mechanisms in Skype are secret, it is impossible
to analyze meaningfully its security.
Actually that is not entirely true. While Skype has been getting more than 
its fair share of publicity lately surrounding its security, the truth is 
that shortly after its first release I personally had a discussion in their 
forums (it should still be there; if you find something by holomntn that's 
the correct one, I haven't discussed anything since). In that discussion it 
was shown that they clearly did not have a solid grasp on security, nor 
apparently had any of them read the SIP specification. During that 
conversation, and some future private ones, it has been revealed to me that 
Skype's security is questionable at best, and that they are in fact 
basically relying on security through obscurity. It is likely that this will 
work for quite some time simply because most IM conversations, and most 
phone conversations for that matter, are simply not worth listening to.

With that said, in their favor they do have substantial qualities. Because 
they effectively form a routed network, an intermediate eavesdropping attempt 
will have to sort through a substantial amount of undesired traffic (see 
Rivest on wheat and chaff for an explanation of the security offered); this 
is possible because although there are security holes, the end stream is 
difficult to distinguish from random (AES/CBC). This creates a substantial 
boost in the amount of effort required to acquire a stream of significance 
unless the endpoints are known. The other big thing in their favor is that 
apparently very few people want to be bothered with analysing the security; 
basically, if no one is looking it is secure. Additionally, in version 1.1 
Skype appears to have begun providing a moving target for a break. Between 
version 1.0 and 1.1 Skype performed some changes to the protocol, and while 
I do not know the exact nature of these, even a simple investigation of the 
GUI shows some changes (IM someone with a different version and you will be 
cautioned about protocol changes even though security is not listed). This 
moving target creates the possibility to generate some security through 
obscurity, and the ability to upgrade the security at a moment's notice.

Working against them: the biggest thing working against them is that a 
growing number of teenagers are using Skype (a significant portion of 
Gunderson High School in San Jose, CA actually uses Skype during class, and 
has been busted by me for it). This poses a substantial risk for common 
hacking to occur, and it is something that I am unclear on whether or not 
Skype has prepared for. As the general populace begins to use Skype more, 
the security question becomes of greater importance (reference the attacks 
on Windows that go on every day).

With all that said it is important to note that I have no access to the 
current Skype protocol and I only briefly had limited access to an early 
one, so my analysis may be substantially off.
   Joe 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: 3DES performance

2004-12-09 Thread Joseph Ashwood
- Original Message - 
From: Lee Parkes [EMAIL PROTECTED]
Subject: 3DES performance


I'm working on a project for a company that involves the use of 3DES. They 
have
asked me to find out what the overheads are for encrypting a binary file. 
There
will be quite a lot of traffic coming in (in the region of hundreds of
thousands of files per hour). Has anyone got any figures for 3DES 
performance?
I've tried bdes on OpenBSD which has given me some useful results.
Good estimates for the speed of many algorithms can be found at 
http://www.eskimo.com/~weidai/benchmarks.html ; while the system is a bit 
old, the numbers are still relatively valid, considering that you will have 
other overheads involved. Just to save you a trip, 3DES comes in at around 
10 MByte/second, and AES at up to 6 times that speed.
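
If numbers for the actual target machine are wanted, a rough measurement is 
only a few lines of JCE; a minimal sketch (crude timing, throwaway key and 
IV, and the printed throughput is only a ballpark):

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class TripleDesBenchSketch {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("DESede").generateKey();
        Cipher c = Cipher.getInstance("DESede/CBC/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(new byte[8]));

        byte[] chunk = new byte[1 << 20];   // 1 MB of zeroes
        int megabytes = 64;                 // total data to push through
        long start = System.currentTimeMillis();
        for (int i = 0; i < megabytes; i++) c.update(chunk);
        long ms = Math.max(1, System.currentTimeMillis() - start);
        System.out.println((megabytes * 1000.0 / ms) + " MB/s, very rough");
    }
}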
   Joe

Trust Laboratories
Changing Software Development
http://www.trustlaboratories.com 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Kerberos Design

2004-09-06 Thread Joseph Ashwood
I'm currently looking into implementing a single sign-on solution for
distributed services.
Be brave, there are more convolutions and trappings there than almost
anywhere else.
Since I'm already using OpenSSL for various SSL/x.509 related things,
I'm most astonished by the almost total absence of public key
cryptography in Kerberos, and I haven't been able to find out why this
design choice was made - performance reasons, given that at its
inception public key operation cost was probably much more prohibitive?
Actually the primary reason I've heard had more to do with the licensing
costs (at the time they were not free) than with anything else. You will
however find PKI extensions to Kerberos; I don't remember the RFC off-hand.
- Is there a good web/book/whatever resource regarding the design
  of Kerberos? Amazon offers the O'Reilly book, which, from the
  abstract, seems to take the cryptographic design of Kerberos as
  a given and concentrates on its usage, and another one that also
  doesn't seem to give much detail on the issue. Something in the
  direction of EKR's SSL/TLS book would be very much appreciated.

From my understanding Kerberos was originally thrown together at MIT, then
it was broken, and patched, and broken and patched, until it was relatively
recently qualified to be implemented in Windows, so you're not likely to
find much in the way of well thought-out arguments governing the little
details. In fact many of the decisions seem to be based on "My pet project
is . . . ."
- Is Kerberos a sane choice to adapt for such solutions today?
  Is there anything more recent that I should be aware of?
Kerberos is a very sane choice; it may not be the cleanest design ever but
it has withstood a great deal of analysis. Actually, I was a member of a
group that was working on a replacement for Kerberos because of its age and
potential issues in the future, but we fell into substantial disarray, and
eventually it collapsed. Given this, I can confidently say that it is
unlikely that you will find something in the Kerberos vein that is newer.
   Joe

Trust Laboratories
Changing Software Development
http://www.trustlaboratories.com 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: On hash breaks, was Re: First quantum crypto bank transfer

2004-08-24 Thread Joseph Ashwood
- Original Message - 
From: Jerrold Leichter [EMAIL PROTECTED]
Subject: Re: On hash breaks, was Re: First quantum crypto bank transfer


| (they all have backup
| plans that involve the rest of the SHA series and at the very least
| Whirlpool).
Moving to a larger hash function with no underlying theory isn't very far from
the million-bit key algorithms you see all over the place.  Bigger probably
can't be worse, but is it really better?
The key expansion problem is why the rest of the SHA series is present, and 
Whirlpool is present because of the fundamental flaw problem. The truth is 
that having a diversity of options for this is simple enough; it takes only 
a small amount of additional work to allow a cryptographic function to be 
easily replaced, and making it replaceable by 1000 is only marginally more 
difficult than by 2. The four I listed are well-built, which is why they are 
the recommended ones.

Suppose a year ago I offered the following bet:  At the next Crypto, all but
one of the widely-discussed hash functions will be shown to be fundamentally
flawed.  What odds would you have given me?
I think it would be important to change the phrasing a bit to make the odds 
more quantifiable, simply change "At the next Crypto" to "By the end of the 
next Crypto". With that said, considering history, I would've put the odds 
at ~~5:1 (current hash functions seem to be broken quite often, and being 
the house I want the odds in my favor). But you are correct in that this 
represents a major advance in the state of the art, one that has caught 
large portions of the security community completely blind. I simply took the 
opportunity to push the concept of good business planning into this as a way 
that allows a good escape plan should anything happen.

What odds would you have given me
on the following bet:  At the next Crypto, an attack against AES that is
substantially better than brute force will be published?  If the odds were
significantly different, how would you have justified the difference?
Very different odds actually; we as a group have a much better understanding 
of block ciphers than hash functions, as evidenced by the just published "4 
for the price of 2" break (cryptography list post by Hal Finney, Subject: 
More problems with hash functions, 8/20/2004). However AES has one of the 
smallest security margins available, so let's put it around 10:1; I really 
don't expect a break, but I would not be excessively shocked to see one 
made. It is for this very reason that again I recommend to all my clients 
that they have backup plans here as well: all the AES finalists, and 
Camellia because of its NESSIE selection.


Let's update the question to today:  Replace widely-discussed hash functions
with SHA-1 and the related family.  Keep the AES bet intact.  But let's go
got
out 5 years.  Now what odds do you give me?  Why?
SHA series 1:1
AES   3:1
Whirlpool   3:1 (even though it wasn't asked)
Camellia 3:1
Of SHA and Whirlpool being felled by the same attack in the next 5 years 
100:1
AES and Camellia by the same attack within 5 years 30:1

SHA in five years because the SHA methodology is showing some cracks; there 
are only minor differences between SHA-0 and SHA-1, and the differences 
between SHA-1 and SHA-256/384/512 are basically just matters of scale. I 
expect to see a major break against the methodology within 10 years, and 
with the current renewed interest in hash functions I expect the manpower to 
be available very soon to find that break.

AES is a very solid algorithm, but its security margin is too close for me; 
this is always solid evidence that a break may be just around the corner. 
That the evidence is that various agencies don't have a break is irrelevant; 
the current evidence is that the general cryptographic community is < 10 
years behind and gaining quickly.

Whirlpool has the same odds as AES because the underlying cipher is based on 
the same methodology, by the same people, so if it has a flaw it is likely 
to be extremely similar.

Camellia simply does not have the examination behind it that the AES 
finalists do, something that makes me nervous and why it is only a backup 
algorithm.

SHA and Whirlpool are unlikely to fall at the same time because they have 
fundamentally different cores: SHA is a hash-constructed primitive, 
Whirlpool a block cipher constructed primitive based on a chaining mode. 
This makes the odds of a single attack felling both slim at best. This one 
is probably slanted too far in my favor.

AES and Camellia falling to the same attack is more likely because the tools 
against block ciphers generally cross design boundaries, and the differences 
between the styles of Camellia and AES are simply not great enough to 
prevent this. The difference in the styles though represents the additional 
3.333:1 odds.

All my odds on this are conservative and based on sloppy meanings (you and I 
may have very different meanings for 

Re: Question on the state of the security industry (second half not necessarily on topic)

2004-07-01 Thread Joseph Ashwood
- Original Message - 
From: Ian Grigg [EMAIL PROTECTED]
Subject: Question on the state of the security industry


 Here's my question - is anyone in the security
 field of any sort of repute being asked about
 phishing, consulted about solutions, contracted
 to build?  Anything?

I am continually asked about spam, and I personally treat phishing as a
subset of it, but I have seen virtually no interest in correcting the
problem. I have personally been told, I don't even know how many times, that
phishing is not an issue.

I personally know it's an issue because between my accounts I receive ~3-5
phishing attempts/day, and the scams apparently account for a major portion
of the GNP of many small countries.

 Or, are security professionals as a body being
 totally ignored in the first major financial
 attack that belongs totally to the Internet?

 What I'm thinking of here is Scott's warning of
 last year:

Subject: Re: Maybe It's Snake Oil All the Way Down
At 08:32 PM 5/31/03 -0400, Scott wrote:
...
When I drill down on the many pontifications made by computer
security and cryptography experts all I find is given wisdom.  Maybe
the reason that folks roll their own is because as far as they can see
that's what everyone does.  Roll your own then whip out your dick and
start swinging around just like the experts.

 I think we have that situation.  For the first
 time we are facing a real, difficult security
 problem.  And the security experts have shot
 their wad.

 Comments?

In large part that's the way it looks to me as well. We have an effectively
impotent security community, because all the solutions we've ever made
either didn't work, or worked too well. We basically have two types of
security solutions: the ones that are referred to as "That doesn't work, we
had it and it did everything it shouldn't have" and those that result in "I
don't think it works, but I can't be sure because we were never attacked."
The SSL/TLS protocol is an example of this second type; I am unaware of any
blackhats that bother attacking SSL/TLS because they simply assume it is
impenetrable. At the same time we have the situation where Windows is
continually attacked, not because it is less secure than the others, but
because it is _believed_ to be less secure than the others, so the Windows
security is clearly of the first type. The biggest problem I've seen is that
we're dealing with generally undereducated people as far as security goes.
We need to start selling that we facilitate a business process, and that
because of this all you will see are the failures; the successes will almost
always be invisible.

Also, as with all business processes, there is never a final state; it must
be often reanalyzed and revised. This puts us in a rather strange situation,
where something that I have always offered becomes important: we become an
outsourced analyst, almost an auditor situation. To build this properly the
security model that is constructed needs to be built to include emergency
thresholds and revision timeframes. By supporting the security process as a
business process it allows the concepts to more easily permeate the CXO
offices, which means that you are far more likely to make more money, build
a long term client, and create a strong security location.

To make the point clearer, I have ended up with clients that were previously
with better known cryptanalysts, including some worldwide names. These
clients had been told by their previous consultants that their security is
good, but their consultant never told them that it needs reanalysis; they
never encouraged the creation of a business process around it, it was always
"Ask me when you have questions." I did not poach these clients; they left
their previous consultants, and found me through referrals. These
relationships are extremely profitable for me, for many reasons; I actually
cost less than their prior consultants, but I make more, because everything
is done quickly, efficiently, and effectively.

This security process builds stronger security, and while I admit I am still
rarely asked about phishing, and even more rarely is my advice listened to,
my clients are rarely successfully hacked, and have lower than average
losses.

Our biggest problem is that we view the security process as distinct from
business processes. I truly wish I could make the Sarbanes-Oxley 2002
(http://news.findlaw.com/hdocs/docs/gwbush/sarbanesoxley072302.pdf) act
required reading for every security consultant, because it demonstrates very
much that proper security consulting is actually a business process.

Getting back to the topic, by doing this we can help them move from the
"dick swinging" phase to a best-practices security infrastructure used
accurately and appropriately. We also need to start putting our money where
our mouth is. I've seen too many security consultants whose primary job
was to sell the add-on services available from their employer; instead we
need to follow 

Re: A National ID: AAMVA's Unique ID

2004-06-18 Thread Joseph Ashwood
- Original Message - 
From: John Gilmore [EMAIL PROTECTED]
[EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, June 17, 2004 10:31 AM
Subject: Re: A National ID: AAMVA's Unique ID


  The solution then is obvious, don't have a big central database. Instead
  use a distributed database.

 Our favorite civil servants, the Departments of Motor Vehicles, are about
 to do exactly this to us.

 They call it Unique ID and their credo is: One person, one license,
 one record.  They swear that it isn't national ID, because national
 ID is disfavored by the public.  But it's the same thing in
 distributed-computing clothes.

I think you misunderstood my point. My point was that it is actually
_easier_, _cheaper_, and more _secure_ to eliminate all the silos. There is
no reason for the various silos, and there is even less reason to tie them
together. My entire point was to put my entire record on my card; this
allows faster look-up (O(1) time versus O(lg(n))), greater security (I
control access to my record), it's cheaper (the cards have to be bought
anyway), it's easier (I've already done most of the work on defining them),
and administration is easier (no one has to care about duplication).

 This sure smells to me like national ID.

I think they are drawing the line a bit finer than either of us would like.
They don't call it a national ID because a national ID would be run by the
federal government; being run instead by state governments, it is a state
ID, linked nationally.

As I said in the prior one, I disagree with any efforts to create forced ID.

 This, like the MATRIX program, is the brainchild of the federal
 Department of inJustice.  But those wolves are in the sheepskins of
 state DMV administrators, who are doing the grassroots politics and
 the actual administration.  It is all coordinated in periodic meetings
 by AAMVA, the American Association of Motor Vehicle Administrators
 (http://aamva.org/).  Draft bills to join the Unique ID Compact, the
 legally binding agreement among the states to do this, are already
 being circulated in the state legislatures by the heads of state DMVs.
 The idea is to sneak them past the public, and past the state
 legislators, before there's any serious public debate on the topic.

 They have lots of documents about exactly what they're up to.  See
 http://aamva.org/IDSecurity/.  Unfortunately for us, the real
 documents are only available to AAMVA members; the affected public is
 not invited.

 Robyn Wagner and I have tried to join AAMVA numerous times, as
 freetotravel.org.  We think that we have something to say about the
 imposition of Unique ID on an unsuspecting public.  They have rejected
 our application every time -- does this remind you of the Hollywood
 copy-prevention standards committees?  Here is their recent
 rejection letter:

   Thank you for submitting an application for associate membership in
AAMVA.
   Unfortunately, the application was denied again. The Board is not clear
as
   to how FreeToTravel will further enhance AAMVA's mission and service to
our
   membership. We will be crediting your American Express for the full
amount
   charged.

   Please feel free to contact Linda Lewis at (703) 522-4200 if you would
like
   to discuss this further.

   Dianne
   Dianne E. Graham
   Director, Member and Conference Services
   AAMVA
   4301 Wilson Boulevard, Suite 400
   Arlington, VA 22203
   T: (703) 522-4200 | F: (703) 908-5868
   www.aamva.org http://www.aamva.org/

 At the same time, they let in a bunch of vendors of high security ID
 cards as associate members.

Well then, create a high-security ID card company and build it on the
technology I've talked about. It's fairly simple: file the paperwork to
create an LLC with you and Robyn; the LLC acquires a website, which can be
co-located at your current office location; the website talks about my
technology, how it allows the unique and secure identification of every
individual, blah, blah, blah; get a credit card issued in the correct name.
They'll almost certainly let you in; you'll look and smell like a valid
alternative (without lying, because you could certainly offer the
technology). If you really want to make it look really good, I'm even
willing to work with you on filing a patent, something that they'd almost
certainly appreciate.

 AAMVA, the 'guardians' of our right to travel and of our identity
 records, doesn't see how listening to citizens concerned with the
 erosion of exactly those rights and records would enhance their
 mission and service.

Of course it won't; their mission and service is to offer the strongest
identity link possible in the ID cards issued nation-wide, so the citizen's
course of action has to be to govern the states issuing these
identification papers. However, you could offer them technology that
actually makes their mission and service cheaper, more effective, and, as a
side benefit, better for their voters. Besides, if you can't beat them (you 

Re: recommendations/evaluations of free / low-cost crypto libraries

2004-06-18 Thread Joseph Ashwood
- Original Message - 
From: Amir Herzberg [EMAIL PROTECTED]
Subject: recommendations/evaluations of free / low-cost crypto libraries


 I will appreciate experience-reports/evaluations/comparisons with free
 or low cost (and in particular  zero `per seat` cost) crypto libraries,
 especially in C / C++ (or links to web-sites containing them).

Generally the two most suggested free products are Crypto++
(http://www.eskimo.com/~weidai/cryptlib.html) and OpenSSL
(http://www.openssl.org/). I have used both, and both are very good
toolsets; each has small advantages, but mostly it's just a programming
style preference.

As a personal preference, I generally prefer OpenSSL for most purposes,
because the interface feels better to me, but lately I've been using
Crypto++ more because it supports a wider selection of algorithms (including
the ECC variants, which are of extreme value to me right now).

I will say that if either of these was pay-ware, I would gladly pay for it.
Joe

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: threat modelling tool by Microsoft?

2004-06-10 Thread Joseph Ashwood
- Original Message - 
From: Ian Grigg [EMAIL PROTECTED]
Subject: threat modelling tool by Microsoft?


 Has anyone tried out the threat modelling tool
 mentioned in the link below, or reviewed the
 book out this month:

 http://aeble.dyndns.org/blogs/Security/archives/000419.php

I played with it for a bit. Short story: it crashed. Long version: it feels
very clunky and lacking in features. The output isn't very pretty either,
and is rather difficult to understand. Additionally, although it can find
users easily (in fact it already does this), it doesn't import them without
manual intervention. With a large user list, though, I suspect that the user
listing interface would become rather unusable.

With that said, for a small installation it should be fairly usable, and
certainly better than nothing. For a large installation, though, or a
situation where depth of security analysis is necessary, it will probably
become unwieldy, and it seems likely to collapse under its own weight.
Joe

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: A National ID

2004-06-03 Thread Joseph Ashwood
Although I am against any national ID, at least as far as terrorist
identification goes (note that the Social Security Number that every
American has IS a national ID card), I feel that a discussion of how to do
it properly is a worthwhile endeavor.

- Original Message - 
From: Peter Clay [EMAIL PROTECTED]
Subject: Re: A National ID


 [T]he real danger is not the cards but the database for which they
 are a unique key. See just about every issue of RISKS for ways in which
 big national databases can go wrong.

The solution then is obvious: don't have a big central database. Instead use
a distributed database. I first suggested this concept some time ago on
sci.crypt. It's very simple: use cryptography so we don't have to be
concerned about duplication (although fraudulent acquisition of a valid ID
would still be an issue). Issue each person a Flash RAM card; on the card is
biometric information, name, birthdate, etc., a Law Enforcement Only Field,
and a signature across all the information. Most importantly, DO NOT print
anything resembling what we currently see as an ID card (no picture, no
driver's license number, etc.); just print a name on the card for ease of
card identification. At this point (assuming the cryptography is good)
people can make as many copies as they'd like; it's not going to make any
difference.

The Law Enforcement Only Field (which I'll call the LEAF, for historical
reasons) serves a unique purpose: it is either a random number or an
encrypted old identity. There are several possible reasons for an old
identity: undercover police, witness protection, support for pseudonyms,
etc. This field allows the police, and only the police, to identify
undercover officers, and provides traceability back through the process of
granting a new identity to someone.
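
To make the card-centric verification concrete, here is a minimal sketch of
my own (not part of the proposal): the field names are invented for
illustration, and an HMAC over the canonicalized record stands in for the
public-key signature; a real deployment would use DSA/ECDSA so that
verifiers never hold the issuer's signing secret.

    import hashlib, hmac, json

    ISSUER_KEY = b"issuer-signing-key"   # placeholder; a real scheme uses an issuer key pair

    def canonical(record):
        # Serialize every field except the signature, in a fixed order.
        return json.dumps({k: v for k, v in record.items() if k != "sig"},
                          sort_keys=True).encode()

    def issue_card(name, birthdate, biometric_digest, leaf):
        record = {"name": name, "birthdate": birthdate,
                  "biometric": biometric_digest, "leaf": leaf}
        record["sig"] = hmac.new(ISSUER_KEY, canonical(record),
                                 hashlib.sha512).hexdigest()
        return record

    def verify_card(record):
        # O(1): the check needs only the card contents and the issuer's key,
        # never a lookup in a central database.
        expected = hmac.new(ISSUER_KEY, canonical(record),
                            hashlib.sha512).hexdigest()
        return hmac.compare_digest(expected, record.get("sig", ""))

    card = issue_card("J. Doe", "1970-01-01",
                      "sha512-of-biometric-template",
                      "random-or-encrypted-old-identity")
    assert verify_card(card)   # flip any bit of the record and this fails

Copying the card wholesale changes nothing, exactly as argued above, because
the biometric data bound into the signature still has to match the person
presenting the card.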

The most important part, though, is the search time required for verifying
an ID. In the case of a giant central database it is O(log(n)) time; with
the cryptographic ID it is O(1). This reduces the national overhead cost;
while a database is still necessary for reissuing, and a new signing setup
is required, the access requirements are reduced by several orders of
magnitude. Further reduction comes from the ability of each police precinct
to have its own local known database, as well as every bar/nightclub having
its own banned list without the possibility of cross-corruption, because
there is no direct link. This further increases the security because access
to the main database can even be restricted to key personnel. This personnel
access reduction will again lower the speed requirements for the central
database, probably down to the point where a single Oracle server with a few
terabytes of disk space could easily handle the load (I come up with a
horrible-case size of about 300 terabytes, and a minimum size of 70
gigabytes for storing only the signature and LEAF, because everything else
can be reconstructed). (Sizes assume a 1 MB maximum data set, and DSA/ECDSA
with SHA-512.)
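
Those sizes are easy to sanity-check. The per-record byte counts below are my
own rough assumptions (about 300 million records, an ECDSA signature of
roughly 130 bytes, on the order of 100 bytes for the LEAF), so treat this as
a back-of-the-envelope reproduction rather than part of the proposal:

    # Back-of-the-envelope check of the database sizes quoted above.
    PEOPLE = 300_000_000        # assumed number of records (roughly the US population)
    MAX_RECORD = 1_000_000      # 1 MB worst case per record, as stated above
    SIG_BYTES = 132             # assumed ECDSA (P-521 style) signature size
    LEAF_BYTES = 100            # assumed size of the Law Enforcement Only Field

    worst_case = PEOPLE * MAX_RECORD                  # full 1 MB record per person
    minimal    = PEOPLE * (SIG_BYTES + LEAF_BYTES)    # signature + LEAF only

    print(f"worst case: {worst_case / 1e12:.0f} TB")          # ~300 TB
    print(f"signature + LEAF only: {minimal / 1e9:.0f} GB")   # ~70 GB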

This would also have the knock-on effect of creating a small ID
customization industry; because the ID can take any form factor within
certain reasonable bounds, there is no reason it cannot be as customizable
as a cell phone.

As for security, this would put the citizen in general control of their
information, and with the minimum database size used would give the citizen
complete control over their own data. The additional overhead for the
current law enforcement databases would be minimal, each entry would only be
expanded by the size of the signature to mark the ID card.

The invasiveness for your average citizen would be minimized because there
is no chance of leakage between the big central database (which could be
very small) and the corner market, because the central database does not
have to be online.

Now as to the level of cryptographic security that would be necessary for
this. It is important to realize that the potential market for fraudulent ID
of this caliber would be massive, so a multi-decade, multi-trillion dollar
effort to break the key is not unreasonable. This poses a risk of a
magnitude that cryptanalysts really haven't dealt with. Even at the level of
protecting the drivel from Shrub II, the possibility of a multi-decade,
multi-trillion dollar attack is simply inconceivable, and it is important to
remember that this signature has to remain secure not for a few years, or
even a couple of decades; it has to remain secure for longer than the
longest conceivable lifespan of a human, which means 150 years (I've rounded
up from the record), a timeframe that we cannot even conceive of at this
time. A 100 trillion dollar, 150 year effort to break the security is simply
beyond our ability to predict cryptographically; with Celerons at about $35
per GHz right now, that timeframe works out to approximately 2^95 operations
(again being generous to the attacker), which already means that SHA-1
cannot be used, simply because the workload is available to

Re: The future of security (bulk reply, long)

2004-05-25 Thread Joseph Ashwood
I've moved this to the top because I feel it is the most important statement
that can be made. Hadmut said:
 Security doesn't
 necessarily mean cryptography.

- Original Message - 
From: Hadmut Danisch [EMAIL PROTECTED]
Subject: Re: The future of security


 On Mon, Apr 26, 2004 at 08:21:43PM +0100, Graeme Burnett wrote:
 
  Would anyone there have any good predictions on how
  cryptography is going to unfold in the next few years
  or so?  I have my own ideas, but I would love
  to see what others see in the crystal ball.

 - I don't expect that there will be much progress in
   maths and theory of cryptography. Very few inventions
   will make it out of the ivory tower, if any at all.

I actually expect quite the opposite: we seem to be reaching an age in
cryptanalysis where we are developing techniques faster than they can be
functionally applied, and the speed of development is only increasing. We've
gone from a time when we were seeing a new functional attack about every
five years (differential to linear), to having a number of potential new
avenues opened up just during the AES selection process. I expect this trend
to continue for a while, and the news that this generates should bring
greater light, and more active people, to studying cryptography. I expect
this trend to continue for approximately one human generation (about 20
years), but, human nature being what it is, the second human generation in
this timeframe will have substantially fewer cryptanalytic advances.

   Key lenghts will increase. We'll play RSA with
   4096 or 8192 bit.

Actually I'm seeing an increasing trend of moving away from RSA and DH
because the keys are becoming too big. The required key length to match the
strength of AES-256 is simply too large to offer functional speed; instead
we're going to have to switch over to asymptotically superior
encryption/decryption/signing/verifying algorithms. Because of this we
should see a major increase in the research money applied towards public key
techniques, and this, compounded with my expected increase in the number of
cryptanalysts, should result in some very interesting times.
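
For a sense of scale, here is a small sketch of the commonly published
comparable-strength estimates. These are the usual NIST-style figures,
supplied by me for context; they are not taken from Hadmut's message or from
anything above:

    # Commonly published comparable-strength estimates (approximate).
    comparable_strength = {
        # symmetric bits : (RSA/DH modulus bits, ECC key bits)
        80:  (1024,  160),
        112: (2048,  224),
        128: (3072,  256),
        192: (7680,  384),
        256: (15360, 512),
    }
    for sym, (rsa_dh, ecc) in comparable_strength.items():
        print(f"{sym}-bit symmetric ~ RSA/DH {rsa_dh} ~ ECC {ecc}")

A 15,360-bit RSA private-key operation is painfully slow, which is exactly
why the elliptic curve variants start to look unavoidable at the AES-256
level.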

 They will find that Quantum Computers
   may be fast, but still bound to computation complexity.

I agree.

 - SSL/TLS will become even more of a de facto standard in
   open source software and (new?) protocols. It will make
   it's way into the standard libraries of programming languages
   (e.g. as it did for Ruby).

Again I have to disagree with you; we're already seeing some backlash
against SSL/TLS, where many people are beginning to see the value in
protecting the data, not the link. This methodology fairly well eliminates
the usability of SSL/TLS, and the added complexity of the new PK algorithms
will almost certainly spell doom for the current protocols in use.

 - I don't expect that we'll ever have a common PKI for
   common people with a significant distribution. It's like
   with today's HTTPS: The big ones have commercial certificates,
   plain people use passwords and simple authentication mechanisms
   (like receiving a URL with a random number by e-mail).

Again I have to disagree. I can only speak for what Trust Laboratories is
doing, but we are at this moment working on projects that will lower the
necessary threshold for PKI implementations (through client proliferation).
This, combined with the already solidly known presence of NGSCB in the
majority of future PCs, should have the added effect that, while
Verisign-like PKI may remain unusual, the availability of what can be
treated as a smartcard in every computer will certainly increase the
availability of PKI to the common man.

 - I guess the most important crypto applications will be:

 - HTTPS of course

For the short term yes, but longer term I actually think that HTTPS will
diminish; in fact some measurements are already showing a trend where per
capita web usage is decreasing, so HTTP use may soon be decreasing, leading
to an obvious decrease in the usage of HTTPS. This, combined with the
protect-the-data-not-the-link movement, should have substantial further
impact.

 - portable storage equipped with symmetric ciphers
   such as USB-Sticks and portable hard disks.

Agreed, but I also think we'll start seeing distributed file systems; I know
we are working on them, and have already had some interest from companies.
These distributed file systems will make use of smart cards (although the
form factor WILL be different). With the proliferation of high-speed data
connections (US cell phones are already available at 150 Kbps, and 3G can
bring speeds of up to 1 Mbps; in the next few years WiMax, and great future
cell potential, e.g. Flarion), I suspect that removable storage will
actually decrease. That leaves moving those USB/removable drives over to
distributed file systems, or even in some cases p2p networks (more on this
from Trust Laboratories in the future), which will massively reduce cost.
I'm 

Re: Can Skype be wiretapped by the authorities?

2004-05-08 Thread Joseph Ashwood
- Original Message - 
From: Axel H Horns [EMAIL PROTECTED]
Subject: Can Skype be wiretapped by the authorities?


 Is something known about the details of the crypto protocol within
 Skype? How reliable is the encryption?

While Skype is generally rather protective of their protocol, there have
been leaks; in fact one leak that I am aware of was to me personally,
although I no longer have the protocol, it just wasn't worth saving. With
that said, the protocol is horrible and completely worthless. They brag
about using 1536-2048 bit RSA, but what they don't tell you is that, when I
saw the protocol, the key was directly encrypted without padding. It's also
worth noting that when I said key that wasn't a typo: there was only one,
although it was hashed to create two. There was a complete lack of message
authentication, a complete lack of key verification, a complete lack of
one-timeness to the transfers, basically a complete lack of security; even
their user verification was flawed to the point where it was completely
worthless. Assuming that they have not changed their protocol substantially
(likely, considering no one would listen to the individual who leaked it to
me, and hence was given the breaks), the protocol is still horribly
insecure, and pointlessly complex. The ONLY functional security it has is
that it is peer2peer, and as such it is harder to eavesdrop on.
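
To see why encrypting a session key with raw, unpadded RSA is so bad:
textbook RSA is deterministic and multiplicatively malleable, so a repeated
key is visible on the wire and ciphertexts can be transformed in predictable
ways. A toy sketch with deliberately tiny numbers follows; nothing in it is
taken from Skype's actual protocol, it just illustrates the general problem.

    # Toy textbook RSA (deliberately tiny parameters, illustration only).
    # Real use requires randomized padding such as OAEP.
    p, q, e = 61, 53, 17
    n = p * q                       # modulus
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)             # private exponent (Python 3.8+)

    def enc(m): return pow(m, e, n)
    def dec(c): return pow(c, d, n)

    key = 42                        # stand-in for a session key
    assert enc(key) == enc(key)     # deterministic: repeated keys are visible

    # Malleability: multiplying the ciphertext by enc(2) yields a valid
    # encryption of 2*key mod n, without knowing the private key.
    assert dec((enc(key) * enc(2)) % n) == (2 * key) % n
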
Joe

Trust Laboratories
Changing Software Development
http://www.trustlaboratories.com

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Zero Knowledge Authentication? (was Cryptolog Unicity Software-Only Digital Certificates)

2003-12-14 Thread Joseph Ashwood
- Original Message - 
From: R. A. Hettinga [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, December 10, 2003 8:47 AM
Subject: Zero Knowledge Authentication? (was Cryptolog Unicity
Software-Only Digital Certificates)


 Launch Marks the First Commercial Use of Zero-Knowledge Authentication

I've snipped the rest, because it is primarily not useful beyond this. They
are highly incorrect about their launch being the first commercial use of
ZKA; as a matter of fact I was involved in implementing one for commercial
use, and I was part of a mandatory workforce reduction (aka laid off) from
that company 2 1/2 years ago. I will admit we never referred to it as Zero
Knowledge Authentication, which just sounds like a mass of crap thrown
together to sound geeky. Instead we used a zero knowledge proof of knowledge
(in particular of a PIN), and used that proof to provide authentication. I
can also tell you that if you're dealing with some high security
requirements (such as the claim of high security in the press release),
there are some very tricky situations, and I found a number of unpublished
attacks against such systems (all were addressed before the product shipped,
except the one I address below, which is inherent). So to anyone looking at
such a system, I recommend that they give it at least 2 years to mature and
be attacked, and even then make sure that a number of worthwhile names have
actually looked at the protocols involved, and the implementation.

With that said, I see little reason that such systems need to exist; you
continually end up coming back to "but what is it actually good for?" The
truth is that with a small piece of knowledge, only a small number of
accounts need their existence known to compromise the system. An example: a
simple PIN-based system, e.g. the ATM bank card network, where the PIN must
be at least 4 digits and at most 6. First, statistically the vast majority
of PINs will be 4 digits. Now, contrary to reality, we will assume that the
PINs are chosen randomly (most people choose a pattern). The fact is that
with 4 digits there are only 10,000 possible PINs, so only 5,000 guesses
need to be made to, on average, break into one account. From there, the
standard is that each account is given 3 guesses before disabling, so only
about 1,667 cards have to be uncovered in order to break into an account.
Now realistically, how long will this take? Here in the US, ATM cards can be
uniquely identified by 16 digits (they have been linked into the Visa
network), which makes acquiring the card numbers easy. Acquiring the numbers
of 1,667 cards is almost trivial.
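
For anyone who wants to check the arithmetic, here is a quick sketch. It
assumes uniformly random 4-digit PINs and the 3-guesses-then-lockout policy
described above, and shows how the odds of opening at least one account
climb with the number of card numbers harvested; however you frame the
expectation, the number of cards needed is in the low thousands.

    # Odds of breaking at least one account, assuming uniformly random
    # 4-digit PINs and 3 distinct guesses per card before lockout.
    PIN_SPACE = 10_000
    GUESSES_PER_CARD = 3
    p_card = GUESSES_PER_CARD / PIN_SPACE      # chance a single card falls

    for cards in (1_667, 3_334, 5_000, 10_000):
        p_any = 1 - (1 - p_card) ** cards
        print(f"{cards:>6} cards harvested -> {p_any:.0%} chance of at least one break-in")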

Such high security systems invariably have further problems. The base
information required for a user to log in can be downloaded with no security
at all (for roaming); this allows an attacker to simply download the login
credentials for the entire enterprise. In many cases large companies will
have more than 1,667 people who have root access on the network. This is a
fatal flaw in the design, and unfortunately for such systems it is a flaw
that cannot be addressed except by switching to passphrases, something that
would lower their usability (their biggest selling point) to the same level
as all other secure systems.
Joe

Trust Laboratories
Changing Software Development
http://www.trustlaboratories.com

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: NCipher Takes Hardware Security To Network Level

2003-10-13 Thread Joseph Ashwood
- Original Message - 
From: Ian Grigg [EMAIL PROTECTED]
Sent: Saturday, October 11, 2003 1:22 PM
Subject: Re: NCipher Takes Hardware Security To Network Level

 Is there any reason to believe that people who
 know nothing about security can actually evaluate
 questions about security?

Actually, there are reasons to believe that they won't be able to, just as I
would not be qualified to evaluate the functionality of a sewage pump
(except from the perspective of "it seems to work").

 And, independant assessors are generally subvertable
 by special interests (mostly, the large incumbents
 encourage independant assessors to raise barriers
 to keep out low cost providers).  Hence, Peter's
 points.  This is a very normal economic pattern, in
 fact, it is the expected result.

I take the counter view: assuming an independent assessor can be found that
is truly independent, that assessor helps the small companies _more_ than
the larger ones. To make a pointed example I will use a current situation
(in which I am active).

Trust Laboratories is a software assurance firm whose first service is the
assurance of PKCS #11 modules. From the marketing perspective the large
incumbents (e.g. nCipher, which started this conversation) have little
incentive to seek such assurances; they already have a solid lock on the
market, and the brand recognition to keep it that way. The small companies,
though, have a much stronger incentive: with an assurance they can hint at,
and in some cases maybe even outright claim, technological superiority over
the incumbents, giving them a strong road into the market. The only purpose
the incumbents have for such assurances is combatting the small companies'
assurances (not that I wouldn't love to have nCipher as a customer; I think
it would lend a great deal of credibility to the assurance, as well as
solidifying their marketshare against the under-developed technologies).

 So, right now, I'd say the answer to that question
 is that there is no way for someone who knows nothing
 about security to objectively evaluate a security
 product.

That will likely always be the case. In order to judge what level of
security is required, they simply must have some knowledge of security.
Otherwise it is very much like asking John Smith what Ian Grigg's favorite
food is; (a typical) John Smith simply does not have the knowledge to give a
useful answer.
Joe

Trust Laboratories
Changing Software Development
http://www.trustlaboratories.com

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-26 Thread Joseph Ashwood
And a response. I have taken the liberty of copying the various portions of
the contents of the webpage into this email for response. I apologize for
the formatting confusion, which may cause Peter Gutmann's comments to be
mistaken for those of the semi-anonymous, misinformed person under scrutiny.

I would have CC'd the author of the response page, but it fails to mention
an author, in spite of the "Comments are welcome" statement at the
beginning.

 On September the 15th Peter Gutmann contacted us and showed us part of a
writeup which he
 posted to a cryptography mailing list on the 22th. In this writeup he
identifies several security issues
 in CIPE, VTun and tinc. Below we will examine his findings and explain why
some flaws or
 weaknesses Peter Gutmann found are in fact not a problem at all for a VPN
daemon like tinc.
 As a reader you are ultimately left to draw your own conclusions, I
encourage you to read all the
 arguments from both sides and, if possible, verify them by reading the
documentation and source
 code of tinc. Comments are welcome.

 Predictable IV
Proposed solution: provide the option of a full IV and move the sequence
number out of the ciphertext; note that this is offered as an _option_,
rather than treated as always necessary for security.

 Truncated MAC
 tinc will continue to use only the first 32 bits by default.
Simply put, this is unacceptable from a security standpoint. The view taken
is that the extra 128 bits represent a significant overhead in the
communication. So I did the math: sending the extra 128 bits over a 52 kbps
link would take about 0.002 seconds, and over a T1 the time cost is an
absolutely enormous 0.00008 seconds. The other consideration is potentially
the computation time argument, but SHA-1 is computed in full regardless, so
the computation time is identical. There is no justification, even in a
dialup environment, for not using the full HMAC.
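
The wire-time arithmetic, for anyone who wants to check it (nominal link
rates, per-packet overhead ignored; the Ethernet line is my own addition):

    # Time on the wire for the extra 128 bits of a full HMAC-SHA1 tag
    # (160-bit tag versus the truncated 32 bits).
    EXTRA_BITS = 160 - 32
    for name, bits_per_sec in (("52 kbps modem", 52_000),
                               ("T1 (1.544 Mbps)", 1_544_000),
                               ("10 Mbps Ethernet", 10_000_000)):
        print(f"{name}: {EXTRA_BITS / bits_per_sec * 1000:.3f} ms per packet")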

 Use of RSA
 Tinc uses RSA encryption only once, during authentication.
<sarcasm>Yeah, authentication is such a minor thing that major flaws are
completely acceptable.</sarcasm>

 A message is sent which has the same length as the RSA key, and is
 completely filled . . .using real random data (OpenSSL's RAND_bytes()).

I really wish people would actually read documentation *before* making
stupid claims like this. In fact, to quote the OpenSSL docs: "These
functions implement a cryptographically secure pseudo-random number
generator (PRNG)." Any claim that OpenSSL implements a real random number
generator is completely false.

 So, the message does not have low entropy and doesn't contain predictable
 bits.

Read the docs: the message has 0 entropy (actually marginally above 0, but
these are simple rounding errors); that's what a pseudo-random number
generator means.

 However, there is no guarantee that the message was encrypted using the
 correct public key or that noone has tampered with it. This is of no
concern
 for tinc though.

There goes that "authentication doesn't matter" problem again. Remind me, is
tinc supposed to have any semblance of security? Or is it just a stupid toy
designed for young children to keep their even younger siblings out of their
personal matters?

 Part of the message is used as the key for the symmetric cipher that is
used
 by the sender of this key to encrypt the rest of the messages it will
send. A
 challenge-response exchange right after exchanging the RSA encrypted
 messages is done to ensure that both the sender of the symmetric cipher
key
 has the right public key, the recipient has the right corresponding
private
 key, and the message was not tampered with (because that would change the
 symmetric cipher key).

Whoever designed and stated this has no idea about cryptographic security.
Using part of a shared secret, generated by a PRNG on only one side,
introduces horrific flaws into the protocol. Pretending that poorly done RSA
encryption magically solves the problem will only risk everything that has
ever been encrypted using tinc.

 Authentication protocol
 First of all, we assume Mallet does not know the private keys of Bob and
 Alice (Bob and Alice are not compromised), and Bob and Alice do not know
 Mallet at all (they don't trust Mallet, otherwise he could've made a
 connection without having to resort to a cryptographic attack).
Good luck keeping that assumption true; with the oracle attack listed above
it won't stay true very long at all.

 First, keys for the
 symmetric cipher encryption are exchanged. Mallet cannot decrypt keys he
 gets from Bob and Alice, because he doesn't have their private keys.
But he does: he spoofed each connection and acted as initiator for both; now
it's a simple matter of resending. Your entire model is based on a
misunderstanding of what RSA does and does not offer.

What you're missing is that the connection initiator sets all the keys and
can determine all the keys (assuming the uncontested simplified message flow
is correct). Mallet can very easily perform a complete man-in-the-middle
attack

 Configuration

Re: Digital cash and campaign finance reform

2003-09-09 Thread Joseph Ashwood
- Original Message - 
From: Steve Schear [EMAIL PROTECTED]
Subject: Re: Digital cash and campaign finance reform


 At 04:51 PM 9/8/2003 -0700, Joseph Ashwood wrote:
 - Original Message -
 From: Steve Schear [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 [anonymous funding of politicians]
   Comments?
 
 Simple attack: Bob talks to soon to be bought politician. Tomorrow
you'll
 recieve a donation of $50k, you'll know where it came from.
 Next day, buyer makes 500 $100 donations (remember you can't link him to
any
 transaction), 50k arrives through the mix. Politician knows where it came
 from, but no one can prove it.

 Not so fast.  I said the mix would delay and randomize the arrival of
 payments.  So, some of the contributions would arrive almost immediately
 others/many might take weeks to arrive.

You act like they aren't already used to addressing that problem. I'll go
back to Bustamante, simply because it is convenient right now. Bustamante
received a multi-million dollar donation from the Native Americans; this was
not done through a single check, which would be illegal, but through
multiple smaller checks, each of which ends up randomized and delayed in
processing (the USPS is a wonderful source of randomness), so the actual
occurrence of the donations is scattered across several days, from several
accounts, by several people, and I'm sure Bustamante never even looked to
see who the donations were actually from, just that the full amount arrived.
The problem that you found is already addressed, and already not a problem.
Joe

Trust Laboratories
Changing Software Development
http://www.trustlaboratories.com


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Is cryptography where security took the wrong branch?

2003-09-08 Thread Joseph Ashwood
- Original Message - 
From: Ian Grigg [EMAIL PROTECTED]
Sent: Sunday, September 07, 2003 12:01 AM
Subject: Re: Is cryptography where security took the wrong branch?

 That's easy to see, in that if SSL was oriented
 to credit cards, why did they do SET?  (And,
 SHTTP seems much closer to that mission, on a
 quick reading, at least.)

Actually they do target very different aspects. SET, 3D-Secure, and other
similar schemes have a different target than SSL. To understand this it is
important to realize that, instead of the usual view of two-party
transactions, credit card transactions actually involve 3 parties: Issuer,
Seller, and Buyer. SSL covers the Seller-Buyer communication, and can also
be applied to the Seller-Issuer communication, but on a transaction basis it
offers nothing for the Issuer-Buyer link (the important one for minimizing
costs for the Issuer).

SET/3D-Secure/etc address this through various means, but the end target is
to create a pseudo Buyer-Issuer link through the Seller. This allows the
Issuer to minimize costs (less chance of having to make a call), and because
it is behind-the-scenes technology it has no reason to be accompanied by a
reduction in fees (and actually, because of the reduced likelihood of buyer
fraud, it may be possible to charge the seller _more_).

In the end SSL and SET/3D-Secure/etc target entirely different portions of
the problem (the former targets seller fraud against the buyer, the latter
seller fraud against the issuer). Both of these are important portions. Of
course the real upside of SET/3D-Secure/etc is that the seller doesn't have
a choice, and the fees in accordance with the fraud reduction may very well
increase the costs to the seller; the buyer costs of course stay the same.
End result: lower fraud, increased fees, higher profit margins.

However, if it meets expectations, it is entirely possible that all
legitimate parties (non-fraud entities) will see improved profits (seller
has reduced fraud and charge-backs, buyer less likelihood of the $50
penalty, issuer higher fees). Will it meet those expectations? I have no
idea.
Joe

Trust Laboratories
Changing Software Development
http://www.trustlaboratories.com


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Digital cash and campaign finance reform

2003-09-08 Thread Joseph Ashwood
- Original Message - 
From: Steve Schear [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
[anonymous funding of politicians]
 Comments?

Simple attack: Bob talks to the soon-to-be-bought politician: "Tomorrow
you'll receive a donation of $50k; you'll know where it came from." The next
day, the buyer makes 500 $100 donations (remember, you can't link him to any
transaction), and the $50k arrives through the mix. The politician knows
where it came from, but no one can prove it.

By implementing this we'll see a backwards trend. It will be harder to prove
the buyout (actually impossible), but the involved parties will know exactly
who did the paying. Right now you can see a similar usage in the Bustamante
(spelling?) campaign in the California recall election: the Native Americans
donated $2M to him in spite of a limit of ~$22k, by donating from several
people. Same method, only now we know who did the paying.
Joe

Trust Laboratories
Changing Software Development
http://www.trustlaboratories.com


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]