Re: Run a remailer, go to jail?

2003-03-31 Thread Ed Gerck
It would also outlaw pre-paid cell phones, which are anonymous
if you pay in cash and can be untraceable after a call. Not to
mention proxy servers. On the upside, it would ban spam ;-)

Cheers,
Ed Gerck

Perry E. Metzger wrote:

 http://www.freedom-to-tinker.com/archives/000336.html

 Quoting:

 Here is one example of the far-reaching harmful effects of
 these bills. Both bills would flatly ban the possession, sale,
 or use of technologies that conceal from a communication
 service provider ... the existence or place of origin or
 destination of any communication.

 --
 Perry E. Metzger  [EMAIL PROTECTED]



Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Ed Gerck


Jeroen C. van Gelderen wrote:

 1. Presently 1% of Internet traffic is protected by SSL against
 MITM and eavesdropping.

 2. 99% of Internet traffic is not protected at all.

I'm sorry, but no. The bug in MSIE that prevented the correct
processing of cert path constraints, and which led to easy MITM
attacks, has been fixed for some time now.  Consulting browser
statistics sites will show that the MSIE update in question,
fueled by the need for other security updates, is making
good progress.

 3. A significant portion of the 99% could benefit from
 protection against eavesdropping but has no need for
 MITM protection. (This is a priori a truth, or the
 traffic would be secured with SSL today or not exist.)

I'm sorry, but the a priori truth above is false. Ignorance about
the flaw, which is now fixed, and the need to mount a LAN attack (if
you don't want to mess with the DNS) have helped avert a major
public exploit. The hole is now fixed, and the logic fails for this
reason as well.

 4. The SSL infrastructure (the combination of browsers,
 servers and the protocol) does not allow the use of
 SSL for privacy protection only. AnonDH is not supported
 by browsers and self-signed certificates as a workaround
 don't work well either.

There is a good reason -- MITM. AnonDH and self-signed
certs cannot prevent MITM.


 5. The reason for (4) is that the MITM attack is overrated.
 People refuse to provide the privacy protection because
 it doesn't protect against MITM. Even though MITM is not
 a realistic attack (2), (3).

But it is -- please see the spoof/MITM method in my previous post.
Which, BTW, is rather old info in some circles (3 years?) and is
easy to do by script kiddies with no knowledge about anything we
are talking about here -- they can simply do it. Anyone can do it.

 (That is not to say that (1) can do without MITM
  protection. I suspect that IanG agrees with this
  even though his post seemed to indicate the contrary.)

I think Ian's post, with all due respect to Ian, reflects a misconception
about cert validation. The misconception is that cert validation can
be provided as an absolute reference -- it cannot. The *mathematical*
reasons are explained in the paper I cited. This misconception
was discussed some 6 years ago in the ssl-talk list and other lists, and
clarified at the time -- please see the archives. It was good, however,
to post this again and, again, to allow this to be clarified.


 6. What is needed is a system that allows hassle-free,
 incremental deployment of privacy-protecting crypto
 without people whining about MITM protection.

You are asking for the same thing that was asked, and answered,
6 years ago in the ssl-talk and other lists. There is a way to do it
and the way is not self-signed certs or SSL AnonDH.

 Now, this is could be achieved by enabling AnonDH in the SSL
 infrastructure and making sure that the 'lock icon' is *not* displayed
 when AnonDH is in effect. Also, servers should enable and support
 AnonDH by default, unless disabled for performance reasons.

Problem -- SSL AnonDH cannot prevent MITM. The solution is
not to deny the problem and ask "who cares about MITM?"

 Ed Gerck wrote:
  BTW, this is NOT the way to make paying for CA certs go
  away. A technically correct way to do away with CA certs
  and yet avoid MITM has been demonstrated to *exist*
  (not by construction) in 1997, in what was called intrinsic
  certification -- please see  www.mcg.org.br/cie.htm

 Phew, that is a lot of pages to read (40?). It's also rather tough
 material for me to digest. Do you have something like an example
 approach written up? I couldn't find anything on the site that did not
 require study.

;-) If anyone comes across a way to explain it, that does not require study,
please let me know and I'll post it.

OTOH, some practical code is being developed, and has been successfully
tested over the past 3 years with up to 300,000 simultaneous users, which
may provide the example you ask for. Please write to me privately if you'd
like to use it.

Cheers,
Ed Gerck




Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Ed Gerck


Ben Laurie wrote:

 Ed Gerck wrote:
  ;-) If anyone comes across a way to explain it, that does not require study,
  please let me know and I'll post it.

 AFAICS, what it suggests, in a very roundabout way, is that you may be
 able to verify the binding between a key and some kind of DN by being
 given a list of signatures attesting to that binding. This is pretty
 much PGP's Web of Trust, of course. I could be wrong, I only read it
 quickly.

This would still depend on what the paper calls extrinsic references,
which are outside the dialogue and create opportunities for faults (intentional
or otherwise). The resulting problems for PGP are summarized in
www.mcg.org.br/cert.htm#1.2.






Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Ed Gerck


Jeroen van Gelderen wrote:

 Heu? I am talking about HTTPS (1) vs HTTP (2). I don't see how the MSIE
 bug has any effect on this.

Maybe we're talking about different MSIE bugs, which is not hard to do ;-)
I was referring to the MSIE bug that affects the SSL handshake in HTTPS,
given the context of the discussion. BTW, HTTP has no provision to prevent
MITM in any case -- in fact, establishing a MITM is part of the HTTP
toolbox and is used in reverse proxies, for example.




Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Ed Gerck


Jeroen van Gelderen wrote:

 3. A significant portion of the 99% could benefit from
 protection against eavesdropping but has no need for
 MITM protection. (This is a priori a truth, or the
 traffic would be secured with SSL today or not exist.)

Let me sum up my earlier comments: Protection against
eavesdropping without MITM protection is not protection
against eavesdropping.

In addition, when you talk about HTTPS traffic (1%) vs.
HTTP traffic (99%) on the Internet you are not talking
about users' choices -- where the user is the party at risk
in terms of their credit card number. You're talking about
web admins failing to protect third-party information they
request. Current D&O liability laws, making the officers
of a corporation personally responsible for such irresponsible
behavior, will probably help correct this much more efficiently
than just a few of us decrying it.

My personal view is that ALL traffic SHOULD be encrypted,
MITM protected, and authenticated, with the possibility of
anonymous authentication if so desired. Of course, this is
not practical today -- yet. But we're working to get there.
BTW, a source once told me that about 5% of all email traffic
is encrypted. So, your 1% figure is also just a part of the picture.

Cheers --/Ed Gerck








Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Ed Gerck


Jeroen van Gelderen wrote:

 On Tuesday, Mar 25, 2003, at 14:38 US/Eastern, Ed Gerck wrote:
   Let me sum up my earlier comments: Protection against
  eavesdropping without MITM protection is not protection
  against eavesdropping.

 You are saying that active attacks have the same cost as passive
 attacks. That is ostensibly not correct.

Cost is not the point even though cost is low and within the reach of
script kiddies.

 What we would like to do however is offer a little privacy protection
 trough enabling AnonDH by flipping a switch. I do have CPU cycles to
 burn. And so do the client browsers. I am not pretending to offer the
 same level of security as SSL certs (see note [*]).

I agree with this. This is helpful. However, supporting this by
asking "Who's afraid of Mallory Wolf?" is IMO not helpful --
because we should all be afraid of MITM attacks. It's not good
for security to deny an attack that is rather easy to do today.

 I'm proposing a slight, near-zero-cost improvement[*] in the status
 quo. You are complaining that it doesn't achieve perfection. I do not
 understand that.

Your proposal is, possibly, a good option to have. However, it does not
provide credible protection against eavesdropping. It is better than
ROT13, for sure.

Essentially, you're asking for encryption without an authenticated end-point.
This is acceptable. But I suggest that advancing your idea should not be
prefaced by denying or trying to hide the real problem of MITM attacks.

Cheers,
Ed Gerck






Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Ed Gerck

Ben Laurie wrote:

 It seems to me that the difference between PGP's WoT and what you are
 suggesting is that the entity which is attempting to prove the linkage
 between their DN and a private key is that they get to choose which
 signatures the relying party should refer to.

PGP's WoT already does that. To be clear, in PGP the entity that is attempting
to prove the linkage between a DN and a public key chooses which signatures
are acceptable, their degree of trust, and how these signatures became
acceptable in the first place. BTW, a similar facility also exists in X.509, where
the entity that is attempting to prove the linkage may accept or reject a CA
for that purpose (unfortunately, browsers make this decision automatically
for the user, but it does not need to be so).

That said, the paper does not provide a way to implement the method I
suggested. The paper only shows that such a method should exist.

Cheers,
Ed Gerck




Re: Who's afraid of Mallory Wolf?

2003-03-24 Thread Ed Gerck

Ian Grigg wrote:

 ...
 The analysis of the designers of SSL indicated
 that the threat model included the MITM.

 On what did they found this?  It's hard to pin
 it down, and it may very well be, being blessed
 with nearly a decade's more experience, that
 the inclusion of the MITM in the threat model
 is simply best viewed as a mistake.

I'm sorry to say it but MITM is neither a fable nor
restricted to laboratory demos. It's an attack available
today even to script kiddies.

For example, there is a possibility that some evil attacker
redirects the traffic from the user's computer to his own
computer by ARP spoofing. With the programs arpspoof,
dnsspoof and webmitm in the dsniff package it is possible
for a script kiddie to read the SSL traffic in cleartext (a list
of commands is available if there is list interest). For this attack
to work, the user and the attacker must be on the same LAN,
or ... the attacker could be somewhere else using a hacked
computer on the LAN -- which is not so hard to do ;-)

...
 Clearly, the browsers should not discriminate
 against cert-less browsing opportunities

The only sign of the spoofing attack is that the user gets a
warning about the certificate that the attacker is presenting.
It's vital that the user does not proceed if this happens --
contrary to what you propose.

BTW, this is NOT the way to make paying for CA certs go
away. A technically correct way to do away with CA certs
and yet avoid MITM has been demonstrated to *exist*
(not by construction) in 1997, in what was called intrinsic
certification -- please see  www.mcg.org.br/cie.htm

Cheers,
Ed Gerck




Re: double shot of snake oil, good conclusion

2003-03-08 Thread Ed Gerck
Tal Garfinkel wrote:

 ...
 Clearly, document controls are not a silver bullet, but if used properly
 I believe they do provide a practical means of helping to restrict the
 propagation of sensitive information.

I believe we are in agreement on many points. Microsoft's mistake was
to claim that "For example, it might be possible to view a document but
not to forward or print it."  As I commented, of course it is possible
to copy or forward it.  Thus, claiming that it isn't possible is snake oil
and I think we need to point it out.

I'd hope that the emphasis on trustworthy computing will help Microsoft
weed out these declarations and, thus, help set a higher standard.

Cheers,
Ed Gerck





Re: Scientists question electronic voting

2003-03-07 Thread Ed Gerck


(Mr) Lyn R. Kennedy wrote:

 On Thu, Mar 06, 2003 at 10:35:22PM -0500, Barney Wolff wrote:
 
  We certainly don't want an electronic system that is more
  vulnerable than existing systems, but sticking with known-to-be-terrible
  systems is not a sensible choice either.

 Paper ballots, folded, and dropped into a large transparent box, is not a
 broken system.

The broken system is the *entire* system -- from voter registration,
to ballot presentation (butterfly?), ballot casting, ballot storage,
tallying, auditing, and reporting.

 It's voting machines, punch cards, etc that are broken.
 I don't recall seeing news pictures of an election in any other western
 democracy where they used machines.

Brazil, 120 million voters, 100% electronic in 2002, close to 100%
since the 90's, no paper copy (and it failed when tried). BTW, the
3 nations with the largest numbers of voters are, respectively:

- India
- Brazil
- US

Cheers,
Ed Gerck




Re: double shot of snake oil, good conclusion

2003-03-06 Thread Ed Gerck


Tal Garfinkel wrote:

 The value of these types of controls is that they help users you basically
 trust -- who might be careless, stupid, lazy or confused -- to do the right
 thing (however the right thing is defined, according to your company
 security policy).

It beats me that users you basically trust might also be careless, stupid,
lazy or confused ;-)

Your point might be better expressed as: the company security policy would
be followed even if you do NOT trust the users to do the right thing. But,
as we know, this only works if the users are not malicious, if social engineering
cannot be used, if there are no disgruntled employees, and if other equally
improbable conditions hold.

BTW, one of the arguments that Microsoft uses to motivate people to
be careful with unlawful copies of Microsoft products is that disgruntled
employees provide the bulk of all their investigations on piracy, and everyone
has disgruntled employees. We also know that insider threats are responsible
for 71% of computer fraud.

Thus, the real effect of these types of controls is to harass the legitimate users
and give a false sense of security. It reminds me of a cartoon I saw recently,
where the general tells a secretary to shred the document, but make a copy
first for the files.

Cheers,
Ed Gerck




Re: Scientists question electronic voting

2003-03-06 Thread Ed Gerck


Anton Stiglic wrote:

 -Well the whole process can be filmed, not necessarily photographed...
 It's difficult to counter the attack. In your screen example, you can photograph
 the vote and then immediately photograph the thank you; if the photographs
 include the time in milliseconds, and the interval is short, you can be confident
 to some degree that the vote that was photographed was really the vote that
 was cast. You can have tamper-resistant film/photograph devices and whatever
 you want, have the frames digitally signed and timestamped, but this is where I
 point out that you need to consider the value of the vote to estimate how far
 an extortionist would be willing to go.

The electronic process can be made much harder to circumvent by
allowing voters to cast any number of ballots but counting only the last
ballot cast. Since a voter could always cast another vote after the one that
was so carefully filmed, there would be no value in such a film.

BTW, a similar process happens in proxy voting for shareholders' meetings,
where voters can send their vote (called a proxy) before the meeting
but can also go to the meeting and vote any way they please -- trumping
the original vote.

Much work needs to be done, and tested, to protect the integrity of
public elections. Even with all such precautions, if the choices made by
a voter are disclosed (i.e., not just the tally for all voters) then a voter
can be identified by using an unlikely pattern -- and the Mafia has,
reportedly, used this method in Italy to force (and enforce) voter
choices in an otherwise private ballot.

Cheers,
Ed Gerck




Re: Scientists question electronic voting

2003-03-06 Thread Ed Gerck
bear wrote:

 Let's face it, if somebody can *see* their vote, they can record it.

Not necessarily. Current paper ballots do not offer you a way to record
*your* vote. You may even photograph your ballot, but there is no way to
prove that *that* was the ballot you cast. In the past, we had ballots with
different colors for each party ;-) so people could see whether you were voting
Republican or Democrat, but this is no longer the case.


 and if someone can record it, then systems for counterfeiting such a
 record already exist and are already widely dispersed.

It's easier than one may think to obtain reliable proof if you can photograph
the ballot that you *did* cast (as in that proposal for printing a paper receipt
with your vote choices) -- just wait outside the polling place and demand the
film right there, or wait outside the polling place, hear the voter's voice right
then, and get the image sent by cell phone before the voter leaves the
polling booth.

Cheers,
Ed Gerck




Re: Scientists question electronic voting

2003-03-06 Thread Ed Gerck
Dan Riley wrote:

 The vote can't be final until the voter confirms the paper receipt.
 It's inevitable that some voters won't realize they voted the wrong
 way until seeing the printed receipt, so that has to be allowed for.
 Elementary human factors.

This brings in two other factors I have against this idea:

- a user should not be called upon to distrust the system that the user
is trusting in the first place.

- too many users may reject the paper receipt because they changed their
minds, making it impossible to say whether the e-vote was wrong or
correct based on the number of rejected e-votes.

 But this whole discussion is terribly last century--still pictures are
 passe.  What's the defense of any of these systems against cell phones
 that transmit live video?

This was in my first message, and some subsequent ones too:

For example, using the proposed system a voter can easily, by using a
small concealed camera or a cell phone with a camera, obtain a copy of
that receipt and use it to get money for the vote, or keep the job. And
no one would know or be able to trace it.

Cheers,
Ed Gerck




Re: double shot of snake oil, good conclusion

2003-03-05 Thread Ed Gerck

A.Melon wrote:

 Ed writes claiming this speculation about Palladium's implicatoins is
 mis-informed:

  while others speculated on another potentially devastating effect,
  that the DRM could, via a loophole in the DoJ consent decree, allow
  Microsoft to withhold information about file formats and APIs from
  other companies which are attempting to create compatible or
  competitive products

 I think you misunderstand the technical basis for this claim.  The
 point is Palladium would allow Microsoft to publish a file format and
 yet still control compatibility via software certification and
 certification on content of the software vendor who's software created
 it.

We are in agreement. When you read the whole paragraph that I wrote,
I believe it is clear that my comment was not whether the loophole existed
or not. My comment was that there was a much more limited implication
for whistle-blowing because DRM can't really control what humans do
and there is no commercial value in saying that a document that I see
cannot be printed or forwarded -- because it can.

 Your other claims about the limited implications for whistle-blowing
 (or file trading of movies and mp3s) I agree with.

And that's what my paragraph meant.

Cheers,
Ed Gerck




Comments/summary on unicity discussion

2003-03-05 Thread Ed Gerck
 THE FINE PRINT
Of further importance, and often ignored or even contradicted by
some statements in the literature such as "any cipher can be attacked by
exhaustively trying all possible keys," I usually like to call attention to
the fact that any cipher (including 56-bit-key DES) can be theoretically
secure against any attacker -- even an attacker with unbounded
resources -- when the cipher is used within its unicity. Not only is the
One-Time Pad theoretically secure; any cipher can be theoretically
secure if used within the unicity distance. Thus, there is indeed a
theoretically secure defense even against brute-force attacks, which is to
work within the unicity limit of the cipher. And it works for any cipher
that is a good random cipher -- irrespective of the key length or encryption
method used.

It is also important to note, as the literature has not been very neat
in this regard, that unicity always refers to the plaintext. However,
it may also be applied to indicate the least amount of ciphertext that
needs to be intercepted in order to attack the cipher -- within the
ciphertext/plaintext granularity. For example, for a simple OTP cipher,
being sloppy works because one byte of ciphertext links back to one
byte of plaintext -- so a unicity of n bytes implies n bytes of ciphertext.
For DES, however, the ciphertext must be considered in blocks of 8
bytes -- so a unicity of n bytes implies rounding up to a whole number
of 8-byte blocks.
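
For reference, the unicity distance can be written in the usual back-of-the-envelope
form (the redundancy figure below is an assumed, approximate value for 8-bit ASCII
English, not a measured one):

    U = \frac{H(K)}{D} \approx \frac{56 \ \text{bits}}{6.8 \ \text{bits/byte}} \approx 8.2 \ \text{bytes (56-bit-key DES)}

where H(K) is the key entropy in bits and D is the plaintext redundancy per character.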

3. ONLINE REFERENCES

[Sha48] Shannon, C. Communication Theory of Secrecy Systems. Bell Syst.
Tech. J., vol. 28, pp. 656-715, 1949.  See also
http://www3.edgenet.net/dcowley/docs.html for readable scanned images of
the complete original paper and Shannon's definition of unicity distance in
page 693.  Arnold called my attention to a typeset version of the paper at
http://www.cs.ucla.edu/~jkong/research/security/shannon.html.

[Sha49] Shannon, C. A Mathematical Theory of Communication. Bell Syst.
Tech. J., vol. 27, pp. 379-423, July 1948. See also
http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html

Anton also made available the following link, with notes he took for
Claude Crepeau's crypto course at McGill. See page 24 and following at
http://crypto.cs.mcgill.ca/~stiglic/Papers/crypto1.ps
(Anton notes that it's not unlikely that there are errors in those notes).

Comments are welcome.

Cheers,
Ed Gerck




Scientists question electronic voting

2003-03-05 Thread Ed Gerck

Henry Norr had an interesting article today at
http://sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2003/03/03/BU122767.DTLtype=business

Printing a paper receipt that the voter can see is a proposal that addresses
one of the major weaknesses of electronic voting. However, it creates
problems that are even harder to solve than the silent subversion of e-records.

For example, using the proposed system a voter can easily, by using a
small concealed camera or a cell phone with a camera, obtain a copy of
that receipt and use it to get money for the vote, or keep the job. And
no one would know or be able to trace it.

Of course, proponents of the paper ballot copy, like Peter Neumann and
Rebecca Mercuri, will tell you the same thing that Peter affirmed in official
testimony before the California Assembly Elections and Reapportionment Committee
on January 17, 2001 (John Longville, Chair), in a session on touch-screen (DRE)
voting systems, as recorded by C-SPAN (video available):

  ...I have an additional constraint on it [a voter approved paper ballot produced
  by a DRE machine] that  it  is behind reflective glass so that if you try to
  photograph it with a little secret camera hidden in your tie so you can go out and
  sell your vote for a bottle of whiskey or whatever it is, you will get a blank image.
  Now this may sound ridiculous from the point of view of trying to protect the
  voter, but this problem of having a receipt in some way that verifies that what
  seems to be your vote actually was recorded properly, is a fundamental issue.

I was also in Sacramento that same day, and this was my reply, in the next panel,
also with a C-SPAN videotape:

  .. I would like to point out that it is very hard sometimes to take opinions, even
  though from a valued expert, at face value. I was hearing the former panel [on
  touch screen DRE systems] and Peter Neumann, who is a man beyond all best
  qualifications, made the affirmation that we cannot photograph what we can see.
  As my background is in optics, with a doctorate in optics, I certainly know that is
  not correct. If we can see the ballot we can photograph it, some way or another.

But, look, it does not require a Ph.D. in physics to point out that what Peter says is
incorrect -- of course you can photograph what you see. In other words, Peter's
solution goes the way much of this DRE discussion has gone -- it pays lip service
to science while refuting basic scientific principles and progress.  After all, what's the
scientific progress behind storing a piece of paper as evidence? And, by the way, are
not paper ballots what were mis-counted, mis-placed and lost in Florida?

Finally, what we see in this discussion is also exactly what we in IT security
know we need to avoid: insecure statements that create a false sense of
security -- not to mention a real sense of angst. This statement, surely vetted by
many people before it was printed, points out how much we need to improve in
terms of a real-world model for voting.

This opinion is my own, and is not a statement by any company.

Cheers,
Ed Gerck



Re: AES-128 keys unique for fixed plaintext/ciphertext pair?

2003-02-21 Thread Ed Gerck


Arnold G. Reinhold wrote:

 At 2:18 PM -0800 2/19/03, Ed Gerck wrote:
 The previous considerations hinted at but did not consider that a
 plaintext/ciphertext pair is not only a random bit pair.
 
 Also, if you consider plaintext to be random bits you're considering a very
 special -- and least used -- subset of what plaintext can be. And, it's a
 much easier problem to securely encrypt random bits.
 
 The most interesting solution space for the problem, I submit, is in the
 encryption of human-readable text such as English, for which the previous
 considerations I read in this list do not apply, and provide a false sense of
 strength. For this case, the proposition applies -- when qualified for  the
 unicity.
 

 Maybe I'm missing something here, but the unicity rule as I
 understand it is a probabilistic result.  The likelihood of two keys
 producing different natural language plaintexts from the same cipher
 text falls exponentially as the message length exceeds the unicity
 distance, but it never goes to zero.

Arnold,

This may sound intuitive but is not correct. Shannon proved that if
n (bits, bytes, letters, etc.) is the unicity distance of a ciphersystem,
then ANY message that is larger than n bits CAN be uniquely deciphered
from an analysis of its ciphertext -- even though that may require some
large (actually, unspecified) amount of work. Thus, the likelihood of
two keys producing valid decipherments (i.e., plaintexts that can be
enciphered to the same ciphertext, natural language or not) from the
same ciphertext is ZERO after the message length exceeds the unicity
distance -- otherwise the message could not be uniquely deciphered
after the unicity condition is reached, breaking Shannon's result.

Conversely, Shannon also proved that if the intercepted message has less
than n (bits, bytes, letters, etc.) of plaintext then the message CANNOT
be uniquely deciphered from an analysis of its ciphertext -- even by trying
all keys and using unbounded resources.

 So unicity can't be used to
 answer the original question* definitively.

As above, it can. And the answer formulated in terms of the unicity
is valid for any plaintext/ciphertext pair, even for random bits. It
answers the question in all generality.

 I'd also point out that modern ciphers are expected to be secure
 against known plaintext attacks, which is generally a harsher
 condition than knowing the plaintext is in natural language.

No cipher is theoretically secure above the unicity distance, even though
it may be practically secure.

 * Here is the original question. It seems clear to me that he is
 asking about all possible plaintext bit patterns:

 At 2:06 PM +0100 2/17/03, Ralf-Philipp Weinmann wrote:
 I was wondering whether the following is true:
 
 For each AES-128 plaintext/ciphertext (c,p) pair there
   exists exactly one key k such that c=AES-128-Encrypt(p, k).

The following is always true, for any possible plaintext bit pattern:

For each AES-128 plaintext/ciphertext (c,p) pair with length
equal to or larger than the unicity distance, there exists exactly
one key k such that c=AES-128-Encrypt(p, k).

Cheers,
Ed Gerck





Re: AES-128 keys unique for fixed plaintext/ciphertext pair?

2003-02-20 Thread Ed Gerck


Anton Stiglic wrote:

  The statement was for a plaintext/ciphertext pair, not for a random-bit/
  random-bit pair. Thus, if we model it in terms of a bijection on random-bit
  pairs, we confuse the different statistics for plaintext, ciphertext, keys
 and
  we include non-AES bijections.

 While your reformulation of the problem is interesting, the initial question
 was regarding plaintext/ciphertext pairs, which usually just refers to the
 pair
 of elements from {0,1}^n, {0,1}^n, where n is the block cipher length.

The previous considerations hinted at but did not consider that a
plaintext/ciphertext pair is not only a random bit pair.

Also, if you consider plaintext to be random bits you're considering a very
special -- and least used -- subset of what plaintext can be. And, it's a
much easier problem to securely encrypt random bits.

The most interesting solution space for the problem, I submit, is in the
encryption of human-readable text such as English, for which the previous
considerations I read in this list do not apply, and provide a false sense of
strength. For this case, the proposition applies -- when qualified for  the
unicity.

Cheers,
Ed Gerck






Re: AES-128 keys unique for fixed plaintext/ciphertext pair?

2003-02-18 Thread Ed Gerck

The relevant aspect is that the plaintext and key statistics are the
determining factors as to whether the assertion is correct or not.

In your case, for example, with random keys and ASCII text in English,
one expects that a 128-bit ciphertext segment would NOT satisfy the
requirement for a unique solution -- which is 150 bits of ciphertext.
However, since most cipher systems begin with a magic number or
have a message format that begins with the usual Received, To:, From:,
etc., it may be safer to consider a much lower unicity, for example less than
128 bits. In that case, even one block of AES would satisfy the requirements
-- and compression would NOT help.

Of course, keeping the same key while encrypting the next block would
also satisfy the requirements for the resulting 256-bit ciphertext/plaintext
pair to have a unique solution.[*]

Cheers,
Ed Gerck

[*] But note that if the plaintext has the full entropy of ASCII text in English
(as in your example) and compression is used, then the unicity should
increase to above 300 bits of ciphertext. The result is that a two-block
segment of ASCII text in English that is encrypted with the same key would
NOT satisfy the requirement for a unique solution.
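
To spell out the arithmetic behind these figures (a rough sketch; 6.8 bits/byte is the
usual estimate for the redundancy of 8-bit ASCII English, and the post-compression
redundancy below is only an assumed value):

    n \approx \frac{H(K)}{D} \approx \frac{128}{6.8} \approx 19 \ \text{bytes} \approx 150 \ \text{bits of ciphertext}

    n \approx \frac{128}{3} \approx 43 \ \text{bytes} \approx 340 \ \text{bits, if compression leaves about 3 bits/byte of redundancy}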

Sidney Markowitz wrote:

 Ed Gerck [EMAIL PROTECTED] wrote:
   For each AES-128 plaintext/ciphertext (c,p) pair with length
  equal to or larger than the unicity distance, there exists exactly
  one key k such that c=AES-128-Encrypt(p, k).

 Excuse my naivete in the math for this, but is it relevant that the unicity
 distance of ASCII text encrypted with a 128 bit key is about 150 bits
 [Schneier, p 236] and the AES block size is only 128 bits? If you use plain
 ECB mode is the plaintext/ciphertext length in the above statement 128 bits,
 or does the statement imply that you have an arbitrary length (c,p) pair
 using whatever mode, possibly chaining, makes sense for your purpose?

  -- sidney





Re: DeCSS, crypto, law, and economics

2003-01-08 Thread Ed Gerck


Nomen Nescio wrote:

 John S. Denker writes:
  The main thing the industry really had at stake in
  this case is the zone locking aka region code
  system.

 I don't see much evidence for this.  As you go on to admit, multi-region
 players are easily available overseas.  You seem to be claiming that the
 industry's main goal was to protect zone locking when that is already
 being widely defeated.

 Isn't it about a million times more probable that the industry's main
 concern was PEOPLE RIPPING DVDS AND TRADING THE FILES?

Well, zone locking helps curb this because it *reduces* the market for each
copy. The finer the zone locking resolution, the more effort an attacker needs
to make in order to be able to trade more copies.

Cheers,
Ed Gerck





Re: Micropayments, redux

2002-12-16 Thread Ed Gerck

What follows below is from my dialogue with Ron
earlier this year, when the design was still being
worked out (as he told me) and when he kindly answered
some of my remarks -- which I also report below.

This is a very interesting proposal that creates a
large aggregate value worth billing for (in terms
of all operational and overhead costs), but which
the user pays only *on average*.

The user has a limit, and one idea is that the user
would pre-pay it (which may raise questions about
creating a barrier against spontaneous buying, but
could be presented as an authorized credit limit,
I think) and then spend the limit in thousands (or more)
of peppercorn-worth (i.e., very small value -- maybe
cents or fractions of cents) transactions that would be
paid only *on average*.  That is, most of the peppercorn
transactions would go *unpaid* and *unprocessed* -- thus,
with near-zero overhead. However, some transactions would
hit the jackpot and be charged with a multiplicative
factor that -- on average -- pays for all unpaid transactions
and overhead.

Thus, because of the limit and the prepay, this can be seen
as a game that has no possible underpaying strategy
for the user, and the bank would be happy to let the
user play it as often as he likes -- with the following
caveats:

1. If there is no limit, then the well-known doubling
strategy would allow the user to, eventually, make the
bank lose -- the user getting a net profit.

2. If there is no prepaid amount, lucky users could quit
while ahead -- which would hurt the bank since those
users would be out of the pool to be charged, but they
have used the service.

3. The game is fair -- the bank will not weight the
wheel (and hurt the users) and no one can compromise
the methods used by the bank (and hurt the bank).

Of course, if the wheel is not exactly balanced,
or if the house takes a cut in some other way,
then the user or the bank are losing ground at each
step.

Another question, the answer to which I guess is more
market-related than crypto-related, is whether banks
will accept the liability of a losing streak ...for them.
Likewise, users may lack motivation to continue using
the system if they have a losing streak (i.e., if they run
out of their prepaid amount sooner than they and
the bank expect, pre-pay again, again run out
of money sooner than expected, and so on until they
give up, having been on the losing side). The problem here
is that, all things being fair, the system depends on
unlimited time to average things out.  This can be
compensated for, I'd expect, by adequate human monitoring
and insurance. As always, it is not only the math that makes
things work -- even though it's also the math.

All things considered, though, as I said above this is a
very interesting proposal because it does reduce
processing and overhead costs to near zero for a large
number of transactions. I'd refrain from saying zero
because there should be some auditing involved for
all transactions.
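
As a rough illustration of the averaging argument (a sketch only; the price and
probability are made-up parameters, not Peppercoin's actual ones):

    import random

    def run_transactions(n, price=0.01, p=0.001):
        """Each nominal `price` transaction is actually charged price/p with
        probability p, so the expected charge per transaction is exactly `price`."""
        charged = 0.0
        for _ in range(n):
            if random.random() < p:
                charged += price / p
        return charged

    n = 1_000_000
    print(f"nominal: ${n * 0.01:,.2f}   charged: ${run_transactions(n):,.2f}")

Over a million one-cent transactions the charged total typically lands within a few
percent of the nominal $10,000 -- which is the sense in which the user pays only
on average.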

Cheers,
Ed Gerck



Udhay Shankar N wrote:

 Ron Rivest is involved, too. Anybody got more info?

 http://www.peppercoin.com/peppercoin_is.html

 Peppercoin is a new approach to an old challenge: how to make small value
 transactions—micropayments—feasible. There is a whole world of digital
 content gathering dust because owners cannot find a profitable way to get
 it into the hands of paying customers.

 Merchants can profitably sell content or services at very low price points,
 which would be unprofitable with traditional payment methods.
 Consumers can purchase small-value items easily; PepperCoins are digital
 pocket change for music, games, and other downloads.

 Through a cryptographically secure process of sampling digital payments,
 Peppercoin reduces the volume of transactions processed by a third-party
 payment processor or financial institution. Peppercoin utilizes the most
 robust and secure digital encryption technologies, based on RSA digital
 signatures, to process and protect payments.

 Peppercoin's innovative technology is protected by worldwide patent
 applications.

 --
 ((Udhay Shankar N)) ((udhay @ pobox.com)) ((www.digeratus.com))




Re: Micropayments, redux

2002-12-16 Thread Ed Gerck
David:

I'm happy you don't see any problems and I don't see
them either -- within the constraints I mentioned. But
if you work outside those #1, #2 and #3 constraints
you would have problems, which is something you may
want to look further into.

For example, in reply to my constraint  #2, you say:

 This is expected to be roughly counterbalanced by the
 number of unlucky users who quite (sic) while behind.

but these events occur under different models. If there
is no prepayment (which is my point #2) then many users
can quit after a few transactions and there is no statistical
barrier to limit this behavior. On the other hand, the number
of users who quit after being unlucky is a matter of statistics.
These are apples and speedboats. You need to have an
implementation barrier to handle #2.

Cheers,
Ed Gerck


David Wagner wrote:

 Ed Gerck  wrote:
 1. If there is no limit, then the well-known doubling
 strategy would allow the user to, eventually, make the
 bank lose -- the user getting a net profit.

 I think you misunderstand the nature of the martingale strategy.
 It's not a good way to win in Las Vegas, and it's not a good way to
 win here, either.  Anyway, even if it were a problem, there would
 be lots of ways to prevent this strategy in a digital cash system.

 2. If there is no prepaid amount, lucky users could quit
 while ahead -- which would hurt the bank since those
 users would be out of the pool to be charged, but they
 have used the service.

 No problem.  This is expected to be roughly counterbalanced by the
 number of unlucky users who quite while behind.

 Another question, which answer I guess is more
 market-related than crypto-related, is whether banks
 will accept the liability of a losing streak ...for them.
 [...] The problem here
 is that, all things being fair, the system depends on
 unlimited time to average things out.

 No, it doesn't.  It doesn't take unlimited time for lottery-based
 payment schemes to average out; finite time suffices to get the
 schemes to average out to within any desired error ratio.  The
 expected risk-to-revenue ratio goes down like 1/sqrt(N), where N
 is the number of transactions.  Consequently, it's easy for banks
 to ensure that the system will adequately protect their interests.

 And everything is eminently predictable.  Suppose the banks expect
 to do a 10^8 transactions, each worth $0.01.  Then their expected
 intake is $1 million, plus or minus maybe $1000 or so (the latter
 depends slightly on the exact parameter choices).  Any rational
 bank ought to be willing to absorb a few thousand in plus or minus,
 at this level of business.

 In short: I think your list of problems in the approach are not
 actually problematic in practice.




Secure Electronic and Internet Voting

2002-11-19 Thread Ed Gerck
List:

I want to spread the word about a newly published book
by Kluwer, where I have a chapter explaining Safevote's
technology and why we can do in voting (a much harder
problem) what e-commerce has not yet accomplished (it's
left as an exercise for the reader to figure out why 
e-commerce has not yet done it; hints by email if you 
wish). This book serves as a good introduction to other 
systems and some nay-sayers.  The book's URL is
http://www.wkap.nl/prod/b/1-4020-7301-1

With the US poised to test Internet voting in 2004/6, 
this book may provide useful, timely points for the 
discussion. We can't audit electrons but we can certainly
audit their pattern.

Cheers,
Ed Gerck




Re: more snake oil? [WAS: New uncrackable(?) encryption technique]

2002-10-25 Thread Ed Gerck


bear wrote:

 The implication is that they have a hard problem in their
 bioscience application, which they have recast as a cipher.

Their problem is not hard -- it is just either slow to converge for
some methods or simply not uniquely determined (*). They consider
the cases that are not uniquely determined, which is equivalent to the
following problem:

   given Y, solve for X in Y = X mod 11

(and I mean 11 as a good number for their problem space),
which has many answers. Indeed, the number of answers (‘keys’)
that fit the equation is infinite. Since they know the only X that they
consider (quite arbitrarily) to be the right answer, they say that
you can't guess it -- hence it is unbreakable in their view. However,
their search space is very small and all functional exponential forms
can be tried in parallel with much better algorithms than what they
seem to use (*). This is not better than short passwords, so one
probably does not even need to break in and snatch the file holding
the keys to the kingdom -- the coefficients that were used.
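
To see the ambiguity concretely (a trivial sketch, using the same toy modulus):

    Y = 7
    candidates = [Y + 11 * k for k in range(5)]   # each X satisfies Y = X mod 11
    print(candidates)                             # [7, 18, 29, 40, 51] -- and infinitely many more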

(*) For an example, see the Prony method comment and reference in  
http://www-ee.stanford.edu/~siegman/Beams_and_resonators_2.pdf

Cheers,
Ed Gerck





Re: Why is RMAC resistant to birthday attacks?

2002-10-24 Thread Ed Gerck


David Wagner wrote:

 Ed Gerck  wrote:
 (A required property of MACs is providing a uniform distribution of values for a
 change in any of the input bits, which makes the above sequence extremely
 improbable)

 Not so.  This is not a required property for a MAC.
 (Not all MACs must be PRFs.)

Thanks. I should have written "a usually required property." In general,
to have a good MAC, we require a good PRF.

Ed Gerck





collision resistance -- Re: Why is RMAC resistant to birthday attacks?

2002-10-24 Thread Ed Gerck
There seems to be a question about whether:

1. the internal collision probability of a hash function is bounded by the
inverse of the size of its internal state space, or

2. the internal collision probability of a hash function is bounded by the
inverse of the square root of the size of its internal state space.

If we assume that the hash function is a good one and thus its hash space
is uniformly distributed (a good hash function is a good PRF), then we can
say:

For a hash function with an internal state space of size S, if we take n
messages x1, x2, ..., xn, the probability P that there are i and j such that
hash(xi) = hash(xj), for xi != xj, is

P = 1 - S!/( (S^n) * (S - n)! )

which can be approximated by

P ~ 1 - e^( -n*(n - 1)/(2*S) ).

We see above an n^2 factor, which translates into a factor of sqrt(S)
when we solve for n. For example, if we ask how many messages N we
need in order to have P > 0.5, we solve for n and the calculation gives:

N ~ sqrt( 2*ln(2)*S ).

Thus, if we consider just two messages, affirmation #1 holds, because
P reduces to 1/S. If we consider n > 2 messages, affirmation #2 holds (the
birthday paradox).
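
These expressions are easy to sanity-check numerically; a small sketch with a toy
state space (illustrative only, it just evaluates the formulas above):

    import math

    def collision_prob(S, n):
        # Exact: P = 1 - S!/(S^n * (S - n)!), computed as a product to avoid huge factorials
        p_no_collision = 1.0
        for k in range(n):
            p_no_collision *= (S - k) / S
        return 1.0 - p_no_collision

    def approx_prob(S, n):
        # Approximation: P ~ 1 - e^(-n(n-1)/(2S))
        return 1.0 - math.exp(-n * (n - 1) / (2.0 * S))

    S = 2**20                              # toy internal state space
    N = math.sqrt(2 * math.log(2) * S)     # messages needed for P > 0.5
    print(N, collision_prob(S, round(N)), approx_prob(S, round(N)))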

Cheers,
Ed Gerck









Re: Why is RMAC resistant to birthday attacks?

2002-10-24 Thread Ed Gerck
... pls read this message with the edits below... 
missing ^ in exp and the word WITHOUT...still no coffee...

David Wagner wrote:

 Ed Gerck  wrote:
 Wei Dai wrote:
  No matter how good the MAC design is, it's internal collision probability
  is bounded by the inverse of the size of its internal state space.
 
 Actually, for any two (different) messages the internal collision probability
 is bounded by the inverse of the SQUARE of the size of the internal state space.

 No, I think Wei Dai had it right.  SHA1-HMAC has a 160-bit internal state.
 If you fix two messages, the probability that they give an internal collision
 is 1/2^160.

 Maybe you are thinking of the birthday paradox.  If you have 2^80 messages,
 then there is a good probability that some pair of them collide.  But this
 is the square root of the size of the internal state space.  And again, Wei
 Dai's point holds: the only way to reduce the likelihood of internal collisions
 is to increase the internal state space.

 In short, I think Wei Dai has it 100% correct.

Thanks again. I should have had some coffee at that time... I meant SQUARE ROOT.

As to the point you say is in question -- "the only way to reduce the likelihood of
internal collisions is to increase the internal state space" -- this is clearly true but is
NOT what is in discussion here. The point is whether the only way to reduce the
likelihood of attacks based on MAC collisions is to increase the internal state space.
These statements are not equivalent.

 Not really. You can prevent internal collision attacks, for example, by using
 the envelope method (e.g., HMAC) to set up the MAC message.

 This is not accurate.  The original van Oorschot and Preneel paper
 describes an internal collision attack on MD5 with the envelope method.
 Please note also that HMAC is different from the envelope method, but
 there are internal collision attacks on HMAC as well.  Once again, I
 think Wei Dai was 100% correct here, as well.

However, it was possible to reduce the likelihood of attacks based on MAC
collisions WITHOUT increasing the internal state space. This is what I was
trying to explain. More below...

 You might want to consider reading some of the literature on internal
 collision attacks before continuing this discussion too much further.
 Maybe all will become clear then.

It's always good to read more, and learn more. But what I'm saying is
written in many such papers, including some that are written for
a general audience:

---
To attack MD5 [for example], attackers can choose any set of messages and
work on these  offline on a dedicated computing facility to find a collision.
Because attackers know the hash algorithm and the default IV, attackers can
generate the hash code for each of the messages that attackers generate. However,
when attacking HMAC, attackers cannot generate message/code pairs offline
because attackers do not know K. Therefore, attackers must observe a
sequence of messages generated by HMAC under the same key and perform
the attack on these known messages. For a hash code length of 128 bits, this
requires 2^64 observed blocks (2^73 bits) generated using the same key.
--in Dr. Dobbs, April 1999.

The point is clear: WITHOUT increasing the internal search space of MD5,
MD5 is used in a way that vastly reduces the likelihood of attacks based on
MAC collisions.

Cheers,
Ed Gerck




Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread Ed Gerck


bear wrote:

 On Tue, 22 Oct 2002, Ed Gerck wrote:

 Short answer:  Because the MAC tag is doubled in size.
 
 Longer answer: The “birthday paradox” says that if the MAC tag has t bits,
 only 2^(t/2) queries to the MAC oracle are likely  needed in order to discover
 two messages with the same tag, i.e., a “collision,” from which forgeries
 could easily be constructed.

 This is a point I don't think I quite get. Suppose that I have
 a MAC oracle and I bounce 2^32 messages off of it.  With a
 64-bit MAC, the odds are about even that two of those messages
 will come back with the same MAC.

 But why does that buy me the ability to easily make a forgery?

;-) please note that you already have one forgery...

BTW, it is important to look at the size of the internal chaining variable.
If it is 128-bit, this means that attacks with a 2^64 (birthday) burden would
likely work. However, if only a subset of the MAC tag is used OR if the
message to be hashed has a fixed length defined by the issuer, this is not
relevant. Only one of these conditions is needed.
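
To make the 2^(t/2) figure concrete, here is a small sketch (illustrative only; it
truncates HMAC-SHA1 to a 32-bit tag so the collision is cheap to find, after roughly
2^16 random queries to the toy "oracle"):

    import hmac, hashlib, os

    key = os.urandom(16)

    def mac32(msg: bytes) -> bytes:
        # Toy MAC "oracle": HMAC-SHA1 truncated to a 32-bit tag
        return hmac.new(key, msg, hashlib.sha1).digest()[:4]

    seen = {}
    queries = 0
    while True:
        msg = os.urandom(8)
        tag = mac32(msg)
        queries += 1
        if tag in seen and seen[tag] != msg:
            print(f"collision after {queries} queries:", seen[tag].hex(), msg.hex())
            break
        seen[tag] = msg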

Cheers,
Ed Gerck





Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread Ed Gerck


[EMAIL PROTECTED] wrote:

 On Tue, 22 Oct 2002, Ed Gerck wrote:

  Short answer:  Because the MAC tag is doubled in size.

 I know, but this is not my question.

;-) your question was "Why is RMAC resistant to birthday attacks?"

  Longer answer: The “birthday paradox” says that if the MAC tag has t bits,
  only 2^(t/2) queries to the MAC oracle are likely  needed in order to discover
  two messages with the same tag, i.e., a “collision,” from which forgeries
  could easily be constructed.

 So the threat model assumes that there is a MAC oracle. What is a
 practical realization of such an oracle? Does Eve simply wait for (or
 entice) Alice to send enough (intercepted) messages to Bob?

Eve may just watch traffic that comes into her company's servers, knowing
the back-end plain text messages. No need to watch external networks. Eve
may also be, for example, one of those third-party monitoring services that
monitor traffic inside enterprise's networks for the purpose of assuring security.

 Are there any other birthday attack scenarios for keyed MAC?

A birthday attack requires 2^(t/2) values, which looks surprisingly low -- hence
the name paradox (BTW, this attack provides the mathematical model behind the
game of finding people with the same birthday at a party, which works for a
surprisingly low number of people).  If you can get 2^(t/2) values, the attack
works.

 In many
 applications the collection sufficiently many messages between Alice and
 Bob is simply out of the question. In such cases if Eve cannot mount the
 attack independently and cannot collect 2^(n/2) messages from Alice to
 Bob, presumably RMAC does not offer an advantage over any other keyed MAC.

In an Internet message, datagrams can be inserted, dropped, duplicated, tampered
with or delivered out of order at the network layer (and often at the link layer). TCP
implements a reliable transport mechanism  and copes with the datagram unreliability
at the lower layers. However, TCP is unable to cope with a fraudulent datagram that is
crafted to pass TCP's protocol checks and is inserted into the datagram stream. That
datagram will be accepted by TCP and passed on to higher layers. A cryptographic
system operating  below TCP is needed to avoid this attack and filter out the deviant
datagrams -- and that's where you would use a MAC, if you want to protect each
datagram. It's not difficult, thus, to have more than 2^32 MACs in one message or
in a series of messages.
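
As an illustration of protecting each datagram with a MAC (a minimal sketch only; a
real below-TCP design, e.g. IPsec, adds replay protection, key management, etc.):

    import hmac, hashlib

    KEY = b"shared secret"           # assumed to be established out of band
    TAG_LEN = 8                      # truncated tag, as discussed above

    def protect(payload: bytes) -> bytes:
        tag = hmac.new(KEY, payload, hashlib.sha256).digest()[:TAG_LEN]
        return payload + tag

    def accept(datagram: bytes):
        payload, tag = datagram[:-TAG_LEN], datagram[-TAG_LEN:]
        expected = hmac.new(KEY, payload, hashlib.sha256).digest()[:TAG_LEN]
        # Drop deviant datagrams before they ever reach TCP or the application
        return payload if hmac.compare_digest(tag, expected) else None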

This is a scenario where it is not so difficult for an attacker to forge an acceptable
MAC for a datagram that was not sent in a given sequence, possibly tampering with
the upper-layer message and also making it more vulnerable to denial-of-service 
attacks.
Note that having a MAC above TCP does not prevent this attack, even though it can
detect it (and thus lead to a denial-of-service).

 I am not confused by the RMAC algorithm or so the associated work factor
 estimates, I want to understand the assumptions (threat models) behind the
 work factor estimates. Does the above look right?

If birthday attack is a concern, RMAC is helpful. If not, then not.

Cheers,
Ed Gerck





Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread Ed Gerck


Wei Dai wrote:

 On Tue, Oct 22, 2002 at 12:31:47PM -0700, Ed Gerck wrote:
  My earlier comment to bear applies here as well -- this attack can be avoided
  if only a subset of the MAC tag  is used

 I can't seem to find your earlier comment. It probably hasn't gone through
 the mailing list yet.

 I don't see how the attack is avoided if only a substring of the MAC tag
 is used. (I assume you mean substring above instead of subset.)

Yes, subset -- not a string with N fewer characters at the end. For example,
you can calculate the P subset as MAC mod P, for P smaller than
2^(number of bits in the MAC tag).

 The
 attacker just needs to find messages x and y such that the truncated MAC
 tags of x|0, x|1, ..., x|n, matches those of y|0, y|1, ..., y|n, and this
 will tell him that there is an internal collision between x and y.

No. The attacker gets A and B, and sees that A = B. This does not mean
that a=b in  A = a mod P and B = b mod P.  The internal states are possibly
different even though the values seen by the attacker are the same.
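
For concreteness, a small sketch of the kind of mod-P reduction being described
(my own illustration; the choice of P is arbitrary, and only the reduced value would
be transmitted):

    import hmac, hashlib, os

    key = os.urandom(16)
    P = (1 << 61) - 1          # an arbitrary P smaller than 2^(tag bits)

    def reduced_tag(msg: bytes) -> int:
        full = hmac.new(key, msg, hashlib.sha1).digest()    # 160-bit tag
        return int.from_bytes(full, "big") % P              # value the attacker sees

    # Equal reduced values do not imply equal full tags (or internal states)
    print(reduced_tag(b"x"), reduced_tag(b"y"))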

 n only
 has to be large enough so that the total length of the truncated MAC tags
 is greater than the size of the internal state of the MAC.

  OR if the message to be hashed has
  a fixed length defined by the issuer. Only one of these conditions are needed.

 No I don't think that works either. The attacker can try to find messages
 x and y such that MAC(x|0^n) = MAC(y|0^n) (where 0^n denotes enough zeros
 to pad the messages up to the fixed length).  Then there is a good
 chance that the internal collision occured before the 0's and so
 MAC(x|z)  = MAC(y|z) for all z of length n.

Why do you think there is a good chance?

Note that all messages for which you can get a MAC have some fixed message
length M. The attacker cannot leverage a MAC value to calculate the state of
an (M+1)-length message -- exactly because this is prevented by making all messages
have length M.

Cheers,
Ed Gerck


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread Ed Gerck


Sidney Markowitz wrote:

 [EMAIL PROTECTED]
  I want to understand the assumptions (threat models) behind the
  work factor estimates. Does the above look right?

 I just realized something about the salt in the RMAC algorithm, although it
 may have been obvious to everyone else:

 RMAC is equivalent to a HMAC hash-based MAC algorithm, but using a block
 cipher.

No -- these are all independent things. One can build an RMAC with SHA-1.
An RMAC does not have to use an HMAC scheme. One can also have an
HMAC-style, hash-based MAC algorithm using a block cipher that is not an RMAC.

 The paper states that it is for use instead of HMAC in circumstances
 where for some reason it is easier to use a block cipher than a cryptographic
 hash.

That is not the reason it was devised. The reason is to prevent a birthday attack
for 2^(t/2) tries on a MAC using a t-bit key. Needless to say, it also makes
a brute-force attack harder to mount.

Cheers,
Ed Gerck





-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread Ed Gerck


Sidney Markowitz wrote:

 Ed Gerck [EMAIL PROTECTED] said:
  That is not the reason it was devised. The reason is to prevent a birthday
  attack for 2^(t/2) tries on a MAC using a t-bit key. Needless to say, it also makes
  a brute-force attack harder to mount.

 RMAC was devised for the reason I stated, as it says in the last quote from
 the paper above. The salt is there to make the cost of the extension forgery
 attack more expensive because the birthday surprise shows that just the number
 of bits in the cipher block may not make it expensive enough without a salt.
 The key size is not relevant to the birthday attack (actually extension
 forgery attack) as shown in the table where the work factor expressed as a
 function of the block length and the salt length, not the key size.

A minor nit, but sometimes looking into why things were devised is helpful.
What I explained can be found in
http://csrc.nist.gov/encryption/modes/workshop2/report.pdf
and especially useful is the segment:

The RMAC algorithm was a refinement of the DMAC algorithm in which a random bit
string was exclusive-ORed into the second key and then appended to the resulting MAC
to form the tag. The birthday paradox in principle was no longer relevant, for, say, the
AES with 128 bit keys, because the tag would be doubled to 256 bits. Joux presented his
underlying security model and the properties that he had proven for RMAC: the number
of queries that bounded the chance of a forgery was relatively close to the number of
128 bit keys.
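
For illustration only, a rough sketch of the construction as described in that paragraph (AES-128, a CBC-MAC core and the padding rule are my assumptions; this is not the exact draft specification):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16

def cbc_mac(k1, msg):
    # DMAC-style core: CBC-MAC of the padded message under the first key.
    padded = msg + b"\x80" + b"\x00" * ((-len(msg) - 1) % BLOCK)
    enc = Cipher(algorithms.AES(k1), modes.CBC(b"\x00" * BLOCK)).encryptor()
    return (enc.update(padded) + enc.finalize())[-BLOCK:]

def rmac_style_tag(k1, k2, msg):
    r = os.urandom(len(k2))                       # the random salt
    k2r = bytes(a ^ b for a, b in zip(k2, r))     # salt XORed into the second key
    enc = Cipher(algorithms.AES(k2r), modes.ECB()).encryptor()
    tag = enc.update(cbc_mac(k1, msg)) + enc.finalize()
    return tag + r                                # salt appended to form the tag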

Cheers,
Ed Gerck


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Microsoft marries RSA Security to Windows

2002-10-15 Thread Ed Gerck
 eliminate the need to
 type in a code, or allow the PIN to be entered directly into the
 token (my preference).

It's costly, makes you carry an additional thing and -- most important
of all -- needs that pesky interface at the other end.

 10. There is room for more innovative tokens. Imagine a finger ring
 that detects body heat and pulse and knows if it has been removed. It
 could then refuse to work, emit a distress code when next used or
 simply require an additional authentication step to be reactivated.
 Even implants are feasible.

There is always room for evolution, and that's why we shan't run out of
work ;-)

However, not everyone wants to have an implant or carry a ring on their
finger -- which can be scanned and the subject targeted for a more serious
threat. My general remark on biometrics applies here -- when you are the
key (eg, your live fingerprint), key compromise has the potential to be
much more serious and harmful to you.

BTW, what is the main benefit of two-channel (as opposed to just two-factor)
authentication? The main benefit is that security can be assured even if the user's
credentials are compromised -- for example, by users writing their passwords on sticky
notes on their screens or under their keyboards, by using weak passwords, or
even by having their passwords silently sniffed by malicious software/hardware --
problems that are very thorny today and really have no solution other than adding
another, independent communication channel. Trust in authentication effectiveness
depends on using more than one channel, which is a general characteristic of trust
( http://nma.com/papers/it-trust-part1.pdf  )
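
A hedged sketch of that two-channel flow (the user table, the send_sms stand-in and the 6-digit code format are placeholders of mine, not any product's API):

import hmac, secrets

# Placeholder user database and SMS gateway -- for illustration only.
USERS = {"alice": {"password": "correct horse", "phone": "+1-555-0100"}}
PENDING = {}   # user -> one-time code awaiting confirmation on channel 1

def send_sms(phone, text):
    print("[SMS to %s] %s" % (phone, text))   # stand-in for the second channel

def start_login(user, password):
    # Channel 1: check the credential; channel 2: deliver a one-time code.
    account = USERS.get(user)
    if account is None or not hmac.compare_digest(
            account["password"].encode(), password.encode()):
        return False
    code = "%06d" % secrets.randbelow(10**6)   # short-lived, single-use
    PENDING[user] = code
    send_sms(account["phone"], code)
    return True

def finish_login(user, code):
    expected = PENDING.pop(user, None)
    return expected is not None and hmac.compare_digest(expected, code)

Even if the password leaks from a sticky note, an attacker without the second channel never sees the one-time code.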

Cheers,
Ed Gerck




 Arnold Reinhold

 At 8:56 AM -0700 10/9/02, Ed Gerck wrote:
 Tamper-resistant hardware is out, second channel with remote source is in.
 Trust can be induced this way too, and better. There is no need for
 PRNG in plain
 view, no seed value known. Delay time of 60 seconds (or more) is fine because
 each one-time code applies only to one page served.
 
 Please take a look at:
 http://www.rsasecurity.com/products/mobile/datasheets/SIDMOB_DS_0802.pdf
 
 and http://nma.com/zsentry/
 
 Microsoft's move is good, RSA gets a good ride too, and the door may open
 for a standards-based two-channel authentication method.
 
 Cheers,
 Ed Gerck
 
 Roy M.Silvernail wrote:
 
  On Tuesday 08 October 2002 10:11 pm, it was said:
 
   Microsoft marries RSA Security to Windows
   http://www.theregister.co.uk/content/55/27499.html
 
  [...]
 
   The first initiatives will centre on Microsoft's licensing of RSA SecurID
   two-factor authentication software and RSA Security's
 development of an RSA
   SecurID Software Token for Pocket PC.
 
  And here, I thought that a portion of the security embodied in a SecurID
  token was the fact that it was a tamper-resistant, independent piece of
  hardware.  Now M$ wants to put the PRNG out in plain view, along with its
   seed value. This cherry is just begging to be picked by some blackhat,
   probably exploiting a hole in Pocket Outlook.
 
 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Microsoft marries RSA Security to Windows

2002-10-15 Thread Ed Gerck

[I'm reducing the reply level to 2, for context please see former msg]

Arnold G. Reinhold wrote:

 At 8:40 AM -0700 10/11/02, Ed Gerck wrote:
 Cloning the cell phone has no effect unless you also have the credentials
 to initiate the transaction. The cell phone cannot initiate the authentication
 event. Of course, if you put a gun to the user's head you can get it all but
 that is not the threat model.

 If we're looking at high security applications, an analysis of a
 two-factor system has to assume that one factor is compromised (as
 you point out at the end of your response). I concede that there are
 large classes of low security applications where using a cell phone
 may be good enough, particularly where the user may not be
 cooperative. This includes situations where users have an economic
 incentive to share their login/password, e.g. subscriptions, and in
 privacy applications (Our logs show you accessed Mr. Celebrity's
 medical records, yet he was never your patient. Someone must have
 guessed my password. How did they get your cell phone too?)

I like the medical record dialogue. But please note that what you wrote is
much stronger than asking "How did they get your hardware token too?",
because you could justifiably go for days without noticing that the hardware
token is missing but you (especially if you are an MD) would almost
immediately notice that your cell phone is missing. Traffic logs and call
parties for received and dialed calls could also be used to prove that you
indeed used your cell phone both before and after the improper access. Also,
if you lose your cell phone you are in a lot more trouble.

The point made here is that the aggregate value associated with the cell
phone used for receiving an SMS one-time code is always higher than that
associated with the hardware token (it is "token plus"), hence its usefulness
in the security scheme. Denying possession of the cell phone would be
harder to do -- and easier to disprove -- than denying possession of the
hardware token.

 Here the issue is preventing the user from cloning his account or denying
 its unauthorized use, not authentication.

The main objective of two-channel, two-factor authentication (as we
are discussing) is to prevent unauthorized access EVEN if the user's
credentials are compromised. This includes what you mentioned, in addition
to assuring authentication (i.e., preventing the user from cloning his account;
allowing enterprises to deny the unauthorized use of user's accounts).

Now, why should the second channel be provided ONLY by a hardware
token?  There is no such need, or security benefit.

The second channel can be provided by a hardware token, by an SMS-
enabled cell phone, by a pager or by ANY other means that creates a
second communication channel that is at least partially independent of
the first one. There is no requirement for the channels to be 100%
independent. Even though 100% independence is clearly desirable and can
be provided in some systems, it is hard to accomplish for a number of reasons
(indexing being one of them). In RSA SecurID, for example, the user's
PIN (which is a shared secret) is used both in the first channel (authenticating
the user) and in the second channel (authenticating the passcode). Note also
that in SecurID systems without a PIN pad, the PIN is simply prefixed in plain
text to the random code and both are sent in the passcode.

The second channel could even be provided, for example, by an HTTPS (no
MITM) response in the same browser session (where the purported user
entered the correct credentials), if the response can be processed by an
independent means that is inaccessible to others except the authorized user
(for example, a code book, an SMS query-response, a crypto calculator, etc.)
and the result fed back into the browser (i.e., as a challenge response).
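
A minimal sketch of that challenge-response variant (the shared secret, HMAC-SHA-256 and the 6-digit response format are my assumptions; the independent means could just as well be a code book or an SMS exchange):

import hmac, hashlib, secrets

def make_challenge():
    # Served over HTTPS in the same browser session.
    return secrets.token_hex(8)

def respond(shared_secret, challenge):
    # Computed on the independent device (crypto calculator, etc.) and
    # typed back into the browser by the user.
    digest = hmac.new(shared_secret, challenge.encode(), hashlib.sha256).digest()
    return "%06d" % (int.from_bytes(digest[:4], "big") % 10**6)

def verify(shared_secret, challenge, answer):
    return hmac.compare_digest(respond(shared_secret, challenge), answer)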


 
 A local solution on the PDA side is possible too, and may be helpful where
 the mobile service may not work. However, it has less potential for wide
 use. Today, 95% of all cell phones used in the US are SMS enabled.

 What percentage are enabled for downloadable games? A security
 program would be simpler than most games.  It might be feasible to
 upload a new game periodically for added security.

There is nothing downloaded to the cell phone.  Mobile RSA SecurID and
NMA ZSentryID are zero-footprint applications.

BTW, requiring the download of a game or code opens another can of worms
-- whether the code is trusted by both sender and receiver (being trusted by
just one of them is not enough).

  2. Even if the phone is tamperproof, SMS messages can be intercepted.
  I can imagine a man-in-the-middle attack where the attacker cuts the
  user off after getting the SMS message, before the user has a chance
  to enter their code.
 
 Has no effect if the system is well-designed. It's possible to make
 it mandatory
 (under strong crypto assurances) to enter the one-time code using the *same*
 browser page

Re: Microsoft marries RSA Security to Windows

2002-10-10 Thread Ed Gerck

Tamper-resistant hardware is out, second channel with remote source is in.
Trust can be induced this way too, and better. There is no need for PRNG in plain
view, no seed value known. Delay time of 60 seconds (or more) is fine because
each one-time code applies only to one page served.

Please take a look at:
http://www.rsasecurity.com/products/mobile/datasheets/SIDMOB_DS_0802.pdf

and http://nma.com/zsentry/

Microsoft's move is good, RSA gets a good ride too, and the door may open
for a standards-based two-channel authentication method.

Cheers,
Ed Gerck

Roy M.Silvernail wrote:

 On Tuesday 08 October 2002 10:11 pm, it was said:

  Microsoft marries RSA Security to Windows
  http://www.theregister.co.uk/content/55/27499.html

 [...]

  The first initiatives will centre on Microsoft's licensing of RSA SecurID
  two-factor authentication software and RSA Security's development of an RSA
  SecurID Software Token for Pocket PC.

 And here, I thought that a portion of the security embodied in a SecurID
 token was the fact that it was a tamper-resistant, independent piece of
 hardware.  Now M$ wants to put the PRNG out in plain view, along with its
 seed value. This cherry is just begging to be picked by some blackhat,
 probably exploiting a hole in Pocket Outlook.

 -
 The Cryptography Mailing List
 Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: unforgeable optical tokens?

2002-09-22 Thread Ed Gerck



bear wrote:

 Anyway; it's nothing particularly great for remote authentication;
 but it's *extremely* cool for local authentication.

Local authentication still has several optical issues that need to be addressed,
and which may limit the field usefulness of a device based on laser speckle.

For example, optical noise from both diffraction and interference effects is a
large problem -- a small scratch, dent, fiber, or other mark (even an invisible one
that merely produces an optical phase change) could change all or most of
the speckle field. The authors report that a 0.5 mm hole produces a large
overall change -- which is easy to understand, since the smaller the defect,
the larger the spatial effect (Fourier transform).

But temperature/humidity/cycle differences might be worse -- any dilation or
contraction created by a temperature/humidity/cycle difference between recording
time (in lab conditions) and the actual validation time (in field conditions) would
change the entire speckle field in a way which is not geometric -- you can't just
scale it up and down to search for a fit.

Also, one needs to recall that this is not a random field -- this IS a speckle field.
There is a definitely higher probability of bunching in dark and bright areas
(because of the scatterer's form, sine-function properties, laser coherence length,
etc). This intrinsic regularity can be used to reduce the search space to something
much smaller than what I saw suggested.  Taking into account the loss of resolution
from vibration and positioning would also reduce the search space.

Finally, the speckle field will show autocorrelation properties related to the sphere's
size and size distribution, which will further reduce randomness. In fact, this is a
standard application of speckle: to measure the diameter statistics of small spheres.
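
For what it is worth, a small numerical sketch of that last point (a crude simulated speckle pattern; the grid size and aperture radius are arbitrary choices of mine):

import numpy as np

rng = np.random.default_rng(0)
N = 256

# Crude speckle simulation: random phase over a circular aperture, far-field
# propagation via FFT, intensity = |field|^2.
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
aperture = (x**2 + y**2) <= (N // 8) ** 2
field = aperture * np.exp(1j * 2 * np.pi * rng.random((N, N)))
intensity = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

# Wiener-Khinchin: autocorrelation of the intensity fluctuations via FFT.
fluct = intensity - intensity.mean()
acf = np.fft.ifft2(np.abs(np.fft.fft2(fluct)) ** 2).real
acf /= acf[0, 0]   # normalize; the width of the peak tracks the speckle size

# A broad, structured autocorrelation means neighboring samples are far from
# independent -- the effective search space is much smaller than one bit per pixel.
print(acf[0, :8])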

Cheers,
Ed Gerck


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Cryptogram: Palladium Only for DRM

2002-09-18 Thread Ed Gerck

Peter:

The question of what is trust might fill this listserver for months.
But, if we want to address some of the issues that Pd (and, to some
extent, PKI) forces on us then we must be clear what we mean when
we talk about  trust in a communication system -- what is a trusted
certificate, a trusted computer? Trusted for what? What happens
when I connect two computers that are trusted on matters of X --
are they trusted together on matters of X, less or more? What do
we mean by trustworthy?

I can send you some of my papers on this, but the conclusion I arrived at
is that, in terms of a communication process, trust has nothing to do with
feelings or emotions.

Trust is qualified reliance on information, based on factors independent of
that information.

In short, trust needs multiple, independent channels to be communicated.
Trust cannot be induced by self-assertions -- like "trust me!" or "trust Pd!".
More precisely, Trust is that which is essential to a communication channel
but cannot be transferred using that channel.  Please see the topic “Trust Points”
by myself in “Digital Certificates: Applied Internet Security” by Jalal Feghhi,
Jalil Feghhi and Peter Williams, Addison-Wesley, ISBN 0-20-130980-7, pages
194-195, 1998.

That said, the option of being *able* to define your own signatures on what
you decide to trust does not preclude you from deciding to rely on someone
else's signature.  BTW, this has been used for some time with a hardened version
of Netscape, where the browser does not use *any* root CA cert unless you sign
it first.

Thanks for your nice  comment ;-)

Ed Gerck



Peter wrote:

 I disagree with your first sentence (I believe that Pd must be trustworthy
 for *the user*), but I like much of the rest of the first paragraph.

 I am not sure what value my mother would find in defining her own
 signatures. She doesn't know what they are, and would thus have no idea on
 who or what to trust without some help.

 What my mother might trust is some third party (for example she might trust
 Consumer's Union). We assumed we needed a structure which lets users
 delegate trust to people who understand it and who are investing in
 branding their take on the trustworthiness of a given thing (think UL
 label, Good Housekeeping Seal of Approval, etc.). I totally agree that some
 small segment of users will have an active interest in managing the trust on
 their machines directly (like, maybe, us) but any architecture that you want
 to be used by normal PC users needs to also let users delegate this
 management to others who can manage it for users (just like we might decide
 to use others to manage our retirement funds, defend us in a court of law,
 or operate on our kidneys).

 To delegate trust, you need to start out trusting something to do that
 delegation. That's part of what Pd is addressing - Pd needs to be
 trustworthy enough so that when a user sets policy (eg don't run any SW in
 Pd which isn't signed by the EFF or don't run any SW which isn't
 debuggable), it is enforced.

 P

 - Original Message -
 From: Ed Gerck [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Tuesday, September 17, 2002 2:51 PM
 Subject: Re: Cryptogram: Palladium Only for DRM

 
  It may be useful to start off with the observation that Palladium will not
 be
  the answer for a platform that *the user* can trust.  However, Palladium
  should raise awareness on the issue of what a user can trust, and what
 not.
  Since a controling element has to lie outside the controled system, the
 solution
  for a trustworthy system is indeed an independent module with processing
  capability -- but which module the user should be able to control..
 
  This may be a good, timely opening for a solution  in terms of a write
 code
  approach, where an open source trustworthy (as opposed to trusted)
  secure execution module TSEM (e.g., based on a JVM with permission
  and access management) could be developed and -- possibly -- burned on a
  chip set for a low cost system. The TSEM would require user-defined
  signatures to define what is trustworthy to *the user*, which would set a
 higher
  bar for security when compared with someone else defining what is
  trustworthy to the user.  The TSEM could be made tamper-evident, too.
 
  Note: This would not be in competition with NCipher's SEE, because
 NCipher's
  product is for the high-end market and involves commercial warranties,
  but NCipher's SEE module is IMO a good example.
 
  Comments?
 
  Ed Gerck
 
 
 
 
  -
  The Cryptography Mailing List
  Unsubscribe by sending unsubscribe cryptography to
 [EMAIL PROTECTED]
 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Cryptogram: Palladium Only for DRM

2002-09-17 Thread Ed Gerck


It may be useful to start off with the observation that Palladium will not be
the answer for a platform that *the user* can trust.  However, Palladium
should raise awareness on the issue of what a user can trust, and what not.
Since a controlling element has to lie outside the controlled system, the solution
for a trustworthy system is indeed an independent module with processing
capability -- but a module which the user should be able to control.

This may be a good, timely opening for a solution in terms of a "write code"
approach, where an open-source trustworthy (as opposed to "trusted")
secure execution module TSEM (e.g., based on a JVM with permission
and access management) could be developed and -- possibly -- burned onto a
chip set for a low-cost system. The TSEM would require user-defined
signatures to define what is trustworthy to *the user*, which would set a higher
bar for security than having someone else define what is
trustworthy to the user.  The TSEM could be made tamper-evident, too.
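
Purely as an illustration of the user-defined-signatures idea (not Palladium and not NCipher's SEE; Ed25519 via the pyca/cryptography package and the anchor table are my own choices):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The *user* installs the keys they choose to trust -- nothing ships pre-trusted.
USER_TRUST_ANCHORS = {}

def user_enrolls(name, pubkey):
    USER_TRUST_ANCHORS[name] = pubkey

def may_execute(code, signature, signer):
    # The module runs only if a user-chosen anchor vouches for it.
    anchor = USER_TRUST_ANCHORS.get(signer)
    if anchor is None:
        return False
    try:
        anchor.verify(signature, code)
        return True
    except InvalidSignature:
        return False

# Example: the user signs (or delegates the signing of) code they decide to trust.
key = Ed25519PrivateKey.generate()
user_enrolls("me", key.public_key())
blob = b"print('hello from a user-approved module')"
assert may_execute(blob, key.sign(blob), "me")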

Note: This would not be in competition with NCipher's SEE, because NCipher's
product is for the high-end market and involves commercial warranties,
but NCipher's SEE module is IMO a good example.

Comments?

Ed Gerck




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Quantum computers inch closer?

2002-09-03 Thread Ed Gerck



Jaap-Henk Hoepman wrote:

 On Mon, 02 Sep 2002 17:59:12 -0400 John S. Denker [EMAIL PROTECTED] writes:
  The same applies even more strongly to quantum computing:
  It would be nice if you could take a classical circuit,
  automatically convert it to the corresponding quantum
  circuit, with the property that when presented with a
  superposition of questions it would produce the
  corresponding superposition of answers.  But that cannot
  be.  For starters, there will be some phase relationships
  between the various components of the superposition of
  answers, and the classical circuit provides no guidance
  as to what the phase relationships should be.

 In fact you can! For any efficient classical circuit f there exists an
 efficient quantum circuit Uf that does exactly what you describe:
 when given an equal superposition of inputs it will produce the equal
 superposition of corresponding outputs.

Jaap-Henk,

a proof of existence does not allow one to automatically convert a classical
circuit to the corresponding quantum circuit, which was John's original point.
Deriving QC algorithms from classical algorithms is probably not the best
way to devise them, either.

Cheers,
Ed Gerck



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Quantum computers inch closer?

2002-08-30 Thread Ed Gerck



bear wrote:

 On Sat, 17 Aug 2002, Perry E. Metzger wrote:

 
 [I don't know what to make of this story. Anyone have information? --Perry]
 
 Quantum computer called possible with today's tech
 http://www.eet.com/story/OEG20020806S0030
 
 ..
 The papers I've been reading claim that feistel ciphers (such as
 AES, DES, IDEA, etc) are fairly secure against QC.

 But I don't see how this can be true in the case where the
 opponent has a plaintext-ciphertext pair.
 ...
 I'm not a quantum physicist; I could be wrong here.  In
 fact, I'm probably wrong here.  But can anyone explain
 to me *why* I'm wrong here?

I'm a quantum physicist. Your argument is good but it has
nothing to do with quantum physics. The claim that Feistel
ciphers are fairly secure against QC has to do with a
complex calculation that has no counterpart in a physical
system that could be used to calculate it. Not that the
calculation is not possible, but that it cannot be efficiently
transposed to a QC. Other ciphers may be a lot easier in this
regard  -- for example, there is a good similarity between
factoring the product of two primes and calculating
standing wave harmonics in a suitable quantum system.

Cheers,
Ed Gerck





-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: TCPA / Palladium FAQ (was: Re: Ross's TCPA paper)

2002-06-27 Thread Ed Gerck


Interesting Q&A paper and list comments. Three
additional comments:

1. DRM and privacy look like apples and speedboats.
Privacy includes the option of not telling, which DRM
does not have.

2. Palladium looks like just more vaporware from
Microsoft, meant to preempt a market -- like when MS promised
Windows and killed IBM's OS/2 in the process.

3. Embedding keys in mass-produced chips has
great sales potential. Now we may have to upgrade
processors also because the key  is compromised ;-)

Cheers,
Ed Gerck

PS: We would be much better off with OS/2, IMO.

Ross Anderson wrote:

 http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html

 Ross

 -
 The Cryptography Mailing List
 Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Shortcut digital signature verification failure

2002-06-21 Thread Ed Gerck


A DoS would not pit one client against one server. A distributed attack
using several clients could overcome any single-server advantage.  A
scalable strategy would be a queue system for distributing load to
a pool of servers, plus a rating system for early rejection of repeated
bad queries from a source. The rating system would reset the source rating
after a pre-defined time, much like anti-congestion mechanisms on the Net.
Fast rejection of bogus signatures would help, but not alone.
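
A toy sketch of such a rating scheme (the threshold and reset interval are arbitrary; in practice this would sit in front of the pool of verifying servers):

import time

BAD_LIMIT = 10        # bad signatures tolerated per source per window (arbitrary)
RESET_AFTER = 60.0    # seconds before a source's rating is forgotten (arbitrary)
RATINGS = {}          # source -> (bad_count, window_start)

def allow(source):
    # Early rejection: refuse to even verify signatures from an abusive source.
    bad, since = RATINGS.get(source, (0, time.monotonic()))
    if time.monotonic() - since > RESET_AFTER:
        bad, since = 0, time.monotonic()     # anti-congestion-style reset
        RATINGS[source] = (bad, since)
    return bad < BAD_LIMIT

def record_bad_signature(source):
    bad, since = RATINGS.get(source, (0, time.monotonic()))
    RATINGS[source] = (bad + 1, since)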

Cheers,
Ed Gerck

Bill Frantz wrote:

 I have been thinking about how to limit denial of service attacks on a
 server which will have to verify signatures on certain transactions.  It
 seems that an attacker can just send random (or even not so random) data
 for the signature and force the server to perform extensive processing just
 to reject the transaction.

 If there is a digital signature algorithm which has the property that most
 invalid signatures can be detected with a small amount of processing, then
 I can force the attacker to start expending his CPU to present signatures
 which will cause my server to expend it's CPU.  This might result in a
 better balance between the resources needed by the attacker and those
 needed by the server.

 Cheers - Bill

 -
 Bill Frantz   | The principal effect of| Periwinkle -- Consulting
 (408)356-8506 | DMCA/SDMI is to prevent| 16345 Englewood Ave.
 [EMAIL PROTECTED] | fair use.  | Los Gatos, CA 95032, USA

 -
 The Cryptography Mailing List
 Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



dejavu, Re: Hijackers' e-mails were unencrypted

2001-10-03 Thread Ed Gerck


List:

With all due respect to the need to vent our fears, may I remind
this list that we have all seen this before (that is, governments
trying to control crypto), from key-escrow to GAK, and we all
know that it will not work -- and for many reasons.  A main one
IMO is that it is simply impossible to prevent anyone from
sending an encrypted message to anyone else except by
controlling the receivers and the transmitters (as done in WWII,
for example). Since controlling receivers and transmitters is
now really impossible, all one can do is control routing and
addresses. I suggest this would be a much more efficient way
to reduce the misuse of our communication networks. For
example, if one email address under surveillance receives
email from X, Y and Z, then X, Y and Z will also be added
to the surveillance. Even if everything is encrypted, people
and computers can be verified.

In addition, we need to avoid adding fuel to the misconception
that encryption is somehow dangerous or should be controlled
as weapons are. The only function of a weapon is to inflict harm.
The only function of encryption is to provide privacy.

Cheers,

Ed Gerck



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]