Cryptography-Digest Digest #811, Volume #12       Mon, 2 Oct 00 05:13:01 EDT

Contents:
  Re: Choice of public exponent in RSA signatures (Paul Rubin)
  Re: Choice of public exponent in RSA signatures (Roger Schlafly)
  Re: Choice of public exponent in RSA signatures (David A Molnar)
  Re: Choice of public exponent in RSA signatures (Roger Schlafly)
  Re: Choice of public exponent in RSA signatures ("John A. Malley")
  Re: Choice of public exponent in RSA signatures (Paul Rubin)
  Re: Question on biases in random numbers & decompression (Ray Dillinger)
  Re: How Colossus helped crack Hitler's codes (John Savard)
  Ciphers and Unicode (Ray Dillinger)
  Re: Choice of public exponent in RSA signatures (Francois Grieu)
  Re: Which is better? CRC or Hash? (Tiemo Ehlers)
  Re: Choice of public exponent in RSA signatures (Francois Grieu)
  Re: Avoiding bogus encryption products: Snake Oil FAQ (Robert Davies)
  Re: Josh MacDonald's library for adaptive Huffman encoding (Phil Norman)
  Re: Shareware Protection Schemes (Anders Thulin)
  Re: Choice of public exponent in RSA signatures (D. J. Bernstein)
  Re: About implementing big numbers (David Blackman)
  Re: On block encryption processing with intermediate permutations (Mok-Kong Shen)
  Re: Choice of public exponent in RSA signatures (Mok-Kong Shen)
  Re: On block encryption processing with intermediate permutations (Mok-Kong Shen)
  Re: Signature size ([EMAIL PROTECTED])
  Re: Ciphers and Unicode (David Blackman)

----------------------------------------------------------------------------

From: Paul Rubin <[EMAIL PROTECTED]>
Subject: Re: Choice of public exponent in RSA signatures
Date: 01 Oct 2000 22:42:43 -0700

Francois Grieu <[EMAIL PROTECTED]> writes:
> Researchers publishing on factorisation, be it using NFS, QS, or EC,
> all agree that it would be harder to factor say a 1152 bit product of
> three 384 bit primes, than to factor a 1024 bit product of two 512
> bit primes.

Well, ok, since the modulus is bigger.  But how does that difficulty
compare to that of a 1152 bit product of two 576-bit primes?

Does anyone really think that 1024-bit N=pq might be practical some
day, but 1152-bit N=pqr won't also be practical at that time?

I think factoring 1024-bit N=pq needs a mathematical breakthrough;
and if we have one of those, who knows what will happen.

------------------------------

From: Roger Schlafly <[EMAIL PROTECTED]>
Subject: Re: Choice of public exponent in RSA signatures
Date: Sun, 01 Oct 2000 22:50:42 -0700

Francois Grieu wrote:
> Researchers publishing on factorisation, be it using NFS, QS, or EC,
> all agree that it would be harder to factor say a 1152 bit product of
> three 384 bit primes, than to factor a 1024 bit product of two 512
> bit primes; while secret-key operation with the first modulus
> is one-third faster than with the second, using the CRT of course.

Yes.

> Yet multiprime RSA has not caught on (at least if you look at the
> offerings of hardware vendors). I do not think it is superstition only,
> but also a bias towards simplicity, which I feel is quite reasonable.

3-prime RSA is almost as simple as 2-prime RSA.
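
To make "almost as simple" concrete, here is a minimal Python sketch of
3-prime RSA with CRT decryption.  The primes are toy values (nothing
like 384 bits) and pow(x, -1, m) for modular inverses assumes Python
3.8+; the point is just that the private operation turns into three
exponentiations modulo numbers a third the size of n:

    # Minimal sketch of 3-prime RSA with CRT decryption.  Toy primes,
    # not secure; pow(x, -1, m) needs Python 3.8+.
    p, q, r = 1000003, 1000033, 1000037
    n = p * q * r
    e = 65537
    d = pow(e, -1, (p - 1) * (q - 1) * (r - 1))

    m = 123456789
    c = pow(m, e, n)

    # Private operation: one small exponentiation per prime ...
    mp = pow(c, d % (p - 1), p)
    mq = pow(c, d % (q - 1), q)
    mr = pow(c, d % (r - 1), r)

    # ... then recombine with the CRT (naive version for clarity).
    recovered = 0
    for m_i, r_i in ((p, mp), (q, mq), (r, mr)):
        t = n // m_i
        recovered = (recovered + r_i * t * pow(t, -1, m_i)) % n
    assert recovered == m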

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Choice of public exponent in RSA signatures
Date: 2 Oct 2000 05:32:47 GMT

Paul Rubin <[EMAIL PROTECTED]> wrote:
> Roger Schlafly <[EMAIL PROTECTED]> writes:
>> A lot of crypto is based on superstition. For several years
>> it has been agreed that 3-prime RSA is superior to 2-prime RSA,
>> but no one uses it.

> Agreed by who?!!

Compaq, for one.

There's also a draft revision to PKCS #1 which will support multi-prime
(distinct primes, note) RSA. 

-David

------------------------------

From: Roger Schlafly <[EMAIL PROTECTED]>
Subject: Re: Choice of public exponent in RSA signatures
Date: Sun, 01 Oct 2000 22:59:43 -0700

Paul Rubin wrote:
> Well, ok, since the modulus is bigger.  But how does that difficulty
> compare to that of a 1152 bit product of two 576-bit primes?

The difficulty is the same with GNFS. GNFS is the fastest method
for numbers in that range. The advantage to 3-prime RSA would be
that secret key operations are faster.
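
A back-of-the-envelope way to see this, using only the heuristic GNFS
cost formula L_N[1/3, (64/9)^(1/3)] and ignoring the o(1) term, so the
numbers are meaningful only relative to each other:

    import math

    def ln_gnfs_work(bits):
        # Heuristic GNFS cost exponent; GNFS sees only the size of N,
        # not how many primes divide it.
        ln_n = bits * math.log(2)
        c = (64 / 9) ** (1 / 3)
        return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)

    for bits in (1024, 1152):
        print(bits, "bits:", round(ln_gnfs_work(bits), 1))
    # prints roughly 60 and 63: a few e-folds apart, regardless of
    # whether the modulus has two or three prime factors.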

> Does anyone really think that 1024-bit N=pq might be practical some
> day, but 1152-bit N=pqr won't also be practical at that time?

The point is that there is a speed/security tradeoff. When comparing
2-prime to 3-prime RSA, you would usually compare them at the same
security, or the same speed. Francois just happened to choose the
parameters so that 3-prime RSA wins on both security and speed.

------------------------------

From: "John A. Malley" <[EMAIL PROTECTED]>
Subject: Re: Choice of public exponent in RSA signatures
Date: Sun, 01 Oct 2000 23:09:12 -0700

Francois Grieu wrote:
> 
[snip]
> 
> > Exponent 65537 is a horrible waste of time.
> 
> I'm trying to understand why so many professionals swear by it !
> 
 
The number e = 2^16 + 1 = 65537 is a reasonable choice for a small
encryption exponent: it keeps the number of possible unconcealed
messages negligible when encrypting with RSA, and SIMULTANEOUSLY it
lets the same message (or variations of that message) be sent to a
large number of different recipients while minimizing the threat of a
specific attack (from Coppersmith) that exploits exactly that
situation.

Here's the explanation from the HAC: 

First from section 8.2.2 viii, Security of RSA,  Message concealing:
(paraphrased, not an exact quote)

A plaintext message m, 0 <= m <= n - 1 is said to be unconcealed if m^e
mod n = m mod n. The number of unconcealed messages is given by [ 1 +
gcd(e-1, p-1)] * [ 1 + gcd(e-1, q-1)].  If p and q are random primes and
e is chosen to be a small number such as 3 or 2^16 + 1 = 65537 then the
proportion of messages which are unconcealed by RSA encryption will in
general be negligibly small and will not pose a threat to the security
of the RSA encryption in practice. 
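
That count is easy to check numerically; a tiny sketch with toy
parameters (nothing here is secure, it just verifies the formula quoted
above):

    from math import gcd

    # Tiny numerical check of the unconcealed-message count above.
    p, q, e = 11, 23, 3                      # toy parameters, not secure
    n = p * q

    predicted = (1 + gcd(e - 1, p - 1)) * (1 + gcd(e - 1, q - 1))
    observed = sum(1 for m in range(n) if pow(m, e, n) == m)
    print(predicted, observed)               # prints 9 9, the minimum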

Why the interest in 2^16 + 1 as a "small" number?

The HAC adds this in section 8.2.3, RSA encryption in practice, in Note
8.9 (small encryption exponents):
(direct quote)

"Another encryption exponent used in practice is e = 2^16 + 1 = 65537.
This number has only two 1's in its binary representation, and so
encryption using the repeated square-and-multiply algorithm requires
only 16 modular squarings and 1 modular multiplication. The encryption
exponent e = 2^16 + 1 has the advantage over e = 3 in that it resists
the kind of attack discussed in (section) 8.2.2.(ii) since it is
unlikely the same message will be sent to 2^16 + 1 recipients." 
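
To make the operation count concrete, a small square-and-multiply
sketch that simply counts the modular operations; the base and modulus
are arbitrary:

    def modexp_count(base, exponent, modulus):
        # Left-to-right square-and-multiply, counting modular operations
        # so the cost of different public exponents can be compared.
        bits = bin(exponent)[2:]
        result = base % modulus          # leading 1 bit costs nothing
        squarings = multiplications = 0
        for bit in bits[1:]:
            result = result * result % modulus
            squarings += 1
            if bit == "1":
                result = result * base % modulus
                multiplications += 1
        return result, squarings, multiplications

    for e in (3, 65537):
        _, s, m = modexp_count(1234567, e, 10**9 + 7)
        print("e =", e, ":", s, "squarings,", m, "multiplications")
    # 1 squaring and 1 multiplication for e = 3;
    # 16 squarings and 1 multiplication for e = 65537.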

The attack in section 8.2.2.(ii) is from Coppersmith against RSA using
small encryption exponents and discussed in detail in the notes for
section 8.2 in section 8.8 Notes and further references. See pages 313 -
314. Salting thwarts this attack.
 

 
John A. Malley
[EMAIL PROTECTED]

------------------------------

From: Paul Rubin <[EMAIL PROTECTED]>
Subject: Re: Choice of public exponent in RSA signatures
Date: 01 Oct 2000 23:17:43 -0700

Roger Schlafly <[EMAIL PROTECTED]> writes:
> > Does anyone really think that 1024-bit N=pq might be practical some
> > day, but 1152-bit N=pqr won't also be practical at that time?
> 
> The point is that there is a speed/security tradeoff. When comparing
> 2-prime to 3-prime RSA, you would usually compare them at the same
> security, or the same speed. Francois just happened to choose the
> parameters so that 3-prime RSA wins on both security and speed.

Well, speed/security/space, not just speed/security.  Where would a
1024-bit product of 341-, 341-, and 342-bit primes fit into the picture?

1024 = 341+341+342 is interesting because most web browsers have a
1024-bit maximum public key size.  So that split could give a
server-side speedup while still being able to work with old browsers.

It might be worth extending OpenSSL to do this, if the security is ok.

But somehow I think if anyone ever factors a 1024-bit modulus split
512+512, it won't be with GNFS.

------------------------------

From: Ray Dillinger <[EMAIL PROTECTED]>
Subject: Re: Question on biases in random numbers & decompression
Date: Mon, 02 Oct 2000 06:18:41 GMT

Benjamin Goldberg <[EMAIL PROTECTED]> wrote:

: The reason I want to try this, is that if I take my random bitstream,
: and take two bits at a time, getting a number in the range 0, 1, 2, 3,
: and discard all 3s to get a number in the desired (0, 1, 2) range, I'm
: WASTING 25% of my random bits, AND I might end up taking an arbitrarily
: long time to get a single trit.  Bleh.  One way of avoiding wasting my
: hard-earned random bits would be to use a huffman decompressor.  The
: problem with that is that I would get something like 50% 0s, 25% 1s, and
: 25% 2s.  NOT what I want.  What I *WANT* is 33% 0s, 33% 1s, and 33% 2s.

Well, you could definitely bite it off in larger chunks. If you pull 
bits out of your Random Number Stream 32 at a time, you can just 
convert it to ternary notation and take those trits. In this case, 
because it doesn't come out exactly, the first trit won't have the 
full range of possibilities -- the odds of a "2" won't be as high 
as for the other trits, or perhaps a "2" won't even be possible and 
the odds of a "1" won't be as high.  If it's the first case, you 
can discard any 2's and push back an 0 or 1 result as one of the 
next batch of bits.  If it's the second case, just discard the first 
trit.  Depending on which case it is, you should be able to waste 
either just a bit more, or just a bit less, than one bit out of 
every thirty-two, and you will *always* get a valid trit in a 
bounded time. 

Or, you could try coming up with something sneaky...  eight bits 
is a range of 256 numbers, and five trits is a range of 243 numbers. 
256 = 243 (5 trits) + 9 (2 trits) + 4 (2 bits).  So you could take 
8 bits at a time and behave like this: 

0-242 -- straight conversion to 5 trits, with no losses. 
243-251 -- subtract 243, convert the difference into 2 trits
252-255 -- subtract 252, use these 2 bits in your next batch

In the long run, though, even though we have avoided outright 
throwing stuff away, we have ended up less efficient than the 
"just bite off big chunks and convert them to ternary" approach 
above. That will *always* get us 20 trits for 4 bytes, and this 
won't. 
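
For concreteness, a minimal Python sketch of that 8-bit scheme;
os.urandom is only a stand-in for whatever the random bitstream really
is:

    import os

    def random_trits(count):
        # The 8-bit scheme above: 0-242 -> five trits, 243-251 -> two
        # trits, 252-255 -> two raw bits saved for later use.
        trits, spare_bits = [], []
        while len(trits) < count:
            if len(spare_bits) >= 8:
                byte = int("".join(map(str, spare_bits[:8])), 2)
                del spare_bits[:8]
            else:
                byte = os.urandom(1)[0]
            if byte < 243:                       # 243 = 3**5
                for _ in range(5):
                    trits.append(byte % 3)
                    byte //= 3
            elif byte < 252:                     # 9 = 3**2
                byte -= 243
                trits.extend([byte % 3, byte // 3])
            else:                                # 4 = 2**2
                byte -= 252
                spare_bits.extend([byte >> 1, byte & 1])
        return trits[:count]

    print(random_trits(20))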

                                Bear


------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: How Colossus helped crack Hitler's codes
Date: Mon, 02 Oct 2000 06:47:53 GMT

On Sun, 01 Oct 2000 20:33:36 GMT, [EMAIL PROTECTED]
(John Savard) wrote, in part:

>Which reminds me: the forthcoming book mentioned in that "Science and
>Technology" article about Mike, Copperhead, and company will be coming
>out soon as well.

The article was in "American Heritage of Technology and Invention",
and the book is "Battle of Wits" by Stephen Budiansky, ISBN
0684859327, published by Free Press.

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: Ray Dillinger <[EMAIL PROTECTED]>
Subject: Ciphers and Unicode
Date: Mon, 02 Oct 2000 06:59:32 GMT


Has anyone looked at Unicode seriously, and how it will interact 
with cipher software?  

One basic issue I see is that if we start writing English with a 
16-bit character set, we're going to get roughly 1.3 bits of 
information per character out of sixteen bits, rather than out of 
eight.  This affects the feasibility of guessing plaintext in, say, 
an 8-byte block cipher, and drives the entropy density of the 
plaintext way down.  

Yes, I know it's always good to compress before encrypting, but the 
fact is most people don't.  

Another basic issue with Unicode is that it has scads of special 
characters -- some are designated whitespace, some print left-to-
right and others right-to-left, some are designated stop or break 
characters, some don't print at all, some bit combinations are 
explicitly not part of the character set, and so on. This will 
introduce a lot of niggling little complications and possibly 
reduce security by introducing more opportunities for the coders 
to mess up. It will also cause major problems because lots of 
software will cheerfully convert Unicode text to some preferred 
format, changing one character for another invisibly to humans. 
Wanna bet digital signatures on it won't check if that happens?

Another basic problem with Unicode is that it's explicitly 
"endianness-agnostic", storing MSB first and LSB first on 
different machines.  Nifty as far as use is concerned, but 
I can picture a very persistent problem where digital signatures 
created on one platform don't check on a different platform 
because the mail software (or whatever) has "corrected" the 
endianness of the representation to match local standards. 
Or where ciphertext is reduced to unicode printable characters, 
transmitted, and the underlying representation is changed on 
the recipient's machine and it decrypts as gibberish.  

Also, you've got lots of different character sequences that 
represent the same damn glyphs.  When you're looking at an 
e with a cedilla under it, for example, you've normally got 
no way of knowing whether the underlying representation is 
the precomposed character or the e followed by the non-spacing 
accent.  Or a character you didn't even know about, in a 
completely different alphabet, that just happens to *look* 
like an e with a cedilla under it. So if you ever have to 
enter a key or a passphrase from a printout, or read off 
exact text by voice over the phone, or whatever, you have 
no way of knowing if it's going to be exactly the same text.
Whether cryptographic operations that require them to be 
the same binary representation will work becomes a crapshoot.
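
A small sketch of that hazard in Python; the choice of SHA-1 and UTF-8
here is arbitrary, the point is only that two spellings of the same
glyph hash differently until they are normalized:

    import hashlib
    import unicodedata

    precomposed = "\u0229"     # U+0229 LATIN SMALL LETTER E WITH CEDILLA
    decomposed  = "e\u0327"    # 'e' + U+0327 COMBINING CEDILLA: same glyph

    print(precomposed == decomposed)                                # False
    print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True

    # Any digest over the raw bytes sees two different messages.
    for s in (precomposed, decomposed):
        print(hashlib.sha1(s.encode("utf-8")).hexdigest())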

                                Ray



------------------------------

From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: Choice of public exponent in RSA signatures
Date: Mon, 02 Oct 2000 09:06:53 +0200

Francois Grieu <[EMAIL PROTECTED]> wrote:
> D. J. Bernstein <[EMAIL PROTECTED]> wrote:
> > Exponent 65537 is a horrible waste of time.
> 
> I'm trying to understand why so many professionals swear by it !

Could we get back on that track ? I really wonder...

   Francois Grieu

------------------------------

From: Tiemo Ehlers <[EMAIL PROTECTED]>
Subject: Re: Which is better? CRC or Hash?
Date: Mon, 02 Oct 2000 09:22:45 +0200

I don't know what you mean by a cheap microcontroller.
I want to run that algorithm on an Infineon C167. OK, compared to a Pentium III 800
it looks a little bit cheap and slow.
Do you think it is possible to run a hash function on a C167 properly?

Dido Sevilla wrote:

> Tiemo Ehlers wrote:
> >
> > I want to be able to notice any changes, no matter if done by evil forces or
> > just by coincidence.
> > And it should be infeasible to generate a file with a different content but
> > the same digest number.
> > I think real one way hash functions would do that job.
> >
> > But CRC is easier to compute. How likely is it to generate a file with a
> > different content and the same CRC value as before?
> > I don't have a clue. How can I find out?
> >
>
> It's fairly easy.  CRC's are designed to defend against non-malicious
> threats to data integrity, such as flaky hardware and line noise.  A
> good choice of polynomial will protect against single bit errors, burst
> errors, and other sorts of modifications that are caused by coincidence
> or happenstance, but if the polynomial is not kept secret, it's not that
> difficult for enemy action to produce modifications in the file that
> don't affect the digest.
>
> On the other hand, a cryptographic hash is designed specifically to
> thwart enemy action in that regard.  If that's what you're after, then
> that's what you should be going for, definitely.  And if you're not
> doing the digest computation on a cheap microcontroller with limited
> processing power, then this is definitely the way to go.
>
> --
> Rafael R. Sevilla <[EMAIL PROTECTED]>         +63 (2)   4342217
> ICSM-F Development Team, UP Diliman             +63 (917) 4458925
> OpenPGP Key ID: 0x0E8CE481
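
For what it's worth, the point about enemy action is easy to
demonstrate: over equal-length messages CRC-32 is affine, so the CRC of
an XOR combination equals the XOR of the CRCs, and anyone who knows the
(public) polynomial can exploit that.  A minimal sketch using zlib's
CRC-32:

    import zlib

    a = b"pay the sum of $0000100 to Alice"
    b = b"pay the sum of $0001000 to Alice"
    c = b"pay the sum of $9999999 to Mario"

    # XOR of equal-length messages; the result happens to read
    # b"pay the sum of $9998899 to Mario".
    forged = bytes(x ^ y ^ z for x, y, z in zip(a, b, c))
    print(forged)

    want = zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(c)
    assert zlib.crc32(forged) == want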


------------------------------

From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: Choice of public exponent in RSA signatures
Date: Mon, 02 Oct 2000 09:52:02 +0200

"John A. Malley" <[EMAIL PROTECTED]> wrote:

> The number e = 2^16 + 1 = 65537 is a reasonable choice for a
> small encryption exponent (..)  minimizing the threat of a
> specific attack (from Coppersmith) that relies on the same
> message (or a variation of that message) sent to a number of
> different recipients. (..)
> The attack in section 8.2.2.(ii) is from Coppersmith against
> RSA using small encryption exponents and discussed in detail
> in the notes for section 8.2 in section 8.8 Notes and further
> references. See pages 313 - 314. Salting thwarts this attack.

Yes, these are good reasons why e = 2^16 + 1 is used in encryption
applications (though arguably with good padding/salting e = 3
should be safe).

But in a  ** signature **  application there is nothing to hide,
except how to sign, and with proper padding that resists the
Coron-Naccache-Stern attacks [3] I see no danger with e = 3.

So again I wonder why  e = 2^16 + 1  is increasingly prescribed.

  Francois Grieu


[3] Jean-Sebastien Coron, David Naccache, Julien P. Stern:
On the Security of RSA Padding, Advances in Cryptology
CRYPTO '99; for sale at
<http://www.springer.de/cgi-bin/search_book.pl?isbn=3-540-66347-9>

------------------------------

From: Robert Davies <[EMAIL PROTECTED]>
Subject: Re: Avoiding bogus encryption products: Snake Oil FAQ
Date: Mon, 02 Oct 2000 21:03:12 +1300
Reply-To: [EMAIL PROTECTED]

C Matthew Curtin <[EMAIL PROTECTED]> wrote:

>URL: http://www.interhack.net/people/cmcurtin/snake-oil-faq.html
>Version: 1.9
>Archive-name: cryptography-faq/snake-oil
>Posting-Frequency: monthly
>
>                          Snake Oil Warning Signs:
>                        Encryption Software to Avoid
>
>                          Copyright (C) 1996-1998
>                    Matt Curtin <[EMAIL PROTECTED]>
>
>                               April 10, 1998
>
Isn't it time someone updated this? E.g. hasn't the US government
relaxed its export rules a little?

Robert


------------------------------

From: [EMAIL PROTECTED] (Phil Norman)
Crossposted-To: comp.compression,comp.theory
Subject: Re: Josh MacDonald's library for adaptive Huffman encoding
Date: 2 Oct 2000 10:33:13 +0200

On 1 Oct 2000 06:36:09 GMT,
    SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]> wrote:
>
>   I don't write notes. Even when I worked for the government
>my job was to write code to keep the stuff in the air. comments
>are never correct anyway. I always made the algorithms do what
>seems natural.

For most things, the natural thing is to fall *out* of the air.
Comments are as correct as they are made to be.  If the comments
are not correct, you should spend more effort in ensuring that
they are.

Cheers,
Phil


------------------------------

From: Anders Thulin <[EMAIL PROTECTED]>
Subject: Re: Shareware Protection Schemes
Date: Mon, 2 Oct 2000 08:29:02 GMT

musashi_x wrote:
 
> I want to create a serial number registration scheme for a piece of
> shareware I'm working on.

  I haven't kept much up to date with the shareware environment for the
last few years, but I think there used to be one or two central organizations
that could come up with suggestions and experiences from the use of various
protection schemes. You might want to check for that.

-- 
Anders Thulin     [EMAIL PROTECTED]     040-10 50 63
Telia Prosoft AB,   Box 85,   S-201 20 Malmö,   Sweden

------------------------------

From: [EMAIL PROTECTED] (D. J. Bernstein)
Subject: Re: Choice of public exponent in RSA signatures
Date: 2 Oct 2000 08:43:07 GMT

Francois Grieu  <[EMAIL PROTECTED]> wrote:
> That's my view too in theory, but in practice I hesitate to recommend
> Rabin signatures, because if things go wrong with the padding, they
> tend to go wrong so badly that the factorization of the public key is
> revealed.

This is certainly not unique to Rabin-Williams signatures. You'll reveal
your secret key if you screw up your exponentiation in RSA, for example,
or your random number generation in ElGamal/Schnorr/DSA.

The obvious solution is to stop screwing up. This isn't rocket science.

> the Jacobi symbol is hard to grasp by the implementor

Huh? The only Jacobi symbols used are b^((p-1)/2) mod p, in situations
where b^((p+1)/4) mod p is going to be computed in any case.
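
A toy numeric illustration of that point (not the Rabin-Williams
signing procedure itself, just the two exponentiations involved), for a
prime p with p = 3 (mod 4):

    p = 1000003                  # a prime with p % 4 == 3
    x = 123456
    b = x * x % p                # b is a square mod p by construction

    assert pow(b, (p - 1) // 2, p) == 1   # Euler criterion / Jacobi symbol
    root = pow(b, (p + 1) // 4, p)        # square root when p = 3 (mod 4)
    assert root in (x, p - x)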

---Dan

------------------------------

From: David Blackman <[EMAIL PROTECTED]>
Subject: Re: About implementing big numbers
Date: Mon, 02 Oct 2000 19:55:27 +1100

Martin Miller wrote:
> 
> Hi,
> 
> I would like to know if there is information on the web on implementing in
> C big numbers, such as the ones used in RSA. Is it difficult ?
> 
> I would also like to know what kind of solution most crypto software use
> when they need big numbers.

I think most crypto libraries that need big numbers would include the
necessary big number routines. I'm pretty sure this is true for rsaref,
ssleay, and openssl, which are probably the most popular ones.

> Is there a good library I should use ?
> 
> Would it be better and not too difficult to implement them myself?
> 
> I plan to do this under Linux, but I'll maybe port the software on
> Windows.
> 
> Thank you!
> 
> Martin.

GMP, the Gnu Multi-Precision library. It's big, it's heavy, and it
includes heaps of stuff you'll never need, but it does seem fairly
quick, and if you're a Linux person, you'll probably like the license
agreement.

http://www.swox.com/gmp/

There's plenty of other libraries out there which others on the group
will no doubt recommend. Some of them are good. It's also fun but a bit
tedious to roll your own.

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On block encryption processing with intermediate permutations
Date: Mon, 02 Oct 2000 11:10:58 +0200



Bryan Olson wrote:
> 
> Mok-Kong Shen wrote:
> > Bryan Olson wrote:
> > >
> > [snip]
> > > I assumed the attacker could get the same permutation in
> > > different messages.  Your only specific suggestion of how
> > > the PRNG is seeded divided the message in half and used each
> > > half to determine the permutation for the other, which is
> > > obviously repeatable in a chosen plaintext attack.  If the
> > > attacker cannot re-start the PRNG, then choosing all blocks
> > > of x the same, except the block that differs from x', looks
> > > like a promising tactic.
> >
> > In my original post, I first said of using a PRNG,
> > which I intend not to reseed (see answer to Tom St. Denis),
> > i.e. the permutations are different for different messages.
> 
> I see nothing about how the sender and receiver synchronize,
> only that the key for the permutation should be independent
> of the key to the block cipher.
> 
> Given a non-restartable PRNG, keeping all the blocks
> constant looks promising.

Each session uses a (different) secret seed for the PRNG.
(I use effectively more key material, as said in a previous
follow-up.)

> [...]
> > I doubt that the answer is yet very
> > clear to me owing to the not too small volume of texts of
> > debate involved in a number of follow-ups:
> 
> So you've noticed that a large volume of texts devoid
> of any actual result is counter-productive have you?
> 
> > In your opinion
> > does my introduction of the permutation steps diminish or
> > enhance the security, i.e. whether the additional
> > computational cost involved has caused a negative or
> > positive effect? Thanks.
> 
> Hard to sell exposing the key as a good thing.

Sorry, the above sentence is difficult for me (a foreigner)
to understand. (I even have difficulty parsing it with my 
comparatively poor knowledge of English grammar.) Would 
you please answer my question once again in some concrete 
and unambiguous manner? Thanks in advance.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Choice of public exponent in RSA signatures
Date: Mon, 02 Oct 2000 11:10:50 +0200



Roger Schlafly wrote:
> 
[snip]
> A lot of crypto is based on superstition. 

Well said.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On block encryption processing with intermediate permutations
Date: Mon, 02 Oct 2000 11:11:05 +0200



Mok-Kong Shen wrote:
> 
> I like to add some remarks that are mainly resume of what I
> wrote in the other follow-ups:
[snip]

Addendum:

(7) Generalizing (4), different blocks may be processed
    by different block ciphers, if these have the same 
    number of cycles.

M. K. Shen

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Signature size
Date: Mon, 02 Oct 2000 08:48:03 GMT



> What is your platform?  What are the exact requirements?
>
> My suggestion is to consider ECC since 163-bit binary fields GF(2^163)
> appear to be secure (I would use a slightly larger field).  They
> essentially use the ElGamal type math if I am not mistaken.

Is ECC ElGamal patented by Gericom? Any pointers to source code
(preferably in Java but C/C++ will do)?

> Why would you need to sign a 32 bit value though?

Platform: Java on desktop computers

The signature will be used to secure software serial numbers. To be
more precise, it is only supposed to keep people from building their
own serial number generators. Since the user has to key it in, it has to
be as short as possible (sacrificing some security for that would be
fine: I think 56-bit DES-like security would be enough).

And yes, I know that a program can never be secure and a cracker can
always change the code verifying the serial number (especially in
Java). But as long as there are no unauthorized serial numbers floating
around, people who want to use it have to download the modified program,
which lots of people will not do since it is a) insecure (viruses, etc.),
b) more of a hassle, and c) more obviously a criminal act.

Thanks for your help!

kryps


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: David Blackman <[EMAIL PROTECTED]>
Subject: Re: Ciphers and Unicode
Date: Mon, 02 Oct 2000 20:04:00 +1100

Ray Dillinger wrote:
> 
> Has anyone looked at Unicode seriously, and how it will interact
> with cipher software?
> 
> One basic issue I see is that if we start writing english with a
> 16-bit character set, we're going to get one point-three bits of
> information (per character) out of sixteen bits, rather than 8.
> This affects the feasibility of guessing plaintext in, say, an
> 8-byte block cipher, and drives the entropy of the plaintext
> way way down.

Unicode specifies an encoding called UTF-8 that uses 8 bits per
character to encode English text. I think most Unicode applications will
use either UTF-8, or the Java encoding, for files and interchange. The
Java encoding also uses 8 bits per character for English, and also
(confusingly) is called UTF-8.
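
For example (byte counts via Python's codecs; "utf-16-be" here stands
in for raw 16-bit Unicode without a byte order mark):

    text = "attack at dawn"
    print(len(text.encode("utf-8")))       # 14: one byte per ASCII character
    print(len(text.encode("utf-16-be")))   # 28: two bytes per character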

Some of the other issues you bring up sound familiar. I think i saw them
either on Risks Digest, or maybe the CryptoGram newsletter. Try a web
search. I don't think i heard any suggested remedies, except "Be
careful". I expect major software vendors to make some mistakes with
this, but no worse than they do with everything else.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
