Cryptography-Digest Digest #104

2000-02-12 Thread Digestifier

Cryptography-Digest Digest #104, Volume #11  Sat, 12 Feb 00 05:13:00 EST

Contents:
  Re: UK publishes 'impossible' decryption law (wtshaw)
  Re: Latin Squares (was Re: Reversibly combining two bytes?) (Terry Ritter)
  Re: Latin Squares (was Re: Reversibly combining two bytes?) (Terry Ritter)
  Re: BASIC Crypto Question (wtshaw)
  Re: need help with a basic C++ algorithm (wtshaw)
  Re: Period of cycles in OFB mode (Terry Ritter)
  Re: Twofish vs. Blowfish ("Douglas A. Gwyn")
  Re: Which compression is best? (Thomas Pornin)
  Re: Continually Secure Password/Pin (Thomas Wu)
  Re: BASIC Crypto Question (Johnny Bravo)
  Re: Does the NSA have ALL Possible PGP keys? (Highdesertman)



From: [EMAIL PROTECTED] (wtshaw)
Crossposted-To: talk.politics.crypto
Subject: Re: UK publishes 'impossible' decryption law
Date: Sat, 12 Feb 2000 00:18:09 -0600

In article 881rin$cc6$[EMAIL PROTECTED], [EMAIL PROTECTED]
(Mike Eisler) wrote:


 As was pointed out, under the UK law, you are guilty till proven
 innocent.
 
The French and Spanish have finally defeated the British.  Let's see...how
many centuries did it take?
-- 
If Al Gore wants to be inventor of the internet, complain that he 
did a lousy job.  If he admits to not doing much, complain that he 
is a slacker. Now, do we want him in charge of security?

--

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Latin Squares (was Re: Reversibly combining two bytes?)
Date: Sat, 12 Feb 2000 07:50:26 GMT


On Fri, 11 Feb 2000 18:21:58 -0800, in
882g5q$k0k$[EMAIL PROTECTED], in sci.crypt "r.e.s."
[EMAIL PROTECTED] wrote:

"Terry Ritter" [EMAIL PROTECTED] wrote ...
[...]
: My main reason for using combiners is to complicate or eliminate
: known-plaintext or defined-plaintext attacks.  Normally, in a stream
: cipher using an additive combiner, known-plaintext (or just "guessed
: plaintext") will immediately reveal the keystream or confusion stream.
: Then the opponents start to work on the key generator.  A keyed
: nonlinear combiner at least prevents easy access.
:
: The reason for having *Latin square* combiners is to prevent leakage
: through either input port.  We certainly don't want leakage from
: plaintext.  But if we choose things at random I think it would be
: common to find some amount of correlation between output and key under
: known-plaintext conditions.  Then the opponents can at least start to
: work on the key generator.
[...]

Additive combiners (XOR or modular addition) are, of course,
a subset of Latin Square combiners.  

Yes, and are clearly linear as mod 2 polys (XOR) or mod n (n probably
= 2**m for addition).  This is not just a theoretical weakness. 
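
The point about additive combiners is easy to demonstrate: under known
plaintext, an XOR combiner surrenders the keystream with no work at all.
A minimal sketch (the keystream and message here are made up for
illustration):

```python
import os

# Additive (XOR) combiner: C = P xor K.
keystream = os.urandom(16)          # stand-in for a generator's output
plaintext = b"known plaintext!"     # 16 bytes, assumed known to the attacker

ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

# Known-plaintext attack: K = P xor C -- the combiner leaks the
# confusion stream directly, and work can begin on the generator.
recovered = bytes(p ^ c for p, c in zip(plaintext, ciphertext))
assert recovered == keystream
```

A keyed nonlinear combiner denies exactly this one-step recovery.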


I've looked at your
on-line Glossary to see what you mean by "nonlinear combiner",
but the definition of "linear" is not really clear to me in
this context. The confusion is surely mine, but I have trouble
interpreting the above paragraphs because I'm not sure where
you draw the line between linear and nonlinear combiners. E.g.,
do you consider all Latin Square combiners to be linear?

I think that's a very reasonable question, if perhaps less easily
answered.  In my experience, the distinction between "linear" and
"nonlinear" is not as clear as one might like throughout cryptography.
Presumably any form of linearity counts, and there can be many such
forms.  

We might consider permutation itself to be linear, but I think to do
so we must have access to the whole permutation.  If we know only part
of it, or none at all (e.g., one term of a larger equation), we may not
be able to manipulate this "linearity" in a useful linear way.  The
S-boxes in DES are composed of permutations, yet they are always
called "nonlinear."  

One way to think about linearity is the ease with which one can
extrapolate from a known transformation (from domain value to range
value) to expose an unknown transformation (given some other range or
domain value).  This is clearly trivial with XOR or (+), less trivial
with a random permutation of 256 values, and much less trivial when
multiple keyed transformations are involved in an equation.  

I have also done some work comparing permutations in Latin squares
with respect to "Boolean function nonlinearity."  This is basically a
distance measure with respect to "affine Boolean functions" or "Walsh
functions," for any particular bit-position in such a table.  (An
"8-bit" permutation with 256 elements would have 8 Boolean function
nonlinearity values, of which we would typically take the smallest as
the indicator of the weakest bit in the permutation.)  Each result
tells us how well a bit-string can be modeled by the best linear
approximation to that sequence.  For the 256-bit sequences generated
by random "8-bit" permutations, typically something like 100 bits must
change to reach the linear Boolean function which is closest to any of
the 8 measured bit sequences.  I see this as 
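
The measurement described above can be sketched with a fast
Walsh-Hadamard transform: for each output bit of an "8-bit" permutation,
take its 256-bit column, and the distance to the nearest affine Boolean
function is 128 - max|W|/2.  A sketch (the random permutation is only an
illustration, not any particular table from the article):

```python
import random

def bit_nonlinearity(seq):
    """Distance from a 256-bit sequence to the nearest affine Boolean function."""
    n = len(seq)
    w = [1 - 2 * b for b in seq]       # map 0/1 -> +1/-1
    h = 1
    while h < n:                       # in-place fast Walsh-Hadamard transform
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return n // 2 - max(abs(x) for x in w) // 2

perm = list(range(256))
random.shuffle(perm)

# One nonlinearity value per output bit; the smallest flags the weakest bit.
values = [bit_nonlinearity([(perm[x] >> b) & 1 for x in range(256)])
          for b in range(8)]
weakest = min(values)
```

An affine column (e.g. bit 0 of the identity permutation) measures 0;
random 8-bit permutations typically land near the ~100-bit figure quoted.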


Cryptography-Digest Digest #106, Volume #11  Sat, 12 Feb 00 13:13:01 EST

Contents:
  SHA1 and longer keys? ([EMAIL PROTECTED])
  Re: PKI's and CA's ("Lyal Collins")
  Re: Fwd: Anyone feel like joining a cool DeCSS related project? (Lincoln Yeoh)
  Re: Twofish vs. Blowfish (Daniel S. Riley)
  Re: Does the NSA have ALL Possible PGP keys? (Sander Vesik)
  Basic Crypto Question 2 ([EMAIL PROTECTED])
  Re: encryption export question (Frank Hecker)
  Re: Which compression is best? (Tim Tyler)
  Re: Basic Crypto Question 2 (Sandy Harris)
  YACM - Yet Another Chaining Mode (Sander Vesik)
  Re: Basic Crypto Question 2 (Mok-Kong Shen)
  Re: Which compression is best? (Tim Tyler)



From: [EMAIL PROTECTED]
Subject: SHA1 and longer keys?
Date: Sat, 12 Feb 2000 14:04:38 GMT

Hi there!

I'm a newbie to encryption and was wondering if the following method in
VisualBasic to get larger keys using a HMAC_SHA1 class from another
developer is secure, assuming that HMAC_SHA1 does what it is supposed to
do and that the session key array [1..8] elements each contain 20 bytes
of high-entropy random data:

For i = 2 To 8
  result = HMAC_SHA1(password, sessionKeys(i) + _
           HMAC_SHA1(password, result + sessionKeys(i - 1))) + result
Next

This returns 160 bytes that I use as the key; password is the literal
password string.  Is this secure at all? If not, what's the best way to
get a larger key than the result of SHA1?
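
For reference, the loop transcribed into Python with the stdlib hmac
module (the password and session keys below are placeholders, and
hmac_sha1 stands in for the third-party class).  One detail worth
noting: if result starts out empty, seven iterations of a 20-byte
HMAC-SHA1 output yield 140 bytes, not 160, so presumably result is
seeded before the loop:

```python
import hashlib
import hmac
import os

def hmac_sha1(key, msg):
    # Stand-in for the HMAC_SHA1 class; 20-byte output per call.
    return hmac.new(key, msg, hashlib.sha1).digest()

password = b"example password"                      # placeholder literal password
sessionkeys = [os.urandom(20) for _ in range(9)]    # index 1..8 used, as in VB

result = b""
for i in range(2, 9):                               # VB's For i = 2 To 8
    inner = hmac_sha1(password, result + sessionkeys[i - 1])
    result = hmac_sha1(password, sessionkeys[i] + inner) + result

# 7 iterations x 20 bytes = 140 bytes of derived key material
assert len(result) == 140
```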

Thanks,

John Stein


Sent via Deja.com http://www.deja.com/
Before you buy.

--

From: "Lyal Collins" [EMAIL PROTECTED]
Subject: Re: PKI's and CA's
Date: Sat, 12 Feb 2000 10:31:44 +1100

In answer to 2:
Since the private key associated with your public key is protected by a
password, you can't improve protection using software.
Embedded systems, such as a smartcard, can perform more robust password
verification than can be achieved in software, provided password-entry
protection is implemented in the embedded hardware.  A tamper-evident
PINpad/reader, or possibly a tamper-evident, type-approved biometric
sensor/reader combination, may do the trick.

But we are all stuck with passwords for now.
What the public/private key is used for after your password is verified is
immaterial if the password is compromised or spoofed, especially on second
or subsequent logons.

lyal


[EMAIL PROTECTED] wrote in message 880lq7$t8g$[EMAIL PROTECTED]...
I am trying to understand PKI and the role of the
CA's, toolkits etc. Here are a few of my queries,
can any one help?

1) To use PKI technology you need CSPs.
Where do these come from (not who makes them)? Do
they get installed when you go to the CA?

2) When logging on to send a secure message how
can the computer verify that it is you by any
more than a password and therefore bypass the
main problem that "passwords are notoriously easy
to crack".

3) When a secure message is sent is everything
verified with the CA at the time? And the
receiver's CA?

4) Do CA's sell the "middleware" or "toolkits" so
the PKI may be used across applications?

As you can probably tell I am rather confused!
Any help on any of these issues or other related
issues that you think are important would be very
much appreciated.

Thank you

Philly






--

From: [EMAIL PROTECTED] (Lincoln Yeoh)
Crossposted-To: comp.security.pgp.discuss
Subject: Re: Fwd: Anyone feel like joining a cool DeCSS related project?
Date: Sat, 12 Feb 2000 16:15:56 GMT
Reply-To: [EMAIL PROTECTED]

How about looking at 

http://personal.sip.fi/~lm/c2txt2c/

It makes for weird reading tho :). I prefer reading C to that "english".

On Fri, 11 Feb 2000 03:42:33 +0100, [EMAIL PROTECTED] (Tony L. Svanstrom)
wrote:

--- Begin Forwarded Message ---

Subject: Anyone feel like joining a cool DeCSS related project?
From:Omri Schwarz [EMAIL PROTECTED]
Newsgroups:  comp.lang.perl.misc
Date:Thu, 10 Feb 2000 20:39:58 -0500

I'm writing a script that converts ANSI C code
into English sentences that are reasonably descriptive of what
the code is doing. The purpose is to demonstrate that 
computer code is a form of expression. By composing 
a "recipe book" out of a piece of C code (like the DeCSS code),
one can then distribute it in a free-speech-protected form and
convert it back to C upon receipt.

Anyway, DeCSS is already like erm everywhere. 

What's this "free speech" thing? Might even be a good idea... How about the
USA being the first to do this "free speech" thing? ;)

Cheerio,

Link.

Reply to: @Spam to
lyeoh at  @[EMAIL PROTECTED]
pop.jaring.my @ 
***

--

From: [EMAIL PROTECTED] (Daniel S. Riley)
Subject: Re: Twofish vs. Blowfish
Date: 12 Feb 2000 11:26:56 -0500

David Hopwood [EMAIL PROTECTED] writes:
 Eric Lee Green wrote:
  John Myre wrote:
   Why wouldn't it be 


Cryptography-Digest Digest #108, Volume #11  Sat, 12 Feb 00 17:13:01 EST

Contents:
  Re: *** ECC - new strong and fast calc method ("Craig Clapp")
  Re: RFC: Reconstruction of XORd data (David A Molnar)
  Re: BASIC Crypto Question (Johnny Bravo)
  Has some already created a DATA DIODE? (No Spam)



From: "Craig Clapp" [EMAIL PROTECTED]
Subject: Re: *** ECC - new strong and fast calc method
Date: Sat, 12 Feb 2000 20:51:26 GMT


David Hopwood wrote in message [EMAIL PROTECTED]...
-BEGIN PGP SIGNED MESSAGE-

Greg wrote:

 Here is another stab at trying to make things run faster for ECC.
 Assuming that none or only portions may already be covered by
 existing patents, the remainder is immediately submitted to the
 public domain for free use by all.

 Patents that MAY have some overlap include 5,987,131.

 Given a curve over a field of say 163 bits, I have found
 that average performance in calculating a point multiplication
 can be reduced to 1/3 the time if all points resulting from
 each power of 2 are precomputed and the ones matching the powers
 of two in the private key are then added together.

This is a special case of a well-known technique called "fixed
base windowing", for example see Handbook of Applied Cryptography
section 14.6.3 (which describes it for exponentiation in a
multiplicative group, but it's obvious that it also applies to
multiplication in an additive group). The algorithm you've described
is the case where h = 2.

Chapter 14 of HAC is at
http://www.cacr.math.uwaterloo.ca/hac/about/chap14.pdf (or .ps);
I strongly recommend reading it. Chapter 3 is also relevant to
this thread.


I second that recommendation.
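
The precompute-and-add idea quoted above (the h = 2 case of fixed-base
windowing) is easiest to see for exponentiation in a multiplicative
group, matching HAC's presentation; swapping modular multiplication for
point addition gives the elliptic-curve version.  A sketch with made-up
parameters:

```python
# Precompute g^(2^i) mod p once; each later exponentiation then costs only
# one multiplication per set bit of the exponent (point additions, for ECC).
p = 2**127 - 1           # made-up prime modulus
g = 5                    # made-up fixed base

table = [g % p]
for _ in range(162):     # g^(2^0) .. g^(2^162), as for a 163-bit scalar
    table.append(table[-1] * table[-1] % p)

def fixed_base_exp(e):
    acc = 1
    for i, t in enumerate(table):
        if (e >> i) & 1:             # fold in the precomputed power per set bit
            acc = acc * t % p
    return acc

e = 0x3ab54f                         # example exponent
assert fixed_base_exp(e) == pow(g, e, p)
```

On average half the bits are set, hence the roughly 1/3 time claimed for
a full 163-bit scalar versus a plain double-and-add.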


 This is the part that most likely conflicts with 5,987,131.


If you have enough storage space for 163 powers of the base
(including the base itself), then you would need just 28 point
additions to achieve 160-bit exponent diversity by properly
applying the techniques of US5,987,131, not the 80 that you
plan on doing.  If you use the fixed-base comb method of [LL94]
(see also HAC 14.6.3 (iii) ) then you can get by with 36 point
additions. To get down to 28 point additions using the fixed-base
comb method you need something north of 381 table entries
(381 is enough to get down to 29 point additions, 508 gets you
down to 27).


Fixed base windowing is not patented.

Hmmm. If you are using the term as it is used in HAC (which for
all I know may be the origin of this term), then you may note that
HAC attributes the fixed-base windowing method (algorithm
14.109) to Brickell et al. (HAC page 633, ref [204] ).  This is
the method embodied in U.S. patent 5,299,262 (HAC ref [203] ).
Moreover, this is one of the ten selected patents that HAC
details in section 15.2.3, and HAC page 644 explicitly references
the technique of US5,299,262 as that presented in algorithm 14.109.

Of course, the scope of the patent is defined by its claims, all of
which are related to using the fixed-base windowing technique for
exponentiating in cryptographic systems.  So, strictly speaking you
may be right - fixed base windowing is not patented in the abstract
sense. If you have non-cryptographic uses for the fixed-base
windowing technique then you'll probably have no problems with
patent infringement.  :-)

 In fact, mathematical
algorithms are not supposed to be patentable (in any country,
AFAIK), although the US Patent Office in particular is very
inconsistent about applying this rule.

[...]
 Additionally, if the key space is limited to 80 bits (in this
 case), the number of point additions on average is cut in half.
 That is, the average time it takes to calculate the resulting
 point is 1/6 of the average using standard calculations using
 a full key space.  I argue that security is not weakened for
 the following reasons:

 Some have argued that 80 bits is enough to prevent a brute force
 key search attack.  This is accepted as obvious.

 Some have argued that limiting the private key to 80 bits is enough
 to make the Pollard Lambda (aka Pollard Kangaroo) attack feasible,
 since the attack can be limited by boundaries in the key space.

 However, if the bits that are used to define the key are 80 in
 number, it does not matter where they are located.  The total
 time to calculate remains roughly the same.  Therefore, they
 can span the entire 163 bits in any random fashion.

The sci.crypt thread from November 1999 (title "bits of
diffiehellman private key") that I mentioned before was about
precisely this case, assuming the positions of the 80 bits are
known but arbitrary. In this case the cost of the attack described
there would be 2^41 curve additions (with a fairly large memory
requirement of 2^40 curve points, although it may be possible to
reduce that).

Also note that as pointed out in the notes on page 128 of HAC, an
exponent space with a Hamming weight of t (i.e. there are exactly
t set bits but in unknown 


Cryptography-Digest Digest #109, Volume #11  Sat, 12 Feb 00 19:13:01 EST

Contents:
  Re: YACM - Yet Another Chaining Mode (zapzing)
  Re: BASIC Crypto Question (wtshaw)
  Re: UK publishes 'impossible' decryption law (zapzing)
  Re: BASIC Crypto Question (wtshaw)
  Re: Period of cycles in OFB mode ("Scott Fluhrer")
  Re: Latin Squares (was Re: Reversibly combining two bytes?) ("r.e.s.")
  Re: Which compression is best? (Jerry Coffin)
  Re: Does hashing an encrypted message increase hash security? 
([EMAIL PROTECTED])
  Re: Period of cycles in OFB mode (Tim Tyler)
  Re: Block chaining (Tim Tyler)



From: zapzing [EMAIL PROTECTED]
Subject: Re: YACM - Yet Another Chaining Mode
Date: Sat, 12 Feb 2000 22:13:34 GMT

In article [EMAIL PROTECTED],
  Sander Vesik [EMAIL PROTECTED] wrote:
 Yet Another Chaining Mode - CfBC

 Probably mainly of interest to anybody collecting various non-defective
 chaining modes. It 'hides' plaintext structure somewhat better than
 CBC, while requiring more work on encryption and decryption.

 Let:
   P_n - n-th plaintext to be enciphered
   C_n - n-th ciphertext
   IV_n - n-th 'derivative' of the initialisation vector
   E_k - encryption with key k
   D_k - decryption with key k

 Encryption:
   IV_0=E_k(IV)
   IV_n=E_k(IV_(n-1))
   C_n=E_k(P_n ^ IV_n)

 Decryption:
   IV_0=E_k(IV)
   IV_n=E_k(IV_(n-1))
   P_n=D_k(C_n) ^ IV_n

 Properties:

 DRAWBACK - divulges ciphertext at double rate.

 1) hides plaintext structure well
 2) blocks are chained; blocks cannot be inserted by a 3rd party in the
 middle of the stream even if that party knows the key
 3) in hardware can be implemented in parallel, requiring about twice the
 resources for no slow-down; in software runs at half the speed of an
 interleaved implementation
 4) adds an additional tiny amount of security above just plaintext
 structure hiding.

 --
   Sander

   There is no love, no good, no happiness and no future -
   these are all just illusions.
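
The quoted mode round-trips as described.  A toy sketch, under stated
assumptions: the "block cipher" below is a made-up invertible stand-in
(addition of a key-derived constant, linear and not secure), and the
indexing is read so that the first block uses IV_0, with the IV advancing
once per block:

```python
import hashlib

BLOCK = 16
MASK = (1 << (8 * BLOCK)) - 1

def keyconst(k):
    return int.from_bytes(hashlib.sha256(k).digest()[:BLOCK], "big")

def E(k, b):
    # Made-up invertible stand-in for a real block cipher: add a
    # key-derived constant mod 2^128.  Only the chaining is illustrated.
    return ((int.from_bytes(b, "big") + keyconst(k)) & MASK).to_bytes(BLOCK, "big")

def D(k, b):
    return ((int.from_bytes(b, "big") - keyconst(k)) & MASK).to_bytes(BLOCK, "big")

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def yacm_encrypt(k, iv, blocks):
    ivn, out = E(k, iv), []              # IV_0 = E_k(IV)
    for p in blocks:
        out.append(E(k, xor(p, ivn)))    # C_n = E_k(P_n ^ IV_n)
        ivn = E(k, ivn)                  # IV_n = E_k(IV_(n-1))
    return out

def yacm_decrypt(k, iv, blocks):
    ivn, out = E(k, iv), []
    for c in blocks:
        out.append(xor(D(k, c), ivn))    # P_n = D_k(C_n) ^ IV_n
        ivn = E(k, ivn)
    return out

key, iv = b"k", bytes(BLOCK)
msg = [b"sixteen byte blk", b"another 16-byter"]
assert yacm_decrypt(key, iv, yacm_encrypt(key, iv, msg)) == msg
```

Note the two encryptions per block (one for the IV chain, one for the
data), which is where the extra work comes from.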


What you have essentially done is create
a poor PRNG for the IV_i series.
It is poor (IMHO) because it could have
short cycles, and it could have short cycles
for some key-IV pairs and not others.

I have always thought this would be
a good idea, though, and it is similar:

C_i=E_k(P_i^IV_i)
IV_(i+1)=H(IV_i + P_i + i)

where, in addition to your notation,
H is some hash function, and
the "+" operator indicates
catenation.
i could be expressed in a fixed
number of bits (i.e., with leading
zeros), or in "free form"
(however many bits it has, that's
how many it is expressed in).
For the latter, you would need to
have a hash function that could handle
variable length blocks.
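
This variant can be sketched the same way.  Here H is SHA-256 (so blocks
are 32 bytes), "+" is concatenation as stated, i is taken in the
fixed-width four-byte form, and E_k is a made-up XOR placeholder (an
involution, so D_k = E_k), purely to show the hash-chained IV:

```python
import hashlib

def H(data):
    # H is a hash function; SHA-256 here, so blocks are 32 bytes.
    return hashlib.sha256(data).digest()

def E(k, b):
    # Made-up stand-in for E_k: XOR with a key-derived pad.  Not secure;
    # only the IV chain is of interest.
    return bytes(x ^ y for x, y in zip(b, H(k)))

def encrypt(k, iv, blocks):
    out = []
    for i, p in enumerate(blocks):
        out.append(E(k, bytes(x ^ y for x, y in zip(p, iv))))  # C_i = E_k(P_i ^ IV_i)
        iv = H(iv + p + i.to_bytes(4, "big"))  # IV_(i+1) = H(IV_i + P_i + i)
    return out

def decrypt(k, iv, blocks):
    out = []
    for i, c in enumerate(blocks):
        p = bytes(x ^ y for x, y in zip(E(k, c), iv))          # P_i = D_k(C_i) ^ IV_i
        out.append(p)
        iv = H(iv + p + i.to_bytes(4, "big"))                  # same chain as encrypt
    return out

key, iv0 = b"key", bytes(32)
msg = [b"A" * 32, b"B" * 32]
assert decrypt(key, iv0, encrypt(key, iv0, msg)) == msg
```

Because the IV update hashes in the plaintext and the counter i, the
short-cycle concern above largely disappears.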

But what would we call it?
Oh no, we could run out of letters!

--
Do as thou thinkest best.



--

From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: BASIC Crypto Question
Date: Sat, 12 Feb 2000 15:40:13 -0600

In article [EMAIL PROTECTED], Johnny Bravo
[EMAIL PROTECTED] wrote:

  There are text only ciphers, but as a general rule they are easy to
 break, and are rather old.  Newer ciphers encrypt anything into binary
 data, you could use UUencode to convert it back and forth to ascii within
 you program if necessary, but simple binary attachments via email are much
 less work.

The general rule is just that, and not always true at all. Newer ciphers are
not necessarily all binary either.  Life is simple when you ignore the
ever-present exceptions that may be better than prevalent choices.

 
 Obviously data mapping is important...so with a 32 bit colour image, one
 would want to map down the pixel...so for a 64 bit block cipher that
 would be two pixels...i.e. no sheet mapping across pixels
 
   I fail to see the point.  The cipher has no idea what is being
 encrypted, it makes no difference if it is text, a 2 color image, a mp3 or
 a truecolor image. 

Binary often presents color choices as a compromise when base 3
representation is more exact.  Consider that color is in three mixed hues
in displays.  So, the trit-presented values are powers of three, including
9, 27, 81, and 243 for few colors.  Whenever you see 256 colors, you must
ask what they are, since binary at that level is not a natural fit for the
real world.

With bigger numbers you can resolve three equal outputs, but hue changes
are easily noticed in different resolutions, especially when you go to
more colors.  Such changes may never get as close to the color as it
should have been in the first place.
-- 
If Al Gore wants to be inventor of the internet, complain that he 
did a lousy job.  If he admits to not doing much, complain that he 
is a slacker. Now, do we want him in charge of security?

--

From: zapzing [EMAIL PROTECTED]
Crossposted-To: talk.politics.crypto
Subject: Re: UK publishes 'impossible' 


Cryptography-Digest Digest #111, Volume #11  Sun, 13 Feb 00 02:13:01 EST

Contents:
  Re: Latin Squares (was Re: Reversibly combining two bytes?) (zapzing)
  Re: Counter mode (was Re: Period of cycles in OFB mode) (David Wagner)
  Re: Do 3 encryptions w/ a 40-bit key = 120 bits? (David Wagner)
  Re: Block chaining (David Wagner)
  Re: Basic Crypto Question 2 ("Douglas A. Gwyn")
  Re: Which compression is best? ("Douglas A. Gwyn")
  Re: Message to SCOTT19U.ZIP_GUY ("Douglas A. Gwyn")
  Re: BASIC Crypto Question ("Douglas A. Gwyn")
  Re: Which compression is best? (Jerry Coffin)
  Re: RFC: Reconstruction of XORd data (Samuel Paik)
  SHA-1 sources needed! ("Nikolaus")



From: zapzing [EMAIL PROTECTED]
Subject: Re: Latin Squares (was Re: Reversibly combining two bytes?)
Date: Sun, 13 Feb 2000 01:59:23 GMT

In article [EMAIL PROTECTED],
  [EMAIL PROTECTED] (Terry Ritter) wrote:

 On Sat, 12 Feb 2000 01:36:12 GMT, in 882ded$6q9$[EMAIL PROTECTED], in
 sci.crypt zapzing [EMAIL PROTECTED] wrote:

 In article [EMAIL PROTECTED],
   [EMAIL PROTECTED] wrote:
  Latin squares are really only useful if the input alphabet and key
  alphabet are the same size. The DES S-boxes don't have the same sizes
  but seem to work OK.
 
 
 
 Well, actually that is true by definition,
 since a Latin *square* must be square.
 But your post gives me an interesting idea:
 what if we generalize it to a Latin *rectangle*?

 I suppose much would depend upon how it is used.  In many cases it
 might be better to use multiple Latin squares:  If we just drop in
 multiple occurrences of values at random, we are almost guaranteed to
 have some grouping which will highlight some locality of input values.
 This would leak more information than would happen if groupings were
 minimized by the enforced spacing of different tables.


I believe that is correct. If the column number
is the message and the row number is the
encrypting symbol, then if we make up
the "combining rectangle" by stacking up
rows that are permutations of the bytes from
0..255, then we would not be guaranteed that
every column would contain exactly 256 instances
of every byte, but the imbalance would
be much less severe.

And your idea of using multiple Latin squares
could be used too: we could use a Latin
square, followed by a generalized combining
function, followed by a Latin square. That
would make an encrypting symbol of an
effective size of three bytes: no highlighted
input values, and more effective entropy
than a generalized combining function.

--
Do as thou thinkest best.



--

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Counter mode (was Re: Period of cycles in OFB mode)
Date: 12 Feb 2000 18:17:07 -0800

In article [EMAIL PROTECTED],
David Hopwood  [EMAIL PROTECTED] wrote:
  But it is worth mentioning that counter mode also starts to exhibit
  some regularities (a birthday paradox-like issue) after about 2^32
  blocks, so it would only be prudent to change keys well before that
  point.
 
 Really? I agree that it's probably prudent not to encrypt that much
 data with the same key, but I thought the regularities in, say, CBC mode
 were due to collisions in the cipher input (i.e. with random 64-bit inputs
 you expect a collision after about 2^32 blocks, and since the encryption
 function is a fixed permutation, that corresponds to a collision in the
 output). In counter mode there will be no collisions in the input.

Yup.  There's your regularity. :-)

By regularity, I mean it behaves differently from, say,
a one-time pad would (with true randomness).  In other words,
the output of counter mode can be distinguished from random
after about 2^32 blocks (if there are collisions, it is not
counter mode; if there are no collisions, probably it's not
random).

It may be non-trivial to exploit this property in a 
ciphertext-only attack model (certainly it's not as favorable
for the cryptanalyst as a stream cipher with a period of 2^32
would be!), but it's still unpleasant enough to make me want
to change keys before we approach the point where this sort
of information might be available to sophisticated attackers.
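
The distinguisher scales down nicely to a toy size: with an 8-bit block,
counter-mode keystream blocks never repeat (a permutation applied to
distinct inputs), while a truly random stream of 256 one-byte blocks
almost certainly contains a collision.  A sketch with a made-up
permutation standing in for the cipher:

```python
import random

rng = random.Random(1)

# Toy 8-bit "block cipher": a fixed random permutation of 0..255
# (made up; any permutation behaves the same way here).
perm = list(range(256))
rng.shuffle(perm)

# Counter mode keystream: the permutation applied to distinct counters.
ctr_stream = [perm[i] for i in range(256)]
# A truly random stream of the same length, for comparison.
rnd_stream = [rng.randrange(256) for _ in range(256)]

# Counter mode can never repeat a block...
assert len(set(ctr_stream)) == 256
# ...while random data almost surely repeats after ~2^(n/2) = 16 blocks.
assert len(set(rnd_stream)) < 256
```

For a real 64-bit cipher the same "no repeats" signature emerges around
2^32 blocks, which is the birthday-bound regularity under discussion.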

 Technically you can infer (under known plaintext) that an output block
 that has been seen will not occur again, but after 2^32 known plaintext
 blocks that only limits the output to 2^64 - 2^32 possibilities. I can
 see why that would be detectable, in the sense that you're not seeing
 repeats when you would expect to do so for other modes, but why would it
 be a weakness? Unless I'm missing something, all it really tells you is
 that counter mode (or some close variant of it) is being used.

Well, I don't have a ready answer for you, but if we look at it
from an information-theoretic perspective, I think there's a storm
brewing here.  Even if I don't know of any computationally feasible
way to