Cryptography-Digest Digest #160, Volume #14      Mon, 16 Apr 01 15:13:00 EDT

Contents:
  Re: NSA is funding stegano detection (Walter Roberson)
  Re: P1363 draft (DJohn37050)
  Re: Reusing A One Time Pad ("Tony T. Warnock")
  Re: LFSR Security (Ian Goldberg)
  Re: LFSR Security (David Wagner)
  Re: There Is No Unbreakable Crypto ("Henrick Hellstr�m")
  Re: LFSR Security (Ian Goldberg)
  ShyFile Encryption (Frog2)
  Re: There Is No Unbreakable Crypto (David Wagner)
  Re: There Is No Unbreakable Crypto ("Henrick Hellstr�m")
  Re: Reusing A One Time Pad ("Tom St Denis")
  Re: NSA is funding stegano detection (Bernd Eckenfels)
  Re: LFSR Security (Ian Goldberg)
  Re: MS OSs "swap" file:  total breach of computer security. (Darren New)
  Re: NSA is funding stegano detection (Bart Bailey)
  Re: XOR_TextBox:  Doesn't write to swap file if... ("Christian Bohn")
  Re: Distinguisher for RC4 ([EMAIL PROTECTED])
  Re: Note on combining PRNGs with the method of Wichmann and Hill (Mok-Kong Shen)
  Re: Distinguisher for RC4 ("Michael Lee")

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Walter Roberson)
Crossposted-To: comp.security.misc,talk.politics.crypto
Subject: Re: NSA is funding stegano detection
Date: 16 Apr 2001 16:14:49 GMT

In article <9bdhj2$52q$[EMAIL PROTECTED]>,
Bernd Eckenfels  <[EMAIL PROTECTED]> wrote:
:In comp.security.misc Walter Roberson <[EMAIL PROTECTED]> wrote:
:> On the other hand, the data may be compressed. Compression works
:> by reducing redundancy, so the compressed data is "more random".
:> Any compression algorithm that leaves statistically detectable
:> correlation in its output could theoretically be improved [but improving
:> the algorithm might be difficult as a practical matter.]

:The problem is not detecting correlations in the stegano data; the problem is
:that adding to the noise of an existing data source will change the
:characteristics of the "natural" noise. So if you do not model the changing
:of the bits after the equipment used to capture the transport data stream
:(audio, CCD picture, scanned picture), an analysis can quite easily tell you
:that "that kind of noise is not produced by a CCD cam".

That is true, but only to the extent that the analyst can model the
original source data and the processing steps. 

:Or even worse, if you add
:bits to compressed data like JPG you might generate pixel patterns which "are
:never created by one of the well-known JPG compressors".

Should we, in this semi-hypothetical thread, restrict ourselves
to a handful of "well-known" source image creation technologies,
and to imitating a handful of "well-known" image compression programs?
Such restrictions might make sense within the context of the original
posting, as a practical matter; or should we continue the discussion
into more theoretical possibilities (for which the modeling assumption
might not hold)?



------------------------------

From: [EMAIL PROTECTED] (DJohn37050)
Date: 16 Apr 2001 16:41:40 GMT
Subject: Re: P1363 draft

You can join and get the password and then drop out, if you wish.  This is a
tracking method that IEEE uses.
Don Johnson

------------------------------

From: "Tony T. Warnock" <[EMAIL PROTECTED]>
Subject: Re: Reusing A One Time Pad
Date: Mon, 16 Apr 2001 10:59:38 -0600
Reply-To: [EMAIL PROTECTED]

The probability of overlap gets really high when the total length of the
messages is approximately the square root of the length of the OTP. Any
overlap allows the two overlapped messages to be decoded (cf. Venona).
Using the OTP straight through and deleting the already-used portion is
more efficient.
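
For concreteness, a quick Monte Carlo sketch of that square-root rule
(Python; it assumes each message is laid down at an independent,
uniformly random offset in the pad, and the pad and message sizes are
made-up illustration values):

    import random

    def overlap_probability(pad_len, msg_lens, trials=2000):
        # Estimate the chance that at least two messages, each placed
        # at a uniformly random offset in a pad of pad_len bits, overlap.
        hits = 0
        for _ in range(trials):
            spans = []
            for m in msg_lens:
                start = random.randrange(pad_len - m + 1)
                spans.append((start, start + m))
            spans.sort()
            # sorted by start, so an overlap exists iff some span ends
            # after the next one begins
            if any(spans[i][1] > spans[i + 1][0]
                   for i in range(len(spans) - 1)):
                hits += 1
        return hits / trials

    # 100 messages of 10 bits in a 10^6-bit pad: total length 10^3,
    # the square root of the pad length -- overlap chance already ~10%.
    print(overlap_probability(10**6, [10] * 100))
    # Triple the traffic and the overlap chance jumps past 50%.
    print(overlap_probability(10**6, [10] * 300))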


------------------------------

From: [EMAIL PROTECTED] (Ian Goldberg)
Crossposted-To: sci.crypt.random-numbers
Subject: Re: LFSR Security
Date: 16 Apr 2001 17:33:37 GMT

In article <[EMAIL PROTECTED]>,
Trevor L. Jackson, III <[EMAIL PROTECTED]> wrote:
>Not quite.  You can assume that the most recent known bit is the culprit, as
>in your example, but there's no reason to prefer that bit, and good reason
>to believe that the culprit is earlier in the sequence.  So when a conflict
>is found one must backtrack by trying (toggling) each of the assumed bits in
>turn and use the smallest machine created.  The backtracking is complete
>when no single-bit change of the assumed bits produces a smaller machine. (I
>need to show that any change to a smaller machine can be found by successive
>single-bit changes -- can't quite do that yet).

I believe I have a counterexample to this conjecture.

Consider the sequence

000010100?000?1?0

Here's how it'll go:

First you'll process "000010100" in the usual way to find the LFSR
of size 5 with connection polynomial (in the language of HAC)
1+D^2+D^4+D^5, which generates the sequence 0000101001101110...

So when you hit the first ? you'll optimistically guess that it's a 1.
But then you hit the 0.  The best LFSR that generates 00001010010 is
of size 6 (1+D^2+D^4+D^5+D^6), which generates 000010100100110...
But you decide to see what happens if you flip the first ? to a 0,
and you find it doesn't help (also size 6), so you stick with what
you've got.

Then you hit the next 0, which is what your current LFSR predicts, so
that's good.  Now you're at the third 0 after the ?, which is *not* what
the current LFSR predicts.  So you fix your LFSR to get one of size 7
(1+D^5+D^7), which generates 00001010010001101...
Again, you see if flipping the ? helps, but it doesn't (also of size 7),
so you stick with your current choice.

Now you hit the second ?, so you optimistically guess that it's a 1,
in accordance with your current best LFSR.  The next bit is 1, which
matches your LFSR, so you still have the (7,1+D^5+D^7) LFSR when you
get to the third ?, so you optimistically guess that it's a 0.

Now you get to the last bit in the sequence, a 0, which doesn't match
your current LFSR.  Now, to clarify, your current LFSR sequence is
(with guessed bits in parens):

000010100(1)000(1)1(0)

but its next bit is a 1, and you want a 0.  So you grow the LFSR
to the shortest one which generates 00001010010001100, which is
of size 10 (1+D^4+D^5+D^6+D^7+D^8+D^9+D^10).  Now to try to make it
smaller by changing your guesses of the ? bits.

You propose changing them one at a time (in effect, trying a
'hill-climbing' algorithm), moving towards shorter LFSRs if possible.
Now we see that this algorithm does not work.

Changing the bits one at a time yields the following 3 sequences, with
their associated shortest LFSRs:

000010100(0)000(1)1(0)0: 9 (1+D+D^9)
000010100(1)000(0)1(0)0: 8 (1+D^3+D^8)
000010100(1)000(1)1(1)0: 9 (1+D+D^3+D^6+D^9)

So we change our guess of the second ?.  Now we try it again, to
see if we can do any better.  We won't bother trying to flip the second
? back the way it was, of course, so we get the following two
possibilities:

000010100(0)000(0)1(0)0: 9 (1+D^4+D^6+D^8)
000010100(1)000(0)1(1)0: 9 (1+D+D^4+D^5+D^7)

So we conclude that (8,1+D^3+D^8) is the best we can do.

But we're wrong.

In fact, (7,1+D+D^3+D^5+D^7) generates the sequence

000010100(0)000(1)1(1)0

which we didn't find by changing one bit at a time.  :-(
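
For anyone who wants to recheck those numbers, here is a from-scratch
Berlekamp-Massey sketch in Python (following the HAC description); it
just recomputes the linear complexities quoted above:

    def berlekamp_massey(bits):
        # Length of the shortest LFSR generating a GF(2) sequence.
        n = len(bits)
        c, b = [0] * (n + 1), [0] * (n + 1)  # connection polynomials
        c[0] = b[0] = 1
        L, m = 0, -1
        for i in range(n):
            # discrepancy between bits[i] and the LFSR's prediction
            d = bits[i]
            for j in range(1, L + 1):
                d ^= c[j] & bits[i - j]
            if d:
                t, p = c[:], i - m        # c(D) += D^p * b(D)
                for j in range(n + 1 - p):
                    c[p + j] ^= b[j]
                if 2 * L <= i:
                    L, m, b = i + 1 - L, i, t
        return L

    # The candidate fillings of the ?s discussed above; the sizes
    # claimed are 10, 9, 8, 9, and finally 7 for the missed optimum.
    for s in ("00001010010001100", "00001010000001100",
              "00001010010000100", "00001010010001110",
              "00001010000001110"):
        print(s, berlekamp_massey([int(ch) for ch in s]))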

>> i.e. the "Greedy Algorithm" isn't optimal.
>
>There's a terminology conflict here.  First we need correctness.  Then we
>need efficiency.  The technique you analyzed is not correct (did not produce
>the smallest machine).  An optimal algorithm would produce the smallest
>machine in as few steps as possible.

Conceded.  I meant "The `Greedy Algorithm' generates a non-optimal
LFSR.", not "The 'Greedy Algorithm' non-optimally generates the LFSR."

   - Ian

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Crossposted-To: sci.crypt.random-numbers
Subject: Re: LFSR Security
Date: 16 Apr 2001 17:36:49 GMT

Trevor L. Jackson, III wrote:
>So when a conflict
>is found one must backtrack by trying (toggling) each of the assumed bits in
>turn and use the smallest machine created.

That's exponential-time.

------------------------------

From: "Henrick Hellstr�m" <[EMAIL PROTECTED]>
Subject: Re: There Is No Unbreakable Crypto
Date: Mon, 16 Apr 2001 19:38:36 +0200

This can't be generally true. Usually the main reason why a block cipher E
is secure against all attacks that use at most two blocks of chosen
plaintext is that it is logically impossible to exploit time-data
trade-offs if you only have two blocks of message text at your disposal. As
a consequence, brute force will be the fastest attack, and that's more or
less what is meant by the claim that E is secure.

However, if you use those two blocks to generate new blocks, it is no longer
necessarily true that the attacker only has two blocks to work with -
because the block encryption keys are related by construction, even though
they are not identical. For instance, the key setup scheme of the underlying
cipher might be such that the resulting cipher could be broken by
differential attacks. I don't think it would be that hard to design a block
cipher that would be reasonably secure if used in conventional modes but
provably insecure if used in David Wagner's mode.

--
Henrick Hellström  [EMAIL PROTECTED]
StreamSec HB  http://www.streamsec.com

"David Wagner" <[EMAIL PROTECTED]> skrev i meddelandet
news:9bco2u$q4i$[EMAIL PROTECTED]...
> Mok-Kong Shen  wrote:
> >Very dumb question: It seems that your argument hinges
> >on the block size being larger (double) than the key.
>
> No, it doesn't: It is very general.  If E(k,x) is a block cipher with
> 128-bit key and 128-bit block, then set F(k) = <E(k,0), E(k,1)> and note
> that this is a length-doubling PRG.  Moreover, F is secure if E is secure
> against all attacks that use at most two blocks of chosen plaintext.
> This same idea can be extended to work for any key and block sizes.
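
For concreteness, a minimal Python sketch of the quoted construction
F(k) = <E(k,0), E(k,1)>; the demo at the bottom uses AES via the
pycryptodome package purely as a stand-in for a generic 128-bit block
cipher E:

    def make_prg(E):
        # Length-doubling PRG from a block cipher E(key, block) with
        # 16-byte keys and blocks: F(k) = E(k, 0) || E(k, 1).
        zero = (0).to_bytes(16, "big")
        one = (1).to_bytes(16, "big")
        def F(k):
            return E(k, zero) + E(k, one)  # 16 bytes in, 32 bytes out
        return F

    if __name__ == "__main__":
        from Crypto.Cipher import AES  # pycryptodome

        def aes_ecb(k, x):
            return AES.new(k, AES.MODE_ECB).encrypt(x)

        F = make_prg(aes_ecb)
        print(F(b"\x00" * 16).hex())  # 64 hex digits of output

The claim under discussion is that F is a secure PRG whenever E resists
all attacks using at most the two chosen plaintexts 0 and 1.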



------------------------------

From: [EMAIL PROTECTED] (Ian Goldberg)
Crossposted-To: sci.crypt.random-numbers
Subject: Re: LFSR Security
Date: 16 Apr 2001 17:41:17 GMT

In article <9bf38p$mq0$[EMAIL PROTECTED]>,
Scott Fluhrer <[EMAIL PROTECTED]> wrote:
>If you're going to do all that back-tracking, wouldn't it be easier, if
>you have N unknown bits, to scan through all 2**N possible settings for them
>at the outset, simply run BM 2**N times, and select the smallest output?

We're really hoping to do better than O(2^n).  The proposal was to not
try *all* 2^N settings at once, but rather to try to "approach" the
right one by changing one at a time.  This has a complexity of (I think)
O(n^4).  But, as I indicated in another post, the algorithm doesn't
work.

:-(

   - Ian

------------------------------

From: Frog2 <Use-Author-Address-Header@[127.1]>
Date: 16 Apr 2001 17:44:18 -0000
Subject: ShyFile Encryption

Has anyone here ever heard of ShyFile? The people who wrote it are
extremely ignorant when it comes to crypto.

The program is closed source and it uses their own encryption algorithm.

Yet they advertise up to 6xxx-bit symmetric key encryption! Anyone who is
paranoid enough to want such encryption would never use a closed-source,
home-brewed encryption app. They are merely spreading misconceptions about
cryptography.

The only good thing about ShyFile is that it can create self-decrypting
webpages. Yet these (and the passwords) are most likely swapped to disk.
And a person can easily write a program in Visual Basic, in less than 20k,
that will encrypt/decrypt with RC4!

-Anonymous-






------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: There Is No Unbreakable Crypto
Date: 16 Apr 2001 17:48:01 GMT

Henrick Hellström wrote:
>This can't be generally true.

Well, it's been proven, and by smarter people than I.
You can go read the proof if you like.  See the references
I gave in my post.  Once you've read the proof, come tell
us where it is wrong.

I read your "reason" why the claim must be wrong, but your
reasoning is faulty.

>I don't think it would be that hard to design a block
>cipher that would be reasonably secure if used in conventional modes but
>provably insecure if used in David Wagner's mode.

I'm open to counterexamples!  If you can find one, you may
have found a counterexample to a proof in a famous paper.

------------------------------

From: "Henrick Hellstr�m" <[EMAIL PROTECTED]>
Subject: Re: There Is No Unbreakable Crypto
Date: Mon, 16 Apr 2001 19:55:54 +0200

"David Wagner" <[EMAIL PROTECTED]> skrev i meddelandet
news:9bfb8h$9b4$[EMAIL PROTECTED]...
> Henrick Hellstr�m wrote:
> >This can't be generally true.
>
> Well, it's been proven, and by smarter people than I.
> You can go read the proof if you like.  See the references
> I gave in my post.  Once you've read the proof, come tell
> us where it is wrong.

The only reference I found was "(e.g., Bellare and Goldwasser's course
notes)", and a web search turned up blank. Could you please specify where I
should look?


--
Henrick Hellström  [EMAIL PROTECTED]
StreamSec HB  http://www.streamsec.com



------------------------------

From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: Reusing A One Time Pad
Date: Mon, 16 Apr 2001 17:59:32 GMT


<[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Mon, 16 Apr 2001 12:33:25 GMT, "Tom St Denis"
> <[EMAIL PROTECTED]> wrote:
>
> ><[EMAIL PROTECTED]> wrote in message
> >news:[EMAIL PROTECTED]...
> <snip>
> >> Of course one can re-use an OTP; it's just that in doing so you make
> >> cryptanalysis trivially simple.
> >
> >No, you cannot reuse an OTP; it's impossible.  If you somehow did reuse
> >a pad then it's not a *******ONE TIME****** pad.
> >
> <snip>
> >Tom
> >
> >
> Semantics. It is acknowledged that the classic breach of an OTP is to
> reuse it.  Of course if you do reuse it, it becomes a two-timing pad
> ;-) not a "*******ONE TIME****** pad".
>
> But leaving the semantics to one side, the question Mark has, in
> effect, asked is what is the effect of *partially* reusing a OTP?
>
> If the reuse is of one bit only, does it in fact matter?  If it is of
> two bits presumably it matters more, but how much easier is it to
> cryptanalyse two messages made using two one time pads in which the
> last two consecutive bits of the first pad are the same as the first
> two consecutive bits of the second?  What about if the overlap is of
> three bits?  What about four? etc etc.  This is not an entirely
> meaningless question, as you seem to suggest.

Who cares?  It's still inefficient.  Let's say I can reuse a 100-bit pad...
how do both parties regenerate new pads?  Using some source of entropy?  Well,
they both need the same entropy or it won't work.  Oh, I know, they could
distribute a short amount of bits and use some sort of magical key stream
generator!!! wow... I must be the first to think of this... I will go file
my patent now.

Tom



------------------------------

From: Bernd Eckenfels <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc,talk.politics.crypto
Subject: Re: NSA is funding stegano detection
Date: 16 Apr 2001 18:02:44 GMT

In comp.security.misc Walter Roberson <[EMAIL PROTECTED]> wrote:
> Should we, in this semi-hypothetical thread, restrict ourselves
> to a handful of "well-known" source image creation technologies,
> and to imitating a handful of "well-known" image compression programs?
> Such restrictions might make sense within the context of the original
> posting, as a practical matter; or should we continue the discussion
> into more theoretical possibilities (for which the modeling assumption
> might not hold)?

As long as stegano is theoretically safe but in practice detectable, it is
a nice thought experiment but otherwise completely useless.

Greetings
Bernd

------------------------------

From: [EMAIL PROTECTED] (Ian Goldberg)
Crossposted-To: sci.crypt.random-numbers
Subject: Re: LFSR Security
Date: 16 Apr 2001 18:08:50 GMT

In article <9bdo0c$ij8$[EMAIL PROTECTED]>,
Scott Fluhrer <[EMAIL PROTECTED]> wrote:
>Of course, having 2n consecutive bits isn't the only way to get n-1 such
>consecutive series -- if we are given 2 series of 3n/2 consecutive bits,
>each series has (n-1)/2 such sets of consecutive bits, and so they both give
>us the requisite n-1 series, and so the same Gaussian Elimination approach
>works.

You can get cases where your equations are not linearly independent
(see below).

>One obvious way to extend this is to make the wild claim that, if the
>keystream bits you do have, there exists a a_0 < a_1 < ... < a_{n+1} and n-1
>distinct r_0, ..., r_{n-2}, you know the keystream output for b_{a_i + r_j}
>for all i, j, then you can derive the taps for a virtual LFSR (which
>hopefully isn't too hard to translate back into the normal form).

You'll have to add more conditions; what you've got isn't enough.

For example, n=3, a_0 = 0, a_1 = 1, a_2 = 2, a_3 = 3, a_4 = 7, a_5 = 8,
r_0 = 0, r_1 = 1.

So you get to learn b_{0,1,2,3,4,7,8,9}, which I tell you is:

01110??011

In fact, I'm going to tell you some more bits:

01110??01110??01110??01110

Now you can see that, although you have at least two groups of at least
3n/2 bits (2n-1 in fact), your resulting equations are not linearly
independent, so you don't have enough information.

In fact, there are two different LFSRs (for n=3) which produce the
above pattern.

B-M says that if you have 2n consecutive bits, that always determines a
unique LFSR.

Now perhaps I've cheated, since with n=3, you know (well, for primitive
connection polynomials) that the period will be 7.  So those extra bits
I gave you weren't actually "new".  But you'll have to take that into
account in your statement.
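
A brute-force check of that ambiguity (a from-scratch Python sketch):
enumerate every 3-stage LFSR, i.e. all tap masks and initial fills, and
keep those that reproduce the revealed bits of 01110??011.

    from itertools import product

    # the revealed bits b_{0,1,2,3,4,7,8,9} of 01110??011
    revealed = {0: 0, 1: 1, 2: 1, 3: 1, 4: 0, 7: 0, 8: 1, 9: 1}

    def run_lfsr(taps, init, n):
        # s[t+3] = taps[0]*s[t] ^ taps[1]*s[t+1] ^ taps[2]*s[t+2]
        s = list(init)
        while len(s) < n:
            s.append(taps[0] & s[-3] ^ taps[1] & s[-2] ^ taps[2] & s[-1])
        return s

    for taps in product((0, 1), repeat=3):
        for init in product((0, 1), repeat=3):
            s = run_lfsr(taps, init, 10)
            if all(s[i] == v for i, v in revealed.items()):
                print(taps, "".join(map(str, s)))

    # Exactly two LFSRs survive: taps (1,1,0) giving 0111001011 and
    # taps (1,0,1) giving 0111010011.  They fill the ?? bits as 01 and
    # 10 respectively, so the revealed positions are genuinely ambiguous.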

   - Ian

------------------------------

From: Darren New <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto,alt.hacker
Subject: Re: MS OSs "swap" file:  total breach of computer security.
Date: Mon, 16 Apr 2001 18:13:56 GMT

Tom St Denis wrote:
> There can be stuff in the swap file without constant disk writing.

My mistake. I misinterpreted what was meant.

> Ahh, but a lot of apps need VM to work.

I've only ever found one, and that's an unpacker/installer. Your mileage may
vary, of course.

-- 
Darren New / Senior MTS & Free Radical / Invisible Worlds Inc.
San Diego, CA, USA (PST).  Cryptokeys on demand.
        schedule.c:7: warning: assignment makes calendar_week 
                          from programmer_week without a cast.

------------------------------

From: Bart Bailey <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc,talk.politics.crypto
Subject: Re: NSA is funding stegano detection
Date: Sat, 14 Apr 2001 21:35:48 -0700

SCOTT19U.ZIP_GUY wrote:

> [EMAIL PROTECTED] wrote in
> <[EMAIL PROTECTED]>:
>
> >
> >3) send lots of images which contain no messages
>
>       No, send lots of images. But those that
> have no messages should have noise, so that
> the apparent noise added when you do encrypt will be
> at the same level as when a message is present.

Better yet would be to routinely send images that contain steganographic
dummy files of a size range similar to the occasional valid message.


~~Bart~~

------------------------------

From: "Christian Bohn" <[EMAIL PROTECTED]>
Subject: Re: XOR_TextBox:  Doesn't write to swap file if...
Date: Mon, 16 Apr 2001 21:03:38 +0200

> "I have a 256MB RAM computer running Windows '98.  When I run
> XOR_TextBox there is no writing to the WIN386.SWP swap file.  In
> other words, the entered or displayed text is only stored in RAM.
You have absolutely NO guarantee of that.

> If you have less RAM, the text you enter or display may be written
> to this swap file.  Because you normally have no control over or
> access to this swap file, writing to it may be an unacceptable
> security risk.
Why not tell Windows that it is not allowed to page out the memory blocks,
using VirtualAlloc and VirtualLock... Oh, I forgot, you can't do that in
VB<g>.
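
For what it's worth, a minimal sketch of that idea from Python via
ctypes (Windows-only; VirtualLock is the Win32 call that pins pages in
RAM so they are not written to the swap file):

    import ctypes

    kernel32 = ctypes.windll.kernel32
    for fn in (kernel32.VirtualLock, kernel32.VirtualUnlock):
        fn.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

    buf = ctypes.create_string_buffer(4096)  # will hold the secret
    if not kernel32.VirtualLock(ctypes.addressof(buf), ctypes.sizeof(buf)):
        raise ctypes.WinError()
    # ... place the passphrase/plaintext in buf and use it here ...
    ctypes.memset(buf, 0, ctypes.sizeof(buf))  # wipe before unlocking
    kernel32.VirtualUnlock(ctypes.addressof(buf), ctypes.sizeof(buf))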

Christian



------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Distinguisher for RC4
Date: 16 Apr 2001 11:54:05 -0700

"Scott Fluhrer" <[EMAIL PROTECTED]> writes:
> Actually, it shouldn't.  The idea is that you examine the second byte of 200
> different RC4 keystreams (generated by 200 different keys).  It turns out
> that RC4 has a probability 1/128 of making the second value output after key
> setup take on the value 0.

Is this really true?  I'm doing some numerical simulations and I don't
see it.  Actually I see a dearth of zeros in the second byte so far,
but it's not statistically significant yet and may be a fluke.

Has anyone verified this?  Does anyone have a proof or argument for why
it is so?
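
One way to check it empirically: a from-scratch Python sketch of
standard RC4 below counts how often the second output byte is 0 over
random 16-byte keys; with the claimed bias the printed frequency should
sit near 1/128 rather than 1/256.

    import os

    def rc4_second_byte(key):
        # standard RC4 key schedule...
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # ...then two steps of the output generator
        i = j = out = 0
        for _ in range(2):
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out = S[(S[i] + S[j]) % 256]
        return out

    trials = 100000
    zeros = sum(rc4_second_byte(os.urandom(16)) == 0
                for _ in range(trials))
    print(zeros / trials, "biased:", 1 / 128, "uniform:", 1 / 256)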

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: Note on combining PRNGs with the method of Wichmann and Hill
Date: Mon, 16 Apr 2001 20:59:27 +0200



Brian Gladman wrote:
> 

> If two different PRNGs giving uniformly distributed random numbers in
> [0.0:1.0) are added and the result is taken 'mod 1.0', this output will then
> be uniformly distributed in [0.0:1.0).  A bit of maths shows that the output
> in [0.0:2.0) is not uniform but that the mod function combines the ranges
> [0.0:1.0) and [1.0:2.0) in such a way that a uniform distribution results.
> 
> But if the outputs of the generators are multiplied by constants close to
> 1.0 before combination, the output will not generally be uniformly
> distributed in [0.0:1.0).
> 
> This can be seen by considering a single PRNG giving uniformly distributed
> random numbers in [0.0:1.0) and considering the output after multiplying by
> a number (1.0 + delta), close to 1.0, and taking the output 'mod 1.0'.  In
> this case numbers in the range [0.0:delta) will occur twice as often as
> those in the range [delta:1.0).
> 
> Although the maths is more complicated when several generators are
> combined, the same issue turns up.
> 
> The uneven distributions that result may not be a problem in some
> applications but they will frequently be undesirable.

One can consider the continuous case as the limiting
case of the discrete case. In the discrete case, i.e.
for integer range [0, n-1], it can easily be proved that
the sum of a uniform random variable and an independent,
arbitrary random variable (more exactly, one that is not
degenerate in that it has non-zero frequency for at least
one value relatively prime to n) mod n is a uniform variable.
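
Both the skew and the discrete uniformity result are easy to check
numerically; a small Python sketch (the weights chosen for the
non-uniform variable are arbitrary illustration values):

    import random
    from fractions import Fraction

    # Gladman's point: multiplying a uniform variate by (1 + delta)
    # before reducing mod 1.0 makes [0, delta) twice as likely.
    N, delta = 100000, 0.1
    low = sum((random.random() * (1 + delta)) % 1.0 < delta
              for _ in range(N))
    # densities ~1.82 on [0, delta) vs ~0.91 on [delta, 1): a 2:1 skew
    print(low / (N * delta), (N - low) / (N * (1 - delta)))

    # Shen's point: (U + X) mod n is exactly uniform when U is uniform
    # on 0..n-1 and X is any independent variable on the same range.
    n = 8
    x = [Fraction(w, 16) for w in (5, 3, 0, 1, 0, 4, 2, 1)]  # arbitrary
    s = [Fraction(0)] * n
    for a in range(n):            # U takes each value with prob 1/n
        for b, pb in enumerate(x):
            s[(a + b) % n] += Fraction(1, n) * pb
    print(s)                      # every entry is exactly 1/8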

M. K. Shen

------------------------------

From: "Michael Lee" <[EMAIL PROTECTED]>
Subject: Re: Distinguisher for RC4
Date: Mon, 16 Apr 2001 01:03:57 -0700

> Mark Wooding <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> >
> > At the RSA Conference last week, Adi Shamir claimed that he'd found a
> > way to distinguish the output of RC4 from random data with about 200
> > bytes.  (This is quite scary.  It's put me right off the idea of using
> > RC4 for anything serious.)

Can anyone please point me towards an article or paper about this
distinguisher (written by either Shamir or someone else)?  Thanks.



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
