Cryptography-Digest Digest #640, Volume #13 Tue, 6 Feb 01 08:13:01 EST
Contents:
Re: Encrypting Predictable Files (Benjamin Goldberg)
Re: Encrypting Predictable Files (Richard Heathfield)
Re: Phillipine math guy claims to have fast RSA Factoring... (Mok-Kong Shen)
Re: Different cipher type (Mok-Kong Shen)
Re: efficient coin flipping (Mok-Kong Shen)
Re: On combining permutations and substitutions in encryption (Mok-Kong Shen)
Re: Redundancy in algorithms (Mok-Kong Shen)
Re: ith bit of an LFSR sequence? (Benjamin Goldberg)
Re: ith bit of an LFSR sequence? (Benjamin Goldberg)
Re: Rijndael's resistance to known plaintext attack (Benjamin Goldberg)
Re: OverWrite freeware completely removes unwanted files from hard drive (Tom St Denis)
Re: OverWrite freeware completely removes unwanted files from hard drive (Tom St Denis)
----------------------------------------------------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Encrypting Predictable Files
Date: Tue, 06 Feb 2001 11:43:15 GMT
Splaat23 wrote:
>
> Umm, I must be missing something, but don't all block ciphers support
> this "feature". I mean, a block cipher defines a key-based bijective
> permutation. That permutation can be just as secure one way as it is
> the other. I just used RC6 and Rijndael, and they both worked in this
> respect: decrypt data with a key and you get gibberish (ciphertext?),
> then encrypt it with the same key to get the data back. If this is
> what you want, any file encrypting utility can be adapted to do this,
> but I'm not entirely sure this is that useful.
While this is indeed true of the block cipher, it is not necessarily
true of the file encryption schemes which use the block cipher.
Suppose you have a 5 octet file, containing {0,0,3,7,0}. Both the
contents and the length are significant. If you decrypt then encrypt
this, is your result exactly identical to the original, in both length
and contents?
If both encryption and decryption are bijections, then encryption and
decryption are permutations.
Block ciphers are permutations [on the set of fixed-length
bitstrings], but most file encryption schemes, which usually have to pad
the file to a multiple of the block size, aren't.
They are bijections, but only when 'encrypt file' operations are
considered. 'Decrypt file' operations generally aren't bijections --
Assuming there's some sort of padding scheme, decrypt must either strip
off the padding, or reject the file for having invalid padding. This
makes operations which decrypt a file not be bijections on the set of
all files, since either decryption is undefined for some files, or else
there is no inverse for the decryption of certain files.
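The point about unpadding can be made concrete with a small sketch. This uses PKCS#7-style padding purely as an illustrative assumption (no particular scheme was named above); the function names are hypothetical:

```python
def pad(data: bytes, block: int = 16) -> bytes:
    """PKCS#7-style padding: append n copies of the byte n."""
    n = block - (len(data) % block)
    return data + bytes([n]) * n

def unpad(data: bytes, block: int = 16) -> bytes:
    """Strip padding; reject inputs whose padding is malformed."""
    if not data or len(data) % block:
        raise ValueError("invalid length")
    n = data[-1]
    if not 1 <= n <= block or data[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")
    return data[:-n]

# Padding round-trips the 5-octet example file from above ...
assert unpad(pad(b"\x00\x00\x03\x07\x00")) == b"\x00\x00\x03\x07\x00"

# ... but unpad is undefined for most block-sized strings, so an
# operation that must unpad after decrypting cannot be a bijection
# on the set of all files.
try:
    unpad(b"\x00" * 16)
except ValueError:
    pass  # no preimage under pad
```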
David Scott's file encryption schemes are permutations. (whoop de do)
Of course, simply because his ciphers do one minor thing that other
ciphers don't, doesn't mean that they are better. His methods are
generally rather inefficient, and don't offer the increased protection
he claims they do.
Also, David lacks the intelligence to write his algorithms in
pseudocode. He doesn't understand the benefit of writing in portable
C. He doesn't understand the benefit of writing an unoptimized
reference implementation for legibility purposes. He doesn't understand
the purpose of "good coding style." He doesn't have the intelligence to
explain the algorithms, and when asked, says to look at the source.
He's surly and insulting to those who honestly ask for his help. I
could continue, but these are the only /facts/ about him I can give from
what I've seen, and have no desire to give you my /opinions/, as this is
after all sci.crypt, not *.flame. We're here to discuss algorithms and
protocols, breaks and cracks in said algorithms and protocols, how to
implement them, and how and why they work [or don't work].
--
A solution in hand is worth two in the book.
Who cares about birds and bushes?
------------------------------
Date: Tue, 06 Feb 2001 13:05:36 +0000
From: Richard Heathfield <[EMAIL PROTECTED]>
Subject: Re: Encrypting Predictable Files
Benjamin Goldberg wrote:
>
<snip>
>
> Of course, simply because his ciphers do one minor thing that other
> ciphers don't, doesn't mean that they are better. His methods are
> generally rather inefficient, and don't offer the increased protection
> he claims they do.
Since, upthread, I claimed that my own algorithm meets the criterion
David discusses, I should perhaps clarify at this juncture that that
statement did not and does not imply any claim for the security of my
algorithm, either consequent upon this property or in other respects.
Whilst, naturally, I hope that my algorithm is secure, I do not claim
it.
I /do/ claim SNA-Coil
(http://users.powernet.co.uk/eton/crypto/SNACoil.zip - 154KB or
thereabouts, Win32 only) is 100% secure, of course, but that's another
matter (and another algorithm) altogether.
<snip>
> He's surly and insulting to those who honestly ask for his help.
To be fair, this is not 100% accurate. It would be fairer to insert the
word "sometimes" in there somewhere.
<snip>
--
Richard Heathfield
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R Answers: http://users.powernet.co.uk/eton/kandr2/index.html
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Phillipine math guy claims to have fast RSA Factoring...
Date: Tue, 06 Feb 2001 13:09:21 +0100
arcmight wrote:
>
> http://www.mb.com.ph/INFO/2001-02/IT020601.asp
[snip]
Happily, this issue was settled within hours in the group,
John Mayre being the first to observe that the computation
is infeasible in practical situations.
Historically it may be of interest to know that Leo de Velez
wrote to Prof. Rivest, who responded in one mail:
> Thanks for the more detailed explanation of your approach to
> attacking RSA given in your emails (copied below). For the
> reasons I will explain, and as you are perhaps aware, I think
> your approach is unlikely to work in practice against large
> RSA numbers. It would be very premature or misleading to
> characterize RSA as "broken" based on your work to date.
Rivest then took time to give a rather lengthy explanation of
the complexity issue and also a literature reference. After
de Velez's material was published on the web (apparently by his
friend EDU H. LOPEZ) there was still some further correspondence
in which Rivest continued to express his opinions in a very
gentle manner.
I personally find this gentleness very noteworthy, when
compared to some of the discussions one not infrequently
sees in some internet groups, where bad words (certain ones
not even comprehensible to those with poor English) are
readily employed.
[The material quoted above was obtained from someone's post
in a mailing list.]
M. K. Shen
==========================
http://home.t-online.de/home/mok-kong.shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Different cipher type
Date: Tue, 06 Feb 2001 13:09:06 +0100
Michael Brown wrote:
>
> I mentioned this a while ago, but never really followed up on it. The idea
> was to have a cipher where instead of each bit in a key representing a
> number (bad description, but it's hard to cover DES and RSA keys under one
> word :), it represented an instruction. For example (almost certainly
> insecure or otherwise flawed, but you'll get the idea) using 3 bits and
> having 8 "instructions" that could be, and having, say, 5 bits for
> "parameters" to that function. These functions could be (with n being the
Parametrization is commonplace in programming and is also
known in crypto. But for reasons yet unclear to me, it
seems not to be an idea favoured by the majority. One
can use parameters to produce quite a lot of different
'versions' of an algorithm/system. Bits of the parameters
are effectively part of the key.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: efficient coin flipping
Date: Tue, 06 Feb 2001 13:08:51 +0100
[EMAIL PROTECTED] wrote:
>
> The population at large agrees that flipping a coin is a good way to
> make a random binary decision. But it's slow.
>
> A faster method is to drop lots of coins, line them up horizontally, and
> read them left to right. The only reason to do such a thing is if you
> need to say "I made 2000 coin flips and ...".
A faster way is to cast dice. Cast a bunch of dice at once
and convert the base-6 digits obtained to binary. One can get
packages of plastic dice very cheaply in toy shops. The
imperfections present cancel out in some sense, I suppose,
because many dice are used and read in some random order, and
because a base conversion takes place. (It also seems unlikely
that bias is intentionally introduced in such products.) I use
dice to determine my passwords.
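One provably unbiased way to turn base-6 die rolls into bits is rejection sampling on pairs of rolls; this is a sketch of that variant (the function name is hypothetical, and it is not necessarily the exact conversion described above):

```python
def dice_to_bits(rolls):
    """Convert fair die rolls (values 1..6) to unbiased bits.

    Each pair of rolls gives a uniform value in 0..35; values >= 32
    are rejected, and the rest yield 5 bits each (MSB first).  An odd
    leftover roll is discarded.
    """
    it = iter(rolls)
    for a, b in zip(it, it):
        v = (a - 1) * 6 + (b - 1)          # uniform in 0..35
        if v < 32:                          # reject 32..35 to stay unbiased
            yield from ((v >> k) & 1 for k in range(4, -1, -1))

# (1,1) -> 0 -> 00000; (6,6) -> 35, rejected; (3,4) -> 15 -> 01111
bits = list(dice_to_bits([1, 1, 6, 6, 3, 4]))   # [0,0,0,0,0, 0,1,1,1,1]
```

Rejection costs about one pair in nine, but unlike a fixed-width base conversion it introduces no bias of its own.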
M. K. Shen
===========================
http://home.t-online.de/home/mok-kong.shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On combining permutations and substitutions in encryption
Date: Tue, 06 Feb 2001 13:09:13 +0100
Terry Ritter wrote:
>
> Mok-Kong Shen<[EMAIL PROTECTED]> wrote:
>
> >[...]
> >It is interesting that you view whole file processing as
> >variable-size block ciphers (in view of the fact that
> >you have some patents on variable-size block ciphers,
> >if I remember what you wrote previously correctly).
>
> The cipher design is what it is. The patent is what it is. This is
> not really a viewpoint issue.
The meaning of what you wrote above is not yet very clear
to me. A 'consideration' that whole file processing is a
block encryption is also what it is (as a view/fact). It
is not a trivial issue at all, for there is the very
serious question of possible infringement of your patents
(hopefully only in the US currently) when one does whole
file processing in crypto.
Further, using e.g. CBC to chain blocks effectively makes
the whole message a 'block'. When one regards the (small)
block cipher proper as a component of a larger system,
then one is doing 'block' encryption, the whole message
being that (large) block, and one of 'variable' block size,
because the size of messages is in general not constant.
So block chaining in general could possibly be in
conflict with your patents. If that is indeed the case,
it would be very important that all people concerned with
crypto know it; in fact, much more important than Hitachi's
rotation.
Sorry for going a bit in-depth on this issue, but I
myself often do whole file processing and employ chaining
(and once even considered letting the user have the
comfort of choosing the block size within an appropriate
range), and hence am particularly interested in it.
> >[...]
> >As far as I have seen, nobody has argued against any claim
> >of the sort 'attacking DT is very complex'. So the many
> >disputes about DT in a number of threads hitherto had
> >constituted 'much ado for nothing' or have there been some
> >real causes that render disputes unavoidable?
>
> I guess that depends on your idea of a "real cause."
>
> What I said was distorted and exaggerated, and then addressed as
> though that was what I actually said. That would seem to be ample
> cause for dispute.
>
> Then there was the peculiar situation of people telling me what I
> really meant, and criticizing that.
>
> The Proof
>
> As one issue, there was considerable dismay at my statement that
> ciphering structures could exist for which one might find mathematical
> proofs that certain attacks could not work in practice.
>
> Various lengthy, strong, and particularly condescending statements
> were made to the effect that anyone with any knowledge of modern
> cryptography would know that such a thing had been proven impossible.
>
> But proofs do not come without assumptions, and recognizing and
> meeting such limits is part of the ordinary business of mathematics.
> The inability to know when a proof does not apply presumably speaks
> volumes.
>
> In this case I was fortunate to find a contradiction which showed that
> no such proof could apply to the discussion. But if I had not found a
> contradiction, my claims still would have been valid. Yet the
> improper use of mathematical proof probably would have convinced
> almost everybody of something which is now known to be false. I would
> call that "cause for dispute."
>
> We now know that it *is* possible to have mathematical proofs which
> show that particular ciphers are not vulnerable to certain types of
> attack in practice. Since some ciphers will support such proof better
> than others, there would seem to be ample reason for cryptographers to
> design a wide range of fundamentally new ciphering structures.
>
> The OTP
>
> Then we had some strange horror over the realization that, in
> practice, an "OTP" can be insecure. There was some sort of statement
> about it being "widely accepted" that a physical RNG could not be
> predicted, which is just twaddle: ample possibilities for error exist
> at multiple levels in most physically-random generators.
>
> The classic "OTP" has the structure of the weakest possible stream
> cipher, and so is trivially vulnerable to sequence defects. A good
> argument can be made that, in practice, a cipher using a modern
> combiner which does not expose the keying sequence is less risky than
> a classic "OTP."
If you had retracted your mention of OTP (the likelihood
of confusion and misconception of the concept by a number
of people is apparently well-known to you) in the context
of what you wrote in the original thread of DT, a lot
of disputes could have been spared, I believe.
M. K. Shen
=========================
http://home.t-online.de/home/mok-kong.shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Redundancy in algorithms
Date: Tue, 06 Feb 2001 13:08:58 +0100
[EMAIL PROTECTED] wrote:
>
> Some algorithms are described as being non-redundant. What does that
> mean?
I should be surprised to see such a term (referring to
an algorithm) appear in a CS paper. Could you please
supply a reference?
M. K. Shen
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: ith bit of an LFSR sequence?
Date: Tue, 06 Feb 2001 12:16:18 GMT
[EMAIL PROTECTED] wrote:
>
> In article <95ljkt$2fg$[EMAIL PROTECTED]>,
> [EMAIL PROTECTED] (David Wagner) wrote:
> > >Given x in 1..2^^n-1, what's the most efficient way to find
> > >i such that x is the ith to i+n-1th bits of an LFSR's sequence?
> >
> > This is precisely as hard as the discrete log problem in F^*, where
> > F = GF(2)[x]/(p(x)); it is no harder, and no easier.
>
> Aw boo. That's just as hard as I thought it was. I was hoping I was
> just asking a dumb question.
>
> Hmmm. I always wondered why the NSA likes LFSRs. Maybe I can ask a
> dumb question yet. Are there any ciphers built on top of this
> problem?
Well, I asked in another thread about using this for Diffie-Hellman.
The response was that it will certainly work, BUT it is faster to do
discrete logs in GF(2^m) than in GF(p) if m is the same size as p.
If you pick m and p so that it is just as hard for an attacker to
perform discrete logs in either field, then the GF(2^m) operations will
take longer than the GF(p) operations.
There may, however, be speedups so that the GF operations are as fast as
or faster than the integer operations in equal strength fields.
--
A solution in hand is worth two in the book.
Who cares about birds and bushes?
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: ith bit of an LFSR sequence?
Date: Tue, 06 Feb 2001 12:31:04 GMT
Paul Rubin wrote:
>
> [EMAIL PROTECTED] writes:
> > Given i in 0..2^^n-2, what's the most efficient way to generate the
> > LFSR sequence starting at the ith bit? (The best I can come up with
> > offhand is the standard way of producing large exponents, that is,
> > multiplying n nxn bit matrices together. Is there a better way?)
> >
> > Given x in 1..2^^n-1, what's the most efficient way to find i such
> > that x is the ith to i+n-1th bits of an LFSR's sequence? (This
> > seems an example of the discrete log problem, which is roughly as
> > hard as factoring. But maybe there are special tricks for LFSRs.)
>
> The answers to these questions are well known in the literature. Look
> in the index to Applied Cryptography for LFSR's, then check the
> references in the bibliography, for example. And to answer your
> question, solving an LFSR is *not* as hard as factoring. That's why
> LFSR keystreams are not secure. This stuff goes back for decades.
You've misread the question.
Suppose we have an n bit LFSR:
We start at state y, step i times to state x.
If we know x (or know n bits starting at i), and know i, we can easily
learn y, and in fact all of the keystream. This is what you said is
known/broken, but it is NOT what Bob asked about.
If we know y, and know i, what is the most efficient way to generate x?
This is the first thing Bob asked about. To do this, we take the value
2 (the polynomial x), raise it to the ith power mod the feedback
polynomial, and multiply the result by y. Raising to a power can easily
be done by repeated squaring.
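The repeated-squaring step can be sketched by representing elements of GF(2)[x]/(p(x)) as integers, one bit per coefficient (the helper names here are hypothetical):

```python
def gf2_mulmod(a: int, b: int, p: int, n: int) -> int:
    """Multiply a * b in GF(2)[x] modulo p(x), where deg p = n."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add (XOR) a into the result
        b >>= 1
        a <<= 1             # multiply a by x ...
        if a >> n:
            a ^= p          # ... and reduce mod p(x)
    return r

def lfsr_jump(y: int, i: int, p: int, n: int) -> int:
    """State after i steps: y * x^i mod p(x), by square-and-multiply."""
    result, base = y, 0b10  # 0b10 is the polynomial x
    while i:
        if i & 1:
            result = gf2_mulmod(result, base, p, n)
        base = gf2_mulmod(base, base, p, n)
        i >>= 1
    return result

# Example: p(x) = x^4 + x + 1 (0b10011) is primitive, so x has order 15.
assert lfsr_jump(1, 15, 0b10011, 4) == 1
```

This computes the state i steps ahead in O(n^2 log i) bit operations instead of i single steps; recovering i from x and y is the discrete log problem mentioned above.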
If we know x (or know n bits starting at i), and know y, and want to
know i, what do we do? This is the second thing Bob asked about. THIS
problem is exactly equal in difficulty to the discrete log problem.
Is factoring about as hard as discrete log? Maybe, maybe not -- but it
is irrelevant except for comparison purposes.
--
A solution in hand is worth two in the book.
Who cares about birds and bushes?
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Rijndael's resistance to known plaintext attack
Date: Tue, 06 Feb 2001 12:39:48 GMT
SCOTT19U.ZIP_GUY wrote:
>
> [EMAIL PROTECTED] (Joseph Ashwood) wrote:
> >
> >"Marcin" <[EMAIL PROTECTED]> wrote:
> >> Hello,
> >> Can someone comment or refer me to the analysis on resistance of
> >> Rijndael to known plaintext attacks?
> >> Thanks,
> >> Marcin Kurzawa
> >
> >As it stands now, any even remotely reasonable amount of
> >known plaintext (anything less than 2^100+ bits) reveals very
> >little. I'd expect that for the foreseeable future, as long as you
> >don't go above 2^90 bits of text there won't be any reasonable attack
> >against Rijndael.
> > Joe
>
> Actually with a few 100 bytes of text it's highly unlikely that
> two separate keys could exist for a given plaintext/ciphertext
> pair. So in theory there most likely is a solution to the problem
> with very short amounts of data. The only real question is
> whether the solution is well known outside of possibly the NSA.
> It may well be that in our lifetimes no such solution will
> be made available to the public. But from an informational point
> of view there most likely is a break, if someone can find it.
Moron. Joe was not talking about unicity distance. He was talking
about things like linear or differential analysis, or other forms of
analysis which require known/chosen plaintext.
If you only have a few blocks (unicity distance) of known plaintext, the
only attack you can mount is brute force. Only someone as mind
bogglingly obtuse as yourself would consider doing brute force on a key
of 128 or more bits.
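The unicity-distance point can be put in back-of-envelope numbers. Treating a k-bit-key block cipher as a family of random permutations (a heuristic model, not a statement about Rijndael specifically), the expected number of wrong keys that also match b bits of known plaintext/ciphertext is roughly 2^(k-b):

```python
def spurious_keys(k_bits: int, known_bits: int) -> float:
    """Expected number of wrong keys consistent with the known data,
    under a random-permutation model of a k-bit-key block cipher."""
    return 2.0 ** (k_bits - known_bits)

# One 128-bit block already pins down a 128-bit key almost uniquely ...
assert spurious_keys(128, 128) == 1.0
# ... and two known blocks make a second matching key vanishingly rare.
assert spurious_keys(128, 256) == 2.0 ** -128
```

So the key is information-theoretically determined by a couple of blocks, as Scott says; the separate question, which Joe was addressing, is whether any attack cheaper than brute force can actually find it.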
--
A solution in hand is worth two in the book.
Who cares about birds and bushes?
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto,alt.hacker,alt.conspiracy
Subject: Re: OverWrite freeware completely removes unwanted files from hard drive
Date: Tue, 06 Feb 2001 12:48:38 GMT
In article <[EMAIL PROTECTED]>,
Anthony Stephen Szopa <[EMAIL PROTECTED]> wrote:
> Daniel wrote:
> >
> > On Mon, 05 Feb 2001 16:57:13 GMT, Tom St Denis <[EMAIL PROTECTED]>
> > wrote:
> >
> > >
> > >I would argue that on super-dense HDs simply writing FF to the file is
> > >enough. Not a lot of snoopers have the time to break out the ol'
> > >electron microscope and read bits "The HardWay (tm)". If I overwrite
> > >the file with FF the OS doesn't keep a backup (or shouldn't), thus
> > >mission accomplished: the file is wiped.
> > >
> > >Tom
> > >
> > Let us not forget what it would cost to have a HardDisk scanned up to
> > 11 layers deep. Usually, those HD which contained "critical
> > information" but are no longer used are destroyed (mechanical + heat).
> > That's the only assuring way :)
> >
> > best regards,
> >
> > Daniel
>
> What are you talking about: "11 layers deep".
I dunno about the physics of a hard disk but...
>
> Don't be ridiculous.
You first!!!
Tom
Sent via Deja.com
http://www.deja.com/
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto,alt.hacker,alt.conspiracy
Subject: Re: OverWrite freeware completely removes unwanted files from hard drive
Date: Tue, 06 Feb 2001 12:49:19 GMT
In article <[EMAIL PROTECTED]>,
Anthony Stephen Szopa <[EMAIL PROTECTED]> wrote:
> Tom St Denis wrote:
> >
> > In article <[EMAIL PROTECTED]>,
> > Anthony Stephen Szopa <[EMAIL PROTECTED]> wrote:
> > > OverWrite freeware completely removes unwanted files from hard drive
> > >
> > > OverWrite Program: incorporates the latest recommended file
> > > overwriting techniques. State-of-the-art detection technology and
> > > the subtleties of hard drive technology have made most overwritten
> > > and deleted data on magnetic media recoverable. Simply overwriting
> > > a file a few times is just not good enough.
> >
> > I would argue that on super-dense HDs simply writing FF to the file is
> > enough. Not a lot of snoopers have the time to break out the ol'
> > electron microscope and read bits "The HardWay (tm)". If I overwrite
> > the file with FF the OS doesn't keep a backup (or shouldn't), thus
> > mission accomplished: the file is wiped.
> >
> > Tom
> >
> > Sent via Deja.com
> > http://www.deja.com/
>
> Your point is well made except that there continues to be
> vulnerable tracking variations even on modern hard drives.
>
> And they do not use electron microscopes for this purpose.
>
> Pointing out these facts should lead the average person of even
> common intelligence to question your grasp of the facts and
> conclusions as I have.
>
I will pay you $1000 if you can read via DOS any file that I wrote over
with FFs.
Tom
Sent via Deja.com
http://www.deja.com/
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************