Cryptography-Digest Digest #857, Volume #8 Thu, 7 Jan 99 02:13:02 EST
Contents:
Re: coNP=NP Made Easier? (Planar)
Re: U.S. Spying On Friend And Foe (Withheld)
Re: MAGENTA question... ("Brian Gladman")
UNIX crypt()? (Dennis Robins)
Re: SOC.CULTURE.JEWISH IS UNDERGOING AN ATTACK BY NEO-NAZIS!! (william wheeler)
Re: Chosen-Signature Steganography (Nicko van Someren)
Re: Help: a logical difficulty (Nicol So)
Re: Prime Gaps: a method for calculating them. ([EMAIL PROTECTED])
Re: Birthday Attack calculations. (Fred Van Andel)
Re: Chosen-Signature Steganography (David A Molnar)
ScramDisk - password size - high ASCII (Jim Rollins)
Re: Graph Isomorphism (Soeren Mors)
Re: On leaving the 56-bit key length limitation ([EMAIL PROTECTED])
Re: MAGENTA question...
----------------------------------------------------------------------------
From: Planar <[EMAIL PROTECTED]>
Crossposted-To: sci.math,comp.theory
Subject: Re: coNP=NP Made Easier?
Date: 6 Jan 1999 23:07:28 GMT
>From: rosi <[EMAIL PROTECTED]>
> I first commit to what I adopt as the definition for the complexity
>class of P (which is quoted from "Handbook of Applied Cryptography" by
>Menezes et al):
> The complexity class P is the set of all decision problems
> that are solvable in polynomial time.
Aha. Now we know where the problem is. The correct definition of P
would be more like this:
The complexity class P is the set of all decision problems
that are solvable in polynomial time BY DETERMINISTIC TURING
MACHINES.
> I here also commit my understanding of the definition. As long as
>the problem is solvable in polynomial time, be via DTM or NDTM, it is
>in P.
And this is equivalent (for people who know the correct definition of
P and NP) to claiming P = NP.
I'm quite sure Ilias won't disagree with me (this time).
--
Planar
------------------------------
From: Withheld <[EMAIL PROTECTED]>
Subject: Re: U.S. Spying On Friend And Foe
Date: Wed, 6 Jan 1999 22:55:06 +0000
Reply-To: Withheld <[EMAIL PROTECTED]>
In article <770125$gf8$[EMAIL PROTECTED]>, Richard Herring
<[EMAIL PROTECTED]> writes
>In article <[EMAIL PROTECTED]>, Tony T. Warnock (u091889@cic-
>mail.lanl.gov) wrote:
>> Douglas A. Gwyn wrote:
>
>> > The British can keep secrets, and they have an Official Secrets Act to
>> > enforce it.
>
>> That's a Blunt statement. Both the Brits and the US believe that the other
>> cannot keep secrets. They are both Wright.
>
>Time to coMaClean on this one - but who gives a Fuchs?
>
I wish your name was Phil, then I could quip back Phil-By quiet....
--
Withheld
------------------------------
From: "Brian Gladman" <[EMAIL PROTECTED]>
Subject: Re: MAGENTA question...
Date: Wed, 6 Jan 1999 22:49:26 -0000
John Savard wrote in message <[EMAIL PROTECTED]>...
>[EMAIL PROTECTED] (John Savard) wrote, in part:
>
>>The S-box contains the following:
>
>>1, 2, 4, 8, 16, 32, 64, 128, 101, 202, (404-256) xor 101...
>
>>I take it?
>
>That is, for others who have the same question, are the elements of
>the Magenta S-box the following:
>
[snip]
As far as I can tell without checking all the values you quoted, yes.
I have implemented magenta at:
http://www.seven77.demon.co.uk/aes.htm
so you can check out the details using my source code. The paper was quite
difficult to follow but I managed to code Magenta from it in about 1/2 a
day. However since it was the last of the 15 AES algorithms I tackled, I
had learnt quite a few tricks in earlier work!
Brian Gladman
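For anyone who wants to experiment, here is a short sketch of the generation
rule the quoted values suggest (a guess, not something taken from the Magenta
submission or from the implementation above): start at 1 and keep doubling,
folding any overflow back in with an XOR of 101 decimal. Check it against the
paper or the source code before relying on it.

    # Sketch only: the rule suggested by the quoted values 1, 2, 4, ..., 128,
    # 101, 202, (404-256) xor 101.  Not checked against the Magenta paper.
    def doubling_chain(n, feedback=101):
        out, x = [], 1
        for _ in range(n):
            out.append(x)
            x <<= 1
            if x > 255:
                x = (x & 255) ^ feedback
        return out

    print(doubling_chain(11))
    # [1, 2, 4, 8, 16, 32, 64, 128, 101, 202, 241]  -- 241 = (404 - 256) xor 101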
------------------------------
From: Dennis Robins <[EMAIL PROTECTED]>
Subject: UNIX crypt()?
Date: Thu, 07 Jan 1999 00:36:20 GMT
This is probably a simple question for this group but a
complex problem for me. Please keep in mind that the encrypted
data must be secure. Here is the problem.
I have a data file from which I need to create another file with
a subset of the data in the first file (AWK would work for that part).
One of the fields from the first file needs to be encrypted in the
second file. ONLY that one field.
As an example think of a situation where I have an address
book and I want to create a file of first name, last name and
zip code. Of the three I want the last name encrypted.
Thanks,
Dennis
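As an illustration of the shape of a solution (a sketch only, with made-up
file name and field positions, not anything suggested in the thread), the
chosen field can be run through the Unix crypt() routine while the rest of
each record is copied unchanged. Note that crypt() is a one-way hash: the
last name cannot be recovered from the output, so if the field must ever be
decrypted, a real cipher is needed instead.

    # Sketch only (hypothetical file name and field layout): copy first name,
    # last name and zip code from a colon-separated file, passing the last
    # name through crypt().  crypt() is one-way, and the traditional DES
    # variant only looks at the first 8 characters of its input.
    import crypt                         # Unix-only standard library module

    SALT = "K9"                          # two-character salt for traditional crypt()

    with open("addresses.txt") as src, open("subset.txt", "w") as dst:
        for line in src:
            fields = line.rstrip("\n").split(":")
            first, last, zipcode = fields[0], fields[1], fields[2]
            dst.write(":".join([first, crypt.crypt(last, SALT), zipcode]) + "\n")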
------------------------------
From: william wheeler <[EMAIL PROTECTED]>
Crossposted-To:
soc.culture.jewish,soc.culture.israel,news.admin.net-abuse.misc,news.software.nntp,news.admin.censorship
Subject: Re: SOC.CULTURE.JEWISH IS UNDERGOING AN ATTACK BY NEO-NAZIS!!
Date: Wed, 06 Jan 1999 15:35:12 -0800
Fred Cherry wrote:
> Soc.culture.jewish is getting an avalanche of weird messages. The material
> below the dashed line is something I posted in a local newsgroup of my
> ISP. Something has to be done, but I don't know what.
>
people are working on it....
------------------------------
From: Nicko van Someren <[EMAIL PROTECTED]>
Subject: Re: Chosen-Signature Steganography
Date: Thu, 07 Jan 1999 01:16:02 +0000
Signatory wrote:
> Chosen-Signature Steganography
>
> Summary
> -------
> The Digital Signature Algorithm (DSA) is likely to become
> widely distributed in many nations in the next few years.
> The signatures contain 320 random-looking bits, and so, the
> signatures are a good place to put messages without being
> noticed. This essay for sci.crypt describes a way to choose
> the signatures to send private messages securely and how to
> decode those private messages using simple calculations which
> can be done by hand or by a simple, non-cryptographic program.
> This can be categorized as a type of steganography.
The idea here was spotted by Gus Simmons while working on
equipment to verify the Strategic Arms Limitation Treaty (SALT).
He called this sort of steganography a "Subliminal Channel".
He presented a paper on it at Crypto'83 (from memory, it might
have been '82).
> Details of Chosen-Signature Messaging With Bootstrapped Keys
> ------------------------------------------------------------
> Two people share a secret 3 digit key and then they part company.
The problem here is that if the only shared secret information
is a 3 digit key and all subsequent communications are in the clear then
an eavesdropper can simply try each of the 1000 possible keys, run through
your process, and discover the messages.
Nicko
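To make that concrete, a toy sketch of the exhaustive search (the helper
functions are invented stand-ins for the example, not the scheme from the
original post):

    # Toy illustration of the exhaustive search described above.  Decoding
    # here is just an XOR with the low byte of the key, and "plaintext" just
    # means printable ASCII; the real scheme's decoding would go here instead.
    def decode_with_key(data, key):
        return bytes(b ^ (key & 0xFF) for b in data)

    def looks_like_plaintext(data):
        return all(32 <= b < 127 for b in data)

    def recover_message(observed):
        for key in range(1000):          # a 3 digit key: only 1000 candidates
            candidate = decode_with_key(observed, key)
            if looks_like_plaintext(candidate):
                return key, candidate    # a real attack would rank all hits
        return None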
------------------------------
From: Nicol So <[EMAIL PROTECTED]>
Subject: Re: Help: a logical difficulty
Date: Wed, 06 Jan 1999 21:44:13 -0500
Trevor Jackson, III wrote:
> Nicol So wrote:
>
> > Of course, because you don't even have enough names for infinite
> > sequences. (There are uncountably many infinite strings, but only
> > countably many finite strings that can be used as descriptions). The
> > infinite binary strings capable of finite representation correspond to the
> > decidable languages.
>
> I find this statement confusing. What does "countably many finite" mean as
> opposed to "finite". Are there finite numbers that are uncountable???
You misunderstood the sentence. The phrase "countably many finite strings"
should be parsed as "(countably many) (finite strings)", not "(countably many
finite) strings". "Finite" in the context means "finite-length".
> Also, is there are reason why name or description strings have to be finite?
> Consider the string 3.14159265... (in binary or any other radix), with the
> name "3.14159265..." (in binary or any other radix).
The reason you want a description to be finite is that you want to be able to
write it down, store it, communicate it, or do something useful with it. In
your example, the number pi is a transcendental number, and therefore has a
non-terminating decimal expansion. Note that finite prefixes of the decimal
expansion of pi are NOT descriptions of pi. "3.14" is not the value of pi,
neither is "3.14159265". There are infinitely many real numbers beginning with
"3.14159265...", so "3.14159265..." is not a description of pi either. (If
somebody talks about a number beginning with "3.14159265...", he could be
talking about "3.141592651111...").
The number pi is usually defined as "the ratio of the circumference of a circle
to its diameter". Because this ratio doesn't vary from one circle to another,
the above description uniquely and unambiguously specifies a real number. The
fact that the decimal representation of pi begins with "3.14159265..." is just a
consequence of its definition.
You can also express pi as the sum of an infinite series, but even in such a
case, the math expression can still be completely specified in a finite number
of symbols, leaving nothing to intuition or guesswork.
> Why is the set of
> strings any larger than the set of string names?
The short answer is: Cantor's diagonalization argument.
Let's consider the set of all *infinite* binary strings (the binary alphabet is
as good as any finite alphabet).
An infinite binary string can be interpreted as encoding a subset of the natural
numbers. If the i-th bit of the string is 1, the integer i is in the set. With
this interpretation, different infinite strings encode different subsets of the
natural numbers. Therefore, the set of all infinite binary strings has the same
cardinality as the power set of the natural numbers.
On the other hand, a finite binary string can be interpreted as encoding a
natural number. If we order all finite binary strings in lexicographical
(dictionary) order, we can interpret the i-th string as encoding the integer i.
It is easy to see that the set of all finite binary strings has the same
cardinality as the set of natural numbers.
By Cantor's theorem, a set is never equipotent to (never has the same
cardinality as) its power set, so these two sets cannot have the same
cardinality: there are strictly more infinite binary strings than finite
ones.
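A small sketch to make both encodings concrete (an illustration only, not
part of the original argument): finite binary strings listed in shortlex
order line up with the natural numbers, while Cantor's diagonal construction
defeats any claimed listing of the infinite strings.

    # Illustration only: the finite binary strings are countable.  Listing
    # them in shortlex order (by length, then dictionary order) pairs each
    # one with its position, i.e. with a natural number.
    from itertools import count, islice, product

    def finite_binary_strings():
        yield ""
        for n in count(1):
            for bits in product("01", repeat=n):
                yield "".join(bits)

    print(list(islice(finite_binary_strings(), 8)))
    # ['', '0', '1', '00', '01', '10', '11', '000']

    # Cantor's diagonal step: given ANY claimed enumeration e of the infinite
    # binary strings, with e(i, j) = bit j of the i-th string, the string
    # whose j-th bit is 1 - e(j, j) differs from every listed string in at
    # least one position, so the enumeration cannot be complete.
    def diagonal(e):
        return lambda j: 1 - e(j, j)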
Nicol
------------------------------
From: [EMAIL PROTECTED]
Crossposted-To: comp.lang.apl,sci.math
Subject: Re: Prime Gaps: a method for calculating them.
Date: Wed, 06 Jan 1999 23:49:27 GMT
I am working on learning Derive, which should have a much greater readership
than J.
Your description of what the method is storing is very accurate. The method
is, in fact, converting the ever-increasing periodic chunks of the bitmap into
gaps (a sketch of that idea follows the quoted article below).
torres
In article <76u8s7$[EMAIL PROTECTED]>,
Planar <[EMAIL PROTECTED]> wrote:
> >From: [EMAIL PROTECTED]
> >
> >The following is a first draft program (written in J notation,
> >http://www.jsoftware.com) that generates prime numbers from prime gap
> >calculations.
>
> I'm not very good at reading J. It would help if you could write your
> algorithm in a more readable language (such as pseudo-code).
>
> Anyway, I have the impression that this is simply Eratosthenes' sieve,
> except for the following:
>
> The normal version uses a bitmap to remember the remaining numbers
> (the numbers that are not multiples of the primes found so far). At
> any time during the execution of the program, this bitmap is periodic.
>
> Instead of storing it as a bitmap, you store it as a list of gaps
> (distance from each set bit to the next one), with the length of the
> list limited to one period of the bit pattern.
>
> --
> Planar
>
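Here is one reading of that gap representation as a short sketch (an
interpretation, not a translation of the J program): each round lays out the
current gap pattern once per copy needed for the new prime's period, drops
the survivors that the new prime eliminates, and re-measures the gaps.

    # Interpretation of the gap-list idea (not the J code): after sieving the
    # primes p1..pk, the surviving numbers repeat with period p1*...*pk, so
    # one period is stored as the gaps between consecutive survivors.
    def extend_gap_list(gaps, prime, start=1):
        period = sum(gaps)
        # lay out one new period: the old pattern repeated `prime` times
        survivors, n = [], start
        for g in gaps * prime:
            survivors.append(n)
            n += g
        # drop the new prime's multiples and close the cycle
        kept = [s for s in survivors if s % prime != 0]
        kept.append(kept[0] + period * prime)
        return [b - a for a, b in zip(kept, kept[1:])]

    wheel = [2]                   # after sieving 2: survivors 1, 3, 5, ... (gap 2)
    for p in (3, 5):
        wheel = extend_gap_list(wheel, p)
    print(wheel)                  # [6, 4, 2, 4, 2, 4, 6, 2], one period of 30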
------------------------------
From: [EMAIL PROTECTED] (Fred Van Andel)
Subject: Re: Birthday Attack calculations.
Date: Thu, 07 Jan 1999 03:53:13 GMT
Reply-To: [EMAIL PROTECTED]
> [EMAIL PROTECTED] wrote:
> The reason for the needed precision is that I am designing a variable
> length hash function and I want to test it for resistance to
> collision. Since I don't have the resources to do a full test for a
> 256 bit hash I am going to test 8, 16, 24, 32 and maybe 40 bit hashes
> and search for trends. If the algorithm is resistant to collision in
> the smaller sizes and there is no trend away from the "proper" value
> then due to the nature of the algorithm I can be quite confidant that
> the larger hashes are also resistant to collisions.
[EMAIL PROTECTED] replied:
> This is a very good idea, since only if the hash is any good will it
>have the distribution predicted by the birthday collision method.
>However, funny things do happen, and though it is a very good idea
>to check at the small lengths, where more statistics can be gathered,
>you should always run tests on the final length you're going to use
>or you might get surprised.
The whole point of testing on small hashes and extrapolating is that
it is computationally impossible to do a birthday attack on a large hash.
For a 256 bit hash you will need to create more than 2^128 hashes
before the odds of a collision reach 50%.
Do you know how long that would take on my 486-66? Or even on a planet
full of computers, for that matter. The indirect evidence is the ONLY
indication of collision resistance.
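For reference, the usual birthday estimate behind that 2^128 figure (a rough
sketch, not part of the original post): about sqrt(2 ln 2) * 2^(n/2), roughly
1.18 * 2^(n/2), random hashes give a 50% chance of a collision for an n-bit
hash.

    # Rough birthday-bound figures for the hash sizes mentioned above:
    # about sqrt(2 * ln 2 * N) draws give a 50% collision chance among
    # N = 2^bits possible hash values.
    import math

    def hashes_for_half_chance(bits):
        return math.sqrt(2 * math.log(2)) * 2.0 ** (bits / 2)

    for bits in (8, 16, 24, 32, 40, 256):
        print(bits, hashes_for_half_chance(bits))
    # 8 -> ~19, 16 -> ~300, ..., 256 -> ~1.18 * 2^128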
Fred Van Andel
------------------------------
From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Chosen-Signature Steganography
Date: 7 Jan 1999 03:31:24 GMT
Nicko van Someren <[EMAIL PROTECTED]> wrote:
> The idea here was spotted by Gus Simmons while working on
> equipment to verify the Strategic Arms Limitation Treaty (SALT).
> He called this sort of steganography a "Subliminal Channel".
> He presented a paper on it at Crypto'83 (from memory, it might
> have been '82).
Yes, this. He also went on to do a fair amount of work in this area.
I went looking for papers on "subliminal channels" and found
an article by Simmons, "The History of Subliminal Channels", in the
IEEE Journal on Selected Areas in Communications. There's also a paper
on whether a subliminal-channel-free signature or crypto scheme is
possible, but I haven't read that yet.
I was wondering if anyone had thought of something useful to
shove into 'em (software failure modes, anyone?), and it looks
like the answer is yes.
-David
------------------------------
From: Jim Rollins <[EMAIL PROTECTED]>
Subject: ScramDisk - password size - high ASCII
Date: 7 Jan 1999 06:42:53 GMT
I will begin by stating that I have not downloaded the source code to
ScramDisk because I would not be able to understand it. However, I have
read the documentation, and not found answers to these questions. My
questions center around my desire to maximize the strength of my
password, while still keeping it recorded in only one place, namely in
my head.
I understand that bigger is better, when it comes to passwords, and that
the more random a password is the less successful a dictionary attack
will be. For the moment however, let us forget the ease with which a
short password could be calculated. My first question is, does the
length of the password have any effect on the randomness of the encoded
volume? Put another way, will a ScramDisk volume with a short password
be any more subject to decryption than a volume with a long password? I
believe the name for the type of attack I'm thinking of is
cryptanalysis, a direct attack upon the contents of the encoded file.
Second, the use of high ASCII characters. When I refer to high ASCII
characters I am referring to those non-keyboard characters that have
hex values from 80h to FFh. These values are input by holding down the
'Alt' key and keying in a four-digit number on the numeric keypad. For
example, on my keyboard, holding down the Alt key and entering 0169 makes
the screen display the copyright symbol, a 'c' with a circle around it.
When I use the "low level Red mode" to input a password, I am not able
to use high ASCII characters. When I use the non-Red input screen these
characters are displayed. The question is, does ScramDisk use these
high ASCII characters? Does the extra effort of typing in four numbers
to get one character make the password more secure, or does the program
simply default these characters to some lower character code?
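On the security question, the general arithmetic (independent of what
ScramDisk actually does with those characters, which is exactly what is
being asked above) is that a randomly chosen password contributes about
length * log2(character set size) bits. A rough comparison, assuming roughly
220 usable code points if the high ASCII range really is accepted:

    # Rough entropy of a randomly chosen password: length * log2(charset size).
    # 95 is the printable low-ASCII set; 220 is a guess at the usable size if
    # the 80h-FFh range really is accepted (the open question above).
    import math

    def entropy_bits(length, charset_size):
        return length * math.log2(charset_size)

    print(entropy_bits(12, 95))    # ~79 bits
    print(entropy_bits(12, 220))   # ~93 bits
    print(entropy_bits(15, 95))    # ~99 bits: a few extra ordinary characters
                                   # buy more than the larger alphabet does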
I want to thank those of you who have worked on developing and
supporting ScramDisk. It is a very fine program, which I greatly
appreciate. For many years I used the old Norton program 'Diskreet',
which is similar in design concept. ScramDisk has now made it possible
for me to move securely to Win95 (do you intend to port this to Linux?).
Thank you, Jim
------------------------------
From: Soeren Mors <[EMAIL PROTECTED]>
Subject: Re: Graph Isomorphism
Date: 07 Jan 1999 07:42:57 +0100
[EMAIL PROTECTED] (Alan Martin) writes:
> Is graph isomorphism still considered to be a hard problem suitable
> for use in cryptography?
It is still considered hard.
> It is mentioned in the second edition of Applied Cryptography.
> According to the errata, although it is not known to be NP-complete,
> it may be useful in cryptography.
Are there any known useful algorithms using it? The only thing I can
think of is that you can build bit-commitments from it, although you
will have to exchange quite a lot of data per bit committed to.
> If so, how large should the graphs be, and what special properties
> should they have, to resist isomorphism-finding algorithms?
No idea.
> --
> Alan Martin
> [EMAIL PROTECTED]
--
Soeren Mors
Student of Computer Science at DAIMI [EMAIL PROTECTED]
Student programmer at Cryptomathic A/S [EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: On leaving the 56-bit key length limitation
Date: Thu, 07 Jan 1999 05:47:01 GMT
Ed Gerck wrote:
> Shannon [Sha49] defined "unicity distance" (hereafter, "n") as the
> least amount of plaintext which could be uniquely deciphered from the
> corresponding ciphertext -- given unbounded resources by the
> attacker.
Though essentially correct, there are a couple subtle points
worth bringing up. Modern references invariably define
unicity distance as the amount of _ciphertext_ that has a
unique decryption. [Sha49] is not entirely clear as to whether
the letters are plaintext or ciphertext, and the definition is
in the context of ciphers that don't expand the text, so no
such distinction was necessary.
What does "could be uniquely deciphered" mean? Shannon was
clear: given a ciphertext and an /a priori/ probability
distribution on the plaintext space, unique decipherment means
that the conditional probability of some plaintext given the
ciphertext is close to one (and therefore the probability of
all other strings must be close to 0).
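(For reference, the standard estimate that falls out of [Sha49] is that this
least length is about H(K)/D, the key entropy divided by the per-character
redundancy of the plaintext; with DES's 56-bit key and the 6.8 bits of
redundancy per ASCII byte quoted further below, that gives 56/6.8, or about
8.2 bytes.)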
> 5. THE EFFECT OF ADDED OPEN TRUST
>
> It may seem that the more the attacker trusts information about the
> intercepted message, the least the unicity. It seems intuitive that
> the more the attacker knows, the least he would need to know.
>
> This is, however, provably wrong.
>
> Added open trust -- publicly known to all, including the attacker --
> can either increase or decrease unicity, as has been proved in
> Sections 3. and 4. above.
Compression of plaintext can increase the unicity distance over
uncompressed plaintext. I think it's at least worth pointing out
that it is the transformation itself - not just public knowledge of it -
that increases the unicity distance.
[...]
Now the major technical error in the post:
> 7. THE UNICITY OF DES
>
> Regarding the unicity of DES, when one uses compression (best case,
> as seen in Section 4.), different authors quote DES unicity to be
> from 16 to 20 bytes of English. For uncompressed English, a value of
> 8.2 is often quoted. These numbers are indeed easy to obtain. For
> example, if I would directly use the calculation given in Sections 3.
> and 4. above, DES unicity would indeed be 8.2 bytes for uncompressed
> English and 20 bytes for 2x compressed English -- since DES uses
> 56-bit keys and enciphers bytes.
>
> As another example, quoting from Schneier [Sch98]:
>
> "For a standard English message, the unicity distance is K/6.8,
> where K is the key length. (The 6.8 is a measure of the redundancy
> of English in ASCII. For other plaintexts it will be more or less,
> but not that much more or less.) For DES, the unicity distance is
> 8.2 bytes."
>
> "This means that if you are trying to brute-force DES you need two
> ciphertext blocks. (DES's block length is 8 bytes.) Decrypt the
> first ciphertext block with one key after another. If the resulting
> plaintext looks like English, then decrypt the second block with the
> same key. If the second plaintext block also looks like English,
> you've found the correct key."
>
> "The unicity distance grows as the redundancy of the plaintext
> shrinks. For compressed files, the redundancy might be 2.5, or
> three blocks of DES ciphertext".
>
> However, I will show that DES unicity is actually close to 3 bytes of
> English -- and could even be 0 byte in some systems, such as in SSL
> that contains a large fraction of known plaintext.
>
> My calculation is simple to explain and can be so summarized (see
> the forthcoming paper for the full text):
>
> When one tries to decrypt the encipherment of plaintext with a
> defined statistics (such as English text or 2x compressed English
> text), DES has a "yes/no" feature on decryption such that using the
> wrong key almost 100% guarantees (due to the complex processes used
> and the fact that 56 bits encode blocks of 64 bits) that one obtains
> "garbage" as a result. Thus, only the correct key is expected to
> reproduce statistics other than "random" from ciphertext
A random cipher has exactly that property. If f(p,k)->c is a
random cipher (plaintext p, key k and ciphertext c), then
f^-1(f(p,k), k') where k != k', is equally likely to be any
string in the plaintext domain of f other than p.
What is the "almost 100%" probability of not getting English
text when decrypting with an incorrect key? It's the
probability of a random 64 bit vector being English text (or
_very_ slightly less, since one English string is ruled out).
So how many English texts of 8 characters are there? (The
1.2 bits of entropy per character doesn't hold for short
texts.) Let's say there are 1,000,000, which is probably
within an order of magnitude. The chance that a randomly
chosen 64 bit string is English is only 0.0000000000000542,
so the chance of not getting English when decrypting with
an incorrect key is 0.9999999999999458. It seems hard to
disagree that it's almost 100%.
There's a catch. We have 2^56 - 1 chances to hit one of those
texts with an incorrect key. Thus by trying all keys we get
the correct text and about 3906 false English plaintexts.
For those who like to work with smaller numbers, our key is 8
bits shorter than the 64 bit block, so false keys hit about
one in 2^8=256 texts. One in 256 of our 1000000 English texts
is 3906.25.
So after one 64-bit block, there are still almost 4000
candidate keys. The unicity distance for English under DES is
greater than 64 bits.
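(The arithmetic is easy to reproduce; a short check using the same
assumption of about 1,000,000 English 8-character texts:)

    english_texts = 10**6                  # the assumption used above
    p_english = english_texts / 2.0**64    # chance a random 64-bit block "is English"
    false_hits = (2**56 - 1) * p_english   # wrong keys whose output still looks English
    print(p_english, false_hits)           # ~5.42e-14 and ~3906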
[...]
> Thus, the first point is that the current unicity formula (as derived
> by Shannon) does not apply to DES -- Shannon's "random cipher"
> assumption given in Section 3 item d breaks down completely for DES.
> Of course, this may have been a design goal since it aids decryption
> attacks also in some other aspects and for random plaintext (both to
> be commented in the paper).
Completely wrong. If x is "almost 100%", that doesn't mean that
(100%-x)*2^56 is almost 0. Here it's closer to 3906.
The random cipher model seems to work well for DES. See the
DES cycling experiments and Matt Blaze's DES challenges for
experimental evidence.
> This means that for DES one must use the unicity of English alone,
> without any key but that provided by the English alphabet itself.
> This is a strong reduction over the case with 2^56 keys.
Nope. The established concept of unicity distance leads
to correct results. The claims to the contrary are wrong.
--Bryan
------------------------------
From: [EMAIL PROTECTED] ()
Subject: Re: MAGENTA question...
Date: 7 Jan 99 05:04:55 GMT
Brian Gladman ([EMAIL PROTECTED]) wrote:
: As far as I can tell without checking all the values you quoted, yes.
Great! After describing - in varying degrees of detail - most of the AES
candidates, I kind of got tired of it, and did not tackle some that were
hard to understand, such as HPC, DFC, Magenta. E2 was easy enough to
understand, but perhaps its submitters don't want anybody else to describe
their cipher...the huge copyright notices on every page of their document
gave me pause.
I should at least turn back to Twofish, and finish my half-finished
description of it...for the moment, I've been doing much less demanding
things with my site, such as touching up the Links page.
John Savard
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************