Cryptography-Digest Digest #501, Volume #12 Mon, 21 Aug 00 23:13:01 EDT
Contents:
Re: Bytes, octets, chars, and characters (Jim Thomas)
Re: On pseudo-random permutation (Tim Tyler)
Re: Hidden Markov Models on web site! (Tim Tyler)
Re: On pseudo-random permutation (Tim Tyler)
Re: My unprovability madness. (Future Beacon)
Re: What is required of "salt"? ([EMAIL PROTECTED])
Re: blowfish problem (Richard Heathfield)
Re: 1-time pad is not secure... (Tim Tyler)
Re: My unprovability madness. (Future Beacon)
Re: Bytes, octets, chars, and characters (Eric Smith)
Re: Stream Cipher/PRBG idea. (Benjamin Goldberg)
PKI (Herry Koh)
Re: New algorithm for the cipher contest ([EMAIL PROTECTED])
Re: Bytes, octets, chars, and characters (John Savard)
Re: The DeCSS ruling (John Savard)
----------------------------------------------------------------------------
From: Jim Thomas <[EMAIL PROTECTED]>
Crossposted-To: comp.lang.c,alt.folklore.computers
Subject: Re: Bytes, octets, chars, and characters
Date: 21 Aug 2000 14:00:47 -1000
>>>>> "Richard" == Richard Bos <[EMAIL PROTECTED]> writes:
Richard> [EMAIL PROTECTED] (John Savard) wrote:
>> However, in the past, it had been customary to refer to a six-bit area
>> in a computer's memory, where such an area was the span of memory
>> occupied by a character of a text, as a character.
>>
>> The term byte did not come into general use until computers stored
>> characters in 8-bit areas.
Richard> This is completely wrong. You would do well to read, say, the Jargon
Richard> File.
Presumably you are referring to the introduction of "byte". 1401 memory
sizes WERE advertised in terms of "characters", and 1620 memory sizes WERE
advertised in terms of "digits".
------------------------------
Crossposted-To: comp.programming
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: On pseudo-random permutation
Reply-To: [EMAIL PROTECTED]
Date: Mon, 21 Aug 2000 23:36:12 GMT
In sci.crypt Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
: Pseudo-random permutation is commonly done with a PRNG giving
: real values in [0,1) according to an algorithm due to
: Durstenfeld (see Knuth vol. 2). Sometimes one has, however,
: a pseudo-random bit sequence given, instead of a PRNG of
: the said kind. In that case, I suggest that it is more
: convenient to [use a sort-based method].
A PRNG outputting real numbers is not a requirement for the
orthodox method - you need a random integer in [0, i] for each
index i as the shuffle proceeds - not random reals.
If one method is better than the other, I think performance
should probably be the main distinguishing criterion.
--
__________ http://alife.co.uk/ http://mandala.co.uk/
|im |yler [EMAIL PROTECTED] http://hex.org.uk/ http://atoms.org.uk/
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Hidden Markov Models on web site!
Reply-To: [EMAIL PROTECTED]
Date: Mon, 21 Aug 2000 23:44:45 GMT
Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
: John Savard wrote:
:> http://home.ecn.ab.ca/~jsavard/co041003.htm
: The page was not accessible. [...]
Try instead: http://home.ecn.ab.ca/~jsavard/crypto/co041003.htm
--
__________ Lotus Artificial Life http://alife.co.uk/ [EMAIL PROTECTED]
|im |yler The Mandala Centre http://mandala.co.uk/ Namaste.
------------------------------
Crossposted-To: comp.programming
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: On pseudo-random permutation
Reply-To: [EMAIL PROTECTED]
Date: Mon, 21 Aug 2000 23:41:45 GMT
In sci.crypt Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
[Pseudo-random permutation]
: Let A[1..n] be the array to be permuted, B[1..n] be an array
: of records of two unsigned integers, with the first component
: of each record filled with the pseudo-random bits (m bits
: are used for each unsigned integer of size m, which is thus
: a random number in [0, 2^m-1]) and the second component of
: the records set consecutively to 1..n. We sort the array B
: according to the first component of the records in ascending
: order so that the second component now gives the original
: (i.e. before sorting) array-index of the individual records.
: With these indices, which evidently form a random permutation
: of 1..n, the array A can be re-ordered.
I observe that you omit any form of detection of collisions between the
first components of B. Without such a check the result does not form a
truly unbiased random permutation (on the assumption that the RNG is
good).
--
__________ Lotus Artificial Life http://alife.co.uk/ [EMAIL PROTECTED]
|im |yler The Mandala Centre http://mandala.co.uk/ Namaste.
------------------------------
From: Future Beacon <[EMAIL PROTECTED]>
Crossposted-To: sci.math,sci.physics
Subject: Re: My unprovability madness.
Date: Mon, 21 Aug 2000 20:01:21 -0400
On Mon, 21 Aug 2000, Douglas A. Gwyn wrote:
> Future Beacon wrote:
> > If you're interested, the message below was selectively answered to
> > make it appear that I disagree with Goedel's theorem.
>
> It does seem from what you just included that you think
> there is some way to avoid the existence of undecidability.
> Other than the head-in-the-sand approach, how can that be
> done?
>
> > I don't agree with you in your assessment of the discourteousness
> > involved, but I may have taken it worse than it was intended.
>
> It is all too easy in this medium to lose context and
> get confused over attributions. Generally speaking,
> postings are likely to be responded to as they stand
> without reference to preceding articles in the same thread.
>
This is fair and closes the more global issue which I felt
justified wider posting. It is time to return to sci.math.
Jim Trek
Future Beacon Technology
http://eznet.net/~progress
[EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: What is required of "salt"?
Date: Tue, 22 Aug 2000 00:13:37 GMT
David A. Wagner <[EMAIL PROTECTED]> wrote:
>> Usually I see the assumption that the salt is "random". I
>> don't see, however, why this is so. For example, what would
>> be wrong with using a simple counter to generate salt values?
>> Or, what if the salt were the concatenation of the user name
>> and the server name (or address)? Is there a reason to
>> change the salt when we change the password?
> Using just the username would be bad (someone could build a
> password -> encrypted-password codebook for specially targeted
> users, e.g., "root"), but username + fully-qualified server name
> seems ok as far as I can see. Counter is probably ok as long
> as you avoid correlations between servers (e.g., if all servers
> started at counter zero, that'd be bad).
To be pedantic, the salt is not truly random either, which no one has
pointed out. It's normally obtained from the time of day, so there's
some bias towards whatever time the administrator normally creates
accounts.
> However, in general I think the Unix model of storing a hashed
> password in a publicly-readable location is a really bad idea
> these days. Experience is that many passwords have very low
> entropy, so low that no amount of salting can save you.
The current practice is to move the hashed password to a non-public
file, and throw the salt away after calculating the hash. If you
absolutely need to have them, that's probably the way to go.
As to discarding the salt, remember you can still verify the hash by
trying every possible salt value (a worst case of 4096 tries for
crypt(3), for example).
> Passwords are basically an obsolete technology, unsafe at any
> speed. I'd recommend that you consider the alternatives to
> passwords, and the risks of passwords, very carefully before
> deploying any new systems that use passwords for authentication.
Unfortunately, they're a necessary evil at this point in time. Even
systems that use a stronger form of authentication often protect that
with a password.
I would, however, avoid crypt(3) like the bubonic plague. One of the
MD5-style hashes (perhaps even modified to use SHA) is better, and
doesn't limit you to eight-character passwords.
--
Matt Gauthier <[EMAIL PROTECTED]>
------------------------------
Date: Tue, 22 Aug 2000 01:14:11 +0100
From: Richard Heathfield <[EMAIL PROTECTED]>
Crossposted-To: comp.lang.c
Subject: Re: blowfish problem
"Trevor L. Jackson, III" wrote:
>
> "Douglas A. Gwyn" wrote:
>
> > "Trevor L. Jackson, III" wrote:
> > > > > out = malloc((inLen * 2 + 1) * sizeof(char));
> > > > > /* a char could be more than 1 byte */
> > > > No, a char is always one byte in C.
> > > No it's not.
> >
> > I am unaware of *any* C implementation for which sizeof(char)
> > is other than 1.
>
> Ditto.
>
> > Within the context of defining what C is,
> > a "byte" is the unit of addressability (as you said) of at
> > least 8 bits, and type "char" is required to occupy precisely
> > one byte.
>
> A machine byte I presume, rather than a contiguous region of 8 bits (octet).
No, he's talking about a C byte - i.e. a byte as defined by the ISO C
Standard:
3.6
1 byte
addressable unit of data storage large enough to hold any member of the
basic character set of the execution environment
2 NOTE 1 It is possible to express the address of each individual byte
of an object uniquely.
3 NOTE 2 A byte is composed of a contiguous sequence of bits, the number
of which is implementation-defined. The least significant bit is called
the low-order bit; the most significant bit is called the high-order bit.
> Is it your position that when char is 16 bits, and sizeof(char)==1, the
> language is not C?
If CHAR_BIT is 16, then bytes are 16 bits wide and a byte is still big
enough to hold a char.
>
> The only claim I made was that a char is not always a byte, meaning it can be
> more.
Not according to the document which defines the C language.
>
> There was a freeware compiler around 6-7 years ago that used Unicode
> characters. It used 16 bits as the unit of allocation. I guess for some
> cases it was easier to eliminate (bad) assumptions regarding the sizes of ints
> than it was to modify all of the character and string handling for wide
> chars. Since it consistently supports the assumptions C makes re sizes, I
> consider it to be C.
Having 16-bit bytes would not be enough to disqualify it from being C.
16-bit bytes are quite legal and possible in C.
--
Richard Heathfield
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
65 K&R Answers: http://users.powernet.co.uk/eton/kandr2/index.html (32 to go)
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: 1-time pad is not secure...
Reply-To: [EMAIL PROTECTED]
Date: Tue, 22 Aug 2000 00:06:33 GMT
Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
: Tim Tyler wrote:
:> It amuses me to think that some people feel they can completely dismiss
:> the idea that the universe is deterministic on the basis of our current
:> knowledge of physics.
: Excuse me, but (1) you have confused (non)determinism with fundamental
: randomness [...]
*Perhaps*. AFAIK, in common usage, these terms are very closely equivalent.
My dictionary also has: "determinism (n): theory that human action is
settled by forces independent of will".
This is a confusing definition in the context of scientific discussion.
Is "will" proposed as a causal physical force not currently recognised by
science? What if "will" is itself determined by physical factors?
I'm *not* using the word "determinism" in a manner that relates
specifically to human action. I'm using it in the sense that
effects are "determined" uniquely by causes in a predictable fashion.
In other words, I am using "deterministic" as a synonym for "non-random"
or "predictable" - and "indeterministic" as a synonym for "containing at
least some random elements", or "being predictable only in a statistical
manner".
I hope this makes my intended meaning clear. If you are trying to draw
distinctions between these terms, it might help to explain what you think
the differences are, so that we are not at cross purposes over terminology.
--
__________ http://alife.co.uk/ http://mandala.co.uk/
|im |yler [EMAIL PROTECTED] http://hex.org.uk/ http://atoms.org.uk/
------------------------------
From: Future Beacon <[EMAIL PROTECTED]>
Crossposted-To: sci.math,sci.physics
Subject: Re: My unprovability madness.
Date: Mon, 21 Aug 2000 20:47:14 -0400
On Mon, 21 Aug 2000, Paul Lutus wrote:
> > If you're interested, the message below was selectively answered to
> > make it appear that I disagree with Goedel's theorem.
> >
>
> Simply disagreeing with Gödel's Theorem carries about as much weight as
> disagreeing with a woman. You would need to have a technical basis to
> disagree with it, something no one else has thought of. Good luck.
I have to say it again. I did not say that his theorem is
incorrect, and I have never said that it is. It is a correct deduction
from the PM starting point. Please!
Jim Trek
Future Beacon Technology
http://eznet.net/~progress
[EMAIL PROTECTED]
------------------------------
From: Eric Smith <[EMAIL PROTECTED]>
Crossposted-To: comp.lang.c,alt.folklore.computers
Subject: Re: Bytes, octets, chars, and characters
Date: 21 Aug 2000 18:16:59 -0700
[not sure about this mess of attributions]
> >>>>> "Richard" == Richard Bos <[EMAIL PROTECTED]> writes:
>
> Richard> [EMAIL PROTECTED] (John Savard) wrote:
> >> However, in the past, it had been customary to refer to a six-bit area
> >> in a computer's memory, where such an area was the span of memory
> >> occupied by a character of a text, as a character.
> >>
> >> The term byte did not come into general use until computers stored
> >> characters in 8-bit areas.
>
> Richard> This is completely wrong. You would do well to read, say, the Jargon
> Richard> File.
Jim Thomas <[EMAIL PROTECTED]> writes:
> Presumably you are referring to the introduction of "byte". 1401 memory
> sizes WERE advertised in terms of "characters", and 1620 memory sizes WERE
> advertised in terms of "digits".
The term "byte" was coined by the IBM Stretch (7030) architects. On a
Stretch, a byte was of variable size from 1 to 8 bits. Although the
Stretch software did use an 8-bit character (but not ASCII or EBCDIC),
there was no special hardware or microcode requirement that this be the case
(unlike the System/360), and this was not the sole motivation for the term
"byte".
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Stream Cipher/PRBG idea.
Date: Tue, 22 Aug 2000 02:19:57 GMT
almis wrote:
> David A. Wagner wrote:
>> This is a four-loop Vigenere system, with periods 211, 223, 227, and
>> 229. It is well-known that such a system can be cracked using on the
>> order of 211+223+227+229 bits of known keystream, using efficient
>> techniques (e.g., linear algebra). Consequently, the proposed system
>> should be considered extremely insecure.
While I do know that it's a variant on a Vigenere cipher, I don't know
how to use O(the *sum* of the rotor lengths) bits of output to crack it.
How do I use linear algebra to break this? Keep in mind I'm not really
a crypto expert. If this is a well-known problem, where on the web
might I find a description/solution?
Using (the *product* of the rotor lengths) bits of output, sure, it's
trivial... Break it up into 211 segments of length (223*227*229); XOR each
segment with the first: this gives the entire first rotor, or the
bitflip of it. The other rotors can be calculated similarly.
Also, using O(X) bits of keystream, where X is a constant, isn't
precisely meaningful, because that could mean 2X bits, 3X bits... Do
you mean *precisely* 211+223+227+229 bits, or something else?
> OK Ben! It's your turn.
> See if you can explain to Dave why it's true the number of unknowns is
> the sum of the lengths of the rotors, the length of the recovered
> composite key, necessary for a unique solution, is of the order of the
> rank of the problem matrix.
I know each bit of each rotor is a separate unknown, but I'm not sure
what you mean by "the length of the recovered composite key," and I'm
not certain what you mean by "the order of the rank of the problem
matrix."
"The recovered composite key" could mean the keystream seen so far, or
possibly those bits of N that have been calculated so far, or something
else entirely.
> (You might also mention that if only one rotor is used, and the
> bitstring is cryptographically strong... ya got a OTP.)
>
> BTW Your K is useful for a completely different reason.
Changing K lets one start at various offsets of the 211*223*227*229 bit
pad. What other ways do you see it as being useful?
--
"There's a mathematical reason not to trust Christians... The Buddhists
believe that human lives repeat. The atheists believe that human lives
terminate. That means that the Christians must believe that humans are
irrational."
- Matt Katinas
"Not necessarily... they could think that humans are imaginary."
- Rob Pease, in response to the above
"Of course Christians think humans are irrational: They believe humans
are transcendental, and all transcendentals are irrational. I suppose
that all we can be certain of is that humans are complex."
- Me, in response to the above
------------------------------
From: Herry Koh <[EMAIL PROTECTED]>
Subject: PKI
Date: Tue, 22 Aug 2000 10:41:48 +0800
Hi,
can anybody point me to some references on the web regarding PKI. I
would like to learn more about it.
Thank you,
Herry.
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: New algorithm for the cipher contest
Date: Tue, 22 Aug 2000 02:33:16 GMT
In article <8ns93e$fr3$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] wrote:
>
> It looks like you are doing multiplications modulo 2^64, may I suggest
> this neato thing called differential cryptanalysis? Since your
> multiplication doesn't create a group structure you don't get the nice
> decorrelation you would have otherwise. Also there are many weak keys
> per round (i.e. even keys). Consider any key with the lower 'n' bits
> set to zero ... there are sum from i=1 to 63 (2^64) over 2^i, weak
> keys... or just over 2^63... Well this is per round. If your cipher
> only uses 8 32-bit words as the key, then it's slightly more
>
> I suggest you could make neat differentials such as toggling the
> middle bit which would affect the upper 32 bits in the first round,
> then only the lower 32 bits in the next round (with some probability)
> etc... this could be made into a four round differential.
>
> I can't think of a specific attack for it yet, but I bet others
could :)
>
> Back to the drawing board :)
The K argument passed to the CipherBlock function is a subkey derived
from the primary key S (called "secret numbers" in the documentation).
S is a vector with a variable number of 64-bit integers. Usually, S
contains 2 or 3 integers (128 or 192 bits). It is the key that the user
should keep, and it provides the algorithm's strength against
brute-force attack.
K is a fixed length vector (8 integers = 512 bits).
The "MakeKey" function is used to create K from S. In this function I
try to construct K as random integer sequence based on S.
Examining "MakeKey" you will notice that is hard to find S that gives
an specific K. Note the similarity between "MakeKey" and a stream
cipher with S the key, K the stream and "FCipher" function the RNG.
Statistical tests of MakeKey produced K's as random bit sequences, for
all S values I could test.
Also, at the end of "MakeKey" the multiplicative elements K[0]..K[3]
are forced to be odd.
Weak K values are very improbable but not impossible. So, based on your
observations, I will improve MakeKey to test and discard non-random K
values. I think discarding is not a problem, since the K keyspace is
usually much larger than the S keyspace.
About differential cryptanalysis, I think the combination of
multiplication (even mod 2^64) with bit-reversal is a good defense. I
will study this deeply and post my results.
Thank you Tom
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Crossposted-To: comp.lang.c,alt.folklore.computers
Subject: Re: Bytes, octets, chars, and characters
Date: Tue, 22 Aug 2000 02:51:36 GMT
On Mon, 21 Aug 2000 14:07:35 GMT, [EMAIL PROTECTED] (Richard
Bos) wrote, in part:
>[EMAIL PROTECTED] (John Savard) wrote:
>> However, in the past, it had been customary to refer to a six-bit area
>> in a computer's memory, where such an area was the span of memory
>> occupied by a character of a text, as a character.
>> The term byte did not come into general use until computers stored
>> characters in 8-bit areas.
>This is completely wrong. You would do well to read, say, the Jargon
>File.
About the only suitable reply I can think of is something along the
lines of:
"I was *there* back then, you young whippersnapper!"
It is indeed true that prior to 1964, many computers only handled
upper-case characters and had word lengths which were multiples of six
bits. The capacities of core memories, and of disk and drum units,
were often given in terms of characters.
The density of a 7-track magnetic tape was expressed in "bpi", but
that stood for *bits* per inch (per track).
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: The DeCSS ruling
Date: Tue, 22 Aug 2000 02:44:41 GMT
On Mon, 21 Aug 2000 18:44:00 -0400, Jim Steuert <[EMAIL PROTECTED]>
wrote, in part:
> Where I think the judge is mistaken is his
>lack of distinction
>between publishing the specific keys to a
>specific bank vault, versus
>reverse-engineering a publicly visible vault
>mechanism (software).
I would think that the judge made that distinction, because the DeCSS
software, according to the publicity I have seen about it, included
specific keys to decrypt DVD disks as they exist, and was not merely
based on the algorithms involved in the Content Scrambling System.
So there was an actual combination involved.
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************