Cryptography-Digest Digest #506

1999-01-02 Thread Digestifier

Cryptography-Digest Digest #506, Volume #10   Thu, 4 Nov 99 14:13:02 EST

Contents:
  Re: questions about twofish (Tom St Denis)
  Re: Re: Compression: A ? for David Scott (CoyoteRed)
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column 
([EMAIL PROTECTED])
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column ("Trevor 
Jackson, III")
  Re: A new thread disappeared ? (SCOTT19U.ZIP_GUY)
  Re: relative security of hashes, md5 vs. snefru (Ian Wehrman)
  Re: Steganography Academy (wtshaw)
  Re: crypto and dependencies (wtshaw)
  Re: A new thread disappeared ? (wtshaw)
  Re: The Code Book (Derrick Schneider)
  Re: The Beale Mystery ("Charles Brockman")
  Re: What is the deal with passwords? (Tom St Denis)



From: Tom St Denis [EMAIL PROTECTED]
Subject: Re: questions about twofish
Date: Thu, 04 Nov 1999 13:05:31 GMT

In article [EMAIL PROTECTED],
  [EMAIL PROTECTED] wrote:
> In counterpane's optimized twofish, there are different options you
> can choose during compilation like zero, partial, or full key.
> First,
>  What are the advantage/dis-advantages.
>  Do they affect security, or is it just a memory/speed trade-off.

I would suggest you read the paper.


> Second,
>  What's the difference between using the 192 bit key option, and using
> the 256 bit key option with 64 bits zeroized (both still have same key
> space).

See above.

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

--

From: [EMAIL PROTECTED] (CoyoteRed)
Subject: Re: Re: Compression: A ? for David Scott
Date: Thu, 04 Nov 1999 13:34:55 GMT
Reply-To: this news group unless otherwise instructed!

On Wed, 3 Nov 1999 16:04:26 GMT, Tim Tyler [EMAIL PROTECTED] wrote:

> Not random.  Structured.  (Assuming some expansion takes place).

If we have structure, where does this structure come from?  Is it
inherent to the algorithm or is it added in?  It would have to be
inherent to the decompression algorithm and not added in, because the
attacker could use anything added in to his advantage.

Pardon me if this has already been answered, but I couldn't follow
some of the other threads, it was just over my head.

-- 
CoyoteRed
CoyoteRed at bigfoot dot com
http://go.to/CoyoteRed
PGP key ID: 0xA60C12D1 at ldap://certserver.pgp.com


--

From: [EMAIL PROTECTED]
Subject: Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column
Date: Thu, 04 Nov 1999 13:43:36 GMT



I think we are talking about different degrees of agility. You are
concerned only about the agility to define a set of acceptable ciphers.
I want something more: the agility to be able to modify the ciphers in
the future.

In article [EMAIL PROTECTED],
  "Trevor Jackson, III" [EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] wrote:
>> Let's see: Normally, I compose the message and encrypt it off-line,
>> then I connect to Internet and send all my mail in batch mode.
>
> Good, that means you already have obtained the public keys of all of
> the recipients of your messages.  When you got their public keys you
> also got their supported cipher list.

If I want to allow my addressees to change their choice of cipher, then
I would have to download their list every time I want to send them a
message.

>> What if none of the ciphers in one addressee's set is trusted by me?
>
> Then you cannot communicate.

If there is a code server then you can. Maybe I trust rot13. I encrypt
my message with it and send it to you. Your email client does not know
anything about rot13 - it downloads its code and successfully decrypts
my message. It is my message and if I chose a bad cipher it is my
problem; you will always be able to read any message you receive.

> This problem is not created by having multiple ciphers supported at
> each end.  This problem is created by having users insist upon
> particular ciphers.  Note that if you insist upon ciphers { A1, A2, A3 }
> and your respondent insists upon { B1, B2 }, then you can still
> communicate securely in both your opinions by using two ciphers, one
> of yours and one of his.  Both of you will believe that the doubly
> enciphered messages are secure.
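[The two-layer idea can be sketched in Python; the XOR "ciphers" below are hypothetical stand-ins for real ones and provide no security by themselves:]

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Stand-in for a real cipher; XOR with a repeating key is its own inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def double_encipher(msg: bytes, my_key: bytes, his_key: bytes) -> bytes:
    # Encrypt with one of my ciphers, then with one of his.
    return xor_cipher(xor_cipher(msg, my_key), his_key)

def double_decipher(ct: bytes, my_key: bytes, his_key: bytes) -> bytes:
    # Undo the layers in reverse order.
    return xor_cipher(xor_cipher(ct, his_key), my_key)
```

[Each party need trust only one of the two layers for the whole to be acceptable to both.]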

This is a good idea and certainly an improvement over the original
"join the cipher lists" idea.

>> What if my addressee does not have public keys?
>
> Then, as now, you would need to obtain a secret key from him.  With the
> secret key comes the set of his ciphers.

Again, I want to be able to change ciphers at a moment's notice.

>> What if my addressee uses a different PK protocol?
>
> Then you can't talk anyway.

Why not think about how this could be achieved? Maybe PK systems can be
changed dynamically. How about having a centralized server cross-
certify users?

>> What if I want to send one message to 100 destinations?
>
> You have to encipher multiple times anyway because using the same key
> for broadcast messages creates an opportunity for attack.  I'm not a
> protocol expert, barely even a 

Cryptography-Digest Digest #507


Cryptography-Digest Digest #507, Volume #10   Thu, 4 Nov 99 18:13:03 EST

Contents:
  Re: questions about twofish ("Adam Durana")
  Re: Interesting LFSR (Medical Electronics Lab)
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column 
([EMAIL PROTECTED])
  Bit/byte orientation in SHA-1 (JohnSmith)
  Re: Build your own one-on-one compressor (Mok-Kong Shen)
  Re: Q: Removal of bias (Mok-Kong Shen)
  Re: Build your own one-on-one compressor (SCOTT19U.ZIP_GUY)
  Re: Build your own one-on-one compressor (SCOTT19U.ZIP_GUY)
  Data Scrambling references ("Larry Mackey")



From: "Adam Durana" [EMAIL PROTECTED]
Subject: Re: questions about twofish
Date: Thu, 4 Nov 1999 12:56:22 -0500

> In counterpane's optimized twofish, there are different options you can
> choose during compilation like zero, partial, or full key.
> First,
>  What are the advantage/dis-advantages.
>  Do they affect security, or is it just a memory/speed trade-off.

From what I understand Twofish is able to spend more time on the key
generating process and in return you get faster encryption times.  In
different cases you would want to spend more time on the key generating
process, e.g., you are encrypting several big files with the same key.  In
other cases you would want to spend no time on generating keys, e.g., the
key you are using changes a lot and you are encrypting small chunks of data.
I would guess, 'zero' means spend no time on generating keys, 'partial'
means spend some time, and 'full' means generate all the keys.
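[That trade-off can be illustrated with a sketch that is NOT Twofish, just the same pattern: "full" keying pays for a key-dependent table at setup time, "zero" keying recomputes entries during use, and both give identical answers:]

```python
import hashlib

class KeyedSbox:
    """Illustrates the full/zero keying trade-off; not Twofish itself.

    mode="full": build the 256-entry key-dependent table once, at setup.
    mode="zero": cheap setup, but every lookup re-derives its entry.
    """

    def __init__(self, key: bytes, mode: str = "full"):
        self.key = key
        self.table = [self._derive(i) for i in range(256)] if mode == "full" else None

    def _derive(self, x: int) -> int:
        # Hypothetical key-dependent byte substitution (stand-in only).
        return hashlib.sha256(self.key + bytes([x])).digest()[0]

    def sub(self, x: int) -> int:
        # Identical results either way; only *when* the work happens differs.
        return self.table[x] if self.table is not None else self._derive(x)
```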


> Second,
>  What's the difference between using the 192 bit key option, and using
> the 256 bit key option with 64 bits zeroized (both still have same key
> space).

I really have no idea.  The only problem I can see is with brute force:
an attacker could order the search so that the 2^192 keys of the form
"192 key bits + 64 zeroed bits" are tested first, continuing on to the
rest of the 256-bit space only if that fails.  The ciphertext would look
as if it were encrypted with a full 256-bit key, but since the attacker
knows the workings of the algorithm, he could test those 2^192
zero-padded keys first, just in case someone did encrypt with 192 bits +
64 zeroed bits; if the key is found there, he only had to test 2^192
keys and not 2^256.

But you probably know more about Twofish than I do.  I was hoping you would
get a response from someone who knew for sure, but seeing how you did not
get a good response (Hi Tom!), I decided to take a chance.  I hope it helps.

-- Adam



--

From: Medical Electronics Lab [EMAIL PROTECTED]
Subject: Re: Interesting LFSR
Date: Thu, 04 Nov 1999 12:14:58 -0600

David Wagner wrote:
>
> Why not just run it backward, keeping track of the _set_ of all possible
> states?  If you implement it, I strongly suspect you will find that this
> set usually stays very small.

The number of sets should grow like the (number of rounds) * (number
of duplicate entries)/128. If there are only a few duplicates, it
should run backwards easily.
  
> (Sometimes some states have multiple predecessors, which grows the set,
> but also some states have no predecessors, which shrinks the set, and the
> two effects are expected to cancel each other out almost exactly.  I'll
> omit the mathematical calculations.)
>
> Worth a try...

Definitely!  If there's only 128 rounds, you'll only expect to see
(number of duplicates) sets, which should be pretty easy to keep
track of.
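[The set-tracking mechanics can be sketched on a toy update function in Python; the function and the 64-element state space are invented purely for illustration:]

```python
def step(s: int) -> int:
    # Toy non-injective update: s and (64 - s) collide, so some states
    # share a successor while other states have no predecessor at all.
    return (s * s + 1) % 64

# Brute-force the predecessor map once; the space is tiny.
preds = {t: [s for s in range(64) if step(s) == t] for t in range(64)}

def run_backward(final_state: int, rounds: int) -> set:
    # Track the set of all states that could have led to final_state.
    candidates = {final_state}
    for _ in range(rounds):
        candidates = {p for c in candidates for p in preds[c]}
    return candidates
```

[On this toy space the candidate set tends to stay manageable, matching the cancellation argument quoted above: states with several predecessors are balanced by states with none.]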

Patience, persistence, truth,
Dr. mike

--

From: [EMAIL PROTECTED]
Subject: Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column
Date: Thu, 04 Nov 1999 18:45:46 GMT

In article [EMAIL PROTECTED],
  "Trevor Jackson, III" [EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] wrote:
>> Maybe I trust rot13. I encrypt my message with it and send it to you.
>> Your email client does not know anything about rot13 - it downloads
>> its code and successfully decrypts my message. It is my message and
>> if I chose a bad cipher it is my problem; you will always be able to
>> read any message you receive.
>
> I disagree.  It is OUR conversation.  Since I expect you to refer,
> implicitly and explicitly, to the contents of the messages I send,
> which contents I intend to keep private, I have a vested interest in
> the security of the messages you send.

You are right.

My goal was to avoid the need for a priori negotiation at the email
level. I don't see a very good solution.
(...)
>> In a networked world third parties are a fact of life.
>
> Sure.  As a key repository.  Not as a repository for security
> implementations.  The two aren't anywhere near comparable.

Not really. The difference between code and data is contextual. In
LISP both code and data have exactly the same form.

Hardly.  Representation yes; form, meaning structure, no.

Consider that given something that claims to be a 

Cryptography-Digest Digest #509


Cryptography-Digest Digest #509, Volume #10   Fri, 5 Nov 99 00:13:04 EST

Contents:
  Re: How protect HDisk against Customs when entering Great Britain  (That guy...from 
that show!)
  Re: Data Scrambling references ("Trevor Jackson, III")
  Re: Data Scrambling references ("Trevor Jackson, III")
  Re: Q: Removal of bias (Scott Nelson)
  Re: Q: Removal of bias - reply (vic)
  Re: Q: Removal of bias (Scott Nelson)
  Re: Doesn't Bruce Schneier practice what he preaches? (K. Y. Lemonair)
  Re: How protect HDisk against Customs when entering Great Britain (Shaitan)
  Re: Incompatible algorithms (Max Polk)



From: That guy...from that show! [EMAIL PROTECTED]
Crossposted-To: 
alt.security.pgp,comp.security.pgp.discuss,comp.security.pgp.tech,alt.privacy,alt.privacy.anon-server
Subject: Re: How protect HDisk against Customs when entering Great Britain 
Date: 5 Nov 1999 00:54:34 -
Reply-To: [EMAIL PROTECTED]

On Thu, 04 Nov 1999 23:23:53 GMT, [EMAIL PROTECTED] (Ike R.
Malony) wrote:
>pgp651 [EMAIL PROTECTED] wrote:
>
>> Oops! I was going to make a suggestion, but this is cross-posted to too
>> many groups.

You idiot, you posted to them anyway.   Argh... AOLers


--

Date: Thu, 04 Nov 1999 20:18:03 -0500
From: "Trevor Jackson, III" [EMAIL PROTECTED]
Subject: Re: Data Scrambling references

Larry Mackey wrote:

> Hi,
>
> I have a project where we need to scramble (and unscramble) a parallel data
> stream such that when the data stream is serialized, the stream is a fairly
> symmetrical set of ones and zeros.

Who do you need protection against?  What is your "threat model" ?

> The data does not need to be compressed or encrypted, rather we need to
> randomize the data on a bit level.

If you need the scrambled data to pass statistical tests you can simply
use an RNG which passes the requisite tests and mask your data stream with
the stream from the RNG using exclusive-or, addition, or subtraction.  This
might work if you are using a bit-phase sensitive recording method.  But it
provides zero security.

If you need cryptographic security you'll need a cipher.  A side effect of a
good cipher is that the output is statistically equivalent to noise so you get
balanced sets of bitstrings of all lengths including one.
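[A minimal Python sketch of the RNG-masking approach; the PRNG here is a stand-in for whatever generator passes your tests, and as noted above this gives zero security:]

```python
import random

def scramble(data: bytes, seed: int) -> bytes:
    # XOR the stream with a seeded PRNG.  Running it again with the same
    # seed unscrambles, since XOR is its own inverse.
    rng = random.Random(seed)
    return bytes(b ^ rng.randrange(256) for b in data)
```

[The same routine serves both directions, which fits the single-direction requirement: only the seed must be shared.]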

> I am trying to find a scheme that encodes and decodes the data words in as
> uncomplicated a manner as possible.  This is presently a bi-directional path
> but we would like to be able to do this in a single direction only if
> possible.  Since all the data in the stream needs to be randomized, the
> decoding process information needs to be extracted from the data stream or
> decoding logic.
>
> Does anyone have any suggestions, pointers to references, thoughts or
> ideas??
> We have been doing a number of web searches and have not found any
> references that go into enough detail to understand the process enough to
> replicate it.
>
> It appears that high speed data links 100 Mbit/sec and above use this
> approach but we are unable to find a detailed description anywhere of the
> logic, or process.  We don't have the $$ to buy all the various upper level
> reference documents called out to determine if they have the information we
> are looking for.




--

Date: Thu, 04 Nov 1999 20:20:40 -0500
From: "Trevor Jackson, III" [EMAIL PROTECTED]
Subject: Re: Data Scrambling references

Larry Mackey wrote:

> Thanks for the reply.
>
> Unfortunately, we have to do the "scrambling" while the data is in parallel
> format before the serializer.  I agree that implementing an LFSR at the
> serial stream would be the norm for this type of application but this
> application won't allow us to do that.

Sure it will.  You can use an LFSR that is as wide as your data register and
update all of the bits in the register once every clock cycle.  No shifts
required.

How wide is your data when it is in parallel format?
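[One way to sketch that register-wide update in Python: generate a whole 16-bit word of scrambler bits per "clock" and XOR it with the parallel data word. The taps below are illustrative, not a recommendation:]

```python
def lfsr16_word(state: int, taps=(16, 14, 13, 11)) -> tuple:
    # Produce 16 keystream bits (one parallel word) plus the next state.
    word = 0
    for _ in range(16):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & 0xFFFF
        word = ((word << 1) | bit) & 0xFFFF
    return word, state

def scramble_words(words, seed=0xACE1):
    # XOR each 16-bit data word with one word of keystream; applying the
    # same routine with the same seed descrambles.
    state, out = seed, []
    for w in words:
        k, state = lfsr16_word(state)
        out.append(w ^ k)
    return out
```

[In hardware the 16 inner iterations collapse into one wide XOR network per clock; the sketch only shows that the word-at-a-time interface is equivalent to the bit-serial scrambler.]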



 Regards
 Larry
 John Savard [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]...
  "Larry Mackey" [EMAIL PROTECTED] wrote, in part:
 
  I have a project where we need to scramble (and unscramble) a parallel
 data
  stream such that when the data stream is serialized, the stream is a
 fairly
  symetrical set of ones and zeros.
 
  The data does not need to be compressed or encrypted rather we need to
  randomize the data on a bit level.
 
  Well, modems do this by using a simple LFSR, and XORing the bit stream
  with its output. This "scrambling", as it is called, prevents long
  runs of 1s and 0s, it is hoped.
 
  More elaborate techniques are used when the ratio of ones and zeroes
  must be more strictly controlled. Thus, data written to disk drives is
  coded using one or more forms of GCR (group-coded recording), a
  technique whose name was introduced by IBM when it came out with its
  6250bpi tape drives.
 
  Using a 4 of 8 code to represent 6 bits in 8 bits, for example, one
  could keep the number 

Cryptography-Digest Digest #510


Cryptography-Digest Digest #510, Volume #10   Fri, 5 Nov 99 03:13:03 EST

Contents:
  Re: The Beale Mystery (korejwa)
  Re: Build your own one-on-one compressor (Don Taylor)
  Re: DVD Encryption Broken by Norwegians! (Tom St Denis)
  Re: Steganography Academy (korejwa)
  Re: Some humble thoughts on block chaining
  Nova program on cryptanalysis -- also cipher contest (Jim Gillogly)
  Re: Q: Removal of bias ([EMAIL PROTECTED])
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column 
([EMAIL PROTECTED])
  Re: How protect HDisk against Customs when entering Great Britain  (pgp651)



From: korejwa [EMAIL PROTECTED]
Subject: Re: The Beale Mystery
Date: Thu, 04 Nov 1999 22:46:49 -0500
Reply-To: [EMAIL PROTECTED]



Ertborbob wrote:

> Can anyone please post some information about the pirate named Beale who
> supposedly has treasure buried in Virginia or direct me to some information?
> Thank You


You mean Thomas Beale was a pirate?  Where did you hear about that?  I never
heard that he was a pirate - it almost sounds like someone is building on the
legend.  Please tell me your source.

I have the three ciphertexts, but I assume you already have those.  The Beale
papers are almost certainly a fraud.

-korejwa



--

From: Don Taylor [EMAIL PROTECTED]
Subject: Re: Build your own one-on-one compressor
Crossposted-To: comp.compression
Date: 4 Nov 1999 22:10:28 -0600

In comp.compression SCOTT19U.ZIP_GUY [EMAIL PROTECTED] wrote:
> In article 3820e4ad$[EMAIL PROTECTED], Don Taylor [EMAIL PROTECTED] wrote:
...
Consider the following dictionary

a   1
apple   2
banana  3
house   4
...

Now, I claim that the dictionary contains all the words that might be
                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
used in messages, we just mandate that it contains the vocabulary that
^^^^^^^^^^^^^^^^
people will use for messages.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...
It is guaranteed that it is 1-1.
...
All the questions about whether the message is maintained by any number
of translations back and forth are settled in a single stroke.
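[The quoted scheme can be sketched in Python with a toy four-word dictionary standing in for the real vocabulary, using two-byte tokens as in the 2^16-symbol idea:]

```python
# Hypothetical four-word dictionary; in the scheme above it IS the vocabulary.
DICTIONARY = ["a", "apple", "banana", "house"]
CODE = {w: i for i, w in enumerate(DICTIONARY)}

def compress(message: str) -> bytes:
    # Every word must be in the dictionary; each becomes a 2-byte token.
    return b"".join(CODE[w].to_bytes(2, "big") for w in message.split())

def decompress(blob: bytes) -> str:
    # Any even-length string of in-range tokens decodes to a word sequence.
    return " ".join(DICTIONARY[int.from_bytes(blob[i:i + 2], "big")]
                    for i in range(0, len(blob), 2))
```

[Recompressing a decompressed token stream gives back exactly the same bytes, which is the 1-1 property at issue.]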

  IF you ordered these 16 bit words so that in hex 00 00 was in general
the most common occurring token, then 00 01 and 01 00 and 01 01 were
the next most common tokens, where you increase and use the next available
8 bit token to build this table and so on, so that the table is ordered
based on some standard English text.  Build your compressor to convert the
English-only words to something like this.  Then use a FIXED HUFFMAN TABLE,
not my adaptive huffman table, as the starting table.

All this seems to be related to just manipulating the representation of
the codes.  I agree.  As I said, it seems clear that such things can be
done in a way to make this work, and probably in lots of ways to make
this work.  But it's just representation.  If someone can find a terrific
way to do this then all the better.

...
 The main disadvantage is that only 2^16 words can be used, but for most
messages this should be ok. Since even in WWII the Navajo code talkers had
to use concepts in the language for words that were not in the language, you
may have to write a program that converts words not in the language to
strings of letters. This would take away from some of the 2^16 symbols. It
would also mean people who, like me, can't spell worht a shit will be more
apt to have longer messages unless some sort of specail spell checker is
built in.

Three things.  First, if you like, make it 2^24 words.  All the claims
still hold.

Second, I tried to be clear.  Above I highlighted with "^^^"s the explicit
claim.  The dictionary contains what you can send.  That's it.  If there is
no code for 'specail' then there is no translation of it and there is no
claim about any of this.  As I said, the dictionary IS the vocabulary.
I'm sorry if I was not sufficiently clear in this.

Third, you have not specified exactly how you might implement this
extension to allow words outside the dictionary to be encoded but I
think that doing this requires some degree of care or it will destroy
exactly the 1-1 property that started this whole exchange.

It seems that if you JUST allow for codes to spell out words that are
not in the dictionary then you would send, using one of your example
misspellings

W-code O-code R-code H-code T-code  (to send your 'worht')

and the person at the far end would then be able to send

W-code O-code R-code T-code H-code

back and have it potentially retranslated into

WORTH-code  (since the word WORTH is in the dictionary)

and sent back and thus your 1-1 property has been violated.

Now I believe after having thought about this fairly carefully for a while
that I can demonstrate that it is possible to avoid such problems but it