Cryptography-Digest Digest #907, Volume #9 Sun, 18 Jul 99 17:13:04 EDT
Contents:
Re: Why public key in PGP (Paul Crowley)
Re: How Big is a Byte? (was: New Encryption Product!) (wtshaw)
Re: Compression and security (was: Re: How to crack monoalphabetic ciphers)
Q: A FIFO hash function? ("Kasper Pedersen")
Re: obliterating written passwords (wtshaw)
Re: Compression and security (was: Re: How to crack monoalphabetic ciphers)
Re: How Big is a Byte? (was: New Encryption Product!) (Giles Todd)
Re: How Big is a Byte? (was: New Encryption Product!) (wtshaw)
Re: Compression and security (was: Re: How to crack monoalphabetic ciphers)
(SCOTT19U.ZIP_GUY)
Re: Xor Redundancies ([EMAIL PROTECTED])
Re: Xor Redundancies ([EMAIL PROTECTED])
Re: Algorithm or Protocol? (David A Molnar)
IEEE P1363 August Meeting Announcement ([EMAIL PROTECTED])
Re: Why public key in PGP ([EMAIL PROTECTED])
Re: obliterating written passwords ([EMAIL PROTECTED])
----------------------------------------------------------------------------
From: Paul Crowley <[EMAIL PROTECTED]>
Subject: Re: Why public key in PGP
Date: 18 Jul 1999 10:25:25 +0100
David A Molnar <[EMAIL PROTECTED]> writes:
> Do you mean using the same PRNG on each side and just iterating
> state over the course of several conversations? Then you're assuming
> that the PRNG isn't predictable "backwards in time." Probably a valid
> assumption, but how would it be any more valid than assuming your favorite
> public key system isn't compromised? You'd have to exhibit the PRNG and
> argue why it's more secure than the public key system.
This suggests an optimisation of protocols which need PFS: when making
contact with another entity, check if you and they have a copy of a
shared secret in your entity caches (exchange hashes to make sure they
match), and use that instead of negotiating a new one with DH or
whatever at the start of your session. Immediately you start using
it, securely wipe it and replace it with a hash of itself.
Of course using the same hash as for the match check would not be
clever.
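In C, that cache-and-ratchet step might look something like the sketch
below. The hash is a toy stand-in (not a cryptographic one), the cache
structure and names are assumed for illustration, and a real implementation
would need a wipe the compiler cannot optimise away.

#include <stdio.h>
#include <string.h>

#define SECRET_LEN 32

/* Toy stand-in for a cryptographic hash -- NOT one; the tag byte gives
   the domain separation mentioned above (match check vs. ratchet). */
static void toy_hash(unsigned char tag, const unsigned char *in,
                     size_t len, unsigned char out[SECRET_LEN])
{
    unsigned long h = 2166136261UL ^ tag;
    size_t i, j;
    for (i = 0; i < SECRET_LEN; i++) {
        for (j = 0; j < len; j++)
            h = (h ^ in[j] ^ (unsigned long)i) * 16777619UL;
        out[i] = (unsigned char)(h >> 8);
    }
}

struct cache_entry {
    char peer[64];
    unsigned char secret[SECRET_LEN];
};

/* Value exchanged with the peer to check both caches hold the same secret. */
static void match_check(const struct cache_entry *e,
                        unsigned char out[SECRET_LEN])
{
    toy_hash(0x01, e->secret, SECRET_LEN, out);
}

/* Take the cached secret as this session's key, then immediately replace
   the stored copy with a (differently tagged) hash of itself. */
static void take_session_key(struct cache_entry *e,
                             unsigned char key[SECRET_LEN])
{
    unsigned char next[SECRET_LEN];
    memcpy(key, e->secret, SECRET_LEN);
    toy_hash(0x02, e->secret, SECRET_LEN, next);
    memcpy(e->secret, next, SECRET_LEN);   /* ratchet forward */
    memset(next, 0, SECRET_LEN);           /* wipe temporary (a real wipe
                                              must resist optimisation) */
}

int main(void)
{
    struct cache_entry e = { "peer@example",
                             "0123456789abcdef0123456789abcdef" };
    unsigned char check[SECRET_LEN], k1[SECRET_LEN], k2[SECRET_LEN];
    match_check(&e, check);        /* compared against the peer's value */
    take_session_key(&e, k1);      /* this session's key */
    take_session_key(&e, k2);      /* next session gets a different one */
    printf("keys differ: %s\n", memcmp(k1, k2, SECRET_LEN) ? "yes" : "no");
    return 0;
}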
--
__
\/ o\ [EMAIL PROTECTED] Got a Linux strategy? \ /
/\__/ Paul Crowley http://www.hedonism.demon.co.uk/paul/ /~\
------------------------------
From: [EMAIL PROTECTED] (wtshaw)
Crossposted-To: alt.folklore.computers
Subject: Re: How Big is a Byte? (was: New Encryption Product!)
Date: Sun, 18 Jul 1999 12:05:27 -0600
In article <[EMAIL PROTECTED]>, "Douglas A. Gwyn"
<[EMAIL PROTECTED]> wrote:
> wtshaw wrote:
> > Come, to think of it, base one is noncomputational as well.
>
> No, base one is the common "tally mark" notation, which does work.
Yes, sometimes useful for counting... higher math? Naw. The fact that one
to any power is still one seems an important restrictive rule somehow.
As an information unit, 1 is not comparable to any other base, since in log
terms the log of one to any other base is zero, and I have trouble dividing
into or by zero myself.
Even so, we have the term *unit*, but please tell me how many units are
in a bit or a trit.
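In log terms, assuming equally likely symbols, a symbol drawn from a
b-symbol alphabet carries log_2 b bits, which is exactly where base one
falls flat:

\[
H(b) = \log_2 b \ \text{bits per symbol:}\qquad
H(1) = 0,\quad H(2) = 1,\quad H(3) = \log_2 3 \approx 1.585 .
\]

So a trit is worth about 1.585 bits, while a base-one symbol carries no
information at all; no finite number of them adds up to a bit.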
--
Encryption means speaking in turns.
------------------------------
From: [EMAIL PROTECTED] ()
Subject: Re: Compression and security (was: Re: How to crack monoalphabetic ciphers)
Date: 18 Jul 99 17:43:56 GMT
[EMAIL PROTECTED] wrote:
: think about this, what about the first bunch of bytes where there is no
: history to guess?
Thank you for raising this point. It is something I forgot to think of
when I discussed the compression issue at length in another reply.
In typical Lempel-Ziv compressors, the first few bytes are expanded to
nine bits, with an extra bit indicating they're literal instead of
pointers to the table of previous strings.
It's possible to do a bit better than that. For example, one could
reproduce the entire input literally up to and including the first time a
byte value is used for the second time, and then switch to using
additional bits to support compression overhead.
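To make the bit-counting concrete, here is a sketch in C of the sort of
token stream described above. The one-bit flag and the 12+4-bit match
format are illustrative assumptions, not the format of any particular
compressor.

#include <stdint.h>
#include <stdio.h>

/* Minimal MSB-first bit writer; buf must start zeroed. */
typedef struct { uint8_t *buf; size_t bitpos; } bitwriter;

static void put_bits(bitwriter *w, uint32_t value, int nbits)
{
    int i;
    for (i = nbits - 1; i >= 0; i--) {
        if ((value >> i) & 1)
            w->buf[w->bitpos >> 3] |= (uint8_t)(0x80 >> (w->bitpos & 7));
        w->bitpos++;
    }
}

/* Literal byte: flag 0 + 8 data bits = 9 bits.  Before any history
   exists, every input byte has to be coded this way. */
static void emit_literal(bitwriter *w, uint8_t c)
{
    put_bits(w, 0, 1);
    put_bits(w, c, 8);
}

/* Back-reference: flag 1 + 12-bit offset + 4-bit length = 17 bits, but
   it can stand for up to 18 input bytes, which is where the compression
   comes from once history has built up. */
static void emit_match(bitwriter *w, unsigned offset, unsigned length)
{
    put_bits(w, 1, 1);
    put_bits(w, offset, 12);
    put_bits(w, length - 3, 4);   /* lengths 3..18 */
}

int main(void)
{
    uint8_t out[64] = {0};
    bitwriter w = { out, 0 };
    emit_literal(&w, 'a');        /* 9 bits: no history yet */
    emit_literal(&w, 'b');        /* 9 bits */
    emit_match(&w, 2, 6);         /* 17 bits for six more bytes "ababab" */
    printf("%lu bits for 8 input bytes\n", (unsigned long)w.bitpos);
    return 0;
}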
Also, since adaptive compression methods do more poorly in the first part
of the text, perhaps bisection - keyed bisection, since one can't
reconstruct by eye the way it is possible in paper-and-pencil operation -
or a transposition cipher ought to be the first encryption step after
using that form of compression.
(And making the encryption of the start of the text dependent on the
encryption of the better-compressed text is another way of dealing with
this, as Mr. Scott has doubtless noted. Once again, he has a valid point.)
John Savard
------------------------------
From: "Kasper Pedersen" <[EMAIL PROTECTED]>
Subject: Q: A FIFO hash function?
Date: Sun, 18 Jul 1999 20:14:41 +0200
Sorry to bother you all..
I am looking for a hashing function with fifo-like properties.
(I am presenting the problem AND a sample solution)
What I desire is something similar to a block averaging function for a
1024-element block:
Such a thing can either be written as
accu:=0;
for i:=n to n+1023 do accu:=accu+value[i]
or, which is what I need, incrementally as the window is slid along the block:
n:=n+1
and instead of resumming the entire block, we can say
accu:=accu-value[n-1]
accu:=accu+value[n+1023];
But what I want is a hash, not an averaging function. It might be something
like
ACC:=(ACC-(MU^1023)*value[now-1024]) mod MO
ACC:=(ACC*MU + value[now]) mod MO
which is essentially a linear congruential generator fed with the input data.
It works because all ops are linear (and thus off-topic of this ng?:-).
Choosing MU for a particular MO will be critical.
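Here is a sketch of that update in C, using 64-bit arithmetic (everything
implicitly mod 2^64); the roughly 96 bits wanted below could be approximated
by running two such accumulators with different multipliers. The MU constant
is an arbitrary odd number, not a recommendation.

#include <stdint.h>
#include <stdio.h>

#define WINDOW 1024
#define MU 0x100000001B3ULL   /* arbitrary odd multiplier -- an assumption */

typedef struct {
    uint64_t acc;              /* the rolling accumulator, mod 2^64 */
    uint64_t mu_pow;           /* MU^(WINDOW-1) mod 2^64 */
    uint8_t  ring[WINDOW];     /* the last WINDOW input values */
    unsigned pos;
} fifo_hash;

static void fifo_init(fifo_hash *h)
{
    int i;
    h->acc = 0;
    h->pos = 0;
    h->mu_pow = 1;
    for (i = 0; i < WINDOW - 1; i++)
        h->mu_pow *= MU;                    /* wraps mod 2^64 */
    for (i = 0; i < WINDOW; i++)
        h->ring[i] = 0;                     /* window starts as all zeros */
}

/* Slide by one element: subtract the oldest value's contribution (it
   carries a factor of MU^(WINDOW-1)), then multiply everything by MU and
   add the new value -- the same two update lines as above. */
static void fifo_slide(fifo_hash *h, uint8_t value_now)
{
    uint8_t oldest = h->ring[h->pos];
    h->acc -= h->mu_pow * oldest;           /* ACC -= MU^1023 * old */
    h->acc  = h->acc * MU + value_now;      /* ACC  = ACC*MU + new  */
    h->ring[h->pos] = value_now;
    h->pos = (h->pos + 1) % WINDOW;
}

int main(void)
{
    fifo_hash h;
    long i;
    fifo_init(&h);
    for (i = 0; i < 5000; i++)
        fifo_slide(&h, (uint8_t)(i & 0xFF));
    printf("%016llx\n", (unsigned long long)h.acc);
    return 0;
}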
The second problem is that I need about 96 bits to keep the probability of
failure insignificant, AND I need to do it quickly.
It is not required to be 'secure' in the cryptographic sense.
So, does anyone have some pointers to hashes like this?
/Kasper Pedersen
------------------------------
From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: obliterating written passwords
Date: Sun, 18 Jul 1999 12:12:36 -0600
In article <[EMAIL PROTECTED]>, "Douglas A. Gwyn"
<[EMAIL PROTECTED]> wrote:
> Lincoln Yeoh wrote:
> > Burn it and flush it down the toilet.
>
> After eating it, as pointed out elsewhere in this thread.
In the field, this may not be possible.
--
Encryption means speaking in turns.
------------------------------
From: [EMAIL PROTECTED] ()
Subject: Re: Compression and security (was: Re: How to crack monoalphabetic ciphers)
Date: 18 Jul 99 17:36:34 GMT
Sundial Services ([EMAIL PROTECTED]) wrote:
: So the question before the house is: does compression make the
: effective-plaintext more predictable (therefore less secure), or less
: predictable (more secure)?
The better the compression algorithm, the less regularity or redundancy is
left in the resulting compressed data stream. Hence, the conventional
wisdom, that compression increases security, is quite sound.
However, you are quite right that a compression algorithm can still leave
some redundancy behind.
Plain, uncompressed ASCII text has one fixed bit, and a couple of highly
constrained bits, in each byte. Thus, it is even more regular than data
compressed by a universal algorithm, such as one of the Lempel-Ziv family,
even though such algorithms do leave some regular patterns behind.
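One quick way to see that regularity is to estimate the zeroth-order entropy
of a file from its byte histogram: plain ASCII text usually comes out well
under 8 bits per byte (the fixed top bit alone wastes a whole bit), while
well-compressed data comes out close to 8. A small C sketch (it only sees
per-byte statistics, so it misses the longer-range patterns discussed next):

#include <math.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    unsigned long count[256] = {0}, total = 0;
    double bits = 0.0, p;
    FILE *f = (argc > 1) ? fopen(argv[1], "rb") : stdin;
    int c;
    if (!f) { perror("fopen"); return 1; }
    while ((c = getc(f)) != EOF) { count[c]++; total++; }
    if (total == 0) { printf("empty input\n"); return 0; }
    for (c = 0; c < 256; c++) {
        if (!count[c]) continue;
        p = (double)count[c] / (double)total;
        bits -= p * log(p) / log(2.0);      /* Shannon entropy, base 2 */
    }
    printf("%lu bytes, about %.3f bits per byte\n", total, bits);
    return 0;
}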
Using a single-state Huffman code to compress text will also leave behind
some patterns, since, for example, text usually includes a space every few
characters. So, if 000 represents a space character, it will still recur
regularly, even if no longer at byte boundaries.
A compression algorithm can be tuned to reduce redundancy for a particular
kind of input file. For text, one could have a multi-state Huffman code.
Use one Huffman code to represent only the 26 letters of the alphabet.
Another code would represent things like "five-letter word follows",
"six-letter word follows", "punctuation mark followed by space follows",
"punctuation mark not followed by space follows", "shift to figures mode".
And randomizing the table of equivalents before beginning *real*
encryption wouldn't hurt either.
You're quite right that using just any compression algorithm won't
eliminate redundancy, but many types of uncompressed files are so bad that
most compression algorithms really do improve things, even if not as
perfectly as one might like.
John Savard
------------------------------
From: Giles Todd <[EMAIL PROTECTED]>
Crossposted-To: alt.folklore.computers
Subject: Re: How Big is a Byte? (was: New Encryption Product!)
Date: 18 Jul 1999 17:39:15 +0200
Reply-To: [EMAIL PROTECTED]
"Michael D." <[EMAIL PROTECTED]> writes:
> English, as well as the other Germanic languages, cannot be made to be
> gender neutral because of their structure. Latin languages, on the other
> hand (that is: Portuguese, Spanish, French, Italian and Romanian) are all
> gender specific, in general using the "a" and the "o" declensions to
> denominate genders in all nouns, including inanimate subjects.
Germanic languages are also gender specific. "Het meisje" is the
Dutch for "the girl" and is a neuter noun. There are endless other
examples (e.g. to the naive, what is the sense in the Dutch "de dag"
["day", masculine gender] and "het jaar" ["year", neuter gender]?).
The idea that this means that days are male and years are neither male
nor female is quite ridiculous.
> What is the solution?
The solution is for the P.C. types to stop displaying their confusion
between grammatical gender and biological sex. Once they do that then
the "problem" disappears. Sometimes, I wish that the grammatical
genders were called "red", "green" and "blue". There would be no
confusion with biology then.
> Should there even be a solution?
No. It is a non-problem. We're only discussing the issue because
some twits are too ignorant to realize this fact.
Giles.
--
Saxo cere comminuit brum.
------------------------------
From: [EMAIL PROTECTED] (wtshaw)
Crossposted-To: alt.folklore.computers
Subject: Re: How Big is a Byte? (was: New Encryption Product!)
Date: Sun, 18 Jul 1999 12:36:29 -0600
In article <[EMAIL PROTECTED]>, "Douglas A. Gwyn"
<[EMAIL PROTECTED]> wrote:
> wtshaw wrote:
> > Come, to think of it, base one is noncomputational as well.
>
> No, base one is the common "tally mark" notation, which does work.
Considering that calculations according to the normal rules use exactly as
many symbols as the base, including zero, that rule would have to be broken
with base one. I would maintain that base one contains two elements within
the rules, zero and infinity, with infinity and zero both expressed as one,
since anything in base 1, including infinity, is one.
Tally marks would not be a computational base that fits the rules all the
other bases follow, which makes it an orphan, an exception to everything else.
As for the usefulness of tally marks for encryption, they would merely be a
means of representing a count for symbols in another, hidden set. Since zeros
would be excluded from strings of ones, you do not have conventional numbers,
only groups of ones, or perhaps a solitary zero.
If you use ones and zeros in the same number, you are encroaching on every
other base, beginning with base two.
I bet you can hardly wait to have ciphertext posted in the form of tally
marks, which could not be grouped in anything less than the total length of
the string unless you separated them with a solitary zero. Something like:
1111 0 111 0 1111111 0 111111 0, which could also be used in
steganography where zeros were represented by a list of certain words or
characters, and everything else was used as ones. You would have to have
another set to reference to get the meanings of the counts. The
decryption program would, of course, be the tallywacker.
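For the curious, decoding that format is only a few lines of C; the function
name and the fixed-size counts array are just for illustration.

#include <stdio.h>

/* Decode runs of 1s separated by single 0s into their counts. */
static int decode_tallies(const char *s, int *counts, int max)
{
    int n = 0, run = 0;
    for (; *s; s++) {
        if (*s == '1')
            run++;
        else if (*s == '0') {           /* a zero closes the current run */
            if (n < max) counts[n++] = run;
            run = 0;
        }
        /* anything else (spaces, or the words/characters of the
           steganographic variant) would be mapped or skipped here */
    }
    return n;
}

int main(void)
{
    int counts[16], i;
    int n = decode_tallies("1111 0 111 0 1111111 0 111111 0", counts, 16);
    for (i = 0; i < n; i++)
        printf("%d ", counts[i]);       /* prints: 4 3 7 6 */
    printf("\n");
    return 0;
}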
--
Encryption means speaking in turns.
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Compression and security (was: Re: How to crack monoalphabetic ciphers)
Date: Sun, 18 Jul 1999 19:51:19 GMT
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] () wrote:
>[EMAIL PROTECTED] wrote:
>: think about this, what about the first bunch of bytes where there is no
>: history to guess?
>
>Thank you for raising this point. It is something I forgot to think of
>when I discussed the compression issue at length in another reply.
>
>In typical Lempel-Ziv compressors, the first few bytes are expanded to
>nine bits, with an extra bit indicating they're literal instead of
>pointers to the table of previous strings.
>
>It's possible to do a bit better than that. For example, one could
>reproduce the entire input literally up to and including the first time a
>byte value is used for the second time, and then switch to using
>additional bits to support compression overhead.
>
>Also, since adaptive compression methods do more poorly in the first part
>of the text, perhaps bisection - keyed bisection, since one can't
>reconstruct by eye the way it is possible in paper-and-pencil operation -
>or a transposition cipher ought to be the first encryption step after
>using that form of compression.
>
>(And making the encryption of the start of the text dependent on the
>encryption of the better-compressed text is another way of dealing with
>this, as Mr. Scott has doubtless noted. Once again, he has a valid point.)
>
>John Savard
Thanks John.
I still think many compression routines leave telltale signs that the output
is a compressed file. Ideally one would like to compress ASCII messages to
a file that appears completely random. One would also like the decompression
routine to take any random file and expand it to a readable ASCII file. Well,
this is not going to happen (but if it does, I want a copy of the routine).
The reason this would be nice is that the entropy of the encrypted compressed
message would be at a maximum, and if one guessed a key used for the
encryption, the resulting decompressed file would be a valid-looking file,
so the attacker would have no way to know whether it was the correct
key or message.
One thing that would help is using compression/decompression methods where
every file is valid. In my mod of adaptive Huffman compression, "any file can
be decompressed and then compressed back to the starting file", while all
compression methods can already do "any file can be compressed and then
decompressed back to the starting file". If you can find a method other than
mine that has both properties, I would be more than happy to test it out.
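A sketch of how one might test both round-trip properties, given some
compress/decompress pair: the function names and the buffer-based interface
here are assumptions made for the example, not any particular routine's
actual interface.

#include <string.h>

#define MAXBUF (1 << 20)

/* Assumed (hypothetical) interface: each returns the output length in
   bytes, or -1 on failure.  Plug in the routines under test. */
long compress_buf(const unsigned char *in, long inlen,
                  unsigned char *out, long outmax);
long decompress_buf(const unsigned char *in, long inlen,
                    unsigned char *out, long outmax);

/* Property every compressor has: compress, then decompress,
   gives back the original file. */
int test_forward(const unsigned char *file, long len)
{
    static unsigned char a[MAXBUF], b[MAXBUF];
    long alen = compress_buf(file, len, a, MAXBUF);
    long blen = (alen < 0) ? -1 : decompress_buf(a, alen, b, MAXBUF);
    return blen == len && memcmp(b, file, (size_t)len) == 0;
}

/* The rare property: treat an ARBITRARY file as compressed data,
   decompress it, recompress the result, and get the same bytes back.
   With both properties there are no "invalid" files on the compressed
   side for an attacker to key on. */
int test_reverse(const unsigned char *file, long len)
{
    static unsigned char a[MAXBUF], b[MAXBUF];
    long alen = decompress_buf(file, len, a, MAXBUF);
    long blen = (alen < 0) ? -1 : compress_buf(a, alen, b, MAXBUF);
    return blen == len && memcmp(b, file, (size_t)len) == 0;
}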
For those who want to use compression before encryption, please look at my
compression page, and if you're stuck using a small-keyed AES type of
encryption, you might try my method of making an effective forward and
reverse pass of adaptive Huffman compression to partially get an
all-or-nothing type of effect.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE EMAIL address is for SPAMERS
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Xor Redundancies
Date: Sun, 18 Jul 1999 15:31:48 -0400
Or better yet! I'll just have a friend from outside the country do it. He
has the source for the major algs out there. And he simply downloaded it from
the internet. I'll get him to post the source here. What the hell can they
do to him?
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Xor Redundancies
Date: Sun, 18 Jul 1999 15:19:42 -0400
> By the way what you posted may be illegal in most parts of the US
> it depends where you live in the US till the courts decide.
Bah, who cares? If a DA starts questioning me, I'll just post RC4 on here
(along with blowfish).
------------------------------
From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Algorithm or Protocol?
Date: 18 Jul 1999 20:00:35 GMT
[EMAIL PROTECTED] wrote:
> I think pioneers in the group should move to designing secure systems
> (with respect to where the trust is placed). I will bet that anyone
> could sell a system more then an algorithm and that we could use
> protocols more then algorithms...
Sure! What problems need protocols? What's your favorite protocol?
Want to start with intrusion detection, logging in, digital cash,
pseudonymous transactions, or something else?
-David Molnar
------------------------------
From: [EMAIL PROTECTED]
Subject: IEEE P1363 August Meeting Announcement
Date: Sun, 18 Jul 1999 16:46:35 -0400
IEEE P1363 Working Group:
Standard Specifications for Public-Key Cryptography
MEETING NOTICE
Thursday, August 19, 1999, 2:00pm-5:30pm
Friday, August 20, 1999, 8:30am-5:00pm
University of California, Santa Barbara
Santa Barbara, California, USA
(Exact Location to be Announced)
This meeting of the P1363 working group, open to the public,
will review the status of the P1363 ballot (which will
hopefully be ready for final submission to IEEE RevCom),
review the plans for officer elections, receive new
contributions to the P1363a addendum and ideas for other
projects, refine the scope for P1363a and select additional
techniques, and plan for future work.
An information session will be held on Tuesday afternoon at
UCSB for those interested in a general introduction to P1363.
The meeting is held in conjunction with the CRYPTO '99
conference.
AGENDA
Thursday Afternoon
1. Approval of agenda
2. Update on P1363 ballot
4. New P1363a contributions
5. New project ideas
Friday
6. Approval of minutes from previous meeting
7. Officers' reports
8. Refinement of scope for P1363a, further selection of
techniques
9. New project proposals
10. Work assignments
11. Meeting schedule
There will be a meeting fee, which will include an IEEE fee
of $10 per half-day and which will also cover meeting
expenses (exact amount to be determined).
Information on the working group is available through
http://grouper.ieee.org/groups/1363/. To join the working
group's electronic mailing list, send e-mail with the text
"subscribe stds-p1363" to <[EMAIL PROTECTED]>.
REGISTRATION
Although formal registration is not required for IEEE P1363
meetings, for planning purposes, please contact Burt
Kaliski <[EMAIL PROTECTED]> if you would like to attend.
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Why public key in PGP
Date: Sun, 18 Jul 1999 20:38:39 GMT
[EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] (Patrick Juola) wrote:
> > More colloquially, you cannot decrypt with the public key -- you
> > can verify a signature with a public key, but that's a different
> > operation. Of course, in RSA these two operations are implemented
> > identically, but that's not the case in other systems.
>
> Actually all signing methods are encrypt/decrypt variants. you have
> to 'encrypt' the signature with your private info and they 'decrypt' it
> with your public info. If you can't decrypt the encrypted signature
> then you can't verify the document. If you don't 'encrypt' the hash
> then others could modify it.
Tell us how to encrypt and decrypt with, say,
Merkle's one-time hashing based signature scheme.
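For reference, here is roughly what such a scheme looks like: a
Lamport/Merkle-style one-time signature, shrunk to an 8-bit message, with a
toy stand-in hash (not a real one). Signing reveals preimages and verifying
just re-hashes and compares; no encryption or decryption operation appears
anywhere.

#include <stdint.h>

#define MSG_BITS 8   /* shrunk to an 8-bit message to keep the sketch short */

/* Toy stand-in for a one-way hash -- NOT a real cryptographic hash. */
static uint32_t toy_hash(uint32_t x)
{
    x ^= x >> 16; x *= 0x7feb352dU;
    x ^= x >> 15; x *= 0x846ca68bU;
    x ^= x >> 16;
    return x;
}

typedef struct {
    uint32_t sk[MSG_BITS][2];   /* private: random preimages, a pair per bit */
    uint32_t pk[MSG_BITS][2];   /* public: their hashes */
} ots_key;

/* Key generation.  (The seed chaining below is a stand-in for a real
   random number generator.) */
static void ots_keygen(ots_key *k, uint32_t seed)
{
    int i, b;
    for (i = 0; i < MSG_BITS; i++)
        for (b = 0; b < 2; b++) {
            seed = toy_hash(seed + (uint32_t)(2 * i + b) + 1);
            k->sk[i][b] = seed;
            k->pk[i][b] = toy_hash(seed);
        }
}

/* Sign: for each message bit, reveal one of the two secret preimages. */
static void ots_sign(const ots_key *k, uint8_t msg, uint32_t sig[MSG_BITS])
{
    int i;
    for (i = 0; i < MSG_BITS; i++)
        sig[i] = k->sk[i][(msg >> i) & 1];
}

/* Verify: hash each revealed value and compare against the public key.
   There is no decryption step anywhere. */
static int ots_verify(uint32_t pk[MSG_BITS][2], uint8_t msg,
                      const uint32_t sig[MSG_BITS])
{
    int i;
    for (i = 0; i < MSG_BITS; i++)
        if (toy_hash(sig[i]) != pk[i][(msg >> i) & 1])
            return 0;
    return 1;
}

int main(void)
{
    ots_key k;
    uint32_t sig[MSG_BITS];
    ots_keygen(&k, 12345);
    ots_sign(&k, 0x5A, sig);
    return !ots_verify(k.pk, 0x5A, sig);   /* exit 0 = signature checks out */
}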
Is there any way we can convince you to stop making
up this nonsense and polluting the group with it?
--Bryan
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: obliterating written passwords
Date: Sun, 18 Jul 1999 20:35:15 GMT
In article <7moe8q$tp9$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] wrote:
> In article <7moauf$sln$[EMAIL PROTECTED]>,
> [EMAIL PROTECTED] wrote:
> > I occasionally jot down a password, or social security number
> > or such, consisting of a handful of numbers and letters. I
> > later attempt to obliterate it by writing random numbers and
> > letters over all the original numbers and letters, several times.
> >
> > Suppose you are given that piece of paper and told to find the
> > original password. How easy is it? What attacks are available?
>
> Well some investigative centers can recover this. On an A&E special
> documenting plane accidents they used these techniques to find 'repair
> slips' which were forged (by writing over them). They could recover
> writing from papers which were scribbled over in different colors, even
> thru white out and such.
Multiple colors and white-out would make it easier to determine
the layers. Out of curiosity, how do they determine which number
was written first when the same pen is used for all the layers?
Maybe spacing between characters? How hard the pen was pressing
down (probably lightest for the first layer)? Use a magnifying
glass to identify which line wasn't deposited on top of an
existing groove? Or do they need a scanning tunneling microscope
to measure the orientation of the pigment? Or do they note that
there are only three layers, leaving 3^^8 possibilities, so test
all of them?
- Bob Jenkins
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************