Cryptography-Digest Digest #114, Volume #11      Sun, 13 Feb 00 12:13:02 EST

Contents:
  Re: Guaranteed Public Key Exchanges (No Brainer)
  Is there a list server for this newsgroup? (No Brainer)
  Re: Message to SCOTT19U.ZIP_GUY (SCOTT19U.ZIP_GUY)
  Re: Guaranteed Public Key Exchanges (No Brainer)
  Re: Guaranteed Public Key Exchanges (No Brainer)
  Re: Guaranteed Public Key Exchanges (No Brainer)
  Re: Which compression is best? (Tim Tyler)
  Re: SHA1 and longer keys? ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: No Brainer <[EMAIL PROTECTED]>
Subject: Re: Guaranteed Public Key Exchanges
Date: Mon, 14 Feb 2000 00:06:15 +0800

Shen,

On Fri, 11 Feb 2000 08:19:32 +0100, Mok-Kong Shen <[EMAIL PROTECTED]>
wrote:

<snip>

> As far as I know, you could only 'approximately', not 'absolutely',
> solve your problem: either you have a third party certifying
> the public key, possibly via a hierarchy of trust centers, or you have
> so-to-say the 'public' (which could in the special case simply be
> one person) certifying it, i.e. in the manner of the web of trust of
> PGP. (Interestingly, the two approaches could be compared to
> centralized vs. distributed computing.) There has to be somewhere
> some trust, i.e. something on which you put your (subjective,
> hopefully not blind) faith, and that's something one can have
> no proof of (by definition).

Aaah...so if I had a third party (or another party) certifying the public key,
it should be OK?

Two questions, though: how would it be done over the Internet, and what is
"approximately"? :)

AND...if I did use this model, could I still obtain the public key via the
Internet?

TIA.





------------------------------

From: No Brainer <[EMAIL PROTECTED]>
Subject: Is there a list server for this newsgroup?
Date: Mon, 14 Feb 2000 00:07:40 +0800

To all,

I was just wondering if there is a list server linked to this
newsgroup...it's just a hell of a lot easier to compose replies etc etc.

Cheers

EC




------------------------------

From: SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]>
Subject: Re: Message to SCOTT19U.ZIP_GUY
Date: Sun, 13 Feb 2000 16:06:11 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] wrote:
>
> :> >And what about the Compressors A, B,C
> :> >
> :> >Is it the same Compressor?
> :>
> :>     They could be the same compressors as long as they
> :> are one-one compressors. And even better if they can reverse the
> :> byte order of the file.
>
> : DOES your Compressor have a switch ...same module does
> : Compression-Decompression????
>
> I don't think so.

   Go to http://members.xoom.com/ecil/compress.htm
and follow the links to get ah2.zip;
the source code and executables are there.

>
> :>  Better yet, learn how to do "wrapped PCBC", in which case just
> :> compress initially and do at least 3 passes of the "wrapped PCBC"
> :> chaining, using your choice of AES block ciphers with independent keys.
>
> : What is "wrapped PCBC"  How does that work???
  I coined the word "wrapped", and "wrapped PCBC"
is very close to "PCBC" except that the IV is the last few
bits of the file and the file is treated as a rotated ring,
where I would hope one does at least 3 passes through the file.
Look at scott16u.zip and scott19u.zip; again, use the pointers
from my home page, but turn off high-level browser functions
like JavaScript that are not needed for my site.
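Roughly, one pass might look like the Python sketch below. This is only
my illustration of the description above, with a generic block cipher
E(key, block) supplied by the caller; it is not the actual
scott16u/scott19u code.

  def xor(a, b):
      return bytes(x ^ y for x, y in zip(a, b))

  def wrapped_pcbc_pass(blocks, key, E):
      # The file is treated as a ring: the IV comes from the end of
      # the file instead of being transmitted separately.
      iv = blocks[-1]
      prev_plain, prev_cipher = iv, iv
      out = []
      for p in blocks:
          # Standard PCBC relation: C[i] = E(P[i] ^ P[i-1] ^ C[i-1])
          c = E(key, xor(p, xor(prev_plain, prev_cipher)))
          out.append(c)
          prev_plain, prev_cipher = p, c
      return out

Running at least 3 such passes (possibly reversing the block order
between passes, as suggested earlier) is what lets a change anywhere in
the file propagate through the whole output.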


>
> PCBC stands for "Plain & Cipher Block Chaining".
>
> The "&" is sometimes omitted.
>
> To quote from http://www.mit.edu:8008/menelaus.mit.edu/kprot/23
>
> ``PCBC was intended to propagate ciphertext changes indefinitely in the
>   resulting plaintext.  Each block of plaintext to be enciphered using
>   CBC is first xored with the preceding plaintext block.  On decryption,
>   after the block has been decrypted using CBC it is then xored again
>   with the preceding plaintext block.  The idea is that a change in the
>   ciphertext will cause the resulting plaintext for the modified block
>   to be garbage, and that the garbage would propagate to later plaintext
>   blocks since each successive block would be xored with the incorrect
>   plaintext for the preceding block.''
> --
> __________
>  |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]
>
> Ignore previous tagline.
>

--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE: EMAIL address is for SPAMMERS


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: No Brainer <[EMAIL PROTECTED]>
Subject: Re: Guaranteed Public Key Exchanges
Date: Mon, 14 Feb 2000 00:13:25 +0800

Erik,

On  Fri, 11 Feb 2000 08:21:49 -0500, Erik <[EMAIL PROTECTED]> wrote:

<snip>

> 1) The public keys are signed by a trusted third party's private key,
> and downloaded from him.
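As I understand the idea, it is something like this sketch (the names
and the verification primitive here are my own stand-ins, not from any
particular RFC):

  # The only key you must already trust is the CA's: it ships with
  # the client software, so it is never fetched over the wire.
  # 'verify_with_ca_key' stands in for a real signature-verification
  # primitive (RSA, DSA, ...); the names are mine, not from any RFC.
  def accept_public_key(verify_with_ca_key, identity, public_key, signature):
      message = identity + b"|" + public_key
      return verify_with_ca_key(message, signature)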

OK...is there any info anywhere as to how this is accomplished via
RFCs/protocols etc., Erik?

Also, could "the trusted party's server" be a middle man and how would you
know you are talking to the trusted server and not the middle man?

Cheers.




------------------------------

From: No Brainer <[EMAIL PROTECTED]>
Subject: Re: Guaranteed Public Key Exchanges
Date: Mon, 14 Feb 2000 00:19:59 +0800

Paul,

On Fri, 11 Feb 2000 15:14:14 -0500, Paul Koning <[EMAIL PROTECTED]> wrote:

<snip>

> The issue is: how do you bootstrap this?  I.e., how do you get that
> first key, the one from A?  The same problem exists in X.509 and similar
> certificate systems, the only difference is that these use trees while
> the WOT uses graphs.

That's the golden question, Paul: how DO you get the first key?

Surely there must be *some* way to get a key over the Internet in a trusted
manner without someone "intercepting/modifying" it on the way through?

What about an X.509v3-signed program for download that cannot be reverse
engineered? Is there (for example) a 100% secure way to protect a C/C++
program from reverse engineering?




------------------------------

From: No Brainer <[EMAIL PROTECTED]>
Subject: Re: Guaranteed Public Key Exchanges
Date: Mon, 14 Feb 2000 00:25:35 +0800

On 12 Feb 2000 20:20:02, lcs Mixmaster Remailer <[EMAIL PROTECTED]> wrote:

<snip>

> This does not totally defeat the MITM, as he is still pretending to each
> side to be the other.  But he is not successful in monitoring a conversation
> that the two sides have with each other.  It requires him to take a much
> more active role, and the conversation will not end up being the same as
> if he had not intervened.  It at least makes his job more difficult.

LOL...that's the key - to make it _difficult_ for them. I have printed your
e-mail and am currently reviewing it...

Cheers.




------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Which compression is best?
Reply-To: [EMAIL PROTECTED]
Date: Sun, 13 Feb 2000 16:46:08 GMT

Jerry Coffin <[EMAIL PROTECTED]> wrote:
: In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...

: [ compression making the difference between a completely secure and 
: completely broken cipher. ] 

:> This would happen when the analyst is effectively deprived of a halting
:> criterion.

: But they're not.

Really?  How do you know that?

: The only difference it makes is that when they do a trial decryption,
: they have to do an entire message instead of a single block.

No.  That's not necessarily the only difference it makes.

:> If the compression were maximal in some sense, decompressed texts would be
:> indistinguishable from plausible messages.  The failure of decompressed
:> files to resemble correct messages is one symptom of poor compression.
:> So it *WAS* what you were talking about ;-)

: I guess as long as you're free to define "in some sense" any way you 
: want, (i.e. requiring the result you say you want) then your statement
: becomes a meaningless tautology.

I can't say "compression being maximal" without qualification, since for
any target set there may be more than one way to maximally compress it.

As an example, a compressor can completely fill its range with valid
compressed files, yet try especially hard to compress those files
which are exactly 1K long.

I'd be happy, for the sake of argument, to have maximal compression mean,
say, compressing all files in its domain by roughly the same percentage
[give or take some specified small value], with the percentage being as
large as possible.

I hope this clarifies why I needed to qualify my statement.

: Otherwise, it's basically just false.  e.g. Huffman has been proven to
: be "maximal in some sense", but the effect you claim simply doesn't
: happen.

Any Huffman compressor with DS's optimal file ending is optimal, for
/some/ classes of file.  For those target files, the effect /does/ happen
- decompressed files all resemble plausible target files.

When you're saying "the effect doesn't happen", you're probably using the
compressor on a set of target files for which it is far from an optimal
compressor.

:> : I've long since gotten too sick of reading his crap to bother spending 
:> : a couple hours on Deja News to find it again.  If you really care, 
:> : feel free to do so yourself.
:> 
:> I was the one who was questioning the existence of these things.  Why
:> should I spend time searching for something I doubt even exists, in order
:> to demonstrate your point for you?

: My apologies --  I'd mistaken you for somebody who was asking a 
: question because he wanted to know the answer, not because he was 
: simply trying to give the appearance of winning an argument, even when he
: was wrong.

I'm not trying to give the appearance of winning an argument - and,
to my mind, in the complete absence of any evidence, it has yet
to be established whether your assertions were correct.

If you claim that someone has published information that falsely
represents the capabilities of their product - and yet you produce no
evidence of this - you must expect to be questioned about your claims.

When you're asked to support your claims, and you do not do so, I
for one will feel quite at liberty to put your assertions down to a
misinterpretation on your part.

It /may/ conceivably be that David said something like what you claimed.
Without his words to compare yours with, it's hard to judge to what
extent you're twisting them.  You appear generally opposed to
his view of the world, and perhaps have not given it a fair hearing.

I guess without any support, we'll never know.

:> It's questionable whether what you describe qualifies as "compressing in
:> one direction".  It takes information from near the end of the file and
:> allows it to influence output from near the start of the file.  You could
:> not use such a method as a stream compressor.

: Nobody said anything about a stream compressor. [...]

If a compressor does *not* compress in a stream, in what sense may it be
described as having a direction at all?  Stream compressors have to be  
the subject - or the discussion doesn't make much sense.

: Since you seem to have forgotten what you said, I'll quote it for you
: again:

: :> Compressing in one direction only diffuses plaintext in one
: :> direction.

:> I never claimed that compressing in both directions was the /only/ way
:> to get information diffusion through the whole file.  Obviously there are
:> other ways of getting similar effects.

: You seem to be changing your story a bit.

I hope not.

Certainly the above quotations appear to be consistent with one another.

: You specifically said that compressing in only one direction can only
: have certain effects.

That's right.  A stream compressor /can't/ diffuse information backwards
significantly.  It /has/ to spit out its output /fairly/ quickly if
it is to stay a stream, rather than becoming a stagnant pool ;-)

: Now you're saying that "obviously" there are other ways of getting similar 
: effects.

Well, yes.  Obviously, you can get whole file diffusion, by doing, for
example, whole file diffusion.

: Of course it's obvious you were wrong AFTER I proved it, but 
: I think it's pretty obvious to everybody reading this thread exactly 
: what you thought up until your ideas were proven incorrect.

You simply have a different idea from me of what it means for a
compressor to compress in one direction.

The way I was using the term, this must describe a stream compressor.

If a compressor reads in the whole file before spitting anything out, it
need have no "direction" - since it can produce exactly the same output if
it reads in the file in either direction, or all at once.

:> Well - for what it's worth - compressing in each direction seems to me to
:> be a type of diffusion that can easily be applied with a serial machine.

: I have to admit that I don't see what it's worth, or how it relates to 
: anything else being discussed at all.

Time to drop this part of the subject then?

:> Non-1-1 compressors (I assume this is the subject? - or are you referring 
:> to compressing once in each direction here?) allow systematic rejection of
:> keys, regardless of the plaintext they transmit.

: That's not necessarily true -- it's only true if decrypting with the 
: wrong key can and does produce a plaintext that can't possibly have 
: resulted from the non-1-1 compressor in question.

Well, it's a fairly orthodox assumption that the attacker has access to
the cyphermachine.  I'm just assuming the compressor is treated as part
of the cyphermachine.

What proportion of the time a wrong key produces a valid output depends on
how much information is added by the compressor.  For each bit added,
approximately half the remaining keys can be eliminated.  If as many bits
are added as there are bits of key in the space you're searching, you might
well get down to just one possible file.
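As a back-of-the-envelope sketch of that counting argument (Python; the
numbers are purely illustrative):

  # Each redundant bit the compressor adds lets an attacker reject
  # about half of the surviving wrong keys, since a random decrypt
  # stays consistent with probability ~1/2 per bit of redundancy.
  def expected_surviving_keys(key_bits, redundancy_bits):
      return 2 ** max(key_bits - redundancy_bits, 0)

  # A 40-bit effective search space against 64 bits of redundancy:
  print(expected_surviving_keys(40, 64))  # -> 1 (only the true key)
  print(expected_surviving_keys(40, 8))   # -> 2**32 candidates left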

: At the same time, a 1-1 compressor CAN allow easy and systematic 
: rejection of keys: using David Scott's compressor as an example, even 
: though every input is theoretically legal, decompressing from most 
: produces output that's _easily_ distinguishable from normal text.

Indeed.  David's compressor is far from optimal for text messages.
A "decompress, then perform frequency analysis" strategy may well
bear fruit.

:> If your cypher's keyspace is reduced to dimensions which allow an
:> automated search, whether by a bad RNG used to generate the keys, a cypher
:> break, key leakage through a programming bug, key knowledge gained through
:> captured key documents, or any of a zillion other factors, use of 1-1
:> compression can make the difference between having one document which is
:> identified as the genuine article, and having millions of possible
:> plaintexts, with no known automatic method of choosing between them.

: You've got two basic ideas here: one that a non-1-1 compressor makes 
: it easy to distinguish a correct decryption from an incorrect one.
: That's not necessarily true.

I never said it was.  A non-1-1 compressor allows rejection of incorrect
keys, not identification of correct ones.

I said:

``use of 1-1 compression *can* make the difference between having
  one document which is identified as the genuine article, and having
  millions of possible plaintexts, with no known automatic method of
  choosing between them.'' [emphasis added].

: Second, that a 1-1 compressor makes it impossible to distinguish 
: between a correct and an incorrect decryption.  That's pretty clearly 
: not true either [...]

I never said that either.  I said (paraphrasing) that a 1-1 compressor
*can* do this.  That is, when compared with an ordinary one.  What I
said was correct.

:> :> If you don't care about compressors systematically adding information to
:> :> the files they compress, that's your look out.
:> 
:> : "that's your look out"?  You're apparently trying your own version of 
:> : security through obscurity, saying things that simply make no sense.
:> 
:> ? Did I say something that made no sense?

: Yes.  I even quoted the part that made no sense to make it easy for 
: you to pick out.  Apparently that wasn't quite enough for you...

Indeed not.  Perhaps you'd like to explain further?

:> 1-1 compression /is/ generally a good thing, though.  Algorithms that
:> aren't 1-1 are wasting their range, and losing their efficiency.

: In theory that's true.  In reality, nobody seems to have devised a 1-1 
: compressor yet that's nearly as effective as some non-1-1 forms.

Well, how long ago was 1-1 compression invented?

I first heard about it under six months ago.

:> Huffman and arithmetic compressors already have the fundamental technology
:> behind their compression available in 1-1 variants - and shorter files
:> result.  Hopefully, other systems will follow.

: You don't seem to be giving a referent for your comparison.  You say 
: "shorter files result", but you never say WHAT they're shorter than.

The same compressors without the optimal 1-1 ending treatment.  The
technique appears to be a general one, which can be applied to *all*
Huffman and arithmetic coding schemes.

: Clearly they're NOT shorter than the result from MANY forms of 
: compression that aren't 1-1.

For some common classes of file, no.

: In theory, it's true that a form of compression that was absolutely as 
: effective as theoretically possible would have to be 1-1.

Yes.

: What you're  ignoring is the fact that at the present time, NO compressor
: extant even approaches theoretical perfection [...]

Surely not true.  For some traffic, perfect compressors are trivial to
design, by a simple counting theorem.  For uniformly random fixed-length
strings, for example, the identity map is already an optimal - and
trivially 1-1 - compressor.

: [...] and in fact they're all SO far from perfect, that this isn't
: really even a consideration at all -- experience shows that no 1-1 compressor
: extant gives anywhere close to the compression available by other
: compressors without the 1-1 characteristic.

Well, this depends on your target files.  David's compressor is likely to
be optimal for *some* traffic.  However, in general, I'd agree with the
spirit of your comment.

To my mind, the solution is not to bash those advocating 1-1 compression,
though - it's to build more and better 1-1 compressors.  Getting the 1-1
property is not easy, but it should not be *that* difficult.

: I'd also note that I think that if you want to do a reverse 
: "scrambling" step to diffuse data from the end of the file back toward 
: the beginning, something other than a compressor is probably just as 
: effective, and quite a bit simpler to write.

That seems very likely.  Another compression should be easy - since you
already have a compressor handy - but I agree that other things may be
more effective.  I'd be interested in seeing any such structures.

: Even simply starting at the end of the file, and writing out the XOR of
: each byte with the following byte will effectively diffuse information
: all the way from the end of the file back toward the front.

I feel that this won't work for some things.

If you don't compress (but do XOR), and you then perform David's trick of
decrypting with the correct key, you will find that this gives a
space of 256 possible files through which to search in order to identify
the correct text.

If you're given a fragment of the text encrypted in such a manner, you'll
be able to recover most of the corresponding plaintext from that fragment,
if you are given the key.  This illustrates that the information has not
been diffused very effectively.
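To make that concrete, here is my rendering of the suggested transform
and its purely local inverse (Python; this is my sketch, not Jerry's
code):

  def backward_xor(p):
      # The suggested pass: starting at the end, each byte is XORed
      # with the already-transformed byte after it, so
      # t[i] = p[i] ^ p[i+1] ^ ... ^ p[-1].
      t = bytearray(p)
      for i in range(len(t) - 2, -1, -1):
          t[i] ^= t[i + 1]
      return bytes(t)

  def recover_fragment(t):
      # The inverse is purely local: p[i] = t[i] ^ t[i+1].  Any
      # contiguous fragment of t therefore yields all but one byte
      # of the matching plaintext fragment - no key search needed.
      # e.g. recover_fragment(backward_xor(data)) == data[:-1]
      return bytes(t[i] ^ t[i + 1] for i in range(len(t) - 1))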

I believe such an XOR technique does not completely get rid of the
"known header in the message can be used for partial-plaintext attacks"
problem.

This type of experiment indicates to me that a more sophisticated
procedure is required.  Something with a non-trivial quantity of
"internal state" appears necessary.  Bytes just aren't enough.

As I have said, if I had a parallel machine available, I'd prefer to
use a proper diffuser.  This has advantages in that it allows the quantity
of diffusion to be easily controlled - and it will not bulk up the file
unnecessarily.

: Most forms of compression will actually expand data when run across
: the same data a second time, but simply XORing bytes avoids any
: possibility of that happening.

Yes, this /would/ be good, were it not for the fact that it barely
diffuses the information necessary to decrypt the plaintext at all.

XORing with words - rather than bytes - doesn't appear to work, either.

More work is required to recover the plaintext - but if there's a
non-trivial quantity of English text involved - it should be possible.
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

'Onst baas, aas jist pressed dis, an' de carrier, he gone walkabout...

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: SHA1 and longer keys?
Date: Sun, 13 Feb 2000 16:47:07 GMT



> > For i = 2 to 8
> >   result = HMAC_SHA1(password, sessionKeys(i) +
> >            HMAC_SHA1(password, result + sessionKeys(i-1))) + result
> > Next
> >
> > returns 160 bytes I use as key. password is the literal password
> string.
> > Is this secure at all? If not, what's the best way to get a larger key
> > than the result of SHA1?

>
> You would never need keys this big coming from a user-supplied
> passwd.  But let's suppose you want a 320-bit key... [using SHA]  Just
> do this.
>
> K1 = SHA(PASSWD + SALT)
> K2 = SHA(K1 + PASSWD)
>
> K = K1 || K2
>
> It's basically a CBC-style chaining mode.  'Salt' should be some value
> of at least 64 bits [which is publicly known, but unique to each session
> key] appended to the password in the first hash.

Well, thanks for the reply, but it didn't really answer my question. I
know I don't need a 1280-bit key - this was just an example. However, my
question was whether my chaining method using HMAC_SHA1 is secure. Is it?
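For concreteness, here is the chaining I mean, next to the K1 || K2
suggestion above (Python with hashlib/hmac; the initialisation of
'result' is my guess, since the snippet above doesn't show it):

  import hmac, hashlib

  def hmac_sha1(key, msg):
      return hmac.new(key, msg, hashlib.sha1).digest()

  def chained_key(password, session_keys):
      # My loop from above: each round prepends a fresh 20-byte HMAC
      # output, which together with the (assumed) initialisation
      # gives 8 * 20 = 160 bytes in total.
      result = hmac_sha1(password, session_keys[0])  # assumed start
      for i in range(1, 8):
          inner = hmac_sha1(password, result + session_keys[i - 1])
          result = hmac_sha1(password, session_keys[i] + inner) + result
      return result

  def suggested_key(password, salt):
      # The K1 || K2 construction from the reply above.
      k1 = hashlib.sha1(password + salt).digest()
      k2 = hashlib.sha1(k1 + password).digest()
      return k1 + k2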

Thanks,

John Stone


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
