Cryptography-Digest Digest #770, Volume #12      Mon, 25 Sep 00 14:13:00 EDT

Contents:
  Re: What makes a cipher resistant to Differential Cryptanalysis? (Mok-Kong Shen)
  Encryption Project ("Robert Hulme")
  Re: What am I missing? (Scott Craver)
  Re: What makes a cipher resistant to Differential Cryptanalysis? (Tom St Denis)
  Re: LFSR as a passkey hashing function? (Simon Johnson)
  Re: What makes a cipher resistant to Differential Cryptanalysis? (Mok-Kong Shen)
  Re: Tying Up Loose Ends - Correction (Mok-Kong Shen)
  Re: Tying Up Loose Ends - Correction (Mok-Kong Shen)
  Re: What am I missing? (Scott Craver)

----------------------------------------------------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: What makes a cipher resistant to Differential Cryptanalysis?
Date: Mon, 25 Sep 2000 18:35:08 +0200



Tom St Denis wrote:
> 
>   Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> >
> >
> > "Paul L. Hodgkinson" wrote:
> > >
> > > "Mok-Kong Shen" wrote
> > >
> > > > No practical cipher is absolutely secure.
> > >
> > > Which isn't entirely true.
> > > A one-time pad can be shown to be theoretically unbreakable, which
> requires
> > > a random key of the same length as the message.
> > > Such a pad was used in the Cold War to encrypt communications
> between Moscow
> > > and Washington, sent by secure courier.
> >
> > The ideal OTP is 'theoretical' and, as you said,
> > 'theoretically' unbreakable. No practical 'ideal' OTP
> > exists (or knowable to be ideal, if given one),
> > however.
> 
> Actually an OTP is unconditionally secure if you can't guess my secret
> pad better then a prob of 1/2.  If I make a CDROM full of random bits
> (or two, one for either direction) and the CDs *are not* compromised
> then the communication is *provably* secure until the pads run out.

The catch is with the term 'random'. The randomness
required by an ideal OTP, i.e. its exact theoretical
assumptions, cannot be absolutely verified for
any given sequence claimed to be an ideal OTP.
That's why an ideal OTP can't be obtained in practice,
even if it could exist according to theory.
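
For reference, the mechanics Tom describes amount to XOR against the pad.
A minimal sketch in Python, with the OS random source (`secrets`) standing
in for the contested "truly random" bits:

```python
import secrets

def make_pad(n: int) -> bytes:
    # Stand-in for the CD-ROM full of random bits; the OS RNG is a
    # practical approximation, not the 'ideal' source the theory assumes.
    return secrets.token_bytes(n)

def otp_xor(data: bytes, pad: bytes) -> bytes:
    # Encryption and decryption are the same XOR; each pad byte must be
    # used exactly once and then discarded ("until the pads run out").
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))
```

The objection above stands unchanged: nothing in the code can certify that
the pad bytes actually meet the theoretical randomness assumption.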

M. K. Shen

------------------------------

From: "Robert Hulme" <[EMAIL PROTECTED]>
Subject: Encryption Project
Date: Mon, 25 Sep 2000 16:59:31 +0100

Hi,

This is my first time working with encryption - my employers have asked me
to create a system that will allow users to access their payroll information
via the web.

I'm going to be using NT 4 Server and IIS 4.0 to create the system, using
ASP in IIS.

I can see that securing the communications between the server and the user's
web browser is fairly easy using SSL 3 by plugging a certificate into IIS
and using 40-bit encryption (I have to use that because I'm in the UK, don't
I?), but to be really sure the system is safe even if the NT box got cracked
I want to encrypt the records in the database...

I'm new to cryptography but I thought maybe I could use some one-way hash
like MD5 on their plaintext password and put the digest in the
database. Then I could use some suitably strong encryption, keyed with the
user's plaintext password, to encrypt all their other information and store
the ciphertext of that in another field in the database.

I would appreciate some comments / suggestions / general help on this
though..

As I see it I can use MD5 on the plaintext password they send, compare it to
the MD5-hashed password in the database, and if they match use that
plaintext string to decrypt their private information (and then display it
on screen).
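
The check-then-decrypt flow described above can be sketched as follows
(Python; `hashlib` supplies MD5, and the function names are mine, purely
illustrative):

```python
import hashlib

def store_password(plaintext: str) -> str:
    # Keep only the MD5 digest in the database, never the plaintext.
    return hashlib.md5(plaintext.encode("utf-8")).hexdigest()

def check_password(candidate: str, stored_digest: str) -> bool:
    # Hash what the user submits and compare with the stored digest;
    # on a match, the plaintext can serve as the key that decrypts
    # the user's other fields.
    return hashlib.md5(candidate.encode("utf-8")).hexdigest() == stored_digest
```

Whether unsalted MD5 is a good choice here is exactly the kind of comment
being solicited.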

In particular I wonder if people could tell me:

a) whether this makes any sense :0)
b) whether it has any glaring security holes
c) what encryption I should use

Generally any kind of help would be good :0)
Thanks
-Rob




------------------------------

From: [EMAIL PROTECTED] (Scott Craver)
Subject: Re: What am I missing?
Date: 25 Sep 2000 16:48:54 GMT

Lon Willett  <[EMAIL PROTECTED]> wrote:
>[EMAIL PROTECTED] (Scott Craver) wrote:
>>
>>      That doesn't matter.  There are some operations that SHOULD
>>      NOT be undone by a compressor, even if they COULD.
>
>Not actually true, in theory.  

        Oh, I disagree wholeheartedly.  The reason that these certain
        operations should not be undone by a compressor is that only
        in SOME cases are they allowable.

        Watermarking is applied to clips for which the watermark's modification
        does not damage the content.  If compression wiped that out, then 
        it would wipe it out for all clips, including ones for which that
        modification is bad.

        Also, watermarking takes advantage of operations that are
        technically noticeable but arbitrary.  You may deform an image
        slightly to put a certain feature point in a certain geometric
        relation with other feature points (this method has been proposed,
        I believe by Maurice Maes at Philips.)  To "compress" away this
        arbitrary choice, all images would have to have their feature points 
        snapped to specific locations!  An artist making a portrait couldn't
        decide to make the nostrils a tad smaller or a freckle just so;
        this magical compression scheme would keep undoing those alterations,
        on the grounds that your average viewer wouldn't notice without
        the original anyway!

>>      Often true, but _good_ watermarking schemes are also designed
>>      to avoid discovery by hackers.
>
> [snip]
>
>I'll strongly disagree (with the "good" part).
>
>In theory, it can be made very difficult indeed to detect a watermark
>by looking at the raw data alone ("cipertext only", but its not really
>"cipertext" we're talking about; is there some correct jargon that I
>should use here?).  

        (Nope.  Some people say "stego-whatever," such as stego-image,
        stego-clip, stego-text.  More commonly, the stuff in which 
        the message is hidden is called a cover-image, or cover-clip, or
        cover-text.

        Seeing as how "stego" is short for "stegano," a prefix meaning
        "covered," I guess you can say whatever you want.)

>Assuming that the crypto isn't flawed, and that the watermarking has 
>a sequence of bits that it can play with whose distribution is 
>"effectively" random with a known distribution, and that those bits 
>aren't destroyed by a compression algorithm, then you can indeed 
>watermark effectively.  

        Indeed.

>But reverse-engineering of the scheme need only be done _once_ by
>_one_person_, and the whole thing is dead.  

        But this is true for anything.  RSA need only be cracked 
        once by one person.  That doesn't mean it will be done, or
        should not be trusted.

>And it will be done.  

        People will figure out minor operations to which the watermark
        is not robust (mild spatial warping, a la Fabien Petitcolas's
        program StirMark, for example.)  But they will not crack it in 
        the sense of extracting those secret bits.

        This is important in the non-watermarking application of sending
        secret messages in images.  Maybe you can find some operation
        to squeeze secret messages out of videoconferencing sessions
        (and inflict on all video feeds without a public backlash??!!)
        but as long as you don't _find_ a hidden message, or find
        _evidence_ of a hidden message, any parties engaging in secret
        communication are safe from persecution.

        In these schemes, there isn't a point where a hacker eventually
        figures out how to extract the bits.  They're concealed in a 
        subliminal channel based on a secret key, and you'll need that
        key.  Just like you would trying to extract bits from ciphertext.

>why bother trying to make the watermarking scheme resistant to
>detection?  

        In the case of watermarking, one can use a key so that only
        someone with that key can detect the mark.  This is useful 
        if you want to be the only one who can see it.
        
>And I think that if one wants to rely on the law, it is probably better
>to do it directly than through bogus ways (like the DMCA) that screw
>up the whole software industry.  It is _already_ illegal to make and
>distribute copies.  Trying to artificially control the technology
>("this program is illegal to possess") will probably do far more harm
>than good in the long wrong.  

        I like that, the "long wrong."  In fact, many have asked this
        question about watermarking:  why, when ultimately image 
        registration or other copyright enforcement technologies may be
        better?

        I was working on a scheme once which ultimately required a 
        different (large!) key for every image you wanted to watermark.
        Instead of keeping a huge file like this, why not just do some
        kind of robust high-level hash of the image, and register this
        with an agency?  You'd be storing just as much information,
        and would actually have a better chance in a court of law than
        you would trying to prove some hidden skewed distribution of
        transform coefficients.

>Regardless of the politics, I suspect watermarking of widely published
>material has a very limited future.  It is an attempt to suppress the
>new technologies, rather than adapt to them, and the technical means
>just don't exist to pull it off for long.

        My prediction:  someone will finally get that one clause in the
        DMCA struck down, and mini digital watermark removers, both
        hardware and software, will be legal to sell.

        These won't just be devices only criminals would want (e.g., 
        illicit cable descrambler boxes.)  Ingemar Cox et al mentioned, in
        a paper in the IEEE Journal of Selected Areas in Communications,
        that a personal video scrambling device may be useful for parents
        who want certain movies off-limits for the kids.  By scrambling
        video feed as it enters the DVD recorder, you "mismatch" the
        watermark and the detector so no mark can be found.

        You now have a scrambled DVD.  When you play it back, just use
        your parental control scrambler circuit again, to fix the 
        output signal on the way back to the TV.  Neat, eh.
        
>/Lon Willett   <[EMAIL PROTECTED]>

                                                        -S


------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: What makes a cipher resistant to Differential Cryptanalysis?
Date: Mon, 25 Sep 2000 16:48:25 GMT

In article <[EMAIL PROTECTED]>,
  Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>
>
> Tom St Denis wrote:
> >
> >   Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> > >
> > >
> > > "Paul L. Hodgkinson" wrote:
> > > >
> > > > "Mok-Kong Shen" wrote
> > > >
> > > > > No practical cipher is absolutely secure.
> > > >
> > > > Which isn't entirely true.
> > > > A one-time pad can be shown to be theoretically unbreakable,
which
> > requires
> > > > a random key of the same length as the message.
> > > > Such a pad was used in the Cold War to encrypt communications
> > between Moscow
> > > > and Washington, sent by secure courier.
> > >
> > > The ideal OTP is 'theoretical' and, as you said,
> > > 'theoretically' unbreakable. No practical 'ideal' OTP
> > > exists (or knowable to be ideal, if given one),
> > > however.
> >
> > Actually an OTP is unconditionally secure if you can't guess my
secret
> > pad better then a prob of 1/2.  If I make a CDROM full of random
bits
> > (or two, one for either direction) and the CDs *are not* compromised
> > then the communication is *provably* secure until the pads run out.
>
> The catch is with the term 'random'. The randomness
> required by an ideal OTP, i.e. its exact theoretical
> assumptions, cannot be absolutely verified for
> any given sequence claimed to be an ideal OTP.
> That's why an ideal OTP can't be obtained in practice,
> even if it could exist according to theory.

Are you sure about that?

In the series "111111111111111" what is the next bit?

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Simon Johnson <[EMAIL PROTECTED]>
Subject: Re: LFSR as a passkey hashing function?
Date: Mon, 25 Sep 2000 17:09:39 GMT

In article <8qmt7h$kns$[EMAIL PROTECTED]>,
  Bryan Olson <[EMAIL PROTECTED]> wrote:
> Simon Johnson wrote:
> > Say we take a 128-bit primitive polynomial mod 2 and
> > convert it to an LFSR. If we want to store a 128-bit
> > passkey for a 128-bit key encryption algorithm, for
> > example, we would enter the key as the initial state
> > of the register. We then clock it 128 times, to
> > clear out the passkey. Then we clock it a further
> > 128 times, and record this bit sequence as the
> > hash. Any problems with this design?
>
> It's overly complicated as a way to store a 128-bit
> quantity, which is the only stated requirement. As
> Tom noted, the state transform is not one-way.  But
> before we consider the merits of solutions, we need
> to define the problem.  What security properties
> are you looking for?
>
> --Bryan
> --
> email: bolson at certicom dot com
>

I just want a 128-bit hash of a 128-bit key so I know that a user has
supplied the correct password to decrypt the file, in my file
encryption utility.
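
For concreteness, here is the construction under discussion, scaled down to
a 16-bit register (Python; the tap positions are the commonly cited
maximal-length set for width 16, taken on trust here). As noted above, it
is not one-way: the recorded bits are a linear, invertible function of the
key.

```python
def parity(x: int) -> int:
    return bin(x).count("1") & 1

def lfsr_hash(key: int, width: int = 16, taps: int = 0x002D) -> int:
    # Load the passkey as the initial register state.
    state = key & ((1 << width) - 1)
    assert state != 0, "the all-zero state is a fixed point of a plain LFSR"

    def clock() -> int:
        nonlocal state
        out = state & 1
        fb = parity(state & taps)          # XOR of the tapped bits
        state = (state >> 1) | (fb << (width - 1))
        return out

    for _ in range(width):                 # "clock it 128 times" (here 16)
        clock()                            #   to clear out the passkey
    h = 0
    for _ in range(width):                 # record the next `width` output
        h = (h << 1) | clock()             #   bits as the "hash"
    # Every step is linear and invertible over GF(2), so the key can be
    # recovered from the output -- exactly the objection raised above.
    return h
```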
--
Hi, i'm the signuture virus,
help me spread by copying me into Signiture File


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: What makes a cipher resistant to Differential Cryptanalysis?
Date: Mon, 25 Sep 2000 19:37:14 +0200



Tom St Denis wrote:
> 

> Oh sorry, yes you're right.  Let's consider DES with ultra-weak S-boxes:
> at most you add 8! work to the attack (or 16(8!) if you use
> round-independent reorderings), which is not a heck of a lot.

But how LARGE would the figure be, if each round has a
different ordering? And if one has 16 S-boxes available
to choose from?
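
Reading the question literally, a quick count (Python; this only bounds the
extra keyspace from the reorderings, and says nothing about whether an
attack must actually search all of it):

```python
import math

per_round = math.factorial(8)      # 8! orderings of the 8 S-boxes: 40320
all_rounds = per_round ** 16       # an independent ordering in each of DES's 16 rounds

print(round(math.log2(per_round), 1))   # -> 15.3 (bits)
print(round(math.log2(all_rounds), 1))  # -> 244.8 (bits)

# With 16 candidate S-boxes to pick from per slot, instead of a
# permutation of a fixed 8, each round offers 16**8 = 2**32 choices,
# i.e. 2**512 over 16 independent rounds.
```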

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Tying Up Loose Ends - Correction
Date: Mon, 25 Sep 2000 19:37:20 +0200



John Savard wrote:
> 
> It certainly is true that "perfect" compression, were it attainable,
> would confer a kind of information-theoretic security (although still
> short of that provided by the one-time pad) on messages. However, it
> is equally true that no compression scheme could possibly be devised
> that would, for a given ciphertext, cause decryption with DES to
> produce 2^56 equally plausible plaintexts (and, of course, the
> ciphertext would be another plausible plaintext, hence perfect
> compression would also be steganographic!).

If a pre-pass is used to determine the true frequencies,
then static Huffman gives the best result among codes
with the prefix property. Adaptive Huffman emits codes
that are only optimal relative to the frequencies of
the part of the ensemble already read at any point, and
has to emit an NYT code plus a standard code for each
symbol that hasn't been encountered before. Evidently,
the whole output cannot be optimal in length. Does anyone
happen to have empirical results comparing the two
approaches? Thanks.
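
No empirical figures to hand, but the static two-pass side of such a
comparison is easy to instrument. A minimal sketch in Python (count the
true frequencies, build the tree, report code lengths and total coded
size; an adaptive coder to run against it would still have to be written):

```python
import heapq
from collections import Counter

def static_huffman(data):
    # First pass: determine the true symbol frequencies.
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate one-symbol input
        return {s: 1 for s in freq}, len(data)
    # Build the tree bottom-up; each heap entry carries the code length
    # assigned so far to every symbol beneath that node.
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tie = len(heap)                         # tie-breaker keeps tuples comparable
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    lengths = heap[0][2]
    total_bits = sum(freq[s] * lengths[s] for s in freq)
    return lengths, total_bits
```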

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Tying Up Loose Ends - Correction
Date: Mon, 25 Sep 2000 19:37:32 +0200



Tim Tyler wrote:
> 
> I was trying to talk about arbitrary-length padding - not just the case of
> < 8 bits.
> 
> The more padding there is, the more potential information you're giving
> the attacker by using an "inefficient" representation of the length of the
> padding.

For padding to a 64-bit block boundary, one needs at
most 63 bits of padding (per message). I don't think
there is a sufficient amount there for predicting the
pseudo-random source of the padding, if that's
dependent on some secret information.
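
A sketch of the arrangement being discussed (Python; HMAC-SHA-256 stands in
for the "pseudo-random source dependent on some secret information", and
the final-byte length marker is my own illustrative convention, not
something from the thread):

```python
import hashlib
import hmac

BLOCK = 8  # 64-bit block boundary, in bytes

def pad(msg: bytes, key: bytes) -> bytes:
    # Filler bytes needed so the total is a multiple of BLOCK, reserving
    # one byte to record the pad length (1..BLOCK).
    n = BLOCK - (len(msg) % BLOCK)
    # Derive the filler from secret material, so an attacker cannot
    # predict the padding bytes.
    stream = hmac.new(key, len(msg).to_bytes(8, "big"), hashlib.sha256).digest()
    return msg + stream[: n - 1] + bytes([n])

def unpad(padded: bytes) -> bytes:
    return padded[: len(padded) - padded[-1]]
```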

BTW, could you answer my question of whether Scott's
1-1 scheme can deal with arbitrary boundaries, since
you are apparently fairly familiar with that? (I
failed to obtain that information from him directly.)

M. K. Shen

------------------------------

From: [EMAIL PROTECTED] (Scott Craver)
Subject: Re: What am I missing?
Date: 25 Sep 2000 17:26:48 GMT

Sagie  <[EMAIL PROTECTED]> wrote:
>
>You are being hypothetical again. It is obvious that they want it to
>survive MP3, but the fact is that you have not tested it on either of
>the technologies. Surviving MP3 compression is actually rather vague,
>because I have no doubt it depends on the target MP3 parameters (stereo
>mode, bitrate, encoder quality, sample rate, etc).

        Okay, but the idea of watermarking is that removing the mark
        requires distorting the signal into the ground.  Sufficiently low
        parameters will damage the music, and at that point you can keep
        the music.

>>      That doesn't matter.  There are some operations that SHOULD
>>      NOT be undone by a compressor, even if they COULD.
>
>The only modifications that should not be performed by a compressor are
>such that are audible by the human ear.

        A compressor must keep its hands off something unless it is
        fair game to change in ALL, or at least MOST music.  

        Here's a less stupid example:  spread spectrum.  Take the 
        FFT of the music, and lightly (but not too lightly) tweak
        100 different low-frequency components.  The tweaking is 
        done by a 100-element key vector:  freq_i = freq_i * vector_i,
        where each vector_i is close to 1.
        
        The watermark is detected by correlation.
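
        The scheme just described, in miniature (Python over a plain list
        standing in for the FFT magnitudes; the 100-bin count comes from the
        text, while the +/-5% tweak strength and the seeds are illustrative):

```python
import random

def make_key_vector(seed, n=100, strength=0.05):
    # Secret 100-element key vector; each component multiplies one
    # low-frequency bin and sits close to 1 (here 0.95 or 1.05).
    rng = random.Random(seed)
    return [1.0 + strength * rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(spectrum, key):
    # freq_i = freq_i * vector_i on the first len(key) bins.
    marked = list(spectrum)
    for i, k in enumerate(key):
        marked[i] *= k
    return marked

def detect(suspect, original, key):
    # Correlate each bin's relative change against the key's +/- pattern.
    return sum((s / o - 1.0) * (k - 1.0)
               for s, o, k in zip(suspect, original, key))
```

        With the right key the score comes out clearly positive; a wrong
        key's pattern is nearly orthogonal to the changes, so its score
        hovers near zero.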

        The simple fact is that you can tweak music much more, inaudibly,
        than any compressor can inaudibly remove any correlation.  This
        may sound paradoxical, but is due to the fact that you are 
        using a secret key vector.  The compressor has no idea what your 
        vector is, and chances are any mild changes it makes to the low 
        frequency bins will be almost orthogonal to your vector.

        In order for the compressor's orthogonal change to be so strong that 
        your mark isn't detected, the bins would have to be greatly, audibly, 
        distorted.
        
        I don't know who does this with audio, but it's a common approach
        in image marking.  Compressing out a particular watermark is 
        easy; compressing out the ENTIRE space of ~2^100 watermarks is 
        impossible.

>If the changes are so sublte and undetectable by the human ear you can
>count on it that compression methods will use those tricks for the
>compression process (if they create a more effective representation of
>the signal, obviously) or remove these changes as part of the
>optimization. 

        You can imagine this in theory, but not count on this in
        practice.  Stretch any image you own by 1% in width.
        This is subtle and practically undetectable by most users,
        w/o reference to the original.  But no image compression scheme 
        will know to undo it.  No image compression scheme will take
        all images around 480 pixels wide and scale them to 480 pixels
        to wipe out that undetectable change.  Width is off-limits.

>You're forgetting that we have the "before" and the "after". We already
>have the exact watermark signal that was introduced to the audio, we
>don't have to "find" it. 

        First, you don't necessarily have _the_ signal.  It can be
        different from clip to clip, so different that the difference
        tells you nothing about what will be found in the next audio clip.

        Second, the difference will not necessarily betray a 
        "watermark signal."  If a watermark is embedded in a wacky
        non-additive way, the difference is best described as a signal
        that hints that one method was used, but which itself might
        not contain the mark information.  

        Instead, the difference gives vital clues, but is not itself the 
        watermark, unless you trivially define "the watermark" as the
        object X such that Original + X = Marked.  This is not a good
        model for a lot of schemes.

> We don't have to crack the technology -- all we have to do is cause enough 
> inaudible (or slightly audible) damage to the signal to render the watermark
> undetectable and get the 10K$ paycheck... 

        Exactly.  Tho, you have to be sure that your inaudible damage works
        not just on this clip but on others as well.  If you have cracked
        the technology, you're much more likely to be certain of this.

>>      Also see the work of Lisa Marvel et al, who developed a spread-
>>      spectrum stego scheme that can conceal about 5 kilobytes very
>>      robustly (and error correction is all taken care of) in a 512x512
>>      monochrome image.
>
>That's very impressive, but one should wonder what artifacts are
>introduced to the image as a result (we are talking about 1-bit depth
>per pixel, right?).

        No, more like 8-bit depth.  But you'd be shocked by what
        spread-spectrum techniques can survive!  Dither that puppy down
        to 1 bit, print it out, run it through a copy machine, rescan it....

>>      Better to just get the algorithm by reading the patent.
>
>That's the reason why I said SDMI did not give enough information.


                                                        -S


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
