Cryptography-Digest Digest #647, Volume #11      Thu, 27 Apr 00 15:13:01 EDT

Contents:
  Re: Magnetic Remanence on hard drives. ([EMAIL PROTECTED])
  Re: Magnetic Remanence on hard drives. (Francois Grieu)
  Re: "No adjacent image" hash assumption? (David A Molnar)
  Re: Vs: sci.crypt think will be AES? (David A. Wagner)
  Re: U-571 movie (Joaquim Southby)
  Re: papers on stream ciphers ("Douglas A. Gwyn")
  Re: papers on stream ciphers ("Douglas A. Gwyn")
  Re: Looking for a *simple* C Twofish source (David A. Wagner)
  Re: factor large composite (Johnny Bravo)
  Re: factor large composite (Johnny Bravo)
  Re: papers on stream ciphers (David A. Wagner)
  Re: Checksum algorithm which is ASCII (Terry Neckar)
  Re: Checksum algorithm which is ASCII (Terry Neckar)
  Re: "No adjacent image" hash assumption? (David A. Wagner)
  Re: S-Tools4 stego source code [ security evaluation ] - where to get it ? (Mr. First Night)
  Re: new Echelon article ("Douglas A. Gwyn")
  Re: new Echelon article ("Douglas A. Gwyn")
  Re: U-571 movie ("Douglas A. Gwyn")
  Re: OAP-L3:  What is the period of the generator? ("Douglas A. Gwyn")

----------------------------------------------------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Magnetic Remanence on hard drives.
Date: Thu, 27 Apr 2000 17:12:49 GMT

[EMAIL PROTECTED] wrote:
> Let me choose the make and model of floppy drive used.  Both will be
> commercial off-the-shelf models.  We'll put one drive in Computer A and
> put the other drive in Computer B.  Allow me to "age" either drive
> by running it through duty cycles equivalent to normal usage over time.
> Allow me to format a floppy on either drive.  Now put the floppy in
> drive A and fill it up with several ASCII data files.  Now take that
> floppy to Computer B and run a 3 pass PGP wipe.

Let's try a 4 terabyte RAID array instead, since floppies cost
pennies. :) Now, imagine it's filled with corporate information worth
varying amounts of money. The disks are hot-swappable, of course.

Now, how many times do I need to overwrite data so that if John
Q. Public buys a matching disk and quickly swaps it with one in the
array, he most likely can't get the erased data recovered at a
commercial shop?

The problem with situations like this is that, really, nobody
knows. What if J.Q. Public built a custom setup in his basement to
analyse magnetic media? Now he's probably better than the commercial
houses, worse than a professional forensics expert. What's safe?
What's not?

The problem isn't in the black and white areas, it's in the gray ones.

-- 
Matt Gauthier <[EMAIL PROTECTED]>

------------------------------

From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: Magnetic Remanence on hard drives.
Date: Thu, 27 Apr 2000 19:13:53 +0200

jungle <[EMAIL PROTECTED]> wrote:
> I have to this day the floppy that has been wiped [ by accident ]
> by one of the company employees ...
> 
> do you like to help to recover data from it ? 
> it is just a floppy disk ... wiped only with 3 passes under pgp ...


I throw in this URL:
 <http://www.kagaku.com/sigma/english/media.html>
with pictures on how to optically view magnetic recording.
The techniques shown are sensitive enough for high density
recording such as DAT (*).

If I was assigned the task to attempt data recovery on a removable 
floppy disk erased with a drive other than the one originally used to 
write on it, I'd first consider the use of these colloidal fluids, with 
a scan using optical techniques. I am reasonably confident that, past 
some degree of drive misalignment, it would work quite well, because 
some of the data written by the original drive is out of reach of the 
second one (more overwrite passes will hardly change that).

I have no precise idea how much misalignment occurs in today's 3.5" 
drives, but I do have some experimental data to throw in: back in the 
days of the Apple][ with 5.25" media, the programmer had exquisite 
control on the head stepper motor of the drive, and could move the head 
with an accuracy of a quarter of a track (by playing with 2 phases of the 
stepper motor simultaneously). I personally checked that it was possible 
to reduce the track spacing to 0.75 track with a fair success rate, which 
shows there was significant gap between tracks. On the other hand it was 
plain impossible to double the density (at least on the original 
Apple-branded drives), even without moving the disk out of the drive, 
which means that writing misaligned by 0.5 track prevented any further 
successful read on the adjacent tracks. The 0.25 track offset trick was 
very useful for recovering data written by misaligned drives. I tend to 
believe that, **using standard hardware**, it was not possible to recover 
overwritten data no matter how bad the drive misalignment was.

In hard drives [see title of the thread], it would surely be much 
more difficult, for a combination of reasons:
- higher density
- repeatable head alignment => accurate overwrite
- complex encoding
- more data to examine


   Francois Grieu


(*) The company is real. I use the device at
<http://www.kagaku.com/sigma/english/magnet_viewer.html>
in order to help people grasp how easy it is to read/forge
magnetic cards. But I doubt the thing is sensitive enough for
reading a floppy, much less data recovery.

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: "No adjacent image" hash assumption?
Date: 27 Apr 2000 17:20:58 GMT

Anton Stiglic <[EMAIL PROTECTED]> wrote:

> I guess this property would be useful if you have a scheme where you
> need to visually verify a key (by it's hash) for example.  What did you have
> in mind?

In private e-mail, someone mentioned a kind of "probabilistic
authentication" challenge-response protocol, based on something
similar. The idea seems to be to use hash functions which are
very fast, but "weak" in the sense that collisions within a small range of
the "real" value are easy to find. By probing randomly outside that range 
on each challenge, one can make the probability of an adversary passing
all responses negligible. 

That's sort of what I had in mind.

The question was somewhat related to a question Prof. Goldwasser had last
year:

In a signature scheme,
        key generation is probabilistic
        signature creation can be probabilistic, and usually *should be*
        but signature verification is usually deterministic. 
What happens if we allow a small probability of error in verifying
signatures? Do we get any benefits?

I had been thinking that some signature schemes include operations like
raising a generator g to the H(x) power, or multiplying by H(x). So if you
could show (or assume) that H() has some nice properties almost all the
time, you might be able to be more efficient. For instance, if you have an
H(x) which has no adjacent images, maybe you can save a multiplication 
in a verification equation that looks like 

        g^x = g^<product of stuff involving H(x)>

because the last bit of H(x) doesn't actually "matter" in some sense for
identifying what x is supposed to be. That would be a trivial savings, but
maybe you can do better by showing or assuming other relations don't exist
over the hash function. 

That's much further from the spirit of the question than this
"probabilistic authentication" idea, however. Unfortunately the
correspondent didn't recall the reference, but it can't be too hard to
find...

> I would think that if you could find x and y such as in a), you would be
> able to find collisions (like there would be some reduction or something).

Yeah, it seems like something which "shouldn't happen to a good hash
function." I don't see a way to make the reduction, though I'm not an
expert on hash function constructions.

Thanks,
-David

------------------------------

From: [EMAIL PROTECTED] (David A. Wagner)
Subject: Re: Vs: sci.crypt think will be AES?
Date: 27 Apr 2000 09:50:31 -0700

In article <8e6vhc$a9e$[EMAIL PROTECTED]>,
Helger Lipmaa <[EMAIL PROTECTED]> wrote:
> the fastest bulk encryption
> mode of Twofish (that is still slower than Rijndael) requires key scheduling
> that is at least 40 times slower than the key scheduling of Rijndael.

I believe this to be untrue.
Yes, Twofish's keyschedule is slower, but not *that* much slower.

All told, if you measure this in a fair and representative way,
the right factor seems to be about 5x, for applications where key
scheduling time is important.
It is true that the Rijndael key schedule is faster, but it need
not be 40x faster.

Please see Brian Gladman's Round 2 comment for a detailed analysis
that arrives at the 5x figure.  Here's a cite to his paper:
  Brian Gladman, ``Some Informal Reflections on Rijndael and Twofish'',
  http://csrc.nist.gov/encryption/aes/round2/comments/20000316-bgladman.pdf

You did not state how you arrived at the "40x" figure, but I can
guess that it rests on some shaky premises.  Here are several things
you did not say, which I think are relevant:

1. Twofish supports several keying options, with different
   tradeoffs between key-setup time and encryption time.
   Taking this into account, the factor changes from 40x
   to maybe something like 6x.

2. I think you are quoting keysetup timings for a Rijndael
   implementation that only supports encryption but not
   decryption.  Supporting decryption increases Rijndael's
   keysetup time by about 6x or something of that order,
   as far as I can tell.

3. For best comparison, one should count the time to set up
   a key and use it to encrypt a single block, not just to
   set up a key.  (Counting the total cost does make Twofish
   look better than counting just keysetup time, but it is only
   fair to count the total cost of both operations.)

If you take all this into account, you get a 5x factor in the following
way.  Twofish `zero keying' takes 1250 clocks for key expansion plus 860
clocks to encrypt a single block.  (All numbers are for a PPro-like
platform, best available assembly implementation.)  Comparable numbers
for Rijndael seem to be 190 + 237 clocks [Aoki+Lipmaa].  Dividing, we get
(1250+860)/(190+237) = 4.94.  (It is not clear to me whether the Rijndael
measurement reflects the cost of the decryption keyschedule.  If it does
not, then the 5x figure would presumably need to be lowered even further.)

If you see any omissions in the above discussion or calculation, I do
hope you will point out where I am in error.

------------------------------

From: Joaquim Southby <[EMAIL PROTECTED]>
Subject: Re: U-571 movie
Date: 27 Apr 2000 17:31:18 GMT

In article <[EMAIL PROTECTED]> Andrew Carol,
[EMAIL PROTECTED] writes:
>> The Japanese had a history of launching unannounced attacks.  Ask the
>> Russians.
>
>They did after Pearl Harbor, but their war with Russia was already old
>news in the 40's and one time does not make a "history" of it.
>
On the night of 8/9 Feb 1904, the Japanese attacked Port Arthur.  The
next day they invaded Korea and began an advance towards Manchuria.  As
an afterthought, they declared war on Russia.

In July of 1937, a Japanese regiment garrisoned by treaty in the Chinese
city of Tientsin used a fabricated incident (they had been setting up
incidents since 1931) as pretext to shell a Chinese fort and to extend
Japanese control over the Tientsin-Peking region.  As an afterthought,
they declared war on China.

On the morning of December 7, 1941...you know the rest.

Does three times make a history?

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: papers on stream ciphers
Date: Thu, 27 Apr 2000 16:51:42 GMT

"David A. Wagner" wrote:
> I thought in the standard communications setting, we assume the
> adversary has control over the network (or, at least, is able to
> send forged packets).  That sounds to me suspiciously like the
> beginnings of a chosen-ciphertext attack -- what am I overlooking?

To conduct a chosen-ciphertext attack, you have to be able to
get hold of the deciphered "plaintext", which isn't possible
in this setting.  (You can of course decipher using your own
key, but that doesn't help you to determine the original key
nor to obtain information about the legitimate plaintext.)

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: papers on stream ciphers
Date: Thu, 27 Apr 2000 16:52:46 GMT

almis wrote:
> The requirement for a cryptographically strong key is only that it be
> unpredictable.
> Given this belief, knowing a "clean" stretch of
> key gives no information about the rest of the key.

Sorry, I was talking about the Real World (tm).

------------------------------

From: [EMAIL PROTECTED] (David A. Wagner)
Subject: Re: Looking for a *simple* C Twofish source
Date: 27 Apr 2000 10:03:36 -0700

You cannot use table lookups?  If so, this is enough of an exotic
special-case requirement that I wouldn't be too surprised if you
can't find any code on the net that already avoids table lookups.

------------------------------

From: Johnny Bravo <[EMAIL PROTECTED]>
Subject: Re: factor large composite
Date: Thu, 27 Apr 2000 13:53:50 -0400

On Thu, 27 Apr 2000 12:01:06 +0200, Runu Knips
<[EMAIL PROTECTED]> wrote:

>Johnny Bravo wrote:
>>  The cost is nothing, ...
>
>Wrong. If your CPU is idle, 

  The CPU is not idle.  It is supposed to be running a shared client,
running the actual client or a modified client trying to factor an RSA
number by random guesses.  No idle CPU cycles are involved at any
point in this process.

-- 
  Best Wishes,
    Johnny Bravo

"The most merciful thing in the world, I think, is the inability
of the human mind to correlate all it's contents." - HPL

------------------------------

From: Johnny Bravo <[EMAIL PROTECTED]>
Subject: Re: factor large composite
Date: Thu, 27 Apr 2000 13:53:49 -0400

On 26 Apr 2000 16:49:22 GMT, David A Molnar <[EMAIL PROTECTED]>
wrote:

>Johnny Bravo <[EMAIL PROTECTED]> wrote:
>> nothing to do so, since the computing power is free and unused anyway.
>> There isn't much need to check each SETI packet 3 times. :)
>
>Remember the thread here a few months ago on evil and modified SETI@Home
>clients?

  Yeah, but those clients were just returning bogus information so
that the score for that person was inflated.  The SETI@Home people
were asking about ways to prevent that from happening.

-- 
  Best Wishes,
    Johnny Bravo

"The most merciful thing in the world, I think, is the inability
of the human mind to correlate all it's contents." - HPL

------------------------------

From: [EMAIL PROTECTED] (David A. Wagner)
Subject: Re: papers on stream ciphers
Date: 27 Apr 2000 10:26:34 -0700

In article <[EMAIL PROTECTED]>, Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
> To conduct a chosen-ciphertext attack, you have to be able to
> get hold of the deciphered "plaintext", which isn't possible
> in this setting.

I've seen situations where it is doable, and I suspect they may not
even be all that rare.  For example, earlier versions of IPSEC allowed
chosen-ciphertext attacks against the encryption algorithm.

(Consider an IPSEC-encrypted session transporting an email to an SMTP
server S.  Suppose the email will subsequently be forwarded by S to T
along a second SMTP link, this time in the clear.  Then an attacker can
send to S forged packets corresponding to the body of the email; S will
decrypt them, treat the resulting plaintext as the body of an email, and
forward the result to T in the clear.  This gives the attacker the chance
to see the deciphered "plaintext" by snooping the unencrypted S->T link.
We are in essence treating S as a decryption oracle.)

The above is a fairly general way that chosen-ciphertext attacks might
be mounted in practice, and I believe that, in many of today's protocols,
the main defense preventing it from being workable is the inclusion of a
MAC on each packet.  But it is a little scary to rely on the assumption
that the proposed stream cipher will always be used in conjunction with
a MAC; if they are separate standards, odds are that someone somewhere
will use the stream cipher alone without the MAC.  It is this failure
mode that leaves me unsatisfied with any cipher that is vulnerable to
chosen-ciphertext attacks.

------------------------------

From: Terry Neckar <[EMAIL PROTECTED]>
Subject: Re: Checksum algorithm which is ASCII
Date: Thu, 27 Apr 2000 18:08:01 GMT

It was already in the file.

Terry

Tom St Denis wrote:

> Terry Neckar wrote:
> >
> > I'm still willing to pay for help if anyone can figure this thing out.
> >
> > Thanks,
> > Terry
>
> How did you get that checksum?
>
> Tom


------------------------------

From: Terry Neckar <[EMAIL PROTECTED]>
Subject: Re: Checksum algorithm which is ASCII
Date: Thu, 27 Apr 2000 18:09:08 GMT

The characters following the $ can be any uppercase letter, or any number, 0 through 9.

Michael Wojcik wrote:

> In article <LzlM4.57559$[EMAIL PROTECTED]>, "Terry Neckar" 
><[EMAIL PROTECTED]> writes:
>
> > Does anyone know of a CRC algorithm that has six ASCII characters.  The file
> > I use is a text file similar to below.  If someone has the answer, I'll
> > gratefully pay them.  This algorithm is at least 10 years old.
>
> > [snip data]
>
> > CHECKSUM:   $ABCDE
>
> Are you sure it's a CRC, and not some other kind of checksum?  There
> are many checksum algorithms (an infinite number, in fact) that
> aren't CRCs.
>
> Are you sure it's ASCII?  IBM and some other sources use that
> initial dollar sign to indicate hex numbers in some contexts, and
> "ABCDE" is obviously a valid hex string.  Five hex digits is a
> somewhat unusual number, but there might be leading zeroes or
> this checksum might be computed in, say, IBM packed decimal format
> in a 6-byte field, which would produce five digits and a sign
> indicator that might be discarded from the output.
>
> Do you have any more information about what portions of the data
> the checksum might cover?  It's very difficult to even speculate
> about it without more information.
>
> --
> Michael Wojcik                          [EMAIL PROTECTED]
> AAI Development, MERANT                 (block capitals are a company mandate)
> Department of English, Miami University
>
> Even 300 years later, you should plan it in detail, when it comes to your
> summer vacation.  -- Pizzicato Five


------------------------------

From: [EMAIL PROTECTED] (David A. Wagner)
Subject: Re: "No adjacent image" hash assumption?
Date: 27 Apr 2000 10:31:37 -0700

In article <8e87ie$4dr$[EMAIL PROTECTED]>,
David A Molnar  <[EMAIL PROTECTED]> wrote:
> Given h : {0,1}^* ---> {0,1}^k , it is infeasible to
> a)  find two strings x and y such that abs( h(x) - h(y) ) = 1,
> 
> I think that the "adjacency" may be an
> example of an "evasive" relation -- that is, something you wouldn't expect
> to find easily in the output of a truly random hash function.

Yes.  A simple birthday-style argument shows that, if h is a random
function, finding such an "adjacency" requires O(2^{k/2}) time.

------------------------------

From: Mr. First Night <[EMAIL PROTECTED]>
Subject: Re: S-Tools4 stego source code [ security evaluation ] - where to get it ?
Crossposted-To: comp.security.pgp.discuss,alt.security.scramdisk
Date: Thu, 27 Apr 2000 12:11:39 -0700

Smart is Beautiful ..



------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: new Echelon article
Date: Thu, 27 Apr 2000 17:17:13 GMT

David A Molnar wrote:
> ... Then count the NSF and DARPA grants.

Of *course* when there is a large pot of money available, that's
what people actually tap into.  But that's irrelevant to the issue,
which was whether research is *necessarily* better controlled
(funded) by the government.

A similar poll would show that most Americans send their kids
to public schools, but that doesn't mean that they think private
schools would do a worse job.  They merely have already been
forced to pay for the public schools, so they use them instead
of paying a second time for their own choice (unless they are
sufficiently well-to-do).  This is the reason behind the idea
for "tax vouchers for education", which is a compromise to
continue public school funding while allowing parents to
affordably choose a better alternative.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: new Echelon article
Date: Thu, 27 Apr 2000 17:10:28 GMT

Diet NSA wrote:
> OTOH, gov't funding helped spur the
> development of modern computing,
> satellites, & data networks which have
> enabled the multi-trillion dollar IT sector
> (in addition the funding has benefited the
> aerospace, energy, health, etc. sectors).

In actuality, most development of computing, communication
satellites, and networks occurred in the commercial sector.
While there was some development in these areas using
government funding, much of it didn't reach the public.
The main item in these areas contributed by government
was the satellite launchers, although even there quite a
lot of the later development funds came from the budget of
the manufacturers.  In fact, the political process that
created the Space Shuttle just about killed off the
launcher business, although after the Challenger disaster
it started to make a comeback, and new manufacturers of
privately-funded launch platforms have popped up.
You of course are also thinking of the ARPAnet, which
was indeed a valuable resource, but networking was
evolving anyway, and who is to say that the Internet
wouldn't have been better if it had evolved from a
different source?  Most people think of the Internet in
terms of Web browsers, which started as an individual's
project that was not the result of government planning
(although some of his salary may have come from
government funds).

I'm not knocking the worthwhile research that was funded
by the government, just the idea that it would not have
been at least as good if the only source of financing
had been private.  In fact I *work* at a research lab
that is owned by the government, but much of what we do
is geared toward specifically governmental applications
that the "private sector" would not normally choose to
develop on its own.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: U-571 movie
Date: Thu, 27 Apr 2000 17:36:09 GMT

[EMAIL PROTECTED] wrote:
> The Americans never did get wise till late in the war.

Get wise to what?  Americans were extensively involved
with the Enigma cracking effort, both at Bletchley Park
and Stateside.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: OAP-L3:  What is the period of the generator?
Date: Thu, 27 Apr 2000 17:44:45 GMT

Anthony Stephen Szopa wrote:
> In order to answer your question we must determine what you are
> referring to when you say:  pseudo random number generator.

"What is the period of the output, assuming the input and key
(including any that is prompted for, or otherwise taken from
the environment, during operation) consists of constant 0 bits
(or if 0 bits don't fit the format, whatever does, along the
same lines)?"

This is unlikely to be the simple product of the periods of
the various subsystems.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
