Cryptography-Digest Digest #791, Volume #13       Sun, 4 Mar 01 02:13:00 EST

Contents:
  The Foolish Dozen or so in This News Group (Anthony Stephen Szopa)
  Re: OverWrite freeware completely removes unwanted files from harddrive ("Douglas A. Gwyn")
  Re: super strong crypto, phase 3 ("Douglas A. Gwyn")
  Re: RSA Key Generation ("Michael Brown")
  Re: The Foolish Dozen or so in This News Group ("Douglas A. Gwyn")
  Re: The Foolish Dozen or so in This News Group ("Scott Fluhrer")
  Re: Urgent DES Cipher source code !!!!! (Paul Crowley)
  Re: Completely wiping HD (Paul Crowley)
  Re: The Foolish Dozen or so in This News Group (Anne & Lynn Wheeler)

----------------------------------------------------------------------------

From: Anthony Stephen Szopa <[EMAIL PROTECTED]>
Crossposted-To: alt.hacker
Subject: The Foolish Dozen or so in This News Group
Date: Sat, 03 Mar 2001 20:50:32 -0800

The Foolish Dozen or so in This News Group

Read below.  It'll be just like being forced to look into a mirror. 
You'll see who you all really are.  Read it and weep.

Here is why I think Ciphile Software's OverWrite program actually 
does overwrite the target file the 27 times it claims to, contrary 
to the doubt cast by most if not all in these newsgroups:

Here is complete, detailed pseudo-code for pass number 2 of 
Ciphile Software's OverWrite Security Program (a C sketch of the 
same steps follows the list):


1)  At the beginning of the second pass, the file is opened.
If the file fails to open properly an error is thrown and the 
process ends.

2)  If the file opens properly then it is overwritten.

3)  Then the file is closed.
If the file fails to close properly then an error is thrown and the
process ends.

4)  Finally, if the flag is set, that is, if the file OverChk2.txt is
found, then the process ends.  Otherwise the third pass is begun.
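
Taken at face value, the four steps above map onto something like the
following C sketch.  This is an illustration only, not Ciphile's actual
source: the pattern byte, the byte-at-a-time loop, the placeholder file
name, and the error handling are my assumptions.

#include <stdio.h>

/* Illustrative sketch of "pass 2": open, overwrite, close, check flag file. */
static int overwrite_pass2(const char *path)
{
    FILE *fp = fopen(path, "r+b");             /* 1) open the target file      */
    if (fp == NULL) {
        perror("fopen");                       /*    open failed: stop         */
        return -1;
    }

    if (fseek(fp, 0L, SEEK_END) != 0) { fclose(fp); return -1; }
    long length = ftell(fp);                   /*    find the file's size      */
    rewind(fp);

    for (long i = 0; i < length; i++) {        /* 2) overwrite every byte      */
        if (fputc(0xA5, fp) == EOF) {          /*    with an arbitrary pattern */
            fclose(fp);
            return -1;
        }
    }

    if (fclose(fp) == EOF) {                   /* 3) close the file; fclose    */
        perror("fclose");                      /*    reports EOF on failure    */
        return -1;
    }

    FILE *flag = fopen("OverChk2.txt", "rb");  /* 4) is the flag file present? */
    if (flag != NULL) {
        fclose(flag);
        return 1;                              /*    flag set: stop here       */
    }
    return 0;                                  /*    otherwise begin pass 3    */
}

int main(void)
{
    return overwrite_pass2("target.dat") < 0;  /* "target.dat" is a placeholder */
}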

I understand your optimization explanations and those given in the 
few documents I have read.  I was also familiar with the fact that a
block of data is read and stored in the cache, and with the reasons
for this.  I believe the assumption in the documentation, and in your
casts of doubt, was that the file being repeatedly written to is 
never instructed to be closed.  In other words, a seek is made to the
beginning of the file, or to wherever the subsequent write is to be 
done, and a second write is attempted regardless of whether the prior 
write(s) have actually been physically carried out.  This makes sense
when repeated writes or overwrites are being made to a file that 
remains open.

As I have clearly stated above, the source code not only makes the
fclose() call but checks the return value from this operation.  If 
the return value is EOF then the fclose() has failed, and if the 
fclose() succeeds then the return value is zero.  (Likewise, fopen() 
returns a null pointer on failure.)  This is the check I asked any of 
you if you knew how to make.  Well, it is part of the fclose() and
fopen() functions, as it is of most of the standard I/O functions in 
C and C++: the return value tells you whether the call passed or 
failed.

I do check pass / fail in the source code, as stated above.
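
For the record, the return conventions differ from call to call rather
than being one universal pass/fail value.  A compact sketch of checking
each of the calls involved (the file name here is a placeholder, not
anything from OverWrite):

#include <stdio.h>

int main(void)
{
    char block[512] = { 0 };

    FILE *fp = fopen("target.dat", "r+b");            /* failure: null pointer   */
    if (fp == NULL)
        return 1;

    if (fwrite(block, 1, sizeof block, fp) != sizeof block)
        return 1;                                      /* failure: short count    */

    if (fclose(fp) == EOF)                             /* failure: EOF (not NULL) */
        return 1;                                      /* success: zero           */

    return 0;
}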

So in order for the FUD you are all casting against Ciphile Software's
OverWrite program to succeed and to cover up your own ignorance of the
pass / fail return values of all functions, you will have to claim 
that the OS not only optimizes write operations as you describe but 
in fact LIES because the OS has no idea whether or not a close or open
function was carried out successfully until it is actually PHYSICALLY
carried out.  To optimize as you all have been claiming in Ciphile
Software's OverWrite program, the OS would have to LIE that it had
successfully closed the file in order to proceed to carry out a
subsequent write in cache before the actual prior write and close to 
the file.  Do you see where this leads you?  To NoWheresville, man.  
To NoWheresville.

Do any of you claim that any OS that you know of actually fudges and
outright LIES when it is instructed to carry out a function(), telling
the compiled program that the function was carried out successfully
when the OS has no way of knowing this until the function has actually
physically been carried out, just so as to optimize its resources?
The specific functions we are talking about here are the fclose() and 
fopen() functions.  Can't get more basic than these.

I hear that LSD and DMT are great therapies for narrowing the gaps 
in one's conceptual continuity.

P.S.

You do have one last hope.  His name is Bill Gates.  Yes.  He is 
your last hope to prevail in this thread.  Yes, indeed.  If Bill
Gates is so screwed up as to produce an OS that would make such an
assumption about whether or not a function such as fclose() or 
fopen() succeeded before these functions were actually physically
carried out, just so he could claim his OS is superbly optimized, 
then you are all correct.

What do you think? 

(I only get a good laugh like this once in about three months.)

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Crossposted-To: alt.hacker
Subject: Re: OverWrite freeware completely removes unwanted files from harddrive
Date: Sun, 04 Mar 2001 04:53:17 GMT

Anthony Stephen Szopa wrote:
> If you read the Theory and Processes I & II Help Files from my web
> site you will see that trying to crack my software is like trying to
> guess ever longer strings of random numbers resulting from throwing
> ten sided dice.

The same sort of argument has been used repeatedly in the history
of cryptography, e.g. for the Enigma, and it has usually been
proven wrong.  For such an argument to prove security against
cryptanalysis, you *also* need to show that *every* possible
attack has to blindly "guess" at those strings.  And the lack of
knowledge of any other approach is no proof at all.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: super strong crypto, phase 3
Date: Sun, 04 Mar 2001 04:58:34 GMT

"Douglas A. Gwyn" wrote:
> So far it is clear that a known-plaintext attack is infeasible.

Oops, what I meant by that was that a brute-force key search
along those lines is infeasible.  Even though I stipulated a
"reasonable" symmetric block encryption be used, I had in mind
that there could be other, "unknown" cryptanalytic attacks.
Those will be stymied by the additional development in phase 3.

------------------------------

From: "Michael Brown" <[EMAIL PROTECTED]>
Subject: Re: RSA Key Generation
Date: Sun, 4 Mar 2001 18:08:46 +1300


You were showing attacks on RSA so I thought I'd do a little ad :)

Sorry if I offended anyone,
Michael

"Luis Duarte" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> I wonder what the "ripple tree" scheme has to do with
> the question "RSA key generation"...
>
> On Sat, 3 Mar 2001 19:01:53 +1300, "Michael Brown"
> <[EMAIL PROTECTED]> wrote:
>
> >You might be interested in my idea here, though:
> >http://odin.prohosting.com/~dakkor/rsa/
> >
> >Michael
> >
> >
>



------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: The Foolish Dozen or so in This News Group
Date: Sun, 04 Mar 2001 05:11:28 GMT

I don't know to what you were responding, but here is the straight
scoop on fclose etc.:

Within a (single-threaded) process, all I/O on the same stdio stream
is coordinated so that the result is consistent with the simple "do
it immediately" model, even though some of the data is cached in
buffers within the process.  Upon normal program termination, all
open streams are closed as though via fclose.  So far, so good.

Within the operating system, for any multitasking system worth using,
all access to the same file by multiple processes (tasks) is
coordinated by the operating system so that the result is consistent
with the simple "do it immediately" model.  This is true even though
some of the file data is cached in fast memory.  Such caches are
eventually written back to secondary storage, unless the system
crashes or is shut down improperly.  So far, so good.

One does have to watch out for atomicity of directory operations
when multiple tasks might access the same file name; for example,
when a file is renamed, there might be a temporary window during
which neither the new nor the old names exist within the directory,
depending on the operating system.  And within a single process,
if a file is opened concurrently more than once with the "same name",
there is a significant likelihood that data in the multiple buffers
associated with those streams will *not* be coordinated by the stdio
implementation.
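
A small demonstration of that last point, assuming a typical Unix-style
stdio implementation (the scratch file name and the exact buffering
behaviour are implementation details, so results may vary):

#include <stdio.h>

int main(void)
{
    /* "demo.txt" is a scratch name invented for this sketch. */
    FILE *out = fopen("demo.txt", "w");
    FILE *in  = (out != NULL) ? fopen("demo.txt", "r") : NULL;
    if (out == NULL || in == NULL)
        return 1;

    fputs("hello", out);                       /* sits in out's private stdio buffer */

    printf("before fflush: %d\n", fgetc(in));  /* usually EOF (-1): the data has not
                                                  reached the file yet               */
    clearerr(in);                              /* clear the EOF indicator            */
    fflush(out);                               /* hand out's buffer to the OS        */

    printf("after fflush:  %d\n", fgetc(in));  /* now 'h' (104) on most systems      */

    fclose(out);
    fclose(in);
    return 0;
}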

------------------------------

From: "Scott Fluhrer" <[EMAIL PROTECTED]>
Crossposted-To: alt.hacker
Subject: Re: The Foolish Dozen or so in This News Group
Date: Sat, 3 Mar 2001 22:03:36 -0800


Anthony Stephen Szopa <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> The Foolish Dozen or so in This News Group
>
> Read below.  It'll be just like being forced to look into a mirror.
> You'll see who you all really are.  Read it and weep.
>
> Here is why I think Ciphile Software's OverWrite program actually
> does overwrite the file to be overwritten the 27 times it claims it
> does, contrary to the doubt cast by most if not all in these news
> groups:
>
> Here is a complete detailed pseudo-code for pass number 2 of
> Ciphile Software's OverWrite Security Program:
>
>
> 1)  At the beginning of the second pass, the file is opened.
> If the file fails to open properly an error is thrown and the
> process ends.
>
> 2)  If the file opens properly then it is overwritten.
>
> 3)  Then the file is closed.
> If the file fails to close properly then an error is thrown and the
> process ends.
>
> 4)  Finally, if the flag is set, that is, if the file OverChk2.txt is
> found, then the process ends.  Otherwise the third pass is begun.
>
> I understand your optimization explanations and those given in the
> few documents I have read.  I was also familiar with the fact that a
> block of data is read and stored into the cache and the reasons for
> this.  I believe that the assumption in the documentation and in your
> casts of doubt was that the opened file that is repeatedly written to
> is never instructed to be closed.  In other words, a seek to the
> beginning of the file or wherever the subsequent write is to be done
> is made and a second write is attempted regardless if the prior
> write(s) have actually been physically carried out.  This makes sense
> when repeated writes or overwrites are being made to a file that
> remains open.
>
> As I have clearly stated above, the source code not only makes the
> fclose() call but checks the return value from this operation.  If
> the return value is EOF then the fclose() has failed, and if the
> fclose() succeeds then the return value is zero.  (Likewise, fopen()
> returns a null pointer on failure.)  This is the check I asked any of
> you if you knew how to make.  Well, it is part of the fclose() and
> fopen() functions, as it is of most of the standard I/O functions in
> C and C++: the return value tells you whether the call passed or
> failed.
>
> I do check pass / fail in the source code, as stated above.
Good for you.

>
> So in order for the FUD you are all casting against Ciphile Software's
> OverWrite program to succeed and to cover up your own ignorance of the
> pass / fail return values of all functions, you will have to claim
> that the OS not only optimizes write operations as you describe but
> in fact LIES because the OS has no idea whether or not a close or open
> function was carried out successfully until it is actually PHYSICALLY
> carried out.  To optimize as you all have been claiming in Ciphile
> Software's OverWrite program, the OS would have to LIE that it had
> successfully closed the file in order to proceed to carry out a
> subsequent write in cache before the actual prior write and close to
> the file.  Do you see where this leads you?  To NoWheresville, man.
> To NoWheresville.
No.  Even if an OS tries to write a buffer, and fails, it's not totally out
of luck.  After all, it's in control of the file system, so it can mark that
sector as bad, and reallocate another disk sector to hold the contents being
written.  See, logically nothing "failed", because the write took place, and
when the application rereads the file, it reads the correct contents.

>
> Do any of you claim that any OS that you know of actually fudges and
> outright LIES when an instruction is given to carry out a function()
> then claims to the compiled program that the function was carried out
> successfully when the OS has no way of knowing this until the
> function has actually physically been carried out just so as to
> optimize its resources?
There is this little-known OS (really, a family of OS's) known as Unix which
has precisely this property.  Writes to a file will go into a buffer pool,
and can remain there for quite a while.  Some versions of Unix provide ways
to force writes to go directly to the disk, but not all of them, and in any
case I rather doubt you attempted to use that facility.  Besides, as others
pointed out, it's not really a lie, because your fwrite() routine
isn't really "write these physical harddisk sectors", but instead "make
these changes to the file".  That is actually done: fread() that same
file immediately afterwards and you will see the changes.

Traditionally, Unix has assumed error-free media (so writes "never fail").
Again, the above trick of dynamically remapping sectors (which the device
driver or controller can do just as well as the OS) can make that appear to
be the case even when it's not.
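
A sketch of what "forcing the write toward the disk" looks like where such
a facility exists; fsync() is the POSIX spelling, "target.dat" is a
placeholder, and even this only pushes the data to the device, which may
well have a write cache of its own:

/* POSIX-flavoured sketch: a successful write() means the data is in the
   kernel's buffer cache; fsync() asks the kernel to push it to the device. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char pattern[512] = { 0 };                 /* one sector's worth of zeros  */

    int fd = open("target.dat", O_WRONLY);     /* placeholder file name        */
    if (fd < 0) { perror("open"); return 1; }

    if (write(fd, pattern, sizeof pattern) != (ssize_t)sizeof pattern)
        return 1;                              /* data now sits in the cache   */

    if (fsync(fd) != 0)                        /* request write-back to device */
        return 1;

    return close(fd) != 0;
}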

BTW: I just tried it on my Linux box.  It managed to open a file, write 1k,
and close the file one million times in 156 seconds.  No, my disk drive does
not spin at 384,000 RPM, and in fact the disk was rather quiet while my
program was running.  Have you tried such a test?
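
For anyone who wants to repeat the experiment, a rough sketch of it (the
file name, the 1 KB buffer, and the iteration count are simply the
parameters described above); if every fclose() really implied a completed
physical write, a million iterations could not possibly finish this fast:

#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    char buf[1024];
    memset(buf, 'x', sizeof buf);

    time_t start = time(NULL);
    for (long i = 0; i < 1000000L; i++) {
        FILE *fp = fopen("bench.dat", "wb");       /* open a file ...           */
        if (fp == NULL)
            return 1;
        if (fwrite(buf, 1, sizeof buf, fp) != sizeof buf)
            return 1;                              /* ... write 1 KB ...        */
        if (fclose(fp) == EOF)
            return 1;                              /* ... close, a million times */
    }
    printf("elapsed: %ld seconds\n", (long)(time(NULL) - start));
    return 0;
}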


> The specific functions we are talking about
> here are the fclose() and fopen() functions.  Can't get more basic
> than these.
Sure we can.  The caching can also occur in the disk driver, or on the disk
controller itself.  These things typically have no knowledge of the file
system, and are typically not informed of fclose and fopen operations.
What they see is often limited to "read sector" and "write sector".  Rather
more fundamental.

Another nasty problem arises if disk compression is in use.  Disk
compression (or at least, the disk compression systems I've seen) works by
maintaining a mapping between logical sectors (which are fixed size, and
typically somewhat large) and physical sectors (which are variable, and
typically multiples of 512 bytes or 1k).  The OS compresses the data that
makes up the logical sector, and then places it in a physical sector that's
exactly the right size to hold the compressed data (keeping track, of
course, of where it put it).  If the data you are using to overwrite has
drastically different compression characteristics from the data that's
currently in the file (that is, it compresses much better or much worse),
then the OS can plausibly write that data to another part of the disk that
happens to be a better fit for the new physical sector size, free up the
sectors that were holding the old data (modifying the logical/physical
mapping structures to reflect the move), and leave the old physical sectors
on the free physical sector list.  And so, even though the OS did exactly
what you told it to (overwrite the logical file), it didn't do what you
wanted it to (overwrite the precise physical sectors that contained the
file).
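
A toy model of that remapping, with invented sizes and structures, just to
make the bookkeeping concrete: the logical sector gets a new physical home,
and the old physical sectors simply drop onto the free list without ever
being overwritten.

#include <stdio.h>

struct extent { unsigned phys_start, phys_len; };   /* where a logical sector's  */
                                                     /* compressed data lives     */
static struct extent map[4] = {                      /* logical -> physical map   */
    {0, 4}, {4, 4}, {8, 4}, {12, 4}
};
static unsigned next_free = 16;                      /* trivial bump allocator    */

static void rewrite_logical(unsigned lsec, unsigned new_len)
{
    struct extent old = map[lsec];
    map[lsec].phys_start = next_free;                /* new, better-fitting home  */
    map[lsec].phys_len   = new_len;
    next_free += new_len;
    printf("logical %u: old phys %u..%u freed (old bits still on disk), "
           "data now at phys %u..%u\n",
           lsec, old.phys_start, old.phys_start + old.phys_len - 1,
           map[lsec].phys_start, map[lsec].phys_start + new_len - 1);
}

int main(void)
{
    rewrite_logical(2, 1);    /* overwrite pattern compresses much better */
    return 0;
}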

And, of course, you have the problem that Gwyn (?) cited: disk drives
often write bits to parts of the drive without you asking them to.  Being
careful with your fopen() and fclose() calls won't help you there
either.

--
poncho




------------------------------

Subject: Re: Urgent DES Cipher source code !!!!!
From: Paul Crowley <[EMAIL PROTECTED]>
Date: Sun, 04 Mar 2001 06:33:06 GMT

"MVJuhl" <[EMAIL PROTECTED]> writes:
> It actually IS possible not to be able to find what you are looking
> for when using search engines.  You might not have the knowledge about
> what you are searching for in order to refine your search and find it.

Nonsense.  Search on "DES Cipher source code" on Google - the subject
line.

http://www.google.com/search?q=DES+cipher+source+code

First hit: Applied Cryptography.

The second hit contains many implementations of DES.

So it wasn't a problem with thinking of search terms!

If everyone adopted the practice of asking the newsgroup before trying
even the most trivial Google search, no-one would ever get an answer,
because we would be swamped by trivial questions.
-- 
  __
\/ o\ [EMAIL PROTECTED]
/\__/ http://www.cluefactory.org.uk/paul/

------------------------------

Subject: Re: Completely wiping HD
From: Paul Crowley <[EMAIL PROTECTED]>
Date: Sun, 04 Mar 2001 06:33:06 GMT

David Griffith <[EMAIL PROTECTED]> writes:
> I wish to completely wipe a 2 gig hard disk.  There is now no data I want
> to keep; however, neither do I want anything to be recoverable.

In practice, you can get pretty good security fairly conveniently
using "Wipe":

http://www.citeweb.net/berke/wipe/
-- 
  __
\/ o\ [EMAIL PROTECTED]
/\__/ http://www.cluefactory.org.uk/paul/

------------------------------

Crossposted-To: alt.hacker
Subject: Re: The Foolish Dozen or so in This News Group
Reply-To: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
From: Anne & Lynn Wheeler <[EMAIL PROTECTED]>
Date: Sun, 04 Mar 2001 07:06:25 GMT

"Scott Fluhrer" <[EMAIL PROTECTED]> writes:
> And, of course, you have the problem that Gwyn (?) cited that disk drives
> often write bits to parts of the drive without you asking it to.  Being
> careful with your fopen() and your fclose() functions won't help you there
> either.

it may be possible to use some form of async i/o with raw read/write.
note however that disk electronics criteria for a write tend to be
more demanding than for a read ... a write failure and system (or even
controller) re-allocation of a block to a new physical disk sector
doesn't necessarily mean that the original physical disk record is
unreadable. disk sparing around a bad spot can take several seconds as
the drive tries to find good spare spots.
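
as a sketch of the sort of raw read/write being alluded to, here is a
linux-flavoured example; the /dev/sdX path, the O_DIRECT flag, and the
512-byte size are placeholder assumptions, and as noted above even raw
access says nothing about sectors the drive has already spared out.

#define _GNU_SOURCE                               /* for O_DIRECT on linux      */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *buf;
    if (posix_memalign(&buf, 512, 512) != 0)      /* O_DIRECT wants aligned i/o */
        return 1;

    int fd = open("/dev/sdX", O_RDONLY | O_DIRECT);   /* placeholder device     */
    if (fd < 0) { perror("open"); return 1; }

    ssize_t n = pread(fd, buf, 512, 0);           /* read logical sector 0      */
    printf("read %zd bytes from logical sector 0\n", n);

    close(fd);
    free(buf);
    return 0;
}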

a couple of URLs alta-vista found on (sector & record) sparing:
http://til.info.apple.com/techinfo.nsf/artnum/n24530
http://www.eros-os.org/design-notes/DiskFormatting.html
http://docs.rinet.ru:8080/UNT4/ch28/ch28.htm
http://mlarchive.ima.com/winnt/1998/Nov/2142.html

misc. words from scsi standard

  Any medium has the potential for defects which can cause user data to be 
lost.  Therefore, each logical block may contain information which allows the 
detection of changes to the user data caused by defects in the medium or other 
phenomena, and may also allow the data to be reconstructed following the 
detection of such a change.  On some devices, the initiator has some control 
over this through use of the mode parameters.  Some devices may allow the initiator 
to examine and modify the additional information by using the READ LONG and 
WRITE LONG commands.  Some media having a very low probability of defects may 
not require these structures.

  Defects may also be detected and managed during execution of the FORMAT UNIT 
command.  The FORMAT UNIT command defines four sources of defect information.  
These defects may be reassigned or avoided during the initialization process 
so that they do not appear in a logical block.

  Defects may also be avoided after initialization.  The initiator issues a 
REASSIGN BLOCKS command to request that the specified logical block address be 
reassigned to a different part of the medium.  This operation can be repeated 
if a new defect appears at a later time.  The total number of defects that may 
be handled in this manner can be specified in the mode parameters.

  Defect management on direct-access devices is usually vendor specific.  
Devices not using removable medium typically optimize the defect management 
for capacity or performance or both.  Devices that use removable medium 
typically do not support defect management (e.g., some floppy disk drives) or 
use defect management that is based on the ability to interchange the medium. 
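
to make the REASSIGN BLOCKS description above a bit more concrete, here is
a sketch that only builds, in memory, the 6-byte CDB and a short-format
defect list for a single logical block address; the field layout is my
reading of the scsi-2 text, the example LBA is arbitrary, and nothing here
is sent to any device.

#include <stdio.h>

int main(void)
{
    unsigned long lba = 123456UL;                   /* block to be reassigned     */

    unsigned char cdb[6] = { 0x07, 0, 0, 0, 0, 0 }; /* opcode 07h; bytes 1-4      */
                                                    /* reserved, byte 5 control   */
    unsigned char list[8] = {
        0, 0,                                       /* defect list header:        */
        0, 4,                                       /* reserved, then list length */
        (unsigned char)(lba >> 24),                 /* one defect descriptor:     */
        (unsigned char)(lba >> 16),                 /* the LBA, big-endian        */
        (unsigned char)(lba >> 8),
        (unsigned char)(lba)
    };

    printf("CDB:         ");
    for (int i = 0; i < 6; i++) printf("%02x ", cdb[i]);
    printf("\ndefect list: ");
    for (int i = 0; i < 8; i++) printf("%02x ", list[i]);
    printf("\n");
    return 0;
}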

-- 
Anne & Lynn Wheeler   | [EMAIL PROTECTED] -  http://www.garlic.com/~lynn/ 

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
