> For example, if we are running under a Unix-style kernel, we can make
> stat calls on both files. And if both the st_dev and st_ino fields of
> the resulting stat structures are the same, we should consider the files
> the same.
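
(For concreteness, the check being proposed amounts to roughly the
following -- a sketch only, and the helper name is mine:)

    /* Sketch of the proposed "same file" test on a POSIX system. */
    #include <sys/stat.h>

    /* Returns 1 if the two paths appear to name the same file, 0 if
     * not, -1 on error.  Hypothetical helper, for illustration only. */
    static int looks_like_same_file(const char *in, const char *out)
    {
        struct stat si, so;

        if (stat(in, &si) == -1 || stat(out, &so) == -1)
            return -1;  /* e.g. the output may not exist yet */

        return si.st_dev == so.st_dev && si.st_ino == so.st_ino;
    }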

Why should we? Suppose the filesystem internally uses inode numbers wider
than the st_ino field. Is it guaranteed that matching st_dev and st_ino
fields mean the files are the same? Where is this guarantee?

Suppose the filesystem uses internal file versioning and the output file is
a new version of the input. Will they have the same inodes or different
ones? Do you know? I sure don't. (And I could think of reasons to do it
either way.)

Trying to keep the user from shooting himself in the foot risks shooting
him in the foot even when he did not ask for the protection.

> If a DOS-style filesystem is mounted locally or remotely under a
> Unix-style OS, we can rely on the OS kernel to emulate that much of Unix
> filesystem semantics.

Right, so long as we don't do stupid things (thinking they're smart) that
break the emulation. These suggestions are exactly the kind of "smart"
thing that backfires.

> If we are running under a DOS/Windows kernel, it is impossible to delete
> an open file. So, if we open the input file, we can then try to unlink
> the output file, and if this fails, we should consider it a fatal error
> (not necessarily because the files are the same, but because it is not
> safe to write to this file anyway).
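
(Concretely, that probe would be something like the following sketch --
helper name mine -- and note what it does when the files are *not* the
same:)

    /* Sketch of the proposed DOS/Windows probe: open the input, then
     * try to delete the output.  On DOS/Windows, deleting an open file
     * fails, so a failure is taken to mean the output is unsafe. */
    #include <stdio.h>

    static FILE *open_input_checked(const char *in, const char *out)
    {
        FILE *fp = fopen(in, "rb");
        if (fp == NULL)
            return NULL;

        if (remove(out) != 0) {
            fclose(fp);     /* treat as fatal, per the proposal */
            return NULL;
        }
        /* If we get here, the output file has been destroyed -- even
         * if it was an unrelated file the user wanted to keep. */
        return fp;
    }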

Why unlink the output file if the user didn't ask us to do that? That could
destroy version history the user wanted to keep.

> This is why I proposed my three-step approach:
> 1. Check if the input and output look the same, and if so report an
> error.

This is not possible to do reliably.

> 2. Use a special syntax if the user really wants to overwrite the input
> file with encrypted data.

> 3. If this special syntax is used, take some steps to avoid leaving
> unencrypted copies lying around.
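
(I assume step 3 means something like overwriting the plaintext before
unlinking it; this sketch is my interpretation, not the poster's code.
Note that even this is filesystem-dependent: journaling, COW, or versioned
filesystems may keep the old blocks anyway.)

    /* Sketch: zero out a file's contents, then unlink it. */
    #include <stdio.h>
    #include <string.h>

    static int scrub_and_unlink(const char *path)
    {
        char  zeros[4096];
        long  n;
        FILE *fp = fopen(path, "r+b");

        if (fp == NULL)
            return -1;
        if (fseek(fp, 0L, SEEK_END) != 0 || (n = ftell(fp)) < 0) {
            fclose(fp);
            return -1;
        }
        rewind(fp);
        memset(zeros, 0, sizeof(zeros));
        while (n > 0) {
            size_t chunk = n > (long)sizeof(zeros)
                               ? sizeof(zeros) : (size_t)n;
            if (fwrite(zeros, 1, chunk, fp) != chunk) {
                fclose(fp);
                return -1;
            }
            n -= (long)chunk;
        }
        fclose(fp);
        return remove(path);
    }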

You can perhaps solve this problem by adding complexity, but the problem is
that you are forcing the user to anticipate your "smart algorithm", how it
will screw him over, and how to fix it by specifying new flags. This makes
things very difficult for the user, who now has to take into account not
only the semantics of the filesystem he's using but also exactly how your
heuristics will break on that filesystem.

> With this approach, step 1 can be made robust enough with the technique
> described above. And it seems to me that step 3 can also be implemented
> so it works in 99.9% of cases without digging into filesystem-specific
> things.
>
> The only dubious case is a versioned filesystem (like VMS) mounted under
> Unix or DOS/Windows. It may simply hide the versioning mechanism from the
> remote system. At least I don't remember any tools that allow manipulating
> file versions from a DOS DECnet client.
>
> But a system which works in the most common cases and has a clearly
> documented list of exceptions is better than no check at all.

I think a simple tool that does what you ask it to do is best. However, I
don't feel particularly strongly about it in this case.

DS

