On 4 Feb., 23:09, "Matt Wozniski" <[EMAIL PROTECTED]> wrote:
> On Feb 4, 2008 4:29 PM, krischik wrote:
>
> > On 4 Feb., 21:10, "Matt Wozniski" wrote:
>
> > > While this would be nice, it would require support code from every
> > > application you have.  It may only be 6 lines, but 6 lines * 5000
> > > binaries is much more code than is in vim for line ending detection.
>
> > But the other 5000 applications need detection as well. I did mention
> > webserver and browser getting detection wrong. konqueror/nautilus/
> > explorer need detection, and far more complex than vim's. And sure,
> > OS/2 needed detection. But there you could always override a faulty
> > detected value by manually correcting the EAs.
>
> I'm not sure that most other apps do need detection.  wget, for
> instance, doesn't have to care what line endings the data it saves
> has.  But, for what you want, it would have to detect and save it,
> even though it doesn't use it.

No, an EA-aware wget would request the EAs already attached to the
file from an EA-aware FTP server - the same way it requests file
permissions today when used with "--preserve-permissions".

> Shell redirection has no idea what's
> going through it, so it has no way to possibly say what Content-type
> it is...  or dd...

That's true.

> or tar extracting files that were created on a
> filesystem that didn't track extended attributes...

Well, you might like to ask the authors - xattr and ACL support was
added to tar last year:

http://www.redhatmagazine.com/2007/07/02/tips-from-an-rhce-tar-vs-star-the-battle-of-xattrs/
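For example, a sketch with a reasonably recent GNU tar (the --xattrs
flag; older builds may lack it, and whether any attributes are
actually recorded depends on the filesystem):

```shell
# Create a file and archive it, asking tar to record any
# extended attributes attached to it.
echo 'hello' > demo.txt
tar --create --xattrs --file demo.tar demo.txt

# Extract into a scratch directory, restoring recorded xattrs.
mkdir -p scratch
tar --extract --xattrs --file demo.tar --directory scratch
cat scratch/demo.txt
```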

> The issue isn't waiting for it to catch on, it's that until it's
> universally available it only increases the amount of code and
> maintenance burden.

You are proving my point: backward compatibility has hobbled software
development.

> Things like marking the content encoding might be
> more useful, but still, not good to work with...  If, for example, you
> had a file "test.txt" encoded in UTF-16 and with extended attributes
> marking it as such, and your locale is set to use UTF-8, what would
> you expect the result of "cp test.txt test2.txt" to be?

I would use "cp --archive test.txt test2.txt", in which case the EAs
are copied as well - at least on SuSE Linux 9.2 onwards. And of course
the file would still be UTF-16; after all, it's "copy", not "convert".
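To illustrate with GNU cp (this sketch only checks the mode bits,
which are preserved everywhere; EAs are preserved too where the
filesystem and the coreutils build support them):

```shell
# cp --archive (-a) preserves as much metadata as possible:
# mode, timestamps, ownership where permitted, and - with
# filesystem support - extended attributes.
echo 'pretend this is UTF-16' > test.txt
chmod 640 test.txt
cp --archive test.txt test2.txt

# The copy keeps the original mode bits and is byte-identical.
stat --format='%a' test.txt test2.txt
```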

No offence meant: you should read up a little on the subject, as
your knowledge is not up to date.

> What about
> "cat text.txt >text2.txt"?  If you expect those two commands to have
> the same effect, I don't see how it can be done without changes to cp
> (mark attrs on dest), cat (mark attrs on stdout), and the kernel
> itself (allow extended attributes on streams).

I do not expect cat and cp to behave the same, for the simple reason
that they never have: cp has options like "--preserve" and
"--archive"; cat has none. Even today, using cat to copy a file means
you lose all the meta information attached to it.

> a shell that knows how to get/set
> these attributes (and which it needs to set),

Done: see setfattr and getfattr.

> and a copy of the
> coreutils that are aware of the changes (so that "cp file1 file2"
> creates file2 with the attrs of file1, not the defaults),

Done: GNU cp will do that if --archive is used.

> On a quick glance, I
> see absolutely no way to do this properly without support from the OS.

While I have countered most of your arguments, you are absolutely
right here. Only, I once used an operating system (OS/2) which did
exactly that - hence my frustration.

Martin
--~--~---------~--~----~------------~-------~--~----~
You received this message from the "vim_dev" maillist.
For more information, visit http://www.vim.org/maillist.php
-~----------~----~----~----~------~----~------~--~---
