On Thu, 14 Aug 2003, Sven Neumann wrote:
> I never understood the reasoning for this discussion anyway. IMHO the
> format that Nathan suggested seems like something from the dark ages of
> file formats (where TIFF and the like originated from).
PNG is something from the dark ages?
> I haven't heard a single good argument for it except that it can do
> most of the things that the XML/archive approach can do.
s/most/all/ -- and many other good things besides.
> There was however nothing mentioned that it can do better. Or did I miss
XML is a text markup language. If the designers thought of using it for
raster graphics, it was an afterthought at best. XML is simply the wrong
tool for the job. The XML/archive idea is the software equivalent of
making a motorcycle by strapping a go-cart engine to the back of a
bicycle. It will work, of course, but it's an inelegant hack that will
never be as nice as something designed for the job.
But to answer your question:
1. Putting metadata right next to the data it describes is a Good Thing.
The XML "solution" arbitrarily separates human-readable data from binary
data. No one has yet said what is to be done about non-human-readable
metadata, but I imagine it will be crammed into the archive file somehow,
or Base64ed, or whatever. Either way is total lossage.
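To put a number on the Base64 cost (quick illustration, not proposed code for
anything):

```python
# Base64 inflates binary data by a third, before you even count the cost
# of parsing it back out of an XML text node.
import base64

raw = bytes(range(256)) * 768        # 192 KiB of arbitrary binary data
encoded = base64.b64encode(raw)

print(len(raw), len(encoded))        # 196608 262144: exactly 4/3 the size
```

And that 4/3 blowup is paid on every load and every save.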
2. Imagine a very large image with a sizeable amount of metadata. If this
seems unlikely, imagine you have some useful information stored in
parasites. The user in our example only needs to manipulate a handful of
layers. A good way of handling this case is to not load everything into
memory: the application just parses out the layer list at the start, and
once a layer is selected and its metadata is requested, that metadata is
read in. With the XML proposal, the parser would have to chew through
every byte until it gets to the part it is interested in, which is
inefficient. Frankly, it wouldn't be feasible. There are only two crappy
ways to get around this: store everything in memory (hope you have
plenty of virtual memory!) or write the metadata out to a temp file, in a
random-accessible format, for later use. If you're going to do that, why
not do it right the first time and save yourself the trouble?
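For the record, the random-access layout argued for here is not rocket
science. A toy sketch (the layout and names are made up for illustration; a
real format would add magic numbers, chunk types, CRCs, and so on):

```python
# A table of (offset, length) entries at a known position lets a reader
# seek straight to one layer's metadata instead of parsing the whole file.
import io
import struct

layers = [b"layer-0 metadata", b"layer-1 metadata", b"layer-2 metadata"]

buf = io.BytesIO()
buf.write(b"\x00" * (8 * len(layers)))      # reserve space for the table
table = []
for data in layers:
    table.append((buf.tell(), len(data)))
    buf.write(data)
buf.seek(0)
for offset, length in table:                # backpatch the offset table
    buf.write(struct.pack(">II", offset, length))

# Reading layer 2 touches 8 bytes of table plus the data itself, and
# nothing else -- no matter how big the other layers are.
buf.seek(8 * 2)
offset, length = struct.unpack(">II", buf.read(8))
buf.seek(offset)
print(buf.read(length))                     # b'layer-2 metadata'
```

An XML stream parser cannot do this, because it has no idea where anything
is until it has read everything before it.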
3. None of the current suggestions for archive formats do a good job with
in-place editing. AR can't even do random access. Zip can do an ok job
with in-place editing, but it's messy and often no better than writing a
whole new file from scratch. This means that a program that makes a small
change to a file, such as adding a comment, needs to read in and write a
ton of crap.
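You can see this with Python's zipfile module, which (like most zip tooling)
offers no in-place replace at all; changing one member means copying every
other member into a fresh archive:

```python
# Change one small member of a zip: every untouched member still gets
# read and rewritten. The "small change" costs a full copy of the file.
import io
import zipfile

src = io.BytesIO()
with zipfile.ZipFile(src, "w") as z:
    z.writestr("image.raw", b"\x00" * 1000)   # stands in for megabytes of pixels
    z.writestr("comment.txt", b"old comment")

dst = io.BytesIO()
with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
    for item in zin.infolist():
        if item.filename != "comment.txt":
            zout.writestr(item, zin.read(item.filename))  # copy untouched data
    zout.writestr("comment.txt", b"new comment")          # the one real change

with zipfile.ZipFile(dst) as z:
    print(z.read("comment.txt"))    # b'new comment'
```

A chunked format with an offset table can append the new comment and patch
the table, leaving the bulk data where it is.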
4. Implementing a reader for the XML/archive combo is unnecessarily
complex. It involves writing a parser for the semantics and structure of
XML, a parser for the semantics and structure of the archive format, and a
parser for the semantics and structure of the combination. It is true
that libraries might be found that are suitable for some of the work, but
developers of small apps will shun the extra bloat, and such libraries
might involve licensing fun. The semantics and structure of the
combination are not a trivial aspect, either -- with a corrupt or buggy
file, the XML may not reflect the contents of the archive. With an
integrated approach, this is not a concern.
5. Either the individual layers will be stored as valid files in some
format, or they will be stored as raw data. If they are stored as true
files, they will be needlessly redundant, and we will be saddled with
whatever limitations the chosen format imposes. If we just store raw
data in the archive, then it's obvious that this is just a kludge around
the crappiness of binary data in XML.
Gimp-developer mailing list