RE: [Gimp-developer] PS vs. PDF

2003-08-15 Thread Austin Donnelly
 Are you sure it hasn't been updated for so long? Take a look at the
 PostScript 3 reference manual.
 
   OK, 5 years instead of 6 (1998).   But in today's world,
 that's a HUGE amount of time...

What you're looking at is a mature standard.  Surely that's a good thing!

If it ain't broke, don't fix it.

Austin




Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Sven Neumann
Hi,

On Thu, 2003-08-14 at 22:58, Nathan Carl Summers wrote:

   I haven't heard a single good argument for it except that it can do
  most of the things that the XML/archive approach can do.
 
 s/most/all, and many other good things besides.

Which are?

  There was however nothing mentioned that it can do better. Or did I miss
  something?
 
 XML is a text markup language.  If the designers thought of using it for
 raster graphics, it was an afterthought at best.  XML is simply the wrong
 tool for the job.  The XML/archive idea is the software equivalent of
 making a motorcycle by strapping a go-cart engine to the back of a
 bicycle.  It will work, of course, but it's an inelegant hack that will
 never be as nice as something designed for the job.

I think it is an elegant solution to the problem of designing a file
format w/o knowing beforehand what will have to go into it. I don't
think that binary chunks are feasible for a format that will have to
be extended a lot while it is already in use. None of the file formats
mentioned provide this functionality and I think it is essential here.

 But to answer your question:
 
 1. Putting metadata right next to the data it describes is a Good Thing.
 The XML solution arbitrarily separates human readable data from binary
 data.  No one has yet considered what is to be done about non-human
 readable metadata, but I imagine it will be crammed into the archive file
 some way, or Base64ed or whatever.  Either way is total lossage.

How is metadata in the archive total lossage? If the metadata is binary
it should of course be treated just like image data.

 2. Imagine a very large image with a sizeable amount of metadata.  If this
 seems unlikely, imagine you have some useful information stored in
 parasites.  The user in our example only needs to manipulate a handful of
 layers. A good way of handling this case is to not load everything into
 memory.  Say that it just parses out the layer list at the start, and then
 once a layer is selected and the metadata is requested, it is read in.
 With the XML proposal, the parser would have to parse through every byte
 until it gets to the part it is interested in, which is inefficient.

The XML parser would only have to read in the image structure which
tells it where to locate the actual data in the archive, nothing else.
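
To make that concrete, such a manifest could look something like the
sketch below. All element and attribute names here are invented for
illustration; nothing like this has been agreed on:

<?xml version="1.0"?>
<!-- hypothetical manifest; the src attributes point at members
     of the surrounding archive -->
<image width="1024" height="768">
  <layer name="Background" src="layers/background.png"
         x="0" y="0" opacity="1.0" mode="normal"/>
  <layer name="Text" src="layers/text.png"
         x="120" y="40" opacity="0.8" mode="multiply"/>
  <parasite name="icc-profile" src="parasites/icc-profile.icc"/>
</image>

A reader that only needs the "Text" layer parses this small document
and then seeks directly to layers/text.png in the archive.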

 4. Implementing a reader for the XML/archive combo is unnecessarily
 complex.  It involves writing a parser for the semantics and structure of
 XML, a parser for the semantics and structure of the archive format, and a
 parser for the semantics and structure of the combination.  It is true
 that libraries might be found that are suitable for some of the work, but
 developers of small apps will shun the extra bloat, and such libraries
 might involve licensing fun.

We are already depending on an XML parser right now. I don't see any
problem here. I do know however that the code that reads stuff like TIFF
or PNG is ugly and almost unreadable. SAX-based XML parsers, by
contrast, tend to be darn simple.
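
For what it's worth, here is roughly how small such a reader can be
with libxml2's SAX interface. This is only a sketch built on the
hypothetical manifest above (layer elements carrying a src attribute);
error handling is omitted:

#include <stdio.h>
#include <string.h>
#include <libxml/parser.h>

/* called once per opening tag; atts is a NULL-terminated array
   of attribute name/value pairs */
static void
start_element (void *ctx, const xmlChar *name, const xmlChar **atts)
{
  int i;

  if (strcmp ((const char *) name, "layer") != 0 || !atts)
    return;
  for (i = 0; atts[i] && atts[i + 1]; i += 2)
    if (strcmp ((const char *) atts[i], "src") == 0)
      printf ("layer data lives at: %s\n", atts[i + 1]);
}

int
main (int argc, char **argv)
{
  xmlSAXHandler sax;

  if (argc < 2)
    return 1;
  memset (&sax, 0, sizeof (sax));
  sax.startElement = start_element;

  /* streams the document through the callbacks; only the current
     element is ever held in memory */
  return xmlSAXUserParseFile (&sax, NULL, argv[1]);
}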

   The semantics and structure of the
 combination is not a trivial aspect -- with a corrupt or buggy file, the
 XML may not reflect the contents of the archive.  With an integrated
 approach, this is not a concern.

I don't see how an integrated approach avoids this problem any better.

 5. Either the individual layers will be stored as valid files in some
 format, or they will be stored as raw data.  If they are stored as true
 files, they will be needlessly redundant and we will be limited to
 whatever limitations the data format we choose imposes.  If we just store raw
 data in the archive, then it's obvious that this is just a kludge around
 the crappiness of binary data in XML.

I don't understand you. If you think that raw data is a good idea, we
can have raw data in the XML archive. Allowing a set of existing
file formats to be embedded, however, makes the definition of our
format a lot simpler and allows for various established compression
techniques to be used.


Sven



Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Tor Lillqvist
I won't take any stand on either side (or how many sides are there?) in
the ongoing discussion, just air some fresh thoughts...

Many of the image formats suggested are some kind of archive formats
(zip, ar) on the outside.

I understand that one important benefit from this is that you can
store layers and whatnot objects as different files in the archive,
and easily access them separately. Even with other tools like ar or
unzip if need be.

However, these formats have the drawback that even if you can easily
have read access to just one of the component files in the archive,
it is impossible to rewrite a component if its size has changed (well,
at least if it has grown) without rewriting at least the rest of the
archive. (Or, maybe leaving the old version of the component as
garbage bits in the middle, appending the new version and updating the
index, if that is estimated to be less expensive than rewriting.)

Now, what concept do the ar, zip, etc formats closely resemble? What
other thingie is it that you store files in? Yeah, file systems.

Wouldn't it be neat to use a real file system inside the image
file... I.e. the image file would be a self-contained file system,
with the image components (layers, XML files, whatnot) as files.

What file system would be good? I don't know. Presumably something as
small and simple as possible, but not any simpler. Maybe FAT? ;-)
Early V6 Unix style file system (but with longer file names)? Minix?
Or something completely different? ISO9660 (I have no knowledge of
this, it might be way too complex)? UDF?

Does this make any sense?

Yeah, I can think of some drawbacks: For instance, there would have to
be some code to defragment and/or compact the file system image files
when needed (if the amount of data in the file system has radically
decreased, it should be compacted, for instance). Another is that if
the blocks of a layer are scattered here and there, reading it might be
slower than from traditional image file formats, where the data is
contiguous in the image file.

One neat benefit would be that on some operating systems it would be
possible to actually mount the image file as a file system...

--tml


Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Raphaël Quinet
On Fri, 15 Aug 2003 13:49:41 +0300 (EET DST), Tor Lillqvist [EMAIL PROTECTED] wrote:
 I won't take any stand on either side (or how many sides are there?) in
 the ongoing discussion, just air some fresh thoughts...

 *taking a deep breath of fresh thoughts*

[...]
 Now, what concept do the ar, zip, etc formats closely resemble? What
 other thingie is it that you store files in? Yeah, file systems.
 
 Wouldn't it be neat to use a real file system inside the image
 file... I.e. the image file would be a self-contained file system,
 with the image components (layers, XML files, whatnot) as files.
 
 What file system would be good? I don't know. Presumably something as
 small and simple as possible, but not any simpler. Maybe FAT? ;-)
 Early V6 Unix style file system (but with longer file names)? Minix?
 Or something completely different? ISO9660 (I have no knowledge of
 this, it might be way too complex)? UDF?

There is unfortunately one thing that most of these filesystems have
in common: they are designed to store their data in a partition that
has a fixed size.  If you create such a filesystem in a regular file,
you have to pre-allocate the space that you will need for storing your
data.

I have played a lot with loopback filesystems, which are useful for
creating things like a read-only encrypted ext2 or FAT filesystem on a
CD-ROM.  Unfortunately, this only works well when starting with a
600+MB file in which I create the image of the filesystem.  It is not
possible (or not easy) for the filesystem to grow as needed.

We could have a mixed solution, in which the GIMP would start with a
relatively small file containing a filesystem and then replace it with
a larger one whenever necessary.  But this is neither elegant nor
efficient, so the solution involving some kind of archive file format
is better IMHO.

The proposal for XML + some kind of archive format looks good, except
that I do not like the fact that all metadata (especially parasites)
will have to be XML-escaped or encoded in Base64.  Some parts may be
stored as separate files in the archive, but that does not make the
decoding easier because this means that some parts of the metadata are
included directly while others are included by reference.  The main
advantage of using XML is that it can easily be debugged by hand.  The
other arguments that have been discussed so far (for or against XML)
are not so significant.  If we want something that can be easily read
and edited by humans, let's go for XML.  If we want something compact
and efficient, let's go for something else.

-Raphaël


Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Steinar H. Gunderson
On Fri, Aug 15, 2003 at 01:57:35PM +0200, Raphaël Quinet wrote:
 There is unfortunately one thing that most of these filesystems have
 in common: they are designed to store their data in a partition that
 has a fixed size.  If you create such a filesystem in a regular file,
 you have to pre-allocate the space that you will need for storing your
 data.

Unless, of course, you simply re-use the filesystem, and make the file a
folder instead of a file. It has its definite disadvantages (what do you do
if somebody messes with the case in the filenames, or 8.3 mangle them?), but
I kind of like the idea. :-) (We've discussed this earlier, though. :-) )

/* Steinar */
-- 
Homepage: http://www.sesse.net/



Re: [Gimp-developer] Portable XCF

2003-08-15 Thread pcg
On Fri, Aug 15, 2003 at 01:57:35PM +0200, Raphaël Quinet [EMAIL PROTECTED] wrote:
 included directly while others are included by reference.  The main
 advantage of using XML is that it can easily be debugged by hand.  The
 other arguments that have been discussed so far (for or against XML)
 are not so significant. 

Opinions differ... for me, debugging is absolutely unimportant. I never
had to debug any xcf file, and I don't really want to change that :)

An XML format can be easily extended or updated; extending xcf was a
pain, and with XML this could at least become easier.

 and edited by humans, let's go for XML.  If we want something compact
 and efficient, let's go for something else.

Indeed, if. Efficiency is not the problem here (efficiency is much more
a problem with the underlying image data storage, i.e. use flat or tiled
areas etc.). XML isn't that inefficient compared to other serialization
schemes, especially when this has to be done on load/save only, while it
might be useful to dynamically swap in/out image data from the file (as
some modern os'es do, while others rely on copying everything to swap
first, as the gimp does :)

-- 
  -==- |
  ==-- _   |
  ---==---(_)__  __   __   Marc Lehmann  +--
  --==---/ / _ \/ // /\ \/ /   [EMAIL PROTECTED]  |e|
  -=/_/_//_/\_,_/ /_/\_\   XX11-RIPE --+
The choice of a GNU generation   |
 |


Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Kevin Myers
I could be mistaken, but it doesn't seem that a file system with an
extensible size would be a big problem...

We make a request to store a file in our file system within a file, and
what we want to store exceeds the available capacity of our present file
system.  No problem.  Our file system's space request handling routine
detects the out-of-space condition, makes a request to the OS to extend
the size of our real file, and then proceeds with allocating the desired
space in our internal file system.  If the OS reports out of space, then
our file system reports out of space.  Pointers used in our file system
would be sized such that they could handle any reasonable size, perhaps
32-bit pointers to 256-byte blocks = 1 terabyte capacity?  We could even
allow the block size to vary between different OS files to reduce wasted
space for small files, or to support larger than 1 TB if necessary.
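
The arithmetic works: 2^32 block pointers times 256-byte blocks is 2^40
bytes, i.e. 1 TB.  The growth path could be as simple as the sketch
below; the fsf struct and fsf_alloc_block() are made-up names, and a
real implementation would search a free-block bitmap before growing
the backing file:

#include <unistd.h>
#include <stdint.h>

#define BLOCK_SIZE 256

struct fsf
{
  int      fd;        /* the real OS file holding our filesystem */
  uint32_t n_blocks;  /* blocks currently backed by the OS file  */
};

/* hand out a new block number, extending the backing file on demand */
static int
fsf_alloc_block (struct fsf *fs, uint32_t *block)
{
  if (ftruncate (fs->fd, (off_t) (fs->n_blocks + 1) * BLOCK_SIZE) != 0)
    return -1;   /* the OS is out of space, so we are too */
  *block = fs->n_blocks++;
  return 0;
}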

BTW, the Microsoft Windows registry is already basically an extensible
file system within a file.  A high-end business product that I use
called SAS also has something similar.  I would guess there are others
out there as well.

s/KAM






Re: [Gimp-developer] Portable XCF

2003-08-15 Thread pcg
On Thu, Aug 14, 2003 at 09:10:37PM +0200, Sven Neumann [EMAIL PROTECTED] wrote:
 point where no image manipulation program has gone before. However there
 is still the need for a good format for exchanging layered images
 between applications. So perhaps it makes sense to also develop such an

I don't think there is a need for such an extra format. Existing formats
like MIFF can easily cope with layered images and can easily be
extended (linearly) with additional metadata.

And surely if people want to read/write xcf and don't want to use GEGL
I'd firmly say it's their problem entirely. I mean, if people want to
read/write PNG without libpng it's their problem, too, and PNG was
designed as an interchange format, while xcf is not, should not be, and
will not be.

-- 
  -==- |
  ==-- _   |
  ---==---(_)__  __   __   Marc Lehmann  +--
  --==---/ / _ \/ // /\ \/ /   [EMAIL PROTECTED]  |e|
  -=/_/_//_/\_,_/ /_/\_\   XX11-RIPE --+
The choice of a GNU generation   |
 |


Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Mukund

On Fri, Aug 15, 2003 at 07:45:28AM -0500, Kevin Myers wrote:
| BTW, Microsoft Windows registry is already basically an extensible file
| system within a file.  A high end business product that I use called also
| SAS has something similar.  I would guess there are others out there as
| well.

You brought a strange thought to mind.

Subversion (http://subversion.tigris.org/) implements a versioned FS
using Sleepycat's Berkeley DB database. It has a full library
implementation which any application could use.

Imagine that images could be revisioned. Subversion also uses a hybrid
delta algorithm for binary diffs.

Mukund



Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Raphaël Quinet
[Re-sending this because I sent it to Kevin instead of the list.  Grumble...]

On Fri, 15 Aug 2003 07:45:28 -0500, Kevin Myers [EMAIL PROTECTED] wrote:
 I could be mistaken, but it doesn't seem that a file system with an
 extensible size would be a big problem...

It may be a problem with _existing_ filesystems.

 We make a request to store a file in our file system within a file, and
 what we want to store exceeds the available capacity of our present file
 system.  No problem.  Our file system's space request handling routine
 detects the out of space condition, and makes a request to the OS to extend
 the size of our real file, then proceeds with allocating the desired space
 in our internal file system.  [...]

The whole point of Tor's proposal was to use an existing filesystem, such
as FAT, Minix, UDF, ISO9660, etc.  Using the Linux loopback devices (for
example), one could easily mount these filesystems-in-a-file and use the
standard tools to work with the files they contain.  We could design a
filesystem that can be extended dynamically, but then we lose the ability
to use existing drivers and tools.

As I mentioned in my previous message, we could of course increase the
size of a filesystem such as FAT, but that would basically require a new
copy of the file in which we extend the file allocation table or inode
table to leave enough room for the new sectors.  The same tricks would
have to be used when we want to shrink the file.  In other words, this is
not trivial.

I'd rather have some kind of archive format.  If we want to replace an
element in the archive with another one that is larger, we can append the
larger one at the end of the archive, update the index, and leave some
unused bits in the middle.  That would not waste more space than the
filesystem idea.  In both cases, we could have an option for
defragmenting the file if we do not want to waste space with unused bits
or unused sectors.  Or we simply re-create a clean file when using the
Save As option.  This is exactly what is done by several software
packages, including MS Office.
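
As a sketch, that update path could look like the following, assuming a
hypothetical index of { offset, length } entries kept at a known place
in the file (error handling omitted):

#include <stdio.h>
#include <stdint.h>

struct entry
{
  uint64_t offset;  /* where the member's bytes start */
  uint64_t length;  /* how many bytes the member has  */
};

/* replace a member by appending its new contents at the end of the
   archive; the old bytes stay behind as dead space until the file
   is rewritten cleanly, e.g. on Save As */
static void
replace_member (FILE *archive, struct entry *e,
                const void *data, uint64_t len)
{
  fseek (archive, 0, SEEK_END);
  e->offset = (uint64_t) ftell (archive);
  e->length = len;
  fwrite (data, 1, (size_t) len, archive);
  /* ...then rewrite the index so e points at the new copy */
}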

-Raphaël


Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Mukund

On Fri, Aug 15, 2003 at 03:02:46PM +0200, Tino Schwarze wrote:
|  Subversion (http://subversion.tigris.org/) implements a versioned FS
|  using Sleepycat's Berkeley DB database. It has a full library
|  implementation which any application could use.
| 
| Well, using a database as container might be a good idea. I'm not quite
| familiar with Berkeley DB but it might be useful as a backend.

Subversion provides its own client library for accessing the virtual file
system. You won't have to work with the DB directly. It also provides an
abstracted recover facility in one of its utilities (in case of stale locks).


|  Imagine that images could be revisioned. Subversion also uses a hybrid
|  delta algorithm for binary diffs.
| 
| Worst case: I make my black image white. That's the point where a binary
| diff will only waste processing power.

I said hybrid delta algorithm for binary diffs. I didn't say
straightforward A - B diffing.

Even if your images are black and white, they are most likely stored in a
compressed format (if a Subversion based GIMP file format was ever
invented), and if such compressed files are revisioned, no
generic algorithm is going to give you a good difference.


The whole Subversion thing was a far-fetched *idea*, an alternative
which is most definitely going to be blown off, as there are more
reasonable, less far-fetched ways of implementing the GIMP file format.

Mukund



Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Alan Horkan

On Fri, 15 Aug 2003, Tor Lillqvist wrote:


 I won't take any stand on either side (or how many sides are there?) in
 the ongoing discussion, just air some fresh thoughts...

 Many of the image formats suggested are some kind of archive formats
 (zip, ar) on the outside.

 I understand that one important benefit from this is that you can
 store layers and whatnot objects as different files in the archive,
 and easily access them separately. Even with other tools like ar or
 unzip if need be.

 However, these formats have the drawback that even if you can easily
 have read access to just one of the component files in the archive,
 it is impossible to rewrite a component if its size has changed (well,
 at least if it has grown) without rewriting at least the rest of the
 archive. (Or, maybe leaving the old version of the component as
 garbage bits in the middle, appending the new version and updating the
 index, if that is estimated to be less expensive than rewriting.)

For the XML files you can use whitespace padding; I was reading the
Adobe XMP specifications and they do this in some places.  It is less
than ideal, but it is an option.
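
The idea, roughly (a hypothetical illustration, not XMP's actual packet
syntax): reserve a run of insignificant whitespace after a value so it
can grow in place without shifting every member that follows it:

<description>short title</description>
<!-- a few KB of spaces follow here, so the description can grow
     in place without moving anything after it -->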

The fact that others have already led the way with these types of file
formats means there are plenty of existing examples to learn from and
solutions to potential pitfalls.

 Now, what concept do the ar, zip, etc formats closely resemble? What
 other thingie is it that you store files in? Yeah, file systems.

 Wouldn't it be neat to use a real file system inside the image
 file... I.e. the image file would be a self-contained file system,
 with the image components (layers, XML files, whatnot) as files.

 What file system would be good? I don't know. Presumably something as
 small and simple as possible, but not any simpler. Maybe FAT? ;-)
 Early V6 Unix style file system (but with longer file names)? Minix?
 Or something completely different? ISO9660 (I have no knowledge of
 this, it might be way too complex)? UDF?

I am pretty sure you can have a Zip filesystem.  (I found a request for
something similar on the Linux kernel mailing list but am having
difficulty finding anything more substantial.)

Hopefully someone who knows more about Zip or virtual filesystems can
provide more substantial information.

I recall mumblings about Gnome doing away with the need for programs like
the predecessors of File-Roller and having Gnome-vfs sort it out and use
Nautilus instead.

This looks more promising
http://www.hwaci.com/sw/tobe/zvfs.html
http://webs.demasiado.com/freakpascal/zfs.htm
hopefully someone else will come up with better links.

 One neat benefit would be that on some operating systems it would be
 possible to actually mount the image file as a file system...

Zip is already in wide use, and as it is more popular it is more likely
than an 'ar'-based solution to be available as a filesystem, if it is
not already.

To change the subject slightly, the ad-hoc name 'Portable XCF' might be
a bit misleading.  Portable implies web formats, and I think that
PNG/MNG/JNG and others largely have that area covered.  The
next-generation XCF will need to do many things, hold a fair bit of raw
data, and be reasonably fast, which goes against being a web-ready
portable format (or at least makes it a low priority).  At this early
stage hopefully no one will get too attached to any particular name;
that can be left until later.

Sincerely

Alan Horkan
http://advogato.org/person/AlanHorkan/





RE: [Gimp-developer] Portable XCF

2003-08-15 Thread Austin Donnelly
Tor wrote:
 [filesystem within a file]

It's a nice idea in theory, but makes it quite hard to write a parser for.
MS Word files (until recently) were basically FAT filesystems, which makes
it easy to handle under Windows but harder to parse when you don't have a
convenient DLL to do it lying around.

The FlashPix format (now little used?) is also a FAT filesystem; it was this
fact that persuaded me that writing a Gimp FlashPix loader wouldn't be
particularly easy.

So sure, consider the idea, but bear in mind it might be hard to pull off.  

When this discussion started, I didn't like the idea of XML with binary data
portions.  I liked the current binary, tagged, format we have, and thought
that it should just be extended.  However, after the recent discussion I've
come around to quite liking an ar-style archive with an XML catalog, XML
metadata, and texels as separate members.  I think this is roughly what
Leonard was suggesting; we should listen to the voice of experience. 

Austin




Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Tino Schwarze
On Fri, Aug 15, 2003 at 02:22:03PM +0100, Mukund wrote:

 |  Subversion (http://subversion.tigris.org/) implements a versioned FS
 |  using Sleepycat's Berkeley DB database. It has a full library
 |  implementation which any application could use.
 | 
 | Well, using a database as container might be a good idea. I'm not quite
 | familiar with Berkeley DB but it might be useful as a backend.
 
 Subversion provides its own client library for accessing the virtual file
 system. You won't have to work with the DB directly. It also provides an
 abstracted recover facility in one of its utilities (in case of stale locks).

But we might want to access the DB directly, e.g. for shared memory.

 The whole Subversion thing was a far fetched *idea*. An alternative,
 which is most definitely going to be blown off as there are more
 reasonable ways of implementing the GIMP file format which are not far
 fetched.

Hmmm.. it would be cool to have the Undo Stack saved, so I can _really_
continue where I left off when I saved the image.

Bye, Tino.

-- 
 * LINUX - Where do you want to be tomorrow? *
  http://www.tu-chemnitz.de/linux/tag/


Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Sven Neumann
Hi,

On Fri, 2003-08-15 at 15:22, Mukund wrote:
 Even if your images are black and white, they are most likely stored in a
 compressed format (if a Subversion based GIMP file format was ever
 invented), and if such compressed files are revisioned, no
 generic algorithm is going to give you a good difference.

Actually with GEGL, a solid white or black image will be represented
using a special layer node that has no image data at all.


Sven



[Gimp-developer] Re: Portable XCF

2003-08-15 Thread Guillermo S. Romero / Familia Romero
[EMAIL PROTECTED] (2003-08-15 at 1357.35 +0200):
 There is unfortunately one thing that most of these filesystems have
 in common: they are designed to store their data in a partition that
 has a fixed size.  If you create such a filesystem in a regular file,
 you have to pre-allocate the space that you will need for storing your
 data.

Or use a tool to change the size; such tools exist, and in some cases
they allow resizing while online. Examples are ext2resize and growfs.

GSR
 


Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Tino Schwarze
On Fri, Aug 15, 2003 at 03:51:53PM +0200, Sven Neumann wrote:

  Even if your images are black and white, they are most likely stored in a
  compressed format (if a Subversion based GIMP file format was ever
  invented), and if such compressed files are revisioned, no
  generic algorithm is going to give you a good difference.
 
 Actually with GEGL, a solid white or black image will be represented
 using a special layer node that has no image data at all.

But only as far as I say "create new layer/image with white
background"... Or, wait, are you suggesting that filling is an
operation known to GEGL, so a SolidFilledLayer will just change its
fill_color when it gets filled again?
After all, this optimization does not work any more if I fill an
arbitrary selection...

Bye, Tino.

-- 
 * LINUX - Where do you want to be tomorrow? *
  http://www.tu-chemnitz.de/linux/tag/


[Gimp-developer] Re: Portable XCF

2003-08-15 Thread Guillermo S. Romero / Familia Romero
[EMAIL PROTECTED] (2003-08-15 at 1541.28 +0200):
 BTW: Would it be possible to get a sparse file by zeroing the unused
 bits? Then it would be quite space efficient (at least with some file
 systems).

Yes, try it with dd and cp (GNU version only?):

dd if=/dev/zero of=/tmp/zero-test count=1000
cp --sparse=always /tmp/zero-test /tmp/zero-sparse
ls -l /tmp/zero-test /tmp/zero-sparse
du -cs /tmp/zero-test /tmp/zero-sparse

If you get the same byte size, 512000 bytes, but different block usage,
0 vs 503 here, your fs is doing sparse files. Another test I did here
with an 8258506-byte file, composed by catting a real data file of
7745389 bytes, then 512000 zero bytes and a final 1117-byte group of
random data, gives a usage of 8098 blocks for the original and 7601
for the sparse copy.

What I do not know is how many filesystems support this, whether they
can do it on the fly or a forced copy is needed, or whether it is a
good idea from a performance point of view.

GSR
 


RE: [Gimp-developer] Re: Portable XCF

2003-08-15 Thread Austin Donnelly
 Yes, try it with dd and cp (GNU version only?):
 
 dd if=/dev/zero of=/tmp/zero-test count=1000
 cp --sparse=always /tmp/zero-test /tmp/zero-sparse
 ls -l /tmp/zero-test /tmp/zero-sparse
 du -cs /tmp/zero-test /tmp/zero-sparse
 
[...]
 What I do not know is how many fs support it, and if they can do on
 the fly or a forced copy is needed

It is the copy which makes the sparse file.  You can't make a hole in a file
merely by writing a bunch of zeros to it.  You can only do it by seeking
past the (current) end of the file, then writing non-zero data.  The bytes
you seeked over are the hole, and will be read as if zeros.

GNU cp uses a bunch of heuristics to discover runs of zeros in the input
file and seek over them in the output file, rather than just writing zeros.
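
In C the whole trick fits in a few lines; this little demo (the file
name is made up) creates a mostly-hole file on any filesystem that
supports sparse files:

#include <fcntl.h>
#include <unistd.h>

int
main (void)
{
  int fd = open ("/tmp/hole-test", O_WRONLY | O_CREAT | O_TRUNC, 0644);

  lseek (fd, 1024 * 1024, SEEK_SET);  /* seek 1 MB past EOF      */
  write (fd, "x", 1);                 /* write one non-zero byte */
  close (fd);
  return 0;
}

Afterwards "ls -l" reports about a megabyte while "du" shows only a
block or two actually allocated.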

Austin




Re: [Gimp-developer] Re: GimpCon RFC: Portable XCF

2003-08-15 Thread David Neary
Leonard Rosenthol wrote:
 At 6:29 PM +0200 8/14/03, Øyvind Kolås wrote:
 Then you just want to be able to understand the XML file, which is the
 reason I proposed using something like xml in the first place, the rest
 of the logic would then be contained in your application.
 
   Well, yes, I need to understand the FILE FORMAT...whether 
 that be XML, PNG, TIFF, XCF, etc.
 
   But there seems to be a general belief that there should be a 
 standard library for reading/writing the file format to help reduce 
 the issues of multiple implementations.   That library should ONLY be 
 a file format handler, it should NOT be all of GEGL...

Surely this is a detail, and the important thing, that is, using
some kind of metadata manifest with binary image data stored in
some widely supported image format, is something we can agree on?

Whether gegl provides a separate libxcf or not is surely a detail
that can be taken care of at the implementation stage...

That said, since the general idea is to store layer structure in
the image data, and use compositing to generate the final image,
libxcf would require access to quite a lot of gegl's internal
workings most of the time... at least if the destination
application wanted to use gegl for composing. Of course, if they
wanted to work around gegl, and use a native layer model, then
they wouldn't need to get at gegl's graphing stuff at all. But
they'd be limiting themselves more or less to stacks, or very
simple trees.

Cheers,
Dave.

-- 
   David Neary,
   Lyon, France
  E-Mail: [EMAIL PROTECTED]


Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Alastair Robinson
Hi,

On Friday 15 August 2003 2:30 pm, Austin Donnelly wrote:

 When this discussion started, I didn't like the idea of XML with binary
 data portions.  I liked the current binary, tagged, format we have, and
 thought that it should just be extended.  However, after the recent
 discussion I've come around to quite liking an ar-style archive with a XML
 catalog, XML metadata, and texels as separate members.  I think this is
 roughly what Leonard was suggesting; we should listen to the voice of
 experience.

If I may add my two penn'th:

Some thought needs to be given to how parasites are going to be stored - I'm 
thinking particularly of embedded ICC profiles here (IIRC the TIFF plugin 
attaches any encountered profile as a parasite).

Profiles can be large, so the last thing you'd want to do with one is attempt 
to text-encode it within an XML file.

I'd personally lean towards having a Parasites directory within the archive, 
and then filing the parasites within it by name, in text or binary format as 
is appropriate...
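
Concretely, the archive could be laid out along these lines (a purely
hypothetical sketch, all names invented):

image.xcf2                 (ar- or zip-style container)
|-- manifest.xml           (structure and human-readable metadata)
|-- layers/
|   |-- background.png
|   `-- text.png
`-- parasites/
    |-- icc-profile        (binary, stored verbatim)
    `-- gimp-comment       (small text parasite)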

All the best,
-- 
Alastair M. Robinson
Email: [EMAIL PROTECTED]

ALIMONY: Corruption of Middle English Alle ye money.



Re: [Gimp-developer] GimpCon RFC: Portable XCF

2003-08-15 Thread Carol Spears
Leonard Rosenthol wrote:

At 3:33 PM -0400 8/14/03, Carol Spears wrote:

So this combination would answer your LAB & CMYK issues and possibly 
my need to use a greater than 256 color palette then?


No, it would not.

ICC profiling is a VERY different thing than actual raw CMYK or 
Lab data...

Palettizing of an image is also different...
Well, I don't understand the color issues that well.  Merely my own 
limitations with TheGIMP so far.



Complaints I remember reading from more technically inclined people 
about tiff were mostly about the lwz compression.  I guess while it 
was not free it was also not the best way to go about doing such a 
thing.


Yes, that was a legal issue, not a truly technical one. (LZW, not 
lwz).
Here is an example of my lazy brain working for me.  As soon as I read 
something that makes me think expensive, selfish and substandard (as 
this compression and those three letters make me think) my brain stops 
giving time and space to the idea.

My worst fear is that TheGIMP will settle for something that came from 
this sort of thought process and development cycle.

Eh, something like spiritually unsound is fine if we are getting the 
best thing.  I don't think we would be if we took this tiff route.

Does tiff have a comments area?  I use jpeg comment often and am anxious
to start using comments in pngs.  Rumor has it that the capability is 
there 




However, I read recently about artifacts appearing in compressed 
pngs, so this might not be the miracle fix I had hoped for.


PNG won't artifact images unless you are palettizing them, which 
is NOT the default.
This was someone bitching on the irc.  I don't know all of the details 
and I did not see the image.

I was without power for more than a day, I am hoping to read the rest
of the mail and see that we will be using mng as a base and redesigning
it some.  :)
carol





Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Carol Spears
Nathan Carl Summers wrote:

On Thu, 14 Aug 2003, Sven Neumann wrote:

 

Hi,

I never understood the reasoning for this discussion anyway. IMHO the
format that Nathan suggested seems like something from the dark ages of
file formats (where TIFF and the like originated from).
   

PNG is something from the dark ages?

 

I haven't heard a single good argument for it except that it can do
most of the things that the XML/archive approach can do.
   

s/most/all, and many other good things besides.

 

There was however nothing mentioned that it can do better. Or did I miss
something?
   

XML is a text markup language.  If the designers thought of using it for
raster graphics, it was an afterthought at best.  XML is simply the wrong
tool for the job.  The XML/archive idea is the software equivalent of
making a motorcycle by strapping a go-cart engine to the back of a
bicycle.  It will work, of course, but it's an inelegant hack that will
never be as nice as something designed for the job.
But to answer your question:

1. Putting metadata right next to the data it describes is a Good Thing.
The XML solution arbitrarily separates human readable data from binary
data.  No one has yet considered what is to be done about non-human
readable metadata, but I imagine it will be crammed into the archive file
some way, or Base64ed or whatever.  Either way is total lossage.
 

<nonhumanreadable>binary kludge</nonhumanreadable> is lossage?  The 
recent time I spent communing with dselect, I saw a couple of binary
editors.  The existence of such software makes me think that binary 
can be easily xmlized also.

Working with software my APT wrote to suit my needs on the new outdated
and unloved web site (http://mmmaybe.gimp.org), it makes me want more
of this same sort of editing ability with gimp stuff.  

I have proven myself to be very very human though.  The machines perhaps
will not like the xml as much as I did.

2. Imagine a very large image with a sizeable amount of metadata.  If this
seems unlikely, imagine you have some useful information stored in
parasites. The user in our example only needs to manipulate a handful of
layers. A good way of handling this case is to not load everything into
memory.  Say that it just parses out the layer list at the start, and then
once a layer is selected and the metadata is requested, it is read in.
With the XML proposal, the parser would have to parse through every byte
until it gets to the part it is interested in, which is inefficient.
Frankly, this wouldn't be feasible.  Only two crappy ways would be
possible to get around this: store everything in memory (hope you have
plenty of virtual memory!) or write out a temp file with the metadata in
it, for later use, and in a random-accessible format.  If you're going to
do that, why not do it right the first time and save yourself the trouble?
 

When someone asks me to imagine a large image file I naturally think of 
the biggest image files I ever worked with.  This would be psd.  It 
seems like the GIMP developers should be able to make something smaller 
than this.

Sorry, I got stuck on the first line of this item.  Imagining a very 
large image file, and the previous mail about how psd is a personalized 
tiff, makes me want to use it even less.

3. None of the current suggestions for archive formats do a good job with
in-place editing.  AR can't even do random access.  Zip can do an ok job
with in-place editing, but it's messy and often no better than writing a
whole new file from scratch.  This means that a program that makes a small
change to a file, such as adding a comment, needs to read in and write a
ton of crap.
4. Implementing a reader for the XML/archive combo is unnecessarily
complex.  It involves writing a parser for the semantics and structure of
XML, a parser for the semantics and structure of the archive format, and a
parser for the semantics and structure of the combination.  It is true
that libraries might be found that are suitable for some of the work, but
developers of small apps will shun the extra bloat, and such libraries
might involve licensing fun.  The semantics and structure of the
combination is not a trivial aspect -- with a corrupt or buggy file, the
XML may not reflect the contents of the archive.  With an integrated
approach, this is not a concern.
mmmaybe has an xml reader.  It is small and nice.  An attribute 
rewriter that skips a lot of the crap that the old tools brought along 
with them.  Or is this mention out of line here?

5. Either the individual layers will be stored as valid files in some
format, or they will be stored as raw data.  If they are stored as true
files, they will be needlessly redundant and we will be limited to
whatever limitations the data format we choose imposes.  If we just store raw
data in the archive, then it's obvious that this is just a kludge around
the crappiness of binary data in XML.
 

How much binary data is in images?  This is very confusing to me.  For 

Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Carol Spears
Stephen J Baker wrote:

It seems to me that XML was just *made* to do (1) nicely.  It's also
rather nice that this is human readable and the parsers for it are
likely to be easy.  XML is nice and modern and there are loads of
supporters of it.  I don't think this should even be a matter of
debate - it's *so* obvious that this is the way to go.

I second the obvious part to this.  I have been seriously caught off 
guard lately as this seems so obvious to me that I could not begin to 
envision a conversation being needed to support it.

However, I have had great success with xml, only after I had my own 
tools built for it.  The nice thing about xml is that you can build 
your own issues into it.  This is also why I found it necessary to 
build our own.

carol





Re: [Gimp-developer] Portable XCF

2003-08-15 Thread Tor Lillqvist
BTW, what happened to GNOME's libefs? From quickly browsing the
sources, it seems to have still been included as of bonobo-1.0.22,
but then bonobo was renamed to libbonobo, and I don't
see any trace of it in libbonobo-2.3.6. Was it such a badly designed
disaster that it was dropped? Or did it mutate into part of gnome-vfs
or something?

--tml




Re: [Gimp-developer] Re: Portable XCF

2003-08-15 Thread Carol Spears
Austin Donnelly wrote:

Yes, try it with dd and cp (GNU version only?):

dd if=/dev/zero of=/tmp/zero-test count=1000
cp --sparse=always /tmp/zero-test /tmp/zero-sparse
ls -l /tmp/zero-test /tmp/zero-sparse
du -cs /tmp/zero-test /tmp/zero-sparse
   

[...]
 

What I do not know is how many fs support it, and if they can do on
the fly or a forced copy is needed
   

It is the copy which makes the sparse file.  You can't make a hole in a file
merely by writing a bunch of zeros to it.  You can only do it by seeking
past the (current) end of the file, then writing non-zero data.  The bytes
you seeked over are the hole, and will be read as if zeros.
GNU cp uses a bunch of heuristics to discover runs of zeros in the input
file and seek over them in the output file, rather than just writing zeros.
Austin

I looked up heuristic and it said it meant heuristisch!  How can this be so?

I thought when I cp'd something I was totally making a copy of the file 
and simply giving it a new name.  The size never changes, so how could 
this be true?

carol


