This also includes "The Future of Gimp" commentary.
>Tino Schwarze wrote:
> Hi Adam,
> On Thu, Dec 21, 2000 at 11:15:17PM +0000, Adam D. Moss wrote:
> > How would the "pupus" functionality be directly exposed to users? The
> > answer is that it most assuredly WOULD NOT. I do not advocate, in fact
> > I ABHOR the idea that the user should end up drawing a little tree
> > of boxes connected with wires. That's your view as a programmer
> I want it! Hey, this is interactive Script-Fu!
I agree with Adam. It so happens that the directed graph
abstraction which is serving his thinking about scheduling
visually coincides with a user interface presentation where
upstream branches of layers composite into common result
sets. These happen to be two places where the abstract tree data
type has taken root in the fecund (if imaginary) soil of Gimp
2.0. Trees happen to furnish a nice framework to think about the
shape of a great many tasks, so this coincidence is common (and
an ongoing source of confusion).
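To make the coincidence concrete, here is a minimal sketch of the shared directed-graph abstraction -- all names are invented for illustration, not taken from any actual Gimp source. The same node structure serves the scheduler (topological ordering of steps) and, visually, a layer stack compositing into a common result:

```python
# Hypothetical sketch: one graph abstraction serving both scheduling and
# layer compositing. Every name here is invented for illustration.

class Node:
    """A step in the compositing graph; upstream nodes feed downstream ones."""
    def __init__(self, name, inputs=()):
        self.name = name
        self.inputs = list(inputs)  # upstream nodes whose results we composite

    def schedule(self, order=None, seen=None):
        """Topological order: every upstream step runs before its consumers."""
        order = [] if order is None else order
        seen = set() if seen is None else seen
        if id(self) in seen:
            return order
        seen.add(id(self))
        for upstream in self.inputs:
            upstream.schedule(order, seen)
        order.append(self.name)
        return order

# Two layers compositing into a common result set:
background = Node("background")
text = Node("text-layer")
flatten = Node("flatten", inputs=[background, text])
# flatten.schedule() -> ["background", "text-layer", "flatten"]
```

The point of the sketch is only that the *scheduling* view (run upstream before downstream) and the *layer* view (branches compositing into a result) are the same graph wearing two hats -- which is exactly why the two get confused.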
If we dedicate the Gimp 2.0 core to nothing other than staging
(various kinds of) compositing of result sets from different
kinds of sources (ideally, not all bitmaps), then the user interface
simply falls away from view. That is not to say that user
interfaces are unimportant -- but they are (as much as possible)
independent objects of design, each with their own concerns and
issues. So Gimp is not married to the GTK widget set; if some
(many) folks write a Qt interface, bless them. Some few others may
write a native Win32 interface -- bless them too. Script
languages are interfaces as well; I should think that marrying
Gimp with Perl or Scheme or Script Language X should not be
nearly as hard a task in the 2.x series as it had been in 1.x.
What may be commonly said about "user interfaces" is that they
are *any* member of the class (including your favorite script
interpreter) capable of originating work requests with the 'step
manager' (handing over references to typographic or structured
vector graphic or pixel graphic content) and querying
"presentation" black boxes about the result (if any).
It may be a refreshing exercise to inventory the black
box bestiary presented by Adam thus far:
1. Some are capable of interpreting references to some sort of
graphic content as, say, PNG files in a file system.
2. Some (cache-boxes) are capable of persisting "upstream"
image presentation for "downstream" black boxes. Some other
component of the application might choose to associate with
such cache-boxes the notion of "layer", but that is the business
of that more or less external (and, methinks, user-interface
housed) component. To the step manager, it is a step that
persists (parts of) images.
3. Some black boxes (it seems to me) house genetically engineered goats
(large ones) that process pixels. As an aside, these GEGL boxes are (hopefully)
the only interior places where some sort of physical
implementation of pixels matter -- how many bits they have, how
those bits relate to color and transparency components, what sort
of pixel and pixel area operations are capable of mapping a
region in one cache-box to a region in another. To pupus (the
step manager) it is just a surface -- it is neither capable of,
nor interested in, the manipulation of the surface 'fabric.'
4. Some black boxes know how to stream to external representations:
perhaps they embed connections to an X server, or something that
knows how to write (particular kinds of) image files.
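The four-item bestiary above could be sketched as one small family sharing a tiny interface -- an invented design used only to show that the step manager can treat all four kinds uniformly:

```python
# Hypothetical sketch of the black-box bestiary: loaders, cache-boxes,
# GEGL pixel-processing boxes, and output boxes behind one interface.
# All names and behaviors are invented stand-ins.

class BlackBox:
    def process(self, data):
        raise NotImplementedError

class LoaderBox(BlackBox):          # 1. interprets content references
    def process(self, ref):
        return f"pixels-from:{ref}"

class CacheBox(BlackBox):           # 2. persists upstream presentation
    def __init__(self):
        self.cached = None
    def process(self, data):
        self.cached = data          # a UI component might call this a "layer"
        return data

class GeglBox(BlackBox):            # 3. the only place pixel format matters
    def process(self, data):
        return data.upper()         # stand-in for a real pixel operation

class OutputBox(BlackBox):          # 4. streams to external representations
    def process(self, data):
        return f"wrote:{data}"

def run(pipeline, ref):
    """The step manager's view: a surface passed box to box, nothing more."""
    for box in pipeline:
        ref = box.process(ref)
    return ref

cache = CacheBox()
result = run([LoaderBox(), cache, GeglBox(), OutputBox()], "logo.png")
# result -> "wrote:PIXELS-FROM:LOGO.PNG"
```

Note that `run` never inspects what a box does -- it neither manipulates the surface 'fabric' nor knows that `cache.cached` is what some UI calls a layer.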
So what does Gimp 2.0 -- the application -- do? Refreshingly (I
think) not much. It configures a step manager of a particular
arrangement that makes sense for the profile of requirements that
some sort of user interface presents, authenticates that the
version of user interface is capable of talking with the version
of step manager present, (mumble, mumble, other kinds of
sanity/compatibility checks) then steps back and lets the
ensemble articulate. If the profile of requirements calls for an
interactive Gimp, then some of those black boxes will be
display-capable. If the profile of requirements is a batch
orchestrated by a perl script, then only file reading/writing and
image processing black boxes will be required. It is this view
that makes Gimp 2.0 largely a collection of shared object code,
each shared object being a thing that a (likely) small group of
individuals can dedicate themselves to and get to know
particularly well, and there will be less of a need for someone
to be knowledgeable about the Whole Massive Thing (as in Gimp
1.x) (the shared object may even be general enough to export
to some other application, unchanged).
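That "not much" might look roughly like this -- a deliberately naive sketch (invented profile names, invented version check) of an application that only configures a step manager to match a profile of requirements and sanity-checks compatibility before stepping back:

```python
# Hypothetical sketch: the application configures black boxes to fit a
# profile of requirements and performs version/sanity checks, nothing more.
# Profile names, versions, and box names are all invented.

STEP_MANAGER_VERSION = (2, 0)

def configure(profile, ui_version):
    # Authenticate that this user interface can talk to this step manager
    # (stand-in for the "mumble, mumble" compatibility checks).
    if ui_version[0] != STEP_MANAGER_VERSION[0]:
        raise RuntimeError("user interface / step manager version mismatch")
    boxes = ["loader", "gegl"]          # common to every arrangement
    if profile == "interactive":
        boxes.append("display")         # display-capable black boxes
    elif profile == "batch":
        boxes.append("file-writer")     # a perl-orchestrated batch needs
                                        # only file I/O and processing
    return boxes

# configure("batch", (2, 1)) -> ["loader", "gegl", "file-writer"]
```

Each box name above would stand for a separately maintained shared object that a small group can know well, rather than one Whole Massive Thing.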
What concerns me is this style of thinking places a great deal of
importance on coordinating interfaces and internal protocols; this
is not a topic upon which UI, scheduling, image-processing, and
specialty black-box architects (a kind of Gimp 2.0 third party
contributor) should drift too far apart, reinventing wheels that are
rolling in other parts. The Monolithic Gimp of 1.x
fame, being monolithic, permitted some laxity in the design of internal
interfaces; distributing the application over autonomous processes
requires a little more formality in coordination.
My two U. S. cents...