Re: Pupus pipeline: what Adam has been doing, etc. etc.

2000-12-26 Thread Tino Schwarze

Hi Garry,

On Sat, Dec 23, 2000 at 02:45:42PM -0500, Garry R. Osgood wrote:
  How would the "pupus" functionality be directly exposed to users?  The
  answer is that it most assuredly WOULD NOT.  I do not advocate, in fact
  I ABHOR the idea that the user should end up drawing a little tree
  of boxes connected with wires.  That's your view as a programmer
   I want it! Hey, this is interactive Script-Fu!
 
 I agree with Adam. It so happens that the directed graph
 abstraction which is serving his thinking about scheduling
 visually coincides with a user interface presentation where
 upstream branches of layers composite into common result
 sets. This happens to be two places where the abstract tree data
 type has taken root in the fecund (if imaginary) soil of Gimp
 2.0. Trees happen to furnish a nice framework to think about the
 shape of a great many tasks, so this coincidence is common (and
 an ongoing source of confusion).
I thought about that tree thing a bit more. We should not restrict
ourselves to trees but allow arbitrary directed graphs.

We are only a very small step away from a visual programming language.
For example, a "choice box" would be almost trivial to implement. It
lets you choose/route your image(?) data down another path in the graph.
So one could try several variants and subvariants. Such graphs could
even be constructed on-the-fly while working with the GUI (though they
would get mazy very fast).
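The "choice box" can be sketched in a few lines. This is a hypothetical
illustration only (all names here -- `Box`, `ChoiceBox`, the selector
callback -- are invented, not from any GIMP code): a node that routes its
input down exactly one of several downstream branches, so several variants
can coexist in one graph.

```python
class Box:
    """A black box: transforms a payload and hands it downstream."""
    def __init__(self, fn):
        self.fn = fn
        self.outputs = []

    def connect(self, box):
        self.outputs.append(box)
        return box

    def push(self, data, sink):
        result = self.fn(data)
        if not self.outputs:
            sink.append(result)      # end of a branch: collect the result
        for out in self.outputs:
            out.push(result, sink)

class ChoiceBox(Box):
    """Routes data down a single selected branch instead of all of them."""
    def __init__(self, selector):
        super().__init__(lambda d: d)
        self.selector = selector     # picks which output index to follow

    def push(self, data, sink):
        self.outputs[self.selector(data)].push(data, sink)

# Try two variants and pick one per "pixel value":
src = Box(lambda d: d)
choice = ChoiceBox(lambda d: 0 if d < 128 else 1)
src.connect(choice)
choice.connect(Box(lambda d: d + 50))   # variant A: brighten
choice.connect(Box(lambda d: d - 50))   # variant B: darken
results = []
src.push(100, results)   # dark value -> variant A
src.push(200, results)   # bright value -> variant B
```

An interactive GUI would rewire `choice` on the fly, which is exactly where
such graphs start getting mazy.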

 If we dedicate the Gimp 2.0 core to nothing other than staging
 (various kinds of) compositing of result sets from different
 kinds of sources (ideally, not all bitmaps), then the user interface
 simply falls away from view. 
I agree with that.

 2. Some (cache-boxes) are capable of persisting "upstream"
image presentation for "downstream" black boxes. Some other
component of the application might choose to associate with
such cache-boxes the notion of "layer", but that is the business
of that more or less external (and, me thinks, user-interface
housed) component. To the step manager, it is a step that
persists (parts of) images.
The caching issue is very important. A separate bunch of code with a
default caching strategy would probably be useful.
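A minimal sketch of such a default caching strategy, with invented names
(`Step`, `CacheStep` -- nothing here is real GIMP code): the cache-box
persists the upstream result so downstream pulls do not recompute the whole
chain, and `invalidate()` forces a refresh.

```python
class Step:
    """A pull-model step: asks upstream for input, applies its function."""
    def __init__(self, fn, upstream=None):
        self.fn, self.upstream = fn, upstream

    def pull(self):
        src = self.upstream.pull() if self.upstream else None
        return self.fn(src)

class CacheStep(Step):
    """Default strategy: keep the last upstream result until invalidated."""
    def __init__(self, upstream):
        super().__init__(lambda x: x, upstream)
        self._cached = None

    def pull(self):
        if self._cached is None:
            self._cached = self.upstream.pull()
        return self._cached

    def invalidate(self):
        self._cached = None

calls = []  # counts how often the expensive upstream actually runs
expensive = Step(lambda _: calls.append(1) or "rendered")
cache = CacheStep(expensive)
view = Step(lambda img: img.upper(), cache)
first, second = view.pull(), view.pull()   # second pull is served from cache
```

A "layer" would then just be an external label attached to one of these
cache steps, exactly as described above.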

 3. Some black boxes (it seems to me) house genetically engineered goats (large ones)
that pixel process. As an aside, these GEGL boxes are (hopefully)
the only interior places where some sort of physical
implementation of pixels matters -- how many bits they have, how
those bits relate to color and transparency components, what sort
of pixel and pixel area operations are capable of mapping a
region in one cache-box to a region in another. To pupus (the
step manager) it is just a surface -- it is neither capable of,
nor interested in, the manipulation of the surface 'fabric.'
Ideally, the step manager is a stand-alone library which can be used by
other applications as well (think: a synthesizer-like app using
independent sound processing boxes for sequencer, 303, filters etc.).
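The payload-agnostic step manager described here might look like the sketch
below (all names are invented for illustration): because the manager never
inspects the data it moves, the same scheduler drives an "audio" chain as
readily as an image chain.

```python
from collections import deque

class StepManager:
    """Runs steps in dependency order; payloads are opaque to the manager."""
    def __init__(self):
        self.steps, self.edges = {}, {}  # name -> fn, name -> downstream names

    def add(self, name, fn, downstream=()):
        self.steps[name] = fn
        self.edges[name] = list(downstream)

    def run(self, start, payload):
        results = {}
        queue = deque([(start, payload)])
        while queue:
            name, data = queue.popleft()
            out = self.steps[name](data)   # the only thing a step must do
            results[name] = out
            for nxt in self.edges[name]:
                queue.append((nxt, out))
        return results

# The same manager drives a toy "synthesizer" chain: oscillator -> gain.
mgr = StepManager()
mgr.add("osc", lambda _: [0.0, 1.0, 0.0, -1.0], ["gain"])
mgr.add("gain", lambda samples: [s * 0.5 for s in samples])
out = mgr.run("osc", None)
```

Nothing in `StepManager` knows whether the lists are samples or scanlines,
which is the property that would make it a reusable stand-alone library.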

 So what does Gimp 2.0 -- the application -- do? Refreshingly (I
 think) not much. It configures a step manager of a particular
 arrangement that makes sense for the profile of requirements that
 some sort of user interface presents, authenticates that the
 version of user interface is capable of talking with the version
 of step manager present, (mumble, mumble, other kinds of
 sanity/compatibility checks) then steps back and lets the
 ensemble articulate. 
[...]

 It is this view
 that makes Gimp 2.0 largely a collection of shared object code,
 each shared object being a thing that a (likely) small group of
 individuals can dedicate themselves to and get to know
 particularly well, and there will be less of a need for someone
 to be knowledgeable about the Whole Massive Thing (as in Gimp
 1.x) (the shared object may even be general enough to export
 to some other application, unchanged).
Let's talk about distributed objects! Using this framework, it would be
easy to implement a transport mechanism via some high-speed network and
interconnect several computers to do heavy scientific/industrial image
processing. The "pipelines" connecting the black boxes should be
pluggable.
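One way to picture a pluggable pipeline, as a hedged sketch (names invented,
not a real transport layer): the connection between two boxes is an object
with a single `deliver` method, and a serializing transport stands in for a
network hop without either box noticing the difference.

```python
import json

class LocalTransport:
    """In-process pipeline: just call the step directly."""
    def deliver(self, step, payload):
        return step(payload)

class SerializingTransport:
    """Stands in for a network hop: round-trips the payload through JSON,
    the way a wire transport would serialize it between machines."""
    def deliver(self, step, payload):
        wire = json.dumps(payload)      # "send" over the pipeline
        received = json.loads(wire)     # "receive" on the far machine
        return step(received)

def invert(pixels):
    """A trivial black box used for the demonstration."""
    return [255 - p for p in pixels]

local = LocalTransport().deliver(invert, [0, 128, 255])
remote = SerializingTransport().deliver(invert, [0, 128, 255])
```

Since both transports satisfy the same one-method contract, swapping a
high-speed network transport underneath a running graph is a local change.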

 What concerns me is this style of thinking places a great deal of
 importance on coordinating interfaces and internal protocols; this
 is not a topic upon which UI, scheduling, image-processing, and
 specialty black-box architects (a kind of Gimp 2.0 third party
 contributor) should drift too far apart, reinventing wheels that are
 rolling in other parts. 

 The Monolithic Gimp of 1.x
 fame, being monolithic, permitted some laxity in the design of internal
 interfaces; distributing the application over autonomous processes
 requires a little more formality in coordination.
I think we should set great store by software engineering in GIMP 2.0!
Such a complex system of several almost-independent modules needs a
great deal of design consideration.

I recommend looking into David Hodson's Gimpeon at
http://www.ozemail.com.au/~hodsond/gimpeon.html
he's already figured out how to abstract such a system and I guess we
could get at least some nice ideas from his work.

Re: Pupus pipeline: what Adam has been doing, etc. etc.

2000-12-26 Thread David Hodson

Tino Schwarze wrote:

 I recommend looking into David Hodson's Gimpeon at
 http://www.ozemail.com.au/~hodsond/gimpeon.html
 he's already figured out how to abstract such a system and I guess we
 could get at least some nice ideas from his work.

Just remember that Gimpeon is intended for automatically processing
sequences of images, rather than working on a single image. (It's also
very much a work in progress, nowhere near a finished product!) It
uses some of the ideas suggested for 2.0, but they're in a slightly
different context. (And it's written in C++, which I know some of
you won't like - but when I drop back to straight C to work on the
Gimp, it's sooo frustrating!)

Just to expand a little - Gimpeon is based on film effects work, where
the workflow (using the tools I'm most familiar with) is something like
this:

* get the source image sequences, and make reduced resolution versions
of them. (You generally can't work efficiently at full resolution.)

* set up the basic processing sequence. This is usually done at low
resolution, looking at one frame, but also involves stepping through
the sequence (to check animated effects) and switching to full resolution
(to check fine detail).

* once everything is set, automatically generate the full sequence
at low resolution. If this looks OK, generate the full sequence at
high resolution.

* wait for the effects director to tell you to do it again. (Hah!)
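The proxy workflow above can be sketched roughly like this (all functions
here are invented stand-ins, not Gimpeon code): tune the processing graph
against a reduced-resolution frame, then replay the very same graph over the
full sequence.

```python
def downsample(frame, factor=2):
    """Stand-in for real resolution reduction: keep every Nth sample."""
    return frame[::factor]

def apply_graph(graph, frame):
    """Run a frame through an ordered list of processing steps."""
    for step in graph:
        frame = step(frame)
    return frame

# The "effect" being tuned: a simple clamped brighten.
graph = [lambda f: [min(255, p + 30) for p in f]]

sequence = [[10, 20, 30, 40], [50, 60, 70, 80]]       # two tiny "frames"
preview = apply_graph(graph, downsample(sequence[0])) # fast check, one frame
final = [apply_graph(graph, f) for f in sequence]     # full-res batch pass
```

The point of the design is that `graph` is the single source of truth: the
low-res preview and the high-res batch render cannot drift apart.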

Gimpeon appears to use "boxes and lines" as its main UI component,
but I'm actually planning to provide a better interface on top of
that. The user will always be able to directly edit the processing
graph, but they will generate it in the first place by applying
filters to images - the graph gets built behind the scenes, much
like it would be (perhaps) in the Gimp.


Just as an aside - one of the main annoyances I have with GTK is
that programmatically setting a widget's value triggers its changed
signal. This makes doing a clean Model/View/Controller design very messy!
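The annoyance can be reproduced without GTK at all. This plain-Python sketch
(an invented `Widget` class, deliberately not the GTK API) shows why MVC gets
messy -- a controller updating the view from the model gets its own callback
re-fired -- and the usual workaround of blocking notification during
programmatic sets.

```python
class Widget:
    def __init__(self):
        self.value, self.on_changed, self._blocked = None, None, False

    def set_value(self, v):
        self.value = v
        if self.on_changed and not self._blocked:
            self.on_changed(v)      # fires even for programmatic sets

    def set_value_quietly(self, v):
        """The workaround MVC code has to bolt on: suppress notification."""
        self._blocked = True
        try:
            self.set_value(v)
        finally:
            self._blocked = False

events = []
w = Widget()
w.on_changed = events.append
w.set_value(1)          # user-style set: callback fires
w.set_value_quietly(2)  # model-driven set: callback suppressed
```

GTK itself offers the same escape hatch via signal-handler blocking; the
sketch just shows the shape of the problem.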


-- 
David Hodson  --  [EMAIL PROTECTED]  --  this night wounds time



Re: Pupus pipeline: what Adam has been doing, etc. etc.

2000-12-23 Thread Garry R. Osgood

Hi.

This also includes "The Future of Gimp" commentary.

Tino Schwarze wrote:

  Hi Adam,

  On Thu, Dec 21, 2000 at 11:15:17PM +, Adam D. Moss wrote:
   How would the "pupus" functionality be directly exposed to users?  The
   answer is that it most assuredly WOULD NOT.  I do not advocate, in fact
   I ABHOR the idea that the user should end up drawing a little tree
   of boxes connected with wires.  That's your view as a programmer
  I want it! Hey, this is interactive Script-Fu!


I agree with Adam. It so happens that the directed graph
abstraction which is serving his thinking about scheduling
visually coincides with a user interface presentation where
upstream branches of layers composite into common result
sets. This happens to be two places where the abstract tree data
type has taken root in the fecund (if imaginary) soil of Gimp
2.0. Trees happen to furnish a nice framework to think about the
shape of a great many tasks, so this coincidence is common (and
an ongoing source of confusion).

If we dedicate the Gimp 2.0 core to nothing other than staging
(various kinds of) compositing of result sets from different
kinds of sources (ideally, not all bitmaps), then the user interface
simply falls away from view. That is not to say that user
interfaces are unimportant -- but they are (as much as possible)
independent objects of design, each with their own concerns and
issues. So Gimp is not married to the GTK widget set; if some
(or many) write a Qt interface, bless them. Some few others may
write a native Win32 interface -- bless them too. Script
languages are interfaces as well; I should think that marrying
Gimp with Perl or Scheme or Script Language X should not be
nearly as hard a task in the 2.x series as it had been in 1.x.

What may be commonly said about "user interfaces" is that they
are *any* member of the class (including your favorite script
interpreter) capable of originating work requests with the 'step
manager' (handing over references to typographic or structured
vector graphic or pixel graphic content) and querying
"presentation" black boxes about the result (if any).

It may be a refreshing exercise to inventory the black
box bestiary presented by Adam thus far:

1. Some are capable of interpreting references to some sort of
   graphic content as, say, PNG files in a file system.

2. Some (cache-boxes) are capable of persisting "upstream"
   image presentation for "downstream" black boxes. Some other
   component of the application might choose to associate with
   such cache-boxes the notion of "layer", but that is the business
   of that more or less external (and, me thinks, user-interface
   housed) component. To the step manager, it is a step that
   persists (parts of) images.

3. Some black boxes (it seems to me) house genetically engineered goats (large ones)
   that pixel process. As an aside, these GEGL boxes are (hopefully)
   the only interior places where some sort of physical
   implementation of pixels matters -- how many bits they have, how
   those bits relate to color and transparency components, what sort
   of pixel and pixel area operations are capable of mapping a
   region in one cache-box to a region in another. To pupus (the
   step manager) it is just a surface -- it is neither capable of,
   nor interested in, the manipulation of the surface 'fabric.'

4. Some black boxes know how to stream to external representations:
   perhaps they embed connections to an X server, or something that
   knows how to write (particular kinds of) image files.
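The four kinds of box might be caricatured as follows (a sketch with
invented class names, one per entry in the bestiary above), wired into a
single chain:

```python
class LoaderBox:                    # 1. interprets a reference to content
    def __init__(self, files): self.files = files
    def produce(self, ref): return list(self.files[ref])

class CacheBox:                     # 2. persists the upstream result
    def __init__(self): self.stored = None
    def keep(self, pixels):
        self.stored = pixels        # a "layer", if some UI calls it that
        return pixels

class GeglBox:                      # 3. the only place pixel format matters
    def process(self, pixels): return [255 - p for p in pixels]

class SinkBox:                      # 4. streams to an external target
    def __init__(self): self.written = []
    def write(self, pixels): self.written.append(pixels)

files = {"goat.png": [0, 100, 255]}   # a toy "file system"
loader, cache, gegl, sink = LoaderBox(files), CacheBox(), GeglBox(), SinkBox()
sink.write(gegl.process(cache.keep(loader.produce("goat.png"))))
```

Note that only `GeglBox` looks at pixel values; the loader, cache and sink
handle the payload purely as an opaque surface, matching the taxonomy above.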

So what does Gimp 2.0 -- the application -- do? Refreshingly (I
think) not much. It configures a step manager of a particular
arrangement that makes sense for the profile of requirements that
some sort of user interface presents, authenticates that the
version of user interface is capable of talking with the version
of step manager present, (mumble, mumble, other kinds of
sanity/compatibility checks) then steps back and lets the
ensemble articulate. If the profile of requirements calls for an
interactive Gimp, then some of those black boxes will be
display-capable. If the profile of requirements is a batch
orchestrated by a perl script, then only file reading/writing and
image processing black boxes will be required. It is this view
that makes Gimp 2.0 largely a collection of shared object code,
each shared object being a thing that a (likely) small group of
individuals can dedicate themselves to and get to know
particularly well, and there will be less of a need for someone
to be knowledgeable about the Whole Massive Thing (as in Gimp
1.x) (the shared object may even be general enough to export
to some other application, unchanged).

What concerns me is this style of thinking places a great deal of
importance on coordinating interfaces and internal protocols; this
is not a topic upon which UI, scheduling, image-processing, and
specialty black-box architects (a kind of Gimp 2.0 third party
contributor) should drift too far apart, reinventing wheels that are
rolling in other parts.

The Monolithic Gimp of 1.x fame, being monolithic, permitted some
laxity in the design of internal interfaces; distributing the
application over autonomous processes requires a little more formality
in coordination.

Re: Pupus pipeline: what Adam has been doing, etc. etc.

2000-12-22 Thread Steinar H. Gunderson

[lots of interesting stuff and ASCII art not quoted]

Does this mean we can do something like xRes, i.e. first process a
`preview' in low-res (to show to the user) and then, as an idle task (or
possibly even scheduled for later work), do the full-res version?

/* Steinar */
-- 
Homepage: http://members.xoom.com/sneeze/



Re: Pupus pipeline: what Adam has been doing, etc. etc.

2000-12-22 Thread Tino Schwarze

Hi Adam,

On Thu, Dec 21, 2000 at 11:15:17PM +, Adam D. Moss wrote:
 How would the "pupus" functionality be directly exposed to users?  The
 answer is that it most assuredly WOULD NOT.  I do not advocate, in fact
 I ABHOR the idea that the user should end up drawing a little tree
 of boxes connected with wires.  That's your view as a programmer
I want it! Hey, this is interactive Script-Fu! There should definitely
be an option to view and manipulate the trees. The possibilities are
endless. (e.g.: change a Dynamic Text and have changes propagate
automatically; very nice for batch jobs - perform task once, launch
"feed-and-save-lots-of-images" plugin and tell it what files to load and
where to save them; etc. etc.)

BTW: I thought about such a concept myself, though not in much detail.
This is a MUST for GIMP 2.0. It's being written from scratch anyway, so
we could back it with some heavy tech like Adam's.

Bye, Tino.

-- 
 * LINUX - Where do you want to be tomorrow? *
  http://www.tu-chemnitz.de/linux/tag/
 3rd Chemnitzer Linux-Tag from 10th to 11th March 2001 
 at Chemnitz University of Technology!



Re: Pupus pipeline: what Adam has been doing, etc. etc.

2000-12-22 Thread Marcelo de G. Malheiros

  Hi, all.

 I want it! Hey, this is interactive Script-Fu! There should definitely
 be an option to view and manipulate the trees. The possibilities are
 endless. (e.g.: change a Dynamic Text and have changes propagate
 automatically; very nice for batch jobs - perform task once, launch
 "feed-and-save-lots-of-images" plugin and tell it what files to load and
 where to save them; etc. etc.)

  Just to throw some info on this truly exciting idea for Gimp, I ask
you dudes to take a look at Khoros, which is a package mainly aimed at
image processing. It uses something similar to such boxes, called
'glyphs'. It even has a visual editor to construct dataflows, called
Cantata.

  It's a pity: Khoros used to be free (in the beer sense, anyway), but
now you have to pay for it. Its website is rather short on
screenshots, but you can take a look at:

http://www.khoros.com/
http://www.khoros.com/khoros/cantata_desc.html

  From what I remember from an early version that I used at
university, each box runs as a separate Unix process with the data
passed in a standard file format through pipes. I guess this could be
improved considerably performance-wise...

  BTW, their white paper has a few more details and images:

http://www.khoros.com/ideas/technology/cantata.pdf

  Regards,
Marcelo Malheiros

-- 
%!PS % Marcelo de Gomensoro Malheiros [EMAIL PROTECTED] % USE LINUX
/d{def}def/r{rotate}d/t{translate}d/F{0 0 moveto 2 0 lineto 2 0 t stroke}d
/X{dup 0 eq{}{dup 1 sub X -90 r Y F -90 r pop} ifelse}d/Y{dup 0 eq{}{dup 1
sub 90 r F X 90 r Y pop} ifelse}d 220 300 t 15 X pop showpage



Re: Pupus pipeline: what Adam has been doing, etc. etc.

2000-12-22 Thread David Hodson

"Adam D. Moss" wrote:

 The somewhat-simplified idea common to both proposals is that a
 list/tree of little black boxes is set up, where images get
 fed into the tree at the bottom, get chewed up by the black boxes
 through which they are sequentially sent, and at the end of
 the line comes a result.

Adam,

  this all sounds very cool and I'd love to see what code you've
got; keep in touch and let us know how things are progressing.

  Do you have any experience with commercial systems that work
this way? It's getting to be popular for 3d as well as 2d stuff.
By the way, don't get too upset about the user drawing lines
between boxes - it may not be the best interface, but it can be
very useful to be able to access that level, and some of the top
effects systems use nothing else.

  You might want to take a look at my (very alpha) efforts at:

  http://www.ozemail.com.au/~hodsond/gimpeon.html

  Like you, I'm far too short of time to work on all this stuff.
Been busy bugfixing Gimp and working on some plugins recently,
but I might be able to find some time over the holidays to get
some more work done on it.

-- 
David Hodson  --  [EMAIL PROTECTED]  --  this night wounds time



Pupus pipeline: what Adam has been doing, etc. etc.

2000-12-21 Thread Adam D. Moss


Right.  If anyone knows or remembers who I am, they might
wonder what I've been up to for the past six months
since GimpCon 2000.  =)  If so, thanks for caring -- sit
back and I'll tell you!

Primarily, I'll admit, I've been busy with my super-mundane
day job, and diffused much of my remaining time with scattered
hackings.

/(** %   Oh, bad dog.  Really.  Quite awful.
 '  `

GIMPwise however, apart from minor ambient maintenance and
musings I have been busy with two things:

1) "pquant", a terrifying colour-reduction algorithm probably
doomed to perpetual experimentation.

2) "pupus", an image-processing scheduler and propagation
framework.  This is squarely aimed at GIMP 2.0.

I'd mostly like to explain what "pupus" is about.  The name is a
working title and is short for "PUll-PUSh".  The project has
grown out of the ideas I hashed together in the airport waiting
to fly to GimpCon 2000 and attempted to present for about
eight hours (or four minutes when you factor out the rabbit-in-
headlights panicking; never fear that I shall do a presentation
again).

If you're not familiar with the original proposal then shame
on you!  I *slap* you!  Yet I cannot blame you, and it's okay because
things have changed a great deal.
  ,
'()') baaa  That's a sheep to make sure you're still awake.
 || ||  She'll be keeping an eye on you.  Be wary.

The somewhat-simplified idea common to both proposals is that a
list/tree of little black boxes is set up, where images get
fed into the tree at the bottom, get chewed up by the black boxes
through which they are sequentially sent, and at the end of
the line comes a result.  If you think of the black boxes as
analogous to plug-ins or compositing operators then you'll see
that you've basically got a generalized way for a program to
conceptually project a layer-stack, spin an image around and
blur it -- whatever.  Have the right black boxes at hand, connect
them up just /so/, push in the desired source image(s) and wait
for your beautiful beautiful output to spew forth from the end
of the chain.
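The chain-of-black-boxes idea reduces to something like this sketch (the
operators are invented stand-ins for "project a layer-stack" and "spin an
image around", not real plug-ins): push an image in at one end, let each box
chew on it, collect the result at the other.

```python
def layer_flatten(layers):
    """Stand-in compositing operator: average the layers column by column."""
    return [sum(col) // len(col) for col in zip(*layers)]

def rotate_180(pixels):
    """Stand-in geometry operator: reverse a one-row image."""
    return pixels[::-1]

# Connect the boxes up just /so/ ...
chain = [layer_flatten, rotate_180]

# ... push in the desired source image(s) ...
image = [[10, 20, 30], [30, 40, 50]]   # two tiny one-row "layers"
result = image
for box in chain:
    result = box(result)
# ... and the output spews forth from the end of the chain.
```

Everything interesting about pupus lives in the questions that follow: who
owns `result` between boxes, and in what order the boxes fire.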

In reality the devil is, as always, in the detail.

What, exactly, are we feeding into these black boxes?  Whence?
By what mechanism?  Who owns these 'images' that we're
transferring around?  What constitutes a black box, both physically
and in terms of the interfaces used to poke it with?  How would
we, say, tell a 'blur' box what radius of blur we desire?

How do we know when we've connected a black box's inputs and
outputs 'right'?  Can we set up a cyclic graph within the system?
What happens if we do?  In what order do things happen?  How would we
facilitate incremental rendering?  Can we retroactively revise
data already pushed into the pipeline?  Who is the man behind the
curtain?  How can we improve the user experience?  How do you stop
this crazy thing?
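On the cyclic-graph question in particular: a scheduler would typically
reject cycles up front. A hedged sketch of how the check might look (the
`has_cycle` helper is invented; the technique is standard depth-first search
with a three-state coloring):

```python
def has_cycle(edges):
    """edges: dict mapping each box to the list of boxes it feeds."""
    color = {n: 0 for n in edges}   # 0 = unvisited, 1 = in progress, 2 = done

    def visit(n):
        color[n] = 1
        for m in edges[n]:
            # Reaching an in-progress node means we followed a back-edge.
            if color[m] == 1 or (color[m] == 0 and visit(m)):
                return True
        color[n] = 2
        return False

    return any(color[n] == 0 and visit(n) for n in edges)

acyclic = {"load": ["blur"], "blur": ["save"], "save": []}
feedback = {"load": ["blur"], "blur": ["load"]}
```

Whether a pipeline rejects cycles or (as hinted later) deliberately supports
latches and feedback loops is then a policy decision, not an accident.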

 _ ||_||
8: ) _  )~The roadkill pig of puzzlement knows not.
 ~ || ||

The list is much longer than that.

Well, now I have a revised design and honest-to-goodness embryonic
prototype code, taking into account comments and suggestions from
GimpCon 2000 and various ideas from the intervening six months.

In difference to the earlier proposal:

1) We're not going crazy on the resource-contention-avoidance malarkey.
Hopefully that just drops out as a natural side-effect of the resource
ownership model.  There is no explicit resource-lockdown upon black-box
startup.

2) This time we support, nay, encourage in-place rendering and minimized
copying where plausible.

3) We're a lot friendlier towards black boxes who can't/won't work
on a 'regions on demand' basis.

4) Aborting a task pipeline is easier.

5) Changes to geometry (width, height, offsetting) figure into the
grand scheme.

6) We can spontaneously invalidate image regions from upstream while
they are still being processed downstream.

7) Latches and feedback-loops within the system might be facilitated
with a little more effort.  Some of the possibilities seemed too cool
to pass up.
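Point 6 can be sketched with a deliberately tiny model (all names invented):
upstream marks tiles dirty while downstream is still pulling, and the
scheduler re-queues only the invalidated regions rather than restarting the
whole pipeline.

```python
class TileSource:
    """A tiled upstream result whose regions can be revised mid-flight."""
    def __init__(self, tiles):
        self.tiles = dict(tiles)
        self.dirty = set(self.tiles)   # everything starts un-pulled

    def invalidate(self, tile_id, value):
        self.tiles[tile_id] = value    # upstream revised this region
        self.dirty.add(tile_id)

    def pull(self, tile_id):
        self.dirty.discard(tile_id)
        return self.tiles[tile_id]

src = TileSource({0: "v1", 1: "v1"})
seen = [src.pull(0)]            # downstream consumes tile 0 ...
src.invalidate(0, "v2")         # ... upstream revises it mid-flight ...
seen.append(src.pull(1))        # ... downstream keeps going on tile 1
redone = sorted(src.dirty)      # scheduler sees only tile 0 needs redoing
seen.append(src.pull(0))        # the re-pull observes the revised data
```

The dirty-set is the whole trick: invalidation is cheap, and downstream work
already done on still-valid tiles is never thrown away.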

. o O () O o . o O () O o . o O () O o . 

As the implementation stands,

1) We are toolkit-agnostic.  At the core we deal with tasks and
resources, not a user-interface.

2) We are transport-agnostic.  Only one transport-type is implemented
so far and even then not as cleanly as I'd like, but in theory we
can quite easily invoke these 'black boxes' (called 'steps' within the
code) on remote machines via CORBA or Convergence's GCim (?).

3) Black boxes are instantiated from factories implemented as .so files.
These are dynamically discovered at runtime, and are currently
dynamically linked to the main application at discovery-time, but (in
theory...) can trivially be dynamically linked to an alternative
transport shim and hence run from within a different address space or
indeed a different physical machine.
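A Python analogue of point 3, offered as a hedged sketch (a registry
decorator standing in for dlopen()ed .so files queried for a factory symbol;
every name here is invented): the step manager sees only the registry, never
the concrete classes behind it.

```python
registry = {}

def factory(name):
    """Register a step factory under a name, as runtime discovery would."""
    def wrap(cls):
        registry[name] = cls
        return cls
    return wrap

@factory("invert")
class InvertStep:
    def run(self, pixels): return [255 - p for p in pixels]

@factory("threshold")
class ThresholdStep:
    def __init__(self, cutoff=128): self.cutoff = cutoff
    def run(self, pixels): return [255 if p >= self.cutoff else 0 for p in pixels]

# The step manager instantiates by name only:
step = registry["threshold"]()
out = step.run([10, 200])
```

Because instantiation goes through the registry, swapping a factory for a
transport shim that proxies to another address space is invisible to callers.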

4) A few black boxes have been written for testing purposes.  All
interfaces are continually in flux and are slowly being pared down to
their essentials.

5) 

Re: Pupus pipeline: what Adam has been doing, etc. etc.

2000-12-21 Thread Lourens Veen

That sounds good, very good. Does that mean that I will get my layer
tree instead of a layer stack as well? (From your mail I gather that it's
possible, but depends on the UI implementation.) Being a programmer I
wouldn't object to the connect-boxes-with-lines model; perhaps it should
still be a possibility...


Lourens