On Thursday 01 November 2007 16:35:05 Bernd Jendrissek wrote:
> On 11/1/07, Peter TB Brett <[EMAIL PROTECTED]> wrote:
> > On Thursday 01 November 2007 13:42:08 Steve Meier wrote:
> > > This allows the application to ask the page to draw itself, and the page
> > > to ask the complex components, segments, arcs, boxes, text etc. to draw
> > > themselves, instead of the application having to go through the page,
> > > get each item, and then do the drawing.
> >
> > On the other hand, I *don't* like this, because it violates the
> > Model-View-Controller pattern.  I would like it to be possible to have
> > multiple View implementations which can simultaneously use the same
> > Model.
> >
> > In my world, each renderer should keep its own look-up table mapping
> > drawable types to rendering functions.
>
> I have to agree with you here, Peter: MVC rocks.  How, though, do you
> think gschem should deal with COMPLEX or TEXT objects, with the
> prim_objs inside?  The respective renderers for these would have to
> know about the substructure and delegate the real work to the
> renderers for each of these sub-objects.

Exactly.  Example for a Cairo renderer:

The renderer connects to the page structure's "changed" signal.  When one of 
the objects in the page changes, or objects are added to or removed from the 
page, the "changed" signal fires with the changed object as data.  The 
renderer then invalidates that area of the view so that it is redrawn in the 
next draw cycle.
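
Roughly like this (a minimal C sketch; the bounds accessor, the Renderer 
struct and the callback signature are assumptions for illustration, not 
existing libgeda API):

#include <gtk/gtk.h>

typedef struct { gint x, y, width, height; } Bounds;

typedef struct {
  GtkWidget *drawing_area;   /* the view widget this renderer paints into */
} Renderer;

/* Stand-in for a real bounds query on the changed object (assumption). */
static void
eda_object_get_bounds (gpointer object, Bounds *bounds)
{
  (void) object;
  bounds->x = 0;  bounds->y = 0;  bounds->width = 100;  bounds->height = 100;
}

/* Fired whenever an object on the page is added, removed or modified. */
static void
on_page_changed (gpointer page, gpointer object, gpointer user_data)
{
  Renderer *renderer = user_data;
  Bounds b;

  (void) page;
  eda_object_get_bounds (object, &b);

  /* Invalidate only the damaged region; the widget repaints it on the
   * next draw (expose) cycle. */
  gtk_widget_queue_draw_area (renderer->drawing_area,
                              b.x, b.y, b.width, b.height);
}

static void
renderer_attach (Renderer *renderer, gpointer page)
{
  g_signal_connect (page, "changed",
                    G_CALLBACK (on_page_changed), renderer);
}

Several such renderers could attach to the same page, each keeping its own 
invalidation state, which is the point of keeping the drawing out of libgeda.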

When a complex (or text) is rendered, it could be rendered to a texture rather 
than to the screen, and the texture then composited into the view when 
needed.  This option would be available to a renderer under a true MVC 
scheme.  Drawing techniques that suit modern GPU-accelerated architectures 
are very different from those suited to an older CPU/framebuffer system.
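
For instance (a sketch only; the Complex struct, its prim_objs list and the 
draw_primitive() dispatch stand in for whatever the renderer's per-type 
look-up table would provide):

#include <glib.h>
#include <cairo.h>

typedef struct {
  GList *prim_objs;            /* sub-objects: lines, arcs, boxes, ... */
  int width, height;
  cairo_surface_t *cache;      /* NULL until first rendered */
} Complex;

/* Placeholder for the per-type dispatch (the renderer's look-up table);
 * here it just strokes a small mark so the sketch is self-contained. */
static void
draw_primitive (cairo_t *cr, gpointer prim)
{
  (void) prim;
  cairo_rectangle (cr, 1.0, 1.0, 8.0, 8.0);
  cairo_stroke (cr);
}

static void
render_complex (cairo_t *cr, Complex *cobj, double x, double y)
{
  if (cobj->cache == NULL) {
    /* Render the sub-objects once into an offscreen surface compatible
     * with the current target (image, X pixmap, GL, ...). */
    cairo_t *cache_cr;
    GList *l;

    cobj->cache =
      cairo_surface_create_similar (cairo_get_target (cr),
                                    CAIRO_CONTENT_COLOR_ALPHA,
                                    cobj->width, cobj->height);
    cache_cr = cairo_create (cobj->cache);
    for (l = cobj->prim_objs; l != NULL; l = l->next)
      draw_primitive (cache_cr, l->data);
    cairo_destroy (cache_cr);
  }

  /* Composite the cached texture into the view at the complex's origin. */
  cairo_set_source_surface (cr, cobj->cache, x, y);
  cairo_paint (cr);
}

A real renderer would also drop and re-create the cached surface whenever the 
complex's contents change, so the texture keeps tracking the model.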

> Would it make sense to be able to attach observers to the sub-objects
> inside a COMPLEX or a TEXT?

Don't see why not.

> Actually I think TEXT objects are a bit grotty here.  It should be gschem,
> not libgeda, that does the character->glyph font mapping.  It isn't like
> PCB, where the text gets rendered onto an electrically significant
> medium.

I agree; the EDA library should specify where and at what size the text should 
be rendered, and leave the nitty-gritty of how to draw it to the renderer.
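
Something along these lines (a sketch; the Text struct fields are assumptions, 
and cairo's "toy" text API merely stands in for whatever glyph handling the 
renderer chooses):

#include <cairo.h>

typedef struct {
  const char *string;   /* what to draw, from the model */
  double x, y;          /* anchor position, from the model */
  double size;          /* text size, from the model */
} Text;

static void
render_text (cairo_t *cr, const Text *text)
{
  /* Font selection and character->glyph mapping live entirely in the
   * renderer; the model only says where, what and how big. */
  cairo_select_font_face (cr, "sans-serif",
                          CAIRO_FONT_SLANT_NORMAL,
                          CAIRO_FONT_WEIGHT_NORMAL);
  cairo_set_font_size (cr, text->size);
  cairo_move_to (cr, text->x, text->y);
  cairo_show_text (cr, text->string);
}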

                       Peter

-- 
Peter Brett

Electronic Systems Engineer
Integral Informatics Ltd
