Jody wrote on 06/21/2006 02:21:49 PM:

> Okay, I am with you so far. Right now uDig uses an extension point to
> list the various Renderers (i.e. "PortrayalService"), and we ask each
> service to produce a metric describing if, and how well, it can portray
> the content based on both content metadata (i.e. what) and user-supplied
> metadata (aka style) information.
>
> We have also used this design to work with external rendering systems
> (i.e. WMS) and external service chains (Feature Portrayal Service). So the
> approach sounds like the same one.

19117 doesn't provide any operations other than "portrayFeature()".  The
rest is all a data model which describes the portrayal service and the
portrayal specification (e.g. configuration data/rules for a particular
rendering).  It doesn't provide a means of calculating or returning a
metric.  On the other hand, it has more than enough information to drive a
user interface where the person can construct a configuration.
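To make that concrete, here is a rough Java sketch of the overall shape: one
operation plus a bag of configuration data.  None of these names come from
the spec (19117 defines a data model, not a Java API); they are just
placeholders for discussion.

import java.util.Map;

// Placeholder types for discussion only; invented names, not from 19117.
interface Feature { }                    // stands in for whatever feature type we use

/** The single operation 19117 describes. */
interface PortrayalService {
    void portrayFeature(Feature feature, PortrayalSpecification spec);
}

/** Pure data: the configuration for a particular rendering. */
class PortrayalSpecification {
    Map<String, String> parameters;      // e.g. line styles, colours, text sizes
    // the rules (feature selection + action) would also live here; see the rule sketch below
}

Note there is no metric anywhere in that picture, so anything like uDig's
"how well can you portray this" score would have to come from our side.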

> > 2] Renderers would be responsible only for:
> >       2a] connecting data with relevant PortrayalServices
> >       2b] configuring PortrayalServices (e.g. line styles...)
> >       2c] merging output of PortrayalServices
> >       2d] maintaining a z-order.
> >
> So most of what I am concerned about happens as part of 2a; the rest is
> kind of mechanical. Still, this all looks pretty cool.

See below.  Methinks 19117 concentrates on organizing information so a
person can make good choices about which data goes to what parameter after
what modification.
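If it helps, here is how I picture 2a-2d landing in code, reusing the
placeholder types from the sketch above.  Again, every name is invented for
illustration; neither 19117 nor uDig defines this class.

import java.util.ArrayList;
import java.util.List;

// Invented illustration of the 2a-2d responsibilities; not from 19117 or uDig.
class PortrayalRenderer {
    /** 2a + 2d: data wired to a portrayal service, kept in z-order (bottom first). */
    private final List<Layer> layers = new ArrayList<Layer>();

    void addLayer(Feature data, PortrayalService service) {
        layers.add(new Layer(data, service));               // 2a: connect data with a service
    }

    void render(PortrayalSpecification spec) {
        for (Layer layer : layers) {                        // 2d: walk the layers bottom-up
            layer.service.portrayFeature(layer.data, spec); // 2b: the spec carries the configuration
            // 2c: merging of the per-layer output would happen here in a real implementation
        }
    }

    private static class Layer {
        final Feature data;
        final PortrayalService service;
        Layer(Feature data, PortrayalService service) { this.data = data; this.service = service; }
    }
}

The interesting part (your 2a) is deciding *which* service to hand the data
to, and that is exactly where 19117 is more about informing a person than
computing a metric.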

> > 4] Specific renderers (e.g., SLD, Streaming) would be primarily
> > responsible for parsing their configuration files (e.g., mySLDmap.xml)
> > and would inherit most of #2.  Alternatively, we could define a
> > PortrayalConfigurator interface which could translate from a specific
> > format to the Portrayal framework.
> >
> The "alternatively" is what we got going on in uDig. We actually allow
> lots of kinds of configuration content to be defined by the user, and
> can wire things up as needed.

I think 19117 supports arbitrary wiring of data with its notion of "rules".
This is a rather abstract notion, where you can pick the language to write
the query in.  Most of the examples are in SQL, but mutants who like XQuery
can also play.  In fact, uDig's native mechanism should also be supported,
so long as you can serialize your "wiring" to/from a String.  As long as
data selection is separated from actions on that data, I think it should
fit.
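A minimal sketch of what I mean, with invented names (19117 just says the
rule carries a query in some declared language; the Java shape is mine):

// Invented illustration: the selection is a string in a declared query
// language, kept separate from the action applied to the matches.
class PortrayalRule {
    String queryLanguage;   // "SQL", "XQuery", or in principle "uDig-wiring"
    String query;           // serialized selection, e.g. "SELECT * FROM obs WHERE wind_speed > 30"
    String actionName;      // which portrayal operation to apply to the selected features
}

So uDig would only need a language name of its own and a way to round-trip
its wiring through that query string.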

> I am a bit worried that I can only help out on such things as they benefit
> uDig (short-term gains and visibility); even for work like the FM branch
> I am on volunteer rather than work time. So let's plan a bit first to
> ensure this makes sense.

Patrick, close your eyes.
Jody:
public static final String SLAVE_LABOR = "Grad student";
public static final String SWEAT_SHOP  = "University";
public static final String PROJECT_WORKER = SLAVE_LABOR + " in " + SWEAT_SHOP;
Ok, you can open them again, Patrick. :)

> I do need to wonder about 2a) - I work with a service "handle" to
> connect renderers to their data.
> It cost me more than $107 to come up with something similar to the above
> design, but I am afraid my personal budget is pinched
> on this one.

That was US$.  It's probably like a million Canadian$. :)

One way or the other, Patrick's getting a copy.  I think it makes sense
from all standpoints to see a worked example before we go too far.  If
Patrick's doing this for his thesis, he needs to understand it inside and
out.  Perhaps he would be willing to write up a wiki page with a couple of
usage examples prior to doing any coding?  Of course, we'd have to give him
time to read/digest it, then construct an example.  Part 1 of the wiki could
be an introduction to the data model so that everyone's on the same page
(19117 is a really simple spec: 3 packages/10 classes).  If PROJECT_WORKER
believes this to be out of scope, but would still like to pursue 19117/GT
(sounds like a car), I can see what I can come up with.  Of course, if
PROJECT_WORKER chooses a different path, it's not very worthwhile to make a
wiki about 19117.

I think that whatever we do, working through progressively more complex
examples (start with Points, end with wind barbs) is critical to
understanding both the capabilities and limitations of the framework, and
determining if it meets the wind barb needs.

> Match Making:
> - look up of PortrayalService - just based on portrayal service
> metadata? Or does the portrayal service have a chance to poke at the
> data before reporting back?
> - same question but based on style metadata - (configuration in your
> summary)

I think they may have a different concept of matchmaking, e.g. get a human
involved the first time you encounter a feature.  You can record what the
human does and reproduce it later on.  Take wind barbs: they can be
represented as vector components or as a magnitude and direction.  Either
way, the data probably arrives as arbitrary numeric feature attributes.  But
a person can look at the data and say "use the 'wind_dir' attribute as the
'direction' parameter and use 'wind_speed' as 'magnitude'."  19117 even has
hooks to remember which features have been mapped.  Most metadata is
human-readable and should be helpful if presented in a dialog box during the
"wiring process".

This is pretty much like SLD.  The human specifies exactly what fields are
rendered by which renderer, and records it in a way that the computer can
read it.  The main advantage of 19117 is that it's not limited to
predefined renderers.

> PortrayalService
> - do "rasters" go in and have data drawn on them? Or does an interface
> go in (like Graphic2D), or do rasters just come out? Concern here is for
> things like OpenGL or printing you cannot work with rasters effectively
> and need to use a Graphics2D interface.

Outside the scope of 19117, probably considered an implementation detail.
But you're going to have to write different portrayal service
implementations for each substantially different platform anyway.  You
can't use a Graphics2D to render to Java3D (or a POV-Ray file).  19117
wants to support aural and tactile output too.  Those have nothing in
common with a visual representation.  A portrayal service is like an
interface.  When you start talking specifics about a particular rendering
environment, you're talking about an implementation class.  A single
description provided by a portrayal service can apply to many diverse
environments.  It could even apply to a C implementation.  That allows
communities to define what they want to see and lets software produce that
portrayal by whatever means are available.

In short, we make it the way we want to make it.  We just have separate
implementations of a common "PortrayalCatalog" for each substantially
different system--*if* we support substantially different systems.
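Something like the following, reusing the placeholder PortrayalService from
the sketch above.  Class names are invented, and a real Java3D/OpenGL/aural
back end would of course be far more than two empty stubs.

import java.awt.Graphics2D;

// Invented illustration: same contract, one implementation class per
// substantially different rendering environment.
class Graphics2DPortrayalService implements PortrayalService {
    private final Graphics2D target;
    Graphics2DPortrayalService(Graphics2D target) { this.target = target; }

    public void portrayFeature(Feature feature, PortrayalSpecification spec) {
        // draw onto the Graphics2D here (Swing, printing, SVG back ends, ...)
    }
}

class AuralPortrayalService implements PortrayalService {
    public void portrayFeature(Feature feature, PortrayalSpecification spec) {
        // same contract, completely different output: speech/sonification
    }
}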

> Go-1
> - is this a straight alternative to the Go-1 abstractions of Graphic,
> Canvas, etc.?

No.  Portrayal doesn't address the notion of "Canvas" or "Graphic".  Our
implementation could *use* them, though.  Or could use Graphics2D.  Or
could use Raster.  Or...

I think examples would be the best way to explore these types of questions.
:)

Bryce

