Hi Andrea, just a quick follow-up ... I viewed the priorities as:
- feature collection (where most of the fun is, including the schema 
used by the feature collection)
- support for the store/status use case (where content is uploaded to an 
FTP site, and you can check on the status of a long-running process)

Other comments inline.
> - the transmuters API seems overkill, there is a need to create a
>    class per type handled, class level javadoc missing.
>    Justin has implemented on his notebook a tentative replacement that
>    does the same job as the existing 12 classes in 3 simple classes
>    and still leaves the door open to handle raster data (whilst the
>    current complex data handler simply assumes data to be parsed is XML,
>    but the input could be anything such as a geotiff or a compressed
>    shapefile)
>   
What is the transmuters API you are talking about here?
> AVOID THE MIDDLE MAN IF POSSIBLE
>   
This was a nice-to-have; your idea seems fine. Please keep in mind the 
"simple chaining" examples from the WPS spec.
> SCALE UP WITH REAL REMOTE REQUESTS
> If the request is really remote, we have to be prepared and parse whatever is 
> coming in.
>   
We may need to make a new kind of API here for the code that is in the 
uDig project. I am thinking along the lines of Jesse's "provider" 
classes. So we could have a FeatureCollectionProvider that is passed 
into a Process; if its answer is a FeatureCollection, perhaps it could 
give us a FeatureCollectionProvider which we could hook up to the next 
process in the "chain". If you wanted to "wrap" this with something that 
would lazily save the contents to memory or disk in order to prevent 
"reprocessing", that would be transparent. The providers could be 
considered the "edges" and the processes the "nodes" in a graph producing 
the answer.
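
To make the idea concrete, here is a rough sketch. Only the name 
FeatureCollectionProvider comes from the discussion above; everything else 
(class names, method names, the caching behaviour) is made up for 
illustration, using the GeoTools FeatureCollection (raw type for brevity) 
as the payload:

  import java.io.IOException;
  import org.geotools.feature.FeatureCollection;

  /** The "edge" between two processes: hands out the data lazily. */
  interface FeatureCollectionProvider {
      FeatureCollection features() throws IOException;
  }

  /**
   * Optional wrapper: remembers the first answer so downstream processes
   * can read it again without "reprocessing" (could spill to disk instead
   * of holding it in memory).
   */
  class CachingFeatureCollectionProvider implements FeatureCollectionProvider {
      private final FeatureCollectionProvider delegate;
      private FeatureCollection cached;

      CachingFeatureCollectionProvider(FeatureCollectionProvider delegate) {
          this.delegate = delegate;
      }

      public synchronized FeatureCollection features() throws IOException {
          if (cached == null) {
              cached = delegate.features();
          }
          return cached;
      }
  }

A chained process would be handed the provider rather than the collection 
itself, so the engine can decide whether to stream or cache at each edge.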
> streaming parser and just scan over the remote input stream once.
> We could create a marker interface to identify those processes and act
> consequently.
>   
The processes have a data structure describing their parameter 
requirements; it includes a Map that you can use for hints like what you 
describe. So you could have a process that expects a feature collection, 
and you could include a key in the metadata map that is true if the 
feature collection will be used more than once.
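
For example (the key name "usedMoreThanOnce" below is something I am 
making up for illustration, not an existing constant):

  import java.util.HashMap;
  import java.util.Map;

  public class FeatureCollectionHint {
      /** Hypothetical hint key: the input feature collection is read more than once. */
      public static final String USED_MORE_THAN_ONCE = "usedMoreThanOnce";

      public static void main(String[] args) {
          Map<String, Object> metadata = new HashMap<String, Object>();
          metadata.put(USED_MORE_THAN_ONCE, Boolean.TRUE);

          // The dispatcher could look at this hint and either stream the remote
          // input (scan it once) or parse and cache it for repeated use.
          boolean reuse = Boolean.TRUE.equals(metadata.get(USED_MORE_THAN_ONCE));
          System.out.println("cache the parsed input: " + reuse);
      }
  }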
> STORE=TRUE
> For long running processes it makes lots of sense to actually
> support storing. Yet just storing the result is kind of unsatisfactory,
> my guess is that most of the client code would be interested in
> being able to access the results of the computation using WMS/WFS.
>   
This is not documented in the spec; it refers to making results 
available on FTP sites. I could see doing this for processes that are 
scheduled to occur at a set time (to make a "daily" layer, for example). 
However, why not make a simple "publish" process that people can use at 
the end of their chain if they want the result made available via WFS?
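
Roughly something like this (the names below are illustrative only, not 
existing GeoServer API; a real implementation would register the result 
with the catalog as a new feature type):

  import java.io.IOException;
  import org.geotools.feature.FeatureCollection;

  /** Final step of a chain: make the result available as a WFS feature type. */
  interface PublishProcess {
      void publish(String layerName, FeatureCollection result) throws IOException;
  }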
> Well, that's it. Thoughts?
The per-session stuff is only okay; the specification provides some 
guidance about how long a result will be available and so forth.
Jody

