> Let me modify my previous statement to:
> + WCS implements ContinuousQGC for requests which resample
> + WCS also implements DiscreteGPC for requests which subset with no
> resampling.
>
>   
Right on.
> These are only "approximate" implementations: Neither implementation
> supports evaluation at a single point unless you fudge by making a grid
> with one point in it, and the coverage spec doesn't include an
> evaluate(GM_PointGrid) signature.
>
> Sorry for being so pedantic.  In order for me to understand, I need a
> single vocabulary to describe everything from standards to implementation.
>
>   
I'm with you - I think the WCS spec guys were bloody-minded to refuse to 
take the ISO model into account (apparently they already knew everything 
you could possibly know about 2D grids ;-) )
This was before OGC was officially implementing ISO specs, but they 
should have at least fixed it in the intervening years. Volunteer time 
<> mandate, again.
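For what it's worth, the "one-point grid" fudge mentioned above is easy to sketch. Everything here (GridCoverage, evaluateGrid, PointEvaluator) is an illustrative placeholder, not an actual GeoAPI/GeoTools type - it just shows the shape of the workaround when no evaluate(GM_Point) signature exists:

```java
// Hypothetical sketch: point evaluation on a WCS-style grid coverage by
// requesting a degenerate 1x1 grid at the point of interest.
// GridCoverage and PointEvaluator are placeholder names, not real types.

interface GridCoverage {
    /** Evaluate over a regular grid; returns cell values in row-major order. */
    double[] evaluateGrid(double minX, double minY, double maxX, double maxY,
                          int cols, int rows);
}

final class PointEvaluator {
    /** The "one-point grid" fudge: a 1x1 request degenerates to a point. */
    static double evaluateAtPoint(GridCoverage cov, double x, double y) {
        double[] cell = cov.evaluateGrid(x, y, x, y, 1, 1);
        return cell[0];
    }
}
```

The wrapper hides the fudge from callers, which is about the best you can do until the spec grows a point-evaluation signature.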
>>> Goal 4:
>>> Not sure I understand what is the main point.  Why limit ourselves to
>>> POJO databases?  Conversely, if we can create featuretypes from
>>> inspection of an arbitrary JDBC data source now, why does this not
>>> carry over to POJO databases?  Why do POJO databases require an extra
>>> abstraction layer instead of just a driver?  I'm not sure I "get it"
>>> enough to ask anything intelligent.
>>>
>> OK - there are two kinda non-obvious reasons for this. One is that it's
>> a separate set of business goals that shares the need for operations
>> etc.; the other is that I hope we can make POJOs that implement coverage
>> operations and inject them using this mechanism, allowing us to extend
>> the types of coverages supported.
>>     
>
> All right.  I think I see.  Reading the proposal again, it looks like you
> want to write an adapter between the new feature model and the Hibernate
> introspection mechanism?
>
> I had an idea like this long ago, only the adapters I had in mind were
> between the new feature model and...
> + EMF
> + standard Java Beans Introspection
>
> I think everyone can coexist happily.  My question is where does this fit
> in the big picture?  Are these really separate adapters or data source
> plugins?
>   
I'm agnostic - I really wanted to put up the business reqs and 
understand where it best fits into the development path.
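On the "standard Java Beans Introspection" adapter idea: a minimal sketch using only the stock java.beans Introspector might look like the following. BeanFeatureTypes and Road are illustrative names; a real adapter would of course produce a proper FeatureType, not a name-to-class map:

```java
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the "feature types from POJO inspection" idea:
// derive a name -> type attribute map (a stand-in for a FeatureType)
// from a plain Java Bean via the standard Introspector.
final class BeanFeatureTypes {

    /** Example POJO whose "feature type" we derive by inspection. */
    public static class Road {
        public String getName() { return "A1"; }
        public int getLanes() { return 2; }
    }

    static Map<String, Class<?>> attributesOf(Class<?> beanClass) {
        try {
            // Stop at Object so getClass() doesn't show up as an attribute.
            BeanInfo info = Introspector.getBeanInfo(beanClass, Object.class);
            Map<String, Class<?>> attrs = new LinkedHashMap<>();
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                if (pd.getReadMethod() != null) {
                    attrs.put(pd.getName(), pd.getPropertyType());
                }
            }
            return attrs;
        } catch (IntrospectionException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The same shape would apply to an EMF or Hibernate-metadata adapter - only the inspection mechanism changes.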
>   
>>> Goal 5:
>>> Unsure as to the proposal.  Is this an addition to a WFS which
>>> basically allows the server to "cascade" to a WCS?  Are we trying to
>>> wrap the bare-bones raster data with GML representative of a Feature
>>> possessing the characteristics of a 19123 Coverage?
>>>
>> yes
>>     
>
> How far does this extend?  For instance, do you intend to generate GML
> which indicates the presence of Coverage operations, or are these
> operations assumed to be present since you're describing a coverage.  Also,
> do you intend to advertise the operations which are specified in 19123, or
> the operations which are actually implemented in a WCS, and therefore
> available?
>   
Good question.  In general, during data transfer we indicate the 
FeatureType and pass the values. Operations are assumed to be realised 
when marshalling the feature back into an object.
You have raised an interesting issue about partial support for the base 
Feature Type's operations. My inclination is to disallow this - it's 
cheaper to build it once than to explain it and understand the 
explanations, IMHO.
For added operations, maybe this needs to be done by specialising the 
FeatureType, so we don't have to have another mechanism.
It's a pity that derivation by restriction is so clunky that it's hard 
to specialise a FeatureType into a crippled version.
>   
>>> Should we also consider repackaging the whole shootin' match into
>>> GML: each returned pixel/grid cell gets its own record in the
>>> returned GML (or shapefile)?  In ISO speak, someone might want to
>>> treat DiscreteGridPointCoverage data as if it were a plain
>>> DiscreteCoverage.
>>>
>> we might want to, but at the moment I have seen no strong drivers for
>> this. But we do want to shift the rest of the metadata about coverages
>> through processing chains.
>>     
>
> The coverage schema itself is pretty light on metadata and heavy on
> operations.  Are you talking mainly about exposing the Domain and Range
> associations (WCS: domainSet; rangeSet from describeCoverage)?
>   
partly, but also the record schema, and the phenomenon, sampling method, 
etc. - i.e. treat a coverage as an Observation - which is probably 
correct, as we have few if any purely "asserted" continuous grids.
> Also, what counts as coverage (as opposed to server) metadata?  In terms of
> CRS, do you count only the native CRS of the coverage, or all the CRSes
> that the server will reproject to?  The last one almost seems like server
> metadata to me.  But of course, how you look at it depends on what you're
> using the metadata for.
>   
We definitely need to follow the use cases for coverage exploitation in 
processing chains a bit further.
>   
>>> Goal 6:
>>> This is the Holy Grail!  I know exactly whazzup with this one!  For
>>> some thoughts on how vertical/temporal subsetting may be handled, see
>>> the analysis of WMS 1.3.0 on the Multidimensional WCS page.  They've
>>> provided for this functionality.
>>> (http://docs.codehaus.org/display/GEOS/Multidimensional+WCS)  Perhaps
>>> future versions of WMS/WFS will adopt the same strategy?  I'm not sure
>>> you're going to be able to use a WFS to query a WCS based on the grid
>>> cell values unless you adopt a cascading mentality.  The WFS is going
>>> to have to hide the WCS entirely, and can't just include a link so the
>>> user can get the data directly.
>>>
>> I think I agree - WCS 1.0 can be used only for subsetting a small
>> subclass of coverage types we might enable. Maybe a WCS 2.0+ will be
>> more flexible one day.
>>     
>
> 19123 allows queries by values in the range, but does not allow anything so
> flexible as the Filter deal.  You can provide one tuple of range values and
> it'll return all the "matches", where the coverage decides what is meant by
> a "match".  No expressions allowed.  Although I can certainly see that it
> would be useful to provide an implementation where you could set a filter
> on a coverage, then from that point on, "evaluateInverse" obeyed the
> filter.
>
> Let me see...I just looked up what you can do with WCS 1.0 and the spec
> confuses me.  Can anyone explain the optional _PARAMETER_ argument to
> GetCoverage to me?  My issue is that the examples in Table 9 on Page 32
> seem contradictory.  "band=1,3,5" means return only the data in bands 1, 3,
> and 5; but "age=0/18" means return only those points where the value of the
> "age" field is between 0 and 18?  Which is it, an index into a "range
> vector" or limits on the values in one particular element of that vector?
>
> I think this makes a difference as to whether you can just provide a URL
> for the user to go get the data themselves or not.
>   
To be honest, the URL is more likely to be useful for something like 
JPIP - a streaming protocol - than for a REST-ful WCS. In the latter 
case I'd be happy to pack binary-encoded data, but for, say, a 1 GB 
model we'd need a streaming protocol.
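The "set a filter, then evaluateInverse obeys it" idea floated above might look something like this. All the types are illustrative stand-ins: a real implementation would take an actual Filter rather than a Predicate, and would decide what counts as a "match" per the coverage's own rules rather than by exact equality:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch: a discrete coverage wrapper whose inverse evaluation honours a
// previously-set filter. Sample and FilteredInverse are placeholder
// names, not 19123 or GeoTools types.
final class FilteredInverse {

    /** One sample: a 2D domain point carrying a single range value. */
    static final class Sample {
        public final double x, y, value;
        Sample(double x, double y, double value) {
            this.x = x; this.y = y; this.value = value;
        }
    }

    private final List<Sample> samples;
    private Predicate<Sample> filter = s -> true; // no restriction by default

    FilteredInverse(List<Sample> samples) { this.samples = samples; }

    /** From this point on, inverse evaluation obeys the given filter. */
    void setFilter(Predicate<Sample> filter) { this.filter = filter; }

    /**
     * 19123-style inverse evaluation: return all domain points whose range
     * value matches the given one, restricted by the current filter.
     * "Match" is simplified here to exact equality.
     */
    List<Sample> evaluateInverse(double value) {
        List<Sample> hits = new ArrayList<>();
        for (Sample s : samples) {
            if (s.value == value && filter.test(s)) hits.add(s);
        }
        return hits;
    }
}
```

That keeps 19123's one-tuple query semantics intact while layering Filter-like expressiveness on top, rather than changing the coverage interface itself.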
>   
>>> One minor point: if it's served by a WCS, it's a discrete grid point
>>> coverage.  No continuous coverages are served over the net.
>>>
>>>       
>> But we want to free ourselves of this very arbitrary limitation -
>> without having to redefine WCS - by using WFS, which is perfectly
>> capable, if we can invoke operations.
>>     
>
> When I said this, I think my mindset was that continuous coverages must be
> discretized before they can be served.  Once this happens, they're no
> longer continuous.  Perhaps it would be better to say that all WCS-served
> data are the result of an evaluate() operation applied to each point on a
> regular grid in the domain.  This is a sampling of the coverage and not the
> coverage itself. :)
>
> In terms of coverage IO, I think all coverages on disk are going to be
> initially represented with DiscreteGridPointCoverages.  If someone wants to
> interpolate, I think it logical to provide a constructor to ContinuousQGC
> which takes a DiscreteGPC as an initializer.
>
>   
OK - you're getting deeper into implementation land than my wee brain 
can manage today. Please annotate the roadmap with any specific ideas 
on an optimal game plan - remember, all I'm trying to do is sort out 
the dependencies between already-floated ideas and longer-term options, 
match them to business-value outcomes, and then use this to help people 
decide if and how to invest in progress.
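Still, the "ContinuousQGC constructed from a DiscreteGPC" suggestion above is worth pinning down with a sketch. This is a hypothetical continuous coverage backed by discrete grid-point values, with evaluate() doing bilinear interpolation; the class name and the integer grid coordinates are simplifying assumptions, not the 19123 / GeoTools types:

```java
// Sketch: a "continuous" coverage built from discrete grid-point values.
// evaluate() interpolates bilinearly between the four surrounding grid
// points, so it is defined everywhere inside the grid, not just at the
// sample points. Grid points sit at integer (col, row) coordinates.
final class ContinuousFromDiscrete {
    private final double[][] grid; // grid[row][col] = sampled value

    /** The proposed constructor: initialize from a discrete grid coverage. */
    ContinuousFromDiscrete(double[][] discreteGridPointValues) {
        this.grid = discreteGridPointValues;
    }

    /** Bilinear interpolation from the four neighbouring grid points. */
    double evaluate(double x, double y) {
        int c0 = (int) Math.floor(x), r0 = (int) Math.floor(y);
        int c1 = Math.min(c0 + 1, grid[0].length - 1);
        int r1 = Math.min(r0 + 1, grid.length - 1);
        double fx = x - c0, fy = y - r0;
        double top = grid[r0][c0] * (1 - fx) + grid[r0][c1] * fx;
        double bot = grid[r1][c0] * (1 - fx) + grid[r1][c1] * fx;
        return top * (1 - fy) + bot * fy;
    }
}
```

Swapping in a different interpolation strategy (nearest-neighbour, bicubic) would be the obvious variation point if this ever became real.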
> Coffee.  Need.  Coffee.
>   
Doppio!
> Bryce
>
>
> -------------------------------------------------------------------------
> Take Surveys. Earn Cash. Influence the Future of IT
> Join SourceForge.net's Techsay panel and you'll get the chance to share your
> opinions on IT & business topics through brief surveys - and earn cash
> http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
> _______________________________________________
> Geotools-devel mailing list
> Geotools-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/geotools-devel
>   


