On Mon, Oct 3, 2011 at 11:28 AM, Andrea Aime
<[email protected]>wrote:
> On Mon, Oct 3, 2011 at 4:05 PM, Justin Deoliveira <[email protected]> wrote:
>
>> At the GeoServer level we know how the encoders are configured/behave,
>>> so what we could do is a filter transformer that picks incoming filters
>>> and transforms them accordingly.
>>> If we know, at a certain point in time, how null element values are to be
>>> encoded, we can use that knowledge to statically transform the incoming
>>> filter into the existing PropertyIsNull, or Filter.Exclude.
>>> The way I read the spec the behavior of the two filters is 1-1 linked to
>>> how we'd handle the encoding of a missing value, which is not something
>>> the filter alone can know.
>>>
>>
>> Thanks for the explanation, makes sense. The question is where the
>> information about null vs. nil comes from.
>>
>
> For simple features I guess that's part of the encoder behavior and/or its
> configuration. What do we do today when we encounter a null value, skip the
> element entirely?
>
Yeah, it is just skipped.
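For concreteness, the static rewrite being discussed could look roughly like the sketch below. This is a minimal, self-contained illustration with made-up stand-in types (an enum instead of the real GeoTools Filter hierarchy, and a hypothetical `MissingValuePolicy`), not the actual API: if the encoder skips the element entirely, an incoming nil check can never match (Filter.Exclude); if the encoder emits `xsi:nil="true"`, a nil check collapses to a null check.

```java
// Hypothetical sketch of the proposed static rewrite. The types below are
// stand-ins for illustration only, NOT the real GeoTools Filter/FilterVisitor API.
public class NilRewriter {

    /** How the encoder is configured to represent a property with no value. */
    public enum MissingValuePolicy {
        SKIP_ELEMENT,  // element omitted entirely (current simple-feature behavior)
        NIL_ATTRIBUTE  // element emitted with xsi:nil="true"
    }

    /** Stand-in for the filter types under discussion. */
    public enum Filter { PROPERTY_IS_NIL, PROPERTY_IS_NULL, EXCLUDE }

    /**
     * Statically rewrite an incoming PropertyIsNil using knowledge of the
     * encoder's behavior, as proposed in the thread.
     */
    public static Filter rewrite(Filter incoming, MissingValuePolicy policy) {
        if (incoming != Filter.PROPERTY_IS_NIL) {
            return incoming; // only nil checks need translation
        }
        switch (policy) {
            case SKIP_ELEMENT:
                // xsi:nil is never emitted, so a nil test can match nothing
                return Filter.EXCLUDE;
            case NIL_ATTRIBUTE:
                // a missing value becomes xsi:nil="true", so nil == null
                return Filter.PROPERTY_IS_NULL;
            default:
                throw new IllegalStateException("unknown policy: " + policy);
        }
    }
}
```

The point of the sketch is that the mapping is fully determined once the encoder configuration is known, which is why it can be done statically before the filter ever touches a feature.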
> But yeah, for app-schema it might be different, I don't know.
>
>
>> Is this something that app-schema currently does? It would be nice to have
>> the app-schema folks weigh in on this. So are you advocating now to add the
>> new interface to the filter class hierarchy?
>>
>
> I'm not pushing, but I believe we should ask ourselves what sense it makes.
> If all the two filters are testing is an XML encoding aspect (whether we
> skip the element or add an attribute... and what if we use an empty element
> instead, btw?), then it seems to me that having the filter in code that
> handles features does not make much sense, since features are not XML
> encoded and do not, themselves, carry an encoding directive.
> Or maybe we can have the indication of how encoding should be performed in
> the feature model, storing it somewhere, maybe as an extra attribute in the
> AttributeDescriptor metadata, if not at the single feature level. Stores
> like app-schema might set it up according to the expected output, while for
> simple features we could have some default behavior.
> At that point it would start making sense to have a filter because it can
> be actually run against live features.
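The metadata idea above could be sketched as follows. This is a hypothetical, self-contained illustration: a plain map stands in for a descriptor's user data, and the key name `encoding.nillable` is invented for the example; it only shows how a stored encoding hint would let an "is nil" test be evaluated against a live feature value.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: store the expected encoding behavior in descriptor
// metadata so a nil filter can run against live features. The map below
// stands in for a real descriptor's user data; the key name is made up.
public class NilAwareDescriptor {
    public static final String NIL_ENCODING_KEY = "encoding.nillable"; // invented key

    private final Map<String, Object> userData = new HashMap<>();

    /** Set by the store (e.g. app-schema) according to the expected output. */
    public void setEncodedAsNil(boolean nillable) {
        userData.put(NIL_ENCODING_KEY, nillable);
    }

    /** True if a missing value would be encoded as xsi:nil="true". */
    public boolean isEncodedAsNil() {
        return Boolean.TRUE.equals(userData.get(NIL_ENCODING_KEY));
    }

    /**
     * Evaluate an "is nil" test against a live value: it can only match when
     * the value is missing AND the descriptor says it would be encoded as nil.
     */
    public boolean evaluateIsNil(Object value) {
        return value == null && isEncodedAsNil();
    }
}
```

With something like this in place, the filter no longer depends on knowing the encoder configuration at rewrite time; the knowledge travels with the feature model, which is what would make a distinct filter meaningful against live features.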
>
> Hope what I wrote makes some sense ;-)
>
Yeah it does. I think it is really hard to make a decision either way
without an actual use case. However, I would vote (and this is obviously
biased) to keep them separated for now, since all the work to add the new
classes, implement the XML encoding and parsing bindings, and update the
factory and filter visitor interfaces and implementations has already been
done. But if people feel strongly that this is not the way to go I am happy
to rework stuff.
>
> Cheers
> Andrea
>
> --
> -------------------------------------------------------
> Ing. Andrea Aime
> GeoSolutions S.A.S.
> Tech lead
>
> Via Poggio alle Viti 1187
> 55054 Massarosa (LU)
> Italy
>
> phone: +39 0584 962313
> fax: +39 0584 962313
>
> http://www.geo-solutions.it
> http://geo-solutions.blogspot.com/
> http://www.youtube.com/user/GeoSolutionsIT
> http://www.linkedin.com/in/andreaaime
> http://twitter.com/geowolf
>
> -------------------------------------------------------
>
--
Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.
_______________________________________________
Geotools-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/geotools-devel