That comment was not targeted at you directly; it was aimed at everyone, 
myself included, since I was the one who opened up this point of conversation.

Gabriel Roldan wrote:
> Justin Deoliveira wrote:
>> Putting the philosophical debate aside for the moment there are two 
>> things on the table here:
> What is philosophical about my comments? Didn't I basically say that, 
> while I understand Andrea's concerns about speed, I am willing to 
> support this, but just have some reservations about doing it in 1.7.x 
> for the general case, due to the new code's lack of exposure to 
> production conditions?
> Wait a minute... reading the thread from the beginning again, I see 
> you're talking about 2.0 here. Sorry, I just got alarmed about 1.7.x; 
> this seems totally fine for 2.0 to me, as I already said.
> 
>>
>> 1) fast GML
>> 2) cite compliance with a generic setup
>>
>> The current setup can't do both without a complete overhaul of the 
>> current gml2 encoder... which is what the gtxml encoder is.
>>
>> Also, to stress the point, I only want to replace the encoder when 
>> cite is enabled, which is, what, 99% of the time? Does anyone in 
>> production actually run with cite enabled?
> I don't know.
>>
>> Asking for the sacrifice of some speed in a 1% case in order to 
>> achieve much better testing and QA of many of our datastores does not 
>> seem like an unreasonable request to me.
> Yes, sounds reasonable to me; you're trying to get better end-to-end 
> QA by easily running cite against different backends.
>>
>> Gabriel Roldan wrote:
>>>>>> What I am proposing is that the GML2OutputFormat be engaged when 
>>>>>> strict cite compliance is set.
>>>>> I would prefer to see the production choice used for cite testing 
>>>>> as well. Can you point me at what issues there are with the old 
>>>>> gml2 encoder? I've had good success fixing it in the past.
>>>>> What about an environment variable telling the encoder which one 
>>>>> to use? This way anyone who wants to can use GML2OutputFormat2.
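
(As an aside, the switch Andrea suggests above could be as small as a
system-property check at encoder-selection time. A minimal sketch
follows; the property name and classes in it are made up for
illustration and are not existing GeoServer API.)

  /**
   * Illustrative sketch only: pick which GML2 encoder to use based on a
   * system property, falling back to the gt-xsd one when strict cite
   * compliance is enabled. None of these names are existing GeoServer API.
   */
  public class Gml2EncoderSelector {

      /** Stand-ins for the two implementations being discussed. */
      interface Gml2Encoder {}
      static class LegacyGml2Encoder implements Gml2Encoder {}  // current hand-rolled encoder
      static class GtXsdGml2Encoder implements Gml2Encoder {}   // new gt-xsd based encoder

      /** Hypothetical property a deployer could set, e.g. -Dorg.geoserver.gml2.encoder=gtxml */
      static final String PROPERTY = "org.geoserver.gml2.encoder";

      static Gml2Encoder choose(boolean strictCiteCompliance) {
          String pref = System.getProperty(PROPERTY, "");
          if ("legacy".equalsIgnoreCase(pref)) {
              return new LegacyGml2Encoder();
          }
          if ("gtxml".equalsIgnoreCase(pref) || strictCiteCompliance) {
              return new GtXsdGml2Encoder();
          }
          // default: keep the fast encoder for regular production use
          return new LegacyGml2Encoder();
      }
  }
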
>>>> Ha, try to run the wfs cite tests with a regular database setup and 
>>>> have fun. It took me a couple of weeks of spare time to figure out 
>>>> all the issues and fix them cleanly, so good luck.
>>>>
>>>> The alternative is to not change anything, keep the old postgis db 
>>>> around with the old encoder, and pass the tests for that special 
>>>> case. In which case calling ourselves cite compliant would be a 
>>>> stretch.
>>>>
>>>> The whole point of this exercise for me was not to test our WFS 
>>>> protocol, which we have already done; it is to test our backend 
>>>> datastores against the variety of cases that the cite tests throw 
>>>> at them.
>>>>
>>>> Anyway, I am curious whether other people think the value added 
>>>> here is worth the hit in performance. 
>>> As I see it, there are different situations in which people tend to 
>>> use one QA factor or another as the main driver when choosing a 
>>> product. We can't deny that speed, even if a lame one compared to 
>>> robustness, scalability, reliability, etc., is the easiest to assess 
>>> and hence the most often talked about. I have seen a large 
>>> government agency wanting to spit out GML as fast as possible. I 
>>> think an organization delivering GML to the public will always find 
>>> that the bottleneck is network bandwidth, while an organization 
>>> wanting to use WFS as the centralized data editing service on its 
>>> intranet will want it to be really fast.
>>> That said, I'm very willing to agree with you on this, Justin. I 
>>> certainly want to have the fewest code paths possible and a single 
>>> (gt-xsd) tech in use for both gml2 and gml3, and am also willing to 
>>> sacrifice some perf to obtain that. I just want to make sure the 
>>> solution, even if a bit slower, does scale up and does not blow up 
>>> resource consumption, AND I would love to sit down with you and 
>>> research a strategy in which we can a) incorporate a pull/push model 
>>> for gt-xsd streaming and b) do it in a way that the underlying tech 
>>> used for the low-level IO is pluggable, such that I can just as 
>>> easily reuse all the infrastructure for binary xml streaming.
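
(To make (b) a bit more concrete: the pluggability described above could
be as thin as a small output contract that both a plain-text and a
binary XML writer implement, with the streaming encoder only ever
talking to that contract. A rough sketch, with every name invented for
illustration rather than taken from gt-xsd:)

  import java.io.IOException;
  import java.io.Writer;
  import java.util.Map;

  /** Illustrative contract only: the low-level sink a streaming encoder writes to. */
  interface XmlOutput {
      void startElement(String name, Map<String, String> attributes) throws IOException;
      void characters(String text) throws IOException;
      void endElement(String name) throws IOException;
      void flush() throws IOException;
  }

  /** Plain-text implementation; a binary-xml one would implement the same interface. */
  class TextXmlOutput implements XmlOutput {
      private final Writer out;

      TextXmlOutput(Writer out) {
          this.out = out;
      }

      public void startElement(String name, Map<String, String> attributes) throws IOException {
          // character escaping omitted for brevity
          out.write("<" + name);
          for (Map.Entry<String, String> att : attributes.entrySet()) {
              out.write(" " + att.getKey() + "=\"" + att.getValue() + "\"");
          }
          out.write(">");
      }

      public void characters(String text) throws IOException {
          out.write(text);
      }

      public void endElement(String name) throws IOException {
          out.write("</" + name + ">");
      }

      public void flush() throws IOException {
          out.flush();
      }
  }
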
>>> In conclusion, and sorry if all those comments didn't actually add 
>>> more value to the discussion, this is something I would really love 
>>> to see on _trunk_, but I have my reservations about changing the 
>>> gml2 encoder in 1.7.x.
>>>
>>>
>>>> My opinion is that I have never seen GML as a format built for 
>>>> speed: it is way too verbose, it requires loading an external 
>>>> document to describe itself, etc. I am also curious to know whether 
>>>> anyone has actually chosen server software based solely on how fast 
>>>> it spits out GML.
>>>>>> 2) XmlSchemaEncoder: I am proposing replacing the old 1.0 schema 
>>>>>> encoder with the new one. The old one has no notion of schema 
>>>>>> overrides, and quite brutishly builds up a big string buffer and 
>>>>>> then spits out the XML.
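
(For what it's worth, the practical difference is memory behaviour:
building the whole schema in a string buffer before writing scales with
document size, while writing each piece as it is produced keeps memory
flat. A toy illustration, with names invented for the example rather
than taken from the actual encoders:)

  import java.io.IOException;
  import java.io.Writer;

  /** Toy illustration only; neither method reflects the real encoder code. */
  class SchemaEncodingStyles {

      /** Old style: build the whole document in memory, then write it out. */
      static void bufferThenWrite(Writer out, Iterable<String> typeDeclarations) throws IOException {
          StringBuilder buf = new StringBuilder("<xsd:schema>");
          for (String decl : typeDeclarations) {
              buf.append(decl);                 // memory grows with the schema size
          }
          buf.append("</xsd:schema>");
          out.write(buf.toString());
      }

      /** Streaming style: write each piece as it is produced. */
      static void streamDirectly(Writer out, Iterable<String> typeDeclarations) throws IOException {
          out.write("<xsd:schema>");
          for (String decl : typeDeclarations) {
              out.write(decl);                  // roughly constant memory regardless of size
          }
          out.write("</xsd:schema>");
      }
  }
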
>>> +1
>>> Cheers,
>>>
>>> Gabriel
>>>>> Yes, works for me.
>>>>>
>>>>> Cheers
>>>>> Andrea
>>>>
>>>>
>>>
>>>
>>
>>
> 
> 


-- 
Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

