On 13 Jan 2010, at 08:54, Felix Meschberger wrote:

> Hi,
> 
> First off, I think scanning the (sub-)tree twice (once for checking,
> once for sending) is not a good idea performance-wise anyway.
> 
> Putting that aside, the check part could scan breadth-first, keeping a
> record of the number of items visited after each level. As soon as the
> threshold has been reached, the maximum supported level is known.
> 
> The rendering part can then use that level to actually render the data,
> which is done in a depth-first manner.
> 
> We might be able to combine the two approaches by building an in-memory
> representation of the JSON data (a JSONObject) and, when the threshold
> has been reached, just serializing the JSONObject.
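
A minimal sketch of that check phase (Node and getChildren() here are
hypothetical stand-ins for illustration, not the Sling API):

    import java.util.ArrayDeque;
    import java.util.List;
    import java.util.Queue;

    interface Node {
        List<Node> getChildren(); // hypothetical child accessor
    }

    class LevelCheck {
        // Scans breadth-first and returns the deepest level whose
        // cumulative node count stays at or under the threshold
        // (-1 if even the root exceeds it).
        static int maxSupportedLevel(Node root, int threshold) {
            Queue<Node> current = new ArrayDeque<>();
            current.add(root);
            int visited = 0;
            int level = -1;
            while (!current.isEmpty()) {
                Queue<Node> next = new ArrayDeque<>();
                for (Node node : current) {
                    if (++visited > threshold) {
                        return level; // last fully counted level is the max
                    }
                    next.addAll(node.getChildren());
                }
                level++; // this level fits completely under the threshold
                current = next;
            }
            return level; // the whole tree fits
        }
    }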


I think 2 runs only become relevant if you want to adjust some of the response
(like the response code). Since that doesn't seem sensible, a simple
response.reset() when the limit is hit might be better (does that always work?).
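
For what it's worth, reset() only works while the response is still
uncommitted; once the buffer has been flushed it throws an
IllegalStateException. A rough sketch, reusing the hypothetical Node type
from the sketch above (writeTree and LimitExceededException are made-up
names, and the 413 status choice is just illustrative):

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.concurrent.atomic.AtomicInteger;
    import javax.servlet.http.HttpServletResponse;

    class LimitExceededException extends RuntimeException {}

    void render(Node root, HttpServletResponse response, int threshold)
            throws IOException {
        try {
            writeTree(root, response.getWriter(), new AtomicInteger(threshold));
        } catch (LimitExceededException e) {
            // reset() clears the buffer, status and headers, but only
            // while nothing has been committed to the client yet
            if (!response.isCommitted()) {
                response.reset();
                response.sendError(
                        HttpServletResponse.SC_REQUEST_ENTITY_TOO_LARGE);
            }
            // once committed, it is too late to change the response
        }
    }

    // writes the tree depth-first, aborting once the budget runs out
    void writeTree(Node node, PrintWriter out, AtomicInteger budget) {
        if (budget.decrementAndGet() < 0) {
            throw new LimitExceededException();
        }
        out.println(node); // placeholder for the real JSON rendering
        for (Node child : node.getChildren()) {
            writeTree(child, out, budget);
        }
    }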

Ian


> 
> Regards
> Felix
> 
> On 13.01.2010 00:51, Simon Gaeremynck wrote:
>> Ok,
>> 
>> Is the following approach better?
>> 
>> Consider node.10.json
>> 
>> Check if the response will contain more than 200 nodes.
>> If it does not, proceed the way it is now and send the resources along
>> with a 200 response code.
>> If it does, check whether node.0.json results in a set bigger than 200
>> nodes. If not, check node.1.json, then node.2.json, ...
>> Basically, keep increasing the level until the number of resources is
>> bigger than 200.
>> This would give the highest recursion level you can request.
>> The server would then respond with a 300 and (I think?) a 'Location'
>> header with the highest level.
>> 
>> The thing of course is that you would have to loop over all those nodes
>> again and again.
>> Jackrabbit will have caches for those nodes, but I'm not really sure what
>> the impact on performance would be.
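
Roughly, that search loop might look like the following (countNodes is a
made-up helper that counts the nodes a node.<level>.json rendering would
contain; Resource and ResourceResolver are the Sling API):

    import java.io.IOException;
    import java.util.Iterator;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.sling.api.resource.Resource;

    void sendHighestLevel(Resource resource, HttpServletResponse response,
            int limit) throws IOException {
        int level = 0;
        int previous = countNodes(resource, 0);
        // keep increasing the level until the count exceeds the limit
        // (or stops growing because the tree is exhausted); note that this
        // re-counts the subtree once per candidate level
        for (;;) {
            int next = countNodes(resource, level + 1);
            if (next > limit || next == previous) {
                break;
            }
            previous = next;
            level++;
        }
        response.setStatus(300); // Multiple Choices
        response.setHeader("Location",
                resource.getPath() + "." + level + ".json");
    }

    // counts the nodes a <path>.<depth>.json rendering would contain
    int countNodes(Resource resource, int depth) {
        int count = 1;
        if (depth > 0) {
            Iterator<Resource> children =
                    resource.getResourceResolver().listChildren(resource);
            while (children.hasNext()) {
                count += countNodes(children.next(), depth - 1);
            }
        }
        return count;
    }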
>> 
>> 
>> Simon
>> 
>> 
>> On 12 Jan 2010, at 00:53, Roy T. Fielding wrote:
>> 
>>> On Jan 11, 2010, at 10:01 AM, Simon Gaeremynck wrote:
>>> 
>>>> Yes, I guess that could work.
>>>> 
>>>> But then you can still do node.1000000.json, which results in the same
>>>> thing.
>>>> 
>>>> I took the liberty of writing a patch which checks how many resources
>>>> will be in the result.
>>>> If the result is bigger than a pre-defined OSGi property (e.g. 200
>>>> resources), it sends a 206 Partial Content response with a dump of the
>>>> first 200 resources and ignores the rest.
>>>> 
>>>> It can be found at http://codereview.appspot.com/186072
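
Presumably something along these lines (not copied from the actual patch;
maxResources would come from the OSGi property, countNodes is as in the
earlier sketch, and dumpFirstN/dumpAll are stand-ins):

    if (countNodes(resource, requestedLevel) > maxResources) {
        response.setStatus(HttpServletResponse.SC_PARTIAL_CONTENT); // 206
        dumpFirstN(resource, response.getWriter(), maxResources);
    } else {
        dumpAll(resource, response.getWriter());
    }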
>>>> 
>>>> Simon
>>> 
>>> Sorry, that would violate HTTP.  Consider what impact it has on
>>> caching by intermediaries.
>>> 
>>> Generally speaking, treating HTTP as if it were a database is a
>>> bad design.  If the server has a limit on responses, then it
>>> should only provide identifiers that remain under that limit
>>> and forbid any identifiers that would imply a larger limit.
>>> 
>>> An easy way to avoid this is to respond with 300 and an index of
>>> available resources whenever the resource being requested would be too
>>> big.
>>> The client can then retrieve the individual (smaller) resources from
>>> that index.
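
One possible reading of that, sketched with a plain text/uri-list index
(the index format and names are illustrative, not prescribed):

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.sling.api.resource.Resource;

    void sendIndex(Resource resource, HttpServletResponse response,
            int maxLevel) throws IOException {
        response.setStatus(300); // Multiple Choices
        response.setContentType("text/uri-list");
        PrintWriter out = response.getWriter();
        // list each rendering that stays under the limit
        for (int level = 0; level <= maxLevel; level++) {
            out.println(resource.getPath() + "." + level + ".json");
        }
    }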
>>> 
>>> ....Roy
>>> 
>> 
>> 
