James M Snell wrote:
> 400 entries over 20 pages:
...
> 400 entries in a single feed:

If the server can rely on the client handling 400 entries per feed, then it 
would not need to split the feed into twenty pages; only two pages would be 
needed. Conversely, if the client cannot handle a response with 400 entries in 
it, then the single-page-result query mechanism will always fail.
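
For concreteness, the client side of the paging mechanism is just a matter of
following rel="next" links until none are left. Here is a minimal sketch,
assuming Python with the feedparser library; the URL is made up for
illustration:

    import feedparser

    def fetch_all_entries(first_page_url):
        """Walk a paged feed by following rel="next" links."""
        entries = []
        url = first_page_url
        while url:
            page = feedparser.parse(url)
            entries.extend(page.entries)
            # The server advertises the next page, if any, as a rel="next" link.
            url = next((link.get("href")
                        for link in page.feed.get("links", [])
                        if link.get("rel") == "next"), None)
        return entries

    all_entries = fetch_all_entries("http://example.org/collection?page=1")

Each page the client fetches is a small, ordinary Atom document, so it never
has to hold the whole collection in memory at once.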

To see this more clearly, we should test the performance of ten clients 
simultaneously syncing 10,000 10KB entries using each approach. Use a 
reasonable page size (say 100K compressed, to stay within the limits of many 
mobile devices), put a caching reverse proxy in front of the AtomPub servers 
and a caching proxy in front of the clients, and insert a new entry between 
each client request. The single-page-result mechanism would have to generate 
and transmit ten unique 100MB documents, which many desktop clients wouldn't 
even be able to handle, whereas the paging mechanism would be able to offload 
most of its work to the client-side caching proxy and still interoperate with 
most mobile devices.
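
The arithmetic behind that 100MB figure, for anyone who wants to check it
(rough numbers; compression is ignored, so the page count is only indicative):

    clients = 10
    entries = 10000
    entry_kb = 10

    full_doc_kb = entries * entry_kb              # 100,000 KB, i.e. roughly 100 MB
    single_page_result_kb = clients * full_doc_kb # ten unique ~100 MB responses
    pages_per_sync = full_doc_kb // 100           # ~1,000 pages at a 100 KB page budget

    print(full_doc_kb, single_page_result_kb, pages_per_sync)

The point is not that paging moves fewer bytes overall; it is that with a new
entry arriving between requests, each 100MB single-page response is unique,
while the older pages stay stable and can be served out of the caching proxy.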

I think we could also run the paging version from my $6.00/month shared hosting 
account (a CGI script gets 6 seconds to run before its process gets killed). 
I'm pretty sure that wouldn't be possible with the single-page-result 
implementation.
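
To make the shared-hosting point concrete: each page request only has to
render a small slice of the collection, so each CGI invocation finishes well
inside the time limit. A hypothetical sketch (the entry lookup is a
placeholder, and required feed metadata such as id/title/updated is omitted
for brevity):

    #!/usr/bin/env python3
    import os
    from urllib.parse import parse_qs

    PAGE_SIZE = 20

    def load_entry_fragments(offset, limit):
        # Placeholder: read pre-rendered <entry> elements for this slice from disk.
        return ["<entry>...</entry>"] * limit

    qs = parse_qs(os.environ.get("QUERY_STRING", ""))
    offset = int(qs.get("offset", ["0"])[0])

    print("Content-Type: application/atom+xml")
    print()
    print('<feed xmlns="http://www.w3.org/2005/Atom">')
    # A real script would omit the rel="next" link on the last page.
    print('  <link rel="next" href="?offset=%d"/>' % (offset + PAGE_SIZE))
    print("\n".join(load_entry_fragments(offset, PAGE_SIZE)))
    print('</feed>')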

To summarize, the single-page-result solution might have better raw performance 
when resources are not constrained, but the paging mechanism offers better 
cacheability and scales both up and down nicely. That is why I prefer the 
paging mechanism.

- Brian

