2008/5/2 Jamie McCracken <[EMAIL PROTECTED]>:
>
> On Fri, 2008-05-02 at 22:22 +0200, Mikkel Kamstrup Erlandsen wrote:
> > I have a handful of comments about this (Jos also asked about the same on
> > IRC recently).
> > It was in fact a design decision, but I am writing this from my mobile
> > since I'm on holiday, so I'll elaborate when I get home Tuesday.
> >
>
> heh, can't keep away when you're on holiday!
>
> I find it hard to believe there is a sensible reason for omitting pages,
> but feel free to surprise us when you get back.
The original design decision was to allow support for grep-type backends where
you have to access the data in a certain order. Another workaround, which I'm
using, is to have only a few default fields (uri, size, mtime) and to get the
additional ones via GetHitData, which does not have the same restriction.

So while I initially had the same reaction on rereading this part of the spec,
I think it is not so bad, because it forces sane querying behavior. Generally,
the user will not skip forward 1000 pages in a paged search result. On the
other hand, I have no objections to your option 2.

> 1) add a hit.offset property
> 2) add new api: GetPagedHits (in string search, in int PageStart, in
> int PageEnd, out aav results) or similar
> 3) add a hit.pagesize property and have GetNextPage/GetPrevPage methods

1) and 3) would force you to open a new session for each page. 2) is fine.

Cheers,
Jos
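
Below is a rough client-side sketch of the behavior described above: paging
forward with repeated GetHits calls and fetching additional fields afterwards
with GetHitData. It uses Python with dbus-python. The bus name, object path,
interface name, the 'hit.fields' session property, the xesam:* field names and
the query string are assumptions based on the draft spec, not something fixed
by this thread, so treat them as placeholders rather than a definitive client.

  # Rough paging sketch against a Xesam searcher over D-Bus.
  # Assumptions, not taken from this thread: the bus name, object path,
  # interface name, the 'hit.fields' session property, the xesam:* field
  # names and the placeholder query below.
  import dbus

  bus = dbus.SessionBus()
  proxy = bus.get_object('org.freedesktop.xesam.searcher',
                         '/org/freedesktop/xesam/searcher/main')
  xesam = dbus.Interface(proxy, 'org.freedesktop.xesam.Search')

  session = xesam.NewSession()
  # Keep the per-hit payload small; extra fields come later via GetHitData.
  xesam.SetProperty(session, 'hit.fields',
                    ['xesam:url', 'xesam:size', 'xesam:sourceModified'])

  # Placeholder query string in the draft Xesam query language.
  query = '<request><query><fullText>holiday</fullText></query></request>'
  search = xesam.NewSearch(session, query)
  xesam.StartSearch(search)

  PAGE_SIZE = 20
  page = 0
  while True:
      # GetHits only moves forward: each call hands out the next PAGE_SIZE
      # hits, so a grep-style backend can produce them strictly in order.
      # (A real client would also watch the HitsAdded/SearchDone signals;
      # this assumes the engine already has all hits available.)
      hits = xesam.GetHits(search, PAGE_SIZE)
      if not hits:
          break
      print('page %d: %d hits' % (page, len(hits)))
      page += 1

  # Hits that have already been returned can be revisited by id, so extra
  # fields do not have to be part of every page. (Whether hit ids start at
  # 0 or 1 is a spec detail, not something settled in this thread.)
  extra = xesam.GetHitData(search, [0, 1, 2],
                           ['xesam:title', 'xesam:author'])

  xesam.CloseSearch(search)
  xesam.CloseSession(session)

The while loop is the forward-only paging the three proposals are trying to
relax; a GetPagedHits-style call (option 2) would replace it without opening a
new search or session per page, which is what options 1 and 3 would force.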
