I think that keeping a transaction log is the best approach for your use case.
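
For instance (a rough sketch only: it assumes the file path is the document's uniqueKey, and the log file name and update URL are made up; the delete/commit XML messages are Solr's plain update syntax):

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class IndexSyncLog {

    // One previously indexed path per line; assumes the log from the
    // previous sync run exists. Both names are made up for illustration.
    private static final String LOG_FILE = "indexed-paths.log";
    private static final String UPDATE_URL = "http://localhost:8983/solr/update";

    public static void main(String[] args) throws IOException {
        BufferedReader log = new BufferedReader(new FileReader(LOG_FILE));
        String path;
        while ((path = log.readLine()) != null) {
            if (!new File(path).exists()) {
                // File vanished since the last sync: delete it from the
                // index. (Real code must XML-escape the id value.)
                postToSolr("<delete><id>" + path + "</id></delete>");
            }
        }
        log.close();
        postToSolr("<commit/>");
    }

    private static void postToSolr(String xml) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(UPDATE_URL).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        OutputStream out = conn.getOutputStream();
        out.write(xml.getBytes("UTF-8"));
        out.close();
        conn.getResponseCode(); // actually sends the request; check the status in real code
        conn.disconnect();
    }
}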

2008/5/13 Marc Bechler <[EMAIL PROTECTED]>:

> Hi Walter,
>
> thanks for your advice and, indeed, that is correct, too (and I will
> likely implement the cleaning mechanism this way). (Btw: what would the
> query look like to get rows 101-200 in the second chunk?) However,
> fetching in chunks is not atomic, so the combined result may lack
> integrity if the index changes between requests.
>
> Regards,
>
>  marc
>
>
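
(To answer the chunk question inline: paging is done with the zero-based start parameter, so the second chunk of 100, i.e. rows 101-200, would be something like

  http://localhost:8983/solr/select?q=*:*&start=100&rows=100&sort=id+asc

The URL is just the usual local setup, and the sort on a unique field like id is my addition; a fixed sort keeps the chunk boundaries deterministic across requests.)
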
>
> Walter Underwood wrote:
>
> > Nope. You should fetch all the rows in 100-row chunks. Much, much
> > better than getting them all in one request. I do that to load
> > the auto-complete table.
> >
> > I really cannot think of a good reason to fetch all the rows
> > in one request. That is more like a denial of service attack
> > than like a useful engineering solution.
> >
> > wunder
> >
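
A rough SolrJ version of that chunked loop (class names as of the 1.3 code line; the server URL and the id field are my assumptions):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrDocumentList;

public class ChunkedFetch {
    public static void main(String[] args) throws Exception {
        SolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
        final int chunk = 100;
        int start = 0;
        long numFound;
        do {
            SolrQuery q = new SolrQuery("*:*");
            q.addSortField("id", SolrQuery.ORDER.asc); // stable page order
            q.setStart(start);
            q.setRows(chunk);
            QueryResponse rsp = server.query(q);
            SolrDocumentList page = rsp.getResults();
            numFound = page.getNumFound();
            for (SolrDocument doc : page) {
                // process one document, e.g. collect its id for the sync check
                System.out.println(doc.getFieldValue("id"));
            }
            start += chunk;
        } while (start < numFound);
    }
}
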
> > On 5/9/08 11:11 AM, "Marc Bechler" <[EMAIL PROTECTED]> wrote:
> >
> > > Hi all,
> > >
> > > one possible use case could be to synchronize the index against a
> > > given database. E.g., assume that you have a filesystem that is
> > > indexed periodically. If files are deleted on this filesystem, they
> > > will not be deleted from the index. So you need a way to get (e.g.)
> > > the complete content of your index in order to check it for
> > > consistency.
> > >
> > > Btw: I also played around with the rows parameter in order to fetch
> > > the whole index, but I got exceptions ("not sufficient heap space")
> > > when setting rows above a certain threshold.
> > >
> > > Regards,
> > >
> > >  marc
> > >
> > >
> > > Erik Hatcher wrote:
> > >
> > > > Or make two requests...  one with rows=0 to see how many documents
> > > > match without retrieving any, then another with that amount
> > > > specified.
> > > >
> > > >    Erik
> > > >
> > > >
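
(Spelled out as plain requests, with the count taken from the numFound attribute of the first response; the URL is just the usual local setup:

  http://localhost:8983/solr/select?q=*:*&rows=0
  http://localhost:8983/solr/select?q=*:*&rows=<numFound from above>

Note the second request still materializes a result window of that size on the server, so it can run into the same heap limits Marc describes.)
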
> > > > On May 9, 2008, at 8:54 AM, Francisco Sanmartin wrote:
> > > >
> > > > > Yeah, I understand the possible problems of changing this value.
> > > > > It's just a very particular case and there won't be a lot of
> > > > > documents to return. I guess I'll have to use a very high int
> > > > > number, I just wanted to know if there was any "proper"
> > > > > configuration for this situation.
> > > > >
> > > > > Thanks for the answer!
> > > > >
> > > > > Pako
> > > > >
> > > > >
> > > > > Otis Gospodnetic wrote:
> > > > >
> > > > > > Will something a la rows=<max int here> work? ;) But are you
> > > > > > sure you want to do that?  It could be sloooooow.
> > > > > >
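
(For the record, Java's max int is 2147483647, so the literal version is rows=2147483647. And it is not only slow: as far as I know the searcher sizes its result queue from start+rows up front, which would explain the heap-space exceptions Marc ran into with very large values.)
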
> > > > > >
> > > > > > Otis
> > > > > > --
> > > > > > Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
> > > > > >
> > > > > >
> > > > > > ----- Original Message ----
> > > > > >
> > > > > > > From: Francisco Sanmartin <[EMAIL PROTECTED]>
> > > > > > > To: solr-user@lucene.apache.org
> > > > > > > Sent: Thursday, May 8, 2008 4:18:46 PM
> > > > > > > Subject: Unlimited number of return documents?
> > > > > > >
> > > > > > > What is the value to set "rows" to in solrconfig.xml in order
> > > > > > > not to have any limit on the number of returned documents? I've
> > > > > > > tried with "-1" and "0" but no luck...
> > > > > > >
> > > > > > >   <int name="rows">10</int>
> > > > > > >
> > > > > > > I want Solr to return all available documents by default.
> > > > > > >
> > > > > > > Thanks!
> > > > > > >
> > > > > > > Pako
> > > > > > >
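
(For completeness: that default lives in the request handler's defaults section of solrconfig.xml, roughly like this; the handler name and class vary between versions, so treat it as a sketch:

  <requestHandler name="standard" class="solr.StandardRequestHandler" default="true">
    <lst name="defaults">
      <int name="rows">10</int>
    </lst>
  </requestHandler>

There is no special "unlimited" value for rows; all you can do here is raise the number.)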


-- 
Alexander Ramos Jardim
