: I now set a high value like 10000 as result limit, which is always enough for
: my needs, but I nevertheless wanted to point at this error.

For the record: while Yonik has fixed Solr so that it *can* accept 
MAX_VALUE, that doesn't mean you *should* use MAX_VALUE.

Even if you want *all* the results, no matter how many there are, if you 
are confident you know what the upper bound is (ie: 10000 "is always 
enough" for your needs) you should use that upper bound and put an 
assertion in your client that checks the value of numFound and reports an 
error if it's higher than your expectation.

Example: if you have 1 million documents, and you *know* a query is going 
to match no more than 1000 and you want them all, then say rows=1000 
instead of rows='MAX_VALUE' ...
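In client code that pattern might look something like the sketch below (Python, 
using only the standard library; the core URL, query, and ROWS_UPPER_BOUND 
are illustrative assumptions, not anything Solr-specific):

```python
# Sketch of the bounded-rows pattern: ask for an explicit upper bound
# and fail loudly if numFound says the query matched more than that.
# The endpoint/core/query below are hypothetical examples.

ROWS_UPPER_BOUND = 1000  # the bound you are confident about


def check_num_found(response: dict, upper_bound: int = ROWS_UPPER_BOUND) -> list:
    """Return the matched docs from a Solr JSON response, but raise if
    numFound exceeds the bound we requested with rows=<upper_bound>."""
    num_found = response["response"]["numFound"]
    if num_found > upper_bound:
        raise RuntimeError(
            f"query matched {num_found} docs, more than the expected "
            f"maximum of {upper_bound}; check your query or your index"
        )
    return response["response"]["docs"]


# The request itself would look something like:
#
#   import json, urllib.parse, urllib.request
#   params = urllib.parse.urlencode(
#       {"q": "type:widget", "rows": ROWS_UPPER_BOUND, "wt": "json"})
#   url = f"http://localhost:8983/solr/mycore/select?{params}"
#   with urllib.request.urlopen(url) as r:
#       docs = check_num_found(json.load(r))
```

The point is that the assertion, not the rows value, is what tells you your 
assumption about the upper bound was wrong.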

1) it might allow Solr to be more efficient in processing your 
request since it knows you only want a certain number anyway (ie: in 
allocating arrays and such)

2) it protects you in the event that you are wrong.

What if, because of a bug in your indexing code, or some action taken by 
someone else, it turns out that your query actually matches all 1 million 
docs in your index? By asking for "all" of them, you 
not only risk crashing your client when it gets back 1,000 times 
as much data as it expected, but you could also hose your network, and 
maybe even crash the Solr server itself.

-Hoss
