I'm not clear exactly what you mean by batches.
What I'm doing is:
<add>
  <doc>...</doc>  <!-- #1 -->
  [...]
  <doc>...</doc>  <!-- #100 -->
</add>
<commit/>
So that's 100, in 1 HTTP POST.
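For concreteness, the batching loop described above can be sketched like this. It is only a sketch: the update URL, the field names, and the doc-as-dict representation are assumptions, not the poster's actual script.

```python
# Minimal sketch of batched Solr XML updates, assuming each doc is a
# dict of field name -> value. Schema fields and URL are hypothetical.
from xml.sax.saxutils import escape

def batches(docs, size=100):
    """Yield successive lists of at most `size` docs."""
    batch = []
    for doc in docs:
        batch.append(doc)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def to_add_xml(batch):
    """Render one batch as a Solr <add> message."""
    parts = ["<add>"]
    for doc in batch:
        fields = "".join(
            '<field name="%s">%s</field>' % (escape(k), escape(str(v)))
            for k, v in doc.items()
        )
        parts.append("<doc>%s</doc>" % fields)
    parts.append("</add>")
    return "".join(parts)

# Each rendered batch would be POSTed (Content-Type: text/xml) to the
# update handler, e.g. http://localhost:8983/solr/update (assumed),
# with a "<commit/>" POST to make the adds visible.
```

At ~900k docs that works out to roughly 9,000 POSTs of 100 docs each.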
On 1-Jun-07, at 11:11 PM, Jordan Hayes wrote:
Lucene sometimes just requires many file descriptors (this will
be somewhat alleviated with Solr 1.2).
Is there a way to find out how many is many ...?
For each segment: roughly 7 files, plus one per indexed field. There
should be on the order of log_{base mergeFactor}(numDocs) segments at
any one time.
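Turning that rule of thumb into numbers, a back-of-envelope sketch (the merge-policy constants below are assumptions, and the compound file format changes the picture entirely):

```python
import math

def estimated_open_files(num_docs, num_indexed_fields,
                         merge_factor=10, max_buffered_docs=1000):
    # Rough segment count for Lucene's log-merge behaviour: up to
    # (merge_factor - 1) segments per "level", one level per power of
    # merge_factor above the initial flush size. Constants are assumed
    # defaults, not measurements.
    levels = 1 + math.log(max(num_docs / max_buffered_docs, 1), merge_factor)
    segments = (merge_factor - 1) * levels
    # Per the rule of thumb above: ~7 files plus one norms file per
    # indexed field, for every (non-compound) segment.
    return int(segments * (7 + num_indexed_fields))
```

With 900k docs and, say, 10 indexed fields this lands in the hundreds of descriptors for one index, before in-flight merges, open searchers, and deletion files are counted, which is how a default ulimit of 1024 gets exhausted.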
On 1-Jun-07, at 6:35 PM, Jordan Hayes wrote:
New user here, and I ran into a problem trying to load a lot of
documents (~900k). I tried to load them all at once, which seemed to
run for a long time and then finally crap out with "Too many open
files" ... so I read in an FAQ that about 100 might be a good number.
I split my documents up
On Jun 1, 2007, at 10:47 PM, Mike Klaas wrote:
Am I just doing something wrong?
No. Lucene sometimes just requires many file descriptors (this
will be somewhat alleviated with Solr 1.2). I suggest upping the
open file limit (I upped mine from 1024 to 45000 to handle huge
indices).
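The per-process soft limit can also be checked and raised from inside the indexing process. A sketch only: raising the hard limit or the system-wide cap needs root and is OS-specific, and 45000 simply mirrors the number quoted above.

```python
# Unix-only: inspect and raise this process's open-file limit using
# the standard-library resource module.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if hard == resource.RLIM_INFINITY:
    target = 45000  # mirrors the value quoted above; an assumption
else:
    target = min(45000, hard)
if soft < target:
    # An unprivileged process may raise its soft limit up to the
    # hard limit, but no further.
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
```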