: otherwise I would have done so already. My real question is question number
: one, which did not receive a reply: is there a formula that can tell me if
: what is happening is reasonable and to be expected, or am I doing something
: wrong?

I've never played with the binary fields much, nor have I ever tried to
add more than a few KB of data to any document, but my reading of the docs
doesn't turn up any reason why you should be encountering this problem.

Typically, when people report problems like this, it turns out the problem
had nothing to do with Lucene -- they were forgetting to close files they
were reading while indexing, or something like that.  So I tried writing a
unit test that allocates arbitrary in-memory byte arrays to see if I could
reproduce the problem, and sure enough, I can.
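
Roughly, the test looks like this (a minimal sketch from memory against
the 1.9-era API; the index path and field name are arbitrary):

  import org.apache.lucene.analysis.SimpleAnalyzer;
  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;
  import org.apache.lucene.index.IndexWriter;

  public class LargeBinaryFieldTest {
    public static void main(String[] args) throws Exception {
      // write to a scratch index on disk so the index itself isn't
      // competing with the documents for heap space
      IndexWriter writer =
          new IndexWriter("/tmp/binary-field-test", new SimpleAnalyzer(), true);

      byte[] payload = new byte[5 * 1024 * 1024]; // 5MB per document

      for (int i = 0; i < 100; i++) {
        Document doc = new Document();
        // binary fields must be stored, hence Field.Store.YES
        doc.add(new Field("data", payload, Field.Store.YES));
        writer.addDocument(doc);
        System.out.println("added doc " + i); // dies around doc 9 at 5MB
      }
      writer.close();
    }
  }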

No matter what size heap I use, I can't add more than 9 documents
containing fields of 5MB of data.  It seems I can add as many 4MB documents
as my heart desires, but once I go up to 5MB, all hell breaks loose.

I didn't try playing with the various IndexWriter options to see what
effect they had on the breaking point.
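
(For anyone who wants to experiment, the knobs I had in mind are along
these lines -- untested guesses, and the values are arbitrary:)

  writer.setMaxBufferedDocs(2); // flush buffered docs to disk more often
  writer.setMergeFactor(2);     // merge segments more aggressively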

I've opened a Jira issue about this:

https://issues.apache.org/jira/browse/LUCENE-488


-Hoss

