: So, naturally we increased the heap size and things worked
: well for a while and then the errors would happen again.
: We've increased the initial heap size to 2.5GB and it's
: still happening.

is this the same 25,000,000 document index you mentioned before?

2.5GB of heap doesn't seem like much if you are also doing faceting ...
even if you were faceting on an int field, there's going to be ~95MB of
FieldCache for that field on a 25M doc index. you said this is a string
field, so it's going to be that 95MB plus however much space is needed
for all the terms. presumably not every doc has a unique value if you
are faceting on this field, but even assuming a conservative 10% unique
values of 10 characters each, that's another ~50MB, so we're up to
about 150MB of FieldCache just to facet that one field -- and we
haven't even started talking about how big the index itself is, how big
the filterCache gets, or how many other fields you are faceting on.
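
to make that back of the envelope math concrete, here's a rough sketch
of the estimate (the 25M docs, 10% unique values, and 10 character
terms are just the assumptions above, not measured numbers, and real
usage will be higher once you count array/object overhead):

    // rough FieldCache estimate for faceting one string field -- the
    // doc count, uniqueness, and term length are guesses, not
    // measurements
    public class FieldCacheEstimate {
        public static void main(String[] args) {
            long numDocs = 25000000L;
            long ordBytes = numDocs * 4L;             // one int per doc, ~95MB
            long uniqueTerms = numDocs / 10L;         // assume 10% unique values
            long termBytes = uniqueTerms * 10L * 2L;  // 10 chars * 2 bytes each, ~48MB
            long totalMB = (ordBytes + termBytes) / (1024L * 1024L);
            System.out.println("~" + totalMB + "MB of FieldCache for this field");
        }
    }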

how big is your index on disk? are you faceting or sorting on other fields 
as well?

what does the LukeRequestHandler tell you about the # of distinct terms 
in each field that you facet on?
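
for example, something like this (assuming the default /admin/luke
mapping from the example solrconfig.xml, and substituting your own
host/port and field name) should include a "distinct" term count in the
per-field section of the response:

    http://localhost:8983/solr/admin/luke?fl=your_facet_field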

-Hoss
