The other issue you need to worry about is long full GC pauses
with -Xmx32000m.
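
To check whether long full GC pauses are actually hitting you, one option
(just a suggestion on my part, not something your error output shows) is to
enable GC logging when starting Solr, e.g.:

  java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
       -Xmx32000m -jar start.jar

Long "Full GC" entries in that log would confirm the pause problem.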

Maybe try reducing your JVM heap considerably (e.g. to -Xmx8g) and
switching to MMapDirectory - see:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

In solrconfig.xml, this would be:

  <directoryFactory name="DirectoryFactory"
                    class="${solr.directoryFactory:solr.MMapDirectoryFactory}"/>


On Mon, Feb 25, 2013 at 11:05 AM, zqzuk <ziqizh...@hotmail.co.uk> wrote:
> Hi, thanks for your advice!
>
> I have deliberately allocated 32G to the JVM, with the command "java -Xmx32000m
> -jar start.jar" etc. I am using our server, which I think has a total of 48G.
> However, it still crashes with that error whenever I specify any keywords in my
> query. The only query that worked, as I said, is "q=*:*".
>
> I also realised that the best configuration would be a SolrCloud setup. It's a
> shame that I cannot split this index for that purpose but rather have to
> re-index everything.
>
> But I would very much like to know exactly what happened with that
> error:
>
> "java.lang.OutOfMemoryError: OutOfMemoryError likely caused by the Sun VM
> Bug described in https://issues.apache.org/jira/browse/LUCENE-1566;
> try calling FSDirectory.setReadChunkSize with a value smaller than the
> current chunk size (2147483647)"
>
> In particular, what does the last line tell me?
>
> Many thanks again!
>
>
>
