: You could, but before that I'd try to see what's using your memory and
: whether you can decrease that. Maybe identify why you are running OOM now
: and not with your previous Solr version (assuming you weren't, and that
: you are running with the same JVM settings). A bigger heap usually means
: more work for the GC and less memory available for the OS cache.
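
To get a first look at what's actually sitting on the heap, a class
histogram is a quick start (jmap ships with the JDK; 26044 is the Solr
PID from the oom_killer log below):

  $ jmap -histo:live 26044 | head -20

Note that -histo:live forces a full GC first, so only live objects are
counted.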

FWIW: One of the bugs fixed in 6.0 was that the oom_killer script wasn't 
being invoked properly on OOM -- so the fact that you are getting 
OOMErrors in 6.0 may not actually be a new thing; it may just be new that 
the oom_killer is making you aware of them

https://issues.apache.org/jira/browse/SOLR-8145
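
For context, the oom_killer script is wired up as a JVM hook, roughly of
this shape (port and paths here are illustrative -- see bin/solr for the
real invocation):

  -XX:OnOutOfMemoryError="/opt/solr/bin/oom_solr.sh 8983 /var/solr/logs"

When that hook is in effect, the JVM runs the script on the first
OutOfMemoryError, which is what writes the solr_oom_killer-*.log you
quoted.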

That doesn't negate Tomás's excellent advice about trying to determine
what is causing the OOM, but I wouldn't get too hung up on "what changed" 
between 5.x and 6.0 -- possibly nothing other than "now you know about 
it."
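
FWIW, the GC log you pasted already says a lot: the CMS old generation is
essentially full (6291455K of 6291456K) and the Full GC recovered almost
nothing (8038591K->8038590K), which usually points at live data that
genuinely doesn't fit in the heap rather than a GC tuning problem. If you
want more detail on what doesn't fit, you can have the JVM write a heap
dump on OOM and dig through it offline (e.g. with Eclipse MAT) -- roughly,
in solr.in.sh (the dump path is illustrative):

  SOLR_OPTS="$SOLR_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/solr/logs"

Just be aware that the dump file will be roughly the size of the used heap.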



: 
: Tomás
: 
: On Sun, May 1, 2016 at 11:20 PM, Bastien Latard - MDPI AG <
: lat...@mdpi.com.invalid> wrote:
: 
: > Hi Guys,
: >
: > The OOM killer script has been executed several times since I upgraded
: > to Solr 6.0:
: >
: > $ cat solr_oom_killer-8983-2016-04-29_15_16_51.log
: > Running OOM killer script for process 26044 for Solr on port 8983
: >
: > Does it mean that I need to increase my Java heap?
: > Or should I do anything else?
: >
: > Here are some further logs:
: > $ cat solr_gc_log_20160502_0730:
: > }
: > {Heap before GC invocations=1674 (full 91):
: >  par new generation   total 1747648K, used 1747135K [0x00000005c0000000,
: > 0x0000000640000000, 0x0000000640000000)
: >   eden space 1398144K, 100% used [0x00000005c0000000, 0x0000000615560000,
: > 0x0000000615560000)
: >   from space 349504K,  99% used [0x0000000615560000, 0x000000062aa2fc30,
: > 0x000000062aab0000)
: >   to   space 349504K,   0% used [0x000000062aab0000, 0x000000062aab0000,
: > 0x0000000640000000)
: >  concurrent mark-sweep generation total 6291456K, used 6291455K
: > [0x0000000640000000, 0x00000007c0000000, 0x00000007c0000000)
: >  Metaspace       used 39845K, capacity 40346K, committed 40704K, reserved
: > 1085440K
: >   class space    used 4142K, capacity 4273K, committed 4368K, reserved
: > 1048576K
: > 2016-04-29T21:15:41.970+0200: 20356.359: [Full GC (Allocation Failure)
: > 2016-04-29T21:15:41.970+0200: 20356.359: [CMS:
: > 6291455K->6291456K(6291456K), 12.5694653 secs]
: > 8038591K->8038590K(8039104K), [Metaspace: 39845K->39845K(1085440K)],
: > 12.5695497 secs] [Times: user=12.57 sys=0.00, real=12.57 secs]
: >
: >
: > Kind regards,
: > Bastien
: >
: >
: 

-Hoss
http://www.lucidworks.com/
