Re: OOM script executed

2016-05-06 Thread Shawn Heisey
On 5/5/2016 11:42 PM, Bastien Latard - MDPI AG wrote:
> So if I run the two following requests, the 7.5 MB bitset will only be
> stored once, right?
> - select?q=*:*&fq=bPublic:true&rows=10
> - select?q=field:my_search&fq=bPublic:true&rows=10

That is correct.
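
Both requests share a single filterCache entry because the cache key is
the fq string itself.  To illustrate, a minimal SolrJ sketch of the same
pair of queries -- the URL and core name here are made up, not taken
from this thread:

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;

  public class FilterCacheDemo {
    public static void main(String[] args) throws Exception {
      HttpSolrClient client =
          new HttpSolrClient("http://localhost:8983/solr/mycore");

      SolrQuery q1 = new SolrQuery("*:*");
      q1.addFilterQuery("bPublic:true"); // bitset computed once, cached
      q1.setRows(10);
      client.query(q1);

      SolrQuery q2 = new SolrQuery("field:my_search");
      q2.addFilterQuery("bPublic:true"); // identical fq string -> cache hit
      q2.setRows(10);
      client.query(q2);

      client.close();
    }
  }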

Thanks,
Shawn



Re: OOM script executed

2016-05-05 Thread Bastien Latard - MDPI AG

Thank you Shawn!

So if I run the two following requests, the 7.5 MB bitset will only be
stored once, right?

- select?q=*:*&fq=bPublic:true&rows=10
- select?q=field:my_search&fq=bPublic:true&rows=10

kr,

Bast

On 04/05/2016 16:22, Shawn Heisey wrote:

On 5/3/2016 11:58 PM, Bastien Latard - MDPI AG wrote:

Thank you for your email.
You said "have big caches or request big pages (e.g. 100k docs)"...
Does an fq cache all the potential results, or only the ones the query
returns?
e.g.: select?q=*:*&fq=bPublic:true&rows=10

=> with this query, if I have 60 million public documents, would
it cache 10 or 60 million IDs?
...and is the filter cache (from fq) kept in the OS cache or
in the Java heap?

The result of a filter query is a bitset.  If the core contains 60
million documents, each bitset is 7.5 million bytes in length.  It is
not a list of IDs -- it's a large array of bits representing every
document in the Lucene index, including deleted documents (the Max Doc
value from the core overview).  There are two values for each bit - 0 or
1, depending on whether each document matches the filter or not.

Thanks,
Shawn




Kind regards,
Bastien Latard
Web engineer
--
MDPI AG
Postfach, CH-4005 Basel, Switzerland
Office: Klybeckstrasse 64, CH-4057
Tel. +41 61 683 77 35
Fax: +41 61 302 89 18
E-mail:
lat...@mdpi.com
http://www.mdpi.com/



Re: OOM script executed

2016-05-04 Thread Chris Hostetter

: You could, but before that I'd try to see what's using your memory and see
: if you can decrease that. Maybe identify why you are running OOM now and
: not with your previous Solr version (assuming you weren't, and that you are
: running with the same JVM settings). A bigger heap usually means more work
: to the GC and less memory available for the OS cache.

FWIW: One of the bugs fixed in 6.0 was that the oom_killer script wasn't
being called properly on OOM -- so the fact that you are getting
OOMErrors in 6.0 may not actually be a new thing; it may just be new
that you are being made aware of them by the oom_killer:

https://issues.apache.org/jira/browse/SOLR-8145
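
(For reference, the script is hooked up through the JVM's
OnOutOfMemoryError option; the start script passes something along these
lines, where the install path, port and log directory below are only
illustrative:

  -XX:OnOutOfMemoryError="/opt/solr/bin/oom_solr.sh 8983 /var/solr/logs"

That script is what writes the solr_oom_killer-<port>-<date>.log file
quoted in this thread.)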

That doesn't negate Tomás's excellent advice about trying to determine
what is causing the OOM, but I wouldn't get too hung up on "what changed"
between 5.x and 6.0 -- possibly nothing other than "now you know about
it."



: 
: Tomás
: 
: On Sun, May 1, 2016 at 11:20 PM, Bastien Latard - MDPI AG <
: lat...@mdpi.com.invalid> wrote:
: 
: > Hi Guys,
: >
: > The OOM killer script has been executed several times since I upgraded to Solr 6.0:
: >
: > $ cat solr_oom_killer-8983-2016-04-29_15_16_51.log
: > Running OOM killer script for process 26044 for Solr on port 8983
: >
: > Does this mean that I need to increase my Java heap?
: > Or should I do anything else?
: >
: > Here are some further logs:
: > $ cat solr_gc_log_20160502_0730:
: > }
: > {Heap before GC invocations=1674 (full 91):
: >  par new generation   total 1747648K, used 1747135K [0x00000005c0000000, 0x0000000640000000, 0x0000000640000000)
: >   eden space 1398144K, 100% used [0x00000005c0000000, 0x0000000615560000, 0x0000000615560000)
: >   from space 349504K,  99% used [0x0000000615560000, 0x000000062aa2fc30, 0x000000062aab0000)
: >   to   space 349504K,   0% used [0x000000062aab0000, 0x000000062aab0000, 0x0000000640000000)
: >  concurrent mark-sweep generation total 6291456K, used 6291455K [0x0000000640000000, 0x00000007c0000000, 0x00000007c0000000)
: >  Metaspace       used 39845K, capacity 40346K, committed 40704K, reserved 1085440K
: >   class space    used 4142K, capacity 4273K, committed 4368K, reserved 1048576K
: > 2016-04-29T21:15:41.970+0200: 20356.359: [Full GC (Allocation Failure) 2016-04-29T21:15:41.970+0200: 20356.359: [CMS: 6291455K->6291456K(6291456K), 12.5694653 secs] 8038591K->8038590K(8039104K), [Metaspace: 39845K->39845K(1085440K)], 12.5695497 secs] [Times: user=12.57 sys=0.00, real=12.57 secs]
: >
: >
: > Kind regards,
: > Bastien
: >
: >
: 

-Hoss
http://www.lucidworks.com/

Re: OOM script executed

2016-05-04 Thread Shawn Heisey
On 5/3/2016 11:58 PM, Bastien Latard - MDPI AG wrote:
> Thank you for your email.
> You said "have big caches or request big pages (e.g. 100k docs)"...
> Does an fq cache all the potential results, or only the ones the query
> returns?
> e.g.: select?q=*:*&fq=bPublic:true&rows=10
>
> => with this query, if I have 60 million public documents, would
> it cache 10 or 60 million IDs?
> ...and is the filter cache (from fq) kept in the OS cache or
> in the Java heap?

The result of a filter query is a bitset.  If the core contains 60
million documents, each bitset is 7.5 million bytes in length.  It is
not a list of IDs -- it's a large array of bits representing every
document in the Lucene index, including deleted documents (the Max Doc
value from the core overview).  There are two values for each bit - 0 or
1, depending on whether each document matches the filter or not.
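
A quick sanity check on that figure, assuming the 60 million Max Doc
from the example above (a back-of-the-envelope sketch, not code from
this thread):

  long maxDoc = 60_000_000L;     // Max Doc from the core overview
  long bitsetBytes = maxDoc / 8; // one bit per document in the index
  // bitsetBytes == 7_500_000 -> ~7.5 MB per filterCache entry,
  // whether the filter matches 10 documents or 60 million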

Thanks,
Shawn



Re: OOM script executed

2016-05-03 Thread Bastien Latard - MDPI AG

Hi Tomás,

Thank you for your email.
You said "have big caches or request big pages (e.g. 100k docs)"...
Does an fq cache all the potential results, or only the ones the query
returns?

e.g.: select?q=*:*&fq=bPublic:true&rows=10

=> with this query, if I have 60 million public documents, would it
cache 10 or 60 million IDs?
...and is the filter cache (from fq) kept in the OS cache or in the
Java heap?


kr,
Bastien

On 04/05/2016 02:31, Tomás Fernández Löbbe wrote:

You could use some memory analyzer tools (e.g. jmap), that could give you a
hint. But if you are migrating, I'd start to see if you changed something
from the previous version, including jvm settings, schema/solrconfig.
If nothing is different, I'd try to identify which feature is consuming
more memory. If you use faceting/stats/suggester, or you have big caches or
request big pages (e.g. 100k docs) or use Solr Cell for extracting content,
those are some usual suspects. Try to narrow it down, it could be many
things. Turn on/off features as you look at the memory (you could use
something like jconsole/jvisualvm/jstat) and see when it spikes, compare
with the previous version. That's what I would do, at least.

If you get to narrow it down to a specific feature, then you can come back
to the users list and ask with some more specifics, that way someone could
point you to the solution, or maybe file a JIRA if it turns out to be a bug.

Tomás

On Mon, May 2, 2016 at 11:34 PM, Bastien Latard - MDPI AG <
lat...@mdpi.com.invalid> wrote:


Hi Tomás,

Thanks for your answer.
How could I see what's using memory?
I tried to add "-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/solr/logs/OOM_Heap_dump/"
...but this doesn't seem to be really helpful...

Kind regards,
Bastien


On 02/05/2016 22:55, Tomás Fernández Löbbe wrote:


You could, but before that I'd try to see what's using your memory and see
if you can decrease that. Maybe identify why you are running OOM now and
not with your previous Solr version (assuming you weren't, and that you
are
running with the same JVM settings). A bigger heap usually means more work
to the GC and less memory available for the OS cache.

Tomás

On Sun, May 1, 2016 at 11:20 PM, Bastien Latard - MDPI AG <
lat...@mdpi.com.invalid> wrote:

Hi Guys,

The OOM killer script has been executed several times since I upgraded to Solr 6.0:

$ cat solr_oom_killer-8983-2016-04-29_15_16_51.log
Running OOM killer script for process 26044 for Solr on port 8983

Does this mean that I need to increase my Java heap?
Or should I do anything else?

Here are some further logs:
$ cat solr_gc_log_20160502_0730:
}
{Heap before GC invocations=1674 (full 91):
 par new generation   total 1747648K, used 1747135K [0x00000005c0000000, 0x0000000640000000, 0x0000000640000000)
  eden space 1398144K, 100% used [0x00000005c0000000, 0x0000000615560000, 0x0000000615560000)
  from space 349504K,  99% used [0x0000000615560000, 0x000000062aa2fc30, 0x000000062aab0000)
  to   space 349504K,   0% used [0x000000062aab0000, 0x000000062aab0000, 0x0000000640000000)
 concurrent mark-sweep generation total 6291456K, used 6291455K [0x0000000640000000, 0x00000007c0000000, 0x00000007c0000000)
 Metaspace       used 39845K, capacity 40346K, committed 40704K, reserved 1085440K
  class space    used 4142K, capacity 4273K, committed 4368K, reserved 1048576K
2016-04-29T21:15:41.970+0200: 20356.359: [Full GC (Allocation Failure) 2016-04-29T21:15:41.970+0200: 20356.359: [CMS: 6291455K->6291456K(6291456K), 12.5694653 secs] 8038591K->8038590K(8039104K), [Metaspace: 39845K->39845K(1085440K)], 12.5695497 secs] [Times: user=12.57 sys=0.00, real=12.57 secs]


Kind regards,
Bastien




Kind regards,
Bastien Latard
Web engineer
--
MDPI AG
Postfach, CH-4005 Basel, Switzerland
Office: Klybeckstrasse 64, CH-4057
Tel. +41 61 683 77 35
Fax: +41 61 302 89 18
E-mail:
lat...@mdpi.com
http://www.mdpi.com/




Kind regards,
Bastien Latard
Web engineer
--
MDPI AG
Postfach, CH-4005 Basel, Switzerland
Office: Klybeckstrasse 64, CH-4057
Tel. +41 61 683 77 35
Fax: +41 61 302 89 18
E-mail:
lat...@mdpi.com
http://www.mdpi.com/



Re: OOM script executed

2016-05-03 Thread Tomás Fernández Löbbe
You could use some memory analyzer tools (e.g. jmap), that could give you a
hint. But if you are migrating, I'd start to see if you changed something
from the previous version, including jvm settings, schema/solrconfig.
If nothing is different, I'd try to identify which feature is consuming
more memory. If you use faceting/stats/suggester, or you have big caches or
request big pages (e.g. 100k docs) or use Solr Cell for extracting content,
those are some usual suspects. Try to narrow it down, it could be many
things. Turn on/off features as you look at the memory (you could use
something like jconsole/jvisualvm/jstat) and see when it spikes, compare
with the previous version. That's what I would do, at least.
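
For example, with the standard JDK tools (the pid placeholder below is
whatever "bin/solr status" reports):

  $ jstat -gcutil <pid> 5000       # heap occupancy and GC activity every 5s
  $ jmap -histo:live <pid> | head  # live-object histogram, largest consumers first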

If you get to narrow it down to a specific feature, then you can come back
to the users list and ask with some more specifics, that way someone could
point you to the solution, or maybe file a JIRA if it turns out to be a bug.

Tomás

On Mon, May 2, 2016 at 11:34 PM, Bastien Latard - MDPI AG <
lat...@mdpi.com.invalid> wrote:

> Hi Tomás,
>
> Thanks for your answer.
> How could I see what's using memory?
> I tried to add "-XX:+HeapDumpOnOutOfMemoryError
> -XX:HeapDumpPath=/var/solr/logs/OOM_Heap_dump/"
> ...but this doesn't seem to be really helpful...
>
> Kind regards,
> Bastien
>
>
> On 02/05/2016 22:55, Tomás Fernández Löbbe wrote:
>
>> You could, but before that I'd try to see what's using your memory and see
>> if you can decrease that. Maybe identify why you are running OOM now and
>> not with your previous Solr version (assuming you weren't, and that you
>> are
>> running with the same JVM settings). A bigger heap usually means more work
>> to the GC and less memory available for the OS cache.
>>
>> Tomás
>>
>> On Sun, May 1, 2016 at 11:20 PM, Bastien Latard - MDPI AG <
>> lat...@mdpi.com.invalid> wrote:
>>
>> Hi Guys,
>>>
>>> The OOM killer script has been executed several times since I upgraded to Solr 6.0:
>>>
>>> $ cat solr_oom_killer-8983-2016-04-29_15_16_51.log
>>> Running OOM killer script for process 26044 for Solr on port 8983
>>>
>>> Does this mean that I need to increase my Java heap?
>>> Or should I do anything else?
>>>
>>> Here are some further logs:
>>> $ cat solr_gc_log_20160502_0730:
>>> }
>>> {Heap before GC invocations=1674 (full 91):
>>>  par new generation   total 1747648K, used 1747135K [0x00000005c0000000, 0x0000000640000000, 0x0000000640000000)
>>>   eden space 1398144K, 100% used [0x00000005c0000000, 0x0000000615560000, 0x0000000615560000)
>>>   from space 349504K,  99% used [0x0000000615560000, 0x000000062aa2fc30, 0x000000062aab0000)
>>>   to   space 349504K,   0% used [0x000000062aab0000, 0x000000062aab0000, 0x0000000640000000)
>>>  concurrent mark-sweep generation total 6291456K, used 6291455K [0x0000000640000000, 0x00000007c0000000, 0x00000007c0000000)
>>>  Metaspace       used 39845K, capacity 40346K, committed 40704K, reserved 1085440K
>>>   class space    used 4142K, capacity 4273K, committed 4368K, reserved 1048576K
>>> 2016-04-29T21:15:41.970+0200: 20356.359: [Full GC (Allocation Failure) 2016-04-29T21:15:41.970+0200: 20356.359: [CMS: 6291455K->6291456K(6291456K), 12.5694653 secs] 8038591K->8038590K(8039104K), [Metaspace: 39845K->39845K(1085440K)], 12.5695497 secs] [Times: user=12.57 sys=0.00, real=12.57 secs]
>>>
>>>
>>> Kind regards,
>>> Bastien
>>>
>>>
>>>
> Kind regards,
> Bastien Latard
> Web engineer
> --
> MDPI AG
> Postfach, CH-4005 Basel, Switzerland
> Office: Klybeckstrasse 64, CH-4057
> Tel. +41 61 683 77 35
> Fax: +41 61 302 89 18
> E-mail:
> lat...@mdpi.com
> http://www.mdpi.com/
>
>


Re: OOM script executed

2016-05-03 Thread Bastien Latard - MDPI AG

Hi Tomás,

Thanks for your answer.
How could I see what's using memory?
I tried to add "-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/var/solr/logs/OOM_Heap_dump/"

...but this doesn't seem to be really helpful...
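
(Note: the dump written by -XX:+HeapDumpOnOutOfMemoryError is a binary
.hprof file, so it is not readable as text; it needs an analyzer such as
Eclipse MAT or the JDK's jhat.  For example -- the file name below is
illustrative:

  $ jhat /var/solr/logs/OOM_Heap_dump/java_pid26044.hprof

...then browse the report at http://localhost:7000/.)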

Kind regards,
Bastien

On 02/05/2016 22:55, Tomás Fernández Löbbe wrote:

You could, but before that I'd try to see what's using your memory and see
if you can decrease that. Maybe identify why you are running OOM now and
not with your previous Solr version (assuming you weren't, and that you are
running with the same JVM settings). A bigger heap usually means more work
to the GC and less memory available for the OS cache.

Tomás

On Sun, May 1, 2016 at 11:20 PM, Bastien Latard - MDPI AG <
lat...@mdpi.com.invalid> wrote:


Hi Guys,

The OOM killer script has been executed several times since I upgraded to Solr 6.0:

$ cat solr_oom_killer-8983-2016-04-29_15_16_51.log
Running OOM killer script for process 26044 for Solr on port 8983

Does this mean that I need to increase my Java heap?
Or should I do anything else?

Here are some further logs:
$ cat solr_gc_log_20160502_0730:
}
{Heap before GC invocations=1674 (full 91):
 par new generation   total 1747648K, used 1747135K [0x00000005c0000000, 0x0000000640000000, 0x0000000640000000)
  eden space 1398144K, 100% used [0x00000005c0000000, 0x0000000615560000, 0x0000000615560000)
  from space 349504K,  99% used [0x0000000615560000, 0x000000062aa2fc30, 0x000000062aab0000)
  to   space 349504K,   0% used [0x000000062aab0000, 0x000000062aab0000, 0x0000000640000000)
 concurrent mark-sweep generation total 6291456K, used 6291455K [0x0000000640000000, 0x00000007c0000000, 0x00000007c0000000)
 Metaspace       used 39845K, capacity 40346K, committed 40704K, reserved 1085440K
  class space    used 4142K, capacity 4273K, committed 4368K, reserved 1048576K
2016-04-29T21:15:41.970+0200: 20356.359: [Full GC (Allocation Failure) 2016-04-29T21:15:41.970+0200: 20356.359: [CMS: 6291455K->6291456K(6291456K), 12.5694653 secs] 8038591K->8038590K(8039104K), [Metaspace: 39845K->39845K(1085440K)], 12.5695497 secs] [Times: user=12.57 sys=0.00, real=12.57 secs]


Kind regards,
Bastien




Kind regards,
Bastien Latard
Web engineer
--
MDPI AG
Postfach, CH-4005 Basel, Switzerland
Office: Klybeckstrasse 64, CH-4057
Tel. +41 61 683 77 35
Fax: +41 61 302 89 18
E-mail:
lat...@mdpi.com
http://www.mdpi.com/



Re: OOM script executed

2016-05-02 Thread Tomás Fernández Löbbe
You could, but before that I'd try to see what's using your memory and see
if you can decrease that. Maybe identify why you are running OOM now and
not with your previous Solr version (assuming you weren't, and that you are
running with the same JVM settings). A bigger heap usually means more work
to the GC and less memory available for the OS cache.
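
If increasing the heap does turn out to be necessary, it can be set via
SOLR_HEAP in solr.in.sh or directly on the command line; the size below
is only a placeholder:

  $ bin/solr start -m 8g   # sets both -Xms8g and -Xmx8g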

Tomás

On Sun, May 1, 2016 at 11:20 PM, Bastien Latard - MDPI AG <
lat...@mdpi.com.invalid> wrote:

> Hi Guys,
>
> The OOM killer script has been executed several times since I upgraded to Solr 6.0:
>
> $ cat solr_oom_killer-8983-2016-04-29_15_16_51.log
> Running OOM killer script for process 26044 for Solr on port 8983
>
> Does this mean that I need to increase my Java heap?
> Or should I do anything else?
>
> Here are some further logs:
> $ cat solr_gc_log_20160502_0730:
> }
> {Heap before GC invocations=1674 (full 91):
>  par new generation   total 1747648K, used 1747135K [0x00000005c0000000, 0x0000000640000000, 0x0000000640000000)
>   eden space 1398144K, 100% used [0x00000005c0000000, 0x0000000615560000, 0x0000000615560000)
>   from space 349504K,  99% used [0x0000000615560000, 0x000000062aa2fc30, 0x000000062aab0000)
>   to   space 349504K,   0% used [0x000000062aab0000, 0x000000062aab0000, 0x0000000640000000)
>  concurrent mark-sweep generation total 6291456K, used 6291455K [0x0000000640000000, 0x00000007c0000000, 0x00000007c0000000)
>  Metaspace       used 39845K, capacity 40346K, committed 40704K, reserved 1085440K
>   class space    used 4142K, capacity 4273K, committed 4368K, reserved 1048576K
> 2016-04-29T21:15:41.970+0200: 20356.359: [Full GC (Allocation Failure) 2016-04-29T21:15:41.970+0200: 20356.359: [CMS: 6291455K->6291456K(6291456K), 12.5694653 secs] 8038591K->8038590K(8039104K), [Metaspace: 39845K->39845K(1085440K)], 12.5695497 secs] [Times: user=12.57 sys=0.00, real=12.57 secs]
>
>
> Kind regards,
> Bastien
>
>


OOM script executed

2016-05-02 Thread Bastien Latard - MDPI AG

Hi Guys,

The OOM killer script has been executed several times since I upgraded to Solr 6.0:

$ cat solr_oom_killer-8983-2016-04-29_15_16_51.log
Running OOM killer script for process 26044 for Solr on port 8983

Does this mean that I need to increase my Java heap?
Or should I do anything else?

Here are some further logs:
$ cat solr_gc_log_20160502_0730:
}
{Heap before GC invocations=1674 (full 91):
 par new generation   total 1747648K, used 1747135K [0x00000005c0000000, 0x0000000640000000, 0x0000000640000000)
  eden space 1398144K, 100% used [0x00000005c0000000, 0x0000000615560000, 0x0000000615560000)
  from space 349504K,  99% used [0x0000000615560000, 0x000000062aa2fc30, 0x000000062aab0000)
  to   space 349504K,   0% used [0x000000062aab0000, 0x000000062aab0000, 0x0000000640000000)
 concurrent mark-sweep generation total 6291456K, used 6291455K [0x0000000640000000, 0x00000007c0000000, 0x00000007c0000000)
 Metaspace       used 39845K, capacity 40346K, committed 40704K, reserved 1085440K
  class space    used 4142K, capacity 4273K, committed 4368K, reserved 1048576K
2016-04-29T21:15:41.970+0200: 20356.359: [Full GC (Allocation Failure) 2016-04-29T21:15:41.970+0200: 20356.359: [CMS: 6291455K->6291456K(6291456K), 12.5694653 secs] 8038591K->8038590K(8039104K), [Metaspace: 39845K->39845K(1085440K)], 12.5695497 secs] [Times: user=12.57 sys=0.00, real=12.57 secs]



Kind regards,
Bastien