Lucene query to Solr query

2020-01-19 Thread Arnold Bronley
Hi,

I have a Lucene query as follows (the toString representation of Lucene's
Query object):

+(topics:29)^2 (topics:38)^3 +(-id:41135)

It works fine when I use it as a Lucene query with the
SolrIndexSearcher.getDocList method.

However, now I want to use it as a Solr query against a collection. I
tried using the Lucene Query object's toString output as-is, but it does
not work. How should I proceed?
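
One commonly suggested approach (a sketch, not a reply from the thread):
the toString output is close to the syntax that Solr's default lucene
query parser accepts, but a nested pure-negative clause such as
+(-id:41135) matches nothing on its own; rewriting it as +(*:* -id:41135)
is the usual workaround. With SolrJ, that might look like:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class LuceneToSolrQuery {
    public static void main(String[] args) throws Exception {
        // assumptions: a local Solr instance and a collection named "mycollection"
        try (HttpSolrClient client =
                new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            SolrQuery q = new SolrQuery();
            // same clauses as the Lucene toString, with the nested
            // pure-negative clause rewritten so it can match documents
            q.setQuery("+(topics:29)^2 (topics:38)^3 +(*:* -id:41135)");
            q.set("defType", "lucene");
            QueryResponse rsp = client.query("mycollection", q);
            System.out.println("hits: " + rsp.getResults().getNumFound());
        }
    }
}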


Re: Solr 7.7 heap space is getting full

2020-01-19 Thread Rajdeep Sahoo
Is there anything else we should try regarding GC tuning?


Re: Solr 7.7 heap space is getting full

2020-01-19 Thread Rajdeep Sahoo
Initially we were getting a warning that the ulimit was low (1024), so we
raised it to 65000 using ulimit -u 65000.

Then we got a "failed to reserve shared memory (error = 1)" error, so we
removed -XX:+UseLargePages.

Now the console log shows

Could not find or load main class \

and Solr is not starting up.
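
A possible explanation (an inference, not confirmed in the thread): "Could
not find or load main class \" means a literal backslash reached java as
its first non-option argument, which happens when a flag is deleted from a
multi-line option block but a line-continuation backslash is left
dangling. The GC param list posted in the original message mixes lines
with and without trailing backslashes and ends in a bare "\", so after
removing a flag the block needs to stay consistent, along these lines:

GC_TUNE=" \
-XX:+UseConcMarkSweepGC \
-XX:+CMSScavengeBeforeRemark \
-XX:+UseCMSInitiatingOccupancyOnly \
-XX:CMSInitiatingOccupancyFraction=70 \
"

(every flag line keeps its trailing backslash, and no bare "\" is left
behind where a flag used to be)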



Re: Solr 7.7 heap space is getting full

2020-01-19 Thread Mehai, Lotfi
I had a similar issue with a large number of facets. There is no way (at
least that I know of) to get an acceptable response time from the search
engine with a high number of facets.
The way we solved the issue was to cache a shallow facet data structure in
the web services. The facet structures are refreshed periodically; we don't
have near-real-time indexing requirements. Page response time is under 5s.

Here are the URLs for our worst use case:
https://www.govinfo.gov/app/collection/cfr
https://www.govinfo.gov/app/cfrparts/month

I hope that helps.

Lotfi Mehai
https://www.linkedin.com/in/lmehai/
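
A minimal sketch of that caching approach (an illustration; the class
shape, the 15-minute interval, and fetchFacets() are assumptions, with
fetchFacets() standing in for the one expensive faceted Solr query):

import java.util.Collections;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Facet counts are computed once, served from memory, and refreshed on a
// schedule instead of being recomputed by Solr on every page request.
public class FacetCache {
    // field name -> (facet value -> count); swapped atomically on refresh
    private volatile Map<String, Map<String, Long>> facets =
            Collections.emptyMap();

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // near-real-time freshness is not required, so a periodic refresh is fine
        scheduler.scheduleAtFixedRate(this::refresh, 0, 15, TimeUnit.MINUTES);
    }

    private void refresh() {
        facets = fetchFacets(); // the one expensive faceted query against Solr
    }

    public Map<String, Map<String, Long>> getFacets() {
        return facets; // cheap read path used by the web service
    }

    private Map<String, Map<String, Long>> fetchFacets() {
        // placeholder: run the faceted Solr query and build the map here
        return Collections.emptyMap();
    }
}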



Re: Solr 7.7 heap space is getting full

2020-01-19 Thread Walter Underwood
What message do you get that means the heap space is full?

Java will always use all of the heap, either as live data or not-yet-collected 
garbage.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)
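
One way to answer that question concretely (a sketch; the log path is an
assumption): enable GC logging and watch whether full collections actually
reclaim space. On JDK 8 the flags below work, and in Solr 7 they can be
set via the GC_LOG_OPTS variable in solr.in.sh:

GC_LOG_OPTS="-verbose:gc \
-XX:+PrintGCDetails \
-XX:+PrintGCDateStamps \
-XX:+PrintGCApplicationStoppedTime \
-Xloggc:/var/solr/logs/solr_gc.log"

If full GCs reclaim most of the heap, the "full" heap was just
not-yet-collected garbage; if they reclaim little, the live set really is
too big for the configured heap.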



Re: Solr 7.7 heap space is getting full

2020-01-19 Thread Rajdeep Sahoo
Hi,
Currently there are no requests and no indexing happening; it's just
startup, and during that time the heap is getting full. The index size is
approximately 1 GB.




Re: Solr 7.7 heap space is getting full

2020-01-19 Thread Walter Underwood
A new garbage collector won’t fix it, but it might help a bit.

Requesting 200 facet fields and having 50-60 of them with results is a huge 
amount of work for Solr. A typical faceting implementation might have three to 
five facets. Your requests will be at least 10X to 20X slower.

Check the CPU during one request. It should use nearly 100% of a single CPU. If 
it is a lot lower than 100%, you have another bottleneck. That might be 
insufficient heap or accessing disk during query requests (not enough RAM). If 
it is near 100%, the only thing you can do is get a faster CPU.

One other question, how frequently is the index updated?

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)



Re: Solr 7.7 heap space is getting full

2020-01-19 Thread Rajdeep Sahoo
Hi,
Still facing the same issue...
Is there anything else that we need to check?




Re: Solr 7.7 heap space is getting full

2020-01-19 Thread Walter Underwood
With Java 1.8, I would use the G1 garbage collector. We’ve been running that 
combination in prod for three years with no problems.

SOLR_HEAP=8g
# Use G1 GC  -- wunder 2017-01-23
# Settings from https://wiki.apache.org/solr/ShawnHeisey
GC_TUNE=" \
-XX:+UseG1GC \
-XX:+ParallelRefProcEnabled \
-XX:G1HeapRegionSize=8m \
-XX:MaxGCPauseMillis=200 \
-XX:+UseLargePages \
-XX:+AggressiveOpts \
"

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)
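
(In Solr 7, SOLR_HEAP and GC_TUNE are read from the include script,
solr.in.sh.)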




Re: ConnectionImpl.isValid() does not behave as described in Connection javadocs

2020-01-19 Thread Nick Vercammen
I think so, as the ConnectionImpl in Solr does not follow the documented
behavior of the java.sql.Connection interface.



Re: ConnectionImpl.isValid() does not behave as described in Connection javadocs

2020-01-19 Thread Erick Erickson
Is this a Solr issue?



ConnectionImpl.isValid() does not behave as described in Connection javadocs

2020-01-19 Thread Nick Vercammen
Hello,

I'm trying to write a Solr driver for Metabase. Internally, Metabase uses a
C3P0 connection pool. Upon checkout of a connection from the pool, the
library calls isValid(0) (timeout = 0).

According to the javadocs
(https://docs.oracle.com/en/java/javase/11/docs/api/java.sql/java/sql/Connection.html#isValid(int)),
a timeout of 0 means no timeout. In the current implementation, a timeout
of 0 means that the connection is always invalid.
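
For reference, a minimal sketch of what the documented contract implies
(an illustration of the javadoc behavior, not Solr's actual code):

import java.sql.SQLException;
import java.sql.Statement;

// Inside a java.sql.Connection implementation: timeout == 0 must mean
// "no timeout", never "always invalid"; per the javadoc, only negative
// timeout values are rejected with an SQLException.
@Override
public boolean isValid(int timeout) throws SQLException {
    if (timeout < 0) {
        throw new SQLException("timeout must not be negative");
    }
    if (isClosed()) {
        return false;
    }
    try (Statement stmt = createStatement()) {
        if (timeout > 0) {
            stmt.setQueryTimeout(timeout); // bound the check only when asked
        }
        stmt.execute("SELECT 1"); // any cheap round trip to the server
        return true;
    } catch (SQLException e) {
        return false;
    }
}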

I can provide a PR for this.

Nick

-- 
Nick Vercammen
CTO
+32 9 275 31 31
+32 471 39 77 36
nick.vercam...@zeticon.com
www.zeticon.com


Re: Solr 7.7 heap space is getting full

2020-01-19 Thread Rajdeep Sahoo
Please reply, anyone.



Solr 7.7 heap space is getting full

2020-01-19 Thread Rajdeep Sahoo
We are using Solr 7.7. RAM size is 24 GB and the allocated heap is 12 GB.
We have completed indexing; after starting the server, the heap space
suddenly gets full.
We added GC params (JDK version is 1.8), but it is still not working.
Please find the GC params below:
-XX:NewRatio=2
-XX:SurvivorRatio=3
-XX:TargetSurvivorRatio=90 \
-XX:MaxTenuringThreshold=8 \
-XX:+UseConcMarkSweepGC \
-XX:+CMSScavengeBeforeRemark \
-XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
-XX:PretenureSizeThreshold=512m \
-XX:CMSFullGCsBeforeCompaction=1 \
-XX:+UseCMSInitiatingOccupancyOnly \
-XX:CMSInitiatingOccupancyFraction=70 \
-XX:CMSMaxAbortablePrecleanTime=6000 \
-XX:+CMSParallelRemarkEnabled
-XX:+ParallelRefProcEnabled
-XX:+UseLargePages \
-XX:+AggressiveOpts \