Thanks Joel, here is the information you requested.
Are you doing heavy writes at the time?
We write very frequently, but the writes are not heavy: we update about
100 Solr documents per second.
How many concurrent reads are happening?
The concurrent reads are about 1000-2000 per minute per node.
What version of Solr are you using?
We are using Solr 5.5.2.
What is the field definition for the double, is it docValues?
The field definition is:
<field name="fieldName" type="tdouble" indexed="true" stored="true" docValues="true"/>
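
For context, the export requests we issue look roughly like the one below (host, collection, and query are placeholders rather than our exact values); both fl and sort reference the docValues field, as the export handler requires:

    http://localhost:8983/solr/collection1/export?q=*:*&fl=fieldName&sort=fieldName+asc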


2016-11-03 6:30 GMT-07:00 Joel Bernstein <joels...@gmail.com>:

> Are you doing heavy writes at the time?
>
> How many concurrent reads are happening?
>
> What version of Solr are you using?
>
> What is the field definition for the double, is it docValues?
>
>
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Thu, Nov 3, 2016 at 12:56 AM, Ray Niu <newry1...@gmail.com> wrote:
>
> > Hello:
> >    We are using the export handler in SolrCloud to fetch some data. We
> > only request one field, whose type is tdouble. It worked well at the
> > beginning, but recently we saw a high CPU issue on all the SolrCloud
> > nodes. We took some thread dumps and found the following information:
> >
> >    java.lang.Thread.State: RUNNABLE
> >
> >         at java.lang.Thread.isAlive(Native Method)
> >         at org.apache.lucene.util.CloseableThreadLocal.purge(CloseableThreadLocal.java:115)
> >         - locked <0x00000006e24d86a8> (a java.util.WeakHashMap)
> >         at org.apache.lucene.util.CloseableThreadLocal.maybePurge(CloseableThreadLocal.java:105)
> >         at org.apache.lucene.util.CloseableThreadLocal.get(CloseableThreadLocal.java:88)
> >         at org.apache.lucene.index.CodecReader.getNumericDocValues(CodecReader.java:143)
> >         at org.apache.lucene.index.FilterLeafReader.getNumericDocValues(FilterLeafReader.java:430)
> >         at org.apache.lucene.uninverting.UninvertingReader.getNumericDocValues(UninvertingReader.java:239)
> >         at org.apache.lucene.index.FilterLeafReader.getNumericDocValues(FilterLeafReader.java:430)
> > Is this a known issue for the export handler? Since we only fetch up to
> > 5000 documents, it should not be a data volume issue.
> >
> > Can anyone help with this? Thanks a lot.
> >
>
