Hello,
I am building a search application based on a single-core Solr 6.6 server, with
an Angular frontend.
Between the frontend and the Solr server I am thinking of using a Java backend
(to avoid exposing the Solr endpoints directly to the frontend).
I would like to package all those components
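A minimal sketch of the backend-in-the-middle idea (the core name, endpoint, and helper are assumptions, not the poster's code): the backend accepts only the user's query string, builds the upstream Solr URL itself, and never lets the browser talk to Solr. A real proxy would then forward the request and stream the response back.

```java
import java.net.URLEncoder;

public class SolrProxy {
    // Hypothetical internal Solr endpoint; never exposed to the browser.
    static final String SOLR_URL = "http://localhost:8983/solr/mycore/select";

    // Build the upstream Solr URL for a user-supplied query string.
    // Only q comes from the client; the backend fixes every other parameter.
    static String buildQueryUrl(String userQuery) throws Exception {
        return SOLR_URL + "?q=" + URLEncoder.encode(userQuery, "UTF-8")
                + "&wt=json&rows=10";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildQueryUrl("title:solr AND type:doc"));
    }
}
```

Because the client can only influence q, request handlers like /update are unreachable from the frontend by construction.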
Hi All,
I am trying to set up the Graphite reporter for Solr 6.5.0. I've started
a sample Docker instance for Graphite with statsd
(https://github.com/hopsoft/docker-graphite-statsd).
I've also added the Graphite metrics reporter in the solr.xml config of the
collection. However, after doing this
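For reference, a Graphite reporter in Solr 6.x is normally declared in the <metrics> section of solr.xml (node-level config, not per collection). A sketch, where the host/port assume the docker-graphite-statsd defaults:

```xml
<metrics>
  <reporter name="graphite" group="node, jvm, core"
            class="org.apache.solr.metrics.reporters.SolrGraphiteReporter">
    <str name="host">localhost</str>
    <int name="port">2003</int>
    <int name="period">60</int>
  </reporter>
</metrics>
```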
Hi,
I didn't have a chance to go through the steps you are doing, but I followed
the one written by Varun Thacker using InfluxDB:
https://github.com/vthacker/solr-metrics-influxdb, and it works fine. Maybe
it can be of some help.
Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidwo
Hi
I have switched between the Solr and Lucene user lists while debugging this
issue (details in the following thread). My current hypothesis is that since a
large number of indexing threads are being created (the maxIndexingThreads
config is now obsolete), each output segment is really small. Reference:
ht
Hi Erick,
I am using the RESTful API directly. In our application, the system issues the
HTTP requests directly to Solr.
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
  <maxDocs>1</maxDocs>
  <openSearcher>true</openSearcher>
</autoCommit>
Thanks
Hawk
> On 6 Aug 2017, at 11:10 AM, Erick Erickson wrote:
>
> How are you updating 50K docs? SolrJ
You have several possibilities here:
1> you're hitting a massive GC pause that's timing out. You can turn
on GC logging and analyze if that's the case.
2> your updates are getting backed up. At some point it's possible
that the index writer blocks until merges are done IIUC.
Does this ever happen
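To check possibility 1>, GC logging can be turned on via GC_LOG_OPTS in solr.in.sh. A sketch with standard Java 8 flags (the log path is a placeholder):

```shell
# Sketch for solr.in.sh; Java 8 GC logging flags, log path is a placeholder
GC_LOG_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/solr/logs/solr_gc.log"
```

Long "application stopped" entries in the resulting log would point to GC pauses as the cause of the timeouts.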
We found the problem is caused by the delete command. The request is used to
delete documents by id.
url --> http://10.91.1.120:8900/solr/taoke/update?&commit=true&wt=json
body --> {"delete":["20ec36ade0ca4da3bcd78269e2300f6f"]}
When we send over 3000 requests, Solr starts to give an OOM ex
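One thing worth trying (a sketch, not the poster's code): batch many ids into a single delete body and drop commit=true from the URL, letting autoCommit handle commits, so thousands of per-id requests collapse into a few. The JSON shape matches the body shown above; the hypothetical helper below just builds it:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class BatchDelete {
    // Build one JSON update body that deletes many ids in a single request,
    // instead of issuing one HTTP request (with commit=true) per id.
    static String deleteBody(List<String> ids) {
        return ids.stream()
                .map(id -> "\"" + id + "\"")
                .collect(Collectors.joining(",", "{\"delete\":[", "]}"));
    }

    public static void main(String[] args) {
        System.out.println(deleteBody(Arrays.asList("id1", "id2", "id3")));
    }
}
```

The resulting body is POSTed once to /update; committing on every request forces a new searcher each time, which multiplies threads and memory use.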
On 8/6/2017 10:29 PM, hawk@139.com wrote:
> We found the problem is caused by the delete command. The request is used to
> delete document by id.
>
> url --> http://10.91.1.120:8900/solr/taoke/update?&commit=true&wt=json
> body --> {"delete":["20ec36ade0ca4da3bcd78269e2300f6f"]}
>
> When
Below is the OOM exception.
2017-08-07 12:45:48.446 WARN (qtp33524623-4275) [c:taoke s:shard2 r:core_node4
x:taoke_shard2_replica1] o.e.j.u.t.QueuedThreadPool
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thr