Does anyone have a script that checks if solr is running and then starts it
if it isn't running? Occasionally my solr stops running even if there has
been no Apache restart. I haven't been able to determine the root cause,
so the next best thing might be to check every 15 minutes or so if it's
Hi,
Looking for some advice; I've sent a few questions on CDCR over the last
couple of days. I just want to see whether this is expected behavior from
Solr or not.
When a document is added to Site A, it is supposed to replicate across;
however, on the statistics page I see the following:
Site A
Or is it not much overhead to give the command to start solr if it is
already running? Maybe it's not necessary to check if it's running? Is
there any downside to giving the start command every 15 minutes or so
whether it is running or not?
Thanks.
On Thu, Jun 4, 2020 at 12:36 PM Ryan W
Fixing the root cause would certainly be the best thing. However, if you
still want to tread that path, you can do a health check on the admin
endpoint and start Solr when it fails. A simple cron job would do the trick.
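A minimal sketch of such a cron healthcheck, assuming the default port 8983 and an install under /opt/solr (both are assumptions; adjust to your setup):

```shell
#!/bin/sh
# Healthcheck sketch: port 8983 and /opt/solr are assumptions -- adjust them.
# Run it from cron, e.g.:  */15 * * * * /path/to/solr-healthcheck.sh

SOLR_URL="${SOLR_URL:-http://localhost:8983/solr/admin/info/system}"
SOLR_BIN="${SOLR_BIN:-/opt/solr/bin/solr}"

solr_is_up() {
  # -s quiet, -f treat HTTP errors as failure, and cap the wait so
  # cron jobs don't pile up when the node hangs instead of dying
  curl -sf --max-time 10 -o /dev/null "$1"
}

if solr_is_up "$SOLR_URL"; then
  echo "solr is responding"
elif [ -x "$SOLR_BIN" ]; then
  echo "solr is not responding, starting it"
  "$SOLR_BIN" start
else
  echo "solr is not responding and $SOLR_BIN was not found"
fi
```

As for starting it blindly every 15 minutes: `bin/solr start` generally refuses to launch a second instance on a port that is already in use, so the main cost of skipping the check is noise in the logs rather than a duplicate process.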
On Thu, 4 Jun 2020 at 10:09 PM, Ryan W wrote:
> Or is it not much overhead to give the command to
I think you should not do it in the Jetty xml
Follow the official reference guide.
It should be in solr.in.sh
https://lucene.apache.org/solr/guide/8_4/enabling-ssl.html
> Am 04.06.2020 um 06:48 schrieb yaswanth kumar :
>
> Hi Franke,
>
> I suspect its because of the certificate encryption
Happened again today. Solr stopped running. Apache hasn't stopped in 10
days, so this is not due to a server reboot.
Solr is not being run with the oom-killer. And when I grep for ERROR in
the logs, there is nothing from today.
On Mon, May 18, 2020 at 3:15 PM James Greene
wrote:
> I usually
I haven't made any changes to the Jetty XML; I am just using what comes
with the Solr package, and doing it in solr.in.sh, but I am still seeing
the same issue.
Thanks,
On Thu, Jun 4, 2020 at 12:23 PM Jörn Franke wrote:
> I think you should not do it in the Jetty xml
> Follow the official
You need to separate the keystore and the truststore.
I would leave the stores in their original format and provide the type in
solr.in.sh.
There is no need to convert them to JKS; PKCS12 is perfectly supported.
> Am 04.06.2020 um 06:48 schrieb yaswanth kumar :
>
> Hi Franke,
>
> I suspect its
If the keystore and/or truststore is encrypted, you need to provide the
password in solr.in.sh.
> Am 04.06.2020 um 18:38 schrieb yaswanth kumar :
>
> I haven't done any changes on jetty xml , I am just using what it comes
> with the solr package. just doing it in solr.in.sh but I am still seeing
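Putting the advice in this thread together, the relevant solr.in.sh section might look like the following sketch (file paths and passwords are placeholders; the variable names are the ones used in the 8.x SSL guide linked above):

```shell
# solr.in.sh -- SSL settings sketch; the paths and passwords below are
# placeholders, substitute your own.
SOLR_SSL_ENABLED=true
SOLR_SSL_KEY_STORE=/etc/solr/solr-ssl.keystore.p12
SOLR_SSL_KEY_STORE_PASSWORD=keystore-secret
SOLR_SSL_KEY_STORE_TYPE=PKCS12
# Separate truststore, also left in its original PKCS12 format
SOLR_SSL_TRUST_STORE=/etc/solr/solr-ssl.truststore.p12
SOLR_SSL_TRUST_STORE_PASSWORD=truststore-secret
SOLR_SSL_TRUST_STORE_TYPE=PKCS12
```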
Erick,
thanks a lot, very clear.
Reinaldo
On Thu, Jun 4, 2020 at 8:37 PM Erick Erickson
wrote:
> Close. Zookeeper is not involved in routing requests. Each Solr node
> queries Zookeeper to get the topology of the cluster, and thereafter
> Zookeeper will notify each node when the topology
Erick,
thanks for the reply.
Your last line puzzled me a bit. You wrote
*"The theory is that all the top-level requests shouldn’t be handled by the
same Solr instance if a client is directly using the http address of a
single node in the cluster for all requests."*
We are using 2 machines (2
Hi Jigar,
Is that a numeric field or not? By the way, have you checked the terms.sort
parameter or json facet sort parameter?
Kind Regards,
Furkan KAMACI
On Mon, Jun 1, 2020 at 11:37 PM Jigar Gajjar
wrote:
> Hello,
> is it possible to retrieve index terms in the descending order using
>
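For what it's worth, the Terms component's terms.sort parameter only accepts "count" or "index" (ascending), while a JSON facet accepts a sort direction. A hedged sketch, where "mycoll" and "title" are placeholder collection and field names:

```shell
# Sketch only: "mycoll" and "title" are placeholder names.
# terms.sort (Terms component) accepts "count" or "index" -- no descending.
# The JSON Facet API does accept a direction on its sort:
FACET='{ "t": { "type": "terms", "field": "title", "sort": "index desc", "limit": 10 } }'

curl -s "http://localhost:8983/solr/mycoll/select" \
  --data-urlencode 'q=*:*' \
  --data-urlencode 'rows=0' \
  --data-urlencode "json.facet=$FACET"
```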
Close. Zookeeper is not involved in routing requests. Each Solr node
queries Zookeeper to get the topology of the cluster, and thereafter
Zookeeper will notify each node when the topology changes, i.e.
a node goes up or down, a replica goes into recovery etc. Zookeeper
does _not_ get involved in
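The topology that each node caches from Zookeeper can also be inspected from the outside via the Collections API; any live node will answer, and the host:port below is an assumption:

```shell
# Builds the CLUSTERSTATUS URL for a given host:port; "localhost:8983"
# below is an assumption -- use any live node in the cluster.
clusterstatus_url() {
  echo "http://$1/solr/admin/collections?action=CLUSTERSTATUS&wt=json"
}

# Lists collections, shards, replicas and their live/down state:
curl -s "$(clusterstatus_url localhost:8983)"
```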
Hello,
We are on Solr 8.4.1 in standalone server mode. We have a core with
497,767,038 records indexed. It took around 32 hours to load the data
through DIH. The disk occupancy is shown below:
82G /var/solr/data//data/index
When I restarted the Solr instance and went to this core to query on
On 6/4/2020 9:51 PM, Srinivas Kashyap wrote:
> We are on solr 8.4.1 and In standalone server mode. We have a core with
> 497,767,038 Records indexed. It took around 32Hours to load data through DIH.
> The disk occupancy is shown below:
> 82G /var/solr/data//data/index
> When I restarted solr
Hi Nicolas,
Commit happens automatically at 100k documents; we don't commit explicitly.
We didn't limit the number of segments. There are 35+ segments in each core.
Unrelated to the question, I would like to know whether we can limit the
number of segments in a core. I tried it in the past but
Hi Walter
The plan is that we'll have 3 Solr clusters using the same ZooKeepers in Data
Centre A. Then each cluster will replicate across (using bi-directional CDCR)
to Data Centre B.
The purpose of CDCR is for DR, and also because we switch between Data Centres
on a regular basis, so having
The real questions are:
* how often do you commit (either explicitly or automatically)?
* how many segments do you allow? If you only allow 1 segment,
then that whole segment is recreated using the old documents and the updates.
And yes, that requires reading the old segment.
It is
I noticed that while indexing, when a commit happens, there is a high disk
read by Solr. The problem is that it impacts search performance when the
index is loaded from disk for a query, as the disk read speed is not
very good and the whole index is not cached in RAM.
When no