Hello everyone, we have run into two Solr problems and hope to get some help.
Our data volume is very large: 24.5 TB per day, about 110 billion records.
We originally ran 49 Solr nodes; because storage was insufficient, we
expanded to 100. After the expansion, we found that the overall indexing
performance of the 60-node SolrCloud is the same as that of the original
49-node cluster. The cluster now averages about 1.5 million records per
second. Why is that, and how can we optimize it?
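For context, a quick back-of-the-envelope check (assuming the 1.5 million/second figure refers to records indexed) shows the daily record count roughly matches the observed rate:

```python
# Sanity check: does 110 billion records/day match ~1.5 million records/sec?
records_per_day = 110e9
seconds_per_day = 24 * 3600

avg_rate = records_per_day / seconds_per_day
print(f"average sustained rate: {avg_rate / 1e6:.2f} million records/sec")
# ~1.27 million records/sec, so 1.5 million/sec peak is consistent
```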

The second problem: a Solr instance can only specify a single solrHome, but
our disk is split into two directories. Alternatively, Solr can store its
index on HDFS, but with HDFS the overall indexing performance does not meet
our requirements. What should we do? Thank you for your attention.
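One workaround we are considering (paths, ports, and ZooKeeper addresses below are only examples) is to run two Solr instances on the same host, each with its own solr home on one of the two disk directories:

```shell
# Sketch of a possible workaround, not our current setup:
# run two SolrCloud instances per host, one solr home per disk directory.
SOLR_BIN=/opt/solr/bin/solr

# Instance 1 on the first disk
$SOLR_BIN start -cloud -p 8983 -s /data1/solr/home -z zk1:2181,zk2:2181,zk3:2181/solr

# Instance 2 on the second disk, on a different port
$SOLR_BIN start -cloud -p 8984 -s /data2/solr/home -z zk1:2181,zk2:2181,zk3:2181/solr
```

Would this be a reasonable approach, or is there a supported way to spread one solrHome across two directories?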