For quite some time, collection information has been stored in
individual state.json nodes, not clusterstate.json. See the
MIGRATESTATEFORMAT Collections API call.
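For reference, the call Erick mentions is a single Collections API request per collection. A minimal sketch (the host, port, and collection name are placeholders for your own cluster; this assumes a running SolrCloud node):

```shell
# Hedged sketch: move the collection "mycollection" from the shared
# clusterstate.json (state format 1) into its own state.json (format 2).
# Adjust host/port/collection to match your deployment.
curl "http://localhost:8983/solr/admin/collections?action=MIGRATESTATEFORMAT&collection=mycollection"
```

Repeating this for each collection empties out the shared clusterstate.json, so it no longer grows toward ZooKeeper's 1 MB znode limit.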
How many collections and replicas do you have all told?
Best,
Erick
On Sun, Jun 24, 2018 at 6:43 PM, 苗海泉 wrote:
We found a problem in Solr 6.0: after Solr restarts and recovers data from
ZooKeeper, the collection configuration gets centralized in the
clusterstate.json file. With a somewhat large number of collections and
replicas it is very easy to exceed 1 MB, which has caused a series of
problems for us.
We ended up using a simple workaround: copy the directory directly to a node
that had not been deleted, then delete the Solr data from ZooKeeper and
restart to achieve the same effect.
Thank you for your suggestion.
Hello everyone, we encountered two Solr problems and hope to get help.
Our data volume is very large: 24.5 TB a day, 110 billion records. We
originally used 49 Solr nodes; because of insufficient storage, we expanded
to 100. For a Solr cluster composed of multiple machines,
On 6/24/2018 8:27 AM, Vivek Singh wrote:
I am trying to use LTR (learning to rank) in Solr. I uploaded the models and
features and restarted the server, following the steps mentioned in the
documentation, but I am getting this error:
- *techproducts:*
bq. The lucene-core7 had some useful functions like incrementToken which I
could not find in previous versions; because of that I used this version.
Do not do this. You simply cannot mix jar versions just because there is a
function in one version that you want to use. The support for that
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Failed to create new
Hi,
I mean you should use Maven, which, starting from a version number
(e.g. 6.6.1), would pick up all the correct dependencies you need for
developing the plugin.
Yes, the "top" libraries (e.g. Solr and Lucene) should have the same
version, but on top of that the plugin could require some other direct
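As a sketch of what Andrea is describing, a plugin's pom.xml can pin the Solr and Lucene artifacts to one shared version property (the group/artifact ids below are the real Maven coordinates; the rest is an illustrative fragment, not a complete pom):

```xml
<!-- Illustrative fragment: keep Solr and Lucene aligned on one version -->
<properties>
  <solr.version>6.6.1</solr.version>
</properties>
<dependencies>
  <dependency>
    <groupId>org.apache.solr</groupId>
    <artifactId>solr-core</artifactId>
    <version>${solr.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-core</artifactId>
    <version>${solr.version}</version>
    <scope>provided</scope>
  </dependency>
</dependencies>
```

Solr and Lucene releases share version numbers, so a single property keeps them from drifting apart; `provided` scope keeps the jars out of the plugin artifact, since the running Solr already supplies them.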
Hi Sujatha,
Did I get it right that you are deleting the same documents that will be
updated afterward? If that's the case, then you can simply skip the delete
and just send the updated version of each document. Solr (Lucene) does not
have an in-place delete; it just flags the document as deleted. Updating a document
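To illustrate the point above, re-indexing a document with the same uniqueKey replaces the old version automatically, with no explicit delete. A minimal sketch (the collection name, `id` uniqueKey, and field names are placeholders; assumes a running Solr on localhost):

```shell
# Hedged sketch: send the new version of the document; Solr overwrites
# the existing document that has the same uniqueKey ("id" here).
curl -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/mycollection/update?commit=true' \
  -d '[{"id":"doc-42","title_t":"updated title"}]'
```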
Thanks Andrea. Do you mean all of my jar file versions should be 6.6.1?
The lucene-core 7 jar had some useful functions like incrementToken which I
could not find in previous versions; because of that I used this version.
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Hi Zahra,
I think your guess is right: I see some mess in the library versions.
If I got you right:
* the target platform is Solr 6.6.1
* the compile classpath includes solr-core 4.1.0, 1.4.0 (!) and Lucene
7.4.0?
If that is correct, with a ClassCastException you're just scraping the
I am using Solr 6.6.1. I want to write my own analyzer for the field type
"text_general" in the schema. The field type in the schema is as follows:
When I test the filter in Java, everything is alright; however, when I start
my Solr I get the following error:
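For context, a custom filter is usually wired into a field type in the schema like this (a generic sketch, not the poster's actual definition; `com.example.MyFilterFactory` is a placeholder class name):

```xml
<!-- Illustrative sketch: register a custom filter factory on a field type.
     com.example.MyFilterFactory is hypothetical. -->
<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="com.example.MyFilterFactory"/>
  </analyzer>
</fieldType>
```

The jar containing the factory must also be on Solr's classpath (e.g. via a `<lib>` directive in solrconfig.xml) and compiled against the same Lucene version Solr runs; a ClassCastException at startup is a common symptom of a version mismatch, which ties back to the versioning discussion above.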