…take care of copying down the index and putting the new replicas into service.
Best,
Erick
On Mon, Aug 20, 2018 at 2:51 PM, Sushant Vengurlekar wrote:
Thanks for the reply, Erick.
I have one shard per replica but I have 3 replicas on the SolrCloud. So how do I update from the standalone Solr core to these 3 replicas?
On Mon, Aug 20, 2018 at 2:43 PM Erick Erickson wrote:
Assuming that your stand-alone indexes are a single core, start with a single-replica SolrCloud collection, then ADDREPLICA to build out the collection.
There's no way to say "reconcile this arbitrary index I built stand-alone with my SolrCloud collection", so it's all manual.
Best,
Erick
On Mon, Aug 20, 2018 at 12:38 PM, Sushant Vengurlekar wrote:
I have a question regarding updating the indexes on SolrCloud with indexes from a standalone Solr server. We have a SolrCloud which is running. We have a couple of cores on that standalone Solr instance which are also present on the SolrCloud as collections. I need to bring in updated indexes from
On 8/18/2018 11:03 PM, Zahra Aminolroaya wrote:
Thanks Shawn. For Solr mode I should include all dependencies in lib, so I
thought for Solr Cloud mode I should include the dependencies too.
Even for standalone mode, you don't want those dependency jars in the
lib folder. The same problems can
The error is as follows:
Caused by: org.apache.solr.common.SolrException: Unable to reload core
[textd_shard1_replica2]
at org.apache.solr.cor
The error is as follows:
java.util.concurrent.ExecutionException:
org.apache.solr.common.SolrException: Unable to create core
[gettingstarted_shard1_repl
On 8/18/2018 4:16 AM, Zahra Aminolroaya wrote:
My plugin works correctly in standalone Solr. Now I want to have my plugin in SolrCloud mode.
I have a main jar file named "norm" and other jar files that "norm" depends on:
"lucene-analyzers-common-6.6.1","lucene-core-6.6.1","slf4j-api-1.7.7
Any pointer would be much appreciated.
Thanks.
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
https://issues.apache.org/jira/browse/SOLR-12584
Thanks
From: Sushant Vengurlekar
Sent: Wednesday, 15 August 2018 10:39 PM
To: solr-user@lucene.apache.org
Subject: Authentication between solr-exporter and solrcloud
I have followed this guide for monitoring the solrcloud
https://lucene.apache.org/solr/guide/7_3/monitoring-solr-with-prometheus-and-grafana.html
I have basic authentication enabled for the SolrCloud. How do I configure the solr-exporter to authenticate with the set username and password?
Thanks
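For what it's worth, a hedged sketch of the workaround that has been used for other SolrJ-based clients: pass SolrJ's preemptive basic-auth system properties to the exporter JVM. Whether solr-exporter actually honors these in 7.3 is exactly the open question in SOLR-12584; the user name, password, port, ZK address and paths below are all placeholders.

```shell
# Assumption: solr-exporter picks up SolrJ client system properties.
# Credentials and paths here are placeholders, not real values.
AUTH_OPTS="-Dbasicauth=solr-admin:secret \
-Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactory"
# The exporter would then be launched with these options, e.g.:
echo "JAVA_OPTS=\"$AUTH_OPTS\" ./bin/solr-exporter -p 9854 -z zk1:2181/solr -f ./conf/solr-exporter-config.xml"
```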
Hi,
Yeah if you look above I have stated the same jira. I see your question on
3DCs with Active-Active scenario, will respond there.
Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidworks.com
Twitter http://twitter.com/lucidworks
LinkedIn: https://www.linkedin.com/in/sarkaramri
Hello..
Is there any restriction from the CDCR standpoint to have 3 DCs in an ACTIVE-ACTIVE-ACTIVE scenario?
We are seeing the following:
DC1: We are indexing 1000 Documents.
DC2: Random 300 (some number) documents are getting forwarded here from DC1.
(Not 1000)
DC3: Random 700 (some number
And I was thinking about this one:
https://issues.apache.org/jira/browse/SOLR-11959.
I am following the workaround mentioned over here related to auth.
https://stackoverflow.com/questions/48790621/solr-cdcr-doesnt-work-if-the-authentication-is-enabled.
My question is: why are all documents not getting forwarded? Is there something else that we are missing here?
Also wondering
To the concerned,
I am afraid that authentication is not supported between Solr clusters: https://issues.apache.org/jira/browse/SOLR-11959.
I followed the exact steps you suggested. Now I am not seeing that error.
INFO - 2018-08-10 15:23:58.159; [c:collection_name s:shard2 r:core_node13
x:collection_name_shard2_replica_n10]
org.apache.solr.handler.CdcrReplicator; Forwarded 10 updates to target
collection_name
However, in destinatio
Honestly, any of them in that case. Please follow these steps:
1. Stop CDCR on cluster-1
2. Stop CDCR on cluster-2
Both the above steps are critical.
3. Shut down all nodes of cluster-1
4. Shut down all nodes of cluster-2
5. Start all nodes at cluster-1
6. Start all nodes at cluster-2
7. St
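The STOP/START steps above go through the CDCR request handler on each collection. A minimal sketch, assuming a collection named "mycoll" and a node at localhost:8983 (both placeholders):

```shell
# CDCR is controlled per collection via its /cdcr handler.
SOLR="http://localhost:8983/solr"
STOP_URL="$SOLR/mycoll/cdcr?action=STOP"
START_URL="$SOLR/mycoll/cdcr?action=START"
echo "curl '$STOP_URL'   # run against cluster-1, then cluster-2, before shutdown"
echo "curl '$START_URL'  # run once all nodes are back up"
```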
Really appreciate your response.
I saw this information in some of your earlier posts related to CDCR. We are using our cloud cluster in an Active/Active setting with bi-directional CDCR.
In that case, which one should we start first?
To the concerned,
WARN : [c:collection_name s:shard2 r:core_node11
> x:collection_name_shard2_replica_n8]
> org.apache.solr.handler.CdcrRequestHandler; The log reader for target
> collection collection_name is not initialised @ collection_name:shard2
>
This means the source cluster was started first
Any pointers?
Hello,
Thanks for reading my post!
We have the following environment setup:
SOLRCloud
SOLR Version: 7.3.1
9 Nodes per DC
2 DCs
2 Separate ZK ensemble (one for each SOLR DC)
CDCR bidirectional enabled.
2 Collections.
3 shards per collection, 3 replication factors.
Basic auth enabled. (Aware of
Thanks Erick
This is for future. I am exploring to use a custom sharding scheme (which
will require modification in Solr code) together with the benefits of
SolrCloud.
Thanks
Nawab
On Tue, Jul 31, 2018 at 4:51 PM, Erick Erickson wrote:
Sure, just use the Collections API ADDREPLICA command to add as many
replicas for specific shards as you want. There's no way to specify
that at creation time though.
Some of the new autoscaling can do this automatically I believe.
I have to ask what it is about your collection that makes this true.
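The ADDREPLICA call Erick mentions can be sketched as below; the collection, shard and node names are placeholders:

```shell
# Add one replica of shard1 on a specific node (names are hypothetical).
SOLR="http://localhost:8983/solr"
ADD_URL="$SOLR/admin/collections?action=ADDREPLICA&collection=mycoll&shard=shard1&node=host2:8983_solr"
echo "curl '$ADD_URL'"
```

Repeat per shard that needs extra capacity; omitting the node parameter lets Solr choose placement itself.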
Hi,
I am looking at Solr 7.x and couldn't find an answer in the documentation. Is it possible to specify a different replicationFactor for different shards in the same collection? E.g. if a certain shard is receiving more queries than the rest of the collection, I would like to add more replicas for it to h
@Shawn Heisey,
My client is facing the same issue. However, I have not worked with the ZkCli script. Could you please help me with the steps? Where can I find the ZkCli script associated with ZooKeeper, and what are the exact commands to run from each node?
…clusters connected by a network with latency.
Best,
Erick
On Mon, Jul 23, 2018 at 3:01 AM, Pavel Micka wrote:
Hi,
We are discussing advantages of SolrCloud Replication and Cross Data Center Replication (CDCR). In the CDCR docs, it is written that "The SolrCloud architecture is not particularly well suited for situations where a single SolrCloud cluster consists of nodes in separated data clusters connected by a network with latency."
Hi Vincenzo,
I used the repo, but I am encountering the following hurdles and trying to solve them.
I increased the replicas to 3 for both Solr and ZooKeeper. I don't want to expose the NodePort directly for inter-node communication, hence I created a headless service and used the FQDN of the solr-ss-0 pod for the
Hi Sundar,
maybe you already know, but I've developed a small project around SolrCloud with Kubernetes that explains how to start a small cluster (1 Solr node and 1 ZooKeeper node):
https://github.com/freedev/solrcloud-zookeeper-kubernetes
Best regards,
Vincenzo
On Wed, Jul 11, 2018 at
Hi Erick,
Thank you for your suggestions.
Regards,
Sharif
Thanks a lot Shawn for your reply. I'm using SolrCloud v7.3 with the schemaless approach. I add documents and new fields are automatically added to the managed-schema file. I've tried it several times and it stops at around 13,500 fields.
If I try to add fields using the SolrCloud API I can add
Thanks a lot Shawn for your detailed reply.
Hi,
We are interested in setting up SolrCloud in Kubernetes, is there any
documentation available for similar setup?
Thanks!
Sundar Sivashunmugam
On 7/11/2018 2:05 AM, Sharif Shahriar wrote:
Is there any limitation on how many fields can be added in a SolrCloud collection configset?
After adding 24,520 fields, when I want to add new fields, it shows:
-"Error persisting managed schema at /configs/*/managed-schema"
-"zkClient has disconnected"
Thank you,
Sharif
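One commonly cited culprit for this pattern, offered here as an assumption rather than a confirmed diagnosis: the managed-schema file is stored as a single ZooKeeper znode, and ZooKeeper's default jute.maxbuffer is about 1 MB, which tens of thousands of field definitions can exceed. Raising it would look roughly like this (the 10 MB value is an arbitrary example):

```shell
# Assumption: the schema znode exceeds ZooKeeper's ~1 MB default limit.
# jute.maxbuffer must match on every ZK server AND every Solr JVM.
JUTE="-Djute.maxbuffer=10485760"
echo "ZK:   add '$JUTE' to each server's JVMFLAGS"
echo "Solr: SOLR_OPTS=\"\$SOLR_OPTS $JUTE\" in solr.in.sh"
```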
Hi Erick,
Setting the size parameter to 0 in solrconfig.xml can stop document caching,
but it cannot control how much memory it will take initially when creating a
collection, right?
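For reference, the size-0 setting being discussed looks like this in solrconfig.xml; a sketch based on the stock cache configuration:

```xml
<!-- Disables document caching for this core. It does not change the
     baseline per-core memory Solr allocates at collection creation. -->
<documentCache class="solr.LRUCache" size="0" initialSize="0" autowarmCount="0"/>
```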
Does it need to be a SolrCloud? If it is just replication, maybe it can
just be double indexed from the client. Or old style replication. And then
use LotsOfCores autoloading.
Regards,
Alex
On Wed, Jun 27, 2018, 8:46 AM Shawn Heisey, wrote:
> On 6/27/2018 5:10 AM, Sharif Shahrair wr
…monitor latency and turn on caches for collections when they reach some threshold.
How can this be done? Is there any configuration to turn off caches completely in SolrCloud?
…will be forwarded internally to a Solr instance which has that collection.
ZooKeeper is used by Solr's Java client to look up the list of Solr nodes having the collection being queried. But if you are using PHP then you can
bq. I am trying to see how sharding can be employed to
improve the query performance by adding the route to a shard based on a
field in schema.xml.
This is actually straightforward, just create the collections with
implicit routing.
See the collections API for CREATE and the "Document Routing" se
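A sketch of the implicit-routing setup Erick describes; the collection, shard, field and config names are placeholders:

```shell
# Create a collection with implicit (manual) routing: you name the shards
# and a router.field whose value picks the target shard per document.
SOLR="http://localhost:8983/solr"
CREATE="$SOLR/admin/collections?action=CREATE&name=mycoll&router.name=implicit&shards=shardA,shardB&router.field=myfield&collection.configName=myconf"
echo "curl '$CREATE'"
# Each document then carries the shard name in myfield, or you can pass
# _route_=shardA on update/query requests to pin a shard.
```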
We have two collections which are 21G and constantly growing. The index on
one of them is also 12G. I am trying to see how sharding can be employed to
improve the query performance by adding the route to a shard based on a
field in schema.xml. So I am trying to figure out how to split the
collectio
This seems like an "XY problem". _Why_ do you want to do this?
Has your collection outgrown one shard and you feel you have to
split it? Sharding should only be used when you can't host your
entire collection on a single replica and still get adequate performance.
When you do reach that point, the
I want to split a collection based on one field. How do I do it, and then query based on that?
Ex: collection1. Field to split on: col1
Thank you
We use an AWS ALB for all of our Solr clusters. One is 40 instances.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
On Jun 29, 2018, at 8:33 PM, Sushant Vengurlekar wrote:
What are some of the suggested load balancers for SolrCloud? Can AWS ELB be used for load balancing?
On Fri, Jun 29, 2018 at 8:04 PM, Erick Erickson wrote:
> In your setup, the load balancer prevents single points of failure.
>
> Since you're pinging a URL, what happens if that n
…probably keep a list of Solr nodes in configuration and randomly pick one. A better implementation would be to set up a load balancer, put all Solr nodes behind it, and query the load balancer URL in your application.
On Sat, Jun 30, 2018 at 7:31 AM Sushant Vengurlekar <svengurle...@curvolabs.com> wrote:
I have a question regarding querying in SolrCloud.
I am working on PHP code to query SolrCloud for search results. Do I send the query to ZooKeeper or to a particular Solr node? How does the querying process work in general?
Thank you
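To make the answer concrete: you never query ZooKeeper directly. Any Solr node can take the request and route it internally to a node hosting the collection. A minimal sketch, with the host and collection name as placeholders:

```shell
# Any node in the cluster works; Solr forwards the query internally.
QUERY="http://any-solr-node:8983/solr/mycoll/select?q=*:*&wt=json"
echo "curl '$QUERY'"
```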
1. When we are creating collections, each collection is taking about 8 MB to 12 MB of memory when there is no document yet. Is there any way to configure SolrCloud so that it takes less memory for each collection initially (like 1 MB per collection)? Then we would be able to create 1500 collections using about 3 GB of machine RAM.
Hi guys,
We are in a use-case where we need to create a large number of collections (1000 to 1500) in a SolrCloud. Most of the collections will have a very limited number of documents (100 to 1000), and some are even empty. We are using a single shard and 2 replicas. For each replica we are using
On 6/24/2018 7:38 PM, 苗海泉 wrote:
Hi,
With such a big cluster a lot of things can go wrong and it is hard to give any
answer without looking into it more and understanding your model. I assume that
you are monitoring your system (both Solr/ZK and components that index/query)
so it should be the first thing to look at and see if
Hello, everyone, we encountered two solr problems and hoped to get help.
Our data volume is very large, 24.5TB a day, and the number of records is
110 billion. We originally used 49 solr nodes. Because of insufficient
storage, we expanded to 100. For a solr cluster composed of multiple
machines, we
schema.xml
solrconfig.xml
That is not what was recommended.
Three things to note about why your attempt didn't work: 1) ZooKeeper
does not support ".." in znode path names. 2) Only information in ZK is
used for SolrCloud configs, never anything on your
Hi Erick,
Based on your suggestion I moved the helpers to be under configsets/conf, so my new folder structure looks like:
-configsets
  - conf
      helpers
          synonyms_vendors.txt
  - collection1
      -conf
          schema.xml
          solrconfig.xml
Erick's suggestion of creating a collection with one shard and one
replica, then splitting the shard and adding replicas is one solution.
If properly executed, it can work very well.
Another possibility is to create the collection with the num
After verifying your setup, use the Collections SPLITSHARD command.
Best,
Erick
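A hedged sketch of that SPLITSHARD call; the collection and shard names are placeholders:

```shell
# Split shard1 of "mycoll" into two sub-shards; the parent shard stays
# until the split completes, then becomes inactive.
SPLIT="http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycoll&shard=shard1"
echo "curl '$SPLIT'"
```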
Configsets are presumed to contain any auxiliary files under them, not
a relative path _on Zookeeper_.
So try putting your synonyms_vendors.txt in
configsets/conf/helpers/synonyms_vendors.txt, then
reference it as helpers/synonyms_vendors.txt.
Best,
Erick
On Tue, Jun 19, 2018 at 10:28 AM, Sushant Vengurlekar wrote:
I created a solr cloud collection with 2 shards and a replication factor of
2. How can I load data into this collection which I have currently stored
in a core on a standalone solr. I used the conf from this core on
standalone solr to create the collection on the solrcloud
Thank you
I have this line in my schema.xml:
synonyms="../../helpers/synonyms_vendors.txt"
My current folder structure is:
solr
  - helpers
      synonyms_vendors.txt
  -configsets
    - collection1
        -conf
            schema.xml
            solrconfig.xml
I get the below error wh
On 6/4/2018 5:36 AM, Greenhorn Techie wrote:
1. In the SolrCloud, as a single host can have information about multiple
shards (either leader or replica), how does the backup API handle the
underlying data copy? I presume it will simply copy the data across ALL the
shards (both leader and
Thanks Shawn for your detailed reply. It has helped to better my
understanding. Below is my summarised understanding.
In a SolrCloud setup with version less than 6.1, there is no ‘elegant’ way
of handling collection backups and restore. Instead, have to use the manual
backup and restore APIs
…directly?
If you want fault tolerance when not using a cloud-aware client like SolrJ, you will need to set up a proxy or load balancer that sits in front of your SolrCloud install.
ZK does not use HTTP. It is a proprietary protocol. Your browser won't connect to it.
Thanks,
Shawn
Hi,
I am running Solr in Cloud mode with a fault-tolerant ZK setup. I understand that we can use CloudSolrClient, and it will automatically adjust when servers go down.
However, I would like to check if there is a way for this to work if we are using a URL from the browser. Is there any URL that
…suggest you download that version in PDF format if you want a full reference.
A followup:
I suspect that if you try to use the restore functionality on the
replication handler and have multiple shard replicas, that SolrCloud
would not replicate things properly. I could be wrong about that
On 6/1/2018 7:23 AM, Greenhorn Techie wrote:
Hi,
We are running SolrCloud with version 5.5.4. As I understand, Solr
Collection Backup and Restore API are only supported from version 6
onwards. So wondering what is the best mechanism to get our collections
backed-up on older Solr version.
When I ran backup command on a particular node (curl
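On pre-6.1 versions the backup goes through each core's replication handler, one shard replica at a time. A sketch, with the core name and backup location as placeholders:

```shell
# Per-core backup via the replication handler; repeat for each shard's
# leader core. Core name, location and snapshot name are hypothetical.
BACKUP="http://localhost:8983/solr/mycoll_shard1_replica1/replication?command=backup&location=/backups&name=mycoll-shard1"
echo "curl '$BACKUP'"
```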
Hi,
On a node, I accidentally changed the SOLR_HOST value from uppercase to lowercase and restarted the node. After I fixed the error, I restarted the node again, but the node name in lowercase is still visible as "gone". How do I definitively remove a gone node from the SolrCloud graph?
Hi All,
I'm looking for additional information on how to configure an encrypted
password for the DIH Configuration File, when using solrcloud:
https://lucene.apache.org/solr/guide/6_6/uploading-structured-data-store-data-with-the-data-import-handler.html#configuring-the-dih-configuration-fil
I would like to change the default fields for the query in Solr using the Config API:
https://github.com/apache/lucene-solr/blob/master/solr/solr-ref-guide/src/config-api.adoc
I have found examples of changing the default facets for the search handler:
https://github.com/apache/lucene-solr/tree/master/solr/e
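A sketch of one way to change the default field via the Config API's update-requesthandler command. Note it replaces the whole handler definition, so the class and any other defaults must be restated; the collection, handler and field names below are placeholders:

```shell
# Redefine /select with a new default field (df); names are hypothetical.
SOLR="http://localhost:8983/solr/mycoll/config"
PAYLOAD='{
  "update-requesthandler": {
    "name": "/select",
    "class": "solr.SearchHandler",
    "defaults": { "df": "title", "rows": 10 }
  }
}'
echo "curl -X POST -H 'Content-type:application/json' $SOLR -d '$PAYLOAD'"
```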
…it only tells you how many replicas received the live update. So if the value is less than what you wanted, it is up to you to retry the update later.
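The mechanism being described is the min_rf request parameter: send it with the update and read the achieved "rf" back from the response. A sketch, with the collection name and document as placeholders; note Solr does not fail the update itself when rf falls short:

```shell
# Ask Solr to report the achieved replication factor for this update.
UPDATE="http://localhost:8983/solr/mycoll/update?min_rf=2&commit=true"
echo "curl '$UPDATE' -H 'Content-type:application/json' -d '[{\"id\":\"1\"}]'"
echo "# if the returned rf < 2, re-send the update later"
```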
2> Yes, the client will get a failure indication.
Best,
Erick
On Wed, May 2, 2018 at 3:03 AM, Greenhorn Techie wrote:
Hi,
Good Morning!!
In the case of a SolrCloud setup with sharding and replication in place, when a document is sent for indexing, what happens when only the shard leader has indexed the document but the replicas failed, for whatever reason? Will the document be resent by the leader to the
"Is it possible to configure a collection such that the collection data is only stored on few nodes in the SolrCloud setup?"
Yes. There are "node placement rules", but also you can create a collection with a createNodeSet that specifies the nodes that the replicas are placed on.
"If this is possible, at the end of each month, what is the approach to be t
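A sketch of the createNodeSet option mentioned above; the collection and node names are placeholders:

```shell
# Pin a monthly collection's replicas to two specific nodes.
CREATE="http://localhost:8983/solr/admin/collections?action=CREATE&name=logs_2018_08&numShards=1&replicationFactor=2&createNodeSet=host1:8983_solr,host2:8983_solr"
echo "curl '$CREATE'"
```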
Hi,
We are building a SolrCloud setup which will index time-series data. Being time-series data with write-once semantics, we are planning to have multiple collections, i.e. one collection per month. As per our use case, end users should be able to query across the last 12 months worth of data, which
On Apr 26, 2018, at 9:43 AM, Tom Peters wrote:
Is there a recommended way of managing external files with SolrCloud? At first glance it appears that I would need to manually manage the placement of the external_.txt file in each shard's data directory. Is ther