Any pointers here would be appreciated :)
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Hi
I have a master slave architecture setup currently. I'm evaluating
SolrCloud.
I've read through most of the documentation, but what I can't seem to find
is the preferred way to autoscale the cluster.
In the master-slave architecture, we have an autoscaling policy (CPU based)
configured on
Thank you! I just filed the bug in Jira:
https://issues.apache.org/jira/browse/SOLR-15170
About the workaround you mentioned: we ran a quick test on one server and
it apparently worked, but we did not check it properly in a cluster (we
decided that it is better not to go with this in production).
: Of course, here is the full stack trace (collection 'techproducts' with
: just one core to make it easier):
Ah yeah ... see -- this looks like a mistake introduced at some point...
: Caused by: org.apache.solr.core.SolrResourceNotFoundException: Can't
: find resource 'elevate.xml' in
Hi,
I need to have the elevate.xml file updated frequently and I was wondering
if it is possible to put this file in the dataDir folder when using Solr
Cloud. I know that this is possible in standalone mode, and I haven't
seen in the documentation [1] that it cannot be done in Cloud.
I am
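For context, in SolrCloud the elevation file is normally read from the configset in ZooKeeper rather than from dataDir, so the usual setup is to declare the component in solrconfig.xml and keep elevate.xml next to it in the configset. A minimal sketch (component and handler names are only illustrative):

```xml
<!-- solrconfig.xml: query elevation reading elevate.xml from the configset -->
<searchComponent name="elevator" class="solr.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
</searchComponent>

<requestHandler name="/elevate" class="solr.SearchHandler">
  <arr name="last-components">
    <str>elevator</str>
  </arr>
</requestHandler>
```

With this layout, a frequent update means re-uploading the configset to ZooKeeper and reloading the collection, rather than dropping a new file into dataDir.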
> From: Timothy Potter
> Sent: 08 February 2021 15:44
> To: solr-user@lucene.apache.org
> Subject: Re: Unable to connect to an 8.8.0 Solr Cloud database via API
Thanks for the additional details Matthew. I created this JIRA to track
this problem: https://issues.apache.org/jira/browse/SOLR-15145. Please add
any additional information to that ticket if needed.
Are you able to upgrade your SolrJ client JAR to 8.8.0? If not, I
understand but that would be a
What version of SolrJ is embedded in your uleaf.ear file? There have been
changes in how we deal with URLs stored in ZK in 8.8 (see SOLR-12182).
On Fri, Feb 5, 2021 at 2:34 AM Flowerday, Matthew J <
matthew.flower...@gb.unisys.com> wrote:
Hi There
I have been checking out the latest (8.8.0) SolrCloud database (using
Zookeeper 3.6.2) against our application which talks to Solr via the Solr
API (I am not too sure of the details as I am not a java developer
unfortunately!). The software has Solr 8.7.0/ZooKeeper 3.6.2 libraries
Hi Pawel,
This definitely sounds like garbage collection biting you.
Backups themselves aren't usually memory intensive, but if indexing is
going on at the same time you should expect elevated memory usage.
Essentially this is because for each core being backed up, Solr needs
to hold pieces of
We finally got this fixed by temporarily disabling any updates to the SOLR
index.
Hello everyone,
I have a nasty problem with the scheduled Solr collection backups. From
time to time when a scheduled backup is triggered (the backup operation takes
around 10 minutes), Solr freezes for 20-30 seconds. The freeze happens on
one Solr instance at a time, but it affects latency for all queries
I have one collection with 3 shards and 2 replicas. I defined a route field
(title), and ID is the unique key.
I indexed two documents with the same ID and different titles. I configured the
dedupe chain and I can see the signature is generated, but the old document was
removed by Solr. Please help, thanks
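For reference, a typical dedupe chain looks roughly like the sketch below (field names are hypothetical). One thing worth noting: two documents sharing the same uniqueKey value replace each other regardless of dedupe settings, since overwriting by uniqueKey happens independently of the signature.

```xml
<!-- solrconfig.xml: hypothetical dedupe chain; adjust fields to your schema -->
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">signature</str>
    <!-- false: keep docs and just store the signature; true: overwrite docs
         that produce the same signature -->
    <bool name="overwriteDupes">false</bool>
    <str name="fields">title</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```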
Thanks for the reply Eric!
I have tried multiple versions of Solr Cloud: 8.3, 8.6.0, 8.6.2. Every
version has some issues either on indexing or query searching. For example,
with 8.3, indexing throws the below error:
request: http://X:8983/solr/searchcollection_shard2_replica_t103/
in the best way possible.
> On Jan 6, 2021, at 1:40 AM, Ritvik Sharma wrote:
Hi Guys,
Any update.
On Tue, 5 Jan 2021 at 18:06, Ritvik Sharma wrote:
Hi Guys
Happy New Year.
We are trying to move to Solr Cloud 6.6.6, as we are using the same version
in our master-slave arch.
solr cloud: 6.6.6
zk: 3.4.10
We are facing a few errors:
1. Every time we upload a model-store using a curl -XPUT command, it is
visible at that time, but after reloading the collection
Hello All ,
we are using Solr 6.2; in the schema that we use we have an integer field. For
a given query we want to know how many documents have duplicate values for
the field, for example how many documents have the same doc_id=10.
To find this information we fire a query to Solr Cloud
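One common way to get this (a sketch only; it assumes the field is named doc_id and is indexed) is a facet query with rows=0, so the response contains per-value counts and any value with count > 1 is a duplicate:

```
q=*:*&rows=0&facet=true&facet.field=doc_id&facet.mincount=2&facet.limit=-1
```

For one known value, q=doc_id:10&rows=0 and reading numFound from the response also answers the question directly.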
Hi Houston,
Thanks for reply
We don't have this kind of field. It's a field value and it is coming
randomly, not all the time.
We are indexing using CloudSolrClient + Spring Data. It is coming on any
value.
I am trying to do indexing of ~30 million records. And it is coming in
Solr Cloud mode
Do you have a field named "314257s_seourls" in your schema?
Is there a dynamic field you are trying to match with that name?
- Houston
On Thu, Dec 10, 2020 at 2:53 PM ritvik wrote:
Hi ,
Please suggest, why it is happening.
> Hi Solr Owner ,
>>
>> Please check the below mail, I am facing the issue. Do you have any
>> solution.
>> -- Forwarded message -
>> From: Ritvik Sharma
>> Date: Thu, 3 Dec 2020 at 16:21
>> Subject: Solr cloud issue: Async exception
I am not aware of a test. However, keep
in mind that HDFS support will be deprecated.
Additionally, you can configure erasure coding in HDFS on a per folder/
file basis, so you could in the worst case just make the folder for Solr with
the standard HDFS replication mode.
Erasure coding has several
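A sketch of the per-path control mentioned above (Hadoop 3 `hdfs ec` syntax; the /solr path is only illustrative):

```shell
# Check which erasure coding policy applies to the Solr data folder
hdfs ec -getPolicy -path /solr

# Force plain replication for Solr data even if a parent dir uses EC
hdfs ec -setPolicy -path /solr -replicate
```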
Anyone use Solr with Erasure Coding on HDFS? Is that supported?
Thank you
-Joe
From: Tyrone Tse
Sent: Thursday, September 3, 2020 10:14 AM
To: solr-user@lucene.apache.org
Subject: Solr Cloud Query when one of the Solr instances goes down
I have set up the example Solr Cloud that comes with the built-in ZooKeeper
that runs on localhost:9993.
I created my Solr Cloud instance with 2 nodes.
Node 1 url is http://localhost:8983/solr/#/~cloud
Node 2 url is http://localhost:7574/solr/#/~cloud
Currently all Solr queries go through Node 1
On 7/23/2020 8:56 AM, Porritt, Ian wrote:
Note: the solrconfig has <schemaFactory class="ClassicIndexSchemaFactory"/> defined.
org.apache.solr.common.SolrException: *This IndexSchema is not mutable*.
at
Hi All,
I made a change to schema to add new fields in a
collection, this was uploaded to Zookeeper via the
below command:
For the Schema
solr zk cp
file:E:\SolrCloud\server\solr\configsets\COLLECTION\conf\schema.xml
zk:/configs/COLLECTION/schema.xml -z SERVERNAME1.uleaf.site
For the
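With ClassicIndexSchemaFactory the schema is deliberately immutable through the Schema API (which matches the "This IndexSchema is not mutable" error above), so after copying schema.xml into ZooKeeper the change only takes effect once the collection is reloaded. A hedged sketch, with host and collection name as placeholders:

```shell
curl "http://SERVERNAME1.uleaf.site:8983/solr/admin/collections?action=RELOAD&name=COLLECTION"
```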
On 6/17/2020 8:55 PM, yaswanth kumar wrote:
Caused by: javax.crypto.BadPaddingException: RSA private key operation
failed
Something appears to be wrong with the private key that Solr is
attempting to use for a certificate.
Best guess, incorporating everything I can see in the stacktrace, is
On 6/16/2020 8:44 AM, yaswanth kumar wrote:
I don't see anything related in the solr.log file for the same error. Not
sure if there is anyother place where I can check for this.
The underlying request that failed might be happening on one of the
other nodes in the cloud. It might be
To: solr-user@lucene.apache.org
Subject: Re: Solr cloud backup/restore not working
Can someone please guide me on where I can get a more detailed error for the above
exception while doing restore? All that I
>
Can anyone here help on the posted question pls??
On Fri, Jun 12, 2020 at 10:38 AM yaswanth kumar
wrote:
Using Solr 8.2.0; set up a cloud with 2 nodes (2 replicas for each
collection).
Enabled basic authentication and gave all access to the admin user.
Now trying to use the Solr Cloud backup/restore API. Backup is working great,
but when trying to invoke the restore API it's throwing the below error
I gave up on JMX ages ago, so I can’t help there.
I’d open a bug with New Relic.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
Can you suggest Solr Cloud JMX metrics for collection and replica status?
Trying to centralize the alert generation in New Relic. New Relic only seems
to support JMX for this.
On Sun, May 31, 2020, 7:29 PM Walter Underwood
wrote:
I wrote a Python daemon that gets clusterstatus from the API, parses it, and
sends the counts of replicas in each state to InfluxDB. From there, we chart
and alert in Grafana. New Relic is good, but we need other kinds of metrics,
like the load balancer status from CloudWatch.
wunder
Walter
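The approach above can be sketched roughly as follows. This is not Walter's actual code, just a minimal illustration of walking a CLUSTERSTATUS-shaped response; the sample payload and collection name are made up:

```python
from collections import Counter

def replica_state_counts(clusterstatus: dict) -> Counter:
    """Tally replica 'state' values across all collections and shards."""
    counts = Counter()
    for coll in clusterstatus.get("cluster", {}).get("collections", {}).values():
        for shard in coll.get("shards", {}).values():
            for replica in shard.get("replicas", {}).values():
                counts[replica.get("state", "unknown")] += 1
    return counts

# Minimal fake CLUSTERSTATUS payload for illustration only
sample = {
    "cluster": {
        "collections": {
            "example_collection": {
                "shards": {
                    "shard1": {
                        "replicas": {
                            "core_node1": {"state": "active"},
                            "core_node2": {"state": "recovering"},
                        }
                    }
                }
            }
        }
    }
}

print(replica_state_counts(sample))
```

In a real daemon the dict would come from /solr/admin/collections?action=CLUSTERSTATUS&wt=json on one of the nodes, and the per-state counts would then be shipped to InfluxDB or any other metrics backend.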
Complain to New Relic about their lagging Solr support! I have, and
could use some support!
To address your actual question: I have found JMX in Solr to be crazy
unreliable, but the admin/metrics web endpoint is pretty good.
I have some (crappy) Python for parsing it for Datadog:
Hi
We use New Relic to monitor a Solr Cloud 7.2.1 cluster. We would like to get
alerted on any cluster state change, for example a degraded shard or a
replica down. New Relic can monitor any JMX metrics.
Can you suggest JMX metrics that will help monitor a degraded cluster,
replica recovering, shard
Any reference on this? Is it possible actually?
On Tue, May 12, 2020 at 2:21 PM Vignan Malyala wrote:
Does anyone know how to add an MLT handler in Solr Cloud?
How to add the MLT handler in Solr Cloud?
There is very limited documentation on this. Using the search component with
mlt=true doesn't include all configurations, like boosting and MLT filters.
Also the results with filters don't seem to work.
Adding an MLT handler seems better, but how to add it in Solr Cloud
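For reference, registering a dedicated handler looks roughly like the sketch below (the field names are hypothetical). Note that the MoreLikeThisHandler has historically operated on the local core, so its distributed behavior in SolrCloud is worth verifying on your version:

```xml
<!-- solrconfig.xml: hypothetical MLT handler registration -->
<requestHandler name="/mlt" class="solr.MoreLikeThisHandler">
  <lst name="defaults">
    <str name="mlt.fl">title,description</str>
    <int name="mlt.mintf">1</int>
    <int name="mlt.mindf">1</int>
    <str name="mlt.boost">true</str>
  </lst>
</requestHandler>
```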
, and when one of those returns a "401 Not Authorized" response,
you will be sent to the login screen to provide credentials.
Jan
Hi community,
Is there any way to password protect the Apache Solr server admin pages in Solr
Cloud mode?
Something like: when I open the server admin pages, there will be a login page
needing user and password input.
I learned that modifying jetty.xml could work in standalone mode; how about in Solr
Cloud
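In SolrCloud mode the usual mechanism is not jetty.xml but a security.json uploaded to ZooKeeper, enabling the BasicAuth plugin. A sketch following the shape of the stock example from the reference guide (the credential below is the well-known sample hash for user solr / password SolrRocks; replace it with your own):

```json
{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [{"name": "security-edit", "role": "admin"}],
    "user-role": {"solr": ["admin"]}
  }
}
```

It can be uploaded with something like `bin/solr zk cp file:security.json zk:/security.json -z <zkhost>`; with blockUnknown set to true, the admin UI then prompts for a login.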
Hi,
Is there any solution found for this issue? We are using Solr 7.6 and
sometimes we do see a lot of QTP threads with the stack trace:
sun.misc.Unsafe.park(Native method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
The easiest way to answer questions like this is an under-appreciated parameter
“explainOther” when submitted with “debug=true”. It’ll return an explanation of
how the doc identified by the “explainOther” parameter was scored.
See:
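Applied to the query in this thread, the sketch would be something like the following (parameter values are taken from the messages here):

```
q=person:[80 TO 100]&debug=true&explainOther=id:COLLECT2601697594_T496
```

The debug section of the response then includes a scoring explanation of how the document matching the explainOther query scores against the main query.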
I'm running the following query:
id:COLLECT2601697594_T496 AND (person:[80 TO 100])
That returns 1 hit.
The following query also returns the same hit:
id:COLLECT2601697594_T496 AND ((POP16_Rez1:blue_Sky AND POP16_Sc1:[80 TO
100]) OR (POP16_Rez2:blue_Sky AND POP16_Sc2:[80 TO 100]) OR
We have been experimenting with Solr cloud in Docker for some while,
and tried to do some optimizations by turning off swap on the host completely.
However, that quickly led to OOM crashes although we had 8G physical, 4G heap
and Solr just having a few thousand docs.
This makes me suspect
Hello guys,
I need some help understanding the setup with Solr Cloud. I am a newbie to Solr
and I have successfully set up Solr Cloud with some alarms on AWS.
I have two Solr nodes and 3 ZooKeeper nodes in my setup. I already have
data indexed on the nodes and I am able to query the data
Hi All,
Currently we are using Solr 5.2.1 on a production server and want to upgrade to
Solr 7.7.2. We have been using Solr 5.2.1 for the last 5 years, and we have
millions of documents on the production server. We have Solr Cloud with 2 shards
and 3 replicas on the production server.
I have upgraded Solr 5.2.1
shard creation.
Very happy.
From: Dominique Bejean
Sent: 05 February 2020 17:53
To: Dwane Hall
Cc: Scott Stults ;
solr-user@lucene.apache.org
Subject: Re: Solr Cloud on Docker?
Thank you Dwane. Great info :)
Le mer. 5 févr. 2020 à 11:49, Dwane Hall a écrit
> --
> *From:* Scott Stults
> *Sent:* Thursday, 30 January 2020 1:45 AM
> *To:* solr-user@lucene.apache.org
> *Subject:* Re: Solr Cloud on Docker?
>
> One of our clients has been running a big Solr Cl
reading it
(https://sbdevel.wordpress.com/2016/11/30/70tb-16b-docs-4-machines-1-solrcloud/).
As always it's recommended to test for your own conditions, and best of luck with
your deployment!
Dwane
From: Scott Stults
Sent: Thursday, 30 January 2020 1:45 AM
To: s
I don’t quite know how TolerantUpdateProcessor works with importing CSV
files, see: https://issues.apache.org/jira/browse/SOLR-445. That is about
sending batches of docs to Solr and frankly I don’t know what path your
process will take. It’s worth a try though.
Otherwise, I typically go with
Hi Shawn/Erick,
This information has been very helpful. Thank you.
So I did some more investigation into our ETL process and I verified that,
with the exception of the text I sent above, they are all obviously invalid
dates. For example, one field value had 00 for the day, so I would guess that
field
On 2/2/2020 8:47 AM, Joseph Lorenzini wrote:
1000
1
That autoSoftCommit setting is far too aggressive, especially for bulk
indexing. I don't know whether it's causing the specific problem you're
asking about here, but it's still a setting
>> be quite
>> substantial. When you turn the Solr instances back on (or ADDREPLICA),
>> they’ll sync back up.
>>
>> Finally, I mistrust just sending a large amount of data via HTTP, just
>> because there’s not much you can do except hope it all works. If this is a
>> recurring process I’d seriously consider writing a SolrJ program that
>> parsed the csv file and sent it to Solr.
>>
>> Best,
>> Erick
Hi all,
I have a three-node Solr Cloud cluster. The collection has a single shard. I
am importing a 140 GB CSV file into Solr using curl with a URL that looks
roughly like this. I am streaming the file from disk for performance
reasons.
http://localhost:8983/solr/example/update?separator=%09=/tmp
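The URL above appears to have lost a parameter name in transit. For reference, streaming a local file into the update handler usually looks something like the sketch below (paths and collection name are placeholders, and stream.file requires remote streaming to be enabled in solrconfig.xml):

```shell
curl "http://localhost:8983/solr/example/update?commit=true&separator=%09&stream.file=/tmp/data.tsv&stream.contentType=text/csv;charset=utf-8"
```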
Good to know Shawn.
Thanks,
Rajeswari
On 1/29/20, 12:52 PM, "Shawn Heisey" wrote:
On 1/27/2020 4:59 AM, Vignan Malyala wrote:
We are currently using solr without cloud with 500 cores. It works good.
Now we are planning to expand it using solr cloud with 1000 cores, (2 cores
for each of my client with different domain data).
SolrCloud starts having scalability issues once
One of our clients has been running a big Solr Cloud (100-ish nodes, TB
index, billions of docs) in kubernetes for over a year and it's been
wonderful. I think during that time the biggest scares we got were when we
ran out of disk space. Performance and reliability have been solid
otherwise. Like
> primarily driven by a need to scale our environment vertically on large
> hardware instances, as running 100 nodes on bare metal was too big a
> maintenance and administrative burden for us with a small dev and support
> team. To date it's been very stable and reliable, so I would recommend the
Hi all,
We are currently using Solr without Cloud with 500 cores. It works well.
Now we are planning to expand it using Solr Cloud with 1000 cores (2 cores
for each of my clients, with different domain data).
I'm planning to put all fields as "stored".
Is it the right thought? Will i
If you’re not getting values, don’t ask for the facet. Facets are expensive as
hell; maybe you should think more about your queries than your infrastructure.
Solr Cloud won’t help you at all, especially if you’re asking for things you
don’t need.
> On Jan 18, 2020, at 1:25 PM, Rajdeep Sahoo wr
We have assigned 16 GB out of 24 GB for heap.
No other process is running on that node.
200 facet fields are there in the query, but we will not be getting the
values for every facet on every search.
There can be a max of 50-60 facets for which we will be getting values.
We are using caching; is
On 1/18/2020 10:09 AM, Rajdeep Sahoo wrote:
We are having 2.3 million documents and the size is 2.5 GB.
10-core CPU and 24 GB RAM; 16 slave nodes.
Still some of the queries are taking 50 sec at the Solr end,
as we are using Solr 4.6.
The other thing is we are having 200 (avg) facet fields in a
Hi Shawn,
Thanks for this info.
Could you please address my below query:
We are having 2.3 million documents and the size is 2.5 GB.
With this data do we need Solr Cloud?
10-core CPU and 24 GB RAM; 16 slave nodes.
Still some of the queries are taking 50 sec at the Solr end.
As we are using
> > Indexing is not going to be any faster in SolrCloud. It would probably
> be a little bit
Hi Shawn,
Thanks for your reply.
We do parallel indexing in production.
What about search performance in Solr Cloud in comparison with
master/slave? And what about block join performance in Solr Cloud?
Do we need to increase the infra for Solr Cloud, as we would be
maintaining multiple
On 1/18/2020 1:05 AM, Rajdeep Sahoo wrote:
Our index size is huge, and in master/slave the full indexing time is almost
24 hrs.
In future the number of documents will increase.
So, please can someone recommend the number of nodes and configuration, like
RAM and CPU cores, for Solr Cloud.
Indexing
>> On Sat, 18 Jan, 2020, 8:05 AM Walter Underwood, wrote:
>>
>> Why do you want to change to Solr Cloud? Master/slave is a great, stable
>> cluster architecture.
>>
>> wunder
>> Walter Underwood
1 - 100 of 2206 matches