Stop the optimize call and see if that resolves the problem. Also, how are
you indexing? (point 3 above). Are you using CloudSolrClient or manually
sending requests to any node?
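For reference, "manually sending requests to a node" usually means POSTing updates straight at one node's update handler, as in the curl sketch below (host, port, collection, and field names are placeholders). CloudSolrClient, by contrast, reads cluster state from ZooKeeper and routes each document to its shard leader, avoiding an extra forwarding hop.

```shell
# Hypothetical example: indexing by POSTing JSON at a single node.
# Documents sent this way may be forwarded between nodes internally,
# which CloudSolrClient avoids by routing to the leader directly.
curl -s -X POST 'http://solr-node1:8983/solr/mycollection/update?commit=false' \
  -H 'Content-Type: application/json' \
  -d '[{"id": "doc1", "title_s": "hello"}]'
```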
Thanks,
Susheel
On Tue, Aug 22, 2017 at 9:27 AM, Shreya Kampli wrote:
> Hi,
>
> I have setup a solrcloud with 1 shard a
Regards,
Preeti
-Original Message-
From: John Bickerstaff [mailto:j...@johnbickerstaff.com]
Sent: Thursday, September 22, 2016 9:19 AM
To: solr-user@lucene.apache.org
Subject: Re: SolrCloud setup
I found it to be way less than intuitive when I first started to get going.
I wished for an example or step by step (including zookeeper)
Pulling it all together from the docs wasn't straightforward although I
guess the info is still there.
I'll send you my rough notes in case they're helpful...
Setting up SolrCloud on multiple hosts is exactly the same as a single
host. You just install Solr on all the hosts you care about and start
it up. As long as the hosts can talk to each other via HTTP, it's all
magic.
The "glue" is Zookeeper. All the Solrs are started up with the same ZK
ensemble
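As a concrete sketch of those notes (host names and the /solr chroot are invented), every node is started with the same ZooKeeper connection string, which is what makes them one cluster:

```shell
# Pass the same -z string on every host so they join one cluster;
# -c starts Solr in cloud mode, /solr is an optional ZK chroot.
bin/solr start -c -z "zk1:2181,zk2:2181,zk3:2181/solr" -p 8983

# Repeat on each host, then create a collection from any one of them:
bin/solr create -c mycollection -shards 1 -replicationFactor 2
```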
… performance is coming. Current memory 7 GB is really low. After that you may
want to add another node to partition the index into 2 nodes/shards (assuming
you have some partition strategy) so that the index size of 150-200 GB can fit
into the two nodes' memory.
Thanks,
Susheel
-Original Message-
From: Priti Solanki [mailto:pritiatw...@gmail.com]
Sent: Friday, March 07, 2014 12:50 AM
To: solr-user@lucene.apache.org
Subject: Re: SolrCloud setup guidance
Furkan, 100 requests per second would be ideal in our situation.
Regards,
Priti
On Sat, Mar 8, 2014 at 3:41 AM, Furkan KAMACI wrote:
> Hi;
>
> What's your performance expectation for qps (query per second)?
>
> Thanks;
> Furkan KAMACI
> On 7 Mar 2014 08:50, "Priti Solanki" wrote:
>
Thanks Susheel,
But this index will keep on growing; that is my worry, so I will always have to
increase the RAM.
Can you suggest how many nodes one can think of to support this big index?
Regards,
On Fri, Mar 7, 2014 at 2:50 AM, Susheel Kumar <
susheel.ku...@thedigitalgroup.net> wrote:
Setting up Solr cloud (horizontal scaling) is definitely a good idea for this
big index, but before going to Solr cloud, are you able to upgrade your single
node to 128 GB of memory (vertical scaling) to see the difference?
Thanks,
Susheel
-Original Message-
From: Priti Solanki [mailto:pritiatw...@gmail.com]
I think you're right, but you can specify a default value in your schema.xml
to at least see if this is a good path to follow.
Best,
Erick
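A minimal sketch of what Erick describes (the field name and type are invented); giving the docValues field a default in schema.xml guarantees every document has a value, which pre-4.5 docValues require:

```xml
<!-- Hypothetical field: default="0" ensures a value exists for all
     documents, sidestepping the pre-4.5 docValues restriction. -->
<field name="price" type="tfloat" indexed="true" stored="true"
       docValues="true" default="0"/>
```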
On Fri, Sep 27, 2013 at 3:46 AM, Neil Prosser wrote:
Good point. I'd seen docValues and wondered whether they might be of use in
this situation. However, as I understand it they require a value to be set
for all documents until Solr 4.5. Is that true or was I imagining reading
that?
On 25 September 2013 11:36, Erick Erickson wrote:
Hmmm, I confess I haven't had a chance to play with this yet,
but have you considered docValues for some of your fields? See:
http://wiki.apache.org/solr/DocValues
And just to tantalize you:
> Since Solr4.2 to build a forward index for a field, for purposes of sorting,
> faceting, grouping, function queries, etc.
Shawn: unfortunately the current problems are with facet.method=enum!
Erick: We already round our date queries so they're the same for at least
an hour so thankfully our fq entries will be reusable. However, I'll take a
look at reducing the cache and autowarming counts and see what the effect
on h
About caches. The queryResultCache is only useful when you expect there
to be a number of _identical_ queries. Think of this cache as a map where
the key is the query and the value is just a list of N document IDs (internal)
where N is your window size. Paging is often the place where this is used.
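In solrconfig.xml that cache, and the window size N Erick mentions, are configured roughly like this; the sizes are placeholders, not recommendations:

```xml
<!-- Maps an exact query to a cached list of internal document IDs. -->
<queryResultCache class="solr.LRUCache"
                  size="512" initialSize="512" autowarmCount="32"/>
<!-- N: how many IDs are cached per entry; covers the first few pages. -->
<queryResultWindowSize>50</queryResultWindowSize>
```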
On 9/19/2013 9:20 AM, Neil Prosser wrote:
> Apologies for the giant email. Hopefully it makes sense.
Because of its size, I'm going to reply inline like this and I'm going
to trim out portions of your original message. I hope that's not
horribly confusing to you! Looking through my archive of th
Sorry, my bad. For SolrCloud soft commits are enabled (every 15 seconds). I
do a hard commit from an external cron task via curl every 15 minutes.
The version I'm using for the SolrCloud setup is 4.4.0.
Document cache warm-up times are 0ms.
Filter cache warm-up times are between 3 and 7 seconds.
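The external hard commit Neil describes can be done with a crontab entry along these lines (host and collection name are placeholders); openSearcher=false keeps the hard commit cheap since the 15-second soft commits already handle visibility:

```shell
# Crontab sketch: hard commit every 15 minutes, durability only --
# no new searcher is opened, soft commits take care of that.
*/15 * * * * curl -s 'http://localhost:8983/solr/mycollection/update?commit=true&openSearcher=false'
```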
Hi Neil,
Consider using G1 instead. See http://blog.sematext.com/?s=g1
If that doesn't help, we can play with various JVM parameters. The latest
version of SPM for Solr exposes information about sizes and utilization of
JVM memory pools, which may help you understand which JVM params you need
to tune.
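If you try G1 as Otis suggests, on a Solr 4.x example-style install it goes on the startup command line; the heap sizes and pause target below are placeholders to adapt, not tuned values:

```shell
# Sketch: launch Solr with G1 and an explicit, fixed-size heap.
java -Xms8g -Xmx8g \
     -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
     -jar start.jar
```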
Hi Neil,
Although you haven't mentioned it, just wanted to confirm - do you have
soft commits enabled?
Also what's the version of solr you are using for the solr cloud setup?
4.0.0 had lots of memory and zk related issues. What's the warmup time for
your caches? Have you tried disabling the caches?
The above does not look right - you probably would want
/usr/solr/example/solr for your solrhome based on other info you give.
You also reference /usr/solr/data/conf as your conf folder, but I'd
expect it to be something like /usr/solr/example/solr/collection1/conf
-DhostPort=8080" #mi
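Putting Shawn's expected layout together, a 4.x start command that pins those locations explicitly might look like this sketch (the paths follow his example; the rest are defaults):

```shell
# Run from the example directory so Jetty finds its config;
# solr.solr.home points at the solrhome containing collection1/conf.
cd /usr/solr/example
java -Dsolr.solr.home=/usr/solr/example/solr \
     -DhostPort=8080 \
     -jar start.jar
```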