>> autowarm counts are very high, so it may take some
>> time to autowarm, leading to multiple IndexSearchers and caches open
>> per replica when you happen to hit a commit point. I usually start
>> with 16-20 as an autowarm count; the benefit decreases rapidly as you
>> increase the count.
>>
>> I'm not quite sure why it would be different in 7x vs. 6x. How much
>> heap do you allocate to the JVM? And do you see similar heap dumps in
>> 6.6?
>>
>> Best,
>> Erick
>> On Mon, Sep 3, 2018 at 10:33 AM
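For context, the autowarm count Erick mentions is set per cache in solrconfig.xml. A minimal sketch of what such a setting can look like, assuming the default configset; the sizes and the count of 16 are illustrative, not values from this thread:

    <!-- solrconfig.xml: keep autowarmCount modest so new searchers open quickly -->
    <query>
      <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="16"/>
      <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="16"/>
    </query>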
Are there any metrics which could tell me that without a heap dump?
>
> I'm not quite sure why it would be different in 7x vs. 6x. How much
> heap do you allocate to the JVM? And do you see similar heap dumps in
> 6.6?
>
> Best,
> Erick
Thanks Erick!
Björn
> On Mon, Sep 3, 2
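On the metrics question: Solr's Metrics API (added around 6.4) exposes JVM heap and per-core cache statistics over HTTP, so a heap dump is not always needed for a first look. A sketch, with host and prefixes as assumptions:

    # JVM heap usage on one node
    curl 'http://localhost:8983/solr/admin/metrics?group=jvm&prefix=memory.heap'
    # searcher cache sizes and hit ratios per core
    curl 'http://localhost:8983/solr/admin/metrics?group=core&prefix=CACHE.searcher'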
Hello,
we recently upgraded our SolrCloud (5 nodes, 25 collections, 1 shard each, 4
replicas each) from 6.6.0 to 7.3.0 and shortly after to 7.4.0. We are running
ZooKeeper 3.4.13.
Since the upgrade to 7.3.0 and also 7.4.0 we have been encountering heap space
exhaustion. After obtaining a heap dump it
Hello,
> On 31. Aug 2018, at 21:53, Shawn Heisey wrote:
>
>
> As Walter hinted, ZooKeeper 3.4.x is not capable of dynamically
> adding/removing servers to/from the ensemble. To do this successfully, all
> ZK servers and all ZK clients must be upgraded to 3.5.x. Solr is a ZK client
> when
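For reference, once the whole ensemble and all clients are on ZooKeeper 3.5.x, servers can be added and removed dynamically with the reconfig command. A rough sketch; the host name and server ids are placeholders, and 3.5.3+ also requires reconfigEnabled=true before this works:

    # from zkCli.sh against a 3.5.x ensemble
    reconfig -add server.4=zk4.example.com:2888:3888:participant;2181
    reconfig -remove 3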
Hi Raja,
we are using solrcloud as a statefulset and every pod has its own storage
attached to it.
Thanks
Björn
> On 20. Nov 2017, at 05:59, rajasaur wrote:
>
> Hi Bjorn,
>
> I'm trying a similar approach now (to get SolrCloud working on Kubernetes). I
> have run
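The per-pod storage mentioned above is typically done with a volumeClaimTemplate, so each StatefulSet pod gets, and keeps, its own PersistentVolumeClaim. A sketch of that fragment; the claim name and size are assumptions, not from this thread:

    # fragment of a StatefulSet spec
    volumeClaimTemplates:
      - metadata:
          name: solr-data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi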
> one that proxies Solr queries?
>
> And what is the difference between the above and "solr discovery"?
>
> Do you specify pod anti affinity for solr hosts?
>
> Regards
> Lars
>
> On Sat, 26 Aug 2017 at 13:19, Björn Häuser <bjoernhaeu...@gmail.com> wrote:
Hi Lars,
we are running Solr in Kubernetes and, after some initial problems, we are
running quite stably now.
Here is the setup we chose for Solr (see the sketch after this list):
- separate service for external traffic to solr (called “solr”)
- statefulset for solr with 3 replicas with another service (called
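A rough sketch of how that setup can look in manifests, including the pod anti-affinity Lars asked about. Service names, labels, port and the anti-affinity rule are assumptions rather than the exact config from this thread:

    # external / client-facing Service
    apiVersion: v1
    kind: Service
    metadata:
      name: solr
    spec:
      selector:
        app: solr
      ports:
        - port: 8983
    ---
    # headless Service used by the StatefulSet for stable per-pod DNS names
    apiVersion: v1
    kind: Service
    metadata:
      name: solr-headless
    spec:
      clusterIP: None
      selector:
        app: solr
      ports:
        - port: 8983
    ---
    # inside the StatefulSet pod template: spread Solr pods across nodes
    # spec:
    #   affinity:
    #     podAntiAffinity:
    #       requiredDuringSchedulingIgnoredDuringExecution:
    #         - labelSelector:
    #             matchLabels:
    #               app: solr
    #           topologyKey: kubernetes.io/hostname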
n only be executed for less than 10 collections?
Would love to contribute a patch for this if someone can tell me how it should
look :)
Thanks
Björn
> On 3. Aug 2017, at 18:51, Björn Häuser <bjoernhaeu...@gmail.com> wrote:
>
> Hey Folks,
>
> we today hit the same error three
Hey Folks,
today we hit the same error three times: a REPLACENODE call was not successful.
Here is our scenario:
3-node SolrCloud cluster running in Kubernetes on top of AWS.
Today we wanted to rotate the underlying storage (increased from 50 GB to
300 GB).
After we rotated one node we
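For reference, the call in question is the Collections API REPLACENODE action. A sketch with placeholder host names and request id, using the parameter names from recent reference guides; the async parameter lets the long-running copy be tracked with REQUESTSTATUS:

    curl 'http://solr-0:8983/solr/admin/collections?action=REPLACENODE&sourceNode=old-node:8983_solr&targetNode=new-node:8983_solr&async=replace-1'
    # check progress of the async request
    curl 'http://solr-0:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=replace-1'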
> which can
> also cascade a bunch of problems.
>
> In general it's an anti-pattern to allocate such a large portion of
> your physical memory to the JVM, see:
> http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
>
> Best,
> Erick
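A practical way to follow that advice, assuming the standard bin/solr startup scripts: cap the heap well below physical RAM so the remainder stays available to the OS page cache that MMapDirectory relies on. The figure below is purely illustrative, not a recommendation from this thread:

    # solr.in.sh
    SOLR_HEAP="4g"
    # or at startup:
    # bin/solr start -m 4g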
> enough to have
> sessions drop. Grasping at straws here, but "top" or similar
> should tell you what the system is doing.
>
> Best,
> Erick
>
> On Tue, Nov 3, 2015 at 12:04 AM, Björn Häuser <bjoernhaeu...@gmail.com> wrote:
>> Hi!
>>
>> Thank you f
Hey there,
we are running a SolrCloud which has 4 nodes, all with the same config. Each
node has 8 GB of memory, 6 GB of which is assigned to the JVM. This is maybe
too much, but it worked for a long time.
We currently run with 2 shards, 2 replicas and 11 collections. The
complete data-dir is about 5.3 GB.
I think we should