Thank you Markus, that's a relief to know!
Rick,
I spent a few minutes reading up on Puppet/Ansible, as I have not used them
before, and this seems doable.
Let me give this a try and I'll let you know.
Thanks,
Atita
On Wed, Mar 14, 2018 at 5:01 PM, Rick Leir wrote:
Could you manage userdict using Puppet or Ansible? Or whatever your automation
system is.
--
Sorry for being brief. Alternate email is rickleir at yahoo dot com
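Rick's suggestion might look roughly like the following ad-hoc Ansible invocation, which pushes the dictionary file directly to each node instead of storing it in ZooKeeper. This is only a hypothetical sketch: the inventory group name `solr`, the file paths, and the `solr` owner are all assumptions, not details from this thread.

```shell
# Hypothetical sketch: distribute a large userdict file to every Solr
# host with Ansible's ad-hoc "copy" module, bypassing ZooKeeper.
# Group name, paths, and owner below are assumptions.
ansible solr -b -m copy -a \
  "src=./userdict.txt dest=/var/solr/data/userdict.txt owner=solr mode=0644"
```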
Markus, Atita
We set it higher too.
When ZK is recovering from a disconnected state, it re-sends all the messages
it had been trying to send while the machines were disconnected. Is this
stored in a 'transaction log' .tlog file? I am not clear on this. ZK also goes
through the unsent messages.
…need that buffer size, you can omit it in Hadoop's settings.
Markus
-----Original message-----
> From:Roopa ML
> Sent: Tuesday 13th March 2018 23:18
> To: solr-user@lucene.apache.org
> Subject: Re: How to store files larger than zNode limit
>
> The documentation has:
The documentation has:
"If this option is changed, the system property must be set on all servers
and clients, otherwise problems will arise."
Other than the Zookeeper Java system property, in what other places should this be set?
Thank you
Roopa
Sent from my iPhone
> On Mar 13, 2018, at 5:56 PM, Markus
Hi - For now, the only option is to allow larger blobs via jute.maxbuffer
(whatever jute means). Despite ZK being designed for KB-sized blobs, Solr
forces us to abuse it. I think there was a ticket for compression support, but
that only stretches the limit.
We are running ZK with 16 MB for jute.maxbuffer.
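Raising the limit means passing the same system property to every ZK server JVM and to every client JVM, Solr included. A hedged sketch of what that might look like; the file paths and the 16 MB value (16777216 bytes) are assumptions drawn from this thread, not verified configuration:

```shell
# ZK server side, e.g. in zookeeper-env.sh (path is an assumption):
export JVMFLAGS="-Djute.maxbuffer=16777216"

# Solr side (a ZK client), e.g. in solr.in.sh -- the value must
# match what the ZK servers use:
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=16777216"
```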