Re: How to store files larger than zNode limit

2018-03-14 Thread Atita Arora
Thank you, Markus, that's kind of a relief to know!

Rick,
I spent a few minutes reading about Puppet/Ansible, as I have not used them
before, and this seems doable.
Let me give it a try and I'll let you know.
Thanks,
Atita

On Wed, Mar 14, 2018 at 5:01 PM, Rick Leir  wrote:

> Could you manage userdict using Puppet or Ansible? Or whatever your
> automation system is.
> --
> Sorry for being brief. Alternate email is rickleir at yahoo dot com
>


Re: How to store files larger than zNode limit

2018-03-14 Thread Rick Leir
Could you manage userdict using Puppet or Ansible? Or whatever your automation 
system is. 
-- 
Sorry for being brief. Alternate email is rickleir at yahoo dot com 
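
For illustration, here is a minimal sketch (in Java) of what such an
automation step would effectively do: copy userdict.txt onto each Solr
node's local conf directory so the large file never has to live in a
znode. The directory paths are made-up placeholders, and a real
Puppet/Ansible run would of course handle the remote transport and
service restarts on top of this.

    // Hedged sketch: the file-distribution step a config-management
    // tool would perform, with local paths standing in for remote nodes.
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class PushUserDict {
        public static void main(String[] args) throws Exception {
            Path source = Paths.get("userdict.txt"); // the ~10 MB dictionary
            String[] confDirs = {                    // one entry per Solr node (placeholders)
                "/var/solr/node1/conf",
                "/var/solr/node2/conf"
            };
            for (String dir : confDirs) {
                // REPLACE_EXISTING keeps repeated runs idempotent,
                // like a config-management apply.
                Files.copy(source, Paths.get(dir, "userdict.txt"),
                           StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }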

RE: How to store files larger than zNode limit

2018-03-14 Thread Rick Leir
Markus, Atita,
We set it higher too.

When ZK is recovering from a disconnected state, it re-sends all the
messages it had been trying to send while the machines were disconnected.
Is this stored in a 'transaction log' .tlog file? I am not clear on this.
ZK also goes through the unsent messages when Solr starts up, and startup
can take a while longer.

With this in mind, it might make more sense to use ZK only for
kilobyte-sized blobs. But machines get faster every year, so maybe
megabyte- and gigabyte-sized blobs will become appropriate.
Cheers -- Rick

On March 13, 2018 5:56:56 PM EDT, Markus Jelsma  
wrote:
>Hi - For now, the only option is to allow larger blobs via jute.maxbuffer
>(whatever jute means). Despite ZK being designed for kb-sized blobs, Solr
>forces us to abuse it. I think there was a ticket for compression support,
>but that only stretches the limit.
>
>We are running ZK with 16 MB for maxbuffer. It holds the large
>dictionaries and runs fine.
>
>Regards,
>Markus
> 
>-Original message-
>> From:Atita Arora 
>> Sent: Tuesday 13th March 2018 22:38
>> To: solr-user@lucene.apache.org
>> Subject: How to store files larger than zNode limit
>> 
>> Hi,
>> 
>> I have a use case supporting multiple clients and multiple languages in
>> a single application.
>> So, in order to improve language support, we want to leverage Solr
>> dictionary (userdict.txt) files as large as 10MB.
>> I understand that ZooKeeper's default zNode file size limit is 1MB.
>> I'm not sure if anyone has tried increasing it before and how that
>> fares in terms of performance.
>> Looking at - https://zookeeper.apache.org/doc/r3.2.2/zookeeperAdmin.html
>> It states -
>> Unsafe Options
>> 
>> The following options can be useful, but be careful when you use them.
>> The risk of each is explained along with the explanation of what the
>> variable does.
>> jute.maxbuffer:
>> 
>> (Java system property: jute.maxbuffer)
>> 
>> This option can only be set as a Java system property. There is no
>> zookeeper prefix on it. It specifies the maximum size of the data that
>> can be stored in a znode. The default is 0xfffff, or just under 1M. If
>> this option is changed, the system property must be set on all servers
>> and clients otherwise problems will arise. This is really a sanity
>> check. ZooKeeper is designed to store data on the order of kilobytes
>> in size.
>> I would appreciate it if someone has suggestions on best practices for
>> handling large config/dictionary files in ZK.
>> 
>> Thanks,
>> Atita
>> 

-- 
Sorry for being brief. Alternate email is rickleir at yahoo dot com 

Re: How to store files larger than zNode limit

2018-03-13 Thread Roopa ML
Thank you, this is clear.
Regards,
Roopa

Sent from my iPhone

> On Mar 13, 2018, at 6:35 PM, Markus Jelsma  wrote:
> 
> Hi - Configure it for all servers that connect to ZK and need jute.maxbuffer
> to be high, and for ZK itself, of course.
> 
> So if your Solr cluster needs a large buffer, your Solr environment
> variables need to match ZK's. If you simultaneously use ZK for a Hadoop
> cluster but don't need that buffer size there, you can omit it in Hadoop's
> settings.
> 
> Markus
> 
> 
> 
> -Original message-
>> From:Roopa ML 
>> Sent: Tuesday 13th March 2018 23:18
>> To: solr-user@lucene.apache.org
>> Subject: Re: How to store files larger than zNode limit
>> 
>> The documentation has:
>> "If this option is changed, the system property must be set on all
>> servers and clients otherwise problems will arise."
>> 
>> Other than the ZooKeeper Java property, what are the other places this
>> should be set?
>> 
>> Thank you
>> Roopa
>> 
>> Sent from my iPhone
>> 
>>> On Mar 13, 2018, at 5:56 PM, Markus Jelsma  
>>> wrote:
>>> 
>>> Hi - For now, the only option is to allow larger blobs via jute.maxbuffer
>>> (whatever jute means). Despite ZK being designed for kb-sized blobs, Solr
>>> forces us to abuse it. I think there was a ticket for compression support,
>>> but that only stretches the limit.
>>> 
>>> We are running ZK with 16 MB for maxbuffer. It holds the large
>>> dictionaries and runs fine.
>>> 
>>> Regards,
>>> Markus
>>> 
>>> -Original message-
>>>> From:Atita Arora 
>>>> Sent: Tuesday 13th March 2018 22:38
>>>> To: solr-user@lucene.apache.org
>>>> Subject: How to store files larger than zNode limit
>>>> 
>>>> Hi,
>>>> 
>>>> I have a use case supporting multiple clients and multiple languages in
>>>> a single application.
>>>> So, in order to improve language support, we want to leverage Solr
>>>> dictionary (userdict.txt) files as large as 10MB.
>>>> I understand that ZooKeeper's default zNode file size limit is 1MB.
>>>> I'm not sure if anyone has tried increasing it before and how that
>>>> fares in terms of performance.
>>>> Looking at - https://zookeeper.apache.org/doc/r3.2.2/zookeeperAdmin.html
>>>> It states -
>>>> Unsafe Options
>>>> 
>>>> The following options can be useful, but be careful when you use them.
>>>> The risk of each is explained along with the explanation of what the
>>>> variable does.
>>>> jute.maxbuffer:
>>>> 
>>>> (Java system property: jute.maxbuffer)
>>>> 
>>>> This option can only be set as a Java system property. There is no
>>>> zookeeper prefix on it. It specifies the maximum size of the data that
>>>> can be stored in a znode. The default is 0xfffff, or just under 1M. If
>>>> this option is changed, the system property must be set on all servers
>>>> and clients otherwise problems will arise. This is really a sanity
>>>> check. ZooKeeper is designed to store data on the order of kilobytes
>>>> in size.
>>>> I would appreciate it if someone has suggestions on best practices for
>>>> handling large config/dictionary files in ZK.
>>>> 
>>>> Thanks,
>>>> Atita
>>>> 
>> 


RE: How to store files larger than zNode limit

2018-03-13 Thread Markus Jelsma
Hi - Configure it for all servers that connect to ZK and need jute.maxbuffer
to be high, and for ZK itself, of course.

So if your Solr cluster needs a large buffer, your Solr environment variables
need to match ZK's. If you simultaneously use ZK for a Hadoop cluster but
don't need that buffer size there, you can omit it in Hadoop's settings.

Markus
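
Since jute.maxbuffer can only be supplied as a JVM system property, a
startup sanity check makes the "set it everywhere" requirement explicit.
A hedged sketch, assuming the 16 MB figure from this thread; the class
name and expected value are illustrative, not from any Solr or ZK API:

    // Hedged sketch: fail fast if this JVM was launched without a
    // jute.maxbuffer that matches what the ZooKeeper ensemble uses.
    public class CheckJuteMaxBuffer {
        // 16 MB, mirroring the ensemble-side setting discussed here
        private static final int EXPECTED = 16 * 1024 * 1024;

        public static void main(String[] args) {
            // ZooKeeper's documented default is 0xfffff, just under 1 MB
            int configured = Integer.getInteger("jute.maxbuffer", 0xfffff);
            if (configured < EXPECTED) {
                throw new IllegalStateException("jute.maxbuffer is "
                        + configured + " but the ensemble expects " + EXPECTED
                        + "; start this JVM with -Djute.maxbuffer=" + EXPECTED);
            }
            System.out.println("jute.maxbuffer OK: " + configured);
        }
    }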

 
 
-Original message-
> From:Roopa ML 
> Sent: Tuesday 13th March 2018 23:18
> To: solr-user@lucene.apache.org
> Subject: Re: How to store files larger than zNode limit
> 
> The documentation has:
> "If this option is changed, the system property must be set on all
> servers and clients otherwise problems will arise."
> 
> Other than the ZooKeeper Java property, what are the other places this
> should be set?
> 
> Thank you
> Roopa
> 
> Sent from my iPhone
> 
> > On Mar 13, 2018, at 5:56 PM, Markus Jelsma  
> > wrote:
> > 
> > Hi - For now, the only option is to allow larger blobs via jute.maxbuffer
> > (whatever jute means). Despite ZK being designed for kb-sized blobs, Solr
> > forces us to abuse it. I think there was a ticket for compression support,
> > but that only stretches the limit.
> > 
> > We are running ZK with 16 MB for maxbuffer. It holds the large
> > dictionaries and runs fine.
> > 
> > Regards,
> > Markus
> > 
> > -Original message-
> >> From:Atita Arora 
> >> Sent: Tuesday 13th March 2018 22:38
> >> To: solr-user@lucene.apache.org
> >> Subject: How to store files larger than zNode limit
> >> 
> >> Hi,
> >> 
> >> I have a use case supporting multiple clients and multiple languages in
> >> a single application.
> >> So, in order to improve language support, we want to leverage Solr
> >> dictionary (userdict.txt) files as large as 10MB.
> >> I understand that ZooKeeper's default zNode file size limit is 1MB.
> >> I'm not sure if anyone has tried increasing it before and how that
> >> fares in terms of performance.
> >> Looking at - https://zookeeper.apache.org/doc/r3.2.2/zookeeperAdmin.html
> >> It states -
> >> Unsafe Options
> >> 
> >> The following options can be useful, but be careful when you use them.
> >> The risk of each is explained along with the explanation of what the
> >> variable does.
> >> jute.maxbuffer:
> >> 
> >> (Java system property: jute.maxbuffer)
> >> 
> >> This option can only be set as a Java system property. There is no
> >> zookeeper prefix on it. It specifies the maximum size of the data that
> >> can be stored in a znode. The default is 0xfffff, or just under 1M. If
> >> this option is changed, the system property must be set on all servers
> >> and clients otherwise problems will arise. This is really a sanity
> >> check. ZooKeeper is designed to store data on the order of kilobytes
> >> in size.
> >> I would appreciate it if someone has suggestions on best practices for
> >> handling large config/dictionary files in ZK.
> >> 
> >> Thanks,
> >> Atita
> >> 
> 


Re: How to store files larger than zNode limit

2018-03-13 Thread Roopa ML
The documentation has:
"If this option is changed, the system property must be set on all servers
and clients otherwise problems will arise."

Other than the ZooKeeper Java property, what are the other places this
should be set?

Thank you
Roopa

Sent from my iPhone

> On Mar 13, 2018, at 5:56 PM, Markus Jelsma  wrote:
> 
> Hi - For now, the only option is to allow larger blobs via jute.maxbuffer
> (whatever jute means). Despite ZK being designed for kb-sized blobs, Solr
> forces us to abuse it. I think there was a ticket for compression support,
> but that only stretches the limit.
> 
> We are running ZK with 16 MB for maxbuffer. It holds the large
> dictionaries and runs fine.
> 
> Regards,
> Markus
> 
> -Original message-
>> From:Atita Arora 
>> Sent: Tuesday 13th March 2018 22:38
>> To: solr-user@lucene.apache.org
>> Subject: How to store files larger than zNode limit
>> 
>> Hi,
>> 
>> I have a use case supporting multiple clients and multiple languages in
>> a single application.
>> So, in order to improve language support, we want to leverage Solr
>> dictionary (userdict.txt) files as large as 10MB.
>> I understand that ZooKeeper's default zNode file size limit is 1MB.
>> I'm not sure if anyone has tried increasing it before and how that
>> fares in terms of performance.
>> Looking at - https://zookeeper.apache.org/doc/r3.2.2/zookeeperAdmin.html
>> It states -
>> Unsafe Options
>> 
>> The following options can be useful, but be careful when you use them.
>> The risk of each is explained along with the explanation of what the
>> variable does.
>> jute.maxbuffer:
>> 
>> (Java system property: jute.maxbuffer)
>> 
>> This option can only be set as a Java system property. There is no
>> zookeeper prefix on it. It specifies the maximum size of the data that
>> can be stored in a znode. The default is 0xfffff, or just under 1M. If
>> this option is changed, the system property must be set on all servers
>> and clients otherwise problems will arise. This is really a sanity
>> check. ZooKeeper is designed to store data on the order of kilobytes
>> in size.
>> I would appreciate it if someone has suggestions on best practices for
>> handling large config/dictionary files in ZK.
>> 
>> Thanks,
>> Atita
>> 


RE: How to store files larger than zNode limit

2018-03-13 Thread Markus Jelsma
Hi - For now, the only option is to allow larger blobs via jute.maxbuffer
(whatever jute means). Despite ZK being designed for kb-sized blobs, Solr
forces us to abuse it. I think there was a ticket for compression support,
but that only stretches the limit.

We are running ZK with 16 MB for maxbuffer. It holds the large dictionaries
and runs fine.

Regards,
Markus
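 
For reference, a minimal sketch of writing a large dictionary into a
znode once the limit has been raised. The connect string and znode path
are assumptions for illustration; in practice the property would be
passed as -Djute.maxbuffer=... on the command line of every JVM involved
rather than set in code:

    // Hedged sketch: push a ~10 MB userdict.txt into ZooKeeper with a
    // raised jute.maxbuffer. Setting the property programmatically only
    // works if no ZooKeeper class has been loaded yet, because the limit
    // is read once into a static field.
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import org.apache.zookeeper.ZooKeeper;

    public class UploadUserDict {
        public static void main(String[] args) throws Exception {
            System.setProperty("jute.maxbuffer", String.valueOf(16 * 1024 * 1024));
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
            byte[] dict = Files.readAllBytes(Paths.get("userdict.txt"));
            // version -1 means "any version"; the path is a made-up example
            zk.setData("/configs/mycollection/userdict.txt", dict, -1);
            zk.close();
        }
    }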
 
-Original message-
> From:Atita Arora 
> Sent: Tuesday 13th March 2018 22:38
> To: solr-user@lucene.apache.org
> Subject: How to store files larger than zNode limit
> 
> Hi,
> 
> I have a use case supporting multiple clients and multiple languages in
> a single application.
> So, in order to improve language support, we want to leverage Solr
> dictionary (userdict.txt) files as large as 10MB.
> I understand that ZooKeeper's default zNode file size limit is 1MB.
> I'm not sure if anyone has tried increasing it before and how that
> fares in terms of performance.
> Looking at - https://zookeeper.apache.org/doc/r3.2.2/zookeeperAdmin.html
> It states -
> Unsafe Options
> 
> The following options can be useful, but be careful when you use them.
> The risk of each is explained along with the explanation of what the
> variable does.
> jute.maxbuffer:
> 
> (Java system property: jute.maxbuffer)
> 
> This option can only be set as a Java system property. There is no
> zookeeper prefix on it. It specifies the maximum size of the data that
> can be stored in a znode. The default is 0xfffff, or just under 1M. If
> this option is changed, the system property must be set on all servers
> and clients otherwise problems will arise. This is really a sanity
> check. ZooKeeper is designed to store data on the order of kilobytes
> in size.
> I would appreciate it if someone has suggestions on best practices for
> handling large config/dictionary files in ZK.
> 
> Thanks,
> Atita
> 