The idea of using an external program could work well.
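
For example, a pre-indexing normalization step could be as simple as the
sketch below (mapping.txt and the document file names are made up, and the
\b word boundary assumes GNU sed):

    #!/bin/bash
    # Rough sketch: rewrite known variants to one canonical term before indexing.
    # mapping.txt (hypothetical) holds one "variant canonical" pair per line.
    args=()
    while read -r variant canonical; do
      # Build one substitution per mapping line; \b is a GNU sed extension.
      args+=(-e "s/\\b${variant}\\b/${canonical}/g")
    done < mapping.txt
    # Normalize the documents, then index the result (e.g. with bin/post).
    sed "${args[@]}" docs.json > docs_normalized.json

This sidesteps the synonyms-file size limit entirely, at the cost of
re-indexing whenever the mapping changes.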

> Am 31.07.2019 um 08:06 schrieb Salmaan Rashid Syed 
> <salmaan.ras...@mroads.com>:
> 
> Hi all,
> 
> Thanks for your invaluable and helpful answers.
> 
> I currently don't have an external ZooKeeper running. I am following the
> documentation for SolrCloud without an external ZooKeeper, and will add
> one later once the changes work as expected.
> 
> *1) Will I still need to make changes to zookeeper-env.sh, or will the
> changes to solr.in.sh suffice?*
> 
> I have an additional query that is slightly off topic but related to
> synonyms.
> My synonyms file will be updated with new words over time. What is the
> procedure for updating the synonyms file without shutting down Solr in
> production?
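> For example, would re-uploading the file to ZooKeeper and then reloading
> the collection be enough? Something like this (the config, collection,
> and host names below are just placeholders):
> 
>     bin/solr zk cp file:synonyms.txt zk:/configs/myconfig/synonyms.txt -z localhost:9983
>     curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"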
> 
> What I am thinking is to replace all the similar words in the documents
> using an external program before I index them into Solr. This way I don't
> have to worry about the synonyms file size or about updating it.
> 
> *2) Do you think this is a better way forward?*
> 
> Thanks for all your help.
> 
> Regards,
> Salmaan
> 
> On Tue, Jul 30, 2019 at 4:53 PM Bernd Fehling <
> bernd.fehl...@uni-bielefeld.de> wrote:
> 
>> You have to increase the -Djute.maxbuffer for large configs.
>> 
>> In Solr's bin/solr.in.sh use e.g.
>> SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10000000"
>> This will increase maxbuffer for ZooKeeper on the Solr side to 10MB.
>> 
>> In ZooKeeper's conf/zookeeper-env.sh use
>> SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djute.maxbuffer=10000000"
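>> 
>> Both processes need a restart for the flag to take effect. After that
>> the large config uploads normally, e.g. (config name and path are
>> examples):
>> 
>>     bin/solr zk upconfig -n myconfig -d /path/to/conf -z localhost:2181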
>> 
>> I have a >10MB thesaurus and use 30MB for jute.maxbuffer; it works perfectly.
>> 
>> Regards
>> 
>> 
>>> Am 30.07.19 um 13:09 schrieb Salmaan Rashid Syed:
>>> Hi Solr Users,
>>> 
>>> I have a very big synonyms file (>5MB). I am unable to start Solr in cloud
>>> mode because it throws an error stating that the synonyms file is too
>>> large. I figured out that ZooKeeper doesn't accept files greater than 1MB
>>> in size by default.
>>> 
>>> I tried to break my synonyms file down into smaller chunks of less than
>>> 1MB each. But I am not sure how to include all the filenames in the
>>> Solr schema.
>>> 
>>> Should they be separated by commas, like synonyms = "__1_synonyms.txt,
>>> __2_synonyms.txt, __3_synonyms.txt"?
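>>> For example, would the full filter line look something like this (file
>>> names made up)?
>>> 
>>>     <filter class="solr.SynonymGraphFilterFactory"
>>>             synonyms="__1_synonyms.txt,__2_synonyms.txt,__3_synonyms.txt"
>>>             ignoreCase="true" expand="true"/>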
>>> 
>>> Or is there a better way of doing this? Will the bigger file, when broken
>>> down into smaller chunks, be uploaded to ZooKeeper as well?
>>> 
>>> Please help, or guide me to the relevant documentation.
>>> 
>>> Thank you.
>>> 
>>> Regards.
>>> Salmaan.
>>> 
>> 
