Maybe you can calculate the score as part of the loading?
Which Solr version are you using?
Are you doing some heavy lifting in the constructor or in your filter?
> On 27.11.2019 at 09:34, Sripra deep wrote:
>
>
> Hi Jörn Franke,
>
> I modified the custo
And have you tried how fast it is if you don’t do anything in this method?
> On 27.11.2019 at 07:52, Sripra deep wrote:
>
> Hi Team,
> I wrote a custom sort function that will read the field value and parse
> and returns a float value that will be used for sorting. this field is
> indexed,
What methods do you use for your condition checks? Regexes? Then you could, for
instance, precompile the regexes (saves a lot of time). Any other methods? I
don’t ask about the exact condition check, but only about the methods you use
within those checks.
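As a minimal sketch of what precompiling looks like (the pattern and class name here are made up for illustration): compile the Pattern once as a field instead of calling String.matches per document, which recompiles the regex on every call.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PriceCheck {
    // compiled once, reused for every checked value
    private static final Pattern PRICE = Pattern.compile("\\d+\\.\\d{2}");

    public static boolean looksLikePrice(String value) {
        Matcher m = PRICE.matcher(value);
        return m.matches();
    }
}
```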
> On 27.11.2019 at 07:52, Sripra deep wrote:
Did you update the Java version to 8? Did you upgrade the MySQL driver to the
latest version?
> On 22.11.2019 at 20:43, Shashank Bellary wrote:
>
>
>
> Hi Folks
> I migrated from Solr 4 to 7.5 and I see an issue with the way DIH is working.
> I use `JdbcDataSource` and here the config
Stemming involved?
> On 22.11.2019 at 14:23, Moyer, Brett wrote:
>
> Hello, we have spellcheck running, using the index as the dictionary. An odd
> use case came up today wanted to get your thoughts and see if what we
> determined is correct. Use case: User sends a query for q=brokerage,
You are switching 2 major versions. You probably need to delete the
collections (fully, not only via the delete command) and reindex.
> On 12.11.2019 at 21:42, Sujatha Arun wrote:
>
> We recently migrated from 6.6.2 to 8.2. We are seeing issues with indexing
> where the leader and the replica
It sounds like you are looking for a suggester.
You can use the suggester of Solr.
For the visualization part: Angular has a suggestion box that can ingest the
results from Solr.
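As a rough sketch, the suggester configuration in solrconfig.xml could look like this (field, suggester, and handler names are examples; see the Solr Reference Guide for the full set of options):

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
</searchComponent>

<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.dictionary">mySuggester</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```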
> On 21.11.2019 at 16:42, rhys J wrote:
>
> Are there any recommended APIs or code examples of using Solr and then
>
will echo what Jorn said though - I wouldn't expose Solr to the internet
> or directly without some sort of API. Whether you do
> authentication/authorization at the API is a separate question.
>
> Kevin Risden
>
>
>> On Wed, Nov 20, 2019 at 1:54 PM Jörn Franke wrote:
>&
I would not give users direct access to Solr - even with the LDAP plugin. Build a
REST interface or web interface that does the authentication, authorization,
and security sanitization. Then you can also better manage excessive queries or
explicitly forbid certain types of queries (eg specific
Have you checked the log files of Solr?
Do you have a service mesh in-between? Could it be something at the network
layer/container orchestration that is blocking requests for some minutes?
> On 20.11.2019 at 10:32, Koen De Groote wrote:
>
> Hello
>
> I was testing some backup/restore
is a bit too much overhead for something we just
> set once.
>
> Mike
>
> -----Original Message-
> From: Jörn Franke
> Sent: Tuesday, November 19, 2019 2:54 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Zk upconfig command is appending local directory to defau
I would use the configset API - it is cleaner for production deployments
and you do not have to deal with the zkCli script:
https://lucene.apache.org/solr/guide/7_4/configsets-api.html
> On 18.11.2019 at 15:48, Michael Becker wrote:
>
> I’ve run into an issue when attempting to
y:solr.NRTCachingDirectoryFactory}"/>
>
> attaching the screenshot of physical memory and cpu.
> Please let me know your thoughts on the below issue.
>
>
>> On Fri, Nov 15, 2019 at 2:18 AM Jörn Franke wrote:
>> Do you use a updateprocess factory? How does it look like?
attaching the screenshot of physical memory and cpu.
>
> Please let me know your thoughts on the below issue.
>
>
>
> On Fri, Nov 15, 2019 at 2:18 AM Jörn Franke wrote:
>
>> Do you use a updateprocess factory? How does it look like?
>>
>> What is the physica
Do you have some suggester or similar that is possibly rebuilding automatically
after the restore?
> On 15.11.2019 at 16:36, Koen De Groote wrote:
>
> Greetings all,
>
> I was testing some backup/restore scenarios.
>
> 1 of them is Solr7.6 in a docker container(7.6.0-slim), set up as
> SolrCloud,
Do you use an update processor factory? What does it look like?
What is the physical memory size and CPU?
What do you mean by “there are 64 cores sending concurrently”? An application
has 64 threads that send those updates concurrently?
> On 15.11.2019 at 02:14, Fiz N wrote:
>
> Hi Solr Experts,
Use Kerberos or a JWT token.
> On 11.11.2019 at 11:41, Kommu, Vinodh K. wrote:
>
> Hi,
>
> After creating admin user in Solr when security is enabled, we have to store
> the admin user's credentials in plain text format. Is there any option or a
> way to encrypt the plain text password?
>
>
>>
>>
>>
>>
>>
>> The field is a UUID in the database, so it's definitely valid and without
>> prefix. Where can I double check for myself of the DataImportHandler
>> seralises an UUID in the code?
>>
It seems there is a prefix java.util.UUID: in front of your UUID. Any idea
where it comes from? Is it also like this in the database? Is your import
handler maybe receiving a Java object (java.util.UUID) that is not converted
correctly to a string?
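If that is the cause, an explicit conversion before indexing would avoid it. A small sketch (the class name and the defensive prefix-stripping are assumptions based on the symptom described, not the DataImportHandler's actual code path):

```java
import java.util.UUID;

public class UuidNormalizer {
    // Convert a value that may arrive as a java.util.UUID object into the
    // canonical 8-4-4-4-12 string form before it is sent to Solr.
    public static String normalize(Object dbValue) {
        if (dbValue instanceof UUID) {
            return dbValue.toString();
        }
        String s = String.valueOf(dbValue);
        // defensively strip the prefix observed in the index (assumption)
        String prefix = "java.util.UUID:";
        return s.startsWith(prefix) ? s.substring(prefix.length()) : s;
    }
}
```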
> On 14.11.2019 at 11:52, Boris Chazalet wrote
You can use nested indexing and index both types of documents in one core.
https://lucene.apache.org/solr/guide/8_1/indexing-nested-documents.html
However, what is the use case for Solr if you already have a database?
> On 13.11.2019 at 20:50, rhys J wrote:
>
> I have more than one core.
Which Solr version are you using?
The command below suggests you are using 6.3 - is this correct?
Have you restarted the Solr server after copying?
> On 08.11.2019 at 18:42, Lewin Joy (TMNA) wrote:
>
> Hi,
>
> How do I use the xlsx response writer to extract my results to an excel file?
>
>
I would convert them to UTF-8 before posting and use UTF-8 in your application.
Most of the web and most applications use UTF-8. If you use other encodings you
will always run into problems.
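A minimal sketch of such a conversion (the source charset here is assumed to be Windows-1252; use whatever the documents are actually encoded in):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class ToUtf8 {
    // Decode bytes with the real source charset, then re-encode as UTF-8.
    public static byte[] convert(byte[] legacyBytes, Charset sourceCharset) {
        String text = new String(legacyBytes, sourceCharset);
        return text.getBytes(StandardCharsets.UTF_8);
    }
}
```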
> On 08.11.2019 at 07:47, lala wrote:
>
> I am using the /update/extract request handler to push
I created a JIRA for this:
https://issues.apache.org/jira/browse/SOLR-13894
On Wed, Nov 6, 2019 at 10:45 AM Jörn Franke wrote:
> I have checked now Solr 8.3 server in admin UI. Same issue.
>
> Reproduction:
> select(search(testcollection,q=“test”,df=“Default”,defType=“edismax”,f
is generated by select (and not coming from the collection), there
must be an issue with select.
Any idea why this is happening?
Debug logs do not show any error, and the expression is correctly received by
Solr.
Thank you.
Best regards
> On 05.11.2019 at 14:59, Jörn Fra
Never mind. Restart of browser worked.
> On 06.11.2019 at 10:32, Jörn Franke wrote:
>
> Hi,
>
> After upgrading to Solr 8.3 I observe that in the Admin UI the collection
> selector is greyed out. I am using Chrome. The core selector works as
> expected.
>
> Any
Hi,
After upgrading to Solr 8.3 I observe that in the Admin UI the collection
selector is greyed out. I am using Chrome. The core selector works as expected.
Any idea why this is happening?
Thank you.
Best regards
m/
>
>
>> On Mon, Nov 4, 2019 at 9:09 AM Jörn Franke wrote:
>>
>> Most likely this issue can also be reproduced in the admin UI for the
>> streaming handler of a collection.
>>
>>>> On 04.11.2019 at 13:32, Jörn Franke wrote:
>>>
&
You can unzip it beforehand. Or am I overlooking something?
> On 05.11.2019 at 13:00, Biswarup Roy wrote:
>
> Hello,
>
> I have a compressed folder (.zip) which contains the PDFs, TXTs, and XML
> file.
> I am trying to index that folder in Solr Cloud, but not being able to do
> that.
> I am
I don’t understand why it is not possible.
However, why don’t you simply overwrite the existing document instead of
doing add+delete?
> On 04.11.2019 at 15:12, Khare, Kushal (MIND)
> :
>
> Hello mates!
> I want to know how we can delete the documents from the Solr index . Suppose
> for my
Most likely this issue can also be reproduced in the admin UI for the
streaming handler of a collection.
> On 04.11.2019 at 13:32, Jörn Franke wrote:
>
> Hi,
>
> I use streaming expressions, e.g.
> Sort(Select(search(...),id,if(eq(1,1),Y,N) as found), by=“field A asc
Hi,
I use streaming expressions, e.g.
Sort(Select(search(...),id,if(eq(1,1),Y,N) as found), by=“field A asc”)
(Using the export handler; sort is not really mandatory, I will remove it later
anyway)
This works perfectly fine if I use Solr 8.2.0 (server + client). It returns
Tuples in the form {
Yes, simply search the mailing list or the web for embedded Solr and you will
find what you need. Nevertheless, running embedded is just for development
(also in case of Spring and others). Avoid it for an end user facing server
application.
> On 03.11.2019 at 17:02, Java Developer wrote:
>
descriptions.html>. See,
> even Ref Guide 7.7 mentions
> <https://lucene.apache.org/solr/guide/7_7/analyzers.html> it in one of its
> examples (maybe because the material was just copied).
>
>> On Fri, 1 Nov 2019 at 17:06, Jörn Franke wrote:
>>
>> https://luce
uot;true"/>
> ignoreCase="true" synonyms="synonyms.txt"/>
>
>
>
>
>> On Fri, Nov 1, 2019 at 2:10 PM Jörn Franke wrote:
>>
>> How did you define the field type? Probably you have syntax errors there.
>> I recomme
How did you define the field type? Probably you have syntax errors there. I
recommend using the Schema REST API instead of schema.xml, as it will give you
better feedback on what is wrong and also allows better versioning of the
schema in a source code repository.
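As an illustration, a sketch of building the JSON body for a Schema API add-field call (field name and type are made up); the payload would be POSTed to http://&lt;host&gt;:8983/solr/&lt;collection&gt;/schema:

```java
public class SchemaApiPayload {
    // Build the JSON body for an "add-field" Schema API request.
    public static String addField(String name, String type, boolean stored) {
        return "{ \"add-field\": { \"name\": \"" + name + "\","
             + " \"type\": \"" + type + "\","
             + " \"stored\": " + stored + " } }";
    }
}
```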
I recommend integrating log4j2 into the app instead of using println. Then you
will see all the log statements, including those of Solr, in a log file that
will point you to the issue.
> On 01.11.2019 at 07:46, Khare, Kushal (MIND)
> :
>
> Hello mates !
> Hope you people are doing
to the same cluster. Is there a way to sync
> the data in node 2 which doesn't have a collection as it is joined recently
> and also no collection has been created.
>
>> On Thu, 31 Oct, 2019, 5:58 PM Jörn Franke, wrote:
>>
>> You need to create a replica of the collection
You need to create a replica of the collection on the other node:
https://lucene.apache.org/solr/guide/6_6/collections-api.html
See addreplica
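The ADDREPLICA call looks roughly like this (host, collection, shard, and node name are placeholders):

```
http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=<collection>&shard=shard1&node=<nodeName>
```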
>> On 31.10.2019 at 09:46, Pranaya Behera wrote:
> Hi,
> I have one node started with solrcloud. I have created one collection
> with the
Maybe some additional considerations:
If you need to upgrade Solr then eventually you need to reindex.
If you change fields or add fields then you need to reindex.
Both are much faster if you have an external program that converts rich
documents (pdf, word, ocr) to Text once and you use the text
Which Solr version are you using, and how often did you repeat the test?
> On 25.10.2019 at 09:16, Dominique Bejean wrote:
>
> Hi,
>
> I made some benchmarks for bulk indexing in order to compare performances
> and ressources usage for NRT versus TLOG replica.
>
> Environnent :
> * Solrcloud
g to 8 executors within the same spark job by using the
> "dataframe.coalesce" feature which does not shuffle the data at all and
> keeps both spark cluster and solr quiet in term of network.
>
> Thanks
>
>> On Sat, Oct 19, 2019 at 10:10:36PM +0200, Jörn Franke wrote:
>> Maybe you nee
Maybe you need to give more details. I always recommend trying and testing
yourself, as you know your own solution best. Depending on your Spark process,
atomic updates could be faster.
With Spark-Solr, additional complexity comes into play. You could have too many
executors for your Solr instance(s), ie a
You need to obtain / renew your Kerberos ticket using kinit.
> On 19.10.2019 at 12:31, Lvyankui wrote:
>
> SolrCloud mode, Solr and Zookeeper enabled kerberos, create collection
> failed with following command
> curl --negotiate -u : 'http://
>
Even if you do not have a dedicated zkRoot node you will need to provide / in
the connection string.
Then, even if the zk nodes can connect to each other, it does not mean they
form an ensemble. You need to adapt zoo.cfg on all nodes and add all nodes to
it. Additionally, all of them will need a myid file.
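For reference, a minimal sketch of an ensemble zoo.cfg (hostnames and paths are examples); the same file goes on all three nodes, and each node additionally needs a myid file in dataDir containing its own id:

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
```

E.g. on zk1: echo 1 > /var/lib/zookeeper/myid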
Could it be that you start the Solr command too early, i.e. before the network
is set up in the Docker container?
Normally I would also expect that a zkRoot is specified.
Can the ZK nodes talk to each other?
Have you tried to specify it in the Solr config?
Normally, I would expect that the Solr
I would try JDK 11 - it works much better than JDK 9 in general.
I don‘t think JDK 13 with ZGC will bring you better results. There seems to be
something strange with the JDK version or Solr version and some settings.
Then, make sure that you have much more free memory for the OS cache than the
The best way is always to test yourself. I have used Solr 8.1 / 8.2 with
OpenJDK 11 on RHEL 7. OpenJDK 11 was chosen as this will be the minimal
compatible one in Solr 9.0 and because older JDKs are already out of support.
However, I don’t know to what extent this is comparable with your setup.
>
As the others said: move to the newer GC. I would also use this opportunity to
work with the default Java options of Solr 7 and then tune again. If you change
major versions you should always review the GC settings if they still make
sense.
> On 02.10.2019 at 23:14, Solr User wrote:
>
>
Maybe you can sort later using Spark or similar. For that you don’t need a
full-blown cluster - it also runs on localhost.
> On 03.10.2019 at 09:49, Edward Turner wrote:
>
> Hi Erick,
>
> Many thanks for your detailed reply. It's really good information for us to
> know, and although not
I think it is a Chrome issue. I observed the same, but it disappeared - I guess
due to a Chrome update.
> On 02.10.2019 at 17:47, Mel Mason wrote:
>
> Hi,
>
> Is anyone else experiencing problems using the admin interface on Chrome?
> It's been working for us for years, but suddenly this last
Why do you need 1000 qps?
> On 30.09.2019 at 07:45, Yasufumi Mizoguchi wrote:
>
> Hi,
>
> I am trying some tests to confirm if single Solr instance can perform over
> 1000 queries per second(!).
>
> But now, although CPU usage is 40% or so and iowait is almost 0%,
> throughput does not
Some food for thought: if ZooKeeper can dynamically reconfigure, then Solr must
be able to do so as well. Let’s assume you start with an ensemble
server1,server2,server3 and store this in the Solr config. During the lifetime
of the Solr service it is changed to server4,server5,server6. Now Solr
Check the log files on the collection reload.
About your regex: check it with a web page that validates Java regexes - there
can be subtle differences between Java, JavaScript, PHP etc.
Then it could be that your original text is not UTF-8 encoded, but Windows or
similar.
Check also if you have special
The newest ZooKeeper version supports dynamic changes of the ZK instances:
https://zookeeper.apache.org/doc/r3.5.3-beta/zookeeperReconfig.html
However, for that to work properly in case of a Solr restart, you always need a
minimal set of servers that do not change, and just increase/decrease additional
, 2019 at 8:54 AM Mikhail Khludnev wrote:
> Hello, Jörn.
> Have you tried to find a parent doc in the context which is passed as a
> second argument into ScriptTransformer?
>
> On Wed, Sep 18, 2019 at 9:56 PM Jörn Franke wrote:
> >
> > Hi,
> >
> > I load a set
so you an see how to get started...
>
> https://lucidworks.com/post/indexing-with-solrj/
>
> Best,
> Erick
>
>> On Wed, Sep 18, 2019 at 2:56 PM Jörn Franke wrote:
>>
>> Hi,
>>
>> I load a set of documents. Based on these documents some logic need
Hi,
I load a set of documents. Based on these documents some logic needs to be
applied to split them into chapters (this is done). One whole document is
loaded as a parent. Chapters of the whole document + metadata should be
loaded as child documents of this parent.
I want to now collect
Hi Michael,
Thank you for sharing. You are right about your approach of not customizing the
distribution.
Solr supports JDK 8, and its latest versions (8.x) also support JDK 11. I would
not recommend using it with JDK 9 or JDK 10, as they are out of support in many
Java distributions. It might also be that
Do you commit after running the delete?
> On 09.09.2019 at 06:59, Jayadevan Maymala wrote:
>
> Hello All,
>
> I have a 3-node Solr cluster using a 3-node Zoookeeper system. Solr Version
> is 7.3.0. We have batch deletes which were working a few days ago. All of a
> sudden, they stopped
I am not 100% sure if Solr has something out of the box, but you could
implement a bloom filter (https://en.wikipedia.org/wiki/Bloom_filter) and store
it in Solr. It is a probabilistic data structure that does not grow, but it can
achieve your use case.
However, it has a caveat: it can, for
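A minimal, non-Solr-specific sketch of such a filter (sizes, hash scheme, and class name are illustrative; a production implementation would use a proper hash such as MurmurHash):

```java
import java.util.BitSet;

// Fixed-size Bloom filter with k indices derived from two base hashes
// (Kirsch–Mitzenmacher double hashing).
public class BloomFilter {
    private final BitSet bits;
    private final int size;
    private final int k;

    public BloomFilter(int size, int k) {
        this.bits = new BitSet(size);
        this.size = size;
        this.k = k;
    }

    private int index(String value, int i) {
        int h1 = value.hashCode();
        int h2 = (h1 >>> 16) | (h1 << 16);  // crude second hash for the sketch
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String value) {
        for (int i = 0; i < k; i++) bits.set(index(value, i));
    }

    // false => definitely absent; true => probably present (false positives possible)
    public boolean mightContain(String value) {
        for (int i = 0; i < k; i++) {
            if (!bits.get(index(value, i))) return false;
        }
        return true;
    }
}
```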
A 1-node ZooKeeper ensemble does not sound very healthy.
> On 05.09.2019 at 13:07, Doss wrote:
>
> Hi,
>
> We are using 3 node SOLR (7.0.1) cloud setup 1 node zookeeper ensemble.
> Each system has 16CPUs, 90GB RAM (14GB HEAP), 130 cores (3 replicas NRT)
> with index size ranging from 700MB to
uirement? If yes, then how to deal with that?
>
> Cheers!
> -Original Message-
> From: Jörn Franke [mailto:jornfra...@gmail.com]
> Sent: 04 September 2019 11:17
> To: solr-user@lucene.apache.org
> Subject: Re: Skip Headers & Footers while text extraction using Apach
People here are in different time zones and have their normal jobs for which
they are actually paid; they provide answers to questions like the one below in
their spare time. There are also a wide number of resources out on the
Internet.
It also cannot harm to read more about the formats that you are processing and
How many ZooKeepers do you have? How many collections? What is their size?
How much CPU / memory do you give per container? How much heap in comparison to
the total memory of the container?
> On 03.09.2019 at 17:49, Andrew Kettmann wrote:
>
> Currently our 7.7.2 cluster has ~600 hosts and each
If you have a properly secured cluster, e.g. with Kerberos, then you should not
update files in ZK directly. Use the corresponding Solr REST interfaces; then
you are also less likely to mess something up.
If you want to have HA you should have at least 3 Solr nodes and replicate the
collection to all
How do you send the request? You need to specify the update.chain parameter
with the name of the update chain, or define it as the default.
> On 03.09.2019 at 12:14, Arturas Mazeika wrote:
>
> Hi Solr Fans,
>
> I am trying to figure out how to use the parse-date processor for pdates.
>
> I am
What is the reason for this number of replicas? Solr should work fine, but
maybe it is worth consolidating some collections to also avoid administrative
overhead.
> On 29.08.2019 at 05:27, Hongxu Ma wrote:
>
> Hi
> I have a solr-cloud cluster, but it's unstable when collection number is
It could be sensible to have one spellchecker per language (as a different
endpoint or selected via a query parameter at runtime). Alternatively,
depending on your use case, you could get away with a generic field type that
does not do anything language-specific, but I doubt it.
> On 29.08.2019 at 16:20,
Maybe there are more details in the log files?
It could also be that a parameter is configured with a different default. Try
also to change the Solr version in solrconfig.xml to a higher one, e.g. 8.0.0.
> On 29.08.2019 at 16:12, Joe Obernberger wrote:
>
> Thank you Erick. I'm upgrading from
It is simply a risk. It is not tested. Any functionality may eventually fail or
have unknown side effects in the long run. It is also not clear to me why you
want to update Java, but not Solr. If you want the latest security fixes, bug
fixes, and new features, then I would first go for a new Solr
You need to provide a little bit more detail. What is your schema? How is the
document structured? Where do you get the metadata from?
Have you read the Solr Reference Guide? Have you read a book about Solr?
> On 28.08.2019 at 08:10, Khare, Kushal (MIND)
> :
>
> Could anyone please help
e & Support: +61 (0) 2 8417 2339
>>
>> 543 NW York Drive, Suite 100, Bend, OR 97703
>>
>> LinkedIn <http://www.linkedin.com/company/manzama> | Twitter
>> <https://twitter.com/ManzamaInc> | Facebook
>> <http://www.facebook.com/manzamainc> |
You should definitely enable HTTPS even if it is not exposed to the Internet.
Even within your own company network it is good security practice to enable
HTTPS.
About your error: this is due to a setting in Zookeeper:
Hi,
Can you provide an example of what you want to achieve?
Multiple requests in parallel?
Are those requests related?
Best regards
> On 19.08.2019 at 01:44, Prabhu Dhanaraj
> :
>
> Hi Team
>
> I would like to know if there is any way where we can combine multiple
> requests and send
mentation of
> threads in solr?
>
>> On Fri 16 Aug, 2019, 4:52 PM Jörn Franke, wrote:
>>
>> Is your custom query parser multithreaded and leverages all cores?
>>
>>> On 16.08.2019 at 13:12, Vignan Malyala wrote:
>>>
>>> I want response time bel
ur in my
> case.
>
>
>> On Fri 16 Aug, 2019, 11:47 AM Jörn Franke, wrote:
>>
>> How much response time do you require?
>> I think you have to solve the issue in your code by introducing higher
>> parallelism during calculation and potentially more cores.
>>
>
How much response time do you require?
I think you have to solve the issue in your code by introducing higher
parallelism during the calculation, and potentially more cores.
Maybe you can also precalculate what you do, cache it, and use the
precalculated values during the request.
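A minimal sketch of such a cache (class and method names are made up; in practice you would also bound the cache size, e.g. with an LRU policy):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Memoize an expensive per-key calculation so repeated requests reuse the
// precalculated value instead of recomputing it.
public class ScoreCache {
    private final Map<String, Double> cache = new ConcurrentHashMap<>();
    private final Function<String, Double> expensiveCalc;

    public ScoreCache(Function<String, Double> expensiveCalc) {
        this.expensiveCalc = expensiveCalc;
    }

    public double score(String key) {
        return cache.computeIfAbsent(key, expensiveCalc);
    }
}
```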
> On 16.08.2019 at
Solr 8.2 works with ZooKeeper 3.5.5 - with a minor glitch in the Admin UI,
which does not affect Solr itself.
> On 14.08.2019 at 19:46, Paul Russell wrote:
>
> ALCON,
>
>
>
> We have been asked by our customer to not install Zookeeper V3.4.x for use
> with the SOLR cluster. Currently we have a 3 node
It depends on whether they make breaking changes in commons-lang or not.
By using an old version of a library such as commons-lang you may introduce
security issues into your setup.
> On 13.08.2019 at 06:12, Zheng Lin Edwin Yeo wrote:
>
> I have found that the Lingo3GClusteringAlgorithm will work if I
ZooKeeper on the same machine? Maybe you take memory from it?
Do you observe swapping?
Normally your memory should be much larger than the heap, because Solr heavily
uses OS caches, which are not on the heap.
> On 07.08.2019 at 15:22, Abhimeet, Kumar wrote:
>
> Hi Team,
>
>
>
> We are facing
Do you have some more information on index and size?
Do you have to store everything in the index? Can you store some data (blobs
etc.) outside?
I think you are generally right with your solution, but also be aware that it
is sometimes cheaper to have several servers instead of keeping engineer
Not sure if this is possible, but why not create a query handler in Solr with
any custom query and use that as a ping replacement?
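A sketch of what such a handler could look like in solrconfig.xml (handler name and query are made up; untested):

```xml
<requestHandler name="/health" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="q">id:health-check-doc</str>
    <str name="rows">0</str>
  </lst>
</requestHandler>
```

Monitoring could then call /solr/&lt;collection&gt;/health instead of the generic ping handler.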
> On 02.08.2019 at 15:48, dinesh naik wrote:
>
> Hi all,
> I have few clusters with huge data set and whenever a node goes down its
> not able to recover due
so tried to put * , but I am getting the same error as well.
> 4lw.commands.whitelist=*
>
> Regards,
> Edwin
>
>> On Thu, 1 Aug 2019 at 21:34, Jörn Franke wrote:
>>
>> Spaces did not change the situation. In standalone it works without spaces
>> and the issue i
“ in the source
ode
> On 02.08.2019 at 03:46, Zheng Lin Edwin Yeo wrote:
>
> Yes, I tried with space and the same error occurs.
>
> I have also tried to put * , but I am getting the same error as well.
> 4lw.commands.whitelist=*
>
> Regards,
> Edwin
>
>> On T
You can use the configset API:
https://lucene.apache.org/solr/guide/7_7/configsets-api.html
I don’t recommend using schema.xml, but managed schemas:
https://lucene.apache.org/solr/guide/6_6/schema-api.html
For people new to Solr I generally recommend reading a recent book about Solr
from
e results.
>
> You could try to invoke each of the 4LW commands manually on each ZK
> server using telnet, and see if that succeeds for all hosts and commands..
>
> telnet host.name 2181
> conf
>
> Jan Høydahl
>
> > On 1 Aug 2019 at 15:34, Jörn Franke wrote:
&
mntr,ruok,conf
>
> 3> if <1> and <2> don’t work, what happens if you start your ZooKeepers with
> -Dzookeeper.4lw.commands.whitelist=….
>
> If it’s not <1> or <2>, please raise a JIRA.
>
> Best,
> Erick
>
> Also, see: SOLR-13502 (no work has
d indexing
> even when running ZooKeeper ensemble?
>
> Regards,
> Edwin
>
>
>> On Thu, 1 Aug 2019 at 17:39, Jörn Franke wrote:
>>
>> I confirm the issue.
>>
>> Interestingly it does not happen with ZK standalone, but only in a ZK
>> Ensemble.
>&
I confirm the issue.
Interestingly, it does not happen with ZK standalone, but only in a ZK ensemble.
It seems to be mainly cosmetic in the admin UI, because Solr appears to function
normally.
> On 01.08.2019 at 03:31, Zheng Lin Edwin Yeo wrote:
>
> Yes. You can get my full solr.log from the
Did you update the correct zoo.cfg? Did you restart ZooKeeper after the config
change?
> On 30.07.2019 at 04:05, Zheng Lin Edwin Yeo wrote:
>
> Hi,
>
> I am using the new Solr 8.2.0 with SolrCloud and external ZooKeeper 3.5.5.
>
> However, after adding in the line under zoo.cfg
>
The idea of using an external program could be good.
> On 31.07.2019 at 08:06, Salmaan Rashid Syed
> :
>
> Hi all,
>
> Thanks for your invaluable and helpful answers.
>
> I currently don't have an external zookeeper loaded. I am working as per
> the documentation for solr cloud
Ad 1) It needs to be configured on the ZooKeeper server and in Solr and all
other ZK clients.
Ad 2) You never need to shut it down in production for updating synonym files.
Use the configset API to re-upload the full configuration including updated
synonyms:
Two Xmx settings do not make sense.
Your heap seems unusually large; usually your heap should be much smaller than
the available memory, so Solr can use the rest for index caching, which is
off-heap.
> On 30.07.2019 at 13:25, Rodrigo Oliveira
> :
>
> Hi,
>
> My environment have 5 servers with solr + zookeeper
Aside from the fact that a 5 MB synonym file is rather unusual (what is the use
case for such a large synonym file?) and that it will have an impact on index
size and/or query time:
You can configure the ZooKeeper server and the Solr client to allow larger
files using the jute.maxbuffer option.
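A rough sketch of what that could look like (the 10 MB value is an example; jute.maxbuffer is a JVM system property, so it has to be set on both sides):

```
# ZooKeeper server, e.g. via JVMFLAGS in zookeeper-env.sh:
JVMFLAGS="$JVMFLAGS -Djute.maxbuffer=10485760"

# Solr side, e.g. via SOLR_OPTS in solr.in.sh:
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10485760"
```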
> On 30.07.2019 at
Maybe it would be good to state your configuration: how many machines, memory,
heap, CPU, OS. What performance do you have, what performance do you expect,
and which queries have the most performance problems?
Sometimes rendering in the UI can also be a performance bottleneck.
> On 24.07.2019 at
I think you can safely increase the heap size to 1 GB or whatever you need.
Be aware though:
Solr’s performance depends heavily on file system caches, which are not on the
heap! So you need more memory than what you configure as heap to be freely
available. How much more depends on your index size.
Another
Do you have exceptions in the log?
How much total memory do you have?
> On 23.07.2019 at 18:17, Rodrigo Oliveira
> :
>
> Hi,
>
> In the last 3 months I am using the Solr. However, yesterday my cluster was
> down.
>
> My environment is:
>
> I have 5 nodes from Solr/zookeeper (16 cpu x
Ideally you use scripts that can use the JVM/Java - this way you can always use
the latest SolrJ client library, but also other relevant libraries (e.g. Tika
for unstructured content).
This does not have to be Java directly, but can also be based on Scala or JVM
scripting languages, such as
As someone else wrote, there are a lot of uncertainties, and I recommend testing
yourself to find the optimal configuration. Some food for thought:
How many clients do you have and what is their concurrency? What operations
will they do? Do they access Solr directly? You can use JMeter to simulate
I agree - and it would give you the opportunity to use snapshots for backups
on S3.
> On 28.06.2019 at 15:06, Kyle Fransham wrote:
>
> Just my two cents, but why not put your data on EBS volumes and decouple
> from the AMI? This way you're storing the collections in the "amazon
>