Thomas:
If you go to the admin UI, pick a collection (or core) and go to the
"analysis" page. Put different values in the "index" and "query" entry
boxes. Sometimes a picture is worth a thousand words ;).
And, indeed, synonyms are one of the prime filters that a
Hi Thomas,
as you know, the two analyzers play in a different moment, with a
different input and a different goal for the corresponding output:
* index analyzer: input is a field value, output is used for building
the index
* query analyzer: input is a (user) query string, output is used for
building a (Solr) query
At index time a term dictionary is built
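For poking at the same thing programmatically, the admin "analysis" screen's backing endpoint can be called directly; a minimal sketch (hypothetical core name "mycore" and field type "text_general" — adjust to your schema) that builds such a request:

```python
from urllib.parse import urlencode

# analysis.fieldvalue is run through the index analyzer,
# analysis.query through the query analyzer, side by side.
params = {
    "analysis.fieldtype": "text_general",
    "analysis.fieldvalue": "Running dogs run",  # what the index analyzer sees
    "analysis.query": "run dog",                # what the query analyzer sees
    "wt": "json",
}
url = "http://localhost:8983/solr/mycore/analysis/field?" + urlencode(params)
print(url)
```

GET-ing that URL returns, per analyzer, the token stream after each filter in the chain — the same picture the admin UI draws.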
Hi,
We have the text field below configured on fields that are both stored and
indexed. It seems to me that applying the same filters on both index and query
would be redundant, and perhaps a waste of processing on the retrieval side if
the filter work was already done on the index side. Is that true?
Solr will check the schema and the index to see what's available, and
use the best option it can find for whatever it's trying to do. As far
as I know, Solr doesn't offer any direct ways to force a particular data
structure for a given operation. There might be some *indirect* ways
"My expectation is that scanning Doc Values might be faster than the inverted
index if a query matches more than 25% of documents."
I seriously doubt it. Or my expectations are really off base, which is
always possible, I confess I've never measured though. At a high
level:
indexed:
This is just a flattened internal representation of the actual nested docs.
On Mon, Aug 13, 2018 at 2:00 PM sonaliw wrote:
> I want to create a nested index structure with the SOLR import handler. I am
> using
> Solr 7.2. I have created data-config.xml
> (https://issues.apache.org/jira/brow
I want to create a nested index structure with the SOLR import handler. I am using
Solr 7.2. I have created data-config.xml
(https://issues.apache.org/jira/browse/SOLR-5147)
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/schema/FieldType.java#L881
https://github.com/apache/lucene-solr/blob/17eb8cd14d27d2680fe7c4b3871f3eb883542d34/solr/core/src/java/org/apache/solr/search/facet/FacetField.java#L106
On Mon, Aug 13, 2018 at 9:02 AM
Thanks Erick, Shawn and Tomoko for complete answers.
If I set both docValues and indexed to "true" on a field, will Solr know
which technique to use for faceting or searching? Or is there any way to
tell Solr which technique to use?
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
My expectation is that scanning Doc Values might be faster than the inverted
index if a query matches more than 25% of documents.
On Sun, Aug 12, 2018 at 7:59 PM Erick Erickson
wrote:
> bq. I have been informed that the performance of such a search is
> absolutely terrible.
>
> Y
ed. But to answer the question
"Who lives on Maple street" you have to read _everything_ in the
entire phone book. Think "table scan".
To answer the question "Who lives on Maple street", you want to index
all the text.
The whole point of docValues was that the struct
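The phone-book analogy can be sketched with toy Python structures (illustrative only, not Solr's actual on-disk formats): the inverted index answers "which docs match this term?" in one lookup, while docValues is a per-document column that must be scanned for that question but is ideal for counting and sorting:

```python
from collections import Counter

docs = {0: "maple", 1: "oak", 2: "maple"}  # doc id -> street field value

# Inverted index: term -> list of matching doc ids (search-friendly)
inverted = {}
for doc_id, term in docs.items():
    inverted.setdefault(term, []).append(doc_id)

# docValues: a flat column, doc id -> value (sort/facet-friendly)
doc_values = [docs[i] for i in range(len(docs))]

# "Who lives on Maple street?" via the inverted index: one lookup
maple_docs = inverted["maple"]

# The same question via docValues: a full "table scan" over every row
maple_scan = [i for i, v in enumerate(doc_values) if v == "maple"]

# Faceting via docValues: one sequential pass, count per value
facets = Counter(doc_values)
```

Both answer the search question, but the inverted index does it without touching non-matching documents, which is why it wins for search and docValues wins for faceting/sorting.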
On 8/12/2018 4:39 AM, Zahra Aminolroaya wrote:
Could we say that docvalue technique is better for sorting and faceting and
inverted index one is better for searching?
Yes. That is how things work.
If docValues do not exist, then an equivalent data structure must be
built in heap memory
> Could we say that docvalue technique is better for sorting and faceting
and inverted index one is better for searching?
The short answer is yes.
In addition, there are several special data structures for numeric/date
range/geo spatial search.
https://lucene.apache.org/solr/guide/7_4/field-ty
Could we say that docvalue technique is better for sorting and faceting and
inverted index one is better for searching?
Will I lose anything if I only use docvalue?
Does docvalue technique have better performance?
Hi,
Is there a way to monitor the size of the index broken by individual fields
across documents? I understand there are different parts - the inverted
index and the stored fields - and an estimate would be good start.
Thanks
John
On 7/27/2018 11:02 AM, cyndefromva wrote:
> I'm just curious why are there still so many 503 errors being generated
> (Error - Rsolr::Error::Http - 503 Service Unavailable - retrying ...)
>
> Is it related to all the "Error opening new searcher. exceeded limit of
> maxWarmingSearchers=2, try again
bq: Error opening new searcher. exceeded limit of maxWarmingSearchers=2
did you make sure that your indexing client isn't issuing commits all
the time? The other possible culprit (although I'd be very surprised)
is if you have your filterCache and queryResultCache autowarm settings
set extremely high.
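On the commits point: a common fix is to stop issuing explicit commits from the indexing client and bound visibility with commitWithin instead. A minimal sketch (hypothetical core and field names):

```python
import json
from urllib.parse import urlencode

# Ask Solr to make the docs searchable within 60s instead of
# committing (and opening a new searcher) after every batch.
params = urlencode({"commitWithin": 60000})
url = "http://localhost:8983/solr/mycore/update?" + params  # hypothetical core
payload = json.dumps([{"id": "1", "title_t": "hello"}])
# POST `payload` to `url` with Content-Type: application/json
```

With this, Solr coalesces commits on its own schedule, so warming searchers stop piling up past maxWarmingSearchers.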
That makes sense, the ulimit was too small and I've updated it.
I'm just curious why are there still so many 503 errors being generated
(Error - Rsolr::Error::Http - 503 Service Unavailable - retrying ...)
Is it related to all the "Error opening new searcher. exceeded limit of
maxWarmingSearcher
Am 27.07.2018 um 15:53 schrieb cyndefromva:
I have a Rails 5 application that uses Solr to index and search our site. The
sunspot gem is used to integrate Ruby and Solr. It's a relatively small
site (no more than 100,000 records) and has moderate usage (except for the
googlebot).
Until recently we regularly received 503 errors; reloadin
On 7/26/2018 1:32 PM, cyndefromva wrote:
At the point it starts failing I see a java exception: "java.io.IOException:
Too many open files" in the solr log file and a SolrException (Error opening
new searcher) is returned to the user.
The operating system where Solr is running needs its open file limit increased.
Regards,
Markus
-Original message-
> From:cyndefromva
> Sent: Thursday 26th July 2018 22:18
> To: solr-user@lucene.apache.org
> Subject: Recent configuration change to our site causes frequent index
> corruption
>
> I have Rails 5 application that uses solr to
Of course I saw this reference, but it is still not clear how exactly the
GeoJSON should look. Where do I put an item id?
A regular JSON index request looks like:
{
"id":"111",
"geo_srpt":
}
I tried to put a GeoJSON object as the geo_srpt value but it does not work.
https://lucene.apache.org/solr/guide/7_4/spatial-search.html#indexing-geojson-and-wkt
?
Regards,
Alex.
On 25 July 2018 at 16:15, SolrUser1543 wrote:
> I have look in reference guide and different wiki articles , but have not
> found anywhere an example of how index geojson .
>
>
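For reference, the ref-guide approach boils down to sending the GeoJSON as the string value of the spatial field; a sketch of such an update payload (hypothetical id and field name geo_srpt, assuming the field type is configured for GeoJSON as the linked section describes):

```python
import json

# The GeoJSON shape, serialized as a string, becomes the field's value.
geojson = json.dumps({"type": "Point", "coordinates": [-74.006, 40.713]})
doc = {"id": "111", "geo_srpt": geojson}
payload = json.dumps([doc])  # body for POST .../update?commit=true
```

The item id lives alongside the spatial field as an ordinary field; only the shape itself is GeoJSON.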
I have looked in the reference guide and various wiki articles, but have not
found an example anywhere of how to index GeoJSON.
I have the following field definition:
How should the POST request look in order to put GeoJSON in this field?
I have managed to index WKT, but not GeoJSON.
Subject: RE: Cannot index to 7.2.1 collection alias
>
> Hi Shawn,
>
> Indexing stack trace:
>
> null:java.lang.NullPointerException
> at
> org.apache.solr.servlet.HttpSolrCall.getCoreUrl(HttpSolrCall.java:931)
> at
> org.apache.solr.servlet.HttpSolr
)
at
org.apache.solr.cloud.OverseerCollectionMessageHandler.collectionCmd(OverseerCollectionMessageHandler.java:784)
Thanks,
Markus
-Original message-
> From:Shawn Heisey
> Sent: Tuesday 17th July 2018 16:39
> To: solr-user@lucene.apache.org
> Subject: Re: Cannot in
On 7/17/2018 6:28 AM, Markus Jelsma wrote:
Just attempted to connect and index a bunch of documents to a collection alias,
got a NPE right away. Can't find this error in Jira, did i overlook something?
Create new ticket?
Indexing to an alias should send the documents only to the
Additionally, reloading a collection alias also doesn't work. Can't find that
one in Jira either; new ticket?
Thanks,
Markus
-Original message-
> From:Markus Jelsma
> Sent: Tuesday 17th July 2018 14:28
> To: solr-user
> Subject: Cannot index to 7.2.1 colle
Hello,
Just attempted to connect and index a bunch of documents to a collection alias,
got a NPE right away. Can't find this error in Jira, did i overlook something?
Create new ticket?
Thanks,
Markus
Delete by query with an external job? Maybe even using date math in
the query to avoid hardcoding real dates.
Not everything needs to be inside of Solr.
Regards,
Alex
On Fri, Jul 13, 2018, 4:33 AM Adarsh_infor, wrote:
> Hi All,
>
> I have index which is being lying in produ
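Alex's suggestion can be sketched as a delete-by-query payload using date math (hypothetical date field timestamp_dt; NOW-270DAYS is evaluated by Solr at execution time, so no real dates get hardcoded):

```python
import json

# Delete everything whose timestamp is older than 270 days
# at the moment the query runs.
query = "timestamp_dt:[* TO NOW-270DAYS]"
payload = json.dumps({"delete": {"query": query}})
# POST `payload` to .../update?commit=true with Content-Type: application/json
```

Run it from cron (or any scheduler) daily and you get TTL behavior without Solr's DocExpirationUpdateProcessor.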
1> Well, a day is just 86,400 seconds. But that's just how
often the thread wakes up and looks for docs to delete
2> Maybe. If you already have a field that has the date you
want to use to expire the document, then no. Otherwise
you must re-index
3> no. How could it
Hi All,
I have an index which has been lying in production for quite some time. Now we
need to delete documents based on a date range, i.e., after 270 days we should
be able to delete documents older than that. I heard about this Time to Live
feature; I need to know a couple of
Hi Daphne,
the “possible analysis error” is a misleading error message (to be addressed in
SOLR-12477). The important piece is the
“java.lang.ArrayIndexOutOfBoundsException”, it looks like your index may be
corrupted in some way.
Tomás
> On Jul 11, 2018, at 3:01 PM, Liu, Daphne wr
Hello Solr Expert,
We are using Solr 6.3.0 and lately we are unable to write documents into our
index. Please see below error messages. Can anyone help us?
Thank you
On 6/26/2018 11:48 AM, Ritesh Kumar (Avanade) wrote:
> Is it possible to create an index field of type *dictionary*. I have
> seen stringarry, datetime, bool etc. but I am looking for a field type
> like list of objects.
The types that you find in a Solr schema are just made-up seq
Hey Erick,
Thanks for the response. It was a Sitecore-related modification we had to do
to make it work.
Thanks
Ritesh
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Tuesday, June 26, 2018 10:52 AM
To: solr-user
Subject: Re: Create an index field of type
Solr doesn't really deal at that level, it just searches
tokens...
Best,
Erick
On Tue, Jun 26, 2018 at 10:48 AM, Ritesh Kumar (Avanade) <
v-kur...@microsoft.com.invalid> wrote:
> Hello,
>
>
>
> Is it possible to create an index field of type *dictionary*. I have seen
> s
Hello,
Is it possible to create an index field of type dictionary? I have seen
string array, datetime, bool etc., but I am looking for a field type like a
list of objects.
Thanks
Ritesh
I'm getting an error on some of the nodes in my solr cloud cluster under
heavy indexing load. Once the error happens, that node, just repeatedly
gets this error over and over and will no longer index documents until a
restart. I believe the root cause of the error is:
File /solr7.1.0/UN
Hi Sushant,
while this is true in general, it won't hold here. If you split your
index, searching each split shard might be a bit faster, but
you'll increase search time much more because Solr needs to send your
search queries to all shards and then combine the results. So instead
Thank you for the detailed response, Erick. Very much appreciated. The reason
I am looking into splitting the index into two is because it's much faster
to search across a smaller index than a larger one.
On Wed, Jun 20, 2018 at 10:46 AM Erick Erickson
wrote:
> You still haven't answer
You still haven't answered _why_ you think splitting even a 20G index
is desirable. We regularly see 200G+ indexes per replica in the field,
so what's the point? Have you measured different setups to see if it's
a good idea? A 200G index needs some beefy hardware admittedly..
The index size is small because this is my local development copy. The
production index is more than 20GB. So I am working on getting the index
split and replicated on different nodes. Our current instance on prod is
single instance solr 6 which we are working on moving towards solrcloud 7
On
Use the indexupgrader tool or optimize your index before using splitshard.
Since this is a small index (< 5G), optimizing will not create an
overly-large segment, so that pitfall is avoided.
You haven't yet explained why you think splitting the index would be
beneficial. Splitting an in
How can I resolve this error?
On Wed, Jun 20, 2018 at 9:11 AM, Alexandre Rafalovitch
wrote:
> This seems more related to an old index upgraded to latest Solr rather than
> the split itself.
>
> Regards,
> Alex
>
> On Wed, Jun 20, 2018, 12:07 PM Sushant Vengu
My old solr instance was 6.6.3 and the current solrcloud I am building is
7.3.1. Are there any issues there?
On Wed, Jun 20, 2018 at 9:11 AM, Alexandre Rafalovitch
wrote:
> This seems more related to an old index upgraded to latest Solr rather than
> the split itself.
>
> Regards
This seems more related to an old index upgraded to latest Solr rather than
the split itself.
Regards,
Alex
On Wed, Jun 20, 2018, 12:07 PM Sushant Vengurlekar, <
svengurle...@curvolabs.com> wrote:
> Thanks for the reply Alessandro! Appreciate it.
>
> Below is the full reques
"QTime":6}},
"solr-1:8081_solr":{
"responseHeader":{
"status":0,
"QTime":1009}}},
"failure":{
"solr-1:8081_solr":"org.apache.solr.client.solrj.impl.Ht
Hi,
in the first place, why do you want to split a 2 GB index?
Nowadays that is a fairly small index.
Secondly, what you reported is incomplete.
I would expect a Caused By section in the stacktrace.
These are generic recommendations; always spend time scrupulously analysing
the problem you had.
How do I split indexes which are more than 2GB in size?
I get this error when I try to use SPLITSHARD on a collection of size more
than 2GB
2018-06-20 02:25:49.810 ERROR (qtp1025799482-19) [ ] o.a.s.s.HttpSolrCall
null:org.apache.solr.common.SolrException: SPLITSHARD failed to invoke
SPLIT core
Hi Team,
I am trying to index a document from HDFS in version solr 4.9 and getting
below error:
Command used by me :
QUEUE_NAME=default
MORPHLINE_CONF=/home/sshuser/solrruncls/morphline_retail_tr_2017.conf
OUTPUT_DIR=adl:///solr
ZK_HOST=zk0
There's no such syntax OOB.
You could append an index to each value. So your input doc would look something like:
doc 1= {
"id": "1",
"status": [
"b1",
"a2"
]
}
and search appropriately.
Perhaps this w
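Erick's workaround (baking the position into each value at index time) is plain client-side preprocessing; a sketch with a hypothetical derived field status_indexed:

```python
doc = {"id": "1", "status": ["b", "a"]}

# Append the 1-based position to each value before indexing, so a query
# like status_indexed:a2 matches "a" only when it sits in slot 2.
doc["status_indexed"] = [f"{v}{i}" for i, v in enumerate(doc["status"], start=1)]
print(doc["status_indexed"])  # → ['b1', 'a2']
```

The original status field can still be kept as-is for ordinary queries; only the derived field carries the positional information.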
Hi all,
Is there a way I can query a particular index of a multivalued field?
E.g., let's say I have a document like this:
doc 1= {
"id": "1",
"status": [
"b",
"a"
]
}
doc2= {
"id
Yes unix.
It was an amazing moment.
On Mon, Jun 4, 2018, 11:28 PM Erick Erickson
wrote:
> bq. To be clear I deleted the actual index files out from under the
> running master
>
> I'm assuming *nix here since Windows won't let you delete a file that
> has an open f
bq. To be clear I deleted the actual index files out from under the
running master
I'm assuming *nix here since Windows won't let you delete a file that
has an open file handle...
Did you then restart the master? Aside from any checks about refusing
to replicate an empty index, just de
Check the logs. I bet it says something like “refusing to fetch empty index.”
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Jun 4, 2018, at 1:41 PM, Jeff Courtade wrote:
>
> I am thankful for that!
>
> Could you point me at
I am thankful for that!
Could you point me at something that explains this maybe?
J
On Mon, Jun 4, 2018, 4:31 PM Shawn Heisey wrote:
> On 6/4/2018 12:15 PM, Jeff Courtade wrote:
> > This was strange as I would have thought the replica would have
> replicated
> > an empty ind
On 6/4/2018 12:15 PM, Jeff Courtade wrote:
> This was strange as I would have thought the replica would have replicated
> an empty index from the master.
Solr actually has protections in place to specifically PREVENT index
replication when the master has an empty index. This is so
that log if it is not clear.
Regards,
Aman
On Mon, Jun 4, 2018, 23:57 Jeff Courtade wrote:
> To be clear I deleted the actual index files out from under the running
> master
>
> On Mon, Jun 4, 2018, 2:25 PM Jeff Courtade wrote:
>
> > So are you saying it should have?
> >
To be clear I deleted the actual index files out from under the running
master
On Mon, Jun 4, 2018, 2:25 PM Jeff Courtade wrote:
> So are you saying it should have?
>
> It really acted like a normal function this happened on 5 different pairs
> in the same way.
>
>
> On M
> > The master and slave were both running and synchronized up to date
> >
> > I went on the master and deleted the index files while solr was running.
> > solr created new empty index files and continued to serve requests.
> > The slave did not delete its indexes and kept all of the old data in
> > place
Hi,
This I think is a very simple question.
I have a solr 4.3 master slave setup.
Simple replication.
The master and slave were both running and synchronized up to date
I went on the master and deleted the index files while solr was running.
solr created new empty index files and continued to
Someone needs to update the Ref Guide. That can be a patch submitted on a
JIRA issue, or a committer could forego a patch and make changes directly
with commits.
Otherwise, this wiki page is making a bad situation even worse.
On Tue, May 29, 2018 at 12:06 PM Tim Allison wrote:
> I’m happy to co
I’m happy to contribute to this message in any way I can. Let me know how
I can help.
On Tue, May 29, 2018 at 2:31 PM Cassandra Targett
wrote:
> It's not as simple as a banner. Information was added to the wiki that does
> not exist in the Ref Guide.
>
> Before you say "go look at the Ref Guide
It's not as simple as a banner. Information was added to the wiki that does
not exist in the Ref Guide.
Before you say "go look at the Ref Guide" you need to make sure it says
what you want it to say, and the creation of this page just 3 days ago
indicates to me that the Ref Guide is missing somet
On further reflection ,+1 to marking the Wiki page superseded by the
reference guide. I'd be fine with putting a banner at the top of all
the Wiki pages saying "check the Solr reference guide first" ;)
On Tue, May 29, 2018 at 10:59 AM, Cassandra Targett
wrote:
> Couldn't the same information on t
Couldn't the same information on that page be put into the Solr Ref Guide?
I mean, if that's what we recommend, it should be documented officially
that it's what we recommend.
I mean, is anyone surprised people keep stumbling over this? Shawn's wiki
page doesn't point to the Ref Guide (instead po
Thanks! now I can just record the URL and then paste it in ;)
Who knows, maybe people will see it first too!
On Sat, May 26, 2018 at 9:48 AM, Tim Allison wrote:
> W00t! Thank you, Shawn!
>
> The "don't use ERH in production" response comes up frequently enough
>> that I have created a wiki page
Thanks- It's actually more like a localhost/app2:
app2 in question is Omeka (digital publishing platform)
When Omeka is installed on a server, it's usually all alone on the server.
So you *tell* it to index something and what core corresponds to that index
and it indexes it?
If so, I
tored in separate folders in an html directory.
nothing in the main directory other than "This page is left blank"
(all pages are databases that are for internal use only).
How do I get Solr to index website.university.edu/app2 specifically?
I've been searching docs and Google for a while, but I can't seem to find
where can I
W00t! Thank you, Shawn!
The "don't use ERH in production" response comes up frequently enough
> that I have created a wiki page we can use for responses:
>
> https://wiki.apache.org/solr/RecommendCustomIndexingWithTika
>
> Tim, you are extremely well-qualified to expand and correct this page.
> Er
On 5/26/2018 4:52 AM, Tim Allison wrote:
Please see Erick Erickson’s evergreen advice and linked blog post:
https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201805.mbox/%3ccan4yxve_0gn0a1y7wjpr27inuddo6+jzwwfgvzkfs40gh3r...@mail.gmail.com%3e
The "don't use ERH in production" response
Extract text with Tika standalone first.
>>
>> Regards,
>> Alex
>>
>> On Thu, May 24, 2018, 5:05 AM Dimitris Kardarakos, <
>> dimitris.kardara...@iteam.gr> wrote:
>>
>> > Hello everyone.
>> >
>> > In Solr 7.3.0 I can successfu
Hello everyone.
In Solr 7.3.0 I can successfully index the content of zip files.
But if the zip file is password protected, running something like the below:
curl
"http://localhost:8983/solr/sample/update/extract?commit=true&&literal.id=enc.zip&resource.password=1234&qu
> The txt files just have plain text. I mapped each line to a field called
> 'sentence' and included the file name as a field using the data import
> handler. No problems here.
>
> The JSON file has metadata: 3 tags: a URL, author and title (for the
> content in the correspo
stored option.
If you are asking about actual documents in original format, it is not even
recommended to be stored in Solr.
If you are asking if someone will be able to reconstruct a document from Solr
even if it is not stored, then the answer is: it depends on how you index; one
might be able to partially reconstruct it.
Dear community,
Is it possible to index documents (e.g. pdf, word, ...) for full-text search
without storing their content (payload) inside the Solr server?
Thanking you in advance for your help.
BR
Tom
er. No problems here.
The JSON file has metadata: 3 tags: a URL, author and title (for the
content in the corresponding txt file).
When I index the JSON file (I just used the _default schema, and posted the
fields to the schema, as explained in the official solr tutorial),* I don't
know how to g
Thanks Raymond. As I was doing the indexing of other delimited files
directly with Solr and the terminal (without a client), I thought it would
be possible to index the filename of JSON files this way as well.
But like you say, I'm parsing the search results in Python. So I might as
well buil
You create MiniSolrCloudCluster with a base directory and then each Jetty
instance created gets a SolrHome in a subfolder called node{i}. So if
legacyCloud=true you can just preconfigure a core and index under the right
node{i} subfolder. legacyCloud=true should not even exist anymore though,
so
Hi all,
Wondering if anyone has experience (this is with Solr 6.6) in setting up
MiniSolrCloudCluster for unit testing, where we want to use an existing index.
Note that this index wasn’t built with SolrCloud, as it’s generated by a
distributed (Hadoop) workflow.
So there’s no “restore from
To add to it further: in 6.5.1, while indexing, sometimes one of the Solr
nodes goes down for a while and comes up automatically. During those periods
all our calls to index fail. Even in the Solr admin UI, we can see the node
not being active for a while and coming up again.
All this happens in 4
Lucene (the major underlying tech in Solr) can handle any data, but it's
optimized to be an index, not a file store. Better to put that in another DB
or file system like Cassandra, S3, etc. (better than Solr).
In our experience, leveraging the Tika binary / microservice as a pre-index