Checking the spellcheck.maxCollationTries parameter
James Dyer
Ingram Content Group
-Original Message-
From: SRINI SOLR [mailto:srini.s...@gmail.com]
Sent: Friday, July 22, 2016 12:05 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 4.3.1 - Spell-Checker with MULTI-WORD PHRASE
Hi all - please help me here
On Thursday, July 21, 2016, SRINI SOLR wrote:
Hi All -
Could you please help me on spell check on multi-word phrase as a whole...
Scenario -
I have a problem with solr spellcheck suggestions for multi word phrases.
With the query for 'red chillies'
q=red+chillies&wt=xml&indent=true&spellcheck=true&spellcheck.collate=true&spellcheck.extendedResults=true
I get
On Fri, May 6, 2016 at 11:51 AM, SRINI SOLR <srini.s...@gmail.com> wrote:
Hi All -
Can you please help me out on the multi-word synonyms with Solr 4.3.1.
I am using the synonyms as below
test1,test2 => movie1 cinema,movie2 cinema,movie3 cinema
I am able to succeed with the above syntax: if I search for
words like test1 or test2 then the right hand s
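For reference, a minimal sketch of how such a mapping is typically wired into schema.xml (the field type name is an assumption; note that in Solr 4.x multi-word expansions are generally only reliable in the index-time analyzer, because the query parser splits input on whitespace before the synonym filter ever sees it):

```
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- synonyms.txt holds lines such as:
         test1,test2 => movie1 cinema,movie2 cinema,movie3 cinema -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

That whitespace-splitting limitation at query time is the usual reason multi-word right-hand sides appear not to work.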
Hi Team -
I am using Solr 4.3.1.
We are using EmbeddedSolrServer to load core containers in one of our
Java applications.
This is set up as a cron job every 1 hour to load the new data onto the
containers.
Otherwise the new data does not get loaded onto the containers if we
access
Can someone please help me with this?
I have been stuck for the past few days.
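If the index files are being written by another process, an embedded core will not see the new segments until it is reloaded; the usual fix is to call CoreContainer.reload(coreName) from the cron task, or, when a standalone Solr also serves the same cores, to hit the CoreAdmin RELOAD endpoint over HTTP. A rough sketch of building the latter request (host, port, and core name are assumptions):

```python
from urllib.parse import urlencode

SOLR_BASE = "http://localhost:8983/solr"  # assumed host/port

def reload_core_url(core):
    # CoreAdmin RELOAD re-opens the core so newly committed
    # segments become visible without restarting the JVM.
    return SOLR_BASE + "/admin/cores?" + urlencode({"action": "RELOAD", "core": core})

print(reload_core_url("core0"))
```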
> On 15-Feb-2016, at 6:39 PM, Neeraj Lajpal wrote:
Hello Neeraj,
Check slide 23 and overall
http://www.slideshare.net/lucenerevolution/what-is-inaluceneagrandfinal
On Mon, Feb 15, 2016 at 4:09 PM, Neeraj Lajpal wrote:
Sorry - replied to wrong thread :(
On 15.02.2016 15:17, Emir Arnautovic wrote:
Hi,
Not sure how ordering will help (maybe missing question) but what
seems to me that would help your case is simple boosting. See
Hi,
Not sure how ordering will help (maybe missing question) but what seems
to me that would help your case is simple boosting. See
https://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_make_.22superman.22_in_the_title_field_score_higher_than_in_the_subject_field
Regards,
Emir
On
DocValues has nothing to do with your handler. It is a field property. To
use it simply put docValues=true in your field definitions and reindex.
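For example, in schema.xml (using the state field from the question; the type is an assumption):

```
<field name="state" type="string" indexed="true" stored="true" docValues="true"/>
```

A full reindex is required afterwards, and in 4.x docValues is supported on string and Trie primitive field types, not on analyzed text fields.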
On Mon, 15 Feb 2016, 18:40 Neeraj Lajpal wrote:
Hi,
I recently asked this question on stackoverflow:
I am trying to access a field in a custom request handler. I am accessing it like
this for each document:
Document doc = reader.document(id);
String[] docFields = doc.getValues("state");
There are around 600,000 documents in the Solr index. For a query
Hello, just a quick question about the expected behavior of the SnapShooter.
We're running Solr 4.3.1 in a SolrCloud configuration, with two separate
virtual machines running Solr and three Zookeepers in various places. Our
search index is about 70GB in size. Today I took a snapshot of just one
On 3/26/2014 10:26 PM, Darrell Burgan wrote:
Okay well it didn't take long for the swapping to start happening on one of
our nodes. Here is a screen shot of the Solr console:
https://s3-us-west-2.amazonaws.com/panswers-darrell/solr.png
And here is a shot of top, with processes sorted by
Sent: Thursday, March 27, 2014 2:59 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr 4.3.1 memory swapping
On 3/26/2014 10:26 PM, Darrell Burgan wrote:
Okay well it didn't take long for the swapping to start happening on one of
our nodes. Here is a screen shot of the Solr console:
https://s3
It could be related to NUMA.
Check out this article about it which has some fixes that worked for me.
http://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/
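The workaround from that article, carried over to Solr, is to interleave the JVM's allocations across NUMA nodes at startup, e.g. (assuming the stock Jetty start script):

```
numactl --interleave=all java -jar start.jar
```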
to work with. Could it be that the swapping is due
to the memory-mapped file in some way?
-Original Message-
From: Lan [mailto:dung@gmail.com]
Sent: Wednesday, March 26, 2014 12:45 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 4.3.1 memory swapping
It could be related to NUMA
Thanks - we're currently running Solr inside of RHEL virtual machines
inside of VMware. Running numactl --hardware inside the VM shows the
following:
available: 1 nodes (0)
node 0 size: 16139 MB
node 0 free: 364 MB
node distances:
node 0
0: 10
So there is only one node being
Sent: March 26, 2014 8:14 PM
To: solr-user@lucene.apache.org
Subject: RE: Solr 4.3.1 memory swapping
Hello all, we have a SolrCloud implementation in production, with two servers
running Solr 4.3.1 in a SolrCloud configuration. Our search index is about
70-80GB in size. The trouble is that after several days of uptime, we will
suddenly have periods where the operating system Solr is running
Thanks Shawn. Always grateful for your help...
On Wed, Nov 27, 2013 at 10:37 PM, Shawn Heisey s...@elyograg.org wrote:
On 11/27/2013 9:37 AM, Raheel Hasan wrote:
I got a new issue now. I have Solr 4.3.0 running just fine. However on
Solr
4.3.1, it wont load. I get this issue:
{msg
Hi,
I got a new issue now. I have Solr 4.3.0 running just fine. However on Solr
4.3.1, it wont load. I get this issue:
{msg=SolrCore 'mycore' is not available due to init failure: Plugin
init failure for [schema.xml] fieldType text_ws: Plugin init failure
for [schema.xml] analyzer/filter: Error
On 11/27/2013 9:37 AM, Raheel Hasan wrote:
I got a new issue now. I have Solr 4.3.0 running just fine. However on Solr
4.3.1, it wont load. I get this issue:
{msg=SolrCore 'mycore' is not available due to init failure: Plugin
init failure for [schema.xml] fieldType text_ws: Plugin init failure
Sent: 11 November 2013 15:54
To: solr-user@lucene.apache.org
Subject: RE: spellcheck solr 4.3.1
There are 2 parameters you want to consider:
First is spellcheck.maxResultsForSuggest. Because you have an OR query,
you'll get hits if only 1 query term is in the index. This parameter lets you
tune
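Illustrative request parameters for that setup (the values are placeholders to tune, not recommendations):

```
spellcheck=true
&spellcheck.maxResultsForSuggest=5
&spellcheck.collate=true
&spellcheck.maxCollationTries=10
```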
Hey
I am running Solr 4.3.1 and am working on implementing spellcheck using
solr.DirectSolrSpellChecker. Everything seems to be working fine but I have
one issue.
If I search for
http://localhost:8765/solr/MainIndex/spell?q=kim%20AND%20larsen
the result is some hits and the spell component
-Original Message-
From: Daniel Borup [mailto:d...@alpha-solutions.dk]
Sent: Monday, November 11, 2013 7:38 AM
To: solr-user@lucene.apache.org
Subject: spellcheck solr 4.3.1
Hey
I am running af solr 4.3.1 and working is implementing spellcheck using
solr.DirectSolrSpellChecker everything seems
Hi
If I do a search like
/search?q=catid:{123}
I get the results I expect.
But if I do
/search?q=*:*&fq=catid{123}
I get an error from Solr like:
org.apache.solr.search.SyntaxError: Cannot parse 'catid:{123}': Encountered
} } at line 1, column 58. Was expecting one of: TO ... RANGE_QUOTED
Missing a colon before the curly bracket in the fq?
On Wed, Oct 23, 2013, at 09:42 AM, Peter Kirk wrote:
For filtering categories i'm using something like this :
fq=category:(cat1 OR cat2 OR cat3)
-
Thanks,
Michael
--
View this message in context:
http://lucene.472066.n3.nabble.com/fq-with-or-in-Solr-4-3-1-tp4097170p4097183.html
Sent from the Solr - User mailing list archive at Nabble.com.
To: solr-user@lucene.apache.org
Subject: RE: fq with { or } in Solr 4.3.1
Sorry, that was just a typo.
/search?q=*:*&fq=catid:{123}
Gives me the error.
I think that { and } must be used in ranges for fq, and that's why I can't
use them directly like this.
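Right: outside of a range query, { and } are reserved, so they need backslash-escaping to be matched literally. A quick sketch of building such an fq in Python (the escape list follows the Lucene query-syntax special characters):

```python
from urllib.parse import urlencode

SPECIAL = '+-&|!(){}[]^"~*?:\\/'

def solr_literal(field, value):
    # Backslash-escape Lucene query-syntax metacharacters so that
    # { and } are matched literally instead of opening a range query.
    escaped = ''.join('\\' + c if c in SPECIAL else c for c in value)
    return field + ':' + escaped

print(solr_literal('catid', '{123}'))  # catid:\{123\}
print(urlencode({'q': '*:*', 'fq': solr_literal('catid', '{123}')}))
```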
/Peter
-Original Message-
From
Fra: Jack Krupansky j...@basetechnology.com
Sendt: 23. oktober 2013 12:59
Til: solr-user@lucene.apache.org
Emne: Re: fq with { or } in Solr 4.3.1
Are you using the edismax query parser? It traps the syntax error and then
escapes or ignores special characters.
Curly
Hi All,
I've been debugging an issue where the query 'tpms' would make the
spellchecker throw the following exception:
21021 [qtp91486057-17] ERROR org.apache.solr.servlet.SolrDispatchFilter –
null:java.lang.StringIndexOutOfBoundsException: String index out of range:
-1
at
Further to this. If I change:
tpms,service tire monitor,tire monitor,tire pressure monitor,tire pressure
monitoring system,tpm,low tire warning,tire pressure monitor system
to
service tire monitor,tire monitor,tire pressure monitor,tire pressure
monitoring system,tpm,low tire warning,tire
Hi All,
I didn't have the lucene-solr source compiling cleanly in eclipse
initially so I created a very quick maven project to demonstrate this issue:
https://github.com/rainkinz/solr_spellcheck_index_out_of_bounds.git
Having said that I just got everything set up in eclipse, so I can create a
Hi Stefan,
It is apparently a browser feature: works fine in Chrome (Version
28.0.1500.95).
A side note: would a return false; following the DOM instruction help here?
Dmitry
On Wed, Aug 7, 2013 at 6:59 PM, Stefan Matheis matheis.ste...@gmail.comwrote:
Hey Dmitry
That sounds a bit odd ..
On the first click the values are refreshed. On the second click the page
gets redirected:
from: http://localhost:8983/solr/#/statements/plugins/cache
to: http://localhost:8983/solr/#/
Is this intentional?
Regards,
Dmitry
It shouldn't .. but from your description it sounds as if the javascript-onclick
handler doesn't work on the second click (which would do a page reload).
if you use chrome, firefox or safari .. can you open the developer tools and
check if they report any javascript error? which would explain why ..
Hi Stefan,
I was able to debug the second click scenario (was tricky to catch it,
since on click redirect happens and logs statements of the previous are
gone; worked via setting break-points in plugins.js) and got these errors
(firefox 23.0 ubuntu):
[17:20:00.731] TypeError: anonymous function
Hey Dmitry
That sounds a bit odd .. those are more like notices instead of real errors ..
sure that those are stopping the UI from working? if so .. we should see more
reports like those.
Can you verify the problem by using another browser?
I mean .. that is really a basic javascript handler
On 7/27/2013 5:00 PM, Shawn Heisey wrote:
On 7/26/2013 2:03 PM, Gustav wrote:
The problem here is that in my client's application, the query beign encoded
in iso-8859-1 its a *must*. So, this is kind of a trouble here.
I just dont get how this encoding could work on queries in version 3.5, but
Hi, I am using Solr 4.3.1 with 2 Shards and replication factor of 1,
running on apache tomcat 7.0.42 with external zookeeper 3.4.5.
When I query select?q=*:*
I only get the number of documents found, but no actual document. When I
query with rows=0, I do get correct count of documents
Nitin,
You need to ensure the fields you wish to see are marked stored=true in your
schema.xml file, and you should include fields in your fl= parameter
(fl=*,score is a good place to start).
Jason
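In schema.xml terms that means something like this (field name and type are placeholders):

```
<field name="title" type="text_general" indexed="true" stored="true"/>
```

and on the request side something like select?q=*:*&fl=*,score.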
On Jul 29, 2013, at 8:08 AM, Nitin Agarwal 2nitinagar...@gmail.com wrote:
Hi, I am using Solr
to start).
Jason
On Jul 29, 2013, at 8:08 AM, Nitin Agarwal 2nitinagar...@gmail.com
wrote:
Hi, I am using Solr 4.3.1 with 2 Shards and replication factor of 1,
running on apache tomcat 7.0.42 with external zookeeper 3.4.5.
When I query select?q=*:*
I only get the number of documents
Set rows to 10, 100, or whatever, but DO NOT set it to 0 unless you just want
the header without any actual documents.
-- Jack Krupansky
-Original Message-
From: Nitin Agarwal
Sent: Monday, July 29, 2013 11:49 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr 4.3.1 - query does not return
: Here is what my schema looks like
what is your uniqueKey field?
I'm going to bet it's tn_lookup_key_id and i'm going to bet your
lowercase fieldType has an interesting analyzer on it.
you are probably hitting a situation where the analyzer you have on your
uniqueKey field is munging the
Erick, I had typed tn_lookup_key_id as lowercase and it was defined as
<fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
Nitin
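The usual fix is to back the uniqueKey with an un-analyzed solr.StrField so the id round-trips unchanged; roughly:

```
<fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
<field name="tn_lookup_key_id" type="string" indexed="true" stored="true"/>
<uniqueKey>tn_lookup_key_id</uniqueKey>
```

If case-insensitive matching on the id is still needed, keep the lowercase analysis on a separate copyField rather than on the key field itself.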
On 7/26/2013 2:03 PM, Gustav wrote:
The problem here is that in my client's application, the query beign encoded
in iso-8859-1 its a *must*. So, this is kind of a trouble here.
I just dont get how this encoding could work on queries in version 3.5, but
it doesnt in 4.3.
I brought up the issue
Hey guys, I have a Solr 4.3 instance running on my server, but I'm having some
trouble with encoding the URL querystring.
I'm currently encoding my query characters, so when a search is made for Café,
it's actually encoded to caf%E9 and cão is encoded to c%E3o.
My URL encoding in tomcat is iso-8859-1, but
On 7/26/2013 7:05 AM, Gustav wrote:
Hey guys, i have a Solr 4.3 instance running in my server, but Im having some
troubles with encoding URL querystring.
Im currently encoding my query characters, so, when its searched for Café,
its actually encoded to caf%E9 and cão is encoded to c%E3o.
My
Thanks for the answer Shawn,
The problem here is that in my client's application, the query being encoded
in iso-8859-1 is a *must*. So, this is kind of a trouble here.
I just don't get how this encoding could work on queries in version 3.5, but
it doesn't in 4.3.
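The two charsets really do put different bytes on the wire, which is why a server decoding with the wrong one mangles the term. A quick illustration using Python's urllib (just to show the encodings; Solr itself isn't involved):

```python
from urllib.parse import quote, unquote

# The same query term percent-encoded under each charset:
print(quote('café', encoding='iso-8859-1'))   # caf%E9
print(quote('café', encoding='utf-8'))        # caf%C3%A9

# %E9 only decodes cleanly if the server assumes ISO-8859-1:
print(unquote('caf%E9', encoding='iso-8859-1'))  # café
```

On the Tomcat side, that server assumption is controlled by the URIEncoding attribute on the HTTP Connector in server.xml.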
On 7/26/2013 2:03 PM, Gustav wrote:
Thanks for the answer Shawn,
The problem here is that in my client's application, the query beign encoded
in iso-8859-1 its a *must*. So, this is kind of a trouble here.
I just dont get how this encoding could work on queries in version 3.5, but
it doesnt
Great. Thanks for your suggestions. I'll go through them and see what I can
come up with to try and tame my GC pauses. I'll also make sure I upgrade to
4.4 before I start. Then at least I know I've got all the latest changes.
In the meantime, does anyone have any idea why I am able to get leaders
Log messages?
On Wed, Jul 24, 2013 at 1:37 AM, Neil Prosser neil.pros...@gmail.com wrote:
Sorry, good point...
https://gist.github.com/neilprosser/d75a13d9e4b7caba51ab
I've included the log files for two servers hosting the same shard for the
same time period. The logging settings exclude anything below WARN
for org.apache.zookeeper, org.apache.solr.core.SolrCore
and
One thing I'm seeing in your logs is the leaderVoteWait safety
mechanism that I mentioned previously:
2013-07-24 07:06:19,856 INFO o.a.s.c.ShardLeaderElectionContext -
Waiting until we see more replicas up: total=2 found=1 timeoutin=45792
From Mark M: This is a safety mechanism - you can turn
On 7/24/2013 10:33 AM, Neil Prosser wrote:
The log for server09 starts with it throwing OutOfMemoryErrors. At this
point I externally have it listed as recovering. Unfortunately I haven't
got the GC logs for either box in that time period.
There's a lot of messages in this thread, so I
That makes sense about all bets being off. I wanted to make sure that
people whose systems are behaving sensibly weren't going to have problems.
I think I need to tame the base amount of memory the field cache takes. We
currently do boosting on several fields during most queries. We boost by at
Neil:
Here's a must-read blog about why allocating more memory
to the JVM than Solr requires is a Bad Thing:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
It turns out that you actually do yourself harm by allocating more
memory to the JVM than it really needs. Of
Hi,
On Tue, Jul 23, 2013 at 8:02 AM, Erick Erickson erickerick...@gmail.com wrote:
Neil:
Here's a must-read blog about why allocating more memory
to the JVM than Solr requires is a Bad Thing:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
It turns out that you
Very true. I was impatient (I think less than three minutes impatient so
hopefully 4.4 will save me from myself) but I didn't realise it was doing
something rather than just hanging. Next time I have to restart a node I'll
just leave and go get a cup of coffee or something.
My configuration is
Wow, you really shouldn't be having nodes go up and down so
frequently, that's a big red flag. That said, SolrCloud should be
pretty robust so this is something to pursue...
But even a 5 minute hard commit can lead to a hefty transaction
log under load, you may want to reduce it substantially
No need to apologise. It's always good to have things like that reiterated
in case I've misunderstood along the way.
I have a feeling that it's related to garbage collection. I assume that if
the JVM heads into a stop-the-world GC Solr can't let ZooKeeper know it's
still alive and so gets marked
Sorry, I should also mention that these leader nodes which are marked as
down can actually still be queried locally with distrib=false with no
problems. Is it possible that they've somehow got themselves out-of-sync?
On 22 July 2013 13:37, Neil Prosser neil.pros...@gmail.com wrote:
On 7/22/2013 6:45 AM, Markus Jelsma wrote:
You should increase your ZK time out, this may be the issue in your case. You
may also want to try the G1GC collector to keep STW under ZK time out.
When I tried G1, the occasional stop-the-world GC actually got worse. I
tried G1 after trying CMS
A couple of things I've learned along the way ...
I had a similar architecture where we used fairly low numbers for
auto-commits with openSearcher=false. This keeps the tlog to a
reasonable size. You'll need something on the client side to send in
the hard commit request to open a new searcher
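In solrconfig.xml that pattern looks roughly like this (the interval is a placeholder to tune):

```
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>15000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```

Frequent hard commits with openSearcher=false truncate the transaction log without paying the cost of opening a new searcher each time.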
Are you feeding Graphite from Solr? If so, how?
On 07/19/2013 01:02 AM, Neil Prosser wrote:
That was overnight so I was unable to track exactly what happened (I'm
going off our Graphite graphs here).
I just have a little python script which I run with cron (luckily that's
the granularity we have in Graphite). It reads the same JSON the admin UI
displays and dumps numeric values into Graphite.
I can open source it if you like. I just need to make sure I remove any
hacks/shortcuts that I've
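The shape of such a script is roughly this (endpoint, core name, and metric paths are assumptions, not Neil's actual code):

```python
import json
import time
from urllib.request import urlopen

# Same stats JSON the admin UI reads (core name assumed):
MBEANS_URL = ("http://localhost:8983/solr/collection1/admin/mbeans"
              "?stats=true&wt=json")

def graphite_line(path, value, ts):
    # Graphite plaintext protocol: "<metric.path> <value> <unix-ts>\n"
    return "%s %s %d\n" % (path, value, int(ts))

def collect():
    stats = json.load(urlopen(MBEANS_URL))
    now = time.time()
    lines = []
    # "solr-mbeans" is a list alternating category name and entry dict.
    beans = stats.get("solr-mbeans", [])
    for category, entries in zip(beans[::2], beans[1::2]):
        for name, info in entries.items():
            for stat, value in (info.get("stats") or {}).items():
                if isinstance(value, (int, float)):
                    lines.append(graphite_line(
                        "solr.%s.%s.%s" % (category, name, stat), value, now))
    return lines
```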
Well, if I'm reading this right you had a node go out of circulation
and then bounced nodes until that node became the leader. So of course
it wouldn't have the documents (how could it?). Basically you shot
yourself in the foot.
Underlying here is why it took the machine you were re-starting so
While indexing some documents to a SolrCloud cluster (10 machines, 5 shards
and 2 replicas, so one replica on each machine) one of the replicas stopped
receiving documents, while the other replica of the shard continued to grow.
That was overnight so I was unable to track exactly what happened
Vanderbilt li...@datagenic.com wrote:
I'm trying to index documents containing geo-spatial coordinates using
Solr 4.3.1 and am running into some difficulties. Whenever I attempt to
index a particular document containing a geospatial coordinate pair
(using post.jar), the operation fails as follows
Not quite sure what's the problem with the second, but the
first is:
q=:
That just isn't legal, try q=*:*
As for the second, are there any other errors in the solr log?
Sometimes what's returned in the response packet does not
include the true source of the problem.
Best
Erick
On Mon, Jul 15,
Hi ,
We have been using solr 3.6.1. Recently we downloaded the solr 4.3.1 version
and installed it as a multicore setup as follows
Folder Structure
solr.war
solr
conf
core0
core1
solr.xml
Created the context fragment xml file in tomcat/conf/catalina
#Using_the_example_logging_setup_in_containers_other_than_Jetty
On Tue, Jul 16, 2013 at 6:28 PM, Sujatha Arun suja.a...@gmail.com wrote:
Looks like the JoinQParserPlugin is throwing an NPE.
Query: localhost:8983/solr/location/select?q=*:*&fq={!join from=key
to=merchantId fromIndex=merchant}
84343345 [qtp2012387303-16] ERROR org.apache.solr.core.SolrCore –
java.lang.NullPointerException
at
Found this post:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201302.mbox/%3CCAB_8Yd82aqq=oY6dBRmVjG7gvBBewmkZGF9V=fpne4xgkbu...@mail.gmail.com%3E
And based on the answer, I modified my query: localhost:8983/solr/location/
select?fq={!join from=key to=merchantId
You can only join on indexed fields; your Location merchantId field is not
indexed.
Best
Erick
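i.e. the from/to fields must be indexed in both cores; in the Location core's schema.xml that would mean something like (type is a placeholder):

```
<field name="merchantId" type="string" indexed="true" stored="true"/>
```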
On Tue, Jul 16, 2013 at 2:48 PM, Utkarsh Sengar utkarsh2...@gmail.com wrote:
Found this post:
I'm trying to index documents containing geo-spatial coordinates using
Solr 4.3.1 and am running into some difficulties. Whenever I attempt to
index a particular document containing a geospatial coordinate pair
(using post.jar), the operation fails as follows:
SimplePostTool version 1.5
Make sure that dynamicFields are within fields rather than types.
Solr tends to ignore misplaced configuration elements.
-- Jack Krupansky
-Original Message-
From: Scott Vanderbilt
Sent: Monday, July 15, 2013 5:10 PM
To: solr-user@lucene.apache.org
Subject: Solr 4.3.1: Errors When
Hello,
I am trying to join data between two cores: merchant and location
This is my query:
http://_server_.com:8983/solr/location/select?q={!join from=merchantId
to=merchantId fromIndex=merchant}walgreens
Ref: http://wiki.apache.org/solr/Join
Merchants core has documents for the query:
I have also tried these queries (as per this SO answer:
http://stackoverflow.com/questions/12665797/is-solr-4-0-capable-of-using-join-for-multiple-core
)
1. http://_server_.com:8983/solr/location/select?q=:fq={!join
from=merchantId to=merchantId fromIndex=merchant}walgreens
And I get this:
{
Hi,
We are upgrading solr 4.0 to solr 4.3.1 on tomcat 7.
We would like to use the compositeId router. It seems that there are two ways
to do that: 1. using collections API to create a new collection by passing
numShards; 2. Passing numShards in bootstrap process.
For 1, we have a large amount
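For option 1, the Collections API call has this shape (names and counts are placeholders):

```
http://host:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=1&collection.configName=myconf
```

When numShards is given at creation time the collection uses the compositeId router; collections created without it fall back to the implicit router.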
Hi,
I am adding around 100 million records to Solr using SolrJ. I am not
performing a commit operation until I have added all the documents to Solr. I
see that my program adds the docs very fast (1 million per minute) for around
the first 18 million documents, which is the expected result, but after 18 million records
The queue size is high, but I doubt that's your issue. Here's what
I'd do:
1 check the Solr server. Is it CPU bound? I/O bound? You have to
identify where the resources are being spent before you get to
implementing a solution.
Are you using SolrCloud? If so, not committing until all 100M docs is
June 2013, Apache Solr™ 4.3.1 available
The Lucene PMC is pleased to announce the release of Apache Solr 4.3.1
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting, faceted
It's already cut and the vote has been passed. It should be out any time
now.
On Mon, Jun 17, 2013 at 11:26 AM, William Bell billnb...@gmail.com wrote:
When is 4.3.1 coming out?
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
--
Anshum Gupta
http://www.anshumgupta.net