I think the problem is that EmbeddedSolrServer can't load existing index
data.
Could any committer help confirm whether this is a bug or not?
Thank you.
Kane
On Mon, Apr 15, 2013 at 7:28 PM, zhu kane kane...@gmail.com wrote:
I'm extending Solr's *AbstractSolrTestCase* for unit testing.
I
Hello Walter,
Have you had a chance to get something working with graphite, codahale and
solr?
Has anyone else tried these tools with Solr 3.x family? How much work is it
to set things up?
We have tried Zabbix in the past. Even though it required a lot of up-front
investment in configuration, it
Just tried the same queries with the 'example' in the Solr 4.2 build and
I'm getting the same issue:
http://localhost:8983/solr/collection1/select?q=*%3A*&wt=json&indent=true&shards=localhost:7574/solr/collection1
trace:java.lang.NullPointerException\r\n\tat
Walter,
Can you share the document count / index size for this shard? Even though
these are not decisive parameters, they make useful data points for comparison :)
On Tue, Apr 9, 2013 at 9:00 PM, Walter Underwood wun...@wunderwood.orgwrote:
We mostly run m1.xlarge with an 8GB heap. --wunder
On Apr
Thanks for the answers.
2013/4/23 Erick Erickson erickerick...@gmail.com
bq: However what will happen to those 10 nodes when I specify replication
factor?
I think they just sit around doing nothing.
Best
Erick
On Mon, Apr 22, 2013 at 7:24 AM, Furkan KAMACI furkankam...@gmail.com
wrote:
Hoss,
I use Solr as a cluster; the main feature I use is faceting, to do some
analytics, plus normal queries for free-text search, retrieving data using
filters.
I don't use any custom or contrib plugins.
At the moment I'm importing my data from mysql to solr, I don't use
The solr version is 4.2.1.
Here is the stack trace:
SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 'XXX':
Could not get shard_id for core: XXX
coreNodeName:192.168.20.47:8983_solr_XXX$
at
Answering myself - adding this line in solrconfig.xml made it work:
<codecFactory name="CodecFactory" class="solr.SchemaCodecFactory"/>
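For later readers: with that factory in place, a field opts into DocValues in schema.xml along these lines (field and type names here are only examples; the fieldType must be DocValues-capable, e.g. StrField or a Trie type):
  <field name="manu_exact" type="string" indexed="true" stored="false" docValues="true"/>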
On 4/23/13 3:42 PM, Abhishek Sanoujam wrote:
Hi all,
I am trying to experiment with DocValues
(http://wiki.apache.org/solr/DocValues) and use the Disk
Hi All,
I'm using DIH with FileListEntityProcessor in order to index from xml files.
If I perform a DIH with command=abort, it seems that the xml file being
processed by dataimport is not closed.
When I try to delete it, I get an error message saying the file is opened by
Apache Tomcat.
Is it a known
Is there any other enterprise search engine besides Solr that supports complex
join queries, since Solr does not support them? As per my requirement I need
complex join queries that search the document index or main memory, as that is
much faster than any disk-based database.
Any
Hi all,
We've got quite a lot of (mostly small) solr cores in our Solr instance.
They all share the same solrconfig.xml and schema.xml (only the data
differs).
I'm wondering how far I can go in terms of the number of cores. CPU is not an
issue, but memory could be.
An idea/guideline about the
Hi
we use SolrCloud with 4 shards, and when we try to import data using
DataImportHandler, it does not distribute documents across all 4 shards.
Thanks Regards
Montu v Boda
I'm not an expert, but to some extent I think it will come down to a few
factors:
* How much data is being cached per core.
* If memory is an issue and you still want performance, I/O with a small
cache could be an issue (SSDs?)
* Soft commits, which imply open searchers per soft commit (and
Thanks!
Yeah I know about the caching/commit things
My question is more about the impact of the pure creation of a Solr core,
independently of its usage memory requirements (like caches and stuff).
From the experiments I did using JMX, it's not measurable, but I might be
wrong.
On 23 April 2013
Hello,
What is the maximum size limit of an XML document file that can be imported
into Solr for indexing via java -Durl? I am testing an import of an XML file
of 5 GB and it throws an error like
SimplePostTool: WARNING: Solr returned an error #400 Bad Request
SimplePostTool: WARNING: IOException
Hi Jack,
Sorry for late response.
I have used following settings for auto-suggestion:
<searchComponent name="terms" class="solr.TermsComponent"/>
<requestHandler name="/terms" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <bool name="terms">true</bool>
  </lst>
  <arr
The overhead of just opening a core is insignificant relative to using it
so, unless you are worried about hitting the max number of open files
limit, it seems unimportant.
Otis
Solr ElasticSearch Support
http://sematext.com/
On Apr 23, 2013 7:46 AM, Jérôme Étévé jerome.et...@gmail.com wrote:
Have a look at ElasticSearch, maybe it's a better fit.
Otis
Solr ElasticSearch Support
http://sematext.com/
On Apr 23, 2013 6:38 AM, ashimbose ashimb...@gmail.com wrote:
Is there any other enterprise search other than SOLR which supports Complex
Join Query,as Solr does not support the same.
This does not seem to be related to the XML size. Check the exact
error message on the server side. Looks to me like the URL may not be
correct. I think in some cases, post.jar automatically adds /update
handler, so maybe you are doubling it up.
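For comparison, a plain post.jar invocation against the default example instance looks roughly like this (the file name is just a placeholder):
  java -Durl=http://localhost:8983/solr/update -jar post.jar books.xml
If the -Durl you pass already ends in /update, double-check it isn't getting appended a second time somewhere.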
Regards,
Alex.
Personal blog:
Hi,
Let me get my crystal ball... OK, now let's try inlining.
On Tue, Apr 23, 2013 at 5:48 AM, Furkan KAMACI furkankam...@gmail.com wrote:
* I want to measure how much RAM I should define for my Solr instances,
* I will try to make some predictions about how much disk space I will need
at
Hi!
I'm using Solr with Tomcat and I need to add a record using
HTTP/Request.php (PEAR).
So, I created a test file with the following code:
<?php
require_once "HTTP/Request.php";
$req = new HTTP_Request("http://localhost:8080/solr/stats/update");
$req->setMethod(HTTP_REQUEST_METHOD_POST);
$xml =
Hi!
Currently I'm working on a basic search engine. The main problem is that
during some tests we found that if a user searches for only the '+' or '-'
term, or the '+' string, it causes an exception in my application; the
problem is caused for an
Fuzzy search looks independent of the analyzer, but it seems that it's not
independent of the tokenizer. If I just change my tokenizer to
*solr.StandardTokenizerFactory*, fuzzy search starts working fine; if it
were independent of the tokenizer, this should not occur.
And also, I had
On 4/23/2013 6:02 AM, Sharmila Thapa wrote:
What is the maximum size limit of an XML document file that can be imported
into Solr for indexing via java -Durl? I am testing an import of an XML file
of 5 GB and it throws an error like
SimplePostTool: WARNING: Solr returned an error #400 Bad
Hi,
I have done this many times. First use a curl job or something to download the
complete index as CSV
q=*:*&rows=999&wt=csv
Then use post.jar to push that csv into the new node.
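For example, something along these lines (hosts are placeholders, and rows needs to be at least your document count; -Dtype tells post.jar the content type, if memory serves):
  curl "http://oldhost:8983/solr/select?q=*:*&rows=999&wt=csv" > export.csv
  java -Durl=http://newhost:8983/solr/update/csv -Dtype=text/csv -jar post.jar export.csv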
Alternatively you can query with XML and use the XSLT update request handler
with the param tr=updateXml which is a
Hi,
you need to escape that char in search terms.
Special chars are + - && || ! ( ) { } [ ] ^ " ~ * ? : \ / at the moment.
The %2B is just the url encoding, but it will still be a + for Solr, so just
put a \ in front of the chars I mentioned.
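If you're building the query in SolrJ, there's a helper that escapes all of these for you - a minimal sketch:
  import org.apache.solr.client.solrj.util.ClientUtils;
  String safe = ClientUtils.escapeQueryChars(userInput); // escapes +, -, :, etc.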
Cheers,
Kai
Am 23.04.2013 um 15:41 schrieb Jorge Luis
DataImportHandler might be a better way to import very large XML files
if the file can be loaded from the Solr-local file system.
Regards,
Alex.
Personal blog: http://blog.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is the quality of nature that keeps events from
To be clear, there are no solid and reliable prediction rules for Solr - for
the simple reason that there are too many non-linear variables - you need to
stand up a proof of concept system, load it with representative data and
execute representative queries and then measure that system. You can
On 4/23/2013 7:30 AM, Viviane Ventura wrote:
I'm using solr with tomcat and i need to add a record using
HTTP/Request.php (PEAR).
So, i created a test file with the following code:
?php
require_once HTTP/Request.php;
At a quick glance (and not having much experience with PHP) your code
Hi Kai:
Thanks for your reply. From what I've understood, this logic must be included
in my application. Would it be possible, for instance, to use some regular
expression at query time in my schema to avoid a query that contains only
these characters? For instance + and + would be a good
Another aspect I neglected to mention: Think about distinguishing between
development, test, and production systems - all separately. Your
development system is where you try out ideas and experiment - your proof
of concept. Your test or pre-production system is where you verify that
your
If you want to allow your users to search for '+', you can also define
'+' as being a regular ALPHA character:
In config:
delimiter_types.txt:
#
# We let +, # and * be part of normal words.
# This lets c++, c#, c* and R&D count as words.
#
+ => ALPHA
# => ALPHA
* => ALPHA
& => ALPHA
@ => ALPHA
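That types file then gets wired into the analyzer chain via the types attribute of the word delimiter filter, roughly like this (the other attributes shown are just common settings, adjust to taste):
  <filter class="solr.WordDelimiterFilterFactory" types="delimiter_types.txt"
          generateWordParts="1" generateNumberParts="1" splitOnCaseChange="1"/>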
Hi
Is it correct that when inserting or updating documents into Solr you
have to talk to a Solr host where at least one shard of that collection
is stored?
For selects, can you talk to any host within the collection.configName?
BR,
Arkadi
Hi Jérôme:
Thanks for your suggestion Jérôme, I'll do as you told me to allow searches
for these specific tokens. I've also taken into account the option of adding
the quotes if the length is 1 at the application level, but I would like to
keep this logic inside Solr (if possible), this is why
I believe as of 4.2 you can talk to any host in the cloud.
Michael Della Bitta
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271
www.appinions.com
Where Influence Isn’t a Game
On Tue, Apr 23, 2013 at 10:45 AM, Arkadi Colson
The other thing to keep in the back of your mind as you go through
this process is that search is addictive to most organizations.
Meaning your Solr solution may quickly become a victim of its own
success. The queries we tested before going production 5+ months ago
and the queries we handle today
Hi ,
Can anyone please point out where a Solr search originates
and how it passes to the Lucene index searcher and back to Solr. I
actually want to know which class in Solr directly calls the Lucene
IndexSearcher.
Thanks.
Pom
org.apache.solr.search.SolrIndexSearcher
On Tue, Apr 23, 2013 at 9:51 AM, parnab kumar parnab.2...@gmail.com wrote:
Hi ,
Can anyone please point out where a Solr search originates
and how it passes to the Lucene index searcher and back to Solr. I
actually want to know
Hi,
I want to edge-ngram, let's say, this document that has 'difficult contents',
so that if I query (using dismax) q=dif it shows me this result. This is
working fine. But now if I search for q=con it gives me this document as
well. Is there any way to only show this document when I search for
If you use jetty - which you should :) It's what we test with. Tomcat only gets
user testing.
If you use tomcat, this won't work in 4.2 or 4.2.1, but probably will in 4.3
(we are voting on 4.3 now).
No clue on other containers.
- Mark
On Apr 23, 2013, at 10:59 AM, Michael Della Bitta
Hi ,
Timothy, thanks for pointing that out. But I have a specific requirement.
Any query passes through the search handler and Solr finally
directs it to the Lucene IndexSearcher. As results are matched and collected
as TopDocs in Lucene, I want to inspect the top K docs, reorder them by
Hi,
If you use a codec which is not the default, you need to download/build the
Lucene codec jars, put them in solr_home/lib, and add the codecFactory to
the Solr config file.
Look here for detailed instructions:
http://wiki.apache.org/solr/SimpleTextCodecExample
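In short, that wiki page boils down to two pieces of config (the fieldType name here is just an example, and SimpleText is only meant for demos, not production):
In solrconfig.xml:
  <codecFactory class="solr.SchemaCodecFactory"/>
In schema.xml:
  <fieldType name="string_simpletext" class="solr.StrField" postingsFormat="SimpleText"/>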
Best,
Mou
--
View this
Perhaps http://search-lucene.com/?q=custom+hits+collector ?
Otis
--
Solr ElasticSearch Support
http://sematext.com/
On Tue, Apr 23, 2013 at 12:32 PM, parnab kumar parnab.2...@gmail.com wrote:
Hi ,
Timothy, thanks for pointing that out. But I have a specific requirement.
Any
Take a look at Solr's DelegatingCollector - this article might be of
interest too:
http://hokiesuns.blogspot.com/2012/11/using-solrs-postfiltering-to-collect.html
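The shape of it is small - a rough, untested sketch against the Solr 4.x class (the accept() logic is a placeholder for your own test):
  import java.io.IOException;
  import org.apache.solr.search.DelegatingCollector;

  public class MyCollector extends DelegatingCollector {
    @Override
    public void collect(int doc) throws IOException {
      if (accept(doc)) {       // your custom logic here
        super.collect(doc);    // pass accepted docs down the chain
      }
    }
    private boolean accept(int doc) {
      return true;             // placeholder
    }
  }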
On Tue, Apr 23, 2013 at 10:32 AM, parnab kumar parnab.2...@gmail.com wrote:
Hi ,
Timothy, thanks for pointing that out. But I
Yes, you can effectively chroot all the configs for a collection (to
support multiple collections in same ensemble) - see wiki:
http://wiki.apache.org/solr/SolrCloud#Zookeeper_chroot
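With a chroot, the zkHost you pass to Solr just gets a path suffix, e.g. (hosts are placeholders; the trailing /solr is the chroot):
  -DzkHost=zk1:2181,zk2:2181,zk3:2181/solr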
On Tue, Apr 23, 2013 at 11:23 AM, bbarani bbar...@gmail.com wrote:
I have used multiple schema files by using
I have used multiple schema files by using multiple cores, but I'm not sure
if I will be able to use multiple schema configurations when integrating Solr
with ZooKeeper. Can someone please let me know if it's possible and if so, how?
: Yes, you can effectively chroot all the configs for a collection (to
: support multiple collections in same ensemble) - see wiki:
: http://wiki.apache.org/solr/SolrCloud#Zookeeper_chroot
I don't think chroot is suitable for what's being asked about here ...
that would completely isolate two
Hi,
We migrated recently from Solr 1.4 to 3.6.1. In the new version we have
noticed that after some hours (around 8) the autocommit is taking more time
to be executed.
In the new version we have noticed that after some hours the autocommit
is taking more time to be executed. We
Hi,
I'd like to use the SolrEntityProcessor to partially migrate an old index
to Solr 4.1. The source is pretty old (dated 2006-06-10 16:05:12Z)...
maybe Solr 1.2? My data-config.xml is based on the SolrEntityProcessor
example http://wiki.apache.org/solr/DataImportHandler#SolrEntityProcessor
You might be out of luck with the SolrEntityProcessor. I'd recommend writing
a simple little script that pages through /select?q=*:* on the source Solr
and writes to the destination Solr. Back in the day there was this fun little
beast
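Something like this rough SolrJ sketch (untested, hosts are placeholders; it forces the XML response parser since a source that old won't speak current javabin):
  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.client.solrj.impl.XMLResponseParser;
  import org.apache.solr.common.SolrDocument;
  import org.apache.solr.common.SolrDocumentList;
  import org.apache.solr.common.SolrInputDocument;

  public class CopyIndex {
    public static void main(String[] args) throws Exception {
      HttpSolrServer src = new HttpSolrServer("http://oldhost:8983/solr");
      src.setParser(new XMLResponseParser()); // old Solr: use XML, not javabin
      HttpSolrServer dst = new HttpSolrServer("http://newhost:8983/solr");
      int start = 0, rows = 500;
      while (true) {
        SolrDocumentList page =
            src.query(new SolrQuery("*:*").setStart(start).setRows(rows)).getResults();
        if (page.isEmpty()) break;
        for (SolrDocument d : page) {
          SolrInputDocument in = new SolrInputDocument();
          for (String f : d.getFieldNames()) in.addField(f, d.getFieldValue(f));
          dst.add(in);
        }
        start += rows;
      }
      dst.commit();
    }
  }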
Oops, Mark, you said: If you use tomcat, this won't work in 4.2 or 4.2.1
Can you explain more about what won't work on Tomcat and what will change in 4.3?
2013/4/23 Mark Miller markrmil...@gmail.com
If you use jetty - which you should :) It's what we test with. Tomcat only
gets user testing.
If you
The request proxying does not work with tomcat without calling an explicit
flush in the code - jetty (which the unit tests are written against) worked
without this flush. The flush is added to 4.3.
- Mark
On Apr 23, 2013, at 2:02 PM, Furkan KAMACI furkankam...@gmail.com wrote:
Oopps, Mark
Sorry, but I want to make things clear in my mind. Is there any
documentation that explains Solr proxying? Is it the same thing as this: when
I use SolrCloud and I send a document to any of the nodes in my cluster, the
document will be routed to the leader of the appropriate shard. So you mean I
can
Well, you could copy to another field (using copyField) and then have an
analyzer with a LimitTokenCountFilterFactory that accepts only 1 token, and
then apply the EdgeNGramFilter to that one token. But you would have to
query explicitly against that other field. Since you are using dismax, you
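A sketch of the kind of field type Jack describes (the type name, tokenizer choice and gram sizes are just examples):
  <fieldType name="text_first_token_edge" class="solr.TextField">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="1"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="25"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>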
Yeah, I'm confused now too. Do all Solr nodes in a distributed cloud really
have to run in the same container type?? Why isn't it just raw HTTP for one
cloud node to talk to another? I mean each node could/should be on another
machine, right?
-- Jack Krupansky
-Original Message-
From:
This request proxying only applies to the read side. The write side forwards
updates around, it doesn't proxy requests.
- Mark
On Apr 23, 2013, at 2:33 PM, Furkan KAMACI furkankam...@gmail.com wrote:
Sorry, but I want to make things clear in my mind. Is there any
documentation that
On 4/23/2013 11:27 AM, gustavonasu wrote:
We migrated recently from Solr 1.4 to 3.6.1. In the new version we have
noticed that after some hours (around 8) the autocommit is taking more time
to be executed.
In the new version we have noticed that after some hours the autocommit
is
On 4/23/2013 10:14 AM, Mark Miller wrote:
If you use jetty - which you should :) It's what we test with. Tomcat only gets
user testing.
If you use tomcat, this won't work in 4.2 or 4.2.1, but probably will in 4.3
(we are voting on 4.3 now).
No clue on other containers.
- Mark
On Apr 23,
What version of Solr are you using? In Solr 4.2+, if you don't specify
numShards when creating the collection, the implicit document router will
be used. DIH running under the implicit document router most likely would
not distribute documents.
If this is the case you'll need to recreate the
When I read the SolrCloud wiki, it says something about a cluster
overseer. What is the role of that in read and write processes? How can I
see which node is the overseer in my cluster?
Hi,
I was unable to find more info about
LimitTokenCountFilterFactory
in the Solr wiki. Is there any other place to get a thorough description of
what it does?
Thanks.
Alex.
-Original Message-
From: Jack Krupansky j...@basetechnology.com
To: solr-user solr-user@lucene.apache.org
On Apr 23, 2013, at 2:53 PM, Furkan KAMACI furkankam...@gmail.com wrote:
When I read the SolrCloud wiki, it says something about a cluster
overseer. What is the role of that in read and write processes? How can I
see which node is the overseer in my cluster?
The Overseer's main
Always check the javadocs. There's a lot of info to be found there:
http://lucene.apache.org/core/4_0_0-BETA/analyzers-common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilterFactory.html
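For reference, in an analyzer chain it looks like this (the count is just an example):
  <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>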
-Original message-
From:alx...@aim.com alx...@aim.com
Sent: Tue 23-Apr-2013 21:06
Thanks for the explanation.
2013/4/23 Mark Miller markrmil...@gmail.com
On Apr 23, 2013, at 2:53 PM, Furkan KAMACI furkankam...@gmail.com wrote:
When I read the SolrCloud wiki, it says something about a cluster
overseer. What is the role of that in read and write processes? How can
I
On Apr 23, 2013, at 2:49 PM, Shawn Heisey s...@elyograg.org wrote:
What exactly is the 'request proxying' thing that doesn't work on tomcat? Is
this something different from basic SolrCloud operation where you send any
kind of request to any server and they get directed where they need to
Actually, it is Solr 4.1+ where the implicit router will be used if
numShards is not specified.
On Tue, Apr 23, 2013 at 2:52 PM, Joel Bernstein joels...@gmail.com wrote:
What version of Solr are you using? In Solr 4.2+, if you don't specify
numShards when creating the collection, the implicit
Hi Mark;
All in all, you're saying that when 4.3 is tagged in the repository (I mean,
when it is ready), this feature will work on Tomcat too in a stable version?
2013/4/23 Mark Miller markrmil...@gmail.com
On Apr 23, 2013, at 2:49 PM, Shawn Heisey s...@elyograg.org wrote:
What exactly is the
Thanks Erik. I remember Solr Flare :)
On Tue, Apr 23, 2013 at 11:56 AM, Erik Hatcher erik.hatc...@gmail.comwrote:
You might be out of luck with the SolrEntityProcessor I'd recommend
writing a simple little script that pages through /select?q=*:* from the
source Solr and write to the
At first I will work on 100 Solr nodes and I want to use Tomcat as
container and deploy Solr as a war. I just wonder what folks are using for
large systems and what kind of problems or benefits they have with their
choices.
2013/3/26 Otis Gospodnetic otis.gospodne...@gmail.com
Hi,
This
Ah cool, thanks for clarifying Chris - some of that multi-config
management stuff gets confusing but much clearer from your
description.
Cheers,
Tim
On Tue, Apr 23, 2013 at 11:36 AM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: Yes, you can effectively chroot all the configs for a
I apologize for the length of the previous message.
I do see a problem with spellcheck becoming faster (notice QTime). I also
see an increase in the number of cache hits if spellcheck=false is run one
time followed by the original spellcheck query. Seems like spellcheck=false
alters the
We have a 3rd release candidate for 4.3 being voted on now.
I have never tested this feature with Tomcat - only Jetty. Users have reported
it does not work with Tomcat. That leads one to think it may have a problem in
other containers as well.
A previous contributor donated a patch that
Tomcat should work just fine in most cases. The downside to Tomcat is that all
of the devs generally run Jetty since it's the default. Also, all of our unit
tests run against Jetty - in fact, a specific version of Jetty.
Usually, Solr will run fine in other webapps. Many, many users run Solr
If I have a ZooKeeper cluster for my HBase cluster already, can I use the
same ZooKeeper cluster for my SolrCloud too?
2013/4/23 Timothy Potter thelabd...@gmail.com
Ah cool, thanks for clarifying Chris - some of that multi-config
management stuff gets confusing but much clearer from your
As Timothy mentioned, Solr has the PostFilter mechanism, but it's not
really suited for ranking/sorting changes. To effect the ranking you'd need
to work with the TopScoreDocCollector which Solr does not give you access
to. If you're doing distributed search you'd need to account for the
ranking
Hi,
Recently I noticed a lot of Reordered DBQs detected messages in the logs. As
far as I checked in the logs it could be related to deleting documents, but
I'm not sure. Do you know what the reason for those messages is?
Apr 23, 2013 1:20:14 AM org.apache.solr.search.SolrIndexSearcher <init>
INFO: Opening
Thanks for the answer. It would be nice to find something that explains
using embedded Jetty vs. standalone Jetty, or Tomcat.
2013/4/23 Mark Miller markrmil...@gmail.com
Tomcat should work just fine in most cases. The downside to Tomcat is that
all of the devs generally run Jetty since it's the default.
Hi there,
Looking at one of my shards (about 1M docs) I see a lot of unique terms, more
than 8M, which is a significant part of my total term count. These are very
likely useless terms, binaries or other meaningless numbers that come with a
few of my docs.
I am totally fine with deleting them so these
James, Is there a way to determine how many times the collations were tried?
Is there a parameter that can be issued that can return this in debug
information? This would be very helpful.
Appreciate your help with this.
Thanks.
-- Sandeep
Yes - better use of existing resources. In this case, the chroot would
be helpful to keep the Solr znodes separate from HBase. Solr in steady state
doesn't put a lot of stress on ZooKeeper; for the most part my zk nodes are
snoozing.
On Tue, Apr 23, 2013 at 1:46 PM, Furkan KAMACI
On 4/23/2013 1:46 PM, Furkan KAMACI wrote:
If I have a ZooKeeper cluster for my HBase cluster already, can I use the
same ZooKeeper cluster for my SolrCloud too?
Yes, you can. It is strongly recommended that you use a chroot with the
zkHost parameter if you are sharing ZooKeeper. It's a really
I will use Nutch with MapReduce to crawl huge amounts of data and use
SolrCloud for many users with high response-time requirements. Actually I
wonder about the performance implications of separating the ZooKeeper cluster
versus using it for both HBase and Solr.
2013/4/23 Shawn Heisey s...@elyograg.org
On 4/23/2013 1:46 PM, Furkan
My 2 cents on this is if you have a choice, just stick with Jetty.
This article has some pretty convincing information:
http://www.openlogic.com/wazi/bid/257366/Power-Java-based-web-apps-with-Jetty-application-server
The folks over at OpenLogic definitely know their stuff when it comes
to
Is there any documentation that explains using Jetty as embedded or
standalone? I use Solr deployed on Tomcat, but after your message I will
consider Jetty. If we think about other issues, i.e. when I want to update my
Solr jars/wars etc. (this is just a foo example), are there any pros and cons
for Tomcat or
On 4/23/2013 1:52 PM, Furkan KAMACI wrote:
Thanks for the answer. It would be nice to find something that explains
using embedded Jetty vs. standalone Jetty, or Tomcat.
2013/4/23 Mark Miller markrmil...@gmail.com
Tomcat should work just fine in most cases. The downside to Tomcat is that
all of the devs
According to the answers here, for a huge crawling system with
high-response-time search on SolrCloud, I will try Jetty. If anyone has a
good reason against it, they can explain it here. By the way, Shawn, when I
read your answer I understood that I should choose embedded Jetty, is that
right?
If you enable debug-level logging for class
org.apache.solr.spelling.SpellCheckCollator, you should get a log message for
every collation it tries like this:
Collation: will return zzz hits.
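Exactly how you turn that on depends on your logging setup; with log4j it would be a line like this in log4j.properties (or the JUL equivalent in logging.properties):
  log4j.logger.org.apache.solr.spelling.SpellCheckCollator=DEBUG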
James Dyer
Ingram Content Group
(615) 213-4311
-Original Message-
From: SandeepM
Ok, thanks for this hint. I have two further questions to understand it
completely.
Setting up a custom request handler makes it easier to avoid all the mapping
parameters in the query, but it
would also be possible with one request handler and all the mapping in the
request arguments, right?
What about
On 4/23/2013 2:25 PM, Furkan KAMACI wrote:
Is there any documentation that explains using Jetty as embedded or
standalone? I use Solr deployed on Tomcat, but after your message I will
consider Jetty. If we think about other issues, i.e. when I want to update my
Solr jars/wars etc. (this is just a
Thanks for the answers. I will go with embedded Jetty for my SolrCloud. If
I come across something important I will share my experiences with
you.
2013/4/23 Shawn Heisey s...@elyograg.org
On 4/23/2013 2:25 PM, Furkan KAMACI wrote:
Is there any documentation that explains using Jetty as
Hi,
I want the minGramSize in my ngram filter to be the size of the word passed
in the query. How can I do that?
Because if I set minGramSize to 2 and type in abc, it gives me results for ab
and bc; I just want abc, or whatever the length of my word is - I want that
to be the minGramSize. How can I do
Hello.
I'm trying to figure out if Solr is going to work for a new project that I am
wanting to build. At its heart it's a book text-search application. Each
book is broken into chapters and each chapter is broken into lines. I want to
be able to search these books and return relevant
On Tue, Apr 23, 2013 at 3:51 PM, Marcin Rzewucki mrzewu...@gmail.com wrote:
Recently I noticed a lot of Reordered DBQs detected messages in the logs. As
far as I checked in the logs it could be related to deleting documents, but
I'm not sure. Do you know what the reason for those messages is?
For high
: Subject: Re: Too many close, count -1
Thanks for the details, nothing jumps out at me, but we're now tracking
this in SOLR-4753...
https://issues.apache.org/jira/browse/SOLR-4753
-Hoss
: . For any query it passes through the search handler and solr finally
: directs it to lucene Index Searcher. As results are matched and collected
: as TopDocs in lucene i want to inspect the top K Docs , reorder them by
: some logic and pass the final TopDocs to solr which solr may send
Hi Shawn,
Thanks for the answer.
If I understand correctly, autoWarmCount is the number of elements reused
from the cache for new searches. I guess that this isn't the problem, because
after the commit property increases under UPDATE HANDLERS (admin UI) I
can see the new docs in the search results.
Why are you bothering to use an Edge/NGram filter if you are setting the
minGramSize to the token size?!! I mean, why bother - just skip the
Edge/NGram filter and it would give the same result - setting minGramSize to
the token size means that there would be only a single gram and it would be
Perhaps he needs different analyzer chains for index and query. Create the edge
ngrams when indexing, but not when querying.
wunder
On Apr 23, 2013, at 2:44 PM, Jack Krupansky wrote:
Why are you bothering to use an Edge/NGram filter if you are setting the
minGramSize to the token size?!! I
When I read Lucidworks' Solr Guide I saw this: "Distributed searching does
not support the QueryElevationComponent, which configures the
top results for a given query regardless of Lucene's scoring."
Is that still true for SolrCloud?
There is no simple, obvious, and direct approach, right out of the box.
Sure, you can highlight passages of raw text, right out of the box, but that
won't give you chapters, pages, and line numbers. To do all of that, you
would have to either:
1. Add chapter, page, and line number as part of