Hi,
I am getting the following two errors in my Solr log file:
SEVERE: SolrIndexWriter was not closed prior to finalize(), indicates a bug --
POSSIBLE RESOURCE LEAK!!!
and
SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed
out:
I want to enable boosting for queries and search results. My dismax
request handler configuration is:
<requestHandler name="dismax" class="solr.DisMaxRequestHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="defType">dismax</str>
    <float name="tie">0.01</float>
    <str
Add score to the fl parameter.
fl=*,score
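A quick sketch of a request carrying that field list, built with Python's stdlib `urllib` (host, core path, and the query text are hypothetical; the qf boosts mirror the dismax config above):

```python
from urllib.parse import urlencode

# Hypothetical query; fl=*,score asks Solr to return all stored
# fields plus the relevance score for each hit.
params = {
    "q": "diamond ring",
    "defType": "dismax",
    "qf": "text^0.5 name^1.0 description^1.5",
    "fl": "*,score",
}
query_url = "http://localhost:8983/solr/select?" + urlencode(params)
print(query_url)
```

urlencode percent-escapes the `*` and `,`; Solr decodes them back on the other side.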
On 7/4/11 11:09 PM, Romi romijain3...@gmail.com wrote:
I am not returning score for the queries, as I suppose it should be
reflected
in the search results; that is, a doc having the query string in the
description field should come higher than a doc having the query string in
Will merely adding fl=score make a difference in the search results? I mean,
will I get the desired results now?
-
Thanks Regards
Romi
--
View this message in context:
http://lucene.472066.n3.nabble.com/configure-dismax-requesthandlar-for-boost-a-field-tp3137239p3139814.html
Sent from the Solr -
Hi,
I know I can add components to my request handler. In this situation facets
are dependent on their category. So if a user chooses the category TV:
Inch:
32 inch(5)
34 inch(3)
40 inch(1)
Resolution:
Full HD(5)
HD ready(2)
When a user searches for the category Computer:
CPU:
Intel(12)
AMD(10)
Did you add: fq={!geofilt} ??
On 7/3/11 11:14 AM, Thomas Heigl tho...@umschalt.com wrote:
Hello,
I just tried up(down?)grading our current Solr 4.0 trunk setup to Solr
3.3.0
as result grouping was the only reason for us to stay with the trunk.
Everything worked like a charm except for one of
Hello,
I have two fields TOWN and POSTALCODE and I want to concat those two in one
field to do faceting
My two fields are declared as follows:
<field name="TOWN" type="string" indexed="true" stored="true"/>
<field name="POSTALCODE" type="string" indexed="true" stored="true"/>
The concat field is declared as
The easiest way is to concat() the fields in SQL, and pass it to indexing
as one field already merged together.
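If the merge can't happen in SQL, it can equally be done in the indexing client before the document is sent to Solr. A minimal sketch, using the TOWN and POSTALCODE field names from the question (the merged field name and sample values are hypothetical):

```python
# Merge TOWN and POSTALCODE into one extra field to facet on,
# alongside the two original fields.
def build_doc(row):
    return {
        "TOWN": row["TOWN"],
        "POSTALCODE": row["POSTALCODE"],
        # single concatenated value for faceting
        "TOWN_POSTALCODE": f'{row["TOWN"]} {row["POSTALCODE"]}',
    }

doc = build_doc({"TOWN": "Paris", "POSTALCODE": "75001"})
print(doc["TOWN_POSTALCODE"])  # Paris 75001
```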
Thanks,
On 7/5/11 1:12 AM, elisabeth benoit elisaelisael...@gmail.com wrote:
Hello,
I have two fields TOWN and POSTALCODE and I want to concat those two in
one
field to do faceting
This is taxonomy/index design...
One way is to have a series of fields by category:
TV - tv_size, resolution
Computer - cpu, gpu
Solr can have as many fields as you need, and if you do not store them
into the index they are ignored.
So if a user picks TV, you pass these to Solr:
Thanks Bill,
That's exactly what I mean. But first I do a request to get the right
facet fields for a category.
So when a user searches for TV, I do a request to a DB to get tv_size and
resolution.
The next step is to
add this to my query like this: facet.field=tv_size&facet.field=resolution.
I thought
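Repeating facet.field per dynamic field is just query-string repetition; urlencode on a list of tuples handles it. A sketch using the per-category fields mentioned in this thread (the category field name and the DB lookup, here a dict, are hypothetical):

```python
from urllib.parse import urlencode

# Stand-in for the per-category lookup done against the DB.
category_facets = {
    "TV": ["tv_size", "resolution"],
    "Computer": ["cpu", "gpu"],
}

def facet_query(category, q="*:*"):
    # one facet.field parameter per dynamic field for this category
    params = [("q", q), ("fq", f"category:{category}"), ("facet", "true")]
    params += [("facet.field", f) for f in category_facets[category]]
    return "/solr/select?" + urlencode(params)

url = facet_query("TV")
print(url)
```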
Are you using the DIH?? You can use the transformer to concat the two fields
--
View this message in context:
http://lucene.472066.n3.nabble.com/faceting-on-field-with-two-values-tp3139870p3139934.html
Hi All
A quick doubt on the index files of Lucene and Solr.
I had an older version of lucene (with UIMA) till recently, and had an index
built thus.
I shifted to Solr (3.3, with UIMA) and tried to use the same index. While
everything else seems fine, Solr does not seem to recognize the
Which Lucene version were you using?
Regards,
Tommaso
2011/7/5 Sowmya V.B. vbsow...@gmail.com
Hi All
A quick doubt on the index files of Lucene and Solr.
I had an older version of lucene (with UIMA) till recently, and had an
index
built thus.
I shifted to Solr (3.3, with UIMA)..and tried
I was using 2.4 or 2.5. It was a 2yr old Lucene version.
On Tue, Jul 5, 2011 at 10:07 AM, Tommaso Teofili
tommaso.teof...@gmail.comwrote:
Which Lucene version were you using?
Regards,
Tommaso
2011/7/5 Sowmya V.B. vbsow...@gmail.com
Hi All
A quick doubt on the index files of Lucene
I'm pretty sure my original query contained a distance filter as well. Do I
absolutely need to filter by distance in order to sort my results by it?
I'll write another unit test including a distance filter as soon as I get a
chance.
Cheers,
Thomas
On Tue, Jul 5, 2011 at 9:04 AM, Bill Bell
hmmm... that sounds interesting and brings me somewhere else.
we are actually reindexing data every night, but the whole process is done by
Talend (reading and formatting data from a database) and this makes me
wonder if we should use Solr instead to do this.
in this case, concat two fields,
Is there any memory leak when I update the index at the master node?
Here is the stack trace.
o.a.solr.servlet.SolrDispatchFilter - java.lang.OutOfMemoryError: Java heap
space
at
org.apache.solr.handler.ReplicationHandler$FileStream.write(ReplicationHandler.java:1000)
at
Hi,
Let's say I have got 10^10 documents in an index, with the unique id being the
document id, which is assigned to each of them from 1 to 10^10.
Now I want to search for a particular query string in a subset of these
documents, say document ids 100 to 1000.
The question here is: will Solr be able to search
Range query
On Tue, Jul 5, 2011 at 4:37 AM, Jame Vaalet jvaa...@capitaliq.com wrote:
Hi,
Let's say I have got 10^10 documents in an index, with the unique id being the
document id, which is assigned to each of them from 1 to 10^10.
Now I want to search for a particular query string in a subset of these
Thanks.
But does this range query just limit the universe logically, or does it have a
mechanism to limit it physically as well? Do we gain any query time by using
the range query?
Regards,
JAME VAALET
-Original Message-
From: shashi@gmail.com [mailto:shashi@gmail.com] On
On Tue, Jul 5, 2011 at 08:46, Romi romijain3...@gmail.com wrote:
will merely adding fl=score make a difference in search results? I mean, will I
get the desired results now?
The fl parameter stands for field list and allows you to configure
in a request which result fields should be returned.
If
On Mon, Jul 4, 2011 at 17:19, Juan Grande juan.gra...@gmail.com wrote:
Hi Marian,
I guess that your problem isn't related to the number of results, but to the
component's configuration. The configuration that you show is meant to set
up an autocomplete component that will suggest terms from
On Tue, Jul 5, 2011 at 10:21, elisabeth benoit
elisaelisael...@gmail.com wrote:
...
so do you think the DIH (which I just discovered) would be appropriate to do
the whole process (read a database, read fields from XML contained in some
of the database columns, add information from CSV
2011/7/5 Chengyang atreey...@163.com:
Is there any memory leak when I update the index at the master node?
Here is the stack trace.
o.a.solr.servlet.SolrDispatchFilter - java.lang.OutOfMemoryError: Java heap
space
You don't need a memory leak to get an OOM error in Java. It might just
Why katta stores index on HDFS? Any advantages?
--
View this message in context:
http://lucene.472066.n3.nabble.com/what-is-the-diff-between-katta-and-solrcloud-tp2275554p3139983.html
The limit will always be logical if you have all documents in the same index.
But filters are very efficient when working with a subset of your index,
especially if you reuse the same filter for many queries, since there is a cache.
If your subsets are always the same subsets, maybe you could use
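The reusable-filter idea can be sketched as a range filter query on the id field (assuming a numeric id field; a plain string field compares lexicographically, so ids would need zero-padding):

```python
from urllib.parse import urlencode

# Restrict the search to documents with ids 100..1000.
# fq results are cached, so repeating the same filter across
# many queries is cheap after the first use.
params = {
    "q": "some query string",
    "fq": "id:[100 TO 1000]",
}
url = "/solr/select?" + urlencode(params)
print(url)
```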
On Tue, Jul 5, 2011 at 11:24, Marian Steinbach mar...@sendung.de wrote:
On Mon, Jul 4, 2011 at 17:19, Juan Grande juan.gra...@gmail.com wrote:
...
You can learn how to correctly configure a spellchecker here:
http://wiki.apache.org/solr/SpellCheckComponent. Also, I'd recommend to take
a look
nice...where?
I'm trying to figure out 2 things:
1) How to create an analyzer that corresponds to the one in the schema.xml.
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <filter class="solr.WordDelimiterFilterFactory"
Look at searchblox
On Monday, July 4, 2011, Li Li fancye...@gmail.com wrote:
hi all,
I want to provide full-text search for some small websites.
It seems cloud computing is popular now, and it will save costs
because it doesn't need an engineer to maintain
the machine.
For
I have got two applications
1. website
The website will enable any user to search the document repository ,
and the set they search on is known as website presentable
2. windows service
The windows service will search on all the documents in the repository
for fixed set of key
Not yet - I've played around with support in this issue in the past though:
https://issues.apache.org/jira/browse/SOLR-1945
On Jul 4, 2011, at 6:04 AM, Kiwi de coder wrote:
hi,
I am wondering, does the SolrJ @Field annotation support embedded child objects? e.g.
class A {
@field
string
Ok,
the very short question is:
Is there a way to submit the analyzer response so that solr already knows
what to do with that response? (that is, which fields are to be treated as
payloads, which are tokens, etc...)
Chris Hostetter-3 wrote:
can you explain a bit more about what your goal is
From what you tell us, I guess a separate index for website docs would be the
best. If you fear that requests from the windows service would cripple your
website performance, why not have a totally separate index on another server,
and have your website documents indexed in both indexes?
Pierre
I got the point that to boost search results I have to sort by score.
But as in solrconfig for the dismax request handler I use
<str name="qf">
  text^0.5 name^1.0 description^1.5
</str>
because I want docs having the query string in the description field to come
higher in the search results.
But what I am
Let's see the results of adding debugQuery=on to your URL. Are you getting
any documents back at all? If not, then your query isn't getting any
documents to group.
You haven't told us much about what you're trying to do, you might want to
review: http://wiki.apache.org/solr/UsingMailingLists
I got the point that to boost search
results I have to sort by score.
But as in solrconfig for the dismax request handler I use
<str name="qf">
  text^0.5 name^1.0 description^1.5
</str>
because I want docs having the query string in the description field
to come higher in
search results.
but
When the query string is q=gold^2.0 ring (boosting gold) and
qt=standard, I get
results for gold ring, but when qt=dismax I get no results.
Why? Please
explain.
q=gold^2.0 ring(boost gold)&defType=dismax would return a document that
contains exactly "gold^2.0 ring(boost gold)" in it.
dismax is
I suspect the following should do (1). I'm just not sure about file
references as in stopInit.put("words", "stopwords.txt"). (2) should
clarify.
1)
class SchemaAnalyzer extends Analyzer {
    @Override
    public TokenStream tokenStream(String fieldName, Reader reader) {
Not yet an answer to 2) but this is where and how Solr initializes the
Analyzer defined in the schema.xml into :
//org.apache.solr.schema.IndexSchema
// Load the Tokenizer
// Although an analyzer only allows a single Tokenizer, we load a list
to make sure
// the configuration is ok
But in case the website docs contribute around 50% of the entire docs, why
recreate the indexes? Don't you think it's redundancy?
Can two web apps (Solr instances) share a single index file to search on
without interfering with each other?
Regards,
JAME VAALET
Software Developer
EXT :8108
Then what should I do to get the required result? I.e., if I want to boost gold,
then which query type should I use?
-
Thanks Regards
Romi
--
View this message in context:
http://lucene.472066.n3.nabble.com/How-to-boost-a-querystring-at-query-time-tp3139800p3140703.html
Hi Juan,
Thank you very much. Your code worked awesomely and was really
helpful. Great start to the day... :)
--
View this message in context:
http://lucene.472066.n3.nabble.com/Reading-data-from-Solr-MoreLikeThis-tp3130184p3140715.html
I wouldn't share the same index across two Solr webapps, as they could step on
each other's toes.
In this scenario, I think having two Solr instances replicating from the same
master is the way to go, to allow you to scale your load from each application
separately.
Erik
On Jul
Then what should I do to get the
required result? I.e., if I want to boost gold,
then which query type should I use?
If you want to boost the keyword 'gold', you can use the bq parameter:
defType=dismax&bq=someField:gold^100
See the other parameters :
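A sketch of the suggested bq usage (someField is a placeholder, as in the reply above); the boost query adds score to docs that match it without changing which docs match the main query:

```python
from urllib.parse import urlencode

params = {
    "q": "gold ring",
    "defType": "dismax",
    # hypothetical field name; boosts docs whose someField mentions gold
    "bq": "someField:gold^100",
}
boost_url = "/solr/select?" + urlencode(params)
print(boost_url)
```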
It is redundancy. You have to balance the cost of redundancy with the cost in
performance with your web index requested by your windows service. If your
windows service is not too aggressive in its requests, go for shards.
Pierre
-Original Message-
From: Jame Vaalet
The solr download link does not point to or mention nightly builds.
Are they out there?
the answer to 2) is new IndexSchema(solrConf, schema).getAnalyzer();
On Tue, Jul 5, 2011 at 2:48 PM, Gabriele Kahlout
gabri...@mysimpatico.comwrote:
Not yet an answer to 2) but this is where and how Solr initializes the
Analyzer defined in the schema.xml into :
Hello Friends,
I am a newbie to Solr and trying to integrate Apache Nutch 1.3 and Solr 3.2
. I did the steps explained in the following two URL's :
http://wiki.apache.org/nutch/RunningNutchAndSolr
http://thetechietutorials.blogspot.com/2011/06/how-to-build-and-start-apache-solr.html
I
On 07/05/2011 04:08 PM, Benson Margulies wrote:
The solr download link does not point to or mention nightly builds.
Are they out there?
http://lmgtfy.com/?q=%2Bsolr+%2Bnightlybuildsl=1
--
Auther of the book Plone 3 Multimedia - http://amzn.to/dtrp0C
Tom Gross
The reason for the email is not that I can't find them, but because
the project, I claim, should be advertising them more prominently on
the web site than buried in a wiki.
Where I come from, an lmgtfy link is rather hostile.
Oh, and you might want to fix the spelling of 'Author' in your own
On 7/4/2011 12:51 AM, Romi wrote:
Shawn when i reindex data using full-import i got:
*_0.fdt 3310
_0.fdx 23
_0.frq 857
_0.nrm 31
_0.prx 1748
_0.tis 350
_1.fdt 3310
_1.fdx 23
_1.fnm 1
_1.frq
Hello,
On Mon, 2011-07-04 at 13:51 +0200, Jame Vaalet wrote:
What would be the maximum size of a single Solr index file for optimum
search time?
How do you define optimum? Do you want the fastest possible response time
at any cost, or do you have a specific response time
Hello all,
Is there a good way to get the hit count of a search?
Example query:
textField:solr AND documentId:1000
Say the document with id = 1000 has solr 13 times in it. Is there any way to
extract that number [13] in the response? I know we can return the score,
which is loosely related to hit
On 7/5/11 1:37 PM, Lox wrote:
Ok,
the very short question is:
Is there a way to submit the analyzer response so that solr already knows
what to do with that response? (that is, which field are to be treated as
payloads, which are tokens, etc...)
Check this issue:
Is there a good way to get the hit count of a search?
Example query:
textField:solr AND documentId:1000
Say document with Id = 1000 has solr 13 times in the
document. Any way to
extract that number [13] in the response?
Looks like you are looking for term frequency info:
Two separate
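Per-document term frequencies come from the TermVectorComponent, assuming the field is indexed with termVectors="true" and the component is enabled on the request handler. A sketch of the request parameters for the query in this thread:

```python
from urllib.parse import urlencode

params = {
    "q": "textField:solr AND documentId:1000",
    "tv": "true",     # turn on the TermVectorComponent
    "tv.tf": "true",  # include per-document term frequencies
    "fl": "documentId",
}
tv_url = "/solr/select?" + urlencode(params)
print(tv_url)
```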
Can you let me know when and where you were getting the error? A screen-shot
will be helpful.
On Tue, Jul 5, 2011 at 8:15 AM, serenity keningston
serenity.kenings...@gmail.com wrote:
Hello Friends,
I am a newbie to Solr and trying to integrate Apache Nutch 1.3 and Solr 3.2
. I did the
Hello all
I'm using Solr 3.2 and am trying to index a document whose primary key is
built from multiple columns selected from an Oracle DB.
I'm getting the following error:
java.lang.IllegalArgumentException: deltaQuery has no column to resolve to
declared primary key pk='ordersorderline_id'
Yes indeed, that is what I was missing. Thanks Ahmet!
On Tue, Jul 5, 2011 at 12:48 PM, Ahmet Arslan iori...@yahoo.com wrote:
Is there a good way to get the hit count of a search?
Example query:
textField:solr AND documentId:1000
Say document with Id = 1000 has solr 13 times in the
Hi, guys,
We have more than 1000 attributes scattered across 700K docs. Each doc might
have about 50 attributes. I would like Solr to return up to 20 facets for
every search, and each search can return facets dynamically depending on
the matched docs. Has anyone done that before? That'll be awesome
Sorry for being vague. Okay so these scores exist on an external server and
they change often enough. The score for each returned user is actually
dependent on the user doing the searching (if I'm making the request, and
you make the same request, the scores are different). So what I'm doing is
You are using the crawl job so you must specify the URL to your Solr instance.
The newly updated wiki has your answer:
http://wiki.apache.org/nutch/bin/nutch_crawl
Hello Friends,
I am a newbie to Solr and trying to integrate Apache Nutch 1.3 and Solr 3.2
. I did the steps explained in the
Please find attached screenshot
On Tue, Jul 5, 2011 at 11:53 AM, Way Cool way1.wayc...@gmail.com wrote:
Can you let me know when and where you were getting the error? A
screen-shot
will be helpful.
On Tue, Jul 5, 2011 at 8:15 AM, serenity keningston
serenity.kenings...@gmail.com wrote:
Sorry for my ignorance, but do you have any lead in the code on where to look
for this? Also, I'd still need a way of finding out how long it's been in
the cache, because I don't want it to regenerate every time. I'd want it to
regenerate only if it's been in the cache for less than 6 hours (or
YH -
One technique (that the Smithsonian employs, I believe) is to index
the field names for the attributes into a separate field, facet on that first,
and then facet on the fields you'd like from that response in a second request
to Solr.
There's a basic hack here so the indexing
: I have two fields TOWN and POSTALCODE and I want to concat those two in one
: field to do faceting
As others have pointed out, copyField doesn't do a concat, it just
adds the field values from the source field to the dest field (so with
those two <copyField/> lines you will typically get two
@Test
public void testUpdate() throws IOException,
        ParserConfigurationException, SAXException, ParseException {
    Analyzer analyzer = getAnalyzer();
    QueryParser parser = new QueryParser(Version.LUCENE_32, "content", analyzer);
    Query allQ = parser.parse("*:*");
You can issue a new facet search as you drill down from your UI.
You have to specify the fields you want to facet on and they can be
dynamic.
Take a look at recent threads here on taxonomy faceting for help.
Also, look here[1]
[1] http://wiki.apache.org/solr/SimpleFacetParameters
On Tue, 5 Jul
: follow a recipe. So I went to the solr site, downloaded solr and
: tried to follow the tutorial. In the example folder of solr, using
: java -jar start.jar I got:
:
: 2011-07-04 13:22:38.439:INFO::Logging to STDERR via org.mortbay.log.StdErrLog
: 2011-07-04
Thanks Erik and Darren.
A pre-faceting component (post querying) would be ideal, though maybe with a
little performance penalty there. :-) I will try to implement one if no one
has done so.
Darren, I did look at the taxonomy faceting thread. My main concern is that
I want to have dynamic facets to be
After your writer.commit you need to reopen your searcher to see the changes.
Mike McCandless
http://blog.mikemccandless.com
On Tue, Jul 5, 2011 at 1:48 PM, Gabriele Kahlout
gabri...@mysimpatico.com wrote:
@Test
public void testUpdate() throws IOException,
ParserConfigurationException,
and how do you do that? There is no reopen method
On Tue, Jul 5, 2011 at 8:09 PM, Michael McCandless
luc...@mikemccandless.com wrote:
After your writer.commit you need to reopen your searcher to see the
changes.
Mike McCandless
http://blog.mikemccandless.com
On Tue, Jul 5, 2011 at 1:48
Sorry, you must reopen the underlying IndexReader, and then make a new
IndexSearcher from the reopened reader.
Mike McCandless
http://blog.mikemccandless.com
On Tue, Jul 5, 2011 at 2:12 PM, Gabriele Kahlout
gabri...@mysimpatico.com wrote:
and how do you do that? There is no reopen method
On
Hi all,
Could someone tell me what the OR syntax in Solr is and how to use it in a
search query?
I tried:
fq=sometag:1+sometag:5
fq=sometag:[1+5]
fq=sometag:[1OR5]
fq=sometag:1+5
and many more but impossible to get what I want.
Thanks for advance
Still won't work (same as before).
@Test
public void testUpdate() throws IOException,
        ParserConfigurationException, SAXException, ParseException {
    Analyzer analyzer = getAnalyzer();
    QueryParser parser = new QueryParser(Version.LUCENE_32, "content", analyzer);
    Query allQ
Hi,
These two are valid and equivalent:
- fq=sometag:1 OR sometag:5
- fq=sometag:(1 OR 5)
Also, beware that fq defines a filter query, which is different from a
regular query (http://wiki.apache.org/solr/CommonQueryParameters#fq). For
more details on the query syntax see
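urlencode takes care of escaping the spaces and parentheses in such a filter; a quick sketch using the second valid form above:

```python
from urllib.parse import urlencode

# Match documents where sometag is 1 or 5.
params = {"q": "*:*", "fq": "sometag:(1 OR 5)"}
or_url = "/solr/select?" + urlencode(params)
print(or_url)
```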
Thanks for your response. I'll check this.
--
View this message in context:
http://lucene.472066.n3.nabble.com/The-OR-operator-in-a-query-tp3141843p3141916.html
Thanks for your response.
Am 05.07.2011 13:53, schrieb Erick Erickson:
Let's see the results of adding debugQuery=on to your URL. Are you getting
any documents back at all? If not, then your query isn't getting any
documents to group.
I didn't get any docs back. But they have been in the
You can follow the links below to setup Nutch and Solr:
http://thetechietutorials.blogspot.com/2011/06/solr-and-nutch-integration.html
http://thetechietutorials.blogspot.com/2011/06/how-to-build-and-start-apache-solr.html
http://wiki.apache.org/nutch/RunningNutchAndSolr
Of course, more details
On Mon, Jul 4, 2011 at 11:54 AM, Per Newgro per.new...@gmx.ch wrote:
I've tried to add the params for group=true and group.field=myfield by using
the SolrQuery.
But the result is null. Do I have to configure something? In the wiki part for
field collapsing I couldn't
find anything.
No specific
Re-open doesn't work, but open does.
@Test
public void testUpdate() throws IOException,
        ParserConfigurationException, SAXException, ParseException {
    Analyzer analyzer = getAnalyzer();
    QueryParser parser = new QueryParser(Version.LUCENE_32, "content", analyzer);
    Query
Re-open does work, but you cannot ignore its return value! See the
javadocs for an example.
On Tue, Jul 5, 2011 at 3:10 PM, Gabriele Kahlout
gabri...@mysimpatico.com wrote:
Re-open doens't work, but open does.
@Test
public void testUpdate() throws IOException,
On Tue, Jul 5, 2011 at 1:11 PM, Way Cool way1.wayc...@gmail.com wrote:
Sorry, Serenity, somehow I don't see the attachment.
On Tue, Jul 5, 2011 at 11:23 AM, serenity keningston
serenity.kenings...@gmail.com wrote:
Please find attached screenshot
On Tue, Jul 5, 2011 at 11:53 AM, Way
Please suggest..
On Mon, Jul 4, 2011 at 10:37 PM, fire fox fyr3...@gmail.com wrote:
From my exploration so far, I understood that we can opt for Solr straightaway
if the index changes are kept to a minimum. However, mine is absolutely the
opposite. I'm still vague about the perfect solution for the
Hello,
With an inverted index the term is the key, and the documents are the
values. Is it still possible, however, that given a document id I can get the
terms indexed for that document?
--
Regards,
K. Gabriele
Hi Gabriele,
I'm not sure I understand your problem, but the TermVectorComponent may fit
your needs:
http://wiki.apache.org/solr/TermVectorComponent
http://wiki.apache.org/solr/TermVectorComponentExampleEnabled
Ludovic.
-
Jouve
France.
It depends on how many queries you'd be making per second. I know for us, I
have a gradient of index sizes. The first machine, which gets hit most
often is about 2.5 gigs. Most of the queries would only ever need to hit
this index but then I have a bigger indices of about 5-10 gigs each which
I had looked at term vectors but don't understand how they would solve my problem.
Consider the following index entries:
t0, doc0, doc1
t1, doc0
From the 2nd entry we know that t1 is only present in doc0.
Now, my problem: given doc0, how can I know which terms occur in it (t0 and
t1) (without storing
sounds like the Luke request handler will get what you're after:
http://wiki.apache.org/solr/LukeRequestHandler
http://wiki.apache.org/solr/LukeRequestHandler#id
cheers,
rob
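A sketch of that Luke request for a single document's indexed terms (using doc0 from the example above as the uniqueKey value; the core path is hypothetical):

```python
from urllib.parse import urlencode

# Ask the LukeRequestHandler to inspect one document by uniqueKey.
luke_url = "/solr/admin/luke?" + urlencode({"id": "doc0"})
print(luke_url)  # /solr/admin/luke?id=doc0
```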
On Tue, Jul 5, 2011 at 3:59 PM, Gabriele Kahlout
gabri...@mysimpatico.com wrote:
Hello,
With an inverted
You can do this, kind of, but it's a lossy process. Consider indexing
"the cat in the hat strikes back", with "the", "in" being stopwords and
"strikes" getting stemmed to "strike". At very best, you can reconstruct
that the original doc contained "cat", "hat", "strike", "back". Is
that sufficient?
And it's a very
: Correct me if I am wrong: In a standard distributed search with
: QueryComponent, the first query sent to the shards asks for
: fl=myUniqueKey or fl=myUniqueKey,score. When the response is being
: generated to send back to the coordinator, SolrIndexSearcher.doc (int i,
: Set<String> fields)
Thank you for your answer. I downloaded Solr from the link you suggested
and now it is OK; I can see the administration page. But it is strange
that a download from the Solr site does not work. Thanks also to Way Cool.
I don't know why, but it happened the same to me in the past (with
3.2).
: Maybe what I really need is a query parser that does not do disjunction
: maximum at all, but somehow still combines different 'qf' type fields with
: different boosts on each field. I personally don't _necessarily_ need the
: actual disjunction max calculation, but I do need combining of
: In solr, is it possible to 'chain' copyfields so that you can copy the value
: of one into another?
...
: <copyField source="name" dest="autocomplete"/>
: <copyField source="autocomplete" dest="ac_spellcheck"/>
:
: Point being, every time I add a new field to the autocomplete, I want it to
:
On Tue, Jul 5, 2011 at 5:13 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: Correct me if I am wrong: In a standard distributed search with
: QueryComponent, the first query sent to the shards asks for
: fl=myUniqueKey or fl=myUniqueKey,score. When the response is being
: generated to
Ah, thanks Hoss - I had meant to respond to the original email, but
then I lost track of it.
Via pseudo-fields, we actually already have the ability to retrieve
values via FieldCache.
fl=id:{!func}id
But using CSF would probably be better here - no memory overhead for
the FieldCache
patches are always welcome!
On Tue, Jul 5, 2011 at 3:04 PM, Yonik Seeley yo...@lucidimagination.com wrote:
On Mon, Jul 4, 2011 at 11:54 AM, Per Newgro per.new...@gmx.ch wrote:
i've tried to add the params for group=true and group.field=myfield by using
the SolrQuery.
But the result is null.
: But i am not finding any effect in my search results. do i need to do some
: more configuration to see the effect.
posting the solrconfig.xml section for your dismax handler is a helpful
start, but to provide any meaningful assistance to you we need a lot more
info than just that...
* an
Gabriele,
I created a patch that does this about a year ago. See
https://issues.apache.org/jira/browse/SOLR-1837. It was written for Solr
1.4 and is based upon the Document Reconstructor in Luke. The patch adds a
link to the main solr admin page to a docinspector page which will
reconstruct