The thing is, if you use stemming you would want it applied in both the
index and query phases. So once you have stemmed your data at index
time, changing the query parser not to stem wouldn't help, right?
At our company we handled this by appending some unseen character to a
word before stemming
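For what it's worth, later Solr releases have a built-in equivalent of that unseen-character trick: KeywordMarkerFilterFactory with a protected-words file keeps listed terms from being stemmed, identically at index and query time. A sketch (the field type and file names here are made up for illustration, not taken from anyone's actual schema):

```xml
<!-- schema.xml sketch: words listed in protwords.txt are marked as
     keywords and skipped by the stemmer, so index- and query-time
     analysis stay in sync without any sentinel characters. -->
<fieldType name="text_stem" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
    <filter class="solr.PorterStemFilterFactory"/>
  </analyzer>
</fieldType>
```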
Thank you all for this advice; you are obviously right that there is no
need for any copyField instructions to get what we expect.
I will run some tests using facets or the LukeRequestHandler, which seem
much more useful in my case.
No response!! Bumping it up.
*Pranav Prakash*
temet nosce
Twitter http://twitter.com/pranavprakash | Blog http://blog.myblive.com |
Google http://www.google.com/profiles/pranny
On Fri, Dec 9, 2011 at 14:11, Pranav Prakash pra...@gmail.com wrote:
Hi Group,
I would like to have highlighting
Hey everybody,
I'm having an issue importing Decimal numbers from my MySQL DB to Solr.
In case there is anybody with some advice, I will start by trying to
explain my problem.
According to my findings, I think the lack of an explicit mapping for a
Decimal value in the schema.xml
is causing some issues I'm
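One thing worth checking is whether schema.xml declares an explicit numeric field for the DECIMAL column rather than letting it fall through to a catch-all. A sketch of what that might look like (the field name "price" and the precision settings are assumptions, not from the poster's schema):

```xml
<!-- schema.xml sketch: map a MySQL DECIMAL column onto an explicit
     trie double field instead of relying on a dynamic/default field. -->
<fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8"
           omitNorms="true" positionIncrementGap="0"/>
<field name="price" type="tdouble" indexed="true" stored="true"/>
```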
Hi, Erick. Thanks for your advice.
Here's another test. Add debugQuery=on to your query and post the
results.
Here is for 2K rows:
<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">53153</int>
<lst name="params">
<str name="debugQuery">on</str>
<str name="fl">*,score</str>
<str name="shards">
Hello guys,
the default search UI doesn't work for me. http://localhost:8983/solr/browse
gives me an HTTP 404 error.
I'm using Solr 1.4. Any idea how to fix this?
Remi
Have you looked here http://wiki.apache.org/solr/VelocityResponseWriter ?
/Martin
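In case the wiki page alone doesn't resolve it: /browse only exists if solrconfig.xml registers the handler and the Velocity response writer (the example configs in later releases ship it, but not every 1.4 setup does). A sketch of the relevant pieces; the exact writer class name and required contrib jars vary by release, so treat this as a shape, not a drop-in:

```xml
<!-- solrconfig.xml sketch: register the /browse search UI backed by
     the VelocityResponseWriter. Template names match the stock example. -->
<queryResponseWriter name="velocity" class="solr.VelocityResponseWriter"/>
<requestHandler name="/browse" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="wt">velocity</str>
    <str name="v.template">browse</str>
    <str name="v.layout">layout</str>
  </lst>
</requestHandler>
```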
On Mon, Dec 19, 2011 at 12:44 PM, remi tassing tassingr...@yahoo.comwrote:
Hello guys,
the default search UI doesn't work for me.
http://localhost:8983/solr/browse gives me an HTTP 404 error.
I'm using
Hi Darul, it actually depends on whether you want the top terms in the documents
that match a query (in which case you'll need something like the faceting
approach you mention) or the top terms for the field in general,
regardless of a specific query; in that case the easiest way to go is with
the
The first case you mentioned is the one I am looking for. I do not want top
terms over the whole index but top terms for a specific query result set.
Faceting on my field appears to be the only way to get relevant top terms
for the documents that match a query.
Thanks for LukeRequestHandler and
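For anyone finding this thread later, the faceting approach discussed above boils down to a single request along these lines (core path and field name are invented; rows=0 because only the term counts are wanted, not the documents):

```
http://localhost:8983/solr/select?q=your+query&rows=0&facet=true&facet.field=content&facet.limit=20&facet.mincount=1
```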
Hello all
I'm seeing the following in my web server log file:
[2011-12-19 08:57:00.016] [customersIndex] webapp=/solr path=/dataimport
params={command=delta-import&commit=true&optimize=true} status=0 QTime=3
[2011-12-19 08:57:00.018] Starting Delta Import
[2011-12-19 08:57:00.018] Read
Hi,
I'm trying to test fuzzy queries with the Solr Admin Analysis page at
/solr/admin/analysis.jsp, but it seems to split query terms with the
fuzzy (~) operator into the term and the distance value, e.g. 'ddog~0.5'
gets converted to 'ddog' and '0.5'. Obviously that's not what is wanted.
Is it possible to test
You may want to take a look at the autocomplete component using Solr with
RankingAlgorithm. It allows you to do edge as well as infix search. You can
get more information and also try out a demo from here:
http://solr-ra.tgels.org/solr-ra-autocomplete.jsp
I try to test fuzzy queries with the Solr Admin Analysis page at
/solr/admin/analysis.jsp, but it seems to split query terms with the
fuzzy (~) operator to term and distance value, e.g. 'ddog~0.5' gets
converted to 'ddog' and '0.5'. Obviously that's not what is wanted. Is
it possible to
I can see why you are confused. Re-reading it, I'm confused too.
Here's my dilemma.
I am trying to index some one hundred or so books, all in EPUB format. The
goal is to provide research functions, i.e. for people who need to
reference specific quotes, pages and books for their writing.
I don't know if EPUB
I have a SOLR instance running as a proxy (no data of its own); it just uses a
multicore setup where each core has a shards parameter in the search handler.
So my setup looks like this:
solr_proxy/
multicore/
/public - solrconfig.xml has shards pointing to some other
SOLR
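A proxy core like the one described usually just bakes the shard list into the handler defaults; a sketch of what the /public core's solrconfig.xml might contain (host names and core paths are placeholders):

```xml
<!-- solrconfig.xml sketch for a data-less proxy core: every query to
     /select fans out to the listed shards and merges the results. -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="shards">shard1.example.com:8983/solr/public,shard2.example.com:8983/solr/public</str>
  </lst>
</requestHandler>
```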
On 12/19/2011 7:09 AM, Mark Juszczec wrote:
Hello all
I'm seeing the following in my web server log file:
[2011-12-19 08:57:00.016] [customersIndex] webapp=/solr path=/dataimport
params={command=delta-import&commit=true&optimize=true} status=0 QTime=3
[2011-12-19 08:57:00.018] Starting Delta
Uhm, either I misunderstand your question or you're doing
a lot of extra work for nothing.
The whole point of sharding is exactly to collect the top N docs
from each shard and merge them into a single result. So if
you want 10 docs, just specify rows=10. Solr will query all
the shards, get the
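In other words, a single request like the following (shard addresses are placeholders) already returns the globally merged top 10; no extra client-side merging is needed:

```
http://localhost:8983/solr/select?q=foo&rows=10&shards=host1:8983/solr,host2:8983/solr
```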
A programmer has a problem. She tried to solve it with regular expressions.
Now she has two problems.
You could *try* PatternReplaceCharFilterFactory. Note that this is
applied to the entire input string *before* tokenization. I'm thinking
you could write a clever regex that transformed
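To make the suggestion concrete, a char filter sits in front of the tokenizer in the analyzer chain; the pattern below (collapsing "wi fi" into "wifi") is invented purely to show the shape of the config, not taken from the poster's use case:

```xml
<!-- schema.xml sketch: PatternReplaceCharFilterFactory rewrites the
     raw input string before the tokenizer ever sees it. -->
<analyzer>
  <charFilter class="solr.PatternReplaceCharFilterFactory"
              pattern="wi\s+fi" replacement="wifi"/>
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
</analyzer>
```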
Why is this? And what happened to
http://lucene.472066.n3.nabble.com/Re-Field-Collapsing-disable-cache-td481783.html
?
I don't see why basic caching of request - result shouldn't be the same?
I know I could put a layer on top, but I'd like to use a built in cache if
possible.
My full data import stopped working all of a sudden. AFAIK I have not made
any changes that would cause this.
Everything is deleted from the index, but no files are added anymore. I don't
receive any errors either... :S
STARTING via Cygwin: cd /cygdrive/c/My\
Pravin,
When using the file-based spell checking option, it will try to give you
suggestions for every query term regardless of whether or not they are in your
spelling dictionary. Getting the behavior you want would seem to be a worthy
enhancement, but I don't think it is currently supported.
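For reference, the file-based setup under discussion looks roughly like this (the file path and index directory are assumptions; the parameter names are the standard ones for FileBasedSpellChecker):

```xml
<!-- solrconfig.xml sketch: a spellchecker that builds its dictionary
     from a plain word list instead of an indexed field. -->
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">file</str>
    <str name="classname">solr.FileBasedSpellChecker</str>
    <str name="sourceLocation">spellings.txt</str>
    <str name="characterEncoding">UTF-8</str>
    <str name="spellcheckIndexDir">./spellcheckerFile</str>
  </lst>
</searchComponent>
```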
Hi Peter,
the most probable cause is that your database query returns no results.
Have you run the query that DIH is using directly on your database?
In the output you can see that DIH has fetched 0 rows from the DB. Maybe
your query contains a restriction that suddenly had this effect - like a
Many thanks. I took the suggestion of using a copyField and it did the
trick.
--Ronen
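For anyone landing here from the archives, the fix being described is along these lines (the source and destination field names are invented for illustration):

```xml
<!-- schema.xml sketch: funnel several source fields into one catch-all
     field so a single-field query matches across all of them. -->
<field name="all_text" type="text" indexed="true" stored="false" multiValued="true"/>
<copyField source="title" dest="all_text"/>
<copyField source="body" dest="all_text"/>
```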
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Chantal,
I reduced my data-config.xml to a bare minimum:
<dataConfig>
<dataSource driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
url="jdbc:sqlserver://localhost:1433;databaseName=tt" user="sa"
password="dfgjLJSFSD" />
<document name="weddinglocations">
<entity name="location" query="select
Hi,
I am using HTTP and JSON to add my documents to Solr. Now, I have defined a
special field in the Solr schema which defines what type of document is
being added. This field needs to be inserted into the JSON doc that I receive
from my caller. Is there a way to do this?
Thanks!
Dipti
Hi Dipti,
If you are receiving the JSON within your Java code, you can try a
library like GSON [1] to manipulate the JSON before sending it to Solr.
- Anuj
1. http://code.google.com/p/google-gson/
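Whatever the language, the idea is just parse, set a field, re-serialize; a minimal Python sketch of the same approach (the field name "doc_type" is an assumption, not something Solr requires):

```python
import json

def tag_document(raw_json, doc_type):
    """Insert a document-type field into an incoming JSON document
    before posting it to Solr. The 'doc_type' field name is only
    an example; use whatever field your schema defines."""
    doc = json.loads(raw_json)
    doc["doc_type"] = doc_type
    return json.dumps(doc)

incoming = '{"id": "42", "title": "hello"}'
tagged = tag_document(incoming, "article")
print(tagged)
```

The original fields pass through untouched; only the extra type field is added before the document is forwarded to Solr.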
On Mon, Dec 19, 2011 at 11:54 PM, Dipti Srivastava
dipti.srivast...@apollogrp.edu wrote:
Hi,
Uhm, either I misunderstand your question or you're doing
a lot of extra work for nothing.
The whole point of sharding is exactly to collect the top N docs
from each shard and merge them into a single result. So if
you want 10 docs, just specify rows=10. Solr will query all
the shards,
I see what you are asking. This is an interesting question. It seems
inefficient for Solr to apply the requested rows to all shards only to
discard most of the results on merge. That would consume lots of
resources that are not used in the final result set.
On 12/19/2011 04:32 PM, ku3ia wrote:
Uhm,
I had a similar requirement in my project, where a user might ask for up to
3000 results. What I did was change SolrIndexSearcher.doc(int, Set) to retrieve
the unique key from the field cache instead of retrieving it as a stored field
from disk. This resulted in a massive speed improvement for
project2501 wrote
I see what you are asking. This is an interesting question. It seems
inefficient for Solr to apply the
requested rows to all shards only to discard most of the results on merge.
That would consume lots of resources not used in the final result set.
Yeah, like Erick says
: Oh, yes on windows, using java 1.6 and Solr 1.4.1.
Apparently no one has ever written a FAQ entry on this, so I just added one:
https://wiki.apache.org/solr/FAQ#Why_doesn.27t_my_index_directory_get_smaller_.28immediately.29_when_i_delete_documents.3F_force_a_merge.3F_optimize.3F
: Are you on
On 12/16/2011 12:44 AM, Shawn Heisey wrote:
I am seeing exceptions from some code I have written using SolrJ. I
have placed it into a pastebin:
http://pastebin.com/XnB83Jay
No reply in three days, does nobody have any ideas for me?
Thanks,
Shawn
Hello All,
I'm having an issue with the way the WordDelimiterFilter parses compound words.
My field declaration is simple, looks like this:
<analyzer type="index">
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.WordDelimiterFilterFactory" preserveOriginal="1"/>
According to http://lucene.apache.org/java/3_4_0/fileformats.html, the
FNMVersion changed from -2 to -3 in Lucene 3.4. Is it possible that the new
master is actually running 3.4, and the new slave is running 3.2? (This is just
a wild guess.)
-Michael
That did it. I was running 3.3 on one and 3.4 on another.
Thanks!
Eric
On Mon, Dec 19, 2011 at 11:49 PM, Michael Ryan mr...@moreover.com wrote:
According to http://lucene.apache.org/java/3_4_0/fileformats.html, the
FNMVersion changed from -2 to -3 in Lucene 3.4. Is it possible that the new
Hi,
I understand that we can specify parameters in ExtractingRequestHandler in
solrconfig.xml to capture HTML tags of a particular type and map them to
desired solr fields, like something below.
<str name="capture">div</str>
<str name="fmap.div">mysolrfield</str>
The above setting will capture content in
Thanks Justin. The reason I decided to ask is how easy it is to
bootstrap a system like Munin. This of course depends on how fast one needs
it. That is, if SOLR already exposes certain stats via JMX-accessible beans,
that will make it easier and faster to set up a tool that can read from
JMX.
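For what it's worth, publishing those beans is a one-line addition to solrconfig.xml (enabling a JMX agent on the JVM side is a separate step):

```xml
<!-- solrconfig.xml sketch: expose Solr statistics as JMX MBeans so an
     external monitor (Munin, etc.) can poll them. -->
<jmx />
```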