Hello,
When I rebuild the spellchecker index (by optimizing the data index or
by calling cmd=rebuild), the spellchecker index itself is not optimized. I
cannot even delete the old index files on the filesystem, because they are
locked by the Solr server. I have to stop the Solr server (Resin) to
optimize.
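For reference, a spellchecker that rebuilds itself whenever the main index is optimized can be configured roughly like this in solrconfig.xml; the field and directory names below are placeholders, and buildOnOptimize may require a recent Solr version:

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <!-- field and directory names are placeholders -->
    <str name="field">spell</str>
    <str name="spellcheckIndexDir">./spellchecker</str>
    <!-- rebuild the spellcheck index whenever the main index is optimized -->
    <str name="buildOnOptimize">true</str>
  </lst>
</searchComponent>
```

Note that this still rebuilds inside the running server process, so it does not by itself release the file locks described above.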
Thanks for the update, I'll have to find another way then :s.
Marc
Date: Mon, 14 Jun 2010 13:44:30 -0700
From: hossman_luc...@fucit.org
To: solr-user@lucene.apache.org
Subject: Re: Copyfield multi valued to single value
: Is there a way to copy a multivalued field to a single value by
Hello Hoss,
So far we have been using the default SearchHandler.
I also looked into a solution proposed on this mailing list by Geert-Jan
Brits, using extra sort fields and functions to pick out the maximum.
This, however, proved rather cumbersome to integrate into our SolrJ client,
and I also have
Hi,
I am using Solr with Apache Tomcat. I have some .html
files (containing the articles) stored at an XYZ location. How can I index these
.html files in Solr?
Regards,
Siddharth
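One common approach is to post each file to the ExtractingRequestHandler (Solr Cell), which runs Tika over the HTML and indexes the extracted text. A minimal sketch, assuming the stock /update/extract handler is enabled and the core runs at the default example URL (the host, port, and file names are assumptions; adjust for a Tomcat context path):

```shell
# Build the curl command that posts one HTML file to Solr's
# ExtractingRequestHandler; the file name (minus .html) becomes the uniqueKey.
SOLR_URL="http://localhost:8983/solr/update/extract"

extract_cmd() {
  f="$1"
  id="${f%.html}"
  printf 'curl "%s?literal.id=%s&commit=true" -F "file=@%s"\n' "$SOLR_URL" "$id" "$f"
}

# Dry run over the files at the XYZ location; pipe each line to sh to execute.
for f in article1.html article2.html; do
  extract_cmd "$f"
done
```

Committing on every file is wasteful for a big batch; it is usually better to drop commit=true from the loop and send one commit at the end.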
Thanks,
moving it to a direct child worked.
Olivier
2010/6/14 Chris Hostetter hossman_luc...@fucit.org
: In solrconfig, in the update/extract requestHandler I specified
: <str name="tika.config">./tika-config.xml</str>, where tika-config.xml is in the
: conf directory (same as solrconfig).
can you show
Dear list,
this sounds stupid, but how do I get a full working copy of Solr?
What I have tried so far:
- started with LucidWorks Solr. Installs fine, runs fine, but has an old Tika
version and can only handle some PDFs.
- changed to Solr trunk. Installs fine, runs fine, but Luke 1.0.1 argues
Hi,
I tried downloading Solr 1.4.1 from the site, but it shows an empty
directory. Where did you get Solr 1.4.1 from?
Regards,
Raakhi
On Tue, Jun 8, 2010 at 10:35 PM, Jean-Sebastien Vachon
js.vac...@videotron.ca wrote:
Hi All,
I've been running some tests using 6 shards each one
Okay, thanks. Good idea with mod_rewrite =)
--
View this message in context:
http://lucene.472066.n3.nabble.com/how-to-use-q-string-in-solrconfig-xml-tp861870p896902.html
Sent from the Solr - User mailing list archive at Nabble.com.
They used to be in the branches if I recall correctly, but you're right, they
aren't there anymore.
Maybe someone else can explain why... it looks like they restructured the
repository for the Solr/Lucene merge.
On 2010-06-15, at 4:54 AM, Rakhi Khatwani wrote:
Hi,
I tried downloading
On Tue, Jun 15, 2010 at 12:58 AM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
- changed to Solr branch_3x. Installs fine, runs fine, Luke works fine, but
extraction with /update/extract (ExtractingRequestHandler) only returns
the metadata, not the content.
Sounds like
I am Sarfaraz, working on a Search Engine
project which is based on Nutch and Solr. I am trying to implement a
new search algorithm for this engine.
Our search engine crawls the web and stores the documents as
large strings in the database, indexed by their URLs.
Now
to implement my
Have you taken a look at Solr's TermVector component? It's probably
what you want:
http://wiki.apache.org/solr/TermVectorComponent
didier
On Tue, Jun 15, 2010 at 8:38 AM, sarfaraz masood
sarfarazmasood2...@yahoo.com wrote:
I am Sarfaraz, working on a Search Engine
project which is based on
The TermVectorComponent can return tf/idf:
http://wiki.apache.org/solr/TermVectorComponent
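A minimal sketch of the wiring, assuming the stock example setup: the component has to be registered in solrconfig.xml and the field indexed with termVectors="true" before the tv.* parameters return anything. The handler name below is a placeholder:

```xml
<!-- solrconfig.xml: hook TermVectorComponent into a handler (names are placeholders) -->
<searchComponent name="tvComponent" class="solr.TermVectorComponent"/>
<requestHandler name="/tvrh" class="solr.SearchHandler">
  <lst name="defaults">
    <bool name="tv">true</bool>
  </lst>
  <arr name="last-components">
    <str>tvComponent</str>
  </arr>
</requestHandler>
```

A request such as /tvrh?q=text:engine&tv.tf=true&tv.df=true&tv.tf_idf=true should then include per-term tf, df, and tf*idf in the response, per the wiki page above.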
On Jun 15, 2010, at 9:38 AM, sarfaraz masood wrote:
I am Sarfaraz, working on a Search Engine
project which is based on Nutch Solr. I am trying to implement a
new Search Algorithm for this engine.
Hoss,
Thanks for the response.
I was able to get multiple dist queries working; however, I've noticed
another problem.
When using
fq=_query_:{!frange l=0 u=25 v=$qa}
qa=dist(2,44.844833,-93.03528,latitude,longitude)
it returns 9,975 documents. When I change the upper limit to 250 it
returns
Are you using Ubuntu by any chance?
It's a somewhat common problem ...
http://stackoverflow.com/questions/2854356/java-classpath-problems-in-ubuntu
I'm unsure if this has been resolved but a similar thing happened to me on a
recent VMware image in a dev environment. It worked everywhere
Got it. Thanks!
--
View this message in context:
http://lucene.472066.n3.nabble.com/Custom-faceting-question-tp868015p897390.html
From what I've seen so far, using separate fields for latitude and
longitude, especially with multiple values of each, does not work correctly
in all situations.
The hole in my understanding is how Solr knows how to pair a latitude and
longitude field _back_ into a POINT.
I can say that it
Hi All,
We are trying to implement Solr for our newspaper site's search.
To build out the index with all the articles published so far, we are
running a script which sends requests to the DataImportHandler with different
dates.
What we are seeing is that the request is dispatched to the Solr server, but it's not
I don't think so; you probably want to look into this setup of distributed
search plus sharding:
http://www.lucidimagination.com/Community/Hear-from-the-Experts/Articles/Scaling-Lucene-and-Solr#d0e410
It will get you high availability plus better scalability.
Wilson Man | Principal Consultant | Liferay,
Hey guys,
Does anyone know how to patch stuff on Windows? I am trying to patch
Solr with patch SOLR-236, but it keeps erroring out with this message:
C:\solr\example\webapps>patch solr.war ..\..\SOLR-236-trunk.patch
patching file solr.war
Assertion failed: hunk, file ../patch-2.5.9-src/patch.c, line
I'm pretty sure you need to be running the patch against a checkout of the
trunk sources, not a generated .war file. Once you've done that you can use the
build scripts to make a new war.
-Kallin Nagelberg
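That workflow might look roughly like the following; the repository URL reflects the post-merge trunk layout and is an assumption, as is the ant target:

```
svn checkout http://svn.apache.org/repos/asf/lucene/dev/trunk solr-trunk
cd solr-trunk
patch -p0 < SOLR-236-trunk.patch
cd solr
ant dist
```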
-Original Message-
From: Moazzam Khan [mailto:moazz...@gmail.com]
Sent:
Thanks. I finally patched it (I think). I got the source from SVN and
applied the patch using a Windows port. A caveat to those who want to
do this on Windows: open the patch file in WordPad and save it as a
different file to replace Unix line breaks with DOS line breaks.
Otherwise, the patch program
Hi all,
I wrote a small app using SolrJ and Solr. The app has a small wrapper,
written in Groovy, that handles the reindexing. The Groovy script
generates the Solr docs, and then the Java code deletes and recreates the
data
In a singleton EJB, we do this in the post-construct
Performing wildcard phrase searches can be tricky. Spend some time figuring
this one out.
1. To perform a wildcard search on a phrase, it is very important to escape the
SPACE, so that Solr treats it as a single phrase.
Ex: Citibank NA => Citibank\ NA
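If the escaping needs to happen programmatically rather than by hand, a one-line shell helper (a sketch, not a Solr API) does the same thing:

```shell
# Replace every space with backslash-space so the query parser
# sees the phrase as a single token.
escape_spaces() {
  printf '%s' "$1" | sed 's/ /\\ /g'
}

escape_spaces "Citibank NA"   # -> Citibank\ NA
```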
You can use
I'm new to Solr so I expect that I'm making some newbie error. I run my
data-config.xml file through the DataImportHandler Development Console and I
see all the results of the xpath queries scroll past in the debug pane. It
processes all the content without reporting an error in the terminal
DIH skips the documents which have errors, and it also shows which field caused
the error. But which document was skipped and which field caused the error
are only shown in the server console. Is there a way to retrieve that info in
the browser, or to read it from the console itself?
Thanks,
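There is no browser-side report of the skipped documents that I know of, but DIH can at least be told to continue past bad rows and to log each row it processes. A data-config.xml sketch; the entity, url, and field names here are hypothetical:

```xml
<!-- Survive bad rows (onError) and log progress per row (LogTransformer). -->
<entity name="article" processor="XPathEntityProcessor"
        url="articles.xml" forEach="/articles/article"
        onError="continue"
        transformer="LogTransformer"
        logTemplate="imported article id=${article.id}"
        logLevel="info">
  <field column="id" xpath="/articles/article/id"/>
</entity>
```

The /dataimport status response does report rolled-up counts (rows fetched, documents skipped), but the per-document detail still lands only in the server log.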
Can someone please explain what the inform method should accomplish? Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/SolrCoreAware-tp899064p899064.html
Can someone explain how to register a SolrEventListener?
I am actually interested in using the SpellCheckerListener, and it appears
that it would build/rebuild a spellchecker index on commit and/or optimize,
but according to the wiki the only events that can be listened for are
firstSearcher and