Jon,
looks like you're (just) missing the transformer=RegexTransformer in
your entity-definition, like documented here:
http://wiki.apache.org/solr/DataImportHandler#RegexTransformer
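For reference, a minimal sketch of such an entity (the entity name, query and field columns here are hypothetical, for illustration only):

```xml
<!-- sketch of a DIH entity using RegexTransformer; names are assumptions -->
<entity name="docs" query="select raw_title from docs"
        transformer="RegexTransformer">
  <field column="title" sourceColName="raw_title" regex="^(.*?):.*$"/>
</entity>
```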
Regards
Stefan
On Wed, Feb 9, 2011 at 9:16 PM, Jon Drukman j...@cluttered.com wrote:
I am trying to use the
Hi,
I added “slf4j-log4j12-1.5.5.jar” and “log4j-1.2.15.jar” to
$CATALINA_HOME/webapps/solr/WEB-INF/lib ,
then deleted the library “slf4j-jdk14-1.5.5.jar” from
$CATALINA_HOME/webapps/solr/WEB-INF/lib,
then created a directory $CATALINA_HOME/webapps/solr/WEB-INF/classes.
and created
You can do a lot with function queries.
Only you know what the domain-specific requirements are, so you should write
application-layer code to modify the Solr query based on the profile of the
user who is searching.
For example, for the 1950s movie lover you could do:
q=goo
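A sketch of what such a boost might look like with a bf function query (the release_year field name is an assumption):

```
q=movies&defType=dismax&bf=recip(abs(sub(release_year,1950)),1,1000,1000)
```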
Have you tried to start Tomcat with
-Dlog4j.configuration=$CATALINA_HOME/webapps/solr/WEB-INF/classes/log4j.properties
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
On 10. feb. 2011, at 09.41, Xavier Schepler wrote:
Hi,
I added “slf4j-log4j12-1.5.5.jar” and
Hi all,
I'm attempting to set up a simple Solr Cloud, right now almost directly from
the tutorial at: http://wiki.apache.org/solr/SolrCloud
I'm attempting to set up a simple 1 shard cloud, across two servers. I'm not
sure I understand the architecture behind this, but what I'm after is two
Hi,
If the replication window is too small to allow a new searcher to warm
and close the current searcher before the new one needs to be in
place, then the slaves continuously have a high load, and potentially
an OOM error. We've noticed this in our environment where we have
several facets on
Thanks for your response.
How could I do that?
From: Jan Høydahl jan@cominvent.com
Sent: Thu Feb 10 11:01:15 CET 2011
To: solr-user@lucene.apache.org
Subject: Re: Tomcat6 and Log4j
Have you tried to start Tomcat with
Add it to the CATALINA_OPTS, on Debian systems you could edit
/etc/default/tomcat
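For example, a line like the following in /etc/default/tomcat6 (the exact webapp path is an assumption for your install):

```shell
# hypothetical addition to /etc/default/tomcat6
CATALINA_OPTS="$CATALINA_OPTS -Dlog4j.configuration=file:///var/lib/tomcat6/webapps/solr/WEB-INF/classes/log4j.properties"
```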
On Thursday 10 February 2011 12:27:59 Xavier SCHEPLER wrote:
-Dlog4j.configuration=$CATALINA_HOME/webapps/solr/WEB-INF/classes/log4j.properties
--
Markus Jelsma - CTO - Openindex
I added it to /etc/default/tomcat6.
What happened is that the same error message appeared twice in
/var/log/tomcat6/catalina.out, as if the same file was loaded twice.
--
All e-mails sent from the
Oh, now looking at your log4j.properties, I believe it's wrong. You declared
INFO as rootLogger but you use SOLR.
-log4j.rootLogger=INFO
+log4j.rootLogger=SOLR
try again
On Thursday 10 February 2011 09:41:29 Xavier Schepler wrote:
Hi,
I added “slf4j-log4j12-1.5.5.jar” and
Oh, and for sharing purposes; we use a configuration like this one. It'll have
an info and error log and stores them next to Tomcat's own logs in
/var/log/tomcat on Debian systems (or whatever catalina.base is on other
distros).
log4j.rootLogger=DEBUG, info, error
Yes thanks. This works fine:
log4j.rootLogger=INFO, SOLR
log4j.appender.SOLR=org.apache.log4j.DailyRollingFileAppender
log4j.appender.SOLR.file=/home/quetelet_bdq/logs/bdq.log
log4j.appender.SOLR.datePattern='.'yyyy-MM-dd
log4j.appender.SOLR.layout=org.apache.log4j.PatternLayout
Hi,
SolrCloud does not currently handle the indexing side at all. So you'll need to
set up replication to tell Solr that node B should be a replica of node A.
http://wiki.apache.org/solr/SolrReplication
After you do this, you can push a document to node A, wait a minute to let it
replicate to
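A minimal slave-side sketch for node B's solrconfig.xml, per the SolrReplication wiki (the master host, port and poll interval here are assumptions):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <!-- point at node A's replication handler -->
    <str name="masterUrl">http://nodeA:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```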
Hi,
I'm using Solr 1.4.1 and trying to share a schema among two cores.
Here is what I did:
solr.xml:
<solr persistent="false">
  <cores adminPath="/admin/cores" defaultCoreName="prod"
         shareSchema="true">
    <core name="prod" instanceDir="prod">
      <schema>conf/schema.xml</schema>
    </core>
On Jan 20, 2011, at 12:49 AM, Grijesh wrote:
Hi Mark,
I was just working on SolrCloud for my R&D and I got a question in my mind.
Since in SolrCloud the configuration files are being shared on all Cloud
instances and If I have different configuration files for different cores
then how
(may have double posted...apologies if it is)
It seems like when solr home is absent, Solr attempts to look in a few
other places to load its configuration. It will try to look for solrconfig.xml
on the classpath as well. It doesn't seem to make any attempt to find
solr.xml
Hello everybody,
I use Solr with Tomcat, and I have this problem:
I must restart Solr without restarting Tomcat, and I must do this
operation from the shell.
I tried this syntax, but it doesn't give a result:
curl -u user:password
Hi,
I have so far just tested the examples and got a N by M cluster running. My
feedback:
a) First of all, a major update of the SolrCloud Wiki is needed, to clearly
state what is in which version, what are current improvement plans and get rid
of outdated stuff. That said I think there are
Is there a detailed, perhaps alphabetical & hierarchical table of
contents for all the wikis on the Solr site?

Sent from Yahoo! Mail on Android
Yes but it's not very useful:
http://wiki.apache.org/solr/TitleIndex
On Thursday 10 February 2011 16:14:40 Dennis Gearon wrote:
Is there a detailed, perhaps alphabetical & hierarchical table of
contents for all the wikis on the Solr site?
Sent from Yahoo! Mail on Android
--
Hi,
When a slave is replicating from the master instance, it appears a
write lock is created. Will this lock cause issues with writing to the
master while the replication is occurring or does SOLR have some
queuing that occurs to prevent the actual write until the replication
is complete? I've
Jenny,
look inside the documentation of the manager application, I'm guessing you
haven't activated the cross context and privileges in the server.xml to get
this running.
Or does it work with HTML in a browser?
http://localhost:8080/manager/html
paul
Le 10 févr. 2011 à 16:07, Jenny
Ok I found the solution:
First of all, schema is an attribute of the core tag, so it becomes:
<core name="prod" instanceDir="prod" schema="conf/schema.xml"/>
Also make sure the conf directory is in your classpath or relative to the path
from where you are launching Solr.
It is NOT relative to the solr.xml path.
Her URL has /text/ in it for some reason, replace that with html
like Paul has:
curl -u user:password http://localhost:8080/manager/html/reload?path=/solr
Alternatively if you have JMX access get the mbean with
domain: Catalina
name: //localhost/solr
j2eeType: WebModule
J2EEServer:
If I execute this command in shell:
curl -u user:password
http://localhost:8080/manager/html/reload?path=/solr
I get this result:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>401 Unauthorized</title>
<style type="text/css">
<!--
BODY
Hi Mark, hi all,
I just got a customer request to conduct an analysis on the state of
SolrCloud.
He wants to see SolrCloud part of the next solr 1.5 release and is willing
to sponsor our dev time to close outstanding bugs and open issues that may
prevent the inclusion of SolrCloud in the next
Exactly Jenny,
*you are not authorized*
means the request cannot be authorized to execute.
Means some calls failed with a security error.
manager/html/reload - for browsers by humans
manager/reload - for curl
(at least that's my experience)
paul
Le 10 févr. 2011 à 17:32, Jenny Arduini a
I don't know: either way works for me via cURL.
I can only say double check your typing (make sure you're passing the
user/password you think you are), and double check server.xml.
Oh, the tomcat roles were tightened up a bit in tomcat 7. If you're using
tomcat 7 (especially if you've
Anyone else having problems with the Solr users list suddenly deciding
everything you send is spam? For the last couple of days I've had this
happening from gmail, and as far as I know I haven't changed anything that
would give my mails a different spam score which is being exceeded
according to
I tried posting from gmail this morning and had it rejected. When I
resent as plaintext, it went through.
On Thu, Feb 10, 2011 at 11:51 AM, Erick Erickson
erickerick...@gmail.com wrote:
Anyone else having problems with the Solr users list suddenly deciding
everything you send is spam? For the
I have had the same problem .. my facet pivots were returning results
something like
Cat-A (3)
Item X
Item Y
only 2 items instead of 3
or even
Cat-B (2)
no items
zero items instead of 2
so the parent level count didn't match the returned child pivots ..
but once I set the
I am using the edismax query parser -- its awesome! works well for
standard dismax type queries, and allows explicit fields when
necessary.
I have hit a snag when people enter something that looks like a windows path:
<lst name="params">
  <str name="q">F:\path\to\a\file</str>
</lst>
this gets parsed as:
<str
: extending edismax. Perhaps when F: does not match a given field, it
: could auto escape the rest of the word?
that's actually what Yonik initially said it was supposed to do, but when I
tried to add a param to let you control which fields would be supported
using the : syntax I discovered it
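Client-side, that auto-escape idea can be sketched roughly like this (the KNOWN_FIELDS set and the regex are assumptions for illustration, not edismax's actual behavior):

```python
import re

# hypothetical set of fields; in practice you'd read these from the schema
KNOWN_FIELDS = {"title", "body", "foo_s"}

def escape_unknown_field_colons(q: str) -> str:
    """Backslash-escape 'prefix:' when prefix is not a real field, so a
    query like F:\\path\\to\\a\\file is treated as one literal term."""
    def repl(m):
        prefix = m.group(1)
        if prefix in KNOWN_FIELDS:
            return m.group(0)   # real field: keep the colon as a field separator
        return prefix + r"\:"   # not a field: escape the colon
    return re.sub(r"(\w+):", repl, q)

print(escape_unknown_field_colons(r"F:\path\to\a\file"))  # F\:\path\to\a\file
print(escape_unknown_field_colons("title:solr"))          # title:solr
```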
Just for the first part: There's no problem here, the write lock
is to keep simultaneous *writes* from occurring, the slave reading
the index doesn't enter in to it. Note that in Solr, when segments
are created in an index, they are write-once. So basically what
happens when a slave replicates is
Hmmm, never noticed that link before, thanks!
Which shows you how much I can ignore that's perfectly
obvious G...
Works like a champ.
Erick
On Thu, Feb 10, 2011 at 2:05 PM, Shane Perry thry...@gmail.com wrote:
I tried posting from gmail this morning and had it rejected. When I
resent as
ah -- that makes sense.
Yonik... looks like you were assigned to it last week -- should I take
a look, or do you already have something in the works?
On Thu, Feb 10, 2011 at 2:52 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: extending edismax. Perhaps when F: does not match a given
Hi,
I've completed the quick & dirty tutorials of SolrCloud ( see
http://wiki.apache.org/solr/SolrCloud ).
The whole concept of SolrCloud and ZooKeeper look indeed very promising.
I also found some info about a 'ZooKeeperComponent' - from this component it
should be possible to configure ZooKeeper
On Thu, Feb 10, 2011 at 2:52 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: extending edismax. Perhaps when F: does not match a given field, it
: could auto escape the rest of the word?
that's actually what Yonik initially said it was supposed to do
Hmmm, not really.
essentially that
On Thu, Feb 10, 2011 at 3:05 PM, Ryan McKinley ryan...@gmail.com wrote:
ah -- that makes sense.
Yonik... looks like you were assigned to it last week -- should I take
a look, or do you already have something in the works?
I got busy on other things, and don't have anything in the works.
I
Hi,
I've followed the guide; it worked perfectly for me.
( I had to execute ant compile - not ant example, but it's not likely that was
your problem ).
2011/1/2 siddharth sid_invinci...@yahoo.co.in
I seemed to have figured out the problem. I think it was an issue with the
JAVA_HOME being set. The
: essentially that FOO:BAR and FOO\:BAR would be equivalent if FOO is
: not the name of a real field according to the IndexSchema
:
: That part is true, but doesn't say anything about escaping. And for
: some unknown reason, this no longer works.
that's the only part I was referring to.
-Hoss
On Thu, Feb 10, 2011 at 5:00 PM, Stijn Vanhoorelbeke
stijn.vanhoorelb...@gmail.com wrote:
I've completed the quick & dirty tutorials of SolrCloud ( see
http://wiki.apache.org/solr/SolrCloud ).
The whole concept of SolrCloud and ZooKeeper look indeed very promising.
I found also some info about a
foo_s:foo\-bar
is a valid lucene query (with only a dash between the foo and the
bar), and presumably it should be treated the same in edismax.
Treating it as foo_s:foo\\-bar (a backslash and a dash between foo and
bar) might cause more problems than it's worth?
I don't think we should
On Thu, Feb 10, 2011 at 5:51 PM, Ryan McKinley ryan...@gmail.com wrote:
foo_s:foo\-bar
is a valid lucene query (with only a dash between the foo and the
bar), and presumably it should be treated the same in edismax.
Treating it as foo_s:foo\\-bar (a backslash and a dash between foo and
bar)
Hi,
I've read running optimize is similar to running defrag on a hard disk.
Deleted
docs are removed and segments are reorganized for faster searching.
I have a couple questions.
Is optimize necessary if I never delete documents? I build the index every
hour but we don't delete in between
Let's see what the queries are. If you're searching for single
terms that don't match many docs that's one thing. If you're looking
at many terms that match many documents, I'd expect larger numbers.
Unless you're hitting the document cache and not searching at all
Best
Erick
On Thu, Feb
Does optimize merge all segments into 1 segment on the master after the build?
Or after the build, there's only 1 segment.
thanks,
Tri
From: Erick Erickson erickerick...@gmail.com
To: solr-user@lucene.apache.org
Sent: Thu, February 10, 2011 5:08:44 PM
Optimize will do just what you suggest, although there's a
parameter whose name escapes me controlling how many
segments the index is reduced to, so this is configurable.
It's also possible, but kind of unlikely, that the original indexing
process would produce only one segment. You could tell
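For what it's worth, the parameter whose name escapes Erick is presumably maxSegments; an explicit optimize message can carry it, e.g.:

```xml
<!-- sketch of an update message; maxSegments is my assumption for
     the parameter being described -->
<optimize maxSegments="2"/>
```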
On Thu, Feb 10, 2011 at 4:08 PM, Stijn Vanhoorelbeke
stijn.vanhoorelb...@gmail.com wrote:
Hi,
I've done some stress testing onto my solr system ( running in the ec2 cloud
).
From what I've noticed during the tests, the QTime drops to just 1 or 2 ms (
on a index of ~2 million documents ).
It would be good if someone added the hits= on group=true in the log.
We are using this parameter and have built a really cool SOLR log analyzer
(that I am pushing to release to open source).
But it is not as effective if we cannot get group=true to output hits= in
the log - since 90% of our
I am not sure I understand your question.
But you can boost the result based on one value over another value.
Look at bf
http://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_change_the_score_of_a_document_based_on_the_.2Avalue.2A_of_a_field_.28say.2C_.22popularity.22.29
On Wed, Feb 9, 2011
do you mean queryResultCache? You can comment out the related section in
solrconfig.xml
see http://wiki.apache.org/solr/SolrCaching
2011/2/8 Isan Fulia isan.fu...@germinait.com:
Hi,
My solrConfig file looks like
config
updateHandler class=solr.DirectUpdateHandler2 /
requestDispatcher
Hi,
You can comment out all sections in solrconfig.xml pointing to a cache.
However, there is a cache deep in Lucene - the FieldCache - that can't be
commented out. This cache will always jump into the picture
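For reference, the cache declarations in solrconfig.xml that can be commented out look roughly like this (sizes are the stock example values):

```xml
<!-- comment these out to disable the Solr-level caches -->
<filterCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="256"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="256"/>
<documentCache class="solr.LRUCache" size="512" initialSize="512"/>
```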
If I need to do such things, I restart the whole tomcat6 server to flush ALL