Hello, list.
I found some strange results using the standard analyzer.
I've applied it at both query and index time, but when I use the schema browser
to see the common values for the field, I find:
spa 1558
s.p.a. 833
Which is pretty strange, since I've used the analyzer to remove the dots
from the
Hi,
I'm trying to write a testing suite to gauge the performance of solr
searches. To do so, I'd like to be able to find out what keywords
will get me search results. Is there any way to programmatically do this
with Luke? I'm trying to figure out what it exposes, but I'm not
seeing this.
Yes, but even when I run it, no snapshot is created; I don't see how I can fix
it.
Bill Au wrote:
You only need to run the rsync daemon on the master.
Bill
On Wed, Sep 17, 2008 at 10:54 AM, sunnyfr [EMAIL PROTECTED] wrote:
Hi Raghu,
Thanks, it's clear now.
Kashyap, Raghu
On Mon, 22 Sep 2008 15:46:54 +0530
Jacob Singh [EMAIL PROTECTED] wrote:
Hi,
I'm trying to write a testing suite to gauge the performance of solr
searches. To do so, I'd like to be able to find out what keywords
will get me search results. Is there any way to programmatically do this
with
Jacob, take a peek at
contrib/miscellaneous/src/java/org/apache/lucene/misc/HighFreqTerms.java
This is under Lucene (svn checkout).
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Jacob Singh [EMAIL PROTECTED]
To:
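For context, HighFreqTerms walks the index's term dictionary and ranks terms by document frequency. The same idea can be sketched in plain Java with no Lucene dependency (the map-based "index" below is invented purely for illustration):

```java
import java.util.*;

// A toy version of what Lucene's HighFreqTerms utility does: walk all
// terms in an index and report those with the highest document frequency.
// Here the "index" is just a map from docId to its token set.
public class HighFreqTermsSketch {
    public static List<Map.Entry<String, Integer>> topTerms(
            Map<String, Set<String>> docs, int n) {
        Map<String, Integer> docFreq = new HashMap<>();
        for (Set<String> tokens : docs.values())
            for (String t : tokens)
                docFreq.merge(t, 1, Integer::sum);
        List<Map.Entry<String, Integer>> entries = new ArrayList<>(docFreq.entrySet());
        // Sort by descending document frequency.
        entries.sort((a, b) -> b.getValue() - a.getValue());
        return entries.subList(0, Math.min(n, entries.size()));
    }

    public static void main(String[] args) {
        Map<String, Set<String>> docs = new HashMap<>();
        docs.put("d1", Set.of("solr", "search", "index"));
        docs.put("d2", Set.of("solr", "query"));
        docs.put("d3", Set.of("solr", "search"));
        // "solr" (df=3) and "search" (df=2) are the best test keywords.
        System.out.println(topTerms(docs, 2));
    }
}
```

Terms with the highest document frequency are exactly the keywords most likely to return results, which is what the performance-testing question is after.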
Hi,
Are you sure you are not looking at the original field values? (Which
schema browser are you referring to?)
Yes, tokenizer + filters are applied in the order they are defined, so the
order is important. For example, you typically want to lower-case tokens
before removing stop
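Why the order matters can be shown with a toy filter chain in plain Java (this is not Solr's TokenFilter API; the names are invented for illustration). Stop-word removal compares exact strings, so a capitalized "The" slips past a lowercase stop list unless lower-casing runs first:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;

// Toy filter chain illustrating analyzer filter ordering.
public class FilterOrder {
    static final Set<String> STOP_WORDS = Set.of("the", "a", "of");

    // Lower-case every token.
    static List<String> lowerCase(List<String> tokens) {
        List<String> out = new ArrayList<>();
        for (String t : tokens) out.add(t.toLowerCase());
        return out;
    }

    // Drop tokens that exactly match an entry in the stop list.
    static List<String> removeStopWords(List<String> tokens) {
        List<String> out = new ArrayList<>();
        for (String t : tokens) if (!STOP_WORDS.contains(t)) out.add(t);
        return out;
    }

    public static void main(String[] args) {
        List<String> tokens = Arrays.asList("The", "Art", "of", "War");
        // Correct order: lower-case first, then remove stop words.
        System.out.println(removeStopWords(lowerCase(tokens)));   // [art, war]
        // Wrong order: "The" escapes the stop filter, then gets lower-cased.
        System.out.println(lowerCase(removeStopWords(tokens)));   // [the, art, war]
    }
}
```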
Hi,
I'm not sure if that will work, but have you tried using a full path to the
stopwords file?
If that doesn't work, you can always just create symbolic links to a single
stopwords file to avoid having duplicate files.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
Try running snapshooter with the -V option. That will show debugging
info.
Bill
On Mon, Sep 22, 2008 at 6:49 AM, sunnyfr [EMAIL PROTECTED] wrote:
Yes, but even when I run it, no snapshot is created; I don't see how I can
fix it.
Bill Au wrote:
You only need to run the rsync daemon
OK, thanks, I just did it, and it works when run manually.
I just don't understand these files: should they be created when an index is
updated, or only for a new index?
If so, why are there several files; is it because a file can't exceed a
certain size?
Last question concerning snappuller on the slave's
Hi,
There is something weird:
I've set up a cron job every 5 minutes which hits the delta-import URL, and it
works fine.
The point is: it looks like it doesn't check every record for updating
or creating a new one,
because every 5 minutes the delta-import is started again (even as if
delta-import is not
You can also try the patch at
https://issues.apache.org/jira/browse/SOLR-651 and see if it helps you.
On Mon, Sep 22, 2008 at 3:46 PM, Jacob Singh [EMAIL PROTECTED] wrote:
Hi,
I'm trying to write a testing suite to gauge the performance of solr
searches. To do so, I'd like to be able to find
Hi,
I'm in the process of evaluating Blacklight as an open-source OPAC. I'm
looking for some information. Can someone answer the following
questions:
- Can it display in multiple languages?
- Does it provide ranking?
- Does it support linking to OpenURL link resolver?
-
By default snappuller pulls the most recent snapshot. But you can also
specify a snapshot by name. All the distribution scripts are documented
here:
http://wiki.apache.org/solr/SolrCollectionDistributionScripts
Bill
On Mon, Sep 22, 2008 at 11:28 AM, sunnyfr [EMAIL PROTECTED] wrote:
Ok
Hi, Isabelle.
These questions are probably better asked on the blacklight mailing
list, which you can join here: https://rubyforge.org/mail/?group_id=5235
In short, though, the answers to all your questions are yes, except
for OpenURL. There's no reason it couldn't handle OpenURL, we just
I have a dynamicField declaration:
<dynamicField name="*_t" type="text" indexed="true" stored="true"/>
I want to copy any *_t's into a text field for searching with dismax.
As it is, it appears you can't search dynamic fields this way.
I tried adding a copyField:
<copyField source="*_t" dest="text"/>
I do
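For reference, the pattern that usually works here is a sketch along these lines (Solr 1.3-style schema.xml; the destination field must itself be declared, and marked multiValued since several *_t fields may copy into it):

```xml
<dynamicField name="*_t" type="text" indexed="true" stored="true"/>
<field name="text" type="text" indexed="true" stored="false" multiValued="true"/>
<copyField source="*_t" dest="text"/>
```

With dismax, the `text` field then needs to appear in the qf parameter in solrconfig.xml so the copied tokens are actually searched.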
Hi,
Thank you for replying.
Will OpenURL be handled with the new release?
Is there a list of new and improved features of the new release
available somewhere?
Thanks.
Isabelle
-Original Message-
From: Bess Sadler [mailto:[EMAIL PROTECTED]
Sent: September 22, 2008 12:15 PM
To:
: I haven't heard of or found a way to find the number of times a term
: is found on a page.
: Lucene uses it in scoring, I believe, (solr scoring:
http://tinyurl.com/4tb55r)
Assuming by "page" you mean document, then the term frequency (tf) is
factored into the score, but at a low enough level
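To make "factored into the score" concrete, here is a sketch of the tf and idf formulas from Lucene's DefaultSimilarity (real scores also include length norms, boosts, and queryNorm, all omitted here, and idf enters the product twice, once via the query weight and once via the term weight):

```java
// Sketch of how term frequency enters Lucene's default scoring.
public class TfIdfSketch {
    // DefaultSimilarity: tf = sqrt(frequency of the term in the document)
    static double tf(int freqInDoc) {
        return Math.sqrt(freqInDoc);
    }

    // DefaultSimilarity: idf = 1 + ln(numDocs / (docFreq + 1))
    static double idf(int numDocs, int docFreq) {
        return 1.0 + Math.log((double) numDocs / (docFreq + 1));
    }

    public static void main(String[] args) {
        // A term occurring 4 times in a doc, appearing in 10 of 1000 docs:
        double contribution = tf(4) * idf(1000, 10) * idf(1000, 10);
        System.out.println(contribution);
    }
}
```

The square root damps tf, which is why doubling a term's occurrences does not double the score contribution.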
: if I want to highlight a multivalued field I get the following exception:
I don't know much about Highlighting, but when i attempt to highlight a
multivalued field using the example schema docs i don't get an error...
http://localhost:8983/solr/select/?q=features:cache&hl=true&hl.fl=features
: My problem is, if 1 solr instance process(save) 100 documents one-by-one, it
: would not be very effective, I want to create 10 clones
: (process/threads/cores) of the same solr instance, so that 10 documents get
: processed (saved to solr) simultaneously.
Perhaps i'm completely
Add echoParams=all to your URL and look for the cat field in one of
the passed parameters. Specifically, in pf and qf. These can be
defaulted in the solrconfig.xml file.
-Sean
Jon Drukman wrote:
whenever i try to use qt=dismax i get the following error:
Sep 22, 2008 11:50:48 AM
Sean Timm wrote:
Add echoParams=all to your URL and look for the cat field in one of
the passed parameters. Specifically, in pf and qf. These can be
defaulted in the solrconfig.xml file.
i tried that but the exception prevents solr from returning anything.
but i did look in solrconfig.xml
On Mon, Sep 22, 2008 at 9:19 PM, sunnyfr [EMAIL PROTECTED] wrote:
Hi,
There is something weird:
I've set up a cron job every 5 minutes which hits the delta-import URL, and it
works fine.
The point is: it looks like it doesn't check every record for
updating or creating a new one,
because every
: I don't think it can work at index time, because when somebody looks
: for a book I want to boost the search in relation to the user's language
: ...so I don't think it can work, unless I didn't get it.
Hmmm... i clearly misunderstood what you were asking, you made it sound
like you
That's excellent. Thanks for the reply.
gene
On Tue, Sep 23, 2008 at 6:39 AM, Chris Hostetter
[EMAIL PROTECTED] wrote:
: I haven't heard of or found a way to find the number of times a term
: is found on a page.
: Lucene uses it in scoring, I believe, (solr scoring:
Folks:
I have an odd situation that I am hoping someone can shed light on.
I have a solr app running under tomcat 6.0.14 (on a Windows XP SP3
machine).
The app is declared in the tomcat config file as follows:
In file merchant.xml for the merchant app:
Context
: And I did change my config file :
:
: <!-- A postCommit event is fired after every commit or optimize command
: <listener event="postCommit" class="solr.RunExecutableListener">
...that comment isn't closed, so perhaps it's closed after the
</listener> block and not getting used at all.
: Here is what I was able to get working with your help.
:
: (productId:(102685804)) AND liveDate:[* TO NOW] AND ((endDate:[NOW TO *]) OR
: ((*:* -endDate:[* TO *])))
:
: the *:* is what I was missing.
Please, PLEASE ... do yourself a favor and stop using AND and OR ...
food will taste
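For what it's worth, the same filter written with the required/prohibited prefix operators (a sketch using the fields from the quoted query) avoids the precedence surprises of AND/OR:

```
+productId:102685804 +liveDate:[* TO NOW]
+(endDate:[NOW TO *] (*:* -endDate:[* TO *]))
```

Here `+` marks a clause as required, and the bare clauses inside `+( ... )` act as a disjunction: at least one must match. The `*:* -endDate:[* TO *]` trick matches documents with no endDate at all, since a purely negative clause on its own matches nothing.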
Hi,
I have indexed a dynamic field in the add doc as:
<field name="stockPrice_f">28.00</field>.
It is visible in my query.
However, when I issue a query with a function:
..._val_:sum(stockPrice_f,10.00)&fl=*,score
I received the output of: <float name="score">36.41818</float>
There were no other
: thanks for your reply
:
: the content of xml file is chinese
My mail reader wasn't able to display the Chinese characters in your XML
example, but the stack trace you posted doesn't seem to indicate any
problem with characters in the field value (did you try posting a simple
file with only
thank you Chris Hostetter,
in fact, if I use the post.sh script to post xml data to solr, the error
happens; however, when I switch to post.jar, everything is ok.
I'm confused about why post.sh doesn't work
Chris Hostetter wrote:
: thanks for your reply
:
: the content of xml file is chinese
My mail
I have temporarily solved the problem by hardcoding the folders in the
dataDir element like so:
<dataDir>C:\tomcatweb\merchant\data</dataDir> (in the
solrconfig.xml)
Any ideas of what I am doing wrong?
Is it solr home or the data directory that is getting set wrong?
I *think* the
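If the goal is to avoid hardcoding, one option is Solr's property substitution in solrconfig.xml (a sketch; the property name solr.data.dir and the fallback path are my choices here, not something from the original message):

```xml
<dataDir>${solr.data.dir:./solr/data}</dataDir>
```

Tomcat can then be started with -Dsolr.data.dir=C:\tomcatweb\merchant\data, keeping the machine-specific path out of the config file while the default after the colon covers other environments.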
hi, all
a field schema like this:
<field name="pubdate" type="slong" indexed="true" stored="true"/>
when I indexed a doc with a null value for this field, an error happened:
SEVERE: org.apache.solr.common.SolrException: Error while creating field
Hi All,
I am new to Solr. I have been using Lucene for the last 2 years.
We create Lucene indexes for a database.
Please help me migrate to Solr.
How can I achieve this?
If anyone has an idea, please help.
Thanks In Advance.
Regards,
Dinesh Gupta
Dinesh,
Please have a look at the Solr tutorial first.
Then have a look at the new DataImportHandler - there is a very detailed page
about it on the Wiki.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Dinesh Gupta [EMAIL PROTECTED]
To:
Thanks, I'll try it out.
hossman wrote:
: Here is what I was able to get working with your help.
:
: (productId:(102685804)) AND liveDate:[* TO NOW] AND ((endDate:[NOW TO
*]) OR
: ((*:* -endDate:[* TO *])))
:
: the *:* is what I was missing.
Please, PLEASE ... do yourself a favor