There is a patch for it:
https://issues.apache.org/jira/browse/SOLR-64
Koji
Jón Helgi Jónsson wrote:
Did a bit more creative searching for a solution and came up with this:
http://www.mail-archive.com/solr-user@lucene.apache.org/msg15027.html
I'm using a couple of days old nightly build, so u
pgiesin wrote:
I have a client who is interested in using Solr/Lucene as their search
engine. So far I think it meets 85% of their requirements. I have decided to
integrate with JAMon to provide statistical/performance analysis at
run-time. The piece I am still missing is dynamic configuration of
Edwin Stauthamer wrote:
Hi,
I want to index a perfectly good Solr XML file into a Solr/Lucene instance.
The problem is that the XML has many fields that I don't want to be indexed.
I tried to index the file but Solr gives me an error because the XML
contains fields that I have not declared in
Reuben Firmin wrote:
Is it expected behaviour that "deleteById" will always return OK as a
status, regardless of whether the id was matched?
It is expected behaviour as Solr always returns 0 unless an error occurs
during processing a request (query, update, ...), so you don't need to check
t
Licinio Fernández Maurelo wrote:
I'm trying to do some filtering on the count list retrieved by Solr when
doing a faceting query.
I'm wondering how I can use facet.prefix to get something like this:
Query
facet.field=foo&facet.prefix=A OR B
Response
12560
5440
2357
...
How
As Solr said in the log, Solr couldn't find solrconfig.xml in the classpath,
solr.solr.home, or cwd.
My guess is that the relative path you set for solr.solr.home
was incorrect. Why don't you try:
solr.solr.home=/home/huenzhao/search/tomcat6/bin/solr
instead of:
solr.solr.home=home/huenzhao/search/tomc
Just an FYI, Lucene 2.9 has FastVectorHighlighter:
http://hudson.zones.apache.org/hudson/job/Lucene-trunk/javadoc/all/org/apache/lucene/search/vectorhighlight/package-summary.html
Features
* fast for large docs
* support N-gram fields
* support phrase-unit highlighting with slops
*
manuel aldana wrote:
hi,
I am having queries:
+a b
a b
I always wondered why the + operator did not work. Looking at the
http://localhost:8983/solr/admin/analysis.jsp analysis trace the query
analyzer is indeed removing the + through the
WordDelimiterFilterFactory. So I removed this filter (
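A rough sketch of the effect described above (this is not Solr's actual WordDelimiterFilterFactory, which runs in Java inside the analysis chain): an analyzer that splits on non-alphanumeric characters drops the + before the query parser can honor it, so `+a b` and `a b` end up as the same token stream.

```python
import re

def naive_word_delimiter(query: str) -> list[str]:
    # Illustration only: splitting on non-alphanumeric characters
    # silently discards the "+" required-term operator.
    return [t for t in re.split(r"[^0-9A-Za-z]+", query) if t]

print(naive_word_delimiter("+a b"))  # the "+" is gone
```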
Peter,
It was committed revision 794328.
Please see:
https://issues.apache.org/jira/browse/SOLR-1241
Koji
Peter Wolanin wrote:
Looks like we better update our schema for the Drupal module - what
rev of Solr incorporates this change?
-Peter
On Fri, Jul 24, 2009 at 8:38 AM, Koji Sekiguchi
David,
Try to change solr.CharStreamAwareWhitespaceTokenizerFactory to
solr.WhitespaceTokenizerFactory
in your schema.xml and reboot Solr.
Koji
david wrote:
Otis Gospodnetic wrote:
I think the problem is CharStreamAwareWhitespaceTokenizerFactory,
which used to live in Solr (when Drupal s
Erik Hatcher wrote:
On Jul 17, 2009, at 5:21 PM, Bill Au wrote:
I am faceting based on the indexed terms of a field by using
facet.field.
Is there any way to exclude certain terms from the facet counts?
Only using the facet.prefix feature to limit to facet values beginning
with a specific s
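What facet.prefix does can be sketched in a few lines: it can only *include* facet values by prefix, so excluding arbitrary terms (as asked above) has to happen on the client side. The field values and counts here are made up.

```python
def prefix_facets(counts: dict, prefix: str) -> dict:
    # facet.prefix keeps only facet values that start with the given
    # string; Solr applies this against the term index, this is a sketch.
    return {value: n for value, n in counts.items() if value.startswith(prefix)}

counts = {"Apple": 12, "Acme": 5, "Bosch": 7}
print(prefix_facets(counts, "A"))  # only values beginning with "A" survive
```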
ashokcz wrote:
Hi, thanks "Koji Sekiguchi-2" for your reply.
Yes, I was looking for something like that.
So when making the Solr request it should have the extra config parameter
facet.tree, and I should give the fields as CSV to specify the hierarchy. I will
try and see if it gives me the desired res
ashokcz wrote:
Hi all,
I have a scenario where I need to get facet counts for a combination of fields.
Say I have two fields, Manufacturer and Year of Manufacture.
I search for something and it gives me 15 results and my facet counts like
this:
Manufacturer : Nokia(5);Motorola(7);iphone(
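One generic workaround for multi-field facet counts (an assumption on my part, not something stated in this thread) is to index a combined "Manufacturer|Year" value into a single field and facet on that. A toy version of the counting, with made-up documents:

```python
from collections import Counter

# Hypothetical documents; a combined single-field value stands in for
# faceting on the (Manufacturer, Year) pair.
docs = [
    {"manufacturer": "Nokia", "year": 2007},
    {"manufacturer": "Nokia", "year": 2008},
    {"manufacturer": "Motorola", "year": 2007},
]
combined = Counter(f'{d["manufacturer"]}|{d["year"]}' for d in docs)
print(combined.most_common())
```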
Gargate, Siddharth wrote:
I read somewhere that it is deprecated
Yeah, as long as you explicitly use 'lucenePlusSort' parser via defType
parameter:
q=*:*;id desc&defType=lucenePlusSort
Koji
Mat Brown wrote:
Hi all,
If I have two fields that are copied into a copyField, and I index
data in these fields using different index-time boosts, are those
boosts propagated into the copyField?
Thanks!
Mat
No, but the norms of source fields of copyField are "propagated"
into the destinat
Edukondalu,
Beside the question, are you aware of Solrj - Java API for Solr.
http://wiki.apache.org/solr/Solrj
Koji
Edukondalu Avula wrote:
Hi Friends,
I am working on Apache Solr, indexing data using Java programming.
For indexing data I used the Tomcat server and I started Solr, I prepar
Sagar,
> I am facing a problem here that even after the core reload and
re-indexing
> the documents the new updated synonym or stop words are not loaded.
> Seems so the filters are not aware that these files are updated so
the solution
> to me is to restart the whole container in which I have
solr jay wrote:
Hi,
I am looking at this piece of configuration in solrconfig.xml
solr
solrconfig.xml
schema.xml
q=solr&version=2.0&start=0&rows=0
server-enabled
I've never used this feature before, but reading source code...
It wasn't cl
ahammad wrote:
Thanks for the suggestions:
Koji: I am aware of Cygwin. The problem is I am not sure how to do the whole
thing. I downloaded a nightly zip file and extracted it to a directory.
Where do I put the .patch file? Where do I execute the "patch..." command
from? It doesn't work when I d
ahammad wrote:
Hello,
I am trying to install a patch for Solr
(https://issues.apache.org/jira/browse/SOLR-284) but I'm not sure how to do
it in Windows.
I have a copy of the nightly build, but I don't know how to proceed. I
looked at the HowToContribute wiki for patch installation instructions,
David Baker wrote:
I am trying to index a solr server from a nightly build. I get the
following error in my catalina.out:
26-Jun-2009 5:52:06 PM
org.apache.solr.update.processor.LogUpdateProcessor
finish
ira/browse/SOLR-291
Thank you,
Koji
Regards,
Francis
-----Original Message-----
From: Koji Sekiguchi [mailto:k...@r.email.ne.jp]
Sent: Wednesday, June 17, 2009 8:28 PM
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemory error on solrslaves
Francis Yakin wrote:
We are experiencing &q
Francis Yakin wrote:
We are experiencing "OutOfMemory" error frequently on our slaves, this is the
error:
SEVERE: Error during auto-warming of
key:org.apache.solr.search.queryresult...@a8c6f867:java.lang.OutOfMemoryError:
allocLargeObjectOrArray - Object size: 5120080, Num elements: 1280015
j
ashokc wrote:
Hi,
I copy 'field1' to 'field2' so that I can apply a different set of analyzers
& filters. Content wise, they are identical. 'field2' has to be stored
because it is used for high-lighting. Do I have to declare 'field1' also to
be stored? 'field1' is never returned in the response.
Shalin,
I know it. I've verified that NPE is gone. Thank you.
Koji
Shalin Shekhar Mangar wrote:
This should be fixed in trunk now.
2009/6/1 Koji Sekiguchi
Reopened SOLR-1051:
https://issues.apache.org/jira/browse/SOLR-1051?focusedCommentId=12715030
They are identical. solr.war is a copy of apache-solr-1.3.0.war.
You may want to look at example target in build.xml:
Koji
Francis Yakin wrote:
We are planning to upgrade solr 1.2.0 to 1.3.0
Under 1.3.0 - Which of war file that I need to use and deploy on my application?
We are usi
Reopened SOLR-1051:
https://issues.apache.org/jira/browse/SOLR-1051?focusedCommentId=12715030&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12715030
Koji
Koji Sekiguchi wrote:
> Maybe I did something wrong, I got NPE when trying to MERGEINDEXES:
&
Maybe I did something wrong, I got NPE when trying to MERGEINDEXES:
http://localhost:8983/solr/admin/cores?action=MERGEINDEXES&core=core0&indexDirs=indexname
java.lang.NullPointerException
at
org.apache.solr.update.processor.RunUpdateProcessor.&lt;init&gt;(RunUpdateProcessorFactory.java:55)
at
org.apache.sol
ecify -Dsolr.solr.home when I started)
Mine is still trying to create it under the webapps directory.
What does your <dataDir> look like in solrconfig.xml?
<dataDir>${solr.data.dir:./solr/data}</dataDir>
Koji
Cheers,
Tim
2009/5/28 Tim Haughton
2009/5/28 Koji Sekiguchi
Ok.
I've just tried it (the way you qu
Tim Haughton wrote:
Hi Koji, quoting from that page:
"/some/path/solr.war" is the absolute path to where ever you want to keep
the Solr war using the appropriate syntax for your Operating System. In
Tomcat 5.5 and later,* the war file must be stored outside of the webapps
directory* for this to
If you have a webapps directory under /usr/local/tomcat, place solr.war
there.
http://wiki.apache.org/solr/SolrTomcat
Koji
Tim Haughton wrote:
Hi, I'm all at sea with this. I'm trying to get Ubuntu 9.04 (x64 Server),
Tomcat 6.0.18 and Solr 1.3.0 to play together. So far, no dice. Here's whe
jlist9 wrote:
The wiki page (http://wiki.apache.org/solr/MoreLikeThis) says:
mlt.fl: The fields to use for similarity. NOTE: if possible, these
should have a stored TermVector
I didn't set TermVector to true, but MoreLikeThis with StandardRequestHandler
seems to work fine. The first question is, is
jlist9 wrote:
Thanks. Will that still be the MoreLikeThisRequestHandler?
Or the StandardRequestHandler with mlt option?
Yes, StandardRequestHandler. MoreLikeThisComponent is
available by default. Set mlt=on when you want to get MLT results.
Koji
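Following the advice above, the request is just an ordinary /select call with mlt parameters added. A sketch of building such a URL (the host, seed query, and field list are assumptions for illustration):

```python
from urllib.parse import urlencode

# MoreLikeThisComponent piggybacks on the standard handler, so the
# request is a normal /select with mlt parameters added.
params = {
    "q": "id:1234",          # hypothetical seed document
    "mlt": "on",
    "mlt.fl": "title,body",  # hypothetical similarity fields
    "mlt.count": 5,
}
url = "http://localhost:8983/solr/select?" + urlencode(params)
print(url)
```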
jlist9 wrote:
Hi, I'm trying out the mlt handler but I'm getting a 404 error.
HTTP Status 404 - /solr/mlt
solrconfig.xml seems to say that the mlt handler is available by default.
I wonder if there's anything else I should do before I can use it?
I'm using version 1.3.
Thanks
Try /solr/select
Grant Ingersoll wrote:
On May 22, 2009, at 11:41 PM, Koji Sekiguchi wrote:
I'm thinking using clustering (SOLR-769) function for my project.
I have a couple of questions:
1. if q=*:* is requested, Carrot2 will receive "MatchAllDocsQuery"
via attributes. Is it OK?
Yes, it o
I'm thinking using clustering (SOLR-769) function for my project.
I have a couple of questions:
1. if q=*:* is requested, Carrot2 will receive "MatchAllDocsQuery"
via attributes. Is it OK?
2. I'd like to use it on an environment other than English, e.g. Japanese.
I've implemented Carrot2Japanese
Vincent,
You need to add defType=dismax when using bf parameter.
Please see:
http://wiki.apache.org/solr/DisMaxRequestHandler
Koji
Vincent Pérès wrote:
Hello,
I'm stuck on the boost feature...
I'm doing the following query :
http://localhost:8983/solr/select/?q=novel&bf=title_s^5.0&fl=titl
msmall wrote:
I've seen the documentation that references the EventListener interface
(DataImportHandler section of WIKI), but in the latest
apache-solr-dataimporthandler-1.3.0.jar, there is no
org.apache.solr.handler.dataimport.EventListener interface. Where is this
interface and do I need to e
Yao Ge wrote:
I have a field named "last-modified" that I'd like to use in the bf (Boost
Functions) parameter:
recip(rord(last-modified),1,1000,1000) in DisMaxRequestHandler.
However the Solr query parser complains about the syntax of the formula. I
think it is related to the hyphen in the field name. I hav
It must be KeywordTokenizer*Factory* :)
Koji
sunnyfr wrote:
hi
I tried but I've got an error:
May 12 15:48:51 solr-test jsvc.exec[2583]: May 12, 2009 3:48:51 PM
org.apache.solr.common.SolrException log SEVERE:
org.apache.solr.common.SolrException: Error loading class
'solr.KeywordTokenizer' at
o
Use sdouble instead of double for range queries since the
lexicographic ordering isn't equal to the numeric ordering.
Koji
Iris Soto wrote:
Iris Soto escribió:
Hi all,
I am having problems with search for double datatype, for example:
I'm testing comparing results of search in a range
I'm not sure this is what you are looking for, but you may try to use the
fq parameter: &q=*:*&fq=xxx:A&rows=10 for "at most 10 docs with xxx=A".
http://wiki.apache.org/solr/CommonQueryParameters#head-6522ef80f22d0e50d2f12ec487758577506d6002
Koji
Branca Marco wrote:
Hi,
I have a question about fa
Chris Hostetter wrote:
: The exception is expected if you use CharStream aware Tokenizer without
: CharFilters.
Koji: i thought all of the casts had been eliminated and replaced with
a call to CharReader.get(Reader) ?
Yeah, right. After r758137, ClassCastException should be eliminated.
h
I see you are using firstSearcher/newSearcher event listeners on your
startup and they cause the problem.
If you don't need them, comment them out in solrconfig.xml.
Koji
Eric Sabourin wrote:
I’m using SOLR 1.3.0 (from download, not a nightly build)
apache-tomcat-5.5.27 on Windows XP.
When
Thanh Doan wrote:
Assuming a solr search returns 10 listing items as below
1) 4 digital cameras
2) 4 LCD televisions
3) 2 clothing items
If we navigate to /electronics we want solr to show
us facets specific to 8 electronics items (e.g brand, price).
If we navigate to /electro
Just an FYI: I've never tried it, but there seems to be an RSS feed sample in DIH:
http://wiki.apache.org/solr/DataImportHandler#head-e68aa93c9ca7b8d261cede2bf1d6110ab1725476
Koji
Tom H wrote:
> Hi,
>
> I've just downloaded solr and got it working, it seems pretty cool.
>
> I have a project which need
me following error,
SEVERE: java.lang.ClassCastException: java.io.StringReader cannot be cast to
org.apache.solr.analysis.CharStream
Maybe you are typecasting Reader to a subclass.
Thanks,
Ashish
Koji Sekiguchi-2 wrote:
If you use CharFilter, you should use "CharStream aware" To
zer or use charFilter??
Thanks,
Ashish
Koji Sekiguchi-2 wrote:
Ashish P wrote:
I want to convert half width katakana to full width katakana. I tried
using
cjk analyzer but not working.
Does cjkAnalyzer do it or is there any other way??
CharFilter which comes with trunk/Solr 1.4 j
Ashish P wrote:
I want to convert half-width katakana to full-width katakana. I tried using
the CJK analyzer but it is not working.
Does CJKAnalyzer do it or is there any other way?
CharFilter which comes with trunk/Solr 1.4 just covers this type of problem.
If you are using Solr 1.3, try the patch a
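Outside of Solr's CharFilter, the half-width to full-width katakana mapping is the same folding Unicode NFKC normalization performs, which makes the intended conversion easy to see:

```python
import unicodedata

# NFKC normalization maps half-width katakana compatibility characters
# to their full-width equivalents.
halfwidth = "ｶﾀｶﾅ"
fullwidth = unicodedata.normalize("NFKC", halfwidth)
print(fullwidth)  # カタカナ
```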
Setting the random seed name as the dynamic field name is the trick.
http://lucene.apache.org/solr/api/org/apache/solr/schema/RandomSortField.html
Koji
hpn br wrote:
Hi,
When I used only lucene, I implemented a criterion of random sort, I send the seed and I get the same numbers. Use this artific
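The trick can be illustrated outside Solr: derive a repeatable pseudo-random key from the seed (embedded in the dynamic field name) plus the document id. This hash-based sketch mimics the idea, not RandomSortField's exact arithmetic:

```python
import hashlib

def random_sort_key(seed: str, doc_id: str) -> int:
    # Same seed name plus same document always yields the same key,
    # so the "random" ordering is reproducible across requests.
    digest = hashlib.md5(f"{seed}:{doc_id}".encode()).hexdigest()
    return int(digest, 16)

docs = ["doc1", "doc2", "doc3"]
order_a = sorted(docs, key=lambda d: random_sort_key("random_42", d))
order_b = sorted(docs, key=lambda d: random_sort_key("random_42", d))
print(order_a == order_b)  # True: same seed name, same ordering
```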
Gargate, Siddharth wrote:
Anybody facing the same issue? Following is my configuration
...
...
...
explicit
500
true
id,score
teaser
teaser
200
200
500
Nasseam Elkarra wrote:
Background:
Set up a system for hierarchal categories using the following scheme:
level one#
level one#level two#
level one#level two#level three#
Trying to find the right combination of field type and query to get
the desired results. Saw some previous posts about hierar
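The trailing-separator scheme above lends itself to emitting one token per ancestor level, which is what makes drill-down with facet.prefix possible; a sketch of the token generation (an assumed implementation, not something from the thread):

```python
def path_tokens(path: str, sep: str = "#") -> list[str]:
    # For "level one#level two#level three#", emit every ancestor prefix,
    # keeping the trailing separator used in the post; faceting on these
    # tokens with facet.prefix then walks the tree one level at a time.
    parts = [p for p in path.split(sep) if p]
    return [sep.join(parts[: i + 1]) + sep for i in range(len(parts))]

print(path_tokens("level one#level two#level three#"))
```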
Udaya wrote:
Hi,
Need your help,
I would like to know how we could append or add one field value to another
field in schema.xml.
My schema is as follows (only the field part is given):
schema.xml
default="http://comp.com/portals/ForumWindow?action=1&v=t&p="topics_id"#"topics_i
Radha C. wrote:
Hi,
Can I have the dynamic field in copyField as follows,
You can do it. See SOLR/client/ruby/solr-ruby/solr/conf/schema.xml.
Koji
Can anyone tell me please how to make the dynamic field to be available in
one field "all" ?
ashokc wrote:
What I am doing right now is to capture all the content under "content_korea"
for example, use 'copyField' to duplicate that content to "content_english".
"content_korea" gets processed with CJK analyzers, and "content_english"
gets processed with usual detailed index/query analyzer
Shalin Shekhar Mangar wrote:
On Mon, Apr 6, 2009 at 1:52 PM, Veselin K wrote:
I'd like to copy the "text" field to a field called "preview" and
then limit the "preview" field to just a few lines of text (or number of
terms).
Then I could configure retrieving the "preview" field instead of "
sunnyfr wrote:
Hi,
How can I be sure that my IndexReaders are in read-only mode?
Thanks a lot ,
I think the feature was introduced in Solr 1.3.
https://issues.apache.org/jira/browse/SOLR-730
Or you can see CHANGES.txt...
Koji
radha c wrote:
Hi,
I have the below schema.xml. I did not define any string field, but I am
getting the below error when I start Tomcat.
Can anyone please suggest what the issue is here?
The reason you got the exception is that you don't have the string field
type defined in schema.xml,
but yo
Gargate, Siddharth wrote:
Hi all,
I am trying to index words containing special characters like 'Räikkönen'.
Using EmbeddedSolrServer indexing is working fine, but if I use
CommonHttpSolrServer then it is indexing garbage values.
I am using Solr 1.4 and set URIEncoding to UTF-8 in Tomcat. Is thi
aerox7 wrote:
Hi,
I have a MySQL database in UTF-8. I have a row with "Solène" (solène). I
want to transform this to solene, so I use Solr's
ISOLatin1AccentFilterFactory to perform this task but it doesn't work?!
I guess that "Solène" is "solène" in UTF-8?! I also set Tomcat to UTF-8 so
norma
Ashish P wrote:
I have created a field,
Set class="solr.TextField" instead of class="solr.StrField" in your
fieldType definition.
Then reindex and commit.
Koji
dabboo wrote:
Hi,
I am implementing 2-way synonyms in Solr using the q query parameter. One-way
synonyms are working fine with the q query parameter but 2-way is not working.
for e.g.
If I defined 2 way synonyms in the file like:
value1, value2
It doesn't show any results for either of the values.
Ple
dabboo wrote:
Hi,
I am trying to debug code of QueryParser class and other related files. I
have also taken the code of lucene from its SVN, but it is not going to the
right control during debug.
I wanted to know if I have taken the latest code and if not, from where I
can take the code.
Thanks
sunnyfr wrote:
Hi
Sorry, I don't remember the parameter which shows all the parameters
stored in my solrconfig.xml file for the dismax query? Thanks a lot,
echoParams=all ?
regards,
Koji
gwk wrote:
Hello,
The wiki states 'When duplicate doc IDs are received, Solr chooses the
first doc and discards subsequent ones', I was wondering whether "the
first doc" is the doc of the shard which responds first or the doc in
the first shard in the shards GET parameter?
Regards,
gwk
Jacob,
What Solr version are you using? There is a bug in SolrHighlighter of
Solr 1.3,
you may want to look at:
https://issues.apache.org/jira/browse/SOLR-925
https://issues.apache.org/jira/browse/LUCENE-1500
regards,
Koji
Jacob Singh wrote:
Hi,
We ran into a weird one today. We have a
Josh Joy wrote:
Hi,
I would like to do something similar to Google, in that for my list of hits,
I would like to grab the surrounding text around my query term so I can
include that in my search results. What's the easiest way to do this?
Thanks,
Josh
Highlighter?
http://wiki.apache.org/
ad of "maxChars" when
using the patch.
Koji
Mike Topper wrote:
Cool, we are actually still on 1.2 but were planning on upgrading to 1.3
is this a feature of 1.3 or just on the nightly builds?
-Mike
Koji Sekiguchi wrote:
Mike Topper wrote:
Hello,
In one of the fields
Mike Topper wrote:
Hello,
In one of the fields in my schema I am sending somewhat large texts. I
want to be able to index all of it since I want to search on the entire
text, but I only need the first N characters to be returned to me. Is
there a way to do this with one field or would I just c
CharFilter will solve the problem, but it comes with Solr 1.4.
https://issues.apache.org/jira/browse/SOLR-822
Koji
AHMET ARSLAN wrote:
I think the best way to do this is to modify
org.apache.lucene.index.memory.SynonymTokenFilter and employ this filter at
index time.
if token.termBuffer() has one
Hmm, Otis, very nice!
Koji
Otis Gospodnetic wrote:
Hi,
Wouldn't this be as easy as:
- split email into "paragraphs"
- for each paragraph compute signature (MD5 or something fuzzier, like in
SOLR-799)
- for each signature look for other emails with this signature
- when you find an email with
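The steps listed above can be sketched directly; MD5 stands in for the fuzzier SOLR-799-style signature, and the sample emails are made up:

```python
import hashlib

def paragraph_signatures(email_body: str) -> set[str]:
    # Split the email into paragraphs and hash each one; a shared hash
    # across two emails flags a duplicated paragraph.
    paragraphs = [p.strip() for p in email_body.split("\n\n") if p.strip()]
    return {hashlib.md5(p.encode("utf-8")).hexdigest() for p in paragraphs}

a = paragraph_signatures("Hello,\n\nPlease review the attached patch.")
b = paragraph_signatures("Hi team,\n\nPlease review the attached patch.")
print(a & b)  # the shared paragraph's signature
```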
CharFilter can normalize (convert) traditional chinese to simplified
chinese or vice versa,
if you define mapping.txt. Here is the sample of Chinese character
normalization:
https://issues.apache.org/jira/secure/attachment/12392639/character-normalization.JPG
See SOLR-822 for the detail:
http
Your request seems to be fine. Have you reindexed after setting the
termOffsets definition
on the document field?
Koji
Jeffrey Baker wrote:
I'm trying to exercise the termOffset functions in the nightly build
(2009-02-11) but it doesn't seem to do anything. I have an item in my
schema like so:
An
Jacob,
Regardless of whether you are using autocommit or manual commit,
look at Admin > statistics > Update Handlers > status > docsPending.
Koji
Jacob Singh wrote:
Hi,
Is there a way to retrieve the # of documents which are pending commit
(when using autocommit)?
Thanks,
Jacob
李学健 wrote:
> hi, all
>
> For abbreviations, for example 'US', how can I get results containing
> 'United States' in Solr or Lucene?
> In Solr, the synonyms filter seems to only handle one-word to one-word.
> But for abbreviation queries, words should be expanded.
>
>
SynonymFilter should support o
Mark,
I'm not a solrj user, but I think you don't need to check the status code.
The Solr server always returns 0 for status on success. If something goes
wrong,
the Solr server returns an HTTP 400/500 response, then you'll get an Exception.
Koji
Mark Ferguson wrote:
Hello,
I am wondering if the UpdateResp
Because the Lucene term ordering is lexicographic,
if you index the strings "11", "100", and "150",
the terms appear in the index as "100", "11", "150", in that order.
Koji
Jim Adams wrote:
Why is this?
Thanks.
On Sat, Jan 31, 2009 at 3:50 AM, Koj
Jim Adams wrote:
True, which is what I'll probably do, but is there any way to do this using
'string'? Actually I have even seen this with date fields, which seems very
odd (more data being returned than I expected).
If you want to stick with string, index "011" instead of "11".
Koji
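The zero-padding suggestion works because fixed-width strings sort lexicographically in the same order as their numeric values; a quick check:

```python
# Left-padding numeric strings to a fixed width makes lexicographic
# order agree with numeric order, so string range queries behave.
raw = ["11", "100", "150"]
padded = [s.zfill(3) for s in raw]

print(sorted(raw))     # ['100', '11', '150'] -- the lexicographic surprise
print(sorted(padded))  # ['011', '100', '150'] -- matches numeric order
```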
Jim Adams wrote:
I have a string field in my schema that actually contains numeric data. If I try a
range search:
fieldInQuestion:[ 100 TO 150 ]
I fetch back a lot of data that is NOT in this range, such as 11, etc.
Any idea why this happens? Is it because this is a string?
Thanks.
Yep, try si
fei dong wrote:
Hi buddy, I work on an audio search based on the Solr engine. I want to
implement lyric search and sort by relevance. Here is my confusion.
My schema.xml is like this:
text
...
http://localhost:8983/solr/select/?q=lyric:(tear
the hou
Chris,
Sorry about avoiding shingle part but:
> ... boo boo bar car la la la car bar bar bar ...
>
> This too doesn't seem to happen if I disable bigram indexing.
I've seen same thing with bigram tokens (not shingle) and reported it:
https://issues.apache.org/jira/browse/LUCENE-1489
Then I wro
Ron Chan wrote:
I'm using out of the box Solr 1.3 that I had just downloaded, so I guess it is the StandardAnalyzer
It seems WordDelimiterFilter worked for you.
Go to Admin console, click analysis, then give:
Field name: text
Field value (Index): SD/DDeck
verbose output: checked
highlight
Vinay,
This is a bug. I opened SOLR-947. The fix will be committed shortly.
So please try the next nightly build.
Thank you,
Koji
I can reproduce the error using example of current trunk.
I'm seeing what's wrong now...
Koji
vinay kumar kaku wrote:
you mean query term? If so I do have ?q=xyz etc... if not, what is that
parameter you are talking about? I will send to only one mailing list from
here on.
thanks,
vinay
> Date: Mon, 29 Dec
Prabhu,
Use stream.url instead of stream.file when you specify remote file url.
Koji
RaghavPrabhu wrote:
Hi all,
I want to index the files by calling the Solr instance using a curl
call. For indexing the files, I gave the local file path and that is working
fine. What I need to do is index th
I think you are facing this problem:
https://issues.apache.org/jira/browse/SOLR-925
I'm just looking into the issue to solve it; I'm not sure that I can fix it
in my time, though...
Koji
Steffen B. wrote:
Hi everyone,
it seems that I've run into another problem with my Solr setup. :/ The
highlig
Hello Joel,
Using MappingCharFilter with mapping-ISOLatin1Accent.txt on your sort
field can solve your problem:
<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt" />
CharFilter is in trunk/Solr 1.4, though, if you use Solr 1.3, you can
download a patch for Solr 1.3:
ht
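A minimal stand-in for what mapping-ISOLatin1Accent.txt does (only a handful of mappings shown here; the real file covers the full Latin-1 accent range):

```python
# MappingCharFilter rewrites characters before tokenization, so a sort
# field built on the filtered text orders "Solène" as "Solene".
ACCENT_MAP = str.maketrans({"è": "e", "é": "e", "ø": "o", "À": "A"})

def fold(text: str) -> str:
    return text.translate(ACCENT_MAP)

print(fold("Solène"))  # Solene
```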
Peter,
It is UnInvertedField class. See also:
https://issues.apache.org/jira/browse/SOLR-475
Peter Keegan wrote:
Hi Yonik,
May I ask in which class(es) this improvement was made? I've been using the
DocSet, DocList, BitDocSet, HashDocSet from Solr from a few years ago with a
Lucene based app.
Committed revision 719793.
Thank you for reporting this, ashok!
Koji
Koji Sekiguchi wrote:
ashok,
Hmm, this is a bug. It was accidentally introduced when SOLR-657 was
committed.
http://svn.apache.org/viewvc/lucene/solr/trunk/src/java/org/apache/solr/search/LuceneQParserPlugin.java?p2
ashok,
Hmm, this is a bug. It was accidentally introduced when SOLR-657 was
committed.
http://svn.apache.org/viewvc/lucene/solr/trunk/src/java/org/apache/solr/search/LuceneQParserPlugin.java?p2=%2Flucene%2Fsolr%2Ftrunk%2Fsrc%2Fjava%2Forg%2Fapache%2Fsolr%2Fsearch%2FLuceneQParserPlugin.java&p1=%
Jerry,
> I would like to see the output from snapshooter
snapshooter outputs snapshooter.log. But,
> Is there a way to send snapshooter's output to stdout of the terminal
> which I executed the commit command?
I don't think it's possible.
(You can modify RunExecutableListener to redirect stdou
joe,
This hasn't been committed yet, but SOLR-822 may be your answer.
https://issues.apache.org/jira/browse/SOLR-822
Koji
joeMcElroy wrote:
I need a custom filter to be added to a field which will replace special
foreign characters with their english counterpart.
for example ø => o
Grave À
> curl 'http://localhost:8983/solr/update/csv?commit=true'
--data-binary @books.csv -H 'Content-type:text/plain; charset=utf-8'
>
> ...perhaps there is some eccentricity about windows curl?
I tried this on cygwin and books.csv could be uploaded without problems.
Koji
> SearchComponent is the class I was missing. Looks like if I can
provide an
> entirely new implementation of that it will be a lot cleaner than the
hack I
> had been using in 1.2 over top of facets. What I'm doing is implementing
> some aggregation functions like avg() and sum() that SQL has.
> Ok, obviously rsyncd.conf is generated automatically by rsync.
> Does somebody have an example of scripts.conf?
Have you read this?
http://wiki.apache.org/solr/SolrCollectionDistributionScripts
Koji
on and the solr-ruby version
with a dash instead of dot -- solr-ruby-1.3.0-0.0.6
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Koji Sekiguchi <[EMAIL PR
From: http://www.nabble.com/CHANGES.txt-td18901774.html
The latest version of solr-ruby is 0.0.6:
solr-ruby-0.0.6.gem
http://rubyforge.org/frs/?group_id=2875&release_id=23885
I think it isn't clear which Solr version it corresponds to.
I'd like to change this to solr-ruby-{solrVersion}.{solr-ruby
If you specify single,
write.lock will never be created, so it makes no sense.
Koji
sundar shankar wrote:
Will let you know the results in a bit.
What does single do, btw?
Do I need to use this in conjunction with or do I use it separately?
Date: Thu, 7 Aug 2008 12:26:53 -0400> From: [EM
> > Just now I'm working on a similar issue per a customer request.
> > StatsComponent - it will return min,max,sum,qt,avg as follows:
>
> Seems like perhaps one should be able to return any arbitrary function
> (actually multiple), and sort by an arbitrary function also.
Sounds interesting. WRT a
Just now I'm working on a similar issue per a customer request.
StatsComponent - it will return min,max,sum,qt,avg as follows:
&stats=on&stats.field=price
10
30
20
60
3
WRT "stats", the component can output sum and avg, but not
sd and var. As our
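The snippet's figures (10, 30, 20, 60, 3) are plausibly read as min/max/avg/sum/count over three hypothetical prices 10, 20, 30; computing them by hand shows what StatsComponent returns for &stats=on&stats.field=price:

```python
# Hand-computed equivalents of StatsComponent's output over assumed prices.
prices = [10, 20, 30]

stats = {
    "min": min(prices),
    "max": max(prices),
    "sum": sum(prices),
    "count": len(prices),
    "mean": sum(prices) / len(prices),
}
print(stats)
```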