see
from the curl command and there is a header line in the csv file.
And sorry for the missing subject line.
Andrew
From: Ryan McKinley [EMAIL PROTECTED]
Sent: Sunday, December 02, 2007 5:15 PM
To: solr-user@lucene.apache.org
Subject: Re:
Andrew Nagy
I tried all the methods: converting to %26, converting to \ and
encapsulating the url with quotes. All give the same error.
Try sending your curl command to:
http://localhost:8080/solr/debug/dump
When you are confident it is parsed properly, then try /update/csv --
the only way you
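On the %26 route: only the parameter value gets percent-encoded, not the whole URL. A plain-JDK check of what the encoded value should look like (a sketch, not part of the original thread):

```java
import java.net.URLEncoder;

public class EncodeParam {
    public static void main(String[] args) throws Exception {
        // A literal '&' inside a parameter value must become %26,
        // otherwise the servlet container splits the query string there.
        String value = "AT&T";
        System.out.println(URLEncoder.encode(value, "UTF-8"));
    }
}
```

So a field value like AT&T travels as AT%26T inside the curl URL.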
hmmm - give it a try without specifying header=true
Looks like if you don't specify header=true, it defaults to true - but
if you do, it throws an error.
I think there may be a bug... Yonik, should line 243 be:
} else if (!hasHeader) {
^!!!
ryan
Andrew Nagy wrote:
Ryan, i
Sunny Bassan wrote:
I have implemented the embedded SOLR approach for indexing of database
records. I am indexing approximately 10 millions records, querying and
indexing 20,000 records at a time. Each record is added to the
updateHandler via the updateHandler.addDoc() function once all 20,000
Otis Gospodnetic wrote:
Hi,
I was looking for the Wiki docs about the multi-core stuff Henri contributed,
but couldn't find any. Do we just not have that yet?
I found http://wiki.apache.org/solr/MultipleIndexes , but that's the old way.
Currently it is only possible programmatically.
Daniel Alheiros wrote:
Hi Hoss.
I'm using Solr 1.2 and a SolrJ client built from the trunk some time ago
(21st of June 2007).
One thing I just thought of: do you have a request handler defined at:
<requestHandler name="/update" class="solr.XmlUpdateRequestHandler" />
If not, it uses a legacy
Norskog, Lance wrote:
Hi-
What is the filter element in an analyzer element that will load
this class:
org.apache.lucene.analysis.cn.ChineseFilter
This did not work:
<filter class="org.apache.lucene.analysis.cn.ChineseFilter" />
This is in Solr 1.2.
the class needs to point to a
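For context: in schema.xml the filter element must name a factory class (a TokenFilterFactory), not the raw Lucene filter. A minimal analyzer sketch using factories that ship with Solr - a small wrapper factory would have to be written for ChineseFilter if none is bundled:

```xml
<fieldType name="text_example" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- note the *Factory suffix: this is what <filter class="..."> expects -->
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```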
To be clear: solr *should* fail with an error if you send an unknown
field.
I just tested this with a clean checkout of 1.3-dev and 1.2 and in both
cases I get an error 400 unknown field 'asgasdgasgd'
The suggestion to look at the ignore option is to make sure you don't
have one -- this
Kasi Sankaralingam wrote:
When we have the following set of data, they are first sorted based on capital
letters and then lower case. Is there a way to make them sort regardless of
character case?
Avaneesh
Bruce
Veda
caroleY
jonathan
junit
So carole would come after Bruce. Thanks
sorting
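The usual fix is a lowercased copy of the field used only for sorting; this mirrors the alphaOnlySort type in the Solr example schema (the field names here are assumptions):

```xml
<fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
  <analyzer>
    <!-- keep the whole value as a single token, then lowercase and trim it -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
  </analyzer>
</fieldType>

<field name="name" type="string" indexed="true" stored="true"/>
<field name="name_sort" type="alphaOnlySort" indexed="true" stored="false"/>
<copyField source="name" dest="name_sort"/>
```

Sorting on name_sort then puts carole after Bruce.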
Grant Ingersoll wrote:
I have an object that I would like to share between two or more
RequestHandlers. One request handler will be responsible for the object
and the other I would like to handle information requests about what the
object is doing. Thus, I need to share the object between
Now when I run the following query:
http://localhost:8080/solr/mlt?q=id:neardup06&mlt.fl=features&mlt.mindf=1&mlt.mintf=1&mlt.displayTerms=details&wt=json&indent=on
try adding:
debugQuery=on
to your query string and you can see why each document matches...
My guess is that features uses a text
Jörg Kiegeland wrote:
Yes, SOLR-139 will eventually do what you need.
The most recent patch should not be *too* hard to get running (it may
not apply cleanly though) The patch as is needs to be reworked before
it will go into trunk. I hope this will happen in the next month or so.
As
Evgeniy Strokin wrote:
Hello,..
I have a document indexed with Solr. Originally it had only a few fields. I want to add some more fields to the index later, based on ID, but I don't want to submit the original fields again. I use Solr 1.2, but I think there is no such functionality yet. But I saw a
The URL is
http://localhost:8983/solr/select/?q=solr&version=2.2&start=0&rows=10&indent=on
When I added echoParams=explicit to the query nothing changed. But when I
found and replaced the word 'explicit' with uppercase 'EXPLICIT' in the solrconfig.xml,
it worked. The problem is solved. Thanks
AHMET ARSLAN wrote:
I am a newbie at solr. I have done everything in the solr tutorial section. I
am using the latest versions of both JDK(1.6.03) and Solr(2.2). I can see the
solr admin page http://localhost:8983/solr/admin/ But when I hit the search
button I receive an http error:
HTTP
Eswar K wrote:
We have a scenario, where we want to find out documents which are similar in
content. To elaborate a little more on what we mean here, lets take an
example.
The example of this email chain in which we are interacting on, can be best
used for illustrating the concept of near dupes
Hello-
Solrj has been out there for a while, but is not yet baked into an
official release. If there is anything major to change just so it feels
better, now is the time. Here are a few things I'm thinking about:
1. The setFields() behavior
Currently:
query.setFields( "name", "id" );
I just tried a fresh checkout and ran 'ant example' then started jetty.
Everything looks OK and normal.
$ svn up
$ ant example
$ cd example
$ java -jar start.jar
ryan
Mike Klaas wrote:
Have you build the project ('$ ant example')?
-Mike
On 15-Nov-07, at 2:41 PM, Thiago Jackiw wrote:
Not yet, but there should be!
Currently people learn it from looking at the source and tests. I
started to add something to:
http://wiki.apache.org/solr/Solrj
it (obviously) still needs work.
If you are using eclipse (or similar), after typing solrQuery. you
should get a drop down of
Can you post the full exception?
b) Do a query in the SOLR admin tool title_s: photo book
Do you have a space after the ':'?
q=title_s: photo book
I expect that would fail (though null pointer is not a very nice error)
q=title_s:photo book
should work fine:
title_s:photo book
Standard solr is a .war file that you install on your system and run
within a servlet container (jetty, resin, tomcat, etc)
embedded solr refers to running solr without the servlet container.
ryan
Dave C. wrote:
Hello again,
This is a horribly newbie question, but what exactly is meant by
The advantages of a multi-core setup are configuration flexibility and
dynamically changing available options (without a full restart).
For high-performance production solr servers, I don't think there is
much reason for it. You may want to split the two indexes on to two
machines. You may
and causes FGC.
I am thinking of having multiple indexes - one for the ongoing query
service and one for updates. Once an update is done, the index is switched
automatically and/or by my application.
Thanks,
Jae joo
On Nov 12, 2007 8:48 AM, Ryan McKinley [EMAIL PROTECTED] wrote:
For starters, do you need to be able to search across groups or
sub-groups (in one query?)
If so, then you have to stick everything in one index.
You can add a field to each document saying what 'group' or 'sub-group'
it is in and then limit it at query time
q=kittens +group:A
The
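The same restriction can also be sent as a filter query, which is cached independently of the main query; a sketch using the group field assumed above:

```
http://localhost:8983/solr/select?q=kittens&fq=group:A
```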
Dilip.TS wrote:
Hello,
Does SOLR support multiple instances within the same web application? If
so, how is this achieved?
If you want multiple indices, you can run multiple web-apps.
If you need multiple indices in the same web-app, check SOLR-350 -- it
is still in development, and make
David Neubert wrote:
Ryan,
Thanks for your response. I infer from your response that you can have a
different analyzer for each field
yes! each field can have its own indexing strategy.
I believe that the Analyzer approach you suggested requires the use
of the same Analyzer at query
I tried <delete><id>*</id></delete>
try:
<delete><query>*:*</query></delete>
ryan
the spaces are still there. But if I use Solrj and query for
documents, the strings are trimmed (whitespace cut at the end and at
the front). Maybe some kind of TrimFilter is active? How can I prevent
trimming (in the solr schema or in the solrj api)?
what is your specific SolrQuery?
Grant Ingersoll wrote:
Hi,
Is there anyway to interrogate a RequestHandler to discover what
parameters it supports at runtime? Kind of like a BeanInfo for
RequestHandlers? Has anyone else thought about doing this and what it
might look like? Seems like it would be useful for building
Schema.xml
<field name="id" type="string" indexed="true" stored="true"/>
Have you edited schema.xml since building a full index from scratch? If
so, try rebuilding the index.
People often get the behavior you describe if the 'id' is a 'text' field.
ryan
Does anyone know what could be the problem?
looks like it was a problem in the new query parser. I just fixed it in
trunk:
http://svn.apache.org/viewvc?view=rev&revision=592740
Yonik - do we want to keep this checking for 'null', or should we change
QueryParser.parseSort( ) to always
Yonik Seeley wrote:
On 11/7/07, Ryan McKinley [EMAIL PROTECTED] wrote:
Yonik - do we want to keep this checking for 'null', or should we change
QueryParser.parseSort( ) to always return a valid sortSpec?
In Lucene, a null sort is not equal to score desc... they result in
the same documents
How reliable are the nightly builds? Can it be used in production?
The nightly builds are stable in that they do what they say they do --
and if not, they are fixed quickly. However, the interfaces that have
changed since 1.2 are not totally stable. That is, the interfaces from
1.2 will
Jörg Kiegeland wrote:
I have a query
SolrServer server = getSolrServer();
SolrQuery solrQuery = new SolrQuery();
solrQuery.setQuery(..);
QueryResponse rsp = server.query(solrQuery);
Now where can I set the result limit for this query?
solrQuery.setRows( # )
To
the rejected code appears to be non-vital, so I've just left it out.
Since Solr 1.2 is based on Lucene 2.1, I've used the
lucene-query.2.1.1-dev.jar to compile (after fixing the DEFALT/DEFAULT
typo), and MLT seems to work. Is that the correct procedure? If so, I'll
update the wiki
If you need to allow HTTP access to solr, then just use standard solr
with your embedded stuff in a custom request handler (or something).
Any other path, you will be re-inventing many wheels.
If at all possible, I reccomend checking out:
http://wiki.apache.org/solr/Solrj
this is nice because
Yonik Seeley wrote:
On 10/25/07, Matthew Runo [EMAIL PROTECTED] wrote:
Any ideas on when 1.3 might be released? We're starting a new project
and I'd love to use 1.3 for it - is SVN head stable enough for use?
I think it's stable in the sense of does the right thing and doesn't
crash, but IMO
patrick o'leary wrote:
Actually I misspoke - it's the XMLWriter being final that was a little
annoying, rather than a handler.
Would have been nice to just extend that, and cut down on the code.
aaah -- Just to be clear, if you could augment the doc list with a
calculated field ('distance')
SOLR-281 looks like it will solve one of my frustrations, another being
that the handlers were final ;-)
What handlers are final that you found annoying?
Is it close to being committed to the trunk?
I hope so ;) Since this patch reworks the *core* query handlers
(dismax/standard) I
This looks good!
Are you interested in contributing it to solr core?
One major thing in the solr pipeline you may want to be aware of is the
search component interface (SOLR-281).
This would let you make simple component that adds the:
DistanceQuery dq = new
try setting the lock type to 'single' in solrconfig.xml
<indexDefaults>
  ...
  <lockType>single</lockType>
</indexDefaults>
I have run into trouble a few times since this was added - putting the
single lock type in the config has fixed it every time though...
ryan
Brian Whitman wrote:
We have a very
So I'll start with an ad hoc session manager within Solr. Where in Solr
should I add such a service?
You may be able to get what you need with just installing something like
clickstream:
http://www.opensymphony.com/clickstream/
If you need to integrate custom user handling and solr
I would imagine there is a library to set up an autocomplete search with
Solr. Does anyone have any suggestions? Scriptaculous has a JavaScript
autocomplete library. However, the server must return an unordered
list.
Solr does not provide an autocomplete UI, but it can return JSON that a
-
From: Ryan McKinley [mailto:[EMAIL PROTECTED]
Sent: Monday, October 15, 2007 4:44 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr + autocomplete
I would imagine there is a library to set up an autocomplete search
with
Solr. Does anyone have any suggestions? Scriptaculous has
the default solrj implementation should do what you need.
As for Solrj, you're probably right, but I'm not going to take any
chances for the time being. The server.add method has an optional
Boolean flag named overwrite that defaults to true. Without knowing
for sure what it does, I'm not
It still seems odd that I have to include the jar, since the
StandardRequestHandler should be picked up in the war right? Is this also a
sign that there must be something wrong with the deployment?
Note that in 1.3, the StandardRequestHandler was moved from
o.a.s.request to o.a.s.handler:
Hello-
I am running into some scaling performance problems with SQL that I hope
a clever solr solution could fix. I've already gone through a bunch of
loops, so I figure I should solicit advice before continuing to chase my
tail.
I have a bunch of things (100K-500K+) that are defined by a
David Whalen wrote:
Make sure you have:
<requestHandler name="/admin/luke"
  class="org.apache.solr.handler.admin.LukeRequestHandler" />
defined in solrconfig.xml
What's the consequence of me changing the solrconfig.xml file?
Doesn't that cause a restart of solr?
editing solrconfig.xml does *not*
the most basic stuff, and copyField things around. With SOLR-139, to
rebuild an index you simply reconfigure the copyField settings and
basically `touch` each document to reindex it.
had not thought of that... yes, that would work
Yonik has some pretty prescient design ideas here:
how about:
/select?q=*:*&fl=id
(where id is your unique id)
you may need to do paging with:
start=2000&rows=1000
if you have a lot of documents
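Paging is just an offset computation - page n starts at n × rows. A trivial sketch of the parameter values (plain Java, nothing Solr-specific):

```java
public class Paging {
    public static void main(String[] args) {
        int rows = 1000; // page size
        // print the 'start' offset for the first three pages
        for (int page = 0; page < 3; page++) {
            int start = page * rows;
            System.out.println("start=" + start + "&rows=" + rows);
        }
    }
}
```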
Jay Booth wrote:
Hey all, sorry for the elementary question but I was poking around and
couldn't find an easy answer. Is there an easy way to get
Robert Young wrote:
Hi,
We're just about to start work on a project in Solr and there are a
couple of points which I haven't been able to find out from the wiki
which I'm interested in.
1. Is there a REST interface for getting index stats? I would
particularly like access to terms and their
dooh, should check all my email first!
Will Solr automatically reload the file if it changes or does it have
to be informed of the change?
I'll expose my confusion here and say that I don't know for sure, but
I'm pretty sure that once it's been loaded it won't get reloaded without
bouncing
++
| Matthew Runo
| Zappos Development
| [EMAIL PROTECTED]
| 702-943-7833
++
On Oct 4, 2007, at 6:11 AM, Ryan McKinley wrote:
Robert Young wrote:
Hi,
We're just about to start work on a project in Solr and there are a
couple of points which I
Yu-Hui Jin wrote:
Hi, there,
Given that there are some questions on the updated XML schema for the response
in Solr 1.2, can someone point me to the XML schema? Is it documented
somewhere?
I'm particularly interested in the different status code we would have in
the response for either update
Using embedded solr, there is no (built in) way to access remote
indexes. If you want to access remote indexes you need to run a server.
Solr 1.3 (trunk) includes a java client you may want to look at:
http://wiki.apache.org/solr/Solrj
If you poke around, this also includes simple ways to run
Your query will work if you make sure the URL field is omitted from the
document at index time when the field is blank.
adding something like:
<filter class="solr.LengthFilterFactory" min="1" max="1" />
to the schema field should do it without needing to ensure it is not
null or on the
- Delete all index files via a delete command
make sure to optimize after deleting the docs -- optimize has lucene get
rid of deleted docs rather than appending them to the end of the index.
what version of solr are you running? if you are running 1.3-dev
deleting *:* is fast -- if you
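The delete-then-optimize sequence described above, as update messages posted to /update (a sketch; the wildcard delete assumes a 1.3-dev build as noted):

```xml
<delete><query>*:*</query></delete>
<commit/>
<optimize/>
```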
Can you start a JIRA issue and attach the patch?
I have not seen this happen, but I bet it is caused by something from:
https://issues.apache.org/jira/browse/SOLR-215?page=com.atlassian.jira.plugin.ext.subversion:subversion-commits-tabpanel
Can we add that test to trunk? By default it does not
I have had this and other files index correctly using a different
combination version of Tomcat/Solr without any problem (using similar
code, I re-wrote it because I thought it would be better to use Solrj).
I get the same error whether I use a simple StringBuilder to create the
add manually or
Daley, Kristopher M. wrote:
I have tried changing those settings, for example, as:
SolrServer server = new CommonsHttpSolrServer(solrPostUrl);
((CommonsHttpSolrServer)server).setConnectionTimeout(60);
((CommonsHttpSolrServer)server).setDefaultMaxConnectionsPerHost(100);
/commons/httpclient/params/HttpConnectionManagerParams.html
Daley, Kristopher M. wrote:
I tried 1 and 6, same result.
-Original Message-
From: Ryan McKinley [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 19, 2007 11:18 AM
To: solr-user@lucene.apache.org
Subject: Re
Lance Norskog wrote:
I believe I saw in the Javadocs for Lucene that there is the ability to
return the unique values for one field for a search, rather than each
record. Is it possible to add this feature to Solr? It is the equivalent of
'select distinct' in SQL.
Look into faceting:
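The excerpt cuts off here; a facet request is the usual way to get the distinct values of a field. A sketch, with the field name manu assumed:

```
http://localhost:8983/solr/select?q=*:*&rows=0&facet=true&facet.field=manu&facet.limit=-1
```

rows=0 suppresses the documents themselves; the facet counts are the 'select distinct'.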
However, if I go to the tomcat server and restart it after I have issued
the process command, the program returns and the documents are all
posted correctly!
Very strange behavior. Am I somehow not closing the connection
properly?
What version is the solr you are connecting to? 1.2 or
So that means distributed search is not a basic component of the Solr project.
I think you just need load balancing. Solr is not a load balancer, you
need to find something that works for you and configure that elsewhere.
Solr works fine without persistent connections, so simple round
vanderkerkoff wrote:
I found another post that suggested editing the unlockOnStartup value in
solrconfig.xml.
Is that a wise idea?
If you only have a single solr instance at a time, it should be totally
fine.
if you really want #3 and #4 to show up, then have two fields: one using
whitespace tokenizer, one using keyword tokenizer; both using
EdgeNGramFilter ... boost the query to the first field higher than the
second field (or just rely on the coordFactor and the fact that ca will
match on both
Hello-
I'm building an interface where I need to display matching options as a
user types into a search box. Something like google suggest, but it
needs to be a little more flexible in its matches.
At first glance, I thought I just needed to write a filter that chunks
each token into a set
Should the EdgeNGramFilter use the same term position for the ngrams
within a single token?
As is, the EdgeNGramTokenFilter increments the term position for each
character. In analysis.jsp, with the input hello, I get:
term position 1 2 3 4 5
term text h
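For reference, the terms under discussion are just the prefixes of the token; a plain-Java sketch of the grams the filter emits for hello (assuming minGramSize=1 and side=front), separate from the position-increment question:

```java
public class FrontGrams {
    public static void main(String[] args) {
        String token = "hello";
        StringBuilder grams = new StringBuilder();
        // front-edge grams of length 1 .. token.length()
        for (int len = 1; len <= token.length(); len++) {
            if (grams.length() > 0) grams.append(' ');
            grams.append(token.substring(0, len));
        }
        System.out.println(grams);
    }
}
```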
nope, the field options are created on startup -- you can't change them
dynamically (i don't know all the details, but I think it is a file
format issue, not just a configuration issue)
I'm not sure how your app is structured, from what you describe, it
sounds like you need two fields, one
Venkatraman S wrote:
We are using Lucene and are migrating to Solr 1.2 (we are using Embedded
Solr). During this process we are stumbling on certain problems :
1) If the same document is added again, then it is getting added to the
index again (duplicated), in spite of the fact that the IDs are
Alexey Shakov wrote:
Hi,
I use EmbeddedSolrServer to communicate with solr.
Querying the index works fine, but if I try an add/delete, I get:
org.apache.solr.common.SolrException: unknown handler: /update
The same config as http-server works without problems.
Any ideas?
To get /update to
Jack L wrote:
Hi,
I'm about to start a new solr installation. Given the good quality of
development builds in the past, should I use 1.2 or just grab a
nightly build?
Unless you *need* the new features in trunk, stick with 1.2
While we aim to keep the trunk functioning properly, the API's,
The other problem is that after some time we get a Too Many Open Files
error when autocommit fires.
Have you checked your ulimit settings?
http://wiki.apache.org/lucene-java/LuceneFAQ#head-48921635adf2c968f7936dc07d51dfb40d638b82
ulimit -n <number>.
As mike mentioned, you may also want to
Adrian Sutton wrote:
On 11/09/2007, at 7:21 AM, Ryan McKinley wrote:
The other problem is that after some time we get a Too Many Open
Files error when autocommit fires.
Have you checked your ulimit settings?
http://wiki.apache.org/lucene-java/LuceneFAQ#head
I've done a bit of poking on the server and ulimit doesn't seem to be
the problem:
e2wiki:~$ ulimit
unlimited
e2wiki:~$ cat /proc/sys/fs/file-max
170355
try: ulimit -n
ulimit on its own is something else. On my machine I get:
[EMAIL PROTECTED]:~$ ulimit
unlimited
[EMAIL PROTECTED]:~$ cat
George L wrote:
I have been trying the MLT Query using EmbeddedSolr and SolrJ clients, which
is resulting in NPE.
Do you get the same error without solrj?
Can you run the same query with:
http://localhost:8987/solr/select?q=id:11&mlt=true
(just to make sure we only need to look at
Can somebody help me please, I have already spent a whole Saturday night
with the trunk code ;-(
Also, do you get the same error with an empty database?
perhaps:
https://issues.apache.org/jira/browse/SOLR-208
in http://svn.apache.org/repos/asf/lucene/solr/trunk/example/solr/conf/xslt/
check:
example_atom.xsl
example_rss.xsl
Thorsten Scherler wrote:
Hi all,
I am curious whether somebody has written a rss plugin for solr.
The idea is to
1 - if we add fields / remove fields to be indexed, how will this affect
our current indexes? Will we need to completely recreate millions of
indexes (or is it indices)?
Depends what you are trying to do... if you are just adding or removing
fields, the index should be usable. For
Jae Joo wrote:
Hi,
The XML file to be indexed has case-sensitive data.
Ex.
<field name="field1">Computer Software</field>
I would like to have faceting on field1 be CASE SENSITIVE and search
on field1 be CASE INSENSITIVE.
If I add solr.LowerCaseFilterFactory in the analyzer in both index and
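No answer survives in this excerpt, but the usual pattern (a sketch; field and type names are assumptions) is to keep the original case in a string field for faceting and copy it into a lowercased analyzed field for searching:

```xml
<!-- faceting: untouched string, case preserved -->
<field name="field1" type="string" indexed="true" stored="true"/>
<!-- searching: a text type whose analyzer includes solr.LowerCaseFilterFactory -->
<field name="field1_search" type="text" indexed="true" stored="false"/>
<copyField source="field1" dest="field1_search"/>
```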
Where can I find some documentation of Solrj? Does it have a wiki page or
something?
It does not yet have good external documentation. We will definitely
have something before solr 1.3, but for now, looking at the source and
tests is the best option.
Am I using this method improperly or
[QUESTION]
What could be the problem? Or what else can I do to debug this problem?
In general 'luke' is a great tool to figure out what may be happening in
the index.
(assuming you are running 1.2) check your schema fields from:
http://localhost:8983/solr/admin/luke?show=schema
Some form of some files from SOLR-20 should work, but I would suggest
using the client in trunk now:
http://svn.apache.org/repos/asf/lucene/solr/trunk/client/java/solrj/
or you can get it from the nightly builds in:
http://people.apache.org/builds/lucene/solr/nightly/
ryan
Teruhiko Kurosaka
Teruhiko Kurosaka wrote:
or you can get it from the nightly builds in:
http://people.apache.org/builds/lucene/solr/nightly/
For those of you who are interested...
As far as I can tell by inspecting the source code in Trunk,
solrj.jar from the nightly doesn't seem to work with Solr 1.2.
For
Nuno Leitao wrote:
Hi,
I have a 1.3 Solr with the field collapsing patch (SOLR-236 -
http://issues.apache.org/jira/browse/SOLR-236).
Collapsing works great, but only using the dismax and standard query
handler - I haven't managed to get it to work using the MoreLikeThis
handler though - I
Jason P. Weiss wrote:
I had some trouble getting the current production build (1.2.0) working
on 10gR3 (10.1.3.0.0).
I had to remove 3 bad characters off of the front of the web.xml file
and re-jar the WAR file. It worked perfectly after that minor
modification.
Was this a .war you
Nice, valid xml. But if I have an error (for example, <comit/> instead of <commit/>) I
get an HTML page back.
In 1.2, if you map /update to the XmlUpdateHandler in solrconfig.xml,
errors are returned with an HTTP status error (ie, something != 200) +
message. Your servlet runner (Jetty, Tomcat, etc) will
Has anyone looked into using carrot2 clustering with solr?
I know this is integrated with nutch:
http://lucene.apache.org/nutch/apidocs/org/apache/nutch/clustering/carrot2/Clusterer.html
It looks like carrot has support to read results from a solr index:
nithyavembu wrote:
Hi Otis Gospodnetic,
Thanks for the reply. I tried with this URL; it's working but it's not
checking the condition. It's showing all the records if I use this URL. Is
there any solution
I'm not sure about the order of operations, but try:
+FID:8 +RES_TYPE:0
Stu Hood wrote:
Can you resend your question without a tar file?
thanks
ryan
no attachments came through...
Off hand, SolrCore.close() should not exit the program, it just closes
the searchers and cleans up after itself.
System.exit(0);
will terminate the program.
Stu Hood wrote:
I'll try that again... (don't let my e-mail failures reflect badly on Webmail.us =)
Lance Lance wrote:
Hi-
I'd like to make a multivalued field of comma-separated phrases. Is there a
class available that I can use for this?
I can see how to create N separate elements for the same field in the update
XML, but is there something I can use in type definition?
If you are
Xuesong Luo wrote:
Hi, there,
We have one master server and multiple slave servers. The multiple slave
servers can be run either on the same box or different boxes. For
slaves on the same box, is there any best practice that they should use
the same index or each should have separate indexes?
solr requires 1.5. It uses generics and a bunch of other 1.5 code.
Jery Cook wrote:
QUESTION:
Jeryl Cook
^ Pharaoh ^
http://pharaohofkush.blogspot.com/
I need to make solr work with java 1.4; the organization I work for has not
approved java 1.5 for the network... Before I download the
How can I run this program?
On the Apache site they said it's like a sample example program. If so, where do I
have to place this file in tomcat?
If you are running tomcat, this is *not* the way to use solr.
Using tomcat, check:
http://wiki.apache.org/solr/SolrTomcat
Can I make a query for example like:
select for query test where module=ARTICLES
query: test +module:ARTICLES
check:
http://lucene.apache.org/java/docs/queryparsersyntax.html
ryan
I just took a quick look at solrsharp. I don't really have to use it
yet, so this is not an in depth review.
I like the templated SearchResults -- that seems useful.
I don't quite follow the need to parse the SolrSchema on the client
side? Is that to know what fields are available? Could
in solrconfig.xml I found this entry, which is now uncommented
<dataDir>${solr.data.dir:./solr/data}</dataDir>
before it was
<!--
<dataDir>./solr/data</dataDir>
-->
Don't know if this is the desired behaviour. How should I change the entry
not to have the data in the working directory and not to
Saurabh Dani wrote:
Just like Luke, can Solr search any Lucene index by just changing
something in the configuration or Solr stores any specific information in
the indexes which must be there in order to do searches using Solr?
solr uses regular lucene indexes. It can search an index created
Check:
https://issues.apache.org/jira/browse/SOLR-283
This is now fixed in trunk
ryan
Xuesong Luo wrote:
Hi,
I set up solr to autocommit each minute. It works well if I send an add
request, but it does not work for delete; nothing happens after 1
minute. Is this a bug or a designed