Hello,
I was wondering if the following is possible. If you do a search on
wikiseek, you get a summary of categories at the top of the page.
Look here: http://www.wikiseek.com/results.php?q=java
I was wondering if it is possible to do something similar using Solr. So far
the only solution I see
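For what it's worth, Solr's simple faceting (facet=true plus facet.field) is the usual way to get this kind of category summary next to the results. A minimal sketch of the request URL, assuming a `category` field exists in your schema (the field name here is hypothetical):

```python
from urllib.parse import urlencode

# Build a Solr select URL that asks for per-category counts alongside the
# normal search results. "category" is an assumed facet field name.
def facet_url(base, q, field, limit=10):
    params = urlencode({
        "q": q,
        "facet": "true",
        "facet.field": field,
        "facet.limit": limit,
    })
    return f"{base}/select?{params}"

print(facet_url("http://localhost:8983/solr", "java", "category"))
```

The response then carries a facet_counts section with the top field values and their document counts, which you can render above the hit list.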
Hello,
Currently we are using Lucene for our search on the intranet, but we are
thinking of replacing it with Solr.
While indexing, we have built in a system that assures us that we will
never be without a Lucene index for more than a few seconds.
I was wondering if Solr has something like that. I
Sorry, I sent that by accident. This is the next part of the mail.
I mean, if I do the following:
- delete all documents from the index
- add all documents
- do a commit.
Will this result in a temporary empty index, or will I always have results?
On 3/21/07, Thierry Collogne [EMAIL PROTECTED] wrote:
...I mean if I do the following.
- delete all documents from the index
- add all documents
- do a commit.
Will this result in a temporary empty index, or will I always have results?...
Changes to the index are invisible until you do a commit;
the documents are only deleted when you do a commit ...
so you should never have an empty index (or at least not for more than a
couple of seconds).
Note that you don't have to delete all documents; you can just upload
new documents with the same UniqueID and Solr will delete the old
documents automatically ... this way you are guaranteed not to have an empty
index.
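The delete-all / re-add / commit cycle maps to three XML messages posted to Solr's /update handler, and none of them is visible to searchers until the commit. A minimal sketch of the message bodies (the delete-by-query over the unique key, and the `id` field name, are assumptions; adjust to your schema):

```python
from xml.sax.saxutils import escape

# Sketch of the three XML update messages posted to /update.
# Deletes and adds stay invisible to searchers until the <commit/>.
def delete_all(unique_key="id"):
    # Delete-by-query over the unique key; the field name is an assumption.
    return f"<delete><query>{unique_key}:[* TO *]</query></delete>"

def add_doc(fields):
    body = "".join(
        f'<field name="{name}">{escape(value)}</field>'
        for name, value in fields.items()
    )
    return f"<add><doc>{body}</doc></add>"

commit = "<commit/>"

print(delete_all())
print(add_doc({"id": "42", "title": "café"}))
print(commit)
```

Because searchers keep serving the old snapshot until the commit finishes, the "temporary empty index" window should not exist from a client's point of view.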
Hello,
I am using the post.jar file to update the search indexes. The problem is that
foreign characters like é, à, ... don't come through correctly.
Even when I use the example XML files (like utf8-example.xml), the
characters don't work. Could this be a problem with post.jar?
When I open the files
On 3/21/07, Thierry Collogne [EMAIL PROTECTED] wrote:
...I am using the post.jar file to update the search indexes. Problem is that
foreign characters like é, à, ... don't work correctly...
You're right, I have entered the issue in
https://issues.apache.org/jira/browse/SOLR-194
For now,
On 3/21/07, Bertrand Delacretaz [EMAIL PROTECTED] wrote:
...For now, using this as a workaround should help:
java -Dfile.encoding=UTF-8 -jar post.jar
http://localhost:8983/solr/update utf8-example.xml...
Should be fixed now; if you can grab the latest SimplePostTool code [1],
it should work.
I used the new jar file and removed -Dfile.encoding=UTF-8 from my jar call,
and the problem isn't there anymore.
Thanks a lot for the help.
On 21/03/07, Bertrand Delacretaz [EMAIL PROTECTED] wrote:
On 3/21/07, Thierry Collogne [EMAIL PROTECTED] wrote:
...What would be the best way of
On 3/21/07 1:33 AM, [EMAIL PROTECTED] wrote:
Note that you don't have to delete all documents; you can just upload
new documents with the same UniqueID and Solr will delete the old
documents automatically ... this way you are guaranteed not to have an empty
index.
That
: new documents with the same UniqueID and Solr will delete the old
: documents automatically ... this way you are guaranteed not to have an empty
: index
:
: That works if you keep track of all documents that have disappeared
: since the last index run. Otherwise, you end up with orphans in
a
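The orphan concern raised above is essentially a set difference between what is in the index and what is currently in the source system. Assuming you can enumerate both id sets (e.g. primary keys from the source database and a dump of the index's unique keys), a sketch:

```python
# Orphans: ids still present in the index but no longer in the source.
# Deleting exactly this set keeps the index live the whole time, with no
# delete-all / rebuild window.
def find_orphans(index_ids, source_ids):
    return set(index_ids) - set(source_ids)

orphans = find_orphans({"1", "2", "3"}, {"1", "3"})
print(orphans)  # {'2'}
```

Issuing deletes for just that set, followed by the re-adds and a single commit, avoids both the empty-index window and the orphan buildup.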
Hello all:
While I realize this goes against the grain of an indexing server, is there
any way to do wildcard searching like the following:
Term indexed is 123456789
Searching for *456* would find 123456789
Is there any mechanism to enable or allow for that scenario?
Thanks!
--
Michael
Lucene now supports *456* type queries; however, it requires setting
an attribute on the QueryParser to allow leading wildcards. Solr
does not set this flag (as far as I can tell from a quick search), so I
don't believe you can do this with Solr currently, until/unless an
option is made to set
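To illustrate what *456* is actually asking for: with both a leading and a trailing wildcard, the query degenerates to a substring match, which is why it cannot use the term index's prefix ordering and must scan terms instead. A rough sketch of the matching semantics (this is the wildcard contract, not Lucene's implementation):

```python
import re

# Translate a Lucene-style wildcard pattern into an anchored regex:
# '*' matches any run of characters, '?' exactly one character.
def wildcard_match(pattern, term):
    regex = "".join(
        ".*" if c == "*" else "." if c == "?" else re.escape(c)
        for c in pattern
    )
    return re.fullmatch(regex, term) is not None

print(wildcard_match("*456*", "123456789"))  # True
print(wildcard_match("*456*", "987654321"))  # False
```

This is also why leading wildcards are disabled by default: a pattern starting with `*` forces an exhaustive walk over every term in the field.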
You all rock. I'm clearing the semi-official legal hurdle with my CTO and
our head counsel to full (or something close to full) disclosure of some of
the architectural details, so stay tuned for as much as I'm allowed to share
(and btw, for any of you that live/work/vacation in the SF Bay area,
In a function that eventually becomes a Solr query, I create a few
TermQuery clauses that go into a BooleanQuery.
For each TermQuery, I do tq.setBoost( score ); where score is a float
my app generates. This usually works except when the numbers get really
small, like the 2.712607e-4 that I just
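A likely culprit with boosts that small: Java's Float.toString switches to scientific notation below 10^-3 (e.g. 2.712607E-4), and anything downstream that expects plain decimal boost syntax will choke on the exponent. A sketch of the mismatch, using a hypothetical decimal-only parser for illustration:

```python
import re

# A hypothetical boost parser that accepts only plain decimal notation,
# the way "term^2.5" style boosts are normally written.
decimal_only = re.compile(r"^\d+(\.\d+)?$")

boost = "%.6E" % 2.712607e-4   # how very small floats tend to serialize
print(boost)                    # 2.712607E-04

# The exponent form is rejected by the naive pattern, while a real float
# parser handles both spellings of the same value.
```

If that is the failure mode, formatting the boost in plain decimal before it enters the query string works around it.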
On 3/21/07, Brian Whitman [EMAIL PROTECTED] wrote:
In a function that eventually becomes a Solr query, I create a few
TermQuery clauses that go into a BooleanQuery.
For each TermQuery, I do tq.setBoost( score ); where score is a float
my app generates. This usually works except when the
This looks like a lucene issue.
http://www.nabble.com/-jira--Created%3A-%28LUCENE-839%29-WildcardQuery-do-not-find-documents-if-leading-and-trailing-*-is-used-tf3435336.html
And it appears to have been fixed recently:
This problem has already been fixed since 2.1.0.
When was 2.1.0 out? Oh - last
Well, I recompiled Solr against the latest Lucene release (2.1.0) and it
still doesn't work. The Nabble page referenced there indicates that it might
not have worked right in 2.1.0, but someone there suggests that it works
in the latest trunk.
Is there perhaps something else that would need
Is it a parse error, or can Tomcat not find jeasy.analysis.MMAnalyzer?
Tomcat's lib directory has the jar with this class, and the classpath is set to it.
On WinXP + Tomcat 6 + Java 1.6 it works well.
Now I am using FreeBSD 6 + Tomcat 6 + Java 1.5_07 (I recompiled solr.war).
Can anyone help me fix it?
tomcat
I just changed the analyzer from jeasy.analysis.MMAnalyzer to
org.apache.lucene.analysis.standard.StandardAnalyzer
and it works well.
But I wrote a test.java that uses jeasy.analysis.MMAnalyzer, and test.java works
well.
2007/3/22, James liu [EMAIL PROTECTED]:
Is it Parse error or tomcat not find
: Is it Parse error or tomcat not find jeasy.analysis.MMAnalyzer
it's a problem parsing your schema.xml, because it can't find the analyzer
class.
: winxp + tomcat 6+ java 1.6, it work well.
:
: now i use freebsd6+tomcat 6+java 1.5_07(i recompiled solr.war)
if it works in windows with java
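For context, the analyzer class is referenced from schema.xml roughly like this (the field type name below is illustrative); Solr can only instantiate it if the jar is visible to the webapp's classloader, not just to a standalone test program's classpath:

```xml
<!-- schema.xml: Solr instantiates the class named here via the webapp
     classloader, so the jar must be visible to the deployed solr.war -->
<fieldtype name="text_cn" class="solr.TextField">
  <analyzer class="jeasy.analysis.MMAnalyzer"/>
</fieldtype>
```

That difference explains how a hand-written test.java can work while Solr inside Tomcat fails to load the same class.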
Yep, I fixed it.
The problem was on Tomcat's side.
Thank you, Chris.
I find you are always the first to answer my questions. Thank you.