Reopened SOLR-1051:
https://issues.apache.org/jira/browse/SOLR-1051?focusedCommentId=12715030&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12715030
Koji
Koji Sekiguchi wrote:
Maybe I did something wrong; I got an NPE when trying MERGEINDEXES:
Walter,
The analysis link does not produce any matches for either @ or !...@#$%^*()
strings when I try to match against bathing. I'm worried that this might be
the symptom of another problem (which has not revealed itself yet) and want
to get to the bottom of this...
Thank you.
sm
Walter
OK, here's the deal:
<str name="rawquerystring">-features:foo features:(\...@#$%\^\*\(\))</str>
<str name="querystring">-features:foo features:(\...@#$%\^\*\(\))</str>
<str name="parsedquery">-features:foo</str>
<str name="parsedquery_toString">-features:foo</str>
The text analysis is throwing away the non-alphanumeric characters.
So the fix for this problem would be
1. Stop using WordDelimiterFilter for queries (what is the alternative) OR
2. Not allow any search strings without any alphanumeric characters..
SM.
Yonik Seeley-2 wrote:
OK, here's the deal:
<str name="rawquerystring">-features:foo
On Mon, Jun 1, 2009 at 10:50 AM, Sam Michaels mas...@yahoo.com wrote:
So the fix for this problem would be
1. Stop using WordDelimiterFilter for queries (what is the alternative) OR
2. Not allow any search strings without any alphanumeric characters..
Short term workaround for you, yes.
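A minimal client-side sketch of option 2 above (reject any query string that contains no alphanumeric characters at all, so the analysis chain cannot reduce it to an empty query), assuming a POSIX shell front end:

```shell
# Returns success (0) when the string contains at least one
# alphanumeric character, failure otherwise.
has_alnum() {
  printf '%s' "$1" | grep -q '[[:alnum:]]'
}

if has_alnum '!@#$%^*()'; then echo accept; else echo reject; fi   # rejected
if has_alnum 'features:foo'; then echo accept; else echo reject; fi # accepted
```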
I
Hi,
The 'content' field that I am indexing is usually large (e.g. a PDF doc of a
few MB in size). I need highlighting to be on. This seems to require that I
set the 'content' field to be STORED, which returns the whole content field
in the search result XML for each matching document.
Use the fl param to ask for only the fields you need, but also keep hl=true.
Something like this:
http://localhost:8080/solr/select/?q=bear&version=2.2&start=0&rows=10&indent=on&hl=true&fl=id
Note that fl=id means the only field returned in the XML will be the id
field.
Highlights are still returned
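A variation on that URL, assuming the large field is named 'content': hl.fl names the field to generate snippets from, while fl=id keeps the stored content out of the main result list.

```shell
# Build the request URL; host, port, and the 'content' field name
# are assumptions about your setup.
base='http://localhost:8080/solr/select/'
params='q=bear&version=2.2&start=0&rows=10&indent=on&hl=true&hl.fl=content&fl=id'
echo "${base}?${params}"
```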
Hello,
I'm looking for a simple way to automate (in a shell script) a request for
the number of times an index has been optimized (since the Solr webapp has
last started). I know that this information is available on the Solr stats
page (http://host:port/solr/admin/stats.jsp) under Update
Not sure if it's simpler, but the JMX interface is more structured.
I think that just grabbing the page and parsing out the content with
your favorite tool (e.g. Ruby's Hpricot) is pretty simple.
Eric
On Jun 1, 2009, at 1:17 PM, iamithink wrote:
Hello,
I'm looking for a simple way to
HI All,
Is there a way to perform filtering based on keyword density?
Thanks
--
Alex Shevchenko
Thanks a lot for your answer, it fixed all my issues!
It's working really well!
Cheers,
Vincent
--
View this message in context:
http://www.nabble.com/User-search-in-Facebook-like-tp23804854p23818867.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thanks for the quick response. I agree that for this one-off task the grab
and parse method works fine, but I'll keep the JMX interface in mind for
other tasks in the future.
Here's my particular hack solution in case this helps anyone else:
wget -q -O-
Hello,
That stats page is really XML + XSLT that transforms the XML to HTML. View the
source of the stats page. That should make it very easy to parse the stats
response/page and extract the data you need.
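Along those lines, a hedged sketch of pulling the optimize count out of that XML with sed; the <stat name="optimizes"> element shape below is an assumption, so check the real page source first.

```shell
# Stand-in for the stats XML; in practice, pipe
#   wget -q -O- "http://host:port/solr/admin/stats.jsp"
# into the same sed command.
stats_fragment='<stat name="optimizes">3</stat>'
optimizes=$(printf '%s' "$stats_fragment" \
  | sed -n 's/.*<stat name="optimizes">[[:space:]]*\([0-9]*\).*/\1/p')
echo "$optimizes"
```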
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original
Hi Chris,
I am new to Solr.
When it is initialized for the first time, how can I change it?
Thanks
Francis
-Original Message-
From: Chris Harris [mailto:rygu...@gmail.com]
Sent: Sunday, May 31, 2009 3:00 PM
To: solr-user@lucene.apache.org
Subject: Re: Java OutOfmemory error during
Something like that. Just not 'appears N times' but 'number of times foo
appears / total number of words >= some value'.
On Mon, Jun 1, 2009 at 21:00, Otis Gospodnetic
otis_gospodne...@yahoo.comwrote:
Hi Alex,
Could you please provide an example of this? Are you looking to do
something like find all docs
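Stated as arithmetic, the density idea is just term occurrences divided by total word count. A client-side sketch (no Solr feature is assumed here):

```shell
# Compute the density of a term in a whitespace-separated text.
text='foo bar foo baz qux'
term='foo'
total=$(printf '%s\n' "$text" | tr ' ' '\n' | wc -l)      # total words
hits=$(printf '%s\n' "$text" | tr ' ' '\n' | grep -cx "$term")  # exact matches
awk -v h="$hits" -v t="$total" 'BEGIN { printf "%.2f\n", h / t }'
```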
Hello,
I'm using the dismax handler for the phrase matching. I have a few legal
resources in my index in the following format for example
title state
dui faq1 california
dui faq2 florida
dui faq3 federal
But I don't need to sort using this value. I need to cut results where this
value (for a particular query term!) is not in some range.
On Mon, Jun 1, 2009 at 22:20, Walter Underwood wunderw...@netflix.comwrote:
That is the normal relevance scoring formula in Solr and Lucene.
It is a bit fancier
That is the normal relevance scoring formula in Solr and Lucene.
It is a bit fancier than that, but you don't have to do anything
special to get that behavior.
Solr also uses the inverse document frequency (rarity) of each
word for weighting.
Look up tf.idf for more info.
wunder
On 6/1/09
We have too many issues with 1.3 running for longer than 12 hours and want to
look into a more updated version, either a nightly or a specific svn
revision that we can pull to replace it. Any recommendations for a date
since the 1.3.0 release 9 months ago? Doesn't have to be super new or
What sort of issues? We run Solr 1.3 for days or weeks with almost no
problems. We have one odd failure that we haven't been able to reproduce
in test, but it is very rare, once or twice per month across five servers.
wunder
On 6/1/09 12:34 PM, sroussey srous...@network54.com wrote:
We have
Hi,
1.3 is quite solid, so my guess is the memory problems are a question of
configuration, inappropriate data input or analysis, or inadequate hardware.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: sroussey srous...@network54.com
To:
Hi All,
I am facing an issue while adding multi-language support in Solr.
Here is what I am doing.
1) have a field of type text_de whose analyzer uses
SnowballPorterFilterFactory with German2 as the language
2) copy the German locationName into this field at the index
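For reference, a minimal sketch of what those two steps might look like in schema.xml; the field names and tokenizer choice here are assumptions, not taken from your config:

```xml
<fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="German2"/>
  </analyzer>
</fieldType>

<field name="locationName_de" type="text_de" indexed="true" stored="true"/>
<copyField source="locationName" dest="locationName_de"/>
```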
I found what I was doing wrong. The XML document that I was posting didn't
have the character encoding declaration, due to which Solr was ignoring the
special characters.
Thanks,
Kalyan Manepalli
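For anyone hitting the same thing, a sketch of the fix: an explicit encoding declaration on the posted XML (the file name and field values here are made up).

```shell
# Write an add document that declares its encoding up front.
cat > doc.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<add><doc><field name="id">1</field><field name="name">München</field></doc></add>
EOF
head -n 1 doc.xml
```

When posting, keep the HTTP header consistent with the declaration, e.g. `curl -H 'Content-type: text/xml; charset=utf-8' --data-binary @doc.xml http://localhost:8983/solr/update` (host and port assumed).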
-Original Message-
From: Manepalli, Kalyan [mailto:kalyan.manepa...@orbitz.com]
Sent: Monday, June 01,
We are planning to upgrade Solr 1.2.0 to 1.3.0.
Under 1.3.0, which war file do I need to use and deploy for my application?
We are using WebLogic.
There are two war files:
/opt/apache-solr-1.3.0/dist/apache-solr-1.3.0.war and
/opt/apache-solr-1.3.0/example/webapps/solr.war.
Hey all,
I was just wondering if anyone else is getting an error with today's nightly
while sorting on the random field.
Thanks Rob.
Jun 1, 2009 4:52:37 PM org.apache.solr.common.SolrException log
SEVERE: java.lang.NullPointerException
at
Can you provide details on the errors? I don't think we have a
specific how to, but I wouldn't think it would be much different from
1.2
-Grant
On May 31, 2009, at 10:31 PM, Fer-Bj wrote:
Hello,
is there any how to already created to get me up using SOLR 1.3
running
for a chinese
They are identical. solr.war is a copy of apache-solr-1.3.0.war.
You may want to look at example target in build.xml:
<target name="example"
        description="Creates a runnable example configuration."
        depends="init-forrest-entities,dist-contrib,dist-war">
  <!-- copy apache-solr-1.3.0.war
I am using the 2009-05-27 build of Solr 1.4. Under this build, I get a facet
count of 7 for the value Seasonal on my category field. However, when I do
a filter query of 'fq=cat:Seasonal', I get only 1 result.
I switched back to Solr 1.3 to see if it's a problem with my config. I
found that
I'm sending 3 files:
- schema.xml
- solrconfig.xml
- error.txt (with the error description)
I can confirm by now that this error is due to invalid characters for the
XML format (ASCII 0 or 11).
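As a sanitizing step before posting, one could strip the control characters that XML 1.0 forbids; a sketch where the tr ranges cover everything below 0x20 except tab, newline, and carriage return:

```shell
# ASCII 0 (NUL) and 11 (vertical tab) both fall in the deleted ranges.
printf 'bad\000byte\013here\n' | tr -d '\000-\010\013\014\016-\037'
```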
However, this problem now is taking a different direction: how to start
using the CJK instead of the
Hi,
I am using a Solr nightly build for my search.
I have to search in the location field of the table which is not my default
search field.
I will briefly explain my requirement below:
I want to get the same/similar result when I give location multiple
keywords, say San jose ca USA
or USA ca