According to the documentation, sorting by function has been a feature
since Solr 1.5. It seems like a major regression if this no longer
works.
http://wiki.apache.org/solr/FunctionQuery#Sort_By_Function
The _val_ trick does not seem to work if used with a query term,
although I can try some more
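For reference, a hedged sketch of both approaches (host and the popularity field are invented): the 1.5+ sort-by-function syntax from the wiki page above, and the older _val_ workaround that folds the function into the score:

```
# Solr 1.5+ sort-by-function, per the wiki page:
http://localhost:8983/solr/select?q=*:*&sort=sum(popularity,1)+desc

# pre-1.5 workaround: boost by the function via _val_, then sort by score:
q=ipod _val_:"sum(popularity,1)"
```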
Hello,
I have a file with the input string 91{40}9490949090, and I wanted to return
this file when I search for the query string +91?40?9*. The problem is that
the input string is getting indexed as three terms: 91, 40, 9490949090. Is
there a way to treat { and } as part of the string?
On Thu, Sep 9, 2010 at 3:57 AM, Sandhya Agarwal sagar...@opentext.com wrote:
Hello,
I have a file with the input string 91{40}9490949090, and I wanted to
return this file when I search for the query string +91?40?9*. The
problem is that, the input string is getting indexed as 3 terms 91, 40,
Hi,
We are using Solr 1.3 and getting this error often. There is only one
instance that indexes the data. I did some analysis, which I have put below
along with the scenario in which this error can happen. Can you please validate the issue?
thanks a lot in advance.
SEVERE:
set splitWordsPart=0,splitNumberPart=0
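The attribute names above don't match WordDelimiterFilterFactory's documented parameters; a hedged schema.xml sketch using the documented ones (fieldType name invented) would be the following, where preserveOriginal="1" also keeps the unsplit token, so 91{40}9490949090 remains matchable by the wildcard 91?40?9*:

```xml
<fieldType name="text_keepdelims" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- emit the split parts as usual, but also keep the original token -->
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" generateNumberParts="1"
            preserveOriginal="1"/>
  </analyzer>
</fieldType>
```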
-
Grijesh
--
View this message in context:
http://lucene.472066.n3.nabble.com/Regarding-WordDelimiterFactory-tp1444694p1444742.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi all,
As I've mentioned in the past, I've created some custom field types
which make use of the AbstractSubTypeFieldType class in the current
trunk version of solr for a service we're working on. We're getting
close to putting our service into production (early 2011) and we're
now
Hi Peter,
I've also faced the same problem you did. I changed my
data-config.xml file to match yours and checked that my MS SQL Server is
running on the default port, i.e. 3306, and gave the same in the connection
string, but it's still not working.
Could you please share your solrconfig.xml and
http://svn.apache.org/repos/asf/lucene/dev/branches/
-Original message-
From: Mark Allan mark.al...@ed.ac.uk
Sent: Thu 09-09-2010 10:44
To: solr-user@lucene.apache.org;
Subject: svn branch issues
Hi all,
As I've mentioned in the past, I've created some custom field types
which make
I can't reproduce reliably, so I'm suspecting there are issues in our code.
I'm refactoring to avoid the problem entirely.
Thanks for the response though Erick.
Greg
On 8 September 2010 21:51, Greg Pendlebury greg.pendleb...@gmail.com wrote:
Thanks,
I'll create a deliberate test tomorrow
Thanks. Are you suggesting I use branch_3x and is that considered
stable?
Cheers
Mark
On 9 Sep 2010, at 10:47 am, Markus Jelsma wrote:
http://svn.apache.org/repos/asf/lucene/dev/branches/
-Original message-
From: Mark Allan mark.al...@ed.ac.uk
Sent: Thu 09-09-2010 10:44
To:
If you are using SimpleFSDirectory (either explicitly or via
FSDirectory.open on Windows) with Solr/Lucene trunk or 3.x branch since July
30,
you might have index corruption and you should svn up and rebuild.
More details available here:
https://issues.apache.org/jira/browse/LUCENE-2637
Well, it's under heavy development, but the 3.x branch is more likely to be
released than 1.5.x, which is highly unlikely ever to be released.
On Thursday 09 September 2010 13:04:38 Mark Allan wrote:
Thanks. Are you suggesting I use branch_3x and is that considered
stable?
Cheers
Mark
Hello,
I am using SolrJS and trying to run the Reuters example. I have followed
all the steps, but the browser's error console says that
AjaxSolr is undefined.
I have all the jar files that are required.
So tell me, what should I do? Can you give me a simple example of SolrJS for
Hi,
I am looking for a way to store the checksum of a field's value, something like:
<field name="text" .../>
<!-- the SHA1 checksum of text (before applying analyzer) -->
<field name="text_sha1" type="checksum" indexed="true" stored="true"
.../>
<copyField source="text" dest="text_sha1"/>
I haven't found anything like
Hi,
You can use an UpdateProcessor to do so. This can be used to deduplicate
documents based on exact or near matches with fields in other documents. Check
the wiki page on deduplication [1] for an example.
[1]: http://wiki.apache.org/solr/Deduplication
Cheers,
On Thursday 09 September 2010
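For illustration, a minimal solrconfig.xml sketch of such a chain, assuming the field names from the question (text, text_sha1). Note that the signature classes Solr ships (Lookup3Signature, MD5Signature) are not SHA-1; an exact SHA-1 would need a custom Signature subclass:

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="org.apache.solr.update.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <!-- store the computed signature in text_sha1 -->
    <str name="signatureField">text_sha1</str>
    <bool name="overwriteDupes">false</bool>
    <!-- compute the signature from the raw text field -->
    <str name="fields">text</str>
    <str name="signatureClass">org.apache.solr.update.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```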
Hi,
Running on a Debian 5.0.5 64-bit box, using solr-1.4.1 with Java version "1.6.0_20".
I am seeing weird facet results along with the "right"-looking ones: garbled data,
stuff that looks like a buffer overflow / index off by ... And I even get them
when I do a zero-hit search. I wouldn't expect any
Could you show us the fieldType definitions for your fields? I suspect
you're not getting the tokens you expect. This will almost certainly
be true if the type is string rather than text.
The solr admin page (especially analysis) will help you a lot here, as
will adding debugQuery=on to your
Looks like AND is your defaultOperator [1]. Check your schema.xml and try
adding q.op=or to your query.
[1]: http://wiki.apache.org/solr/SearchHandler#q.op
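For example (host and terms invented), appending q.op to the request overrides the schema's default operator for that query:

```
http://localhost:8983/solr/select?q=apple+banana&q.op=OR
```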
On Thursday 09 September 2010 15:34:52 Stéphane Corlosquet wrote:
Hi all,
I'm new to solr so please let me know if there is a more
That's normal behavior if you haven't configured facet.mincount. Check the
wiki.
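A hedged example of the setting being referred to (field name invented); facet.mincount=1 drops zero-count facet values from the response:

```
facet=true&facet.field=category&facet.mincount=1
```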
On Thursday 09 September 2010 16:05:01 Dennis Schafroth wrote:
I am definitely not excluding the idea that index is garbled, but.. it
doesn't explain that I get facets on zero hit.
The schema is as following:
Evening,
I'm trying to break down the data over a year into facets by month; to avoid
overlap, I'm using -1MILLI on the start and end dates and using a gap of
+1MONTH.
However, it seems like February completely breaks my monthly cycles, leading
to incorrect counts further down the line; facets
What I wanted was a way to determine that the query q=one two is
equivalent to q=two one; by normalizing I might get
q=one two for both, for example, and then q.hashCode() would be the
same.
Simply using q.hashCode() returns different values for each query above, so
this is not
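As a sketch of the normalization idea (class and method names invented, and only valid for plain space-separated term queries, not full Lucene syntax): sort the terms, then compare or hash the result:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class QueryKey {
    // Order-insensitive key for a plain term query: sort the
    // whitespace-separated terms so "one two" and "two one"
    // normalize to the same string (and hence the same hashCode).
    static String canonicalize(String q) {
        return Arrays.stream(q.trim().split("\\s+"))
                     .sorted()
                     .collect(Collectors.joining(" "));
    }

    public static void main(String[] args) {
        System.out.println(
            canonicalize("one two").equals(canonicalize("two one"))); // prints: true
    }
}
```

As the replies point out, this is only safe when the request handler really treats the two orderings identically (e.g. no dismax phrase boosting).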
Those two queries might NOT always be 'the same', depending on how you
have your Solr request handler set up.
For instance, if you have dismax with a ps boost, then two one may end
up with different relevancy scores than one two, because the query as
a phrase will be used for boosting, and
On Thu, Sep 9, 2010 at 1:20 AM, Jonathan Rochkind rochk...@jhu.edu wrote:
You _could_ use SolrJ with EmbeddedSolrServer. But personally I wouldn't
unless there's a reason to. There's no automatic reason not to use the
ordinary Solr HTTP api, even for an in-house application which is not a
Hi
I'm trying to get my (overly complex and strange) product IDs sorting properly
in Solr.
Approaches I've tried so far, that I've given up on for various reasons:
--Normalizing/padding the IDs so they naturally sort
alphabetically/alphanumerically.
--Splitting the ID into multiple Solr fields
Hi,
so I suppose there is no solution. Is there a chance that SchemaField
will become extensible in the future? Because, at the moment, all the field
attributes (indexed, stored, etc.) are hardcoded inside SchemaField. Do
you think it is worth opening an issue about it?
--
Renaud Delbru
On
Hi Erick,
On Thu, Sep 9, 2010 at 9:41 AM, Erick Erickson erickerick...@gmail.com wrote:
Could you show us the fieldType definitions for your fields? I suspect
you're not getting the tokens you expect. This will almost certainly
be true if the type is string rather than text.
I should
Hi Markus,
On Thu, Sep 9, 2010 at 9:55 AM, Markus Jelsma markus.jel...@buyways.nl wrote:
Looks like AND is your defaultOperator [1].
yes, my schema.xml file has <solrQueryParser defaultOperator="AND"/>, which
is why I thought that the number of hits would decrease every time you add a
keyword.
: $ svn co
: http://svn.apache.org/viewvc/lucene/solr/branches/branch-1.5-dev
: svn: Repository moved permanently to
: '/viewvc/lucene/solr/branches/branch-1.5-dev/'; please relocate
those aren't SVN URLs, those are URLs for the ViewVC SVN browsing tool
your SVN client is trying to
Hi,
With the Lucene svn merge a lot of tentative release dates seemed to have
slipped. Which is fine, because I think the merge is for the greater good of
both projects in the long run.
However I do subscribe to the school of thought that believes OSS is best
served with a release often
I find myself in need of the ability to access one field by more than
one name, for application transition purposes. Right now we have a
field (ft_text, by far the largest part of the index) that is indexed
but not stored. This field and three others are copied into an
additional field
yes, my schema.xml file has <solrQueryParser defaultOperator="AND"/>, which
is why I thought that the number of hits would decrease every time you add a
keyword.
You are using dismax, so it is determined by the mm parameter.
Indeed, it's the dismax, I missed it! My bad...
-Original message-
From: Ahmet Arslan iori...@yahoo.com
Sent: Thu 09-09-2010 20:37
To: solr-user@lucene.apache.org;
Subject: Re: Inconsistent search results with multiple keywords
yes, my schema.xml file have solrQueryParser
You should check Jira's roadmap [1] instead. It shows a clear picture of what
has been done since the 1.4.1 release and pending issues for the 3.x branch and
others.
[1]:
https://issues.apache.org/jira/browse/SOLR?report=com.atlassian.jira.plugin.system.project:roadmap-panel
On 09.09.2010, at 20:47, Markus Jelsma wrote:
You should check Jira's roadmap [1] instead. It shows a clear picture of what
has been done since the 1.4.1 release and pending issues for the 3.x branch
and others.
[1]:
It works as expected. The append, well, appends the parameter and because each
collection has a unique value, specifying two filters on different collections
will always yield zero results.
This, of course, won't work for values that are shared between collections.
-Original
Thank you Erick, Markus and Ahmet! That answered my question. Changing the
value of the mm parameter in solrconfig.xml did have an effect on the 3-
keyword query (it was set to 2<-25%), and removing it entirely forced all
keywords to be present, and the number of hits decreased as expected.
sorry, mm was set to 2<-35%, not 2<-25%, but never mind.
Steph.
On Thu, Sep 9, 2010 at 3:13 PM, Stéphane Corlosquet
scorlosq...@gmail.com wrote:
Thank you Erick, Markus and Ahmet! That answered my question. Changing the
value of the mm parameter in solrconfig.xml did have an effect on the 3
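For reference, a hedged sketch of where mm lives (handler name and values illustrative). In dismax mm syntax, 2<-25% means "up to 2 clauses, all are required; above that, 25% may be missing":

```xml
<requestHandler name="search" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <!-- require all terms up to 2; above 2, allow 25% to be missing -->
    <str name="mm">2&lt;-25%</str>
  </lst>
</requestHandler>
```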
On 9/8/2010 4:32 PM, David Yang wrote:
I have a table that I want to index, and the table has no datetime
stamp. However, the table is append only so the primary key can only go
up. Is it possible to store the last primary key, and use some delta
query=select id where id > ${last_id_value}
I
Shawn,
Can you provide a sample of passing the parameter via URL? And how using it
would look in the data-config.xml
Thanks!
-Vladimir
-Original Message-
From: Shawn Heisey [mailto:elyog...@elyograg.org]
Sent: Thursday, September 09, 2010 3:04 PM
To: solr-user@lucene.apache.org
Hi All,
I am building a custom DIH transformer to massage the data before it's indexed.
Extending org.apache.solr.handler.dataimport.Transformer requires me to
implement
transformRow(Map<String, Object> row, Context context)
During the actual import, Solr complains because it's looking for
I am trying to use the spellchecker but cannot get to the point where
spelling suggestions are returned.
I have a text field defined in the schema.xml file as:
<field name="text" type="text_ws" indexed="true" stored="false"
multiValued="true"/>
I modified solrconfig.xml to point the analyzer to
Hi,
I am using solr 1.4 and was wondering if the dataimport.properties location
can be configured for DataImportHandler. Is there a config file or a
property that I can use to specify a custom location ?
Thanks,
Manali
I don't see you passing spellcheck parameters in the query string. Are they
configured as default in your search handler?
-Original message-
From: Gregg Hoshovsky hosho...@ohsu.edu
Sent: Thu 09-09-2010 22:40
To: solr-user@lucene.apache.org;
Subject: Help on spelling.
I am trying to
But how do you know when the document actually makes it to Solr,
especially if you are using commitWithin and not explicitly calling
commit?
One solution is to have a status field in the database such as
0 - unindexed
1 - indexing
2 - committed / verified
And have a separate process query solr
Hi,
I'm using EmbeddedSolrServer for my unit tests. I just can't figure out how to
add my data (stored in xml files similar to those in the example application
example/exampleDocs) after instantiating the server. The source code for the
simple post tool seems to require a stream to write the
I have this field in my schema.xml:
<field name="partylocation" type="boolean" indexed="true" stored="true"/>
This one in my data-config:
<field name="partylocation" column="PARTYLOCATION"/>
Now, how can I return all results for which partylocation = true?
Thanks!
You're right for the general case. I should have added that our setup is
perhaps a little bit out of the ordinary in that we send explicit commits to
solr as part of our indexing app.
Once a commit has finished we're sure all docs until then are present in
solr. For us it's much more difficult to
: I'm trying to break down the data over a year into facets by month; to avoid
: overlap, I'm using -1MILLI on the start and end dates and using a gap of
: +1MONTH.
:
: However, it seems like February completely breaks my monthly cycles, leading
Yep.
Everything you posted makes sense to me in
I didn't build SOLR, I downloaded a prebuilt zip file. Am I correct in my
understanding that the charFilter class PatternReplaceCharFilterFactory is
not part of the prebuilt SOLR 1.4.1 distribution? If that's right, how do I
go about adding it? The more explicit the instructions the better, as I
Okay, putting spellcheck=true makes all the difference in the world.
Thanks
On 9/9/10 1:58 PM, Markus Jelsma markus.jel...@buyways.nl wrote:
I don't see you passing spellcheck parameters in the query string. Are they
configured as default in your search handler?
-Original
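For reference, a hedged example of the kind of request that now works (host, term, and extra parameters invented):

```
http://localhost:8983/solr/select?q=documnet&spellcheck=true&spellcheck.count=5&spellcheck.collate=true
```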
Could you give us an idea of why you think it isn't present? As far as I can
tell,
it's been around for a while. Are you getting an error and if so, can you
show it
to us?
Look in schema.xml of what you downloaded (probably in the example
directory).
Is it mentioned there? If so, it should just
Hi Chris,
Yes, I saw the facet.range.include feature and briefly tried to implement it
before realising that it was Solr 3.1 only :) I agree that it seems like
the best solution to the problem.
Reindexing with a +1MILLI hack had occurred to me and I guess that's what
I'll do in the meantime; it
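To illustrate the drift (field name and dates invented): with the -1MILLI trick, each +1MONTH boundary is computed from the previous one, so once February truncates the day-of-month, every later bucket inherits it:

```
facet=true
facet.date=created_dt
facet.date.start=2009-12-31T23:59:59.999Z   (i.e. 2010-01-01 minus 1MILLI)
facet.date.gap=+1MONTH
facet.date.end=2010-12-31T23:59:59.999Z

# boundaries: Dec 31 -> Jan 31 -> Feb 28 -> Mar 28 -> Apr 28 ...
#                                   ^ February caps the day at 28, and all
#                                     later boundaries keep day 28
```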
Hello-
There were a few bugs in this area that are fixed in Solr 1.4. There are
many other bugs which were also fixed. We suggest everyone upgrade to 1.4.
There are different locking managers, and you may be able to use a
different one. Also, if this is over NFS that can cause further
The stream.file and stream.url parameters should do this.
Lance
Rico Lelina wrote:
Hi,
I'm using EmbeddedSolrServer for my unit tests. I just can't figure out how to
add my data (stored in xml files similar to those in the example application
example/exampleDocs) after instantiating the
On 9/9/2010 1:23 PM, Vladimir Sutskever wrote:
Shawn,
Can you provide a sample of passing the parameter via URL? And how using it
would look in the data-config.xml
Here's the URL that I send to do a full build on my last shard:
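As a hedged sketch of the general mechanism (parameter name, values, and query invented): DIH makes request parameters available in data-config.xml as ${dataimporter.request.<name>}, so the last indexed id can be passed on the URL:

```
http://localhost:8983/solr/dataimport?command=full-import&clean=false&last_id=12345

<!-- data-config.xml referencing the parameter (entity and query invented): -->
<entity name="item"
        query="select id, data from item where id &gt; '${dataimporter.request.last_id}'">
```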
On 9/9/2010 5:38 PM, Erick Erickson wrote:
Could you give us an idea of why you think it isn't present? As far as I can
tell,
it's been around for a while. Are you getting an error and if so, can you
show it
to us?
Look in schema.xml of what you downloaded (probably in the example
directory).
I use the PingRequestHandler option that tells my load balancer
whether a machine is available.
When the service is disabled, every one of those requests, which my load
balancer makes every five seconds, results in the following in the log:
Sep 9, 2010 6:06:58 PM
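For context, a hedged sketch of the healthcheck wiring in a 1.4-era solrconfig.xml (file name per the stock example config); while the file is absent, the ping handler returns an error, which is what lands in the log every five seconds:

```xml
<admin>
  <defaultQuery>solr</defaultQuery>
  <!-- ping returns an error while this file does not exist -->
  <healthcheck type="file">server-enabled</healthcheck>
</admin>
```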
iwAccess is the reader lock to iwCommit's writer lock - so the scenario
you bring up should be protected - the reader lock is used in only one
place in the class (addDoc), while every other call to openWriter is
protected by the writer lock.
I'd worry more about the case where two add documents
In the general case, this would require a new method on compound queries
to sort themselves into a canonical order, or refuse to. Somehow, I
don't think this will happen. However, it could be done with boolean
queries only, which would make it somewhat easier to combinatorially
compose OR
Look at Deduplication:
http://wiki.apache.org/solr/Deduplication
It implements a unique hashcode (Lookup3Signature,
http://wiki.apache.org/solr/Lookup3Signature) as a tool that avoids
rewriting the same document over and over. It declares this in
solrconfig.xml instead of schema.xml.
Lance
I just checked out the trunk, and branch 3.x This query is accepted on
both, but gives no responses:
http://localhost:8983/solr/select/?q=*:*&sort=dist(2,x_dt,y_dt,0,0)+asc
x_dt and y_dt are wildcard fields with the tdouble type. tdouble
explicitly says it is stored and indexed. Your
On Thu, Sep 9, 2010 at 21:00, Lance Norskog goks...@gmail.com wrote:
I just checked out the trunk, and branch 3.x This query is accepted on both,
but gives no responses:
http://localhost:8983/solr/select/?q=*:*&sort=dist(2,x_dt,y_dt,0,0)+asc
So you are saying when you add the sort parameter you
I use nutch to crawl and index to Solr. My code is working. Now, I want to
update the value of one of the fields of a document in the solr index after the
document was already indexed, and I have only the document id. How do I do
that?
Thanks.