I'll reply with the solution to this thread myself (with a different email
address).
I did some debugging on the 1.4.1 source code; my issue is in the data-config.xml
file: the field column name, when stored in the Map object, uses the DB's column
casing (e.g. ID -- id):
<entity name="parentEntity" ...>
<field
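For reference, a minimal data-config.xml sketch of the fix described above — the field's column attribute must match the casing the DB driver actually reports (entity names and the query here are illustrative, not from the original thread):

```xml
<!-- Hypothetical DIH sketch: the column attribute must use the casing
     the driver reports (here upper-case "ID"); otherwise the value is
     not found in the row Map and the field stays empty. -->
<document>
  <entity name="parentEntity" query="SELECT ID, NAME FROM parent">
    <field column="ID" name="id"/>
    <field column="NAME" name="name"/>
  </entity>
</document>
```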
Hi,
I'm using ComplexPhraseQueryParser and I'm quite happy with it.
However, there are some queries using wildcards that are not working.
Example: I want to do a proximity search between the word compiler and the
expression 'cross linker' or 'cross linking' or 'cross linked' ...
(cross-linker
Did you get any exceptions?
Usually the wildcard term you mentioned would be expanded before actually
being searched.
Thanks.
On Mon, Mar 28, 2011 at 1:24 PM, jmr jmpala...@free.fr wrote:
Hi,
I'm using ComplexPhraseQueryParser and I'm quite happy with it.
However, there are some queries
Thanks, Erick, for your reply.
I used protwords.txt to match results for singular and plural
words like bag and bags.
Regards
Anurag Walia
--
View this message in context:
http://lucene.472066.n3.nabble.com/problem-with-snowballporterfilterfactory-tp2729589p2742365.html
Sent from the Solr
Chandan Tamrakar-2 wrote:
Did you get any exceptions?
Usually the wildcard term you mentioned would be expanded before actually
being searched.
No exception. Just no results returned.
JMR
Hi,
you must encode the umlaut in the URL. In your case it must be q=title:f%FCr;
then it should work.
From: Christopher Bottaro [mailto:cjbott...@onespot.com]
Sent: Friday, 25 March 2011 18:48
To: solr-user@lucene.apache.org
Cc: Martin Rödig
Subject: Re:
Hi all,
I would like to know if there is any relation between autocommit and
ramBufferSize.
My Solr config does not contain ramBufferSize, which means it is the
default (32MB). Autocommit settings are after 500 docs or 80 sec,
whichever comes first.
Solr starts with Xmx 2700M. Total RAM is 4 GB.
Does the
There are 3 conditions that will trigger an auto flush in Lucene:
1. the size of the index in RAM is larger than the RAM buffer size
2. the number of documents in memory is larger than the number set by setMaxBufferedDocs
3. the number of deleted terms is larger than the ratio set by
setMaxBufferedDeleteTerms.
Auto flushing by
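On the Solr side, these thresholds map onto solrconfig.xml settings. A hedged sketch (values are illustrative; the 500 docs / 80 sec figures echo the poster's setup, and element placement follows Solr 1.4-era solrconfig.xml):

```xml
<indexDefaults>
  <!-- flush when the in-memory index exceeds this size -->
  <ramBufferSizeMB>32</ramBufferSizeMB>
  <!-- flush when this many docs are buffered in memory -->
  <maxBufferedDocs>1000</maxBufferedDocs>
</indexDefaults>

<updateHandler class="solr.DirectUpdateHandler2">
  <!-- autocommit: whichever limit is hit first triggers a commit -->
  <autoCommit>
    <maxDocs>500</maxDocs>
    <maxTime>80000</maxTime> <!-- milliseconds -->
  </autoCommit>
</updateHandler>
```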
Hi there,
I am new to solr and have just installed it on a suse box with mysql
backend.
Install and MySQL connector seem to be running. I can see the solr admin
interface.
Now I tried to index a table with about 0.5 million rows. That seemed to
work as well. However, I do get 0 results doing a
What query are you doing?
Try q=*:*
Also, what does /solr/admin/stats.jsp report for number of docs?
Upayavira
On Mon, 28 Mar 2011 04:28 -0700, Merlin Morgenstern
merli...@fastmail.fm wrote:
Hi there,
I am new to solr and have just installed it on a suse box with mysql
backend.
Also note that making RAMBufferSize too big isn't useful. Lucid
recommends 128M as the point over which you hit diminishing
returns. But unless you're having problems speed-wise with the
default, why change it?
And are you actually getting OOMs or is this a background question?
Best
Erick
On
Hi there,
I am trying to get solr indexing mysql tables. Seems like I have
misconfigured schema.xml:
HTTP ERROR: 500
Severe errors in solr configuration.
-
org.apache.solr.common.SolrException: copyField destination :'text' does
not
Hi,
I want to use the delete-by-query method to delete indexes.
I tried, for example:
http://10.0.0.178:8983/solr/update?stream.body=
<delete><query>field1:value</query></delete>
and it works
but how can I delete indexes by 2 filters?
http://10.0.0.178:8983/solr/update?stream.body=<delete><query>field1:value1
AND
I resolved it:
http://10.0.0.178:8983/solr/update?stream.body=
<delete><query>(field1:value1)AND(field2:value2)</query></delete>
Thanks
2011/3/28 Gastone Penzo gastone.pe...@gmail.com
Hi,
I want to use the delete-by-query method to delete indexes.
I tried, for example:
The error is saying you have a copyField directive in schema.xml that wants
to copy the value of a field to the destination field 'text', which doesn't
exist (and that is indeed the case, given your supplied fields). Search your
schema.xml for 'copyField'. There's probably something configured related to
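As a minimal sketch of what the error is complaining about, the copyField destination must itself be a declared field (field and type names here are illustrative):

```xml
<fields>
  <field name="title" type="string" indexed="true" stored="true"/>
  <!-- the destination of a copyField must be declared as a field -->
  <field name="text" type="text" indexed="true" stored="false"
         multiValued="true"/>
</fields>

<copyField source="title" dest="text"/>
```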
Thank you both for your input. I ended up using Ahmet's way because it seems
to fit better with the rest of the application.
On Sat, Mar 26, 2011 at 6:02 AM, lboutros boutr...@gmail.com wrote:
The other way could be to extend the SolrQueryParser to read a per field
default operator in the solr
I'm also new but I was able to get DIH working.
From your response you have:
...
Indexing completed. Added/Updated: 0 documents. Deleted 0 documents.
...
<str name="Total Documents Processed">0</str>
I believe your fetch (db source and query) is correct based on the response
but perhaps your mapping
Hi Everyone,
I set up a server and began to index my data. I have two questions I am hoping
someone can help me with. Many of my files seem to index without any problems.
For others, I get a host of different errors. I am indexing primarily web-based
content and have identified my text field as
Hi,
I assume you try to post HTML files from post.jar, and use HTMLStripCharFilter
to sanitize the HTML.
But you refer to your file as if you have multiple docs in one file? XML or
HTML? Multiple files?
To what UpdateRequestHandler are you posting? /update/xml or /update/extract ?
For us to
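For context, a hedged sketch of the kind of analyzer the above assumes — HTMLStripCharFilterFactory running before the tokenizer to sanitize markup (the field type name and filter chain are illustrative):

```xml
<fieldType name="html_text" class="solr.TextField">
  <analyzer>
    <!-- char filters always run first, before tokenization -->
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```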
Jan,
thank you for such a quick reply. I have a feed coming in that I convert to an
<add><doc>…</doc><doc>…</doc>
Here is the type for text including index and query with the changes suggested.
<fieldType name="text" class="solr.TextField"
positionIncrementGap="100">
<analyzer type="index">
The analyzer order doesn't really matter; char filters are always executed
first, regardless of position in the analyzer. Multiple filters of the same
type, however, are affected by order. Also, your error is not caused by a
faulty analyzer; there is something wrong in your XML.
Anyway,
Also, don't forget to encode entities or wrap them in CDATA.
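As a sketch, the two equivalent ways to make markup safe inside a posted XML document (field name illustrative):

```xml
<!-- either escape the entities ... -->
<field name="body">5 &lt; 10 &amp; counting</field>
<!-- ... or wrap the raw text in CDATA -->
<field name="body"><![CDATA[5 < 10 & counting]]></field>
```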
Jan,
thank you for such a quick reply. I have a feed coming in that I convert to
an <add><doc>…</doc><doc>…</doc> Here is the type for text including index
and query with the changes suggested.
<fieldType name="text"
In the spellchecker search component declaration:
http://wiki.apache.org/solr/SpellCheckComponent#Configuration
What role does the name play, which is default in this
sample? Can this be any arbitrary name? Should this name
match with something else in the configuration files?
I came to this
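A hedged sketch of how that name is used, as I understand the component: it identifies one dictionary within the SpellCheckComponent, and a request selects it via the spellcheck.dictionary parameter, so it can be any name as long as the two match; "default" is simply the dictionary used when none is specified (field name below is illustrative):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <!-- referenced by the spellcheck.dictionary request parameter;
         the dictionary named "default" is used when none is given -->
    <str name="name">default</str>
    <str name="field">spell</str>
  </lst>
</searchComponent>
```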
I have about 1000 documents per xml file. I am not really doing anything with
the data other than putting the xml tags around it.
So essentially the data is okay with the exception of a few documents that are
causing the errors.
Let's say document # 47 in the xml file has a problem, is the
(This is one of those messages that I would have responded to at the time if I
only noticed it.)
There is not yet indexing of arbitrary shapes (i.e. your data can only be
points), but with SOLR-2155 you can query via WKT thanks to JTS. If you want
to index shapes then you'll have to wait a
I have about 1000 documents per xml file. I am not really doing anything
with the data other than putting the xml tags around it. So essentially
the data is okay with the exception of a few documents that are causing
the errors.
Let's say document # 47 in the xml file has a problem, is the
On Mon, Mar 28, 2011 at 4:58 PM, Merlin Morgenstern
merli...@fastmail.fm wrote:
[...]
You should probably hide passwords when posting to
public lists.
<document name="content">
<entity name="node" query="select phrase, country from
search_site">
<field column="ID" name="id"/>
Can you please attach the other files?
It doesn't seem to find the enable.master property, so you may want to
check that the properties file exists on the box having issues.
We have the following configuration in the core:
Core -
- solrconfig.xml - Master Slave
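For reference, a sketch of the master/slave ReplicationHandler setup that relies on such properties, assuming enable.master / enable.slave are defined per box (e.g. in solrcore.properties); the host name is illustrative:

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <!-- ${enable.master:false} falls back to false when the property
         is missing, e.g. if the properties file isn't found -->
    <str name="enable">${enable.master:false}</str>
    <str name="replicateAfter">commit</str>
  </lst>
  <lst name="slave">
    <str name="enable">${enable.slave:false}</str>
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
  </lst>
</requestHandler>
```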
Outstanding! Thanks David...I can't wait to take a look at it.
Adam
Sent from my iPhone
On Mar 28, 2011, at 2:16 PM, Smiley, David W. dsmi...@mitre.org wrote:
(This is one of those messages that I would have responded to at the time if
I only noticed it.)
There is not yet indexing of
I'm still interested in what steps I could take to get to the bottom of the
failing tests. Is there additional information that I should provide?
Some of the output below got mangled in the email - here are the (hopefully)
complete lines:
This has a a shape=rect
Grijesh wrote:
Try to send HTML data wrapped in CDATA.
Doesn't work with
$content = ;
And my goal is not to avoid extraction, but to have no problems with
non-English chars
Hi,
Here's my problem: I'm indexing a corpus with text in a variety of
languages. I'm planning to detect these at index time and send the
text to one of a set of suitably-configured fields (e.g. mytext_de for
German, mytext_cjk for Chinese/Japanese/Korean etc.)
At search time I want to search all of
Dear Solr specialists,
my data looks like this:
j]s(dh)fjk [hf]sjkadh asdj(kfh) [skdjfh aslkfjhalwe uigfrhj bsd bsdfga sjfg
asdlfj.
if I want to query for the first word, the following queries must match:
j]s(dh)fjk
j]s(dhfjk
j]sdhfjk
jsdhfjk
dhf
So the matching should ignore some characters
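One way to get that behaviour, sketched and untested: strip the ignorable punctuation with a PatternReplaceCharFilterFactory before tokenizing, at both index and query time, so j]s(dh)fjk and jsdhfjk normalize to the same term (the pattern and field type name are illustrative):

```xml
<fieldType name="text_stripped" class="solr.TextField">
  <analyzer>
    <!-- drop ], [, ( and ) before tokenization -->
    <charFilter class="solr.PatternReplaceCharFilterFactory"
                pattern="[\[\]()]" replacement=""/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Matching an infix fragment like dhf would additionally need wildcards or an NGram filter.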
Hi,
I'm unable to index data; it looks like the datasource is not even read by
Solr. I even created an empty dataimport.properties file at /conf but the problem
persists.
Following is the response text:
<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">0</int>
</lst>
<lst
On Mon, 28 Mar 2011 13:12 +0100, Upayavira u...@odoko.co.uk wrote:
What query are you doing?
/solr/select/?q=welpe%0D%0A&version=2.2&start=0&rows=10&indent=on
Try q=*:*
returns:
<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">5</int>
<lst name="params">
<str name="q">*:*</str>
Hi guys,
I have a Javabin object (it is actually a List data structure) and I need
to convert that to a JSON object.
I am using Gson and pass my List to it: gson.toJson(myList); the return
is just the same, with a couple of () added to the beginning and end.
Could anybody help here?
: Subject: DIH relating multiple DataSources
: In-Reply-To: 1301054278.18711.1433747...@webmail.messagingengine.com
: References:
: AANLkTimY0a7PYnpFa7-KrZ0R8=-duZS9=ivquxyei...@mail.gmail.comAANLkTinr=r
: +-N3HFNRbT1Cx4gvkv-A=cgw5femuox...@mail.gmail.com
:
Ah cool, thanks for your help.
I'll get digging, and see what I can do.
Mark
On Tue, Mar 29, 2011 at 11:36 AM, Chris Hostetter
hossman_luc...@fucit.orgwrote:
: I can't seem to find any references to this issue anywhere except :
: https://issues.apache.org/jira/browse/SOLR-1750
:
: (Which
Can someone take a look at this and let me know what I am doing wrong?
According to Luke, only guid, tags, and aquiDate are available.
Schema is below as well.
<add>
<doc>
<field
name="guid">http://twitter.com/AshleyxArsenic/statuses/52164920388763648</field>
<field name="title"><![CDATA[@Richard_Colo I realy
: Subject: Fields not being indexed?
: In-Reply-To: AANLkTi=iXPPKufBBv=gwz5+j_g+divp6urgnaev66...@mail.gmail.com
: References: AANLkTimXfaWLjeQM4=4oApzQu-=qcql9cnd+ebhxz...@mail.gmail.com
: alpine.DEB.2.00.1103281733100.21091@bester
: AANLkTi=iXPPKufBBv=gwz5+j_g+divp6urgnaev66...@mail.gmail.com
On Mon, Mar 28, 2011 at 3:59 PM, Firdous Ali firdous.al...@yahoo.com wrote:
Hi,
I'm unable to index data; it looks like the datasource is not even read by
Solr. I even created an empty dataimport.properties file at /conf but the
problem
persists.
[...]
Look at the Solr log files, which will
On Mon, Mar 28, 2011 at 2:15 PM, Tom Mortimer t...@flax.co.uk wrote:
Hi,
Here's my problem: I'm indexing a corpus with text in a variety of
languages. I'm planning to detect these at index time and send the
text to one of a suitably-configured field (e.g. mytext_de for
German, mytext_cjk for
Thanks in advance.
Find the screenshot of the analyzer for this problem:
http://lucene.472066.n3.nabble.com/file/n2746849/solr.jpg solr.jpg
I have a problem with the number of characters in the term text. I entered
Polymer, but after SnowballPorterFilterFactory it became Polym, while it
did not exist in
Tom,
Could you share the method you use to perform language detection? Any open
source tools that do that?
Thanks.
--- On Mon, 3/28/11, Tom Mortimer t...@flax.co.uk wrote:
From: Tom Mortimer t...@flax.co.uk
Subject: copyField at search time / multi-language support
To:
On Tue, Mar 29, 2011 at 9:46 AM, anurag.walia walia.anu...@hotmail.com wrote:
[...]
I have a problem with the number of characters in the term text. I entered
Polymer, but after SnowballPorterFilterFactory it became Polym, while it
did not exist in the protwords.txt file. I want that if any word does not exist
Hi Gora,
Thanks for your reply.
I applied SnowballPorterFilterFactory to remove the difference in results
between plural and singular.
If I enter polymer then it works fine, but polymers again gives me
polym,
while bag or bags gives me bag after SnowballPorterFilterFactory.
Please find
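For reference, a sketch of how protwords.txt interacts with the stemmer: SnowballPorterFilterFactory accepts a protected word list, and any word in that file is left unstemmed, so both polymer and polymers would need entries to survive intact.

```xml
<!-- words listed in protwords.txt are excluded from stemming -->
<filter class="solr.SnowballPorterFilterFactory"
        language="English" protected="protwords.txt"/>
```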
On Tue, Mar 29, 2011 at 10:12 AM, anurag.walia walia.anu...@hotmail.com wrote:
Hi Gora,
Thanks for your reply.
I applied SnowballPorterFilterFactory to remove the difference in results
between plural and singular.
If I enter polymer then it works fine, but polymers again gives me
The LGPL and Apache licenses aren't exactly compatible; see:
http://www.apache.org/legal/3party.html#transition-examples-lgpl
http://www.apache.org/legal/resolved.html#category-x
In practice, this was the reason we started the SIS project.
Cheers,
Chris
On Mar 28, 2011, at 11:16 AM, Smiley, David
It will be polymers, but the result will come out different in the case of polymer and
polymers (singular/plural),
and there can be more words like polymer.
Regards
Anurag Walia
On Tue, Mar 29, 2011 at 10:41 AM, anurag.walia walia.anu...@hotmail.com wrote:
It will be polymers, but the result will come out different in the case of polymer and
polymers (singular/plural),
and there can be more words like polymer.
[...]
Your only alternative then is to implement a filter that works the
Is there any other filter which can solve my singular/plural problem?
Anurag
https://issues.apache.org/jira/browse/SOLR-1979
Tom,
Could you share the method you use to perform language detection? Any open
source tools that do that?
Thanks.
--- On Mon, 3/28/11, Tom Mortimer t...@flax.co.uk wrote:
From: Tom Mortimer t...@flax.co.uk
Subject: copyField at
Thanks Markus.
Do you know if this patch is good enough for production use? Thanks.
Andy
--- On Tue, 3/29/11, Markus Jelsma markus.jel...@openindex.io wrote:
From: Markus Jelsma markus.jel...@openindex.io
Subject: Re: copyField at search time / multi-language support
To:
I haven't tried this as an UpdateProcessor, but it relies on Tika, and its
LanguageIdentifier works well, except for short texts.
Thanks Markus.
Do you know if this patch is good enough for production use? Thanks.
Andy
--- On Tue, 3/29/11, Markus Jelsma markus.jel...@openindex.io wrote: