Hi Julien,
just one additional thing,
if you have developed custom plugins/filters, you will have to adapt and
recompile them for the Solr 5 API.
Ludovic.
-
Jouve
France.
--
View this message in context:
http://lucene.472066.n3.nabble.com/collection-API-timeout-tp4238150p4238511.html
Sent from
Hi Alessandro,
I think you are facing this issue:
https://issues.apache.org/jira/browse/SOLR-6246
Ludovic.
Andrei,
Pivot faceting is the Solr implementation for Hierarchical Facets. I don't
think this is what you need.
Could you please describe the original use case? Just to rule out an XY
problem.
I don't know if this is acceptable for you in terms of performance, but you
could try to solve your
I don't know if this is possible for you but:
could you pre-process the group and create nested documents with
pre-computed document counts ?
You could try to denormalize even more :
Create two collections:
- one with user groups in mind
- the second collection with user and groupSignature groups in mind.
For instance, with user groups in mind :
{
  "id": "svsKQSFfzhu-SznsU8FUII",
  "user": "admin",
  "furniture_count": 2,
To me, it seems you are looking for faceting with the parameter
facet.mincount:
facet=true&facet.field=groupField&facet.mincount=yourMinimumValue
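As a sketch, those parameters can be assembled with any HTTP client; here is a minimal Python illustration (the field name "groupField" and the mincount value are placeholders, not names from your schema):

```python
# Build the suggested facet query string; "groupField" and the
# mincount value are placeholders for your own field and threshold.
from urllib.parse import urlencode

params = {
    "q": "*:*",
    "facet": "true",
    "facet.field": "groupField",
    "facet.mincount": 2,
}
print(urlencode(params))  # append this to your /select request
```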
Ludovic.
Dear all,
I would like to get the new terms of fields since last update (once a week).
If I retrieve some terms which were already present, it's not a problem (but
terms which did not exist before must be retrieved).
Is there an easy way to do that ?
I'm currently investigating the
The Apache Solr community is sooo great !
Interesting problem with 3 interesting answers in less than 2 hours !
Thank you all, really.
Erik,
I'm already saving the billion terms each week. It's hard to diff a
billion terms.
I'm already rebuilding the whole dictionaries each week in a
I think payloads are per-posting information, which means that it's not
trivial (to me at least ;)) to get the terms for a given payload. And it's
quite intensive to scan all postings.
I will check for the bloom filter idea.
Thx
Ludovic.
Hi Stephon,
nothing obvious to me. But it is early in the morning for a Saturday :D
Did you comment out the old-style replication configuration since your first
message ?
Do you always see the same behavior ?
Ludovic.
Hi Stephon,
do you see ZooKeeper timeout errors in your log files?
Could you please give us some additional information, such as:
How often is your index updated? Which version of Solr do you use? What is
the size of your index?
Make sure you have this handler in your solr configuration file :
Hello Suchi,
I'm using this Lucene function with Solr 4.6.1 in a specific Update
Processor and it's working well.
How do you test the update ?
I'm using a ValueSourceRangeFilter with a LongFieldSource parameter.
Ludovic.
Hi Giovanni,
we had this problem as well.
The cause was that the different nodes had slightly different idf values.
We solved this problem by doing an optimize operation, which really removes
the deleted documents.
Ludovic.
How many different values do you have in your fields and do you know them ?
Faceting by query is not an option for you ?
Ludovic.
That's excellent Mikhail !
Thanks so much.
I have to use it in my custom query parser now.
Ludovic.
Thx Alex for your answer.
1) This could be tricky, because the application users write very complex
combined queries with main document fields and event fields too. A custom
parser does the abstraction. I think it could be very tricky to extract the
event part of a complex query in order to filter
I've just finished a first implementation of a CrossFieldSpanNearQuery and
it just works perfectly :D
I can now play with position increments and slops to get exact results
within two multi valued fields.
And for the 1st proposal, my user queries can be bigger than 10k with lots
of different
Dear all,
let's say you have two multivalued fields with two different complex
analyzers in a quite complex schema.
I would like to match specific combinations of values in these fields.
For instance :
Field1 : Value1, Value2
Field2 : Value3, Value4
I would like to match this document with a
Alexandre Rafalovitch wrote
Are you saying you want to match 1st value with 1st value (like positional
constraints?).
That's exactly what I would like to do. :)
Thx Alex.
We have main documents in the index. (more than 100 complex fields).
Each document can have events attached.
An event contains 4 fields with 3 different analyzers.
We need more than just filtering on them (highlighting on documents and
events at the same time for instance).
That
Hi Moshe,
If I understand correctly your needs, I think you want to use the
CollapsingQParser post filter:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=40509582
I think that, basically, adding this filter to your query should solve your
problem:
fq={!collapse
Hi,
this could be a network issue, like dns resolving.
To verify this you could update your host file with the ip address of your
nodes.
Ludovic.
Hi Erick,
I do not pass the LBHttpSolrServer to the c'tor of CloudSolrServer.
thx,
Ludovic.
Thank you Mark.
The issue : https://issues.apache.org/jira/browse/SOLR-6086
Ludovic.
In other words, is there a way for the LBHttpSolrServer to ignore replicas
which are currently cold ?
Ludovic.
Dear All,
we just finished the migration of a cluster from Solr 4.3.1 to Solr 4.6.1.
With Solr 4.3.1, a node was not considered active before the end of the
warming process.
Now, with Solr 4.6.1, a replica is considered active during the warming
process.
This means that if you restart a
Dear all,
I would like to group my query results on two different fields (not at the
same time).
I also would like to get the exact group count.
And I'm working with a sharded index.
I know that to get the exact group count, all documents from a group must be
indexed in a unique shard.
Now, is
Thanks a lot for your answer.
Is there a web page, on the wiki for instance, where we could find JVM
settings or recommendations that we should use for Solr with particular
index configurations?
Ludovic.
Dear all,
we are currently using Solr 4.3.1 in production (with SolrCloud).
We are encountering much the same problem described in this older post:
http://lucene.472066.n3.nabble.com/SolrCloud-CloudSolrServer-Zookeeper-disconnects-and-re-connects-with-heavy-memory-usage-consumption-td4026421.html
You have to create your own parser which extends the current query parser.
You have to override the newFuzzyQuery protected function to call the
FuzzyQuery constructor with a configured maximum expansion value or
something like that.
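As a rough illustration of what such a maximum expansion value controls (a toy sketch in Python, not Lucene's FuzzyQuery code; the vocabulary is invented):

```python
# Toy fuzzy-term expansion: collect vocabulary terms within max_edits,
# then cap the rewritten term set at max_expansions entries.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def expand_fuzzy(term, vocabulary, max_edits=2, max_expansions=50):
    matches = [t for t in vocabulary if levenshtein(term, t) <= max_edits]
    matches.sort(key=lambda t: levenshtein(term, t))  # closest terms first
    return matches[:max_expansions]

vocab = ["roam", "foam", "roams", "roast", "coat"]
print(expand_fuzzy("roam", vocab, max_edits=1, max_expansions=2))
```

Raising the cap lets the fuzzy term rewrite into more vocabulary terms, at the cost of a bigger rewritten query.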
Ludovic.
Hi Lou,
The Solr query Parser creates fuzzy queries with a maximum of 50 term
expansions.
This is the default value and this is hard coded in the FuzzyQuery class.
I would say this is your problem.
I think you could create a new Query Parser which could create the fuzzy
query with a bigger
Thanks Jack.
I finally managed to replicate the external files with my own replication
handler.
But now, there's an issue with Solr in the update log replay process.
The default processor chain is not used, which means that my processor which
manages the external files is not used...
I have
Thanks Jack for your answers.
Are all files in the index directory replicated? I thought that only the
Lucene index files were replicated.
If you are right, that's great, because I could create an ExternalFileField
type which could get its input file from the index directory and not from
the
Oh, I see :) I did not catch well what you said.
Well, my index could contain 80 million elements, and a big portion of
them could be hidden.
As you already said, I don't think that ZooKeeper is the right place to
store these files, they are too big.
Thank you again, that gave me some ideas I
Ok, I have created a processor which manages to update the external file.
Basically,
until a commit request, the hidden document IDs are stored in a Set and when
a commit is requested, a new file is created by copying the last one, then
the additional IDs are appended to the external file.
Now
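The copy-then-append step described above might be sketched like this (a simplified stand-in for the actual update processor; the file names and the id=value line format are assumptions):

```python
# On commit: copy the previous external file, then append the hidden
# document IDs collected since the last commit.
import shutil
import tempfile
from pathlib import Path

def write_new_external_file(previous, new, hidden_ids, value="1"):
    shutil.copyfile(previous, new)          # start from the last file
    with open(new, "a") as f:
        for doc_id in sorted(hidden_ids):   # append the new IDs
            f.write(f"{doc_id}={value}\n")

# Usage with temporary files:
tmp = Path(tempfile.mkdtemp())
old_file = tmp / "external_hidden.v1"
old_file.write_text("doc1=1\n")
new_file = tmp / "external_hidden.v2"
write_new_external_file(old_file, new_file, {"doc3", "doc2"})
print(new_file.read_text())
```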
Hi Jack,
the external files involved in External File Fields are not stored in the
configuration directory and cannot be replicated this way, furthermore in
Solr Cloud, additional files are not replicated anymore.
There is something like that in the code:
/ if (confFileNameAlias.size() 1 ||
Hi Kamaci,
why don't you use the Nutch indexing functionality ?
The Nutch Crawling script already contains the Solr indexing step.
http://wiki.apache.org/nutch/bin/nutch%20solrindex
Ludovic.
Dear all,
I would like to mark documents as hidden.
I could add a 'hidden' field and set its value to true, but then the whole
document would be reindexed.
And External file fields are not searchable.
I could store the document keys in an external database and filter the
result with these ids. But if
Excellent Erik ! It works perfectly.
Normal filter queries are cached. Is it the same for frange filter queries
like this one ? :
fq={!frange l=0 u=10}removed_revision
Thanks to both for your answers.
Ludovic.
One more question, is there already a way to update the external file (add
values) in Solr ?
Ludovic.
Ok, thanks Erik.
Do you see any problem in modifying the Update handler in order to append
some values to this file ?
Ludovic
I could create an UpdateRequestProcessorFactory that could update this file;
does that seem better?
Hi Michael,
it was late yesterday when I wrote my last message.
And it did not help that much.
Feel free to contact me directly. I cannot share the code I wrote, for legal
reasons.
But I can help you :)
Ludovic.
You could add a default value to your field via the schema:
<field ... default="mynuvalue"/>
and then your query could be:
-body:mynuvalue
but I prefer Chris's solution, which is what I usually do.
Ludovic.
Hi Bruno,
don't forget the OS disk cache.
On linux you can clear it with this tiny script :
#!/bin/bash
sync; echo 3 > /proc/sys/vm/drop_caches
Ludovic.
It is possible to use the expungeDeletes option in the commit, that could
solve your problem.
http://wiki.apache.org/solr/UpdateXmlMessages#Optional_attributes_for_.22commit.22
Sadly, there is currently a bug with the TieredMergePolicy :
https://issues.apache.org/jira/browse/SOLR-2725
Hi Suajtha,
does each webapp have its own Solr home?
Ludovic.
Hi Bruno,
will you use facets and result sorting ?
What is the update frequency/volume ?
This could impact the amount of memory/server count.
Ludovic.
Hi David,
what do you want to do with the 'commonField' option ?
Is it possible to have the part of the schema for the author field please ?
Is the author field stored ?
Ludovic.
ok, not that easy :)
I did not test it myself but it seems that you could use an XSL
preprocessing with the 'xsl' option in your XPathEntityProcessor :
http://wiki.apache.org/solr/DataImportHandler#Configuration_in_data-config.xml-1
You could transform the author part as you wish and then
Hi David,
I think you should add this option: flatten=true
and then could you try to use this XPath:
/MedlineCitationSet/MedlineCitation/AuthorList/Author
see here for the description :
http://wiki.apache.org/solr/DataImportHandler#Configuration_in_data-config.xml-1
I don't think that
Hi,
I was looking for something similar.
I tried this patch :
https://issues.apache.org/jira/browse/SOLR-2112
it's working quite well (I've back-ported the code in Solr 3.5.0...).
Is it really different from what you are trying to achieve ?
Ludovic.
Hello,
you can get the source code from the svn repository too :
http://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_3_5/
Ludovic.
I think this is what you are looking for :
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PositionFilterFactory
Ludovic.
Hi,
you have to use the 'expungeDeletes' additional parameter:
http://wiki.apache.org/solr/UpdateXmlMessages
and depending on the version of Solr you are using, you perhaps have to use
a merge policy like the LogByteSizeMergePolicy.
See : https://issues.apache.org/jira/browse/SOLR-2725
Hi,
I'm not sure I see what you mean, but perhaps synonyms could solve your
problem?
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory
Ludovic.
Hi Jason,
you could add this filter to the end of your analyzer :
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PositionFilterFactory
That should solve your problem.
Ludovic.
Hi Remy,
could you paste the analyzer part of the field merchant_name_t please ?
And when you say it should return more than that, could you explain why
with examples ?
If I'm not wrong, the field collapsing function is based on indexed values,
so if your analyzer is complex (not string),
Ok, thanks for the schema.
the merchant Cult Beauty Ltd should be indexed like this:
cult
beauty
ltd
I think some other merchants contain at least one of these words.
you should try to group with a special field used for field collapsing:
<dynamicField name="*_t_group" type="string"/>
I just checked, you can disable the storing parameter and use this field:
<dynamicField name="*_t_group" type="string" indexed="true"
stored="false"/>
Ludovic.
excellent !
and yes, the weather is really nice in France :)
And to complete Erick's answer:
in this search,
customer_name:Joh*
* is not considered a wildcard; it is an exact search.
Another thing (it is not your problem...):
words with wildcards are not analyzed,
so, if your analyzer contains a lowercase filter,
in the index, these words
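A toy illustration of that last point, using plain glob matching rather than Lucene's query processing: a wildcard term that keeps its uppercase letters cannot match tokens that a lowercase filter produced at index time.

```python
# Wildcard terms are not analyzed, so "Joh*" keeps its capital J
# while the indexed tokens were lowercased.
from fnmatch import fnmatchcase

index_tokens = ["john", "johnson"]  # lowercased at index time

raw_hits = [t for t in index_tokens if fnmatchcase(t, "Joh*")]
lowered_hits = [t for t in index_tokens if fnmatchcase(t, "joh*")]
print(raw_hits)      # []
print(lowered_hits)  # ['john', 'johnson']
```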
Hi,
it is possible to create a new similarity class which returns the term
occurrences.
You have to disable idf (just return 1), normalization, and so on.
then you have to declare it in your schema:
http://wiki.apache.org/solr/SchemaXml#Similarity
http://wiki.apache.org/solr/SolrPlugins#Similarity
instanceDir=.
does that fit your needs ?
Ludovic.
We had this type of error too.
Now we are using the StreamingUpdateSolrServer with a quite big queue and
2-4 threads depending on data type:
http://lucene.apache.org/solr/api/org/apache/solr/client/solrj/impl/StreamingUpdateSolrServer.html
And we do not do any intermediate commit. We send only
Hi,
if you are using the schema from the Solr example, the fields with the type
string are not analyzed.
You should find a text field type or you can create one like shown in this
example:
http://svn.apache.org/viewvc/lucene/dev/trunk/solr/example/solr/conf/schema.xml?view=markup
take a look to
Hi Gabriele,
I'm not sure I understand your problem, but perhaps the TermVectorComponent
fits your needs:
http://wiki.apache.org/solr/TermVectorComponent
http://wiki.apache.org/solr/TermVectorComponentExampleEnabled
Ludovic.
You could add this filter after the NGram filter to prevent the phrase query
creation :
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PositionFilterFactory
Ludovic.
If you are using Tomcat, perhaps you could use a Valve to protect a given
context of your application:
<Context path="/solr/dataimport"
         docBase="${catalina.home}/server/solr/dataimport"
         privileged="true">
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
Hi Bryan,
how do you index your html files ? I mean do you create fields for different
parts of your document (for different stop words lists, stemming, etc) ?
with DIH or solrj or something else ?
iorixxx, could you please explain your solution a bit more, because I don't
see how your solution
I am not (yet) a Tika user; perhaps iorixxx's solution is good for
you.
We will share the highlighter module and 2 other developments soon. (I have
to see how to do that.)
Ludovic.
Hi Thomas,
I don't use it myself (but I will soon), so I may be wrong, but did you try
to use the ComplexPhraseQueryParser :
ComplexPhraseQueryParser: "QueryParser which permits complex phrase query
syntax eg (john jon jonathan~) peters*".
It seems that you could do such type of queries
Hi Kurt,
I think this is a bit more tricky than that.
For example, if a user searches for "oranges", the stemmer may return
"orang", which is not an existing word.
So getting stemmed words might not work for your highlighting purpose.
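A deliberately naive stemmer illustrates the point (this is a toy suffix stripper, not the stemmer Solr uses):

```python
# Toy suffix stripping: enough to show that stemmed tokens are often
# not surface words, which makes them unusable for highlighting.
def toy_stem(word):
    for suffix in ("es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

print(toy_stem("oranges"))  # 'orang' -- not an English word
```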
Ludovic.
Hi all,
I need to highlight searched words in the original text (xml) of a document.
So I'm trying to develop a new Highlighter which uses the defaultHighlighter
to highlight some fields and then retrieve the original text file/document
(external or internal storage) and put the highlighted
To clarify a bit more, I took a look at this function:
public TermPositions termPositions() throws IOException
Description copied from class IndexReader: "Returns an unpositioned
TermPositions enumerator."
But it returns an unpositioned
The original document is not indexed. Currently it is just stored and could
be stored in an filesystem or a database in the future.
The different parts of a document are indexed in multiple different fields
with some different analyzers (stemming, multiple languages, regex,...).
So, I don't
Hi Darren,
I think that if I had to get the parsing result, I would create my own
QueryComponent which would create the parser in the 'prepare' function (you
can take a look at the actual QueryComponent class) and instead of resolving
the query in the 'process' function, I would just parse the
Darren,
you can even take a look at the DebugComponent, which returns the parsed
query in string form.
It uses the QueryParsing class to parse the query, you could perhaps do the
same.
Ludovic.
Hi Alexei,
We have the same issue/behavior.
The highlighting component fragments the fields to highlight and chooses the
best ones to be returned and highlighted.
You can return all fragments with the maximum size for each one, but it will
never return fragments with scores equal to 0, I mean without
the key phrase was this one :) :
"A sloppy phrase query specifies a maximum slop, or the number of
positions tokens need to be moved to get a match."
so you could search for "foo bar"~101 in your example.
Ludovic.
I would prefer to put a higher slop number instead of a boolean clause : 200
perhaps in your specific case.
Ludovic.
The analyzer of the field you are using could impact the Phrase Query Slop.
Could you copy/paste the part of the schema ?
Ludovic.
Hi,
see here for an explanation :
http://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_search_for_one_term_near_another_term_.28say.2C_.22batman.22_and_.22movie.22.29
Ludovic.
Hi,
synonyms could be an option, but could you describe your problem a bit more,
please (current analyzer, documents, Solr version)?
Ludovic.
2011/5/13 roySolr [via Lucene]
Hello,
My index looks like this:
Soccer club
Football club
etc.
Perhaps the query elevation component is what you are looking for :
http://wiki.apache.org/solr/QueryElevationComponent
Ludovic.
Very nice Steve ! Thanks again. (I'm building from svn so that's perfect for
me)
Is this file referenced somewhere in the wiki ?
Ludovic.
Steve,
I'm not used to updating wikis, but I've added a small section after the
IntelliJ part here:
http://wiki.apache.org/solr/HowToContribute
Ludovic.
Thanks Steve, this will be much simpler next time :)
Is it documented somewhere? If not, perhaps we could add something in this
page, for example:
http://wiki.apache.org/solr/FrontPage#Solr_Development
or here :
http://wiki.apache.org/solr/NightlyBuilds
Ludovic.
2011/5/5 steve_rowe [via
In the ant script there is a target to generate the Maven artifacts.
After that, you will be able to open the project as a standard Maven
project.
Ludovic.
2011/5/4 Gabriele Kahlout [via Lucene]
Hello,
I'm trying to modify Solr and I think
Oops,
sorry, this was not the target I used (that one should work too, but...);
the one I used is get-maven-poms. That will just create the pom files and
copy them to their right target locations.
I'm using NetBeans with the Automatic Projects plugin to do
everything inside the IDE.
Which
ok, this is part of my build.xml (from the svn repository) :
<property name="version" value="3.1-SNAPSHOT"/>
<target name="get-maven-poms"
        description="Copy Maven POMs from dev-tools/maven/ to their target
locations">
  <copy todir="." overwrite="true">
    <fileset
did you update this part in your solrconfig.xml ?
<luceneMatchVersion>LUCENE_31</luceneMatchVersion>
Ludovic.
I do not build this part, I don't need it.
The lib was present in the branch_3x branch, but is not there anymore.
You can download it here :
http://search.lucidimagination.com/search/out?u=http%3A%2F%2Fdownloads.osafoundation.org%2Fdb%2Fdb-4.7.25.jar
You have to install it locally.
Ludovic.
I opened and built the projects I needed in NetBeans, i.e. Solr Core, Solr
Search Server, SolrJ, Lucene Core, etc.
But with the given library you should be able to go to the next step.
Ludovic.
Hi,
I think you have to use stemming on both sides (index and query) if you
really want to use stemming.
Ludovic
2011/5/3 Dmitry Kan [via Lucene]
Dear list,
In the SOLR schema, on the index side, we use no stemming, to favor
wildcard search.
Dmitry,
I don't know of any way to keep both stemming and consistent wildcard
support in the same field.
To me, you have to create 2 different fields.
Ludovic.
2011/5/3 Dmitry Kan [via Lucene]
Hi Ludovic,
That's an option we had before we decided
Do you want to search the data from the tables together or separately?
Is there a join between the two tables ?
Ludovic.
2011/5/2 Greg Georges [via Lucene]
Hello all,
I have a system where I have a dataimporthandler defined for one table
Ok, so it seems you should create a new index and core, as you said.
see here for the management :
http://wiki.apache.org/solr/CoreAdmin
But it seems that is a problem for you. Is it ?
Ludovic.
2011/5/2 Greg Georges [via Lucene]
No, the data
you could use EdgeNGramFilterFactory :
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.EdgeNGramFilterFactory
And you should mix front and back n-gram processing in your analyzer:
<filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15"
side="front"/>
<filter
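The grams such a filter emits can be modeled like this (a simplified sketch of edge n-gram generation, not the filter's actual code):

```python
# Edge n-grams anchored at the front or the back of a term.
def edge_ngrams(term, min_gram=2, max_gram=15, side="front"):
    grams = []
    for size in range(min_gram, min(max_gram, len(term)) + 1):
        grams.append(term[:size] if side == "front" else term[-size:])
    return grams

print(edge_ngrams("claw", side="front"))  # ['cl', 'cla', 'claw']
print(edge_ngrams("claw", side="back"))   # ['aw', 'law', 'claw']
```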
could you try to escape the white spaces like this:
Hind\ claw
Ludovic.