On 24 November 2011 15:18, Tomasz Wegrzanowski <tomasz.wegrzanow...@gmail.com> wrote:
On 22 November 2011 14:28, Jan Høydahl <jan@cominvent.com> wrote:
Why do you need spaces in the replacement?
Try pattern=\+ replacement=plus - it will cause the transformed
charstream to contain as many tokens as the original and avoid the
highlighting crash.
I tried that, it still
Hi,
I've been trying to match some phrases containing + and & (like c++,
google+, r&d etc.),
but the tokenizer gets rid of those characters before I can do anything with synonym filters.
So I tried using CharFilters like this:
<fieldType name="text" class="solr.TextField"
           positionIncrementGap="100">
On 15 November 2011 15:55, Dyer, James <james.d...@ingrambook.com> wrote:
Writing your own spellchecker to do what you propose might be difficult. At
issue is the fact that both the index-based and file-based spellcheckers
are designed to work off a Lucene index and use the document frequency
Hi,
I have a very large index, and I'm trying to add a spell checker for it.
I don't want to copy all the text in the index into an extra spell field, since that
would be prohibitively big, and the index is already close to as big as it can
reasonably be,
so I just want to extract word frequencies as I index for
Hi,
I'm having OOME (OutOfMemoryError) problems with Solr. From random browsing
I'm getting the impression that a lot of memory fixes have happened
recently in Solr and Lucene.
Could you give me a quick summary of how (un)stable the different
Lucene/Solr branches are and how much improvement I can expect?
On 12 August 2010 13:46, Koji Sekiguchi <k...@r.email.ne.jp> wrote:
(10/08/12 21:06), Tomasz Wegrzanowski wrote:
Hi,
I'm having OOME (OutOfMemoryError) problems with Solr. From random browsing
I'm getting the impression that a lot of memory fixes have happened
recently in Solr and Lucene.
Could you give me a quick