It may have been a permissions problem, or it started working after the master
had done another fresh scheduled full-import and jumped an index version.
Timestamp issue?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Problem-with-replication-tp2294313p3704559.html
Sent from the Solr - User mailing list archive at Nabble.com.
Actually, I get:
No files to download for index generation:
this is after deleting the data directory on the slave.
--
This seems like a real shame. As soon as you search across more than one
field, the mm setting becomes nearly useless.
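For context, the behaviour being lamented shows up with a request like the following (field names are illustrative): with dismax, each term only needs to match in *some* `qf` field, so `mm` constrains how many terms must match overall, not how they distribute across fields.

```text
http://localhost:8983/solr/select?defType=dismax&q=solar+powered+flashlight&qf=title+body&mm=2
```

With `mm=2`, a document matching "solar" in title and "powered" in body satisfies the clause count, which is rarely what you want from a per-field minimum match.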
--
For future reference, I had this problem, and it was the debug statements in
Commons HttpClient that were printing all the binary data to the log; my
console appender was set to INFO, so I wasn't seeing them. Setting Commons
HttpClient to the INFO level fixed my speed issue (two orders of magnitude faster).
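In case it helps anyone else, a sketch of the equivalent log4j.properties change (logger names as documented by Commons HttpClient; adjust to your logging setup):

```properties
# Silence per-byte debug/wire logging from Commons HttpClient
log4j.logger.org.apache.commons.httpclient=INFO
log4j.logger.httpclient.wire=INFO
```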
Did this ever progress? Shall we make a jira?
--
If your unique id settings are correct, there won't be any redundancy, as
Solr will not keep two copies with the same unique id.
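For reference, this is the schema.xml declaration that drives the overwrite-on-same-id behaviour (the field name "id" is the common convention, not a requirement):

```xml
<!-- schema.xml: adds with the same value in this field replace the older doc -->
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>
```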
--
That's exactly what I need. I'm using phonetic tokens on ngrams, and there
are lots of dupes. Can you submit it as a patch? What's the easiest way to
get this into my Solr?
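To illustrate the semantics I'm after, here is a sketch of just the dedup logic, not the actual token filter: the stock RemoveDuplicatesTokenFilter only drops duplicate terms at the same position, whereas with phonetic tokens on ngrams you want the first occurrence kept regardless of position.

```python
def remove_duplicate_tokens(tokens):
    """Keep the first occurrence of each token, preserving order."""
    seen = set()
    out = []
    for tok in tokens:
        if tok not in seen:
            seen.add(tok)
            out.append(tok)
    return out

# Phonetic codes emitted from overlapping ngrams collapse to one each:
print(remove_duplicate_tokens(["SM", "TR", "SM", "FLKS", "TR"]))
```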
--
I'm also very interested in this, for my regex augmenter. If we could get an
augmenter to add highlighting results directly to the doc, like the explain
augmenter does, then I could definitely write up that regex augmenter.
http://lucene.472066.n3.nabble.com/Regex-DocTransformer-td3627314.html
Been looking at the code a bit, and it seems it's not disabled per se; it's
just not there. The normal searcher has a built-in result-set cache check
before and after executing, whereas Grouping#execute doesn't have any
concept of a result cache.
Can't they just share the same cache and implementation?
Why is this? And what happened to
http://lucene.472066.n3.nabble.com/Re-Field-Collapsing-disable-cache-td481783.html
?
I don't see why basic caching of request -> result shouldn't work the same way.
I know I could put a layer on top, but I'd like to use a built-in cache if
possible.
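For reference, the cache the normal searcher consults is the queryResultCache configured in solrconfig.xml; a typical entry looks like this (sizes illustrative):

```xml
<queryResultCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="0"/>
```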