Re: using distributed search with the suggest component

2011-08-04 Thread mdz-munich
Hi Tobias,

sadly, it seems you are right.  

After a bit of investigation we also noticed that some names are missing (we
use the component for auto-completing author names). And since it is a
distributed setup ... 

But I am almost sure it worked with Solr 3.2. 



Best regards,

Sebastian 

--
View this message in context: 
http://lucene.472066.n3.nabble.com/using-distributed-search-with-the-suggest-component-tp3197651p3226082.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 3.3: Exception in thread Lucene Merge Thread #1

2011-07-26 Thread mdz-munich
It seems to work now. 


We simply added 

ulimit -v unlimited

to our Tomcat startup script. 
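For the record, a sketch of where such a line might go, assuming a stock Tomcat layout where bin/setenv.sh is sourced on startup (the file name and comment are illustrative, not our exact script):

```shell
# $CATALINA_HOME/bin/setenv.sh -- sourced by catalina.sh at startup.
# Lift the per-process virtual address space limit so MMapDirectory
# can map large index files without "Map failed" OutOfMemoryErrors.
ulimit -v unlimited
```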


@Yonik: Thanks again! 


Best regards,

Sebastian 


--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-3-3-Exception-in-thread-Lucene-Merge-Thread-1-tp3185248p3200105.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: using distributed search with the suggest component

2011-07-26 Thread mdz-munich
Hi Tobias,

try this, it works for us (Solr 3.3):

solrconfig.xml:

<searchComponent name="suggest" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">word</str>
  <lst name="spellchecker">
    <str name="name">suggestion</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.FSTLookup</str>
    <str name="field">wordCorpus</str>
    <str name="comparatorClass">score</str>
    <str name="storeDir">./suggester</str>
    <str name="buildOnCommit">false</str>
    <str name="buildOnOptimize">true</str>
    <float name="threshold">0.005</float>
  </lst>
</searchComponent>

<requestHandler name="/suggest" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="omitHeader">true</str>
    <str name="spellcheck">true</str>
    <str name="spellcheck.onlyMorePopular">true</str>
    <str name="spellcheck.collate">true</str>
    <str name="spellcheck.dictionary">suggestion</str>
    <str name="spellcheck.count">50</str>
    <str name="spellcheck.maxCollations">50</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>

Query it like this:

http://localhost:8080/solr/core.01/suggest?q=wordPrefix&shards=localhost:8080/solr/core.01,localhost:8080/solr/core.02&shards.qt=/suggest
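For clarity, here is a small Python sketch that builds the same request with properly encoded parameters (host, port, and core names are just the ones from the example above; the function name is illustrative):

```python
from urllib.parse import urlencode

def build_suggest_url(host, cores, prefix):
    """Build a distributed /suggest URL against one core, fanning out to all shards."""
    base = f"http://{host}/solr/{cores[0]}/suggest"
    params = {
        "q": prefix,
        # comma-separated list of shards the request is distributed to
        "shards": ",".join(f"{host}/solr/{c}" for c in cores),
        # which request handler each shard should use for the sub-requests
        "shards.qt": "/suggest",
    }
    return base + "?" + urlencode(params)

url = build_suggest_url("localhost:8080", ["core.01", "core.02"], "wordPrefix")
print(url)
```

Note that shards.qt must name the same handler on every shard, otherwise the sub-requests fall back to the default handler and the suggestions are not merged.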


Greetz,

Sebastian



Tobias Rübner wrote:
 
 Hi,
 
 I'm trying to use the suggest component (Solr 3.3) with multiple cores.
 I added a search component and a request handler as described in the docs
 (
 http://wiki.apache.org/solr/Suggester) to my solrconfig.
 That works fine for one core, but querying my Solr instance with the shards
 parameter does not query multiple cores.
 It just ignores the shards parameter.
 http://localhost:/solr/core1/suggest?q=sa&shards=localhost:/solr/core1,localhost:/solr/core2
 
 The documentation of the SpellCheckComponent (
 http://wiki.apache.org/solr/SpellCheckComponent#Distributed_Search_Support)
 is a bit vague on that point, because I don't know whether this feature
 really works with Solr 3.3. It is targeted for Solr 1.5, which will never
 come, but the page says it is now available.
 I also tried the shards.qt parameter, but it does not change my results.
 
 Thanks for any help,
 Tobias
 


--
View this message in context: 
http://lucene.472066.n3.nabble.com/using-distributed-search-with-the-suggest-component-tp3197651p3200143.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Solr 3.3: Exception in thread Lucene Merge Thread #1

2011-07-22 Thread mdz-munich

mdz-munich wrote:
 
 Yeah, indeed.
 
 But since the VM is equipped with plenty of RAM (22GB) and it works so far
 (with Solr 3.2) very well with this setup, I am slightly confused.
 
 Maybe we should lower the dedicated physical memory? The remaining 10GB
 are used for a second Tomcat (8GB) and the OS (SUSE). As far as I
 understand NIO (mostly un-far), this package can directly use the most
 efficient operations of the underlying platform.
 

After three days of banging my head against the wall: problem solved! 

It seems there was not enough memory left for NIO. 

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-3-3-Exception-in-thread-Lucene-Merge-Thread-1-tp3185248p3190916.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Solr 3.3: Exception in thread Lucene Merge Thread #1

2011-07-22 Thread mdz-munich
I was wrong. 

After restarting Tomcat we discovered a new sweetness: 

SEVERE: REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@3c753c75
(core.name) has a reference count of 1
22.07.2011 11:52:07 org.apache.solr.common.SolrException log
SEVERE: java.lang.RuntimeException: java.io.IOException: Map failed
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1099)
at org.apache.solr.core.SolrCore.init(SolrCore.java:585)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:463)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:316)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:207)
at
org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:130)
at
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:94)
at
org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:273)
at
org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:254)
at
org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:372)
at
org.apache.catalina.core.ApplicationFilterConfig.init(ApplicationFilterConfig.java:98)
at
org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4584)
at
org.apache.catalina.core.StandardContext$2.call(StandardContext.java:5262)
at
org.apache.catalina.core.StandardContext$2.call(StandardContext.java:5257)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
at java.util.concurrent.FutureTask.run(FutureTask.java:149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
at java.lang.Thread.run(Thread.java:736)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:782)
at
org.apache.lucene.store.MMapDirectory$MMapIndexInput.init(MMapDirectory.java:264)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:216)
at org.apache.lucene.index.FieldsReader.init(FieldsReader.java:129)
at
org.apache.lucene.index.SegmentCoreReaders.openDocStores(SegmentCoreReaders.java:244)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:116)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:92)
at 
org.apache.lucene.index.DirectoryReader.init(DirectoryReader.java:113)
at
org.apache.lucene.index.ReadOnlyDirectoryReader.init(ReadOnlyDirectoryReader.java:29)
at
org.apache.lucene.index.DirectoryReader$1.doBody(DirectoryReader.java:81)
at
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:750)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:75)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:428)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:371)
at
org.apache.solr.core.StandardIndexReaderFactory.newReader(StandardIndexReaderFactory.java:38)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1088)
... 18 more
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:779)
... 33 more

Any ideas and/or suggestions? 

Best regards & thank you,

Sebastian 

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-3-3-Exception-in-thread-Lucene-Merge-Thread-1-tp3185248p3190976.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 3.3: Exception in thread Lucene Merge Thread #1

2011-07-22 Thread mdz-munich
Hi Yonik,

thanks for your reply! 

 Are you specifically selecting MMapDirectory in solrconfig.xml? 

Nope.

We installed Oracle's Runtime from 

http://java.com/de/download/linux_manual.jsp?locale=de

java.runtime.name = Java(TM) SE Runtime Environment
sun.boot.library.path = /usr/java/jdk1.6.0_26/jre/lib/amd64
java.vm.version = 20.1-b02
shared.loader = 
java.vm.vendor = Sun Microsystems Inc.
enable.master = true
java.vendor.url = http://java.sun.com/
path.separator = :
java.vm.name = Java HotSpot(TM) 64-Bit Server VM
tomcat.util.buf.StringCache.byte.enabled = true
file.encoding.pkg = sun.io
java.util.logging.config.file =
/local/master01_tomcat7x_solr33x/conf/logging.properties
user.country = DE
sun.java.launcher = SUN_STANDARD
sun.os.patch.level = unknown
java.vm.specification.name = Java Virtual Machine Specification
user.dir = /local/master01_tomcat7x_solr33x/logs
solr.abortOnConfigurationError = true
java.runtime.version = 1.6.0_26-b03
java.awt.graphicsenv = sun.awt.X11GraphicsEnvironment
java.endorsed.dirs = /local/master01_tomcat7x_solr33x/endorsed
os.arch = amd64
java.io.tmpdir = /local/master01_tomcat7x_solr33x/temp
line.separator =

But no success with 1000 docs/batch; this was thrown during an optimize: 

22.07.2011 18:44:05 org.apache.solr.core.SolrCore execute
INFO: [core.digi20] webapp=/solr path=/update params={} status=500
QTime=87540 
22.07.2011 18:44:05 org.apache.solr.common.SolrException log
SEVERE: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
at
org.apache.lucene.store.MMapDirectory$MMapIndexInput.init(MMapDirectory.java:303)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:217)
at org.apache.lucene.index.FieldsReader.init(FieldsReader.java:129)
at
org.apache.lucene.index.SegmentCoreReaders.openDocStores(SegmentCoreReaders.java:245)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:117)
at 
org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:703)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4196)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3863)
at
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:37)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2715)
at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2525)
at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2462)
at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:410)
at
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
at
org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:154)
at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:177)
at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:67)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1368)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:240)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:164)
at
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:462)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
at
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:563)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:403)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:301)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:162)
at
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:309)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.OutOfMemoryError: Map failed

Re: Solr 3.3: Exception in thread Lucene Merge Thread #1

2011-07-22 Thread mdz-munich
It says:

core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 257869
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) 28063940
open files  (-n) 8192
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 257869
virtual memory  (kbytes, -v) 27216080
file locks  (-x) unlimited


Best regards,

Sebastian



Yonik Seeley wrote:
 
 OK, best guess is that you're going over some per-process address space
 limit.
 
 Try seeing what ulimit -a says.
 
 -Yonik
 http://www.lucidimagination.com
 
 On Fri, Jul 22, 2011 at 12:51 PM, mdz-munich
 <sebastian.lu...@bsb-muenchen.de> wrote:
 Hi Yonik,

 thanks for your reply!

 Are you specifically selecting MMapDirectory in solrconfig.xml?

 Nope.

 We installed Oracle's Runtime from

 http://java.com/de/download/linux_manual.jsp?locale=de

 java.runtime.name = Java(TM) SE Runtime Environment
 sun.boot.library.path = /usr/java/jdk1.6.0_26/jre/lib/amd64
 java.vm.version = 20.1-b02
 shared.loader =
 java.vm.vendor = Sun Microsystems Inc.
 enable.master = true
 java.vendor.url = http://java.sun.com/
 path.separator = :
 java.vm.name = Java HotSpot(TM) 64-Bit Server VM
 tomcat.util.buf.StringCache.byte.enabled = true
 file.encoding.pkg = sun.io
 java.util.logging.config.file =
 /local/master01_tomcat7x_solr33x/conf/logging.properties
 user.country = DE
 sun.java.launcher = SUN_STANDARD
 sun.os.patch.level = unknown
 java.vm.specification.name = Java Virtual Machine Specification
 user.dir = /local/master01_tomcat7x_solr33x/logs
 solr.abortOnConfigurationError = true
 java.runtime.version = 1.6.0_26-b03
 java.awt.graphicsenv = sun.awt.X11GraphicsEnvironment
 java.endorsed.dirs = /local/master01_tomcat7x_solr33x/endorsed
 os.arch = amd64
 java.io.tmpdir = /local/master01_tomcat7x_solr33x/temp
 line.separator =

 But no success with 1000 docs/batch, this was thrown during optimize:

 22.07.2011 18:44:05 org.apache.solr.core.SolrCore execute
 INFO: [core.digi20] webapp=/solr path=/update params={} status=500
 QTime=87540
 22.07.2011 18:44:05 org.apache.solr.common.SolrException log
 SEVERE: java.io.IOException: Map failed
        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
        at
 org.apache.lucene.store.MMapDirectory$MMapIndexInput.init(MMapDirectory.java:303)
        at
 org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:217)
        at
 org.apache.lucene.index.FieldsReader.init(FieldsReader.java:129)
        at
 org.apache.lucene.index.SegmentCoreReaders.openDocStores(SegmentCoreReaders.java:245)
        at
 org.apache.lucene.index.SegmentReader.get(SegmentReader.java:117)
        at
 org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:703)
        at
 org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4196)
        at
 org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3863)
        at
 org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:37)
        at
 org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2715)
        at
 org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2525)
        at
 org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2462)
        at
 org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:410)
        at
 org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
        at
 org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:154)
        at
 org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:177)
        at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
        at
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:67)
        at
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1368)
        at
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
        at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
        at
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
        at
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
        at
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:240)
        at
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:164

Re: Solr 3.3: Exception in thread Lucene Merge Thread #1

2011-07-22 Thread mdz-munich
Maybe this is important:

- The OS (openSUSE 10) is virtualized on VMware
- The storage is network-attached (NAS)

Best regards

Sebastian


--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-3-3-Exception-in-thread-Lucene-Merge-Thread-1-tp3185248p3191986.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 3.3: Exception in thread Lucene Merge Thread #1

2011-07-20 Thread mdz-munich
Update.

After adding 1626 documents without doing a commit or optimize:

Exception in thread Lucene Merge Thread #1
org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: Map
failed
at
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:517)
at
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:782)
at
org.apache.lucene.store.MMapDirectory$MMapIndexInput.init(MMapDirectory.java:264)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:216)
at org.apache.lucene.index.FieldsReader.init(FieldsReader.java:129)
at
org.apache.lucene.index.SegmentCoreReaders.openDocStores(SegmentCoreReaders.java:244)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:116)
at 
org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:702)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4192)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3859)
at
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:388)
at
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:456)
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:779)
... 10 more

Any ideas, any suggestions?

Greetz & thank you,

Sebastian



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-3-3-Exception-in-thread-Lucene-Merge-Thread-1-tp3185248p3185344.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 3.3: Exception in thread Lucene Merge Thread #1

2011-07-20 Thread mdz-munich
Here we go ...

This time we tried to use the old LogByteSizeMergePolicy and
SerialMergeScheduler:

<mergePolicy class="org.apache.lucene.index.LogByteSizeMergePolicy"/>
<mergeScheduler class="org.apache.lucene.index.SerialMergeScheduler"/>

We did this before, just to be sure ... 

~300 documents:

SEVERE: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:782)
at
org.apache.lucene.store.MMapDirectory$MMapIndexInput.init(MMapDirectory.java:264)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:216)
at org.apache.lucene.index.FieldsReader.init(FieldsReader.java:129)
at
org.apache.lucene.index.SegmentCoreReaders.openDocStores(SegmentCoreReaders.java:244)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:116)
at 
org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:702)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4192)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3859)
at
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:37)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2714)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2709)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2705)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3509)
at 
org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:1850)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1814)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1778)
at 
org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:143)
at
org.apache.solr.update.DirectUpdateHandler2.closeWriter(DirectUpdateHandler2.java:183)
at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:416)
at
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:98)
at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:67)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1368)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:240)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:164)
at
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:462)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
at
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:563)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:403)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:301)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:162)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:140)
at
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:309)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
at java.lang.Thread.run(Thread.java:736)
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:779)
... 44 more

20.07.2011 18:07:30 org.apache.solr.core.SolrCore execute
INFO: [core.digi20] webapp=/solr path=/update params={} status=500
QTime=12302 
20.07.2011 18:07:30 org.apache.solr.common.SolrException log
SEVERE: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:782)
at
org.apache.lucene.store.MMapDirectory$MMapIndexInput.init(MMapDirectory.java:264)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:216)
at 

RE: Solr 3.3: Exception in thread Lucene Merge Thread #1

2011-07-20 Thread mdz-munich
Yeah, indeed.

But since the VM is equipped with plenty of RAM (22GB) and it works so far
(with Solr 3.2) very well with this setup, I am slightly confused.

Maybe we should lower the dedicated physical memory? The remaining 10GB are
used for a second Tomcat (8GB) and the OS (SUSE). As far as I understand NIO
(mostly un-far), this package can directly use the most efficient
operations of the underlying platform. 






 

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-3-3-Exception-in-thread-Lucene-Merge-Thread-1-tp3185248p3186986.html
Sent from the Solr - User mailing list archive at Nabble.com.


Solr 3.3: SEVERE: java.io.IOException: seek past EOF

2011-07-19 Thread mdz-munich
Hi Developers and Users,

a serious problem occurred: 

19.07.2011 10:50:32 org.apache.solr.common.SolrException log
SEVERE: java.io.IOException: seek past EOF
at
org.apache.lucene.store.MMapDirectory$MMapIndexInput.seek(MMapDirectory.java:343)
at org.apache.lucene.index.FieldsReader.seekIndex(FieldsReader.java:226)
at org.apache.lucene.index.FieldsReader.doc(FieldsReader.java:242)
at 
org.apache.lucene.index.SegmentReader.document(SegmentReader.java:471)
at
org.apache.lucene.index.DirectoryReader.document(DirectoryReader.java:564)
at
org.apache.solr.search.SolrIndexReader.document(SolrIndexReader.java:260)
at 
org.apache.solr.search.SolrIndexSearcher.doc(SolrIndexSearcher.java:440)
at
org.apache.solr.util.SolrPluginUtils.optimizePreFetchDocs(SolrPluginUtils.java:270)
at
org.apache.solr.handler.component.QueryComponent.doPrefetch(QueryComponent.java:358)
at
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:265)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:202)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1368)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:240)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:164)
at
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:462)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
at
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:562)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at
org.apache.catalina.valves.RequestFilterValve.process(RequestFilterValve.java:210)
at
org.apache.catalina.valves.RemoteAddrValve.invoke(RemoteAddrValve.java:85)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:395)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:250)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:188)
at
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:302)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
at java.lang.Thread.run(Thread.java:736)

This is a fresh index with Solr 3.3. It only occurs with some words (in this
case it was "Graf", no idea why). Query type (dismax, standard, edismax),
highlighting, and faceting have no effect; only the search term matters. And it
seems to affect only OCR fields, which are usually larger than metadata fields.

Any ideas? 

Greetings & best regards,

Sebastian 



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-3-3-SEVERE-java-io-IOException-seek-past-EOF-tp3181869p3181869.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 3.3: SEVERE: java.io.IOException: seek past EOF

2011-07-19 Thread mdz-munich
Oops, false alarm.

A CustomSimilarity combined with a very small set of documents caused the
problem.

Greetings,

Sebastian 



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-3-3-SEVERE-java-io-IOException-seek-past-EOF-tp3181869p3181943.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: wildcards and German umlauts

2011-05-29 Thread mdz-munich
Hi,

 ... if I type the complete word (such as "übersicht"),
 but there are no hits if I use wildcards (such as "über*").
 Searching with wildcards and without umlauts works as well.

I can confirm that. 

Greetz,

Sebastian

--
View this message in context: 
http://lucene.472066.n3.nabble.com/wildcards-and-German-umlauts-tp499972p2998425.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: wildcards and German umlauts

2011-05-29 Thread mdz-munich
Ah, BTW,

since the problem seems to be a query-parser issue, a simple workaround is to
replace all umlauts with ASCII characters (ä = ae, ö = oe, ü = ue, for example)
before sending the query to Solr, and to use a solr.MappingCharFilterFactory
with the same replacements (ä = ae, ö = oe, ü = ue) while indexing. 

It's inflexible in some cases, but it works so far. 
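A minimal Python sketch of the client-side half of that workaround (the mapping table mirrors what the MappingCharFilterFactory side would have to apply at index time; the function name is illustrative):

```python
# Client-side umlaut folding applied before the query string is sent
# to Solr. The index side must use the SAME mappings (e.g. via
# solr.MappingCharFilterFactory), or folded queries won't match terms.
UMLAUT_MAP = {
    "ä": "ae", "ö": "oe", "ü": "ue",
    "Ä": "Ae", "Ö": "Oe", "Ü": "Ue",
    "ß": "ss",
}

def fold_umlauts(query: str) -> str:
    """Replace German umlauts with their ASCII digraphs."""
    for src, dst in UMLAUT_MAP.items():
        query = query.replace(src, dst)
    return query

print(fold_umlauts("über*"))  # ueber*
```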

Greetz,

Sebastian 

--
View this message in context: 
http://lucene.472066.n3.nabble.com/wildcards-and-German-umlauts-tp499972p2998449.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: wildcards and German umlauts

2011-05-29 Thread mdz-munich
I don't get you. Did I write anything about an analyzer? Actually not. 

--
View this message in context: 
http://lucene.472066.n3.nabble.com/wildcards-and-German-umlauts-tp499972p2999074.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: wildcards and German umlauts

2011-05-29 Thread mdz-munich
Ah, NOW I get it. It's not a bug, it's a feature. 

But that would mean that every character manipulation (e.g.
char mapping/replacement, the Porter stemmer in some cases ...) would cause a
wildcard query to fail. That's too bad.

But why? What's the problem with passing the prefix through the
analyzer/filter chain?  

Greetz,

Sebastian

--
View this message in context: 
http://lucene.472066.n3.nabble.com/wildcards-and-German-umlauts-tp499972p2999237.html
Sent from the Solr - User mailing list archive at Nabble.com.


TermsCompoment + Dist. Search + Large Index + HEAP SPACE

2011-04-26 Thread mdz-munich
Hi!

We've got one index split into 4 shards of ~70,000 records each, containing
large full-text data from (very dirty) OCR. Thus we have a lot of unique terms. 
Now we are trying to obtain the 400 most common words for the CommonGramsFilter
via the TermsComponent, but the request always runs out of memory. The VM is
equipped with 32 GB of RAM, with 16-26 GB allocated to the JVM. 

Any ideas how to get the most common terms without increasing the VM's memory?
Thanks & best regards,

Sebastian 

--
View this message in context: 
http://lucene.472066.n3.nabble.com/TermsCompoment-Dist-Search-Large-Index-HEAP-SPACE-tp2865609p2865609.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: TermsCompoment + Dist. Search + Large Index + HEAP SPACE

2011-04-26 Thread mdz-munich
Thanks for your suggestion. The problem seems to be the combination of shards
and the TermsComponent. Now we simply request the terms shard by shard, without
the shards and shards.qt params, and merge the results via XSLT.
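The shard-by-shard merge they describe could also be done in a few lines of Python instead of XSLT; a sketch, assuming each shard's TermsComponent response has already been parsed into a flat term-to-count dict (the function name and sample data are illustrative):

```python
from collections import Counter

def merge_shard_terms(per_shard_results, top_n=400):
    """Sum per-shard term counts and return the overall most common terms.

    per_shard_results: one {term: count} dict per shard, e.g. parsed
    from each shard's TermsComponent response.
    """
    totals = Counter()
    for shard in per_shard_results:
        totals.update(shard)  # adds counts term by term
    return totals.most_common(top_n)

# Two hypothetical shard responses:
shard1 = {"the": 100, "of": 80, "ocr": 5}
shard2 = {"the": 90, "and": 70}
print(merge_shard_terms([shard1, shard2], top_n=3))
```

Summing document frequencies across shards is only exact when the shards hold disjoint documents, which is the usual sharding setup.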

Sebastian 



 



--
View this message in context: 
http://lucene.472066.n3.nabble.com/TermsCompoment-Dist-Search-Large-Index-HEAP-SPACE-tp2865609p2866499.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Query-Expansion, copyFields, flexibility and size of Index (Solr-3.1-SNAPSHOT)

2010-12-14 Thread mdz-munich

Okay, I'll start guessing:

- Do we have to write a customized QueryParserPlugin?
- At which point does the RequestHandler/QueryParser/whatever decide which
query analyzer to use?

10% for every copied field is a lot for us; we're facing terabytes of
digitized book data. So we want to keep the index simple, small, and flexible,
and just add IR functionality at query time.   

Greetings & thank you,

Sebastian
-- 
View this message in context: 
http://lucene.472066.n3.nabble.com/Query-Expansion-copyFields-flexibility-and-size-of-Index-Solr-3-1-SNAPSHOT-tp2078573p2085018.html
Sent from the Solr - User mailing list archive at Nabble.com.