Re: DIH Blob data
I had a similar problem and did not find any way to filter on fields inside a JSON blob, at least not with DIH. -- View this message in context: http://lucene.472066.n3.nabble.com/DIH-Blob-data-tp4168896p4168925.html Sent from the Solr - User mailing list archive at Nabble.com.
RE: Using CachedSqlEntityProcessor with delta imports in DIH
Hey, are you sending cacheImpl in your request, or where do you define it? I use cacheImpl=${cache.impl}; if I leave this value blank, the import fails =(
RE: DIH - cacheImpl=SortedMapBackedCache - empty rows from sub entity
Thanks, that is a little better, but now I get only one row from entity en2 in my index. It seems the lookup does not work for me =((
RE: DIH - cacheImpl=SortedMapBackedCache - empty rows from sub entity
I don't know why, but it only works if I don't use cacheKey/cacheLookup. If I use where instead, as in Example 2 on http://wiki.apache.org/solr/DataImportHandler#CachedSqlEntityProcessor (where="id=en1.id"), it works fine ... strange.
DIH - cacheImpl=SortedMapBackedCache - empty rows from sub entity
Hello, I am fighting with cacheImpl=SortedMapBackedCache. I want to refactor my ugly entities, so I am trying out sub-entities with caching. My problem is that the cached subquery does not return any values from its select. Why? This is my entity:

    <entity name="en1" pk="id" transformer="DateFormatTransformer"
            query="SELECT id, product FROM table WHERE product = 'abc'">
      <entity name="en2" pk="id" transformer="DateFormatTransformer"
              cacheImpl="SortedMapBackedCache"
              query="SELECT id, code FROM table2"
              where="id = '${en1.id}'"/>
    </entity>

This is very fast and clean and nice... but it does not work: nothing from table2 reaches my index =( BUT if I remove the cacheImpl=SortedMapBackedCache attribute, all the data is present, though then every row is selected one by one. I had hoped this construct would replace the ugly big join query in my single entity!?
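For what it's worth, the wiki's cached-sub-entity pattern keys the cache explicitly instead of interpolating ${en1.id} into the child query (with a cache, the child query runs once for all rows, so a per-row variable in it caches only the first lookup). A sketch of that variant, with attribute names as documented for CachedSqlEntityProcessor and table/field names taken from the post:

```xml
<entity name="en1" pk="id"
        query="SELECT id, product FROM table WHERE product = 'abc'">
  <!-- Child query runs once and is cached; rows are then looked
       up by matching table2.id against the parent's en1.id. -->
  <entity name="en2" pk="id"
          cacheImpl="SortedMapBackedCache"
          cacheKey="id" cacheLookup="en1.id"
          query="SELECT id, code FROM table2"/>
</entity>
```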
best way to monitor the documents per second in dataimporthandler?
Hello my friends :) I want to monitor the time DIH takes with Zabbix. Which is the best indicator of update time when using the DataImportHandler? Is "Time Taken" correct, or do you think one of the request times in the MBeans is better? I don't think the value of 5minRateReqsPerSecond is a good indicator of documents per second. Thanks.
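For what it's worth, the DIH status response (command=status) reports both "Time Taken" and "Total Documents Processed", so documents per second can be derived on the monitoring side. A minimal sketch, assuming the usual h:m:s.ms shape of the "Time Taken" value:

```python
def docs_per_second(status):
    """Derive docs/sec from a parsed DIH status response."""
    msgs = status["statusMessages"]
    h, m, s = msgs["Time Taken"].split(":")      # e.g. "0:1:15.5"
    elapsed = int(h) * 3600 + int(m) * 60 + float(s)
    docs = int(msgs["Total Documents Processed"])
    return docs / elapsed if elapsed else 0.0

# Example with made-up numbers: 1500 docs in 75.5 seconds
sample = {"statusMessages": {"Time Taken": "0:1:15.5",
                             "Total Documents Processed": "1500"}}
print(round(docs_per_second(sample), 2))  # 19.87
```

A Zabbix user parameter can then call this against the /dataimport status URL on each poll.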
XInclude in data-config.xml
Hello. Is it possible to include some entities in my data-config.xml with XInclude? I tried this line:

    <xi:include href="solr/entity.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>

and my entity.xml contains something like:

    <entity name="name" query="SELECT * FROM table"/>

Any ideas why this does not work? This blog post sounded promising =( http://www.raspberry.nl/2010/10/30/solr-xml-config-includes/
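In case it helps: the two usual pitfalls with XInclude in Solr config are that the href is resolved relative to the including file, and that the included file must itself be well-formed XML with a single root element. Whether the DIH data-config loader applies XInclude at all depends on the Solr version (the core config files do), so this shape is an assumption to verify, with file names from the post:

```xml
<!-- data-config.xml: relative href, include inside <document> -->
<dataConfig>
  <document>
    <xi:include href="entity.xml"
                xmlns:xi="http://www.w3.org/2001/XInclude"/>
  </document>
</dataConfig>
```

entity.xml then holds exactly one complete root element, e.g. a single `<entity name="name" query="SELECT * FROM table"/>`.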
compare two shards.
Hello. I want to compare two shards with each other, because these shards should contain the same index, but they don't =( So I want to find the documents that are missing from one of the two shards. My ideas: a distributed request across my nodes with a facet on my unique field, but the facet result is not reversible =( Or grouping, but that does not work correctly either, I think; there are no groups of the same uniqueKey in the result set. Does anyone have a better idea?
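One workable approach is to export only the uniqueKey field from each shard separately (e.g. fl=id with distrib=false so each node answers for itself) and diff the two id sets on the client. A minimal sketch of the client side:

```python
def diff_ids(ids_a, ids_b):
    """Return (only_in_a, only_in_b) for two shards' id lists."""
    a, b = set(ids_a), set(ids_b)
    return sorted(a - b), sorted(b - a)

# Example with made-up ids: doc "1" only on shard A, "4" only on B
only_a, only_b = diff_ids(["1", "2", "3"], ["2", "3", "4"])
print(only_a, only_b)  # ['1'] ['4']
```

For large shards the id lists would be fetched page by page, but the comparison itself stays a set difference.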
Calculate a sum.
Hello. My problem is that I need to calculate a sum of amounts. The amount is in my index (stored=true). My PHP script fetches all values with paging, but if a request takes too long, Jetty kills the process and I get a broken pipe. What is the best/fastest way to get the values of many fields out of the index? Does a response handler for exports exist? Or which one is fastest?
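If the sum can be computed server-side, the StatsComponent does it in one request instead of paging every stored value out to PHP (a later message in this thread reports it as slow on 1.4, so it is worth re-benchmarking on the version in use; the field must be numeric and gets uninverted into heap). The request shape, with the field name taken from the post:

```
/select?q=*:*&rows=0&stats=true&stats.field=amount
```

The total appears in the response under stats/stats_fields/amount/sum.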
Re: Calculate a sum.
Hey, thanks for your reply. I forgot to say: StatsComponent does not work for our application, too slow and buggy. But I tested that component with version 1.4 ... maybe there were some bugfixes in 4.0? This is the reason for calculating the sum on the client side over several pages, but sometimes that is too much for the server.
Re: Calculate a sum.
Mikhail Khludnev wrote: "You can spend some heap for uninverting the index and utilize wiki.apache.org/solr/StatsComponent". What do you mean by this? Edward Garrett wrote: "how many documents are you working with?" ~90 million documents ...
which way for export
Hello. What is the best/fastest way to get the values of many fields out of the index? My problem is that I need to calculate a sum of amounts; the amount is in my index (stored=true). My PHP script fetches all values with paging, but if a request takes too long, Jetty kills the export process. Is it better to fetch all the fields with wt=csv/json/xml, or with some other handler?
Out Of Memory =( Too many cores on one server?
Hello. After my server has been running for a while I get some OOM problems. I think the problem is that I am running too many cores with too many documents on one server. This is my server concept: 14 cores; 1 with 30 million docs, 1 with 22 million docs, 1 with a growing 25 million docs, 1 with 67 million docs, and the other cores are under 1 million docs each. All these cores run in one Jetty, searching is very fast, and we are satisfied with this. Yesterday we got an OOM. Do you think we should move the big cores out into another virtual instance of the server, so that the JVMs do not share the memory and go OOM? We start with: MEMORY_OPTIONS=-Xmx6g -Xms2G -Xmn1G
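Before splitting instances, it is usually worth capturing what actually fills the heap. The standard HotSpot flags for that can be appended to the post's own MEMORY_OPTIONS line (log paths here are illustrative):

```
MEMORY_OPTIONS="-Xmx6g -Xms2G -Xmn1G \
  -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/solr \
  -verbose:gc -Xloggc:/var/log/solr/gc.log"
```

The heap dump taken at the moment of the OOM shows whether caches, FieldCache entries from sorting/faceting, or indexing buffers dominate, which decides whether splitting cores would actually help.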
solr host name on solrconfig.xml
Hello, I need the host name of my Solr server in my solrconfig.xml. Does anybody know the correct variable? Something like ${solr.host} or ${solr.host.name} ... Does documentation exist for ALL available variables in the solr namespace? Thanks a lot.
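As far as I know there is no built-in ${solr.host} variable; solrconfig.xml only substitutes JVM system properties and per-core properties, so the host name has to be passed in. A sketch, where the property name solr.host and the masterUrl usage are my own illustration:

```xml
<!-- solrconfig.xml: falls back to localhost if the property is unset -->
<str name="masterUrl">http://${solr.host:localhost}:8983/solr/core1</str>
```

Started with e.g. `java -Dsolr.host=$(hostname) -jar start.jar`, or defined per core via the properties loaded from solr.xml.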
Re: solr host name on solrconfig.xml
Okay, thanks. I knew that way, but it is not so nice :P I now set a new variable in my core.properties file, which I load in solr.xml for each core =))
Eof.Exception - flushBuffer
Hello. I have no idea when this error message occurs; does anybody have an idea? No search requests are running at that time. An import starts every minute, so I think that causes this exception, but why?

SEVERE: null:org.eclipse.jetty.io.EofException
	at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:952)
	at org.eclipse.jetty.http.AbstractGenerator.blockForOutput(AbstractGenerator.java:518)
	at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:159)
	at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:101)
	at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
	at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:263)
	at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:106)
	at java.io.OutputStreamWriter.write(OutputStreamWriter.java:190)
	at org.apache.solr.util.FastWriter.write(FastWriter.java:100)
	at java.io.Writer.write(Writer.java:140)
	at org.apache.solr.response.JSONWriter.writeDouble(JSONResponseWriter.java:567)
	at org.apache.solr.response.TextResponseWriter.writeDouble(TextResponseWriter.java:339)
	at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:145)
	at org.apache.solr.response.JSONWriter.writeSolrDocument(JSONResponseWriter.java:355)
	at org.apache.solr.response.TextResponseWriter.writeSolrDocumentList(TextResponseWriter.java:222)
	at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:184)
	at org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:179)
	at org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:295)
	at org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:91)
	at org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:57)
	at org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:398)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1332)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:477)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:225)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1031)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:406)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:186)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:965)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
	at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
	at org.eclipse.jetty.server.Server.handle(Server.java:348)
	at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:452)
	at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:884)
	at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:938)
	at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:630)
	at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:230)
	at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:77)
	at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:620)
	at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:46)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:603)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:538)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Broken pipe
	at sun.nio.ch.FileDispatcher.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:72)
	at sun.nio.ch.IOUtil.write(IOUtil.java:28)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
Re: DIH include Fieldset in query
> So you want to re-use the same SQL sentence in many entities?

Yes. Is it necessary to deploy complete Solr and Lucene for this?
DIH include Fieldset in query
Hello. I have many big entities in my data-config.xml, and many of them contain the same query. The entities look like this:

    <entity name="name" transformer="DateFormatTransformer" pk="id"
            query="SELECT field AS fieldname, IF(bla NOT NULL, 1, 0) AS blob, fieldname, fieldname AS field, ...">

and more and more. Is it possible to include text from a file, or something like that, in data-config.xml???
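One trick often suggested for exactly this: define the shared column list as an internal DTD entity at the top of data-config.xml and reference it from each query. This assumes the XML parser used for the DIH config resolves internal entities (worth a quick test); column names below are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dataConfig [
  <!-- Shared column list, reusable as &fieldset; in every query -->
  <!ENTITY fieldset "id, field AS fieldname, IF(bla NOT NULL, 1, 0) AS blob">
]>
<dataConfig>
  <document>
    <entity name="en1" query="SELECT &fieldset; FROM table1"/>
    <entity name="en2" query="SELECT &fieldset; FROM table2"/>
  </document>
</dataConfig>
```

Changing the fieldset then touches one place instead of every entity.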
Alternative to waitFlush in Solr4.0 !!!?
Does an alternative to waitFlush exist? In my setup this option is very useful for my NRT. Is nobody here with the same problem?
FileNotFoundException during commit. concurrences process?!
Hello again. This is my exception, with Solr version 4.0.0.2012.04.26.09.00.41:

SEVERE: Exception while solr commit.
java.io.FileNotFoundException: _8l.cfs
	at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:266)
	at org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:216)
	at org.apache.lucene.index.TieredMergePolicy.size(TieredMergePolicy.java:640)
	at org.apache.lucene.index.TieredMergePolicy.useCompoundFile(TieredMergePolicy.java:616)
	at org.apache.lucene.index.IndexWriter.useCompoundFile(IndexWriter.java:2078)
	at org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:1968)
	at org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
	at org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
	at org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
	at org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
	at org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
	at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:438)
	at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:553)
	at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:2416)
	at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2548)
	at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2530)
	at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:414)
	at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:82)
	at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
	at org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:783)
	at org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:154)
	at org.apache.solr.handler.dataimport.SolrWriter.commit(SolrWriter.java:107)
	at org.apache.solr.handler.dataimport.DocBuilder.finish(DocBuilder.java:286)
	at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:246)
	at org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:404)
	at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:443)
	at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:422)
Jun 26, 2012 4:28:05 PM org.apache.solr.handler.dataimport.SimplePropertiesWriter readIndexerProperties

My architecture: 2 Solr instances. One instance updates the index (updater), the other instance is only for searching (searcher). An update arrives every minute:
- the updater runs without problems
- after the updater's commit, all changes are visible in the updater instance
- NOW the searcher starts a commit=true on each of its cores to refresh the changes, and NOW I SOMETIMES get my exception =(

Anybody an idea? Here is the relevant part of my solrconfig.xml (updater AND searcher):

    <indexConfig>
      <useCompoundFile>true</useCompoundFile>
      <ramBufferSizeMB>128</ramBufferSizeMB>
      <mergeFactor>2</mergeFactor>
      <lockType>single</lockType>
      <writeLockTimeout>1000</writeLockTimeout>
      <commitLockTimeout>1</commitLockTimeout>
      <unlockOnStartup>false</unlockOnStartup>
      <reopenReaders>true</reopenReaders>
      <infoStream file="INFOSTREAM.txt">false</infoStream>
      <deletionPolicy class="solr.SolrDeletionPolicy">
        <str name="maxCommitsToKeep">1</str>
        <str name="maxOptimizedCommitsToKeep">0</str>
      </deletionPolicy>
    </indexConfig>
    <updateHandler class="solr.DirectUpdateHandler2"/>
Re: FileNotFoundException during commit. concurrences process?!
In my older version of Solr this was possible, but it seems not to be possible in this new one =(
Replication. confFiles and permissions.
Hello. I am running Solr replication. It works well, but I need to replicate my dataimport.properties. When server1 replicates this file, it afterwards creates a new file every time, with a *.timestamp suffix, because the first replication run created the file with the wrong permissions ... How can I tell Solr replication to chmod 755 dataimport.properties ...? ;-) Thanks.
- ---
System: one server, 12 GB RAM, 2 Solr instances, 8 cores; 1 core with 45 million documents, the other cores ~200,000.
- Solr1 for search requests: commit every minute, 5 GB Xmx
- Solr2 for update requests: delta every minute, 4 GB Xmx
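For reference, conf-file replication is declared on the master side of the ReplicationHandler; the resulting file permissions are an OS-level matter (the servlet container process's user and umask), not something Solr exposes a setting for. A sketch of the relevant solrconfig.xml section (the second confFiles entry is illustrative):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <!-- conf/ files shipped to slaves together with the index -->
    <str name="confFiles">dataimport.properties,schema.xml</str>
  </lst>
</requestHandler>
```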
Re: Replication. confFiles and permissions.
My setup is an asynchronous replication; this means both servers are master AND slave at the same time, so I can easily switch master and slave on the fly, without restarting any server with a mass of scripts ... I trigger a replication via cronjob and check each time whether a server is master or slave; only the slave is allowed to fetch the index from the master. But when I need to switch, I need the dataimport.properties to get correct deltas ...
- ---
System: 2 servers, each 12 GB RAM, 2 Solr instances with 12 cores, asynchronous replication.
DIH NoClassFoundError.
Is it really not possible to load DIH?!? I downloaded the new Solr 3.6 from the website, but when I start Solr I get "no DIH found" every time, even though I put this in my solrconfig.xml:

    <lib dir="../dist/" regex="apache-solr-dataimporthandler-.*\.jar"/>

AND have this structure: http://lucene.472066.n3.nabble.com/file/n3938253/Bildschirmfoto.png I get this message: SEVERE: java.lang.NoClassDefFoundError: org/apache/solr/util/plugin/SolrCoreAware
Replication failed without an error =(
Hello. Does anyone have an idea how I can figure out why my replication failed? I get no errors =( My configuration: 2 servers, both master and slave at the same time. Only one server applies updates and is therefore the master; on the slave, a replication is started via cron. If one server crashes, I can easily switch master to slave, precisely because both are master AND slave at the same time. This worked well, but no replication has run since I deleted the pollInterval !?!? Could that be the reason? Thanks.
Re: Replication failed without an error =(
Before this problem I had this one: https://issues.apache.org/jira/browse/SOLR-1781
Re: querying on shards
@Shawn Heisey-4: what does the requestHandler of your broker look like? I am thinking about doing the same as your idea ;)
Re: Master/Slave switch on the fly. Replication
I have 8 cores ;-) I thought that replication is defined in solrconfig.xml, that this file is only loaded on startup, and that I therefore cannot change master to slave and slave to master without restarting the servlet container?!
Master/Slave switch on the fly. Replication
Hello. Is it possible to switch master/slave on the fly, without restarting the server?
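For what it's worth, the ReplicationHandler supports enable flags on both sections, so one core can carry master and slave config at the same time and be flipped by changing a property and reloading the core, no container restart needed. A sketch, with the property names (enable.master, enable.slave) and host name chosen here for illustration:

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="enable">${enable.master:false}</str>
    <str name="replicateAfter">commit</str>
  </lst>
  <lst name="slave">
    <str name="enable">${enable.slave:false}</str>
    <str name="masterUrl">http://master-host:8983/solr/core1</str>
  </lst>
</requestHandler>
```

Flipping the two properties in solrcore.properties and issuing a core RELOAD switches the role on the fly.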
Re: Using two repeater to rapidly switching Master and Slave (Replication)?
Did your configuration work? I have the same issue and I don't know whether it works ... I have 2 servers, each with 2 Solr instances (one for updates, the other for searching). Now I need replication from solr1 to solr2, but what does Solr do if the master crashes???
Best requestHandler for typing error.
Hello. Which request handler do you use to catch typing errors, like goolge => "did you mean google"?! I want to combine my EdgeNGram autosuggestion with a clever autocorrection! What do you use?
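The usual building block for "did you mean" is the SpellCheckComponent attached to a search handler; collation then produces the corrected full query. A sketch of a minimal setup (component, field, and directory names are placeholders):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">suggest_text</str>
    <str name="classname">solr.IndexBasedSpellChecker</str>
    <str name="spellcheckIndexDir">./spellchecker</str>
  </lst>
</searchComponent>
```

Queried with spellcheck=true&spellcheck.collate=true on a handler that lists the component in its last-components.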
Replace pattern "," with " "
Why does this not work?

    <fieldType name="city" class="solr.TextField">
      <analyzer>
        <charfilter class="solr.PatternReplaceFilterFactory" pattern="^(\, )$" replacement=" " replace="first"/>
        OR
        <charfilter class="solr.PatternReplaceFilterFactory" pattern="," replacement=" " replace="first"/>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      </analyzer>
    </fieldType>

I don't know where my error is. I only want to replace a comma with a blank ... thx =)))
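The likely problem: PatternReplaceFilterFactory is a token filter, not a char filter, so it cannot be declared as a charFilter (and replace="first" is a filter-only attribute); the pre-tokenization counterpart is PatternReplaceCharFilterFactory. A sketch of both working variants, either of which would be used on its own:

```xml
<fieldType name="city" class="solr.TextField">
  <analyzer>
    <!-- Variant 1: replace commas BEFORE tokenization -->
    <charFilter class="solr.PatternReplaceCharFilterFactory"
                pattern="," replacement=" "/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- Variant 2: or strip commas from each token AFTER tokenization -->
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="," replacement=" " replace="all"/>
  </analyzer>
</fieldType>
```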
Re: Replace pattern "," with " "
Okay, thanks =) But I now replace it in my data-config instead ;)
new NRT in my case quite useful?
I read this article by Mark Miller: http://www.lucidimagination.com/blog/2011/07/11/benchmarking-the-new-solr-%E2%80%98near-realtime%E2%80%99-improvements/ Now I want to know whether it is worthwhile to update to a new Solr version. My version is 4.0.0.2010.10.26.08.43.14. I need really good NRT search for my application, and I realized it with several cores/indices, each split into one core for search requests and another core for indexing. My DIH needs ~15 seconds to index 1500 docs into my main and biggest index; that is not so bad. But does it make sense to use a newer version of Solr, with hard and soft commits? In particular, I want to split my 50M index into several smaller indexes for faster searching, updates and optimization.
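On the newer builds the NRT knobs live in the updateHandler: a rare hard commit that flushes to disk without opening a searcher, plus a frequent cheap soft commit for visibility. A sketch of that split (the times are illustrative, matching the per-minute update pattern in the post):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit: durability, no new searcher -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commit: changes become searchable within ~1s -->
  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>
</updateHandler>
```

This can remove the need for a separate indexing core per index, since the searcher no longer pays the full cost of every commit.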
Re: best way for sum of fields
Sorry, I need the sum of the values of the found documents, e.g. the total amount of one day; each doc in the index has its own amount. I tried something with StatsComponent, but with 48 million docs in the index it is too slow.
Re: best way for sum of fields
Yes, I use that way in another part of my application. I had hoped another way exists that avoids the round trip through PHP.
Re: best way for sum of fields
Hi, thanks for the big reply ;) I had the idea with the several small 5M shards too, and I think that is the next step I have to take, because our biggest index grows by 50K documents a day on average. But does it make sense to keep searcher AND updater cores on one big server? I don't want to use replication, because that is not possible with our own high-availability solution. My system is split into searcher and updater cores, each with its own index; some search requests go across all these 8 cores with distributed search.
Re: some basic information on Solr
I think by "incident" he means failures / downtimes / problems with Solr!?
Re: How do i get results for quering with separated words?
Which type do you use in your schema.xml? Try WordDelimiterFilterFactory or some of the other filters on this page: http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.WordDelimiterFilterFactory
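A sketch of a field type that splits compound tokens with WordDelimiterFilterFactory; the attribute values shown are the common starting point and would be tuned per use case:

```xml
<fieldType name="text_split" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- e.g. "Wi-Fi" -> "Wi", "Fi", "WiFi" -->
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" generateNumberParts="1"
            catenateWords="1" splitOnCaseChange="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Applying the same chain at index and query time lets separated and joined spellings match each other.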
Re: How do i get results for quering with separated words?
Index this field without whitespace? XD -- View this message in context: http://lucene.472066.n3.nabble.com/How-do-i-get-results-for-quering-with-separated-words-tp3395966p3396207.html
Re: How To perform SQL Like Join
http://wiki.apache.org/solr/Join -- View this message in context: http://lucene.472066.n3.nabble.com/How-To-perform-SQL-Like-Join-tp3351090p3352322.html
Re: math with date and modulo
Okay, thanks a lot. I thought it wasn't possible to get the month in my case =( I will try another way. -- View this message in context: http://lucene.472066.n3.nabble.com/math-with-date-and-modulo-tp3335800p3338207.html
Re: Schema fieldType y-m-d ?!?!
Thx =) I think I will save this as a string, if range queries really work on it =) -- View this message in context: http://lucene.472066.n3.nabble.com/Schema-fieldType-y-m-d-tp3335359p3339160.html
Schema fieldType y-m-d ?!?!
Is it possible to index a date field in the format y-m-d? I don't need the timestamp, so I could save some space. What ways exist to search with a complex date filter!? -- View this message in context: http://lucene.472066.n3.nabble.com/Schema-fieldType-y-m-d-tp3335359p3335359.html
math with date and modulo
Hello. I am fighting with Solr's FunctionQuery. I try to get the diff between today and a date field; from this diff I want to compute a modulo against another field whose values are 1, 3, 6, 12, in a function something like this (I know that some of these functions are not available in Solr): q={!func}$v2=0&v1=(NOW - $var)&v2=modulo($v1,interval) OR (DIFF(Month of Today - Month of Search) MOD interval) = 0. Can anybody give me some tips? -- View this message in context: http://lucene.472066.n3.nabble.com/math-with-date-and-modulo-tp3335800p3335800.html
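Since the data already comes from MySQL via DIH, one alternative is to push the month-difference-modulo check into the SQL side instead of a Solr function query. A sketch, assuming MySQL; the column names `start_date` and `interval_months` are placeholders, not from the original post:

```sql
-- Select only rows whose month difference from today is a multiple
-- of the per-row interval (1, 3, 6 or 12 months).
SELECT id, start_date, interval_months
FROM consumer
WHERE MOD(TIMESTAMPDIFF(MONTH, start_date, NOW()), interval_months) = 0;
```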
Re: how to convert YYYY-MM-DD to YYYY-MM-DD hh:mm:ss - DIH
Okay, thx =) -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-convert--MM-DD-to-YYY-MM-DD-hh-mm-ss-DIH-tp2961481p2978639.html
Re: how to convert YYYY-MM-DD to YYYY-MM-DD hh:mm:ss - DIH
My query is a normal JOIN select: SELECT CONCAT('tablename_', CAST(cp.id AS CHAR)) AS uniquekey, cp.id, cp.fieldname, ..., mp.fieldname, mp. ... FROM consumer AS cp INNER JOIN morepush AS mp ON cp.id = mp.id -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-convert--MM-DD-to-YYY-MM-DD-hh-mm-ss-DIH-tp2961481p2979311.html
Re: how to convert YYYY-MM-DD to YYYY-MM-DD hh:mm:ss - DIH
New error message: Value '0000-00-00' followed by a long run of NUL bytes (^@^@^@ ...) 'can not be represented as java.sql.Date' ??? What is that ??? -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-convert--MM-DD-to-YYY-MM-DD-hh-mm-ss-DIH-tp2961481p2979337.html
Re: how to convert YYYY-MM-DD to YYYY-MM-DD hh:mm:ss - DIH
I am such an IDIOT!!! SORRY XD hehe, I wrote zeroDateTimeBehavOIr and not zeroDateTimeBehavior ... -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-convert--MM-DD-to-YYY-MM-DD-hh-mm-ss-DIH-tp2961481p2979386.html
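For reference, the corrected datasource line would look something like this (database name and credentials are placeholders):

```xml
<!-- data-config.xml: note the correct spelling "zeroDateTimeBehavior",
     which makes the MySQL driver return NULL for 0000-00-00 dates -->
<dataSource driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/databaseName?zeroDateTimeBehavior=convertToNull"
            user="user" password="password"/>
```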
Re: how to convert YYYY-MM-DD to YYYY-MM-DD hh:mm:ss - DIH
Yes. Thx for your help =) -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-convert--MM-DD-to-YYY-MM-DD-hh-mm-ss-DIH-tp2961481p2979765.html
Re: how to convert YYYY-MM-DD to YYYY-MM-DD hh:mm:ss - DIH
Okay, I didn't find the problem =( It's still the same mess: I cannot convert dates of the form 0000-00-00 to yyyy-MM-dd'T'hh:mm:ss'Z' with the DateFormatTransformer. I put my date fields into another entity:
<entity name="dates" query="SELECT * FROM consumer WHERE id='${main.cp_id}'">
  <field column="start_date_consumer" dateTimeFormat="yyyy-MM-dd'T'hh:mm:ss'Z'" />
  <field column="end_date_consumer" dateTimeFormat="yyyy-MM-dd'T'hh:mm:ss'Z'" />
  <field column="end_date_bla" dateTimeFormat="yyyy-MM-dd'T'hh:mm:ss'Z'" />
  <field column="start_date_bla" dateTimeFormat="yyyy-MM-dd'T'hh:mm:ss'Z'" />
</entity>
Solr throws an exception like the one above. WHY can't the Transformer convert this correctly? -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-convert--MM-DD-to-YYY-MM-DD-hh-mm-ss-DIH-tp2961481p2975235.html
Re: how to convert YYYY-MM-DD to YYYY-MM-DD hh:mm:ss - DIH
Yes, I put zeroDateTimeBehavior=convertToNull into my URL, like this: url=jdbc:mysql://localhost/databaseName?zeroDateTimeBehavoir=convertToNull Exception: May 23, 2011 3:30:22 PM org.apache.solr.handler.dataimport.DataImporter doFullImport SEVERE: Full Import failed org.apache.solr.handler.dataimport.DataImportHandlerException: Error reading data from database Processing Document # 1 at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:72) at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.getARow(JdbcDataSource.java:319) at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.access$700(JdbcDataSource.java:226) at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.next(JdbcDataSource.java:264) at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.next(JdbcDataSource.java:258) at org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:76) at org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:73) at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:233) at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:579) at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:605) at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:260) at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:184) at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:334) at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:392) at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:373) Caused by: java.sql.SQLException: Value 'XXX 2011-01-07 0000-00-00 2011-01-21 0000-00-0030311414501210open2011-01-07 15:10:47 10.1.0.1212011-01-07 15:10:472011-01-07 15:10:47' can not be represented as java.sql.Date at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1055) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:956) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:926) at com.mysql.jdbc.ResultSetRow.getDateFast(ResultSetRow.java:140) at com.mysql.jdbc.BufferRow.getDateFast(BufferRow.java:706) at com.mysql.jdbc.ResultSetImpl.getDate(ResultSetImpl.java:2174) at com.mysql.jdbc.ResultSetImpl.getDate(ResultSetImpl.java:2127) at com.mysql.jdbc.ResultSetImpl.getObject(ResultSetImpl.java:4956) at com.mysql.jdbc.ResultSetImpl.getObject(ResultSetImpl.java:5012) at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.getARow(JdbcDataSource.java:284) ... 13 more XXX is data which nobody should see ;-) -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-convert--MM-DD-to-YYY-MM-DD-hh-mm-ss-DIH-tp2961481p2975285.html
Re: how to convert YYYY-MM-DD to YYYY-MM-DD hh:mm:ss - DIH
The problem is not the empty 0000-00-00 values; the problem is the missing timestamp at the end of the string! -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-convert--MM-DD-to-YYY-MM-DD-hh-mm-ss-DIH-tp2961481p2975293.html
how to convert YYYY-MM-DD to YYYY-MM-DD hh:mm:ss - DIH
Hello, I want to index some date fields that have the date format yyyy-mm-dd. Solr throws an exception like this: 'can not be represented as java.sql.Date'. I am using ...transformer=DateFormatTransformer and ...zeroDateTimeBehavoir=convertToNull. How can I tell DIH to convert these fields into the correct format?? thx -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-convert--MM-DD-to-YYY-MM-DD-hh-mm-ss-DIH-tp2961481p2961481.html
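For comparison, a minimal DateFormatTransformer setup usually looks like the sketch below; the entity, table, and column names are placeholders, not the poster's actual config. Note that dateTimeFormat describes the *source* format DIH should parse, not the output format:

```xml
<entity name="example" transformer="DateFormatTransformer"
        query="SELECT id, created FROM some_table">
  <!-- parse the incoming yyyy-MM-dd string into a real date object;
       Solr then stores/renders it in its own canonical form -->
  <field column="created" dateTimeFormat="yyyy-MM-dd" />
</entity>
```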
Re: how to convert YYYY-MM-DD to YYYY-MM-DD hh:mm:ss - DIH
Did you mean something like this? DATE_FORMAT(cp.field, '%Y-%m-%d %H:%i:%s') AS field ??? I think I need to add the timestamp to my date fields, or not? Why can't DIH handle this? -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-convert--MM-DD-to-YYY-MM-DD-hh-mm-ss-DIH-tp2961481p2961684.html
Re: how to convert YYYY-MM-DD to YYYY-MM-DD hh:mm:ss - DIH
<entity name="foo" pk="cp_id" transformer="DateFormatTransformer" query="SELECT ..., ...some fields..., cp.start_date_1, cp.start_date_2, cp.end_date_1, cp.end_date_2, ..some other fields.. FROM ..."> ... </entity> That does not work for fields with the value 0000-00-00 OR/AND 2011-05-18. I tried with: <field column="start_date_1" dateTimeFormat="yyyy-MM-dd'T'hh:mm:ss" /> but Solr always says that these fields have the wrong format! I do try my SQL selects before I post them here ;-) -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-convert--MM-DD-to-YYY-MM-DD-hh-mm-ss-DIH-tp2961481p2961787.html
Re: how to convert YYYY-MM-DD to YYYY-MM-DD hh:mm:ss - DIH
Okay, I found the problem: I had put the fields into my data-config twice ;-) -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-convert--MM-DD-to-YYY-MM-DD-hh-mm-ss-DIH-tp2961481p2961834.html
Re: how to abort a running optimize
What do you mean by segments-number? -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-abort-a-running-optimize-tp2838721p2870638.html
timezone DIH and dataimport.properties
Hello. How can I set the Java timezone in my Java properties? My problem is that dataimport.properties contains the wrong timezone and I don't know how to set the correct one ... !?!? thx -- View this message in context: http://lucene.472066.n3.nabble.com/timezone-DIH-and-dataimport-properties-tp2864928p2864928.html
change DIH default optimize=false
Hello. How can I change the default value of optimize in DIH to false? -- View this message in context: http://lucene.472066.n3.nabble.com/change-DIH-default-optimize-false-tp2838622p2838622.html
Re: change DIH default optimize=false
Yes, but as the default! I don't want to have to set it to false myself every time. I don't need an optimize after every commit, and I want the default to be false! -- View this message in context: http://lucene.472066.n3.nabble.com/change-DIH-default-optimize-false-tp2838622p2838676.html
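One possible approach (an untested assumption on my part, not something confirmed in this thread) is to put the parameter into the defaults of the /dataimport request handler in solrconfig.xml, since DIH reads its options from the request parameters:

```xml
<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
    <!-- assumed: handler defaults are merged into the request params,
         so imports run with optimize=false unless overridden in the URL -->
    <str name="optimize">false</str>
  </lst>
</requestHandler>
```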
how to abort a running optimize
Hello. My optimize is taking too much time, and sometimes I start an optimize I don't actually want ... :/ stupid, I know. Is it possible to abort a running optimize? -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-abort-a-running-optimize-tp2838721p2838721.html
Re: Updates during Optimize
"The current limitation or pause is when the ram buffer is flushing to disk" - when an optimize starts and runs for ~4 hours, are you saying that DIH is flushing the docs into the index during this pause? -- View this message in context: http://lucene.472066.n3.nabble.com/Updates-during-Optimize-tp2811183p2815064.html
Re: jetty update
Is it necessary to update it for Solr? -- View this message in context: http://lucene.472066.n3.nabble.com/jetty-update-tp2816084p2816650.html
exceeded limit of maxWarmingSearchers = 4 =(
Hello. My NRT search is not configured correctly =( 2 Solr instances: one searcher and one updater. The updater starts an update of around 3000 documents every minute, and the searcher starts a commit every minute to refresh the index and read the new docs. These are my cache values for a 36-million-document index: after a restart my warmup time is about 1700 ms. Do you think I need to set the autowarmCount of every cache to near zero? -- View this message in context: http://lucene.472066.n3.nabble.com/exceeded-limit-of-maxWarmingSearchers-4-tp2810380p2810380.html
Re: exceeded limit of maxWarmingSearchers = 4 =(
I start a commit on the searcher core with: .../core/update?commit=true&waitFlush=false -- View this message in context: http://lucene.472066.n3.nabble.com/exceeded-limit-of-maxWarmingSearchers-4-tp2810380p2810458.html
Re: exceeded limit of maxWarmingSearchers = 4 =(
My filterCache has a warmupTime of ~6000, but my config is like this: LRU Cache(maxSize=3000, initialSize=50, autowarmCount=50 ...). Should I set maxSize to 50 or a similar value? -- View this message in context: http://lucene.472066.n3.nabble.com/exceeded-limit-of-maxWarmingSearchers-4-tp2810380p2810561.html
Re: exceeded limit of maxWarmingSearchers = 4 =(
Oooh, my queryResultCache has a warmupTime of 54000 = ~1 minute. Any suggestions?? -- View this message in context: http://lucene.472066.n3.nabble.com/exceeded-limit-of-maxWarmingSearchers-4-tp2810380p2810572.html
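For a setup that commits every minute, the usual advice is to keep autowarming minimal so that opening a new searcher stays cheap. A sketch of what that could look like in solrconfig.xml; the sizes here are illustrative, not a tuned recommendation for this particular index:

```xml
<!-- solrconfig.xml: autowarmCount="0" means new searchers skip
     cache regeneration entirely, trading warm caches for fast commits -->
<filterCache      class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
<documentCache    class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
```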
Re: Decrease warmupTime
I'm fighting with the same problem, but with Jetty. Is it necessary in this case to also delete the Jetty work dir??? -- View this message in context: http://lucene.472066.n3.nabble.com/Decrease-warmupTime-tp494023p2810607.html
Updates during Optimize
Hello. When I start an optimize (which takes more than 4 hours), no updates from DIH are possible. I thought Solr copied the whole index and then optimized the copy, rather than locking the index and optimizing it in place ... =( Is there any way to do both at the same time? -- View this message in context: http://lucene.472066.n3.nabble.com/Updates-during-Optimize-tp2811183p2811183.html
Re: DIH OutOfMemoryError?
"Make sure streaming is on." -- how do I check that? -- View this message in context: http://lucene.472066.n3.nabble.com/DIH-OutOfMemoryError-tp2759013p2811270.html
StreamingUpdateSolrServer and PHP
Is it possible to use StreamingUpdateSolrServer from a PHP application? -- View this message in context: http://lucene.472066.n3.nabble.com/StreamingUpdateSolrServer-and-PHP-tp2794542p2794542.html
Tutorial StreamingUpdateSolrServer
Hello. I want to change my full imports from DIH over to Java and StreamingUpdateSolrServer ... is there a little how-to or something similar in the wiki? -- View this message in context: http://lucene.472066.n3.nabble.com/Tutorial-StreamingUpdateSolrServer-tp2795023p2795023.html
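I'm not aware of a dedicated wiki tutorial, but the basic SolrJ usage is small enough to sketch here. This assumes a running Solr instance; the URL, core name, queue size, thread count, and field names are all placeholders:

```java
// Sketch: feed documents through StreamingUpdateSolrServer (SolrJ 1.4/3.x API).
import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class FeedExample {
    public static void main(String[] args) throws Exception {
        // queue of 100 docs, 4 background threads draining it to Solr
        StreamingUpdateSolrServer server =
            new StreamingUpdateSolrServer("http://localhost:8983/solr/core1", 100, 4);
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "example-1");   // placeholder field names
        doc.addField("product", "abc");
        server.add(doc);                   // queued; background threads stream it
        server.commit();                   // or commit periodically, as with DIH
    }
}
```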
Re: how to start GarbageCollector
Why does Solr copy my complete index somewhere when I start a delta-import? I copy one core, start a full import of 35 million docs, and then start a delta-import for the last hour (~2000 docs). DIH/Solr then starts to copy the whole index ... why? I think it is copying the index, because my HDD usage starts to increase immediately ... My live core finishes a delta in 5-10 seconds!?!? I ran jconsole during this time; what does it tell me? -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-start-GarbageCollector-tp2748080p2783923.html
very slow commit. copy of index ?
Hello again ;-) After a full import of 36M docs my delta import doesn't work well. If I start my delta (which runs very fast on another core), the commit takes very long. I think Solr copies the whole index, commits the new documents into it, and then reduces the index size after these operations!?! I start the delta over DIH with: command=delta-import&optimize=false&commit=true. jconsole is running too, but I don't know how jconsole can help me ... thx! =) -- View this message in context: http://lucene.472066.n3.nabble.com/very-slow-commit-copy-of-index-tp2783940p2783940.html
Re: command is still running ? delta-import?
I have the same problem. Any resolutions? -- View this message in context: http://lucene.472066.n3.nabble.com/command-is-still-running-delta-import-tp48p2783986.html
Form too large ...
Hello. I sometimes get many results, and Solr or Jetty gives me this error: SEVERE: java.lang.IllegalStateException: Form too large 1787345100. numFound is 94000, not really much, but I fetch a double value from each doc and calculate the sum in PHP. When I put the query into the browser, a file download starts. I don't want to set org.mortbay.http.HttpRequest.maxFormContentSize=50. Is it possible to return the response in another way? Can I cache or save part of the result in Solr? I.e.: get the first 10K docs, then the next 10K docs, so that the result XML isn't so big!? -- View this message in context: http://lucene.472066.n3.nabble.com/Form-too-large-tp2752676p2752676.html
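Fetching the result set in pages with the start/rows parameters is the standard way to keep each response small. A sketch of the request URLs; the host, core name, and field name are placeholders:

```text
# page 1: docs 0-9999
http://localhost:8983/solr/core/select?q=*:*&fl=price&rows=10000&start=0
# page 2: docs 10000-19999
http://localhost:8983/solr/core/select?q=*:*&fl=price&rows=10000&start=10000
```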
Re: Exporting to CSV
http://yonik.wordpress.com/2010/07/29/csv-output-for-solr/ -- View this message in context: http://lucene.472066.n3.nabble.com/Exporting-to-CSV-tp2751988p2752679.html
how to start GarbageCollector
Hello, my problem is that after a full import Solr has reserved all of my RAM, and my delta imports need about 1 hour for fewer than 5000 small documents. How can I start the GarbageCollector to get the RAM back? -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-start-GarbageCollector-tp2748080p2748080.html
Re: how to start GarbageCollector
I ran a full import via DIH (35 million documents) and didn't restart Solr; my cronjob starts a delta automatically. If I restart Solr, the delta finishes in ~10 seconds ... free -m and top show me how much RAM is being used. The server is only for Solr, so no other processes are using my RAM. -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-start-GarbageCollector-tp2748080p2748134.html
Re: how to start GarbageCollector
Okay, I installed monitoring tools, jconsole and jvisualvm. How can I see with these where my problem is? What data is needed? :/ -- View this message in context: http://lucene.472066.n3.nabble.com/how-to-start-GarbageCollector-tp2748080p2748421.html
Re: NRT and warmupTime of filterCache
"it'll negatively impact the desired goal of low latency new index readers?" - yes, I think so; that's why I don't understand the wiki article ... I set the warmupCount to 500 and got no error messages that Solr wasn't available ... but solr-stats.jsp shows me a warmupTime of 12174. Why? Is the warmupTime in solrconfig.xml the maximum time in ms for autowarming, or what does it really mean? -- View this message in context: http://lucene.472066.n3.nabble.com/NRT-and-warmupTime-of-filterCache-tp2654886p2659560.html
Re: NRT and warmupTime of filterCache
Okay, not the time ... the number of items ...

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 5GB Xmx
- Solr2 for Update-Requests - delta every Minute - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/NRT-and-warmupTime-of-filterCache-tp2654886p2659562.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: NRT and warmupTime of filterCache
"Maybe the article is out of date?" - maybe ... I don't know. In my case it makes no sense, so I use a different configuration ...

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 5GB Xmx
- Solr2 for Update-Requests - delta every Minute - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/NRT-and-warmupTime-of-filterCache-tp2654886p2660814.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: NRT in Solr
I am using Solr for NRT with this version:

Solr Specification Version: 4.0.0.2010.10.26.08.43.14
Solr Implementation Version: 4.0-2010-10-26_08-05-39 1027394 - hudson - 2010-10-26 08:43:14
Lucene Specification Version: 4.0-2010-10-26_08-05-39
Lucene Implementation Version: 4.0-2010-10-26_08-05-39 1027394 - 2010-10-26 08:43:44

Is this version ready for NRT or not? It works, but if a newer version works better, I'll update Solr ... thx

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 4GB Xmx
- Solr2 for Update-Requests - delta every 2 Minutes - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/NRT-in-Solr-tp2652689p2654472.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: NRT in Solr
Question about http://wiki.apache.org/solr/NearRealtimeSearchTuning: I get the message "PERFORMANCE WARNING: Overlapping onDeckSearchers=x". In my solrconfig.xml I have maxWarmingSearchers=4. If I set it to 1 or 2, I get exceptions; with 4 I get nothing but the performance warning. The wiki article says the best solution is to set maxWarmingSearchers to 1!!! How can that work?

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 5GB Xmx
- Solr2 for Update-Requests - delta every Minute - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/NRT-in-Solr-tp2652689p2654696.html
Sent from the Solr - User mailing list archive at Nabble.com.
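[For reference, the setting discussed above lives in solrconfig.xml. A minimal sketch, not the poster's exact config; the value 4 is the one mentioned in the mail:]

```xml
<!-- solrconfig.xml (sketch): how many searchers may be warming at once.
     The wiki's advice of 1 only works if warming finishes before the
     next commit arrives; with commits every minute and long autowarm
     times, warming searchers overlap and Solr logs the warning (or
     throws an exception once the limit is exceeded). -->
<maxWarmingSearchers>4</maxWarmingSearchers>
```

[Shortening warming time, e.g. with a smaller autowarmCount on the caches, is the usual way to make a low maxWarmingSearchers value viable with frequent commits.]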
Re: getting much double-Values from solr -- timeout
"Are you using shards or have everything in the same index?" - shards == distributed search over several cores? Yes, but not always; in general, no.

"What problem did you experience with the StatsComponent?" - if I use stats on my 34-million-document index, the sum takes a VERY long time, no matter how many docs are found.

"How did you use it?" - like in the wiki; I think the StatsComponent is not very flexible!?

"I think the right approach will be to optimize StatsComponent to do a quick sum()" - how can I optimize this? Change the StatsComponent code and build a new Solr?

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 5GB Xmx
- Solr2 for Update-Requests - delta every Minute - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/getting-much-double-Values-from-solr-timeout-tp2650981p2654721.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: getting much double-Values from solr -- timeout
I am using NRT, and the caches are not always warmed; I think that is most likely the problem!?

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 5GB Xmx
- Solr2 for Update-Requests - delta every Minute - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/getting-much-double-Values-from-solr-timeout-tp2650981p2654725.html
Sent from the Solr - User mailing list archive at Nabble.com.
NRT and warmupTime of filterCache
I tried to set up NRT like in the wiki, but I ran into problems with autowarming and onDeckSearchers. Every minute one core starts a delta-import, and every minute the other core commits the index so it becomes searchable.

The wiki says: 1 searcher and filterCache warmupCount=3600. With this config I get exceptions that no searcher is available, so I cannot use it. My config is 4 searchers and warmupCount=3000; with these settings I get the performance warning, but it works. BUT during the 30 seconds (or more) needed to warm the searcher, I cannot ping my server and I get errors ...

Does it make sense to decrease my warmupCount to 0??? How many searchers do I need for 7 cores?

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 5GB Xmx
- Solr2 for Update-Requests - delta every Minute - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/NRT-and-warmupTime-of-filterCache-tp2654886p2654886.html
Sent from the Solr - User mailing list archive at Nabble.com.
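[The cache setting in question also sits in solrconfig.xml. A sketch with illustrative numbers, not a recommendation:]

```xml
<!-- solrconfig.xml (sketch): autowarmCount is a number of cache ITEMS
     to re-execute against the new searcher on commit, not a time in ms;
     the warmupTime shown in stats.jsp is how long that took. Setting it
     to 0 makes new searchers register quickly, at the cost of the first
     queries hitting a cold cache. -->
<filterCache class="solr.FastLRUCache"
             size="4096"
             initialSize="1024"
             autowarmCount="0"/>
```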
Re: NRT and warmupTime of filterCache
Does it make sense to update Solr in order to get SOLR-571???

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 5GB Xmx
- Solr2 for Update-Requests - delta every Minute - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/NRT-and-warmupTime-of-filterCache-tp2654886p2655073.html
Sent from the Solr - User mailing list archive at Nabble.com.
getting much double-Values from solr -- timeout
Hello. I have 34,000,000 documents in my index, and each doc has a field with a double value. I want the sum of these fields. I tested the StatsComponent, but it is not usable for me!! So I fetch all the values directly from Solr's index and compute the sum in PHP. That works fine, but when a user's search matches really many documents (~30,000), my script needs longer than 30 seconds and PHP aborts it. How can I tune Solr to fetch these double values from the index much faster!?

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 4GB Xmx
- Solr2 for Update-Requests - delta every 2 Minutes - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/getting-much-double-Values-from-solr-timeout-tp2650981p2650981.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr not Available with Ping when DocBuilder is running
My error is that Solr is not reachable with a ping - a ping via PHP HttpRequest ...

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 4GB Xmx
- Solr2 for Update-Requests - delta every 2 Minutes - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/Solr-not-Available-with-Ping-when-DocBuilder-is-running-tp2500214p2508686.html
Sent from the Solr - User mailing list archive at Nabble.com.
strange search-behavior over dynamic field
Hello. I have the fields reason_1 and reason_2. These two fields are covered in my schema by one dynamicField:

<dynamicField name="reason_*" type="textgen" indexed="true" stored="false"/>

I copy them into my default search field text: <copyField source="reason_*" dest="text"/> and into a new field reason: <copyField source="reason_*" dest="reason"/>

If I have two documents with exactly the same value in the reason_1 field, Solr finds only ONE document, not both. Why? Is this Solr behavior, or am I using it wrong?

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 4GB Xmx
- Solr2 for Update-Requests - delta every 2 Minutes - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/strange-search-behavior-over-dynamic-field-tp2508711p2508711.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: strange search-behavior over dynamic field
the fieldType is textgen. - --- System One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents other Cores 100.000 - Solr1 for Search-Requests - commit every Minute - 4GB Xmx - Solr2 for Update-Request - delta every 2 Minutes - 4GB Xmx -- View this message in context: http://lucene.472066.n3.nabble.com/strange-search-behavior-over-dynamic-field-tp2508711p2509166.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: strange search-behavior over dynamic field
The documents don't have the same uniqueKey; only reason is the same. I cannot show the exact search request because of privacy policy ... the query looks like: reason_1: firstname lastname, reason_2: 1234, 02.02.2011 -- so in the field reason: firstname lastname, 1234, 02.02.2011. The search request comes from a PHP application. On my test environment I cannot reproduce this case ... =((

Okay ... I don't know why, but after a delta-import everything is fine ...

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 4GB Xmx
- Solr2 for Update-Requests - delta every 2 Minutes - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/strange-search-behavior-over-dynamic-field-tp2508711p2509610.html
Sent from the Solr - User mailing list archive at Nabble.com.
Solr not Available with Ping when DocBuilder is running
Hello. I run a delta every 2 minutes, and while one core (of 7) is running a delta, Solr isn't available. When I look in the log file, the ping comes in exactly while DocBuilder is running ...

Feb 15, 2011 11:49:20 AM org.apache.solr.handler.dataimport.DocBuilder doDelta
INFO: Delta Import completed successfully
Feb 15, 2011 11:49:20 AM org.apache.solr.handler.dataimport.DocBuilder execute
INFO: Time taken = 0:0:0.15
Feb 15, 2011 11:50:28 AM org.apache.solr.core.SolrCore execute
PHP Error at 11:50:12 Error: ...

So I get errors, although nothing looks wrong to me ... !?!? thx

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 4GB Xmx
- Solr2 for Update-Requests - delta every 2 Minutes - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/Solr-not-Available-with-Ping-when-DocBuilder-is-running-tp2500214p2500214.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: Open Too Many Files
Or switch the index to the compound file format. In solrconfig.xml:

<useCompoundFile>true</useCompoundFile>

Then Solr packs each segment into a single compound file instead of creating thousands of separate files.

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 4GB Xmx
- Solr2 for Update-Requests - delta every 2 Minutes - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/Open-Too-Many-Files-tp2406289p2411736.html
Sent from the Solr - User mailing list archive at Nabble.com.
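[In context, that element goes inside the index section of solrconfig.xml. A sketch - the enclosing section name varies by Solr version (<indexDefaults>/<mainIndex> in older releases, <indexConfig> in newer ones):]

```xml
<!-- solrconfig.xml (sketch): with the compound file format each segment
     is packed into one .cfs file, which keeps the number of open file
     handles low at a small cost in indexing speed. -->
<mainIndex>
  <useCompoundFile>true</useCompoundFile>
</mainIndex>
```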
Re: field=string with value: 0, 1 and 2
I found the problem. DIH, or I think rather the JDBC driver, casts 0 and 1 to boolean if the database field is of type tinyint(1). I am using two fields, one tinyint(1) and one tinyint(2) -.-

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 4GB Xmx
- Solr2 for Update-Requests - delta every 2 Minutes - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/field-string-with-value-0-1-and-2-tp2367038p2389508.html
Sent from the Solr - User mailing list archive at Nabble.com.
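[A possible workaround, assuming the MySQL Connector/J driver: its tinyInt1isBit connection property controls whether tinyint(1) columns are mapped to boolean. A sketch for data-config.xml - host, database, user, and table names here are placeholders:]

```xml
<!-- data-config.xml (sketch): stop the driver from treating tinyint(1)
     as a bit/boolean column, so DIH sees 0/1/2 as numbers. -->
<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/mydb?tinyInt1isBit=false"
            user="solr" password="***"/>

<!-- Alternative: cast in SQL so the driver only ever sees an integer. -->
<entity name="en1" query="SELECT id, CAST(status AS SIGNED) AS status FROM mytable"/>
```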
field=string with value: 0, 1 and 2
Hello - I am using shard requests over several cores. Each core has its own index and its own schema, but every core has the field status! Usually status is 0 or 1, but one core can have status 0, 1 OR 2.

The field type I use for status is string, but on the cores with only 0 and 1 values the values end up as boolean true and false; the core that can also have status 2 indexes the values 0, 1, 2 ... So I cannot filter across these cores with a shard request, because Solr finds nothing for status:true when the field contains 1 ...

How can I tell Solr to index the 0/1 values as integers and not booleans??? (fieldType int shows the same behaviour) ?? thx

---
System: One Server, 12 GB RAM, 2 Solr Instances, 7 Cores, 1 Core with 31 Million Documents, other Cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 4GB Xmx
- Solr2 for Update-Requests - delta every 2 Minutes - 4GB Xmx
--
View this message in context: http://lucene.472066.n3.nabble.com/field-string-with-value-0-1-and-2-tp2367038p2367038.html
Sent from the Solr - User mailing list archive at Nabble.com.