[JENKINS] Lucene-Solr-SmokeRelease-4.x - Build # 47 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-4.x/47/

No tests ran.

Build Log:
[...truncated 12386 lines...]
package-src-tgz:
     [exec] svn: E175002: timed out waiting for server
     [exec] svn: E175002: OPTIONS request failed on '/repos/asf/lucene/dev/branches/branch_4x'

BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/build.xml:273: The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/solr/build.xml:389: The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.x/lucene/common-build.xml:1745: exec returned: 1

Total time: 10 minutes 45 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-4746) Create a move method in Directory.
Mark Miller created LUCENE-4746:
-----------------------------------

             Summary: Create a move method in Directory.
                 Key: LUCENE-4746
                 URL: https://issues.apache.org/jira/browse/LUCENE-4746
             Project: Lucene - Core
          Issue Type: Improvement
            Reporter: Mark Miller
            Assignee: Mark Miller
             Fix For: 4.2, 5.0


I'd like to add a move method to Directory. We already have a move for Solr in DirectoryFactory, but it really belongs at the Directory level. The default implementation can do a copy and delete, but most implementations will be able to optimize this to a rename.

Besides the move we do for Solr (moving a replicated index into place), it would also be useful for another feature I'd like to add: the ability to merge an index with moves rather than copies. In some cases you don't need or want to copy all the files and could just rename/move them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
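A minimal sketch of the idea (illustrative names only, not Lucene's actual Directory API): the default move can be implemented as copy-plus-delete over a generic store, while a filesystem-backed implementation would override it with an atomic rename.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a toy in-memory "directory" showing the proposed default
// move semantics (copy, then delete the source). A real filesystem-backed
// implementation would override move() with an atomic rename instead.
public class Main {
    static final Map<String, byte[]> files = new HashMap<>();

    // Default implementation: portable, but does extra I/O.
    static void move(String src, String dest) {
        files.put(dest, files.get(src).clone());  // copy
        files.remove(src);                        // delete source
    }

    public static void main(String[] args) {
        files.put("_0.cfs", new byte[]{1, 2, 3});
        move("_0.cfs", "merged/_0.cfs");
        System.out.println(files.containsKey("_0.cfs"));        // false
        System.out.println(files.containsKey("merged/_0.cfs")); // true
    }
}
```

The copy-and-delete fallback keeps the contract satisfiable for any Directory, while the rename fast path is what makes a merge-by-move cheap.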
Re: Stress test deadlocks
On Feb 1, 2013, at 9:02 PM, Erick Erickson wrote:

> First, about the thread interrupt exceptions; what do you think about not
> logging them? I could argue that if they are benign, reporting them adds
> unnecessary stress. I kinda figured they were harmless but thought it might
> be worth mentioning.

I guess I'd open a JIRA issue to discuss it - we would probably want to tackle the code base consistently. Currently, I think we usually log something on interruptions.

> Second, I re-worked the stress test program to use the old-style solr.xml
> file, then re-ran the tests from trunk w/o any of the changes for SOLR-4196. It
> worked for a couple of hours, then I had to stop, but tonight it ran for just
> a few minutes (I updated the code this morning) and got the same error (stack
> below just in case I'm imagining things). Next step, I guess, is to apply the
> changes you indicated above to trunk and see if I can make it happen
> again. That said, it's a bit of apples-to-oranges but worth doing
> nonetheless... It's still clearly happening from some relatively new code
> related to the transient core thing, given the trace is coming from
> removeEldestEntry eventually...

Since you have the tests and can easily check this, I would appreciate it. We would like to fix this. The trace below is the same issue. I'm pretty sure the patch addresses it (though don't commit it, it's still hacky), but confirmation would be great.
- Mark

> Found one Java-level deadlock:
> =============================
> "commitScheduler-15616-thread-1":
>   waiting to lock monitor 7f920387fd58 (object 7879df120, a org.apache.solr.update.DefaultSolrCoreState),
>   which is held by "qtp1490642445-15"
> "qtp1490642445-15":
>   waiting to lock monitor 7f9204803bc0 (object 786d3df78, a java.lang.Object),
>   which is held by "commitScheduler-15616-thread-1"
>
> Java stack information for the threads listed above:
> ===================================================
> "commitScheduler-15616-thread-1":
>   at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:78)
>   - waiting to lock <7879df120> (a org.apache.solr.update.DefaultSolrCoreState)
>   at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1359)
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1535)
>   at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:550)
>   - locked <786d3df78> (a java.lang.Object)
>   at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:680)
> "qtp1490642445-15":
>   at org.apache.solr.update.DirectUpdateHandler2.closeWriter(DirectUpdateHandler2.java:705)
>   - waiting to lock <786d3df78> (a java.lang.Object)
>   at org.apache.solr.update.DefaultSolrCoreState.closeIndexWriter(DefaultSolrCoreState.java:64)
>   - locked <7879df120> (a org.apache.solr.update.DefaultSolrCoreState)
>   at org.apache.solr.update.DefaultSolrCoreState.close(DefaultSolrCoreState.java:272)
>   - locked <7879df120> (a org.apache.solr.update.DefaultSolrCoreState)
>   at org.apache.solr.core.SolrCore.decrefSolrCoreState(SolrCore.java:888)
>   - locked <7879df120> (a org.apache.solr.update.DefaultSolrCoreState)
>   at org.apache.solr.core.SolrCore.close(SolrCore.java:980)
>   at org.apache.solr.core.CoreContainer$2.removeEldestEntry(CoreContainer.java:385)
>   at java.util.LinkedHashMap.addEntry(LinkedHashMap.java:410)
>   at java.util.HashMap.put(HashMap.java:385)
>   at org.apache.solr.core.CoreContainer.registerCore(CoreContainer.java:864)
>   - locked <785614df8> (a org.apache.solr.core.CoreContainer$2)
>   at org.apache.solr.core.CoreContainer.registerLazyCore(CoreContainer.java:829)
>   at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1321)
>   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:190)
>   at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
>   at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
>   at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:13
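The trace above is a classic lock-ordering deadlock: the commit thread holds the commit lock and waits for the core-state monitor, while the close path holds the core-state monitor and waits for the commit lock. A hedged sketch (names are illustrative, not Solr's actual code) of the usual fix, acquiring both locks in one global order:

```java
// Illustrative only (not Solr's code): two code paths that both need the
// core-state monitor and the commit lock. Because each path acquires them in
// the same global order (coreState first, commitLock second), the cyclic wait
// seen in the thread dump cannot occur.
public class Main {
    static final Object coreState = new Object();
    static final Object commitLock = new Object();

    static String commitPath() {
        synchronized (coreState) {
            synchronized (commitLock) {
                return "committed";
            }
        }
    }

    static String closePath() {
        synchronized (coreState) {      // same order as commitPath()
            synchronized (commitLock) {
                return "closed";
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(Main::commitPath);
        Thread t2 = new Thread(Main::closePath);
        t1.start();
        t2.start();
        t1.join();  // both threads finish: no deadlock
        t2.join();
        System.out.println("done");
    }
}
```

Whether the actual patch restructures the locks this way or avoids holding one lock while taking the other is something only the attached patch can confirm.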
[jira] [Updated] (SOLR-4196) Untangle XML-specific nature of Config and Container classes
[ https://issues.apache.org/jira/browse/SOLR-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erick Erickson updated SOLR-4196:
---------------------------------

    Attachment: StressTest.zip

Added the ability to create the old-style solr.xml file rather than the new-style properties file for testing against current trunk. See the -x param.

> Untangle XML-specific nature of Config and Container classes
> ------------------------------------------------------------
>
>                 Key: SOLR-4196
>                 URL: https://issues.apache.org/jira/browse/SOLR-4196
>             Project: Solr
>          Issue Type: Improvement
>          Components: Schema and Analysis
>            Reporter: Erick Erickson
>            Assignee: Erick Erickson
>            Priority: Minor
>             Fix For: 4.2, 5.0
>
>         Attachments: SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch,
> SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch,
> SOLR-4196.patch, SOLR-4196.patch, SOLR-4196.patch, StressTest.zip,
> StressTest.zip, StressTest.zip
>
> Sub-task for SOLR-4083. If we're going to try to obsolete solr.xml, we need
> to pull all of the XML-specific processing out of Config and Container.
> Currently, we refer to xpaths all over the place. This JIRA is about
> providing a thunking layer to isolate the XML-esque nature of solr.xml and
> allow a simple properties file to be used instead, which will lead,
> eventually, to solr.xml going away.
[jira] [Updated] (SOLR-2850) Do not refine facets when minCount == 1
[ https://issues.apache.org/jira/browse/SOLR-2850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Lundgren updated SOLR-2850:
----------------------------------

    Attachment: SOLR-2850.patch

Modified the code to skip facet refinement when it is not needed.

> Do not refine facets when minCount == 1
> ---------------------------------------
>
>                 Key: SOLR-2850
>                 URL: https://issues.apache.org/jira/browse/SOLR-2850
>             Project: Solr
>          Issue Type: Improvement
>          Components: SearchComponents - other
>    Affects Versions: 3.4
>         Environment: Ubuntu, distributed
>            Reporter: Matt Smith
>         Attachments: SOLR-2850.patch
>
> Currently there is a special case in the code to not refine facets if
> minCount == 0. It seems this could be extended to minCount <= 1, as there would
> be no need to take the extra step of refining facets if minCount is 1.
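A hedged sketch of the decision being changed (illustrative names, not the actual patch): distributed faceting issues refinement requests to shards to get exact counts for candidate terms, and the proposal widens the existing minCount == 0 shortcut so refinement is also skipped when minCount is 1.

```java
// Illustrative only, not the actual patch: the refinement decision in
// distributed faceting. Refinement re-queries shards for exact counts of
// candidate terms; the issue proposes widening the existing minCount == 0
// shortcut to minCount <= 1.
public class Main {
    static boolean needsRefinement(int minCount) {
        // Before: only minCount == 0 skipped refinement.
        // Proposed: skip whenever minCount <= 1.
        return minCount > 1;
    }

    public static void main(String[] args) {
        System.out.println(needsRefinement(0)); // false
        System.out.println(needsRefinement(1)); // false
        System.out.println(needsRefinement(2)); // true
    }
}
```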
[jira] [Issue Comment Deleted] (SOLR-4146) Error handling 'status' action, cannot access GUI
[ https://issues.apache.org/jira/browse/SOLR-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Leonardo Fedalto updated SOLR-4146:
-----------------------------------

    Comment: was deleted

(was: UI Screen shot)

> Error handling 'status' action, cannot access GUI
> -------------------------------------------------
>
>                 Key: SOLR-4146
>                 URL: https://issues.apache.org/jira/browse/SOLR-4146
>             Project: Solr
>          Issue Type: Bug
>          Components: SolrCloud, web gui
>    Affects Versions: 5.0
>            Reporter: Markus Jelsma
>             Fix For: 5.0
>
>         Attachments: solr.png
>
> We sometimes see a node not responding to GUI requests. It then generates the
> stack trace below. It does respond to search requests.
> {code}
> 2012-12-05 15:53:24,329 ERROR [solr.core.SolrCore] - [http-8080-exec-7] - : org.apache.solr.common.SolrException: Error handling 'status' action
>   at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:725)
>   at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
>   at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
>   at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
>   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>   at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>   at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>   at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>   at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
>   at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
>   at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
>   at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.solr.common.SolrException: java.util.concurrent.RejectedExecutionException
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1674)
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1330)
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1265)
>   at org.apache.solr.handler.admin.CoreAdminHandler.getCoreStatus(CoreAdminHandler.java:997)
>   at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:711)
>   ... 18 more
> Caused by: java.util.concurrent.RejectedExecutionException
>   at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1768)
>   at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
>   at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
>   at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
>   at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:603)
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1605)
>   ... 22 more
> {code}
[jira] [Updated] (SOLR-4146) Error handling 'status' action, cannot access GUI
[ https://issues.apache.org/jira/browse/SOLR-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Leonardo Fedalto updated SOLR-4146:
-----------------------------------

    Attachment: solr.png

UI screenshot.

> Error handling 'status' action, cannot access GUI
> -------------------------------------------------
>
>                 Key: SOLR-4146
>                 URL: https://issues.apache.org/jira/browse/SOLR-4146
>             Project: Solr
>          Issue Type: Bug
>          Components: SolrCloud, web gui
>    Affects Versions: 5.0
>            Reporter: Markus Jelsma
>             Fix For: 5.0
>
>         Attachments: solr.png
>
> We sometimes see a node not responding to GUI requests. It then generates the
> stack trace below. It does respond to search requests.
> {code}
> 2012-12-05 15:53:24,329 ERROR [solr.core.SolrCore] - [http-8080-exec-7] - : org.apache.solr.common.SolrException: Error handling 'status' action
>   at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:725)
>   at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
>   at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
>   at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
>   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>   at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>   at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>   at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>   at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
>   at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
>   at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
>   at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.solr.common.SolrException: java.util.concurrent.RejectedExecutionException
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1674)
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1330)
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1265)
>   at org.apache.solr.handler.admin.CoreAdminHandler.getCoreStatus(CoreAdminHandler.java:997)
>   at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:711)
>   ... 18 more
> Caused by: java.util.concurrent.RejectedExecutionException
>   at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1768)
>   at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
>   at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
>   at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
>   at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:603)
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1605)
>   ... 22 more
> {code}
[jira] [Issue Comment Deleted] (SOLR-4146) Error handling 'status' action, cannot access GUI
[ https://issues.apache.org/jira/browse/SOLR-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Leonardo Fedalto updated SOLR-4146:
-----------------------------------

    Comment: was deleted

(was: Hi, I'm having a similar issue. Only the UI for the Solr slaves seems not to work (screenshot attached). The slaves still handle requests correctly.

{code}
2013-02-01 16:58:58,528 4968280 ERROR [org.apache.solr.core.SolrCore] (web-9:::) - org.apache.solr.common.SolrException: Error handling 'status' action
	at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:714)
	at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:157)
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
	at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
	at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:554)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
	at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:864)
	at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579)
	at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:2173)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.lucene.store.AlreadyClosedException: this Directory is closed
	at org.apache.lucene.store.Directory.ensureOpen(Directory.java:255)
	at org.apache.lucene.store.RAMDirectory.listAll(RAMDirectory.java:107)
	at org.apache.lucene.store.NRTCachingDirectory.listAll(NRTCachingDirectory.java:124)
	at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:679)
	at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:630)
	at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:343)
	at org.apache.lucene.index.StandardDirectoryReader.isCurrent(StandardDirectoryReader.java:326)
	at org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:553)
	at org.apache.solr.handler.admin.CoreAdminHandler.getCoreStatus(CoreAdminHandler.java:988)
	at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:700)
	... 19 more
{code})

> Error handling 'status' action, cannot access GUI
> -------------------------------------------------
>
>                 Key: SOLR-4146
>                 URL: https://issues.apache.org/jira/browse/SOLR-4146
>             Project: Solr
>          Issue Type: Bug
>          Components: SolrCloud, web gui
>    Affects Versions: 5.0
>            Reporter: Markus Jelsma
>             Fix For: 5.0
>
> We sometimes see a node not responding to GUI requests. It then generates the
> stack trace below. It does respond to search requests.
> {code}
> 2012-12-05 15:53:24,329 ERROR [solr.core.SolrCore] - [http-8080-exec-7] - : org.apache.solr.common.SolrException: Error handling 'status' action
>   at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:725)
>   at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
>   at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
>   at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
>   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>   at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrappe
[jira] [Commented] (SOLR-4146) Error handling 'status' action, cannot access GUI
[ https://issues.apache.org/jira/browse/SOLR-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13569162#comment-13569162 ]

Leonardo Fedalto commented on SOLR-4146:
----------------------------------------

Hi, I'm having a similar issue. Only the UI for the Solr slaves seems not to work (screenshot attached). The slaves still handle requests correctly.

{code}
2013-02-01 16:58:58,528 4968280 ERROR [org.apache.solr.core.SolrCore] (web-9:::) - org.apache.solr.common.SolrException: Error handling 'status' action
	at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:714)
	at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:157)
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
	at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
	at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:554)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
	at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:864)
	at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579)
	at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:2173)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.lucene.store.AlreadyClosedException: this Directory is closed
	at org.apache.lucene.store.Directory.ensureOpen(Directory.java:255)
	at org.apache.lucene.store.RAMDirectory.listAll(RAMDirectory.java:107)
	at org.apache.lucene.store.NRTCachingDirectory.listAll(NRTCachingDirectory.java:124)
	at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:679)
	at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:630)
	at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:343)
	at org.apache.lucene.index.StandardDirectoryReader.isCurrent(StandardDirectoryReader.java:326)
	at org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:553)
	at org.apache.solr.handler.admin.CoreAdminHandler.getCoreStatus(CoreAdminHandler.java:988)
	at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:700)
	... 19 more
{code}

> Error handling 'status' action, cannot access GUI
> -------------------------------------------------
>
>                 Key: SOLR-4146
>                 URL: https://issues.apache.org/jira/browse/SOLR-4146
>             Project: Solr
>          Issue Type: Bug
>          Components: SolrCloud, web gui
>    Affects Versions: 5.0
>            Reporter: Markus Jelsma
>             Fix For: 5.0
>
> We sometimes see a node not responding to GUI requests. It then generates the
> stack trace below. It does respond to search requests.
> {code}
> 2012-12-05 15:53:24,329 ERROR [solr.core.SolrCore] - [http-8080-exec-7] - : org.apache.solr.common.SolrException: Error handling 'status' action
>   at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:725)
>   at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
>   at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
>   at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
>   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>   at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at org.apache.catalina.core.Standa
[jira] [Commented] (SOLR-4146) Error handling 'status' action, cannot access GUI
[ https://issues.apache.org/jira/browse/SOLR-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13569161#comment-13569161 ]

Leonardo Fedalto commented on SOLR-4146:
----------------------------------------

Hi, I'm having a similar issue. Only the UI for the Solr slaves seems not to work (screenshot attached). The slaves still handle requests correctly.

{code}
2013-02-01 16:58:58,528 4968280 ERROR [org.apache.solr.core.SolrCore] (web-9:::) - org.apache.solr.common.SolrException: Error handling 'status' action
	at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:714)
	at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:157)
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
	at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
	at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:554)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
	at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:864)
	at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579)
	at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:2173)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.lucene.store.AlreadyClosedException: this Directory is closed
	at org.apache.lucene.store.Directory.ensureOpen(Directory.java:255)
	at org.apache.lucene.store.RAMDirectory.listAll(RAMDirectory.java:107)
	at org.apache.lucene.store.NRTCachingDirectory.listAll(NRTCachingDirectory.java:124)
	at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:679)
	at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:630)
	at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:343)
	at org.apache.lucene.index.StandardDirectoryReader.isCurrent(StandardDirectoryReader.java:326)
	at org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:553)
	at org.apache.solr.handler.admin.CoreAdminHandler.getCoreStatus(CoreAdminHandler.java:988)
	at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:700)
	... 19 more
{code}

> Error handling 'status' action, cannot access GUI
> -------------------------------------------------
>
>                 Key: SOLR-4146
>                 URL: https://issues.apache.org/jira/browse/SOLR-4146
>             Project: Solr
>          Issue Type: Bug
>          Components: SolrCloud, web gui
>    Affects Versions: 5.0
>            Reporter: Markus Jelsma
>             Fix For: 5.0
>
> We sometimes see a node not responding to GUI requests. It then generates the
> stack trace below. It does respond to search requests.
> {code}
> 2012-12-05 15:53:24,329 ERROR [solr.core.SolrCore] - [http-8080-exec-7] - : org.apache.solr.common.SolrException: Error handling 'status' action
>   at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:725)
>   at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
>   at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
>   at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
>   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>   at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at org.apache.catalina.core.Standa
[jira] [Updated] (SOLR-4397) Support for filtering after sorting took place
[ https://issues.apache.org/jira/browse/SOLR-4397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Molloy updated SOLR-4397:
-------------------------------

    Attachment: solr-4397.txt

Could probably be improved a lot, but here's a first stab at a patch that seems to work.

> Support for filtering after sorting took place
> ----------------------------------------------
>
>                 Key: SOLR-4397
>                 URL: https://issues.apache.org/jira/browse/SOLR-4397
>             Project: Solr
>          Issue Type: Improvement
>          Components: search
>    Affects Versions: 4.1
>            Reporter: Steve Molloy
>         Attachments: solr-4397.txt
>
> For really costly filters, we want to be able to stop filtering (and accept
> everything else) after enough results have been accepted to fill the result
> page. This would not produce accurate counts, but would allow offering
> acceptable response times for heavy filters (e.g. security checks calling back
> to an external system). This can only work if sorting is done before filtering,
> so the proposal is to have a way of plugging in filtering after sorting has
> occurred.
[jira] [Created] (SOLR-4397) Support for filtering after sorting took place
Steve Molloy created SOLR-4397: -- Summary: Support for filtering after sorting took place Key: SOLR-4397 URL: https://issues.apache.org/jira/browse/SOLR-4397 Project: Solr Issue Type: Improvement Components: search Affects Versions: 4.1 Reporter: Steve Molloy For really costly filters, we want to be able to stop filtering (and accept everything else) after enough results have been accepted to fill the result page. This would not produce appropriate counts, but would allow offering acceptable response time for heavy filters (security calling back external system). This can only work if sorting is done before filtering though, so the proposition is to have a way of plugging in filtering after sorting has occurred.
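The behaviour described in the issue can be sketched in plain Java. This is an illustration of the idea only — `filterAfterSort`, its signature, and the integer "documents" are invented here for the sketch and are not part of any Solr API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class PostSortFilter {

    // Walk an already-sorted result list and apply the costly predicate only
    // until `pageSize` documents have been accepted; everything after that is
    // accepted unchecked. The first page is exact; total counts are not.
    static <T> List<T> filterAfterSort(List<T> sorted, Predicate<T> costly, int pageSize) {
        List<T> out = new ArrayList<>();
        int accepted = 0;
        for (T doc : sorted) {
            if (accepted < pageSize) {
                if (costly.test(doc)) {
                    out.add(doc);
                    accepted++;
                }
            } else {
                out.add(doc); // page already filled: skip the costly check
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // pretend the even numbers are the docs the costly filter would accept
        List<Integer> sorted = List.of(1, 2, 3, 4, 5, 6, 7, 8);
        System.out.println(filterAfterSort(sorted, d -> d % 2 == 0, 2));
        // prints [2, 4, 5, 6, 7, 8]: exact until the page of 2 is filled,
        // then 5 and 7 slip through because filtering has stopped
    }
}
```

This makes the trade-off in the issue concrete: response time no longer depends on evaluating the filter across the whole result set, at the price of inflated counts past the first page.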
[jira] [Commented] (LUCENE-3550) Create example code for core
[ https://issues.apache.org/jira/browse/LUCENE-3550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568969#comment-13568969 ] Manpreet commented on LUCENE-3550: -- Thanks Shai. I have started work on the above examples. I can see that with the latest changes even the facet examples have moved under the 'demo' module. Cheers -Mandy > Create example code for core > > > Key: LUCENE-3550 > URL: https://issues.apache.org/jira/browse/LUCENE-3550 > Project: Lucene - Core > Issue Type: New Feature > Components: core/other >Reporter: Shai Erera > Labels: newdev > > Trunk has undergone lots of API changes, some of which are not trivial, and > the migration path from 3.x to 4.0 seems hard. I'd like to propose some way > to tackle this, by means of live example code. > The facet module implements this approach. There is live Java code under > src/examples that demonstrates some well-documented scenarios. The code itself > is documented, in addition to javadoc. Also, the code itself is being unit > tested regularly. > We found it very difficult to keep documentation up-to-date -- javadocs > always lag behind, Wiki pages get old etc. However, when you have live Java > code, you're *forced* to keep it up-to-date. It doesn't compile if you break > the API, it fails to run if you change internal impl behavior. If you keep it > simple enough, its documentation stays simple too. > And if we are successful at maintaining it (which we must be, otherwise the > build should fail), then people should have an easy experience migrating > between releases. So say you take the simple scenario "I'd like to index > documents which have the fields ID, date and body". Then you create an > example class/method that accomplishes that. And between releases, this code > gets updated, and people can follow the changes required to implement that > scenario. > I'm not saying the example code should always stay optimized. 
We can aim at > that, but I don't try to fool myself thinking that we'll succeed. But at > least we can get it compiled and regularly unit tested. > I think it would be good if we introduce the concept of examples such > that if a module (core, contrib, modules) has a src/examples, we package it > in a .jar and include it with the binary distribution. That's the first > step. We can also have meta examples, under their own module/contrib, that > show how to combine several modules together (this might even uncover API > problems), but that's definitely a second phase. > At first, let's do the "unit examples" (ala unit tests) and better start with > core. Whatever we succeed at writing for 4.0 will only help users. So let's > use this issue to: > # List example scenarios that we want to demonstrate for core > # Build the infrastructure in our build system to package and distribute a > module's examples. > Please feel free to list here example scenarios that come to mind. We can > then track what's been done and what's not. The more we do the better.
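For the "ID, date and body" scenario suggested above, a core example could look roughly like this against the Lucene 4.x field API. This is a sketch only: it needs the lucene-core and lucene-analyzers-common 4.x jars on the classpath, the `Version` constant depends on the release, and `RAMDirectory` plus the sample values are placeholders:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.LongField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class IndexingExample {
    public static void main(String[] args) throws Exception {
        Directory dir = new RAMDirectory(); // in-memory, for the example only
        IndexWriterConfig cfg =
            new IndexWriterConfig(Version.LUCENE_41, new StandardAnalyzer(Version.LUCENE_41));
        IndexWriter writer = new IndexWriter(dir, cfg);

        Document doc = new Document();
        doc.add(new StringField("id", "doc-1", Store.YES));                    // exact-match key, not analyzed
        doc.add(new LongField("date", System.currentTimeMillis(), Store.YES)); // numeric, range-searchable
        doc.add(new TextField("body", "hello example world", Store.NO));       // analyzed full text
        writer.addDocument(doc);
        writer.close();
    }
}
```

Exactly because the field API shifted between 3.x and 4.x (`Field` with flags vs. the typed `StringField`/`TextField`/`LongField` classes), a compiled example like this is the kind of thing the issue argues would be forced to stay current.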
[jira] [Commented] (LUCENE-4733) Make CompressingTermVectorsFormat the new default term vectors format?
[ https://issues.apache.org/jira/browse/LUCENE-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568960#comment-13568960 ] Commit Tag Bot commented on LUCENE-4733: [branch_4x commit] Robert Muir http://svn.apache.org/viewvc?view=revision&revision=1441578 LUCENE-4733: merge Lucene42 codec from lucene-4547 branch > Make CompressingTermVectorsFormat the new default term vectors format? > -- > > Key: LUCENE-4733 > URL: https://issues.apache.org/jira/browse/LUCENE-4733 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Assignee: Adrien Grand >Priority: Trivial > Fix For: 4.2 > > Attachments: LUCENE-4733-javadocs.patch, LUCENE-4733-tests.patch > > > In LUCENE-4599, I wrote an alternate term vectors format which has a more > compact format, and I think it could replace the current > Lucene40TermVectorsFormat for the next (4.2) release?
[jira] [Commented] (LUCENE-4733) Make CompressingTermVectorsFormat the new default term vectors format?
[ https://issues.apache.org/jira/browse/LUCENE-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568954#comment-13568954 ] Robert Muir commented on LUCENE-4733: - OK, it's done: have fun! > Make CompressingTermVectorsFormat the new default term vectors format? > -- > > Key: LUCENE-4733 > URL: https://issues.apache.org/jira/browse/LUCENE-4733 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Assignee: Adrien Grand >Priority: Trivial > Fix For: 4.2 > > Attachments: LUCENE-4733-javadocs.patch, LUCENE-4733-tests.patch > > > In LUCENE-4599, I wrote an alternate term vectors format which has a more > compact format, and I think it could replace the current > Lucene40TermVectorsFormat for the next (4.2) release?
[jira] [Commented] (LUCENE-4733) Make CompressingTermVectorsFormat the new default term vectors format?
[ https://issues.apache.org/jira/browse/LUCENE-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568939#comment-13568939 ] Commit Tag Bot commented on LUCENE-4733: [trunk commit] Robert Muir http://svn.apache.org/viewvc?view=revision&revision=1441571 LUCENE-4733: merge Lucene42 codec from lucene-4547 branch > Make CompressingTermVectorsFormat the new default term vectors format? > -- > > Key: LUCENE-4733 > URL: https://issues.apache.org/jira/browse/LUCENE-4733 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Assignee: Adrien Grand >Priority: Trivial > Fix For: 4.2 > > Attachments: LUCENE-4733-javadocs.patch, LUCENE-4733-tests.patch > > > In LUCENE-4599, I wrote an alternate term vectors format which has a more > compact format, and I think it could replace the current > Lucene40TermVectorsFormat for the next (4.2) release?
[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_10) - Build # 2478 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/2478/ Java: 32bit/jdk1.7.0_10 -client -XX:+UseConcMarkSweepGC All tests passed Build Log: [...truncated 30559 lines...] BUILD FAILED C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:305: The following error occurred while executing this line: C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\extra-targets.xml:120: The following files are missing svn:eol-style (or binary svn:mime-type): * solr/core/src/test/org/apache/solr/update/HardAutoCommitTest.java Total time: 71 minutes 54 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Recording test results Description set: Java: 32bit/jdk1.7.0_10 -client -XX:+UseConcMarkSweepGC Email was triggered for: Failure Sending email for trigger: Failure
[jira] [Commented] (LUCENE-4733) Make CompressingTermVectorsFormat the new default term vectors format?
[ https://issues.apache.org/jira/browse/LUCENE-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568896#comment-13568896 ] Robert Muir commented on LUCENE-4733: - Unrelated branches shouldn't delay trunk development. Let me try to merge the relevant commits for the 4.2 codec (and the 4.1 impersonator and so on) to trunk... > Make CompressingTermVectorsFormat the new default term vectors format? > -- > > Key: LUCENE-4733 > URL: https://issues.apache.org/jira/browse/LUCENE-4733 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Assignee: Adrien Grand >Priority: Trivial > Fix For: 4.2 > > Attachments: LUCENE-4733-javadocs.patch, LUCENE-4733-tests.patch > > > In LUCENE-4599, I wrote an alternate term vectors format which has a more > compact format, and I think it could replace the current > Lucene40TermVectorsFormat for the next (4.2) release?
Re: [JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1298 - Failure
I'm sad you blew a gaping hole in my theory. On Fri, Feb 1, 2013 at 11:50 AM, Mark Miller wrote: > Nope, no Git involved. Just a lazy conflicts merge up it seems. All SVN. > > It seems I don't have the latest svn config on my newest dev machine. Prob > lost it after an ubuntu reinstall. > > - Mark > > On Feb 1, 2013, at 11:43 AM, Robert Muir wrote: > >> I supposed that's my answer (to the git-data-loss question)... >> >> Git considered harmful! >> >> On Fri, Feb 1, 2013 at 11:30 AM, Apache Jenkins Server >> wrote: >>> Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1298/ >>> >>> All tests passed >>> >>> Build Log: >>> [...truncated 29944 lines...] >>> BUILD FAILED >>> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/build.xml:305: >>> The following error occurred while executing this line: >>> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/extra-targets.xml:120: >>> The following files are missing svn:eol-style (or binary svn:mime-type): >>> * solr/core/src/test/org/apache/solr/update/HardAutoCommitTest.java >>> >>> Total time: 56 minutes 3 seconds >>> Build step 'Invoke Ant' marked build as failure >>> Archiving artifacts >>> Recording test results >>> Email was triggered for: Failure >>> Sending email for trigger: Failure
Re: [JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1298 - Failure
Nope, no Git involved. Just a lazy conflicts merge up it seems. All SVN. It seems I don't have the latest svn config on my newest dev machine. Prob lost it after an ubuntu reinstall. - Mark On Feb 1, 2013, at 11:43 AM, Robert Muir wrote: > I supposed that's my answer (to the git-data-loss question)... > > Git considered harmful! > > On Fri, Feb 1, 2013 at 11:30 AM, Apache Jenkins Server > wrote: >> Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1298/ >> >> All tests passed >> >> Build Log: >> [...truncated 29944 lines...] >> BUILD FAILED >> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/build.xml:305: >> The following error occurred while executing this line: >> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/extra-targets.xml:120: >> The following files are missing svn:eol-style (or binary svn:mime-type): >> * solr/core/src/test/org/apache/solr/update/HardAutoCommitTest.java >> >> Total time: 56 minutes 3 seconds >> Build step 'Invoke Ant' marked build as failure >> Archiving artifacts >> Recording test results >> Email was triggered for: Failure >> Sending email for trigger: Failure >> >> >> >> >> - >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> For additional commands, e-mail: dev-h...@lucene.apache.org > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
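The svn config Mark mentions is presumably the auto-props section. The usual prevention for this recurring `svn:eol-style` failure looks something like the following in `~/.subversion/config` (the patterns here are illustrative; the project's full recommended property list may differ):

```
[miscellany]
enable-auto-props = yes

[auto-props]
*.java = svn:eol-style=native
*.xml = svn:eol-style=native
*.txt = svn:eol-style=native
```

A file that was already committed without the property can be fixed after the fact with `svn propset svn:eol-style native <path>` followed by a commit.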
Re: [JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1298 - Failure
I suppose that's my answer (to the git-data-loss question)... Git considered harmful! On Fri, Feb 1, 2013 at 11:30 AM, Apache Jenkins Server wrote: > Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1298/ > > All tests passed > > Build Log: > [...truncated 29944 lines...] > BUILD FAILED > /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/build.xml:305: > The following error occurred while executing this line: > /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/extra-targets.xml:120: > The following files are missing svn:eol-style (or binary svn:mime-type): > * solr/core/src/test/org/apache/solr/update/HardAutoCommitTest.java > > Total time: 56 minutes 3 seconds > Build step 'Invoke Ant' marked build as failure > Archiving artifacts > Recording test results > Email was triggered for: Failure > Sending email for trigger: Failure
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_10) - Build # 4097 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/4097/ Java: 32bit/jdk1.7.0_10 -server -XX:+UseG1GC All tests passed Build Log: [...truncated 30716 lines...] BUILD FAILED /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:305: The following error occurred while executing this line: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:120: The following files are missing svn:eol-style (or binary svn:mime-type): * solr/core/src/test/org/apache/solr/update/HardAutoCommitTest.java Total time: 37 minutes 0 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Recording test results Description set: Java: 32bit/jdk1.7.0_10 -server -XX:+UseG1GC Email was triggered for: Failure Sending email for trigger: Failure
Re: svn commit: r1441483 - in /lucene/dev/trunk/solr: ./ core/src/java/org/apache/solr/core/ core/src/java/org/apache/solr/update/ core/src/test-files/solr/collection1/conf/ core/src/test/org/apache/s
I am also curious if you used git here, and if this was unintentional. I've seen this exact thing in patches with people using git: so either git itself is broken or encourages broken workflows with result in silent data loss... very scary. On Fri, Feb 1, 2013 at 11:12 AM, Alan Woodward wrote: > Hey Mark, did you mean to back out these CHANGES.TXT entries? > > Alan Woodward > www.flax.co.uk > > > On 1 Feb 2013, at 15:29, markrmil...@apache.org wrote: > > Author: markrmiller > Date: Fri Feb 1 15:29:08 2013 > New Revision: 1441483 > > URL: http://svn.apache.org/viewvc?rev=1441483&view=rev > Log: > SOLR-4370: Allow configuring commitWithin to do hard commits. > > Added: > > lucene/dev/trunk/solr/core/src/test/org/apache/solr/update/HardAutoCommitTest.java > (with props) > Modified: >lucene/dev/trunk/solr/CHANGES.txt >lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java > > lucene/dev/trunk/solr/core/src/java/org/apache/solr/update/DirectUpdateHandler2.java > > lucene/dev/trunk/solr/core/src/test-files/solr/collection1/conf/solrconfig.xml > > lucene/dev/trunk/solr/core/src/test/org/apache/solr/update/AutoCommitTest.java > > Modified: lucene/dev/trunk/solr/CHANGES.txt > URL: > http://svn.apache.org/viewvc/lucene/dev/trunk/solr/CHANGES.txt?rev=1441483&r1=1441482&r2=1441483&view=diff > == > --- lucene/dev/trunk/solr/CHANGES.txt (original) > +++ lucene/dev/trunk/solr/CHANGES.txt Fri Feb 1 15:29:08 2013 > @@ -68,6 +68,9 @@ New Features > > * SOLR-2827: RegexpBoost Update Processor (janhoy) > > +* SOLR-4370: Allow configuring commitWithin to do hard commits. > + (Mark Miller, Senthuran Sivananthan) > + > Bug Fixes > -- > > @@ -97,11 +100,6 @@ Bug Fixes > > * SOLR-4342: Fix DataImportHandler stats to be a prper Map (hossman) > > -* SOLR-3967: langid.enforceSchema option checks source field instead of > target field (janhoy) > - > -* SOLR-4380: Replicate after startup option would not replicate until the > - IndexWriter was lazily opened. 
(Mark Miller, Gregg Donovan) > - > Optimizations > -- > > @@ -119,11 +117,6 @@ Optimizations > * SOLR-4306: Utilize indexInfo=false when gathering core names in UI > (steffkes) > > -* SOLR-4284: Admin UI - make core list scrollable separate from the rest of > - the UI (steffkes) > - > -* SOLR-4364: Admin UI - Locale based number formatting (steffkes) > - > Other Changes > -- > > @@ -132,8 +125,6 @@ Other Changes > > * SOLR-4353: Renamed example jetty context file to reduce confusion > (hossman) > > -* SOLR-4384: Make post.jar report timing information (Upayavira via janhoy) > - > == 4.1.0 == > > Versions of Major Components > > Modified: > lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java > URL: > http://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java?rev=1441483&r1=1441482&r2=1441483&view=diff > == > --- lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java > (original) > +++ lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java > Fri Feb 1 15:29:08 2013 > @@ -238,7 +238,8 @@ public class SolrConfig extends Config { > getBool("updateHandler/autoCommit/openSearcher",true), > getInt("updateHandler/commitIntervalLowerBound",-1), > getInt("updateHandler/autoSoftCommit/maxDocs",-1), > -getInt("updateHandler/autoSoftCommit/maxTime",-1)); > +getInt("updateHandler/autoSoftCommit/maxTime",-1), > +getBool("updateHandler/commitWithin/softCommit",true)); > } > > private void loadPluginInfo(Class clazz, String tag, boolean requireName, > boolean requireClass) { > @@ -402,6 +403,7 @@ public class SolrConfig extends Config { > public final int > autoCommmitMaxDocs,autoCommmitMaxTime,commitIntervalLowerBound, > autoSoftCommmitMaxDocs,autoSoftCommmitMaxTime; > public final boolean openSearcher; // is opening a new searcher part of > hard autocommit? 
> +public final boolean commitWithinSoftCommit; > > /** > * @param autoCommmitMaxDocs set -1 as default > @@ -409,7 +411,7 @@ public class SolrConfig extends Config { > * @param commitIntervalLowerBound set -1 as default > */ > public UpdateHandlerInfo(String className, int autoCommmitMaxDocs, int > autoCommmitMaxTime, boolean openSearcher, int commitIntervalLowerBound, > -int autoSoftCommmitMaxDocs, int autoSoftCommmitMaxTime) { > +int autoSoftCommmitMaxDocs, int autoSoftCommmitMaxTime, boolean > commitWithinSoftCommit) { > this.className = className; > this.autoCommmitMaxDocs = autoCommmitMaxDocs; > this.autoCommmitMaxTime = autoCommmitMaxTime; > @@ -418,6 +420,8 @@ public class SolrConfig extends Config { > >
Re: svn commit: r1441483 - in /lucene/dev/trunk/solr: ./ core/src/java/org/apache/solr/core/ core/src/java/org/apache/solr/update/ core/src/test-files/solr/collection1/conf/ core/src/test/org/apache/s
woah, def not - bad merge of somehow, thanks. - Mark On Feb 1, 2013, at 11:12 AM, Alan Woodward wrote: > Hey Mark, did you mean to back out these CHANGES.TXT entries? > > Alan Woodward > www.flax.co.uk > > > On 1 Feb 2013, at 15:29, markrmil...@apache.org wrote: > >> Author: markrmiller >> Date: Fri Feb 1 15:29:08 2013 >> New Revision: 1441483 >> >> URL: http://svn.apache.org/viewvc?rev=1441483&view=rev >> Log: >> SOLR-4370: Allow configuring commitWithin to do hard commits. >> >> Added: >> >> lucene/dev/trunk/solr/core/src/test/org/apache/solr/update/HardAutoCommitTest.java >>(with props) >> Modified: >>lucene/dev/trunk/solr/CHANGES.txt >>lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java >> >> lucene/dev/trunk/solr/core/src/java/org/apache/solr/update/DirectUpdateHandler2.java >> >> lucene/dev/trunk/solr/core/src/test-files/solr/collection1/conf/solrconfig.xml >> >> lucene/dev/trunk/solr/core/src/test/org/apache/solr/update/AutoCommitTest.java >> >> Modified: lucene/dev/trunk/solr/CHANGES.txt >> URL: >> http://svn.apache.org/viewvc/lucene/dev/trunk/solr/CHANGES.txt?rev=1441483&r1=1441482&r2=1441483&view=diff >> == >> --- lucene/dev/trunk/solr/CHANGES.txt (original) >> +++ lucene/dev/trunk/solr/CHANGES.txt Fri Feb 1 15:29:08 2013 >> @@ -68,6 +68,9 @@ New Features >> >> * SOLR-2827: RegexpBoost Update Processor (janhoy) >> >> +* SOLR-4370: Allow configuring commitWithin to do hard commits. >> + (Mark Miller, Senthuran Sivananthan) >> + >> Bug Fixes >> -- >> >> @@ -97,11 +100,6 @@ Bug Fixes >> >> * SOLR-4342: Fix DataImportHandler stats to be a prper Map (hossman) >> >> -* SOLR-3967: langid.enforceSchema option checks source field instead of >> target field (janhoy) >> - >> -* SOLR-4380: Replicate after startup option would not replicate until the >> - IndexWriter was lazily opened. 
(Mark Miller, Gregg Donovan) >> - >> Optimizations >> -- >> >> @@ -119,11 +117,6 @@ Optimizations >> * SOLR-4306: Utilize indexInfo=false when gathering core names in UI >> (steffkes) >> >> -* SOLR-4284: Admin UI - make core list scrollable separate from the rest of >> - the UI (steffkes) >> - >> -* SOLR-4364: Admin UI - Locale based number formatting (steffkes) >> - >> Other Changes >> -- >> >> @@ -132,8 +125,6 @@ Other Changes >> >> * SOLR-4353: Renamed example jetty context file to reduce confusion (hossman) >> >> -* SOLR-4384: Make post.jar report timing information (Upayavira via janhoy) >> - >> == 4.1.0 == >> >> Versions of Major Components >> >> Modified: >> lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java >> URL: >> http://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java?rev=1441483&r1=1441482&r2=1441483&view=diff >> == >> --- lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java >> (original) >> +++ lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java >> Fri Feb 1 15:29:08 2013 >> @@ -238,7 +238,8 @@ public class SolrConfig extends Config { >> getBool("updateHandler/autoCommit/openSearcher",true), >> getInt("updateHandler/commitIntervalLowerBound",-1), >> getInt("updateHandler/autoSoftCommit/maxDocs",-1), >> -getInt("updateHandler/autoSoftCommit/maxTime",-1)); >> +getInt("updateHandler/autoSoftCommit/maxTime",-1), >> +getBool("updateHandler/commitWithin/softCommit",true)); >> } >> >> private void loadPluginInfo(Class clazz, String tag, boolean requireName, >> boolean requireClass) { >> @@ -402,6 +403,7 @@ public class SolrConfig extends Config { >> public final int >> autoCommmitMaxDocs,autoCommmitMaxTime,commitIntervalLowerBound, >> autoSoftCommmitMaxDocs,autoSoftCommmitMaxTime; >> public final boolean openSearcher; // is opening a new searcher part of >> hard autocommit? 
>> +public final boolean commitWithinSoftCommit; >> >> /** >> * @param autoCommmitMaxDocs set -1 as default >> @@ -409,7 +411,7 @@ public class SolrConfig extends Config { >> * @param commitIntervalLowerBound set -1 as default >> */ >> public UpdateHandlerInfo(String className, int autoCommmitMaxDocs, int >> autoCommmitMaxTime, boolean openSearcher, int commitIntervalLowerBound, >> -int autoSoftCommmitMaxDocs, int autoSoftCommmitMaxTime) { >> +int autoSoftCommmitMaxDocs, int autoSoftCommmitMaxTime, boolean >> commitWithinSoftCommit) { >> this.className = className; >> this.autoCommmitMaxDocs = autoCommmitMaxDocs; >> this.autoCommmitMaxTime = autoCommmitMaxTime; >> @@ -418,6 +420,8 @@ public class SolrConfig extends Config { >> >> this.autoSoftCommmitMax
[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1298 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1298/ All tests passed Build Log: [...truncated 29944 lines...] BUILD FAILED /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/build.xml:305: The following error occurred while executing this line: /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java6/extra-targets.xml:120: The following files are missing svn:eol-style (or binary svn:mime-type): * solr/core/src/test/org/apache/solr/update/HardAutoCommitTest.java Total time: 56 minutes 3 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Recording test results Email was triggered for: Failure Sending email for trigger: Failure
Re: svn commit: r1441483 - in /lucene/dev/trunk/solr: ./ core/src/java/org/apache/solr/core/ core/src/java/org/apache/solr/update/ core/src/test-files/solr/collection1/conf/ core/src/test/org/apache/s
Hey Mark, did you mean to back out these CHANGES.TXT entries? Alan Woodward www.flax.co.uk On 1 Feb 2013, at 15:29, markrmil...@apache.org wrote: > Author: markrmiller > Date: Fri Feb 1 15:29:08 2013 > New Revision: 1441483 > > URL: http://svn.apache.org/viewvc?rev=1441483&view=rev > Log: > SOLR-4370: Allow configuring commitWithin to do hard commits. > > Added: > > lucene/dev/trunk/solr/core/src/test/org/apache/solr/update/HardAutoCommitTest.java >(with props) > Modified: >lucene/dev/trunk/solr/CHANGES.txt >lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java > > lucene/dev/trunk/solr/core/src/java/org/apache/solr/update/DirectUpdateHandler2.java > > lucene/dev/trunk/solr/core/src/test-files/solr/collection1/conf/solrconfig.xml > > lucene/dev/trunk/solr/core/src/test/org/apache/solr/update/AutoCommitTest.java > > Modified: lucene/dev/trunk/solr/CHANGES.txt > URL: > http://svn.apache.org/viewvc/lucene/dev/trunk/solr/CHANGES.txt?rev=1441483&r1=1441482&r2=1441483&view=diff > == > --- lucene/dev/trunk/solr/CHANGES.txt (original) > +++ lucene/dev/trunk/solr/CHANGES.txt Fri Feb 1 15:29:08 2013 > @@ -68,6 +68,9 @@ New Features > > * SOLR-2827: RegexpBoost Update Processor (janhoy) > > +* SOLR-4370: Allow configuring commitWithin to do hard commits. > + (Mark Miller, Senthuran Sivananthan) > + > Bug Fixes > -- > > @@ -97,11 +100,6 @@ Bug Fixes > > * SOLR-4342: Fix DataImportHandler stats to be a prper Map (hossman) > > -* SOLR-3967: langid.enforceSchema option checks source field instead of > target field (janhoy) > - > -* SOLR-4380: Replicate after startup option would not replicate until the > - IndexWriter was lazily opened. 
(Mark Miller, Gregg Donovan) > - > Optimizations > -- > > @@ -119,11 +117,6 @@ Optimizations > * SOLR-4306: Utilize indexInfo=false when gathering core names in UI > (steffkes) > > -* SOLR-4284: Admin UI - make core list scrollable separate from the rest of > - the UI (steffkes) > - > -* SOLR-4364: Admin UI - Locale based number formatting (steffkes) > - > Other Changes > -- > > @@ -132,8 +125,6 @@ Other Changes > > * SOLR-4353: Renamed example jetty context file to reduce confusion (hossman) > > -* SOLR-4384: Make post.jar report timing information (Upayavira via janhoy) > - > == 4.1.0 == > > Versions of Major Components > > Modified: > lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java > URL: > http://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java?rev=1441483&r1=1441482&r2=1441483&view=diff > == > --- lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java > (original) > +++ lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrConfig.java > Fri Feb 1 15:29:08 2013 > @@ -238,7 +238,8 @@ public class SolrConfig extends Config { > getBool("updateHandler/autoCommit/openSearcher",true), > getInt("updateHandler/commitIntervalLowerBound",-1), > getInt("updateHandler/autoSoftCommit/maxDocs",-1), > -getInt("updateHandler/autoSoftCommit/maxTime",-1)); > +getInt("updateHandler/autoSoftCommit/maxTime",-1), > +getBool("updateHandler/commitWithin/softCommit",true)); > } > > private void loadPluginInfo(Class clazz, String tag, boolean requireName, > boolean requireClass) { > @@ -402,6 +403,7 @@ public class SolrConfig extends Config { > public final int > autoCommmitMaxDocs,autoCommmitMaxTime,commitIntervalLowerBound, > autoSoftCommmitMaxDocs,autoSoftCommmitMaxTime; > public final boolean openSearcher; // is opening a new searcher part of > hard autocommit? 
> +public final boolean commitWithinSoftCommit; > > /** > * @param autoCommmitMaxDocs set -1 as default > @@ -409,7 +411,7 @@ public class SolrConfig extends Config { > * @param commitIntervalLowerBound set -1 as default > */ > public UpdateHandlerInfo(String className, int autoCommmitMaxDocs, int > autoCommmitMaxTime, boolean openSearcher, int commitIntervalLowerBound, > -int autoSoftCommmitMaxDocs, int autoSoftCommmitMaxTime) { > +int autoSoftCommmitMaxDocs, int autoSoftCommmitMaxTime, boolean > commitWithinSoftCommit) { > this.className = className; > this.autoCommmitMaxDocs = autoCommmitMaxDocs; > this.autoCommmitMaxTime = autoCommmitMaxTime; > @@ -418,6 +420,8 @@ public class SolrConfig extends Config { > > this.autoSoftCommmitMaxDocs = autoSoftCommmitMaxDocs; > this.autoSoftCommmitMaxTime = autoSoftCommmitMaxTime; > + > + this.commitWithinSoftCommit = commitWithinSoftCommit; > } > } > > > Modified: > lucene/dev/trunk/solr/core/src/java/or
[jira] [Created] (SOLR-4396) Make JSONWriter a public class
Stephen Tallamy created SOLR-4396: - Summary: Make JSONWriter a public class Key: SOLR-4396 URL: https://issues.apache.org/jira/browse/SOLR-4396 Project: Solr Issue Type: Wish Components: Response Writers Reporter: Stephen Tallamy Priority: Minor Making org.apache.solr.response.JSONWriter a public class would allow people to use it as a basis for their own custom writers that emit a JSON format. Would it be possible to move it out to allow subclassing?
[jira] [Resolved] (SOLR-4370) Ability to control the commit behaviour of commitWithin
[ https://issues.apache.org/jira/browse/SOLR-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-4370. --- Resolution: Fixed > Ability to control the commit behaviour of commitWithin > --- > > Key: SOLR-4370 > URL: https://issues.apache.org/jira/browse/SOLR-4370 > Project: Solr > Issue Type: New Feature > Components: replication (java) >Affects Versions: 4.0, 4.1 >Reporter: Senthuran Sivananthan >Assignee: Mark Miller > Fix For: 4.2, 5.0 > > Attachments: SOLR-4370.patch, with_commit.txt, without_commit.txt > > > We need the ability to control the hard/soft commit behaviour of the commitWithin > parameter. > Since Solr 4.0, commitWithin performs a soft commit, which prevents > slaves from picking up the changes in a master/slave configuration. > The behaviour I'm thinking of is as follows: > 1. By default, commitWithin will trigger soft commits. > 2. But this behaviour can be overridden in solrconfig.xml to allow > commitWithin to perform hard commits, which will allow slaves to pick up the > changes. > > true > > Related to SOLR-4100 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-4395) Create a mechanism to promote a replica to leader
Yago Riveiro created SOLR-4395: -- Summary: Create a mechanism to promote a replica to leader Key: SOLR-4395 URL: https://issues.apache.org/jira/browse/SOLR-4395 Project: Solr Issue Type: New Feature Components: SolrCloud Reporter: Yago Riveiro Priority: Minor In the following scenario: # A cluster composed of 2 nodes, node A and node B. # Two collections: - Collection 1 with the leader on A and the replica on B, and - Collection 2 with the leader on B and the replica on A. If node A goes down, the replica of collection 1 located on node B will be promoted to leader to preserve failover. When node A comes back online again, all leaders end up on the same node, which may hurt indexing throughput. It would be desirable to have a mechanism to promote a replica to leader and restore the initial layout, to make full use of the physical machines (I/O, CPU, and so on). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4370) Ability to control the commit behaviour of commitWithin
[ https://issues.apache.org/jira/browse/SOLR-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568812#comment-13568812 ] Commit Tag Bot commented on SOLR-4370: -- [branch_4x commit] Mark Robert Miller http://svn.apache.org/viewvc?view=revision&revision=1441490 SOLR-4370: Allow configuring commitWithin to do hard commits. > Ability to control the commit behaviour of commitWithin > --- > > Key: SOLR-4370 > URL: https://issues.apache.org/jira/browse/SOLR-4370 > Project: Solr > Issue Type: New Feature > Components: replication (java) >Affects Versions: 4.0, 4.1 >Reporter: Senthuran Sivananthan >Assignee: Mark Miller > Fix For: 4.2, 5.0 > > Attachments: SOLR-4370.patch, with_commit.txt, without_commit.txt > > > We need the ability to control the hard/soft commit behaviour of commitWithin > parameter. > Since Solr 4.0, the commitWithin's performs a soft-commit which prevents > slaves from picking up the changes in a master/slave configuration. > The behaviour I'm thinking is as follows: > 1. By default, commitWithin will trigger soft commits. > 2. But this behaviour can be overwritten in solrconfig.xml to allow > commitWithin to perform hard commits, which will allow slaves to pick up the > changes. > > true > > Related to SOLR-4100 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4370) Ability to control the commit behaviour of commitWithin
[ https://issues.apache.org/jira/browse/SOLR-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568806#comment-13568806 ] Commit Tag Bot commented on SOLR-4370: -- [trunk commit] Mark Robert Miller http://svn.apache.org/viewvc?view=revision&revision=1441483 SOLR-4370: Allow configuring commitWithin to do hard commits. > Ability to control the commit behaviour of commitWithin > --- > > Key: SOLR-4370 > URL: https://issues.apache.org/jira/browse/SOLR-4370 > Project: Solr > Issue Type: New Feature > Components: replication (java) >Affects Versions: 4.0, 4.1 >Reporter: Senthuran Sivananthan >Assignee: Mark Miller > Fix For: 4.2, 5.0 > > Attachments: SOLR-4370.patch, with_commit.txt, without_commit.txt > > > We need the ability to control the hard/soft commit behaviour of commitWithin > parameter. > Since Solr 4.0, the commitWithin's performs a soft-commit which prevents > slaves from picking up the changes in a master/slave configuration. > The behaviour I'm thinking is as follows: > 1. By default, commitWithin will trigger soft commits. > 2. But this behaviour can be overwritten in solrconfig.xml to allow > commitWithin to perform hard commits, which will allow slaves to pick up the > changes. > > true > > Related to SOLR-4100 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4381) Query-time multi-word synonym expansion
[ https://issues.apache.org/jira/browse/SOLR-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568745#comment-13568745 ] Nolan Lawson commented on SOLR-4381: {quote} Could you specify which private methods in eDisMax you needed to copy/paste? Perhaps we can look at how to make it more extension friendly? {quote} [These lines|https://github.com/healthonnet/hon-lucene-synonyms/blob/master/src/main/java/org/apache/solr/search/SynonymExpandingExtendedDismaxQParserPlugin.java#L494]. {quote} If this issue is to be seriously pursued as part of edismax, the following should be included here in JIRA: {quote} I don't think it should be included in EDisMax itself. Extending EDisMax was just a temporary shortcut I took, but [Jan points out|https://github.com/healthonnet/hon-lucene-synonyms/issues/6] that the solution itself could be applied outside EDisMax, or even outside Solr. {quote} 1. A concise summary of the overall approach, with key technical details. {quote} Please see [this blog post|http://nolanlawson.com/2012/10/31/better-synonym-handling-in-solr/] for the best explanation. {quote} 2. A few example queries, both source and the resulting "parsed query". Key test cases, if you will. {quote} Good idea. [Added to the README.|https://github.com/healthonnet/hon-lucene-synonyms#tweaking-the-results] {quote} 3. A semi-detailed summary of what the user of the change needs to know, in terms of how to set it up, manage it, use it, and its precise effects. {quote} [In the README|https://github.com/healthonnet/hon-lucene-synonyms#query-parameters] for now. {quote} 4. Detail any limitations. {quote} Currently handling this in the [Issues page|https://github.com/healthonnet/hon-lucene-synonyms/issues?state=open]. Otherwise the standard query-time expansion concerns apply: increased delay in query execution, configuration is in the request parameters instead of the {{schema.xml}}, query becomes bloated and incomprehensible. 
Also potential user confusion on the single "best practice" solution for synonyms in Solr, since Solr already has a well-documented way of handling synonyms through the [SynonymFilterFactory|http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory]. As of right now, I assume people will only use my solution if they try the standard solution and are unsatisfied. {quote} 4. Specifically what features of the Synonym Filter will be lost by using this approach. {quote} As far as I know, none, because [I'm still using the SynonymFilterFactory|https://github.com/healthonnet/hon-lucene-synonyms/blob/master/README.md#step-6] and it's configurable by the user. In general, I agree with you that some rapid iteration outside of the Solr core would probably be a better approach than outright integration. Please consider my "merge request" withdrawn; I'll let the code incubate for a bit, and then look into integration later. > Query-time multi-word synonym expansion > --- > > Key: SOLR-4381 > URL: https://issues.apache.org/jira/browse/SOLR-4381 > Project: Solr > Issue Type: Improvement > Components: query parsers >Reporter: Nolan Lawson >Priority: Minor > Labels: multi-word, queryparser, synonyms > Fix For: 4.2, 5.0 > > Attachments: SOLR-4381-2.patch, SOLR-4381.patch > > > This is an issue that seems to come up perennially. > The [Solr > docs|http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory] > caution that index-time synonym expansion should be preferred to query-time > synonym expansion, due to the way multi-word synonyms are treated and how IDF > values can be boosted artificially. But query-time expansion should have huge > benefits, given that changes to the synonyms don't require re-indexing, the > index size stays the same, and the IDF values for the documents don't get > permanently altered. 
> The proposed solution is to move the synonym expansion logic from the > analysis chain (either query- or index-type) and into a new QueryParser. See > the attached patch for an implementation. > The core Lucene functionality is untouched. Instead, the EDismaxQParser is > extended, and synonym expansion is done on-the-fly. Queries are parsed into > a lattice (i.e. all possible synonym combinations), while individual > components of the query are still handled by the EDismaxQParser itself. > It's not an ideal solution by any stretch. But it's nice and self-contained, > so it invites experimentation and improvement. And I think it fits in well > with the merry band of misfit query parsers, like {{func}} and {{frange}}. > More details about this solution can be found in [this blog > post|http://nolanlawson.com/2012/10/31/better-synonym-
[jira] [Updated] (LUCENE-4728) Allow CommonTermsQuery to be highlighted
[ https://issues.apache.org/jira/browse/LUCENE-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Willnauer updated LUCENE-4728: Attachment: LUCENE-4728.patch Here is the "solves everybody's problems" patch. I added the rewrite trick to both highlighters, removed the field requirement in the WeightedSpanTermExtractor, and hacked up a reusing AtomicReader that simply delegates to the MemoryIndex AtomicReader, shadowing the field. Kind of hacky, but that entire thing is hacky, eh? The nice thing is that we don't need to index the token stream twice if we need two different fields. > Allow CommonTermsQuery to be highlighted > > > Key: LUCENE-4728 > URL: https://issues.apache.org/jira/browse/LUCENE-4728 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Affects Versions: 4.1 >Reporter: Simon Willnauer >Assignee: Simon Willnauer > Fix For: 4.2, 5.0 > > Attachments: LUCENE-4728.patch, LUCENE-4728.patch, LUCENE-4728.patch > > > Add support for CommonTermsQuery to all highlighter impls. > This might add a dependency (query-jar) to the highlighter so we might think > about adding it to core? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-4745) Allow FuzzySlop customization in classic QueryParser
[ https://issues.apache.org/jira/browse/LUCENE-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Florian Schilling updated LUCENE-4745: -- Attachment: LUCENE-4745-raw.patch LUCENE-4745-full.patch fixed an error in the new methods > Allow FuzzySlop customization in classic QueryParser > > > Key: LUCENE-4745 > URL: https://issues.apache.org/jira/browse/LUCENE-4745 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/queryparser >Affects Versions: 4.1 >Reporter: Florian Schilling > Fix For: 4.2, 5.0 > > Attachments: LUCENE-4745-full.patch, LUCENE-4745-full.patch, > LUCENE-4745-raw.patch, LUCENE-4745-raw.patch > > > It turns out searching arbitrary fields with define FUZZY_SLOP values could > be problematic on some types of values. For example a FUZZY_SLOP on dates is > ambiguous and needs a definition of a unit like months, days, minutes, etc. > An extension on the query grammar that allows some arbitrary text behind the > values in combination with a possibility to override the method parsing those > values could solve these kinds of problems. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-4745) Allow FuzzySlop customization in classic QueryParser
[ https://issues.apache.org/jira/browse/LUCENE-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Florian Schilling updated LUCENE-4745: -- Attachment: LUCENE-4745-raw.patch LUCENE-4745-full.patch This patch extends the query grammar by allowing an arbitrary suffix on FUZZY_SLOP. With this extension it becomes possible to append units to the values defined by FUZZY_SLOP; e.g. for dates, those units can specify a slop interval in months, days, hours, etc. The file LUCENE-4745-raw.patch contains the changes to the query grammar and QueryParserBase; LUCENE-4745-full.patch additionally contains the changes to the auto-generated parser classes. > Allow FuzzySlop customization in classic QueryParser > > > Key: LUCENE-4745 > URL: https://issues.apache.org/jira/browse/LUCENE-4745 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/queryparser >Affects Versions: 4.1 >Reporter: Florian Schilling > Fix For: 4.2, 5.0 > > Attachments: LUCENE-4745-full.patch, LUCENE-4745-raw.patch > > > It turns out searching arbitrary fields with defined FUZZY_SLOP values can > be problematic for some types of values. For example a FUZZY_SLOP on dates is > ambiguous and needs a definition of a unit like months, days, minutes, etc. > An extension on the query grammar that allows some arbitrary text behind the > values in combination with a possibility to override the method parsing those > values could solve these kinds of problems. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-4745) Allow FuzzySlop customization in classic QueryParser
Florian Schilling created LUCENE-4745: - Summary: Allow FuzzySlop customization in classic QueryParser Key: LUCENE-4745 URL: https://issues.apache.org/jira/browse/LUCENE-4745 Project: Lucene - Core Issue Type: Improvement Components: modules/queryparser Affects Versions: 4.1 Reporter: Florian Schilling Fix For: 4.2, 5.0 It turns out searching arbitrary fields with defined FUZZY_SLOP values can be problematic for some types of values. For example, a FUZZY_SLOP on dates is ambiguous and needs a unit such as months, days, minutes, etc. An extension to the query grammar that allows some arbitrary text after the values, combined with a possibility to override the method parsing those values, could solve these kinds of problems. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4728) Allow CommonTermsQuery to be highlighted
[ https://issues.apache.org/jira/browse/LUCENE-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568722#comment-13568722 ] Uwe Schindler commented on LUCENE-4728: --- bq. But isnt this easily solvable? we just wrap a ParallelReader around that. +1 thats the solution! > Allow CommonTermsQuery to be highlighted > > > Key: LUCENE-4728 > URL: https://issues.apache.org/jira/browse/LUCENE-4728 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Affects Versions: 4.1 >Reporter: Simon Willnauer >Assignee: Simon Willnauer > Fix For: 4.2, 5.0 > > Attachments: LUCENE-4728.patch, LUCENE-4728.patch > > > Add support for CommonTermsQuery to all highlighter impls. > This might add a dependency (query-jar) to the highlighter so we might think > about adding it to core? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4728) Allow CommonTermsQuery to be highlighted
[ https://issues.apache.org/jira/browse/LUCENE-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568717#comment-13568717 ] Robert Muir commented on LUCENE-4728: - {quote} private Map readers = new HashMap(10); {quote} Well this is the core problem: per-field readers. Currently the only reasons multitermqueries rewrite is because they know the field. But isnt this easily solvable? we just wrap a ParallelReader around that. > Allow CommonTermsQuery to be highlighted > > > Key: LUCENE-4728 > URL: https://issues.apache.org/jira/browse/LUCENE-4728 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Affects Versions: 4.1 >Reporter: Simon Willnauer >Assignee: Simon Willnauer > Fix For: 4.2, 5.0 > > Attachments: LUCENE-4728.patch, LUCENE-4728.patch > > > Add support for CommonTermsQuery to all highlighter impls. > This might add a dependency (query-jar) to the highlighter so we might think > about adding it to core? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568716#comment-13568716 ] Kristofer Karlsson commented on LUCENE-4740: That looks good to me. Will this only be applied to latest 4.x or also be backported to 3.x? > Weak references cause extreme GC churn > -- > > Key: LUCENE-4740 > URL: https://issues.apache.org/jira/browse/LUCENE-4740 > Project: Lucene - Core > Issue Type: Bug > Components: core/store >Affects Versions: 3.6.1 > Environment: Linux debian squeeze 64 bit, Oracle JDK 6, 32 GB RAM, 16 > cores >Reporter: Kristofer Karlsson >Priority: Critical > Attachments: LUCENE-4740.patch, LUCENE-4740.patch > > > We are running a set of independent search machines, running our custom > software using lucene as a search library. We recently upgraded from lucene > 3.0.3 to 3.6.1 and noticed a severe degradation of performance. > After doing some heap dump digging, it turns out the process is stalling > because it's spending so much time in GC. We noticed about 212 million > WeakReference, originating from WeakIdentityMap, originating from > MMapIndexInput. > Our problem completely went away after removing the clones weakhashmap from > MMapIndexInput, and as a side-effect, disabling support for explictly > unmapping the mmapped data. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4728) Allow CommonTermsQuery to be highlighted
[ https://issues.apache.org/jira/browse/LUCENE-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568715#comment-13568715 ] Uwe Schindler commented on LUCENE-4728: --- Hi, I had the same problem while implementing a custom query for a customer. The query was very easy: it just rewrote, after expanding terms, to MultiPhraseQuery - you would expect that this works with the highlighter! - But it doesn't. The problem is that the highlighter does not even try to rewrite the query; it only checks the *original* query type via instanceof checks, failing to highlight my simple query without custom weights and scorers, just a very simple rewrite method. That is not a good design! If the highlighter rewrote the query as a last resort, this problem would have been solved. The problem with that is a second one in the crazy Lucene Highlighter: you need the field name for the highlighter to work :( For this customer my only option was to use Javassist to hot-patch the WeightedSpanTermExtractor and add another instanceof check. Overriding the fallback to handle other queries was impossible because the customer's framework was ElasticSearch, which has a highly private, unextendable WeightedSpanTermExtractor with no possibility to override the Lucene default :( [same applies to Solr] This brings us back to a very old issue: we should extend the Query class with a simple additional API, so it can provide all metadata needed to do highlighting without crazy instanceof chains. > Allow CommonTermsQuery to be highlighted > > > Key: LUCENE-4728 > URL: https://issues.apache.org/jira/browse/LUCENE-4728 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Affects Versions: 4.1 >Reporter: Simon Willnauer >Assignee: Simon Willnauer > Fix For: 4.2, 5.0 > > Attachments: LUCENE-4728.patch, LUCENE-4728.patch > > > Add support for CommonTermsQuery to all highlighter impls. 
> This might add a dependency (query-jar) to the highlighter so we might think > about adding it to core? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4728) Allow CommonTermsQuery to be highlighted
[ https://issues.apache.org/jira/browse/LUCENE-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568702#comment-13568702 ] Simon Willnauer commented on LUCENE-4728: - bq. wait i dont get it... its another instanceof block. no not necessarily. Since for rewrite I need to get the field of the query otherwise I can't get a IndexReader in WeightedSpanTermExtractor. The other problem here is that WeightedSpanTermExtractor doesn't rewrite against a global reader but rather against a "reanalyzed" reader which might bring up problems in the case of CommonTermsQuery which will in-turn create a different BooleanQuery. bq. I dont think highlighters should depend on the concrete queries, only the abstract apis (just like i think modules shouldnt depend on analyzers modules)... otherwise its a sign something is wrong. dude this is wishful thinking unless we fix our API to allow to do positional queries. Really we already rely on it with ConstantScore / FilteredQuery calling getQuery() we also rely on BQ etc. and TermQuery which is not abstract api. > Allow CommonTermsQuery to be highlighted > > > Key: LUCENE-4728 > URL: https://issues.apache.org/jira/browse/LUCENE-4728 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Affects Versions: 4.1 >Reporter: Simon Willnauer >Assignee: Simon Willnauer > Fix For: 4.2, 5.0 > > Attachments: LUCENE-4728.patch, LUCENE-4728.patch > > > Add support for CommonTermsQuery to all highlighter impls. > This might add a dependency (query-jar) to the highlighter so we might think > about adding it to core? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4728) Allow CommonTermsQuery to be highlighted
[ https://issues.apache.org/jira/browse/LUCENE-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568696#comment-13568696 ] Robert Muir commented on LUCENE-4728: - wait i dont get it... its another instanceof block. I think we would just do the same trick in WeightedSpanTermExtractor, and then in the test have a mock query that rewrites to primitives (e.g. boolean query) and not have highlighter depend on the queries module? I dont think highlighters should depend on the concrete queries, only the abstract apis (just like i think modules shouldnt depend on analyzers modules)... otherwise its a sign something is wrong. > Allow CommonTermsQuery to be highlighted > > > Key: LUCENE-4728 > URL: https://issues.apache.org/jira/browse/LUCENE-4728 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Affects Versions: 4.1 >Reporter: Simon Willnauer >Assignee: Simon Willnauer > Fix For: 4.2, 5.0 > > Attachments: LUCENE-4728.patch, LUCENE-4728.patch > > > Add support for CommonTermsQuery to all highlighter impls. > This might add a dependency (query-jar) to the highlighter so we might think > about adding it to core? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4728) Allow CommonTermsQuery to be highlighted
[ https://issues.apache.org/jira/browse/LUCENE-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568691#comment-13568691 ] Simon Willnauer commented on LUCENE-4728: - bq. nice: I like it! Is it possible to fix WeightedSpanTermExtractor the same way for the other highlighter? This one is tricky since it involves getting the field from the query in order to fetch a reader. That involves instanceof checks again, which doesn't buy us much. Fixing positions on Query / Scorer would help, you know :) I think this is ready; I will commit it soon. > Allow CommonTermsQuery to be highlighted > > > Key: LUCENE-4728 > URL: https://issues.apache.org/jira/browse/LUCENE-4728 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Affects Versions: 4.1 >Reporter: Simon Willnauer >Assignee: Simon Willnauer > Fix For: 4.2, 5.0 > > Attachments: LUCENE-4728.patch, LUCENE-4728.patch > > > Add support for CommonTermsQuery to all highlighter impls. > This might add a dependency (query-jar) to the highlighter so we might think > about adding it to core? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4393) 'Execute Query' in the dashboard does not build the url correctly
[ https://issues.apache.org/jira/browse/SOLR-4393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568687#comment-13568687 ] Artem Prygunov commented on SOLR-4393: -- The same problem in Safari 6.0.2 on MacOS. Chrome works fine. And I can confirm that SOLR-4349 solves the problem for Safari. > 'Execute Query' in the dashboard does not build the url correctly > - > > Key: SOLR-4393 > URL: https://issues.apache.org/jira/browse/SOLR-4393 > Project: Solr > Issue Type: Bug > Components: web gui >Affects Versions: 4.1 >Reporter: Anand > > Recently upgraded to 4.1 and started seeing this issue since the upgrade. We > also went from single core to multiple core at the same time. > > Steps to reproduce > 1. Select a core from the dashboard. > 2. Select 'Query' > 3. Without changing anything, click 'Execute Query'. > Expected: 10 hits (or less depending on data indexed) displayed on the screen. > Observed: See response below. > http://localhost:8080/solr/coreName/select? > > <response> > <lst name="responseHeader"><int name="status">0</int><int name="QTime">0</int></lst> > <result numFound="0" start="0"/> > </response> > > Issue: "http://localhost:8080/solr/coreName/select?" is incomplete and > "q=*:*" is not appended to the url. > Tested on Firefox and Chrome and both suffer from this issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4733) Make CompressingTermVectorsFormat the new default term vectors format?
[ https://issues.apache.org/jira/browse/LUCENE-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568685#comment-13568685 ] Adrien Grand commented on LUCENE-4733: -- I forgot to mention that the patch adds the format, but no codec. Should we wait for lucene4547 to be merged back into trunk and then just change the term vectors format or should we add the new Lucene42Codec in trunk without taking care of the lucene4547 branch? I guess it depends on how soon this branch is going to land on trunk? > Make CompressingTermVectorsFormat the new default term vectors format? > -- > > Key: LUCENE-4733 > URL: https://issues.apache.org/jira/browse/LUCENE-4733 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Assignee: Adrien Grand >Priority: Trivial > Fix For: 4.2 > > Attachments: LUCENE-4733-javadocs.patch, LUCENE-4733-tests.patch > > > In LUCENE-4599, I wrote an alternate term vectors format which has a more > compact format, and I think it could replace the current > Lucene40TermVectorsFormat for the next (4.2) release? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-Tests-4.x-java7 - Build # 954 - Still Failing
It was a test bug, my bad. I just committed a fix. -- Adrien - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-4740: -- Attachment: LUCENE-4740.patch Better patch using the getter method instead of accessing MMapDirectory.this.useUnmap directly. > Weak references cause extreme GC churn > -- > > Key: LUCENE-4740 > URL: https://issues.apache.org/jira/browse/LUCENE-4740 > Project: Lucene - Core > Issue Type: Bug > Components: core/store >Affects Versions: 3.6.1 > Environment: Linux debian squeeze 64 bit, Oracle JDK 6, 32 GB RAM, 16 > cores >Reporter: Kristofer Karlsson >Priority: Critical > Attachments: LUCENE-4740.patch, LUCENE-4740.patch > > > We are running a set of independent search machines, running our custom > software using lucene as a search library. We recently upgraded from lucene > 3.0.3 to 3.6.1 and noticed a severe degradation of performance. > After doing some heap dump digging, it turns out the process is stalling > because it's spending so much time in GC. We noticed about 212 million > WeakReference, originating from WeakIdentityMap, originating from > MMapIndexInput. > Our problem completely went away after removing the clones weakhashmap from > MMapIndexInput, and as a side-effect, disabling support for explictly > unmapping the mmapped data. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-4740: -- Attachment: (was: LUCENE-4740.patch) > Weak references cause extreme GC churn > -- > > Key: LUCENE-4740 > URL: https://issues.apache.org/jira/browse/LUCENE-4740 > Project: Lucene - Core > Issue Type: Bug > Components: core/store >Affects Versions: 3.6.1 > Environment: Linux debian squeeze 64 bit, Oracle JDK 6, 32 GB RAM, 16 > cores >Reporter: Kristofer Karlsson >Priority: Critical > Attachments: LUCENE-4740.patch, LUCENE-4740.patch > > > We are running a set of independent search machines, running our custom > software using lucene as a search library. We recently upgraded from lucene > 3.0.3 to 3.6.1 and noticed a severe degradation of performance. > After doing some heap dump digging, it turns out the process is stalling > because it's spending so much time in GC. We noticed about 212 million > WeakReference, originating from WeakIdentityMap, originating from > MMapIndexInput. > Our problem completely went away after removing the clones weakhashmap from > MMapIndexInput, and as a side-effect, disabling support for explictly > unmapping the mmapped data. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-4740: -- Attachment: LUCENE-4740.patch Patch that makes the unmapping behaviour consistent. The "unmap" setting is now maintained by each MMapIndexInput. The unmap method was also moved into the impl class (the previous implementation was a relic from older times). > Weak references cause extreme GC churn > -- > > Key: LUCENE-4740 > URL: https://issues.apache.org/jira/browse/LUCENE-4740 > Project: Lucene - Core > Issue Type: Bug > Components: core/store >Affects Versions: 3.6.1 > Environment: Linux debian squeeze 64 bit, Oracle JDK 6, 32 GB RAM, 16 > cores >Reporter: Kristofer Karlsson >Priority: Critical > Attachments: LUCENE-4740.patch, LUCENE-4740.patch > > > We are running a set of independent search machines, running our custom > software using lucene as a search library. We recently upgraded from lucene > 3.0.3 to 3.6.1 and noticed a severe degradation of performance. > After doing some heap dump digging, it turns out the process is stalling > because it's spending so much time in GC. We noticed about 212 million > WeakReference, originating from WeakIdentityMap, originating from > MMapIndexInput. > Our problem completely went away after removing the clones weakhashmap from > MMapIndexInput, and as a side-effect, disabling support for explictly > unmapping the mmapped data. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
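The per-input setting described in this patch can be sketched roughly as follows. This is an illustrative toy, not Lucene's actual MMapDirectory/MMapIndexInput API — SketchDirectory, SketchInput, and all method names here are hypothetical. The idea it demonstrates: each input snapshots the directory's unmap flag at creation time, so toggling the directory-level setting later cannot make a master and its clones disagree about whether their buffers may be freed on close.

```java
// Hypothetical sketch of a per-instance "unmap" flag; not Lucene's real API.
import java.util.ArrayList;
import java.util.List;

class SketchDirectory {
    private boolean useUnmap = true;

    void setUseUnmap(boolean useUnmap) { this.useUnmap = useUnmap; }

    SketchInput openInput() {
        // snapshot the flag: this input and all of its clones share the value
        return new SketchInput(useUnmap);
    }
}

class SketchInput {
    private final boolean useUnmap;              // fixed at creation time
    private final List<SketchInput> clones = new ArrayList<>();

    SketchInput(boolean useUnmap) { this.useUnmap = useUnmap; }

    SketchInput cloneInput() {
        SketchInput c = new SketchInput(useUnmap); // clone inherits the snapshot
        clones.add(c);
        return c;
    }

    boolean willUnmapOnClose() { return useUnmap; }
}
```

With this shape, flipping the directory flag after an input was opened affects only inputs opened afterwards, which is the consistency the patch aims for.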
Re: [JENKINS] Lucene-Solr-Tests-4.x-java7 - Build # 954 - Still Failing
ha! i bet this 3.x codec has the same termsenum.seek bug? On Fri, Feb 1, 2013 at 6:02 AM, Apache Jenkins Server wrote: > Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-java7/954/ > > 1 tests failed. > FAILED: > org.apache.lucene.codecs.lucene3x.TestLucene3xTermVectorsFormat.testRandom > > Error Message: > expected:<[oadguelkoy, jfbtlwzfek]> but was:<[dzhnlix, jfbtlwzfek]> > > Stack Trace: > java.lang.AssertionError: expected:<[oadguelkoy, jfbtlwzfek]> but > was:<[dzhnlix, jfbtlwzfek]> > at > __randomizedtesting.SeedInfo.seed([347CCCF3A020208D:4630E9FC114096FE]:0) > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.failNotEquals(Assert.java:647) > at org.junit.Assert.assertEquals(Assert.java:128) > at org.junit.Assert.assertEquals(Assert.java:147) > at > org.apache.lucene.index.BaseTermVectorsFormatTestCase.assertEquals(BaseTermVectorsFormatTestCase.java:375) > at > org.apache.lucene.index.BaseTermVectorsFormatTestCase.testRandom(BaseTermVectorsFormatTestCase.java:625) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:601) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) > at > org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) > at > org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) > at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) > at > com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) > at > org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) > at > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) > at > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) > at > org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) > at > com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) > at > 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) > at > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) > at > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) > at > org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner
[jira] [Commented] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568666#comment-13568666 ] Uwe Schindler commented on LUCENE-4740: --- bq. I propose adding this to close(): that's not good and makes reuse of ByteBufferIndexInput complicated. MMapDirectory has to take care of this in its overridden abstract method (by using the IndexInput's instance setting). > Weak references cause extreme GC churn > -- > > Key: LUCENE-4740 > URL: https://issues.apache.org/jira/browse/LUCENE-4740 > Project: Lucene - Core > Issue Type: Bug > Components: core/store >Affects Versions: 3.6.1 > Environment: Linux debian squeeze 64 bit, Oracle JDK 6, 32 GB RAM, 16 > cores >Reporter: Kristofer Karlsson >Priority: Critical > Attachments: LUCENE-4740.patch > > > We are running a set of independent search machines, running our custom > software using lucene as a search library. We recently upgraded from lucene > 3.0.3 to 3.6.1 and noticed a severe degradation of performance. > After doing some heap dump digging, it turns out the process is stalling > because it's spending so much time in GC. We noticed about 212 million > WeakReference, originating from WeakIdentityMap, originating from > MMapIndexInput. > Our problem completely went away after removing the clones weakhashmap from > MMapIndexInput, and as a side-effect, disabling support for explictly > unmapping the mmapped data. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-Tests-4.x-java7 - Build # 954 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-java7/954/ 1 tests failed. FAILED: org.apache.lucene.codecs.lucene3x.TestLucene3xTermVectorsFormat.testRandom Error Message: expected:<[oadguelkoy, jfbtlwzfek]> but was:<[dzhnlix, jfbtlwzfek]> Stack Trace: java.lang.AssertionError: expected:<[oadguelkoy, jfbtlwzfek]> but was:<[dzhnlix, jfbtlwzfek]> at __randomizedtesting.SeedInfo.seed([347CCCF3A020208D:4630E9FC114096FE]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:147) at org.apache.lucene.index.BaseTermVectorsFormatTestCase.assertEquals(BaseTermVectorsFormatTestCase.java:375) at org.apache.lucene.index.BaseTermVectorsFormatTestCase.testRandom(BaseTermVectorsFormatTestCase.java:625) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at java.lang.Thread.run(Thread.java:722) Build Log: [...truncated 1119 lines...] [junit4:junit4] Suite: org.apache.lucene.codecs.lucene3x.TestLucene3xTermVectorsFormat [junit4:junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestLucene3xTermVectorsF
[jira] [Commented] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568665#comment-13568665 ] Uwe Schindler commented on LUCENE-4740: --- bq. Right, but the already existing indexinputs will have buffers pointing to the same bytebuffer, so if you close the master, you would get SIGSEGV in the clones, since the master can not forcibly close the clones. Right that can happen, the fix is to make the freeBuffer method use the setting of the actual IndexInput, not the global one of MMapDirectory. > Weak references cause extreme GC churn > -- > > Key: LUCENE-4740 > URL: https://issues.apache.org/jira/browse/LUCENE-4740 > Project: Lucene - Core > Issue Type: Bug > Components: core/store >Affects Versions: 3.6.1 > Environment: Linux debian squeeze 64 bit, Oracle JDK 6, 32 GB RAM, 16 > cores >Reporter: Kristofer Karlsson >Priority: Critical > Attachments: LUCENE-4740.patch > > > We are running a set of independent search machines, running our custom > software using lucene as a search library. We recently upgraded from lucene > 3.0.3 to 3.6.1 and noticed a severe degradation of performance. > After doing some heap dump digging, it turns out the process is stalling > because it's spending so much time in GC. We noticed about 212 million > WeakReference, originating from WeakIdentityMap, originating from > MMapIndexInput. > Our problem completely went away after removing the clones weakhashmap from > MMapIndexInput, and as a side-effect, disabling support for explictly > unmapping the mmapped data. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4728) Allow CommonTermsQuery to be highlighted
[ https://issues.apache.org/jira/browse/LUCENE-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568662#comment-13568662 ] Robert Muir commented on LUCENE-4728: - nice: I like it! Is it possible to fix WeightedSpanTermExtractor the same way for the other highlighter? It seems like the default, if it doesn't know the query, is... to do nothing? > Allow CommonTermsQuery to be highlighted > > > Key: LUCENE-4728 > URL: https://issues.apache.org/jira/browse/LUCENE-4728 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Affects Versions: 4.1 >Reporter: Simon Willnauer >Assignee: Simon Willnauer > Fix For: 4.2, 5.0 > > Attachments: LUCENE-4728.patch, LUCENE-4728.patch > > > Add support for CommonTermsQuery to all highlighter impls. > This might add a dependency (query-jar) to the highlighter so we might think > about adding it to core? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-4733) Make CompressingTermVectorsFormat the new default term vectors format?
[ https://issues.apache.org/jira/browse/LUCENE-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-4733: - Attachment: LUCENE-4733-javadocs.patch New patch that adds Lucene42TermVectorsFormat with file format javadocs. I have set chunkSize=4KB as it seemed to be a good compromise between compression and speed in LUCENE-4599 (this chunk size only accounts for terms and payloads). Moreover, it uses LZ4 compression (which is very light) so that compression/decompression is not the bottleneck even for small indexes which fit into memory. > Make CompressingTermVectorsFormat the new default term vectors format? > -- > > Key: LUCENE-4733 > URL: https://issues.apache.org/jira/browse/LUCENE-4733 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Assignee: Adrien Grand >Priority: Trivial > Fix For: 4.2 > > Attachments: LUCENE-4733-javadocs.patch, LUCENE-4733-tests.patch > > > In LUCENE-4599, I wrote an alternate term vectors format which has a more > compact format, and I think it could replace the current > Lucene40TermVectorsFormat for the next (4.2) release? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
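The chunkSize trade-off Adrien describes can be illustrated with a toy chunked compressor. This sketch is not Lucene's CompressingTermVectorsFormat, and it uses the JDK's java.util.zip Deflater as a stand-in for LZ4 (which is not in the JDK); the class and method names are hypothetical. The point it demonstrates: data is compressed in independent fixed-size chunks, so reading one document's vectors only requires decompressing the chunk containing them, while larger chunks give the compressor more context and better ratios.

```java
// Toy chunked compressor; Deflater stands in for LZ4, names are illustrative.
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

class ChunkedCompressor {
    // compress each fixed-size chunk as an independent stream
    static List<byte[]> compress(byte[] data, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < data.length; off += chunkSize) {
            int len = Math.min(chunkSize, data.length - off);
            Deflater d = new Deflater();
            d.setInput(data, off, len);
            d.finish();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[256];
            while (!d.finished()) {
                out.write(buf, 0, d.deflate(buf));
            }
            d.end();
            chunks.add(out.toByteArray());
        }
        return chunks;
    }

    // decompress a single chunk without touching its neighbours
    static byte[] decompressChunk(byte[] chunk) throws Exception {
        Inflater inf = new Inflater();
        inf.setInput(chunk);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        while (!inf.finished()) {
            out.write(buf, 0, inf.inflate(buf));
        }
        inf.end();
        return out.toByteArray();
    }
}
```

With a 4 KB chunkSize, a random access to one document costs at most one chunk decompression, which is why a light codec such as LZ4 keeps this from becoming the bottleneck even for memory-resident indexes.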
[jira] [Commented] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568659#comment-13568659 ] Kristofer Karlsson commented on LUCENE-4740: freeBuffers in MMapIndexInput only looks at MMapDirectory.useUnmap, which is the thing that may change, unlike the trackClones / clones which is fixed once the master has been created. I propose adding this to close(): {noformat} if (clones != null) { freeBuffers(); } {noformat} > Weak references cause extreme GC churn > -- > > Key: LUCENE-4740 > URL: https://issues.apache.org/jira/browse/LUCENE-4740 > Project: Lucene - Core > Issue Type: Bug > Components: core/store >Affects Versions: 3.6.1 > Environment: Linux debian squeeze 64 bit, Oracle JDK 6, 32 GB RAM, 16 > cores >Reporter: Kristofer Karlsson >Priority: Critical > Attachments: LUCENE-4740.patch > > > We are running a set of independent search machines, running our custom > software using lucene as a search library. We recently upgraded from lucene > 3.0.3 to 3.6.1 and noticed a severe degradation of performance. > After doing some heap dump digging, it turns out the process is stalling > because it's spending so much time in GC. We noticed about 212 million > WeakReference, originating from WeakIdentityMap, originating from > MMapIndexInput. > Our problem completely went away after removing the clones weakhashmap from > MMapIndexInput, and as a side-effect, disabling support for explictly > unmapping the mmapped data. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568657#comment-13568657 ] Kristofer Karlsson commented on LUCENE-4740: bq. If you change the setting to true, the already existing indexinputs will not be tracked (as before), but new indexinputs will get a map and all of their clones will be freed correctly. Right, but the already existing indexinputs will have buffers pointing to the same bytebuffer, so if you close the master, you would get SIGSEGV in the clones, since the master can not forcibly close the clones. > Weak references cause extreme GC churn > -- > > Key: LUCENE-4740 > URL: https://issues.apache.org/jira/browse/LUCENE-4740 > Project: Lucene - Core > Issue Type: Bug > Components: core/store >Affects Versions: 3.6.1 > Environment: Linux debian squeeze 64 bit, Oracle JDK 6, 32 GB RAM, 16 > cores >Reporter: Kristofer Karlsson >Priority: Critical > Attachments: LUCENE-4740.patch > > > We are running a set of independent search machines, running our custom > software using lucene as a search library. We recently upgraded from lucene > 3.0.3 to 3.6.1 and noticed a severe degradation of performance. > After doing some heap dump digging, it turns out the process is stalling > because it's spending so much time in GC. We noticed about 212 million > WeakReference, originating from WeakIdentityMap, originating from > MMapIndexInput. > Our problem completely went away after removing the clones weakhashmap from > MMapIndexInput, and as a side-effect, disabling support for explictly > unmapping the mmapped data. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
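The GC churn being discussed comes from the master input tracking every clone through weak references so that buffers can be freed when the master closes. A minimal sketch of that pattern follows — the names are hypothetical, and the JDK's WeakHashMap stands in for Lucene's WeakIdentityMap (a Lucene utility class, not part of the JDK). Every clone adds a weakly-referenced entry, so with millions of short-lived clones the JVM must process a matching flood of WeakReference objects even though the entries themselves are eventually cleared.

```java
// Illustration of weak clone tracking, not Lucene's actual implementation.
import java.util.Map;
import java.util.WeakHashMap;

class TrackingInput {
    // weak keys: an entry disappears once its clone becomes unreachable,
    // but each entry still costs the GC a WeakReference to enqueue and clear
    private final Map<TrackingInput, Boolean> clones = new WeakHashMap<>();

    TrackingInput cloneInput() {
        TrackingInput c = new TrackingInput();
        clones.put(c, Boolean.TRUE);   // one weak entry per clone
        return c;
    }

    int trackedCloneCount() {
        return clones.size();
    }

    void close() {
        // on close, every still-reachable clone could be cleaned up here
        clones.clear();
    }
}
```

This is also why simply dropping the map (as the reporter did) removes the churn but sacrifices the ability to explicitly unmap clones when the master closes.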
[jira] [Updated] (LUCENE-4728) Allow CommonTermsQuery to be highlighted
[ https://issues.apache.org/jira/browse/LUCENE-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Willnauer updated LUCENE-4728: Attachment: LUCENE-4728.patch new patch - I added a comment there to make sure we don't run into stack overflow exceptions as well as a test for that! > Allow CommonTermsQuery to be highlighted > > > Key: LUCENE-4728 > URL: https://issues.apache.org/jira/browse/LUCENE-4728 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Affects Versions: 4.1 >Reporter: Simon Willnauer >Assignee: Simon Willnauer > Fix For: 4.2, 5.0 > > Attachments: LUCENE-4728.patch, LUCENE-4728.patch > > > Add support for CommonTermsQuery to all highlighter impls. > This might add a dependency (query-jar) to the highlighter so we might think > about adding it to core? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module
[ https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568575#comment-13568575 ] Kai Gülzau edited comment on LUCENE-2899 at 2/1/13 10:45 AM: - I have applied the Patch to trunk, modified the build scripts manually (ignoring javadoc tasks) and built the opennlp jars. Jars are running in a vanilla Solr 4.1 environment. - solr_server4.1\solr\lib\opennlp\ -- -jwnl-1.4_rc3.jar- -- lucene-analyzers-opennlp-5.0-SNAPSHOT.jar (build with patch) -- opennlp-maxent-3.0.2-incubating.jar -- opennlp-tools-1.5.2-incubating.jar -- solr-opennlp-5.0-SNAPSHOT.jar (build with patch) with in solrconfig.xml Works for me: http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201301.mbox/%3CB65DA877C3F93B4FB39EA49A1A03C95CC27AB1%40email.novomind.com%3E *edit*: removed jwnl*.jar as stated by Joern was (Author: kguel...@novomind.com): I have applied the Patch to trunk, modified the build scripts manually (ignoring javadoc tasks) and built the opennlp jars. Jars are running in a vanilla Solr 4.1 environment. 
- solr_server4.1\solr\lib\opennlp\ -- -jwnl-1.4_rc3.jar- -- lucene-analyzers-opennlp-5.0-SNAPSHOT.jar (build with patch) -- opennlp-maxent-3.0.2-incubating.jar -- opennlp-tools-1.5.2-incubating.jar -- solr-opennlp-5.0-SNAPSHOT.jar (build with patch) with in solrconfig.xml Works for me: http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201301.mbox/%3CB65DA877C3F93B4FB39EA49A1A03C95CC27AB1%40email.novomind.com%3E > Add OpenNLP Analysis capabilities as a module > - > > Key: LUCENE-2899 > URL: https://issues.apache.org/jira/browse/LUCENE-2899 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/analysis >Reporter: Grant Ingersoll >Assignee: Grant Ingersoll >Priority: Minor > Fix For: 4.2, 5.0 > > Attachments: LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, > LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, OpenNLPFilter.java, > OpenNLPTokenizer.java, opennlp_trunk.patch > > > Now that OpenNLP is an ASF project and has a nice license, it would be nice > to have a submodule (under analysis) that exposed capabilities for it. Drew > Farris, Tom Morton and I have code that does: > * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it > would have to change slightly to buffer tokens) > * NamedEntity recognition as a TokenFilter > We are also planning a Tokenizer/TokenFilter that can put parts of speech as > either payloads (PartOfSpeechAttribute?) on a token or at the same position. > I'd propose it go under: > modules/analysis/opennlp -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module
[ https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568575#comment-13568575 ] Kai Gülzau edited comment on LUCENE-2899 at 2/1/13 10:41 AM: - I have applied the Patch to trunk, modified the build scripts manually (ignoring javadoc tasks) and built the opennlp jars. Jars are running in a vanilla Solr 4.1 environment. - solr_server4.1\solr\lib\opennlp\ -- -jwnl-1.4_rc3.jar- -- lucene-analyzers-opennlp-5.0-SNAPSHOT.jar (build with patch) -- opennlp-maxent-3.0.2-incubating.jar -- opennlp-tools-1.5.2-incubating.jar -- solr-opennlp-5.0-SNAPSHOT.jar (build with patch) with in solrconfig.xml Works for me: http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201301.mbox/%3CB65DA877C3F93B4FB39EA49A1A03C95CC27AB1%40email.novomind.com%3E was (Author: kguel...@novomind.com): I have applied the Patch to trunk, modified the build scripts manually (ignoring javadoc tasks) and built the opennlp jars. Jars are running in a vanilla Solr 4.1 environment. 
- solr_server4.1\solr\lib\opennlp\ -- jwnl-1.4_rc3.jar -- lucene-analyzers-opennlp-5.0-SNAPSHOT.jar (build with patch) -- opennlp-maxent-3.0.2-incubating.jar -- opennlp-tools-1.5.2-incubating.jar -- solr-opennlp-5.0-SNAPSHOT.jar (build with patch) with in solrconfig.xml Works for me: http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201301.mbox/%3CB65DA877C3F93B4FB39EA49A1A03C95CC27AB1%40email.novomind.com%3E > Add OpenNLP Analysis capabilities as a module > - > > Key: LUCENE-2899 > URL: https://issues.apache.org/jira/browse/LUCENE-2899 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/analysis >Reporter: Grant Ingersoll >Assignee: Grant Ingersoll >Priority: Minor > Fix For: 4.2, 5.0 > > Attachments: LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, > LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, OpenNLPFilter.java, > OpenNLPTokenizer.java, opennlp_trunk.patch > > > Now that OpenNLP is an ASF project and has a nice license, it would be nice > to have a submodule (under analysis) that exposed capabilities for it. Drew > Farris, Tom Morton and I have code that does: > * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it > would have to change slightly to buffer tokens) > * NamedEntity recognition as a TokenFilter > We are also planning a Tokenizer/TokenFilter that can put parts of speech as > either payloads (PartOfSpeechAttribute?) on a token or at the same position. > I'd propose it go under: > modules/analysis/opennlp -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module
[ https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568640#comment-13568640 ] Joern Kottmann commented on LUCENE-2899: The jwnl library is only needed if you use the OpenNLP Coreference component; otherwise it's safe to exclude it. The 1.4_rc3 version is not tested anyway, and it's likely that the Coreferencer does not run properly with it.
[jira] [Commented] (LUCENE-4733) Make CompressingTermVectorsFormat the new default term vectors format?
[ https://issues.apache.org/jira/browse/LUCENE-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568639#comment-13568639 ] Commit Tag Bot commented on LUCENE-4733: [branch_4x commit] Adrien Grand http://svn.apache.org/viewvc?view=revision&revision=1441379 LUCENE-4733: Refactor term vectors formats tests around a BaseTermVectorsFormatTestCase (merged from r1441367).
> Make CompressingTermVectorsFormat the new default term vectors format?
> --
> Key: LUCENE-4733
> URL: https://issues.apache.org/jira/browse/LUCENE-4733
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Adrien Grand
> Assignee: Adrien Grand
> Priority: Trivial
> Fix For: 4.2
>
> Attachments: LUCENE-4733-tests.patch
>
> In LUCENE-4599, I wrote an alternate term vectors format which has a more compact format, and I think it could replace the current Lucene40TermVectorsFormat for the next (4.2) release?
[jira] [Commented] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568637#comment-13568637 ] Uwe Schindler commented on LUCENE-4740:
bq. I would prefer an AtomicBoolean since it uses a volatile field.
As far as I know, you can't make the contents of arrays volatile. This kills performance: MMapIndexInput would be slower than SimpleFSIndexInput! This is why the array is used as a fake "reference" to a boolean. The current approach of unmapping the byte buffers and protecting against SIGSEGV by nulling them is not 100% safe. The JVM may still crash if another thread does not yet see the nulled buffer. But in *most* cases the user will get an AlreadyClosedException and can fix his code before he goes into production and his JVM crashes suddenly.
> Weak references cause extreme GC churn
> --
> Key: LUCENE-4740
> URL: https://issues.apache.org/jira/browse/LUCENE-4740
> Project: Lucene - Core
> Issue Type: Bug
> Components: core/store
> Affects Versions: 3.6.1
> Environment: Linux debian squeeze 64 bit, Oracle JDK 6, 32 GB RAM, 16 cores
> Reporter: Kristofer Karlsson
> Priority: Critical
> Attachments: LUCENE-4740.patch
>
> We are running a set of independent search machines, running our custom software using lucene as a search library. We recently upgraded from lucene 3.0.3 to 3.6.1 and noticed a severe degradation of performance.
> After doing some heap dump digging, it turns out the process is stalling because it's spending so much time in GC. We noticed about 212 million WeakReference, originating from WeakIdentityMap, originating from MMapIndexInput.
> Our problem completely went away after removing the clones weakhashmap from MMapIndexInput, and as a side-effect, disabling support for explicitly unmapping the mmapped data.
[jira] [Commented] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568635#comment-13568635 ] Uwe Schindler commented on LUCENE-4740:
bq. but what happens if you start with having useUnmap = false, then creating a bunch of clones, and then setting it back to useUnmap = true? If I read the code correctly (which I am not certain of), closing the original input will then unmap the data and break all the existing clones.
The settings are decoupled: If you start with useUnmap = false, all IndexInputs created will have no weak map, so when they are closed, the clones are not touched. If you change the setting to true, the already existing IndexInputs will not be tracked (as before), but new IndexInputs will get a map and all of their clones will be freed correctly. The other special case: If you change the setting from true to false, all existing IndexInputs will keep their maps and will be cleaned up on close (buffers set to null), but the cleanMapping() method becomes a no-op, so they are correctly nulled but no longer unmapped. In any case a SIGSEGV is prevented (as well as we can without locking). In general, nothing breaks if you change the setting later, but you should really do it only directly after construction.
[jira] [Commented] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568634#comment-13568634 ] Kristofer Karlsson commented on LUCENE-4740:
bq. I agree that might be a problem and you may be facing it. How many requests per second do you have on your server?
Not that many - about 8000 per minute at yesterday's peak, which is about 133 per second. However, each request leads to several complex lucene queries, though I don't have any numbers on the actual lucene query throughput.
bq. This behaviour is Java's weak reference GC behaviour, it has nothing to do with WeakIdentityMap. The default WeakHashMap from the JDK has the same problems.
Agreed.
bq. My idea was that the master creates some boolean[1] and passes this boolean[1] array to all children. When the master closes, it sets b[0] to false. All children would do a check on b[0]... Not sure how this affects performance.
Yes, I thought about this too, and I am not sure the performance penalty would be that problematic (but it would need to be measured). And if possible, users of the inputs should avoid doing small individual byte gets, and instead try to consume chunks of bytes to avoid the overhead. I would prefer an AtomicBoolean since it uses a volatile field. As far as I know, you can't make the contents of arrays volatile. In any case, wouldn't it be possible to skip the whole master/slave relationship and make everyone equal, just sharing the closed state flag? Though then running close() on a clone would close everything, which is possibly not what you want to happen.
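The shared closed-state flag being weighed here (AtomicBoolean vs boolean[1]) can be sketched in plain Java. This is a hypothetical illustration, not the MMapIndexInput code: the class and method names are invented, and an AtomicBoolean stands in for whichever volatile mechanism is chosen. Closing the master becomes visible to every clone without registering clones in a weak map, at the cost of one volatile read per access:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: master and clones share one AtomicBoolean, so
// closing the master is visible to every clone without tracking clones
// in a weak map. Every read pays one volatile read -- the per-access
// cost being debated in this thread.
class SketchInput {
    private final AtomicBoolean open;   // shared closed-state flag
    private final boolean isMaster;

    SketchInput() {                     // master input
        this.open = new AtomicBoolean(true);
        this.isMaster = true;
    }

    private SketchInput(AtomicBoolean shared) {  // clone
        this.open = shared;
        this.isMaster = false;
    }

    SketchInput cloneInput() {
        return new SketchInput(open);   // clone shares the master's flag
    }

    int readByte() {
        if (!open.get()) {              // one volatile read per access
            throw new IllegalStateException("already closed");
        }
        return 42;                      // stand-in for a real buffer read
    }

    void close() {
        if (isMaster) {
            open.set(false);            // immediately visible to all clones
        }
    }

    public static void main(String[] args) {
        SketchInput master = new SketchInput();
        SketchInput clone = master.cloneInput();
        clone.readByte();               // fine while the master is open
        master.close();
        try {
            clone.readByte();
        } catch (IllegalStateException e) {
            System.out.println("clone saw the close: " + e.getMessage());
        }
    }
}
```

Kristofer's follow-up question (drop the master/slave distinction entirely) would amount to letting `close()` flip the flag unconditionally, with the noted drawback that closing any clone closes all of them.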
[jira] [Assigned] (LUCENE-4744) Attempt to get rid of FieldCache.StopFillCacheException
[ https://issues.apache.org/jira/browse/LUCENE-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Willnauer reassigned LUCENE-4744: Assignee: Simon Willnauer
> Attempt to get rid of FieldCache.StopFillCacheException
> ---
> Key: LUCENE-4744
> URL: https://issues.apache.org/jira/browse/LUCENE-4744
> Project: Lucene - Core
> Issue Type: Bug
> Components: core/other
> Affects Versions: 4.1
> Reporter: Simon Willnauer
> Assignee: Simon Willnauer
> Priority: Minor
> Fix For: 4.2, 5.0
>
> Attachments: LUCENE-4744.patch
>
> FieldCache.StopFillCacheException bugged me for a while and I think it's a pretty hacky way to make our FC work with prefix coded terms. I think we should try to get rid of it... I will attach a patch soon.
[jira] [Commented] (LUCENE-4733) Make CompressingTermVectorsFormat the new default term vectors format?
[ https://issues.apache.org/jira/browse/LUCENE-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568628#comment-13568628 ] Commit Tag Bot commented on LUCENE-4733: [trunk commit] Adrien Grand http://svn.apache.org/viewvc?view=revision&revision=1441367 LUCENE-4733: Refactor term vectors formats tests around a BaseTermVectorsFormatTestCase.
[jira] [Updated] (LUCENE-4744) Attempt to get rid of FieldCache.StopFillCacheException
[ https://issues.apache.org/jira/browse/LUCENE-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Willnauer updated LUCENE-4744: Attachment: LUCENE-4744.patch
Here is my proposal... I basically added a TermsEnum filter(TermsEnum) method to the Parser interface so that numeric parsers can simply use a FilteredTermsEnum instead of throwing this exception. I think this is way more elegant, and it might be helpful for other use cases where you only want certain terms to be in the FieldCache (i.e. only high-frequency terms?), but that is a different story.
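The control-flow change in the proposal can be illustrated with a plain-Java sketch. This is hypothetical: the real patch wraps Lucene's TermsEnum in a FilteredTermsEnum, not a java.util.Iterator, and the class name here is invented. The point is that the parser hands the cache-fill loop a filtered view that silently skips unusable terms, instead of aborting the loop by throwing StopFillCacheException mid-iteration:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

// Plain-Java sketch of the FilteredTermsEnum idea: wrap the raw term
// iteration and skip terms the parser cannot use, instead of throwing
// a StopFillCacheException from inside the parser.
class FilteringTermIterator implements Iterator<String> {
    private final Iterator<String> in;
    private final Predicate<String> accept;
    private String pending;             // next accepted term, or null

    FilteringTermIterator(Iterator<String> in, Predicate<String> accept) {
        this.in = in;
        this.accept = accept;
        advance();
    }

    private void advance() {
        pending = null;
        while (in.hasNext()) {
            String t = in.next();
            if (accept.test(t)) { pending = t; return; }
        }
    }

    @Override public boolean hasNext() { return pending != null; }

    @Override public String next() {
        if (pending == null) throw new NoSuchElementException();
        String t = pending;
        advance();
        return t;
    }

    public static void main(String[] args) {
        // toy predicate standing in for "full-precision (shift == 0) terms only"
        Iterator<String> it = new FilteringTermIterator(
            java.util.Arrays.asList("17", "prefix-coded-junk", "42").iterator(),
            t -> t.chars().allMatch(Character::isDigit));
        while (it.hasNext()) System.out.println(it.next());
    }
}
```

For prefix-coded numeric fields, the predicate would accept only the full-precision (shift == 0) terms; the toy digit check above merely demonstrates the control flow.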
[jira] [Created] (LUCENE-4744) Attempt to get rid of FieldCache.StopFillCacheException
Simon Willnauer created LUCENE-4744: --- Summary: Attempt to get rid of FieldCache.StopFillCacheException Key: LUCENE-4744 URL: https://issues.apache.org/jira/browse/LUCENE-4744 Project: Lucene - Core Issue Type: Bug Components: core/other Affects Versions: 4.1 Reporter: Simon Willnauer Priority: Minor Fix For: 4.2, 5.0
FieldCache.StopFillCacheException bugged me for a while and I think it's a pretty hacky way to make our FC work with prefix coded terms. I think we should try to get rid of it... I will attach a patch soon.
[jira] [Commented] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568615#comment-13568615 ] Kristofer Karlsson commented on LUCENE-4740: Looks good, but what happens if you start with having useUnmap = false, then creating a bunch of clones, and then setting it back to useUnmap = true? If I read the code correctly (which I am not certain of), closing the original input will then unmap the data and break all the existing clones.
[jira] [Commented] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568611#comment-13568611 ] Uwe Schindler commented on LUCENE-4740:
bq. After doing some more thinking and micro-benchmarking, I think the problem occurs when you create clones at a faster rate than the GC can cope with.
I agree that might be a problem and you may be facing it. How many requests per second do you have on your server? This behaviour is Java's weak reference GC behaviour, it has nothing to do with WeakIdentityMap. The default WeakHashMap from the JDK has the same problems.
bq. Agree that the weakreferences for classes is probably a very minor part of it, and very unlikely part of the problem here.
That is very common, the JDK uses the same mechanism as in AttributeSource at several places. It is definitely not part of the problem. The problem here is the weak map that has a very high throughput of puts (every query produces at least one IndexInput clone, possibly more). The high throughput already led to the change to WeakIdentityMap recently, because a synchronized WeakHashMap was not able to handle the large number of concurrent puts (a Lucene 3.6.0 regression). I am currently thinking of making the whole thing work without weak references and instead having some "hard reference" from the clone to the master (it is already there, MappedByteBuffer.duplicate() returns a duplicate buffer that has a reference to the master). The problem with this is that you need a check on every access of the IndexInput whether the buffer is still valid. If it is only some null check, we may add it, but it's risky for performance too. My idea was that the master creates some boolean[1] and passes this boolean[1] array to all children. When the master closes, it sets b[0] to false. All children would do a check on b[0]... Not sure how this affects performance.
[jira] [Updated] (LUCENE-4740) Weak references cause extreme GC churn
[ https://issues.apache.org/jira/browse/LUCENE-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-4740: Attachment: LUCENE-4740.patch
Attached is the patch that disables tracking of clones if unmapping is disabled. We don't need to make the setting in MMapDirectory unmodifiable; after changing it, all IndexInputs created afterwards use the new setting. This does not differ from the previous unmapping behaviour at all. In general, people should in any case set this setting directly after construction (we may add a ctor param, too).
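The decoupling described above can be sketched as follows. The class names are invented for illustration; the real code lives in MMapDirectory/MMapIndexInput and uses Lucene's WeakIdentityMap rather than a JDK WeakHashMap. The point is that the decision whether to track clones is frozen per input at creation time, so flipping useUnmap later only affects inputs opened afterwards:

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

// Hypothetical sketch: the clone-tracking map is created (or not) when an
// input is opened, based on the unmap setting at that moment. Changing the
// setting later never breaks inputs that already exist.
class SketchDirectory {
    volatile boolean useUnmap = true;

    SketchIndexInput openInput() {
        // decide once, at construction: track clones only if we may unmap
        Map<SketchIndexInput, Boolean> clones = useUnmap
            ? Collections.synchronizedMap(new WeakHashMap<SketchIndexInput, Boolean>())
            : null;
        return new SketchIndexInput(clones);
    }
}

class SketchIndexInput {
    private final Map<SketchIndexInput, Boolean> clones; // null => untracked

    SketchIndexInput(Map<SketchIndexInput, Boolean> clones) {
        this.clones = clones;
    }

    SketchIndexInput cloneInput() {
        SketchIndexInput c = new SketchIndexInput(clones);
        if (clones != null) {
            clones.put(c, Boolean.TRUE); // weak key: GC may reap the clone
        }
        return c;
    }

    boolean tracksClones() { return clones != null; }
}
```

With this shape, an input opened while useUnmap = false simply has no map, so closing it can never touch its clones, matching the "settings are decoupled" behaviour described in the earlier comment.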
[jira] [Updated] (LUCENE-4743) ComplexPhraseQuery hightlight problem after rewriting using ComplexPhraseQuery.rewrite(IndexReader)
[ https://issues.apache.org/jira/browse/LUCENE-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Nacional updated LUCENE-4743: Issue Type: Bug (was: Wish)
> ComplexPhraseQuery hightlight problem after rewriting using ComplexPhraseQuery.rewrite(IndexReader)
> ---
> Key: LUCENE-4743
> URL: https://issues.apache.org/jira/browse/LUCENE-4743
> Project: Lucene - Core
> Issue Type: Bug
> Components: core/search, modules/queryparser
> Affects Versions: 3.6.2
> Reporter: Jason Nacional
> Labels: complexqueryparser, newbie, queryparser
>
> Just want to ask an assistance using ComplexPhraseQuery. I mean, when it comes to highlighting the hits are not correct. I also started using ComplexPhraseQueryParser to support complex proximity searches.
[jira] [Updated] (LUCENE-4743) ComplexPhraseQuery hightlight problem after rewriting using ComplexPhraseQuery.rewrite(IndexReader)
[ https://issues.apache.org/jira/browse/LUCENE-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Nacional updated LUCENE-4743: Summary: ComplexPhraseQuery hightlight problem after rewriting using ComplexPhraseQuery.rewrite(IndexReader) (was: Using ComplexPhraseQuery)
[jira] [Commented] (SOLR-4393) 'Execute Query' in the dashboard does not build the url correctly
[ https://issues.apache.org/jira/browse/SOLR-4393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568587#comment-13568587 ] Stefan Matheis (steffkes) commented on SOLR-4393: Hey Anand, that was appearing for me only in Internet Explorer, so I opened (and already fixed) SOLR-4349 - would you mind verifying that that solves your problem too?
> 'Execute Query' in the dashboard does not build the url correctly
> -
> Key: SOLR-4393
> URL: https://issues.apache.org/jira/browse/SOLR-4393
> Project: Solr
> Issue Type: Bug
> Components: web gui
> Affects Versions: 4.1
> Reporter: Anand
>
> Recently upgraded to 4.1 and started seeing this issue since the upgrade. We also went from single core to multiple core at the same time.
> Steps to reproduce:
> 1. Select a core from the dashboard.
> 2. Select 'Query'
> 3. Without changing anything, click 'Execute Query'.
> Expected: 10 hits (or less depending on data indexed) displayed on the screen.
> Observed: See response below.
> http://localhost:8080/solr/coreName/select?
> [response XML elided by the mail archive: status 0, QTime 0, numFound="0" start="0"]
> Issue: "http://localhost:8080/solr/coreName/select?" is incomplete and "q=*:*" is not appended to the url.
> Tested on Firefox and Chrome and both suffer from this issue.
[jira] [Created] (LUCENE-4743) Using ComplexPhraseQuery
Jason Nacional created LUCENE-4743: -- Summary: Using ComplexPhraseQuery Key: LUCENE-4743 URL: https://issues.apache.org/jira/browse/LUCENE-4743 Project: Lucene - Core Issue Type: Wish Components: core/search, modules/queryparser Affects Versions: 3.6.2 Reporter: Jason Nacional
Just want to ask for assistance using ComplexPhraseQuery. I mean, when it comes to highlighting the hits are not correct. I also started using ComplexPhraseQueryParser to support complex proximity searches.
[jira] [Commented] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module
[ https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568575#comment-13568575 ] Kai Gülzau commented on LUCENE-2899: I have applied the patch to trunk, modified the build scripts manually (ignoring javadoc tasks) and built the opennlp jars. The jars are running in a vanilla Solr 4.1 environment.
- solr_server4.1\solr\lib\opennlp\
-- jwnl-1.4_rc3.jar
-- lucene-analyzers-opennlp-5.0-SNAPSHOT.jar (built with patch)
-- opennlp-maxent-3.0.2-incubating.jar
-- opennlp-tools-1.5.2-incubating.jar
-- solr-opennlp-5.0-SNAPSHOT.jar (built with patch)
with the corresponding entries in solrconfig.xml. Works for me: http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201301.mbox/%3CB65DA877C3F93B4FB39EA49A1A03C95CC27AB1%40email.novomind.com%3E