Re: Problem with using classes from different modules
On Jul 5, 2013, at 1:34, Johan Jonkers jo...@seecr.nl wrote:

On 7/5/13 10:16 AM, Andi Vajda wrote:

On Jul 5, 2013, at 0:11, Johan Jonkers jo...@seecr.nl wrote:

Hi Andi,

I was able to compile it all into one module and that worked perfectly. I then tried to compile it again into two separate modules and compared the generated wrappers. What I saw was that all methods in the SumWrapper class that had a reference to the Sum class were not wrapped (I didn't see them in SumWrapper.h). I also checked the output to see if there was some sort of notification/warning saying there was a problem with these methods, but didn't see anything. I think they aren't being wrapped because they somehow can't be resolved. I tried looking at the place in the JCC code where that happens but haven't been very successful at it so far.

Please, list the commands you used. It's easier to debug this way. Did you list the Sum class on the second jcc command line ?

Below is the part of the script I use to compile the modules:

    javac nl/seecr/freestyle/Sum.java -d build_seecr
    (cd build_seecr; jar -c nl > ../seecr.jar)
    javac org/cq2/freestyle/SumWrapper.java -d build_cq2 -cp ./seecr.jar
    (cd build_cq2; jar -c org > ../cq2.jar)

    JCC="python -m jcc.__main__"

    echo '#
    # Building CQ2 module
    #'
    ${JCC} \
      --root ${ROOT} \
      --use_full_names \
      --shared \
      --arch x86_64 \
      --jar cq2.jar \
      --classpath ./seecr.jar \

Why are you listing seecr.jar here ?

Andi..

      --python cq2 \
      --build \
      --install

    echo '#
    # Building Seecr module
    #'
    export PYTHONPATH=$(find ${ROOT} -type d -name site-packages | head -n 1)
    ${JCC} \
      --root ${ROOT} \
      --use_full_names \
      --import cq2 \
      --shared \
      --arch x86_64 \
      --jar seecr.jar \
      --python seecr \
      --build \
      --install \
      nl.seecr.freestyle.Sum

Andi..

Regards,
Johan

On 7/4/13 10:18 PM, Andi Vajda wrote:

On Thu, 4 Jul 2013, Johan Jonkers wrote:

I've tried adding the Sum class to the 2nd JCC call but it didn't solve the problem. The 'asSum' method isn't getting wrapped.
Also, instantiating a SumWrapper with a Sum as argument results in the constructor without parameters being called, which I wasn't expecting:

    >>> c = SumWrapper(Sum(10))
    Empty constructor

I tried compiling everything into one file; then I do get the asSum method wrapped, but the constructor with a Sum as argument still doesn't seem to work. It no longer calls the empty constructor, but neither does it seem to set the Sum object passed to it. I am a bit at a loss here as to what's going wrong (or what I am doing wrong).

Yeah, let's take one thing at a time, and the simpler one first. Compiling all into one module, I was not able to reproduce the problem as reported. I'm able to make a SumWrapper(Sum(10)) just fine. Here are the commands I used to try to reproduce this:

- created two class files Sum.java and SumWrapper.java in their respective packages as specified in your example
- mkdir classes
- javac -d classes *.java
- jar -cvf sum.jar -C classes .
- python -m jcc.__main__ --shared --arch x86_64 --use_full_names --jar sum.jar --classpath . --python sum --build --install
- python

    >>> import sum
    >>> sum.initVM()
    <jcc.JCCEnv object at 0x10029c0f0>
    >>> from nl.seecr.freestyle import Sum
    >>> from org.cq2.freestyle import SumWrapper
    >>> Sum(10)
    <Sum: nl.seecr.freestyle.Sum@64fef26a>
    >>> SumWrapper(Sum(10))
    <SumWrapper: org.cq2.freestyle.SumWrapper@70e69696>

Please try to reproduce these steps and report back. Once that works, let's move on to the problem of compiling these into separate extension modules.

Andi..

Any thoughts on this problem would be appreciated :-)

Regards,
Johan

On 7/1/13 8:00 PM, Andi Vajda wrote:

On Mon, 1 Jul 2013, Johan Jonkers wrote:

Hello,

I have been playing around with JCC to see if it would provide for the needs we have here at work to interface Java with Python. I have encountered one issue which I hope someone on this mailing list might be able to help me with. If this is not the right place to ask then I apologize in advance.
The issue I am having is that I would like to create two packages compiled with JCC in which classes from one package are used by classes in the other package. I would like to use those classes in Python but am having problems doing so that I don't understand yet.

In package 1 is the class shown below:

    package nl.seecr.freestyle;

    public class Sum {
        private int _sum;

        public Sum() {
            _sum = 0;
        }

        public void add(int value) {
            _sum += value;
        }

        public int value() {
            return _sum;
        }
    }

The second package holds a class that uses the Sum class:

    package org.cq2.freestyle;

    import nl.seecr.freestyle.Sum;

    public class SumWrapper {
        private Sum total;

        public SumWrapper() {
            this(new Sum());
            System.out.println("Empty constructor");
        }

        public SumWrapper(Sum sum) {
            total = sum;
        }

        public
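The two classes can be exercised outside JCC entirely, which helps separate Java semantics from wrapper problems. Below is a combined sketch with the package declarations dropped so both compile in one file; the value() accessor on SumWrapper is hypothetical (the original message is cut off before the rest of the class) and exists only so the demo can observe state.

```java
// Combined sketch of the thread's two classes (package declarations
// dropped so they compile as one file).
class Sum {
    private int _sum;

    public Sum() { _sum = 0; }
    public void add(int value) { _sum += value; }
    public int value() { return _sum; }
}

class SumWrapper {
    private Sum total;

    public SumWrapper() {
        this(new Sum());
        System.out.println("Empty constructor");
    }

    public SumWrapper(Sum sum) { total = sum; }

    // Hypothetical accessor, not in the original post.
    public int value() { return total.value(); }
}

public class SumDemo {
    public static void main(String[] args) {
        Sum s = new Sum();
        s.add(10);
        SumWrapper w = new SumWrapper(s); // Sum-taking constructor: nothing printed
        System.out.println(w.value());    // prints 10
        new SumWrapper();                 // no-arg constructor prints "Empty constructor"
    }
}
```

Run directly in Java, only the no-arg constructor prints "Empty constructor"; so a wrapped SumWrapper(Sum(10)) printing it points at the wrong constructor being selected on the Python side.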
Re: Problem with using classes from different modules
On 7/5/13 10:47 AM, Andi Vajda wrote, quoting the build script above:

Why are you listing seecr.jar here ?

Thank you so much for pointing this out; it triggered me to look more closely at the compile statements and realize that I had it all backwards. The cq2 package should have an import for the seecr package, and not the other way around.
By adding a seecr import to the jcc statement compiling the cq2 package, it works:

    ${JCC} \
      --root ${ROOT} \
      --use_full_names \
      --shared \
      --arch x86_64 \
      --jar seecr.jar \
      --python seecr \
      --build \
      --install

    export PYTHONPATH=$(find ${ROOT} -type d -name site-packages | head -n 1)

    ${JCC} \
      --root ${ROOT} \
      --use_full_names \
      --import seecr \
      --shared \
      --arch x86_64 \
      --jar cq2.jar \
      --python cq2 \
      --build \
      --install

This does impose an order in which to compile things; the seecr package now has to be present before the cq2 package can be compiled, but that won't be a problem. There now also seems to be an order in which things have to be imported in Python:

    [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import cq2
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/zp/zandbak_jj/playwithjcc/root/usr/lib64/python2.6/site-packages/cq2/__init__.py", line 28, in <module>
        from seecr._seecr import *
      File "/home/zp/zandbak_jj/playwithjcc/root/usr/lib64/python2.6/site-packages/seecr/__init__.py", line 29, in <module>
        from java.io import PrintWriter, StringWriter
    ImportError: No module named java.io

versus

    Python 2.6.6 (r266:84292, Feb 22 2013, 00:00:18)
    [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import seecr
    >>> from java.io import PrintWriter, StringWriter

But that is something we can live with.

Thank you for all the help,
Johan
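The --import flag effectively turns the extension modules into a dependency graph, and the required build (and later Python import) order is just a topological sort of that graph. A minimal illustrative sketch using Kahn's algorithm, with the thread's seecr/cq2 modules as sample data (the BuildOrder class and order method are invented for illustration, not part of JCC):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BuildOrder {
    // Kahn's algorithm: given module -> list of modules it imports
    // (every dependency must also appear as a key), return an order in
    // which each module is built only after all of its dependencies.
    public static List<String> order(Map<String, List<String>> deps) {
        Map<String, Integer> remaining = new HashMap<>();
        Map<String, List<String>> dependents = new HashMap<>();
        for (Map.Entry<String, List<String>> e : deps.entrySet()) {
            remaining.put(e.getKey(), e.getValue().size());
            for (String d : e.getValue())
                dependents.computeIfAbsent(d, k -> new ArrayList<>()).add(e.getKey());
        }
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : remaining.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());
        List<String> out = new ArrayList<>();
        while (!ready.isEmpty()) {
            String m = ready.poll();
            out.add(m);
            for (String dep : dependents.getOrDefault(m, List.of()))
                if (remaining.merge(dep, -1, Integer::sum) == 0) ready.add(dep);
        }
        if (out.size() != deps.size())
            throw new IllegalStateException("cyclic module imports");
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new HashMap<>();
        deps.put("seecr", List.of());      // no imports
        deps.put("cq2", List.of("seecr")); // cq2 is built with --import seecr
        System.out.println(order(deps));   // prints [seecr, cq2]
    }
}
```

The same ordering governs the Python side: importing a dependent module before its dependency fails, as the traceback above shows.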
[jira] [Commented] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.
[ https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700479#comment-13700479 ]

Dawid Weiss commented on SOLR-5007:
-----------------------------------

Yeah, HADOOP-9703 looks like the core of the problem, thanks Mark. I still need to figure out how the thread group propagates -- I have it on my list.

> TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.
> Key: SOLR-5007
> URL: https://issues.apache.org/jira/browse/SOLR-5007
> Project: Solr
> Issue Type: Test
> Reporter: Mark Miller
> Assignee: Mark Miller

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1761 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1761/ 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=1433, name=recoveryCmdExecutor-541-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at java.net.Socket.connect(Socket.java:546) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) Stack 
Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=1433, name=recoveryCmdExecutor-541-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at java.net.Socket.connect(Socket.java:546) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) at __randomizedtesting.SeedInfo.seed([6A4A4C8217568469]:0) FAILED: 
junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=1433, name=recoveryCmdExecutor-541-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.7.0) - Build # 609 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/609/ Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch Error Message: collection already exists: awholynewcollection_0 Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: collection already exists: awholynewcollection_0 at __randomizedtesting.SeedInfo.seed([EC2F2FEE365122B9:6DC9A1F6410E4285]:0) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:424) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:264) at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:318) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1522) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:438) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:146) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:835) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Updated] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler updated LUCENE-5086:
----------------------------------
Attachment: LUCENE-5086-branch4x.patch

Patch for branch4x (Lucene 4.x)

> RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
> Key: LUCENE-5086
> URL: https://issues.apache.org/jira/browse/LUCENE-5086
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Shay Banon
> Assignee: Dawid Weiss
> Attachments: LUCENE-5086-branch4x.patch, LUCENE-5086.patch, LUCENE-5086-trunk.patch
>
> Yea, that type of day and that type of title :). Since the last update of Java 6 on OS X, I started to see an annoying icon pop up in the dock whenever running elasticsearch. By default, all of our scripts add the headless AWT flag, so people will probably not encounter it, but it was strange that I saw it now when before I didn't. I started to dig around and saw that when RamUsageEstimator was being loaded, it was causing AWT classes to be loaded. Further investigation showed that, for some reason, calling ManagementFactory#getPlatformMBeanServer with the new Java version causes AWT classes to be loaded (at least on the Mac; I haven't tested on other platforms yet). There are several ways to try to solve it, for example by identifying the bug in the JVM itself, but I think there should be a fix for it in Lucene itself, specifically since there is no need to call #getPlatformMBeanServer to get the hotspot diagnostics bean (it's a heavy call...).
Here is a simple call that will allow getting the hotspot mxbean without using the #getPlatformMBeanServer method, and without causing it to be loaded along with all those nasty AWT classes:

{code}
Object getHotSpotMXBean() {
  try {
    // Java 6
    Class sunMF = Class.forName("sun.management.ManagementFactory");
    return sunMF.getMethod("getDiagnosticMXBean").invoke(null);
  } catch (Throwable t) {
    // ignore
  }
  // potentially Java 7
  try {
    return ManagementFactory.class.getMethod("getPlatformMXBean", Class.class)
        .invoke(null, Class.forName("com.sun.management.HotSpotDiagnosticMXBean"));
  } catch (Throwable t) {
    // ignore
  }
  return null;
}
{code}
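A self-contained, runnable variant of the same idea, reordered as later discussed in this thread: try the public Java 7+ ManagementFactory.getPlatformMXBean accessor first, then fall back to the proprietary sun.management entry point. This is a sketch rather than the committed Lucene patch; on a typical HotSpot JVM the first branch succeeds.

```java
import java.lang.management.ManagementFactory;

public class HotSpotBean {
    // Returns the HotSpot diagnostic MXBean without touching
    // getPlatformMBeanServer, or null on JVMs that expose neither
    // path (e.g. non-HotSpot VMs).
    public static Object getHotSpotMXBean() {
        try {
            // Java 7+: public API, no platform MBean server involved
            Class<?> iface = Class.forName("com.sun.management.HotSpotDiagnosticMXBean");
            return ManagementFactory.class
                .getMethod("getPlatformMXBean", Class.class)
                .invoke(null, iface);
        } catch (Throwable t) {
            // fall through to the Java 6 path
        }
        try {
            // Java 6: proprietary sun.management entry point
            Class<?> sunMF = Class.forName("sun.management.ManagementFactory");
            return sunMF.getMethod("getDiagnosticMXBean").invoke(null);
        } catch (Throwable t) {
            // not available
        }
        return null;
    }

    public static void main(String[] args) {
        Object bean = getHotSpotMXBean();
        System.out.println(bean == null ? "no hotspot bean" : "hotspot bean found");
    }
}
```

Both lookups go through reflection so the class still loads on JVMs where neither type exists.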
[jira] [Updated] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler updated LUCENE-5086:
----------------------------------
Attachment: (was: LUCENE-5086.patch)

> RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
> Key: LUCENE-5086
> URL: https://issues.apache.org/jira/browse/LUCENE-5086
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Shay Banon
> Assignee: Dawid Weiss
> Attachments: LUCENE-5086-branch4x.patch, LUCENE-5086.patch, LUCENE-5086-trunk.patch
[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700494#comment-13700494 ]

Uwe Schindler commented on LUCENE-5086:
---------------------------------------

The given patches for trunk and 4.x seem to work correctly. I changed the 4.x patch a little, removed the extra Class.forName's, and reordered the catch clauses: it now first tries the official Java 7 API and falls back to the Java 6 approach only when not on Java 7. Maybe we can use Constants to detect the JDK earlier, without trying. From my perspective the two patches fix the issue perfectly; the trunk code is the official Java 7 way to get this bean without RPC server/proxy overhead, and no AWT :-)

> RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
> Key: LUCENE-5086
> URL: https://issues.apache.org/jira/browse/LUCENE-5086
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Shay Banon
> Assignee: Dawid Weiss
> Attachments: LUCENE-5086-branch4x.patch, LUCENE-5086.patch, LUCENE-5086-trunk.patch
[jira] [Commented] (LUCENE-5084) EliasFanoDocIdSet
[ https://issues.apache.org/jira/browse/LUCENE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700499#comment-13700499 ]

Adrien Grand commented on LUCENE-5084:
--------------------------------------

The patch looks ready. I think we should just add a bit more randomization to the tests before committing.

> EliasFanoDocIdSet
> Key: LUCENE-5084
> URL: https://issues.apache.org/jira/browse/LUCENE-5084
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Paul Elschot
> Assignee: Adrien Grand
> Priority: Minor
> Fix For: 5.0
> Attachments: LUCENE-5084.patch, LUCENE-5084.patch
>
> DocIdSet in Elias-Fano encoding
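For readers unfamiliar with the encoding: Elias-Fano stores a monotone sequence of n values below an upper bound u by keeping the low l ≈ floor(log2(u/n)) bits of each value verbatim and gap-coding the upper bits in unary, for roughly n·(2 + log2(u/n)) bits in total. The toy sketch below illustrates the idea only; it is not the patch under review, and real implementations bit-pack the low array and use constant-time select instead of the linear scan in get().

```java
import java.util.BitSet;

public class EliasFano {
    private final int l;       // low-bit width, roughly floor(log2(u / n))
    private final int[] low;   // low l bits of each value (unpacked for readability)
    private final BitSet high; // upper bits: bit ((v >>> l) + i) set for the i-th value
    private final int n;

    // `sorted` must be non-decreasing, with every value < upperBound.
    public EliasFano(int[] sorted, int upperBound) {
        n = sorted.length;
        int lb = 0;
        while (n > 0 && (1L << (lb + 1)) <= (long) upperBound / n) lb++;
        l = lb;
        low = new int[n];
        high = new BitSet();
        for (int i = 0; i < n; i++) {
            low[i] = sorted[i] & ((1 << l) - 1);
            high.set((sorted[i] >>> l) + i);
        }
    }

    // Recover the i-th value: select the i-th set bit in `high`,
    // subtract i to undo the unary gaps, then splice the low bits back.
    public int get(int i) {
        int pos = -1;
        for (int seen = 0; seen <= i; seen++) pos = high.nextSetBit(pos + 1);
        return ((pos - i) << l) | low[i];
    }

    public static void main(String[] args) {
        EliasFano ef = new EliasFano(new int[] {1, 4, 7, 18}, 20);
        for (int i = 0; i < 4; i++) System.out.print(ef.get(i) + " "); // 1 4 7 18
        System.out.println();
    }
}
```

The monotonicity requirement is what makes the unary-coded upper bits compact, which is why the encoding fits sorted doc-id sets so well.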
[JENKINS] Lucene-Solr-trunk-Linux (64bit/ibm-j9-jdk7) - Build # 6432 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/6432/ Java: 64bit/ibm-j9-jdk7 -Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;} 1 tests failed. REGRESSION: org.apache.solr.client.solrj.embedded.SolrExampleJettyTest.testAddDelete Error Message: IOException occured when talking to server at: https://127.0.0.1:40006/solr/collection1 Stack Trace: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: https://127.0.0.1:40006/solr/collection1 at __randomizedtesting.SeedInfo.seed([DA01579E0F124328:12E12A94FEBA90FE]:0) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:435) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117) at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:168) at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:146) at org.apache.solr.client.solrj.SolrExampleTests.testAddDelete(SolrExampleTests.java:738) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:88) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) at java.lang.reflect.Method.invoke(Method.java:613) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at
[jira] [Updated] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-5086: -- Attachment: LUCENE-5086-trunk.patch LUCENE-5086-branch4x.patch

Some restructuring, and prevention of a useless NPE that needs to be caught on non-Hotspot JVMs. I also removed the separate private method and wrote the whole thing with nested catches, like at the other places in the static {} initializer (I don't like class methods being called from a static initializer). This version is also easier to debug, as you can add printlns without working around the early return of the original method!

I tested on Windows:
- Oracle JDK 6u32 and 6u45 (64 bit) work; the proprietary sunMF API is used
- Oracle JDK 6u32, 32 bit is unsupported as before (the object alignment JVM option does not exist at all, but the sunMF API is still used)
- Oracle JDK 7u25 (64 bit) works; the new public Java 7 API is used (via reflection, or statically typed in trunk)
- All other JVMs are unsupported, but they don't have object alignment available anyway

By the way: our code looks better and is more universal, because it does not rely on the management bean to detect compressed oops. We measure them! So it also works with J9!

RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
- Key: LUCENE-5086 URL: https://issues.apache.org/jira/browse/LUCENE-5086 Project: Lucene - Core Issue Type: Improvement Reporter: Shay Banon Assignee: Dawid Weiss Attachments: LUCENE-5086-branch4x.patch, LUCENE-5086-branch4x.patch, LUCENE-5086.patch, LUCENE-5086-trunk.patch, LUCENE-5086-trunk.patch

Yea, that type of day and that type of title :). Since the last update of Java 6 on OS X, I started to see an annoying icon pop up in the dock whenever running elasticsearch. By default, all of our scripts add the headless AWT flag so people will probably not encounter it, but it was strange that I saw it now when before I didn't.
I started to dig around, and saw that when RamUsageEstimator was being loaded, it was causing AWT classes to be loaded. Further investigation showed that, for some reason, calling ManagementFactory#getPlatformMBeanServer with the new Java version now causes AWT classes to be loaded (at least on the Mac; I haven't tested on other platforms yet). There are several ways to try to solve it, for example by identifying the bug in the JVM itself, but I think there should be a fix for it in Lucene itself, specifically since there is no need to call #getPlatformMBeanServer to get the hotspot diagnostics bean (it's a heavy call...). Here is a simple method that gets the hotspot MX bean without using #getPlatformMBeanServer, so that method is never invoked and all those nasty AWT classes are never loaded:

{code}
Object getHotSpotMXBean() {
    // Java 6: proprietary Sun API
    try {
        Class<?> sunMF = Class.forName("sun.management.ManagementFactory");
        return sunMF.getMethod("getDiagnosticMXBean").invoke(null);
    } catch (Throwable t) {
        // ignore
    }
    // potentially Java 7: public API, accessed via reflection
    try {
        return ManagementFactory.class.getMethod("getPlatformMXBean", Class.class)
            .invoke(null, Class.forName("com.sun.management.HotSpotDiagnosticMXBean"));
    } catch (Throwable t) {
        // ignore
    }
    return null;
}
{code}

-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
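For reference, here is a self-contained, compilable variant of that lookup which also queries a VM option through the returned bean. The bean lookup mirrors the snippet above; reading `UseCompressedOops` is an added illustration (not part of the original patch), and `getVMOption(String)` / `VMOption.getValue()` are part of the public `com.sun.management` API. The bean may be null on non-HotSpot JVMs.

```java
import java.lang.management.ManagementFactory;

public class HotSpotOptionProbe {
    // Reflective lookup as in the issue's snippet: try the Java 6
    // proprietary Sun API first, then the public Java 7 API. Returns
    // null on non-HotSpot JVMs instead of throwing.
    public static Object getHotSpotMXBean() {
        try { // Java 6: proprietary Sun API
            Class<?> sunMF = Class.forName("sun.management.ManagementFactory");
            return sunMF.getMethod("getDiagnosticMXBean").invoke(null);
        } catch (Throwable t) { /* ignore */ }
        try { // Java 7+: public API, still accessed via reflection
            return ManagementFactory.class
                .getMethod("getPlatformMXBean", Class.class)
                .invoke(null, Class.forName("com.sun.management.HotSpotDiagnosticMXBean"));
        } catch (Throwable t) { /* ignore */ }
        return null;
    }

    public static void main(String[] args) throws Exception {
        Object bean = getHotSpotMXBean();
        if (bean == null) {
            System.out.println("No HotSpot diagnostic bean available");
            return;
        }
        // Look the method up on the public interface, not the (non-public)
        // implementation class, so invoke() is allowed.
        Class<?> iface = Class.forName("com.sun.management.HotSpotDiagnosticMXBean");
        Object option = iface.getMethod("getVMOption", String.class)
            .invoke(bean, "UseCompressedOops");
        Object value = option.getClass().getMethod("getValue").invoke(option);
        System.out.println("UseCompressedOops = " + value);
    }
}
```

Note that neither path ever touches `ManagementFactory#getPlatformMBeanServer`, which is the whole point of the workaround.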
[jira] [Comment Edited] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700513#comment-13700513 ] Uwe Schindler edited comment on LUCENE-5086 at 7/5/13 8:17 AM: ---

Some restructuring, and prevention of a useless NPE that needs to be caught on non-Hotspot JVMs. I also removed the separate private method and wrote the whole thing with nested catches, like at the other places in the static {} initializer (I don't like class methods being called from a static initializer). This version is also easier to debug, as you can add printlns without working around the early return of the original method!

I tested on Windows:
- Oracle JDK 6u32 and 6u45 (64 bit) work; the proprietary sunMF API is used
- Oracle JDK 6u32, 32 bit is unsupported as before (the object alignment JVM option does not exist at all, but the sunMF API is still used)
- Oracle JDK 7u25 and the Oracle JDK 8 preview (64 bit) work; the new public Java 7 API is used (via reflection, or statically typed in trunk)
- All other JVMs are unsupported, but they don't have object alignment available anyway

By the way: our code looks better and is more universal, because it does not rely on the management bean to detect compressed oops. We measure them! So it also works with J9!

was (Author: thetaphi):
Some restructuring, and prevention of a useless NPE that needs to be caught on non-Hotspot JVMs. I also removed the separate private method and wrote the whole thing with nested catches, like at the other places in the static {} initializer (I don't like class methods being called from a static initializer). This version is also easier to debug, as you can add printlns without working around the early return of the original method!
I tested on Windows:
- Oracle JDK 6u32 and 6u45 (64 bit) work; the proprietary sunMF API is used
- Oracle JDK 6u32, 32 bit is unsupported as before (the object alignment JVM option does not exist at all, but the sunMF API is still used)
- Oracle JDK 7u25 (64 bit) works; the new public Java 7 API is used (via reflection, or statically typed in trunk)
- All other JVMs are unsupported, but they don't have object alignment available anyway

By the way: our code looks better and is more universal, because it does not rely on the management bean to detect compressed oops. We measure them! So it also works with J9!
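Uwe's remark about measuring compressed oops rather than asking the management bean can be illustrated with a small probe. This is a sketch of the general technique, not necessarily RamUsageEstimator's exact code: the array index scale of `Object[]` is the width of one reference in bytes, so on a 64-bit JVM a scale of 4 indicates compressed oops. The `sun.arch.data.model` system property is a Sun/Oracle-specific convention.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class OopSizeProbe {
    // Width of one object reference in bytes, measured via Unsafe:
    // 4 on 32-bit JVMs and on 64-bit HotSpot with compressed oops,
    // 8 on 64-bit JVMs without them.
    public static int referenceSize() {
        try {
            // Grab the Unsafe singleton through its private "theUnsafe" field.
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            Unsafe unsafe = (Unsafe) f.get(null);
            return unsafe.arrayIndexScale(Object[].class);
        } catch (Exception e) {
            throw new RuntimeException("Unsafe not available", e);
        }
    }

    public static void main(String[] args) {
        int refSize = referenceSize();
        boolean is64 = "64".equals(System.getProperty("sun.arch.data.model"));
        boolean compressedOops = is64 && refSize == 4;
        System.out.println("reference size = " + refSize
            + " bytes, compressed oops = " + compressedOops);
    }
}
```

Because this measures the JVM directly, it works on any VM that ships `sun.misc.Unsafe` (including J9), with no dependency on HotSpot-specific MX beans.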
[jira] [Assigned] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler reassigned LUCENE-5086: - Assignee: Uwe Schindler (was: Dawid Weiss)
[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700520#comment-13700520 ] Dawid Weiss commented on LUCENE-5086: - This looks good, thanks Uwe.
[jira] [Issue Comment Deleted] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated LUCENE-5086: Comment: was deleted (was: Test the patch with: * Windows ** jrockit ** hotspot 1.6 ** hotspot 1.7 ** j9 * Mac ** (/) default osx 1.6 [first clause does the job] ** openjdk 1.7? * Linux ** jrockit ** hotspot 1.6 ** hotspot 1.7 ** j9 * BSD ** jrockit ** hotspot 1.6 ** hotspot 1.7 ** j9 )
[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 311 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/311/

2 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest:
1) Thread[id=4478, name=recoveryCmdExecutor-2317-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest:
1) Thread[id=4478, name=recoveryCmdExecutor-2317-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
at __randomizedtesting.SeedInfo.seed([6AE0D05A3727D6B3]:0)

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:
1) Thread[id=4478, name=recoveryCmdExecutor-2317-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at
[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700522#comment-13700522 ] ASF subversion and git services commented on LUCENE-5086: - Commit 1499935 from [~thetaphi] [ https://svn.apache.org/r1499935 ] LUCENE-5086: RamUsageEstimator now uses official Java 7 API or a proprietary Oracle Java 6 API to get Hotspot MX bean, preventing AWT classes to be loaded on MacOSX
[jira] [Commented] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700528#comment-13700528 ] ASF subversion and git services commented on LUCENE-5086: - Commit 1499936 from [~thetaphi] [ https://svn.apache.org/r1499936 ] Merged revision(s) 1499935 from lucene/dev/trunk: LUCENE-5086: RamUsageEstimator now uses official Java 7 API or a proprietary Oracle Java 6 API to get Hotspot MX bean, preventing AWT classes to be loaded on MacOSX
[jira] [Resolved] (LUCENE-5086) RamUsageEstimator causes AWT classes to be loaded by calling ManagementFactory#getPlatformMBeanServer
[ https://issues.apache.org/jira/browse/LUCENE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler resolved LUCENE-5086. --- Resolution: Fixed Fix Version/s: 4.4, 5.0 Thanks Shay and Dawid!
[jira] [Commented] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.
[ https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700530#comment-13700530 ] Dawid Weiss commented on SOLR-5007: --- Ok, I think I know. Whenever you're using this:

{code}
@ThreadLeakScope(Scope.NONE) // hdfs mini cluster currently leaks threads
{code}

it means any threads this test leaves behind will be a problem to debug, and any threads they themselves create will cause a thread leak later. So it's not a bug in the test framework. This annotation is present in a number of classes; it'd be best to get rid of it as soon as possible...

TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test. Key: SOLR-5007 URL: https://issues.apache.org/jira/browse/SOLR-5007 Project: Solr Issue Type: Test Reporter: Mark Miller Assignee: Mark Miller
[jira] [Commented] (SOLR-4995) Implementing a Server Capable of Propagating Requests
[ https://issues.apache.org/jira/browse/SOLR-4995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700532#comment-13700532 ] Furkan KAMACI commented on SOLR-4995: - Hi Shalin; I will consider your thoughts and apply a new patch. Implementing a Server Capable of Propagating Requests - Key: SOLR-4995 URL: https://issues.apache.org/jira/browse/SOLR-4995 Project: Solr Issue Type: New Feature Reporter: Furkan KAMACI Attachments: SOLR-4995.patch Currently Solr servers interact with only one Solr node. There should be an implementation that propagates requests to multiple Solr nodes. For example, when Solr is used as SolrCloud, a LukeRequest should be sent to one node in each shard.
[jira] [Commented] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.
[ https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700531#comment-13700531 ] Dawid Weiss commented on SOLR-5007: --- Also: we could add those IPC threads to the thread leak filters if they're harmless instead of doing Scope.NONE. TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test. Key: SOLR-5007 URL: https://issues.apache.org/jira/browse/SOLR-5007 Project: Solr Issue Type: Test Reporter: Mark Miller Assignee: Mark Miller -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
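Dawid's suggestion — filtering out known-harmless threads instead of disabling leak checking with Scope.NONE — can be sketched as below. This is illustrative only: the real hook is the `ThreadFilter` interface plus the `@ThreadLeakFilters` annotation from com.carrotsearch.randomizedtesting (referenced here in comments), and the thread-name prefixes matched are assumptions about what the HDFS IPC threads are called.

```java
// Sketch: instead of @ThreadLeakScope(Scope.NONE) on the test class, register
// a filter so known-harmless threads are ignored by leak accounting while all
// other leaked threads are still reported. In the real framework this class
// would implement com.carrotsearch.randomizedtesting.ThreadFilter and be
// listed via @ThreadLeakFilters(defaultFilters = true, filters = {...}).
public class HdfsIpcThreadFilter {
    // Return true for threads the leak detector should ignore.
    // The name prefixes below are assumptions, not verified HDFS thread names.
    public boolean reject(Thread t) {
        String name = t.getName();
        return name.startsWith("IPC Client")
            || name.startsWith("IPC Parameter Sending Thread");
    }

    public static void main(String[] args) {
        HdfsIpcThreadFilter filter = new HdfsIpcThreadFilter();
        Thread ipc = new Thread();
        ipc.setName("IPC Client (42) connection to localhost/127.0.0.1:8020");
        System.out.println(filter.reject(ipc));        // ignored by leak check
        Thread worker = new Thread();
        worker.setName("recoveryCmdExecutor-457-thread-1");
        System.out.println(filter.reject(worker));     // still reported as a leak
    }
}
```

With such a filter in place, the Scope.NONE annotation could be removed and genuine leaks (like the recoveryCmdExecutor thread in the Jenkins reports below) would still fail the owning suite instead of a later, unrelated one.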
Latest code of Lucene SOLR
Hi, Does github have the latest SOLR code? Or should I be getting it from SVN lucene/dev branch? Thanks, Prathik
RE: Latest code of Lucene SOLR
Hi, You can use https://github.com/apache/lucene-solr/tree/trunk, it might be a little bit outdated (a few hours only). Please note this reflects Lucene/Solr trunk (coming version 5.0), so you need Java 7. Lucene 4.x branch is here: https://github.com/apache/lucene-solr/tree/branch_4x Both don’t show the commit I did 20 minutes ago, but they are up-to-date with 8 hours lag. Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de http://www.thetaphi.de/ eMail: u...@thetaphi.de From: Prathik Puthran [mailto:prathik.puthra...@gmail.com] Sent: Friday, July 05, 2013 10:38 AM To: dev@lucene.apache.org Subject: Latest code of Lucene SOLR Hi, Does github have the latest SOLR code? Or should I be getting it from SVN lucene/dev branch? Thanks, Prathik
[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 1381 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/1381/ 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=995, name=recoveryCmdExecutor-457-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at java.net.Socket.connect(Socket.java:579) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) Stack 
Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=995, name=recoveryCmdExecutor-457-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at java.net.Socket.connect(Socket.java:579) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) at __randomizedtesting.SeedInfo.seed([1E81CF62C05C1C77]:0) FAILED: 
junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=995, name=recoveryCmdExecutor-457-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b94) - Build # 6433 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/6433/ Java: 64bit/jdk1.8.0-ea-b94 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. REGRESSION: org.apache.solr.client.solrj.embedded.LargeVolumeJettyTest.testMultiThreaded Error Message: IOException occured when talking to server at: https://127.0.0.1:43589/solr/collection1 Stack Trace: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: https://127.0.0.1:43589/solr/collection1 at __randomizedtesting.SeedInfo.seed([5EE06A6B9A2A608:647BDC465173ACFE]:0) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:436) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117) at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:168) at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:146) at org.apache.solr.client.solrj.LargeVolumeTestBase.testMultiThreaded(LargeVolumeTestBase.java:61) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:491) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at
[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1762 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1762/ 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=3129, name=recoveryCmdExecutor-1113-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at java.net.Socket.connect(Socket.java:546) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) Stack 
Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=3129, name=recoveryCmdExecutor-1113-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at java.net.Socket.connect(Socket.java:546) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) at __randomizedtesting.SeedInfo.seed([330EC003E43BEF65]:0) FAILED: 
junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=3129, name=recoveryCmdExecutor-1113-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at
[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_25) - Build # 3004 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3004/ Java: 32bit/jdk1.7.0_25 -server -XX:+UseSerialGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestCoreContainer Error Message: Resource in scope SUITE failed to close. Resource was registered from thread Thread[id=2183, name=TEST-TestCoreContainer.testSharedLib-seed#[C2F0897425272444], state=RUNNABLE, group=TGRP-TestCoreContainer], registration stack trace below. Stack Trace: com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope SUITE failed to close. Resource was registered from thread Thread[id=2183, name=TEST-TestCoreContainer.testSharedLib-seed#[C2F0897425272444], state=RUNNABLE, group=TGRP-TestCoreContainer], registration stack trace below. at __randomizedtesting.SeedInfo.seed([C2F0897425272444]:0) at java.lang.Thread.getStackTrace(Thread.java:1568) at com.carrotsearch.randomizedtesting.RandomizedContext.closeAtEnd(RandomizedContext.java:150) at org.apache.lucene.util.LuceneTestCase.closeAfterSuite(LuceneTestCase.java:545) at org.apache.lucene.util._TestUtil.getTempDir(_TestUtil.java:131) at org.apache.solr.core.TestCoreContainer.testSharedLib(TestCoreContainer.java:337) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at
[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #378: POMs out of sync
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/378/ 2 tests failed. FAILED: org.apache.solr.cloud.BasicDistributedZkTest.org.apache.solr.cloud.BasicDistributedZkTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=6724, name=recoveryCmdExecutor-4012-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at java.net.Socket.connect(Socket.java:546) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at 
java.lang.Thread.run(Thread.java:679) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=6724, name=recoveryCmdExecutor-4012-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at java.net.Socket.connect(Socket.java:546) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) at __randomizedtesting.SeedInfo.seed([E9E2CB79D27769B6]:0) FAILED: 
org.apache.solr.cloud.BasicDistributedZkTest.org.apache.solr.cloud.BasicDistributedZkTest Error Message: There are still zombie threads that couldn't be terminated: 1) Thread[id=6724, name=recoveryCmdExecutor-4012-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at
Several builds hanging because of permgen
Several Jenkins builds now hang because of permgen. The runner JVM is dead (can only be killed by -9), last example: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6360/console - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.
[ https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700704#comment-13700704 ] Steve Rowe commented on SOLR-4916: -- {quote} I guess the main use case is for downstream projects to have the ability to filter out these dependencies and avoid pulling down the test time dependencies - but it seems we would care about that in the maven shadow build, not here - we don't publish based on the ivy files right? In that case, it would seem we should simply do the same thing as with some of the other jars in core that are excluded from the webapp - exclude them in the build.xml and have the maven build treat them as part of a test configuration? Steve Rowe, does any of that make any sense? {quote} Yes, it does - if these deps were moved to solr-core and declared in the maven conf as test scope, they would not be pulled in as transitive deps by consumers of the solr-core artifact. bq. My best idea would be to add a second lib folder (test-framework/runtime-libs) that is not packed into the binary ZIP file distribution. It's easy to add: We can add a separate resolve with another target folder. In Maven it should also definitely not be listed as dependency for runtime, too! If we leave the deps where they are now, on test-framework (which I don't think we should do, since these are really only solr-core deps), then they could be declared optional in the maven conf, but then all consumers that need these deps would need to declare them; so, at least in the maven config, there is zero point in keeping them as deps of test-framework. My vote is to move the deps to solr-core. Add support to write and read Solr index files and transaction log files to and from HDFS. 
-- Key: SOLR-4916 URL: https://issues.apache.org/jira/browse/SOLR-4916 Project: Solr Issue Type: New Feature Reporter: Mark Miller Assignee: Mark Miller Fix For: 5.0, 4.4 Attachments: SOLR-4916-ivy.patch, SOLR-4916.patch, SOLR-4916.patch, SOLR-4916.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-5003) Add option to add rowid/line number to CSV Update Handler
[ https://issues.apache.org/jira/browse/SOLR-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Grant Ingersoll updated SOLR-5003: -- Attachment: SOLR-5003.patch Pretty straightforward Add option to add rowid/line number to CSV Update Handler - Key: SOLR-5003 URL: https://issues.apache.org/jira/browse/SOLR-5003 Project: Solr Issue Type: Improvement Reporter: Grant Ingersoll Assignee: Grant Ingersoll Priority: Minor Fix For: 5.0, 4.4 Attachments: SOLR-5003.patch In some cases of exporting from a DB to CSV, the only unique id you have is the rowid. This issue is to add an optional (off by default) rowid field to the document which simply contains the line number of the row. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
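Conceptually, the option just appends the input line number as an extra field to each CSV record. A minimal client-side sketch of the same idea (the field name and offset handling here are illustrative, not the loader's actual code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RowIdSketch {
    // Append a synthetic line-number field to each CSV record, mirroring in
    // spirit what an optional rowid parameter on the CSV loader would do.
    // Assumes the first line is the header; no quoting/escaping is handled.
    static List<String> withRowId(List<String> csvLines, String field, int offset) {
        List<String> out = new ArrayList<>();
        out.add(csvLines.get(0) + "," + field);             // extend the header
        for (int i = 1; i < csvLines.size(); i++) {
            out.add(csvLines.get(i) + "," + (offset + i));  // line number as id
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("name,city", "alice,nyc", "bob,sfo");
        // → [name,city,rowid, alice,nyc,1, bob,sfo,2]
        System.out.println(withRowId(lines, "rowid", 0));
    }
}
```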
[jira] [Commented] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.
[ https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700741#comment-13700741 ] Uwe Schindler commented on SOLR-4916: - bq. My vote is to move the deps to solr-core. +1. Like test-only in Maven, for IVY, I would put them into a separate config and store in a separate directory, so they are not packaged: {{solr/core/test-libs}} Add support to write and read Solr index files and transaction log files to and from HDFS. -- Key: SOLR-4916 URL: https://issues.apache.org/jira/browse/SOLR-4916 Project: Solr Issue Type: New Feature Reporter: Mark Miller Assignee: Mark Miller Fix For: 5.0, 4.4 Attachments: SOLR-4916-ivy.patch, SOLR-4916.patch, SOLR-4916.patch, SOLR-4916.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
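In Maven terms, the outcome being discussed amounts to declaring the Hadoop test-time dependencies on solr-core with {{test}} scope, so they are never resolved transitively by consumers of the solr-core artifact. A sketch (the artifact coordinates below are placeholders for illustration, not the actual dependency list):

{code:xml}
<!-- sketch: a dependency visible to solr-core's own tests only -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-minicluster</artifactId>
  <version>${hadoop.version}</version>
  <!-- test scope: compiled/run against in tests, never transitive -->
  <scope>test</scope>
</dependency>
{code}

The Ivy-side equivalent, as Uwe suggests, would be a separate configuration resolved into its own directory (e.g. {{solr/core/test-libs}}) that the packaging targets simply never copy.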
Solr Wikis and Reference guide
What's our approach to editing/updating the old wiki vs. the new ref guide? I know from talking w/ Hoss how we are going to version it, but are we going to maintain the docs on both wikis? I've got a minor tweak to CSV handling that I want to document and just wondering how best to handle it. Other: 1. Are we deprecating the MoinMoin for Solr? 2. I'll add a link to the website and the old wiki. -Grant - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5003) Add option to add rowid/line number to CSV Update Handler
[ https://issues.apache.org/jira/browse/SOLR-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700923#comment-13700923 ] Yonik Seeley commented on SOLR-5003:
{code}
+//validate our row id
+if (rowId != null && rowId.equals("") == false){
+  SchemaField sf = schema.getFieldOrNull(rowId);
+  if(sf == null)
+    throw new SolrException( SolrException.ErrorCode.BAD_REQUEST, "Invalid field name for rowId:'" + rowId + "'");
+}
{code}
In general, we should let downstream handle this type of stuff so things like schemaless will work. Add option to add rowid/line number to CSV Update Handler - Key: SOLR-5003 URL: https://issues.apache.org/jira/browse/SOLR-5003 Project: Solr Issue Type: Improvement Reporter: Grant Ingersoll Assignee: Grant Ingersoll Priority: Minor Fix For: 5.0, 4.4 Attachments: SOLR-5003.patch In some cases of exporting from a DB to CSV, the only unique id you have is the rowid. This issue is to add an optional (off by default) rowid field to the document which simply contains the line number of the row. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5003) Add option to add rowid/line number to CSV Update Handler
[ https://issues.apache.org/jira/browse/SOLR-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700933#comment-13700933 ] ASF subversion and git services commented on SOLR-5003: --- Commit 1500046 from [~gsingers] [ https://svn.apache.org/r1500046 ] SOLR-5003: add rowid (line number) option to CSV Loader
[jira] [Commented] (LUCENE-5084) EliasFanoDocIdSet
[ https://issues.apache.org/jira/browse/LUCENE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700938#comment-13700938 ] Paul Elschot commented on LUCENE-5084: -- Would you have any specific purpose in mind for randomized testing? Randomized test cases with uniform data distributions are not likely to test exceptional situations in the high bits, such as long high-bit words with all zeros or all ones. EliasFanoDocIdSet - Key: LUCENE-5084 URL: https://issues.apache.org/jira/browse/LUCENE-5084 Project: Lucene - Core Issue Type: Improvement Reporter: Paul Elschot Assignee: Adrien Grand Priority: Minor Fix For: 5.0 Attachments: LUCENE-5084.patch, LUCENE-5084.patch DocIdSet in Elias-Fano encoding
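For readers unfamiliar with the encoding, the high/low split behind the "high bits" remark can be sketched as follows. This is a toy illustration of the Elias-Fano idea, not the patch's code; the class and method names are invented. Each sorted value is split into L low bits (stored verbatim) and a high part (encoded in unary); clustered doc ids make the high-bit sequence degenerate into long runs of ones or zeros, which uniformly random test data rarely produces:

```java
import java.util.ArrayList;
import java.util.List;

public class EliasFanoSketch {
    // Number of low bits L = floor(log2(upperBound / numValues)), the
    // standard Elias-Fano choice (0 when the ratio is below 2).
    public static int lowBitCount(long upperBound, int numValues) {
        if (numValues == 0) return 0;
        long ratio = upperBound / numValues;
        int l = 0;
        while ((1L << (l + 1)) <= ratio) l++;
        return l;
    }

    // Split each sorted value into {high, low} parts; a real encoder would
    // then write the high parts in unary and the low parts in fixed width.
    public static List<long[]> encode(long[] sorted, long upperBound) {
        int l = lowBitCount(upperBound, sorted.length);
        List<long[]> pairs = new ArrayList<>();
        for (long v : sorted) {
            pairs.add(new long[] { v >>> l, v & ((1L << l) - 1) });
        }
        return pairs;
    }
}
```

With upperBound 16 and the two values {5, 13}, L is 3, so 13 splits into high part 1 and low part 5.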
[jira] [Commented] (SOLR-5003) Add option to add rowid/line number to CSV Update Handler
[ https://issues.apache.org/jira/browse/SOLR-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700941#comment-13700941 ] ASF subversion and git services commented on SOLR-5003: --- Commit 1500049 from [~gsingers] [ https://svn.apache.org/r1500049 ] SOLR-5003: merge
[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4115 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4115/ 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=1725, name=recoveryCmdExecutor-880-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at java.net.Socket.connect(Socket.java:579) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) Stack 
Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=1725, name=recoveryCmdExecutor-880-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at java.net.Socket.connect(Socket.java:579) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) at __randomizedtesting.SeedInfo.seed([941929A070FE861D]:0) FAILED: 
junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=1725, name=recoveryCmdExecutor-880-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
[jira] [Resolved] (SOLR-5003) Add option to add rowid/line number to CSV Update Handler
[ https://issues.apache.org/jira/browse/SOLR-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Grant Ingersoll resolved SOLR-5003. --- Resolution: Fixed
[jira] [Updated] (SOLR-4982) Creating a core while referencing system properties looks like it loses files.
[ https://issues.apache.org/jira/browse/SOLR-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-4982: - Attachment: SOLR-4982.patch Latest version. Thanks to Shalin I managed to find a way to have the test terminate. Along the way I've added a new method to SolrTestCaseJ4 that creates a no-core test harness that assumes discovery mode, on the theory that it took me a while to figure out how to do that and we are going to do this a lot more in the future. Any easier ways to do this? Running tests and doing some manual inspection. If that all works out I'll probably check this in today. Creating a core while referencing system properties looks like it loses files. -- Key: SOLR-4982 URL: https://issues.apache.org/jira/browse/SOLR-4982 Project: Solr Issue Type: Bug Components: multicore Affects Versions: 4.3, 5.0 Reporter: Erick Erickson Assignee: Erick Erickson Attachments: SOLR-4982.patch, SOLR-4982.patch, SOLR-4982.patch If you use the core admin handler to create a core that references system properties, then index files without restarting Solr, your files are indexed to the wrong place. Say for instance I define a sys prop EOE=/Users/Erick/tmp and create a core with this request localhost:8983/solr/admin/cores?action=CREATE&name=coreZ&instanceDir=coreZ&dataDir=%24%7BEOE%7D where %24%7BEOE%7D is really ${EOE} after URL escaping. What gets preserved in solr.xml is correct, dataDir is set to ${EOE}. And if I restart Solr, then index documents, they wind up in /Users/Erick/tmp. This is as it should be. HOWEVER, if rather than immediately restarting Solr I index some documents to CoreZ, they go in solr_home/CoreZ/${EOE}. The literal path is ${EOE}, dollar sign, curly braces and all. How important is this to fix for 4.4?
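The substitution that should (but apparently doesn't) happen when the core is created without a restart can be sketched like this. This is an illustration only, not Solr's actual property-resolution code; the class and method names are invented:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PropertyExpansionSketch {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    // Resolve ${name} references against the given properties; unresolved
    // names are left verbatim, which is exactly the broken literal dataDir
    // the issue describes (solr_home/CoreZ/${EOE}).
    public static String expand(String value, Map<String, String> props) {
        Matcher m = VAR.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String replacement = props.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(sb, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```

The bug in the report amounts to the newly created core using the unexpanded string (the empty-properties case below) until Solr restarts and re-resolves it.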
[JENKINS] Lucene-Solr-4.x-Linux (32bit/ibm-j9-jdk7) - Build # 6362 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6362/ Java: 32bit/ibm-j9-jdk7 -Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;} 1 tests failed. FAILED: org.apache.solr.cloud.BasicDistributedZk2Test.testDistribSearch Error Message: Server at http://127.0.0.1:54637/onenodecollectioncore returned non ok status:404, message:Can not find: /onenodecollectioncore/update Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Server at http://127.0.0.1:54637/onenodecollectioncore returned non ok status:404, message:Can not find: /onenodecollectioncore/update at __randomizedtesting.SeedInfo.seed([AD1C83ABC14F84FD:2CFA0DB3B610E4C1]:0) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:385) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117) at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116) at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102) at org.apache.solr.cloud.BasicDistributedZk2Test.testNodeWithoutCollectionForwarding(BasicDistributedZk2Test.java:196) at org.apache.solr.cloud.BasicDistributedZk2Test.doTest(BasicDistributedZk2Test.java:88) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:835) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:88) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) at java.lang.reflect.Method.invoke(Method.java:613) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at
[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 610 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/610/ Java: 64bit/jdk1.6.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC All tests passed Build Log: [...truncated 8897 lines...] [junit4] ERROR: JVM J0 ended with an exception, command line: /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/bin/java -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/heapdumps -Dtests.prefix=tests -Dtests.seed=AB804A514AA67162 -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.4 -Dtests.cleanthreads=perClass -Djava.util.logging.config.file=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/junit4/logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true -Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. -Djava.io.tmpdir=. -Djunit4.tempDir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test/temp -Dclover.db.dir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/clover/db -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Djava.security.policy=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/junit4/tests.policy -Dlucene.version=4.4-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Dfile.encoding=ISO-8859-1 -classpath
[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 306 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/306/ 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.AliasIntegrationTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.AliasIntegrationTest: 1) Thread[id=2541, name=recoveryCmdExecutor-1012-thread-1, state=RUNNABLE, group=TGRP-AliasIntegrationTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at java.net.Socket.connect(Socket.java:546) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) Stack Trace: 
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.AliasIntegrationTest: 1) Thread[id=2541, name=recoveryCmdExecutor-1012-thread-1, state=RUNNABLE, group=TGRP-AliasIntegrationTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at java.net.Socket.connect(Socket.java:546) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) at __randomizedtesting.SeedInfo.seed([1F8BB3BBA2F229D4]:0) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.AliasIntegrationTest Error 
Message: There are still zombie threads that couldn't be terminated:1) Thread[id=2541, name=recoveryCmdExecutor-1012-thread-1, state=RUNNABLE, group=TGRP-AliasIntegrationTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at
[jira] [Updated] (SOLR-4982) Creating a core while referencing system properties looks like it loses files.
[ https://issues.apache.org/jira/browse/SOLR-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-4982: - Priority: Blocker (was: Major) Turning this into a blocker so I don't lose track of it for 4.4. Need to get SOLR-4948 checked in to untangle the test harness so I can invoke it cleanly with no cores defined in discovery mode. Maybe there's a better way?
[jira] [Commented] (LUCENE-5084) EliasFanoDocIdSet
[ https://issues.apache.org/jira/browse/LUCENE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701030#comment-13701030 ] Adrien Grand commented on LUCENE-5084: -- I tend to like testing different scenarios every time the tests are run (and tests, especially lucene-core tests, are run very, very often by the CI servers); this has helped find many unsuspected bugs in the past. For example, the random variable can be used to compute slight variations on the exceptional situations which are interesting to test.
[jira] [Updated] (SOLR-4386) Variable expansion doesn't work in DIH SimplePropertiesWriter's filename
[ https://issues.apache.org/jira/browse/SOLR-4386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Dyer updated SOLR-4386: - Attachment: SOLR-4386.patch Here is a failing test for this. The problem is that PropertiesWriter gets a DataImporter in its init method, which does not give it access to the Context/VariableResolver. Either this API should change (no problem, it is experimental) or we should expose the context on the DataImporter. Really, though, we should reorganize DataImporter/DocBuilder/Context so that there is just one class every component needs to go to for answers. Variable expansion doesn't work in DIH SimplePropertiesWriter's filename Key: SOLR-4386 URL: https://issues.apache.org/jira/browse/SOLR-4386 Project: Solr Issue Type: Bug Components: contrib - DataImportHandler Affects Versions: 4.1 Reporter: Jonas Birgander Labels: dataimport Attachments: SOLR-4386.patch I'm testing Solr 4.1, but I've run into some problems with DataImportHandler's new propertyWriter tag. I'm trying to use variable expansion in the `filename` field when using SimplePropertiesWriter.
Here are the relevant parts of my configuration:

conf/solrconfig.xml:
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">db-data-config.xml</str>
  </lst>
  <lst name="invariants">
    <!-- country_code is available -->
    <str name="country_code">${country_code}</str>
    <!-- In the real config, more variables are set here -->
  </lst>
</requestHandler>

conf/db-data-config.xml:
<dataConfig>
  <propertyWriter dateFormat="yyyy-MM-dd HH:mm:ss" type="SimplePropertiesWriter" directory="conf"
                  filename="${dataimporter.request.country_code}.dataimport.properties" />
  <dataSource type="JdbcDataSource"
              driver="${dataimporter.request.db_driver}"
              url="${dataimporter.request.db_url}"
              user="${dataimporter.request.db_user}"
              password="${dataimporter.request.db_password}"
              batchSize="${dataimporter.request.db_batch_size}" />
  <document>
    <entity name="item" query="my normal SQL, not really relevant -- country=${dataimporter.request.country_code}">
      <field column="id"/>
      <!-- ...more field tags... -->
      <field column="$deleteDocById"/>
      <field column="$skipDoc"/>
    </entity>
  </document>
</dataConfig>

If country_code is set to gb, I want the last_index_time to be read and written in the file conf/gb.dataimport.properties, instead of the default conf/dataimport.properties. The variable expansion works perfectly in the SQL and the setup of the data source, but not in the property writer's filename field.
When initiating an import, the log file shows:

Jan 30, 2013 11:25:42 AM org.apache.solr.handler.dataimport.DataImporter maybeReloadConfiguration
INFO: Loading DIH Configuration: db-data-config.xml
Jan 30, 2013 11:25:42 AM org.apache.solr.handler.dataimport.config.ConfigParseUtil verifyWithSchema
INFO: The field :$skipDoc present in DataConfig does not have a counterpart in Solr Schema
Jan 30, 2013 11:25:42 AM org.apache.solr.handler.dataimport.config.ConfigParseUtil verifyWithSchema
INFO: The field :$deleteDocById present in DataConfig does not have a counterpart in Solr Schema
Jan 30, 2013 11:25:42 AM org.apache.solr.handler.dataimport.DataImporter loadDataConfig
INFO: Data Configuration loaded successfully
Jan 30, 2013 11:25:42 AM org.apache.solr.handler.dataimport.DataImporter doFullImport
INFO: Starting Full Import
Jan 30, 2013 11:25:42 AM org.apache.solr.handler.dataimport.SimplePropertiesWriter readIndexerProperties
WARNING: Unable to read: ${dataimporter.request.country_code}.dataimport.properties
Re: [JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_25) - Build # 3004 - Failure!
This test fails consistently for me on Windows... does anyone know when it started, and is someone looking into it? -Yonik http://lucidworks.com On Fri, Jul 5, 2013 at 7:58 AM, Policeman Jenkins Server jenk...@thetaphi.de wrote: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3004/ Java: 32bit/jdk1.7.0_25 -server -XX:+UseSerialGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestCoreContainer Error Message: Resource in scope SUITE failed to close. Resource was registered from thread Thread[id=2183, name=TEST-TestCoreContainer.testSharedLib-seed#[C2F0897425272444], state=RUNNABLE, group=TGRP-TestCoreContainer], registration stack trace below. Stack Trace: com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope SUITE failed to close. Resource was registered from thread Thread[id=2183, name=TEST-TestCoreContainer.testSharedLib-seed#[C2F0897425272444], state=RUNNABLE, group=TGRP-TestCoreContainer], registration stack trace below. at __randomizedtesting.SeedInfo.seed([C2F0897425272444]:0) at java.lang.Thread.getStackTrace(Thread.java:1568) at com.carrotsearch.randomizedtesting.RandomizedContext.closeAtEnd(RandomizedContext.java:150) at org.apache.lucene.util.LuceneTestCase.closeAfterSuite(LuceneTestCase.java:545) at org.apache.lucene.util._TestUtil.getTempDir(_TestUtil.java:131) at org.apache.solr.core.TestCoreContainer.testSharedLib(TestCoreContainer.java:337) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) 
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at
[jira] [Updated] (SOLR-4694) DataImporter uses wrong format for 'last_index_time'
[ https://issues.apache.org/jira/browse/SOLR-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Dyer updated SOLR-4694: - Priority: Minor (was: Blocker) DataImporter uses wrong format for 'last_index_time' Key: SOLR-4694 URL: https://issues.apache.org/jira/browse/SOLR-4694 Project: Solr Issue Type: Bug Components: contrib - DataImportHandler Affects Versions: 4.2 Reporter: Arul Kalaipandian Priority: Minor Labels: formatDate DataImporter uses the wrong format on the first import (no dataimport.properties in the /conf folder yet). {code} R.LAST_MODIFICATION_DATE = (TO_DATE('${dih.last_index_time}' {code} is substituted as follows: {code} R.LAST_MODIFICATION_DATE = (TO_DATE('Thu Jan 01 01:00:00 CET 1970','yyyy-mm-dd hh24:mi:ss'). {code} It's similar to SOLR-1496. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
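The mismatch above is between java.util.Date's default toString() form ("Thu Jan 01 01:00:00 CET 1970") and the 'yyyy-MM-dd HH:mm:ss'-style pattern the SQL expects. A minimal sketch of the difference, using only the JDK (class and method names here are hypothetical, not DIH code):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class LastIndexTimeFormat {
    // The pattern the query expects; the bug is that Date.toString()'s
    // default form is substituted into the SQL instead.
    static final String PATTERN = "yyyy-MM-dd HH:mm:ss";

    static String format(Date d) {
        SimpleDateFormat f = new SimpleDateFormat(PATTERN, Locale.ROOT);
        f.setTimeZone(TimeZone.getTimeZone("UTC"));
        return f.format(d);
    }

    public static void main(String[] args) {
        Date epoch = new Date(0L);
        System.out.println(epoch);          // platform-dependent "Thu Jan 01 ..." form
        System.out.println(format(epoch));  // 1970-01-01 00:00:00
    }
}
```

Oracle's TO_DATE cannot parse the first form with the second form's pattern, which is exactly the failure the issue reports.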
[jira] [Commented] (LUCENE-3972) Improve AllGroupsCollector implementations
[ https://issues.apache.org/jira/browse/LUCENE-3972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701060#comment-13701060 ] Paul Masurel commented on LUCENE-3972: -- (e-commerce Solr user here) We hit the very same performance problem with pathological queries involving 1M+ unique groups, and we need to solve this issue for our business. Would a hybrid approach that switches implementation halfway through, once the number of unique groups detected gets too high, be welcome? I also wonder whether the number of segments plays a significant role here. Did you observe that in your benchmarking? Improve AllGroupsCollector implementations -- Key: LUCENE-3972 URL: https://issues.apache.org/jira/browse/LUCENE-3972 Project: Lucene - Core Issue Type: Improvement Components: modules/grouping Reporter: Martijn van Groningen Attachments: LUCENE-3972.patch, LUCENE-3972.patch I think that the performance of TermAllGroupsCollector, DVAllGroupsCollector.BR and DVAllGroupsCollector.SortedBR can be improved by using BytesRefHash to store the groups instead of an ArrayList.
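The hybrid idea in the comment above can be sketched with plain JDK collections (this is an illustrative toy, not Lucene's collector API; the class and threshold are hypothetical): collect unique group keys exactly until a threshold, then switch strategies.

```java
import java.util.HashSet;
import java.util.Set;

/** Toy sketch of a hybrid unique-group counter: exact hashing up to a
 *  threshold, then a handoff to a cheaper strategy. */
public class HybridGroupCounter {
    private final int threshold;
    private final Set<String> groups = new HashSet<>();
    private boolean degraded = false;

    HybridGroupCounter(int threshold) { this.threshold = threshold; }

    void collect(String groupKey) {
        if (degraded) {
            // A real collector would switch to e.g. a BytesRefHash or
            // per-segment ordinal counting here; the toy just stops tracking.
            return;
        }
        groups.add(groupKey);
        if (groups.size() > threshold) {
            degraded = true; // switch implementations mid-collection
        }
    }

    boolean isDegraded() { return degraded; }

    public static void main(String[] args) {
        HybridGroupCounter c = new HybridGroupCounter(1000);
        for (int doc = 0; doc < 5000; doc++) c.collect("group" + (doc % 1500));
        System.out.println(c.isDegraded()); // true: 1500 unique groups > 1000
    }
}
```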
[jira] [Commented] (SOLR-5003) Add option to add rowid/line number to CSV Update Handler
[ https://issues.apache.org/jira/browse/SOLR-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701063#comment-13701063 ] Grant Ingersoll commented on SOLR-5003: --- Going to add a rowid_offset as well, in case people want to use this in connection with more than one file. Add option to add rowid/line number to CSV Update Handler - Key: SOLR-5003 URL: https://issues.apache.org/jira/browse/SOLR-5003 Project: Solr Issue Type: Improvement Reporter: Grant Ingersoll Assignee: Grant Ingersoll Priority: Minor Fix For: 5.0, 4.4 Attachments: SOLR-5003.patch In some cases of exporting from a DB to CSV, the only unique id you have is the rowid. This issue is to add an optional (off by default) rowid field to the document which simply contains the line number of the row.
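The feature (row number plus an offset, so rows from multiple files still get unique ids) can be illustrated with a few lines of plain Java. This is a sketch of the idea, not the actual Solr CSV handler code; the names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

/** Illustration: tag each CSV row with its line number plus an offset. */
public class RowIdTagger {
    static List<String> tag(String csv, String rowidField, int rowidOffset) {
        List<String> docs = new ArrayList<>();
        String[] lines = csv.split("\n");
        for (int rownum = 0; rownum < lines.length; rownum++) {
            // Append the synthetic rowid column; with a per-file offset,
            // ids from a second file can continue where the first left off.
            docs.add(lines[rownum] + "," + rowidField + "=" + (rownum + rowidOffset));
        }
        return docs;
    }

    public static void main(String[] args) {
        System.out.println(tag("a,b\nc,d", "rowid", 10));
        // [a,b,rowid=10, c,d,rowid=11]
    }
}
```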
[jira] [Commented] (SOLR-5003) Add option to add rowid/line number to CSV Update Handler
[ https://issues.apache.org/jira/browse/SOLR-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701065#comment-13701065 ] ASF subversion and git services commented on SOLR-5003: --- Commit 1500097 from [~gsingers] [ https://svn.apache.org/r1500097 ] SOLR-5003: add rowidOffset as well Add option to add rowid/line number to CSV Update Handler - Key: SOLR-5003 URL: https://issues.apache.org/jira/browse/SOLR-5003 Project: Solr Issue Type: Improvement Reporter: Grant Ingersoll Assignee: Grant Ingersoll Priority: Minor Fix For: 5.0, 4.4 Attachments: SOLR-5003.patch In some cases of exporting from a DB to CSV, the only unique id you have is the rowid. This issue is to add an optional (off by default) rowid field to the document which simply contains the line number of the row. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #901: POMs out of sync
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/901/ 2 tests failed. FAILED: org.apache.solr.cloud.BasicDistributedZkTest.org.apache.solr.cloud.BasicDistributedZkTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=7870, name=recoveryCmdExecutor-4604-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at java.net.Socket.connect(Socket.java:579) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at 
java.lang.Thread.run(Thread.java:722) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=7870, name=recoveryCmdExecutor-4604-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at java.net.Socket.connect(Socket.java:579) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) at __randomizedtesting.SeedInfo.seed([ECD72E35B07ACC27]:0) FAILED: 
org.apache.solr.cloud.BasicDistributedZkTest.org.apache.solr.cloud.BasicDistributedZkTest Error Message: There are still zombie threads that couldn't be terminated: 1) Thread[id=7870, name=recoveryCmdExecutor-4604-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at
[jira] [Commented] (SOLR-5003) Add option to add rowid/line number to CSV Update Handler
[ https://issues.apache.org/jira/browse/SOLR-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701066#comment-13701066 ] ASF subversion and git services commented on SOLR-5003: --- Commit 1500098 from [~gsingers] [ https://svn.apache.org/r1500098 ] SOLR-5003: merge to 4x Add option to add rowid/line number to CSV Update Handler - Key: SOLR-5003 URL: https://issues.apache.org/jira/browse/SOLR-5003 Project: Solr Issue Type: Improvement Reporter: Grant Ingersoll Assignee: Grant Ingersoll Priority: Minor Fix For: 5.0, 4.4 Attachments: SOLR-5003.patch In some cases of exporting from a DB to CSV, the only unique id you have is the rowid. This issue is to add an optional (off by default) rowid field to the document which simply contains the line number of the row. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5084) EliasFanoDocIdSet
[ https://issues.apache.org/jira/browse/LUCENE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701067#comment-13701067 ] Dawid Weiss commented on LUCENE-5084: - bq. Randomized test cases with uniform data distributions are not likely to test exceptional situations in the high bits such as long high bit words with all zeros or all ones. The exceptional situations you can test separately. I am constantly surprised at how many exceptional conditions in pretty much regular code one can overlook. Like Adrien said -- it doesn't hurt to be in there and if it catches something, it's even better. EliasFanoDocIdSet - Key: LUCENE-5084 URL: https://issues.apache.org/jira/browse/LUCENE-5084 Project: Lucene - Core Issue Type: Improvement Reporter: Paul Elschot Assignee: Adrien Grand Priority: Minor Fix For: 5.0 Attachments: LUCENE-5084.patch, LUCENE-5084.patch DocIdSet in Elias-Fano encoding -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
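For readers unfamiliar with the encoding being discussed, here is a toy Elias-Fano sketch in plain Java (not the LUCENE-5084 patch): low bits are stored fixed-width, high bits in unary. The "long runs of zeros or ones in the high bits" mentioned above arise directly from this unary part. A real implementation packs the low bits L to a word; here they sit in an int[] purely to keep the structure visible.

```java
import java.util.BitSet;

/** Toy Elias-Fano encoder/decoder over a non-empty sorted int array. */
public class EliasFanoSketch {
    final int L;                         // low bits per value
    final BitSet high = new BitSet();    // high bits, unary-coded
    final int[] low;

    EliasFanoSketch(int[] sortedValues, int upperBound) {
        int n = sortedValues.length;     // assumed > 0
        int l = 0;
        while ((1L << (l + 1)) * n <= upperBound) l++;  // L ~ floor(log2(u/n))
        L = l;
        low = new int[n];
        for (int i = 0; i < n; i++) {
            low[i] = sortedValues[i] & ((1 << L) - 1);
            int h = sortedValues[i] >>> L;
            high.set(h + i);             // the i-th one lands at position h_i + i
        }
    }

    /** Recover the i-th value by locating the (i+1)-th set high bit. */
    int get(int i) {
        int p = -1;
        for (int k = 0; k <= i; k++) p = high.nextSetBit(p + 1);
        return ((p - i) << L) | low[i];
    }

    public static void main(String[] args) {
        int[] vals = {5, 8, 8, 15, 32};
        EliasFanoSketch ef = new EliasFanoSketch(vals, 36);
        for (int i = 0; i < vals.length; i++) System.out.println(ef.get(i));
    }
}
```

A production DocIdSet would replace the linear `nextSetBit` scan in `get` with select/rank structures to make lookup constant-time.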
Re: [JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_25) - Build # 3004 - Failure!
I've looked into it briefly - on my Windows box, there's a file jar1.jar that isn't getting deleted, apparently because the class loader is holding it open after the test accesses a resource inside the archive, so the containing temp directory can't be deleted. It's quite strange though, since there is another file jar2.jar in the same test that is properly cleaned up. I'm not sure what it all means. - Steve On Jul 5, 2013, at 2:14 PM, Yonik Seeley yo...@lucidworks.com wrote: This test fails consistently for me on Windows... does anyone know when it started, and is someone looking into it? -Yonik http://lucidworks.com On Fri, Jul 5, 2013 at 7:58 AM, Policeman Jenkins Server jenk...@thetaphi.de wrote: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3004/ Java: 32bit/jdk1.7.0_25 -server -XX:+UseSerialGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestCoreContainer
[jira] [Commented] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null
[ https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701077#comment-13701077 ] ASF subversion and git services commented on SOLR-5002: --- Commit 1500102 from [~rcmuir] [ https://svn.apache.org/r1500102 ] SOLR-5002: optimize numDocs(Query,DocSet) when filterCache is null optimize numDocs(Query,DocSet) when filterCache is null --- Key: SOLR-5002 URL: https://issues.apache.org/jira/browse/SOLR-5002 Project: Solr Issue Type: Improvement Reporter: Robert Muir Attachments: SOLR-5002.patch getDocSet(Query, DocSet) has this opto, but numDocs does not. Especially in this case, where we just want the intersection count, its faster to do a filtered query with TotalHitCountCollector and not create bitsets at all... -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
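The optimization described above, counting an intersection without materializing a result set, can be illustrated with java.util.BitSet (this is a sketch of the idea, not Solr's SolrIndexSearcher code):

```java
import java.util.BitSet;

/** Two ways to get an intersection count; only the first allocates a set. */
public class IntersectionCount {
    // "materialize then count": build a third bitset, then count its bits
    static int viaBitset(BitSet query, BitSet filter) {
        BitSet result = (BitSet) query.clone();
        result.and(filter);
        return result.cardinality();
    }

    // "collector" style, like TotalHitCountCollector: walk the query's
    // matches and just bump a counter, never allocating a result set
    static int streaming(BitSet query, BitSet filter) {
        int count = 0;
        for (int doc = query.nextSetBit(0); doc >= 0; doc = query.nextSetBit(doc + 1)) {
            if (filter.get(doc)) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        BitSet q = new BitSet(); q.set(1); q.set(3); q.set(5); q.set(7);
        BitSet f = new BitSet(); f.set(2); f.set(3); f.set(7); f.set(9);
        System.out.println(viaBitset(q, f));  // 2
        System.out.println(streaming(q, f)); // 2
    }
}
```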
[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_25) - Build # 3005 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3005/ Java: 32bit/jdk1.7.0_25 -client -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestCoreContainer Error Message: Resource in scope SUITE failed to close. Resource was registered from thread Thread[id=6806, name=TEST-TestCoreContainer.testSharedLib-seed#[1383A6DC06866EF1], state=RUNNABLE, group=TGRP-TestCoreContainer], registration stack trace below. Stack Trace: com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope SUITE failed to close. Resource was registered from thread Thread[id=6806, name=TEST-TestCoreContainer.testSharedLib-seed#[1383A6DC06866EF1], state=RUNNABLE, group=TGRP-TestCoreContainer], registration stack trace below. at __randomizedtesting.SeedInfo.seed([1383A6DC06866EF1]:0) at java.lang.Thread.getStackTrace(Thread.java:1568) at com.carrotsearch.randomizedtesting.RandomizedContext.closeAtEnd(RandomizedContext.java:150) at org.apache.lucene.util.LuceneTestCase.closeAfterSuite(LuceneTestCase.java:545) at org.apache.lucene.util._TestUtil.getTempDir(_TestUtil.java:131) at org.apache.solr.core.TestCoreContainer.testSharedLib(TestCoreContainer.java:337) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at
Re: [JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_25) - Build # 3004 - Failure!
Maybe the ResourceLoader is not correctly closed. Also the test should be disabled on Java 6 VMs. See a previous commit by Robert. Steve Rowe sar...@gmail.com schrieb: I've looked into it briefly - on my Windows box, there's a file jar1.jar that isn't getting deleted, apparently because the class loader is holding it open after the test accesses a resource inside the archive, so the containing temp directory can't be deleted. It's quite strange though, since there is another file jar2.jar in the same test that is properly cleaned up. I'm not sure what it all means. - Steve On Jul 5, 2013, at 2:14 PM, Yonik Seeley yo...@lucidworks.com wrote: This test fails consistently for me on Windows... does anyone know when it started, and is someone looking into it? -Yonik http://lucidworks.com
Refactoring Lucene to Variable-Width DocIds
Many distributed systems (e.g., Git, Dynamo) use a 16-byte or larger pseudorandom (or random) identifier for documents. It would be nice to refactor Lucene to return variable-width document IDs, so that indices could be implemented over databases such as HBase, Accumulo, or Cassandra using large, non-sequential identifiers instead of the current scheme, which requires IDs to be sequential and 4 bytes wide. Has anyone thought about doing this? Is there interest in such a refactoring or prototype?
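To make the width contrast concrete, here is a small JDK-only sketch (the class and method names are illustrative, not a proposed Lucene API): today's doc IDs fit in a 4-byte int, while a Dynamo/Git-style random key needs 16 bytes.

```java
import java.nio.ByteBuffer;
import java.util.UUID;

/** Contrast between a sequential 4-byte doc id and a 16-byte random key. */
public class DocIdWidth {
    // Lucene-style: dense, sequential, fixed 4 bytes
    static byte[] sequentialId(int docId) {
        return ByteBuffer.allocate(4).putInt(docId).array();
    }

    // Dynamo/Git-style: sparse, random, 16 bytes (here a type-4 UUID)
    static byte[] randomId() {
        UUID u = UUID.randomUUID();
        return ByteBuffer.allocate(16)
                .putLong(u.getMostSignificantBits())
                .putLong(u.getLeastSignificantBits())
                .array();
    }

    public static void main(String[] args) {
        System.out.println(sequentialId(7).length);  // 4
        System.out.println(randomId().length);       // 16
    }
}
```

The refactoring question is essentially whether Lucene's internals (postings, norms, deletes) could address documents by the second kind of key rather than the first.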
[jira] [Updated] (SOLR-4788) Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time is empty
[ https://issues.apache.org/jira/browse/SOLR-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Dyer updated SOLR-4788: - Attachment: SOLR-4788.patch Here is a patch with test coverage and a fix. I can commit this after the weekend. Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time is empty -- Key: SOLR-4788 URL: https://issues.apache.org/jira/browse/SOLR-4788 Project: Solr Issue Type: Bug Affects Versions: 4.2, 4.3 Environment: solr-spec 4.2.1.2013.03.26.08.26.55 solr-impl 4.2.1 1461071 - mark - 2013-03-26 08:26:55 lucene-spec 4.2.1 lucene-impl 4.2.1 1461071 - mark - 2013-03-26 08:23:34 OR solr-spec 4.3.0 solr-impl 4.3.0 1477023 - simonw - 2013-04-29 15:10:12 lucene-spec 4.3.0 lucene-impl 4.3.0 1477023 - simonw - 2013-04-29 14:55:14 Reporter: chakming wong Assignee: Shalin Shekhar Mangar Attachments: entitytest.patch, entitytest.patch, entitytest.patch, entitytest.patch, entitytest.patch, SOLR-4788.patch
{code:title=conf/dataimport.properties|borderStyle=solid}
entity1.last_index_time=2013-05-06 03\:02\:06
last_index_time=2013-05-06 03\:05\:22
entity2.last_index_time=2013-05-06 03\:03\:14
entity3.last_index_time=2013-05-06 03\:05\:22
{code}
{code:title=conf/solrconfig.xml|borderStyle=solid}
<?xml version="1.0" encoding="UTF-8" ?>
...
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">dihconfig.xml</str>
  </lst>
</requestHandler>
...
{code}
{code:title=conf/dihconfig.xml|borderStyle=solid}
<?xml version="1.0" encoding="UTF-8" ?>
<dataConfig>
  <dataSource name="source1" type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://*:*/*" user="*" password="*"/>
  <document name="strings">
    <entity name="entity1" pk="id" dataSource="source1"
            query="SELECT * FROM table_a"
            deltaQuery="SELECT table_a_id FROM table_b WHERE last_modified > '${dataimporter.entity1.last_index_time}'"
            deltaImportQuery="SELECT * FROM table_a WHERE id = '${dataimporter.entity1.id}'"
            transformer="TemplateTransformer">
      <field ... /> ... <field ... />
    </entity>
    <entity name="entity2"> ... </entity>
    <entity name="entity3"> ... </entity>
  </document>
</dataConfig>
{code}
In the above setup, *dataimporter.entity1.last_index_time* is an *empty string*, which causes the SQL query to fail.
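The lookup the bug is about, an entity-scoped key with fallback to the global one, can be sketched with java.util.Properties (this is an illustration of the expected behavior, not DIH's actual code; the method name is hypothetical). Note how the properties file escapes colons as `\:`, which Properties unescapes on load.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

/** Resolve "<entity>.last_index_time", falling back to "last_index_time". */
public class LastIndexTimeLookup {
    static String lookup(String propsText, String entity) {
        Properties p = new Properties();
        try {
            p.load(new StringReader(propsText));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        String v = p.getProperty(entity + ".last_index_time");
        return v != null ? v : p.getProperty("last_index_time");
    }

    public static void main(String[] args) {
        String props = "entity1.last_index_time=2013-05-06 03\\:02\\:06\n"
                     + "last_index_time=2013-05-06 03\\:05\\:22\n";
        System.out.println(lookup(props, "entity1")); // entity-scoped value
        System.out.println(lookup(props, "entity9")); // falls back to global
    }
}
```

The reported bug is equivalent to this fallback resolving to an empty string for entity1, which then gets interpolated into the deltaQuery.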
[jira] [Commented] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null
[ https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701087#comment-13701087 ] ASF subversion and git services commented on SOLR-5002: --- Commit 1500110 from [~rcmuir] [ https://svn.apache.org/r1500110 ] SOLR-5002: optimize numDocs(Query,DocSet) when filterCache is null optimize numDocs(Query,DocSet) when filterCache is null --- Key: SOLR-5002 URL: https://issues.apache.org/jira/browse/SOLR-5002 Project: Solr Issue Type: Improvement Reporter: Robert Muir Attachments: SOLR-5002.patch getDocSet(Query, DocSet) has this opto, but numDocs does not. Especially in this case, where we just want the intersection count, its faster to do a filtered query with TotalHitCountCollector and not create bitsets at all... -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null
[ https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Robert Muir resolved SOLR-5002.
---
Resolution: Fixed
Fix Version/s: 4.4, 5.0
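The optimization behind SOLR-5002 boils down to streaming the query's matches and counting membership in the filter, rather than materializing a bitset for the query and intersecting two sets. A language-neutral sketch of the two approaches (Python stand-ins for illustration, not the Solr code itself):

```python
def num_docs_via_bitsets(query_docs, filter_docs):
    # Unoptimized path: materialize a set (bitset) for the query's
    # matches, then intersect it with the filter's set.
    return len(set(query_docs) & set(filter_docs))

def num_docs_streaming(query_docs, filter_docs):
    # Optimized path: no intermediate bitset is built; matches are
    # simply counted, the way a TotalHitCountCollector counts the
    # hits of a filtered query.
    return sum(1 for doc in query_docs if doc in filter_docs)

query_hits = [1, 4, 7, 9, 12]       # doc IDs matching the query
filter_set = {4, 9, 10, 12}         # doc IDs in the DocSet
assert num_docs_via_bitsets(query_hits, filter_set) == 3
assert num_docs_streaming(query_hits, filter_set) == 3
```

Both paths return the same count; the streaming path just avoids allocating and populating the intermediate structure when the result is only a number.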
[jira] [Commented] (SOLR-1045) Build Solr index using Hadoop MapReduce
[ https://issues.apache.org/jira/browse/SOLR-1045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701092#comment-13701092 ]
Otis Gospodnetic commented on SOLR-1045:
bq. Is there any improvement for that issue otherwise I can make a development for it?
Please go for it! :) See also SOLR-1301

Build Solr index using Hadoop MapReduce
---
Key: SOLR-1045
URL: https://issues.apache.org/jira/browse/SOLR-1045
Project: Solr
Issue Type: New Feature
Reporter: Ning Li
Fix For: 4.4
Attachments: SOLR-1045.0.patch

The goal is a contrib module that builds a Solr index using Hadoop MapReduce. It is different from the Solr support in Nutch, which sends a document to a Solr server in a reduce task. Here, the goal is to build/update the Solr index within map/reduce tasks. This also achieves better parallelism when the number of map tasks is greater than the number of reduce tasks, which is usually the case.
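The parallelism argument above is that each map task builds a partial index over its own input split, and the (typically fewer) reduce tasks only merge partials. A toy sketch of that split-then-merge shape using dict-based inverted indexes (the merge semantics here are illustrative, not those of the attached patch):

```python
from collections import defaultdict

def map_task(docs):
    # Build a partial inverted index for one input split.
    partial = defaultdict(set)
    for doc_id, text in docs:
        for term in text.split():
            partial[term].add(doc_id)
    return partial

def merge(partials):
    # Reduce side: union each term's postings across partial indexes.
    merged = defaultdict(set)
    for partial in partials:
        for term, postings in partial.items():
            merged[term] |= postings
    return merged

# Two input splits -> two independent map tasks, one merge.
splits = [[(1, "solr hadoop"), (2, "hadoop index")],
          [(3, "solr index")]]
index = merge(map_task(s) for s in splits)
```

Because the map tasks share nothing, adding input splits adds indexing parallelism directly; only the cheap merge is serialized on the reduce side.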
Re: [CONF] Apache Solr Reference Guide Uploading Data with Index Handlers
The ASRG commit emails seem to be sending the whole Confluence wiki page rather than just a diff like the old Solr wiki. Is that a tunable preference for Confluence? Thanks.
-- Jack Krupansky

-Original Message- From: Grant Ingersoll (Confluence) Sent: Friday, July 05, 2013 2:54 PM To: comm...@lucene.apache.org Subject: [CONF] Apache Solr Reference Guide Uploading Data with Index Handlers

Space: Apache Solr Reference Guide (https://cwiki.apache.org/confluence/display/solr)
Page: Uploading Data with Index Handlers (https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers)
Edited by Grant Ingersoll:
-
{section} {column:width=75%} Index Handlers are Update Handlers designed to add, delete, and update documents in the index. Solr includes several of these to allow indexing documents in XML, CSV, and JSON. The example URLs given here reflect the handler configuration in the supplied {{solrconfig.xml}}. If the name associated with the handler is changed, then the URLs will need to be modified. It is quite possible to access the same handler using more than one name, which can be useful if you wish to specify different sets of default options. New {{UpdateProcessors}} now default to the {{uniqueKey}} field if it is the appropriate type for configured fields. The processors automatically add fields with new UUIDs and Timestamps to {{SolrInputDocuments}}. These work similarly to the {{<field default="..."/>}} option in {{schema.xml}}, but are applied in the {{UpdateProcessorChain}}. They may be used prior to other {{UpdateProcessors}}, or to generate a {{uniqueKey}} field value when using the {{DistributedUpdateProcessor}} (i.e., SolrCloud), {{TimestampUpdateProcessorFactory}}, {{UUIDUpdateProcessorFactory}}, and {{DefaultValueUpdateProcessorFactory}}. {column} {column:width=25%} {panel} Index Handlers covered in this section: {toc:minLevel=2|maxLevel=2} {panel} {column} {section}

h2.
Combined UpdateRequestHandlers

For the separate XML, CSV, JSON, and javabin update request handlers explained below, Solr provides a single {{RequestHandler}}, and chooses the appropriate {{ContentStreamLoader}} based on the {{Content-Type}} header, entered as the {{qt}} (query type) parameter matching the name of registered handlers. The standard request handler is the default and will be used if {{qt}} is not specified in the request.
{code:lang=xml|borderStyle=solid|borderColor=#66}
<requestHandler name="standard" />
<requestHandler name="custom" />
{code}

h3. Configuring Shard Handlers for Distributed Searches

Inside the RequestHandler, you can configure and specify the shard handler used for distributed search. You can also plug in custom shard handlers. To configure the standard handler, set up the configuration as in this example:
{code:lang=xml|borderStyle=solid|borderColor=#66}
<requestHandler name="standard" default="true">
  <!-- other params go here -->
  <shardHandlerFactory>
    <int name="socketTimeout">1000</int>
    <int name="connTimeout">5000</int>
  </shardHandlerFactory>
</requestHandler>
{code}
The parameters that can be specified are as follows:
|| Parameter || Default || Explanation ||
| socketTimeout | 0 (use OS default) | The amount of time in ms that a socket is allowed to wait |
| connTimeout | 0 (use OS default) | The amount of time in ms that is accepted for binding / connecting a socket |
| maxConnectionsPerHost | 20 | The maximum number of connections made to each individual shard in a distributed search |
| corePoolSize | 0 | The retained lowest limit on the number of threads used in coordinating distributed search |
| maximumPoolSize | Integer.MAX_VALUE | The maximum number of threads used for coordinating distributed search |
| maxThreadIdleTime | 5 seconds | The amount of time to wait before threads are scaled back in response to a reduction in load |
| sizeOfQueue | \-1 | If specified, the thread pool will use a backing queue instead of a direct handoff buffer. High-throughput systems will want to configure this to be a direct handoff (with \-1). Systems that desire better latency will want to configure a reasonable queue size to handle variations in requests. |
| fairnessPolicy | false | Chooses the JVM specifics dealing with fair policy queuing. If enabled, distributed searches will be handled in a first-in, first-out fashion at a cost to throughput. If disabled, throughput will be favored over latency. |
{topofpage}

h2. XMLUpdateRequestHandler for XML-formatted Data

h3. Configuration

The default configuration file has the update request handler configured by default.
{code:lang=xml|borderStyle=solid|borderColor=#66}
<requestHandler name="/update" class="solr.XmlUpdateRequestHandler" />
{code}

h3. Adding Documents

Documents are added to the index
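The sizeOfQueue / fairnessPolicy trade-off mirrors the general choice of backing queue for a thread pool: a direct handoff favors throughput, while a bounded buffer absorbs bursts for steadier latency. A rough stdlib sketch of just the selection logic (note: Python's queue.Queue has no true rendezvous mode, so maxsize=1 is only an approximation of a direct-handoff queue such as Java's SynchronousQueue):

```python
import queue

def make_work_queue(size_of_queue=-1):
    """Pick a backing queue for a worker pool, mimicking the
    sizeOfQueue semantics described above."""
    if size_of_queue == -1:
        # Direct handoff: a producer blocks until a worker is ready
        # to take the task (approximated here with maxsize=1).
        return queue.Queue(maxsize=1)
    # A positive size buffers up to that many pending requests,
    # trading peak throughput for smoother latency under bursts.
    return queue.Queue(maxsize=size_of_queue)

handoff = make_work_queue()        # throughput-oriented
buffered = make_work_queue(100)    # latency-smoothing, 100-deep buffer
```

The function name and the -1 sentinel follow the parameter table above; everything else is an illustrative stand-in rather than Solr's shard-handler implementation.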
[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1764 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1764/ 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=3043, name=recoveryCmdExecutor-1169-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at java.net.Socket.connect(Socket.java:546) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) Stack 
Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=3043, name=recoveryCmdExecutor-1169-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at java.net.Socket.connect(Socket.java:546) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) at __randomizedtesting.SeedInfo.seed([3FDBA1EAD7EDF002]:0) FAILED: 
junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=3043, name=recoveryCmdExecutor-1169-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at
[jira] [Created] (SOLR-5008) Add Support for composite CSV fields
Grant Ingersoll created SOLR-5008:
-
Summary: Add Support for composite CSV fields
Key: SOLR-5008
URL: https://issues.apache.org/jira/browse/SOLR-5008
Project: Solr
Issue Type: Improvement
Reporter: Grant Ingersoll
Priority: Minor
Fix For: 5.0, 4.4

It is often useful to be able to create a single field from more than one CSV column. For instance, I may want to take a latitude and longitude column in my CSV and map that to a single field name.
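The proposed use case is mapping two CSV columns into one Solr field, e.g. latitude and longitude into a single point value. A sketch of that transform done outside Solr (the "lat,lon" output format matches Solr's location-field convention; the column names are made up for the example):

```python
import csv
import io

def combine_columns(rows, cols, target, sep=","):
    # Merge the named source columns into a single target field per row,
    # removing the originals.
    for row in rows:
        row[target] = sep.join(row.pop(c) for c in cols)
        yield row

data = io.StringIO("id,lat,lon\n1,48.8566,2.3522\n")
docs = list(combine_columns(csv.DictReader(data), ["lat", "lon"], "location"))
# docs[0] -> {"id": "1", "location": "48.8566,2.3522"}
```

The same function could equally be framed as an update-processor-style transform (Robert Muir's point below), since nothing in it is CSV-specific beyond the parsing.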
[jira] [Commented] (SOLR-5008) Add Support for composite CSV fields
[ https://issues.apache.org/jira/browse/SOLR-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701098#comment-13701098 ]
Robert Muir commented on SOLR-5008:
---
What does this have to do with CSV? combining two fields together seems unrelated and better handled at e.g. updateprocessor or something?
[jira] [Commented] (SOLR-5008) Add Support for composite CSV fields
[ https://issues.apache.org/jira/browse/SOLR-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701099#comment-13701099 ]
Grant Ingersoll commented on SOLR-5008:
---
[~rcmuir] Yeah, kind of, except we allow splits at the CSV level and various other transformations that could just as well be handled at the updateprocessor level too, except they are already baked in.
[jira] [Commented] (SOLR-5008) Add Support for composite CSV fields
[ https://issues.apache.org/jira/browse/SOLR-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701100#comment-13701100 ]
Robert Muir commented on SOLR-5008:
---
I don't see that as a good reason to add more stuff to the wrong place. Anything that currently doesn't belong there should be fixed instead. I'd rather see it made easier to do stuff like combine two fields in a way that's not specific to the format, because it's *totally unrelated* to CSV, and useful for other formats, too.
Re: Refactoring Lucene to Variable-Width DocIds
Hi, Lucene heavily relies on the fact that the internal doc IDs are dense and sequential. This is at the core of Lucene's design and is the key to compact postings lists and easily addressable doc values, stored fields, etc... Is there a specific reason why you don't want to handle these 16-byte identifiers on top of the Lucene index (as a standard field)? -- Adrien
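Adrien's suggestion amounts to keeping the application's 16-byte identifier as an ordinary indexed field and resolving it to the dense internal doc ID at lookup time. A dict-based sketch of that mapping layer (nothing Lucene-specific; it only illustrates the dense-internal / wide-external split):

```python
import uuid

class IdMapping:
    """Map fixed-width external IDs to dense, sequential internal doc IDs."""

    def __init__(self):
        self.ext_to_int = {}   # external 16-byte ID -> internal doc ID
        self.int_to_ext = []   # internal doc ID -> external 16-byte ID

    def add(self, ext_id: bytes) -> int:
        # Internal IDs stay dense and sequential, as Lucene requires.
        internal = len(self.int_to_ext)
        self.ext_to_int[ext_id] = internal
        self.int_to_ext.append(ext_id)
        return internal

m = IdMapping()
doc0 = m.add(uuid.uuid4().bytes)   # a 16-byte external identifier
doc1 = m.add(uuid.uuid4().bytes)
```

In a real index, the `ext_to_int` direction would be a term lookup on the ID field and `int_to_ext` a stored-field or doc-values read; the point is that the external IDs never replace the internal numbering.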
[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 1383 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/1383/ 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=2539, name=recoveryCmdExecutor-1172-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at java.net.Socket.connect(Socket.java:579) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) Stack 
Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=2539, name=recoveryCmdExecutor-1172-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at java.net.Socket.connect(Socket.java:579) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) at __randomizedtesting.SeedInfo.seed([1BE8DA0669CA28DC]:0) FAILED: 
junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=2539, name=recoveryCmdExecutor-1172-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at
[jira] [Commented] (SOLR-5008) Add Support for composite CSV fields
[ https://issues.apache.org/jira/browse/SOLR-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701112#comment-13701112 ]
Grant Ingersoll commented on SOLR-5008:
---
I guess I don't see it as wrong, but simply a choice that someone made. In this particular case, I do think, for the most part, it is relatively specific to CSV, since CSVs often come in a format that one doesn't control (which is the case I have now, and why I am even considering this), and some simple operations on them to make them easier to consume in Solr w/o having to install update processors, etc. are a good thing, which is why the map, split, and other options were added in the first place.
[jira] [Commented] (SOLR-4916) Add support to write and read Solr index files and transaction log files to and from HDFS.
[ https://issues.apache.org/jira/browse/SOLR-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701116#comment-13701116 ]
ASF subversion and git services commented on SOLR-4916:
---
Commit 1500135 from [~steve_rowe] [ https://svn.apache.org/r1500135 ]
SOLR-4916: IntelliJ configuration (merged trunk r1497105)

Add support to write and read Solr index files and transaction log files to and from HDFS.
--
Key: SOLR-4916
URL: https://issues.apache.org/jira/browse/SOLR-4916
Project: Solr
Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
Fix For: 5.0, 4.4
Attachments: SOLR-4916-ivy.patch, SOLR-4916.patch, SOLR-4916.patch, SOLR-4916.patch
[jira] [Commented] (SOLR-5008) Add Support for composite CSV fields
[ https://issues.apache.org/jira/browse/SOLR-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701117#comment-13701117 ]
Jack Krupansky commented on SOLR-5008:
--
bq. w/o having to install update processors, etc.
The new StatelessScriptUpdateProcessor makes it super easy to do a lot of this basic field-level processing (see examples in my book!). In fact, you might consider implementing the totality of this Jira as one such script with parameters - or at least as a prototype for easy experimentation.
[jira] [Commented] (SOLR-4978) Time is stripped from datetime column when imported into Solr date field
[ https://issues.apache.org/jira/browse/SOLR-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701129#comment-13701129 ]
Bill Au commented on SOLR-4978:
---
This will only have an effect if convertType is enabled, and will only affect the date field type. convertType is disabled by default. I guess most people use the default setting, which is probably why no one has noticed this bug before. The current behavior for the date field type is incorrect when convertType is enabled. Making the change will fix the bug, so date fields indexed by DIH when convertType is enabled will actually have a time portion.

Time is stripped from datetime column when imported into Solr date field
Key: SOLR-4978
URL: https://issues.apache.org/jira/browse/SOLR-4978
Project: Solr
Issue Type: Bug
Components: contrib - DataImportHandler
Reporter: Bill Au

I discovered that all dates I imported into a Solr date field from a MySQL datetime column have the time stripped (i.e., the time portion is always 00:00:00). After double-checking my DIH config and trying different things, I decided to take a look at the DIH code. When I looked at the source code of the DIH JdbcDataSource class, I discovered that it is using java.sql.ResultSet and its getDate() method to handle date fields. The getDate() method returns java.sql.Date. The Java API doc for java.sql.Date (http://docs.oracle.com/javase/6/docs/api/java/sql/Date.html) states: "To conform with the definition of SQL DATE, the millisecond values wrapped by a java.sql.Date instance must be 'normalized' by setting the hours, minutes, seconds, and milliseconds to zero in the particular time zone with which the instance is associated." I am so surprised by my finding that I think I may not be right. What am I doing wrong here? This is such a big hole in DIH; how could it be possible that no one has noticed this until now? Has anyone successfully imported a datetime column into a Solr date field using DIH?
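The SOLR-4978 bug reduces to the type chosen when reading the column: a SQL DATE value is normalized to midnight, while a TIMESTAMP keeps the time component (in JDBC terms, ResultSet.getTimestamp() instead of getDate()). The same normalization, illustrated with Python's datetime types:

```python
from datetime import datetime

dt = datetime(2013, 7, 5, 14, 54, 33)   # a MySQL DATETIME value

as_date = dt.date()   # like ResultSet.getDate(): the time portion is dropped
as_timestamp = dt     # like ResultSet.getTimestamp(): the time portion survives

print(as_date)        # 2013-07-05
print(as_timestamp)   # 2013-07-05 14:54:33
```

This is why every DIH-imported date showed 00:00:00: the time was discarded at read time by the DATE conversion, before the value ever reached the Solr field.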
[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4116 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4116/ 4 tests failed. REGRESSION: org.apache.solr.cloud.UnloadDistributedZkTest.testDistribSearch Error Message: Server at http://127.0.0.1:11281/_opr/p returned non ok status:503, message:Server is shutting down Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Server at http://127.0.0.1:11281/_opr/p returned non ok status:503, message:Server is shutting down at __randomizedtesting.SeedInfo.seed([E66C784C6709F049:678AF65410569075]:0) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:385) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.UnloadDistributedZkTest.testCoreUnloadAndLeaders(UnloadDistributedZkTest.java:231) at org.apache.solr.cloud.UnloadDistributedZkTest.doTest(UnloadDistributedZkTest.java:75) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:835) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at
[JENKINS] Lucene-Solr-4.x-Linux (32bit/ibm-j9-jdk6) - Build # 6365 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6365/ Java: 32bit/ibm-j9-jdk6 -Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;} 1 tests failed. REGRESSION: org.apache.solr.cloud.BasicDistributedZk2Test.testDistribSearch Error Message: Server at http://127.0.0.1:35324/qpv/y/onenodecollectioncore returned non ok status:404, message:Can not find: /qpv/y/onenodecollectioncore/update Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Server at http://127.0.0.1:35324/qpv/y/onenodecollectioncore returned non ok status:404, message:Can not find: /qpv/y/onenodecollectioncore/update at __randomizedtesting.SeedInfo.seed([72A747B3022302AF:F341C9AB757C6293]:0) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:385) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117) at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116) at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102) at org.apache.solr.cloud.BasicDistributedZk2Test.testNodeWithoutCollectionForwarding(BasicDistributedZk2Test.java:196) at org.apache.solr.cloud.BasicDistributedZk2Test.doTest(BasicDistributedZk2Test.java:88) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:835) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37) at java.lang.reflect.Method.invoke(Method.java:611) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at
[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1765 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1765/ 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=2796, name=recoveryCmdExecutor-1913-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at java.net.Socket.connect(Socket.java:546) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) Stack 
Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=2796, name=recoveryCmdExecutor-1913-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at java.net.Socket.connect(Socket.java:546) at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127) at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180) at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294) at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645) at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679) at __randomizedtesting.SeedInfo.seed([A78B7363E8FD1119]:0) FAILED: 
junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=2796, name=recoveryCmdExecutor-1913-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) at
[jira] [Created] (SOLR-5009) CoreContainer instantiates 2 SolrResourceLoaders (implicit) but does not close all of them
Uwe Schindler created SOLR-5009: --- Summary: CoreContainer instantiates 2 SolrResourceLoaders (implicit) but does not close all of them Key: SOLR-5009 URL: https://issues.apache.org/jira/browse/SOLR-5009 Project: Solr Issue Type: Bug Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.0, 4.4 Attachments: SOLR-5009.patch Windows fails to delete files when they are open. CoreContainer opens a second SolrResourceLoader (implicit) when calling ConfigSolr.fromFile(). It should not do this and should instead use the main loader, which is closed on shutdown. This will remove the support for implicit ResourceLoader in ConfigSolr, preventing multiple classloaders for the same solr home. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
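The issue describes a leak pattern: config parsing quietly creates its own SolrResourceLoader instead of reusing the caller's, so the second instance is never closed. A minimal pure-JDK sketch of that pattern (LoaderLeakSketch and openLoader are hypothetical stand-ins, not actual Solr code), using URLClassLoader as the closeable loader:

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical stand-in for SolrResourceLoader: a closeable class loader.
// The bug pattern: a helper creates its own loader instead of reusing the
// caller's, so the second instance is never closed and its open jar
// handles block file deletion on Windows.
public class LoaderLeakSketch {
    static URLClassLoader openLoader(URL[] urls) {
        return new URLClassLoader(urls);
    }

    public static void main(String[] args) throws IOException {
        URL[] home = new URL[0];

        // Buggy shape: two loaders for the same "solr home"; in the real
        // bug the second close below never happens, so it leaks.
        URLClassLoader main = openLoader(home);
        URLClassLoader implicit = openLoader(home); // created inside config parsing
        main.close();
        implicit.close();

        // Fixed shape: a single loader, closed deterministically at shutdown.
        try (URLClassLoader only = openLoader(home)) {
            System.out.println("one loader, closed on exit: " + (only != null));
        }
    }
}
```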
[jira] [Updated] (SOLR-5009) CoreContainer instantiates 2 SolrResourceLoaders (implicit) but does not close all of them
[ https://issues.apache.org/jira/browse/SOLR-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated SOLR-5009: Attachment: SOLR-5009.patch CoreContainer instantiates 2 SolrResourceLoaders (implicit) but does not close all of them -- Key: SOLR-5009 URL: https://issues.apache.org/jira/browse/SOLR-5009 Project: Solr Issue Type: Bug Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.0, 4.4 Attachments: SOLR-5009.patch Windows fails to delete files when they are open. CoreContainer opens a second SolrResourceLoader (implicit) when calling ConfigSolr.fromFile(). It should not do this and should instead use the main loader, which is closed on shutdown. This will remove the support for implicit ResourceLoader in ConfigSolr, preventing multiple classloaders for the same solr home.
RE: [JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_25) - Build # 3004 - Failure!
I opened https://issues.apache.org/jira/browse/SOLR-5009 I will commit soon and backport to 4.x (once the tests have run). The bug was implicit SolrResourceLoaders which were not closed and were also wasting resources (see a Solr startup log, where you can see that every config loads 2 resource loaders, visible as multiple addToClassLoader calls with the same URL). - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de/ eMail: u...@thetaphi.de From: Uwe Schindler [mailto:u...@thetaphi.de] Sent: Friday, July 05, 2013 9:09 PM To: dev@lucene.apache.org Subject: Re: [JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_25) - Build # 3004 - Failure! Maybe the ResourceLoader is not correctly closed. Also, the test should be disabled on Java 6 VMs. See a previous commit by Robert. Steve Rowe sar...@gmail.com schrieb: I've looked into it briefly - on my Windows box, there's a file jar1.jar that isn't getting deleted, apparently because the class loader is holding it open after the test accesses a resource inside the archive, so the containing temp directory can't be deleted. It's quite strange though, since there is another file jar2.jar in the same test that is properly cleaned up. I'm not sure what it all means. - Steve On Jul 5, 2013, at 2:14 PM, Yonik Seeley yo...@lucidworks.com wrote: This test fails consistently for me on Windows... does anyone know when it started, and is someone looking into it? -Yonik http://lucidworks.com On Fri, Jul 5, 2013 at 7:58 AM, Policeman Jenkins Server jenk...@thetaphi.de wrote: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3004/ Java: 32bit/jdk1.7.0_25 -server -XX:+UseSerialGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestCoreContainer Error Message: Resource in scope SUITE failed to close.
Resource was registered from thread Thread[id=2183, name=TEST-TestCoreContainer.testSharedLib-seed#[C2F0897425272444], state=RUNNABLE, group=TGRP-TestCoreContainer], registration stack trace below. Stack Trace: com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope SUITE failed to close. Resource was registered from thread Thread[id=2183, name=TEST-TestCoreContainer.testSharedLib-seed#[C2F0897425272444], state=RUNNABLE, group=TGRP-TestCoreContainer], registration stack trace below. at __randomizedtesting.SeedInfo.seed([C2F0897425272444]:0) at java.lang.Thread.getStackTrace(Thread.java:1568) at com.carrotsearch.randomizedtesting.RandomizedContext.closeAtEnd(RandomizedContext.java:150) at org.apache.lucene.util.LuceneTestCase.closeAfterSuite(LuceneTestCase.java:545) at org.apache.lucene.util._TestUtil.getTempDir(_TestUtil.java:131) at org.apache.solr.core.TestCoreContainer.testSharedLib(TestCoreContainer.java:337) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) at
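Steve's observation above (the class loader holding jar1.jar open after a resource is accessed, blocking deletion of the temp directory) can be reproduced with plain JDK classes. A minimal sketch, not the Lucene test code; the jar and entry names here are just illustrative:

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.JarOutputStream;
import java.util.zip.ZipEntry;

// Demonstrates the Windows failure mode described above: a URLClassLoader
// keeps its jar open after serving a resource from it, so the jar (and its
// parent temp dir) cannot be deleted until the loader is closed. On POSIX
// the delete succeeds either way, which is why this only shows up on
// Windows CI machines.
public class JarHeldOpenDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("jar-demo");
        Path jar = dir.resolve("jar1.jar");
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(jar))) {
            out.putNextEntry(new ZipEntry("marker.txt"));
            out.write("hello".getBytes("UTF-8"));
            out.closeEntry();
        }

        URLClassLoader loader = new URLClassLoader(new URL[]{jar.toUri().toURL()});
        URL res = loader.getResource("marker.txt"); // opens the jar internally
        System.out.println("resource found: " + (res != null));

        loader.close(); // releases the jar handle (Java 7+); without this,
                        // Files.delete(jar) throws on Windows
        Files.delete(jar);
        Files.delete(dir);
        System.out.println("deleted cleanly");
    }
}
```

URLClassLoader.close() only exists since Java 7, which matches the suggestion in the thread to disable the test on Java 6 VMs.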
[jira] [Updated] (SOLR-5009) CoreContainer instantiates 2 SolrResourceLoaders (implicit) but does not close all of them
[ https://issues.apache.org/jira/browse/SOLR-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated SOLR-5009: Component/s: multicore CoreContainer instantiates 2 SolrResourceLoaders (implicit) but does not close all of them -- Key: SOLR-5009 URL: https://issues.apache.org/jira/browse/SOLR-5009 Project: Solr Issue Type: Bug Components: multicore Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.0, 4.4 Attachments: SOLR-5009.patch Windows fails to delete files when they are open. CoreContainer opens a second SolrResourceLoader (implicit) when calling ConfigSolr.fromFile(). It should not do this and should instead use the main loader, which is closed on shutdown. This will remove the support for implicit ResourceLoader in ConfigSolr, preventing multiple classloaders for the same solr home.
[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 627 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/627/ Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseSerialGC 1 tests failed. REGRESSION: org.apache.solr.client.solrj.TestBatchUpdate.testWithBinaryBean Error Message: IOException occured when talking to server at: https://127.0.0.1:53143/solr/collection1 Stack Trace: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: https://127.0.0.1:53143/solr/collection1 at __randomizedtesting.SeedInfo.seed([67649BA91E409666:48F9AAB7E245144]:0) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:435) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180) at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117) at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:168) at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:146) at org.apache.solr.client.solrj.TestBatchUpdate.testWithBinaryBean(TestBatchUpdate.java:92) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559) at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70) at
[jira] [Commented] (SOLR-5002) optimize numDocs(Query,DocSet) when filterCache is null
[ https://issues.apache.org/jira/browse/SOLR-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701161#comment-13701161 ] ASF subversion and git services commented on SOLR-5002: --- Commit 1500156 from [~thetaphi] [ https://svn.apache.org/r1500156 ] SOLR-5002: Don't create multiple SolrResourceLoaders for same Solr home, wasting resources and slowing down startup. This fixes the problem where the loader was not correctly closed, making tests fail on Windows. optimize numDocs(Query,DocSet) when filterCache is null --- Key: SOLR-5002 URL: https://issues.apache.org/jira/browse/SOLR-5002 Project: Solr Issue Type: Improvement Reporter: Robert Muir Fix For: 5.0, 4.4 Attachments: SOLR-5002.patch getDocSet(Query, DocSet) has this opto, but numDocs does not. Especially in this case, where we just want the intersection count, it's faster to do a filtered query with TotalHitCountCollector and not create bitsets at all...
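The SOLR-5002 suggestion (run the filtered query with a TotalHitCountCollector instead of materializing a bitset just to take its cardinality) can be illustrated with a pure-JDK analogue. This is not Solr or Lucene code; the predicate stands in for the query and the BitSet for the filter DocSet:

```java
import java.util.BitSet;
import java.util.function.IntPredicate;

// Pure-JDK analogue of the SOLR-5002 optimization (not actual Solr code):
// instead of materializing a bitset for the query's matches and
// intersecting it with the filter, iterate the filter's set bits and
// count matches directly.
public class CountVsBitset {
    // Unoptimized shape: allocate a bitset sized to the index, fill it
    // with all query matches, AND with the filter, then take cardinality.
    static int countViaBitsets(IntPredicate query, BitSet filter, int maxDoc) {
        BitSet queryDocs = new BitSet(maxDoc);
        for (int doc = 0; doc < maxDoc; doc++) {
            if (query.test(doc)) queryDocs.set(doc);
        }
        queryDocs.and(filter);
        return queryDocs.cardinality();
    }

    // Optimized shape: no allocation proportional to maxDoc, just a
    // counter, analogous to collecting hits with a hit-counting collector.
    static int countDirectly(IntPredicate query, BitSet filter) {
        int count = 0;
        for (int doc = filter.nextSetBit(0); doc >= 0; doc = filter.nextSetBit(doc + 1)) {
            if (query.test(doc)) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int maxDoc = 1000;
        BitSet filter = new BitSet(maxDoc);
        for (int i = 0; i < maxDoc; i += 3) filter.set(i);   // "docs" 0, 3, 6, ...
        IntPredicate query = doc -> doc % 5 == 0;            // "docs" 0, 5, 10, ...

        int a = countViaBitsets(query, filter, maxDoc);
        int b = countDirectly(query, filter);
        System.out.println(a + " == " + b); // both count docs divisible by 15
    }
}
```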
[jira] [Commented] (SOLR-5009) CoreContainer instantiates 2 SolrResourceLoaders (implicit) but does not close all of them
[ https://issues.apache.org/jira/browse/SOLR-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701163#comment-13701163 ] ASF subversion and git services commented on SOLR-5009: --- Commit 1500157 from [~thetaphi] [ https://svn.apache.org/r1500157 ] SOLR-5009: fix issue number CoreContainer instantiates 2 SolrResourceLoaders (implicit) but does not close all of them -- Key: SOLR-5009 URL: https://issues.apache.org/jira/browse/SOLR-5009 Project: Solr Issue Type: Bug Components: multicore Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.0, 4.4 Attachments: SOLR-5009.patch Windows fails to delete files when they are open. CoreContainer opens a second SolrResourceLoader (implicit) when calling ConfigSolr.fromFile(). It should not do this and should instead use the main loader, which is closed on shutdown. This will remove the support for implicit ResourceLoader in ConfigSolr, preventing multiple classloaders for the same solr home.
[jira] [Commented] (SOLR-4948) Tidy up CoreContainer construction logic
[ https://issues.apache.org/jira/browse/SOLR-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701171#comment-13701171 ] Uwe Schindler commented on SOLR-4948: - Alan: Once you have backported this, can you also backport SOLR-5009? With this commit you introduced the SOLR-5009 bug (it creates multiple SolrResourceLoaders for the same config dir). I can help to backport, but I was unable to merge. Tidy up CoreContainer construction logic Key: SOLR-4948 URL: https://issues.apache.org/jira/browse/SOLR-4948 Project: Solr Issue Type: Improvement Reporter: Alan Woodward Assignee: Alan Woodward Priority: Minor Attachments: SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch While writing tests for SOLR-4914, I discovered that it's *really difficult* to create a CoreContainer. There are a bunch of constructors which initialise different things, one (but only one!) of which also loads all the cores. Then you have the Initializer object, which basically does the same thing. Sort of. And then the TestHarness doesn't actually use CoreContainer, but an anonymous subclass of CoreContainer which has its own initialisation logic. It would be nice to clean this up!
[jira] [Commented] (SOLR-4948) Tidy up CoreContainer construction logic
[ https://issues.apache.org/jira/browse/SOLR-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701173#comment-13701173 ] Erick Erickson commented on SOLR-4948: -- [~romseygeek] Uwe uncovered a problem with this patch, possibly accounting for the PermGen errors we've seen lately in the tests. Plus, I have some fixes I want to get into 4.4. So my vote is to go ahead and merge this into 4.x and we'll fix anything that crops up. If Uwe backports stuff, then merging the changes for this JIRA and his changes gets complicated. Maybe it's time to just bite the bullet? Tidy up CoreContainer construction logic Key: SOLR-4948 URL: https://issues.apache.org/jira/browse/SOLR-4948 Project: Solr Issue Type: Improvement Reporter: Alan Woodward Assignee: Alan Woodward Priority: Minor Attachments: SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch While writing tests for SOLR-4914, I discovered that it's *really difficult* to create a CoreContainer. There are a bunch of constructors which initialise different things, one (but only one!) of which also loads all the cores. Then you have the Initializer object, which basically does the same thing. Sort of. And then the TestHarness doesn't actually use CoreContainer, but an anonymous subclass of CoreContainer which has its own initialisation logic. It would be nice to clean this up!
[jira] [Commented] (SOLR-4948) Tidy up CoreContainer construction logic
[ https://issues.apache.org/jira/browse/SOLR-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701176#comment-13701176 ] ASF subversion and git services commented on SOLR-4948: --- Commit 1500166 from [~thetaphi] [ https://svn.apache.org/r1500166 ] SOLR-5009: CHANGES.txt now only list SOLR-5009 as part of SOLR-4948 Tidy up CoreContainer construction logic Key: SOLR-4948 URL: https://issues.apache.org/jira/browse/SOLR-4948 Project: Solr Issue Type: Improvement Reporter: Alan Woodward Assignee: Alan Woodward Priority: Minor Attachments: SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch While writing tests for SOLR-4914, I discovered that it's *really difficult* to create a CoreContainer. There are a bunch of constructors which initialise different things, one (but only one!) of which also loads all the cores. Then you have the Initializer object, which basically does the same thing. Sort of. And then the TestHarness doesn't actually use CoreContainer, but an anonymous subclass of CoreContainer which has its own initialisation logic. It would be nice to clean this up!
[jira] [Commented] (SOLR-5009) CoreContainer instantiates 2 SolrResourceLoaders (implicit) but does not close all of them
[ https://issues.apache.org/jira/browse/SOLR-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701175#comment-13701175 ] ASF subversion and git services commented on SOLR-5009: --- Commit 1500166 from [~thetaphi] [ https://svn.apache.org/r1500166 ] SOLR-5009: CHANGES.txt now only list SOLR-5009 as part of SOLR-4948 CoreContainer instantiates 2 SolrResourceLoaders (implicit) but does not close all of them -- Key: SOLR-5009 URL: https://issues.apache.org/jira/browse/SOLR-5009 Project: Solr Issue Type: Bug Components: multicore Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.0, 4.4 Attachments: SOLR-5009.patch Windows fails to delete files when they are open. CoreContainer opens a second SolrResourceLoader (implicit) when calling ConfigSolr.fromFile(). It should not do this and should instead use the main loader, which is closed on shutdown. This will remove the support for implicit ResourceLoader in ConfigSolr, preventing multiple classloaders for the same solr home.
[jira] [Commented] (SOLR-5009) CoreContainer instantiates 2 SolrResourceLoaders (implicit) but does not close all of them
[ https://issues.apache.org/jira/browse/SOLR-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701174#comment-13701174 ] ASF subversion and git services commented on SOLR-5009: --- Commit 1500166 from [~thetaphi] [ https://svn.apache.org/r1500166 ] SOLR-5009: CHANGES.txt now only list SOLR-5009 as part of SOLR-4948 CoreContainer instantiates 2 SolrResourceLoaders (implicit) but does not close all of them -- Key: SOLR-5009 URL: https://issues.apache.org/jira/browse/SOLR-5009 Project: Solr Issue Type: Bug Components: multicore Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 5.0, 4.4 Attachments: SOLR-5009.patch Windows fails to delete files when they are open. CoreContainer opens a second SolrResourceLoader (implicit) when calling ConfigSolr.fromFile(). It should not do this and should instead use the main loader, which is closed on shutdown. This will remove the support for implicit ResourceLoader in ConfigSolr, preventing multiple classloaders for the same solr home.
[jira] [Commented] (SOLR-4948) Tidy up CoreContainer construction logic
[ https://issues.apache.org/jira/browse/SOLR-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701177#comment-13701177 ]

Uwe Schindler commented on SOLR-4948:
-------------------------------------

One comment: When you backport to 4.x, don't forget to add the assumeTrue(Constants.JRE_IS_MINIMUM_JAVA_7), otherwise Windows will fail again (see the other CoreContainer tests).

> Tidy up CoreContainer construction logic
> ----------------------------------------
>
>                 Key: SOLR-4948
>                 URL: https://issues.apache.org/jira/browse/SOLR-4948
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Alan Woodward
>            Assignee: Alan Woodward
>            Priority: Minor
>         Attachments: SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch
>
> While writing tests for SOLR-4914, I discovered that it's *really difficult* to create a CoreContainer. There are a bunch of constructors which initialise different things, one (but only one!) of which also loads all the cores. Then you have the Initializer object, which basically does the same thing. Sort of. And then the TestHarness doesn't actually use CoreContainer, but an anonymous subclass of CoreContainer which has its own initialisation logic. It would be nice to clean this up!
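The guard Uwe mentions relies on Lucene's own org.apache.lucene.util.Constants; in a JUnit test, assumeTrue(...) marks the test as skipped rather than failed when the condition is false. A standalone sketch of the same JRE-version check (the class and method names here are illustrative, not Lucene's):

```java
public class JreGuard {
    /** Returns true if the given java.specification.version is 1.7 or later. */
    static boolean isAtLeastJava7(String specVersion) {
        String[] parts = specVersion.split("\\.");
        int major = Integer.parseInt(parts[0]);
        if (major >= 7) return true;                  // e.g. "9", "11", "17"
        int minor = parts.length > 1 ? Integer.parseInt(parts[1]) : 0;
        return major == 1 && minor >= 7;              // e.g. "1.7", "1.8"
    }

    public static void main(String[] args) {
        boolean isJava7 = isAtLeastJava7(System.getProperty("java.specification.version"));
        if (!isJava7) {
            // In a JUnit test this would be assumeTrue(isJava7): the test is
            // reported as skipped on an old JRE instead of failing.
            System.out.println("SKIP: test requires Java 7+");
            return;
        }
        System.out.println("running test body");
    }
}
```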
[jira] [Comment Edited] (SOLR-4948) Tidy up CoreContainer construction logic
[ https://issues.apache.org/jira/browse/SOLR-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701177#comment-13701177 ]

Uwe Schindler edited comment on SOLR-4948 at 7/5/13 11:18 PM:
--------------------------------------------------------------

One comment: When you backport to 4.x, don't forget to preserve the assumeTrue(Constants.JRE_IS_MINIMUM_JAVA_7), otherwise Windows will fail again.

was (Author: thetaphi):
One comment: When you backport to 4.x, don't forget to add the assumeTrue(Constants.JRE_IS_MINIMUM_JAVA_7), otherwise Windows will fail again (see the other CoreContainer tests).

> Tidy up CoreContainer construction logic
> ----------------------------------------
>
>                 Key: SOLR-4948
>                 URL: https://issues.apache.org/jira/browse/SOLR-4948
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Alan Woodward
>            Assignee: Alan Woodward
>            Priority: Minor
>         Attachments: SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch, SOLR-4948.patch
>
> While writing tests for SOLR-4914, I discovered that it's *really difficult* to create a CoreContainer. There are a bunch of constructors which initialise different things, one (but only one!) of which also loads all the cores. Then you have the Initializer object, which basically does the same thing. Sort of. And then the TestHarness doesn't actually use CoreContainer, but an anonymous subclass of CoreContainer which has its own initialisation logic. It would be nice to clean this up!
[jira] [Commented] (LUCENE-4258) Incremental Field Updates through Stacked Segments
[ https://issues.apache.org/jira/browse/LUCENE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701189#comment-13701189 ]

Scott Schneider commented on LUCENE-4258:
-----------------------------------------

Gotcha. Could my help speed this up significantly, or not much? I hesitate to even ask, because on the one hand, I'm not familiar with Lucene internals, and on the other hand (or the same hand), I doubt I could get time to work on this... but I am an experienced Java developer, and if my help would make a difference, I could make a case for it!

> Incremental Field Updates through Stacked Segments
> --------------------------------------------------
>
>                 Key: LUCENE-4258
>                 URL: https://issues.apache.org/jira/browse/LUCENE-4258
>             Project: Lucene - Core
>          Issue Type: Improvement
>          Components: core/index
>            Reporter: Sivan Yogev
>             Fix For: 4.4
>         Attachments: IncrementalFieldUpdates.odp, LUCENE-4258-API-changes.patch, LUCENE-4258.branch.1.patch, LUCENE-4258.branch.2.patch, LUCENE-4258.branch3.patch, LUCENE-4258.branch.4.patch, LUCENE-4258.branch.5.patch, LUCENE-4258.branch.6.patch, LUCENE-4258.branch.6.patch, LUCENE-4258.r1410593.patch, LUCENE-4258.r1412262.patch, LUCENE-4258.r1416438.patch, LUCENE-4258.r1416617.patch, LUCENE-4258.r1422495.patch, LUCENE-4258.r1423010.patch
>
>   Original Estimate: 2,520h
>  Remaining Estimate: 2,520h
>
> Shai and I would like to start working on the proposal for Incremental Field Updates outlined here (http://markmail.org/message/zhrdxxpfk6qvdaex).
Re: [CONF] Apache Solr Reference Guide Uploading Data with Index Handlers
Hoss set up a CWIKI account named lucene_pmc_notification_role to watch the ref guide and email commits@l.a.o when content changes. See https://issues.apache.org/jira/browse/SOLR-4887 for some details. Lucene PMC members have access to the credentials for this CWIKI role - I logged in as that account and looked at the email notification config. In the currently installed version of Confluence at the ASF (v3.5.17), the option to include diffs instead of the full content is grayed out for text format emails. We would have to first select HTML format emails in order to get diffs instead of full contents. Not sure which is worse. Maybe Confluence v5.1, which ASF Infra plans on upgrading to eventually, has better options for this?

Steve

On Jul 5, 2013, at 3:42 PM, Jack Krupansky j...@basetechnology.com wrote:

The ASRG commit emails seem to be sending the whole Confluence wiki page rather than just a diff like the old Solr wiki. Is that a tunable preference for Confluence? Thanks.

-- Jack Krupansky

-----Original Message-----
From: Grant Ingersoll (Confluence)
Sent: Friday, July 05, 2013 2:54 PM
To: comm...@lucene.apache.org
Subject: [CONF] Apache Solr Reference Guide Uploading Data with Index Handlers

Space: Apache Solr Reference Guide (https://cwiki.apache.org/confluence/display/solr)
Page: Uploading Data with Index Handlers (https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers)

Edited by Grant Ingersoll:
--------------------------
{section}
{column:width=75%}
Index Handlers are Update Handlers designed to add, delete and update documents to the index. Solr includes several of these to allow indexing documents in XML, CSV and JSON. The example URLs given here reflect the handler configuration in the supplied {{solrconfig.xml}}. If the name associated with the handler is changed then the URLs will need to be modified. It is quite possible to access the same handler using more than one name, which can be useful if you wish to specify different sets of default options.
New {{UpdateProcessors}} now default to the {{uniqueKey}} field if it is the appropriate type for configured fields. The processors automatically add fields with new UUIDs and timestamps to {{SolrInputDocuments}}. These work similarly to the field default="..." option in {{schema.xml}}, but are applied in the {{UpdateProcessorChain}}. They may be used prior to other {{UpdateProcessors}}, or to generate a {{uniqueKey}} field value when using the {{DistributedUpdateProcessor}} (i.e., SolrCloud): {{TimestampUpdateProcessorFactory}}, {{UUIDUpdateProcessorFactory}}, and {{DefaultValueUpdateProcessorFactory}}.
{column}
{column:width=25%}
{panel}
Index Handlers covered in this section:
{toc:minLevel=2|maxLevel=2}
{panel}
{column}
{section}

h2. Combined UpdateRequestHandlers

For the separate XML, CSV, JSON, and javabin update request handlers explained below, Solr provides a single {{RequestHandler}} and chooses the appropriate {{ContentStreamLoader}} based on the {{Content-Type}} header, entered as the {{qt}} (query type) parameter matching the name of registered handlers. The standard request handler is the default and will be used if {{qt}} is not specified in the request.

{code:lang=xml|borderStyle=solid|borderColor=#66}
<requestHandler name="standard" />
<requestHandler name="custom" />
{code}

h3. Configuring Shard Handlers for Distributed Searches

Inside the RequestHandler, you can configure and specify the shard handler used for distributed search. You can also plug in custom shard handlers.
To configure the standard handler, set up the configuration as in this example:

{code:lang=xml|borderStyle=solid|borderColor=#66}
<requestHandler name="standard" default="true">
  <!-- other params go here -->
  <shardHandlerFactory>
    <int name="socketTimeOut">1000</int>
    <int name="connTimeOut">5000</int>
  </shardHandlerFactory>
</requestHandler>
{code}

The parameters that can be specified are as follows:

|| Parameter || Default || Explanation ||
| socketTimeout | 0 (use OS default) | The amount of time in ms that a socket is allowed to wait |
| connTimeout | 0 (use OS default) | The amount of time in ms that is accepted for binding / connecting a socket |
| maxConnectionsPerHost | 20 | The maximum number of connections that are made to each individual shard in a distributed search |
| corePoolSize | 0 | The retained lowest limit on the number of threads used in coordinating distributed search |
| maximumPoolSize | Integer.MAX_VALUE | The maximum number of threads used for coordinating distributed search |
| maxThreadIdleTime | 5 seconds | The amount of time to wait before threads are scaled back in response to a reduction in load |
| sizeOfQueue | \-1 | If
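The UUID and timestamp update processors described earlier on this page are configured as a chain in {{solrconfig.xml}}. A sketch of such a chain (the chain name and the {{fieldName}} values are illustrative choices, not defaults):

{code:lang=xml|borderStyle=solid|borderColor=#66}
<updateRequestProcessorChain name="add-unknown-fields">
  <!-- fills the uniqueKey field with a new UUID when the document omits it -->
  <processor class="solr.UUIDUpdateProcessorFactory">
    <str name="fieldName">id</str>
  </processor>
  <!-- stamps the document with the current time at index time -->
  <processor class="solr.TimestampUpdateProcessorFactory">
    <str name="fieldName">timestamp_dt</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
{code}

Because the processors run before {{DistributedUpdateProcessor}}, the generated values are assigned once and shared by all replicas in SolrCloud.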
[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4117 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4117/

2 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest:
   1) Thread[id=2188, name=recoveryCmdExecutor-1018-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
        at java.net.Socket.connect(Socket.java:579)
        at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
        at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest:
   1) Thread[id=2188, name=recoveryCmdExecutor-1018-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
        at java.net.Socket.connect(Socket.java:579)
        at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
        at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)
        at __randomizedtesting.SeedInfo.seed([B0C7376F6F448595]:0)

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=2188, name=recoveryCmdExecutor-1018-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at
[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1766 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1766/

2 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest:
   1) Thread[id=2264, name=recoveryCmdExecutor-1393-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
        at java.net.Socket.connect(Socket.java:546)
        at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
        at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:679)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest:
   1) Thread[id=2264, name=recoveryCmdExecutor-1393-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
        at java.net.Socket.connect(Socket.java:546)
        at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
        at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:679)
        at __randomizedtesting.SeedInfo.seed([3117EB87919DE328]:0)

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=2264, name=recoveryCmdExecutor-1393-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
        at
[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 1385 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/1385/

2 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.AliasIntegrationTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.AliasIntegrationTest:
   1) Thread[id=969, name=recoveryCmdExecutor-358-thread-1, state=RUNNABLE, group=TGRP-AliasIntegrationTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
        at java.net.Socket.connect(Socket.java:579)
        at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
        at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.AliasIntegrationTest:
   1) Thread[id=969, name=recoveryCmdExecutor-358-thread-1, state=RUNNABLE, group=TGRP-AliasIntegrationTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
        at java.net.Socket.connect(Socket.java:579)
        at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
        at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)
        at __randomizedtesting.SeedInfo.seed([D1B09307AE0858FD]:0)

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.AliasIntegrationTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=969, name=recoveryCmdExecutor-358-thread-1, state=RUNNABLE, group=TGRP-AliasIntegrationTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
        at
[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4118 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4118/

2 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest:
   1) Thread[id=1392, name=recoveryCmdExecutor-734-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
        at java.net.Socket.connect(Socket.java:579)
        at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
        at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.BasicDistributedZkTest:
   1) Thread[id=1392, name=recoveryCmdExecutor-734-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
        at java.net.Socket.connect(Socket.java:579)
        at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
        at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)
        at __randomizedtesting.SeedInfo.seed([B73D2B3875389AAE]:0)

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=1392, name=recoveryCmdExecutor-734-thread-1, state=RUNNABLE, group=TGRP-BasicDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
[jira] [Commented] (SOLR-2592) Custom Hashing
[ https://issues.apache.org/jira/browse/SOLR-2592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701256#comment-13701256 ]

Leonid Krogliak commented on SOLR-2592:
---------------------------------------

I don't see these changes in the code. I use Solr 4.3.1, but I looked for these changes in Solr 4.1 too.

> Custom Hashing
> --------------
>
>                 Key: SOLR-2592
>                 URL: https://issues.apache.org/jira/browse/SOLR-2592
>             Project: Solr
>          Issue Type: New Feature
>          Components: SolrCloud
>    Affects Versions: 4.0-ALPHA
>            Reporter: Noble Paul
>            Assignee: Yonik Seeley
>             Fix For: 4.1, 5.0
>         Attachments: dbq_fix.patch, pluggable_sharding.patch, pluggable_sharding_V2.patch, SOLR-2592_collectionProperties.patch, SOLR-2592_collectionProperties.patch, SOLR-2592.patch, SOLR-2592_progress.patch, SOLR-2592_query_try1.patch, SOLR-2592_r1373086.patch, SOLR-2592_r1384367.patch, SOLR-2592_rev_2.patch, SOLR_2592_solr_4_0_0_BETA_ShardPartitioner.patch
>
> If the data in a cloud can be partitioned on some criteria (say range, hash, attribute value, etc.), it will be easy to narrow down the search to a smaller subset of shards and in effect achieve more efficient search.