[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+107) - Build # 16187 - Still Failing!

2016-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16187/
Java: 64bit/jdk-9-ea+107 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.lucene.search.TestTermScorer.test

Error Message:


Stack Trace:
java.lang.AssertionError
    at __randomizedtesting.SeedInfo.seed([2FAA34B5A8AF0523:A7FE0B6F065368DB]:0)
    at org.junit.Assert.fail(Assert.java:92)
    at org.junit.Assert.assertTrue(Assert.java:43)
    at org.junit.Assert.assertTrue(Assert.java:54)
    at org.apache.lucene.search.TestTermScorer.test(TestTermScorer.java:80)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:520)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at java.lang.Thread.run(Thread.java:804)




Build Log:
[...truncated 1053 lines...]
   [junit4] Suite: org.apache.lucene.search.TestTermScorer
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestTermScorer -Dtests.method=test -Dtests.seed=2FAA34B5A8AF0523 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=mgh-MZ -Dtests.timezone=America/Argentina/Ushuaia -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.01s J2 | TestTermScorer.test <<<
   [junit4]    > Throwable #1: java.lang.AssertionError
   [junit4]    >     at __randomizedtesting.SeedInfo.seed([2FAA34B5A8AF0523:A7FE0B6F065368DB]:0)
   [junit4]    >     at org.apache.lucene.search.TestTermScorer.test(TestTermScorer.java:80)
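The reproduce line above works because the randomized test runner derives all of a test's randomness from the printed master seed: rerunning with the same -Dtests.seed value replays the identical pseudo-random decisions. A minimal sketch of that idea in plain Java, outside any test framework (the hex constant is just the first half of the seed printed in this report, used illustratively):

```java
import java.util.Random;

public class SeedReplay {
    public static void main(String[] args) {
        // Parse the printed seed as a long.
        long seed = Long.parseUnsignedLong("2FAA34B5A8AF0523", 16);
        Random first = new Random(seed);
        Random second = new Random(seed);
        // Two generators built from the same seed emit identical streams,
        // which is why a failing randomized test can be replayed exactly.
        for (int i = 0; i < 5; i++) {
            if (first.nextInt() != second.nextInt()) {
                throw new AssertionError("streams diverged at step " + i);
            }
        }
        System.out.println("identical streams");
    }
}
```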
   

[JENKINS-MAVEN] Lucene-Solr-Maven-master #1704: POMs out of sync

2016-03-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/1704/

No tests ran.

Build Log:
[...truncated 8478 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:756: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:291: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:561: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:556: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/build.xml:480: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:2520: Can't get https://issues.apache.org/jira/rest/api/2/project/LUCENE to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/build/docs/changes/jiraVersionList.json

Total time: 5 minutes 22 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Artifacts-6.x - Build # 10 - Failure

2016-03-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-6.x/10/

No tests ran.

Build Log:
[...truncated 8218 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/build.xml:359: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/common-build.xml:2520: Can't get https://issues.apache.org/jira/rest/api/2/project/LUCENE to /x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/build/src-export/lucene/docs/changes/jiraVersionList.json

Total time: 12 minutes 26 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Compressed 136.49 MB of artifacts by 36.0% relative to #9
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-Tests-master - Build # 981 - Failure

2016-03-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/981/

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.test

Error Message:
Wrong doc count on shard1_0. See SOLR-5309 expected:<175> but was:<176>

Stack Trace:
java.lang.AssertionError: Wrong doc count on shard1_0. See SOLR-5309 expected:<175> but was:<176>
    at __randomizedtesting.SeedInfo.seed([AE64209AB9C63708:26301F40173A5AF0]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.failNotEquals(Assert.java:647)
    at org.junit.Assert.assertEquals(Assert.java:128)
    at org.junit.Assert.assertEquals(Assert.java:472)
    at org.apache.solr.cloud.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:463)
    at org.apache.solr.cloud.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:246)
    at org.apache.solr.cloud.ShardSplitTest.test(ShardSplitTest.java:83)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at 
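The "expected:<175> but was:<176>" wording in the failure message above is JUnit's standard assertEquals failure format. A small standalone illustration of how that text is assembled (not JUnit's actual code, just the same layout, with a hypothetical helper name):

```java
public class FailMessage {
    // Builds a message in JUnit's "expected:<e> but was:<a>" layout.
    static String format(String message, Object expected, Object actual) {
        String prefix = (message == null || message.isEmpty()) ? "" : message + " ";
        return prefix + "expected:<" + expected + "> but was:<" + actual + ">";
    }

    public static void main(String[] args) {
        // Reproduces the message text seen in the report above.
        System.out.println(format("Wrong doc count on shard1_0. See SOLR-5309", 175, 176));
        // Prints: Wrong doc count on shard1_0. See SOLR-5309 expected:<175> but was:<176>
    }
}
```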

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_72) - Build # 16186 - Still Failing!

2016-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16186/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=3560, name=testExecutor-1571-thread-13, state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=3560, name=testExecutor-1571-thread-13, state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
    at __randomizedtesting.SeedInfo.seed([19EED985479A7720:91BAE65FE9661AD8]:0)
Caused by: java.lang.RuntimeException: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:57096
    at __randomizedtesting.SeedInfo.seed([19EED985479A7720]:0)
    at org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:57096
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
    at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
    at org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
    ... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:170)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
    at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
    at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
    at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
    at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
    at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
    at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
    at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
    at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
    at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
    at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
    ... 8 more
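The root cause above is the JDK's plain socket read timeout: the client connected, but no response bytes arrived within the configured SO_TIMEOUT. A self-contained sketch of the same condition using only java.net (no Solr or HttpClient involved; the server socket deliberately never answers):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadTimeoutDemo {
    public static void main(String[] args) throws IOException {
        // A listening socket that never accepts or writes anything,
        // standing in for a server that is too slow to answer.
        try (ServerSocket silent = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", silent.getLocalPort())) {
            client.setSoTimeout(200); // read timeout in milliseconds
            try {
                client.getInputStream().read(); // blocks: no bytes will ever arrive
                System.out.println("unexpected data");
            } catch (SocketTimeoutException e) {
                // Same exception type and condition as in the stack trace above.
                System.out.println("Read timed out");
            }
        }
    }
}
```

The connect itself succeeds because the TCP handshake completes against the listen backlog even before accept() is called; only the read stalls.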




Build Log:
[...truncated 11032 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_19EED985479A7720-001/init-core-data-001
   [junit4]   2> 365184 INFO  (SUITE-UnloadDistributedZkTest-seed#[19EED985479A7720]-worker) [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 365187 INFO  (TEST-UnloadDistributedZkTest.test-seed#[19EED985479A7720]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 365187 INFO  (Thread-1225) [] o.a.s.c.ZkTestServer client 

[JENKINS] Lucene-Solr-Tests-6.x - Build # 42 - Still Failing

2016-03-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/42/

All tests passed

Build Log:
[...truncated 53054 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/build.xml:740: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/build.xml:101: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/build.xml:138: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/build.xml:480: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/common-build.xml:2520: Can't get https://issues.apache.org/jira/rest/api/2/project/LUCENE to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/build/docs/changes/jiraVersionList.json

Total time: 74 minutes 58 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+107) - Build # 16185 - Failure!

2016-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16185/
Java: 64bit/jdk-9-ea+107 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.lucene.search.TestFieldCacheRewriteMethod.testRegexps

Error Message:
source=3 is out of bounds (maxState is 2)

Stack Trace:
java.lang.IllegalArgumentException: source=3 is out of bounds (maxState is 2)
    at __randomizedtesting.SeedInfo.seed([3F4CCB2A244112FD:DE108A3BFAEB4575]:0)
    at org.apache.lucene.util.automaton.Automaton.addTransition(Automaton.java:165)
    at org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:245)
    at org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:537)
    at org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:546)
    at org.apache.lucene.util.automaton.RegExp.findLeaves(RegExp.java:617)
    at org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:519)
    at org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:495)
    at org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:466)
    at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:109)
    at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:79)
    at org.apache.lucene.search.TestFieldCacheRewriteMethod.assertSame(TestFieldCacheRewriteMethod.java:36)
    at org.apache.lucene.search.TestRegexpRandom2.testRegexps(TestRegexpRandom2.java:160)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:520)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at 

Re: Branch update merge blizzards

2016-03-11 Thread David Smiley
I hear ya Jack; it sucks.  We filed an issue already, at least as far as
the git JIRA commit bot:
https://issues.apache.org/jira/browse/INFRA-11198
~ David

On Fri, Mar 11, 2016 at 7:55 PM Jack Krupansky wrote:

> I was curious if there was a specific intentional reason for some of the
> git commit email that seems to come as a blizzard of like 50 messages
> whenever somebody simply updates a work in progress branch and doesn't seem
> related to an actual commit to a release branch.
>
> The latest just now was related to "SOLR-445: Merge remote-tracking branch
> 'origin' into jira/SOLR-445"
>
> Is there some specific intent why there should be commit-level email for a
> mere non-release branch update? I mean, does anybody get any value from
> them? I do want to see them if master or branch_xy gets merged into, but
> not for branches LUCENE/SOLR-, especially when each of those Jira
> issues needs the same merge updates.
>
> Just curious.
>
> -- Jack Krupansky
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 434 - Failure

2016-03-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/434/

No tests ran.

Build Log:
[...truncated 8210 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/build.xml:520: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build.xml:480: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/common-build.xml:2520: Can't get https://issues.apache.org/jira/rest/api/2/project/LUCENE to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/docs/changes/jiraVersionList.json

Total time: 5 minutes 41 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-NightlyTests-master - Build # 957 - Still Failing

2016-03-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/957/

3 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=14126, name=testExecutor-4324-thread-10, state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=14126, name=testExecutor-4324-thread-10, state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:43904/iu_n/s
    at __randomizedtesting.SeedInfo.seed([FBC2105739482AE5]:0)
    at org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$3(BasicDistributedZkTest.java:583)
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$6(ExecutorUtil.java:229)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:43904/iu_n/s
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
    at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
    at org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$3(BasicDistributedZkTest.java:581)
    ... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:170)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
    at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
    at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
    at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
    at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
    at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
    at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
    at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
    at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
    at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
    at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
    ... 8 more


FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Sat Mar 12 10:39:36 GMT+08:00 2016

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to exceed: Sat Mar 12 10:39:36 GMT+08:00 2016
    at __randomizedtesting.SeedInfo.seed([FBC2105739482AE5:206910913C604356]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1422)
    at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:774)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_72) - Build # 100 - Failure!

2016-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/100/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=8371, name=testExecutor-4012-thread-2, state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=8371, name=testExecutor-4012-thread-2, state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:49416/yru
    at __randomizedtesting.SeedInfo.seed([7E3877C40A70DC9F]:0)
    at org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:49416/yru
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
    at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
    at org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
    ... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:170)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
    at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
    at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
    at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
    at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
    at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
    at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
    at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
    at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
    at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
    at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
    ... 8 more




Build Log:
[...truncated 11410 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_7E3877C40A70DC9F-001/init-core-data-001
   [junit4]   2> 894674 INFO  (SUITE-UnloadDistributedZkTest-seed#[7E3877C40A70DC9F]-worker) [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /yru/
   [junit4]   2> 894676 INFO  (TEST-UnloadDistributedZkTest.test-seed#[7E3877C40A70DC9F]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 894676 INFO  (Thread-2837) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 894676 INFO  (Thread-2837) [] o.a.s.c.ZkTestServer 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_72) - Build # 16183 - Still Failing!

2016-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16183/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.testFullImport

Error Message:
IOException occured when talking to server at: http://127.0.0.1:55208/solr/collection1

Stack Trace:
java.lang.AssertionError: IOException occured when talking to server at: http://127.0.0.1:55208/solr/collection1
at __randomizedtesting.SeedInfo.seed([8FE6E3F36DA8679:8D5213A48ED51859]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.testFullImport(TestSolrEntityProcessorEndToEnd.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.testFullImportFieldsParam

Error Message:
IOException occured when talking to server at: 

Branch update merge blizzards

2016-03-11 Thread Jack Krupansky
I was curious whether there is a specific, intentional reason that some of the
git commit email arrives as a blizzard of roughly 50 messages whenever somebody
simply updates a work-in-progress branch; it doesn't seem related to any actual
commit to a release branch.

The latest just now was related to "SOLR-445: Merge remote-tracking branch
'origin' into jira/SOLR-445"

Is there some specific intent behind commit-level email for a mere non-release
branch update? I mean, does anybody get any value from them? I do want to see
them if master or branch_xy gets merged into, but not for the per-issue
jira/LUCENE- and jira/SOLR- branches, especially when each of those Jira
issues needs the same merge updates.

Just curious.

-- Jack Krupansky


[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 8 - Failure

2016-03-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/8/

No tests ran.

Build Log:
[...truncated 8209 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/build.xml:520: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build.xml:480: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/common-build.xml:2520: Can't get https://issues.apache.org/jira/rest/api/2/project/LUCENE to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/docs/changes/jiraVersionList.json

Total time: 5 minutes 36 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+107) - Build # 16182 - Failure!

2016-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16182/
Java: 32bit/jdk-9-ea+107 -server -XX:+UseSerialGC

63 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([EAD86FA35D526BED]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
Captured an uncaught exception in thread: Thread[id=4850, name=SocketProxy-Response-39091:57325, state=RUNNABLE, group=TGRP-HttpPartitionTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=4850, name=SocketProxy-Response-39091:57325, state=RUNNABLE, group=TGRP-HttpPartitionTest]
Caused by: java.lang.RuntimeException: java.net.SocketException: Socket is closed
at __randomizedtesting.SeedInfo.seed([EAD86FA35D526BED]:0)
at org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:347)
Caused by: java.net.SocketException: Socket is closed
at java.net.Socket.setSoTimeout(Socket.java:1139)
at org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:344)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.SaslZkACLProviderTest:
   1) Thread[id=12677, name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:230)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2106)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:804)
   2) Thread[id=12678, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:230)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2106)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:804)
   3) Thread[id=12676, name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:230)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2106)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:804)
   4) Thread[id=12675, name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:516)
        at java.util.TimerThread.mainLoop(Timer.java:526)
        at java.util.TimerThread.run(Timer.java:505)
   5) Thread[id=12679, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:230)
        at

[jira] [Commented] (LUCENE-7099) add newDistanceSort to sandbox LatLonPoint

2016-03-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191626#comment-15191626
 ] 

Michael McCandless commented on LUCENE-7099:


+1, very simple!

> add newDistanceSort to sandbox LatLonPoint
> --
>
> Key: LUCENE-7099
> URL: https://issues.apache.org/jira/browse/LUCENE-7099
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7099.patch
>
>
> This field does not support sorting by distance, which is a very common use 
> case. 
> We can add {{LatLonPoint.newDistanceSort(field, latitude, longitude)}} which 
> returns a suitable SortField. There are a lot of optimizations esp when e.g. 
> the priority queue gets full to avoid tons of haversin() computations.
> Also, we can make use of the SortedNumeric data to switch 
> newDistanceQuery/newPolygonQuery to the two-phase iterator api, so they 
> aren't doing haversin() calls on bkd leaf nodes. It should look a lot like 
> LUCENE-7019



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Updated] (LUCENE-7099) add newDistanceSort to sandbox LatLonPoint

2016-03-11 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-7099:

Attachment: LUCENE-7099.patch

Here is a patch. It uses the slowest possible algorithm but has a decent test.

> add newDistanceSort to sandbox LatLonPoint
> --
>
> Key: LUCENE-7099
> URL: https://issues.apache.org/jira/browse/LUCENE-7099
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7099.patch
>
>
> This field does not support sorting by distance, which is a very common use 
> case. 
> We can add {{LatLonPoint.newDistanceSort(field, latitude, longitude)}} which 
> returns a suitable SortField. There are a lot of optimizations esp when e.g. 
> the priority queue gets full to avoid tons of haversin() computations.
> Also, we can make use of the SortedNumeric data to switch 
> newDistanceQuery/newPolygonQuery to the two-phase iterator api, so they 
> aren't doing haversin() calls on bkd leaf nodes. It should look a lot like 
> LUCENE-7019






[jira] [Created] (LUCENE-7099) add newDistanceSort to sandbox LatLonPoint

2016-03-11 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-7099:
---

 Summary: add newDistanceSort to sandbox LatLonPoint
 Key: LUCENE-7099
 URL: https://issues.apache.org/jira/browse/LUCENE-7099
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


This field does not support sorting by distance, which is a very common use 
case. 

We can add {{LatLonPoint.newDistanceSort(field, latitude, longitude)}} which 
returns a suitable SortField. There are a lot of optimizations esp when e.g. 
the priority queue gets full to avoid tons of haversin() computations.

Also, we can make use of the SortedNumeric data to switch 
newDistanceQuery/newPolygonQuery to the two-phase iterator api, so they aren't 
doing haversin() calls on bkd leaf nodes. It should look a lot like LUCENE-7019
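For reference, the haversin() distance this issue keeps mentioning is the textbook great-circle formula, and the sort it proposes amounts to ordering hits by that distance from a query point. The sketch below is just that idea outside Lucene, not the SortField implementation in the patch; the doc tuples and the origin point are made-up illustrations.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0088 * asin(sqrt(a))  # 6371.0088 km = mean Earth radius

# Hypothetical docs as (doc_id, lat, lon); rank them by distance from London.
docs = [(1, 48.8566, 2.3522),    # Paris
        (2, 51.5074, -0.1278),   # London
        (3, 40.7128, -74.0060)]  # New York
origin = (51.5, -0.12)
ranked = sorted(docs, key=lambda d: haversine_km(origin[0], origin[1], d[1], d[2]))
print([d[0] for d in ranked])  # nearest first
```

The optimizations the issue alludes to would avoid calling the full formula per hit once the priority queue is full, using a cheaper bound first; the sketch deliberately skips that.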






[jira] [Commented] (LUCENE-7098) BKDWriter should write ords as ints when possible during offline sort

2016-03-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191544#comment-15191544
 ] 

Michael McCandless commented on LUCENE-7098:


Thanks [~rcmuir], those are good ideas, I'll fold those in.

> BKDWriter should write ords as ints when possible during offline sort
> -
>
> Key: LUCENE-7098
> URL: https://issues.apache.org/jira/browse/LUCENE-7098
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: LUCENE-7098.patch
>
>
> Today we write all ords as longs, since we support more than 2.1B values in 
> one segment, but the vast majority of the time an int would suffice.
> We could look into vLong, but this quickly gets tricky because {{BKDWriter}} 
> needs random access to the file and we rely on fixed-width entries to do this 
> now.






[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2016-03-11 Thread Joshua Pantony (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191539#comment-15191539
 ] 

Joshua Pantony commented on SOLR-8542:
--

Hey Alessandro, thanks for all the interest! We actually wrote our own script 
to convert RankLib output to the LTR plugin format. Do you think it would be 
prudent to add that to this push? It seemed somewhat outside the scope of this 
ticket because we wanted the plugin to be as agnostic to the model training as 
possible, but I could see the logic in having some library-specific utilities. 

I'll add some more documentation for the training phase. 

> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: README.md, README.md, SOLR-8542-branch_5x.patch, 
> SOLR-8542-trunk.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously presented by the authors at Lucene/Solr 
> Revolution 2015 ( 
> http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp
>  ).
> The attached code was jointly worked on by Joshua Pantony, Michael Nilsson, 
> David Grohmann and Diego Ceccarelli.
> Any chance this could make it into a 5x release? We've also attached 
> documentation as a github MD file, but are happy to convert to a desired 
> format.
> h3. Test the plugin with solr/example/techproducts in 6 steps
> Solr provides some simple example of indices. In order to test the plugin 
> with 
> the techproducts example please follow these steps
> h4. 1. compile solr and the examples 
> cd solr
> ant dist
> ant example
> h4. 2. run the example
> ./bin/solr -e techproducts 
> h4. 3. stop it and install the plugin:
>
> ./bin/solr stop
> mkdir example/techproducts/solr/techproducts/lib
> cp build/contrib/ltr/lucene-ltr-6.0.0-SNAPSHOT.jar 
> example/techproducts/solr/techproducts/lib/
> cp contrib/ltr/example/solrconfig.xml 
> example/techproducts/solr/techproducts/conf/
> h4. 4. run the example again
> 
> ./bin/solr -e techproducts
> h4. 5. index some features and a model
> curl -XPUT 'http://localhost:8983/solr/techproducts/schema/fstore'  
> --data-binary "@./contrib/ltr/example/techproducts-features.json"  -H 
> 'Content-type:application/json'
> curl -XPUT 'http://localhost:8983/solr/techproducts/schema/mstore'  
> --data-binary "@./contrib/ltr/example/techproducts-model.json"  -H 
> 'Content-type:application/json'
> h4. 6. have fun !
> *access to the default feature store*
> http://localhost:8983/solr/techproducts/schema/fstore/_DEFAULT_ 
> *access to the model store*
> http://localhost:8983/solr/techproducts/schema/mstore
> *perform a query using the model, and retrieve the features*
> http://localhost:8983/solr/techproducts/query?indent=on=test=json={!ltr%20model=svm%20reRankDocs=25%20efi.query=%27test%27}=*,[features],price,score,name=true
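The core rerank idea the ticket describes, extract per-document features for the top X results and re-score just those with a trained model, can be sketched in a few lines. The function names, feature names, and linear weights below are illustrative stand-ins, not the plugin's actual API:

```python
def rerank(results, extract_features, weights, rerank_docs=25):
    """Re-score the top `rerank_docs` results with a linear model; keep the tail as-is."""
    head, tail = results[:rerank_docs], results[rerank_docs:]

    def model_score(doc):
        feats = extract_features(doc)  # feature extraction happens per document
        return sum(weights.get(name, 0.0) * val for name, val in feats.items())

    head = sorted(head, key=model_score, reverse=True)
    return head + tail

# Toy usage: two hand-set weights standing in for a trained (e.g. RankLib) model.
docs = [{"id": "a", "price": 10.0, "bm25": 1.0},
        {"id": "b", "price": 2.0, "bm25": 0.9}]
feats = lambda d: {"bm25": d["bm25"], "cheap": 1.0 / (1.0 + d["price"])}
print([d["id"] for d in rerank(docs, feats, {"bm25": 1.0, "cheap": 5.0})])  # → ['b', 'a']
```

Only the head of the result list is re-scored, which is the point of `reRankDocs=25` in the example query above: the expensive model runs on a bounded number of candidates.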






[jira] [Commented] (LUCENE-7098) BKDWriter should write ords as ints when possible during offline sort

2016-03-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191527#comment-15191527
 ] 

Robert Muir commented on LUCENE-7098:
-

I like the idea, but some minor style changes:
* instead of maxPointCount, can we use 'total' or 'sum'. its not a max. i 
realize it includes deletions, but we can just indicate that with a comment...
* can we not do the optimization for Integer.MAX_VALUE+1. I think thats being 
too sneaky!

> BKDWriter should write ords as ints when possible during offline sort
> -
>
> Key: LUCENE-7098
> URL: https://issues.apache.org/jira/browse/LUCENE-7098
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: LUCENE-7098.patch
>
>
> Today we write all ords as longs, since we support more than 2.1B values in 
> one segment, but the vast majority of the time an int would suffice.
> We could look into vLong, but this quickly gets tricky because {{BKDWriter}} 
> needs random access to the file and we rely on fixed-width entries to do this 
> now.






[jira] [Resolved] (SOLR-8832) Faulty DaemonStream shutdown procedures

2016-03-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-8832.
--
Resolution: Fixed

> Faulty DaemonStream shutdown procedures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch, SOLR-8832.patch
>
>
> The following test run fails everytime due to faulty DaemonStream shutdown 
> procedures.
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1






[jira] [Commented] (SOLR-8831) allow _version_ field to be retrievable via docValues

2016-03-11 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191504#comment-15191504
 ] 

Jack Krupansky commented on SOLR-8831:
--

Now that docValues is supported for _version_, the question arises as to which 
is preferred (faster, less memory), stored or docValues. IOW, which should be 
the default. I presume it should be docValues, but I have no real clue.

Also, the doc for Atomic Update has this example as a Power Tip, that has BOTH 
stored and docValues set:

{code}

{code}

See:
https://cwiki.apache.org/confluence/display/solr/Updating+Parts+of+Documents

Should that be changed to stored="false"? Or is there actually some additional 
hidden benefit to stored="true" AND docValues="true"?
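The {code} block quoted above arrives empty because the mail archive strips XML tags; from the surrounding discussion it held a _version_ field definition with both flags enabled, along these lines (the type and indexed attributes here are guesses for illustration, not quoted from the wiki):

{code}
<field name="_version_" type="long" indexed="true" stored="true" docValues="true"/>
{code}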


> allow _version_ field to be retrievable via docValues
> -
>
> Key: SOLR-8831
> URL: https://issues.apache.org/jira/browse/SOLR-8831
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
> Fix For: 6.0
>
> Attachments: SOLR-8831.patch
>
>
> Right now, one is prohibited from having an unstored _version_ field, even if 
> docValues are enabled.






[jira] [Updated] (LUCENE-7098) BKDWriter should write ords as ints when possible during offline sort

2016-03-11 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7098:
---
Attachment: LUCENE-7098.patch

Patch.  {{BKDWriter}} figures out up front whether it can use {{int}} or 
{{long}} to write all ords.  The caller must specify the max number of values it 
will pass to this instance (hmm, I'll add checks to verify the caller didn't 
exceed what it had promised).

This gives a nice speed up on the 6.1M London UK test, with the final merge 
going from 192.1 sec down to 171.1 sec to merge points.

I'll make sure {{Test2BPoints}} passes with this change.

> BKDWriter should write ords as ints when possible during offline sort
> -
>
> Key: LUCENE-7098
> URL: https://issues.apache.org/jira/browse/LUCENE-7098
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: LUCENE-7098.patch
>
>
> Today we write all ords as longs, since we support more than 2.1B values in 
> one segment, but the vast majority of the time an int would suffice.
> We could look into vLong, but this quickly gets tricky because {{BKDWriter}} 
> needs random access to the file and we rely on fixed-width entries to do this 
> now.






[jira] [Created] (LUCENE-7098) BKDWriter should write ords as ints when possible during offline sort

2016-03-11 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-7098:
--

 Summary: BKDWriter should write ords as ints when possible during 
offline sort
 Key: LUCENE-7098
 URL: https://issues.apache.org/jira/browse/LUCENE-7098
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless


Today we write all ords as longs, since we support more than 2.1B values in one 
segment, but the vast majority of the time an int would suffice.

We could look into vLong, but this quickly gets tricky because {{BKDWriter}} 
needs random access to the file and we rely on fixed-width entries to do this 
now.
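The fixed-width trade-off described above is easy to sketch outside Lucene: choose int or long up front from the promised total value count, and random access stays a single seek to `i * width`. This is only an illustration of the idea, not BKDWriter's actual file format; the function names and layout are invented for the example.

```python
import os
import struct
import tempfile

def write_ords(path, ords, total_point_count):
    # Pick the narrowest fixed-width encoding that can hold any ord.
    # Fixed-width entries are what make random access trivial:
    # entry i always lives at byte offset i * width.
    fmt = "<i" if total_point_count <= 2**31 - 1 else "<q"
    width = struct.calcsize(fmt)
    with open(path, "wb") as f:
        for o in ords:
            f.write(struct.pack(fmt, o))
    return fmt, width

def read_ord(path, fmt, width, i):
    with open(path, "rb") as f:
        f.seek(i * width)  # seek straight to the i-th entry
        return struct.unpack(fmt, f.read(width))[0]

path = os.path.join(tempfile.mkdtemp(), "ords.bin")
fmt, width = write_ords(path, [10, 20, 30], total_point_count=1000)
print(fmt, width, read_ord(path, fmt, width, 1))  # 4-byte int entries
```

A variable-width encoding like vLong would shrink the file further but, as the issue notes, would break this constant-offset random access.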







[jira] [Commented] (SOLR-8832) Faulty DaemonStream shutdown procedures

2016-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191488#comment-15191488
 ] 

ASF subversion and git services commented on SOLR-8832:
---

Commit 99b9e71db21a34eca4da1639ac13b91cbdcca813 in lucene-solr's branch 
refs/heads/branch_6_0 from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=99b9e71 ]

SOLR-8832: Faulty DaemonStream shutdown procedures


> Faulty DaemonStream shutdown procedures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch, SOLR-8832.patch
>
>
> The following test run fails everytime due to faulty DaemonStream shutdown 
> procedures.
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1






[jira] [Commented] (SOLR-8832) Faulty DaemonStream shutdown procedures

2016-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191481#comment-15191481
 ] 

ASF subversion and git services commented on SOLR-8832:
---

Commit 26f230a4740e281ee9b43ed60bb8d24c4ed8dbdc in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=26f230a ]

SOLR-8832: Faulty DaemonStream shutdown procedures


> Faulty DaemonStream shutdown procedures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch, SOLR-8832.patch
>
>
> The following test run fails everytime due to faulty DaemonStream shutdown 
> procedures.
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1






[jira] [Commented] (SOLR-8832) Faulty DaemonStream shutdown procedures

2016-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191467#comment-15191467
 ] 

ASF subversion and git services commented on SOLR-8832:
---

Commit 007d41c9f5073ee796dc35168d397e7a5b501997 in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=007d41c ]

SOLR-8832: Faulty DaemonStream shutdown procedures


> Faulty DaemonStream shutdown procedures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch, SOLR-8832.patch
>
>
> The following test run fails every time due to faulty DaemonStream shutdown 
> procedures.
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1






[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_72) - Build # 98 - Still Failing!

2016-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/98/
Java: 32bit/jdk1.8.0_72 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=6748, 
name=testExecutor-3119-thread-9, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6748, name=testExecutor-3119-thread-9, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:60808
at __randomizedtesting.SeedInfo.seed([D900758174D3FCD2]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:60808
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 11380 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.UnloadDistributedZkTest_D900758174D3FCD2-001/init-core-data-001
   [junit4]   2> 829487 INFO  
(SUITE-UnloadDistributedZkTest-seed#[D900758174D3FCD2]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 829488 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[D900758174D3FCD2]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 829488 INFO  (Thread-2463) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 829489 INFO  (Thread-2463) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 

[jira] [Updated] (SOLR-8659) Improve Solr JDBC Driver to support more SQL Clients

2016-03-11 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8659:
---
Description: 
SOLR-8502 was a great start to getting JDBC support to be more complete. This 
ticket is to track items that could further improve the JDBC support for more 
SQL clients and their features. A few SQL clients are:
* DbVisualizer
* SQuirrel SQL
* Apache Zeppelin (incubating)
* IntelliJ IDEA Database Tool
* ODBC clients like Excel/Tableau

  was:
SOLR-8502 was a great start to getting JDBC support to be more complete. This 
ticket is to track items that could further improve the JDBC support for more 
SQL clients and their features. A few SQL clients are:
* DbVisualizer
* SquirrelSQL
* Apache Zeppelin (incubating)
* IntelliJ IDEA Database Tool
* ODBC clients like Excel/Tableau


> Improve Solr JDBC Driver to support more SQL Clients
> 
>
> Key: SOLR-8659
> URL: https://issues.apache.org/jira/browse/SOLR-8659
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Attachments: 
> iODBC_Demo__Unicode__-_Connected_to__remotesolr__and_Attach_screenshot_-_ASF_JIRA.png
>
>
> SOLR-8502 was a great start to getting JDBC support to be more complete. This 
> ticket is to track items that could further improve the JDBC support for more 
> SQL clients and their features. A few SQL clients are:
> * DbVisualizer
> * SQuirrel SQL
> * Apache Zeppelin (incubating)
> * IntelliJ IDEA Database Tool
> * ODBC clients like Excel/Tableau






[jira] [Updated] (SOLR-8825) SolrJ JDBC - SQuirrel SQL documentation

2016-03-11 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8825:
---
Description: Like SOLR-8521, it would be great to document how SQuirrel SQL 
can be used with SolrJ JDBC.  (was: Like SOLR-8521, it would be great to 
document how SquirrelSQL can be used with SolrJ JDBC.)

> SolrJ JDBC - SQuirrel SQL documentation
> ---
>
> Key: SOLR-8825
> URL: https://issues.apache.org/jira/browse/SOLR-8825
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation, SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
>
> Like SOLR-8521, it would be great to document how SQuirrel SQL can be used 
> with SolrJ JDBC.






[jira] [Updated] (SOLR-8825) SolrJ JDBC - SQuirrel SQL documentation

2016-03-11 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8825:
---
Summary: SolrJ JDBC - SQuirrel SQL documentation  (was: SolrJ JDBC - 
SquirrelSQL documentation)

> SolrJ JDBC - SQuirrel SQL documentation
> ---
>
> Key: SOLR-8825
> URL: https://issues.apache.org/jira/browse/SOLR-8825
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation, SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
>
> Like SOLR-8521, it would be great to document how SquirrelSQL can be used 
> with SolrJ JDBC.






[jira] [Updated] (LUCENE-7097) Can we increase the stack depth before Introsorter switches to heapsort?

2016-03-11 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7097:
---
Attachment: LUCENE-7097.patch

Thanks [~jpountz], that's a good idea, here's a patch using the existing 
{{MathUtil.log}}.

> Can we increase the stack depth before Introsorter switches to heapsort?
> 
>
> Key: LUCENE-7097
> URL: https://issues.apache.org/jira/browse/LUCENE-7097
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: trunk, 6.1
>
> Attachments: LUCENE-7097.patch
>
>
> Introsort is a "safe" quicksort: it uses quicksort but detects when an 
> adversary is at work and cuts over to heapsort at that point.
> The description at https://en.wikipedia.org/wiki/Introsort shows the cutover 
> as 2X log_2(N) but our impl ({{IntroSorter}}) currently uses just log_2.
> So I tested using 2X log_2 instead, and I see a decent (~5.6%, from 98.2 sec 
> to 92.7 sec) speedup in the time for offline sorter to sort when doing the 
> force merge of 6.1 LatLonPoints from the London UK benchmark.
> Is there any reason not to switch?  I know this means 2X the stack required, 
> but since this is log_2 space that seems fine?
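For illustration, the depth budget being discussed can be sketched as follows. This is a self-contained stand-in, not Lucene's actual {{IntroSorter}}; the `log2` helper mirrors what {{MathUtil.log(n, 2)}} is assumed to compute (floor of log base 2).

```java
// Self-contained sketch of the IntroSorter recursion-depth budget.
// Not Lucene code; names here are illustrative stand-ins.
public class IntroDepth {
    // floor(log2(x)) for x >= 1, mirroring MathUtil.log(x, 2)
    static int log2(long x) {
        int ret = 0;
        while (x >= 2) {
            x /= 2;
            ret++;
        }
        return ret;
    }

    // Recursion depth allowed before cutting over to heapsort:
    // the current implementation uses log2(n); the proposal is 2 * log2(n).
    static int maxDepth(long n, int multiplier) {
        return multiplier * log2(n);
    }

    public static void main(String[] args) {
        System.out.println(maxDepth(1_000_000, 1)); // current budget: 19
        System.out.println(maxDepth(1_000_000, 2)); // proposed budget: 38
    }
}
```

Since the budget grows only logarithmically, doubling it adds a handful of stack frames even for very large inputs, which is the basis of the "2X the stack seems fine" argument above.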






[jira] [Updated] (SOLR-8827) SolrJ JDBC - Ensure that SquirrelSQL works with SolrJ JDBC

2016-03-11 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8827:
---
Attachment: SQuirreL_SQL_Client_Version_3_7.png

Attached a screenshot of SQuirreL SQL showing that it can execute queries.

> SolrJ JDBC - Ensure that SquirrelSQL works with SolrJ JDBC
> --
>
> Key: SOLR-8827
> URL: https://issues.apache.org/jira/browse/SOLR-8827
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Attachments: SQuirreL_SQL_Client_Version_3_7.png
>
>
> There are a bunch of NPE exceptions that SolrJ JDBC causes in SquirrelSQL. 
> These need to be tracked down and fixed.






[jira] [Updated] (SOLR-8827) SolrJ JDBC - Ensure that SQuirrel SQL works with SolrJ JDBC

2016-03-11 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8827:
---
Summary: SolrJ JDBC - Ensure that SQuirrel SQL works with SolrJ JDBC  (was: 
SolrJ JDBC - Ensure that SquirrelSQL works with SolrJ JDBC)

> SolrJ JDBC - Ensure that SQuirrel SQL works with SolrJ JDBC
> ---
>
> Key: SOLR-8827
> URL: https://issues.apache.org/jira/browse/SOLR-8827
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Attachments: SQuirreL_SQL_Client_Version_3_7.png
>
>
> There are a bunch of NPE exceptions that SolrJ JDBC causes in SquirrelSQL. 
> These need to be tracked down and fixed.






[jira] [Resolved] (SOLR-8827) SolrJ JDBC - Ensure that SQuirrel SQL works with SolrJ JDBC

2016-03-11 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-8827.

Resolution: Not A Problem

> SolrJ JDBC - Ensure that SQuirrel SQL works with SolrJ JDBC
> ---
>
> Key: SOLR-8827
> URL: https://issues.apache.org/jira/browse/SOLR-8827
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Attachments: SQuirreL_SQL_Client_Version_3_7.png
>
>
> There are a bunch of NPE exceptions that SolrJ JDBC causes in SquirrelSQL. 
> These need to be tracked down and fixed.






[jira] [Resolved] (SOLR-8831) allow _version_ field to be retrievable via docValues

2016-03-11 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-8831.

   Resolution: Fixed
Fix Version/s: 6.0

> allow _version_ field to be retrievable via docValues
> -
>
> Key: SOLR-8831
> URL: https://issues.apache.org/jira/browse/SOLR-8831
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
> Fix For: 6.0
>
> Attachments: SOLR-8831.patch
>
>
> Right now, one is prohibited from having an unstored _version_ field, even if 
> docValues are enabled.
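For illustration, the change means a schema entry along these lines is now permitted (a hedged sketch; the exact field type name is an assumption and depends on the schema in use):

```xml
<!-- Hypothetical schema.xml fragment: _version_ left unstored, but still
     retrievable through docValues. The "long" type name is an assumption. -->
<field name="_version_" type="long" indexed="false" stored="false" docValues="true"/>
```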






[jira] [Commented] (SOLR-8831) allow _version_ field to be retrievable via docValues

2016-03-11 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191403#comment-15191403
 ] 

Yonik Seeley commented on SOLR-8831:


I think that was for searchability purposes... it allowed indexed OR docValues 
(and said nothing about stored)

> allow _version_ field to be retrievable via docValues
> -
>
> Key: SOLR-8831
> URL: https://issues.apache.org/jira/browse/SOLR-8831
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
> Fix For: 6.0
>
> Attachments: SOLR-8831.patch
>
>
> Right now, one is prohibited from having an unstored _version_ field, even if 
> docValues are enabled.






[jira] [Commented] (SOLR-8831) allow _version_ field to be retrievable via docValues

2016-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191397#comment-15191397
 ] 

ASF subversion and git services commented on SOLR-8831:
---

Commit ff8cedcb11638ee52f91bf81bad2ee01f3c3d59a in lucene-solr's branch 
refs/heads/branch_6_0 from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ff8cedc ]

SOLR-8831: allow _version_ field to be retrievable via docValues


> allow _version_ field to be retrievable via docValues
> -
>
> Key: SOLR-8831
> URL: https://issues.apache.org/jira/browse/SOLR-8831
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
> Attachments: SOLR-8831.patch
>
>
> Right now, one is prohibited from having an unstored _version_ field, even if 
> docValues are enabled.






[jira] [Commented] (SOLR-8790) Add node name back to the core level responses in OverseerMessageHandler

2016-03-11 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191369#comment-15191369
 ] 

Anshum Gupta commented on SOLR-8790:


[~varunthacker] The patch leads to TestRebalanceLeaders failing. 
You are trying to extract NODE_NAME from the message, but that will always be null.
{code}
message.getStr(ZkStateReader.NODE_NAME_PROP)
{code}

Also, your patch loses this information:
{code}
sreq.purpose = ShardRequest.PURPOSE_PRIVATE;
{code}

> Add node name back to the core level responses in OverseerMessageHandler
> 
>
> Key: SOLR-8790
> URL: https://issues.apache.org/jira/browse/SOLR-8790
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
> Attachments: SOLR-8790-followup.patch, SOLR-8790.patch
>
>
> Continuing from SOLR-8789, now that this test runs, time to fix it.






[jira] [Updated] (SOLR-8831) allow _version_ field to be retrievable via docValues

2016-03-11 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8831:
---
Summary: allow _version_ field to be retrievable via docValues  (was: allow 
_version_ field to be unstored)

> allow _version_ field to be retrievable via docValues
> -
>
> Key: SOLR-8831
> URL: https://issues.apache.org/jira/browse/SOLR-8831
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
> Attachments: SOLR-8831.patch
>
>
> Right now, one is prohibited from having an unstored _version_ field, even if 
> docValues are enabled.






[jira] [Commented] (SOLR-8672) Unique Suggestions getter in Solrj

2016-03-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191305#comment-15191305
 ] 

Tomás Fernández Löbbe commented on SOLR-8672:
-

Thanks for the patch, Alessandro. Wouldn't it be better to resolve this at the 
suggester level? There have been some discussions about this in LUCENE-6336.

> Unique Suggestions getter in Solrj
> --
>
> Key: SOLR-8672
> URL: https://issues.apache.org/jira/browse/SOLR-8672
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 5.4.1
>Reporter: Alessandro Benedetti
> Attachments: SOLR-8672.patch
>
>
> Currently all the suggesters based on full field content retrieval give back 
> possibly duplicated suggestions.
> A first observation, related to the suggester domain, is that it is unlikely 
> we need to return duplicates (we don't actually return the id of the document, 
> or anything else, so having duplicates is arguably not a benefit).
> I propose at least to offer, via SolrJ, the possibility of getting the 
> suggestions without duplicates.
> Any feedback is welcome.
> The patch provided is really simple.






[jira] [Comment Edited] (SOLR-8832) Faulty DaemonStream shutdown procedures

2016-03-11 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191264#comment-15191264
 ] 

Joel Bernstein edited comment on SOLR-8832 at 3/11/16 5:41 PM:
---

The test failures were occurring due to faulty shutdown behavior in the 
DaemonStream.

This patch makes the following changes:

1) Removes the interrupt on shutdown. The interrupt was faulty, causing the 
internal thread to exit at unsafe times. Now shutdown just flags the internal 
thread so that it will exit its loop after completing a full run of the 
internal stream.

2) Adds a shutdown method to the DaemonStream. When the DaemonStream's internal 
queue is enabled for continuous pull streaming, the contract for shutdown is:

 a) Call DaemonStream.shutdown(). This signals the internal thread to shut down 
after it finishes its run.
 b) Call DaemonStream.read() until the EOF Tuple is read. This clears the 
internal queue and unblocks the internal stream if it is blocked on the queue.
 c) Call DaemonStream.close().

If the internal queue is not enabled, as in the continuous push streaming use 
case, calling close() will suffice.
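The three-step contract above can be simulated with a small, self-contained program. This is NOT Solr's DaemonStream; class and member names below are illustrative stand-ins, and the internal thread is reduced to a producer feeding a bounded queue.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal simulation of the shutdown contract: flag the internal thread,
// drain the queue until EOF, then close. Not Solr code; names are stand-ins.
public class DaemonShutdownDemo {
    static final String EOF = "EOF";
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
    private volatile boolean shutdownRequested = false;
    private final Thread internal = new Thread(() -> {
        try {
            int run = 0;
            // A flag (not an interrupt) tells the thread to stop; it always
            // finishes its current put before checking the flag again.
            while (!shutdownRequested) {
                queue.put("tuple-" + run++); // blocks when the queue is full
            }
            queue.put(EOF); // tell the reader we are done
        } catch (InterruptedException ignored) {
        }
    });

    void open() { internal.start(); }
    void shutdown() { shutdownRequested = true; } // a) flag, don't interrupt
    String read() throws InterruptedException { return queue.take(); }
    void close() throws InterruptedException { internal.join(); } // c)

    public static void main(String[] args) throws Exception {
        DaemonShutdownDemo stream = new DaemonShutdownDemo();
        stream.open();
        stream.shutdown(); // a) signal shutdown
        String tuple;
        while (!(tuple = stream.read()).equals(EOF)) { // b) drain until EOF
            System.out.println(tuple);
        }
        stream.close(); // c) safe: the internal thread has already exited
        System.out.println("closed cleanly");
    }
}
```

Note why step b) matters: with a bounded queue, the producer can be blocked in `put()`; draining to EOF guarantees it unblocks and reaches its exit path, so the final `close()` never hangs.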



was (Author: joel.bernstein):
The test failures were occurring due to faulty shutdown behavior in the 
DaemonStream.

This patch makes the following changes:

1) Removes the interrupt on shutdown. The interrupt was faulty, causing the 
internal thread to exit at unsafe times. Now shutdown just flags the internal 
thread so that it will exit its loop after completing a full run of the 
internal stream.

2) Adds a shutdown method to the DaemonStream. When the DaemonStream's internal 
queue is enabled for continuous pull streaming, the contract for shutdown is:

 a) Call DaemonStream.shutdown(). This signals the internal thread to shut down 
after it finishes its run.
 b) Call DaemonStream.read() until the EOF Tuple is read. This clears the 
internal queue and unblocks the internal stream if it is blocked on the queue.
 c) Call DaemonStream.close().

If the internal queue is not enabled, as in the continuous push streaming use 
case, calling close() will suffice.


> Faulty DaemonStream shutdown procedures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch, SOLR-8832.patch
>
>
> The following test run fails every time due to faulty DaemonStream shutdown 
> procedures.
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1






[jira] [Comment Edited] (SOLR-8832) Faulty DaemonStream shutdown procedures

2016-03-11 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191264#comment-15191264
 ] 

Joel Bernstein edited comment on SOLR-8832 at 3/11/16 5:40 PM:
---

The test failures were occurring due to faulty shutdown behavior in the 
DaemonStream.

This patch makes the following changes:

1) Removes the interrupt on shutdown. The interrupt was faulty, causing the 
internal thread to exit at unsafe times. Now shutdown just flags the internal 
thread so that it will exit its loop after completing a full run of the 
internal stream.

2) Adds a shutdown method to the DaemonStream. When the DaemonStream's internal 
queue is enabled for continuous pull streaming, the contract for shutdown is:

 a) Call DaemonStream.shutdown(). This signals the internal thread to shut down 
after it finishes its run.
 b) Call DaemonStream.read() until the EOF Tuple is read. This clears the 
internal queue and unblocks the internal stream if it is blocked on the queue.
 c) Call DaemonStream.close().

If the internal queue is not enabled, as in the continuous push streaming use 
case, calling close() will suffice.



was (Author: joel.bernstein):
The test failures were occurring due to faulty shutdown behavior in the 
DaemonStream.

This patch makes the following changes:

1) Removes the interrupt on shutdown. The interrupt was faulty, causing the 
internal thread to exit at unsafe times. Now shutdown just flags the internal 
thread so that it will exit its loop after completing a full run of the 
internal stream.

2) Adds a shutdown method to the DaemonStream. When the DaemonStream's internal 
queue is enabled for continuous pull streaming, the contract for shutdown is:

 a) Call DaemonStream.shutdown(). This signals the internal thread to shut down 
after it finishes its run.
 b) Call DaemonStream.read() until the EOF Tuple is read. This clears the 
internal queue and unblocks the internal stream if it is blocked on the queue.
 c) Call DaemonStream.close().

If the internal queue is not enabled, as in the continuous push streaming use 
case, calling close() will suffice.


> Faulty DaemonStream shutdown procedures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch, SOLR-8832.patch
>
>
> The following test run fails every time due to faulty DaemonStream shutdown 
> procedures.
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1






Re: [More Like This] Query building

2016-03-11 Thread Alessandro Benedetti
Hi Anshum,
my complaint was not meant as a polemic, just a sad observation :(
I know perfectly well that it was more a lack of time than of intent!
Hopefully I will get some feedback and we can fix/improve the MLT
together!

Cheers

On 11 March 2016 at 17:26, Anshum Gupta  wrote:

> Hi Alessandro,
>
> I've updated the JIRA. The committers try to review code whenever they
> get time, and in this case, like other such times, I think we were all just
> lacking the time rather than the intent.
>
> Also, not all committers work on all parts of the code, so that narrows
> down the people who could potentially help you.
>
> On Fri, Mar 11, 2016 at 8:49 AM, Alessandro Benedetti <
> abenede...@apache.org> wrote:
>
>> I am starting to feel that it is not that easy to contribute improvements or
>> small fixes to Solr (if they are not super interesting to the masses).
>> I think this one could be a good improvement to the MLT, but I would love
>> to discuss it with some committer.
>> The patch is attached; it has been there for months...
>> Any feedback would be appreciated. I want to contribute, but I need some
>> second opinions...
>>
>> Cheers
>>
>> On 11 February 2016 at 13:48, Alessandro Benedetti > > wrote:
>>
>>> Hi Guys,
>>> is it possible to get any feedback?
>>> Is there any process to speed up bug resolution / discussions?
>>> I just want to understand whether the patch is not good enough, whether I
>>> need to improve it, or whether simply no one has taken a look...
>>>
>>> https://issues.apache.org/jira/browse/LUCENE-6954
>>>
>>> Cheers
>>>
>>> On 11 January 2016 at 15:25, Alessandro Benedetti >> > wrote:
>>>
 Hi guys,
 the patch seems fine to me.
 I didn't spend much more time on the code but I checked the tests and
 the pre-commit checks.
 It seems fine to me.
 Let me know ,

 Cheers

 On 31 December 2015 at 18:40, Alessandro Benedetti <
 abenede...@apache.org> wrote:

> https://issues.apache.org/jira/browse/LUCENE-6954
>
> First draft patch available, I will check better the tests new year !
>
> On 29 December 2015 at 13:43, Alessandro Benedetti <
> abenede...@apache.org> wrote:
>
>> Sure, I will proceed tomorrow with the Jira and the simple patch +
>> tests.
>>
>> In the meantime let's try to collect some additional feedback.
>>
>> Cheers
>>
>> On 29 December 2015 at 12:43, Anshum Gupta 
>> wrote:
>>
>>> Feel free to create a JIRA and put up a patch if you can.
>>>
>>> On Tue, Dec 29, 2015 at 4:26 PM, Alessandro Benedetti <
>>> abenede...@apache.org
>>> > wrote:
>>>
>>> > Hi guys,
>>> > While I was exploring the way we build the More Like This query, I
>>> > discovered a part I am not convinced of :
>>> >
>>> >
>>> >
>>> > Let's see how we build the query :
>>> > org.apache.lucene.queries.mlt.MoreLikeThis#retrieveTerms(int)
>>> >
>>> > 1) we extract the terms from the interesting fields, adding them to a
>>> > map:
>>> >
>>> > Map termFreqMap = new HashMap<>();
>>> >
>>> > *( we lose the relation field -> term; we no longer know which field
>>> > the term came from! )*
>>> >
>>> > org.apache.lucene.queries.mlt.MoreLikeThis#createQueue
>>> >
>>> > 2) we build the queue that will contain the query terms; at this point
>>> > we connect these terms to some field again, but:
>>> >
>>> > ...
>>> >> // go through all the fields and find the largest document frequency
>>> >> String topField = fieldNames[0];
>>> >> int docFreq = 0;
>>> >> for (String fieldName : fieldNames) {
>>> >>   int freq = ir.docFreq(new Term(fieldName, word));
>>> >>   topField = (freq > docFreq) ? fieldName : topField;
>>> >>   docFreq = (freq > docFreq) ? freq : docFreq;
>>> >> }
>>> >> ...
>>> >
>>> >
>>> > We identify the topField as the field with the highest document frequency
>>> > for the term t.
>>> > Then we build the termQuery :
>>> >
>>> > queue.add(new ScoreTerm(word, *topField*, score, idf, docFreq, tf));
>>> >
>>> > In this way we lose a lot of precision.
>>> > Not sure why we do that.
>>> > I would prefer to keep the relation between terms and fields.
>>> > This could improve the MLT query quality a lot.
>>> > If I run the MLT on 2 fields, *description* and *facilities* for
>>> > example, it is likely I want to find documents with similar terms in the
>>> > description and similar terms in the facilities, without mixing things
>>> > up and losing the semantics of the terms.
>>> >
>>> > Let me know your opinion,
>>> >
>>> > Cheers
>>> >
>>> >
>>> > --
>>> > --
>>> >
>>> > Benedetti Alessandro
>>> > Visiting 
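The topField selection quoted in the thread above can be sketched with a small, self-contained program. This is not the MoreLikeThis code itself; the map, field names, and frequencies are made up for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the topField selection described in the thread: for one term,
// only the field with the largest document frequency survives, so the
// term's presence in the other fields no longer matters.
public class TopFieldDemo {
    static String topField(Map<String, Integer> docFreqByField) {
        String top = null;
        int best = -1;
        for (Map.Entry<String, Integer> e : docFreqByField.entrySet()) {
            if (e.getValue() > best) {
                best = e.getValue();
                top = e.getKey();
            }
        }
        return top;
    }

    public static void main(String[] args) {
        // Hypothetical per-field document frequencies for a single term.
        Map<String, Integer> freqs = new LinkedHashMap<>();
        freqs.put("description", 42);
        freqs.put("facilities", 7);
        // The term is attributed to "description" only; its occurrences in
        // "facilities" stop influencing which field it is searched against,
        // which is the precision loss the thread describes.
        System.out.println(topField(freqs)); // prints "description"
    }
}
```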

[jira] [Updated] (SOLR-8832) Faulty DaemonStream shutdown procedures

2016-03-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8832:
-
Description: 
The following test run fails every time due to faulty DaemonStream shutdown 
procedures.

ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
-Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

  was:
The following test run re

ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
-Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1


> Faulty DaemonStream shutdown procedures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch, SOLR-8832.patch
>
>
> The following test run fails every time due to faulty DaemonStream shutdown 
> procedures.
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8832) Faulty DaemonStream shutdown procedures

2016-03-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8832:
-
Description: 
The following test run re

ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
-Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

  was:
The following test fails every time:

ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
-Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1


> Faulty DaemonStream shutdown procedures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch, SOLR-8832.patch
>
>
> The following test run re
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1






[jira] [Updated] (SOLR-8832) Faulty DaemonStream shutdown procedures

2016-03-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8832:
-
Summary: Faulty DaemonStream shutdown procedures  (was: Reproducible 
DaemonStream test failures)

> Faulty DaemonStream shutdown procedures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch, SOLR-8832.patch
>
>
> The following test fails every time:
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1






[jira] [Comment Edited] (SOLR-8832) Reproducible DaemonStream test failures

2016-03-11 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191264#comment-15191264
 ] 

Joel Bernstein edited comment on SOLR-8832 at 3/11/16 5:34 PM:
---

The test failures were occurring due to faulty shutdown behavior in the 
DaemonStream.

This patch makes the following changes:

1) Removes the interrupt on shutdown. The interrupt was faulty and caused the 
internal thread to exit at unsafe times. Now shutdown just flags the internal 
thread so that it exits its loop after completing a full run of the internal 
stream.

2) Adds a shutdown method to the DaemonStream. When the DaemonStream's internal 
queue is enabled for continuous pull streaming, the contract for shutdown is:
 a) Call DaemonStream.shutdown(). This signals the internal thread to shut 
down after it finishes its run.
 b) Call DaemonStream.read() until the EOF Tuple is read. This clears the 
internal queue and unblocks the internal stream if it's blocked on the queue.
 c) Call DaemonStream.close().

If the internal queue is not enabled, in the continuous push streaming use 
case, calling close() will suffice.



was (Author: joel.bernstein):
The test failures were occurring due to faulty shutdown behavior in the 
DaemonStream.

This patch makes the following changes:

1) Removes the interrupt on shutdown. The interrupt was faulty and caused the 
internal thread to exit at unsafe times. Now shutdown just flags the internal 
thread so that it exits its loop after completing a full run of the internal 
stream.

2) Adds a shutdown method to the DaemonStream. When the DaemonStream's internal 
queue is enabled for continuous pull streaming, the contract for shutdown is:
 a) Call DaemonStream.shutdown(). This signals the internal thread to shut 
down after it finishes its run.
 b) Call DaemonStream.read() until the EOF Tuple is read. This clears the 
internal queue and unblocks the internal stream if it's blocked on the queue.
 c) Call DaemonStream.close().

If the internal queue is not enabled (the continuous push streaming case), 
calling close() will suffice.


> Reproducible DaemonStream test failures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch, SOLR-8832.patch
>
>
> The following test fails every time:
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1






[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+107) - Build # 97 - Still Failing!

2016-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/97/
Java: 64bit/jdk-9-ea+107 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.lucene.codecs.lucene50.TestLucene50NormsFormat.testByteRange

Error Message:
Unable to unmap the mapped buffer: 
MMapIndexInput(path="/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/backward-codecs/test/J1/temp/lucene.codecs.lucene50.TestLucene50NormsFormat_4B47B4EF4759E544-001/index-MMapDirectory-001/_1a.nvd")

Stack Trace:
java.io.IOException: Unable to unmap the mapped buffer: 
MMapIndexInput(path="/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/backward-codecs/test/J1/temp/lucene.codecs.lucene50.TestLucene50NormsFormat_4B47B4EF4759E544-001/index-MMapDirectory-001/_1a.nvd")
at 
__randomizedtesting.SeedInfo.seed([4B47B4EF4759E544:82388DDD27CFA5D1]:0)
at 
org.apache.lucene.store.MMapDirectory.lambda$unmapHackImpl$1(MMapDirectory.java:384)
at 
org.apache.lucene.store.ByteBufferIndexInput.freeBuffer(ByteBufferIndexInput.java:376)
at 
org.apache.lucene.store.ByteBufferIndexInput.close(ByteBufferIndexInput.java:355)
at 
org.apache.lucene.store.MockIndexInputWrapper.close(MockIndexInputWrapper.java:61)
at 
org.apache.lucene.index.IndexWriter.slowFileExists(IndexWriter.java:4815)
at org.apache.lucene.index.IndexWriter.filesExist(IndexWriter.java:4352)
at 
org.apache.lucene.index.IndexWriter.startCommit(IndexWriter.java:4423)
at 
org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2876)
at 
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2989)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2956)
at 
org.apache.lucene.index.RandomIndexWriter.commit(RandomIndexWriter.java:288)
at 
org.apache.lucene.index.BaseNormsFormatTestCase.doTestNormsVersusDocValues(BaseNormsFormatTestCase.java:262)
at 
org.apache.lucene.index.BaseNormsFormatTestCase.testByteRange(BaseNormsFormatTestCase.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (SOLR-8832) Reproducible DaemonStream test failures

2016-03-11 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191264#comment-15191264
 ] 

Joel Bernstein edited comment on SOLR-8832 at 3/11/16 5:33 PM:
---

The test failures were occurring due to faulty shutdown behavior in the 
DaemonStream.

This patch makes the following changes:

1) Removes the interrupt on shutdown. The interrupt was faulty and caused the 
internal thread to exit at unsafe times. Now shutdown just flags the internal 
thread so that it exits its loop after completing a full run of the internal 
stream.

2) Adds a shutdown method to the DaemonStream. When the DaemonStream's internal 
queue is enabled for continuous pull streaming, the contract for shutdown is:
 a) Call DaemonStream.shutdown(). This signals the internal thread to shut 
down after it finishes its run.
 b) Call DaemonStream.read() until the EOF Tuple is read. This clears the 
internal queue and unblocks the internal stream if it's blocked on the queue.
 c) Call DaemonStream.close().

If the internal queue is not enabled (the continuous push streaming case), 
calling close() will suffice.



was (Author: joel.bernstein):
The test failures were occurring due to faulty shutdown behavior in the 
DaemonStream.

This patch makes the following changes:

1) Removes the interrupt on shutdown. The interrupt was faulty and caused the 
internal thread to exit at unsafe times. Now shutdown just flags the internal 
thread so that it exits its loop after completing a full run of the internal 
stream.

2) Adds a shutdown method to the DaemonStream. When the DaemonStream's internal 
queue is enabled for continuous pull streaming, the contract for shutdown is:
 a) Call DaemonStream.shutdown();
 b) Call DaemonStream.read() until the EOF Tuple is read. This clears the 
internal queue and unblocks the internal stream if it's blocked on the queue.
 c) Call DaemonStream.close();

If the internal queue is not enabled (the continuous push streaming case), 
calling close() will suffice.


> Reproducible DaemonStream test failures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch, SOLR-8832.patch
>
>
> The following test fails every time:
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1






[jira] [Commented] (SOLR-8832) Reproducible DaemonStream test failures

2016-03-11 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191264#comment-15191264
 ] 

Joel Bernstein commented on SOLR-8832:
--

The test failures were occurring due to faulty shutdown behavior in the 
DaemonStream.

This patch makes the following changes:

1) Removes the interrupt on shutdown. The interrupt was faulty and caused the 
internal thread to exit at unsafe times. Now shutdown just flags the internal 
thread so that it exits its loop after completing a full run of the internal 
stream.

2) Adds a shutdown method to the DaemonStream. When the DaemonStream's internal 
queue is enabled for continuous pull streaming, the contract for shutdown is:
 a) Call DaemonStream.shutdown();
 b) Call DaemonStream.read() until the EOF Tuple is read. This clears the 
internal queue and unblocks the internal stream if it's blocked on the queue.
 c) Call DaemonStream.close();

If the internal queue is not enabled (the continuous push streaming case), 
calling close() will suffice.
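The three-step contract above can be sketched with a minimal stand-in class. This is an illustration only: the class, the String "tuples", and the EOF marker below are invented for the sketch and are not Solr's actual DaemonStream API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch of the shutdown contract described above; not Solr's real API.
public class DaemonSketch {
    static final String EOF = "EOF";

    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
    private final AtomicBoolean stop = new AtomicBoolean(false);

    // Internal thread: one put() per "run" of the internal stream.
    private final Thread worker = new Thread(() -> {
        int run = 0;
        while (!stop.get()) {
            try {
                queue.put("tuple-" + run++); // may block when the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
        try {
            queue.put(EOF); // tell the reader no more tuples are coming
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });

    public void open()     { worker.start(); }
    public void shutdown() { stop.set(true); }  // (a) flag only -- no interrupt
    public String read() throws InterruptedException {
        return queue.take();                    // (b) draining unblocks the worker
    }
    public void close() throws InterruptedException {
        worker.join();                          // (c) safe once EOF has been read
    }

    public static void main(String[] args) throws Exception {
        DaemonSketch stream = new DaemonSketch();
        stream.open();
        stream.shutdown();
        int drained = 0;
        while (!stream.read().equals(EOF)) {
            drained++;
        }
        stream.close();
        System.out.println("drained " + drained + " tuples, then EOF");
    }
}
```

Skipping step (b) can deadlock close(): if the worker is blocked on a full queue, join() never returns, which is why the contract requires reading until EOF before closing.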


> Reproducible DaemonStream test failures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch, SOLR-8832.patch
>
>
> The following test fails every time:
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1






Re: [More Like This] Query building

2016-03-11 Thread Anshum Gupta
Hi Alessandro,

I've updated the JIRA. Committers try to review code whenever they get
time; in this case, as in others, I think we were all just short on
time rather than intent.

Also, not all committers work on all parts of the code, so that narrows
down the people who could potentially help you.

On Fri, Mar 11, 2016 at 8:49 AM, Alessandro Benedetti  wrote:

> I am starting to feel that it is not that easy to contribute improvements
> or small fixes to Solr (if they are not of broad interest).
> I think this one could be a good improvement to the MLT, but I would love
> to discuss it with a committer.
> The patch is attached; it has been there for months...
> Any feedback would be appreciated. I want to contribute, but I need some
> second opinions...
>
> Cheers
>
> On 11 February 2016 at 13:48, Alessandro Benedetti 
> wrote:
>
>> Hi Guys,
>> is it possible to have any feedback ?
>> Is there any process to speed up bug resolution / discussions ?
>> I just want to understand whether the patch is not good enough, whether I
>> need to improve it, or whether simply no one has taken a look...
>>
>> https://issues.apache.org/jira/browse/LUCENE-6954
>>
>> Cheers
>>
>> On 11 January 2016 at 15:25, Alessandro Benedetti 
>> wrote:
>>
>>> Hi guys,
>>> the patch seems fine to me.
>>> I didn't spend much more time on the code but I checked the tests and
>>> the pre-commit checks.
>>> It seems fine to me.
>>> Let me know ,
>>>
>>> Cheers
>>>
>>> On 31 December 2015 at 18:40, Alessandro Benedetti <
>>> abenede...@apache.org> wrote:
>>>
 https://issues.apache.org/jira/browse/LUCENE-6954

 First draft patch available, I will check better the tests new year !

 On 29 December 2015 at 13:43, Alessandro Benedetti <
 abenede...@apache.org> wrote:

> Sure, I will proceed tomorrow with the Jira and the simple patch +
> tests.
>
> In the meantime let's try to collect some additional feedback.
>
> Cheers
>
> On 29 December 2015 at 12:43, Anshum Gupta 
> wrote:
>
>> Feel free to create a JIRA and put up a patch if you can.
>>
>> On Tue, Dec 29, 2015 at 4:26 PM, Alessandro Benedetti <
>> abenede...@apache.org
>> > wrote:
>>
>> > Hi guys,
>> > While I was exploring the way we build the More Like This query, I
>> > discovered a part I am not convinced of :
>> >
>> >
>> >
>> > Let's see how we build the query :
>> > org.apache.lucene.queries.mlt.MoreLikeThis#retrieveTerms(int)
>> >
>> > 1) we extract the terms from the interesting fields, adding them to
>> a map :
>> >
>> > Map<String, Int> termFreqMap = new HashMap<>();
>> >
>> > *( we lose the field -> term relation; we no longer know which field
>> > the term came from! )*
>> >
>> > org.apache.lucene.queries.mlt.MoreLikeThis#createQueue
>> >
>> > 2) we build the queue that will contain the query terms; at this point
>> > we reconnect these terms to a field, but:
>> >
>> > ...
>> >> // go through all the fields and find the largest document
>> frequency
>> >> String topField = fieldNames[0];
>> >> int docFreq = 0;
>> >> for (String fieldName : fieldNames) {
>> >>   int freq = ir.docFreq(new Term(fieldName, word));
>> >>   topField = (freq > docFreq) ? fieldName : topField;
>> >>   docFreq = (freq > docFreq) ? freq : docFreq;
>> >> }
>> >> ...
>> >
>> >
>> > We identify the topField as the field with the highest document
>> frequency
>> > for the term t .
>> > Then we build the termQuery :
>> >
>> > queue.add(new ScoreTerm(word, *topField*, score, idf, docFreq, tf));
>> >
>> > In this way we lose a lot of precision.
>> > Not sure why we do that.
>> > I would prefer to keep the relation between terms and fields.
>> > This could improve the quality of the MLT query a lot.
>> > If I run the MLT on 2 fields, *description* and *facilities* for
>> > example, it is likely I want to find documents with similar terms in
>> > the description and similar terms in the facilities, without mixing
>> > things up and losing the semantics of the terms.
>> >
>> > Let me know your opinion,
>> >
>> > Cheers
>> >
>> >
>> > --
>> > --
>> >
>> > Benedetti Alessandro
>> > Visiting card : http://about.me/alessandro_benedetti
>> >
>> > "Tyger, tyger burning bright
>> > In the forests of the night,
>> > What immortal hand or eye
>> > Could frame thy fearful symmetry?"
>> >
>> > William Blake - Songs of Experience -1794 England
>> >
>>
>>
>>
>> --
>> Anshum Gupta
>>
>
>
>
> --
> --
>
> Benedetti Alessandro
> Visiting 

[jira] [Commented] (SOLR-5743) Faceting with BlockJoin support

2016-03-11 Thread Vijay Sekhri (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191254#comment-15191254
 ] 

Vijay Sekhri commented on SOLR-5743:


I created a new JIRA and also attached a rudimentary patch that takes care of 
the NPE and honors facet.prefix. 
https://issues.apache.org/jira/secure/attachment/12792872/SOLR-8834.patch
https://issues.apache.org/jira/browse/SOLR-8834

Vijay


> Faceting with BlockJoin support
> ---
>
> Key: SOLR-5743
> URL: https://issues.apache.org/jira/browse/SOLR-5743
> Project: Solr
>  Issue Type: New Feature
>  Components: faceting
>Reporter: abipc
>Assignee: Mikhail Khludnev
>  Labels: features
> Fix For: 5.5, master
>
> Attachments: SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
> SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
> SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
> SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
> cluster.jpg, service_baseline.png, service_new_baseline.jpg, 
> solr_baseline.jpg, solr_new_baseline.jpg
>
>
> For a sample inventory (note: nested documents) like this [document markup 
> was stripped by the archive; only field values survive, shown here as a 
> parent/child outline]:
> - parent document: 10, parent, Nike
>   - child document: 11, Red, XL
>   - child document: 12, Blue, XL
> Faceting results must contain - 
> Red(1)
> XL(1) 
> Blue(1) 
> for a "q=*" query. 
> PS : The inventory example has been taken from this blog - 
> http://blog.griddynamics.com/2013/09/solr-block-join-support.html






[jira] [Updated] (SOLR-8834) NPE for BlockJoinFacetComponent and facet.prefix not working

2016-03-11 Thread Vijay Sekhri (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay Sekhri updated SOLR-8834:
---
Attachment: SOLR-8834.patch

This patch honors facet.prefix and handles the NPE.

> NPE for BlockJoinFacetComponent and facet.prefix not working
> 
>
> Key: SOLR-8834
> URL: https://issues.apache.org/jira/browse/SOLR-8834
> Project: Solr
>  Issue Type: Bug
>  Components: faceting
>Affects Versions: 5.5
>Reporter: Vijay Sekhri
>Priority: Minor
> Attachments: SOLR-8834.patch
>
>
> Sometimes, for some types of queries, an NPE is thrown. This is the code 
> where it happens.
> {code}
> 14:00:20,751 ERROR [org.apache.solr.servlet.HttpSolrCall] 
> (http-/10.235.43.43:8580-82) null:java.lang.NullPointerException
> at 
> org.apache.solr.search.join.BlockJoinFacetCollector.incrementFacets(BlockJoinFacetCollector.java:100)
> at 
> org.apache.solr.search.join.BlockJoinFacetCollector.collect(BlockJoinFacetCollector.java:87)
> {code}
> It could be related to a stats query that does not even have any 
> ToParentBlockJoin syntax. Here is one example:
> {code}
> 15:07:56,736 INFO  [org.apache.solr.core.SolrCore.Request] 
> (http-/10.235.43.43:8580-143) [core1]  webapp=/solr path=/select 
> params={shards.qt=searchStandard=0.01=true=false=*:*=10.235.52.131=search1=true=2=1454360876733=http://solrx331p.qa.ch3.s.com:8580/solr/core1/|http://solrx351p.qa.ch3.s.com:8580/solr/core1/=id=score=%0a=1=searchStandard=true=catalogs:(("10104"))=searchableAttributes:(("Metal%3DTri+color"))=brand:("Black+Hills+Gold")=discount:("70")=primaryCategory:("10104_3_Jewelry_Diamonds_Rings")=%0a2<-1+5<-2+6<-50%25%0a=1=%0a+primaryLnames^5.0+partnumber^11.0+itemnumber^11.0+fullmfpartno^5.0+mfpartno^5.0+xref^10.0+storeOriginSearchable^3.0+nameSearchable^10.0+brandSearchable^5.0++searchPhrase^1.0++searchableAttributesSearchable^1.0%0a=javabin=0=%0a+++primaryLnames^0.5+nameSearchable^1.0+storeOriginSearchable^0.3+brandSearchable^0.5++xref^1.1+searchableAttributesSearchable^0.1%0a=516=0=white+diamonds+diamonds+elizabeth+taylor+body+lotion=true=price_10151_f=true=100}
>  hits=0 status=0 QTime=0
> 15:07:56,758 ERROR [org.apache.solr.handler.RequestHandlerBase] 
> (http-/10.235.43.43:8580-26) java.lang.NullPointerException
> at 
> org.apache.solr.search.join.BlockJoinFacetCollector.incrementFacets(BlockJoinFacetCollector.java:100)
> at 
> org.apache.solr.search.join.BlockJoinFacetCollector.collect(BlockJoinFacetCollector.java:87)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:1153)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:350)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> {code}
> Furthermore, when facet.prefix is passed, it is not honored by child.facet.






[jira] [Commented] (LUCENE-6954) More Like This Query Generation

2016-03-11 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191248#comment-15191248
 ] 

Anshum Gupta commented on LUCENE-6954:
--

Hi Alessandro, I really want to take a look at this but there's a lot on my 
plate at the moment. I'll try and look at this next week if no one else gets to 
it.

> More Like This Query Generation 
> 
>
> Key: LUCENE-6954
> URL: https://issues.apache.org/jira/browse/LUCENE-6954
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/other
>Affects Versions: 5.4
>Reporter: Alessandro Benedetti
>  Labels: morelikethis
> Attachments: LUCENE-6954.patch
>
>
> Currently the query is generated : 
> org.apache.lucene.queries.mlt.MoreLikeThis#retrieveTerms(int)
> 1) we extract the terms from the interesting fields, adding them to a map :
> Map<String, Int> termFreqMap = new HashMap<>();
> ( we lose the field -> term relation; we no longer know which field the 
> term came from! )
> org.apache.lucene.queries.mlt.MoreLikeThis#createQueue
> 2) we build the queue that will contain the query terms; at this point we 
> reconnect these terms to a field, but:
> ...
> // go through all the fields and find the largest document frequency
> String topField = fieldNames[0];
> int docFreq = 0;
> for (String fieldName : fieldNames) {
>   int freq = ir.docFreq(new Term(fieldName, word));
>   topField = (freq > docFreq) ? fieldName : topField;
>   docFreq = (freq > docFreq) ? freq : docFreq;
> }
> ...
> We identify the topField as the field with the highest document frequency for 
> the term t .
> Then we build the termQuery :
> queue.add(new ScoreTerm(word, topField, score, idf, docFreq, tf));
> In this way we lose a lot of precision.
> Not sure why we do that.
> I would prefer to keep the relation between terms and fields.
> This could improve the quality of the MLT query a lot.
> If I run the MLT on 2 fields, weSell and weDontSell for example, it is 
> likely I want to find documents with similar terms in weSell and similar 
> terms in weDontSell, without mixing things up and losing the semantics of 
> the terms.
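A per-field variant of step 1, which is the change this description argues for, could look like the sketch below. The field names, tokens, and helper method are invented for illustration; this is not the actual MoreLikeThis code.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch: keep term frequencies per field instead of one flat map,
// so a later query can target description:sea and facilities:sea separately.
public class PerFieldTerms {

    static Map<String, Map<String, Integer>> extractTerms(Map<String, List<String>> fieldToTokens) {
        Map<String, Map<String, Integer>> perField = new HashMap<>();
        for (Map.Entry<String, List<String>> e : fieldToTokens.entrySet()) {
            Map<String, Integer> freqs = new HashMap<>();
            for (String token : e.getValue()) {
                freqs.merge(token, 1, Integer::sum); // count the term within its own field
            }
            perField.put(e.getKey(), freqs);
        }
        return perField;
    }

    public static void main(String[] args) {
        Map<String, List<String>> doc = new HashMap<>();
        doc.put("description", Arrays.asList("quiet", "sea", "view", "sea"));
        doc.put("facilities", Arrays.asList("pool", "sea")); // "sea" stays tied to each field
        Map<String, Map<String, Integer>> perField = extractTerms(doc);
        System.out.println(perField.get("description").get("sea")); // prints 2
        System.out.println(perField.get("facilities").get("sea"));  // prints 1
    }
}
```

A query built from this nested map can score description:sea and facilities:sea independently, instead of attributing "sea" only to whichever field happens to have the higher document frequency.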






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_72) - Build # 16180 - Still Failing!

2016-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16180/
Java: 64bit/jdk1.8.0_72 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=2065, 
name=testExecutor-1130-thread-5, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2065, name=testExecutor-1130-thread-5, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
at 
__randomizedtesting.SeedInfo.seed([4212B5FA8FED56E8:CA468A2021113B10]:0)
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:53083
at __randomizedtesting.SeedInfo.seed([4212B5FA8FED56E8]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:53083
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 10916 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_4212B5FA8FED56E8-001/init-core-data-001
   [junit4]   2> 234892 INFO  
(SUITE-UnloadDistributedZkTest-seed#[4212B5FA8FED56E8]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 234894 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[4212B5FA8FED56E8]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 234894 INFO  (Thread-735) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   

[jira] [Updated] (SOLR-8832) Reproducible DaemonStream test failures

2016-03-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8832:
-
Attachment: SOLR-8832.patch

> Reproducible DaemonStream test failures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch, SOLR-8832.patch
>
>
> The following test fails every time:
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8834) NPE for BlockJoinFacetComponent and facet.prefix not working

2016-03-11 Thread Vijay Sekhri (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay Sekhri updated SOLR-8834:
---
Description: 
Sometimes, for some types of queries, an NPE is thrown. This is the code where 
it was happening.
{code}
14:00:20,751 ERROR [org.apache.solr.servlet.HttpSolrCall] 
(http-/10.235.43.43:8580-82) null:java.lang.NullPointerException
at 
org.apache.solr.search.join.BlockJoinFacetCollector.incrementFacets(BlockJoinFacetCollector.java:100)
at 
org.apache.solr.search.join.BlockJoinFacetCollector.collect(BlockJoinFacetCollector.java:87)
{code}

It could be related to a stats query that does not even have any 
ToParentBlockJoin syntax. Here is one example:


{code}
15:07:56,736 INFO  [org.apache.solr.core.SolrCore.Request] 
(http-/10.235.43.43:8580-143) [core1]  webapp=/solr path=/select 
params={shards.qt=searchStandard=0.01=true=false=*:*=10.235.52.131=search1=true=2=1454360876733=http://solrx331p.qa.ch3.s.com:8580/solr/core1/|http://solrx351p.qa.ch3.s.com:8580/solr/core1/=id=score=%0a=1=searchStandard=true=catalogs:(("10104"))=searchableAttributes:(("Metal%3DTri+color"))=brand:("Black+Hills+Gold")=discount:("70")=primaryCategory:("10104_3_Jewelry_Diamonds_Rings")=%0a2<-1+5<-2+6<-50%25%0a=1=%0a+primaryLnames^5.0+partnumber^11.0+itemnumber^11.0+fullmfpartno^5.0+mfpartno^5.0+xref^10.0+storeOriginSearchable^3.0+nameSearchable^10.0+brandSearchable^5.0++searchPhrase^1.0++searchableAttributesSearchable^1.0%0a=javabin=0=%0a+++primaryLnames^0.5+nameSearchable^1.0+storeOriginSearchable^0.3+brandSearchable^0.5++xref^1.1+searchableAttributesSearchable^0.1%0a=516=0=white+diamonds+diamonds+elizabeth+taylor+body+lotion=true=price_10151_f=true=100}
 hits=0 status=0 QTime=0


15:07:56,758 ERROR [org.apache.solr.handler.RequestHandlerBase] 
(http-/10.235.43.43:8580-26) java.lang.NullPointerException
at 
org.apache.solr.search.join.BlockJoinFacetCollector.incrementFacets(BlockJoinFacetCollector.java:100)
at 
org.apache.solr.search.join.BlockJoinFacetCollector.collect(BlockJoinFacetCollector.java:87)
at 
org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:1153)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:350)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)

{code}



Furthermore, when facet.prefix is passed, it is not being honored by 
child.facet.

  was:
Sometime for some types of queries a NPE is thrown .  This is the code where it 
was happening.
{code}
14:00:20,751 ERROR [org.apache.solr.servlet.HttpSolrCall] 
(http-/10.235.43.43:8580-82) null:java.lang.NullPointerException
at 
org.apache.solr.search.join.BlockJoinFacetCollector.incrementFacets(BlockJoinFacetCollector.java:100)
at 
org.apache.solr.search.join.BlockJoinFacetCollector.collect(BlockJoinFacetCollector.java:87)
{code}

It could be related to stats query that does not even have any 
ToParentBlockJoin syntax . Here is one example


{code}
15:07:56,736 INFO  [org.apache.solr.core.SolrCore.Request] 
(http-/10.235.43.43:8580-143) [core1]  webapp=/solr path=/select 
params={shards.qt=searchStandard=0.01=true=false=*:*=10.235.52.131=search1=true=2=1454360876733=http://solrx331p.qa.ch3.s.com:8580/solr/core1/|http://solrx351p.qa.ch3.s.com:8580/solr/core1/=id=score=%0a=1=searchStandard=true=catalogs:(("10104"))=searchableAttributes:(("Metal%3DTri+color"))=brand:("Black+Hills+Gold")=discount:("70")=primaryCategory:("10104_3_Jewelry_Diamonds_Rings")=%0a2<-1+5<-2+6<-50%25%0a=1=%0a+primaryLnames^5.0+partnumber^11.0+itemnumber^11.0+fullmfpartno^5.0+mfpartno^5.0+xref^10.0+storeOriginSearchable^3.0+nameSearchable^10.0+brandSearchable^5.0++searchPhrase^1.0++searchableAttributesSearchable^1.0%0a=javabin=0=%0a+++primaryLnames^0.5+nameSearchable^1.0+storeOriginSearchable^0.3+brandSearchable^0.5++xref^1.1+searchableAttributesSearchable^0.1%0a=516=0=white+diamonds+diamonds+elizabeth+taylor+body+lotion=true=price_10151_f=true=100}
 hits=0 status=0 QTime=0


15:07:56,758 ERROR [org.apache.solr.handler.RequestHandlerBase] 
(http-/10.235.43.43:8580-26) java.lang.NullPointerException
at 
org.apache.solr.search.join.BlockJoinFacetCollector.incrementFacets(BlockJoinFacetCollector.java:100)
at 
org.apache.solr.search.join.BlockJoinFacetCollector.collect(BlockJoinFacetCollector.java:87)
at 
org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:1153)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:350)
at 

[jira] [Updated] (SOLR-8834) NPE for BlockJoinFacetComponent and facet.prefix not working

2016-03-11 Thread Vijay Sekhri (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay Sekhri updated SOLR-8834:
---
Summary: NPE for BlockJoinFacetComponent and facet.prefix not working  
(was: NPE for BlockJoinFacetComponent)

> NPE for BlockJoinFacetComponent and facet.prefix not working
> 
>
> Key: SOLR-8834
> URL: https://issues.apache.org/jira/browse/SOLR-8834
> Project: Solr
>  Issue Type: Bug
>  Components: faceting
>Affects Versions: 5.5
>Reporter: Vijay Sekhri
>Priority: Minor
>
> Sometimes, for some types of queries, an NPE is thrown. This is the code 
> where it was happening.
> {code}
> 14:00:20,751 ERROR [org.apache.solr.servlet.HttpSolrCall] 
> (http-/10.235.43.43:8580-82) null:java.lang.NullPointerException
> at 
> org.apache.solr.search.join.BlockJoinFacetCollector.incrementFacets(BlockJoinFacetCollector.java:100)
> at 
> org.apache.solr.search.join.BlockJoinFacetCollector.collect(BlockJoinFacetCollector.java:87)
> {code}
> It could be related to a stats query that does not even have any 
> ToParentBlockJoin syntax. Here is one example:
> {code}
> 15:07:56,736 INFO  [org.apache.solr.core.SolrCore.Request] 
> (http-/10.235.43.43:8580-143) [core1]  webapp=/solr path=/select 
> params={shards.qt=searchStandard=0.01=true=false=*:*=10.235.52.131=search1=true=2=1454360876733=http://solrx331p.qa.ch3.s.com:8580/solr/core1/|http://solrx351p.qa.ch3.s.com:8580/solr/core1/=id=score=%0a=1=searchStandard=true=catalogs:(("10104"))=searchableAttributes:(("Metal%3DTri+color"))=brand:("Black+Hills+Gold")=discount:("70")=primaryCategory:("10104_3_Jewelry_Diamonds_Rings")=%0a2<-1+5<-2+6<-50%25%0a=1=%0a+primaryLnames^5.0+partnumber^11.0+itemnumber^11.0+fullmfpartno^5.0+mfpartno^5.0+xref^10.0+storeOriginSearchable^3.0+nameSearchable^10.0+brandSearchable^5.0++searchPhrase^1.0++searchableAttributesSearchable^1.0%0a=javabin=0=%0a+++primaryLnames^0.5+nameSearchable^1.0+storeOriginSearchable^0.3+brandSearchable^0.5++xref^1.1+searchableAttributesSearchable^0.1%0a=516=0=white+diamonds+diamonds+elizabeth+taylor+body+lotion=true=price_10151_f=true=100}
>  hits=0 status=0 QTime=0
> 15:07:56,758 ERROR [org.apache.solr.handler.RequestHandlerBase] 
> (http-/10.235.43.43:8580-26) java.lang.NullPointerException
> at 
> org.apache.solr.search.join.BlockJoinFacetCollector.incrementFacets(BlockJoinFacetCollector.java:100)
> at 
> org.apache.solr.search.join.BlockJoinFacetCollector.collect(BlockJoinFacetCollector.java:87)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:1153)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:350)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> {code}






[jira] [Created] (SOLR-8834) NPE for BlockJoinFacetComponent

2016-03-11 Thread Vijay Sekhri (JIRA)
Vijay Sekhri created SOLR-8834:
--

 Summary: NPE for BlockJoinFacetComponent
 Key: SOLR-8834
 URL: https://issues.apache.org/jira/browse/SOLR-8834
 Project: Solr
  Issue Type: Bug
  Components: faceting
Affects Versions: 5.5
Reporter: Vijay Sekhri
Priority: Minor


Sometimes, for some types of queries, an NPE is thrown. This is the code where 
it was happening.
{code}
14:00:20,751 ERROR [org.apache.solr.servlet.HttpSolrCall] 
(http-/10.235.43.43:8580-82) null:java.lang.NullPointerException
at 
org.apache.solr.search.join.BlockJoinFacetCollector.incrementFacets(BlockJoinFacetCollector.java:100)
at 
org.apache.solr.search.join.BlockJoinFacetCollector.collect(BlockJoinFacetCollector.java:87)
{code}

It could be related to a stats query that does not even have any 
ToParentBlockJoin syntax. Here is one example:


{code}
15:07:56,736 INFO  [org.apache.solr.core.SolrCore.Request] 
(http-/10.235.43.43:8580-143) [core1]  webapp=/solr path=/select 
params={shards.qt=searchStandard=0.01=true=false=*:*=10.235.52.131=search1=true=2=1454360876733=http://solrx331p.qa.ch3.s.com:8580/solr/core1/|http://solrx351p.qa.ch3.s.com:8580/solr/core1/=id=score=%0a=1=searchStandard=true=catalogs:(("10104"))=searchableAttributes:(("Metal%3DTri+color"))=brand:("Black+Hills+Gold")=discount:("70")=primaryCategory:("10104_3_Jewelry_Diamonds_Rings")=%0a2<-1+5<-2+6<-50%25%0a=1=%0a+primaryLnames^5.0+partnumber^11.0+itemnumber^11.0+fullmfpartno^5.0+mfpartno^5.0+xref^10.0+storeOriginSearchable^3.0+nameSearchable^10.0+brandSearchable^5.0++searchPhrase^1.0++searchableAttributesSearchable^1.0%0a=javabin=0=%0a+++primaryLnames^0.5+nameSearchable^1.0+storeOriginSearchable^0.3+brandSearchable^0.5++xref^1.1+searchableAttributesSearchable^0.1%0a=516=0=white+diamonds+diamonds+elizabeth+taylor+body+lotion=true=price_10151_f=true=100}
 hits=0 status=0 QTime=0


15:07:56,758 ERROR [org.apache.solr.handler.RequestHandlerBase] 
(http-/10.235.43.43:8580-26) java.lang.NullPointerException
at 
org.apache.solr.search.join.BlockJoinFacetCollector.incrementFacets(BlockJoinFacetCollector.java:100)
at 
org.apache.solr.search.join.BlockJoinFacetCollector.collect(BlockJoinFacetCollector.java:87)
at 
org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:1153)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:350)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)

{code}







[jira] [Commented] (SOLR-8659) Improve Solr JDBC Driver to support more SQL Clients

2016-03-11 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191214#comment-15191214
 ] 

Kevin Risden commented on SOLR-8659:


[~joel.bernstein] - It would be great to get the following 3 JIRAs into Solr 6:
* SOLR-8819
* SOLR-8809
* SOLR-8810

> Improve Solr JDBC Driver to support more SQL Clients
> 
>
> Key: SOLR-8659
> URL: https://issues.apache.org/jira/browse/SOLR-8659
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Attachments: 
> iODBC_Demo__Unicode__-_Connected_to__remotesolr__and_Attach_screenshot_-_ASF_JIRA.png
>
>
> SOLR-8502 was a great start to getting JDBC support to be more complete. This 
> ticket is to track items that could further improve the JDBC support for more 
> SQL clients and their features. A few SQL clients are:
> * DbVisualizer
> * SquirrelSQL
> * Apache Zeppelin (incubating)
> * IntelliJ IDEA Database Tool
> * ODBC clients like Excel/Tableau






[jira] [Updated] (SOLR-8832) Reproducible DaemonStream test failures

2016-03-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8832:
-
Attachment: SOLR-8832.patch

> Reproducible DaemonStream test failures
> ---
>
> Key: SOLR-8832
> URL: https://issues.apache.org/jira/browse/SOLR-8832
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8832.patch
>
>
> The following test fails every time:
> ant test  -Dtestcase=StreamExpressionTest -Dtests.method=testAll 
> -Dtests.seed=A8E5206069146FC0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=lv-LV -Dtests.timezone=America/Iqaluit -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1






Re: [More Like This] Query building

2016-03-11 Thread Alessandro Benedetti
I am starting to feel that it is not that easy to contribute improvements or
small fixes to Solr (if they are not super interesting to the masses).
I think this one could be a good improvement to the MLT, but I would love to
discuss it with a committer.
The patch has been attached for months now...
Any feedback would be appreciated; I want to contribute, but I need some
second opinions...

Cheers

On 11 February 2016 at 13:48, Alessandro Benedetti 
wrote:

> Hi Guys,
> is it possible to have any feedback ?
> Is there any process to speed up bug resolution / discussions ?
> just want to understand if the patch is not good enough, if I need to
> improve it, or if simply no one took a look ...
>
> https://issues.apache.org/jira/browse/LUCENE-6954
>
> Cheers
>
> On 11 January 2016 at 15:25, Alessandro Benedetti 
> wrote:
>
>> Hi guys,
>> the patch seems fine to me.
>> I didn't spend much more time on the code but I checked the tests and the
>> pre-commit checks.
>> It seems fine to me.
>> Let me know ,
>>
>> Cheers
>>
>> On 31 December 2015 at 18:40, Alessandro Benedetti > > wrote:
>>
>>> https://issues.apache.org/jira/browse/LUCENE-6954
>>>
>>> First draft patch available, I will check better the tests new year !
>>>
>>> On 29 December 2015 at 13:43, Alessandro Benedetti <
>>> abenede...@apache.org> wrote:
>>>
 Sure, I will proceed tomorrow with the Jira and the simple patch +
 tests.

 In the meantime let's try to collect some additional feedback.

 Cheers

 On 29 December 2015 at 12:43, Anshum Gupta 
 wrote:

> Feel free to create a JIRA and put up a patch if you can.
>
> On Tue, Dec 29, 2015 at 4:26 PM, Alessandro Benedetti <
> abenede...@apache.org
> > wrote:
>
> > Hi guys,
> > While I was exploring the way we build the More Like This query, I
> > discovered a part I am not convinced of :
> >
> >
> >
> > Let's see how we build the query :
> > org.apache.lucene.queries.mlt.MoreLikeThis#retrieveTerms(int)
> >
> > 1) we extract the terms from the interesting fields, adding them to
> a map :
> >
> > Map termFreqMap = new HashMap<>();
> >
> > *(we lose the field -> term relation; we no longer know which field
> > the term came from!)*
> >
> > org.apache.lucene.queries.mlt.MoreLikeThis#createQueue
> >
> > 2) we build the queue that will contain the query terms; at this
> > point we connect these terms back to some field, but:
> >
> > ...
> >> // go through all the fields and find the largest document frequency
> >> String topField = fieldNames[0];
> >> int docFreq = 0;
> >> for (String fieldName : fieldNames) {
> >>   int freq = ir.docFreq(new Term(fieldName, word));
> >>   topField = (freq > docFreq) ? fieldName : topField;
> >>   docFreq = (freq > docFreq) ? freq : docFreq;
> >> }
> >> ...
> >
> >
> > We identify the topField as the field with the highest document
> frequency
> > for the term t .
> > Then we build the termQuery :
> >
> > queue.add(new ScoreTerm(word, *topField*, score, idf, docFreq, tf));
> >
> > In this way we lose a lot of precision.
> > Not sure why we do that.
> > I would prefer to keep the relation between terms and fields.
> > This could improve the quality of the MLT query a lot.
> > If I run the MLT on 2 fields, *description* and *facilities* for
> > example, it is likely I want to find documents with similar terms in the
> > description and similar terms in the facilities, without mixing
> > things up and losing the semantics of the terms.
> >
> > Let me know your opinion,
> >
> > Cheers
> >
> >
> > --
> > --
> >
> > Benedetti Alessandro
> > Visiting card : http://about.me/alessandro_benedetti
> >
> > "Tyger, tyger burning bright
> > In the forests of the night,
> > What immortal hand or eye
> > Could frame thy fearful symmetry?"
> >
> > William Blake - Songs of Experience -1794 England
> >
>
>
>
> --
> Anshum Gupta
>



 --
 --

 Benedetti Alessandro
 Visiting card : http://about.me/alessandro_benedetti

 "Tyger, tyger burning bright
 In the forests of the night,
 What immortal hand or eye
 Could frame thy fearful symmetry?"

 William Blake - Songs of Experience -1794 England

>>>
>>>
>>>
>>> --
>>> --
>>>
>>> Benedetti Alessandro
>>> Visiting card : http://about.me/alessandro_benedetti
>>>
>>> "Tyger, tyger burning bright
>>> In the forests of the night,
>>> What immortal hand or eye
>>> Could frame thy fearful symmetry?"
>>>
>>> William Blake - Songs of Experience 
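The per-field alternative discussed in this thread can be sketched in plain Java; the class and method names below are illustrative, not the actual MoreLikeThis internals:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: keep a field -> term -> frequency map instead of
// flattening all terms into one map and later guessing a single "topField".
public class PerFieldTermFreqs {
    static Map<String, Map<String, Integer>> termFreqsByField(
            Map<String, List<String>> fieldToTokens) {
        Map<String, Map<String, Integer>> result = new HashMap<>();
        for (Map.Entry<String, List<String>> e : fieldToTokens.entrySet()) {
            Map<String, Integer> freqs = new HashMap<>();
            for (String token : e.getValue()) {
                freqs.merge(token, 1, Integer::sum); // count per field, not globally
            }
            result.put(e.getKey(), freqs);
        }
        return result;
    }
}
```

With this shape, each query term can be scored against the field it actually came from, instead of the field with the largest document frequency.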

Re: [Solr Suggester Component] Unique suggestions

2016-03-11 Thread Alessandro Benedetti
No one has responded on this topic, so let me try the solr-user list as well!
Does anyone think there should be a parameter that allows a Suggester to
not return duplicate suggestions?
In my opinion it could be a good improvement!
In the initial patch I was acting at the SolrJ level, but to be honest I think
the enhancement should be done internally in Solr.
Let me know your opinion and I can proceed with a patch.

Cheers
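A minimal sketch of the deduplication behaviour being proposed, in plain Java (this is not the actual Solr Suggester API; names are illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Hypothetical sketch: collapse suggestions that share the same label,
// keeping the first (i.e. highest-ranked) occurrence.
public class DedupSuggestions {
    static List<String> dedup(List<String> rankedLabels) {
        // LinkedHashSet drops later duplicates while preserving rank order
        return new ArrayList<>(new LinkedHashSet<>(rankedLabels));
    }
}
```

Inside Solr this would presumably run over the suggester's result labels before the response is written, gated by the proposed configuration parameter.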

On 16 February 2016 at 16:19, Alessandro Benedetti 
wrote:

> For some time I have wondered why, in a lot of scenarios, we return
> duplicate suggestions.
> Apart from the rare scenarios where someone associates a payload with each
> suggestion, suggestions with the same label are practically
> indistinguishable from a human perspective.
> I would suggest adding a configuration parameter for the component to
> avoid duplicates when the parameter is true.
>
> I submitted a patch for SolrJ but now I think the problem should be solved
> in the Solr core and the possibility of configuring the strategy given to
> the user at Solrconfig level.
> If you agree I will create a new specific Jira issue to solve the problem
> in Solr itself.
>
> SolrJ patch
> https://issues.apache.org/jira/browse/SOLR-8672
>
> Cheers
>
> --
> --
>
> Benedetti Alessandro
> Visiting card : http://about.me/alessandro_benedetti
>
> "Tyger, tyger burning bright
> In the forests of the night,
> What immortal hand or eye
> Could frame thy fearful symmetry?"
>
> William Blake - Songs of Experience -1794 England
>



-- 
--

Benedetti Alessandro
Visiting card : http://about.me/alessandro_benedetti

"Tyger, tyger burning bright
In the forests of the night,
What immortal hand or eye
Could frame thy fearful symmetry?"

William Blake - Songs of Experience -1794 England


[jira] [Issue Comment Deleted] (LUCENE-7097) Can we increase the stack depth before Introsorter switches to heapsort?

2016-03-11 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7097:
-
Comment: was deleted

(was: If we change it to add this 2x factor then maybe we should also take the 
floor of the log2 instead of the ceil to be on par with the paper.)

> Can we increase the stack depth before Introsorter switches to heapsort?
> 
>
> Key: LUCENE-7097
> URL: https://issues.apache.org/jira/browse/LUCENE-7097
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: trunk, 6.1
>
>
> Introsort is a "safe" quicksort: it uses quicksort but detects when an 
> adversary is at work and cuts over to heapsort at that point.
> The description at https://en.wikipedia.org/wiki/Introsort shows the cutover 
> as 2X log_2(N) but our impl ({{IntroSorter}}) currently uses just log_2.
> So I tested using 2X log_2 instead, and I see a decent (~5.6%, from 98.2 sec 
> to 92.7 sec) speedup in the time for offline sorter to sort when doing the 
> force merge of 6.1 LatLonPoints from the London UK benchmark.
> Is there any reason not to switch?  I know this means 2X the stack required, 
> but since this is log_2 space that seems fine?
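The depth-limit change under discussion can be sketched as follows; the method names and the exact ceil/floor rounding are illustrative, not the actual IntroSorter code:

```java
// Hypothetical sketch of the introsort recursion bound: the current
// behaviour described above bounds quicksort depth by log2(n), while the
// proposal (matching the introsort paper) uses 2 * log2(n) before cutting
// over to heapsort.
public class IntrosortDepth {
    static int currentMaxDepth(int n) {   // ceil(log2(n))
        return 32 - Integer.numberOfLeadingZeros(n - 1);
    }
    static int proposedMaxDepth(int n) {  // 2 * floor(log2(n))
        return 2 * (31 - Integer.numberOfLeadingZeros(n));
    }
}
```

Since the bound only grows logarithmically, doubling it roughly doubles worst-case stack depth but stays tiny even for very large n (for n = 2^31 the proposed bound is just 62 levels).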






[jira] [Updated] (SOLR-8819) Implement DatabaseMetaDataImpl getTables() and fix getSchemas()

2016-03-11 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8819:
---
Attachment: SOLR-8819.patch

Improves on [~cahilltr]'s patch. His patch and subsequent manual testing with 
DbVisualizer uncovered an issue with getSchemas. We shouldn't be returning any 
schema information since we don't have the concept of schemas. getTables is 
really what we want and this is now implemented in a TablesStream.

This is a change to what is returned by the JDBC driver. This should be put 
into Solr 6 before the release so we don't have to deal with backwards 
compatibility issues.

> Implement DatabaseMetaDataImpl getTables() and fix getSchemas()
> ---
>
> Key: SOLR-8819
> URL: https://issues.apache.org/jira/browse/SOLR-8819
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Attachments: SOLR-8819.patch, SOLR-8819.patch
>
>
> DbVisualizer NPE when clicking on DB References tab. After connecting, NPE if 
> double click on "DB" under connection name then click on References tab.






[jira] [Commented] (LUCENE-7097) Can we increase the stack depth before Introsorter switches to heapsort?

2016-03-11 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191099#comment-15191099
 ] 

Adrien Grand commented on LUCENE-7097:
--

If we change it to add this 2x factor then maybe we should also take the floor 
of the log2 instead of the ceil to be on par with the paper.

> Can we increase the stack depth before Introsorter switches to heapsort?
> 
>
> Key: LUCENE-7097
> URL: https://issues.apache.org/jira/browse/LUCENE-7097
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: trunk, 6.1
>
>
> Introsort is a "safe" quicksort: it uses quicksort but detects when an 
> adversary is at work and cuts over to heapsort at that point.
> The description at https://en.wikipedia.org/wiki/Introsort shows the cutover 
> as 2X log_2(N) but our impl ({{IntroSorter}}) currently uses just log_2.
> So I tested using 2X log_2 instead, and I see a decent (~5.6%, from 98.2 sec 
> to 92.7 sec) speedup in the time for offline sorter to sort when doing the 
> force merge of 6.1 LatLonPoints from the London UK benchmark.
> Is there any reason not to switch?  I know this means 2X the stack required, 
> but since this is log_2 space that seems fine?






[jira] [Commented] (LUCENE-7091) Add doc values support to MemoryIndex

2016-03-11 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191098#comment-15191098
 ] 

David Smiley commented on LUCENE-7091:
--

bq. Returning the same doc values instance for binary, sorted, number and 
sorted number doc values is fine, but not for sorted set doc values. 

Good point.

Some comments:

* addField: unfortunately this method is very long and it's difficult to 
follow.  I understand that it's not easy to split it up because of the number 
of local variables.  One thing that would help is renaming "longValues" and 
"bytesValuesSet" to make them clearly associated with doc-values.  I suggest 
"dvLongValues" and "dvBytesValuesSet" and add a comment to the former {{//NOT a 
set}}.  Another thing that would help is comments to declare the different 
phases of this method... like definitely before the switch(docValuesType) and 
at other junctures.  But I already see some code duplication in how 
numericProducer & binaryProducer are initialized.  Here's an idea:  Maybe Info 
could be changed to hold this state mutably.  Then, there wouldn't be a long 
stage of pulling out each var from the info only to put it all back again.  If 
this idea is successful, there would be far fewer local variables, and then 
you could easily extract a method to handle the DV stuff and a separate method 
for the Terms stuff.  What do you think?

* instead of freeze() knowing to call both getNormDocValues & prepareDocValues 
(and to sort terms), I suggest that freeze be implemented on each Info where 
those methods can be called there.  I think that's easier to maintain.

... to be continued; I didn't finish reviewing ...

> Add doc values support to MemoryIndex
> -
>
> Key: LUCENE-7091
> URL: https://issues.apache.org/jira/browse/LUCENE-7091
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Martijn van Groningen
> Attachments: LUCENE-7091.patch, LUCENE-7091.patch, LUCENE-7091.patch
>
>
> Sometimes queries executed via the MemoryIndex require certain things to be 
> stored as doc values. Today this isn't possible because the memory index 
> doesn't support this and these queries silently return no results.






[jira] [Updated] (SOLR-8819) Implement DatabaseMetaDataImpl getTables() and fix getSchemas()

2016-03-11 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8819:
---
Summary: Implement DatabaseMetaDataImpl getTables() and fix getSchemas()  
(was: Implement DatabaseMetaDataImpl.getTables())

> Implement DatabaseMetaDataImpl getTables() and fix getSchemas()
> ---
>
> Key: SOLR-8819
> URL: https://issues.apache.org/jira/browse/SOLR-8819
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Attachments: SOLR-8819.patch
>
>
> DbVisualizer NPE when clicking on DB References tab. After connecting, NPE if 
> double click on "DB" under connection name then click on References tab.






[jira] [Commented] (LUCENE-7097) Can we increase the stack depth before Introsorter switches to heapsort?

2016-03-11 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191095#comment-15191095
 ] 

Adrien Grand commented on LUCENE-7097:
--

If we change it to add this 2x factor then maybe we should also take the floor 
of the log2 instead of the ceil to be on par with the paper.

> Can we increase the stack depth before Introsorter switches to heapsort?
> 
>
> Key: LUCENE-7097
> URL: https://issues.apache.org/jira/browse/LUCENE-7097
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: trunk, 6.1
>
>
> Introsort is a "safe" quicksort: it uses quicksort but detects when an 
> adversary is at work and cuts over to heapsort at that point.
> The description at https://en.wikipedia.org/wiki/Introsort shows the cutover 
> as 2X log_2(N) but our impl ({{IntroSorter}}) currently uses just log_2.
> So I tested using 2X log_2 instead, and I see a decent (~5.6%, from 98.2 sec 
> to 92.7 sec) speedup in the time for offline sorter to sort when doing the 
> force merge of 6.1 LatLonPoints from the London UK benchmark.
> Is there any reason not to switch?  I know this means 2X the stack required, 
> but since this is log_2 space that seems fine?






[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 8 - Still Failing

2016-03-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/8/

4 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=154889, 
name=testExecutor-11149-thread-4, state=RUNNABLE, 
group=TGRP-HdfsUnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=154889, name=testExecutor-11149-thread-4, 
state=RUNNABLE, group=TGRP-HdfsUnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:39250
at __randomizedtesting.SeedInfo.seed([BF95851EA151982F]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$3(BasicDistributedZkTest.java:583)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$6(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:39250
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$3(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more


FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload

Error Message:
expected:<[{indexVersion=1457710316338,generation=2,filelist=[_1bo.cfe, 
_1bo.cfs, _1bo.si, _1bp.cfe, _1bp.cfs, _1bp.si, _1bq.cfe, _1bq.cfs, _1bq.si, 
_1br.cfe, _1br.cfs, _1br.si, segments_2]}]> but 
was:<[{indexVersion=1457710316338,generation=2,filelist=[_1bo.cfe, _1bo.cfs, 
_1bo.si, _1bp.cfe, _1bp.cfs, _1bp.si, _1bq.cfe, _1bq.cfs, _1bq.si, _1br.cfe, 
_1br.cfs, _1br.si, segments_2]}, 
{indexVersion=1457710316338,generation=3,filelist=[_1bp.cfe, _1bp.cfs, _1bp.si, 
_1bq.cfe, _1bq.cfs, _1bq.si, _1bs.cfe, _1bs.cfs, _1bs.si, segments_3]}]>

Stack Trace:
java.lang.AssertionError: 
expected:<[{indexVersion=1457710316338,generation=2,filelist=[_1bo.cfe, 
_1bo.cfs, _1bo.si, _1bp.cfe, _1bp.cfs, _1bp.si, _1bq.cfe, _1bq.cfs, _1bq.si, 
_1br.cfe, 

[jira] [Commented] (SOLR-8831) allow _version_ field to be unstored

2016-03-11 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191090#comment-15191090
 ] 

Yonik Seeley commented on SOLR-8831:


bq. they are just two different implementations of the logical concept of 
storing data for later retrieval 

I agree - I've been occasionally using the term "row stored" and "column 
stored".
While we won't be able to totally squash the terms "stored" or "docValues"  
(too much history), in certain contexts it will certainly be easier to use an 
all-encompassing term like "retrievable".  I'll update this patch to reflect 
that unless someone comes up with a better word for it.

> allow _version_ field to be unstored
> 
>
> Key: SOLR-8831
> URL: https://issues.apache.org/jira/browse/SOLR-8831
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
> Attachments: SOLR-8831.patch
>
>
> Right now, one is prohibited from having an unstored _version_ field, even if 
> docValues are enabled.
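For context, a minimal illustration of the schema configuration this issue would permit (a hypothetical schema.xml fragment, not taken from the patch; the field type name is illustrative):

```xml
<!-- _version_ stays retrievable through docValues even with stored="false" -->
<field name="_version_" type="long" indexed="false" stored="false" docValues="true"/>
```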






[jira] [Comment Edited] (SOLR-8831) allow _version_ field to be unstored

2016-03-11 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191075#comment-15191075
 ] 

Jack Krupansky edited comment on SOLR-8831 at 3/11/16 3:49 PM:
---

Can we come up with a nice clean term for "stored or docValues are enabled"?

I mean, the issue title here is misleading, as the description then indicates - 
"if docValues are enabled." So, it should be "allow _version_ field to be 
unstored if docValues are enabled."

Traditional database nomenclature is no help here since the concept of 
non-stored data is meaningless in a true database.

Personally, I'd be happier if Solr hid a lot of the byzantine complexity of 
Lucene, including this odd distinction between stored and docValues. I mean, to 
me they are just two different implementations of the logical concept of 
storing data for later retrieval - how the data is stored rather than whether 
it is stored.

I'll offer two suggested simple terms to be used at the Solr level even if 
Lucene insists on remaining byzantine: "xstored" or "retrievable", both meaning 
that the field attributes make it possible for Solr to retrieve data after 
indexing, either because the field is stored or has docValues enabled. This is 
not a proposal for a feature, but simply terminology to be used to talk about 
fields which are... "either stored or have docValues enabled." (If I wanted a 
feature, it might be to have a new attribute like 
retrieval_storage="\{by_field|by_document|none}" or... 
stored="\{yes|no|docValues|fieldValues}".)

I'm not proposing any feature here since that would be out of the scope of the 
issue, but since this issue needs doc, I am just proposing new terminology for 
that doc.

Again, to summarize more briefly, I am proposing that the terminology of 
"retrievable" be used to refer to fields that are either stored or have 
docValues enabled.



> allow _version_ field to be unstored
> 
>
> Key: SOLR-8831
> URL: https://issues.apache.org/jira/browse/SOLR-8831
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
> Attachments: SOLR-8831.patch
>
>
> Right now, one is prohibited from having an unstored _version_ field, even if 
> docValues are enabled.






[jira] [Commented] (SOLR-8831) allow _version_ field to be unstored

2016-03-11 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191075#comment-15191075
 ] 

Jack Krupansky commented on SOLR-8831:
--

Can we come up with a nice clean term for "stored or docValues are enabled"?

I mean, the issue title here is misleading, as the description then indicates - 
"if docValues are enabled." So, it should be "allow _version_ field to be 
unstored if docValues are enabled."

Traditional database nomenclature is no help here since the concept of 
non-stored data is meaningless in a true database.

Personally, I'd be happier if Solr hid a lot of the byzantine complexity of 
Lucene, including this odd distinction between stored and docValues. I mean, to 
me they are just two different implementations of the logical concept of 
storing data for later retrieval - how the data is stored rather than whether 
it is stored.

I'll offer two suggested simple terms to be used at the Solr level even if 
Lucene insists on remaining byzantine: "xstored" or "retrievable", both meaning 
that the field attributes make it possible for Solr to retrieve data after 
indexing, either because the field is stored or has docValues enabled. This is 
not a proposal for a feature, but simply terminology to be used to talk about 
fields which are... "either stored or have docValues enabled." (If I wanted a 
feature, it might be to have a new attribute like 
retrieval_storage="{by_field|by_document|none}" or... 
stored="{yes|no|docValues|fieldValues}".)

I'm not proposing any feature here since that would be out of the scope of the 
issue, but since this issue needs doc, I am just proposing new terminology for 
that doc.

Again, to summarize more briefly, I am proposing that the terminology of 
"retrievable" be used to refer to fields that are either stored or have 
docValues enabled.

> allow _version_ field to be unstored
> 
>
> Key: SOLR-8831
> URL: https://issues.apache.org/jira/browse/SOLR-8831
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
> Attachments: SOLR-8831.patch
>
>
> Right now, one is prohibited from having an unstored _version_ field, even if 
> docValues are enabled.






[jira] [Created] (LUCENE-7097) Can we increase the stack depth before Introsorter switches to heapsort?

2016-03-11 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-7097:
--

 Summary: Can we increase the stack depth before Introsorter 
switches to heapsort?
 Key: LUCENE-7097
 URL: https://issues.apache.org/jira/browse/LUCENE-7097
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: trunk, 6.1


Introsort is a "safe" quicksort: it uses quicksort but detects when an 
adversary is at work and cuts over to heapsort at that point.

The description at https://en.wikipedia.org/wiki/Introsort shows the cutover as 
2X log_2(N) but our impl ({{IntroSorter}}) currently uses just log_2.

So I tested using 2X log_2 instead, and I see a decent (~5.6%, from 98.2 sec to 
92.7 sec) speedup in the time for offline sorter to sort when doing the force 
merge of 6.1 LatLonPoints from the London UK benchmark.

Is there any reason not to switch?  I know this means 2X the stack required, 
but since this is log_2 space that seems fine?
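The cutoff being discussed can be sketched as follows (a minimal illustration of the depth-limit computation, not Lucene's actual {{IntroSorter}} code; the class and method names are mine):

```java
// Sketch: recursion-depth cutoff for an introsort-style sorter.
// Quicksort recurses until this depth is exceeded, then the sorter
// falls back to heapsort to defeat adversarial inputs.
public class IntroDepth {

    // floor(log2(n)): the position of the highest set bit
    static int log2Floor(int n) {
        return 31 - Integer.numberOfLeadingZeros(n);
    }

    // The proposed cutoff: 2 * log2(n) instead of the current log2(n).
    static int maxDepth(int n) {
        return n <= 1 ? 0 : 2 * log2Floor(n);
    }

    public static void main(String[] args) {
        // For 1M entries the quicksort recursion may go 38 levels deep
        // before the heapsort fallback kicks in.
        System.out.println(maxDepth(1_000_000)); // prints 38
    }
}
```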







[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+107) - Build # 96 - Still Failing!

2016-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/96/
Java: 64bit/jdk-9-ea+107 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.lucene.index.TestTermsEnum2.testIntersect

Error Message:
source=3 is out of bounds (maxState is 2)

Stack Trace:
java.lang.IllegalArgumentException: source=3 is out of bounds (maxState is 2)
at 
__randomizedtesting.SeedInfo.seed([993D6A7FCE6E6AC5:D46826A12DF7541B]:0)
at 
org.apache.lucene.util.automaton.Automaton.addTransition(Automaton.java:165)
at 
org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:245)
at 
org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:537)
at org.apache.lucene.util.automaton.RegExp.findLeaves(RegExp.java:617)
at org.apache.lucene.util.automaton.RegExp.findLeaves(RegExp.java:614)
at org.apache.lucene.util.automaton.RegExp.findLeaves(RegExp.java:614)
at 
org.apache.lucene.util.automaton.RegExp.toAutomatonInternal(RegExp.java:521)
at org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:495)
at org.apache.lucene.util.automaton.RegExp.toAutomaton(RegExp.java:426)
at 
org.apache.lucene.index.TestTermsEnum2.testIntersect(TestTermsEnum2.java:162)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[jira] [Commented] (LUCENE-6966) Contribution: Codec for index-level encryption

2016-03-11 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191059#comment-15191059
 ] 

Thomas Mueller commented on LUCENE-6966:


The approach taken in LUCENE-2228 sounds sensible to me: "AESDirectory extends 
FSDirectory". Even though the patch would need to be improved: nowadays XTS 
should be used.
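The per-block model described in the issue (AES/CBC with padding, one key version per segment) can be sketched with the JDK crypto API. This is a hedged illustration only: the zero-filled key and IV are placeholders, and the actual codec derives both from its cipher factory and key version.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class CbcBlockSketch {

    // Encrypt one data block (e.g. a ~4kb compressed stored-fields block)
    // with AES/CBC and PKCS5 padding, as the issue describes.
    static byte[] encryptBlock(byte[] key, byte[] iv, byte[] plaintext) {
        try {
            Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE,
                   new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
            return c.doFinal(plaintext);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] block = "a compressed stored-fields block"
                .getBytes(StandardCharsets.UTF_8);
        // PKCS5 padding rounds the ciphertext up to the 16-byte AES block size.
        System.out.println(encryptBlock(new byte[16], new byte[16], block).length);
    }
}
```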

> Contribution: Codec for index-level encryption
> --
>
> Key: LUCENE-6966
> URL: https://issues.apache.org/jira/browse/LUCENE-6966
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/other
>Reporter: Renaud Delbru
>  Labels: codec, contrib
>
> We would like to contribute a codec that enables the encryption of sensitive 
> data in the index that has been developed as part of an engagement with a 
> customer. We think that this could be of interest to the community.
> Below is a description of the project.
> h1. Introduction
> In comparison with approaches where all data is encrypted (e.g., file system 
> encryption, index output / directory encryption), encryption at a codec level 
> enables more fine-grained control over which blocks of data are encrypted. This 
> is more efficient since less data has to be encrypted. It also gives more 
> flexibility, such as the ability to select which fields to encrypt.
> Some of the requirements for this project were:
> * The performance impact of the encryption should be reasonable.
> * The user can choose which field to encrypt.
> * Key management: During the life cycle of the index, the user can provide a 
> new version of his encryption key. Multiple key versions should co-exist in 
> one index.
> h1. What is supported?
> - Block tree terms index and dictionary
> - Compressed stored fields format
> - Compressed term vectors format
> - Doc values format (prototype based on an encrypted index output) - this 
> will be submitted as a separated patch
> - Index upgrader: command to upgrade all the index segments with the latest 
> key version available.
> h1. How is it implemented?
> h2. Key Management
> One index segment is encrypted with a single key version. An index can have 
> multiple segments, each one encrypted using a different key version. The key 
> version for a segment is stored in the segment info.
> The provided codec is abstract, and a subclass is responsible for providing an 
> implementation of the cipher factory. The cipher factory is responsible for 
> creating a cipher instance based on a given key version.
> h2. Encryption Model
> The encryption model is based on AES/CBC with padding. The initialisation 
> vector (IV) is reused for performance reasons, but only on a per-format and 
> per-segment basis.
> While IV reuse is usually considered bad practice, the CBC mode is somewhat 
> resilient to it. The only "leak" of information this could lead to 
> is being able to know that two encrypted blocks of data start with the same 
> prefix. However, it is unlikely that two data blocks in an index segment will 
> start with the same data:
> - Stored Fields Format: Each encrypted data block is a compressed block 
> (~4kb) of one or more documents. It is unlikely that two compressed blocks 
> start with the same data prefix.
> - Term Vectors: Each encrypted data block is a compressed block (~4kb) of 
> terms and payloads from one or more documents. It is unlikely that two 
> compressed blocks start with the same data prefix.
> - Term Dictionary Index: The term dictionary index is encoded and encrypted 
> in one single data block.
> - Term Dictionary Data: Each data block of the term dictionary encodes a set 
> of suffixes. It is unlikely to have two dictionary data blocks sharing the 
> same prefix within the same segment.
> - DocValues: A DocValues file will be composed of multiple encrypted data 
> blocks. It is unlikely to have two data blocks sharing the same prefix within 
> the same segment (each one encodes a list of values associated with a 
> field).
> To the best of our knowledge, this model should be safe. However, it would be 
> good if someone with security expertise in the community could review and 
> validate it. 
> h1. Performance
> We report here a performance benchmark we did on an early prototype based on 
> Lucene 4.x. The benchmark was performed on the Wikipedia dataset where all 
> the fields (id, title, body, date) were encrypted. Only the block tree terms 
> and compressed stored fields format were tested at that time. 
> h2. Indexing
> The indexing throughput decreased slightly and is roughly 15% lower than with 
> base Lucene. 
> The merge time increased by roughly 35%.
> There was no significant difference in terms of index size.
> h2. Query Throughput
> With respect to query throughput, we observed no significant impact on the 
> following queries: 

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+107) - Build # 16179 - Failure!

2016-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16179/
Java: 32bit/jdk-9-ea+107 -server -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=344, 
name=testExecutor-150-thread-6, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=344, name=testExecutor-150-thread-6, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:44551
at __randomizedtesting.SeedInfo.seed([B7620DB66155B62]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1158)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
at java.lang.Thread.run(Thread.java:804)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:44551
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more


FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":null}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val' for path 'x' 
full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":null}
at 
__randomizedtesting.SeedInfo.seed([B7620DB66155B62:D33B0D8C91C8FEC2]:0)
at 

[jira] [Commented] (LUCENE-7084) fail precommit on comparingIdentical

2016-03-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190993#comment-15190993
 ] 

Michael McCandless commented on LUCENE-7084:


[~cpoerschke] can this be resolved now?

> fail precommit on comparingIdentical
> 
>
> Key: LUCENE-7084
> URL: https://issues.apache.org/jira/browse/LUCENE-7084
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master, 6.1
>
> Attachments: LUCENE-7084.patch
>
>
> I learnt about the 
> [ecj.javadocs.prefs|https://github.com/apache/lucene-solr/blob/master/lucene/tools/javadoc/ecj.javadocs.prefs]
>  via LUCENE-7077 earlier today. This ticket proposes to make 
> {{org.eclipse.jdt.core.compiler.problem.comparingIdentical}} an error also, 
> this would require replacing one assert in the 
> {{SingletonSortedSetDocValues}} constructor with an equivalent test.






[jira] [Commented] (SOLR-8819) Implement DatabaseMetaDataImpl.getTables()

2016-03-11 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190990#comment-15190990
 ] 

Kevin Risden commented on SOLR-8819:


Since DatabaseMetaDataImpl.getTables isn't fully implemented, it looks like it 
causes the following errors in DbVisualizer when clicking on the table-looking 
icon under the table name. We should probably implement all of the return 
columns for DatabaseMetaDataImpl.getTables instead of just the two currently in 
the patch.
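For reference, the JDBC javadoc specifies ten columns for {{DatabaseMetaData.getTables}}, and DbVisualizer reads them by index, which is why a two-column result set produces the IndexOutOfBoundsExceptions in the log below. A small sketch of the full column list (the class name is mine):

```java
import java.util.List;

public class GetTablesColumns {

    // Column layout specified by the JDBC javadoc for DatabaseMetaData.getTables().
    static final List<String> COLUMNS = List.of(
            "TABLE_CAT", "TABLE_SCHEM", "TABLE_NAME", "TABLE_TYPE", "REMARKS",
            "TYPE_CAT", "TYPE_SCHEM", "TYPE_NAME",
            "SELF_REFERENCING_COL_NAME", "REF_GENERATION");

    public static void main(String[] args) {
        System.out.println(COLUMNS.size()); // prints 10
    }
}
```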

{code}
2016-03-11 08:24:34.821 FINE   870 [pool-3-thread-2 - E.ᅣチ] RootConnection: 
DatabaseMetaDataImpl.getTables("localhost:9983", "test", "%", null)
2016-03-11 08:24:34.843 FINE   870 [ExecutorRunner-pool-2-thread-1 - E.ᅣツ] 
getting column 1 (java.lang.String) 'TABLE_SCHEM' using getString()
2016-03-11 08:24:34.843 FINE   870 [ExecutorRunner-pool-2-thread-1 - E.ᅣツ] 
getting column 2 (java.lang.String) 'TABLE_CATALOG' using getString()
2016-03-11 08:24:34.844 FINE   870 [ExecutorRunner-pool-2-thread-1 - 
Z.processResultSet] Fetched Rows: 1 Columns: 2 Exec: 0.022 Fetch: 0.000 sec
2016-03-11 08:24:34.847 WARN   870 [ExecutorRunner-pool-2-thread-1 - 
C.getValueAt] IndexOutOfBoundsException: row=0 column=2 rowCount=1 columnCount=2
java.lang.IndexOutOfBoundsException: Index: 2, Size: 2
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at com.onseven.dbvis.K.A.C.getValueAt(Z:2606)
at com.onseven.dbvis.K.B.B.ᅥネ(Z:61)
at com.onseven.dbvis.n.I.K.ᅣチ(Z:2413)
at com.onseven.dbvis.n.I.K.ᅣチ(Z:240)
at com.onseven.dbvis.n.I.K.ᅣチ(Z:2852)
at com.onseven.dbvis.n.I.K.execute(Z:659)
at com.onseven.dbvis.K.B.Z.ᅣチ(Z:2285)
at com.onseven.dbvis.K.B.L.ᅣツ(Z:1374)
at com.onseven.dbvis.K.B.L.doInBackground(Z:1521)
at javax.swing.SwingWorker$1.call(SwingWorker.java:295)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at javax.swing.SwingWorker.run(SwingWorker.java:334)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-03-11 08:24:34.850 WARN   870 [ExecutorRunner-pool-2-thread-1 - 
C.getValueAt] IndexOutOfBoundsException: row=0 column=3 rowCount=1 columnCount=2
java.lang.IndexOutOfBoundsException: Index: 3, Size: 2
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at com.onseven.dbvis.K.A.C.getValueAt(Z:2606)
at com.onseven.dbvis.K.B.B.ᅥネ(Z:61)
at com.onseven.dbvis.n.I.K.ᅣチ(Z:2413)
at com.onseven.dbvis.n.I.K.ᅣチ(Z:240)
at com.onseven.dbvis.n.I.K.ᅣチ(Z:2852)
at com.onseven.dbvis.n.I.K.execute(Z:659)
at com.onseven.dbvis.K.B.Z.ᅣチ(Z:2285)
at com.onseven.dbvis.K.B.L.ᅣツ(Z:1374)
at com.onseven.dbvis.K.B.L.doInBackground(Z:1521)
at javax.swing.SwingWorker$1.call(SwingWorker.java:295)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at javax.swing.SwingWorker.run(SwingWorker.java:334)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-03-11 08:24:34.853 WARN   870 [ExecutorRunner-pool-2-thread-1 - 
C.getValueAt] IndexOutOfBoundsException: row=0 column=4 rowCount=1 columnCount=2
java.lang.IndexOutOfBoundsException: Index: 4, Size: 2
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at com.onseven.dbvis.K.A.C.getValueAt(Z:2606)
at com.onseven.dbvis.K.B.B.ᅥネ(Z:61)
at com.onseven.dbvis.n.I.K.ᅣチ(Z:2413)
at com.onseven.dbvis.n.I.K.ᅣチ(Z:240)
at com.onseven.dbvis.n.I.K.ᅣチ(Z:2852)
at com.onseven.dbvis.n.I.K.execute(Z:659)
at com.onseven.dbvis.K.B.Z.ᅣチ(Z:2285)
at com.onseven.dbvis.K.B.L.ᅣツ(Z:1374)
at com.onseven.dbvis.K.B.L.doInBackground(Z:1521)
at javax.swing.SwingWorker$1.call(SwingWorker.java:295)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at javax.swing.SwingWorker.run(SwingWorker.java:334)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-03-11 08:24:34.855 WARN   870 [ExecutorRunner-pool-2-thread-1 - 
C.getValueAt] IndexOutOfBoundsException: row=0 column=3 rowCount=1 columnCount=3
java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
{code}
[jira] [Closed] (SOLR-8820) SolrJ JDBC - DbVisualizer DB -> Table -> Tables tab ResultSet is empty

2016-03-11 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden closed SOLR-8820.
--
Resolution: Duplicate

The fix for SOLR-8819 resolves this issue.

> SolrJ JDBC - DbVisualizer DB -> Table -> Tables tab ResultSet is empty
> --
>
> Key: SOLR-8820
> URL: https://issues.apache.org/jira/browse/SOLR-8820
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
>
> After connecting, ResultSet is empty if double click on "Table" under "DB" 
> then click on Tables tab.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-8821) SolrJ JDBC - DbVisualizer DB -> DB -> References tab NPE

2016-03-11 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden closed SOLR-8821.
--

The fix for SOLR-8819 resolves this issue.

> SolrJ JDBC - DbVisualizer DB -> DB -> References tab NPE
> 
>
> Key: SOLR-8821
> URL: https://issues.apache.org/jira/browse/SOLR-8821
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
>
> After connecting, NPE if double click on "DB" under "DB" then click on 
> References tab.






[jira] [Closed] (SOLR-8818) SolrJ JDBC - DbVisualizer DB Tables ResultSet is empty

2016-03-11 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden closed SOLR-8818.
--
Resolution: Duplicate

The fix for SOLR-8819 resolves this issue.

> SolrJ JDBC - DbVisualizer DB Tables ResultSet is empty
> --
>
> Key: SOLR-8818
> URL: https://issues.apache.org/jira/browse/SOLR-8818
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
>
> After connecting, ResultSet is empty if double click on "DB" under connection 
> name then click on Tables tab.






[jira] [Commented] (LUCENE-7092) Point range factory methods for excluded bounds

2016-03-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190975#comment-15190975
 ] 

Robert Muir commented on LUCENE-7092:
-

I still think it's OK to provide deprecated methods to do the conversion in the 
meantime, as mentioned. We already explain how to do these things in the 
javadocs.

But the previous booleans and nulls damaged the API beyond repair, especially 
in the multi-dimensional case (boolean arrays). The previous "format" was also 
ambiguous, e.g. you could provide open+exclusive and other crazy combinations.

With the current API, for example, it's just:
{code}
IntPoint.newRangeQuery(Integer.MIN_VALUE, max); // open
IntPoint.newRangeQuery(min, max-1); // exclusive
FloatPoint.newRangeQuery(min, Math.nextDown(max)); // exclusive
FloatPoint.newRangeQuery(Float.NEGATIVE_INFINITY, max); // open
{code}


> Point range factory methods for excluded bounds
> ---
>
> Key: LUCENE-7092
> URL: https://issues.apache.org/jira/browse/LUCENE-7092
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> I am playing with using the new points API with elasticsearch and one 
> challenge is to generate range queries whose bounds are excluded, which is 
> something that was very easy with the previous numerics implementation. It is 
> easy to do externally with ints, but becomes tricky with floats or ip 
> addresses. Maybe we should have factory methods that take 2 additional 
> booleans to allow the bounds to be excluded?






[jira] [Commented] (SOLR-8790) Add node name back to the core level responses in OverseerMessageHandler

2016-03-11 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190971#comment-15190971
 ] 

Nicholas Knize commented on SOLR-8790:
--

[~anshumg] I'm okay with this making it in 6_0.

> Add node name back to the core level responses in OverseerMessageHandler
> 
>
> Key: SOLR-8790
> URL: https://issues.apache.org/jira/browse/SOLR-8790
> Project: Solr
>  Issue Type: Bug
>Reporter: Anshum Gupta
> Attachments: SOLR-8790-followup.patch, SOLR-8790.patch
>
>
> Continuing from SOLR-8789, now that this test runs, time to fix it.






RE: Speculating about the removal of the standalone Solr mode

2016-03-11 Thread Dyer, James
I would think it unfortunate if this ever happens.  Solr in non-cloud mode is 
simple, easy to understand, and has few moving parts.  Many installations do 
not need sharding, real-time updates, etc.  Using the replication handler in 
"legacy mode" works great for us.  The config files are on the filesystem.  You 
need not learn a CLI to interact with ZooKeeper, etc.  I would be scared to 
death running cloud mode in production if I didn't first obtain an in-depth 
understanding of ZooKeeper internals.

I could see it if there were a huge burden imposed here and almost all use 
cases required cloud.  But as for "API consolidation", there are few APIs you 
need to learn if running non-cloud.  So what stops us from focusing APIs on the 
needs of cloud installations?  And the documentation for non-cloud ought to be 
simple to maintain; there's so much less to learn and know.

For those of you who work as consultants or for support providers, it may seem 
that everyone is running cloud mode.  But my guess is that those who run cloud 
mode are the ones who cannot get by without your services.

James Dyer
Ingram Content Group

-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org] 
Sent: Wednesday, March 09, 2016 11:34 AM
To: dev@lucene.apache.org
Subject: Speculating about the removal of the standalone Solr mode

I've been thinking about the fact that standalone and cloud modes in
Solr are very different.

The writing on the wall suggests that Solr will eventually (probably 7.0
minimum) eliminate the standalone mode and always operate with
zookeeper.  A "standalone" node would in fact be a single-node cloud
running the embedded zookeeper.

Once zk-as-truth becomes a reality, I can see a few advantages to always
running in cloud mode.  The documentation can include one way to
accomplish basic tasks.  The CoreAdmin API can be eliminated, and any
required functionality fully merged into the Collections API. 
CloudSolrClient will work for all installations.  A script that works
for cloud mode will also work for standalone mode, because that's just a
smaller cloud.

I was planning to open an issue to discuss and implement this.  If
that's not a good idea, please let me know.

None of my main Solr installations are running in cloud mode, so the
removal of standalone mode will be an inconvenience for me, but I still
think it's the right thing to do in the long term.

Thanks,
Shawn





[jira] [Closed] (LUCENE-7092) Point range factory methods for excluded bounds

2016-03-11 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand closed LUCENE-7092.


OK. Fair enough.

> Point range factory methods for excluded bounds
> ---
>
> Key: LUCENE-7092
> URL: https://issues.apache.org/jira/browse/LUCENE-7092
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> I am playing with using the new points API with elasticsearch and one 
> challenge is to generate range queries whose bounds are excluded, which is 
> something that was very easy with the previous numerics implementation. It is 
> easy to do externally with ints, but becomes tricky with floats or ip 
> addresses. Maybe we should have factory methods that take 2 additional 
> booleans to allow the bounds to be excluded?






[jira] [Closed] (SOLR-8833) Is there anyway that I can rebalance leader to different hosts?

2016-03-11 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey closed SOLR-8833.
--
Resolution: Invalid

Support requests belong on the mailing list.  Jira is for bug reports, and we 
like to confirm them on the mailing list or IRC channel before people create 
them, because sometimes the "bugs" people see are incorrect usage or incorrect 
configuration.

SolrCloud does have the ability to rebalance leaders.  It was added in one of 
the 5.x releases, but I'm not sure which one.  Here's some documentation:

https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-RebalanceLeaders

Note that the online reference guide that I have linked above currently targets 
version 6.0 (unreleased), but I believe *this* particular functionality hasn't 
changed recently.  If you want to be absolutely certain that you are reading 
docs for your specific version, there are PDF releases of the reference guide 
for earlier versions that you can find on the Solr website.

http://lucene.apache.org/solr/resources.html
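A minimal invocation of that Collections API action might look like the following sketch (host, port, and the collection name "mycollection" are placeholders; check the reference guide for the parameters your version supports):

```shell
# Illustrative sketch: ask SolrCloud to rebalance shard leaders for a
# collection via the Collections API REBALANCELEADERS action.
# "localhost:8983" and "mycollection" are placeholders.
SOLR_URL="http://localhost:8983/solr"
ACTION="admin/collections?action=REBALANCELEADERS&collection=mycollection&wt=json"
echo "Would call: ${SOLR_URL}/${ACTION}"
# curl "${SOLR_URL}/${ACTION}"   # uncomment against a live cluster
```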


> Is there anyway that I can rebalance leader to different hosts?
> ---
>
> Key: SOLR-8833
> URL: https://issues.apache.org/jira/browse/SOLR-8833
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4
>Reporter: YuxuanWang
>  Labels: leader, shard
>
> I deployed SolrCloud on two hosts, with 2 shards and 2 replicas. 
> The problem is that the leaders of the two shards are always elected on the same host. 
> This puts all of the write load on the chosen one.
> Is there any way that I can rebalance leaders to different hosts?
> Thank you very much!






[jira] [Updated] (SOLR-8819) Implement DatabaseMetaDataImpl.getTables()

2016-03-11 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8819:
---
Description: DbVisualizer NPE when clicking on DB References tab. After 
connecting, NPE if double click on "DB" under connection name then click on 
References tab.  (was: After connecting, NPE if double click on "DB" under 
connection name then click on References tab.)

> Implement DatabaseMetaDataImpl.getTables()
> --
>
> Key: SOLR-8819
> URL: https://issues.apache.org/jira/browse/SOLR-8819
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Attachments: SOLR-8819.patch
>
>
> DbVisualizer NPE when clicking on DB References tab. After connecting, NPE if 
> double click on "DB" under connection name then click on References tab.






[jira] [Updated] (SOLR-8819) Implement DatabaseMetaDataImpl.getTables()

2016-03-11 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8819:
---
Summary: Implement DatabaseMetaDataImpl.getTables()  (was: SolrJ JDBC - 
DbVisualizer NPE when clicking on DB References tab)

> Implement DatabaseMetaDataImpl.getTables()
> --
>
> Key: SOLR-8819
> URL: https://issues.apache.org/jira/browse/SOLR-8819
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Attachments: SOLR-8819.patch
>
>
> After connecting, NPE if double click on "DB" under connection name then 
> click on References tab.






[jira] [Commented] (LUCENE-7092) Point range factory methods for excluded bounds

2016-03-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190932#comment-15190932
 ] 

Robert Muir commented on LUCENE-7092:
-

This explanation doesn't make sense. The range facets in Lucene are inclusive 
too. This just sounds like more brain damage from the past.

Honestly, I don't see what's difficult about this. You just add/subtract 1 to 
make something exclusive. If it's float/double, use Math.nextUp/nextDown.
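The conversion described above can be sketched as follows (the values are illustrative, not from any Lucene API):

```java
// Sketch of converting exclusive bounds to inclusive ones by moving
// each bound one representable value inward. Values are illustrative.
public class ExclusiveToInclusive {
    public static void main(String[] args) {
        // int range (min, max) exclusive -> [min + 1, max - 1] inclusive
        int min = 5, max = 10;
        int incLower = min + 1; // 6
        int incUpper = max - 1; // 9

        // double upper bound "< 1.0" -> "<= Math.nextDown(1.0)"
        double upper = Math.nextDown(1.0);

        System.out.println(incLower + " " + incUpper); // prints "6 9"
        System.out.println(upper < 1.0);               // prints "true"
    }
}
```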

> Point range factory methods for excluded bounds
> ---
>
> Key: LUCENE-7092
> URL: https://issues.apache.org/jira/browse/LUCENE-7092
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> I am playing with using the new points API with elasticsearch and one 
> challenge is to generate range queries whose bounds are excluded, which is 
> something that was very easy with the previous numerics implementation. It is 
> easy to do externally with ints, but becomes tricky with floats or ip 
> addresses. Maybe we should have factory methods that take 2 additional 
> booleans to allow the bounds to be excluded?






[jira] [Resolved] (SOLR-8730) Experimental UI, the hl.fl is not properly set doing queries

2016-03-11 Thread Upayavira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upayavira resolved SOLR-8730.
-
   Resolution: Fixed
Fix Version/s: 6.0

Fixed, thanks for the report!

> Experimental UI, the hl.fl is not properly set doing queries
> 
>
> Key: SOLR-8730
> URL: https://issues.apache.org/jira/browse/SOLR-8730
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
> Environment: Debian wheezy x64, 4 processors, 4gb memory, 4 SOLR 
> clouds servers
>Reporter: Jean-Renaud Margelidon
>Assignee: Upayavira
>Priority: Minor
> Fix For: 6.0
>
>
> When using the experimental UI and doing searches on a collection, when 
> populating the hl.fl field, the value is used for fl instead.
> URL generated:
> http://127.0.0.1/solr/collection/select?fl=content=on=on=html=json
> URL Expected:
> http://127.0.0.1/solr/collection/select?hl.fl=content=on=on=html=json






[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+107) - Build # 95 - Still Failing!

2016-03-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/95/
Java: 64bit/jdk-9-ea+107 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.search.TestTermScorer.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([C9DB33D16158B9AB:418F0C0BCFA4D453]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at org.apache.lucene.search.TestTermScorer.test(TestTermScorer.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:804)




Build Log:
[...truncated 541 lines...]
   [junit4] Suite: org.apache.lucene.search.TestTermScorer
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestTermScorer 
-Dtests.method=test -Dtests.seed=C9DB33D16158B9AB -Dtests.multiplier=3 
-Dtests.slow=true -Dtests.locale=nb -Dtests.timezone=NET -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.01s J0 | TestTermScorer.test <<<
   [junit4]> Throwable #1: java.lang.AssertionError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([C9DB33D16158B9AB:418F0C0BCFA4D453]:0)
   [junit4]>at 
org.apache.lucene.search.TestTermScorer.test(TestTermScorer.java:80)
   [junit4]>at 

[jira] [Commented] (LUCENE-7092) Point range factory methods for excluded bounds

2016-03-11 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190924#comment-15190924
 ] 

Adrien Grand commented on LUCENE-7092:
--

One use-case I have in mind is faceting. Say you are computing range facets on 
a numeric field (price, temperature, anything): it is common to have an 
inclusive lower bound and an exclusive upper bound so that buckets are 
exclusive. Then if you want to refine search results for a specific bucket, you 
would have to convert it to a filter, but this would be hard to do today since 
there is no easy way to build a point range query that has an exclusive upper 
bound.

> Point range factory methods for excluded bounds
> ---
>
> Key: LUCENE-7092
> URL: https://issues.apache.org/jira/browse/LUCENE-7092
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> I am playing with using the new points API with elasticsearch and one 
> challenge is to generate range queries whose bounds are excluded, which is 
> something that was very easy with the previous numerics implementation. It is 
> easy to do externally with ints, but becomes tricky with floats or ip 
> addresses. Maybe we should have factory methods that take 2 additional 
> booleans to allow the bounds to be excluded?






[jira] [Commented] (SOLR-8730) Experimental UI, the hl.fl is not properly set doing queries

2016-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190921#comment-15190921
 ] 

ASF subversion and git services commented on SOLR-8730:
---

Commit f0aa4fc15a29b0c9e0ef7cd075e3bf2db48efa46 in lucene-solr's branch 
refs/heads/branch_6_0 from [~upayavira]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f0aa4fc ]

SOLR-8730: Fix highlighting in new UI query pane


> Experimental UI, the hl.fl is not properly set doing queries
> 
>
> Key: SOLR-8730
> URL: https://issues.apache.org/jira/browse/SOLR-8730
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
> Environment: Debian wheezy x64, 4 processors, 4gb memory, 4 SOLR 
> clouds servers
>Reporter: Jean-Renaud Margelidon
>Assignee: Upayavira
>Priority: Minor
>
> When using the experimental UI and doing searches on a collection, when 
> populating the hl.fl field, the value is used for fl instead.
> URL generated:
> http://127.0.0.1/solr/collection/select?fl=content=on=on=html=json
> URL Expected:
> http://127.0.0.1/solr/collection/select?hl.fl=content=on=on=html=json






[jira] [Commented] (SOLR-8730) Experimental UI, the hl.fl is not properly set doing queries

2016-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190919#comment-15190919
 ] 

ASF subversion and git services commented on SOLR-8730:
---

Commit ef916c1e7eab01cb5d43ac3a7146f4cb7f4b9916 in lucene-solr's branch 
refs/heads/branch_6x from [~upayavira]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ef916c1 ]

SOLR-8730: Fix highlighting in new UI query pane


> Experimental UI, the hl.fl is not properly set doing queries
> 
>
> Key: SOLR-8730
> URL: https://issues.apache.org/jira/browse/SOLR-8730
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
> Environment: Debian wheezy x64, 4 processors, 4gb memory, 4 SOLR 
> clouds servers
>Reporter: Jean-Renaud Margelidon
>Assignee: Upayavira
>Priority: Minor
>
> When using the experimental UI and doing searches on a collection, when 
> populating the hl.fl field, the value is used for fl instead.
> URL generated:
> http://127.0.0.1/solr/collection/select?fl=content=on=on=html=json
> URL Expected:
> http://127.0.0.1/solr/collection/select?hl.fl=content=on=on=html=json






[jira] [Commented] (SOLR-8730) Experimental UI, the hl.fl is not properly set doing queries

2016-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190917#comment-15190917
 ] 

ASF subversion and git services commented on SOLR-8730:
---

Commit fe21f7a4c3a135caa39b1e25e640bc28c069b0a6 in lucene-solr's branch 
refs/heads/master from [~upayavira]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fe21f7a ]

SOLR-8730: Fix highlighting in new UI query pane


> Experimental UI, the hl.fl is not properly set doing queries
> 
>
> Key: SOLR-8730
> URL: https://issues.apache.org/jira/browse/SOLR-8730
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
> Environment: Debian wheezy x64, 4 processors, 4gb memory, 4 SOLR 
> clouds servers
>Reporter: Jean-Renaud Margelidon
>Assignee: Upayavira
>Priority: Minor
>
> When using the experimental UI and doing searches on a collection, when 
> populating the hl.fl field, the value is used for fl instead.
> URL generated:
> http://127.0.0.1/solr/collection/select?fl=content=on=on=html=json
> URL Expected:
> http://127.0.0.1/solr/collection/select?hl.fl=content=on=on=html=json






[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2016-03-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190907#comment-15190907
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit b1b1e97cf2e1665cd00d4e655b77216fb0415682 in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b1b1e97 ]

SOLR-8029: bug fixes


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: master
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]






[jira] [Commented] (SOLR-8776) Support RankQuery in grouping

2016-03-11 Thread Diego Ceccarelli (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190864#comment-15190864
 ] 

Diego Ceccarelli commented on SOLR-8776:


I uploaded a new patch; groups are now reranked according to the reranked max 
scores. In the {{finish()}} method of the grouping {{CommandField}} I added:

{code:java}
if (result != null && query instanceof RankQuery && groupSort == Sort.RELEVANCE) {
  // if we are sorting by relevance and the query is a RankQuery, the order
  // of the groups may have changed, so we need to reorder them
  GroupDocs[] groups = result.groups;
  Arrays.sort(groups, new Comparator<GroupDocs>() {
    @Override
    public int compare(GroupDocs o1, GroupDocs o2) {
      if (o1.maxScore > o2.maxScore) return -1;
      if (o1.maxScore < o2.maxScore) return 1;
      return 0;
    }
  });
}
{code}

This reorders the groups when the documents are re-ranked with the rank query. 
The second test now succeeds. 
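
The effect of that comparator can be illustrated outside of Lucene with a 
simplified stand-in for {{GroupDocs}} (the {{Group}} class below is 
hypothetical and models only the {{maxScore}} field the comparator reads):

```java
import java.util.Arrays;
import java.util.Comparator;

public class GroupReorderDemo {
  // Hypothetical stand-in for org.apache.lucene.search.grouping.GroupDocs:
  // only the maxScore field used by the comparator is modeled here.
  static class Group {
    final String name;
    final float maxScore;
    Group(String name, float maxScore) { this.name = name; this.maxScore = maxScore; }
  }

  // Same ordering as the patch: descending by maxScore.
  static void reorder(Group[] groups) {
    Arrays.sort(groups, new Comparator<Group>() {
      @Override
      public int compare(Group o1, Group o2) {
        if (o1.maxScore > o2.maxScore) return -1;
        if (o1.maxScore < o2.maxScore) return 1;
        return 0;
      }
    });
  }

  public static void main(String[] args) {
    // Scores as they might look after reranking changed the group maxima.
    Group[] groups = {
      new Group("a", 1.2f), new Group("b", 3.4f), new Group("c", 2.0f)
    };
    reorder(groups);
    for (Group g : groups) {
      System.out.println(g.name + " " + g.maxScore);
    }
    // groups now come out highest-scoring first: b, c, a
  }
}
```

Ties are left in their existing order, which matches the patch: groups with 
equal max scores keep the relative order they had before the sort.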

I'm still thinking about what the correct semantics for implementing 
reranking + grouping should be: 

When you apply a query {{q}} and then a rank-query {{rq}}, you first score all 
the documents and then rescore the top-N documents with the rank-query. The 
problem with grouping is that in order to get the top groups you first need to 
score the whole collection: you may have a document that scored really low with 
{{q}} but gets a high score with {{rq}}, and the only way to find it is to 
rerank the whole collection (impractical). There are two possible solutions then:
  - if we want to apply {{rq}} to the top 1000 documents, we can collect the 
groups in the top-1000 documents; they will be the same groups obtained by 
scoring directly with {{rq}}, but in a different order;
  - we can collect more groups than we need, and then rerank the top 
documents in each group - I would call this solution *Group Reranking*.

In my opinion group reranking is the better solution: imagine we have a group 
containing the top-1000 documents ranked with {{q}}; we would rerank all of 
them, maybe just to return one document. I guess the best approach would be: 
assuming we want to apply the rerank query to N documents and return the top K 
groups, we can retrieve the top K*y groups and then rerank N/(K*y) documents in 
each group.
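
The budget arithmetic above can be sketched as follows; the helper name 
{{perGroupBudget}} and the concrete numbers are illustrative, not part of the 
patch:

```java
public class RerankBudget {
  // Hypothetical helper: split a total rerank budget of n documents evenly
  // across the k*y collected groups, as sketched in the comment above.
  static int perGroupBudget(int n, int k, int y) {
    return n / (k * y);
  }

  public static void main(String[] args) {
    int n = 1000; // documents we are willing to rerank in total
    int k = 10;   // groups the user asked for
    int y = 2;    // over-collection factor
    int groupsCollected = k * y;                 // 20 groups collected
    int docsPerGroup = perGroupBudget(n, k, y);  // 1000 / 20 = 50 docs each
    System.out.println(groupsCollected + " groups, "
        + docsPerGroup + " docs reranked per group");
  }
}
```

The over-collection factor y trades rerank depth per group against the chance 
of surfacing a group whose {{rq}} score is much higher than its {{q}} score.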



> Support RankQuery in grouping
> -
>
> Key: SOLR-8776
> URL: https://issues.apache.org/jira/browse/SOLR-8776
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: master
>Reporter: Diego Ceccarelli
>Priority: Minor
> Fix For: master
>
> Attachments: 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch
>
>
> Currently it is not possible to use RankQuery [1] and Grouping [2] together 
> (see also [3]). In some situations Grouping can be replaced by Collapse and 
> Expand Results [4] (that supports reranking), but i) collapse cannot 
> guarantee that at least a minimum number of groups will be returned for a 
> query, and ii) in the Solr Cloud setting you will have constraints on how to 
> partition the documents among the shards.
> I'm going to start working on supporting RankQuery in grouping. I'll start by 
> attaching a patch with a test that fails because grouping does not support 
> the rank query, and then I'll try to fix the problem, starting from the 
> non-distributed setting (GroupingSearch).
> My feeling is that since grouping is mostly performed by Lucene, RankQuery 
> should be refactored and moved (or partially moved) there. 
> Any feedback is welcome.
> [1] https://cwiki.apache.org/confluence/display/solr/RankQuery+API 
> [2] https://cwiki.apache.org/confluence/display/solr/Result+Grouping
> [3] 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201507.mbox/%3ccahm-lpuvspest-sw63_8a6gt-wor6ds_t_nb2rope93e4+s...@mail.gmail.com%3E
> [4] 
> https://cwiki.apache.org/confluence/display/solr/Collapse+and+Expand+Results





