[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+147) - Build # 2617 - Unstable!

2017-01-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2617/
Java: 64bit/jdk-9-ea+147 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
PeerSynced node did not become leader expected:<https://127.0.0.1:43839/collection1]> but was:<https://127.0.0.1:36516/collection1]>

Stack Trace:
java.lang.AssertionError: PeerSynced node did not become leader expected:<https://127.0.0.1:43839/collection1]> but was:<https://127.0.0.1:36516/collection1]>
    at __randomizedtesting.SeedInfo.seed([1CBBCE03514C79E1:94EFF1D9FFB01419]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.failNotEquals(Assert.java:647)
    at org.junit.Assert.assertEquals(Assert.java:128)
    at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:157)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:538)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1206 - Still Unstable

2017-01-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1206/

10 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterRestart

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard2

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete @source_collection:shard2
    at __randomizedtesting.SeedInfo.seed([A805E4B311DA048F:F4180332DFFAB0B9]:0)
    at org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
    at org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterRestart(CdcrReplicationDistributedZkTest.java:236)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_112) - Build # 18724 - Unstable!

2017-01-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18724/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1;
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:39961","node_name":"127.0.0.1:39961_","state":"active","leader":"true"}];
clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/30)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "state":"down",
          "base_url":"http://127.0.0.1:39236",
          "core":"c8n_1x3_lf_shard1_replica3",
          "node_name":"127.0.0.1:39236_"},
        "core_node2":{
          "core":"c8n_1x3_lf_shard1_replica2",
          "base_url":"http://127.0.0.1:41282",
          "node_name":"127.0.0.1:41282_",
          "state":"down"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica1",
          "base_url":"http://127.0.0.1:39961",
          "node_name":"127.0.0.1:39961_",
          "state":"active",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 1;
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:39961","node_name":"127.0.0.1:39961_","state":"active","leader":"true"}];
clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/30)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "state":"down",
          "base_url":"http://127.0.0.1:39236",
          "core":"c8n_1x3_lf_shard1_replica3",
          "node_name":"127.0.0.1:39236_"},
        "core_node2":{
          "core":"c8n_1x3_lf_shard1_replica2",
          "base_url":"http://127.0.0.1:41282",
          "node_name":"127.0.0.1:41282_",
          "state":"down"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica1",
          "base_url":"http://127.0.0.1:39961",
          "node_name":"127.0.0.1:39961_",
          "state":"active",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
    at __randomizedtesting.SeedInfo.seed([749AE4B6842C4694:FCCEDB6C2AD02B6C]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.assertTrue(Assert.java:43)
    at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:170)
    at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1064 - Still Unstable!

2017-01-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1064/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxDocs

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
    at __randomizedtesting.SeedInfo.seed([1FBDFA3E765036C4:A63C2CE15ABA324E]:0)
    at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:818)
    at org.apache.solr.update.AutoCommitTest.testMaxDocs(AutoCommitTest.java:225)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//result[@numFound=1]
xml response was:
00
request was: q=id:14&qt=standard&start=0&rows=20&version=2.2
    at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:811)
    ... 40 more


FAILED:  org.apache.solr.update.SolrIndexMetricsTest.testIndexMetrics

Error Message:
minorMerge: 3 

[jira] [Resolved] (SOLR-9935) When hl.method=unified add support for hl.fragsize param

2017-01-07 Thread David Smiley (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Smiley resolved SOLR-9935.

   Resolution: Fixed
Fix Version/s: 6.4

> When hl.method=unified add support for hl.fragsize param
> 
>
> Key: SOLR-9935
> URL: https://issues.apache.org/jira/browse/SOLR-9935
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.4
>
> Attachments: SOLR_9935_UH_fragsize.patch, SOLR_9935_UH_fragsize.patch
>
>
> In LUCENE-7620 the UnifiedHighlighter is getting a BreakIterator that allows 
> it to support the equivalent of Solr's {{hl.fragsize}}.  So lets support this 
> on the Solr side.
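As an illustration of the parameter combination described above, a request of the following shape selects the UnifiedHighlighter and asks for ~100-character passages (the core name, field, and query here are hypothetical, not taken from the issue):

```
http://localhost:8983/solr/techproducts/select?q=text:apache&hl=true&hl.fl=text&hl.method=unified&hl.fragsize=100
```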



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9935) When hl.method=unified add support for hl.fragsize param

2017-01-07 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15808704#comment-15808704 ]

ASF subversion and git services commented on SOLR-9935:
---

Commit d195c2525b00ef6e12b88f838167475feb5d2d19 in lucene-solr's branch 
refs/heads/branch_6x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d195c25 ]

SOLR-9935: Add hl.fragsize support when using the UnifiedHighlighter

(cherry picked from commit 570880d)








[jira] [Commented] (SOLR-9935) When hl.method=unified add support for hl.fragsize param

2017-01-07 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15808700#comment-15808700 ]

ASF subversion and git services commented on SOLR-9935:
---

Commit 570880d3acb45c925e8dc77172e0725ab8ba07b8 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=570880d ]

SOLR-9935: Add hl.fragsize support when using the UnifiedHighlighter








[jira] [Resolved] (LUCENE-7620) UnifiedHighlighter: add target character width BreakIterator wrapper

2017-01-07 Thread David Smiley (JIRA)

 [ https://issues.apache.org/jira/browse/LUCENE-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Smiley resolved LUCENE-7620.
--
Resolution: Fixed

Thanks for the review feedback Jim & Tim! 6.4 is going to be a great release for the UnifiedHighlighter. I hope features like this and the other improvements in this release get more folks using the UH.

> UnifiedHighlighter: add target character width BreakIterator wrapper
> 
>
> Key: LUCENE-7620
> URL: https://issues.apache.org/jira/browse/LUCENE-7620
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.4
>
> Attachments: LUCENE_7620_UH_LengthGoalBreakIterator.patch, 
> LUCENE_7620_UH_LengthGoalBreakIterator.patch, 
> LUCENE_7620_UH_LengthGoalBreakIterator.patch
>
>
> The original Highlighter includes a {{SimpleFragmenter}} that delineates 
> fragments (aka Passages) by a character width.  The default is 100 characters.
> It would be great to support something similar for the UnifiedHighlighter.  
> It's useful in its own right and of course it helps users transition to the 
> UH.  I'd like to do it as a wrapper to another BreakIterator -- perhaps a 
> sentence one.  In this way you get back Passages that are a number of 
> sentences so they will look nice instead of breaking mid-way through a 
> sentence.  And you get some control by specifying a target number of 
> characters.  This BreakIterator wouldn't be a general purpose 
> java.text.BreakIterator since it would assume it's called in a manner exactly 
> as the UnifiedHighlighter uses it.  It would probably be compatible with the 
> PostingsHighlighter too.
> I don't propose doing this by default; besides, it's easy enough to pick your 
> BreakIterator config.
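The wrapping idea described above can be sketched with the JDK's own java.text.BreakIterator. This is an illustrative approximation, not the LengthGoalBreakIterator from the patch: starting from a sentence iterator, keep absorbing whole sentences while the next boundary lands closer to the length goal than the current one, so passages end on sentence breaks of roughly the requested width.

```java
import java.text.BreakIterator;
import java.util.Locale;

// Illustrative sketch of a "length goal" break strategy (NOT the actual
// Lucene implementation): walk sentence boundaries and stop at the one
// closest to offset + targetLength.
public class LengthGoalBreakDemo {

    static int followingWithGoal(BreakIterator sentences, String text,
                                 int offset, int targetLength) {
        sentences.setText(text);
        int goal = offset + targetLength;
        int breakAt = sentences.following(offset); // first sentence break after offset
        while (breakAt != BreakIterator.DONE && breakAt < goal) {
            int next = sentences.next();           // try to absorb one more sentence
            if (next == BreakIterator.DONE) break;
            // stop if advancing would overshoot the goal by more than
            // the current break undershoots it
            if (next - goal > goal - breakAt) break;
            breakAt = next;
        }
        return breakAt == BreakIterator.DONE ? text.length() : breakAt;
    }

    public static void main(String[] args) {
        String text = "One. Two two. Three three three. Four.";
        BreakIterator bi = BreakIterator.getSentenceInstance(Locale.ROOT);
        // With a ~10-char goal the passage absorbs two sentences ("One. Two two. ")
        System.out.println(followingWithGoal(bi, text, 0, 10));
    }
}
```

The key design point, as in the issue description, is that the wrapper never breaks mid-sentence; it only chooses *which* sentence boundary to stop at.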






[jira] [Commented] (LUCENE-7620) UnifiedHighlighter: add target character width BreakIterator wrapper

2017-01-07 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15808672#comment-15808672 ]

ASF subversion and git services commented on LUCENE-7620:
-

Commit ff5fdcde422033d9cbe3dbe11f2abda9ee3a408b in lucene-solr's branch 
refs/heads/branch_6x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ff5fdcd ]

LUCENE-7620: UnifiedHighlighter: new LengthGoalBreakIterator wrapper

(cherry picked from commit ea49989)








[jira] [Commented] (LUCENE-7620) UnifiedHighlighter: add target character width BreakIterator wrapper

2017-01-07 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15808666#comment-15808666 ]

ASF subversion and git services commented on LUCENE-7620:
-

Commit ea49989524e96563f2b9bdd4256012239907882f in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ea49989 ]

LUCENE-7620: UnifiedHighlighter: new LengthGoalBreakIterator wrapper








[jira] [Resolved] (SOLR-9944) Map the nodes function name to the GatherNodesStream

2017-01-07 Thread Joel Bernstein (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joel Bernstein resolved SOLR-9944.
--
Resolution: Resolved

> Map the nodes function name to the GatherNodesStream
> 
>
> Key: SOLR-9944
> URL: https://issues.apache.org/jira/browse/SOLR-9944
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-9944.patch
>
>
> This ticket maps the *nodes* function name to the GatherNodesStream. The 
> *gatherNodes* function name will also remain mapped. 
> I think nodes is just a better name for this function. Gather was meant to 
> signify a breadth first traversal, but I think the docs can make this clear.
> I'll update the docs as well and mention that both function names work, but 
> in the future gatherNodes may be phased out.
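As a sketch of the rename, the same breadth-first traversal can be written with either function name (the collection and field names below are hypothetical, following the style of the Solr streaming-expression docs):

```
nodes(emails, walk="user1@example.com->from", gather="to")
gatherNodes(emails, walk="user1@example.com->from", gather="to")
```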






[jira] [Commented] (SOLR-9944) Map the nodes function name to the GatherNodesStream

2017-01-07 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15808636#comment-15808636 ]

ASF subversion and git services commented on SOLR-9944:
---

Commit 6adc03a5da22760f64bc79a5663953d719eaeb3b in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6adc03a ]

SOLR-9944: Update CHANGES.txt








[jira] [Commented] (SOLR-9944) Map the nodes function name to the GatherNodesStream

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808595#comment-15808595
 ] 

ASF subversion and git services commented on SOLR-9944:
---

Commit ac14fc32e045d45b5129dc237f7e5472fc86e4a0 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ac14fc3 ]

SOLR-9944: Update CHANGES.txt








[jira] [Commented] (SOLR-9944) Map the nodes function name to the GatherNodesStream

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808594#comment-15808594
 ] 

ASF subversion and git services commented on SOLR-9944:
---

Commit aae4217abc09163837597bf761f21d8019091216 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aae4217 ]

SOLR-9944: Map the nodes function name to the GatherNodesStream








[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 629 - Unstable!

2017-01-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/629/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.update.SolrIndexMetricsTest.testIndexMetrics

Error Message:
minorMerge: 3 expected:<4> but was:<3>

Stack Trace:
java.lang.AssertionError: minorMerge: 3 expected:<4> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([B05AD9203A5F546:C7D5902EC32B0E7D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.update.SolrIndexMetricsTest.testIndexMetrics(SolrIndexMetricsTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11221 lines...]
   [junit4] Suite: org.apache.solr.update.SolrIndexMetricsTest
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3763 - Still Unstable!

2017-01-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3763/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([46C8FCAE122A307B:CE9CC374BCD65D83]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:311)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:262)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:244)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Solr-Artifacts-6.x - Build # 217 - Failure

2017-01-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-6.x/217/

No tests ran.

Build Log:
[...truncated 480 lines...]
[javac] Compiling 77 source files to 
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/build/suggest/classes/java
[javac] 
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/DocumentValueSourceDictionary.java:26:
 error: package org.apache.lucene.queries.function does not exist
[javac] import org.apache.lucene.queries.function.ValueSource;
[javac]  ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/DocumentValueSourceDictionary.java:77:
 error: cannot find symbol
[javac]ValueSource 
weightsValueSource, String payload, String contexts) {
[javac]^
[javac]   symbol:   class ValueSource
[javac]   location: class DocumentValueSourceDictionary
[javac] 
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/DocumentValueSourceDictionary.java:104:
 error: cannot find symbol
[javac]ValueSource 
weightsValueSource, String payload) {
[javac]^
[javac]   symbol:   class ValueSource
[javac]   location: class DocumentValueSourceDictionary
[javac] 
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/DocumentValueSourceDictionary.java:130:
 error: cannot find symbol
[javac]ValueSource 
weightsValueSource) {
[javac]^
[javac]   symbol:   class ValueSource
[javac]   location: class DocumentValueSourceDictionary
[javac] Note: 
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/jaspell/JaspellLookup.java
 uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 4 errors

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/solr/build.xml:539: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/solr/common-build.xml:418:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/module-build.xml:670:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/common-build.xml:501:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/common-build.xml:1955:
 Compile failed; see the compiler error output for details.

Total time: 31 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any





[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_112) - Build # 675 - Unstable!

2017-01-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/675/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.DistributedQueryElevationComponentTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001\control\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001\control\collection1

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001\control:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001\control

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001\control\collection1\conf:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001\control\collection1\conf

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001\control\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001\control\collection1
   
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001\control:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001\control
   
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001
   
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001\control\collection1\conf:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001\tempDir-001\control\collection1\conf
   
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.DistributedQueryElevationComponentTest_E8C52F3F5249420-001

at __randomizedtesting.SeedInfo.seed([E8C52F3F5249420]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:323)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 

[jira] [Commented] (SOLR-9936) Allow configuration for recoveryExecutor thread pool size

2017-01-07 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808282#comment-15808282
 ] 

Mark Miller commented on SOLR-9936:
---

We would want to do some vetting and testing, but it is perhaps okay to 
limit the recovery executor for now.

> Allow configuration for recoveryExecutor thread pool size
> -
>
> Key: SOLR-9936
> URL: https://issues.apache.org/jira/browse/SOLR-9936
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Affects Versions: 6.3
>Reporter: Tim Owen
> Attachments: SOLR-9936.patch
>
>
> There are two executor services in {{UpdateShardHandler}}: the 
> {{updateExecutor}}, whose size is unbounded for reasons explained in the code 
> comments, and the {{recoveryExecutor}}, which was added later and is the one 
> that executes the {{RecoveryStrategy}} code to actually fetch index files and 
> store them to disk, eventually calling an {{fsync}} thread to ensure the data 
> is written.
> We found that with a fast network such as 10GbE it's very easy to overload 
> the local disk storage when restarting Solr instances after some downtime, if 
> they have many cores to load. Typically each of our physical servers contains 
> 6 SSDs and 6 Solr instances, so each Solr has its home dir on a dedicated 
> SSD. With 100+ cores (shard replicas) on each instance, startup can really 
> hammer the SSD, since Solr writes in parallel from as many cores as it is 
> recovering. This made recovery time bad enough that replicas were down for a 
> long time, and whole shards were even marked as down when none of their 
> replicas had recovered (usually when many machines had been restarted). The 
> very slow IO times (tens of seconds or worse) also made the JVM pause, which 
> caused disconnects from ZK and didn't help recovery either.
> This patch allowed us to throttle how much parallelism there is when writing 
> to a disk. In practice we use a pool size of 4 threads to prevent the SSD 
> getting overloaded, and that worked well enough to recover all cores in 
> reasonable time.
> Given the comment on the other thread pool's size, I'd like some feedback on 
> whether it's OK to do this for the {{recoveryExecutor}}.
> It's configured in solr.xml with e.g.
> {noformat}
>   
> ${solr.recovery.threads:4}
>   
> {noformat}
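(The archive stripped the XML element names from the {noformat} block above; only the {{${solr.recovery.threads:4}}} property default survives, so the exact tags are not recoverable here. The throttling idea itself can be sketched with a bounded worker pool; the names below, such as {{recover_core}} and the pool size, are illustrative and not Solr's API.)

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for per-core recovery work (index fetch + fsync);
# this is NOT Solr's API, just a model of the workload being throttled.
def recover_core(core_name):
    time.sleep(0.01)  # simulate writing fetched index files to disk
    return core_name

cores = [f"core{i}" for i in range(12)]

# Bounded pool: at most 4 recoveries hit the disk concurrently, mirroring
# the patch's ${solr.recovery.threads:4} default for the recoveryExecutor.
with ThreadPoolExecutor(max_workers=4) as recovery_executor:
    results = list(recovery_executor.map(recover_core, cores))

print(len(results))  # all 12 cores recovered, never more than 4 in flight
```

The trade-off is the one the reporter describes: each core's recovery may start later, but the number of concurrent disk writers stays bounded, so the SSD is never overloaded.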






[jira] [Commented] (LUCENE-7588) A parallel DrillSideways implementation

2017-01-07 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808262#comment-15808262
 ] 

Michael McCandless commented on LUCENE-7588:


Oh, I think I may see the issue!  When we call {{TopDocs.merge}} in the 
{{DrillSideways#search}} variant that takes a sort, we are failing to pass that 
sort on to {{TopDocs.merge}} ... so it's not merge-sorting in the correct order?

> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7588.patch, lucene-7588-test.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On a large document set, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager-based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.






[jira] [Commented] (LUCENE-7588) A parallel DrillSideways implementation

2017-01-07 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808257#comment-15808257
 ] 

Michael McCandless commented on LUCENE-7588:


Hmm, but in {{testRandom}} we seem to always sort by {{id}} (a unique field for 
each document) for each search?

So, regardless of using a single thread for the search, or doing the map/reduce 
w/ N threads and merging with {{TopDocs.merge}}, the result order should have 
been identical, I think?
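(This reasoning can be checked abstractly: when every document has a unique sort key, splitting the work into slices, sorting each slice, and merge-sorting the slices is order-equivalent to one global sort, so the thread count alone cannot change the result order. A small sketch, with arbitrary data and slice count, not the actual {{testRandom}} setup:)

```python
import heapq
import random

random.seed(0)

# Documents sorted by a unique "id" key, like the test's id field.
docs = [(f"id{i:03d}", random.random()) for i in range(100)]
single_thread = sorted(docs)  # the single-threaded search order

# Split into 4 "thread slices", sort each, then merge-sort the slices,
# mimicking the map/reduce path through TopDocs.merge.
random.shuffle(docs)
slices = [sorted(docs[i::4]) for i in range(4)]
merged = list(heapq.merge(*slices))

print(merged == single_thread)  # True: unique keys leave no tie to break
```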







[jira] [Commented] (SOLR-9930) Incomplete documentation for analysis-extra

2017-01-07 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808252#comment-15808252
 ] 

Erick Erickson commented on SOLR-9930:
--

I'd like to see the relevant bits of your solrconfig and schema files, if you 
would. While I can reproduce the load failure by including ICUTokenizerFactory 
in my schema, it is not cured simply by adding solr-analysis-extras.X.Y.jar to 
the lib directive in solrconfig, so something else is going on.

What version of Solr are you using?
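(For reference, in a stock 6.x distribution the ICU-related jars are spread across three places, so a set of lib directives like the following is typically needed. The paths assume the default install layout and are a sketch, not the canonical documentation fix:)

```
<!-- solrconfig.xml: load analysis-extras and its ICU dependencies.
     Adjust solr.install.dir fallback to match your layout. -->
<lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lucene-libs" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-analysis-extras-\d.*\.jar" />
```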

> Incomplete documentation for analysis-extra
> ---
>
> Key: SOLR-9930
> URL: https://issues.apache.org/jira/browse/SOLR-9930
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jakob Kylberg
>Priority: Minor
>  Labels: documentation
>
> The documentation about which dependencies have to be added in order 
> to activate e.g. the ICU analyzer is incomplete. This leads to unnecessary 
> trouble for users, who have to find the missing dependencies 
> themselves, especially since the error message in the logs and the Solr GUI 
> doesn't give a clear hint about what's missing.
> I've created a pull request with updated documentation.






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_112) - Build # 6342 - Unstable!

2017-01-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6342/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([9ECAAEBA747E1AB3:169E9160DA82774B]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:311)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:262)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:244)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Resolved] (SOLR-9883) example solr config files can lead to invalid tlog replays when using add-unknown-fields-to-schema update chain

2017-01-07 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-9883.
--
   Resolution: Fixed
Fix Version/s: 6.4
   master (7.0)

> example solr config files can lead to invalid tlog replays when using 
> add-unknown-fields-to-schema update chain
> --
>
> Key: SOLR-9883
> URL: https://issues.apache.org/jira/browse/SOLR-9883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3, trunk
>Reporter: Erick Erickson
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9883.patch, SOLR-9883.patch, SOLR-9883.patch
>
>
> The current basic_configs and data_driven_schema_configs try to create 
> unknown fields. The problem is that the date processing 
> "ParseDateFieldUpdateProcessorFactory" is not invoked if the doc is replayed 
> from the tlog. Whether there are other places this is a problem I don't know, 
> this is a concrete example that fails in the field.
> So say I have a pattern for dates that omits the trailing 'Z', as:
> yyyy-MM-dd'T'HH:mm:ss.SSS
> This works fine when the doc is initially indexed. Now say the doc must be 
> replayed from the tlog. The doc errors out with "unknown date format" since 
> (apparently) this doesn't go through the same update chain, perhaps due to 
> the sample configs defining ParseDateFieldUpdateProcessorFactory after  
> DistributedUpdateProcessorFactory?
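The failure mode the reporter describes can be sketched with a minimal, self-contained Java example (not Solr code; the input value is invented for illustration): a custom pattern without the trailing 'Z' parses the raw string on first indexing, but if tlog replay bypasses the parse URP, the unparsed string reaches the date field type and is rejected.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class DatePatternSketch {
    public static void main(String[] args) {
        // The custom pattern from the report: no trailing 'Z' (no zone).
        DateTimeFormatter f = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSS");
        // Parses fine when the parse step runs, as on first indexing...
        LocalDateTime t = LocalDateTime.parse("2017-01-06T14:16:37.253", f);
        System.out.println(t); // prints 2017-01-06T14:16:37.253
        // ...but if tlog replay skips the ParseDate URP, the raw string goes
        // straight to the date field type, which expects a 'Z'-suffixed
        // value and fails with "unknown date format".
    }
}
```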



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9883) example solr config files can lead to invalid tlog replays when using add-unknown-fields-to-schema update chain

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808200#comment-15808200
 ] 

ASF subversion and git services commented on SOLR-9883:
---

Commit 9a6ff177b6f7c776cc6bf4625ed2d5dd7cce81d2 in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9a6ff17 ]

SOLR-9883: In example schemaless configs' default update chain, move the DUP to 
after the AddSchemaFields URP (which is now tagged as RunAlways), to avoid 
invalid buffered tlog entry replays.
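The reordering the commit describes can be illustrated with a hedged solrconfig.xml sketch (processor list and chain name are assumptions for illustration, not the exact shipped config):

```xml
<!-- Hypothetical sketch of the fixed ordering: AddSchemaFields (now a
     RunAlways URP) precedes the DUP, so it also runs when buffered tlog
     entries are replayed, instead of being skipped. -->
<updateRequestProcessorChain name="add-unknown-fields-to-schema">
  <processor class="solr.ParseDateFieldUpdateProcessorFactory"/>
  <processor class="solr.AddSchemaFieldsUpdateProcessorFactory"/>
  <!-- before the fix, the DUP came earlier, so replayed docs never
       passed through the parse/add-fields processors above -->
  <processor class="solr.DistributedUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```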


> example solr config files can lead to invalid tlog replays when using 
> add-unknown-fields-to-schema update chain
> --
>
> Key: SOLR-9883
> URL: https://issues.apache.org/jira/browse/SOLR-9883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3, trunk
>Reporter: Erick Erickson
>Assignee: Steve Rowe
> Attachments: SOLR-9883.patch, SOLR-9883.patch, SOLR-9883.patch
>
>
> The current basic_configs and data_driven_schema_configs try to create 
> unknown fields. The problem is that the date processing 
> "ParseDateFieldUpdateProcessorFactory" is not invoked if the doc is replayed 
> from the tlog. Whether there are other places this is a problem I don't know, 
> this is a concrete example that fails in the field.
> So say I have a pattern for dates that omits the trailing 'Z', as:
> yyyy-MM-dd'T'HH:mm:ss.SSS
> This works fine when the doc is initially indexed. Now say the doc must be 
> replayed from the tlog. The doc errors out with "unknown date format" since 
> (apparently) this doesn't go through the same update chain, perhaps due to 
> the sample configs defining ParseDateFieldUpdateProcessorFactory after  
> DistributedUpdateProcessorFactory?






[jira] [Commented] (SOLR-9883) example solr config files can lead to invalid tlog replays when using add-unknown-fields-to-schema update chain

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808201#comment-15808201
 ] 

ASF subversion and git services commented on SOLR-9883:
---

Commit d817fd43eccd67a5d73c3bbc49561de65d3fc9cb in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d817fd4 ]

SOLR-9883: In example schemaless configs' default update chain, move the DUP to 
after the AddSchemaFields URP (which is now tagged as RunAlways), to avoid 
invalid buffered tlog entry replays.


> example solr config files can lead to invalid tlog replays when using 
> add-unknown-fields-to-schema update chain
> --
>
> Key: SOLR-9883
> URL: https://issues.apache.org/jira/browse/SOLR-9883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3, trunk
>Reporter: Erick Erickson
>Assignee: Steve Rowe
> Attachments: SOLR-9883.patch, SOLR-9883.patch, SOLR-9883.patch
>
>
> The current basic_configs and data_driven_schema_configs try to create 
> unknown fields. The problem is that the date processing 
> "ParseDateFieldUpdateProcessorFactory" is not invoked if the doc is replayed 
> from the tlog. Whether there are other places this is a problem I don't know, 
> this is a concrete example that fails in the field.
> So say I have a pattern for dates that omits the trailing 'Z', as:
> yyyy-MM-dd'T'HH:mm:ss.SSS
> This works fine when the doc is initially indexed. Now say the doc must be 
> replayed from the tlog. The doc errors out with "unknown date format" since 
> (apparently) this doesn't go through the same update chain, perhaps due to 
> the sample configs defining ParseDateFieldUpdateProcessorFactory after  
> DistributedUpdateProcessorFactory?






[jira] [Updated] (SOLR-9883) example solr config files can lead to invalid tlog replays when using add-unknown-fields-to-schema update chain

2017-01-07 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-9883:
-
Attachment: SOLR-9883.patch

Updated patch, moves config files to temp dir to avoid permission failures when 
auto-upgrading the schema file to {{managed-schema}}.  (Didn't see this failure 
when running from IntelliJ.)

All Solr tests pass, and precommit passes.  Committing shortly.

> example solr config files can lead to invalid tlog replays when using 
> add-unknown-fields-to-schema update chain
> --
>
> Key: SOLR-9883
> URL: https://issues.apache.org/jira/browse/SOLR-9883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3, trunk
>Reporter: Erick Erickson
>Assignee: Steve Rowe
> Attachments: SOLR-9883.patch, SOLR-9883.patch, SOLR-9883.patch
>
>
> The current basic_configs and data_driven_schema_configs try to create 
> unknown fields. The problem is that the date processing 
> "ParseDateFieldUpdateProcessorFactory" is not invoked if the doc is replayed 
> from the tlog. Whether there are other places this is a problem I don't know, 
> this is a concrete example that fails in the field.
> So say I have a pattern for dates that omits the trailing 'Z', as:
> yyyy-MM-dd'T'HH:mm:ss.SSS
> This works fine when the doc is initially indexed. Now say the doc must be 
> replayed from the tlog. The doc errors out with "unknown date format" since 
> (apparently) this doesn't go through the same update chain, perhaps due to 
> the sample configs defining ParseDateFieldUpdateProcessorFactory after  
> DistributedUpdateProcessorFactory?






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 669 - Failure

2017-01-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/669/

No tests ran.

Build Log:
[...truncated 41985 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 260 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (40.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 30.5 MB in 0.03 sec (1122.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 65.0 MB in 0.05 sec (1201.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 75.9 MB in 0.07 sec (1116.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6182 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6182 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 215 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (304.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 40.1 MB in 0.04 sec (1075.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 140.5 MB in 0.13 sec (1092.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 150.1 MB in 0.13 sec (1143.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]  
   [smoker] Started Solr server on port 8983 (pid=23175). Happy searching!
   

[jira] [Updated] (SOLR-9944) Map the nodes function name to the GatherNodesStream

2017-01-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9944:
-
Attachment: SOLR-9944.patch

> Map the nodes function name to the GatherNodesStream
> 
>
> Key: SOLR-9944
> URL: https://issues.apache.org/jira/browse/SOLR-9944
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-9944.patch
>
>
> This ticket maps the *nodes* function name to the GatherNodesStream. The 
> *gatherNodes* function name will also remain mapped. 
> I think nodes is just a better name for this function. Gather was meant to 
> signify a breadth first traversal, but I think the docs can make this clear.
> I'll update the docs as well and mention that both function names work, but 
> in the future gatherNodes may be phased out.
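After the change described above, both function names would resolve to the same stream. A hedged sketch (collection and field names are invented for illustration; syntax follows the gatherNodes streaming-expression form):

```
nodes(emails,
      walk="john@example.com->from",
      gather="to")

gatherNodes(emails,
            walk="john@example.com->from",
            gather="to")
```

Both expressions perform the same one-step graph gather; only the alias differs.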






[jira] [Updated] (SOLR-9883) example solr config files can lead to invalid tlog replays when using add-unknown-fields-to-schema update chain

2017-01-07 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-9883:
-
Attachment: SOLR-9883.patch

Patch with a new automated data corruption test.  I tried to make a cloud test, 
but I couldn't get it to work.  Instead, the test in the patch simulates this 
situation by directly turning on tlog buffering mode in a single core, and 
sending in an update (with param {{update.distrib=fromleader}}) after manually 
running the "add-unknown-fields-to-schema" update chain on it up through the 
DUP.  The test succeeds with the solr config modifications in the patch, and 
fails without them.

The patch also fixes a typo in the replay failure log message 
({{REYPLAY}}->{{REPLAY}}).

I'm running all Solr tests and precommit now.  When they succeed, I'll commit.

> example solr config files can lead to invalid tlog replays when using 
> add-unknown-fields-to-schema update chain
> --
>
> Key: SOLR-9883
> URL: https://issues.apache.org/jira/browse/SOLR-9883
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3, trunk
>Reporter: Erick Erickson
>Assignee: Steve Rowe
> Attachments: SOLR-9883.patch, SOLR-9883.patch
>
>
> The current basic_configs and data_driven_schema_configs try to create 
> unknown fields. The problem is that the date processing 
> "ParseDateFieldUpdateProcessorFactory" is not invoked if the doc is replayed 
> from the tlog. Whether there are other places this is a problem I don't know, 
> this is a concrete example that fails in the field.
> So say I have a pattern for dates that omits the trailing 'Z', as:
> yyyy-MM-dd'T'HH:mm:ss.SSS
> This works fine when the doc is initially indexed. Now say the doc must be 
> replayed from the tlog. The doc errors out with "unknown date format" since 
> (apparently) this doesn't go through the same update chain, perhaps due to 
> the sample configs defining ParseDateFieldUpdateProcessorFactory after  
> DistributedUpdateProcessorFactory?






[jira] [Created] (SOLR-9944) Map the nodes function name to the GatherNodesStream

2017-01-07 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-9944:


 Summary: Map the nodes function name to the GatherNodesStream
 Key: SOLR-9944
 URL: https://issues.apache.org/jira/browse/SOLR-9944
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket maps the *nodes* function name to the GatherNodesStream. The 
*gatherNodes* function name will also remain mapped. 

I think nodes is just a better name for this function. Gather was meant to 
signify a breadth first traversal, but I think the docs can make this clear.

I'll update the docs as well and mention that both function names work, but in 
the future gatherNodes may be phased out.








[jira] [Commented] (SOLR-7691) SolrEntityProcessor as SubEntity doesn't work with delta-import

2017-01-07 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808114#comment-15808114
 ] 

Mikhail Khludnev commented on SOLR-7691:


# I can commit a patch, if it's provided.
# the configuration above seems like N+1 antipattern.
# the better approach is join="zipper", _however_ I don't know how it works 
with delta-import. Patches are welcome.  

> SolrEntityProcessor as SubEntity doesn't work with delta-import
> ---
>
> Key: SOLR-7691
> URL: https://issues.apache.org/jira/browse/SOLR-7691
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.0, 5.1, 5.2, 5.2.1
>Reporter: Sebastian Krebs
>
> I've used the {{SolrEntityProcessor}} as sub-entity in the dataimporter like 
> this
> {code:lang=xml}
> <dataConfig>
> <document>
> <entity
> name="outer"
> dataSource="my_datasource"
> pk="id"
> query="..."
> deltaQuery="..."
> deltaImportQuery="..."
> >
> <entity
> name="solr"
> processor="SolrEntityProcessor"
> url="http://127.0.0.1:8983/solr/${solr.core.name}"
> query="Xid:${outer.Xid}"
> rows="1"
> fl="Id,FieldA,FieldB"
> wt="javabin"
> />
> </entity>
> </document>
> </dataConfig>
> {code}
> Recently I decided to upgrade to 5.x, but the delta-import stopped working. 
> Overall it looks like the http-connection used by the {{SolrEntityProcessor}} 
> is closed right _after_ the request/response, because the first document is 
> indexed properly and for the second connection the dataimport fetches the 
> record from the database, but after that it exits.
> This is the stacktrace taken from the log
> {code:lang=none}
> java.lang.RuntimeException: java.lang.RuntimeException: org.apache.solr.handler.dataimport.DataImportHandlerException: java.lang.IllegalStateException: Connection pool shut down
> at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:270)
> at org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:444)
> at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:482)
> at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:461)
> Caused by: java.lang.RuntimeException: org.apache.solr.handler.dataimport.DataImportHandlerException: java.lang.IllegalStateException: Connection pool shut down
> at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:416)
> at org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:363)
> at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:224)
> ... 3 more
> Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException: java.lang.IllegalStateException: Connection pool shut down
> at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:62)
> at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:246)
> at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> ... 5 more
> Caused by: java.lang.IllegalStateException: Connection pool shut down
> at org.apache.http.util.Asserts.check(Asserts.java:34)
> at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:184)
> at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:217)
> at org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:184)
> at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
> at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
> at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
> at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
> at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
> at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:466)
> at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:235)
> at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:227)
> at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
> at 

[jira] [Commented] (SOLR-9939) Ping handler logs each request twice

2017-01-07 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808095#comment-15808095
 ] 

Mikhail Khludnev commented on SOLR-9939:


wow.. PingRequestHandler makes a reentrant call to core.execute() (and I can only 
guess why). Since the SolrQueryResponse is instantiated by PingRequestHandler, can't 
logging be suppressed by clearing rsp.getToLog() or so..
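The suppression idea in the comment above can be modeled with a small, self-contained Java sketch (a toy stand-in, not Solr code; the `Rsp`/`execute` names are invented): the inner, reentrant execute call records and logs the request info, and clearing the response's to-log entries afterwards leaves nothing for the outer request path to log a second time.

```java
import java.util.ArrayList;
import java.util.List;

public class PingLogSketch {
    static final List<String> logLines = new ArrayList<>();

    // Toy stand-in for SolrQueryResponse and its getToLog() list.
    static class Rsp { final List<String> toLog = new ArrayList<>(); }

    // Stands in for SolrCore.execute(): records request info and logs it.
    static void execute(Rsp rsp) {
        rsp.toLog.add("path=/admin/ping status=0");
        logLines.add(String.join(" ", rsp.toLog));
    }

    public static void main(String[] args) {
        Rsp rsp = new Rsp();
        execute(rsp);        // inner, reentrant call made by the ping handler
        rsp.toLog.clear();   // the proposed suppression: rsp.getToLog().clear()
        if (!rsp.toLog.isEmpty()) {          // outer request logging path
            logLines.add(String.join(" ", rsp.toLog));
        }
        System.out.println(logLines.size()); // one log line instead of two
    }
}
```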

> Ping handler logs each request twice
> 
>
> Key: SOLR-9939
> URL: https://issues.apache.org/jira/browse/SOLR-9939
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-9939.patch
>
>
> Requests to the ping handler are being logged twice.  The first line has 
> "hits" and the second one doesn't, but other than that they have the same 
> info.
> These lines are from a 5.3.2-SNAPSHOT version.  In the IRC channel, 
> [~ctargett] confirmed that this also happens in 6.4-SNAPSHOT.
> {noformat}
> 2017-01-06 14:16:37.253 INFO  (qtp1510067370-186262) [   x:sparkmain] 
> or.ap.so.co.So.Request [sparkmain] webapp=/solr path=/admin/ping params={} 
> hits=400271103 status=0 QTime=4
> 2017-01-06 14:16:37.253 INFO  (qtp1510067370-186262) [   x:sparkmain] 
> or.ap.so.co.So.Request [sparkmain] webapp=/solr path=/admin/ping params={} 
> status=0 QTime=4
> {noformat}
> Unless there's a good reason to have it that I'm not aware of, the second log 
> should be removed.






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 603 - Unstable!

2017-01-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/603/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
expected:<3> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<1>
at __randomizedtesting.SeedInfo.seed([C5D1FB6D03A54ECC:8DA48FD905966159]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:516)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11852 lines...]
   [junit4] Suite: 

[jira] [Commented] (SOLR-9939) Ping handler logs each request twice

2017-01-07 Thread Trey Cahill (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808037#comment-15808037
 ] 

Trey Cahill commented on SOLR-9939:
---

The uploaded patch will filter the second request logging line from the Ping 
request.

Looking at the thread dump 
https://gist.github.com/cahilltr/e957857b7893c871022551f0e4daab28, it looks 
like SolrCore.execute() is called twice, which has request logging code in it 
(https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/SolrCore.java#L2327).

Not sure if this is intended, or if filtering the second log message is sufficient.

> Ping handler logs each request twice
> 
>
> Key: SOLR-9939
> URL: https://issues.apache.org/jira/browse/SOLR-9939
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-9939.patch
>
>
> Requests to the ping handler are being logged twice.  The first line has 
> "hits" and the second one doesn't, but other than that they have the same 
> info.
> These lines are from a 5.3.2-SNAPSHOT version.  In the IRC channel, 
> [~ctargett] confirmed that this also happens in 6.4-SNAPSHOT.
> {noformat}
> 2017-01-06 14:16:37.253 INFO  (qtp1510067370-186262) [   x:sparkmain] 
> or.ap.so.co.So.Request [sparkmain] webapp=/solr path=/admin/ping params={} 
> hits=400271103 status=0 QTime=4
> 2017-01-06 14:16:37.253 INFO  (qtp1510067370-186262) [   x:sparkmain] 
> or.ap.so.co.So.Request [sparkmain] webapp=/solr path=/admin/ping params={} 
> status=0 QTime=4
> {noformat}
> Unless there's a good reason to have it that I'm not aware of, the second log 
> should be removed.






[jira] [Updated] (SOLR-9939) Ping handler logs each request twice

2017-01-07 Thread Trey Cahill (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trey Cahill updated SOLR-9939:
--
Attachment: SOLR-9939.patch

> Ping handler logs each request twice
> 
>
> Key: SOLR-9939
> URL: https://issues.apache.org/jira/browse/SOLR-9939
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-9939.patch
>
>
> Requests to the ping handler are being logged twice.  The first line has 
> "hits" and the second one doesn't, but other than that they have the same 
> info.
> These lines are from a 5.3.2-SNAPSHOT version.  In the IRC channel, 
> [~ctargett] confirmed that this also happens in 6.4-SNAPSHOT.
> {noformat}
> 2017-01-06 14:16:37.253 INFO  (qtp1510067370-186262) [   x:sparkmain] 
> or.ap.so.co.So.Request [sparkmain] webapp=/solr path=/admin/ping params={} 
> hits=400271103 status=0 QTime=4
> 2017-01-06 14:16:37.253 INFO  (qtp1510067370-186262) [   x:sparkmain] 
> or.ap.so.co.So.Request [sparkmain] webapp=/solr path=/admin/ping params={} 
> status=0 QTime=4
> {noformat}
> Unless there's a good reason to have it that I'm not aware of, the second log 
> should be removed.






[jira] [Updated] (SOLR-9935) When hl.method=unified add support for hl.fragsize param

2017-01-07 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-9935:
---
Attachment: SOLR_9935_UH_fragsize.patch

Updated patch to account for the API change in LUCENE-7620. Clarified the test 
a bit, along with some related test methods.  I'll commit later today.  In 
CHANGES.txt I'll remove the note about the UH not supporting hl.fragsize (yay).

Features in the original highlighter that are _not_ in the UH (as seen through 
Solr) are:
* influencing passage scoring via boosts in the query
* {{hl.mergeContiguous}} defaulting to false.  In the UH, DefaultPassageFormatter 
always merges contiguous passages.
* {{hl.alternateField}} and related options
* {{hl.maxMultiValueToExamine}} (a performance circuit-breaker).  It doesn't seem 
as pertinent to the UH as to the original Highlighter.
* a regex-based passage delineation option
* {{hl.preserveMulti}}: the original Highlighter supports "true" (false by 
default), but the UH doesn't do this.

> When hl.method=unified add support for hl.fragsize param
> 
>
> Key: SOLR-9935
> URL: https://issues.apache.org/jira/browse/SOLR-9935
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_9935_UH_fragsize.patch, SOLR_9935_UH_fragsize.patch
>
>
> In LUCENE-7620 the UnifiedHighlighter is getting a BreakIterator that allows 
> it to support the equivalent of Solr's {{hl.fragsize}}.  So lets support this 
> on the Solr side.






[jira] [Commented] (SOLR-9893) EasyMock/Mockito no longer works with Java 9 b148+

2017-01-07 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807982#comment-15807982
 ] 

Julian Hyde commented on SOLR-9893:
---

We are running into the same issue in Calcite/Avatica: CALCITE-1567. Do you 
know if there is a Mockito bug logged for this? Somewhere in 
https://github.com/cglib/cglib/issues/93 someone suggests that it is fixed in a 
later version of Mockito. If so, I would like to upgrade to that version of 
Mockito.

> EasyMock/Mockito no longer works with Java 9 b148+
> --
>
> Key: SOLR-9893
> URL: https://issues.apache.org/jira/browse/SOLR-9893
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 6.x, master (7.0)
>Reporter: Uwe Schindler
>Priority: Blocker
>
> EasyMock no longer works with the latest Java 9, because it uses cglib 
> behind the scenes, which tries to access a protected method inside the 
> runtime using setAccessible. This is no longer allowed by Java 9.
> Actually this is really stupid. Instead of forcefully making the protected 
> defineClass method available to the outside, it is much more correct to just 
> subclass ClassLoader (like the Lucene expressions module does).
> I tried updating EasyMock/Mockito, but that does not work; approx. 25 
> tests fail. The only way is to disable all mocking tests on Java 9. The 
> underlying issue in cglib is still not solved; master's code is here: 
> https://github.com/cglib/cglib/blob/master/cglib/src/main/java/net/sf/cglib/core/ReflectUtils.java#L44-L62
> As we use a stone-aged version of Mockito (1.x), a fix is not expected 
> to happen, although cglib might fix this!
> What should we do? This stupid issue prevents us from testing Java 9 with 
> Solr completely! 
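The alternative the issue recommends (subclassing ClassLoader so the protected defineClass method is legally reachable, instead of forcing access with setAccessible) can be sketched like this; the class names are illustrative, not Lucene's actual code:

```java
import java.io.InputStream;

public class DefineClassDemo {
    // A subclass may invoke its own inherited protected method, so no
    // reflection (and no setAccessible, which Java 9 forbids here) is needed.
    static final class BytesClassLoader extends ClassLoader {
        BytesClassLoader(ClassLoader parent) { super(parent); }
        Class<?> define(String name, byte[] bytes) {
            return defineClass(name, bytes, 0, bytes.length);
        }
    }

    public static void main(String[] args) throws Exception {
        // For the demo, re-define a copy of this very class from its
        // own bytecode on the classpath.
        byte[] bytes;
        try (InputStream in = DefineClassDemo.class
                .getResourceAsStream("DefineClassDemo.class")) {
            if (in == null) throw new IllegalStateException("class bytes not found");
            bytes = in.readAllBytes();
        }
        Class<?> copy = new BytesClassLoader(null)
                .define("DefineClassDemo", bytes);
        // A different loader yields a distinct Class object.
        System.out.println(copy != DefineClassDemo.class);
    }
}
```

This is the pattern cglib would need to adopt; the Lucene expressions module already does its class loading this way.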






[JENKINS] Lucene-Solr-Tests-master - Build # 1597 - Unstable

2017-01-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1597/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionTooManyReplicasTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.CollectionTooManyReplicasTest: 1) Thread[id=11498, 
name=OverseerHdfsCoreFailoverThread-97243141804916745-127.0.0.1:42050_solr-n_02,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.CollectionTooManyReplicasTest: 
   1) Thread[id=11498, 
name=OverseerHdfsCoreFailoverThread-97243141804916745-127.0.0.1:42050_solr-n_02,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([19805CD5117C2EF5]:0)




Build Log:
[...truncated 12257 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionTooManyReplicasTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J2/temp/solr.cloud.CollectionTooManyReplicasTest_19805CD5117C2EF5-001/init-core-data-001
   [junit4]   2> 1135550 INFO  
(SUITE-CollectionTooManyReplicasTest-seed#[19805CD5117C2EF5]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 1135550 INFO  
(SUITE-CollectionTooManyReplicasTest-seed#[19805CD5117C2EF5]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 3 servers in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J2/temp/solr.cloud.CollectionTooManyReplicasTest_19805CD5117C2EF5-001/tempDir-001
   [junit4]   2> 1135550 INFO  
(SUITE-CollectionTooManyReplicasTest-seed#[19805CD5117C2EF5]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1135551 INFO  (Thread-4687) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1135551 INFO  (Thread-4687) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1135651 INFO  
(SUITE-CollectionTooManyReplicasTest-seed#[19805CD5117C2EF5]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:46459
   [junit4]   2> 1135687 INFO  (jetty-launcher-1470-thread-1) [] 
o.e.j.s.Server jetty-9.3.14.v20161028
   [junit4]   2> 1135710 INFO  (jetty-launcher-1470-thread-2) [] 
o.e.j.s.Server jetty-9.3.14.v20161028
   [junit4]   2> 1135710 INFO  (jetty-launcher-1470-thread-3) [] 
o.e.j.s.Server jetty-9.3.14.v20161028
   [junit4]   2> 1135722 INFO  (jetty-launcher-1470-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@562f67e5{/solr,null,AVAILABLE}
   [junit4]   2> 1135723 INFO  (jetty-launcher-1470-thread-1) [] 
o.e.j.s.AbstractConnector Started 
ServerConnector@2626e282{HTTP/1.1,[http/1.1]}{127.0.0.1:33551}
   [junit4]   2> 1135723 INFO  (jetty-launcher-1470-thread-1) [] 
o.e.j.s.Server Started @1139796ms
   [junit4]   2> 1135723 INFO  (jetty-launcher-1470-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=33551}
   [junit4]   2> 1135723 ERROR (jetty-launcher-1470-thread-1) [] 
o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 1135723 INFO  (jetty-launcher-1470-thread-1) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 
7.0.0
   [junit4]   2> 1135723 INFO  (jetty-launcher-1470-thread-1) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1135723 INFO  (jetty-launcher-1470-thread-1) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 1135723 INFO  (jetty-launcher-1470-thread-1) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2017-01-07T18:09:48.748Z
   [junit4]   2> 1135725 INFO  (jetty-launcher-1470-thread-2) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@105f05f1{/solr,null,AVAILABLE}
   [junit4]   2> 1135725 INFO  (jetty-launcher-1470-thread-2) [] 
o.e.j.s.AbstractConnector Started 
ServerConnector@43c70b9c{HTTP/1.1,[http/1.1]}{127.0.0.1:42627}
   [junit4]   2> 1135725 INFO  (jetty-launcher-1470-thread-2) [] 
o.e.j.s.Server Started @1139798ms
   [junit4]   2> 1135725 INFO  (jetty-launcher-1470-thread-2) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1063 - Still Unstable!

2017-01-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1063/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([F8B5996390C104A2:900AAC49405B164E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.cancelDelegationToken(TestDelegationWithHadoopAuth.java:128)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail(TestDelegationWithHadoopAuth.java:280)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-07 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807920#comment-15807920
 ] 

Andrzej Bialecki  edited comment on SOLR-9928 at 1/7/17 6:17 PM:
-

I fixed the inconsistent unwrapping. Thank you Mike and Mark for your help!


was (Author: ab):
I fixed the inconsistent unwrapping. Thank you Mike and Mark for you help!

> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0), 6.4
>Reporter: Mike Drob
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9928.patch, SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.






[jira] [Resolved] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-07 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-9928.
-
Resolution: Fixed

I fixed the inconsistent unwrapping. Thank you Mike and Mark for your help!

> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0), 6.4
>Reporter: Mike Drob
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9928.patch, SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.






[jira] [Commented] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807919#comment-15807919
 ] 

ASF subversion and git services commented on SOLR-9928:
---

Commit e275e91293e2ecb0356415a178c7ccd38a7182ff in lucene-solr's branch 
refs/heads/branch_6x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e275e91 ]

SOLR-9928 Unwrap Directory consistently whenever it's passed as an argument.


> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0), 6.4
>Reporter: Mike Drob
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9928.patch, SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_112) - Build # 18720 - Still Unstable!

2017-01-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18720/
Java: 32bit/jdk1.8.0_112 -client -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteShard

Error Message:
Error from server at https://127.0.0.1:34652/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:34652/solr: create the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([5DB0A3CE006A4F90:98EE17D2869F856D]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1344)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1095)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1037)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteShard(CollectionsAPISolrJTest.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-9943) Add shards Streaming Expression

2017-01-07 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-9943:


 Summary: Add shards Streaming Expression
 Key: SOLR-9943
 URL: https://issues.apache.org/jira/browse/SOLR-9943
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


The shards() streaming expression is a stream source that emits a set of shards 
to be searched. All the Solr stream sources will accept this expression and, if 
it's passed in, use it instead of a *collection* name to contact the shards.

A base ShardsStream implementation that simply takes a list of shards will be 
mapped to the shards() expression.

Users can override the shards expression with custom functionality by extending 
ShardsStream and mapping the custom implementation in solrconfig.xml.

This is a generic solution for people who don't use SolrCloud but want to use 
Streaming Expressions, Parallel SQL, and JDBC.
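For illustration, usage might look like the following; since the feature is only proposed here, the syntax, host names, and parameters are assumptions, not the final API:

{noformat}
search(shards("host1:8983/solr/core1,host2:8983/solr/core1"),
       q="*:*",
       fl="id",
       sort="id asc")
{noformat}

The shards() expression replaces the collection name in the first position of the stream source, so non-SolrCloud users can address their shards directly.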






[jira] [Commented] (LUCENE-7611) Make suggester module use LongValuesSource

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807814#comment-15807814
 ] 

ASF subversion and git services commented on LUCENE-7611:
-

Commit 67261d2fb515f255e05c138281ab6c6b71d66716 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=67261d2 ]

LUCENE-7611: Remove unnecessary Exception wrapping from 
DocumentValueSourceDictionary


> Make suggester module use LongValuesSource
> --
>
> Key: LUCENE-7611
> URL: https://issues.apache.org/jira/browse/LUCENE-7611
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7611.patch, LUCENE-7611.patch
>
>
> This allows us to remove the suggester module's dependency on the queries 
> module.






[jira] [Commented] (LUCENE-7611) Make suggester module use LongValuesSource

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807813#comment-15807813
 ] 

ASF subversion and git services commented on LUCENE-7611:
-

Commit 31db19d3e4e0dec89ece38ef27577e8b668c93c2 in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=31db19d ]

LUCENE-7611: Remove unnecessary Exception wrapping from 
DocumentValueSourceDictionary


> Make suggester module use LongValuesSource
> --
>
> Key: LUCENE-7611
> URL: https://issues.apache.org/jira/browse/LUCENE-7611
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7611.patch, LUCENE-7611.patch
>
>
> This allows us to remove the suggester module's dependency on the queries 
> module.






[jira] [Commented] (SOLR-9509) Fix problems in shell scripts reported by "shellcheck"

2017-01-07 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807798#comment-15807798
 ] 

Erick Erickson commented on SOLR-9509:
--

Rishabh:

Please do.

> Fix problems in shell scripts reported by "shellcheck"
> --
>
> Key: SOLR-9509
> URL: https://issues.apache.org/jira/browse/SOLR-9509
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>  Labels: newdev
> Attachments: bin_solr_shellcheck.txt, shellcheck_solr_20160915.txt, 
> shellcheck_solr_bin_bash_20160915.txt, shellcheck_solr_bin_sh_20160915.txt, 
> shellcheck_solr_usr_bin_env_bash_20160915.txt
>
>
> Running {{shellcheck}} on our shell scripts reveal various improvements we 
> should consider.






[jira] [Updated] (SOLR-9942) MoreLikeThis Performance Degraded With Filtered Query

2017-01-07 Thread Ivan Provalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Provalov updated SOLR-9942:

Attachment: (was: solr_mlt_test.tar)

> MoreLikeThis Performance Degraded With Filtered Query
> -
>
> Key: SOLR-9942
> URL: https://issues.apache.org/jira/browse/SOLR-9942
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
> Attachments: solr_mlt_test2.tar
>
>
> Without any filters, MLT performs normally.  With any added filters, 
> the performance degrades compared to 4.6.1 (2.5-3.0X in our case).  The issue 
> goes away with the 6.0 upgrade.  The hot method is Lucene's DisiPriorityQueue 
> downHeap(), which is called 5X more in 5.5.2 than in 6.0.  I am 
> guessing that some of the Solr filter refactoring fixed it for the 6.0 release.
> As a work-around, for now I refactored the custom MLT handler to convert 
> the filters into boolean clauses, which takes care of the issue.
> Our configuration: 
> 1. mlt.maxqt=100
> 2. There is an additional filter passed as a parameter
> 3.  multiValued="true" omitNorms="false" termVectors="true"/>
> 4. text_general is a pretty standard text fieldType.
> I have code to populate a test dataset and run a query in order to 
> reproduce this.






[jira] [Updated] (SOLR-9942) MoreLikeThis Performance Degraded With Filtered Query

2017-01-07 Thread Ivan Provalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Provalov updated SOLR-9942:

Description: 
Without any filters, MLT performs normally.  With any added filters, the 
performance degrades compared to 4.6.1 (2.5-3.0X in our case).  The issue goes 
away with the 6.0 upgrade.  The hot method is Lucene's DisiPriorityQueue 
downHeap(), which is called 5X more in 5.5.2 than in 6.0.  I am guessing 
that some of the Solr filter refactoring fixed it for the 6.0 release.

As a work-around, for now I refactored the custom MLT handler to convert 
the filters into boolean clauses, which takes care of the issue.

Our configuration: 
1. mlt.maxqt=100
2. There is an additional filter passed as a parameter
3. 
4. text_general is a pretty standard text fieldType.

I have code to populate a test dataset and run a query in order to reproduce 
this.

  was:
Without any filters, the MLT is performing normal.  With any added filters, the 
performance degrades compared to 4.6.1 (2.5-3.0X in our case).  The issue goes 
away with 6.0 upgrade.  The hot method is Lucene's DisiPriorityQueue 
downHeap(), which takes 5X more calls in 5.5.2 compared to 6.0.  I am guessing 
that some of the Solr filters refactoring fixed it for 6.0 release.

As a work-around, for now I just refactored the custom MLT handler to convert 
the filters into boolean clauses, which takes care of the issue.   

Our configuration: 
1. mlt.maxqt=100
2. There is an additional filter passed as a parameter
3. 
4. text_en is a pretty standard text fieldType.

I have a code to populate a test dataset and run a query in order to reproduce 
this.


> MoreLikeThis Performance Degraded With Filtered Query
> -
>
> Key: SOLR-9942
> URL: https://issues.apache.org/jira/browse/SOLR-9942
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
> Attachments: solr_mlt_test2.tar
>
>
> Without any filters, MLT performs normally.  With any added filters, 
> performance degrades compared to 4.6.1 (2.5-3.0X in our case).  The issue 
> goes away with the 6.0 upgrade.  The hot method is Lucene's DisiPriorityQueue 
> downHeap(), which receives 5X more calls in 5.5.2 compared to 6.0.  I am 
> guessing that some of the Solr filter refactoring fixed it for the 6.0 release.
> As a workaround, for now I refactored the custom MLT handler to convert the 
> filters into boolean clauses, which takes care of the issue.
> Our configuration: 
> 1. mlt.maxqt=100
> 2. There is an additional filter passed as a parameter
> 3.  multiValued="true" omitNorms="false" termVectors="true"/>
> 4. text_general is a pretty standard text fieldType.
> I have code to populate a test dataset and run a query to reproduce this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9942) MoreLikeThis Performance Degraded With Filtered Query

2017-01-07 Thread Ivan Provalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Provalov updated SOLR-9942:

Attachment: solr_mlt_test2.tar

test case

> MoreLikeThis Performance Degraded With Filtered Query
> -
>
> Key: SOLR-9942
> URL: https://issues.apache.org/jira/browse/SOLR-9942
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
> Attachments: solr_mlt_test2.tar
>
>
> Without any filters, MLT performs normally.  With any added filters, 
> performance degrades compared to 4.6.1 (2.5-3.0X in our case).  The issue 
> goes away with the 6.0 upgrade.  The hot method is Lucene's DisiPriorityQueue 
> downHeap(), which receives 5X more calls in 5.5.2 compared to 6.0.  I am 
> guessing that some of the Solr filter refactoring fixed it for the 6.0 release.
> As a workaround, for now I refactored the custom MLT handler to convert the 
> filters into boolean clauses, which takes care of the issue.
> Our configuration: 
> 1. mlt.maxqt=100
> 2. There is an additional filter passed as a parameter
> 3.  multiValued="true" omitNorms="false" termVectors="true"/>
> 4. text_general is a pretty standard text fieldType.
> I have code to populate a test dataset and run a query to reproduce this.






[jira] [Updated] (SOLR-9942) MoreLikeThis Performance Degraded With Filtered Query

2017-01-07 Thread Ivan Provalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Provalov updated SOLR-9942:

Description: 
Without any filters, MLT performs normally.  With any added filters, 
performance degrades compared to 4.6.1 (2.5-3.0X in our case).  The issue goes 
away with the 6.0 upgrade.  The hot method is Lucene's DisiPriorityQueue 
downHeap(), which receives 5X more calls in 5.5.2 compared to 6.0.  I am 
guessing that some of the Solr filter refactoring fixed it for the 6.0 release.

As a workaround, for now I refactored the custom MLT handler to convert the 
filters into boolean clauses, which takes care of the issue.

Our configuration: 
1. mlt.maxqt=100
2. There is an additional filter passed as a parameter
3. 
4. text_en is a pretty standard text fieldType.

I have code to populate a test dataset and run a query to reproduce this.

  was:
Without any filters, the MLT is performing normal.  With any added filters, the 
performance degrades (2.5-3.0X in our case).  The issue goes away with 6.0 
upgrade.  The hot method is Lucene's DisiPriorityQueue downHeap(), which takes 
5X more calls in 5.5.2 compared to 6.0.  I am guessing that some of the Solr 
filters refactoring fixed it for 6.0 release.

As a work-around, for now I just refactored the custom MLT handler to convert 
the filters into boolean clauses, which takes care of the issue.   

Our configuration: 
1. mlt.maxqt=100
2. There is an additional filter passed as a parameter
3. 
4. text_en is a pretty standard text fieldType.

I have a code to populate a test dataset and run a query in order to reproduce 
this.


> MoreLikeThis Performance Degraded With Filtered Query
> -
>
> Key: SOLR-9942
> URL: https://issues.apache.org/jira/browse/SOLR-9942
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
> Attachments: solr_mlt_test.tar
>
>
> Without any filters, MLT performs normally.  With any added filters, 
> performance degrades compared to 4.6.1 (2.5-3.0X in our case).  The issue 
> goes away with the 6.0 upgrade.  The hot method is Lucene's DisiPriorityQueue 
> downHeap(), which receives 5X more calls in 5.5.2 compared to 6.0.  I am 
> guessing that some of the Solr filter refactoring fixed it for the 6.0 release.
> As a workaround, for now I refactored the custom MLT handler to convert the 
> filters into boolean clauses, which takes care of the issue.
> Our configuration: 
> 1. mlt.maxqt=100
> 2. There is an additional filter passed as a parameter
> 3.  multiValued="true" omitNorms="false" termVectors="true"/>
> 4. text_en is a pretty standard text fieldType.
> I have code to populate a test dataset and run a query to reproduce this.






[jira] [Updated] (SOLR-9942) MoreLikeThis Performance Degraded With Filtered Query

2017-01-07 Thread Ivan Provalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Provalov updated SOLR-9942:

Attachment: solr_mlt_test.tar

test for mlt performance issue

> MoreLikeThis Performance Degraded With Filtered Query
> -
>
> Key: SOLR-9942
> URL: https://issues.apache.org/jira/browse/SOLR-9942
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
> Attachments: solr_mlt_test.tar
>
>
> Without any filters, MLT performs normally.  With any added filters, 
> performance degrades (2.5-3.0X in our case).  The issue goes away with the 
> 6.0 upgrade.  The hot method is Lucene's DisiPriorityQueue downHeap(), which 
> receives 5X more calls in 5.5.2 compared to 6.0.  I am guessing that some of 
> the Solr filter refactoring fixed it for the 6.0 release.
> As a workaround, for now I refactored the custom MLT handler to convert the 
> filters into boolean clauses, which takes care of the issue.
> Our configuration: 
> 1. mlt.maxqt=100
> 2. There is an additional filter passed as a parameter
> 3.  multiValued="true" omitNorms="false" termVectors="true"/>
> 4. text_en is a pretty standard text fieldType.
> I have code to populate a test dataset and run a query to reproduce this.






[jira] [Created] (SOLR-9942) MoreLikeThis Performance Degraded With Filtered Query

2017-01-07 Thread Ivan Provalov (JIRA)
Ivan Provalov created SOLR-9942:
---

 Summary: MoreLikeThis Performance Degraded With Filtered Query
 Key: SOLR-9942
 URL: https://issues.apache.org/jira/browse/SOLR-9942
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: MoreLikeThis
Affects Versions: 5.5.2
Reporter: Ivan Provalov


Without any filters, MLT performs normally.  With any added filters, 
performance degrades (2.5-3.0X in our case).  The issue goes away with the 6.0 
upgrade.  The hot method is Lucene's DisiPriorityQueue downHeap(), which 
receives 5X more calls in 5.5.2 compared to 6.0.  I am guessing that some of 
the Solr filter refactoring fixed it for the 6.0 release.

As a workaround, for now I refactored the custom MLT handler to convert the 
filters into boolean clauses, which takes care of the issue.

Our configuration: 
1. mlt.maxqt=100
2. There is an additional filter passed as a parameter
3. 
4. text_en is a pretty standard text fieldType.

I have code to populate a test dataset and run a query to reproduce this.






[jira] [Commented] (SOLR-9509) Fix problems in shell scripts reported by "shellcheck"

2017-01-07 Thread Rishabh Patel (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807706#comment-15807706
 ] 

Rishabh Patel commented on SOLR-9509:
-

May I work on this, if it has not been assigned?

> Fix problems in shell scripts reported by "shellcheck"
> --
>
> Key: SOLR-9509
> URL: https://issues.apache.org/jira/browse/SOLR-9509
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>  Labels: newdev
> Attachments: bin_solr_shellcheck.txt, shellcheck_solr_20160915.txt, 
> shellcheck_solr_bin_bash_20160915.txt, shellcheck_solr_bin_sh_20160915.txt, 
> shellcheck_solr_usr_bin_env_bash_20160915.txt
>
>
> Running {{shellcheck}} on our shell scripts reveals various improvements we 
> should consider.






[jira] [Commented] (LUCENE-7622) Should BaseTokenStreamTestCase catch analyzers that create duplicate tokens?

2017-01-07 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807640#comment-15807640
 ] 

Uwe Schindler commented on LUCENE-7622:
---

Hi Robert. I know that you can tune it. Maybe I was a bit unclear. I wanted to 
say that, unlike with the stupid CrappyDefaultSim, it is no longer possible to 
boost terms almost without limit (a document repeating the same term many times 
no longer beats all others). So repeating terms at the same position with a 
repeater token filter is still useful, but no longer so drastic. Sorry for 
being unclear. Maybe I will change or remove the last sentence in my comment to 
clear up the misunderstanding.

> Should BaseTokenStreamTestCase catch analyzers that create duplicate tokens?
> 
>
> Key: LUCENE-7622
> URL: https://issues.apache.org/jira/browse/LUCENE-7622
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7622.patch
>
>
> The change to BTSTC is quite simple, to catch any case where the same term 
> text spans from the same position with the same position length. Such 
> duplicate tokens are silly to add to the index, or to search at search time.
> Yet, this change produced many failures, and I looked briefly at them, and 
> they are cases that I think are actually OK, e.g. 
> {{PatternCaptureGroupTokenFilter}} capturing (..)(..) on the string {{ktkt}} 
> will create a duplicate token.
> Other cases looked more dubious, e.g. {{WordDelimiterFilter}}.
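The {{PatternCaptureGroupTokenFilter}} case above can be sketched outside Lucene. The following is a minimal Python illustration (not Lucene code; the function name and shape are hypothetical) of why capturing (..)(..) on {{ktkt}} emits a duplicate token: both capture groups match the text "kt", and the filter emits each captured group at the same position as the original token.

```python
import re

def capture_group_tokens(token: str, pattern: str):
    """Loosely mimic PatternCaptureGroupTokenFilter: emit the original
    token plus the text of every capture group, all at the same position."""
    tokens = [token]
    m = re.match(pattern, token)
    if m:
        # Each group's *text* is emitted as a token at the original position,
        # so two groups matching identical text produce a duplicate.
        tokens.extend(m.groups())
    return tokens

emitted = capture_group_tokens("ktkt", r"(..)(..)")
print(emitted)                             # ['ktkt', 'kt', 'kt']
print(len(emitted) != len(set(emitted)))   # True -- duplicate token detected
```

The groups capture different character offsets (0-2 and 2-4), but as token-stream entries they are the same term text at the same position, which is what the proposed BTSTC check would flag.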






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3762 - Unstable!

2017-01-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3762/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([633FC0C64BDC11BA:EB6BFF1CE5207C42]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:311)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:262)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:244)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 250 - Still Unstable

2017-01-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/250/

10 tests failed.
FAILED:  org.apache.lucene.facet.TestParallelDrillSideways.testRandom

Error Message:
expected:<10[032]> but was:<10[26]>

Stack Trace:
org.junit.ComparisonFailure: expected:<10[032]> but was:<10[26]>
at 
__randomizedtesting.SeedInfo.seed([294C85BC60DE41E8:5B00A0B3D1BEF79B]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.lucene.facet.TestDrillSideways.verifyEquals(TestDrillSideways.java:1034)
at 
org.apache.lucene.facet.TestDrillSideways.testRandom(TestDrillSideways.java:818)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.facet.TestParallelDrillSideways

Error Message:
3 threads leaked from SUITE scope at 
org.apache.lucene.facet.TestParallelDrillSideways: 1) Thread[id=88, 
name=LuceneTestCase-4-thread-3, state=WAITING, 
group=TGRP-TestParallelDrillSideways] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+147) - Build # 18719 - Unstable!

2017-01-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18719/
Java: 64bit/jdk-9-ea+147 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteAlias

Error Message:
Error from server at https://127.0.0.1:32931/solr: Could not fully create 
collection: aliasedCollection

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:32931/solr: Could not fully create collection: 
aliasedCollection
at 
__randomizedtesting.SeedInfo.seed([8AF6CD2B66C11468:44158AD1C67EE5F3]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1344)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1095)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1037)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteAlias(CollectionsAPISolrJTest.java:128)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:538)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7622) Should BaseTokenStreamTestCase catch analyzers that create duplicate tokens?

2017-01-07 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807508#comment-15807508
 ] 

Robert Muir commented on LUCENE-7622:
-

BM25 does not make this harder. It just normalizes term frequency in a way that 
isn't as brain-dead as {{sqrt}}. And unlike Crappy^H^H^H^HDefaultSimilarity, 
it's totally tunable without modifying source code, e.g. by adjusting the 
{{k1}} parameter to your needs.

Sorry, you are wrong: it only makes this kind of thing way easier.
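The tf-saturation point being discussed can be illustrated numerically. The following is a minimal Python sketch (not the actual Lucene BM25Similarity code, and with length normalization omitted, i.e. b = 0) comparing BM25's bounded term-frequency component against the unbounded sqrt(tf) of the old similarity:

```python
import math

def bm25_tf(tf: float, k1: float = 1.2) -> float:
    """BM25 tf component with length normalization omitted (b = 0).
    Saturates toward k1 + 1 as tf grows, so repeating a term gives
    rapidly diminishing returns."""
    return tf * (k1 + 1) / (tf + k1)

def classic_tf(tf: float) -> float:
    """ClassicSimilarity (formerly DefaultSimilarity) tf: unbounded sqrt."""
    return math.sqrt(tf)

for tf in (1, 10, 10_000):
    print(tf, round(bm25_tf(tf), 3), round(classic_tf(tf), 3))
# bm25_tf never exceeds k1 + 1 = 2.2 no matter how often the term repeats,
# while sqrt(tf) keeps growing without bound (sqrt(10_000) = 100).
```

This is why a token-repeating filter still boosts a term under BM25, but can no longer let a term-stuffed document dominate every other match, and why tuning {{k1}} changes how quickly the gain flattens out.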

> Should BaseTokenStreamTestCase catch analyzers that create duplicate tokens?
> 
>
> Key: LUCENE-7622
> URL: https://issues.apache.org/jira/browse/LUCENE-7622
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7622.patch
>
>
> The change to BTSTC is quite simple, to catch any case where the same term 
> text spans from the same position with the same position length. Such 
> duplicate tokens are silly to add to the index, or to search at search time.
> Yet, this change produced many failures, and I looked briefly at them, and 
> they are cases that I think are actually OK, e.g. 
> {{PatternCaptureGroupTokenFilter}} capturing (..)(..) on the string {{ktkt}} 
> will create a duplicate token.
> Other cases looked more dubious, e.g. {{WordDelimiterFilter}}.






[jira] [Resolved] (LUCENE-7611) Make suggester module use LongValuesSource

2017-01-07 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-7611.
---
   Resolution: Fixed
Fix Version/s: 6.4
   master (7.0)

> Make suggester module use LongValuesSource
> --
>
> Key: LUCENE-7611
> URL: https://issues.apache.org/jira/browse/LUCENE-7611
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7611.patch, LUCENE-7611.patch
>
>
> This allows us to remove the suggester module's dependency on the queries 
> module.






[jira] [Resolved] (LUCENE-7610) Migrate facets module from ValueSource to Double/LongValuesSource

2017-01-07 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-7610.
---
   Resolution: Fixed
Fix Version/s: 6.4
   master (7.0)

> Migrate facets module from ValueSource to Double/LongValuesSource
> -
>
> Key: LUCENE-7610
> URL: https://issues.apache.org/jira/browse/LUCENE-7610
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7610.patch, LUCENE-7610.patch
>
>
> Unfortunately this doesn't allow us to break the facets dependency on the 
> queries module, because facets also uses TermsQuery - perhaps this should 
> move to core as well?






[jira] [Commented] (LUCENE-7610) Migrate facets module from ValueSource to Double/LongValuesSource

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807477#comment-15807477
 ] 

ASF subversion and git services commented on LUCENE-7610:
-

Commit ce8b678ba19a53c43033a235bdca54e5a68adcc8 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ce8b678 ]

LUCENE-7610: Remove deprecated facet ValueSource methods


> Migrate facets module from ValueSource to Double/LongValuesSource
> -
>
> Key: LUCENE-7610
> URL: https://issues.apache.org/jira/browse/LUCENE-7610
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7610.patch, LUCENE-7610.patch
>
>
> Unfortunately this doesn't allow us to break the facets dependency on the 
> queries module, because facets also uses TermsQuery - perhaps this should 
> move to core as well?






[jira] [Commented] (LUCENE-7611) Make suggester module use LongValuesSource

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807478#comment-15807478
 ] 

ASF subversion and git services commented on LUCENE-7611:
-

Commit 8f4fee3ad1c0027587d0de96f59cf61b2df67bc8 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8f4fee3 ]

LUCENE-7611: Remove queries dependency from suggester module


> Make suggester module use LongValuesSource
> --
>
> Key: LUCENE-7611
> URL: https://issues.apache.org/jira/browse/LUCENE-7611
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7611.patch, LUCENE-7611.patch
>
>
> This allows us to remove the suggester module's dependency on the queries 
> module.






[jira] [Comment Edited] (SOLR-7691) SolrEntityProcessor as SubEntity doesn't work with delta-import

2017-01-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807458#comment-15807458
 ] 

Martin Čambal edited comment on SOLR-7691 at 1/7/17 12:53 PM:
--

Yes, the whole destroy() method in SolrEntityProcessor is useless: it can only 
work with full-import. None of the EntityProcessor classes (e.g. 
SqlEntityProcessor, LineEntityProcessor) that extend EntityProcessorBase have a 
destroy() method implemented this way.




was (Author: martin.cambal):
Yes, whole method destroy() in SolrEntityProcessor is useless. It can work only 
with full-import. None of EntityProcessor classes e.g. (SqlEntityProcessor, 
LineEntityProcessor) which extends EntityProcessorBase have destroy method() 
made in this way.



> SolrEntityProcessor as SubEntity doesn't work with delta-import
> ---
>
> Key: SOLR-7691
> URL: https://issues.apache.org/jira/browse/SOLR-7691
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.0, 5.1, 5.2, 5.2.1
>Reporter: Sebastian Krebs
>
> I've used the {{SolrEntityProcessor}} as sub-entity in the dataimporter like 
> this
> {code:lang=xml}
> <dataConfig>
>   <document>
>     <entity
>       name="outer"
>       dataSource="my_datasource"
>       pk="id"
>       query="..."
>       deltaQuery="..."
>       deltaImportQuery="..."
>     >
>       <entity
>         name="solr"
>         processor="SolrEntityProcessor"
>         url="http://127.0.0.1:8983/solr/${solr.core.name}"
>         query="Xid:${outer.Xid}"
>         rows="1"
>         fl="Id,FieldA,FieldB"
>         wt="javabin"
>       />
>     </entity>
>   </document>
> </dataConfig>
> {code}
> Recently I decided to upgrade to 5.x, but the delta-import stopped working. 
> Overall it looks like the http-connection used by the {{SolrEntityProcessor}} 
> is closed right _after_ the request/response: the first document is 
> indexed properly, and for the second document the dataimport fetches the 
> record from the database, but after that it exits.
> This is the stacktrace taken from the log
> {code:lang=none}
> java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> java.lang.IllegalStateException: Connection pool shut down
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:270)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:444)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:482)
> at 
> org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:461)
> Caused by: java.lang.RuntimeException: 
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> java.lang.IllegalStateException: Connection pool shut down
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:416)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:363)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:224)
> ... 3 more
> Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException: 
> java.lang.IllegalStateException: Connection pool shut down
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:62)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:246)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> ... 5 more
> Caused by: java.lang.IllegalStateException: Connection pool shut down
> at org.apache.http.util.Asserts.check(Asserts.java:34)
> at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:184)
> at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:217)
> at 
> org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:184)
> at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
> at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
> at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
> at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
> at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
> at 
> 

[jira] [Commented] (SOLR-7691) SolrEntityProcessor as SubEntity doesn't work with delta-import

2017-01-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807458#comment-15807458
 ] 

Martin Čambal commented on SOLR-7691:
-

Yes, the whole destroy() method in SolrEntityProcessor is useless: it can only 
work with full-import. None of the EntityProcessor classes (e.g. 
SqlEntityProcessor, LineEntityProcessor) that extend EntityProcessorBase have a 
destroy() method implemented this way.
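The failure mode described above can be illustrated with a minimal, self-contained sketch. These are hypothetical stand-in classes, not the actual Solr or HttpClient source: a sub-entity processor whose destroy() shuts down a shared connection pool survives one full-import pass, but the second parent row of a delta-import finds the pool already closed:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class PoolShutdownSketch {
    // Stand-in for HttpClient's pooling connection manager.
    static class ConnectionPool {
        private final AtomicBoolean shutdown = new AtomicBoolean(false);
        String lease() {
            if (shutdown.get()) {
                throw new IllegalStateException("Connection pool shut down");
            }
            return "connection";
        }
        void shutdown() { shutdown.set(true); }
    }

    // Stand-in for a sub-entity processor: destroy() runs after *each*
    // parent row during delta-import, but tears down the shared pool.
    static class SubEntityProcessor {
        private final ConnectionPool pool;
        SubEntityProcessor(ConnectionPool pool) { this.pool = pool; }
        String nextRow() { return pool.lease(); }
        void destroy() { pool.shutdown(); } // too aggressive for delta-import
    }

    public static void main(String[] args) {
        ConnectionPool sharedPool = new ConnectionPool();
        // First parent row: works, then destroy() closes the shared pool.
        SubEntityProcessor p1 = new SubEntityProcessor(sharedPool);
        System.out.println("row 1: " + p1.nextRow());
        p1.destroy();
        // Second parent row: same shared pool, now shut down.
        SubEntityProcessor p2 = new SubEntityProcessor(sharedPool);
        try {
            p2.nextRow();
        } catch (IllegalStateException e) {
            System.out.println("row 2 failed: " + e.getMessage());
        }
    }
}
```

With full-import there is only one pass, so the shutdown is never observed; delta-import hits it on the second row, matching the stack trace below.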



> SolrEntityProcessor as SubEntity doesn't work with delta-import
> ---
>
> Key: SOLR-7691
> URL: https://issues.apache.org/jira/browse/SOLR-7691
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.0, 5.1, 5.2, 5.2.1
>Reporter: Sebastian Krebs
>
> I've used the {{SolrEntityProcessor}} as sub-entity in the dataimporter like 
> this
> {code:lang=xml}
> <dataConfig>
>   <document>
>     <entity
>       name="outer"
>       dataSource="my_datasource"
>       pk="id"
>       query="..."
>       deltaQuery="..."
>       deltaImportQuery="..."
>     >
>       <entity
>         name="solr"
>         processor="SolrEntityProcessor"
>         url="http://127.0.0.1:8983/solr/${solr.core.name}"
>         query="Xid:${outer.Xid}"
>         rows="1"
>         fl="Id,FieldA,FieldB"
>         wt="javabin"
>       />
>     </entity>
>   </document>
> </dataConfig>
> {code}
> Recently I decided to upgrade to 5.x, but the delta-import stopped working. 
> Overall it looks like the http-connection used by the {{SolrEntityProcessor}} 
> is closed right _after_ the request/response: the first document is 
> indexed properly, and for the second document the dataimport fetches the 
> record from the database, but after that it exits.
> This is the stacktrace taken from the log
> {code:lang=none}
> java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> java.lang.IllegalStateException: Connection pool shut down
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:270)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:444)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:482)
> at 
> org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:461)
> Caused by: java.lang.RuntimeException: 
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> java.lang.IllegalStateException: Connection pool shut down
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:416)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:363)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:224)
> ... 3 more
> Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException: 
> java.lang.IllegalStateException: Connection pool shut down
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:62)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:246)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> ... 5 more
> Caused by: java.lang.IllegalStateException: Connection pool shut down
> at org.apache.http.util.Asserts.check(Asserts.java:34)
> at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:184)
> at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:217)
> at 
> org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:184)
> at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
> at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
> at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
> at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
> at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:466)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:235)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:227)
> at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
> at 

[jira] [Commented] (LUCENE-7620) UnifiedHighlighter: add target character width BreakIterator wrapper

2017-01-07 Thread Jim Ferenczi (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807423#comment-15807423
 ] 

Jim Ferenczi commented on LUCENE-7620:
--

That makes sense. The tests look good. I think it's ready, thanks for doing 
this David!

> UnifiedHighlighter: add target character width BreakIterator wrapper
> 
>
> Key: LUCENE-7620
> URL: https://issues.apache.org/jira/browse/LUCENE-7620
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.4
>
> Attachments: LUCENE_7620_UH_LengthGoalBreakIterator.patch, 
> LUCENE_7620_UH_LengthGoalBreakIterator.patch, 
> LUCENE_7620_UH_LengthGoalBreakIterator.patch
>
>
> The original Highlighter includes a {{SimpleFragmenter}} that delineates 
> fragments (aka Passages) by a character width.  The default is 100 characters.
> It would be great to support something similar for the UnifiedHighlighter.  
> It's useful in its own right and of course it helps users transition to the 
> UH.  I'd like to do it as a wrapper to another BreakIterator -- perhaps a 
> sentence one.  In this way you get back Passages that are a number of 
> sentences so they will look nice instead of breaking mid-way through a 
> sentence.  And you get some control by specifying a target number of 
> characters.  This BreakIterator wouldn't be a general purpose 
> java.text.BreakIterator since it would assume it's called in a manner exactly 
> as the UnifiedHighlighter uses it.  It would probably be compatible with the 
> PostingsHighlighter too.
> I don't propose doing this by default; besides, it's easy enough to pick your 
> BreakIterator config.
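The proposed wrapper can be sketched roughly like this (a simplified illustration using java.text.BreakIterator directly, not the committed patch): keep taking sentence boundaries from the delegate until the passage reaches a target character count:

```java
import java.text.BreakIterator;
import java.util.Locale;

public class LengthGoalSketch {
    // Advance through sentence boundaries until at least targetLen chars
    // have accumulated from 'start'; return the chosen break offset.
    static int followingWithGoal(BreakIterator sentences, int start, int targetLen) {
        int boundary = sentences.following(start);
        while (boundary != BreakIterator.DONE && boundary - start < targetLen) {
            int next = sentences.next();
            if (next == BreakIterator.DONE) {
                break; // ran out of text; keep the last boundary we have
            }
            boundary = next;
        }
        return boundary;
    }

    public static void main(String[] args) {
        String text = "First sentence. Second one is here. Third sentence ends it.";
        BreakIterator sentences = BreakIterator.getSentenceInstance(Locale.US);
        sentences.setText(text);
        // With a 30-char goal, the first sentence alone is too short, so the
        // passage extends to the end of the second sentence and no further.
        int end = followingWithGoal(sentences, 0, 30);
        System.out.println(text.substring(0, end));
    }
}
```

Passages still end on sentence boundaries, so snippets look natural while averaging close to the requested width.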






[jira] [Commented] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807415#comment-15807415
 ] 

ASF subversion and git services commented on SOLR-9928:
---

Commit e5f39f62f76677a5f500af4f323c0c31afb26228 in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e5f39f6 ]

SOLR-9928 Unwrap Directory consistently whenever it's passed as an argument.


> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0), 6.4
>Reporter: Mike Drob
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9928.patch, SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.
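The delegate-vs-super bug pattern is easy to see with toy classes (illustrative stand-ins, not the actual MetricsDirectoryFactory code): a wrapper that calls super uses the base implementation and silently bypasses whatever factory it wraps:

```java
public class DelegateVsSuper {
    static class DirectoryFactory {
        String rename(String from, String to) { return "base rename " + from + "->" + to; }
    }

    // A subclass with its own behaviour that a wrapper must not skip.
    static class CustomFactory extends DirectoryFactory {
        @Override String rename(String from, String to) { return "custom rename " + from + "->" + to; }
    }

    // Metrics-style wrapper: the correct implementation forwards to the
    // delegate; calling super.rename(...) here would instead run the base
    // class's code and bypass CustomFactory entirely.
    static class MetricsFactory extends DirectoryFactory {
        private final DirectoryFactory delegate;
        MetricsFactory(DirectoryFactory delegate) { this.delegate = delegate; }
        @Override String rename(String from, String to) { return delegate.rename(from, to); }
    }

    public static void main(String[] args) {
        DirectoryFactory wrapped = new MetricsFactory(new CustomFactory());
        System.out.println(wrapped.rename("a", "b")); // custom rename a->b
    }
}
```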






[jira] [Resolved] (LUCENE-7609) Refactor expressions module to use DoubleValuesSource

2017-01-07 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-7609.
---
   Resolution: Fixed
 Assignee: Alan Woodward
Fix Version/s: 6.4

Thanks for the reviews Adrien!

> Refactor expressions module to use DoubleValuesSource
> -
>
> Key: LUCENE-7609
> URL: https://issues.apache.org/jira/browse/LUCENE-7609
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.4
>
> Attachments: LUCENE-7609.patch, LUCENE-7609.patch
>
>
> With DoubleValuesSource in core, we can refactor the expressions module to 
> use these instead of ValueSource, and remove the dependency of expressions on 
> the queries module in master.






[jira] [Resolved] (LUCENE-7617) Improve GroupingSearch API and extensibility

2017-01-07 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-7617.
---
   Resolution: Fixed
Fix Version/s: 6.4

The ASF bot didn't pick up the commits, for some reason:

branch_6x: d4d3ede51cc114ad98fb05e19fd6c6e15e724f34
master: da30f21f5d2c90a4e3d4fae87a297adfd4bbb3cb

Thanks for the reviews Martijn!

> Improve GroupingSearch API and extensibility
> 
>
> Key: LUCENE-7617
> URL: https://issues.apache.org/jira/browse/LUCENE-7617
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 6.4
>
> Attachments: LUCENE-7617.patch, LUCENE-7617.patch, LUCENE-7617.patch
>
>
> While looking at how to make grouping work with the new XValuesSource API in 
> core, I thought I'd try and clean up GroupingSearch a bit.  We have three 
> different ways of grouping at the moment: by doc block, using a single-pass 
> collector; by field; and by ValueSource.  The latter two both use essentially 
> the same two-pass mechanism, with different Collector implementations.
> I can see a number of possible improvements here:
> * abstract the two-pass collector creation into a factory API, which should 
> allow us to add the XValuesSource implementations as well
> * clean up the generics on the two-pass collectors - maybe look into removing 
> them entirely?  I'm not sure they add anything really, and we don't have them 
> on the equivalent plain search APIs
> * think about moving the document block method into the join module instead, 
> alongside all the other block-indexing code
> * rename the various Collector base classes so that they don't have 
> 'Abstract' in them anymore






[jira] [Commented] (LUCENE-7611) Make suggester module use LongValuesSource

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807385#comment-15807385
 ] 

ASF subversion and git services commented on LUCENE-7611:
-

Commit 1a95c5acd0f69efb1a24b2c980a289289e703758 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1a95c5a ]

LUCENE-7611: Suggester uses LongValuesSource in place of ValueSource


> Make suggester module use LongValuesSource
> --
>
> Key: LUCENE-7611
> URL: https://issues.apache.org/jira/browse/LUCENE-7611
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7611.patch, LUCENE-7611.patch
>
>
> This allows us to remove the suggester module's dependency on the queries 
> module.






[jira] [Commented] (LUCENE-7609) Refactor expressions module to use DoubleValuesSource

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807383#comment-15807383
 ] 

ASF subversion and git services commented on LUCENE-7609:
-

Commit 8b055382d6c88acaed9fe472a038c7ee6b35c016 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8b05538 ]

LUCENE-7609: Refactor expressions module to use DoubleValuesSource


> Refactor expressions module to use DoubleValuesSource
> -
>
> Key: LUCENE-7609
> URL: https://issues.apache.org/jira/browse/LUCENE-7609
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Attachments: LUCENE-7609.patch, LUCENE-7609.patch
>
>
> With DoubleValuesSource in core, we can refactor the expressions module to 
> use these instead of ValueSource, and remove the dependency of expressions on 
> the queries module in master.






[jira] [Commented] (LUCENE-7610) Migrate facets module from ValueSource to Double/LongValuesSource

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807384#comment-15807384
 ] 

ASF subversion and git services commented on LUCENE-7610:
-

Commit 713b65d1dcc80c1fe147a5bf999e1a88b63b9dce in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=713b65d ]

LUCENE-7610: Deprecate ValueSource methods in facets module


> Migrate facets module from ValueSource to Double/LongValuesSource
> -
>
> Key: LUCENE-7610
> URL: https://issues.apache.org/jira/browse/LUCENE-7610
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7610.patch, LUCENE-7610.patch
>
>
> Unfortunately this doesn't allow us to break the facets dependency on the 
> queries module, because facets also uses TermsQuery - perhaps this should 
> move to core as well?






[jira] [Commented] (LUCENE-7611) Make suggester module use LongValuesSource

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807381#comment-15807381
 ] 

ASF subversion and git services commented on LUCENE-7611:
-

Commit d268055ca3f6fc6885940bdca39bed36b8f558fc in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d268055 ]

LUCENE-7611: Suggester uses LongValuesSource in place of ValueSource


> Make suggester module use LongValuesSource
> --
>
> Key: LUCENE-7611
> URL: https://issues.apache.org/jira/browse/LUCENE-7611
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7611.patch, LUCENE-7611.patch
>
>
> This allows us to remove the suggester module's dependency on the queries 
> module.






[jira] [Commented] (LUCENE-7609) Refactor expressions module to use DoubleValuesSource

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807379#comment-15807379
 ] 

ASF subversion and git services commented on LUCENE-7609:
-

Commit 776087eef48dbeba639b94b574f806b7265a7ffe in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=776087e ]

LUCENE-7609: Refactor expressions module to use DoubleValuesSource


> Refactor expressions module to use DoubleValuesSource
> -
>
> Key: LUCENE-7609
> URL: https://issues.apache.org/jira/browse/LUCENE-7609
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Attachments: LUCENE-7609.patch, LUCENE-7609.patch
>
>
> With DoubleValuesSource in core, we can refactor the expressions module to 
> use these instead of ValueSource, and remove the dependency of expressions on 
> the queries module in master.






[jira] [Commented] (LUCENE-7610) Migrate facets module from ValueSource to Double/LongValuesSource

2017-01-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807380#comment-15807380
 ] 

ASF subversion and git services commented on LUCENE-7610:
-

Commit a238610bab1499b340fde8e120f02b33233b40e1 in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a238610 ]

LUCENE-7610: Deprecate ValueSource methods in facets module


> Migrate facets module from ValueSource to Double/LongValuesSource
> -
>
> Key: LUCENE-7610
> URL: https://issues.apache.org/jira/browse/LUCENE-7610
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7610.patch, LUCENE-7610.patch
>
>
> Unfortunately this doesn't allow us to break the facets dependency on the 
> queries module, because facets also uses TermsQuery - perhaps this should 
> move to core as well?






[jira] [Comment Edited] (LUCENE-7588) A parallel DrillSideways implementation

2017-01-07 Thread Emmanuel Keller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807344#comment-15807344
 ] 

Emmanuel Keller edited comment on LUCENE-7588 at 1/7/17 11:39 AM:
--

This patch changes the verifyEquals behaviour. It checks that the documents are 
present and equal, regardless of the order.


was (Author: ekeller):
This patch change the verifyEquals behaviour. It checks that the documents are 
present and that they are equals, regardless the order.

> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7588.patch, lucene-7588-test.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On large document sets, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager-based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.






[jira] [Updated] (LUCENE-7588) A parallel DrillSideways implementation

2017-01-07 Thread Emmanuel Keller (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Emmanuel Keller updated LUCENE-7588:

Attachment: lucene-7588-test.patch

This patch changes the verifyEquals behaviour. It checks that the documents are 
present and equal, regardless of the order.

> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7588.patch, lucene-7588-test.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On large document sets, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager-based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.






[jira] [Comment Edited] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order

2017-01-07 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807338#comment-15807338
 ] 

Ishan Chattopadhyaya edited comment on SOLR-9941 at 1/7/17 11:35 AM:
-

Attaching a fix to clear out the DBQs and oldDeletes lists before log replay 
on startup. [~hossman], [~markrmil...@gmail.com], [~yo...@apache.org], can 
you please review / suggest alternate fixes?


was (Author: ichattopadhyaya):
Attaching a fix to clear out the DBQs and oldDeletes lists before a log replay 
upon a startup. [~hossman], [~markrmil...@gmail.com], [~yo...@apache.org], can 
you please review?

> log replay redundently (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9941.patch, SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog, causing deletes to be 
> redundantly & excessively applied -- at a minimum it causes really confusing 
> log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "Reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low-level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs, followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 
> 90 of those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is because 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.
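The arithmetic in that last paragraph can be checked with a toy simulation (not Solr code): in a replay of 90 adds followed by 5 DBQs, every add pre-applies all 5 "newer" DBQs, and each DBQ is then replayed once more from the tlog itself:

```java
public class ReplaySketch {
    public static void main(String[] args) {
        int adds = 90, dbqs = 5;
        int[] executions = new int[dbqs];

        // During replay, every add triggers all DBQs with a newer version.
        for (int add = 0; add < adds; add++) {
            for (int q = 0; q < dbqs; q++) {
                executions[q]++;
            }
        }
        // The DBQs are also replayed once each from the tlog itself.
        for (int q = 0; q < dbqs; q++) {
            executions[q]++;
        }

        // Each DBQ runs 91 times: 90 pre-applications + 1 normal replay.
        for (int q = 0; q < dbqs; q++) {
            System.out.println("DBQ " + q + " executed " + executions[q] + " times");
        }
    }
}
```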






[jira] [Updated] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order

2017-01-07 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9941:
---
Attachment: SOLR-9941.patch

Attaching a fix to clear out the DBQs and oldDeletes lists before log replay 
on startup. [~hossman], [~markrmil...@gmail.com], [~yo...@apache.org], can 
you please review?

> log replay redundently (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9941.patch, SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from it's tlog that causes deletes to be 
> redundantly & excessively applied -- at a minimum it causes confusing really 
> log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called a {{LogReplayer}} is used 
> to replay any (uncommited) {{TransactionLog}} enteries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> to for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "Reordered" deletes that have a version greater then the add
> *** if it finds _any_ DBQs "newer" then the document being added, it does a 
> low level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs, followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 
> 90 of those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is because 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.
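The arithmetic above (90 adds followed by 5 DBQs means each DBQ runs 91 times) can be sketched as a toy simulation. This is plain Java with hypothetical names, not Solr code:

```java
// Toy simulation of the replay behavior described above: each replayed add
// pre-applies every "reordered" DBQ once, and the DBQs are then also
// replayed from the tlog themselves.
public class DbqReplayCount {

    // Returns how many times each DBQ ends up being executed when a tlog
    // containing `adds` addDocs followed by `dbqs` DBQs is replayed.
    static int[] simulate(int adds, int dbqs) {
        int[] executions = new int[dbqs];
        // During replay, every add finds all DBQs "newer" than itself and
        // executes each of them once (DUH2's reordered-DBQ handling).
        for (int add = 0; add < adds; add++) {
            for (int q = 0; q < dbqs; q++) {
                executions[q]++;
            }
        }
        // The DBQs are also executed as part of normal tlog replay,
        // because they are in the tlog.
        for (int q = 0; q < dbqs; q++) {
            executions[q]++;
        }
        return executions;
    }

    public static void main(String[] args) {
        for (int count : simulate(90, 5)) {
            System.out.println("DBQ executed " + count + " times"); // 91 each
        }
    }
}
```

Clearing the pre-populated delete lists before replay, as the attached patch describes, removes the 90 redundant pre-applications and leaves each DBQ executed once.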



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7588) A parallel DrillSideways implementation

2017-01-07 Thread Emmanuel Keller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807272#comment-15807272
 ] 

Emmanuel Keller edited comment on LUCENE-7588 at 1/7/17 10:58 AM:
--

Both the actual and the expected arrays contain 24 documents, but they are not 
sorted identically.

The test expects the retrieved ScoreDoc array to be ordered; however, the 
scores are identical for all documents.

As we are using a multithreaded map/reduce design, we can't expect the 
order to be preserved.
[~mikemccand] am I right?

IMHO, the equality check must be modified to only check that each document is 
present with an equal score.

Here is the current check in the test for the ScoreDoc array:

{code:java}
for (int i = 0; i < expected.hits.size(); i++) {
  if (VERBOSE) {
System.out.println("hit " + i + " expected=" + 
expected.hits.get(i).id);
  }
  assertEquals(expected.hits.get(i).id, 
s.doc(actual.hits.scoreDocs[i].doc).get("id"));
  // Score should be IDENTICAL:
  assertEquals(scores.get(expected.hits.get(i).id), 
actual.hits.scoreDocs[i].score, 0.0f);
}
{code}
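The order-insensitive check proposed above could look like the following. This is a hypothetical standalone helper (class and method names are made up, and hits are modeled as an id-to-score map rather than real ScoreDocs), not the actual Lucene test code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an order-insensitive hit comparison: instead of comparing hits
// positionally, verify that both sides contain the same ids with the same
// scores, regardless of the order threads delivered them in.
public class UnorderedHitsCheck {

    static boolean sameHits(Map<String, Float> expected, Map<String, Float> actual) {
        if (expected.size() != actual.size()) {
            return false;
        }
        for (Map.Entry<String, Float> e : expected.entrySet()) {
            Float score = actual.get(e.getKey());
            // Float.compare handles NaN and signed zero consistently.
            if (score == null || Float.compare(score, e.getValue()) != 0) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Float> expected = new HashMap<>();
        expected.put("doc1", 1.0f);
        expected.put("doc2", 1.0f);
        Map<String, Float> actual = new HashMap<>();
        actual.put("doc2", 1.0f); // different arrival order does not matter
        actual.put("doc1", 1.0f);
        System.out.println(sameHits(expected, actual)); // prints "true"
    }
}
```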


was (Author: ekeller):
Both the actual and the expected arrays contain 24 documents, but they are not 
sorted identically.

The test expects the retrieved ScoreDoc array to be ordered, but in this test 
the scores are identical for all documents.

As we are using a multithreaded map/reduce design, we can't expect the 
order to be preserved.
[~mikemccand] am I right?

IMHO, the equality check must be modified to only check that each document is 
present with an equal score.

Here is the current check in the test for the ScoreDoc array:

{code:java}
for (int i = 0; i < expected.hits.size(); i++) {
  if (VERBOSE) {
System.out.println("hit " + i + " expected=" + 
expected.hits.get(i).id);
  }
  assertEquals(expected.hits.get(i).id, 
s.doc(actual.hits.scoreDocs[i].doc).get("id"));
  // Score should be IDENTICAL:
  assertEquals(scores.get(expected.hits.get(i).id), 
actual.hits.scoreDocs[i].score, 0.0f);
}
{code}

> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7588.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On large document sets, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager-based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.
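Point 2 above can be sketched with a plain executor: submit each drill-sideways subquery to a thread pool and collect the per-dimension results. The "subqueries" here are stand-in Callables, not real Lucene queries, and all names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Run each (stand-in) drill-sideways subquery on its own thread and gather
// the results in submission order.
public class ParallelSubqueries {

    static List<Integer> runAll(List<Callable<Integer>> subqueries, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Integer> results = new ArrayList<>();
            // invokeAll blocks until every subquery has completed and
            // returns the futures in the same order they were submitted.
            for (Future<Integer> f : pool.invokeAll(subqueries)) {
                results.add(f.get());
            }
            return results;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        List<Callable<Integer>> subqueries = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            final int dim = i;
            subqueries.add(() -> dim * 10); // stand-in for one dimension's count
        }
        System.out.println(runAll(subqueries, 3)); // prints "[0, 10, 20]"
    }
}
```

Because the futures come back in submission order, per-dimension results stay deterministic even though the work itself runs concurrently; only result *ordering inside* each subquery is up to that subquery.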



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7588) A parallel DrillSideways implementation

2017-01-07 Thread Emmanuel Keller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807272#comment-15807272
 ] 

Emmanuel Keller edited comment on LUCENE-7588 at 1/7/17 10:50 AM:
--

Both the actual and the expected arrays contain 24 documents, but they are not 
sorted identically.

The test expects the retrieved ScoreDoc array to be ordered, but in this test 
the scores are identical for all documents.

As we are using a multithreaded map/reduce design, we can't expect the 
order to be preserved.
[~mikemccand] am I right?

IMHO, the equality check must be modified to only check that each document is 
present with an equal score.

Here is the current check in the test for the ScoreDoc array:

{code:java}
for (int i = 0; i < expected.hits.size(); i++) {
  if (VERBOSE) {
System.out.println("hit " + i + " expected=" + 
expected.hits.get(i).id);
  }
  assertEquals(expected.hits.get(i).id, 
s.doc(actual.hits.scoreDocs[i].doc).get("id"));
  // Score should be IDENTICAL:
  assertEquals(scores.get(expected.hits.get(i).id), 
actual.hits.scoreDocs[i].score, 0.0f);
}
{code}


was (Author: ekeller):
Both the actual and the expected arrays contain 24 documents, but they are not 
sorted identically.

The test expects the retrieved ScoreDoc array to be ordered, but in this test 
the scores are identical for all documents.

As we are using a multithreaded map/reduce design, we can't expect the 
order to be preserved.
[~mikemccand] am I right?

IMHO, the equality check must be modified to only check that each document is 
present with an equal score.

Here is the current check in the test for the ScoreDoc array:

{code:java}
for (int i = 0; i < expected.hits.size(); i++) {
  if (VERBOSE) {
System.out.println("hit " + i + " expected=" + 
expected.hits.get(i).id);
  }
  assertEquals(expected.hits.get(i).id, 
s.doc(actual.hits.scoreDocs[i].doc).get("id"));
  // Score should be IDENTICAL:
  assertEquals(scores.get(expected.hits.get(i).id), 
actual.hits.scoreDocs[i].score, 0.0f);
}
{code}

> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7588.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On large document sets, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager-based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7588) A parallel DrillSideways implementation

2017-01-07 Thread Emmanuel Keller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807272#comment-15807272
 ] 

Emmanuel Keller edited comment on LUCENE-7588 at 1/7/17 10:50 AM:
--

Both the actual and the expected arrays contain 24 documents, but they are not 
sorted identically.

The test expects the retrieved ScoreDoc array to be ordered, but in this test 
the scores are identical for all documents.

As we are using a multithreaded map/reduce design, we can't expect the 
order to be preserved.
[~mikemccand] am I right?

IMHO, the equality check must be modified to only check that each document is 
present with an equal score.

Here is the current check in the test for the ScoreDoc array:

{code:java}
for (int i = 0; i < expected.hits.size(); i++) {
  if (VERBOSE) {
System.out.println("hit " + i + " expected=" + 
expected.hits.get(i).id);
  }
  assertEquals(expected.hits.get(i).id, 
s.doc(actual.hits.scoreDocs[i].doc).get("id"));
  // Score should be IDENTICAL:
  assertEquals(scores.get(expected.hits.get(i).id), 
actual.hits.scoreDocs[i].score, 0.0f);
}
{code}


was (Author: ekeller):
The test expects the retrieved ScoreDoc array to be ordered. In this test, 
the scores are identical for all documents.

As we are using a multithreaded map/reduce design, we can't expect the 
order to be preserved.
[~mikemccand] am I right?

IMHO, the equality check must be modified to only check that the documents are 
present with the same score.

Here is the current check in the test for the ScoreDoc array:

{code:java}
for (int i = 0; i < expected.hits.size(); i++) {
  if (VERBOSE) {
System.out.println("hit " + i + " expected=" + 
expected.hits.get(i).id);
  }
  assertEquals(expected.hits.get(i).id, 
s.doc(actual.hits.scoreDocs[i].doc).get("id"));
  // Score should be IDENTICAL:
  assertEquals(scores.get(expected.hits.get(i).id), 
actual.hits.scoreDocs[i].score, 0.0f);
}
{code}

> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7588.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On large document sets, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager-based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7622) Should BaseTokenStreamTestCase catch analyzers that create duplicate tokens?

2017-01-07 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807277#comment-15807277
 ] 

Uwe Schindler commented on LUCENE-7622:
---

For the above boosting use cases, it would be better to have an additional 
attribute in TokenStreams that defaults to 1 but returns a "frequency" or 
"boost" if used. Then you could stop cloning the tokens. FYI: I know that BM25 
makes this type of boosting harder, but you can still add emphasis to tokens in 
a text by duplicating them.

> Should BaseTokenStreamTestCase catch analyzers that create duplicate tokens?
> 
>
> Key: LUCENE-7622
> URL: https://issues.apache.org/jira/browse/LUCENE-7622
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7622.patch
>
>
> The change to BTSTC is quite simple, to catch any case where the same term 
> text spans from the same position with the same position length. Such 
> duplicate tokens are silly to add to the index, or to search at search time.
> Yet, this change produced many failures, and I looked briefly at them, and 
> they are cases that I think are actually OK, e.g. 
> {{PatternCaptureGroupTokenFilter}} capturing (..)(..) on the string {{ktkt}} 
> will create a duplicate token.
> Other cases looked more dubious, e.g. {{WordDelimiterFilter}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7588) A parallel DrillSideways implementation

2017-01-07 Thread Emmanuel Keller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807272#comment-15807272
 ] 

Emmanuel Keller edited comment on LUCENE-7588 at 1/7/17 10:47 AM:
--

The test expects the retrieved ScoreDoc array to be ordered. In this test, 
the scores are identical for all documents.

As we are using a multithreaded map/reduce design, we can't expect the 
order to be preserved.
[~mikemccand] am I right?

IMHO, the equality check must be modified to only check that the documents are 
present with the same score.

Here is the current check in the test for the ScoreDoc array:

{code:java}
for (int i = 0; i < expected.hits.size(); i++) {
  if (VERBOSE) {
System.out.println("hit " + i + " expected=" + 
expected.hits.get(i).id);
  }
  assertEquals(expected.hits.get(i).id, 
s.doc(actual.hits.scoreDocs[i].doc).get("id"));
  // Score should be IDENTICAL:
  assertEquals(scores.get(expected.hits.get(i).id), 
actual.hits.scoreDocs[i].score, 0.0f);
}
{code}


was (Author: ekeller):
The test expects the retrieved ScoreDoc array to be ordered. In this test, 
the scores are identical for all documents.

As we are using a multithreaded map/reduce design, we can't expect the 
order to be preserved.
[~mikemccand] am I right?

IMHO, the equality check must be modified to only check that the documents are 
present with the same score.

{code:java}
for (int i = 0; i < expected.hits.size(); i++) {
  if (VERBOSE) {
System.out.println("hit " + i + " expected=" + 
expected.hits.get(i).id);
  }
  assertEquals(expected.hits.get(i).id, 
s.doc(actual.hits.scoreDocs[i].doc).get("id"));
  // Score should be IDENTICAL:
  assertEquals(scores.get(expected.hits.get(i).id), 
actual.hits.scoreDocs[i].score, 0.0f);
}
{code}

> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7588.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On large document sets, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager-based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7622) Should BaseTokenStreamTestCase catch analyzers that create duplicate tokens?

2017-01-07 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807274#comment-15807274
 ] 

Uwe Schindler commented on LUCENE-7622:
---

I agree that by default TokenStreams should not produce duplicate tokens, but 
there are use cases (boosting) where you might want to do this. E.g., if you 
want to raise the boost of a term in a document (e.g., if it's inside an HTML 
tag that carries emphasis), you can duplicate the token to increase its 
frequency (with the same position). The alternative would be payloads and a 
payload query, but duplication is cheap to do.

Also: if you use ASCIIFoldingFilter or stemming and add the folded/stemmed 
terms together with the original ones to the index, terms with no 
folding/stemming applied would get duplicated. But if you don't do this, the 
statistics would be wrong. I agree that for this case it would be better to 
have a separate field, but some people like to have it in the same one.
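The duplication trick described above can be modeled in plain Java. This is a toy sketch, not Lucene code: a token is reduced to (term, positionIncrement), and the emphasized term is emitted a second time with a position increment of 0 so the duplicate lands on the same position and doubles the term frequency:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of boosting-by-duplication: duplicate an emphasized term at the
// same position (posIncr = 0) so its term frequency rises without shifting
// the positions of the tokens that follow.
public class DuplicateTokenBoost {

    static final class Token {
        final String term;
        final int posIncr;
        Token(String term, int posIncr) { this.term = term; this.posIncr = posIncr; }
    }

    // Emit each term once; emit the emphasized term a second time with a
    // position increment of 0.
    static List<Token> emphasize(List<String> terms, String emphasized) {
        List<Token> out = new ArrayList<>();
        for (String t : terms) {
            out.add(new Token(t, 1));
            if (t.equals(emphasized)) {
                out.add(new Token(t, 0)); // duplicate at the same position
            }
        }
        return out;
    }

    // Term frequency as the index would see it: the duplicate counts twice.
    static Map<String, Integer> termFreqs(List<Token> tokens) {
        Map<String, Integer> freqs = new HashMap<>();
        for (Token t : tokens) {
            freqs.merge(t.term, 1, Integer::sum);
        }
        return freqs;
    }

    public static void main(String[] args) {
        Map<String, Integer> freqs = termFreqs(
            emphasize(Arrays.asList("fast", "search", "engine"), "search"));
        System.out.println("search tf = " + freqs.get("search")); // prints "search tf = 2"
    }
}
```

In real Lucene this would be a TokenFilter using captureState/restoreState and PositionIncrementAttribute; the sketch only illustrates why the frequency (and hence the TF component of scoring) goes up.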

> Should BaseTokenStreamTestCase catch analyzers that create duplicate tokens?
> 
>
> Key: LUCENE-7622
> URL: https://issues.apache.org/jira/browse/LUCENE-7622
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7622.patch
>
>
> The change to BTSTC is quite simple, to catch any case where the same term 
> text spans from the same position with the same position length. Such 
> duplicate tokens are silly to add to the index, or to search at search time.
> Yet, this change produced many failures, and I looked briefly at them, and 
> they are cases that I think are actually OK, e.g. 
> {{PatternCaptureGroupTokenFilter}} capturing (..)(..) on the string {{ktkt}} 
> will create a duplicate token.
> Other cases looked more dubious, e.g. {{WordDelimiterFilter}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7588) A parallel DrillSideways implementation

2017-01-07 Thread Emmanuel Keller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807272#comment-15807272
 ] 

Emmanuel Keller commented on LUCENE-7588:
-

The test expects the retrieved ScoreDoc array to be ordered. In this test, 
the scores are identical for all documents.

As we are using a multithreaded map/reduce design, we can't expect the 
order to be preserved.
[~mikemccand] am I right?

IMHO, the equality check must be modified to only check that the documents are 
present with the same score.

{code:java}
for (int i = 0; i < expected.hits.size(); i++) {
  if (VERBOSE) {
System.out.println("hit " + i + " expected=" + 
expected.hits.get(i).id);
  }
  assertEquals(expected.hits.get(i).id, 
s.doc(actual.hits.scoreDocs[i].doc).get("id"));
  // Score should be IDENTICAL:
  assertEquals(scores.get(expected.hits.get(i).id), 
actual.hits.scoreDocs[i].score, 0.0f);
}
{code}

> A parallel DrillSideways implementation
> ---
>
> Key: LUCENE-7588
> URL: https://issues.apache.org/jira/browse/LUCENE-7588
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0), 6.3.1
>Reporter: Emmanuel Keller
>Priority: Minor
>  Labels: facet, faceting
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7588.patch
>
>
> Currently the DrillSideways implementation is based on the single-threaded 
> IndexSearcher.search(Query query, Collector results).
> On large document sets, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager-based method IndexSearcher.search(Query query, 
> CollectorManager collectorManager) to get the benefits of multithreading on 
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7622) Should BaseTokenStreamTestCase catch analyzers that create duplicate tokens?

2017-01-07 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7622:
---
Attachment: LUCENE-7622.patch

Here's a simple patch ... but I don't plan to pursue this further now ... I 
think it's maybe too anal to insist on this from all analyzers ... so I'm 
posting the patch here in case anyone else gets itchy!

> Should BaseTokenStreamTestCase catch analyzers that create duplicate tokens?
> 
>
> Key: LUCENE-7622
> URL: https://issues.apache.org/jira/browse/LUCENE-7622
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7622.patch
>
>
> The change to BTSTC is quite simple, to catch any case where the same term 
> text spans from the same position with the same position length. Such 
> duplicate tokens are silly to add to the index, or to search at search time.
> Yet, this change produced many failures, and I looked briefly at them, and 
> they are cases that I think are actually OK, e.g. 
> {{PatternCaptureGroupTokenFilter}} capturing (..)(..) on the string {{ktkt}} 
> will create a duplicate token.
> Other cases looked more dubious, e.g. {{WordDelimiterFilter}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7622) Should BaseTokenStreamTestCase catch analyzers that create duplicate tokens?

2017-01-07 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-7622:
--

 Summary: Should BaseTokenStreamTestCase catch analyzers that 
create duplicate tokens?
 Key: LUCENE-7622
 URL: https://issues.apache.org/jira/browse/LUCENE-7622
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless


The change to BTSTC is quite simple: catch any case where the same term text 
spans from the same position with the same position length. Such duplicate 
tokens are silly to add to the index, or to search with at search time.

Yet this change produced many failures; I looked briefly at them, and they 
are cases that I think are actually OK, e.g. {{PatternCaptureGroupTokenFilter}} 
capturing (..)(..) on the string {{ktkt}} will create a duplicate token.

Other cases looked more dubious, e.g. {{WordDelimiterFilter}}.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1205 - Still Unstable

2017-01-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1205/

9 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testUpdateLogSynchronisation

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard2

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard2
at 
__randomizedtesting.SeedInfo.seed([7FE6A9EC141366C1:8189F14FD63345D0]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testUpdateLogSynchronisation(CdcrReplicationDistributedZkTest.java:377)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (LUCENE-7617) Improve GroupingSearch API and extensibility

2017-01-07 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-7617:
--
Attachment: LUCENE-7617.patch

Final patch.  I ended up removing the no-op group head collectors, as Solr was 
relying on the AllGroupHeadCollector returning a FixedBitSet - this should 
probably be just a Bits instance instead, but that can be dealt with in a later 
issue.  Will commit later on today.

> Improve GroupingSearch API and extensibility
> 
>
> Key: LUCENE-7617
> URL: https://issues.apache.org/jira/browse/LUCENE-7617
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-7617.patch, LUCENE-7617.patch, LUCENE-7617.patch
>
>
> While looking at how to make grouping work with the new XValuesSource API in 
> core, I thought I'd try and clean up GroupingSearch a bit.  We have three 
> different ways of grouping at the moment: by doc block, using a single-pass 
> collector; by field; and by ValueSource.  The latter two both use essentially 
> the same two-pass mechanism, with different Collector implementations.
> I can see a number of possible improvements here:
> * abstract the two-pass collector creation into a factory API, which should 
> allow us to add the XValuesSource implementations as well
> * clean up the generics on the two-pass collectors - maybe look into removing 
> them entirely?  I'm not sure they add anything really, and we don't have them 
> on the equivalent plain search APIs
> * think about moving the document block method into the join module instead, 
> alongside all the other block-indexing code
> * rename the various Collector base classes so that they don't have 
> 'Abstract' in them anymore



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7613) Update Surround query language

2017-01-07 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-7613:
-
Attachment: LUCENE-7613-spanstree.patch

Patch of 7 Jan 2017, combine with LUCENE-7580.

This issue and LUCENE-7580 both depend on LUCENE-7615, and this patch is to use 
that dependency only via LUCENE-7580.

To use this with SpansTreeQuery, apply the patch at LUCENE-7580 first, and then 
apply this patch of 7 Jan 2017.

This contains the changes of this issue to surround/query, updates the surround 
tests to use SpansTreeQuery.wrapAfterRewrite(), and changes a few expected 
document orders in the surround tests.



> Update Surround query language
> --
>
> Key: LUCENE-7613
> URL: https://issues.apache.org/jira/browse/LUCENE-7613
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-7613-spanstree.patch, LUCENE-7613.patch, 
> LUCENE-7613.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7614) Allow single prefix "phrase*" in complexphrase queryparser

2017-01-07 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved LUCENE-7614.
--
Resolution: Fixed

> Allow single prefix "phrase*" in complexphrase queryparser 
> ---
>
> Key: LUCENE-7614
> URL: https://issues.apache.org/jira/browse/LUCENE-7614
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Mikhail Khludnev
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7614.patch, LUCENE-7614.patch
>
>
> {quote}
> From  Otmar Caduff 
> Subject   ComplexPhraseQueryParser with wildcards
> Date  Tue, 20 Dec 2016 13:55:42 GMT
> Hi,
> I have an index with a single document with a field "field" and textual
> content "johnny peters" and I am using
> org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser to
> parse the query:
>field: (john* peter)
> When searching with this query, I am getting the document as expected.
> However with this query:
>field: ("john*" "peter")
> I am getting the following exception:
> Exception in thread "main" java.lang.IllegalArgumentException: Unknown
> query type "org.apache.lucene.search.PrefixQuery" found in phrase query
> string "john*"
> at
> org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery.rewrite(ComplexPhraseQueryParser.java:268)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9859) replication.properties cannot be updated after being written and neither replication.properties or index.properties are durable in the face of a crash

2017-01-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807127#comment-15807127
 ] 

Cao Manh Dat commented on SOLR-9859:


[~markrmil...@gmail.com] It seems we still have an exception being logged. It 
belongs to the case where "replication.properties" does not exist.
{code}
java.nio.file.NoSuchFileException: 
/tmp/solr.cloud.OnlyLeaderIndexesTest_77D87333D3A12E8B-001/tempDir-001/node2/collection1_shard1_replica3/data/replication.properties
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)
at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at 
org.apache.lucene.store.FSDirectory.privateDeleteFile(FSDirectory.java:373)
at org.apache.lucene.store.FSDirectory.deleteFile(FSDirectory.java:335)
at 
org.apache.lucene.store.FilterDirectory.deleteFile(FilterDirectory.java:62)
at 
org.apache.solr.core.DirectoryFactory.renameWithOverwrite(DirectoryFactory.java:193)
at 
org.apache.solr.core.MetricsDirectoryFactory.renameWithOverwrite(MetricsDirectoryFactory.java:201)
at 
org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:726)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:519)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:274)
at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:406)
at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$2(ReplicationHandler.java:1163)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
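Since the failure above is a {{Files.delete}} on a path that may not exist yet, one fix direction is a rename-with-overwrite that tolerates a missing destination. A minimal standalone sketch using plain {{java.nio.file}} (this is not the actual {{DirectoryFactory}} code; the class name and property content are made up for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class RenameWithOverwrite {
    // deleteIfExists is a no-op when the destination is absent, so the
    // first-ever replication does not throw NoSuchFileException.
    static void renameWithOverwrite(Path source, Path dest) throws IOException {
        Files.deleteIfExists(dest);
        Files.move(source, dest, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("repl");
        Path dst = dir.resolve("replication.properties");
        // First replication: destination does not exist yet; must not throw.
        Path tmp1 = Files.write(dir.resolve("replication.properties.tmp"),
                                "indexReplicatedAt=0".getBytes());
        renameWithOverwrite(tmp1, dst);
        // Second replication: destination already exists and is overwritten.
        Path tmp2 = Files.write(dir.resolve("replication.properties.tmp"),
                                "indexReplicatedAt=1".getBytes());
        renameWithOverwrite(tmp2, dst);
        System.out.println(new String(Files.readAllBytes(dst)));
    }
}
```

Both cases (destination absent and destination present) go through the same path without an exception.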


> replication.properties cannot be updated after being written and neither 
> replication.properties or index.properties are durable in the face of a crash
> --
>
> Key: SOLR-9859
> URL: https://issues.apache.org/jira/browse/SOLR-9859
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.3, 6.3
>Reporter: Pushkar Raste
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9859.patch, SOLR-9859.patch, SOLR-9859.patch, 
> SOLR-9859.patch, SOLR-9859.patch, SOLR-9859.patch
>
>
> If a shard recovers via replication (vs PeerSync) a file named 
> {{replication.properties}} gets created. If the same shard recovers once more 
> via replication, IndexFetcher fails to write the latest replication 
> information, as it tries to create {{replication.properties}} but the file 
> already exists. 
> Here is the stack trace I saw 
> {code}
> java.nio.file.FileAlreadyExistsException: 
> \shard-3-001\cores\collection1\data\replication.properties
>   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(Unknown Source)
>   at java.nio.file.spi.FileSystemProvider.newOutputStream(Unknown Source)
>   at java.nio.file.Files.newOutputStream(Unknown Source)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.(FSDirectory.java:413)
>   at 
> org.apache.lucene.store.FSDirectory$FSIndexOutput.(FSDirectory.java:409)
>   at 
> org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at 
> org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:689)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:501)
>   at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
>   at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_112) - Build # 18717 - Unstable!

2017-01-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18717/
Java: 64bit/jdk1.8.0_112 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([EC1BB8EBF423DABB]:0)


FAILED:  
org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat.testSparseShortNumericsVsStoredFields

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([EC1BB8EBF423DABB]:0)




Build Log:
[...truncated 1658 lines...]
   [junit4] JVM J2: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J2-20170107_053119_492.sysout
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Default case invoked for: 
   [junit4]opcode  = 0, "Node"
   [junit4] Default case invoked for: 
   [junit4]opcode  = 0, "Node"
   [junit4] Default case invoked for: 
   [junit4]opcode  = 200, "Phi"
   [junit4] <<< JVM J2: EOF 

[...truncated 116 lines...]
   [junit4] Suite: org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat
   [junit4] IGNOR/A 0.00s J1 | 
TestLucene70DocValuesFormat.testSortedSetVariableLengthManyVsStoredFields
   [junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
   [junit4] IGNOR/A 0.00s J1 | 
TestLucene70DocValuesFormat.testSortedVariableLengthManyVsStoredFields
   [junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
   [junit4] IGNOR/A 0.00s J1 | 
TestLucene70DocValuesFormat.testTermsEnumRandomMany
   [junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
   [junit4]   2> Jan 07, 2017 10:36:03 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
   [junit4]   2> WARNING: Suite execution timed out: 
org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat
   [junit4]   2>1) Thread[id=1, name=main, state=WAITING, group=main]
   [junit4]   2> at java.lang.Object.wait(Native Method)
   [junit4]   2> at java.lang.Thread.join(Thread.java:1249)
   [junit4]   2> at java.lang.Thread.join(Thread.java:1323)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:608)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.run(RandomizedRunner.java:457)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:243)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:354)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:10)
   [junit4]   2>2) Thread[id=11, name=JUnit4-serializer-daemon, 
state=TIMED_WAITING, group=main]
   [junit4]   2> at java.lang.Thread.sleep(Native Method)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.events.Serializer$1.run(Serializer.java:50)
   [junit4]   2>3) Thread[id=1882, 
name=SUITE-TestLucene70DocValuesFormat-seed#[EC1BB8EBF423DABB], state=RUNNABLE, 
group=TGRP-TestLucene70DocValuesFormat]
   [junit4]   2> at java.lang.Thread.getStackTrace(Thread.java:1556)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$4.run(ThreadLeakControl.java:690)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$4.run(ThreadLeakControl.java:687)
   [junit4]   2> at java.security.AccessController.doPrivileged(Native 
Method)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.getStackTrace(ThreadLeakControl.java:687)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.getThreadsWithTraces(ThreadLeakControl.java:703)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.formatThreadStacksFull(ThreadLeakControl.java:683)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.access$1000(ThreadLeakControl.java:64)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:415)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:678)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:140)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:598)
   [junit4]   2>4) Thread[id=1883, 
name=TEST-TestLucene70DocValuesFormat.testSparseShortNumericsVsStoredFields-seed#[EC1BB8EBF423DABB],
 state=TIMED_WAITING, 

[jira] [Comment Edited] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order

2017-01-07 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807087#comment-15807087
 ] 

Ishan Chattopadhyaya edited comment on SOLR-9941 at 1/7/17 8:21 AM:


Seems like the fix I posted in my previous patch won't work, because doing so 
would also preclude us from processing genuinely re-ordered DBQs, and hence 
leave out some documents that should've been deleted. Added a test for this 
(which should be committed anyway, I think).


was (Author: ichattopadhyaya):
Seems like the fix I posted below won't work due to the fact that doing so 
would preclude us from processing actually re-ordered DBQs also, and hence 
leave out some documents that should've been deleted. Added a test for this 
(which should anyway be committed, I think).

> log replay redundently (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog, which causes deletes to be 
> redundantly & excessively applied -- at a minimum it causes really confusing 
> log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called, a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 
> 90 of those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is because 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.
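The arithmetic in the description can be checked with a toy simulation (a deliberate simplification, not Solr code: versions are just replay positions, and all 5 DBQs have versions greater than every add's):

```java
import java.util.Arrays;

public class ReplayCount {
    public static void main(String[] args) {
        int numAdds = 90, numDbqs = 5;
        int[] executions = new int[numDbqs];
        // Pre-apply phase: every DBQ's version is greater than every add's
        // here, so a getDBQNewer-style lookup returns all of them for each
        // add, and each is executed once per add.
        for (int add = 0; add < numAdds; add++) {
            for (int q = 0; q < numDbqs; q++) {
                executions[q]++;
            }
        }
        // Normal replay: each DBQ is also executed once as a tlog entry.
        for (int q = 0; q < numDbqs; q++) {
            executions[q]++;
        }
        System.out.println(Arrays.toString(executions));
    }
}
```

Each of the 5 DBQs ends up executed 90 + 1 = 91 times, matching the count in the description.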



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order

2017-01-07 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9941:
---
Attachment: SOLR-9941.patch

Seems like the fix I posted below won't work, because doing so would also 
preclude us from processing genuinely re-ordered DBQs, and hence leave out 
some documents that should've been deleted. Added a test for this (which 
should be committed anyway, I think).

> log replay redundently (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog, which causes deletes to be 
> redundantly & excessively applied -- at a minimum it causes really confusing 
> log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called, a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 
> 90 of those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is because 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9928) MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super

2017-01-07 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807060#comment-15807060
 ] 

Andrzej Bialecki  commented on SOLR-9928:
-

bq. this factory tries to inject itself in an abnormal way, rather than 
counting on being configured
That's exactly the intent. The reason for this design is that I wanted the 
metrics monitoring to be added consistently no matter what implementation 
users provided, and I couldn't count on metrics being injected in every 
implementation (adding this functionality to the base {{DirectoryFactory}} 
wouldn't do it, because users would be free to create non-instrumented 
{{Directory}} impls anyway).

So yes, it looks like we need to be consistent and unwrap this one level in 
every call that takes {{Directory}} as argument.
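A minimal model of the pattern under discussion (hypothetical names, not the actual Solr classes): a metrics-style decorator must forward to its delegate and unwrap wrapped arguments, because calling {{super}} would silently skip the user-provided implementation:

```java
public class DelegateDemo {
    static class BaseFactory {
        String rename(String dir) { return "base:" + dir; }
    }
    // A user-provided implementation that the wrapper must not bypass.
    static class CustomFactory extends BaseFactory {
        @Override String rename(String dir) { return "custom:" + dir; }
    }
    // Decorator: forwards to the wrapped factory after unwrapping the
    // argument; calling super.rename(dir) here would skip CustomFactory.
    static class MetricsFactory extends BaseFactory {
        final BaseFactory delegate;
        MetricsFactory(BaseFactory delegate) { this.delegate = delegate; }
        @Override String rename(String dir) {
            return delegate.rename(unwrap(dir));
        }
        private String unwrap(String dir) { return dir.replace("metrics:", ""); }
    }

    public static void main(String[] args) {
        BaseFactory f = new MetricsFactory(new CustomFactory());
        System.out.println(f.rename("metrics:data"));
    }
}
```

The call reaches the custom implementation with the unwrapped argument; with {{super.rename(dir)}} instead, the output would be the base behavior on the still-wrapped argument.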

> MetricsDirectoryFactory::renameWithOverwrite incorrectly calls super
> 
>
> Key: SOLR-9928
> URL: https://issues.apache.org/jira/browse/SOLR-9928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0), 6.4
>Reporter: Mike Drob
>Assignee: Andrzej Bialecki 
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9928.patch, SOLR-9928.patch
>
>
> MetricsDirectoryFactory::renameWithOverwrite should call the delegate instead 
> of super. Trivial patch forthcoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org