[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4282 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4282/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.AutoAddReplicasPlanActionTest.testSimple

Error Message:
IOException occured when talking to server at: http://127.0.0.1:65361/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:65361/solr
at 
__randomizedtesting.SeedInfo.seed([ABE1F31C5D6DC3F4:9352D7E27A9E1725]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.autoscaling.AutoAddReplicasPlanActionTest.testSimple(AutoAddReplicasPlanActionTest.java:108)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 785 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/785/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:37771/c

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:37771/c
at 
__randomizedtesting.SeedInfo.seed([C7920C71E5634973:893179A2F4B85863]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:314)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Resolved] (SOLR-11472) Leader election bug

2017-11-09 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-11472.
--
Resolution: Duplicate

The root cause has been fixed in SOLR-11448. I found one similar test failure on 
Oct 23, after SOLR-11448 was committed, but the logs no longer exist 
and I haven't seen anything since. So I'll close this and re-open if necessary.

> Leader election bug
> ---
>
> Key: SOLR-11472
> URL: https://issues.apache.org/jira/browse/SOLR-11472
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Shalin Shekhar Mangar
> Attachments: 
> Console_output_of_AutoscalingHistoryHandlerTest_failure.txt
>
>
> SOLR-11407 uncovered a bug in leader election, where the same failing node is 
> retried indefinitely. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11628) Add documentation of maxRamMB for filter cache and query result cache

2017-11-09 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-11628.
--
Resolution: Fixed

> Add documentation of maxRamMB for filter cache and query result cache
> -
>
> Key: SOLR-11628
> URL: https://issues.apache.org/jira/browse/SOLR-11628
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 7.2, master (8.0)
>
>
> The query settings in solrconfig page uses LRUCache in the filter cache 
> example, but by default the FastLRUCache is used. Also, SOLR-9633 added 
> support for maxRamMB for the filter cache, which is not documented at all.
> https://builds.apache.org/view/L/view/Lucene/job/Solr-reference-guide-7.1/javadoc/query-settings-in-solrconfig.html
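Since the missing documentation is the whole point of this issue, here is a sketch of what a documented filterCache entry might look like in solrconfig.xml; the size, autowarmCount, and maxRamMB values are illustrative placeholders, not recommendations:

```xml
<!-- FastLRUCache is the default filter cache implementation. With
     maxRamMB set (support added in SOLR-9633), the cache is bounded by
     estimated RAM usage in megabytes rather than by entry count alone. -->
<filterCache class="solr.FastLRUCache"
             size="512"
             autowarmCount="128"
             maxRamMB="200"/>
```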






[jira] [Commented] (SOLR-11628) Add documentation of maxRamMB for filter cache and query result cache

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247110#comment-16247110
 ] 

ASF subversion and git services commented on SOLR-11628:


Commit 2f4ddae6bec4f2d64ec12daeb6ca22cafc682aa6 in lucene-solr's branch 
refs/heads/branch_7x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2f4ddae ]

SOLR-11628: Add documentation of maxRamMB for filter cache and query result 
cache

(cherry picked from commit b03c724)


> Add documentation of maxRamMB for filter cache and query result cache
> -
>
> Key: SOLR-11628
> URL: https://issues.apache.org/jira/browse/SOLR-11628
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 7.2, master (8.0)
>
>
> The query settings in solrconfig page uses LRUCache in the filter cache 
> example, but by default the FastLRUCache is used. Also, SOLR-9633 added 
> support for maxRamMB for the filter cache, which is not documented at all.
> https://builds.apache.org/view/L/view/Lucene/job/Solr-reference-guide-7.1/javadoc/query-settings-in-solrconfig.html






[jira] [Commented] (LUCENE-8040) Optimize IndexSearcher.collectionStatistics

2017-11-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247097#comment-16247097
 ] 

Robert Muir commented on LUCENE-8040:
-

It's not saving a "lot". We are talking about microseconds here either way.

IndexSearcher does not *contain* the query cache. The caching is at the segment 
level; you just configure it by passing it in there. Big difference.

I'm strongly against caching on IndexSearcher, especially for 
something that takes microseconds.
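For context, the kind of per-searcher caching being debated can be sketched as a simple memoization over field names. `SearcherStatsCache` and `computeStats` below are hypothetical stand-ins (the expensive path in Lucene is the `MultiFields.getTerms(...)` walk over every segment, and the real return type is `CollectionStatistics`); this is an illustration of the pattern, not Lucene's actual API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of memoizing per-field collection statistics on a
// searcher, the caching debated in LUCENE-8040. The "stats" here are just
// a Long; the real CollectionStatistics object is not modeled.
public class SearcherStatsCache {
    static int computations = 0;  // how many times the expensive path ran

    // Stand-in for the per-invocation MultiFields.getTerms(...) work,
    // which touches every segment of the index.
    static Long computeStats(String field) {
        computations++;
        return (long) field.length();  // dummy value
    }

    private final Map<String, Long> cache = new ConcurrentHashMap<>();

    // Each distinct field is computed at most once per searcher instance.
    public Long collectionStatistics(String field) {
        return cache.computeIfAbsent(field, SearcherStatsCache::computeStats);
    }

    public static void main(String[] args) {
        SearcherStatsCache searcher = new SearcherStatsCache();
        searcher.collectionStatistics("title");
        searcher.collectionStatistics("title");  // cache hit, no recompute
        searcher.collectionStatistics("body");
        System.out.println(computations);  // 2: one per distinct field
    }
}
```

Whether saving that recomputation is worth holding mutable state on the searcher is exactly the trade-off being argued here.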

> Optimize IndexSearcher.collectionStatistics
> ---
>
> Key: LUCENE-8040
> URL: https://issues.apache.org/jira/browse/LUCENE-8040
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 7.2
>
> Attachments: MyBenchmark.java, lucenecollectionStatisticsbench.zip
>
>
> {{IndexSearcher.collectionStatistics(field)}} can do a fair amount of work 
> because with each invocation it will call {{MultiFields.getTerms(...)}}.  The 
> effects of this are aggravated for queries with many fields since each field 
> will want statistics, and also aggravated when there are many segments.






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.1) - Build # 299 - Still unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/299/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestDemoParallelLeafReader

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_9E11470C033E8FC4-001\tempDir-004:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_9E11470C033E8FC4-001\tempDir-004
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_9E11470C033E8FC4-001\tempDir-004:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_9E11470C033E8FC4-001\tempDir-004

at __randomizedtesting.SeedInfo.seed([9E11470C033E8FC4]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestNRTCachingDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestNRTCachingDirectory_9E11470C033E8FC4-001\tempDir-008:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestNRTCachingDirectory_9E11470C033E8FC4-001\tempDir-008
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestNRTCachingDirectory_9E11470C033E8FC4-001\tempDir-008:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestNRTCachingDirectory_9E11470C033E8FC4-001\tempDir-008

at __randomizedtesting.SeedInfo.seed([9E11470C033E8FC4]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest.testConsistencyOnExceptions

Error Message:
Captured an uncaught exception in thread: Thread[id=15, 
name=ReplicationThread-indexAndTaxo, state=RUNNABLE, 
group=TGRP-IndexAndTaxonomyReplicationClientTest]

Stack Trace:

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20877 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20877/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth:
   1) Thread[id=3385, name=jetty-launcher-521-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 
   1) Thread[id=3385, name=jetty-launcher-521-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
at __randomizedtesting.SeedInfo.seed([E521A573E71ECEC5]:0)


FAILED:  
org.apache.lucene.search.similarities.TestBasicModelIn.testRandomScoring

Error Message:
score(1.0,15)=3.7663744E9 < score(1.0,16)=3.76637466E9

Stack Trace:
java.lang.AssertionError: score(1.0,15)=3.7663744E9 < score(1.0,16)=3.76637466E9
at 
__randomizedtesting.SeedInfo.seed([BBBE98D990924CB0:3021C16B8AE5AABA]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.search.similarities.BaseSimilarityTestCase.doTestScoring(BaseSimilarityTestCase.java:423)
at 
org.apache.lucene.search.similarities.BaseSimilarityTestCase.testRandomScoring(BaseSimilarityTestCase.java:355)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 

[jira] [Commented] (SOLR-11632) Creating an collection with an empty node set logs a WARN

2017-11-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247026#comment-16247026
 ] 

Erick Erickson commented on SOLR-11632:
---

Seems like a user who uses EMPTY wouldn't be surprised by this, so I think it's 
OK to leave in. 

I don't have strong opinions, though; I guess I could argue equally that any 
user who specifies EMPTY should expect, well, a collection without any 
replicas.

> Creating an collection with an empty node set logs a WARN
> -
>
> Key: SOLR-11632
> URL: https://issues.apache.org/jira/browse/SOLR-11632
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Minor
>
> When I create a collection with an empty node set I get a message like this 
> in the logs
> {code}
> 14127 WARN  
> (OverseerThreadFactory-12-thread-3-processing-n:127.0.0.1:61605_solr) 
> [n:127.0.0.1:61605_solr] o.a.s.c.CreateCollectionCmd It is unusual to 
> create a collection (backuprestore_restored) without cores.
> {code}
> Should we just remove this? A user who uses EMPTY will always get this 
> message, and for a user who doesn't pass a set of candidate nodes the 
> collection creation will fail anyway.






[JENKINS] Lucene-Solr-Tests-master - Build # 2167 - Still unstable

2017-11-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2167/

3 tests failed.
FAILED:  org.apache.lucene.search.TestSimilarity.testSimilarity

Error Message:
expected:<2.0> but was:<1.0>

Stack Trace:
java.lang.AssertionError: expected:<2.0> but was:<1.0>
at 
__randomizedtesting.SeedInfo.seed([CC0F988A905E3647:3DA0D8B290BEA35C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.lucene.search.TestSimilarity$4.collect(TestSimilarity.java:146)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:293)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:236)
at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
at 
org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:645)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:449)
at 
org.apache.lucene.search.TestSimilarity.testSimilarity(TestSimilarity.java:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7011 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7011/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

8 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestDoc

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDoc_937588564D6F27B7-001\testIndex-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDoc_937588564D6F27B7-001\testIndex-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDoc_937588564D6F27B7-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDoc_937588564D6F27B7-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDoc_937588564D6F27B7-001\testIndex-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDoc_937588564D6F27B7-001\testIndex-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDoc_937588564D6F27B7-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDoc_937588564D6F27B7-001

at __randomizedtesting.SeedInfo.seed([937588564D6F27B7]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.handler.dataimport.TestSqlEntityProcessor.testWithSimpleTransformer

Error Message:
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSqlEntityProcessor_D7D36F7C37A4FD76-001\tempDir-001

Stack Trace:
java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSqlEntityProcessor_D7D36F7C37A4FD76-001\tempDir-001
at 
__randomizedtesting.SeedInfo.seed([D7D36F7C37A4FD76:B8974796ED72917B]:0)
at 
java.base/sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:267)
at 
java.base/sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:110)
at java.base/java.nio.file.Files.deleteIfExists(Files.java:1173)
at 
org.apache.solr.handler.dataimport.AbstractSqlEntityProcessorTestCase.afterSqlEntitiyProcessorTestCase(AbstractSqlEntityProcessorTestCase.java:98)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:992)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Created] (SOLR-11634) Create collection doesn't respect `maxShardsPerNode`

2017-11-09 Thread Nikolay Martynov (JIRA)
Nikolay Martynov created SOLR-11634:
---

 Summary: Create collection doesn't respect `maxShardsPerNode`
 Key: SOLR-11634
 URL: https://issues.apache.org/jira/browse/SOLR-11634
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6.1
Reporter: Nikolay Martynov


Command
{noformat}
curl 
'http://host:8983/solr/admin/collections?action=CREATE=xxx=16=3=config=2=shard:*,replica:<2,node:*=shard:*,replica:<2,sysprop.aws.az:*'
{noformat}

creates a collection with 1, 2 and 3 shards per node, so it looks like 
{{maxShardsPerNode}} is being ignored.

Adding {{rule=replica:<{},node:*}} seems to help, but I'm not sure this is 
correct, and it doesn't seem to match the documented behaviour.
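For comparison, a well-formed create call that sets {{maxShardsPerNode}} would look 
something like the following (the parameter values here are illustrative only; the 
exact parameters of the original command can't be recovered from the mangled URL above):

```shell
# Illustrative Collections API CREATE call; host, name and counts are hypothetical.
curl 'http://host:8983/solr/admin/collections?action=CREATE&name=xxx&numShards=16&replicationFactor=3&maxShardsPerNode=2&collection.configName=config'
```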



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 784 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/784/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Error from server at http://127.0.0.1:43205/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:43205/solr: create the collection time out:180s
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.MissingSegmentRecoveryTest.setup(MissingSegmentRecoveryTest.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:968)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Created] (SOLR-11633) terms component doesn't work for point-date fields

2017-11-09 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-11633:
---

 Summary: terms component doesn't work for point-date fields
 Key: SOLR-11633
 URL: https://issues.apache.org/jira/browse/SOLR-11633
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.0
Reporter: Yonik Seeley


Point-based date fields don't work with the terms component.

{code}
{
  "responseHeader":{
"status":500,
"QTime":7,
"params":{
  "distrib":"false",
  "echoParams":"ALL",
  "terms":"true",
  "terms.fl":"manufacturedate_dt"}},
  "terms":{},
  "error":{
"trace":"java.lang.NullPointerException\n\tat 
org.apache.solr.search.PointMerger$ValueIterator.(PointMerger.java:83)\n\tat
 
org.apache.solr.search.PointMerger$ValueIterator.(PointMerger.java:54)\n\tat
 
org.apache.solr.handler.component.TermsComponent.process(TermsComponent.java:167)\n\tat
 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)\n\tat
 org.apache.solr.core.SolrCore.execute(SolrCore.java:2484)\n\tat 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)\n\tat 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
 org.eclipse.jetty.server.Server.handle(Server.java:534)\n\tat 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)\n\tat 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)\n\tat
 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)\n\tat
 org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)\n\tat 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)\n\tat
 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)\n\tat
 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)\n\tat
 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)\n\tat
 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)\n\tat
 java.lang.Thread.run(Thread.java:745)\n",
"code":500}}
{code}
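The failure above should be reproducible with a plain terms request against a 
point-typed date field. The following sketch uses the parameters echoed in the 
response; the collection name is hypothetical, and it assumes a /terms request 
handler is configured:

```shell
# Hypothetical reproduction: terms component on a point-based date field.
curl 'http://localhost:8983/solr/techproducts/terms?terms=true&terms.fl=manufacturedate_dt&distrib=false&echoParams=ALL'
```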



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8047) Comparison of String objects using == or !=

2017-11-09 Thread song (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246958#comment-16246958
 ] 

song commented on LUCENE-8047:
--

It's a static bug detection tool from my research work; the paper about the tool is 
still under double-blind review, so I cannot release more information. Thanks.

> Comparison of String objects using == or !=
> ---
>
> Key: LUCENE-8047
> URL: https://issues.apache.org/jira/browse/LUCENE-8047
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 7.0.1
> Environment: Ubuntu 14.04.5 LTS
>Reporter: song
>Priority: Minor
>  Labels: performance
>
> My tool has scanned the whole codebase of Lucene and found there are eight 
> practice issues of string comparison, in which strings are compared by using 
> ==/!= instead of equals( ).
> analysis/common/src/java/org/apache/lucene/analysis/hunspell/Dictionary.java
> {code:java}
> conditionPattern == SUFFIX_CONDITION_REGEX_PATTERN
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/cjk/CJKBigramFilter.java
> {code:java}
>   if (type == doHan || type == doHiragana || type == doKatakana || type == 
> doHangul) {
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicFilter.java
> {code:java}
>  if (type == APOSTROPHE_TYPE &&...){
>  } else if (type == ACRONYM_TYPE) {  
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8047) Comparison of String objects using == or !=

2017-11-09 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246955#comment-16246955
 ] 

Mike Drob commented on LUCENE-8047:
---

Which tool?

> Comparison of String objects using == or !=
> ---
>
> Key: LUCENE-8047
> URL: https://issues.apache.org/jira/browse/LUCENE-8047
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 7.0.1
> Environment: Ubuntu 14.04.5 LTS
>Reporter: song
>Priority: Minor
>  Labels: performance
>
> My tool has scanned the whole codebase of Lucene and found there are eight 
> practice issues of string comparison, in which strings are compared by using 
> ==/!= instead of equals( ).
> analysis/common/src/java/org/apache/lucene/analysis/hunspell/Dictionary.java
> {code:java}
> conditionPattern == SUFFIX_CONDITION_REGEX_PATTERN
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/cjk/CJKBigramFilter.java
> {code:java}
>   if (type == doHan || type == doHiragana || type == doKatakana || type == 
> doHangul) {
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicFilter.java
> {code:java}
>  if (type == APOSTROPHE_TYPE &&...){
>  } else if (type == ACRONYM_TYPE) {  
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11632) Creating a collection with an empty node set logs a WARN

2017-11-09 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-11632:


 Summary: Creating a collection with an empty node set logs a WARN
 Key: SOLR-11632
 URL: https://issues.apache.org/jira/browse/SOLR-11632
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker
Priority: Minor


When I create a collection with an empty node set, I get a message like this in 
the logs:

{code}
14127 WARN  
(OverseerThreadFactory-12-thread-3-processing-n:127.0.0.1:61605_solr) 
[n:127.0.0.1:61605_solr] o.a.s.c.CreateCollectionCmd It is unusual to 
create a collection (backuprestore_restored) without cores.
{code}

Should we just remove this? A user who deliberately uses EMPTY will still get this 
message, and for a user who doesn't pass a set of candidate nodes the collection 
creation will fail anyway.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 293 - Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/293/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, InternalHttpClient, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:92)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:742)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:935)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:844)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1037)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:642)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:501)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:345) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:421) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$12(ReplicationHandler.java:1183)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:289)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:298)
  at 
org.apache.solr.handler.IndexFetcher.createHttpClient(IndexFetcher.java:224)  
at org.apache.solr.handler.IndexFetcher.(IndexFetcher.java:266)  at 
org.apache.solr.handler.ReplicationHandler.inform(ReplicationHandler.java:1214) 
 at org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:696) 
 at org.apache.solr.core.SolrCore.(SolrCore.java:968)  at 
org.apache.solr.core.SolrCore.reload(SolrCore.java:637)  at 
org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1285)  at 
org.apache.solr.handler.IndexFetcher.lambda$reloadCore$0(IndexFetcher.java:917) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1020)  at 
org.apache.solr.core.SolrCore.reload(SolrCore.java:637)  at 
org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1285)  at 
org.apache.solr.handler.IndexFetcher.lambda$reloadCore$0(IndexFetcher.java:917) 
 at java.lang.Thread.run(Thread.java:748)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper, InternalHttpClient, 
SolrCore]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:92)
at 

[jira] [Created] (LUCENE-8047) Comparison of String objects using == or !=

2017-11-09 Thread song (JIRA)
song created LUCENE-8047:


 Summary: Comparison of String objects using == or !=
 Key: LUCENE-8047
 URL: https://issues.apache.org/jira/browse/LUCENE-8047
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 7.0.1
 Environment: Ubuntu 14.04.5 LTS
Reporter: song
Priority: Minor


My tool has scanned the whole Lucene codebase and found eight instances of 
string comparison where strings are compared using ==/!= instead of equals().

analysis/common/src/java/org/apache/lucene/analysis/hunspell/Dictionary.java
{code:java}
conditionPattern == SUFFIX_CONDITION_REGEX_PATTERN
{code}

analysis/common/src/java/org/apache/lucene/analysis/cjk/CJKBigramFilter.java
{code:java}
  if (type == doHan || type == doHiragana || type == doKatakana || type == 
doHangul) {
{code}

analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicFilter.java
{code:java}
 if (type == APOSTROPHE_TYPE &&...){

 } else if (type == ACRONYM_TYPE) {  
{code}
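For anyone triaging these reports, a tiny standalone example (not Lucene code) shows 
why ==/!= on strings is fragile, and also why some of the flagged sites may be 
intentional: Lucene's token type strings are often interned constants, for which 
reference comparison is deliberate.

```java
// Minimal standalone illustration (not Lucene code) of == vs equals() on strings.
public class StringCompareDemo {
    public static void main(String[] args) {
        String a = "APOSTROPHE";
        String b = new String("APOSTROPHE"); // same content, distinct object

        System.out.println(a == b);      // false: == compares references
        System.out.println(a.equals(b)); // true: equals() compares contents

        // == is only reliable when both sides are the same interned instance;
        // string literals are interned, so intern() makes them compare equal.
        System.out.println(a == b.intern()); // true
    }
}
```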



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8040) Optimize IndexSearcher.collectionStatistics

2017-11-09 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-8040:
-
Attachment: MyBenchmark.java

I updated the benchmark to use a custom FilterDirectoryReader that ultimately 
wraps a custom FilterLeafReader caching the Terms impls in a HashMap. Then 
I reran the benchmark with 150 fields and 30 segments:
{noformat}
IndexSearcher MultiFields (current)
  346.155 ±(99.9%) 57.775 us/op [Average]
  (min, avg, max) = (334.952, 346.155, 371.996), stdev = 15.004
  CI (99.9%): [288.380, 403.930] (assumes normal distribution)

Raw compute on demand each time
  196.271 ±(99.9%) 14.716 us/op [Average]
  (min, avg, max) = (192.012, 196.271, 201.187), stdev = 3.822
  CI (99.9%): [181.555, 210.987] (assumes normal distribution)

ConcurrentHashMap lazy cache of raw compute
  4.553 ±(99.9%) 0.245 us/op [Average]
  (min, avg, max) = (4.465, 4.553, 4.636), stdev = 0.064
  CI (99.9%): [4.308, 4.799] (assumes normal distribution)
{noformat}

Clearly the ConcurrentHashMap is saving us a lot.

You say we shouldn't add caching to IndexSearcher, but IndexSearcher already 
contains the QueryCache. Looking at LRUQueryCache, I think I can safely say that a 
ConcurrentHashMap is comparatively lightweight. Do you disagree?
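For readers following along, the lazy-cache pattern being benchmarked can be 
sketched generically as follows (hypothetical class and method names; this is a 
sketch of the pattern, not the actual patch):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch: compute an expensive per-field statistic once,
// then serve repeat lookups from a ConcurrentHashMap.
public class LazyFieldStats {
    private final ConcurrentHashMap<String, Long> cache = new ConcurrentHashMap<>();
    private final Function<String, Long> expensiveCompute;

    public LazyFieldStats(Function<String, Long> expensiveCompute) {
        this.expensiveCompute = expensiveCompute;
    }

    public long docCount(String field) {
        // computeIfAbsent invokes expensiveCompute at most once per field
        // (for the winning thread) and is cheap on the hit path.
        return cache.computeIfAbsent(field, expensiveCompute);
    }
}
```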

> Optimize IndexSearcher.collectionStatistics
> ---
>
> Key: LUCENE-8040
> URL: https://issues.apache.org/jira/browse/LUCENE-8040
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 7.2
>
> Attachments: MyBenchmark.java, lucenecollectionStatisticsbench.zip
>
>
> {{IndexSearcher.collectionStatistics(field)}} can do a fair amount of work 
> because with each invocation it will call {{MultiFields.getTerms(...)}}.  The 
> effects of this are aggravated for queries with many fields since each field 
> will want statistics, and also aggravated when there are many segments.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 20876 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20876/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale

Error Message:
Error from server at 
https://127.0.0.1:39605/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html. Error 404: HTTP ERROR 404. 
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. Reason: 
Can not find: /solr/stale_state_test_col_shard1_replica_n1/update 
(Powered by Jetty 9.3.20.v20170531, http://eclipse.org/jetty)

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
https://127.0.0.1:39605/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html. 


Error 404 

HTTP ERROR: 404
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. 
Reason:
Can not find: /solr/stale_state_test_col_shard1_replica_n1/update
Powered by Jetty 9.3.20.v20170531 (http://eclipse.org/jetty)



at 
__randomizedtesting.SeedInfo.seed([F39EEC43FB657301:47AF74AB188C052D]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:559)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale(CloudSolrClientTest.java:844)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (LUCENE-8046) Redundant assign values to one variable continuously, which makes the first assignment redundant and useless

2017-11-09 Thread song (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

song updated LUCENE-8046:
-
Description: 
Our static code analysis tool has scanned the codebase of Lucene 7.0.1, and 
find  29 cases, that developers reassigned values to a variable continuously. 

For example, the following code from file: 
analysis/common/src/java/org/tartarus/snowball/ext/CatalanStemmer.java

{code:java}
cursor=limit - v_5;
cursor=limit_backward;
{code}

In the above code snippet, the second statement makes the first one redundant 
and useless.
There are 29 cases in total in the codebase of Lucene.
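
For illustration, here is a hedged sketch of the flagged pattern and its likely fix (the method and parameter names are invented for this example, not taken from CatalanStemmer): the first write to {{cursor}} is a dead store because nothing reads it before the second write overwrites it.

```java
class DeadStoreExample {
    // Flagged pattern: cursor is assigned twice in a row, so the first
    // assignment is a dead store that can be removed.
    static int stemStep(int limit, int limitBackward, int v5) {
        int cursor;
        cursor = limit - v5;    // dead store: this value is never read
        cursor = limitBackward; // only this value is actually used
        return cursor;
    }

    // Equivalent behavior, assuming the dropped expression has no side
    // effects: keep only the assignment whose value is used.
    static int stemStepFixed(int limit, int limitBackward, int v5) {
        return limitBackward;
    }
}
```

The caveat is that the rewrite is only safe when the discarded right-hand side ({{limit - v5}} here) has no side effects, which appears to hold for simple arithmetic like this.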

  was:
Our static code analysis tool scanned the Lucene 7.0.1 codebase and found 29 
cases where developers reassigned values to a variable consecutively. 

For example, the following code from file: 
`analysis/common/src/java/org/tartarus/snowball/ext/CatalanStemmer.java` 

{code:java}
cursor=limit - v_5;
cursor=limit_backward;
{code}

In the above code snippet, the second statement makes the first one redundant 
and useless.
There are 29 cases in total in the codebase of Lucene.


> Redundant assign values to one variable continuously, which makes the first 
> assignment redundant and useless
> 
>
> Key: LUCENE-8046
> URL: https://issues.apache.org/jira/browse/LUCENE-8046
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 7.0.1
> Environment: Ubuntu 14.04.5 LTS
>Reporter: song
>  Labels: performance
>
> Our static code analysis tool scanned the Lucene 7.0.1 codebase and found 29 
> cases where developers reassigned values to a variable consecutively. 
> For example, the following code from file: 
> analysis/common/src/java/org/tartarus/snowball/ext/CatalanStemmer.java
> {code:java}
> cursor=limit - v_5;
> cursor=limit_backward;
> {code}
> In the above code snippet, the second statement makes the first one redundant 
> and useless.
> There are 29 cases in total in the codebase of Lucene.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8046) Redundant assign values to one variable continuously, which makes the first assignment redundant and useless

2017-11-09 Thread song (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

song updated LUCENE-8046:
-
Priority: Minor  (was: Major)

> Redundant assign values to one variable continuously, which makes the first 
> assignment redundant and useless
> 
>
> Key: LUCENE-8046
> URL: https://issues.apache.org/jira/browse/LUCENE-8046
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 7.0.1
> Environment: Ubuntu 14.04.5 LTS
>Reporter: song
>Priority: Minor
>  Labels: performance
>
> Our static code analysis tool scanned the Lucene 7.0.1 codebase and found 29 
> cases where developers reassigned values to a variable consecutively. 
> For example, the following code from file: 
> analysis/common/src/java/org/tartarus/snowball/ext/CatalanStemmer.java
> {code:java}
> cursor=limit - v_5;
> cursor=limit_backward;
> {code}
> In the above code snippet, the second statement makes the first one redundant 
> and useless.
> There are 29 cases in total in the codebase of Lucene.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8046) Redundant assign values to one variable continuously, which makes the first assignment redundant and useless

2017-11-09 Thread song (JIRA)
song created LUCENE-8046:


 Summary: Redundant assign values to one variable continuously, 
which makes the first assignment redundant and useless
 Key: LUCENE-8046
 URL: https://issues.apache.org/jira/browse/LUCENE-8046
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 7.0.1
 Environment: Ubuntu 14.04.5 LTS
Reporter: song


Our static code analysis tool scanned the Lucene 7.0.1 codebase and found 29 
cases where developers reassigned values to a variable consecutively. 

For example, the following code from file: 
`analysis/common/src/java/org/tartarus/snowball/ext/CatalanStemmer.java` 

{code:java}
cursor=limit - v_5;
cursor=limit_backward;
{code}

In the above code snippet, the second statement makes the first one redundant 
and useless.
There are 29 cases in total in the codebase of Lucene.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11631) Schema API always has status 0

2017-11-09 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-11631:
--
Attachment: SOLR-11631.patch

Patch that throws {{SolrException}} rather than adding an "errors" section to 
the response; also adds a test.

CC [~noble.paul].

> Schema API always has status 0
> --
>
> Key: SOLR-11631
> URL: https://issues.apache.org/jira/browse/SOLR-11631
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-11631.patch
>
>
> Schema API failures always return status=0.
> Consumers should be able to detect failure using normal mechanisms (i.e. 
> status != 0) rather than having to parse the response for "errors".  Right 
> now if I attempt to {{add-field}} an already existing field, I get:
> {noformat}
> {responseHeader={status=0,QTime=XXX},errors=[{add-field={name=YYY, ...}, 
> errorMessages=[Field 'YYY' already exists.]}]}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11631) Schema API always has status 0

2017-11-09 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-11631:
-

 Summary: Schema API always has status 0
 Key: SOLR-11631
 URL: https://issues.apache.org/jira/browse/SOLR-11631
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Steve Rowe


Schema API failures always return status=0.

Consumers should be able to detect failure using normal mechanisms (i.e. status 
!= 0) rather than having to parse the response for "errors".  Right now if I 
attempt to {{add-field}} an already existing field, I get:

{noformat}
{responseHeader={status=0,QTime=XXX},errors=[{add-field={name=YYY, ...}, 
errorMessages=[Field 'YYY' already exists.]}]}
{noformat}
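
The workaround callers need today can be sketched as follows (a hypothetical helper over an already-parsed response, not a SolrJ API; the response shape is taken from the {{noformat}} example above). With the fix, a non-zero status alone (or a thrown exception) would be enough.

```java
import java.util.Map;

class SchemaApiStatusCheck {
    // Hedged sketch of the caller-side workaround this issue describes:
    // since the Schema API reports status=0 even on failure, a robust
    // client has to probe the response for an "errors" section as well.
    @SuppressWarnings("unchecked")
    static boolean failed(Map<String, Object> response) {
        Map<String, Object> header =
                (Map<String, Object>) response.get("responseHeader");
        int status = (Integer) header.getOrDefault("status", 0);
        // Once failures set status != 0, the second clause becomes
        // unnecessary.
        return status != 0 || response.containsKey("errors");
    }
}
```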




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 783 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/783/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

3 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale

Error Message:
Error from server at 
https://127.0.0.1:38643/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html. Error 404: HTTP ERROR 404. 
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. Reason: 
Can not find: /solr/stale_state_test_col_shard1_replica_n1/update. 
<a href="http://eclipse.org/jetty">Powered by Jetty:// 9.3.20.v20170531</a>

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
https://127.0.0.1:38643/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html. 


Error 404 


HTTP ERROR: 404
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. 
Reason:
Can not find: 
/solr/stale_state_test_col_shard1_replica_n1/update
<a href="http://eclipse.org/jetty">Powered by Jetty:// 9.3.20.v20170531</a>



at 
__randomizedtesting.SeedInfo.seed([1D670CA03ECF9456:A9569448DD26E27A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:559)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale(CloudSolrClientTest.java:844)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11487) Collection Alias metadata for time partitioned collections

2017-11-09 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246858#comment-16246858
 ] 

David Smiley commented on SOLR-11487:
-

* The zkVersion int need not be volatile because it is only ever read/written 
from within a synchronized block.  Anyway, if you want to try to put it back in 
Aliases, that's fine.  I found it a bit annoying to have Aliases with zkVersion 
yet also find a way to set it despite Aliases' immutability.  Nothing that we 
can't figure out, but it was that trip-up that led me to the path of zkVersion 
decoupled from the Aliases class.
* I introduced a bug causing the AliasIntegrationTest.test() failure.  
ZkStateReader.createClusterStateWatchersAndUpdate should call refreshAliases 
with the field reference aliasesHolder instead of constructing a new instance.  
This took a while to figure out; DEBUG logging (with additional log statements 
and references to "this" to get the object ID) proved indispensable. I think 
this bug would never have happened if the AliasesManager did not implement 
Watcher but instead had a newWatcher() method to return an anonymous instance.
* At the end of CreateAliasCmd.call, I sadly think we need to put back the 
100ms wait (I added more commentary below):
{code}
// Give other nodes a bit of time to see these changes. Solr is eventually
// consistent, so we expect other Solr nodes and even CloudSolrClient
// (ZkClientClusterStateProvider) to eventually become aware of the change.
Thread.sleep(100);
{code}
If we remove it with this new change for metadata, we might add more test 
instability (and it's already on fire) or increase the likelihood that some 
real code out there won't work. The caller should sleep perhaps but that's also 
sad.  I've been ruminating on this a bit and may file an issue with more 
specific ideas.
* in CollectionsHandler, LISTALIASES_OP (~line 480) add this line:
{code}
// just in case there are changes being propagated through ZK
zkStateReader.aliasesHolder.update();
{code}
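
The newWatcher() idea above can be sketched like this (a hypothetical shape, not the actual AliasesManager code; Watcher/WatchedEvent stand in for the ZooKeeper interfaces): instead of the manager itself implementing Watcher, which makes it easy to register the wrong instance, each registration gets a fresh anonymous watcher bound to the one shared manager.

```java
interface WatchedEvent {}

@FunctionalInterface
interface Watcher {
    void process(WatchedEvent event);
}

class AliasesManagerSketch {
    int updates = 0;

    void update() {
        updates++; // in the real class: re-read aliases from ZooKeeper
    }

    // Instead of "implements Watcher", hand out fresh anonymous watchers
    // that all delegate to this one instance, so accidentally registering
    // a freshly constructed manager (the bug described above) can't happen.
    Watcher newWatcher() {
        return event -> update();
    }
}
```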

> Collection Alias metadata for time partitioned collections
> --
>
> Key: SOLR-11487
> URL: https://issues.apache.org/jira/browse/SOLR-11487
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
> Attachments: SOLR_11487.patch, SOLR_11487.patch, SOLR_11487.patch, 
> SOLR_11487.patch, SOLR_11487.patch, SOLR_11487.patch
>
>
> SOLR-11299 outlines an approach to using a collection Alias to refer to a 
> series of collections of a time series. We'll need to store some metadata 
> about these time series collections, such as which field of the document 
> contains the timestamp to route on.
> The current {{/aliases.json}} is a Map with a key {{collection}} which is in 
> turn a Map of alias name strings to a comma delimited list of the collections.
> _If we change the comma delimited list to be another Map to hold the existing 
> list and more stuff, older CloudSolrClient (configured to talk to ZooKeeper) 
> will break_.  Although if it's configured with an HTTP Solr URL then it would 
> not break.  There's also some read/write hassle to worry about -- we may need 
> to continue to read an aliases.json in the older format.
> Alternatively, we could add a new map entry to aliases.json, say, 
> {{collection_metadata}} keyed by alias name?
> Perhaps another very different approach is to attach metadata to the 
> configset in use?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1524 - Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1524/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.lucene.search.TestSimilarity.testSimilarity

Error Message:
expected:<2.0> but was:<1.0>

Stack Trace:
java.lang.AssertionError: expected:<2.0> but was:<1.0>
at 
__randomizedtesting.SeedInfo.seed([D83374549F176AB:FC2C777D4911E3B0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.lucene.search.TestSimilarity$4.collect(TestSimilarity.java:146)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:293)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:236)
at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
at 
org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:645)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:449)
at 
org.apache.lucene.search.TestSimilarity.testSimilarity(TestSimilarity.java:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-09 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246834#comment-16246834
 ] 

Alan Woodward commented on LUCENE-8014:
---

Thanks [~steve_rowe], will look asap.

> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+29) - Build # 20875 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20875/
Java: 64bit/jdk-10-ea+29 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.lucene.search.TestSimilarity.testSimilarity

Error Message:
expected:<2.0> but was:<1.0>

Stack Trace:
java.lang.AssertionError: expected:<2.0> but was:<1.0>
at 
__randomizedtesting.SeedInfo.seed([BA037FAC98FB9706:4BAC3F94981B021D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.lucene.search.TestSimilarity$4.collect(TestSimilarity.java:146)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:293)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:236)
at 
org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:79)
at 
org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:63)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:645)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:449)
at 
org.apache.lucene.search.TestSimilarity.testSimilarity(TestSimilarity.java:137)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-10697) Improve defaults for maxConnectionsPerHost

2017-11-09 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246796#comment-16246796
 ] 

Varun Thacker commented on SOLR-10697:
--

I think we should bump it up. There's no reason to say we will allow 10 
DEFAULT_MAXUPDATECONNECTIONSPERHOST for updates but only 20 for searches.

> Improve defaults for maxConnectionsPerHost
> --
>
> Key: SOLR-10697
> URL: https://issues.apache.org/jira/browse/SOLR-10697
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
>
> Twice recently I've increased 
> {{HttpShardHandlerFactory#maxConnectionsPerHost}} at a client and it helped 
> improve query latencies a lot.
> Should we increase the default to say 100 ?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 884 - Failure

2017-11-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/884/

No tests ran.

Build Log:
[...truncated 27995 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.02 sec (11.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 29.8 MB in 0.11 sec (268.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 70.9 MB in 0.26 sec (275.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 81.3 MB in 0.31 sec (260.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] 
   [smoker] command "export JAVA_HOME="/home/jenkins/tools/java/latest1.8" 
PATH="/home/jenkins/tools/java/latest1.8/bin:$PATH" 
JAVACMD="/home/jenkins/tools/java/latest1.8/bin/java"; ant clean test 
-Dtests.slow=false" failed:
   [smoker] Buildfile: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/build.xml
   [smoker] 
   [smoker] clean:
   [smoker][delete] Deleting directory 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/build
   [smoker] 
   [smoker] ivy-availability-check:
   [smoker] [loadresource] Do not set property disallowed.ivy.jars.list as its 
length is 0.
   [smoker] 
   [smoker] -ivy-fail-disallowed-ivy-version:
   [smoker] 
   [smoker] ivy-fail:
   [smoker] 
   [smoker] ivy-configure:
   [smoker] [ivy:configure] :: Apache Ivy 2.4.0 - 20141213170938 :: 
http://ant.apache.org/ivy/ ::
   [smoker] [ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/top-level-ivy-settings.xml
   [smoker] 
   [smoker] -clover.load:
   [smoker] 
   [smoker] resolve-groovy:
   [smoker] [ivy:cachepath] :: resolving dependencies :: 
org.codehaus.groovy#groovy-all-caller;working
   [smoker] [ivy:cachepath] confs: [default]
   [smoker] [ivy:cachepath] found org.codehaus.groovy#groovy-all;2.4.12 in 
public
   [smoker] [ivy:cachepath] :: resolution report :: resolve 630ms :: artifacts 
dl 2ms
   [smoker] 
-
   [smoker] |  |modules||   
artifacts   |
   [smoker] |   conf   | number| search|dwnlded|evicted|| 
number|dwnlded|
   [smoker] 
-
   [smoker] |  default |   1   |   0   |   0   |   0   ||   1   |   
0   |
   [smoker] 
-
   [smoker] 
   [smoker] -init-totals:
   [smoker] 
   [smoker] test-core:
   [smoker] 
   [smoker] -clover.disable:
   [smoker] 
   [smoker] ivy-availability-check:
   [smoker] [loadresource] Do not set property disallowed.ivy.jars.list as its 
length is 0.
   [smoker] 
   [smoker] -ivy-fail-disallowed-ivy-version:
   [smoker] 
   [smoker] ivy-fail:
   [smoker] 
   [smoker] ivy-configure:
   [smoker] [ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/top-level-ivy-settings.xml
   [smoker] 
   [smoker] -clover.load:
   [smoker] 
   [smoker] 

[jira] [Resolved] (SOLR-6155) Multiple copy field directives are created in a mutable managed schema when identical copy field directives are added

2017-11-09 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-6155.
--
Resolution: Won't Fix

> Multiple copy field directives are created in a mutable managed schema when 
> identical copy field directives are added
> -
>
> Key: SOLR-6155
> URL: https://issues.apache.org/jira/browse/SOLR-6155
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>
> If I add the same copy field directive more than once, e.g. source=sku1, 
> dest=sku2, then this directive will appear in the schema as many times as it 
> was added. It should only appear once.
> I guess we could keep the current behavior of not throwing an error when a 
> copy field directive that already exists in the schema is added, but rather 
> than adding a duplicate directive, make it a no-op.
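A minimal sketch of the no-op behavior described above (hypothetical class and method names; Solr's actual managed-schema code differs): directives are deduplicated by their (source, dest) pair, so re-adding an identical directive changes nothing.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch: dedup copy field directives by (source, dest) pair so
// that adding an identical directive a second time is a silent no-op.
public class CopyFieldRegistry {
    private final Set<String> directives = new LinkedHashSet<>();

    /** Returns true if the directive was new, false if it already existed (no-op). */
    public boolean addCopyField(String source, String dest) {
        return directives.add(source + "\u0000" + dest);
    }

    public int size() {
        return directives.size();
    }

    public static void main(String[] args) {
        CopyFieldRegistry reg = new CopyFieldRegistry();
        System.out.println(reg.addCopyField("sku1", "sku2")); // true: first add
        System.out.println(reg.addCopyField("sku1", "sku2")); // false: duplicate, no-op
        System.out.println(reg.size());                       // 1
    }
}
```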



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10697) Improve defaults for maxConnectionsPerHost

2017-11-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246785#comment-16246785
 ] 

Erick Erickson commented on SOLR-10697:
---

Why don't we have these all listed in the solr.xml file, along with, perhaps, 
appropriate comments?
socketTimeout
connTimeout
maxConnectionsPerHost
maxConnections
retry
allowCompression
followRedirects
httpBasicAuthUser
httpBasicAuthPassword

Well, I'm not totally sure about the BasicAuth stuff, or maybe 
allowCompression. But maxConnectionsPerHost and maybe maxConnections have 
tripped up more than one person; having them in solr.xml might make them 
easier to find and fix.
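For reference, a hedged sketch of what such a block in solr.xml could look like; the numeric values below are purely illustrative, not recommendations:

```xml
<solr>
  <!-- Illustrative values only; tune per deployment. -->
  <shardHandlerFactory name="shardHandlerFactory"
                       class="HttpShardHandlerFactory">
    <int name="socketTimeout">600000</int>      <!-- ms -->
    <int name="connTimeout">60000</int>         <!-- ms -->
    <int name="maxConnectionsPerHost">100</int>
    <int name="maxConnections">10000</int>
  </shardHandlerFactory>
</solr>
```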

> Improve defaults for maxConnectionsPerHost
> --
>
> Key: SOLR-10697
> URL: https://issues.apache.org/jira/browse/SOLR-10697
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
>
> Twice recently I've increased 
> {{HttpShardHandlerFactory#maxConnectionsPerHost}} at a client and it helped 
> improve query latencies a lot.
> Should we increase the default to, say, 100?






[jira] [Commented] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-09 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246738#comment-16246738
 ] 

Steve Rowe commented on LUCENE-8014:


{{git bisect}} blames commit {{946ec9d5b94}} on this issue for the following 
two reproducing failures from 
[https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4281/]:

{noformat}
   [junit4] Suite: org.apache.lucene.search.spans.TestSpans
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSpans 
-Dtests.method=testSpanScorerZeroSloppyFreq -Dtests.seed=4FC38F5EE84A65AB 
-Dtests.slow=true -Dtests.locale=no -Dtests.timezone=Etc/GMT-14 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.03s J0 | TestSpans.testSpanScorerZeroSloppyFreq <<<
   [junit4]> Throwable #1: java.lang.AssertionError: first doc score should 
be zero, 3.0794418
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([4FC38F5EE84A65AB:73B03B755EA850AC]:0)
   [junit4]>at 
org.apache.lucene.search.spans.TestSpans.testSpanScorerZeroSloppyFreq(TestSpans.java:320)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70), 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@44515136),
 locale=no, timezone=Etc/GMT-14
   [junit4]   2> NOTE: Mac OS X 10.11.6 x86_64/Oracle Corporation 1.8.0_144 
(64-bit)/cpus=3,threads=1,free=116775256,total=317194240
{noformat}

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSimilarity 
-Dtests.method=testSimilarity -Dtests.seed=4FC38F5EE84A65AB -Dtests.slow=true 
-Dtests.locale=pt-BR -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.02s J1 | TestSimilarity.testSimilarity <<<
   [junit4]> Throwable #1: java.lang.AssertionError: expected:<2.0> but 
was:<1.0>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([4FC38F5EE84A65AB:BE6CCF66E8AAF0B0]:0)
   [junit4]>at 
org.apache.lucene.search.TestSimilarity$4.collect(TestSimilarity.java:146)
   [junit4]>at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
   [junit4]>at 
org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
   [junit4]>at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
   [junit4]>at 
org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:293)
   [junit4]>at 
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:236)
   [junit4]>at 
org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
   [junit4]>at 
org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
   [junit4]>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:645)
   [junit4]>at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
   [junit4]>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:449)
   [junit4]>at 
org.apache.lucene.search.TestSimilarity.testSimilarity(TestSimilarity.java:137)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{field=PostingsFormat(name=Memory)}, docValues:{}, maxPointsInLeafNode=1240, 
maxMBSortInHeap=5.9146841525724625, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@2e5f1dd1),
 locale=pt-BR, timezone=Europe/Ulyanovsk
   [junit4]   2> NOTE: Mac OS X 10.11.6 x86_64/Oracle Corporation 1.8.0_144 
(64-bit)/cpus=3,threads=1,free=258508624,total=309329920
{noformat}

> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.
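The 1/(N+1) slop factor mentioned above can be sketched in isolation (a standalone illustration, not the actual SloppyPhraseScorer code):

```java
// Standalone illustration of the hardcoded slop factor: a phrase match whose
// positions are off by `distance` contributes 1/(distance + 1) to the sloppy
// frequency, so exact matches (distance 0) weigh 1.0 and looser matches less.
public class SlopFactor {
    static float slopFactor(int distance) {
        return 1.0f / (distance + 1);
    }

    public static void main(String[] args) {
        System.out.println(slopFactor(0)); // 1.0
        System.out.println(slopFactor(3)); // 0.25
    }
}
```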






[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 298 - Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/298/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

15 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([E08CE5EBC034E3AE:68D8DA316EC88E56]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:907)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11625) Solr may remove live index on Solr shutdown

2017-11-09 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246718#comment-16246718
 ] 

Nikolay Martynov commented on SOLR-11625:
-

Hi.

We are using c4.8xlarge instances: 24 nodes, 3 replicas, 16 shards, i.e. 2 
cores per node.
The exact indexing rate is hard to estimate, but probably 10-20 threads 
sending batches of 20 docs.

We have a script that rolls these boxes one by one: roll one, wait for the 
cluster to become 'green', then roll the next one. This script rarely 
finishes because of this problem.

> Solr may remove live index on Solr shutdown
> ---
>
> Key: SOLR-11625
> URL: https://issues.apache.org/jira/browse/SOLR-11625
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.1
>Reporter: Nikolay Martynov
>
> This has been observed in the wild:
> {noformat}
> 2017-11-07 02:35:46.909 ERROR (qtp1724399560-8090) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.SolrCore 
> :java.nio.channels.ClosedByInterruptException
>   at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>   at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:315)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:242)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192)
>   at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:356)
>   at 
> org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3044)
>   at org.apache.solr.core.SolrCore.close(SolrCore.java:1575)
>   at org.apache.solr.servlet.HttpSolrCall.destroy(HttpSolrCall.java:582)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:374)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> 2017-11-07 02:35:46.912 INFO  
> (OldIndexDirectoryCleanupThreadForCore-xxx_shard4_replica8) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.DirectoryFactory Found 1 old 
> index directories to clean-up under 
> /opt/solr/server/solr/xxx_shard4_replica8/data/ afterReload=false
> {noformat}
> After this Solr cannot start claiming that some 

[jira] [Commented] (SOLR-8975) SolrClient setters should be deprecated in favor of SolrClientBuilder methods

2017-11-09 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246700#comment-16246700
 ] 

Shawn Heisey commented on SOLR-8975:


General thoughts:

A lot of recent work, including this issue, is moving towards a goal of 
immutable SolrClient objects.  It's discussed in the comments here.  I think 
this is the right direction -- HttpClient has headed the same direction, 
beginning deprecation of anything related to mutable objects in version 4.3.

At the point where all the deprecated code goes away, I think we should 
EXPLICITLY declare/document/enforce that SolrClient objects are immutable.

I don't know if that's something to do as part of this issue or as a new one.


> SolrClient setters should be deprecated in favor of SolrClientBuilder methods
> -
>
> Key: SOLR-8975
> URL: https://issues.apache.org/jira/browse/SOLR-8975
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Jason Gerlowski
>Priority: Minor
> Attachments: SOLR-8975.patch
>
>
> SOLR-8097 added a builder layer on top of each {{SolrClient}} implementation.
> Now that builders are in place for SolrClients, the setters used in each 
> SolrClient can be deprecated, and their functionality moved over to the 
> Builders.  This change brings a few benefits:
> - unifies SolrClient configuration under the new Builders.  It'll be nice to 
> have all the knobs and levers used to tweak SolrClients available in a 
> single place (the Builders).
> - reduces SolrClient thread-safety concerns.  Currently, clients are mutable. 
>  Using some SolrClient setters can result in erratic and "trappy" behavior 
> when the clients are used across multiple threads.
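The direction described above can be sketched generically. This is a hypothetical builder, not SolrJ's actual API: all configuration happens on the Builder, and the built client is immutable (final fields, no setters), so sharing it across threads is safe.

```java
// Hypothetical sketch of builder-configured, immutable client objects.
public class ImmutableClient {
    private final String baseUrl;
    private final int connectionTimeoutMs;

    private ImmutableClient(Builder b) {
        this.baseUrl = b.baseUrl;
        this.connectionTimeoutMs = b.connectionTimeoutMs;
    }

    public String baseUrl() { return baseUrl; }
    public int connectionTimeoutMs() { return connectionTimeoutMs; }

    public static class Builder {
        private final String baseUrl;            // required
        private int connectionTimeoutMs = 15000; // illustrative default

        public Builder(String baseUrl) { this.baseUrl = baseUrl; }

        public Builder withConnectionTimeout(int ms) {
            this.connectionTimeoutMs = ms;
            return this;
        }

        public ImmutableClient build() { return new ImmutableClient(this); }
    }

    public static void main(String[] args) {
        ImmutableClient client = new Builder("http://localhost:8983/solr")
            .withConnectionTimeout(30000)
            .build();
        // After build(), nothing on the client can be mutated.
        System.out.println(client.baseUrl());
    }
}
```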






[jira] [Commented] (SOLR-11625) Solr may remove live index on Solr shutdown

2017-11-09 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246678#comment-16246678
 ] 

Amrit Sarkar commented on SOLR-11625:
-

[~mar-kolya],

I tried to replicate the issue on a t2x.2xlarge AWS instance, with heavy 
indexing (10 simultaneous indexing threads pushing 1000-doc batches) and 
restarting a single-node cluster with embedded ZooKeeper. I was not able to 
reproduce the "InterruptException" or the "old index directories ..." error.

Can you share more details on the test scenario? Number of nodes, indexing 
rate, etc. Thank you in advance.

> Solr may remove live index on Solr shutdown
> ---
>
> Key: SOLR-11625
> URL: https://issues.apache.org/jira/browse/SOLR-11625
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.1
>Reporter: Nikolay Martynov
>
> This has been observed in the wild:
> {noformat}
> 2017-11-07 02:35:46.909 ERROR (qtp1724399560-8090) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.SolrCore 
> :java.nio.channels.ClosedByInterruptException
>   at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>   at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:315)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:242)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192)
>   at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:356)
>   at 
> org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3044)
>   at org.apache.solr.core.SolrCore.close(SolrCore.java:1575)
>   at org.apache.solr.servlet.HttpSolrCall.destroy(HttpSolrCall.java:582)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:374)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> 2017-11-07 02:35:46.912 INFO  
> (OldIndexDirectoryCleanupThreadForCore-xxx_shard4_replica8) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.DirectoryFactory Found 1 old 
> index directories to clean-up under 
> /opt/solr/server/solr/xxx_shard4_replica8/data/ afterReload=false
> {noformat}
> 

[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 76 - Still Failing

2017-11-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/76/

No tests ran.

Build Log:
[...truncated 28030 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.03 sec (8.5 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.2.0-src.tgz...
   [smoker] 31.2 MB in 0.09 sec (361.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.2.0.tgz...
   [smoker] 71.0 MB in 0.20 sec (347.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.2.0.zip...
   [smoker] 81.4 MB in 0.25 sec (331.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6223 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.2.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6223 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.2.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.02 sec (15.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.2.0-src.tgz...
   [smoker] 53.2 MB in 0.33 sec (160.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.2.0.tgz...
   [smoker] 146.0 MB in 0.75 sec (194.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.2.0.zip...
   [smoker] 147.0 MB in 0.43 sec (344.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.2.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0-java8
   [smoker] Creating Solr home directory 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.2.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 ...

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4281 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4281/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  org.apache.lucene.search.TestSimilarity.testSimilarity

Error Message:
expected:<2.0> but was:<1.0>

Stack Trace:
java.lang.AssertionError: expected:<2.0> but was:<1.0>
at 
__randomizedtesting.SeedInfo.seed([4FC38F5EE84A65AB:BE6CCF66E8AAF0B0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.lucene.search.TestSimilarity$4.collect(TestSimilarity.java:146)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:293)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:236)
at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
at 
org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:645)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:449)
at 
org.apache.lucene.search.TestSimilarity.testSimilarity(TestSimilarity.java:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 782 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/782/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
max version bucket seed not updated after recovery!

Stack Trace:
java.lang.AssertionError: max version bucket seed not updated after recovery!
at __randomizedtesting.SeedInfo.seed([DC06EE4BF292881F:5452D1915C6EE5E7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:377)
at org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:132)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-10469) setParallelUpdates should be deprecated in favor of SolrClientBuilder methods

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246491#comment-16246491
 ] 

ASF subversion and git services commented on SOLR-10469:


Commit b5e8c2e68a6efcc78c2fcc0bd3df549cb52abee1 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b5e8c2e ]

SOLR-10469: Move CloudSolrClient.setParallelUpdates to its Builder

(cherry picked from commit df3b017)


> setParallelUpdates should be deprecated in favor of SolrClientBuilder methods
> -
>
> Key: SOLR-10469
> URL: https://issues.apache.org/jira/browse/SOLR-10469
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Jason Gerlowski
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.2
>
> Attachments: 
> SOLR_10469_CloudSolrClient_setParallelUpdates_move_to_Builder.patch
>
>
> Now that builders are in place for {{SolrClients}}, the setters used in each 
> {{SolrClient}} can be deprecated, and their functionality moved over to the 
> Builders. This change brings a few benefits:
> - unifies {{SolrClient}} configuration under the new Builders. It'll be nice 
> to have all the knobs, and levers used to tweak {{SolrClient}}s available in 
> a single place (the Builders).
> - reduces {{SolrClient}} thread-safety concerns. Currently, clients are 
> mutable. Using some {{SolrClient}} setters can result in erratic and "trappy" 
> behavior when the clients are used across multiple threads.
> This subtask endeavors to change this behavior for the {{setParallelUpdates}} 
> setter on all {{SolrClient}} implementations.
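The thread-safety argument above — that a builder-configured, immutable client removes "trappy" cross-thread behavior — can be sketched with a toy class. This is a minimal illustration, not the actual SolrJ API; the `ParallelClient` and `Builder` names are hypothetical stand-ins:

```java
// Sketch of the Builder approach: the setting is fixed at construction time,
// so concurrent readers can never observe it changing mid-flight.
class ParallelClient {
    private final boolean parallelUpdates; // final: immutable after build()

    private ParallelClient(Builder b) {
        this.parallelUpdates = b.parallelUpdates;
    }

    boolean isParallelUpdates() {
        return parallelUpdates;
    }

    static class Builder {
        private boolean parallelUpdates = true; // default, overridable pre-build only

        Builder withParallelUpdates(boolean on) {
            this.parallelUpdates = on;
            return this; // fluent style, like the SolrClient builders
        }

        ParallelClient build() {
            return new ParallelClient(this);
        }
    }
}
```

With a mutable `setParallelUpdates` setter, another thread could flip the flag between two requests; here the only mutation point is the builder, which is discarded after `build()`.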



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11507) simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation

2017-11-09 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-11507.
-
   Resolution: Fixed
 Assignee: David Smiley
Fix Version/s: 7.2

> simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation
> ---
>
> Key: SOLR-11507
> URL: https://issues.apache.org/jira/browse/SOLR-11507
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.2
>
> Attachments: SOLR-11507.patch, SOLR-11507.patch
>
>
> [~dsmiley] wrote in SOLR-9090:
> bq. [~cpoerschke] I'm looking at {{SolrTestCaseJ4.CloudSolrClientBuilder}}. 
> Instead of the somewhat complicated tracking using configuredDUTflag, 
> couldn't you simply remove all that stuff and just modify the builder's 
> constructor to randomize the settings?
> bq. Furthermore, shouldn't {{shardLeadersOnly}} be randomized as well?
> This ticket is to follow-up on that suggestion since SOLR-9090 is already 
> closed.
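The suggestion quoted above — drop the `configuredDUTflag` bookkeeping and randomize directly in the builder's constructor — might look roughly like this sketch. The `RandomizingBuilder` class and its fields are illustrative, not SolrTestCaseJ4's actual code:

```java
import java.util.Random;

// Randomize test-client defaults up front in the constructor; a test that
// cares about a particular setting simply overwrites it afterwards, so no
// "has the test configured this yet?" flag tracking is needed.
class RandomizingBuilder {
    boolean shardLeadersOnly;
    boolean parallelUpdates;

    RandomizingBuilder(Random random) {
        this.shardLeadersOnly = random.nextBoolean();
        this.parallelUpdates = random.nextBoolean();
    }
}
```

Because randomized-testing frameworks seed the `Random`, a failing configuration is still reproducible from the test seed.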






[jira] [Commented] (SOLR-11507) simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246505#comment-16246505
 ] 

ASF subversion and git services commented on SOLR-11507:


Commit a43c318a51d3583a3ebbba3499cd0f2708032d29 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a43c318 ]

SOLR-11507: randomize parallelUpdates for test CloudSolrClientBuilder


> simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation
> ---
>
> Key: SOLR-11507
> URL: https://issues.apache.org/jira/browse/SOLR-11507
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11507.patch, SOLR-11507.patch
>
>
> [~dsmiley] wrote in SOLR-9090:
> bq. [~cpoerschke] I'm looking at {{SolrTestCaseJ4.CloudSolrClientBuilder}}. 
> Instead of the somewhat complicated tracking using configuredDUTflag, 
> couldn't you simply remove all that stuff and just modify the builder's 
> constructor to randomize the settings?
> bq. Furthermore, shouldn't {{shardLeadersOnly}} be randomized as well?
> This ticket is to follow-up on that suggestion since SOLR-9090 is already 
> closed.






[jira] [Commented] (SOLR-11507) simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246507#comment-16246507
 ] 

ASF subversion and git services commented on SOLR-11507:


Commit 4a221ae4ef12db196a4affd7936160a44379fb3f in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4a221ae ]

SOLR-11507: randomize parallelUpdates for test CloudSolrClientBuilder

(cherry picked from commit a43c318)


> simplify and extend SolrTestCaseJ4.CloudSolrClientBuilder randomisation
> ---
>
> Key: SOLR-11507
> URL: https://issues.apache.org/jira/browse/SOLR-11507
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11507.patch, SOLR-11507.patch
>
>
> [~dsmiley] wrote in SOLR-9090:
> bq. [~cpoerschke] I'm looking at {{SolrTestCaseJ4.CloudSolrClientBuilder}}. 
> Instead of the somewhat complicated tracking using configuredDUTflag, 
> couldn't you simply remove all that stuff and just modify the builder's 
> constructor to randomize the settings?
> bq. Furthermore, shouldn't {{shardLeadersOnly}} be randomized as well?
> This ticket is to follow-up on that suggestion since SOLR-9090 is already 
> closed.






[jira] [Commented] (SOLR-10469) setParallelUpdates should be deprecated in favor of SolrClientBuilder methods

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246489#comment-16246489
 ] 

ASF subversion and git services commented on SOLR-10469:


Commit df3b01744c46587db2055e1ffd15393c46c55019 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=df3b017 ]

SOLR-10469: Move CloudSolrClient.setParallelUpdates to its Builder


> setParallelUpdates should be deprecated in favor of SolrClientBuilder methods
> -
>
> Key: SOLR-10469
> URL: https://issues.apache.org/jira/browse/SOLR-10469
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Jason Gerlowski
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.2
>
> Attachments: 
> SOLR_10469_CloudSolrClient_setParallelUpdates_move_to_Builder.patch
>
>
> Now that builders are in place for {{SolrClients}}, the setters used in each 
> {{SolrClient}} can be deprecated, and their functionality moved over to the 
> Builders. This change brings a few benefits:
> - unifies {{SolrClient}} configuration under the new Builders. It'll be nice 
> to have all the knobs, and levers used to tweak {{SolrClient}}s available in 
> a single place (the Builders).
> - reduces {{SolrClient}} thread-safety concerns. Currently, clients are 
> mutable. Using some {{SolrClient}} setters can result in erratic and "trappy" 
> behavior when the clients are used across multiple threads.
> This subtask endeavors to change this behavior for the {{setParallelUpdates}} 
> setter on all {{SolrClient}} implementations.






[jira] [Commented] (SOLR-11624) _default configset overwrites a configset if collection.configName isn't specified even if a configset of the same name already exists.

2017-11-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246454#comment-16246454
 ] 

Erick Erickson commented on SOLR-11624:
---

bq: Having an option on the create command to force a config overwrite...

I'd prefer that remain 'bin/solr zk upconfig', but that's not a strong 
preference.

> _default configset overwrites a configset if collection.configName isn't 
> specified even if a configset of the same name already exists.
> 
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE&name=wiki&...
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.
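The fix the report implies is a guard that copies _default only when no configset with the target name exists yet. A minimal sketch, assuming a hypothetical `ConfigSetStore` stand-in (this is not Solr's actual configset code):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: configset name -> where its contents came from.
class ConfigSetStore {
    final Map<String, String> configSets = new HashMap<>();

    // Equivalent of 'bin/solr zk upconfig': the user uploads a custom configset.
    void upload(String name, String source) {
        configSets.put(name, source);
    }

    // Guard for CREATE without collection.configName: copy _default under the
    // collection's name only if nothing of that name already exists.
    void ensureConfigSet(String collectionName) {
        configSets.putIfAbsent(collectionName, "_default");
    }
}
```

With `put` instead of `putIfAbsent`, the sketch reproduces the reported bug: an uploaded "wiki" configset would be silently replaced by the _default copy.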






[jira] [Commented] (SOLR-3504) Clearly document the limit for the maximum number of documents in a single index

2017-11-09 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246444#comment-16246444
 ] 

Varun Thacker commented on SOLR-3504:
-

I haven't looked closely, but if we have over 2B docs spread across multiple 
shards in a single collection and we do a match-all query, does Solr deal 
with it correctly?

> Clearly document the limit for the maximum number of documents in a single 
> index
> 
>
> Key: SOLR-3504
> URL: https://issues.apache.org/jira/browse/SOLR-3504
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation, update
>Affects Versions: 3.6
>Reporter: Jack Krupansky
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.2, master (8.0)
>
>
> Although the actual limit to the number of documents supported by a Solr 
> implementation depends on the number of shards, unless the user is intimately 
> familiar with the implementation of Lucene, they may not realize that a 
> single Solr index (single shard, single core) is limited to approximately 
> 2.14 billion documents regardless of their processing power or memory. This 
> limit should be clearly documented for the Solr user.
> Granted, users should be strongly discouraged from attempting to create a 
> single, unsharded index of that size, but they certainly should not have to 
> find out about the Lucene limit by accident.
> A subsequent issue will recommend that Solr detect and appropriately report 
> to the user when and if this limit is hit.
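A back-of-envelope check of the limit described above: Lucene addresses documents within an index with a Java `int`, which is where the "approximately 2.14 billion" figure comes from (`Integer.MAX_VALUE` = 2,147,483,647). The `shardsNeeded` helper below is purely illustrative, not a Solr API:

```java
// Rough sizing sketch: how many shards are needed so that no single
// Lucene index exceeds the int-addressable document limit.
class DocLimitSketch {
    static final long MAX_DOCS_PER_INDEX = Integer.MAX_VALUE; // ~2.14 billion

    // Minimum shard count for totalDocs documents (ceiling division).
    static long shardsNeeded(long totalDocs) {
        return (totalDocs + MAX_DOCS_PER_INDEX - 1) / MAX_DOCS_PER_INDEX;
    }

    public static void main(String[] args) {
        System.out.println(shardsNeeded(100_000_000_000L)); // 100 billion docs -> 47
    }
}
```

In practice shards are kept far smaller than the hard limit, but the arithmetic shows why the limit is a per-index (per shard, per core) constraint rather than a per-collection one.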






[jira] [Commented] (SOLR-3504) Clearly document the limit for the maximum number of documents in a single index

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246433#comment-16246433
 ] 

ASF subversion and git services commented on SOLR-3504:
---

Commit 514be8c4e5a76848ad96f3cbbe319c0c715a23b1 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=514be8c ]

SOLR-3504: add note about hard num docs limit in Lucene to planning 
installation section


> Clearly document the limit for the maximum number of documents in a single 
> index
> 
>
> Key: SOLR-3504
> URL: https://issues.apache.org/jira/browse/SOLR-3504
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation, update
>Affects Versions: 3.6
>Reporter: Jack Krupansky
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.2, master (8.0)
>
>
> Although the actual limit to the number of documents supported by a Solr 
> implementation depends on the number of shards, unless the user is intimately 
> familiar with the implementation of Lucene, they may not realize that a 
> single Solr index (single shard, single core) is limited to approximately 
> 2.14 billion documents regardless of their processing power or memory. This 
> limit should be clearly documented for the Solr user.
> Granted, users should be strongly discouraged from attempting to create a 
> single, unsharded index of that size, but they certainly should not have to 
> find out about the Lucene limit by accident.
> A subsequent issue will recommend that Solr detect and appropriately report 
> to the user when and if this limit is hit.






[jira] [Resolved] (SOLR-3504) Clearly document the limit for the maximum number of documents in a single index

2017-11-09 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-3504.
-
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.2

> Clearly document the limit for the maximum number of documents in a single 
> index
> 
>
> Key: SOLR-3504
> URL: https://issues.apache.org/jira/browse/SOLR-3504
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation, update
>Affects Versions: 3.6
>Reporter: Jack Krupansky
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.2, master (8.0)
>
>
> Although the actual limit to the number of documents supported by a Solr 
> implementation depends on the number of shards, unless the user is intimately 
> familiar with the implementation of Lucene, they may not realize that a 
> single Solr index (single shard, single core) is limited to approximately 
> 2.14 billion documents regardless of their processing power or memory. This 
> limit should be clearly documented for the Solr user.
> Granted, users should be strongly discouraged from attempting to create a 
> single, unsharded index of that size, but they certainly should not have to 
> find out about the Lucene limit by accident.
> A subsequent issue will recommend that Solr detect and appropriately report 
> to the user when and if this limit is hit.






[jira] [Commented] (SOLR-3504) Clearly document the limit for the maximum number of documents in a single index

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246431#comment-16246431
 ] 

ASF subversion and git services commented on SOLR-3504:
---

Commit 0546a64acfaed28da1cd1af8de9a069ed292a2c1 in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0546a64 ]

SOLR-3504: add note about hard num docs limit in Lucene to planning 
installation section


> Clearly document the limit for the maximum number of documents in a single 
> index
> 
>
> Key: SOLR-3504
> URL: https://issues.apache.org/jira/browse/SOLR-3504
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation, update
>Affects Versions: 3.6
>Reporter: Jack Krupansky
>Assignee: Cassandra Targett
>Priority: Minor
>
> Although the actual limit to the number of documents supported by a Solr 
> implementation depends on the number of shards, unless the user is intimately 
> familiar with the implementation of Lucene, they may not realize that a 
> single Solr index (single shard, single core) is limited to approximately 
> 2.14 billion documents regardless of their processing power or memory. This 
> limit should be clearly documented for the Solr user.
> Granted, users should be strongly discouraged from attempting to create a 
> single, unsharded index of that size, but they certainly should not have to 
> find out about the Lucene limit by accident.
> A subsequent issue will recommend that Solr detect and appropriately report 
> to the user when and if this limit is hit.






[jira] [Assigned] (SOLR-3504) Clearly document the limit for the maximum number of documents in a single index

2017-11-09 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett reassigned SOLR-3504:
---

Assignee: Cassandra Targett

> Clearly document the limit for the maximum number of documents in a single 
> index
> 
>
> Key: SOLR-3504
> URL: https://issues.apache.org/jira/browse/SOLR-3504
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation, update
>Affects Versions: 3.6
>Reporter: Jack Krupansky
>Assignee: Cassandra Targett
>Priority: Minor
>
> Although the actual limit to the number of documents supported by a Solr 
> implementation depends on the number of shards, unless the user is intimately 
> familiar with the implementation of Lucene, they may not realize that a 
> single Solr index (single shard, single core) is limited to approximately 
> 2.14 billion documents regardless of their processing power or memory. This 
> limit should be clearly documented for the Solr user.
> Granted, users should be strongly discouraged from attempting to create a 
> single, unsharded index of that size, but they certainly should not have to 
> find out about the Lucene limit by accident.
> A subsequent issue will recommend that Solr detect and appropriately report 
> to the user when and if this limit is hit.






[jira] [Resolved] (SOLR-3398) Using solr.UUIDField give -> Caused by: org.apache.solr.common.SolrException: Invalid UUID String: '1'

2017-11-09 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-3398.
-
Resolution: Not A Problem

From the comment history, I feel like this was a misconfiguration error and 
can be closed - please reopen if that is incorrect.

> Using solr.UUIDField give -> Caused by: org.apache.solr.common.SolrException: 
> Invalid UUID String: '1'
> --
>
> Key: SOLR-3398
> URL: https://issues.apache.org/jira/browse/SOLR-3398
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 3.6
> Environment: Linux: Centos 6.2,  2.6.32-220.7.1.el6.x86_64 #1 SMP Wed 
> Mar 7 00:52:02 GMT 2012 x86_64 x86_64 x86_64 GNU/Linux
> Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
>Reporter: Marek Dabrowski
>
> I am trying to generate an index for an Oracle data dump (the files hold about 
> 100 000 000 docs, about 30TB of data in total). The data doesn't have a primary 
> key field, so I would like to generate one as described here -> 
> http://wiki.apache.org/solr/UniqueKey
> I added to schema.conf
>  
>  
> and a description field from the dump files.
> When I start Jetty, this error occurs:
> Apr 23, 2012 2:47:13 PM org.apache.solr.common.SolrException log
> SEVERE: org.apache.solr.common.SolrException
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:600)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:483)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:335)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:219)
> at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:161)
> at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96)
> at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
> at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
> at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
> at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
> at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
> at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
> at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
> at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
> at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
> at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
> at org.mortbay.jetty.Server.doStart(Server.java:224)
> at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.mortbay.start.Main.invokeMain(Main.java:194)
> at org.mortbay.start.Main.start(Main.java:534)
> at org.mortbay.start.Main.start(Main.java:441)
> at org.mortbay.start.Main.main(Main.java:119)
> Caused by: org.apache.solr.common.SolrException: Error initializing QueryElevationComponent.
> at org.apache.solr.handler.component.QueryElevationComponent.inform(QueryElevationComponent.java:202)
> at org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:527)
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:594)
> ... 30 more
> Caused by: org.apache.solr.common.SolrException: Invalid UUID String: '1'
> at org.apache.solr.schema.UUIDField.toInternal(UUIDField.java:85)
> at org.apache.solr.schema.FieldType.readableToIndexed(FieldType.java:379)
> at org.apache.solr.handler.component.QueryElevationComponent$ElevationObj.<init>(QueryElevationComponent.java:119)
> at org.apache.solr.handler.component.QueryElevationComponent.loadElevationMap(QueryElevationComponent.java:264)
>

[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 298 - Failure!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/298/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

No tests ran.

Build Log:
[...truncated 957 lines...]
   [junit4] Suite: org.apache.lucene.store.TestMultiMMap
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{docid=PostingsFormat(name=Asserting), junk=FSTOrd50}, docValues:{}, 
maxPointsInLeafNode=1608, maxMBSortInHeap=5.210417873157954, 
sim=RandomSimilarity(queryNorm=false): {}, locale=ja-JP, 
timezone=America/Santiago
   [junit4]   2> NOTE: Windows 10 10.0 x86/Oracle Corporation 1.8.0_144 
(32-bit)/cpus=3,threads=1,free=413670264,total=518979584
   [junit4]   2> NOTE: All tests run in this JVM: [TestDisjunctionMaxQuery, 
TestFixedBitSet, TestBytesRefHash, TestTransactionRollback, 
TestPositiveScoresOnlyCollector, TestNoDeletionPolicy, TestDocIDMerger, 
TestSimpleExplanations, TestSimilarity, Test2BSortedDocValuesFixedSorted, 
TestBlockPostingsFormat, TestBooleanScorer, TestPostingsOffsets, 
TestIndexWriterLockRelease, TestNGramPhraseQuery, TestTermQuery, 
TestManyFields, TestControlledRealTimeReopenThread, 
TestApproximationSearchEquivalence, TestDateTools, TestIndexOrDocValuesQuery, 
TestStringHelper, TestMixedDocValuesUpdates, TestMatchNoDocsQuery, 
FiniteStringsIteratorTest, TestIsCurrent, TestLongPostings, TestNot, TestField, 
TestPerFieldPostingsFormat, TestNearSpansOrdered, TestBoostQuery, 
TestSortRandom, TestLucene60FieldInfoFormat, TestRadixSelector, TestFlex, 
TestDirectoryReaderReopen, Test2BDocs, TestPerFieldDocValuesFormat, 
TestTermsEnum2, TestNRTReaderWithThreads, TestIndexWriterFromReader, 
TestCachingTokenFilter, TestTransactions, TestPrefixCodedTerms, 
TestStressAdvance, TestFieldType, TestTermVectorsReader, TestRegExp, 
TestTermdocPerf, Test2BNumericDocValues, TestIndexWriterDeleteByQuery, 
TestIndexWriterForceMerge, TestIntroSorter, TestNativeFSLockFactory, 
TestCompiledAutomaton, TestSegmentMerger, TestRollback, 
TestBlockPostingsFormat3, TestSortedNumericSortField, TestSpanCollection, 
TestBKD, TestNRTReaderCleanup, TestLucene50CompoundFormat, TestAutomatonQuery, 
TestLucene50StoredFieldsFormatHighCompression, TestIndexWriterNRTIsCurrent, 
TestIndexWriterThreadsToSegments, TestCharsRef, TestMergePolicyWrapper, 
TestIndexCommit, TestQueryRescorer, TestGeoUtils, TestTrackingDirectoryWrapper, 
TestMutablePointsReaderUtils, TestSparseFixedBitDocIdSet, 
TestFilterDirectoryReader, TestParallelTermEnum, TestStressIndexing2, 
TestMergeSchedulerExternal, TestFilterSpans, TestFastCompressionMode, 
TestTotalHitCountCollector, TestDocValuesScoring, TestSetOnce, 
TestLucene70NormsFormat, TestTryDelete, TestMutableValues, TestStressNRT, 
TestSortedSetDocValues, TestSubScorerFreqs, TestFilterDirectory, 
TestLongBitSet, TestCharArraySet, TestMultiLevelSkipList, 
TestOneMergeWrappingMergePolicy, TestSpanOrQuery, TestFastDecompressionMode, 
TestSpanFirstQuery, TestIntRangeFieldQueries, TestPackedInts, TestSpansEnum, 
TestSpanNearQuery, TestConstantScoreQuery, Test2BPostings, 
TestIndexWriterDelete, TestDocValuesRewriteMethod, TestIntArrayDocIdSet, 
TestStressIndexing, TestBoolean2ScorerSupplier, TestFileSwitchDirectory, 
TestByteBlockPool, TestFieldMaskingSpanQuery, TestSimpleSearchEquivalence, 
TestIndexWriter, TestIndexWriterReader, TestFSTs, TestBytesStore, 
TestIndexWriterWithThreads, TestGraphTokenizers, TestShardSearching, 
TestMultiMMap]
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestMultiMMap 
-Dtests.seed=F0C293843B2DD05E -Dtests.slow=true -Dtests.locale=ja-JP 
-Dtests.timezone=America/Santiago -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] ERROR   0.00s J1 | TestMultiMMap (suite) <<<
   [junit4]> Throwable #1: java.io.IOException: Could not remove the 
following files (in the order of attempts):
   [junit4]>
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMultiMMap_F0C293843B2DD05E-001\testSeekZero-004:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMultiMMap_F0C293843B2DD05E-001\testSeekZero-004
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([F0C293843B2DD05E]:0)
   [junit4]>at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4] Completed [218/455 (1!)] on J1 in 6.78s, 54 tests, 1 error <<< 
FAILURES!

[...truncated 12238 lines...]
   [junit4] Suite: org.apache.solr.TestCursorMarkWithoutUniqueKey
   [junit4]   2> 2389780 INFO  
(SUITE-TestCursorMarkWithoutUniqueKey-seed#[6FBB5D5A3049EDFB]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 

[jira] [Updated] (SOLR-3504) Clearly document the limit for the maximum number of documents in a single index

2017-11-09 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-3504:

Component/s: documentation

> Clearly document the limit for the maximum number of documents in a single 
> index
> 
>
> Key: SOLR-3504
> URL: https://issues.apache.org/jira/browse/SOLR-3504
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation, update
>Affects Versions: 3.6
>Reporter: Jack Krupansky
>Priority: Minor
>
> Although the actual limit to the number of documents supported by a Solr 
> implementation depends on the number of shards, unless the user is intimately 
> familiar with the implementation of Lucene, they may not realize that a 
> single Solr index (single shard, single core) is limited to approximately 
> 2.14 billion documents regardless of their processing power or memory. This 
> limit should be clearly documented for the Solr user.
> Granted, users should be strongly discouraged from attempting to create a 
> single, unsharded index of that size, but they certainly shouldn't have to 
> find out about the Lucene limit by accident.
> A subsequent issue will recommend that Solr detect and appropriately report 
> to the user when and if this limit is hit.
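The ~2.14 billion figure quoted above can be illustrated with a minimal sketch (not Solr/Lucene code): Lucene addresses documents within a single index by Java `int` doc IDs, so the hard ceiling is tied to `Integer.MAX_VALUE`.

```java
// Minimal illustration of where the ~2.14 billion per-index ceiling comes
// from: Lucene doc IDs are Java ints, so a single index cannot address
// more documents than Integer.MAX_VALUE (the actual enforced constant in
// Lucene, IndexWriter.MAX_DOCS, is slightly below this value).
public class MaxDocsSketch {
    public static void main(String[] args) {
        // 2,147,483,647 ~= 2.14 billion
        System.out.println(Integer.MAX_VALUE);
    }
}
```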



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9120) LukeRequestHandler logs WARN "Error getting file length for [segments_NNN]" for inconsequential NoSuchFileException situations -- looks scary but is not a problem, logging should be reduced

2017-11-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-9120.

   Resolution: Fixed
Fix Version/s: master (8.0)
   7.2

> LukeRequestHandler logs WARN "Error getting file length for [segments_NNN]" 
> for inconsequential NoSuchFileException situations -- looks scary but is not 
> a problem, logging should be reduced
> -
>
> Key: SOLR-9120
> URL: https://issues.apache.org/jira/browse/SOLR-9120
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.5, 6.0
>Reporter: Markus Jelsma
>Assignee: Hoss Man
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-9120.patch, SOLR-9120.patch, SOLR-9120.patch
>
>
> Beginning with Solr 5.5, the LukeRequestHandler started attempting to report 
> the name and file size of the segments file for the _current_ 
> Searcher+IndexReader in use by Solr -- however the filesize information is 
> not always available from the Directory in cases where "on disk" commits have 
> caused that file to be removed, for example...
> * you perform index updates & commits w/o "newSearcher" being opened
> * you "concurrently" make requests to the LukeRequestHandler or the 
> CoreAdminHandler requesting "STATUS" (ie: after the commit, before any 
> newSearcher)
> ** these requests can come from the Admin UI passively if it's open in a 
> browser
> In situations like this, a decision was made in SOLR-8587 to log a WARNing in 
> the event that the segments file size could not be determined -- but these 
> WARNing messages look scary and have led (many) users to assume something is 
> wrong with their solr index.
> We should reduce the severity of these log messages, and improve the wording 
> to make it more clear that this is not a fundamental problem with the index.
> 
> Here's some trivial steps to reproduce the WARN message...
> {noformat}
> $ bin/solr -e techproducts
> ...
> $ tail -f example/techproducts/logs/solr.log
> ...
> {noformat}
> In another terminal...
> {noformat}
> $ curl -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/techproducts/update?commit=true=false'
>  --data-binary '[{"id":"HOSS"}]'
> ...
> $ curl 'http://localhost:8983/solr/techproducts/admin/luke'
> ...
> {noformat}
> When the "/admin/luke" URL is hit, this will show up in the logs – but the 
> luke request will finish correctly...
> {noformat}
> WARN  - 2017-11-08 17:23:44.574; [   x:techproducts] 
> org.apache.solr.handler.admin.LukeRequestHandler; Error getting file length 
> for [segments_2]
> java.nio.file.NoSuchFileException: 
> /home/hossman/lucene/dev/solr/example/techproducts/solr/techproducts/data/index/segments_2
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>   at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>   at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.Files.size(Files.java:2332)
>   at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:128)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:611)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:584)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:136)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
> ...
> INFO  - 2017-11-08 17:23:44.587; [   x:techproducts] 
> org.apache.solr.core.SolrCore; [techproducts]  webapp=/solr path=/admin/luke 
> params={} status=0 QTime=15
> {noformat}
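The fix described above amounts to treating a missing `segments_N` file as an expected race rather than an error. A hedged, self-contained sketch of that idea (plain JDK code, not the actual LukeRequestHandler patch; the helper name is hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class QuietFileLength {
    // Hypothetical helper: when the segments file was already removed by a
    // commit (before any newSearcher opened), report "unknown" (-1) quietly
    // instead of letting a scary WARN + stack trace reach the logs.
    static long fileLengthOrMinusOne(Path p) {
        try {
            return Files.size(p);
        } catch (NoSuchFileException e) {
            // expected race: file deleted between commit and newSearcher
            return -1;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(fileLengthOrMinusOne(Path.of("no-such-segments_2")));
    }
}
```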



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 20874 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20874/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node5:{"core":"c8n_1x3_lf_shard1_replica_n2","base_url":"http://127.0.0.1:38957","node_name":"127.0.0.1:38957_","state":"active","type":"NRT","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/25)={   
"pullReplicas":"0",   "replicationFactor":"1",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node4":{   "core":"c8n_1x3_lf_shard1_replica_n1",   
"base_url":"http://127.0.0.1:33449",   "node_name":"127.0.0.1:33449_",  
 "state":"down",   "type":"NRT"}, "core_node5":{
   "core":"c8n_1x3_lf_shard1_replica_n2",   
"base_url":"http://127.0.0.1:38957",   "node_name":"127.0.0.1:38957_",  
 "state":"active",   "type":"NRT",   "leader":"true"},  
   "core_node6":{   "state":"down",   
"base_url":"http://127.0.0.1:38835",   
"core":"c8n_1x3_lf_shard1_replica_n3",   
"node_name":"127.0.0.1:38835_",   "type":"NRT",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"3",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node5:{"core":"c8n_1x3_lf_shard1_replica_n2","base_url":"http://127.0.0.1:38957","node_name":"127.0.0.1:38957_","state":"active","type":"NRT","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/25)={
  "pullReplicas":"0",
  "replicationFactor":"1",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node4":{
  "core":"c8n_1x3_lf_shard1_replica_n1",
  "base_url":"http://127.0.0.1:33449",
  "node_name":"127.0.0.1:33449_",
  "state":"down",
  "type":"NRT"},
"core_node5":{
  "core":"c8n_1x3_lf_shard1_replica_n2",
  "base_url":"http://127.0.0.1:38957",
  "node_name":"127.0.0.1:38957_",
  "state":"active",
  "type":"NRT",
  "leader":"true"},
"core_node6":{
  "state":"down",
  "base_url":"http://127.0.0.1:38835",
  "core":"c8n_1x3_lf_shard1_replica_n3",
  "node_name":"127.0.0.1:38835_",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"3",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([BB3A76160B322F48:336E49CCA5CE42B0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:169)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:56)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7010 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7010/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC

9 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_4AD81EEA31B91251-001\3.6.0-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_4AD81EEA31B91251-001\3.6.0-cfs-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_4AD81EEA31B91251-001\3.6.0-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_4AD81EEA31B91251-001\3.6.0-cfs-001

at __randomizedtesting.SeedInfo.seed([4AD81EEA31B91251]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.lucene.search.TestSimilarity.testSimilarity

Error Message:
expected:<2.0> but was:<1.0>

Stack Trace:
java.lang.AssertionError: expected:<2.0> but was:<1.0>
at 
__randomizedtesting.SeedInfo.seed([E8791717EDD7A545:19D6572FED37305E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.lucene.search.TestSimilarity$4.collect(TestSimilarity.java:146)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:293)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:236)
at 
org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:79)
at 
org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:63)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:645)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:449)
at 
org.apache.lucene.search.TestSimilarity.testSimilarity(TestSimilarity.java:137)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 

[jira] [Resolved] (SOLR-2786) solr binary releases do not include readily available copies of all lucene jars

2017-11-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-2786.

   Resolution: Not A Problem
Fix Version/s: 6.0

yup - no longer a concern

> solr binary releases do not include readily available copies of all lucene 
> jars
> ---
>
> Key: SOLR-2786
> URL: https://issues.apache.org/jira/browse/SOLR-2786
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Fix For: 6.0
>
>
> a user on the mailing list was asking a question about embedded solr, and was 
> getting class not found errors for lucene core classes - which made me 
> realize that the only place lucene jars are available in the solr binary 
> release is embedded inside the solr war, which is not entirely obvious to 
> users who are trying to develop java applications around solr



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2786) solr binary releases do not include readily available copies of all lucene jars

2017-11-09 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246194#comment-16246194
 ] 

Cassandra Targett commented on SOLR-2786:
-

I think this is no longer an issue since there is no longer a war?

> solr binary releases do not include readily available copies of all lucene 
> jars
> ---
>
> Key: SOLR-2786
> URL: https://issues.apache.org/jira/browse/SOLR-2786
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> a user on the mailing list was asking a question about embedded solr, and was 
> getting class not found errors for lucene core classes - which made me 
> realize that the only place lucene jars are available in the solr binary 
> release is embedded inside the solr war, which is not entirely obvious to 
> users who are trying to develop java applications around solr



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Tests-MMAP-master - Build # 454 - Failure

2017-11-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Tests-MMAP-master/454/

2 tests failed.
FAILED:  org.apache.lucene.search.TestSimilarity.testSimilarity

Error Message:
expected:<2.0> but was:<1.0>

Stack Trace:
java.lang.AssertionError: expected:<2.0> but was:<1.0>
at 
__randomizedtesting.SeedInfo.seed([4BC3F707CDE44D88:BA6CB73FCD04D893]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.lucene.search.TestSimilarity$4.collect(TestSimilarity.java:146)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:293)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:236)
at 
org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:79)
at 
org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:63)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:645)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:449)
at 
org.apache.lucene.search.TestSimilarity.testSimilarity(TestSimilarity.java:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Created] (SOLR-11630) Support SVG in ExplainAugmenterFactory

2017-11-09 Thread Timo Hund (JIRA)
Timo Hund created SOLR-11630:


 Summary: Support SVG in ExplainAugmenterFactory
 Key: SOLR-11630
 URL: https://issues.apache.org/jira/browse/SOLR-11630
 Project: Solr
  Issue Type: Wish
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Timo Hund
Priority: Minor


It would be nice to have the explain response as a pie chart to see which field 
has which impact on the score. My idea would be to support this in 
ExplainAugmenterFactory as "svg".

Do you think that this makes sense? I could try to have a look and see if I am 
able to implement that.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-2140) Distributed search treats "score" as multivalued if schema has matching multivalued dynamicField

2017-11-09 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-2140.
-
Resolution: Cannot Reproduce

> Distributed search treats "score" as multivalued if schema has matching 
> multivalued dynamicField
> 
>
> Key: SOLR-2140
> URL: https://issues.apache.org/jira/browse/SOLR-2140
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 1.4.1
>Reporter: Hoss Man
>
> http://search.lucidimagination.com/search/document/e8d10e56ee3ac24b/solr_with_example_jetty_and_score_problem
> {noformat}
> : But when I issue the query with shard(two instances), the response XML will
> : be like following.
> : as you can see, that score has bee tranfer to a element  of 
> ...
> : 
> : 1.9808292
> : 
> The root cause of these seems to be your catchall dynamic field
> declaration...
> : : multiValued="true" termVectors="true"
> : termPositions="true"
> : termOffsets="true" omitNorms="false"/>
> ...that line (specificly the fact that it's multiValued="true") seems to
> be confusing the results aggregation code.  my guess is that it's
> looping over all the fields, and looking them up in the schema to see if
> they are single/multi valued but not recognizing that "score" is
> special.
> {noformat}
> This is trivial to reproduce using the example schema, just add a 
> dynamicField type like this...
> {noformat}
> 
> {noformat}
> Load up some data, and then hit this URL...
> http://localhost:8983/solr/select?q=*:*=score,id=localhost:8983/solr/
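The aggregation pitfall described above can be sketched in isolation: "score" is a synthetic pseudo-field, so any code that consults schema `multiValued` flags for returned fields has to special-case it before a catchall `dynamicField` pattern declares it multivalued. This is a hypothetical illustration (a plain map standing in for the schema, not Solr's actual aggregation code):

```java
import java.util.Map;

public class ScoreFieldSketch {
    // Hypothetical check: never treat the synthetic "score" pseudo-field as
    // multivalued, regardless of what a catchall dynamicField pattern says.
    static boolean treatAsMultiValued(String field,
                                      Map<String, Boolean> schemaMultiValued) {
        if ("score".equals(field)) {
            return false; // score is always a single float per document
        }
        return schemaMultiValued.getOrDefault(field, false);
    }

    public static void main(String[] args) {
        // stand-in for a catchall dynamicField declared multiValued="true"
        Map<String, Boolean> schema = Map.of("*", true);
        System.out.println(treatAsMultiValued("score", schema));
    }
}
```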



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2140) Distributed search treats "score" as multivalued if schema has matching multivalued dynamicField

2017-11-09 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246145#comment-16246145
 ] 

Cassandra Targett commented on SOLR-2140:
-

I can't reproduce this on 7.1, so I think some change in the intervening years 
has fixed this.

For the query:

{code}
http://localhost:8983/solr/gettingstarted/select?fl=score,id=*:*=xml=localhost:8983/solr/gettingstarted_shard1_replica_n1,localhost:8983/solr/gettingstarted_shard2_replica_n4
{code}

I get:

{code}

<doc>
  <str name="id">GB18030TEST</str>
  <float name="score">1.0</float>
</doc>
<doc>
  <str name="id">IW-02</str>
  <float name="score">1.0</float>
</doc>
<doc>
  <str name="id">MA147LL/A</str>
  <float name="score">1.0</float>
</doc>
<doc>
  <str name="id">adata</str>
  <float name="score">1.0</float>
</doc>
{code}

> Distributed search treats "score" as multivalued if schema has matching 
> multivalued dynamicField
> 
>
> Key: SOLR-2140
> URL: https://issues.apache.org/jira/browse/SOLR-2140
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 1.4.1
>Reporter: Hoss Man
>
> http://search.lucidimagination.com/search/document/e8d10e56ee3ac24b/solr_with_example_jetty_and_score_problem
> {noformat}
> : But when I issue the query with shard(two instances), the response XML will
> : be like following.
> : as you can see, that score has bee tranfer to a element  of 
> ...
> : 
> : 1.9808292
> : 
> The root cause of these seems to be your catchall dynamic field
> declaration...
> : : multiValued="true" termVectors="true"
> : termPositions="true"
> : termOffsets="true" omitNorms="false"/>
> ...that line (specificly the fact that it's multiValued="true") seems to
> be confusing the results aggregation code.  my guess is that it's
> looping over all the fields, and looking them up in the schema to see if
> they are single/multi valued but not recognizing that "score" is
> special.
> {noformat}
> This is trivial to reproduce using the example schema, just add a 
> dynamicField type like this...
> {noformat}
> 
> {noformat}
> Load up some data, and then hit this URL...
> http://localhost:8983/solr/select?q=*:*=score,id=localhost:8983/solr/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9120) LukeRequestHandler logs WARN "Error getting file length for [segments_NNN]" for inconsequential NoSuchFileException situations -- looks scary but is not a problem, logging should be reduced

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246139#comment-16246139
 ] 

ASF subversion and git services commented on SOLR-9120:
---

Commit e0455440fe241477f9a269926a7a710e538074e2 in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e045544 ]

SOLR-9120: Reduce log level for inconsequential NoSuchFileException that 
LukeRequestHandler may encounter

(cherry picked from commit 15fe53e10be74a0c953c4e0fac6815798cf66772)


> LukeRequestHandler logs WARN "Error getting file length for [segments_NNN]" 
> for inconsequential NoSuchFileException situations -- looks scary but is not 
> a problem, logging should be reduced
> -
>
> Key: SOLR-9120
> URL: https://issues.apache.org/jira/browse/SOLR-9120
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.5, 6.0
>Reporter: Markus Jelsma
>Assignee: Hoss Man
> Attachments: SOLR-9120.patch, SOLR-9120.patch, SOLR-9120.patch
>
>
> Beginning with Solr 5.5, the LukeRequestHandler started attempting to report 
> the name and file size of the segments file for the _current_ 
> Searcher+IndexReader in use by Solr -- however the filesize information is 
> not always available from the Directory in cases where "on disk" commits have 
> caused that file to be removed, for example...
> * you perform index updates & commits w/o "newSearcher" being opened
> * you "concurrently" make requests to the LukeRequestHandler or the 
> CoreAdminHandler requesting "STATUS" (ie: after the commit, before any 
> newSearcher)
> ** these requests can come from the Admin UI passively if it's open in a 
> browser
> In situations like this, a decision was made in SOLR-8587 to log a WARNing in 
> the event that the segments file size could not be determined -- but these 
> WARNing messages look scary and have led (many) users to assume something is 
> wrong with their solr index.
> We should reduce the severity of these log messages, and improve the wording 
> to make it clearer that this is not a fundamental problem with the index.
> 
> Here are some trivial steps to reproduce the WARN message...
> {noformat}
> $ bin/solr -e techproducts
> ...
> $ tail -f example/techproducts/logs/solr.log
> ...
> {noformat}
> In another terminal...
> {noformat}
> $ curl -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/techproducts/update?commit=true=false'
>  --data-binary '[{"id":"HOSS"}]'
> ...
> $ curl 'http://localhost:8983/solr/techproducts/admin/luke'
> ...
> {noformat}
> When the "/admin/luke" URL is hit, this will show up in the logs – but the 
> luke request will finish correctly...
> {noformat}
> WARN  - 2017-11-08 17:23:44.574; [   x:techproducts] 
> org.apache.solr.handler.admin.LukeRequestHandler; Error getting file length 
> for [segments_2]
> java.nio.file.NoSuchFileException: 
> /home/hossman/lucene/dev/solr/example/techproducts/solr/techproducts/data/index/segments_2
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>   at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>   at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.Files.size(Files.java:2332)
>   at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:128)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:611)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:584)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:136)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
> ...
> INFO  - 2017-11-08 17:23:44.587; [   x:techproducts] 
> org.apache.solr.core.SolrCore; [techproducts]  webapp=/solr path=/admin/luke 
> params={} status=0 QTime=15
> {noformat}
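
The committed fix can be sketched as a standalone illustration (toy class and names are hypothetical, not the actual LukeRequestHandler code): treat NoSuchFileException as the expected, harmless case and log it quietly, reserving WARN for genuinely unexpected IO errors.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.util.logging.Level;
import java.util.logging.Logger;

// Toy sketch of the log-level reduction; not the real Solr code.
class FileLengthSketch {
    static final Logger log = Logger.getLogger("luke.sketch");

    // Returns the file length, or -1 if it cannot be determined.
    static long getFileLength(Path p) {
        try {
            return Files.size(p);
        } catch (NoSuchFileException e) {
            // Expected when an on-disk commit removed the file before a
            // newSearcher was opened; harmless, so keep the log quiet.
            log.log(Level.FINE, "Segments file {0} already removed by a commit", p);
            return -1;
        } catch (IOException e) {
            // Anything else is genuinely unexpected and still worth a WARN.
            log.log(Level.WARNING, "Error getting file length for " + p, e);
            return -1;
        }
    }
}
```

Callers already handle a -1 length, so only the severity and wording of the message change, not the behavior.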



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Created] (SOLR-11629) CloudSolrClient.Builder should accept a zk host

2017-11-09 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-11629:


 Summary: CloudSolrClient.Builder should accept a zk host
 Key: SOLR-11629
 URL: https://issues.apache.org/jira/browse/SOLR-11629
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


Today we need to create an empty builder and then call either withZkHost or 
withSolrUrl
{code}
SolrClient solrClient = new 
CloudSolrClient.Builder().withZkHost("localhost:9983").build();
solrClient.request(updateRequest, "gettingstarted");
{code}

What if we had two constructors: one that accepts a zkHost and one that 
accepts a SolrUrl?

The advantages that I can think of are:
- It will be obvious to users that we support two mechanisms of creating a 
CloudSolrClient. The SolrUrl option is cool since applications don't need to 
know about ZooKeeper, and new users will learn about it. Maybe our examples 
in the ref guide should use this?
- Today people can set both zkHost and solrUrl, but CloudSolrClient can only 
utilize one of them

HttpSolrClient's Builder already accepts the host:
{code}
HttpSolrClient client = new 
HttpSolrClient.Builder("http://localhost:8983/solr").build();
client.request(updateRequest, "techproducts");
{code}
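
The two-constructor proposal can be sketched with a toy builder (class and field names are hypothetical, not the actual SolrJ API): each constructor fixes one connection mechanism, so it becomes impossible to set both zkHost and solrUrl.

```java
import java.util.List;

// Toy sketch of the proposed two-constructor Builder; not the real SolrJ API.
class CloudClientBuilderSketch {
    final List<String> zkHosts; // set when connecting via ZooKeeper
    final String solrUrl;       // set when connecting via a Solr base URL

    // Constructor 1: connect through ZooKeeper.
    CloudClientBuilderSketch(List<String> zkHosts) {
        this.zkHosts = zkHosts;
        this.solrUrl = null;
    }

    // Constructor 2: connect through a Solr URL (ZooKeeper stays hidden).
    CloudClientBuilderSketch(String solrUrl) {
        this.solrUrl = solrUrl;
        this.zkHosts = null;
    }

    String describe() {
        // Exactly one mechanism is ever set, so there is no ambiguity about
        // which one the client should use.
        return zkHosts != null ? "zk:" + zkHosts : "url:" + solrUrl;
    }
}
```

Because the choice is made at construction time, the "both set, only one used" problem described above cannot arise.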



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 781 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/781/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.MoveReplicaHDFSFailoverTest

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at __randomizedtesting.SeedInfo.seed([ADC430DFD62C4DEB]:0)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1207)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:840)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:746)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:616)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:105)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:63)
at 
org.apache.solr.cloud.MoveReplicaHDFSFailoverTest.setupClass(MoveReplicaHDFSFailoverTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.HdfsDirectoryFactoryTest

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at __randomizedtesting.SeedInfo.seed([ADC430DFD62C4DEB]:0)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1207)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:840)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:746)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:616)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:105)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:67)
at 
org.apache.solr.core.HdfsDirectoryFactoryTest.setupClass(HdfsDirectoryFactoryTest.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 

[jira] [Commented] (SOLR-9120) LukeRequestHandler logs WARN "Error getting file length for [segments_NNN]" for inconsequential NoSuchFileException situations -- looks scary but is not a problem, loggi

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246012#comment-16246012
 ] 

ASF subversion and git services commented on SOLR-9120:
---

Commit 15fe53e10be74a0c953c4e0fac6815798cf66772 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=15fe53e ]

SOLR-9120: Reduce log level for inconsequential NoSuchFileException that 
LukeRequestHandler may encounter


> LukeRequestHandler logs WARN "Error getting file length for [segments_NNN]" 
> for inconsequential NoSuchFileException situations -- looks scary but is not 
> a problem, logging should be reduced
> -
>
> Key: SOLR-9120
> URL: https://issues.apache.org/jira/browse/SOLR-9120
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.5, 6.0
>Reporter: Markus Jelsma
>Assignee: Hoss Man
> Attachments: SOLR-9120.patch, SOLR-9120.patch, SOLR-9120.patch
>
>
> Beginning with Solr 5.5, the LukeRequestHandler started attempting to report 
> the name and file size of the segments file for the _current_ 
> Searcher+IndexReader in use by Solr -- however the filesize information is 
> not always available from the Directory in cases where "on disk" commits have 
> caused that file to be removed, for example...
> * you perform index updates & commits w/o "newSearcher" being opened
> * you "concurrently" make requests to the LukeRequestHandler or the 
> CoreAdminHandler requesting "STATUS" (ie: after the commit, before any 
> newSearcher)
> ** these requests can come from the Admin UI passively if it's open in a 
> browser
> In situations like this, a decision was made in SOLR-8587 to log a WARNing in 
> the event that the segments file size could not be determined -- but these 
> WARNing messages look scary and have led (many) users to assume something is 
> wrong with their solr index.
> We should reduce the severity of these log messages, and improve the wording 
> to make it clearer that this is not a fundamental problem with the index.
> 
> Here are some trivial steps to reproduce the WARN message...
> {noformat}
> $ bin/solr -e techproducts
> ...
> $ tail -f example/techproducts/logs/solr.log
> ...
> {noformat}
> In another terminal...
> {noformat}
> $ curl -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/techproducts/update?commit=true=false'
>  --data-binary '[{"id":"HOSS"}]'
> ...
> $ curl 'http://localhost:8983/solr/techproducts/admin/luke'
> ...
> {noformat}
> When the "/admin/luke" URL is hit, this will show up in the logs – but the 
> luke request will finish correctly...
> {noformat}
> WARN  - 2017-11-08 17:23:44.574; [   x:techproducts] 
> org.apache.solr.handler.admin.LukeRequestHandler; Error getting file length 
> for [segments_2]
> java.nio.file.NoSuchFileException: 
> /home/hossman/lucene/dev/solr/example/techproducts/solr/techproducts/data/index/segments_2
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>   at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>   at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.Files.size(Files.java:2332)
>   at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:128)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:611)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:584)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:136)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
> ...
> INFO  - 2017-11-08 17:23:44.587; [   x:techproducts] 
> org.apache.solr.core.SolrCore; [techproducts]  webapp=/solr path=/admin/luke 
> params={} status=0 QTime=15
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9120) LukeRequestHandler logs WARN "Error getting file length for [segments_NNN]" for inconsequential NoSuchFileException situations -- looks scary but is not a problem, logging

2017-11-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9120:
---
Affects Version/s: 5.5
  Description: 
Beginning with Solr 5.5, the LukeRequestHandler started attempting to report the 
name and file size of the segments file for the _current_ Searcher+IndexReader 
in use by Solr -- however the filesize information is not always available from 
the Directory in cases where "on disk" commits have caused that file to be 
removed, for example...

* you perform index updates & commits w/o "newSearcher" being opened
* you "concurrently" make requests to the LukeRequestHandler or the 
CoreAdminHandler requesting "STATUS" (ie: after the commit, before any 
newSearcher)
** these requests can come from the Admin UI passively if it's open in a browser

In situations like this, a decision was made in SOLR-8587 to log a WARNing in 
the event that the segments file size could not be determined -- but these 
WARNing messages look scary and have led (many) users to assume something is 
wrong with their solr index.

We should reduce the severity of these log messages, and improve the wording to 
make it clearer that this is not a fundamental problem with the index.




Here are some trivial steps to reproduce the WARN message...

{noformat}
$ bin/solr -e techproducts
...
$ tail -f example/techproducts/logs/solr.log
...
{noformat}

In another terminal...

{noformat}
$ curl -H 'Content-Type: application/json' 
'http://localhost:8983/solr/techproducts/update?commit=true=false' 
--data-binary '[{"id":"HOSS"}]'
...
$ curl 'http://localhost:8983/solr/techproducts/admin/luke'
...
{noformat}

When the "/admin/luke" URL is hit, this will show up in the logs – but the luke 
request will finish correctly...

{noformat}
WARN  - 2017-11-08 17:23:44.574; [   x:techproducts] 
org.apache.solr.handler.admin.LukeRequestHandler; Error getting file length for 
[segments_2]
java.nio.file.NoSuchFileException: 
/home/hossman/lucene/dev/solr/example/techproducts/solr/techproducts/data/index/segments_2
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at 
sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at 
sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.Files.size(Files.java:2332)
at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
at 
org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:128)
at 
org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:611)
at 
org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:584)
at 
org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:136)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
...
INFO  - 2017-11-08 17:23:44.587; [   x:techproducts] 
org.apache.solr.core.SolrCore; [techproducts]  webapp=/solr path=/admin/luke 
params={} status=0 QTime=15
{noformat}

  was:
On Solr 6.0, we frequently see the following errors popping up:

{code}
java.nio.file.NoSuchFileException: 
/var/lib/solr/logs_shard2_replica1/data/index/segments_2c5
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at 
sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at 
sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.Files.size(Files.java:2332)
at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
at 
org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:131)
at 
org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:597)
at 
org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:585)
at 
org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:137)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 230 - Unstable

2017-11-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/230/

3 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not load collection from ZK: collection1

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
collection1
at 
__randomizedtesting.SeedInfo.seed([B51135E2AC48524C:3D450A3802B43FB4]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1172)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:692)
at 
org.apache.solr.common.cloud.ZkStateReader.forceUpdateCollection(ZkStateReader.java:365)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.updateMappingsFromZk(AbstractFullDistribZkTestBase.java:674)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1311)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1302)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1294)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexQueryDeleteHierarchical(FullSolrCloudDistribCmdsTest.java:528)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:134)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11625) Solr may remove live index on Solr shutdown

2017-11-09 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245958#comment-16245958
 ] 

Nikolay Martynov commented on SOLR-11625:
-

Yes.

So setup is fairly easy:
* Create a cluster.
* Start sending a lot of updates to the cluster.
* Start rebooting nodes in that cluster - 'graceful' shutdown is important.

From time to time Solr doesn't come back up, complaining that it cannot find an 
index file.

> Solr may remove live index on Solr shutdown
> ---
>
> Key: SOLR-11625
> URL: https://issues.apache.org/jira/browse/SOLR-11625
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Affects Versions: 6.6.1
>Reporter: Nikolay Martynov
>
> This has been observed in the wild:
> {noformat}
> 2017-11-07 02:35:46.909 ERROR (qtp1724399560-8090) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.SolrCore 
> :java.nio.channels.ClosedByInterruptException
>   at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>   at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:315)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:242)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192)
>   at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:356)
>   at 
> org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3044)
>   at org.apache.solr.core.SolrCore.close(SolrCore.java:1575)
>   at org.apache.solr.servlet.HttpSolrCall.destroy(HttpSolrCall.java:582)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:374)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> 2017-11-07 02:35:46.912 INFO  
> (OldIndexDirectoryCleanupThreadForCore-xxx_shard4_replica8) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.DirectoryFactory Found 1 old 
> index directories to clean-up under 
> /opt/solr/server/solr/xxx_shard4_replica8/data/ afterReload=false
> {noformat}
> After this Solr cannot start claiming that some files that are supposed to 
> exist in the index do not exist. On one occasion we observed 

[JENKINS] Lucene-Solr-Tests-master - Build # 2166 - Failure

2017-11-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2166/

14 tests failed.
FAILED:  org.apache.solr.cloud.AliasIntegrationTest.test

Error Message:
Unexpected exception type, expected SolrException but got 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:51328/solr/testalias, 
http://127.0.0.1:56893/solr/testalias]

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception type, expected 
SolrException but got org.apache.solr.client.solrj.SolrServerException: No live 
SolrServers available to handle this 
request:[http://127.0.0.1:51328/solr/testalias, 
http://127.0.0.1:56893/solr/testalias]
at 
__randomizedtesting.SeedInfo.seed([3AA943ED58D09FF4:B2FD7C37F62CF20C]:0)
at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2660)
at 
org.apache.solr.cloud.AliasIntegrationTest.test(AliasIntegrationTest.java:119)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)

[jira] [Commented] (SOLR-11487) Collection Alias metadata for time partitioned collections

2017-11-09 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245905#comment-16245905
 ] 

Gus Heck commented on SOLR-11487:
-

* Consolidation of the alias-related stuff in ZkStateReader is nice.
* I think the getter would be consistent and play nicer with IDE auto-complete, 
but as you say, it's more of a taste/style issue, not a material issue.
* The use of UnaryOperator rather than Function of course makes good sense.
* I suspect in this patch the int version should also be volatile, but I 
haven't looked carefully enough to see if we have sufficient monitor locking to 
make that unnecessary yet...
* I don't like moving the version out of the Aliases object. The version in ZK 
that this instance was derived from is information about the Aliases object and 
therefore should be a property of the object. I like it much better as an 
immutable property on Aliases that is set directly upon creation and can be 
made accessible from the Aliases object (I don't recall if I provided a getter 
in my patch, but it should probably be there to support folks who are working 
with aliases and some other data in ZK, so they can know whether changes to 
aliases.json have occurred). Future modifications to the code could more easily 
get the version out of sync this way, by failing to update the field in 
AliasManager, whereas having it as required in the constructor enforces and 
communicates the need to track the version. 
* This patch places the burden of coordinating a set of changes on the caller 
of the API instead of handling it transparently. This is reflected by line 111 
in the test, where you wrapped the previously independent clone operations in a 
single UnaryOperator, which basically redesigns the test so that it passes 
due to the special case in the test of consecutive invocations that are easily 
wrapped together. The present patch will require that the UnaryOperator be used 
like a transaction wrapper, whereas the previous patch tracked changes 
internally and then transparently re-applied them in the event of a conflict. 
That made a series of changes transactional by default, without any explicit 
coordination code on the caller's part, and thus somewhat foolproofed the 
usage of the API. If substantial logic is involved in calculating multiple 
pieces of metadata and/or a collection name, and that logic all has to be 
applied at the same time to ensure consistent information in ZooKeeper, then 
ALL of that logic has to be placed inside the UnaryOperator. In the prior patch 
it was sufficient to perform several clone operations and then exportToZk, with 
no effect on the organization of the calling code. I feel this patch simplifies 
the current code by adding complexity to future code using the API.
* AliasIntegrationTest.test() seems to fail?
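
The "transaction wrapper" usage pattern described above can be sketched roughly as follows. All class and method names here are hypothetical stand-ins, not the actual Solr API: the point is only that every related change must be folded into one UnaryOperator so the whole batch would be re-applied together on a ZooKeeper version conflict.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

public class AliasUpdateSketch {

    // Stand-in for the alias state stored in aliases.json.
    // Both changes happen inside the same operator, so they would be
    // applied (or re-applied after a ZK conflict) as one unit.
    public static Map<String, String> applyAll(Map<String, String> aliases) {
        Map<String, String> next = new HashMap<>(aliases);
        next.put("timeSeries", "coll_2017_10,coll_2017_11");
        next.put("timeSeries.routeField", "timestamp");
        return next;
    }

    public static Map<String, String> modifyAliases(
            Map<String, String> current,
            UnaryOperator<Map<String, String>> op) {
        // Real code would compare-and-set against the ZK node version and
        // re-invoke op on conflict; here we just apply it once.
        return op.apply(current);
    }

    public static void main(String[] args) {
        Map<String, String> updated =
                modifyAliases(new HashMap<>(), AliasUpdateSketch::applyAll);
        System.out.println(updated.get("timeSeries.routeField")); // timestamp
    }
}
```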

> Collection Alias metadata for time partitioned collections
> --
>
> Key: SOLR-11487
> URL: https://issues.apache.org/jira/browse/SOLR-11487
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
> Attachments: SOLR_11487.patch, SOLR_11487.patch, SOLR_11487.patch, 
> SOLR_11487.patch, SOLR_11487.patch, SOLR_11487.patch
>
>
> SOLR-11299 outlines an approach to using a collection Alias to refer to a 
> series of collections of a time series. We'll need to store some metadata 
> about these time series collections, such as which field of the document 
> contains the timestamp to route on.
> The current {{/aliases.json}} is a Map with a key {{collection}} which is in 
> turn a Map of alias name strings to a comma delimited list of the collections.
> _If we change the comma delimited list to be another Map to hold the existing 
> list and more stuff, older CloudSolrClient (configured to talk to ZooKeeper) 
> will break_.  Although if it's configured with an HTTP Solr URL then it would 
> not break.  There's also some read/write hassle to worry about -- we may need 
> to continue to read an aliases.json in the older format.
> Alternatively, we could add a new map entry to aliases.json, say, 
> {{collection_metadata}} keyed by alias name?
> Perhaps another very different approach is to attach metadata to the 
> configset in use?
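
The "new map entry" alternative above could be modeled as the in-memory Map that gets serialized to aliases.json. This is an illustrative sketch only; the `collection_metadata` key and the per-alias metadata field names are not a committed format.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AliasesJsonSketch {

    public static Map<String, Object> sample() {
        Map<String, Object> aliases = new LinkedHashMap<>();
        // Existing format: alias name -> comma-delimited collection list,
        // kept untouched so older ZK-based clients keep working.
        aliases.put("collection",
                Map.of("logs", "logs_2017_10,logs_2017_11"));
        // Proposed addition: per-alias metadata keyed by alias name,
        // e.g. which document field carries the routing timestamp.
        aliases.put("collection_metadata",
                Map.of("logs", Map.of("router.field", "timestamp")));
        return aliases;
    }

    public static void main(String[] args) {
        System.out.println(sample());
    }
}
```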



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 20873 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20873/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 1) 
Thread[id=239, name=jetty-launcher-12-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=236, name=jetty-launcher-12-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 
   1) Thread[id=239, name=jetty-launcher-12-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[jira] [Commented] (SOLR-11625) Solr may remove live index on Solr shutdown

2017-11-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245890#comment-16245890
 ] 

Erick Erickson commented on SOLR-11625:
---

Nikolay:

From our offline conversations, you were able to reproduce this fairly easily 
using AWS, correct? Could you outline the steps so someone can reproduce?

Thanks!

> Solr may remove live index on Solr shutdown
> ---
>
> Key: SOLR-11625
> URL: https://issues.apache.org/jira/browse/SOLR-11625
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.1
>Reporter: Nikolay Martynov
>
> This has been observed in the wild:
> {noformat}
> 2017-11-07 02:35:46.909 ERROR (qtp1724399560-8090) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.SolrCore 
> :java.nio.channels.ClosedByInterruptException
>   at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>   at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:315)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:242)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192)
>   at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:356)
>   at 
> org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3044)
>   at org.apache.solr.core.SolrCore.close(SolrCore.java:1575)
>   at org.apache.solr.servlet.HttpSolrCall.destroy(HttpSolrCall.java:582)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:374)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> 2017-11-07 02:35:46.912 INFO  
> (OldIndexDirectoryCleanupThreadForCore-xxx_shard4_replica8) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.DirectoryFactory Found 1 old 
> index directories to clean-up under 
> /opt/solr/server/solr/xxx_shard4_replica8/data/ afterReload=false
> {noformat}
> After this Solr cannot start claiming that some files that are supposed to 
> exist in the index do not exist. On one occasion we observed segments file 
> not being present.
> We were able to trace this problem to 

[jira] [Commented] (SOLR-11626) Filesystems do not guarantee order of directories updates

2017-11-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245881#comment-16245881
 ] 

Erick Erickson commented on SOLR-11626:
---

Not sure how related these two are but they're at least in the same vicinity.

> Filesystems do not guarantee order of directories updates
> -
>
> Key: SOLR-11626
> URL: https://issues.apache.org/jira/browse/SOLR-11626
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Nikolay Martynov
>
> Currently, when an index is written to disk, the following sequence of events 
> takes place:
> * write segment file
> * sync segment file
> * write segment file
> * sync segment file
> ...
> * write list of segments
> * sync list of segments
> * rename list of segments
> * sync index directory
> This sequence leaves a potential window for the system to crash 
> after 'rename list of segments' but before 'sync index directory', and 
> depending on the exact filesystem implementation this may lead to the 
> 'list of segments' being visible in the directory while some of the 
> segments are not.
> The solution is to sync the index directory after all segments have been 
> written. [This 
> commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c]
>  shows the idea implemented. I'm fairly certain that I didn't find all the 
> places where this may potentially be happening.
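
The proposed fix, fsyncing the index directory after the rename, can be sketched with plain NIO. This is an illustrative sketch under POSIX rename semantics, not the linked commit's actual code; Lucene's real IOUtils.fsync handles the platform differences.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class DirSyncSketch {

    static void publishSegments(Path dir, Path tmpSegments, Path liveSegments)
            throws IOException {
        // Earlier steps of the sequence (write + fsync each segment file,
        // write + fsync the new segments list) are assumed done already.
        // Atomically rename the new segments list into place...
        Files.move(tmpSegments, liveSegments, StandardCopyOption.ATOMIC_MOVE);
        // ...then fsync the directory itself, so the rename survives a crash.
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true);
        } catch (IOException ignored) {
            // Directory fsync is unsupported on some platforms (e.g. Windows).
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("dirsync");
        Path tmp = Files.write(dir.resolve("pending_segments_1"), new byte[]{1});
        publishSegments(dir, tmp, dir.resolve("segments_1"));
        System.out.println(Files.exists(dir.resolve("segments_1")));
    }
}
```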






[jira] [Comment Edited] (SOLR-11624) _default configset overwrites a configset if collection.configName isn't specified even if a configset of the same name already exists.

2017-11-09 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245844#comment-16245844
 ] 

Shawn Heisey edited comment on SOLR-11624 at 11/9/17 3:42 PM:
--

I agree with Erick.  If a configset already exists, Solr should *not* be 
changing it just because a collection creation with the same name was 
requested.  What if there were a hundred existing collections all using that 
configset?

Having an option on the create command to force a config overwrite wouldn't be 
a bad idea, but that shouldn't be the default behavior.


was (Author: elyograg):
I agree with Erick.  If a configset already exists, Solr should *not* be 
changing it just because a collection creation with the same name was 
requested.  What if there were a hundred existing collections all using that 
configset?


> _default configset overwrites a configset if collection.configName isn't 
> specified even if a configset of the same name already exists.
> 
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE&name=wiki&...
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.






[jira] [Commented] (SOLR-11624) _default configset overwrites a configset if collection.configName isn't specified even if a configset of the same name already exists.

2017-11-09 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245844#comment-16245844
 ] 

Shawn Heisey commented on SOLR-11624:
-

I agree with Erick.  If a configset already exists, Solr should *not* be 
changing it just because a collection creation with the same name was 
requested.  What if there were a hundred existing collections all using that 
configset?


> _default configset overwrites a configset if collection.configName isn't 
> specified even if a configset of the same name already exists.
> 
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE&name=wiki&...
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.






[jira] [Resolved] (SOLR-10840) Random Index Corruption during bulk indexing

2017-11-09 Thread Simon Rosenthal (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Rosenthal resolved SOLR-10840.

Resolution: Cannot Reproduce

After moving our production Solr server to a new AWS instance, the problem 
disappeared. Heaven knows why.

> Random Index Corruption during bulk indexing
> 
>
> Key: SOLR-10840
> URL: https://issues.apache.org/jira/browse/SOLR-10840
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.3, 6.5.1
> Environment: AWS EC2 instance running Centos 7
>Reporter: Simon Rosenthal
>
> I'm seeing a randomly occurring Index Corruption exception during a Solr data 
> ingest. This can occur anywhere during the 7-8 hours our ingests take. I'm 
> initially submitting this as a Solr bug as this is the environment I'm 
> using, but it does look as though the error is occurring in Lucene code.
> Some background:
> AWS EC2 server running CentOS 7
> java.runtime.version: 1.8.0_131-b11 (also occurred with 1.8.0_45).
> Solr 6.3.0 (have also seen it with Solr 6.5.1). It did not happen with 
> Solr 5.4 (which I can't go back to). Oddly enough, I ran Solr 6.3.0 
> uneventfully for several weeks before this problem first occurred.
> Standalone (non-cloud) environment.
> Our indexing subsystem is a complex Python script which creates multiple 
> indexing subprocesses in order to make use of multiple cores. Each subprocess 
> reads records from a MySQL database, does some significant preprocessing, and 
> sends a batch of documents (defaults to 500) to the Solr update handler 
> (using the Python 'scorched' module). Each content source (there are 5-6) 
> requires a separate instantiation of the script, and these are wrapped in a 
> Bash script to run serially.
> 
> When the exception occurs, we always see something like the following in 
> the solr.log
> 
> ERROR - 2017-06-06 14:37:34.639; [   x:stresstest1] 
> org.apache.solr.common.SolrException; org.apache.solr.common.SolrException: 
> Exception writing document id med-27840-00384802 to the index; possible 
> analysis error.
> at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:178
> ...
> Caused by: org.apache.lucene.store.AlreadyClosedException: this 
> IndexWriter is closed
> at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:740)
> at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:754)
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1558)
> at 
> org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:279)
> at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:211)
> at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:166)
> ... 42 more
> Caused by: java.io.EOFException: read past EOF: 
> MMapIndexInput(path="/indexes/solrindexes/stresstest1/index/_441.nvm")
> at 
> org.apache.lucene.store.ByteBufferIndexInput.readByte(ByteBufferIndexInput.java:75)
> at 
> org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41)
> at org.apache.lucene.store.DataInput.readInt(DataInput.java:101)
> at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:194)
> at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:255)
> at 
> org.apache.lucene.codecs.lucene53.Lucene53NormsProducer.(Lucene53NormsProducer.java:58)
> at 
> org.apache.lucene.codecs.lucene53.Lucene53NormsFormat.normsProducer(Lucene53NormsFormat.java:82)
> at 
> org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:113)
> at org.apache.lucene.index.SegmentReader.(SegmentReader.java:74)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
> at 
> org.apache.lucene.index.BufferedUpdatesStream$SegmentState.(BufferedUpdatesStream.java:384)
> at 
> org.apache.lucene.index.BufferedUpdatesStream.openSegmentStates(BufferedUpdatesStream.java:416)
> at 
> org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:261)
> at org.apache.lucene.index.IndexWriter._mergeInit(IndexWriter.java:4068)
> at org.apache.lucene.index.IndexWriter.mergeInit(IndexWriter.java:4026)
> at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3880)
> at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
> at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
> Suppressed: org.apache.lucene.index.CorruptIndexException: checksum 
> 

[jira] [Comment Edited] (SOLR-11624) _default configset overwrites a configset if collection.configName isn't specified even if a configset of the same name already exists.

2017-11-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245749#comment-16245749
 ] 

Erick Erickson edited comment on SOLR-11624 at 11/9/17 2:43 PM:


Ishan:

Yeah, reverting overwriting the named configuration set would be my preference.

BTW, the wonkiness I was getting with my patch was due to it being late and 
having a massive elevate.xml left over from chasing a totally unrelated issue 
so it might be viable after all.

Feel free to grab this JIRA if you want, I only assigned it to myself to make 
sure I didn't lose track of it.


was (Author: erickerickson):
Ishan:

Yeah, reverting overwriting the named configuration set would be my preference.

BTW, the wonkiness I was getting with my patch was due to it being late and 
having a massive elevate.xml left over from chasing a totally unrelated issue 
so it might be viable after all.

> _default configset overwrites a configset if collection.configName isn't 
> specified even if a configset of the same name already exists.
> 
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE&name=wiki&...
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.






[jira] [Commented] (SOLR-11624) _default configset overwrites a configset if collection.configName isn't specified even if a configset of the same name already exists.

2017-11-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245749#comment-16245749
 ] 

Erick Erickson commented on SOLR-11624:
---

Ishan:

Yeah, reverting overwriting the named configuration set would be my preference.

BTW, the wonkiness I was getting with my patch was due to it being late and 
having a massive elevate.xml left over from chasing a totally unrelated issue 
so it might be viable after all.

> _default configset overwrites a configset if collection.configName isn't 
> specified even if a configset of the same name already exists.
> 
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE&name=wiki&...
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.






[jira] [Commented] (SOLR-11620) Occasional NullPointerException in CloudSolrClient

2017-11-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245745#comment-16245745
 ] 

Erick Erickson commented on SOLR-11620:
---

Great info Rob, thanks! I'm not quite sure what the right fix here is, but this 
is great info for someone who is.

> Occasional NullPointerException in CloudSolrClient 
> ---
>
> Key: SOLR-11620
> URL: https://issues.apache.org/jira/browse/SOLR-11620
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6.1
>Reporter: Rob Trickey
>Priority: Minor
>
> When sending document updates to Solr, we occasionally see the following 
> error:
> java.lang.NullPointerException
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1182)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1073)
>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
>   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
>   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
>   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:152)
> Looking at the code, there is a missing null check on requestedCollections 
> around the for loop
> for (DocCollection ext : requestedCollections) 
> which causes the error. Wrapping this loop with if (requestedCollections != 
> null) would solve the problem.
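
The suggested guard can be sketched with simplified stand-in types (not the real SolrJ classes; the loop body below is a placeholder, and only the null check itself mirrors the suggestion):

```java
import java.util.Collections;
import java.util.List;

public class NullGuardSketch {

    static boolean anyStale(List<String> requestedCollections) {
        // The missing check: the list can be null when the request
        // carries no collection, so guard before iterating.
        if (requestedCollections != null) {
            for (String coll : requestedCollections) {
                if (coll.isEmpty()) return true;  // placeholder condition
            }
        }
        return false;  // null list: nothing to treat as stale
    }

    public static void main(String[] args) {
        System.out.println(anyStale(null));                          // false, no NPE
        System.out.println(anyStale(Collections.singletonList(""))); // true
    }
}
```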






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 780 - Still Unstable!

2017-11-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/780/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestCryptoKeys.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:39905/g/hn

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:39905/g/hn
at 
__randomizedtesting.SeedInfo.seed([E52EED5CF0FD0329:6D7AD2865E016ED1]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1096)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:875)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:808)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:315)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Resolved] (SOLR-11610) Use PayloadDecoder instead of PayloadScoringSimilarityWrapper

2017-11-09 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved SOLR-11610.
--
   Resolution: Fixed
Fix Version/s: 7.2

> Use PayloadDecoder instead of PayloadScoringSimilarityWrapper
> -
>
> Key: SOLR-11610
> URL: https://issues.apache.org/jira/browse/SOLR-11610
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 7.2
>
> Attachments: SOLR-11610.patch
>
>
> Follow up to LUCENE-8038, we should move Solr's payload handling to be in 
> line with the new PayloadScoreQuery methods.






[jira] [Assigned] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-09 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward reassigned LUCENE-8014:
-

Assignee: Alan Woodward

> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.
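The proposed formula is easy to illustrate on its own. The sketch below is a standalone computation of 1/(N+1), where N is the edit distance of a sloppy phrase match; it is not the actual SloppyPhraseScorer code.

```java
public class SlopFactorSketch {
    // Proposed hardcoded slop factor: 1 / (distance + 1).
    // An exact phrase match (distance 0) gets factor 1.0; looser matches decay.
    static float slopFactor(int distance) {
        return 1f / (distance + 1);
    }

    public static void main(String[] args) {
        System.out.println(slopFactor(0)); // 1.0
        System.out.println(slopFactor(1)); // 0.5
        System.out.println(slopFactor(3)); // 0.25
    }
}
```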






[jira] [Resolved] (LUCENE-8038) Decouple payload decoding from Similarity

2017-11-09 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-8038.
---
   Resolution: Fixed
Fix Version/s: 7.2

> Decouple payload decoding from Similarity
> -
>
> Key: LUCENE-8038
> URL: https://issues.apache.org/jira/browse/LUCENE-8038
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 7.2
>
> Attachments: LUCENE-8038-master.patch, LUCENE-8038.patch
>
>
> PayloadScoreQuery is the only place that currently uses 
> SimScorer.computePayloadFactor(), and as discussed on LUCENE-8014, this seems 
> like the wrong place for it.  We should instead add a PayloadDecoder 
> abstraction that is passed to PayloadScoreQuery.
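A rough sketch of what such a decoder abstraction could look like, simplified to use byte[] instead of Lucene's BytesRef; the interface name and decoder shown are hypothetical and the real API added by this issue may differ.

```java
public class PayloadDecoderSketch {
    // Hypothetical simplified decoder: turns a raw payload into a score
    // factor, decoupled from any Similarity implementation.
    interface PayloadDecoder {
        float decode(byte[] payload);
    }

    // Example decoder: reads a big-endian int, the way integer-encoded
    // payloads are commonly stored; missing payloads yield a neutral 1.0.
    static final PayloadDecoder INT_DECODER = payload ->
        (payload == null || payload.length < 4)
            ? 1f
            : ((payload[0] & 0xFF) << 24 | (payload[1] & 0xFF) << 16
               | (payload[2] & 0xFF) << 8 | (payload[3] & 0xFF));

    public static void main(String[] args) {
        byte[] payload = {0, 0, 0, 5};                   // encodes the integer 5
        System.out.println(INT_DECODER.decode(payload)); // 5.0
        System.out.println(INT_DECODER.decode(null));    // 1.0
    }
}
```

The point of the abstraction is that the query takes a decoder as a constructor argument, so payload interpretation no longer has to live on the Similarity.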






[jira] [Resolved] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-09 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-8014.
---
   Resolution: Fixed
Fix Version/s: 7.2
   master (8.0)

> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.






[jira] [Commented] (SOLR-11610) Use PayloadDecoder instead of PayloadScoringSimilarityWrapper

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245723#comment-16245723
 ] 

ASF subversion and git services commented on SOLR-11610:


Commit 1a80bc76b12e74a3fea065ac6989a9a72662f5f4 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1a80bc7 ]

SOLR-11610: Move SOLR to PayloadDecoder


> Use PayloadDecoder instead of PayloadScoringSimilarityWrapper
> -
>
> Key: SOLR-11610
> URL: https://issues.apache.org/jira/browse/SOLR-11610
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-11610.patch
>
>
> Follow up to LUCENE-8038, we should move Solr's payload handling to be in 
> line with the new PayloadScoreQuery methods.






[jira] [Commented] (SOLR-11617) Expose Alias Metadata CRUD in REST API

2017-11-09 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245727#comment-16245727
 ] 

David Smiley commented on SOLR-11617:
-

Perhaps we don't need entirely separate commands.  When we retrieve aliases 
today to list them, we could return the metadata in a new, separate section 
(easy back-compat).  When creating an alias, we could specify the metadata with 
a prefix, e.g. "metadata.route.field" would set the "route.field" metadata.  
This is consistent with how such settings work on collection creation.  
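The prefix convention can be sketched roughly as follows. This is a hypothetical illustration of the parameter handling, not actual Solr code; the class and method names are invented.

```java
import java.util.HashMap;
import java.util.Map;

public class AliasMetadataSketch {
    // Pull "metadata."-prefixed request parameters into an alias-metadata
    // map, mirroring the prefix convention proposed in the comment.
    static Map<String, String> extractMetadata(Map<String, String> params) {
        Map<String, String> metadata = new HashMap<>();
        String prefix = "metadata.";
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                // "metadata.route.field" -> metadata key "route.field"
                metadata.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return metadata;
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put("name", "timeseries");
        params.put("metadata.route.field", "timestamp_dt");
        System.out.println(extractMetadata(params)); // {route.field=timestamp_dt}
    }
}
```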

One new command, "MODIFYALIAS", could be used to set alias metadata on an 
existing alias.  The "MODIFY" prefix follows the naming convention of 
"MODIFYCOLLECTION".  In the future, perhaps we might stretch MODIFYALIAS 
slightly beyond setting metadata and use it to issue commands to the 
time-partitioning machinery, such as telling it to create a new collection and 
roll new indexing traffic to it.

BTW during implementation keep in mind the V2 stuff, including introspect.

> Expose Alias Metadata CRUD in REST API
> --
>
> Key: SOLR-11617
> URL: https://issues.apache.org/jira/browse/SOLR-11617
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (8.0)
>Reporter: Gus Heck
>
> SOLR-11487 is adding a Java API for metadata on aliases; this task is to 
> expose that functionality to end-users via a REST API.
> Some proposed commands, for initial discussion:
>  - *SETALIASMETA* - upsert, or delete if a blank/null/whitespace value is provided.
>  - *GETALIASMETA* - read existing alias metadata.
> Given that the parent ticket to this task will rely on the alias metadata, 
> and that a user could completely break their time-partitioned data 
> configuration by editing system metadata directly, we should either document 
> these commands as "use at your own risk, great power/responsibility etc." or 
> consider protecting some subset of metadata.






[jira] [Commented] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245721#comment-16245721
 ] 

ASF subversion and git services commented on LUCENE-8014:
-

Commit 0f4604d03b28da5e55c008ad61829d77ab2a1d9e in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0f4604d ]

LUCENE-8014: Deprecate SimScorer.computeSlopFactor and computePayloadFactor


> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.






[jira] [Commented] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245724#comment-16245724
 ] 

ASF subversion and git services commented on LUCENE-8014:
-

Commit bba2b6d418e0fbbbe0f65ae2bee9a6a71b27a3ea in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bba2b6d ]

LUCENE-8014: Deprecate SimScorer.computeSlopFactor and computePayloadFactor


> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.






[jira] [Commented] (LUCENE-8038) Decouple payload decoding from Similarity

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245725#comment-16245725
 ] 

ASF subversion and git services commented on LUCENE-8038:
-

Commit a744654bcae0b71232f009297d590a06574ce434 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a744654 ]

LUCENE-8038: Remove deprecated PayloadScoreQuery methods


> Decouple payload decoding from Similarity
> -
>
> Key: LUCENE-8038
> URL: https://issues.apache.org/jira/browse/LUCENE-8038
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-8038-master.patch, LUCENE-8038.patch
>
>
> PayloadScoreQuery is the only place that currently uses 
> SimScorer.computePayloadFactor(), and as discussed on LUCENE-8014, this seems 
> like the wrong place for it.  We should instead add a PayloadDecoder 
> abstraction that is passed to PayloadScoreQuery.






[jira] [Commented] (LUCENE-8038) Decouple payload decoding from Similarity

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245722#comment-16245722
 ] 

ASF subversion and git services commented on LUCENE-8038:
-

Commit 5c9bcc9e900de027931a86704a8ab5fd4c525d9f in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5c9bcc9 ]

LUCENE-8038: Add PayloadDecoder


> Decouple payload decoding from Similarity
> -
>
> Key: LUCENE-8038
> URL: https://issues.apache.org/jira/browse/LUCENE-8038
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-8038-master.patch, LUCENE-8038.patch
>
>
> PayloadScoreQuery is the only place that currently uses 
> SimScorer.computePayloadFactor(), and as discussed on LUCENE-8014, this seems 
> like the wrong place for it.  We should instead add a PayloadDecoder 
> abstraction that is passed to PayloadScoreQuery.






[jira] [Commented] (LUCENE-8038) Decouple payload decoding from Similarity

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245719#comment-16245719
 ] 

ASF subversion and git services commented on LUCENE-8038:
-

Commit 44bd8e4d7922b3233c7db6cc435e95959a0bc1ee in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=44bd8e4 ]

LUCENE-8038: Add PayloadDecoder


> Decouple payload decoding from Similarity
> -
>
> Key: LUCENE-8038
> URL: https://issues.apache.org/jira/browse/LUCENE-8038
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-8038-master.patch, LUCENE-8038.patch
>
>
> PayloadScoreQuery is the only place that currently uses 
> SimScorer.computePayloadFactor(), and as discussed on LUCENE-8014, this seems 
> like the wrong place for it.  We should instead add a PayloadDecoder 
> abstraction that is passed to PayloadScoreQuery.






[jira] [Commented] (SOLR-11610) Use PayloadDecoder instead of PayloadScoringSimilarityWrapper

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245720#comment-16245720
 ] 

ASF subversion and git services commented on SOLR-11610:


Commit 943f5bebc5dab5944f201b7a4207ce9d1a458413 in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=943f5be ]

SOLR-11610: Move SOLR to PayloadDecoder


> Use PayloadDecoder instead of PayloadScoringSimilarityWrapper
> -
>
> Key: SOLR-11610
> URL: https://issues.apache.org/jira/browse/SOLR-11610
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-11610.patch
>
>
> Follow up to LUCENE-8038, we should move Solr's payload handling to be in 
> line with the new PayloadScoreQuery methods.






[jira] [Commented] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245726#comment-16245726
 ] 

ASF subversion and git services commented on LUCENE-8014:
-

Commit 946ec9d5b945b68c4aae88f582de2b6a02e6bfd0 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=946ec9d ]

LUCENE-8014: Remove deprecated SimScorer methods


> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.






[jira] [Commented] (LUCENE-8045) ParallelReader does not propagate doc values generation numbers

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245694#comment-16245694
 ] 

ASF subversion and git services commented on LUCENE-8045:
-

Commit f0fec1fc5f037ed18c901e43f1d17c4e6594f152 in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f0fec1f ]

Revert "LUCENE-8017: Don't use ParallelReader in tests"

This reverts commit ff4874f3d3ff6c307121a6a1f6d87a33d45a48a4.

LUCENE-8045 makes this unnecessary


> ParallelReader does not propagate doc values generation numbers
> ---
>
> Key: LUCENE-8045
> URL: https://issues.apache.org/jira/browse/LUCENE-8045
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 7.2
>
> Attachments: LUCENE-8045.patch
>
>
> Exposed by this test failure: 
> https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/777/testReport/junit/org.apache.lucene.search/TestLRUQueryCache/testDocValuesUpdatesDontBreakCache/
> A reader is randomly wrapped with a ParallelLeafReader, which does not then 
> correctly propagate the dvGen into its own FieldInfo.






[jira] [Commented] (LUCENE-8017) FunctionRangeQuery and FunctionMatchQuery can pollute the QueryCache

2017-11-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245696#comment-16245696
 ] 

ASF subversion and git services commented on LUCENE-8017:
-

Commit e827f17be59d6f505cd920756e3ce780d30e2eb2 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e827f17 ]

Revert "LUCENE-8017: Don't use ParallelReader in tests"

This reverts commit ff4874f3d3ff6c307121a6a1f6d87a33d45a48a4.

LUCENE-8045 makes this unnecessary


> FunctionRangeQuery and FunctionMatchQuery can pollute the QueryCache
> 
>
> Key: LUCENE-8017
> URL: https://issues.apache.org/jira/browse/LUCENE-8017
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 7.2
>
> Attachments: LUCENE-8017.patch, LUCENE-8017.patch, LUCENE-8017.patch
>
>
> The QueryCache assumes that queries will return the same set of documents 
> when run over the same segment, independent of all other segments held by the 
> parent IndexSearcher.  However, both FunctionRangeQuery and 
> FunctionMatchQuery can select hits based on score, which depend on term 
> statistics over the whole index, and could therefore theoretically return 
> different result sets on a given segment.





