[jira] [Commented] (SOLR-10028) SegmentsInfoRequestHandlerTest.testSegmentInfosVersion fails in master

2018-08-25 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-10028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592809#comment-16592809
 ] 

Tomás Fernández Löbbe commented on SOLR-10028:
--

I was missing part of the fix. The problem could be caused by a merge (decreasing 
the number of segments to 1), but it could also be caused by flushing (producing 
more segments than commits). I think I fixed this in my last patch by setting the 
merge policy to NoMergePolicy and setting maxBufferedDocs and ramBufferSizeMB to 
numbers high enough to prevent flushing (otherwise those values are randomized by 
the base test class). I also added a test for the segment names.
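For reference, this is roughly what those settings mean at the plain Lucene 
IndexWriterConfig level -- a minimal standalone sketch only, not the actual patch:

{code:java}
import java.nio.file.Files;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.NoMergePolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class DeterministicSegments {
  public static void main(String[] args) throws Exception {
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
    iwc.setMergePolicy(NoMergePolicy.INSTANCE); // never merge: segment count only grows
    iwc.setMaxBufferedDocs(Integer.MAX_VALUE);  // doc count never triggers a flush
    iwc.setRAMBufferSizeMB(256);                // RAM usage never triggers a flush either
    try (Directory dir = FSDirectory.open(Files.createTempDirectory("segtest"));
         IndexWriter writer = new IndexWriter(dir, iwc)) {
      for (int commit = 0; commit < 3; commit++) {
        Document doc = new Document();
        doc.add(new StringField("id", Integer.toString(commit), Field.Store.YES));
        writer.addDocument(doc);
        writer.commit(); // each commit now produces exactly one new segment
      }
    }
  }
}
{code}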

> SegmentsInfoRequestHandlerTest.testSegmentInfosVersion fails in master
> --
>
> Key: SOLR-10028
> URL: https://issues.apache.org/jira/browse/SOLR-10028
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10028-alternative.patch, SOLR-10028.patch, 
> SOLR-10028.patch
>
>
> Failed in Jenkins: 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1092/
> It also reproduces consistently on my Mac with the latest master 
> (ca50e5b61c2d8bfb703169cea2fb0ab20fd24c6b):
> {code}
> ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
> -Dtests.method=testSegmentInfosVersion -Dtests.seed=619B9D838D6F1E29 
> -Dtests.slow=true -Dtests.locale=en-AU -Dtests.timezone=America/Manaus 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
> {code}
> There have been similar failures in previous Jenkins builds since last month.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10028) SegmentsInfoRequestHandlerTest.testSegmentInfosVersion fails in master

2018-08-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-10028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10028:
-
Attachment: SOLR-10028.patch

> SegmentsInfoRequestHandlerTest.testSegmentInfosVersion fails in master
> --
>
> Key: SOLR-10028
> URL: https://issues.apache.org/jira/browse/SOLR-10028
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10028-alternative.patch, SOLR-10028.patch, 
> SOLR-10028.patch
>
>
> Failed in Jenkins: 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1092/
> It also reproduces consistently on my Mac with the latest master 
> (ca50e5b61c2d8bfb703169cea2fb0ab20fd24c6b):
> {code}
> ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
> -Dtests.method=testSegmentInfosVersion -Dtests.seed=619B9D838D6F1E29 
> -Dtests.slow=true -Dtests.locale=en-AU -Dtests.timezone=America/Manaus 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
> {code}
> There have been similar failures in previous Jenkins builds since last month.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-master-Linux (64bit/jdk1.8.0_172) - Build # 84 - Still Unstable!

2018-08-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/84/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout waiting for collection to become active Live Nodes: 
[127.0.0.1:10004_solr, 127.0.0.1:10002_solr, 127.0.0.1:10006_solr, 
127.0.0.1:10003_solr, 127.0.0.1:10005_solr] Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/26)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0",   
"autoCreated":"true",   "policy":"c1",   "shards":{"shard1":{   
"replicas":{"core_node1":{   
"core":"testCreateCollectionAddReplica_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10003_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"8000-7fff",   "state":"active"}}}

Stack Trace:
java.lang.AssertionError: Timeout waiting for collection to become active
Live Nodes: [127.0.0.1:10004_solr, 127.0.0.1:10002_solr, 127.0.0.1:10006_solr, 
127.0.0.1:10003_solr, 127.0.0.1:10005_solr]
Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/26)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "policy":"c1",
  "shards":{"shard1":{
  "replicas":{"core_node1":{
  "core":"testCreateCollectionAddReplica_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10003_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "range":"8000-7fff",
  "state":"active"}}}
at 
__randomizedtesting.SeedInfo.seed([201719DD338478B8:A0377CF322C7901E]:0)
at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)
at 
org.apache.solr.cloud.autoscaling.sim.TestPolicyCloud.testCreateCollectionAddReplica(TestPolicyCloud.java:121)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2743 - Unstable

2018-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2743/

3 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:35340/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:39896/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:37081/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:35340/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:39896/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:37081/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([EC987D131A9C67BE:4655AEE1AD4FB26E]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:996)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:291)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[jira] [Commented] (SOLR-12691) Index corruption when sending updates to multiple cores, if those cores can get unloaded by LotsOfCores

2018-08-25 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592776#comment-16592776
 ] 

Erick Erickson commented on SOLR-12691:
---

[~elyograg]

It _is_ an LRU cache. The LinkedHashMap is created as:

LinkedHashMap(Math.min(cacheSize, 1000), 0.75f, true)

From the Javadocs:
{quote}public LinkedHashMap(int initialCapacity, float loadFactor, boolean 
accessOrder)
 Constructs an empty {{LinkedHashMap}} instance with the specified initial 
capacity, load factor and ordering mode.
 Parameters:
 {{initialCapacity}} - the initial capacity
 {{loadFactor}} - the load factor
 {{accessOrder}} - the ordering mode - {{true}} for access-order, {{false}} for 
insertion-order
{quote}
The tests don't show that explicitly, though. Here's a slight change to 
TestLazyCores showing that the transient cache is LRU; it should be 
incorporated in any fixes here.
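For anyone skimming: the accessOrder=true argument is what makes it LRU. A 
self-contained sketch of the same pattern (not the actual transient core cache 
code, just the LinkedHashMap idiom it relies on):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// A LinkedHashMap built with accessOrder=true orders entries by most recent
// access, so the "eldest" entry is the least-recently-USED one, not the
// least-recently-inserted one.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
  private final int maxSize;

  public LruCache(int maxSize) {
    super(Math.min(maxSize, 1000), 0.75f, true); // initialCapacity, loadFactor, accessOrder
    this.maxSize = maxSize;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > maxSize; // evict once the configured size is exceeded
  }

  public static void main(String[] args) {
    LruCache<String, String> cache = new LruCache<>(2);
    cache.put("core1", "a");
    cache.put("core2", "b");
    cache.get("core1");                 // touch core1 so it becomes most recently used
    cache.put("core3", "c");            // evicts core2, the least recently used entry
    System.out.println(cache.keySet()); // prints [core1, core3]
  }
}
{code}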
  

 

> Index corruption when sending updates to multiple cores, if those cores can 
> get unloaded by LotsOfCores
> ---
>
> Key: SOLR-12691
> URL: https://issues.apache.org/jira/browse/SOLR-12691
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-12691-test.patch
>
>
> When the LotsOfCores setting 'transientCacheSize' results in the unloading of 
> cores that are getting updates, the indexes in one or more of those cores can 
> get corrupted.
> How to reproduce:
>  * Set the "transientCacheSize" to 1
>  * Create two cores that are both set to transient.
>  * Ingest high load to core1 only (no issue at this time)
>  * Continue ingest high load to core1 and start ingest load to core2 
> simultaneously (core2 immediately corrupted)
> Error with stacktrace:
> {noformat}
> 2018-08-16 23:02:31.212 ERROR (qtp225472281-4098) [   
> x:aggregator-core-be43376de27b1675562841f64c498] o.a.s.u.SolrIndexWriter 
> Error closing IndexWriter
> java.nio.file.NoSuchFileException: 
> /opt/solr/volumes/data1/4cf838d4b9e4675-core-897/index/_2_Lucene50_0.pos
> at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) 
> ~[?:1.8.0_162]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) 
> ~[?:1.8.0_162]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) 
> ~[?:1.8.0_162]
> at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>  ~[?:1.8.0_162]
> at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>  ~[?:1.8.0_162]
> at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>  ~[?:1.8.0_162]
> at java.nio.file.Files.readAttributes(Files.java:1737) ~[?:1.8.0_162]
> at java.nio.file.Files.size(Files.java:2332) ~[?:1.8.0_162]
> at 
> org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243) 
> ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:128)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.SegmentCommitInfo.sizeInBytes(SegmentCommitInfo.java:217)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at org.apache.lucene.index.MergePolicy.size(MergePolicy.java:558) 
> ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.TieredMergePolicy.getSegmentSizes(TieredMergePolicy.java:279)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.TieredMergePolicy.findMerges(TieredMergePolicy.java:300)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.IndexWriter.updatePendingMerges(IndexWriter.java:2199)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2162) 
> ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3571) 
> ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> 

[jira] [Updated] (SOLR-12691) Index corruption when sending updates to multiple cores, if those cores can get unloaded by LotsOfCores

2018-08-25 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-12691:
--
Attachment: SOLR-12691-test.patch

> Index corruption when sending updates to multiple cores, if those cores can 
> get unloaded by LotsOfCores
> ---
>
> Key: SOLR-12691
> URL: https://issues.apache.org/jira/browse/SOLR-12691
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-12691-test.patch
>
>
> When the LotsOfCores setting 'transientCacheSize' results in the unloading of 
> cores that are getting updates, the indexes in one or more of those cores can 
> get corrupted.
> How to reproduce:
>  * Set the "transientCacheSize" to 1
>  * Create two cores that are both set to transient.
>  * Ingest high load to core1 only (no issue at this time)
>  * Continue ingest high load to core1 and start ingest load to core2 
> simultaneously (core2 immediately corrupted)
> Error with stacktrace:
> {noformat}
> 2018-08-16 23:02:31.212 ERROR (qtp225472281-4098) [   
> x:aggregator-core-be43376de27b1675562841f64c498] o.a.s.u.SolrIndexWriter 
> Error closing IndexWriter
> java.nio.file.NoSuchFileException: 
> /opt/solr/volumes/data1/4cf838d4b9e4675-core-897/index/_2_Lucene50_0.pos
> at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) 
> ~[?:1.8.0_162]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) 
> ~[?:1.8.0_162]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) 
> ~[?:1.8.0_162]
> at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>  ~[?:1.8.0_162]
> at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>  ~[?:1.8.0_162]
> at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>  ~[?:1.8.0_162]
> at java.nio.file.Files.readAttributes(Files.java:1737) ~[?:1.8.0_162]
> at java.nio.file.Files.size(Files.java:2332) ~[?:1.8.0_162]
> at 
> org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243) 
> ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:128)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.SegmentCommitInfo.sizeInBytes(SegmentCommitInfo.java:217)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at org.apache.lucene.index.MergePolicy.size(MergePolicy.java:558) 
> ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.TieredMergePolicy.getSegmentSizes(TieredMergePolicy.java:279)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.TieredMergePolicy.findMerges(TieredMergePolicy.java:300)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.IndexWriter.updatePendingMerges(IndexWriter.java:2199)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2162) 
> ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3571) 
> ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:1028) 
> ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1071) 
> ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:286) 
> [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz 
> - 2018-06-18 16:55:13]
> at 
> org.apache.solr.update.DirectUpdateHandler2.closeWriter(DirectUpdateHandler2.java:917)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> 

[jira] [Commented] (SOLR-12691) Index corruption when sending updates to multiple cores, if those cores can get unloaded by LotsOfCores

2018-08-25 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592767#comment-16592767
 ] 

Shawn Heisey commented on SOLR-12691:
-

An additional thought:  Even if the problem can be found and fixed so the 
two-core reproduction scenario works perfectly, I can tell you that the 
performance will be *awful* as Solr continually unloads and loads cores.  The 
same thing might happen in the real-world scenario.  That would likely be 
preferable to index corruption, though.

[~erickerickson], should we have another issue to improve choosing which 
transient core to unload?  Do it on an LRU basis, instead of load order?  I was 
looking into request handler code.  A number of request handlers implement 
SolrCoreAware, but it's done on an individual handler basis, not on 
RequestHandlerBase.  If the base class were to implement SolrCoreAware and 
handle updating the timestamp, I think the handler code would be cleaner 
overall, and we might be in a better position to make LRU unloading happen.
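Roughly what I mean, as a sketch only -- the class name and the touchTransientCore
hook below are hypothetical, not existing Solr APIs; SolrCoreAware and
inform(SolrCore) are real:

{code:java}
import org.apache.solr.core.SolrCore;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.request.SolrRequestHandler;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.util.plugin.SolrCoreAware;

// Hypothetical sketch of the idea for RequestHandlerBase, not the actual class.
public abstract class TimestampingHandlerBase implements SolrRequestHandler, SolrCoreAware {

  private volatile SolrCore core;

  @Override
  public void inform(SolrCore core) {
    this.core = core; // called once when the core is loaded/reloaded
  }

  @Override
  public void handleRequest(SolrQueryRequest req, SolrQueryResponse rsp) {
    if (core != null) {
      touchTransientCore(core.getName()); // refresh the core's last-used timestamp
    }
    handleRequestBody(req, rsp);
  }

  protected abstract void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp);

  private void touchTransientCore(String coreName) {
    // Hypothetical hook: something on CoreContainer / the transient core cache
    // would need to expose a "touch" operation for LRU unloading to use.
  }
}
{code}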

> Index corruption when sending updates to multiple cores, if those cores can 
> get unloaded by LotsOfCores
> ---
>
> Key: SOLR-12691
> URL: https://issues.apache.org/jira/browse/SOLR-12691
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Priority: Minor
>
> When the LotsOfCores setting 'transientCacheSize' results in the unloading of 
> cores that are getting updates, the indexes in one or more of those cores can 
> get corrupted.
> How to reproduce:
>  * Set the "transientCacheSize" to 1
>  * Create two cores that are both set to transient.
>  * Ingest high load to core1 only (no issue at this time)
>  * Continue ingest high load to core1 and start ingest load to core2 
> simultaneously (core2 immediately corrupted)
> Error with stacktrace:
> {noformat}
> 2018-08-16 23:02:31.212 ERROR (qtp225472281-4098) [   
> x:aggregator-core-be43376de27b1675562841f64c498] o.a.s.u.SolrIndexWriter 
> Error closing IndexWriter
> java.nio.file.NoSuchFileException: 
> /opt/solr/volumes/data1/4cf838d4b9e4675-core-897/index/_2_Lucene50_0.pos
> at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) 
> ~[?:1.8.0_162]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) 
> ~[?:1.8.0_162]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) 
> ~[?:1.8.0_162]
> at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>  ~[?:1.8.0_162]
> at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>  ~[?:1.8.0_162]
> at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>  ~[?:1.8.0_162]
> at java.nio.file.Files.readAttributes(Files.java:1737) ~[?:1.8.0_162]
> at java.nio.file.Files.size(Files.java:2332) ~[?:1.8.0_162]
> at 
> org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243) 
> ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:128)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.SegmentCommitInfo.sizeInBytes(SegmentCommitInfo.java:217)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at org.apache.lucene.index.MergePolicy.size(MergePolicy.java:558) 
> ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.TieredMergePolicy.getSegmentSizes(TieredMergePolicy.java:279)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.TieredMergePolicy.findMerges(TieredMergePolicy.java:300)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.IndexWriter.updatePendingMerges(IndexWriter.java:2199)
>  ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at 
> org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2162) 
> ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:51:45]
> at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3571) 
> ~[lucene-core-7.4.0.jar:7.4.0 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10) - Build # 7485 - Unstable!

2018-08-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7485/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction.testNodeAdded

Error Message:
ComputePlanAction should have computed exactly 1 operation, but was: 
[org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@549803ba,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@4b388412,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@56576a75]
 expected:<1> but was:<3>

Stack Trace:
java.lang.AssertionError: ComputePlanAction should have computed exactly 1 
operation, but was: 
[org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@549803ba,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@4b388412,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@56576a75]
 expected:<1> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([3802A4D36D1C6348:5DC1F2A4CFBFCB4B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction.testNodeAdded(TestComputePlanAction.java:314)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2018-08-25 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592747#comment-16592747
 ] 

Shawn Heisey commented on SOLR-11934:
-

Side note that probably needs its own issue.  It doesn't impact the main log 
this issue is concerned with; it's separate.  I'm only commenting about it 
here because this issue is open and tangentially related, and I'm trying to 
get a sense of whether or not I should create the new issue.

I think we should enable the Jetty request log in the Solr download.  The 
config is already there; it just needs to be uncommented.  It even has config 
to delete old logs -- retention is set to 90 days, which might be too long.

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want to log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something needs 
> attention, for instance. If I'm troubleshooting, should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked (see the 
> sketch after this description). Is this independent of the logging 
> implementation used? The SLF4J and log4j docs seem a bit contradictory.
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks, even starting 
> now it would probably span the 7.3 release.
> This will probably be an umbrella issue so we can keep all the commits 
> straight and people can volunteer to "fix the files in core" as a separate 
> piece of work (hint).
> There are several existing JIRAs about logging in general, let's link them in 
> here as well.
> Let the discussion begin!
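
On point 2 above, a small self-contained sketch of what the question boils 
down to with plain SLF4J (the SomeState type and expensiveSummary() call are 
made up for illustration):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingCostDemo {
  private static final Logger log = LoggerFactory.getLogger(LoggingCostDemo.class);

  interface SomeState {
    String expensiveSummary();
  }

  void example(SomeState state) {
    // The {} placeholder avoids building the final message String when INFO is
    // disabled, but the ARGUMENT is still evaluated: expensiveSummary() runs
    // regardless of the level.
    log.info("state summary {}", state.expensiveSummary());

    // Guarding skips the argument evaluation entirely when the level is disabled;
    // this part is independent of whether log4j or another backend is used.
    if (log.isDebugEnabled()) {
      log.debug("state summary {}", state.expensiveSummary());
    }
  }
}
{code}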



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1107 - Still Failing

2018-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1107/

No tests ran.

Build Log:
[...truncated 23221 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2311 links (1863 relative) to 3146 anchors in 247 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 

[JENKINS] Lucene-Solr-repro - Build # 1309 - Still Unstable

2018-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1309/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/136/consoleText

[repro] Revision: f26dd13b34e3d3a6921230cfe44ff34b2c319e7b

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMixedBounds -Dtests.seed=82F25BEC85E7F49C 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=uk 
-Dtests.timezone=Cuba -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
f26dd13b34e3d3a6921230cfe44ff34b2c319e7b
[repro] git fetch
[repro] git checkout f26dd13b34e3d3a6921230cfe44ff34b2c319e7b

[...truncated 1 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3392 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=82F25BEC85E7F49C -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=uk -Dtests.timezone=Cuba 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 15557 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout f26dd13b34e3d3a6921230cfe44ff34b2c319e7b

[...truncated 1 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-repro - Build # 1308 - Still Unstable

2018-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1308/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/141/consoleText

[repro] Revision: da37ffb510540af930a79eb1535258b5047a4eba

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testNodeLost -Dtests.seed=B45589A1896DA2FA -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hr-HR 
-Dtests.timezone=America/Indiana/Marengo -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerIntegrationTest 
-Dtests.method=testDeleteNode -Dtests.seed=B45589A1896DA2FA 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=nl-NL -Dtests.timezone=Europe/Chisinau -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testRetryUpdatesWhenClusterStateIsStale 
-Dtests.seed=F09ACB766EEB52FA -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=es-HN -Dtests.timezone=US/Hawaii 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
f26dd13b34e3d3a6921230cfe44ff34b2c319e7b
[repro] git fetch
[repro] git checkout da37ffb510540af930a79eb1535258b5047a4eba

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestLargeCluster
[repro]   SearchRateTriggerIntegrationTest
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 3408 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.TestLargeCluster|*.SearchRateTriggerIntegrationTest" 
-Dtests.showOutput=onerror  -Dtests.seed=B45589A1896DA2FA -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hr-HR 
-Dtests.timezone=America/Indiana/Marengo -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 277994 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 454 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=F09ACB766EEB52FA -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=es-HN -Dtests.timezone=US/Hawaii 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 143 lines...]
[repro] Failures:
[repro]   0/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster
[repro] git checkout f26dd13b34e3d3a6921230cfe44ff34b2c319e7b

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (SOLR-12700) solr user used for crypto mining hack

2018-08-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-12700.

Resolution: Invalid

Please ask questions like this on the solr-user mailing list, not in JIRA.

There is nothing in the information provided that gives any clue that Solr 
would be the reason for your issues. However, there have been a number of 
security issues patched in recent versions of Solr. Stating 6.6 as your version 
does not tell us what bugfix release you are on, so you could still be 
vulnerable to some of those that were fixed in 6.6.4 or 6.6.5.

I'm closing this issue as invalid. Your next steps could be
 # Send an email to the solr-user list 
(http://lucene.apache.org/solr/community.html#mailing-lists-irc) asking for 
advice. You should include many more details, suspicious logs, etc. when you 
send that email.
 # Seek professional guidance to clean your servers or start with clean servers 
to make sure no malware remains. The OS, Java etc should of course also be 
fully patched.
 # Upgrade to the newest Solr release (either latest 7.x or latest 6.6.x) which 
plugs some known weaknesses in various request handlers which COULD potentially 
be ways to break into a system. See 
[https://lucene.apache.org/solr/7_4_0/changes/Changes.html] for details.
 # Make sure that Solr is NEVER exposed to an insecure network, it should 
always be behind firewalls, open only to your app servers.
 # I'm sure you may get more advice on the user's mailing list

Please do not continue the discussion in this Jira issue. Only if/when a NEW 
code issue has been identified in Solr after the mailing list discussion should 
you file a new bug report here.

> solr user used for crypto mining hack
> -
>
> Key: SOLR-12700
> URL: https://issues.apache.org/jira/browse/SOLR-12700
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
> Environment: Ubuntu running Solr 6.6
>Reporter: Robert Gillen
>Priority: Major
>
> I am struggling to fight an attack where the solr user is being used to create 
> files used for mining cryptocurrencies. The files are being created in the 
> /var/tmp and /tmp folders.
> It will use 100% of the CPU. 
> I am looking for help in stopping these attacks.
> All files are created under the solr user.
> Any help would be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12692) Add hints/warnings for the ZK Status Admin UI

2018-08-25 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592720#comment-16592720
 ] 

Jan Høydahl commented on SOLR-12692:


The patch looks good. It's a bit hard to test the various error conditions if we 
don't mock the response data. A few comments:
 * If you get into zk_max_latency issues, the same error may be added once for 
each ZK if all are busy. Perhaps it would help to include the host name in the 
message to distinguish them?
 * The "ok" key for each zkhost is "true" if RUOK returns IMOK. Should we flip 
that to false if we detect issues with that host?
 * Not high priority, but code-wise it would perhaps be cleaner to separate the 
information fetch phase ({{monitorZookeeper}}) from the inspection and 
detection of errors, i.e. keep {{monitorZookeeper}} as-is and add a new method 
{{detectIssues(zkStatus, errors)}} where all analysis, both existing and the 
new per-host analysis, is done (a rough sketch follows below). This is less 
important though.
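
Something along these lines -- a self-contained sketch only, not the actual 
ZookeeperStatusHandler code; the field names, threshold, and signatures below 
are made up for illustration:

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the fetch/analyze split: monitorZookeeper() only gathers data,
// detectIssues() holds all the analysis (including the new per-host checks).
public class ZkStatusSketch {

  Map<String, Object> monitorZookeeper(String hostPort) {
    Map<String, Object> status = new HashMap<>();
    status.put("host", hostPort);
    status.put("zk_max_latency", 120L); // pretend values fetched via mntr/ruok/conf
    status.put("ok", true);
    return status;
  }

  void detectIssues(Map<String, Object> zkStatus, List<String> errors) {
    long maxLatency = (Long) zkStatus.get("zk_max_latency");
    if (maxLatency > 100L) {
      // include the host name so the same problem on several ZKs stays distinguishable
      errors.add("High zk_max_latency (" + maxLatency + " ms) on " + zkStatus.get("host"));
      zkStatus.put("ok", false); // flip "ok" when a per-host issue is detected
    }
  }

  public static void main(String[] args) {
    ZkStatusSketch sketch = new ZkStatusSketch();
    List<String> errors = new ArrayList<>();
    Map<String, Object> status = sketch.monitorZookeeper("zk1:2181");
    sketch.detectIssues(status, errors);
    System.out.println(errors); // [High zk_max_latency (120 ms) on zk1:2181]
  }
}
{code}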

> Add hints/warnings for the ZK Status Admin UI
> -
>
> Key: SOLR-12692
> URL: https://issues.apache.org/jira/browse/SOLR-12692
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Varun Thacker
>Priority: Minor
> Attachments: SOLR-12692.patch, wrong_zk_warning.png, zk_ensemble.png
>
>
> Firstly I love the new UI pages ( ZK Status and Nodes ) . Thanks [~janhoy] 
> for all the great work!
> I setup a 3 node ZK ensemble to play around with the UI and attaching the 
> screenshot for reference.
>  
> Here are a few suggestions I had
>  # Let’s show Approximate Size in human readable form.  We can use 
> RamUsageEstimator#humanReadableUnits to calculate it
>  # Show warning symbol when Ensemble is standalone
>  # If maxSessionTimeout < Solr's ZK_CLIENT_TIMEOUT then ZK will only honor 
> up-to the maxSessionTimeout value for the Solr->ZK connection. We could mark 
> that as a warning.
>  # If maxClientCnxns < live_nodes show this as a red? Each solr node connects 
> to all zk nodes so if the number of nodes in the cluster is high one should 
> also be increasing maxClientCnxns
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-25 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r212809165
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessorTest.java
 ---
@@ -392,29 +393,73 @@ public void testPreemptiveCreation() throws Exception 
{
 CollectionAdminRequest.setAliasProperty(alias)
 .addProperty(TimeRoutedAlias.ROUTER_PREEMPTIVE_CREATE_MATH, 
"3DAY").process(solrClient);
 
-Thread.sleep(1000); // a moment to be sure the alias change has taken 
effect
-
 assertUpdateResponse(add(alias, Collections.singletonList(
 sdoc("id", "7", "timestamp_dt", "2017-10-25T23:01:00Z")), // 
should cause preemptive creation now
 params));
 assertUpdateResponse(solrClient.commit(alias));
 waitCol("2017-10-27", numShards);
-waitCol("2017-10-28", numShards);
 
 cols = new 
CollectionAdminRequest.ListAliases().process(solrClient).getAliasesAsLists().get(alias);
-assertEquals(6,cols.size());
+assertEquals(5,cols.size()); // only one created in async case
 assertNumDocs("2017-10-23", 1);
 assertNumDocs("2017-10-24", 1);
 assertNumDocs("2017-10-25", 5);
 assertNumDocs("2017-10-26", 0);
 assertNumDocs("2017-10-27", 0);
+
+assertUpdateResponse(add(alias, Collections.singletonList(
+sdoc("id", "8", "timestamp_dt", "2017-10-25T23:01:00Z")), // 
should cause preemptive creation now
+params));
+assertUpdateResponse(solrClient.commit(alias));
+waitCol("2017-10-27", numShards);
+waitCol("2017-10-28", numShards);
+
+cols = new 
CollectionAdminRequest.ListAliases().process(solrClient).getAliasesAsLists().get(alias);
+assertEquals(6,cols.size()); // Subsequent documents continue to 
create up to limit
+assertNumDocs("2017-10-23", 1);
+assertNumDocs("2017-10-24", 1);
+assertNumDocs("2017-10-25", 6);
+assertNumDocs("2017-10-26", 0);
+assertNumDocs("2017-10-27", 0);
 assertNumDocs("2017-10-28", 0);
 
 QueryResponse resp;
 resp = solrClient.query(alias, params(
 "q", "*:*",
 "rows", "10"));
-assertEquals(7, resp.getResults().getNumFound());
+assertEquals(8, resp.getResults().getNumFound());
+
+assertUpdateResponse(add(alias, Arrays.asList(
--- End diff --

Shouldn't we use `addDocsAndCommit` here and the other spots where multiple 
docs are added at a time?  I know you're passing params but it's empty.

Otherwise, tests look good!


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #438: SOLR-12593: Remove date parsing functionality from e...

2018-08-25 Thread dsmiley
Github user dsmiley commented on the issue:

https://github.com/apache/lucene-solr/pull/438
  
Another little detail I see that's different is the default locale.  
ParseDateFieldUpdateProcessorFactory uses ROOT but ExtractDateUtils uses 
ENGLISH.  AFAIK, such a difference would appear when parsing timezones like 
AKST.  I could be wrong.  Can we see if this is indeed a problem?  Ideally a 
test would demonstrate whether it is or not.  If we do find an issue here... then 
we could simply always configure the locale in the configs (my preference), or we 
could change the internal default of `ParseDateFieldUpdateProcessorFactory`.
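
For example, a quick standalone probe along these lines would show whether the 
two locales behave differently when a zone abbreviation is in play (the pattern 
and input here are made up for illustration, not the factory's actual config):

```java
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class LocaleZoneParseCheck {
  public static void main(String[] args) {
    // Hypothetical pattern/input just to probe zone-name parsing per locale.
    String input = "Fri Oct 7 13:14:15 AKST 2005";
    String pattern = "EEE MMM d HH:mm:ss zzz yyyy";
    for (Locale loc : new Locale[] {Locale.ROOT, Locale.ENGLISH}) {
      try {
        ZonedDateTime parsed =
            ZonedDateTime.parse(input, DateTimeFormatter.ofPattern(pattern, loc));
        System.out.println("Locale " + loc.toLanguageTag() + " -> parsed as " + parsed);
      } catch (Exception e) {
        System.out.println("Locale " + loc.toLanguageTag() + " -> failed: " + e.getMessage());
      }
    }
  }
}
```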


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #438: SOLR-12593: Remove date parsing functionality...

2018-08-25 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/438#discussion_r212807806
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/ParsingFieldUpdateProcessorsTest.java
 ---
@@ -992,6 +992,14 @@ public void testLenient() throws IOException {
 assertParsedDate("Friday Oct 7 13:14:15 2005", 
Date.from(inst20051007131415()), "parse-date-patterns-default-config");
   }
 
+  public void testRfc2616() throws Exception {
+assertParsedDate("Fri Oct 7 13:14:15 2005" , 
Date.from(inst20051007131415()), "parse-date-patterns-default-config");
+  }
+
+  public void testRfc2616Leniency() throws Exception {
--- End diff --

Should be named testAsctimeLeniency (see my previous comment).  Glad to see 
this test works.

'course there are already some tests with similar names... so feel free to 
combine some.  For example, the "testLenient" method is an ANSI C (asctime) test 
of leniency; by all means combine tests into one test method as appropriate.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #438: SOLR-12593: Remove date parsing functionality...

2018-08-25 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/438#discussion_r212807780
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/ParsingFieldUpdateProcessorsTest.java
 ---
@@ -992,6 +992,14 @@ public void testLenient() throws IOException {
 assertParsedDate("Friday Oct 7 13:14:15 2005", 
Date.from(inst20051007131415()), "parse-date-patterns-default-config");
   }
 
+  public void testRfc2616() throws Exception {
--- End diff --

Should be named testAsctime, since that's the format of the input you are 
testing.  When I mentioned RFC 2616, that is a spec that in turn refers to 
multiple formats (including asctime).

Also, as I pointed out, we need to test with two spaces to the left of a 
single-digit day.
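
A test method along these lines would cover it, reusing the helpers already in 
this test class (assuming the default config's patterns are intended to handle 
the space-padded day; the method name is just a suggestion):

```java
public void testAsctimeSingleDigitDay() throws Exception {
  // asctime pads a single-digit day with a space, giving two spaces after the month
  assertParsedDate("Fri Oct  7 13:14:15 2005",
      Date.from(inst20051007131415()), "parse-date-patterns-default-config");
}
```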


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 136 - Unstable

2018-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/136/

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
expected: but was:

Stack Trace:
java.lang.AssertionError: expected: but was:
at 
__randomizedtesting.SeedInfo.seed([82F25BEC85E7F49C:8871E441C85CFFC6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds(IndexSizeTriggerTest.java:669)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14004 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
   [junit4]   2> Creating dataDir: 

[jira] [Created] (SOLR-12700) solr user used for crypto mining hack

2018-08-25 Thread Robert Gillen (JIRA)
Robert Gillen created SOLR-12700:


 Summary: solr user used for crypto mining hack
 Key: SOLR-12700
 URL: https://issues.apache.org/jira/browse/SOLR-12700
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6
 Environment: Ubuntu running Solr 6.6
Reporter: Robert Gillen


I am struggling to fight an attack where the solr user is being used to create 
files used for mining cryptocurrencies. The files are being created in the 
/var/tmp and /tmp folders.

It will use 100% of the CPU. 

I am looking for help in stopping these attacks.

All files are created under the solr user.

Any help would be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1307 - Unstable

2018-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1307/

[...truncated 38 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/817/consoleText

[repro] Revision: da37ffb510540af930a79eb1535258b5047a4eba

[repro] Repro line:  ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=testFailedMove -Dtests.seed=DDC8408B1482A6D5 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=es-SV 
-Dtests.timezone=SystemV/MST7MDT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=DDC8408B1482A6D5 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=fr-CH 
-Dtests.timezone=Europe/Luxembourg -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
f26dd13b34e3d3a6921230cfe44ff34b2c319e7b
[repro] git fetch
[repro] git checkout da37ffb510540af930a79eb1535258b5047a4eba

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   MoveReplicaHDFSTest
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3408 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.MoveReplicaHDFSTest|*.IndexSizeTriggerTest" 
-Dtests.showOutput=onerror  -Dtests.seed=DDC8408B1482A6D5 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=es-SV -Dtests.timezone=SystemV/MST7MDT 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 7782 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.MoveReplicaHDFSTest
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout f26dd13b34e3d3a6921230cfe44ff34b2c319e7b

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 141 - Still Unstable

2018-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/141/

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode

Error Message:
unexpected DELETENODE status: 
{responseHeader={status=0,QTime=60},status={state=notfound,msg=Did not find 
[search_rate_trigger3/3e5c641f3c5c80T5cnk5ejvurxjjwxte7ofskgep/0] in any tasks 
queue}}

Stack Trace:
java.lang.AssertionError: unexpected DELETENODE status: 
{responseHeader={status=0,QTime=60},status={state=notfound,msg=Did not find 
[search_rate_trigger3/3e5c641f3c5c80T5cnk5ejvurxjjwxte7ofskgep/0] in any tasks 
queue}}
at 
__randomizedtesting.SeedInfo.seed([B45589A1896DA2FA:96C74723BEA72D87]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.lambda$testDeleteNode$5(SearchRateTriggerIntegrationTest.java:683)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode(SearchRateTriggerIntegrationTest.java:675)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

Re: SQL OR in lucene : where ((term1=a and term2=b) OR (term3=a and term4=b)) and context in (2,3,4,5.....200)

2018-08-25 Thread Shawn Heisey

On 8/24/2018 4:45 AM, Khurram Shehzad wrote:
I have a requirement to replicate the following SQL query logic containing an 
OR condition:

where ((term1=a and term2=b) OR (term3=a and term4=b)) and context in (2,3,4,5.....200)

roughly in Lucene:

(+term1:a +term2:b) (+term3:a and +term4:b) #context:2 4 7 ... 198



Tomoko is correct: this should be on the user list, not the dev list.  
This list is for discussing the development of the Lucene/Solr software 
itself, not for questions or user code.


The following is probably the syntax you're looking for.  I placed it on 
two lines to control the line wrapping by the email client.  When you do 
it for real, it should only be one line:


+((+term1:a +term2:b) (+term3:a +term4:b))
+context:(2 4 7 ... 200)

Thanks,
Shawn
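
If the query is built programmatically rather than parsed, a rough equivalent 
using Lucene's BooleanQuery API could look like the sketch below (assuming 
term1..term4 and context are indexed as un-analyzed string terms; the class 
name is only illustrative):

```
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class OrOfAndsQuery {

  // Builds: +((+term1:a +term2:b) (+term3:a +term4:b)) +context:(2 4 7 ... 200)
  public static Query build(String... contextValues) {
    BooleanQuery.Builder first = new BooleanQuery.Builder();
    first.add(new TermQuery(new Term("term1", "a")), Occur.MUST);
    first.add(new TermQuery(new Term("term2", "b")), Occur.MUST);

    BooleanQuery.Builder second = new BooleanQuery.Builder();
    second.add(new TermQuery(new Term("term3", "a")), Occur.MUST);
    second.add(new TermQuery(new Term("term4", "b")), Occur.MUST);

    // OR of the two AND groups: two SHOULD clauses inside a group that is
    // itself a MUST clause of the top-level query.
    BooleanQuery.Builder orGroup = new BooleanQuery.Builder();
    orGroup.add(first.build(), Occur.SHOULD);
    orGroup.add(second.build(), Occur.SHOULD);

    // "context in (...)": one SHOULD clause per value.
    BooleanQuery.Builder contextIn = new BooleanQuery.Builder();
    for (String value : contextValues) {
      contextIn.add(new TermQuery(new Term("context", value)), Occur.SHOULD);
    }

    BooleanQuery.Builder top = new BooleanQuery.Builder();
    top.add(orGroup.build(), Occur.MUST);
    // Occur.FILTER here would mirror the non-scoring '#' clause from the question.
    top.add(contextIn.build(), Occur.MUST);
    return top.build();
  }
}
```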


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #438: SOLR-12593: Remove date parsing functionality from e...

2018-08-25 Thread barrotsteindev
Github user barrotsteindev commented on the issue:

https://github.com/apache/lucene-solr/pull/438
  
Added test for RFC-2616 (ParsingFieldUpdateProcessorsTest#testRfc2616) :-).


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #438: SOLR-12593: Remove date parsing functionality...

2018-08-25 Thread barrotsteindev
Github user barrotsteindev commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/438#discussion_r212803611
  
--- Diff: solr/server/solr/configsets/_default/conf/solrconfig.xml ---
@@ -1141,11 +1141,13 @@
   
   
 
-  yyyy-MM-dd'T'HH:mm[:ss[.SSS]][z
-  yyyy-MM-dd'T'HH:mm[:ss[,SSS]][z
+  yyyy-MM-dd['T'[HH:mm[:ss[.SSS]][z
+  yyyy-MM-dd['T'[HH:mm[:ss[,SSS]][z
   yyyy-MM-dd HH:mm[:ss[.SSS]][z
   yyyy-MM-dd HH:mm[:ss[,SSS]][z
-  yyyy-MM-dd
+  EEE MMM d [HH:mm:ss ][z ]yyyy
+  EEEE, dd-MMM-yy HH:mm:ss [z
--- End diff --

>And can you add a test we parse "Sun Nov 6 08:49:37 1994"

BTW, I used "Fri Oct 7 13:14:15 2005" instead, since it uses the same date 
format and there is already a helper method to generate this instant, 
ParsingFieldUpdateProcessorsTest#inst20051007131415.
Hope that is OK.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #438: SOLR-12593: Remove date parsing functionality...

2018-08-25 Thread barrotsteindev
Github user barrotsteindev commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/438#discussion_r212803494
  
--- Diff: solr/server/solr/configsets/_default/conf/solrconfig.xml ---
@@ -1141,11 +1141,13 @@
   
   
 
-  yyyy-MM-dd'T'HH:mm[:ss[.SSS]][z
-  yyyy-MM-dd'T'HH:mm[:ss[,SSS]][z
+  yyyy-MM-dd['T'[HH:mm[:ss[.SSS]][z
+  yyyy-MM-dd['T'[HH:mm[:ss[,SSS]][z
   yyyy-MM-dd HH:mm[:ss[.SSS]][z
   yyyy-MM-dd HH:mm[:ss[,SSS]][z
-  yyyy-MM-dd
+  EEE MMM d [HH:mm:ss ][z ]yyyy
+  EEEE, dd-MMM-yy HH:mm:ss [z
--- End diff --

Oh, now I recall why I added the optional time zone to asctime.
With the optional time zone, the pattern also covers dates that include time 
zone abbreviations such as AKST and EDT.
IMO, it might be beneficial to keep it in the default config.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #438: SOLR-12593: Remove date parsing functionality...

2018-08-25 Thread barrotsteindev
Github user barrotsteindev commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/438#discussion_r212803184
  
--- Diff: solr/server/solr/configsets/_default/conf/solrconfig.xml ---
@@ -1141,11 +1141,13 @@
   
   
 
-  yyyy-MM-dd'T'HH:mm[:ss[.SSS]][z
-  yyyy-MM-dd'T'HH:mm[:ss[,SSS]][z
+  yyyy-MM-dd['T'[HH:mm[:ss[.SSS]][z
+  yyyy-MM-dd['T'[HH:mm[:ss[,SSS]][z
   yyyy-MM-dd HH:mm[:ss[.SSS]][z
   yyyy-MM-dd HH:mm[:ss[,SSS]][z
-  yyyy-MM-dd
+  EEE MMM d [HH:mm:ss ][z ]yyyy
+  EEEE, dd-MMM-yy HH:mm:ss [z
--- End diff --

I made the time of day optional in asctime so we can remove the yyyy-MM-dd 
configuration, which is the same pattern without a time of day.
I will remove the optional timezone in RFC-1123 & RFC-1036; I guess that was my 
bad.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-25 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r212802626
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessor.java
 ---
@@ -94,13 +92,15 @@
   private final SolrCmdDistributor cmdDistrib;
   private final CollectionsHandler collHandler;
   private final SolrParams outParamsToLeader;
+  @SuppressWarnings("FieldCanBeLocal")
   private final CloudDescriptor cloudDesc;
 
   private List> parsedCollectionsDesc; // 
k=timestamp (start), v=collection.  Sorted descending
   private Aliases parsedCollectionsAliases; // a cached reference to the 
source of what we parse into parsedCollectionsDesc
   private SolrQueryRequest req;
+  private ExecutorService preemptiveCreationExecutor;
--- End diff --

Since it will be nulled out in another thread, we ought to declare this as 
volatile.  I know this is being a bit pedantic since I don't think it'd be a 
real problem.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-25 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r212802141
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessor.java
 ---
@@ -167,59 +167,17 @@ private String getAliasName() {
   public void processAdd(AddUpdateCommand cmd) throws IOException {
 SolrInputDocument solrInputDocument = cmd.getSolrInputDocument();
 final Object routeValue = 
solrInputDocument.getFieldValue(timeRoutedAlias.getRouteField());
-final Instant routeTimestamp = parseRouteKey(routeValue);
-
+final Instant docTimestampToRoute = parseRouteKey(routeValue);
 updateParsedCollectionAliases();
-String targetCollection;
-do { // typically we don't loop; it's only when we need to create a 
collection
-  targetCollection = 
findTargetCollectionGivenTimestamp(routeTimestamp);
-
-  if (targetCollection == null) {
-throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
-"Doc " + cmd.getPrintableId() + " couldn't be routed with " + 
timeRoutedAlias.getRouteField() + "=" + routeTimestamp);
-  }
-
-  // Note: the following rule is tempting but not necessary and is not 
compatible with
-  // only using this URP when the alias distrib phase is NONE; 
otherwise a doc may be routed to from a non-recent
-  // collection to the most recent only to then go there directly 
instead of realizing a new collection is needed.
-  //  // If it's going to some other collection (not "this") then 
break to just send it there
-  //  if (!thisCollection.equals(targetCollection)) {
-  //break;
-  //  }
-  // Also tempting but not compatible:  check that we're the leader, 
if not then break
-
-  // If the doc goes to the most recent collection then do some checks 
below, otherwise break the loop.
-  final Instant mostRecentCollTimestamp = 
parsedCollectionsDesc.get(0).getKey();
-  final String mostRecentCollName = 
parsedCollectionsDesc.get(0).getValue();
-  if (!mostRecentCollName.equals(targetCollection)) {
-break;
-  }
-
-  // Check the doc isn't too far in the future
-  final Instant maxFutureTime = 
Instant.now().plusMillis(timeRoutedAlias.getMaxFutureMs());
-  if (routeTimestamp.isAfter(maxFutureTime)) {
-throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
-"The document's time routed key of " + routeValue + " is too 
far in the future given " +
-TimeRoutedAlias.ROUTER_MAX_FUTURE + "=" + 
timeRoutedAlias.getMaxFutureMs());
-  }
-
-  // Create a new collection?
-  final Instant nextCollTimestamp = 
timeRoutedAlias.computeNextCollTimestamp(mostRecentCollTimestamp);
-  if (routeTimestamp.isBefore(nextCollTimestamp)) {
-break; // thus we don't need another collection
-  }
-
-  createCollectionAfter(mostRecentCollName); // *should* throw if 
fails for some reason but...
-  final boolean updated = updateParsedCollectionAliases();
-  if (!updated) { // thus we didn't make progress...
-// this is not expected, even in known failure cases, but we check 
just in case
-throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
-"We need to create a new time routed collection but for 
unknown reasons were unable to do so.");
-  }
-  // then retry the loop ...
-} while(true);
-assert targetCollection != null;
-
+String candidateCollection = 
findCandidateCollectionGivenTimestamp(docTimestampToRoute, 
cmd.getPrintableId());
--- End diff --

You can move this line to immediately before the first use of its result.  
Presently, the maxFutureTime check is in between, which breaks up the natural 
flow.
Hmm; even the "updateParsedCollectionAliases()" call can move down.
Finally, some newlines here & there would help separate the individual steps.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-25 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r212802993
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessor.java
 ---
@@ -230,6 +188,95 @@ public void processAdd(AddUpdateCommand cmd) throws 
IOException {
 }
   }
 
+
+  private String createCollectionsIfRequired(Instant docTimestamp, String 
targetCollection, String printableId) {
+// Even though it is possible that multiple requests hit this code in 
the 1-2 sec that
+// it takes to create a collection, it's an established anti-pattern 
to feed data with a very large number
+// of client connections. This in mind, we only guard against spamming 
the overseer within a batch of
+// updates. We are intentionally tolerating a low level of redundant 
requests in favor of simpler code. Most
+// super-sized installations with many update clients will likely be 
multi-tenant and multiple tenants
+// probably don't write to the same alias. As such, we have deferred 
any solution to the "many clients causing
+// collection creation simultaneously" problem until such time as 
someone actually has that problem in a
+// real world use case that isn't just an anti-pattern.
+try {
+  CreationType creationType = requiresCreateCollection(docTimestamp, 
timeRoutedAlias.getPreemptiveCreateWindow());
+  switch (creationType) {
+case SYNCHRONOUS:
+  // This next line blocks until all collections required by the 
current document have been created
+  return maintain(targetCollection, docTimestamp, printableId, 
false);
+case ASYNC_PREEMPTIVE:
+  // Note: creating an executor and throwing it away is slightly 
expensive, but this is only likely to happen
+  // once per hour/day/week (depending on time slice size for the 
TRA). If the executor were retained, it
+  // would need to be shut down in a close hook to avoid test 
failures due to thread leaks which is slightly
+  // more complicated from a code maintenance and readability 
stand point. An executor must used instead of a
+  // thread to ensure we pick up the proper MDC logging stuff from 
ExecutorUtil. T
+  if (preemptiveCreationExecutor == null) {
+DefaultSolrThreadFactory threadFactory = new 
DefaultSolrThreadFactory("TRA-preemptive-creation");
+preemptiveCreationExecutor = 
newMDCAwareSingleThreadExecutor(threadFactory);
+preemptiveCreationExecutor.execute(() -> {
+  maintain(targetCollection, docTimestamp, printableId, true);
+  preemptiveCreationExecutor.shutdown();
+  preemptiveCreationExecutor = null;
+});
+  }
+  return targetCollection;
+case NONE:
+  return targetCollection; // just for clarity...
+default:
+  return targetCollection; // could use fall through, but fall 
through is fiddly for later editors.
+  }
+  // do nothing if creationType == NONE
+} catch (SolrException e) {
+  throw e;
+} catch (Exception e) {
+  throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
+}
+  }
+
+  /**
+   * Determine if the a new collection will be required based on the 
document timestamp. Passing null for
+   * preemptiveCreateInterval tells you if the document is beyond all 
existing collections with a response of
+   * {@link CreationType#NONE} or {@link CreationType#SYNCHRONOUS}, and 
passing a valid date math for
+   * preemptiveCreateMath additionally distinguishes the case where the 
document is close enough to the end of
+   * the TRA to trigger preemptive creation but not beyond all existing 
collections with a value of
+   * {@link CreationType#ASYNC_PREEMPTIVE}.
+   *
+   * @param routeTimestamp The timestamp from the document
+   * @param preemptiveCreateMath The date math indicating the {@link 
TimeRoutedAlias#preemptiveCreateMath}
+   * @return a {@code CreationType} indicating if and how to create a 
collection
+   */
+  private CreationType requiresCreateCollection(Instant routeTimestamp,  
String preemptiveCreateMath) {
+final Instant mostRecentCollTimestamp = 
parsedCollectionsDesc.get(0).getKey();
+final Instant nextCollTimestamp = 
timeRoutedAlias.computeNextCollTimestamp(mostRecentCollTimestamp);
+if (!routeTimestamp.isBefore(nextCollTimestamp)) {
+  // current document is destined for a collection that doesn't exist, 
must create the destination
+  // to proceed with this add command
+  return SYNCHRONOUS;
+}
+
+if (isBlank(preemptiveCreateMath)) {
--- End diff --
 

[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-25 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r212802686
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessor.java
 ---
@@ -230,6 +188,95 @@ public void processAdd(AddUpdateCommand cmd) throws 
IOException {
 }
   }
 
+
+  private String createCollectionsIfRequired(Instant docTimestamp, String 
targetCollection, String printableId) {
+// Even though it is possible that multiple requests hit this code in 
the 1-2 sec that
+// it takes to create a collection, it's an established anti-pattern 
to feed data with a very large number
+// of client connections. This in mind, we only guard against spamming 
the overseer within a batch of
+// updates. We are intentionally tolerating a low level of redundant 
requests in favor of simpler code. Most
+// super-sized installations with many update clients will likely be 
multi-tenant and multiple tenants
+// probably don't write to the same alias. As such, we have deferred 
any solution to the "many clients causing
+// collection creation simultaneously" problem until such time as 
someone actually has that problem in a
+// real world use case that isn't just an anti-pattern.
+try {
+  CreationType creationType = requiresCreateCollection(docTimestamp, 
timeRoutedAlias.getPreemptiveCreateWindow());
+  switch (creationType) {
+case SYNCHRONOUS:
+  // This next line blocks until all collections required by the 
current document have been created
+  return maintain(targetCollection, docTimestamp, printableId, 
false);
+case ASYNC_PREEMPTIVE:
+  // Note: creating an executor and throwing it away is slightly 
expensive, but this is only likely to happen
+  // once per hour/day/week (depending on time slice size for the 
TRA). If the executor were retained, it
+  // would need to be shut down in a close hook to avoid test 
failures due to thread leaks which is slightly
+  // more complicated from a code maintenance and readability 
stand point. An executor must used instead of a
+  // thread to ensure we pick up the proper MDC logging stuff from 
ExecutorUtil. T
+  if (preemptiveCreationExecutor == null) {
+DefaultSolrThreadFactory threadFactory = new 
DefaultSolrThreadFactory("TRA-preemptive-creation");
+preemptiveCreationExecutor = 
newMDCAwareSingleThreadExecutor(threadFactory);
+preemptiveCreationExecutor.execute(() -> {
--- End diff --

the code executed in the new thread should not call maintain() since I see 
you had to make maintain more complicated to tell if it's being called from 
"async".  It can call: 
```
  final String mostRecentCollName = 
parsedCollectionsDesc.get(0).getValue();
  createCollectionAfter(mostRecentCollName);
```


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #438: SOLR-12593: Remove date parsing functionality...

2018-08-25 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/438#discussion_r212800845
  
--- Diff: solr/server/solr/configsets/_default/conf/solrconfig.xml ---
@@ -1141,11 +1141,13 @@
   
   
 
-  yyyy-MM-dd'T'HH:mm[:ss[.SSS]][z
-  yyyy-MM-dd'T'HH:mm[:ss[,SSS]][z
+  yyyy-MM-dd['T'[HH:mm[:ss[.SSS]][z
+  yyyy-MM-dd['T'[HH:mm[:ss[,SSS]][z
   yyyy-MM-dd HH:mm[:ss[.SSS]][z
   yyyy-MM-dd HH:mm[:ss[,SSS]][z
-  yyyy-MM-dd
+  EEE MMM d [HH:mm:ss ][z ]yyyy
+  EEEE, dd-MMM-yy HH:mm:ss [z
--- End diff --

These last two patterns are RFC-1036 and RFC-1123.  Neither should have an 
optional timezone -- as seen in 
`ExtractDateUtils.PATTERN_RFC1036`, `ExtractDateUtils.PATTERN_RFC1123`, 
`DateTimeFormatter.RFC_1123_DATE_TIME`, and I looked at RFC-1036 spec as well.

Why did you make the time of day optional in ASCTIME? I don't see that in 
ExtractDateUtils.  And can you add a test that we parse "Sun Nov  6 08:49:37 1994" 
(the example date from RFC-2616, the HTTP/1.1 spec, which lists RFC-1123, 
RFC-1036, and asctime() -- the origin of why we see these particular patterns 
in ExtractDateUtils, borrowed from ApacheHttpClient).  The double space before 
the single-digit day is deliberate.  It may be necessary to use the 'p' (pad 
modifier) as specified in DateTimeFormatter.  I've seen conflicting information 
from internet searches on whether the "day" portion of asctime() is 2-digit or 
1-digit, so it'd be good to test that either works.  "Leniency" will hopefully 
ensure one pattern works without needing to add more variations.

Good catch on noticing the seconds is optional in RFC-1123!

Can you reverse the order of these last 3 patterns?  Based on RFC-2616 
(HTTP/1.1), this is the order defined by preference.

BTW if we really did want RFC-1123 & RFC-1036 patterns to have an optional 
timezone, then it would need to be specified differently than how you did it.  
You put the optional start bracket to the right of the space when it would need 
to be to the left of it.

Obviously, all tweaks we do to these patterns need to be redone between 
* `solr/server/solr/configsets/_default/conf/solrconfig.xml` 
* `solr/core/src/test-files/solr/collection1/conf/solrconfig-parsing-update-processor-chains.xml`
* `solr/contrib/extraction/src/test-files/extraction/solr/collection1/conf/solrconfig.xml`
Probably elsewhere; I can check before committing.
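
A small standalone way to probe the pad-modifier question (not from the PR; the 
class name and inputs are illustrative) is to try a "ppd" pattern against both 
space-padded and zero-padded days and see what java.time does with each:

```
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class AsctimePadCheck {
  public static void main(String[] args) {
    // "ppd" pads day-of-month to width 2 with spaces, which is what asctime()
    // emits for single-digit days ("Nov  6" ends up with two spaces).
    DateTimeFormatter fmt =
        DateTimeFormatter.ofPattern("EEE MMM ppd HH:mm:ss yyyy", Locale.ENGLISH);

    String[] inputs = {
        "Sun Nov  6 08:49:37 1994",   // RFC-2616 example, space-padded day
        "Fri Oct  7 13:14:15 2005",   // space-padded day
        "Fri Oct 07 13:14:15 2005"    // zero-padded variant
    };
    for (String input : inputs) {
      try {
        System.out.println(input + " -> " + LocalDateTime.parse(input, fmt));
      } catch (Exception e) {
        System.out.println(input + " -> failed: " + e.getMessage());
      }
    }
  }
}
```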


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 817 - Still Unstable

2018-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/817/

2 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:36278/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:36493/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:36278/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:36493/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([DDC8408B1482A6D5:77059379A3517305]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:996)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:289)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2740 - Still Unstable

2018-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2740/

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestPolicyCloud.testDataProvider

Error Message:
Timeout waiting for collection to become active Live Nodes: 
[127.0.0.1:10102_solr, 127.0.0.1:10100_solr, 127.0.0.1:10104_solr, 
127.0.0.1:10101_solr, 127.0.0.1:10103_solr] Last available state: 
DocCollection(policiesTest//clusterstate.json/43)={   "replicationFactor":"1",  
 "pullReplicas":"0",   "router":{"name":"implicit"},   "maxShardsPerNode":"1",  
 "autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{"shard1":{   "replicas":{ 
"core_node10":{   "core":"policiesTest_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10104_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0},  
   "core_node11":{   "core":"policiesTest_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   
"node_name":"127.0.0.1:10103_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":9.5367431640625E-6,   
"SEARCHER.searcher.numDocs":0}},   "state":"active"}}}

Stack Trace:
java.lang.AssertionError: Timeout waiting for collection to become active
Live Nodes: [127.0.0.1:10102_solr, 127.0.0.1:10100_solr, 127.0.0.1:10104_solr, 
127.0.0.1:10101_solr, 127.0.0.1:10103_solr]
Last available state: DocCollection(policiesTest//clusterstate.json/43)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"implicit"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{"shard1":{
  "replicas":{
"core_node10":{
  "core":"policiesTest_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10104_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0},
"core_node11":{
  "core":"policiesTest_shard1_replica_n2",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10103_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "state":"active"}}}
at 
__randomizedtesting.SeedInfo.seed([5C3844A526EDE99A:6493EC90942FED43]:0)
at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)
at 
org.apache.solr.cloud.autoscaling.sim.TestPolicyCloud.testDataProvider(TestPolicyCloud.java:324)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 779 - Unstable!

2018-08-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/779/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=35159, 
name=cdcr-replicator-11123-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=35159, name=cdcr-replicator-11123-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError: 1609772381774020608 != 1609772381770874880
at __randomizedtesting.SeedInfo.seed([DF3225FBA616CE40]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14198 lines...]
   [junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
   [junit4]   2> 3473625 INFO  
(SUITE-CdcrBidirectionalTest-seed#[DF3225FBA616CE40]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_DF3225FBA616CE40-001/init-core-data-001
   [junit4]   2> 3473626 INFO  
(SUITE-CdcrBidirectionalTest-seed#[DF3225FBA616CE40]-worker) [] 
o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 3473626 INFO  
(SUITE-CdcrBidirectionalTest-seed#[DF3225FBA616CE40]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 3473635 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[DF3225FBA616CE40]) [] 
o.a.s.SolrTestCaseJ4 ###Starting testBiDir
   [junit4]   2> 3473635 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[DF3225FBA616CE40]) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_DF3225FBA616CE40-001/cdcr-cluster2-001
   [junit4]   2> 3473635 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[DF3225FBA616CE40]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 3473636 INFO  (Thread-5629) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 3473636 INFO  (Thread-5629) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 3473639 ERROR (Thread-5629) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 3473736 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[DF3225FBA616CE40]) [] 
o.a.s.c.ZkTestServer start zk server on port:45935
   [junit4]   2> 3473742 INFO  (zkConnectionManagerCallback-12821-thread-1) [   
 ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3473749 INFO  (jetty-launcher-12818-thread-1) [] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 3473749 INFO  (jetty-launcher-12818-thread-1) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 3473749 INFO  (jetty-launcher-12818-thread-1) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 3473749 INFO  (jetty-launcher-12818-thread-1) [] 
o.e.j.s.session node0 Scavenging every 66ms
   [junit4]   2> 3473750 INFO  (jetty-launcher-12818-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@74efdd35{/solr,null,AVAILABLE}
   [junit4]   2> 3473758 INFO  (jetty-launcher-12818-thread-1) [] 
o.e.j.s.AbstractConnector Started 
ServerConnector@52350f78{HTTP/1.1,[http/1.1]}{127.0.0.1:41329}
   [junit4]   2> 3473758 INFO  (jetty-launcher-12818-thread-1) [] 
o.e.j.s.Server Started @3475494ms
   [junit4]   2> 3473758 INFO  (jetty-launcher-12818-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=41329}
   [junit4]   2> 3473758 ERROR (jetty-launcher-12818-thread-1) [] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. 

[JENKINS] Lucene-Solr-repro - Build # 1301 - Unstable

2018-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1301/

[...truncated 37 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2739/consoleText

[repro] Revision: f26dd13b34e3d3a6921230cfe44ff34b2c319e7b

[repro] Repro line:  ant test  -Dtestcase=TestComputePlanAction 
-Dtests.method=testNodeAdded -Dtests.seed=219A838FB3C168D0 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=et-EE -Dtests.timezone=Africa/Kigali 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestPolicyCloud 
-Dtests.method=testCreateCollectionAddReplica -Dtests.seed=219A838FB3C168D0 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ru 
-Dtests.timezone=Asia/Muscat -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
f26dd13b34e3d3a6921230cfe44ff34b2c319e7b
[repro] git fetch
[repro] git checkout f26dd13b34e3d3a6921230cfe44ff34b2c319e7b

[...truncated 1 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestPolicyCloud
[repro]   TestComputePlanAction
[repro] ant compile-test

[...truncated 3392 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.TestPolicyCloud|*.TestComputePlanAction" 
-Dtests.showOutput=onerror  -Dtests.seed=219A838FB3C168D0 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=ru -Dtests.timezone=Asia/Muscat 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 3004 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.TestPolicyCloud
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.sim.TestPolicyCloud
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction
[repro] git checkout f26dd13b34e3d3a6921230cfe44ff34b2c319e7b

[...truncated 1 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1624 - Failure

2018-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1624/

1 tests failed.
FAILED:  org.apache.lucene.document.TestLatLonLineShapeQueries.testRandomBig

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([71EA2A518493616A:F6BD57DE15CA1DEA]:0)
at java.util.Arrays.copyOf(Arrays.java:3332)
at 
java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
at 
java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
at java.lang.StringBuilder.append(StringBuilder.java:136)
at 
org.apache.lucene.store.MockIndexInputWrapper.(MockIndexInputWrapper.java:40)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:771)
at 
org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:100)
at 
org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:100)
at 
org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:100)
at 
org.apache.lucene.store.Directory.openChecksumInput(Directory.java:157)
at 
org.apache.lucene.util.bkd.BKDWriter.verifyChecksum(BKDWriter.java:1371)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1795)
at org.apache.lucene.util.bkd.BKDWriter.finish(BKDWriter.java:1004)
at 
org.apache.lucene.index.RandomCodec$1$1.writeField(RandomCodec.java:139)
at 
org.apache.lucene.codecs.PointsWriter.mergeOneField(PointsWriter.java:62)
at org.apache.lucene.codecs.PointsWriter.merge(PointsWriter.java:186)
at 
org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:144)
at 
org.apache.lucene.codecs.asserting.AssertingPointsFormat$AssertingPointsWriter.merge(AssertingPointsFormat.java:142)
at 
org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:201)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:161)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4436)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4058)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2160)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1993)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1944)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.indexRandomShapes(BaseLatLonShapeTestCase.java:226)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.verify(BaseLatLonShapeTestCase.java:192)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.doTestRandom(BaseLatLonShapeTestCase.java:173)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.testRandomBig(BaseLatLonShapeTestCase.java:149)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)




Build Log:
[...truncated 10012 lines...]
   [junit4] Suite: org.apache.lucene.document.TestLatLonLineShapeQueries
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestLatLonLineShapeQueries -Dtests.method=testRandomBig 
-Dtests.seed=71EA2A518493616A -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=sr-RS -Dtests.timezone=Turkey -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR206s J2 | TestLatLonLineShapeQueries.testRandomBig <<<
   [junit4]> Throwable #1: java.lang.OutOfMemoryError: Java heap space
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([71EA2A518493616A:F6BD57DE15CA1DEA]:0)
   [junit4]>at java.util.Arrays.copyOf(Arrays.java:3332)
   [junit4]>at 
java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
   [junit4]>at 
java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
   [junit4]>at 
java.lang.StringBuilder.append(StringBuilder.java:136)
   [junit4]>at 
org.apache.lucene.store.MockIndexInputWrapper.(MockIndexInputWrapper.java:40)
   [junit4]>at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:771)
   [junit4]>at 
org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:100)
   [junit4]>at 
org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:100)
   [junit4]>at 
org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:100)
   

[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 299 - Still Failing

2018-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/299/

No tests ran.

Build Log:
[...truncated 23261 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: WARNING: simulations.adoc: line 25: section 
title out of sequence: expected level 3, got level 4
[asciidoctor:convert] asciidoctor: WARNING: simulations.adoc: line 89: section 
title out of sequence: expected level 3, got level 4
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: WARNING: simulations.adoc: line 25: section 
title out of sequence: expected levels 0 or 1, got level 2
[asciidoctor:convert] asciidoctor: WARNING: simulations.adoc: line 89: section 
title out of sequence: expected levels 0 or 1, got level 2
 [java] Processed 2312 links (1864 relative) to 3144 anchors in 246 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.5.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its 

[jira] [Commented] (SOLR-12643) Adding metrics support for Http2SolrClient

2018-08-25 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592518#comment-16592518
 ] 

Cao Manh Dat commented on SOLR-12643:
-

I can't find any way to add the metrics listed in 
{{InstrumentedPoolingHttpClientConnectionManager}} for {{Http2SolrClient}}. 
The metrics listed in that class also seem to be rarely used, so the commit 
does not contain a replacement for them. 
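
For reference, here is a minimal sketch of the kind of request-level metric that 
could be collected for {{Http2SolrClient}}, using only the Dropwizard 
{{MetricRegistry}}/{{Timer}} API that Solr's metrics framework builds on. The class 
and metric names below are hypothetical and are not part of the committed change:

{code}
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

import java.util.concurrent.Callable;

/**
 * Hypothetical helper: times each HTTP/2 request, analogous in spirit to what
 * InstrumentedHttpRequestExecutor does for the Apache HttpClient path.
 */
public class Http2RequestTimer {
  private final Timer requestTimer;

  public Http2RequestTimer(MetricRegistry registry, String scope) {
    // Metric name is an example only, e.g. "<scope>.http2.requests"
    this.requestTimer = registry.timer(MetricRegistry.name(scope, "http2", "requests"));
  }

  /** Wraps the request execution and records its latency. */
  public <T> T time(Callable<T> request) throws Exception {
    final Timer.Context ctx = requestTimer.time();
    try {
      return request.call();
    } finally {
      ctx.stop();
    }
  }
}
{code}

A caller would wrap the actual request execution, e.g. 
{{timer.time(() -> client.request(req))}}; connection-pool level metrics are a 
separate matter, as noted above.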

> Adding metrics support for Http2SolrClient
> --
>
> Key: SOLR-12643
> URL: https://issues.apache.org/jira/browse/SOLR-12643
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
>
> The idea is to create other {{SolrMetricProducer}}s that can produce metrics 
> for Http2SolrClient, analogous to the current 
> {{InstrumentedPoolingHttpClientConnectionManager}}, 
> {{InstrumentedHttpRequestExecutor}}, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 301 - Still Unstable

2018-08-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/301/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestTlogReplica

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, SolrCore, MockDirectoryWrapper, 
MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:95)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:768)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:960)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:869)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1138)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1048)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:674)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:531)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)  at 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)  at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
  at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
  at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
  at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
  at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:358)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:737)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:960)  at 

[jira] [Commented] (SOLR-12643) Adding metrics support for Http2SolrClient

2018-08-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592516#comment-16592516
 ] 

ASF subversion and git services commented on SOLR-12643:


Commit ab28046f24ceaea5e57f8c9d9b34f785f5432964 in lucene-solr's branch 
refs/heads/jira/http2 from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ab28046 ]

SOLR-12643: Adding metrics support for Http2SolrClient


> Adding metrics support for Http2SolrClient
> --
>
> Key: SOLR-12643
> URL: https://issues.apache.org/jira/browse/SOLR-12643
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
>
> The idea is to create other {{SolrMetricProducer}}s that can produce metrics 
> for Http2SolrClient, analogous to the current 
> {{InstrumentedPoolingHttpClientConnectionManager}}, 
> {{InstrumentedHttpRequestExecutor}}, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+28) - Build # 2625 - Unstable!

2018-08-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2625/
Java: 64bit/jdk-11-ea+28 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestDeleteCollectionOnDownNodes.deleteCollectionWithDownNodes

Error Message:
Timed out waiting for leader elections null Live Nodes: [127.0.0.1:33661_solr, 
127.0.0.1:43369_solr] Last available state: null

Stack Trace:
java.lang.AssertionError: Timed out waiting for leader elections
null
Live Nodes: [127.0.0.1:33661_solr, 127.0.0.1:43369_solr]
Last available state: null
at 
__randomizedtesting.SeedInfo.seed([A04DD39110E35EC7:3E78B76936C0124F]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.TestDeleteCollectionOnDownNodes.deleteCollectionWithDownNodes(TestDeleteCollectionOnDownNodes.java:47)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)




Build Log:

[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2018-08-25 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16592488#comment-16592488
 ] 

Shawn Heisey commented on SOLR-11934:
-

I do like the notion of changing some of the classes in the config to WARN. It 
can make some real progress in the short term without a lot of effort.

Long term, I think we should try to adjust the overall choice of logging levels 
in the code so that the logging config needs fewer entries. If we do leave some 
entries at WARN, they should be for classes whose output is *commonly* needed 
for deeper troubleshooting but doesn't provide much value for a system that's 
working as expected. I'm not sure which classes those would be.
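
To make the short-term idea concrete, here is a minimal sketch of per-class 
overrides done programmatically via the log4j2 {{Configurator}} API (the same 
effect as adding {{Logger}} entries to the config file). The class names below 
are examples only, not a vetted list of candidates:

{code}
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;

public class QuietNoisyLoggers {
  public static void main(String[] args) {
    // Example override: silences the per-update INFO line mentioned in the
    // issue description below, while keeping warnings and errors visible.
    Configurator.setLevel("org.apache.solr.update.processor.LogUpdateProcessorFactory",
        Level.WARN);
    // Example override: third-party chatter that rarely matters when the
    // cluster is healthy.
    Configurator.setLevel("org.apache.zookeeper", Level.WARN);
    // Everything else stays at the default level.
    Configurator.setRootLevel(Level.INFO);
  }
}
{code}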

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level, for instance, but I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and to try to understand why they happen.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages to mean something needs 
> attention, for instance. If I'm troubleshooting, should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked. Is this 
> independent of the logging implementation used? The SLF4J and log4j 
> documentation seem a bit contradictory. (There is a short sketch of this 
> point after this description.)
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks; even starting 
> now, it would probably span the 7.3 release.
> This will probably be an umbrella issue so we can keep all the commits 
> straight and people can volunteer to "fix the files in core" as a separate 
> piece of work (hint).
> There are several existing JIRAs about logging in general, let's link them in 
> here as well.
> Let the discussion begin!
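
Regarding the object-creation question in point 2> of the proposal above, a 
quick sketch of the behavior in question with plain SLF4J: the parameterized 
form defers building the message string, but any argument expression (such as a 
method call) is still evaluated before the logger is invoked, regardless of 
which backend is bound.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogArgCostDemo {
  private static final Logger log = LoggerFactory.getLogger(LogArgCostDemo.class);

  private static String expensiveSummary() {
    // Stands in for any costly method call or toString() passed as a log argument.
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 100_000; i++) {
      sb.append(i);
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // Formatting of the message is deferred, but expensiveSummary() runs
    // even when DEBUG is disabled.
    log.debug("state: {}", expensiveSummary());

    // Guarding the call avoids the argument evaluation entirely.
    if (log.isDebugEnabled()) {
      log.debug("state: {}", expensiveSummary());
    }
  }
}
{code}

This is why a guard (or a lambda/Supplier-based API) still matters when the 
argument itself is expensive, even though the {} formatting is already lazy.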



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org